
Change Artist Super Powers: Observation

When I was a kid, there was a party game called Pin the Tail on the Donkey. The game involved a large wall poster of a sad-looking, tailless donkey. Armed with a replacement tail and a pin, each child attempted to give the donkey a new tail—while blindfolded and a bit dizzy from being spun around by the parent hosting the party. (I know, it sounds awful.) Obviously, the chance of an accurate placement was quite small.

Without the ability to observe what is happening, any attempts to improve a situation in your organization may be similarly misplaced. Or you may succeed—purely by chance. When you hone your ability to observe, you stand a much better chance of choosing an appropriate action. My ability to observe is my second Change Artist Super Power.

A manager called me, concerned that the people on his team were too timid and could benefit from assertiveness training. I observed several meetings where the manager did 80% of the talking. When someone did get a word in, the manager interrupted. When I shared my observation, the manager was shocked and chagrined. He changed his behavior, and discovered his team had a lot to say. He also realized that his first idea for a fix was misplaced.

At another client, I observed that data about system outages was presented as monthly outage minutes in pie chart form. People knew which system was the biggest culprit in the past month, but had no idea about trends or impacts. I dug into the data and created charts that showed outage minutes over time, and how many people were affected when a given application went down. Seeing this information in a different form allowed them to address the biggest issues, rather than pointing the roving finger of blame based on a monthly snapshot.

In both these cases, observation was key to choosing appropriate action.

Observing sounds simple. In fact, it is hard work and requires practice and skill. You can practice any time by choosing just one thing and consciously noticing it for a short period. However, sharing your observations can be tricky, especially if you are an outsider and have not been invited to observe. Any time you are observing, it is imperative that you share only what you have seen and heard, in neutral language. Stay away from judgement and interpretation.

What might you observe to increase your ability to solve problems?  What might you gain by having a fresh set of eyes observe your organization?

Change Artist Super Powers: Curiosity

In my work, I draw on models, frameworks, and years of experience. Yet one of my most valuable tools is also the simplest: curiosity.

In an early meeting with a client, a senior manager expressed his frustration that development teams weren’t meeting his schedule. “Those teams made a commitment, but didn’t deliver! Why aren’t those people accountable?” he asked, with more than a hint of blame in his voice. As I spent more time in the organization, I heard other managers express similar wonderment (and blame).

I also noticed that whenever someone asked, “Why aren’t those people accountable?”—or some other blaming question—problem-solving ceased.

I know these managers wanted to deliver software to their customers as promised. But, their blaming questions prevented them from making headway in figuring out why they were unable to do so.

I started asking different questions: curious questions.

  • Who makes commitments to the customers, and on what basis? How do customer commitments, team commitments, and team capacity relate to each other?
  • When “those teams” make commitments, is it really the people who will do the work committing, or someone else?
  • What does “commitment” really mean here? Do all parties understand and use the term the same way? 
  • What hinders people from achieving what the managers desire?  Do teams have the means to do their work?
  • What is at stake, for which groups of people, regarding delivery of this product? 
  • What is it like to be a developer in this organization? 
  • What is it like to be a manager in this organization?
  • What is it like to be a customer of this organization?

I worked with others in the client organization to learn about these (and other) factors. We developed and tested hypotheses, engaged in conversations, made experiments, and shifted the pattern of results. 

For the most part, managers no longer ask blaming questions. They ask whether teams have the data to make decisions about how much work to pull into a sprint. They examine what they themselves say and do to reduce confusing and mixed messages. They review data, and adjust their plans.

Curiosity uncovered contradictions, hurdles, confusion, and misunderstandings, all of which we could work on to improve the situation.

So, there you have it. Curiosity is my number one Change Artist Super Power, and it can be yours, too.

Forgotten Questions of Change

I’ve been thinking about and observing organizational change for a very long time.

It seems to me that, in their enthusiasm for efficiency, planning, and “managing” change, people often overlook some critical questions.

A handful of questions that could lead to more effective action, but seldom get asked:

  • What is working well now, that we can learn from?
  • What is valuable about the past that is worth preserving?
  • What do we want to not change?
  • Who benefits from the way things are now?
  • Who will lose (status, identity, meaning, jobs…) based on the proposed new way?
  • How will this change disrupt the informal networks that are essential to getting work done?
  • How will this change ripple through the organization, touching the people and groups indirectly affected?
  • What holds the current pattern in place?
  • How can we dampen this change if it goes in the wrong direction?
  • What is the smallest thing we can do to learn more about this proposed course of action?
  • What subtle things might we discern that tell us this change is going in the right direction…or the wrong one?
  • What is the time frame in which we expect to notice the effects of our efforts?

What questions would you add?

Using Data in Problem-Solving

Several years ago, I was called to help an organization that was experiencing system outages in their call center. After months of outages and no effective action, they appointed an Operations Analyst to collect data and get to the bottom of the problem.

Once they had data, the managers met monthly to review it. At the beginning of the meeting, the Operations Analyst presented a pie chart showing the “outage minutes” (number of minutes a system was unavailable) from the previous month. It was clear from the chart which system was the biggest source of outages for the month.

The manager for that system spent the next 40 minutes squirming as the top manager grilled him. At the end of the meeting, the top manager sternly demanded, “Fix it!”

By the time I arrived to help, they had many months of data, but it wasn’t clear whether anything had improved. I dove in.

I looked at trends in the total number of outage minutes each month. I plotted the trends for each application, and created time series for each application to see if there were any temporal patterns. That’s as far as I could get with the existing data. To home in on the biggest offenders, I needed to know not just the number of minutes a system was down, but how many employees and customers couldn’t work when a particular system was down. One system had a lot of outage minutes, but only a handful of specialists who supported an uncommon legacy product used it. Another system didn’t fail often, but when it did, eight hundred employees were unable to access holdings for any customers.
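To make that reshaping concrete, here is a minimal sketch in Python (with pandas) of the kind of analysis I describe. The column names and the impact measure (outage minutes weighted by the number of people affected) are illustrative assumptions, not the client’s actual schema or figures.

```python
import pandas as pd

# Hypothetical outage log: one row per incident. Column names are
# assumptions for illustration; the real data came from the client's records.
outages = pd.DataFrame({
    "month":           ["2014-01", "2014-01", "2014-02", "2014-02", "2014-03"],
    "system":          ["legacy_app", "holdings", "legacy_app", "holdings", "legacy_app"],
    "outage_minutes":  [240, 15, 310, 20, 280],
    "people_affected": [5, 800, 5, 800, 5],
})

# Trend: total outage minutes per month (what a monthly pie chart hides).
monthly_total = outages.groupby("month")["outage_minutes"].sum()

# Time series per system, to look for temporal patterns.
per_system = outages.pivot_table(
    index="month", columns="system",
    values="outage_minutes", aggfunc="sum", fill_value=0,
)

# Impact: weight outage minutes by how many people couldn't work.
# A long outage on a system used by five specialists can matter far
# less than a short outage on one that 800 employees depend on.
outages["person_minutes_lost"] = (
    outages["outage_minutes"] * outages["people_affected"]
)
impact = (
    outages.groupby("system")["person_minutes_lost"]
    .sum()
    .sort_values(ascending=False)
)

print(monthly_total, per_system, impact, sep="\n\n")
```

Plotted over time (for example, with per_system.plot()), views like these show direction and impact, rather than a single month’s proportions.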

Though they had data before I got there, they weren’t using it effectively. They weren’t looking at trends in total outage minutes… the pie chart showed the proportion of the whole, not whether the total number was increasing or decreasing over time. Because they didn’t understand the impact, they wasted time chasing insignificant problems.

When I presented the data in a different way, it led to a different set of questions, and more data gathering. That data eventually helped this group of managers focus their problem-solving (and stop pointing the roving finger of blame).

As a problem-solver, when you don’t have data, all you have to go on is your intuition and experience. If you’re lucky, you may come up with a fix that works. But most good problem-solvers don’t rely on luck. In some cases, you may have a good hunch about what the problem is. Back up your hunches with data. In either case, I’m not talking about a big measurement program. You need good enough and “just enough” data to get started. Often there’s already some useful data, as there was for the call center I helped.

But what kind of data do you need?  Not all problems involve factors that are easily counted, like outage minutes, number of stories completed in a sprint, or number of hand-offs to complete a feature.

If you are looking at perceptions and interactions, you’ll probably use qualitative data. Qualitative data focuses on experiences and qualities that we can observe, but cannot easily measure. Nothing wrong with that. It’s what we have to go on when the team is discussing teamwork, relationships, and perceptions. Of course, there are ways to measure some qualitative factors. Subjective reports are often sufficient (and less costly). Often, you can gather this sort of data quickly in a group meeting.

If you are using quantitative data, it’s often best to prepare data relevant to the focus prior to the problem-solving meeting.  Otherwise, you’ll have to rely on people’s memory and opinion, or spend precious time looking up the information you need to understand the issue.

When I’m thinking about what data would be useful to understand a problem, I start with a general set of questions:

  • What are the visible symptoms?
  • What other effects can we observe?
  • Who cares about this issue?
  • What is the impact on that person/group?
  • What is the impact on our organization?

These questions may lead closer to the real problem, or at least confirm direction. Based on what I find, I may choose where to delve deeper, and get more specific as I explore the details of the situation:

  • When does the problem occur?
  • How frequently does it occur?
  • Is the occurrence regular or irregular?
  • What factors might contribute to the problem situation?
  • What other events might influence the context?
  • Does it always happen, or is it an exception?
  • Under what circumstances does the problem occur?
  • What are the circumstances under which it doesn’t occur?

How you present data can make a big difference, and may mean the difference between effective action and inaction, as was the case with the call center I helped.

In a retrospective—which is a special sort of problem-solving meeting—data can make the difference between superficial, ungrounded quick fixes and developing deeper understanding that leads to more effective action, whether your data is qualitative or quantitative.

Here are some examples of how I’ve gathered data for retrospectives and other problem-solving meetings.

| Data Type | Method | Examples | Notes |
|---|---|---|---|
| Qualitative | Spider or radar chart | Use of XP practices; satisfaction with various factors; adherence to team working agreements; level of various factors (e.g., training, independence) | Shows both clusters and spreads. Highlights areas of agreement and disagreement. Points towards areas for improvement. |
| Qualitative | Leaf charts | Satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale | Use a pre-defined rating scale to show the frequency distribution in the group. Similar to bar charts, but typically used for qualitative data. |
| Qualitative | Sailboat (Jean Tabaka) | Favorable factors (wind); risks (rocks); unfavorable factors (anchors) | Metaphors such as this can prompt people to get past habitual thinking. |
| Qualitative | Timelines | Project, release, or iteration events over time. Events may be categorized using various schemes, for example: positive/negative; technical and non-technical; levels within the organization (team, product, division, industry) | Shows patterns of events that repeat over time. Reveals pivotal events (with positive or negative effects). Useful for prompting memories, and for showing that people experience the same event differently. |
| Qualitative | Tables | Team skills profile (who has which skills, where there are gaps) | Shows relationships between two sets of information. Shows patterns. |
| Qualitative | Trends | Satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale | Changes over time. |
| Quantitative | Pie charts | Defects by type, module, or source; severity of issues | Shows frequency distribution. |
| Quantitative | Bar charts | Bugs found in testing by module + bugs found by customers by module | Frequency distribution, especially when there is more than one group of things to compare. Similar to histograms, but typically used for quantitative data. |
| Quantitative | Histograms | Distribution of length of outages | Frequency of continuous data (not categories). |
| Quantitative | Trends | Defects; outages; stories completed; stories accepted/rejected | Shows movement over time. Often trends are more significant than absolute numbers in spotting problems. Trends may point you to areas for further investigation—which may become a retrospective action. |
| Quantitative | Scatter plots | Size of project and amount over budget | Shows the relationship between two variables. |
| Quantitative | Time series | Outage minutes over a period of time; throughput | Shows patterns and trends over time. Use when the temporal order of the data might be important, e.g., to see the effects of events. |
| Quantitative | Frequency tables | Defects; stories accepted on first, second, or third demo | A frequency table may be a preliminary step for other charts, or stand on its own. |
| Quantitative | Data tables | Impact of not-ready stories | Shows the same data for a number of instances. |
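As a small illustration of the leaf chart row above, here is a sketch in Python that turns a team’s ratings into the frequency distribution a leaf chart displays. The question, scale labels, and ratings are hypothetical, invented for illustration.

```python
from collections import Counter

# Hypothetical retrospective data: each team member rates
# "How safe do I feel raising problems?" on a pre-defined 1-5 scale.
SCALE = {1: "not at all", 2: "a little", 3: "somewhat",
         4: "mostly", 5: "completely"}
ratings = [4, 2, 5, 4, 3, 4, 2, 5]

counts = Counter(ratings)

# Render the distribution as text: one mark per response, so the
# group sees clusters and spread, not just an average.
for level in sorted(SCALE):
    marks = "x" * counts.get(level, 0)
    print(f"{level} {SCALE[level]:<12} {marks}")
```

A distribution like this can reveal a split in the group (here, two people at 2 and three at 4) that an average near 3.6 would hide, and the split itself is worth discussing.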

What Does Your Product Do?

When it gets dark, I turn on a light.

I can work, cook, read—long after sundown. I can see where I’m going, avoid the dog toys on the floor, and not run into furniture. If I need something that’s in the house, I can find it. The simple flip of a switch makes many things possible and solves many problems.

When I ask developers and engineering managers what their software product does, often, they don’t tell me. They regale me with details equivalent to explaining the production of electricity, starting from mining coal and ending with the switch closing a circuit. It’s all about the technical how.

Your customers may be interested in the technical how. They certainly want to know the what—what is possible on their side of the metaphorical light switch when they use your product.

It’s useful for the team to know, too. A short statement that answers three questions clarifies purpose and focuses attention:

  • What benefit does our product create?
  • What problem does our product solve?
  • For which group of people?

This clarity informs priorities, and helps people defer non-essential features. It helps keep focus on who will use the software, and how it will help them. When every member of the team can articulate the answer to these questions, they can make better decisions—and that almost always results in a product that is a better fit for function.

Assessing Team Improvement

I understand that managers who have invested effort and money in training teams in Agile methods may want to see how much those teams are improving. There are a handful of reasonable measures to look at to see whether the organization is improving overall (which I’ve written about here and here).

You can apply some of those measures to team improvement.  Are defects trending down?  Is the ratio of value work to fixing work improving? Are teams improving their practices?

But, teams don’t exist in a vacuum.

Part of team improvement comes from the way a team works together, their approaches, and skills. Another part comes from how well the environment supports their efforts:

P = f(p, e), where P = performance, p = people, and e = environment.

Teams can only improve so much, unless the environment also improves.

So, rather than look only at team results, also evaluate whether the environment that supports the team’s work is improving.  When teams aren’t improving as  hoped, this is where I start looking.

Team composition and stability. Are the teams appropriately cross-functional? What is the frequency of membership changes? If the team isn’t well designed, or isn’t really a team, don’t expect team-level improvement.

How work flows into teams. Are teams pulling work, or is management pushing? Are stories/features well-formed and customer facing? How much new work gets added during a sprint?

Trends in dependencies between teams. Are POs working to reduce dependencies in the way they shape stories and order the backlog? Are the teams organized to reduce hand-offs and dependencies?

Clarity of the product vision and team missions. Is it clear what problem they are solving, for which people, to create which benefit? Are team missions articulated and independent?

Adaptive planning.  Are POs and Management adjusting their expectations based on team capacity?

I expect team performance to improve with agile methods. But if you really want high performance, you need to improve the entire system, and that is management work.

Seven Agile Best Practices

Someone I don’t know offered to teach me Agile Best Practices recently.

I tend to think there are “generally good practices,” some of which are broadly applicable. In my experience, the search for Best Practices is often a search for Silver Bullets, and reflects a desire for easy solutions to complex problems. It would be nice to short-circuit the difficult work of seeing and evolving a system and building capacity, but I seldom see it succeed.

And, one of my practices is to challenge assumptions. So I challenged my assumption that there are no best practices. And I came up with some, which I suspect are not what the dear fellow who offered to school me had in mind.

#1 Think deeply about the problem you are trying to solve.

First,  understand which problem you are trying to solve, for which people, to create which benefit.  If you don’t understand this, you are relying on luck for your chosen “solution” to work.

#2 Question your assumptions about the causes of the problem.

Assumptions about how work works and how people work will determine the solution space you explore. For example, if someone assumes the reason people aren’t finishing stories by the end of the sprint is that they are not sufficiently accountable, he or she will probably not consider the way work flows into the teams, or dependencies between teams.

Notice that I wrote the causes, not the cause. In a complex system, there are likely multiple, entangled influences that result in the problem you observe. Everything touches everything; change one thing and you’ll change many. You can’t anticipate all the effects, but you can predict some, which may change your choice of action.

#3 Understand your current system and how it contributes toward the problem, and in what ways it might contribute to solving the problem.

Systems drive behavior. The patterns you observe emerge from the system you have. Sketch what you know about the system, using CDE, DOE, influence maps, reward maps, value stream maps… whatever set of diagrams will help you reason about the system. Use these diagrams to consider which factors you can change, and what effects those changes might have.

#4 Research at least three candidate actions to improve the situation. Don’t rely on claims by people who are selling “solutions.”

If you can’t think of at least three different possible approaches, you haven’t thought enough. Identifying more candidate approaches almost always improves your understanding of the situation.

Look at how you could influence different factors that contribute to the pattern. Don’t limit yourself to comparisons of three similar approaches (should we use Tool A, Tool B, or Tool C?).

Learn from what has worked for other people, but don’t follow slavishly. You need to understand enough to know where you can modify, and where you need to follow strictly. This sort of learning comes from studying theory and practicing it.

There are no silver bullets.

#5 Develop experiments to move towards a more effective way of working and improved outcomes.

Big changes feel like existential threats. Small changes support learning.

#6 Run experiments, and examine the results. Adjust based on what you observe.

When I took chemistry classes in high school and at university, our “experiments” had an expected correct outcome. Real experiments are about learning. You may learn that you need to increase some skill or create a different level of understanding in order to apply a particular technical practice effectively. You may learn that your architecture is preventing you from benefiting from autonomous teams. Whatever you learn, it will help you refine your approach.

Look for confirming data, and for disconfirming data. Consider what you’ll do to amplify results if things are headed in a good direction, and how you’ll dampen the effect if it reduces function.

#7 Work incrementally and iteratively to solve the problem(s).

It’s really not very agile to choose one big-bang solution and then roll it out. Try something, learn from it, see how the system changes. You’ll learn more about the issue, and have a chance to observe side effects. Be open to the different possibilities and opportunities that emerge.

***

These best practices are likely to lead you to an approach that fits your context, your organization, and your people. That approach will be the best practice for you, not something that worked for some other group, in some other context, to solve some other problem. And, since you and your people refined the approach, it will be theirs, and they will support it.


Agile and The Chasm

Someone posed the question “Has Agile crossed the chasm?”, a reference to Geoffrey Moore’s work on marketing technology products.

Agile is no longer the purview of pioneers and visionaries. Agile shows up in the popular business press. PMI is all over it. The big accounting/consulting firms are marketing agile. Clearly, agile (at least the term) is reaching the mainstream.

According to Moore’s model, people on the other side of The Chasm, the Early Majority, want to improve existing processes. They are not interested in a radical change in operations. They want something that works out of the box.

This makes sense if you are talking about a technology product.  But agile isn’t a product. It’s a set of practices, built on values and principles. Agile relies on the ability to inspect and adapt.

Many people want to lose weight, but don’t want to change their diet or exercise habits.  We know what happens.

Many managers in organizations with traditional functional hierarchies want the benefits of agile, without disrupting the status quo. Not going to happen.

Still no silver bullets.