All posts by Esther

Change Artist Super Powers: Patience

“It shouldn’t take this long!” John, the VP of Development, snapped. “It’s just not that hard!”

The “it” John referred to was a set of measures and metrics. He believed that if all the teams reported these, everyone would be better able to plan releases. He’d be better able to build trust with customers. He was right that the measures would help both in planning work and in communicating with customers. He was wrong in his expectation that this change would happen quickly.

The metrics that seemed easy and obvious to John did not seem so to the others. John had studied these measures, understood the theories behind them, and believed they would help him, the development teams, and the company. He’d already climbed the learning curve. The people who would need to change their routines to provide him those metrics had not. To the recipients of John’s request, it was another demand on their time, one that worried them. John said the metrics would help with planning—but would he also use them to rank teams and managers? Furthermore, using the tracking tool John preferred was truly an ordeal.

Patience is a virtue, especially when it comes to change in large systems. Before you get frustrated at the pace of change, consider these factors:

  • Number of new concepts involved
  • Number of new practices/processes involved
  • Alignment between existing structures, processes, and reward systems that aren’t slated to change
  • New and modified infrastructure required
  • Effort involved
  • What people will lose
  • What people fear
  • What else people are working on
  • Amount of slack in the system
  • Interest from the next level manager (not just a distant executive)
  • Number of other changes people are absorbing

John skipped most of these factors when he set his own expectations. To him, collecting these metrics seemed simple. To the people who would track and enter the data, there were practical and emotional hurdles.

So, be patient. Empathize, understand concerns, get curious about the context. But don’t become a doormat. Armed with a broader understanding, jointly develop a plan and support people through the change. Things didn’t get to be the way they are in a day, and they seldom change for the better in a day, either.

The Risks of Anonymous Feedback

In one of the online forums I participate in, someone declared that feedback between peers must be anonymous. His rationale was that people won’t be honest without anonymity.

I have found that it is possible to be honest and not anonymous. 

I’ve also found that anonymous feedback backfires in a number of ways:

People veer into judgement rather than information when they hide behind anonymity. That’s seldom helpful. An anonymous zinger doesn’t help. Nor does a value statement such as “you don’t pull your weight” or “you’re stingy with information.”

Unless feedback is very specific, the receiver may not recognize (or even remember) what the feedback giver refers to. With anonymous feedback, there’s no way to follow up and ask for examples, understand impact, negotiate a better way to work together, or make amends, if that’s what is called for.

It is fairly normal for people to try to guess who gave a particular bit of anonymous feedback, especially if the feedback is critical, judgmental, or conflicts with the receiver’s self-perception. People often guess wrong, and that distorts and damages relationships.

The practice of anonymous feedback erodes trust in the group. A feedback receiver may wonder, “If he had a problem with me, why didn’t he tell me…” People become more cautious, engage in more protective behavior, and hide mistakes or issues.

Honest, congruent person-to-person feedback requires thought and skill. But it is worth the effort, and it contributes to a more resilient and humane culture.

Change Artist Super Powers: Empathy

Some people seem to think that empathy has no place at work…that work requires a hard nose, logic, and checking your emotions at the door. But in periods of change, emotions—which are always present, whether we choose to acknowledge them or not—surge to the surface. Ignoring the emotional impact of change doesn’t make it go away. Rather, attempts to suppress or devalue people’s responses to change may amplify emotions.

Empathy is the ability to recognize and vicariously identify with someone else’s experience and emotions. Empathy enables you to understand someone else’s point of view, the challenges posed by the change, what they value, and what they stand to lose by changing.

Empathizing doesn’t mean you have to feel the same thing, think the same way, make the other person feel better, or fix the situation so everyone is happy. Demonstrating empathy means you listen, acknowledge, and accept feelings and points of view as legitimate. Empathy is fundamentally about respect.

Three kinds of empathy play a part in change.

Emotional empathy is understanding another’s emotions and feelings. This is what usually comes to mind first when people hear the term. Emotions are a normal part of change—from excitement to grief, puzzlement, loss of confidence, and anger. Too often, people who “drive” change dismiss these responses and urge people to “just get on with it.”

Cognitive empathy means understanding someone else’s patterns of thought and how he makes sense of his world and events. Understanding how others think about things may help you frame a new idea in a way that meshes with their views. That also helps you—you’ll know more about the obstacles and issues you are likely to encounter.

Point-of-View empathy combines a bit of both of these, and it allows you to say genuinely, “I can see how it looks that way to you.” Once you extend that courtesy to someone, he is more likely to want to see how the situation looks to you.

Empathy provides information that helps with change in at least two ways:

You can refine your ideas about the change based on local information, which people are more likely to share when you make an effort to listen and connect with them.

People are more likely to listen to you when they feel listened to.

The more you listen, the more you learn about the needs and values of the people facing a change. And that is the key: People rarely change because someone has a bright new idea. They change to save something they value. But you won’t learn that unless you empathize.

Jobs don’t fit in neat little boxes.

Most job descriptions decompose work into discrete chunks, clearly defining what each position must do. Competency models list required behaviors, seeking standardization across contexts. In essence, these models are akin to specifications for machine parts.

Complex knowledge work isn’t like that.

I prefer to think about jobs in terms of the work, impact on the organization, context, relationships, and collaboration. There’s a core of qualities, skills, experience, and demonstrated understanding necessary for the work. Context shapes what’s actually required to do the job and have an impact on the organization.

Sort of like a fried egg.

Doesn’t fit in a nice neat box, but closer to reality.

Job Descriptions vs. Jobs


Change Artist Super Powers: Experimentation

In my previous two posts, I wrote about curiosity and observation. In this one, I’ll share some ideas about experimentation— my third Change Artist Super Power.  

Tiny changes, done as experiments, may feel like you’re dancing around the real issue instead of facing it head-on. But many big problems cannot be addressed directly. The presenting problem is actually the result of many contributing factors. For example, one of my clients wanted to reduce cycle time. Obviously, you can’t walk over to cycle time and lop off a month, nor can you shorten it by decree. My client could only make progress on the big problem by doing something about many smaller contributors. 

Big changes tend to stimulate the organizational antibodies that seek to return to the status quo, and they often have big unintended side effects. Experiments, by their nature, are small, time-bound, not very threatening, and easier to contain if something goes wrong.

The key with experiments is to find something you can act on now, without a budget, permission, or an act of god (or some large number of vice presidents). Your goal is to nudge the system and get fast feedback, not to design the ultimate solution. 

Here are ten questions to consider as you design and carry out experiments.

  1. What factors may contribute to the current problem situation?  (Curiosity and observation will help you here!)
  2. Which factors can you control or influence? 
  3. What is your rationale for choosing this particular experiment? 
  4. What question are you trying to answer with your experiment?
  5. What can you observe about the situation as it is now?
  6. How might you detect that your experiment is moving the situation in the desired direction?
  7. How might you detect that your experiment is moving the situation in an undesirable direction or notice undesirable side effects?
  8. What is the natural time scale of the experiment? When might you expect to see results?
  9. If things get worse, how will you recover the situation?
  10. If things improve, how will you amplify or spread the experiment?

In my experience, questions #6 and #7 are often the most difficult to answer. Most of us have been trained to think about Measurement with a capital M. So we look for big things to count, related to the big problem. That’s not what is needed with experiments. With an experiment, I look for indicators that it is having an effect, even if the effect is a very subtle one.

For example, at one client, when teams didn’t deliver the desired (read, “dictated by the product owner”) number of stories, the management response was to increase pressure on the teams to deliver to “commitments.” 

My experiment seeded different questions during sprint reviews to focus attention on how work was flowing into the teams, and whether the teams had the conditions to succeed. After several reviews, the managers started asking different sorts of questions without prompting, and focusing less on “holding people accountable.” I knew my experiment was moving the situation in the right direction.

A change in questions is not so easy to count, and probably difficult to implement as a capital M Measure. But listening to the sorts of questions managers asked was quite useful as an indicator in my experiment. Different questions implied different thinking, which led to different actions, and eventually, improved project results. My experiment moved the system in a better direction and a bit closer to solving the big problem.

“But all these tiny changes will be soooo slow,” you might be thinking. Most big problems didn’t appear in a day—although sometimes it looks that way. Big problems are the result of accumulated effects over time. They didn’t arrive in a day, and they won’t be solved in a day. But, bit by bit, experiment by experiment, you can make things better.

Change Artist Super Powers: Observation

When I was a kid, there was a party game called Pin the Tail on the Donkey. The game involved a large wall poster of a sad-looking, tailless donkey. Armed with a replacement tail and a pin, each child attempted to give the donkey a new tail—while blindfolded and a bit dizzy from being spun around by the parent hosting the party. (I know, it sounds awful.) Obviously, the chance of an accurate placement was quite small.

Without the ability to observe what is happening, any attempts to improve a situation in your organization may be similarly misplaced. Or you may succeed—purely by chance. When you hone your ability to observe, you stand a much better chance of choosing an appropriate action. My ability to observe is my second Change Artist Super Power.

A manager called me, concerned that people on his team were too timid and could benefit from assertiveness training. I observed several meetings where the manager did 80% of the talking. When someone did get a word in, the manager interrupted. When I shared my observation, the manager was shocked and chagrined. He changed his behavior, and discovered his team had a lot to say. He also realized that his first idea for a fix was misplaced.

At another client, I observed that data about system outages was presented as monthly outage minutes in pie chart form. People knew which system was the biggest culprit in the past month, but had no idea about trends or impacts. I dug into the data and created charts that showed outage minutes over time, and how many people were affected when a given application went down. Seeing this information in a different form allowed them to address the biggest issues, rather than pointing the roving finger of blame based on a monthly snapshot.

In both these cases, observation was key to choosing an appropriate action.

Observing sounds simple. In fact, it is hard work and requires practice and skill. You can practice any time, by choosing just one thing and consciously noticing it for a short period of time. However, sharing your observations can be tricky, especially if you are an outsider and have not been invited to observe. Any time you are observing, it is imperative that you share only what you have seen and heard, in neutral language. Stay away from judgement and interpretation.

What might you observe to increase your ability to solve problems?  What might you gain by having a fresh set of eyes observe your organization?

Change Artist Super Powers: Curiosity

In my work, I draw on models, frameworks, and years of experience. Yet, one of my most valuable tools is a simple one: Curiosity.

In an early meeting with a client, a senior manager expressed his frustration that development teams weren’t meeting his schedule. “Those teams made a commitment, but didn’t deliver! Why aren’t those people accountable?” he asked, with more than a hint of blame in his voice. As I spent more time in the organization, I heard other managers express similar wonderment (and blame).

I also noticed that whenever someone asked, “Why aren’t those people accountable?”—or some other blaming question—problem-solving ceased.

I know these managers wanted to deliver software to their customers as promised. But, their blaming questions prevented them from making headway in figuring out why they were unable to do so.

I started asking different questions: curious questions.

  • Who makes commitments to the customers, and on what basis? How do customer commitments, team commitments, and team capacity relate to each other?
  • When “those teams” make commitments, is it really the people who will do the work committing, or someone else?
  • What does “commitment” really mean here? Do all parties understand and use the term the same way? 
  • What hinders people from achieving what the managers desire?  Do teams have the means to do their work?
  • What is at stake, for which groups of people, regarding delivery of this product? 
  • What is it like to be a developer in this organization? 
  • What is it like to be a manager in this organization?
  • What is it like to be a customer of this organization?

I worked with others in the client organization to learn about these (and other) factors. We developed and tested hypotheses, engaged in conversations, made experiments, and shifted the pattern of results. 

For the most part, managers no longer ask blaming questions. They ask whether teams have the data to make decisions about how much work to pull into a sprint. They examine what they themselves say and do to reduce confusing and mixed messages. They review data, and adjust their plans.

Curiosity uncovered contradictions, hurdles, confusion, and misunderstandings. All of which we could work on to improve the situation.

So, there you have it. Curiosity is my number one Change Artist Super Power, and it can be yours, too.

Forgotten Questions of Change

I’ve been thinking about and observing organizational change for a very long time.

It seems to me that, in their enthusiasm for efficiency, planning, and “managing” change, people often overlook some critical questions.

A handful of questions that could lead to more effective action, but seldom get asked:

  • What is working well now, that we can learn from?
  • What is valuable about the past that is worth preserving?
  • What do we want to not change?
  • Who benefits from the way things are now?
  • Who will lose (status, identity, meaning, jobs…) based on the proposed new way?
  • How will this change disrupt the informal networks that are essential to getting work done?
  • How will this change ripple through the organization, touching the people and groups indirectly affected?
  • What holds the current pattern in place?
  • How can we dampen this change, if it goes the wrong direction?
  • What is the smallest thing we can do to learn more about this proposed course of action?
  • What subtle things might we discern that tell us this change is going in the right direction…or the wrong one?
  • What is the time frame in which we expect to notice the effects of our efforts?

What questions would you add?

Using Data in Problem-Solving

Several years ago, I was called to help an organization that was experiencing system outages in their call center. After months of outages and no effective action, they appointed an Operations Analyst to collect data and get to the bottom of the problem.

Once they had data, the managers met monthly to review it. At the beginning of the meeting, the Operations Analyst presented a pie chart showing the “outage minutes” (number of minutes a system was unavailable) from the previous month. It was clear from the chart which system was the biggest source of outages for the month.

The manager for that system spent the next 40 minutes squirming as the top manager grilled him. At the end of the meeting, the top manager sternly demanded, “Fix it!”

By the time I arrived to help, they had many months of data, but it wasn’t clear whether anything had improved. I dove in.

I looked at trends in the total number of outage minutes each month. I plotted the trends for each application, and created time series for each to see whether there were any temporal patterns. That’s as far as I could get with the existing data. In order to home in on the biggest offenders, I needed to know not just the number of minutes a system was down, but how many employees and customers couldn’t work when a particular system was down. One system had a lot of outage minutes, but it was used by only a handful of specialists who supported an uncommon legacy product. Another system didn’t fail often, but when it did, eight hundred employees were unable to access holdings for any customers.
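
To make that concrete, here’s a minimal sketch of this kind of analysis in Python with pandas and matplotlib. The file name, the column names, and the “impact” formula (outage minutes times affected people) are my assumptions for illustration, not the client’s actual data model.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical outage log, one row per outage.
# Assumed columns: date, system, outage_minutes, affected_users.
outages = pd.read_csv("outages.csv", parse_dates=["date"])

# Weight each outage by how many people it idled, not just how long it lasted.
outages["impact"] = outages["outage_minutes"] * outages["affected_users"]

# Trend of total outage minutes per month, across all systems.
total_minutes = (outages.set_index("date")["outage_minutes"]
                        .resample("MS").sum())

# A monthly time series per system, to expose patterns a single pie chart hides.
impact_by_system = outages.pivot_table(
    index=pd.Grouper(key="date", freq="MS"),
    columns="system",
    values="impact",
    aggfunc="sum",
).fillna(0)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
total_minutes.plot(ax=ax1, title="Total outage minutes per month")
impact_by_system.plot(ax=ax2, title="Outage impact (minutes x people) by system")
plt.tight_layout()
plt.show()
```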

Though they had data before I got there, they weren’t using it effectively. They weren’t looking at trends in total outage minutes… the pie chart showed the proportion of the whole, not whether the total number was increasing or decreasing over time. Because they didn’t understand the impact, they wasted time chasing insignificant problems.

When I presented the data in a different way, it led to a different set of questions, and more data gathering. That data eventually helped this group of managers focus their problem-solving (and stop pointing the roving finger of blame).

As a problem-solver, when you don’t have data, all you have to go on is your intuition and experience. If you’re lucky, you may come up with a fix that works. But most good problem-solvers don’t rely on luck. In some cases, you may have a good hunch about what the problem is. Back up your hunches with data. In either case, I’m not talking about a big measurement program. You need good enough and “just enough” data to get started. Often there’s already some useful data, as there was for the call center I helped.

But what kind of data do you need?  Not all problems involve factors that are easily counted, like outage minutes, number of stories completed in a sprint, or number of hand-offs to complete a feature.

If you are looking at perceptions and interactions, you’ll probably use qualitative data. Qualitative data focuses on experiences and qualities that we can observe, but cannot easily measure. Nothing wrong with that. It’s what we have to go on when the team is discussing teamwork, relationships, and perceptions. Of course, there are ways to measure some qualitative factors. Subjective reports are often sufficient (and less costly). Often, you can gather this sort of data quickly in a group meeting.
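
As a small illustration of how little tooling this takes, here’s a sketch in Python, assuming a made-up 1-to-5 “safety” rating collected by a quick poll in a team meeting. It prints the frequency distribution, leaf-chart style, so the group can see the spread at a glance.

```python
from collections import Counter

# Hypothetical answers to "How safe do you feel raising problems?"
# on an assumed scale of 1 (not safe) to 5 (very safe).
ratings = [4, 2, 5, 4, 3, 2, 4, 1]

counts = Counter(ratings)

# One row per scale value; the bar length is the number of votes.
for value in range(1, 6):
    print(f"{value}: {'#' * counts[value]}")
```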

If you are using quantitative data, it’s often best to prepare data relevant to the focus prior to the problem-solving meeting. Otherwise, you’ll have to rely on people’s memory and opinion, or spend precious time looking up the information you need to understand the issue.

When I’m thinking about what data would be useful to understand a problem, I start with a general set of questions:

  • What are the visible symptoms?
  • What other effects can we observe?
  • Who cares about this issue?
  • What is the impact on that person/group?
  • What is the impact on our organization?

These questions may lead closer to the real problem, or at least confirm the direction. Based on what I find, I may choose where to delve deeper, and get more specific as I explore the details of the situation:

  • When does the problem occur?
  • How frequently does it occur?
  • Is the occurrence regular or irregular?
  • What factors might contribute to the problem situation?
  • What other events might influence the context?
  • Does it always happen, or is it an exception?
  • Under what circumstances does the problem occur?
  • What are the circumstances under which it doesn’t occur?

How you present data can make a big difference, and may mean the difference between effective action and inaction, as was the case with the call center I helped.

In a retrospective—which is a special sort of problem-solving meeting—data can make the difference between superficial, ungrounded quick fixes and the deeper understanding that leads to more effective action, whether your data is qualitative or quantitative.

Here are some examples of how I’ve gathered data for retrospectives and other problem-solving meetings.

Qualitative

  • Spider or radar chart. Examples: use of XP practices; satisfaction with various factors; adherence to team working agreements; level of various factors (e.g., training, independence). Notes: shows both clusters and spreads; highlights areas of agreement and disagreement; points toward areas for improvement.

  • Leaf charts. Examples: satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale. Notes: use a pre-defined rating scale to show the frequency distribution within the group; similar to bar charts, but typically used for qualitative data.

  • Sailboat (Jean Tabaka). Examples: favorable factors (wind), risks (rocks), unfavorable factors (anchors). Notes: metaphors such as this can prompt people to get past habitual thinking.

  • Timelines. Examples: project, release, or iteration events over time; events may be categorized using various schemes, such as positive/negative, technical/non-technical, or levels within the organization (team, product, division, industry). Notes: shows patterns of events that repeat over time; reveals pivotal events (with positive or negative effects); useful for prompting memories, and for showing that people experience the same event differently.

  • Tables. Examples: team skills profile (who has which skills, where there are gaps). Notes: shows relationships between two sets of information; shows patterns.

  • Trends. Examples: satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale. Notes: shows changes over time.

Quantitative

  • Pie charts. Examples: defects by type, module, or source; severity of issues. Notes: shows frequency distribution.

  • Bar charts. Examples: bugs found in testing by module, alongside bugs found by customers by module. Notes: shows frequency distribution, especially when there is more than one group of things to compare; similar to histograms, but used for categorical data.

  • Histograms. Examples: distribution of the length of outages. Notes: shows the frequency of continuous data (not categories).

  • Trends. Examples: defects; outages; stories completed; stories accepted or rejected. Notes: shows movement over time; trends are often more significant than absolute numbers in spotting problems, and may point to areas for further investigation, which may become a retrospective action.

  • Scatter plots. Examples: size of project and amount over budget. Notes: shows the relationship between two variables.

  • Time series. Examples: outage minutes over a period of time; throughput. Notes: shows patterns and trends over time; use when the temporal order of the data might be important, e.g., to see the effects of events.

  • Frequency tables. Examples: defects; stories accepted on the first, second, or third demo. Notes: a frequency table may be a preliminary step for other charts, or stand on its own.

  • Data tables. Examples: impact of not-ready stories. Notes: shows the same data for a number of instances.
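
If you want to try one of these, here’s a minimal sketch of a spider/radar chart in Python with matplotlib. The labels, names, ratings, and the 1-to-5 scale are all invented for illustration. Where the lines cluster, the group agrees; where they spread, there’s something worth discussing.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical self-ratings (1-5) on a few team practices,
# one list per person, in the same order as the labels.
labels = ["Pairing", "Testing", "Code review", "Daily sync", "Refactoring"]
people = {
    "Ana": [4, 3, 5, 4, 2],
    "Ben": [2, 4, 4, 5, 3],
    "Caro": [3, 3, 2, 4, 4],
}

# One spoke per label; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
angles += angles[:1]

ax = plt.subplot(polar=True)
for name, scores in people.items():
    ax.plot(angles, scores + scores[:1], label=name)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 5)
ax.legend(loc="lower right")
plt.show()
```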