
Using Data in Problem-Solving

Several years ago, I was called to help an organization that was experiencing system outages in their call center. After months of outages and no effective action, they appointed an Operations Analyst to collect data and get to the bottom of the problem.

Once they had data, the managers met monthly to review it. At the beginning of the meeting, the Operations Analyst presented a pie chart showing the “outage minutes” (the number of minutes a system was unavailable) for the previous month. It was clear from the chart which system was the biggest source of outages for the month.

The manager for that system spent the next 40 minutes squirming as the top manager grilled him. At the end of the meeting, the top manager sternly demanded, “Fix it!”

By the time I arrived to help, they had many months of data, but it wasn’t clear whether anything had improved. I dove in.

I looked at trends in the total number of outage minutes each month. I plotted the trends for each application and created a time series for each one to see whether there were any temporal patterns. That’s as far as I could get with the existing data. To home in on the biggest offenders, I needed to know not just the number of minutes a system was down, but how many employees and customers couldn’t work when a particular system was down. One system had a lot of outage minutes, but it was used only by a handful of specialists who supported an uncommon legacy product. Another system didn’t fail often, but when it did, eight hundred employees were unable to access holdings for any customer.
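To make that comparison concrete, here is a minimal sketch in Python of weighting outage minutes by the number of people who couldn’t work during the outage. The application names, minutes, and headcounts are invented for illustration; they are not the client’s actual data.

```python
# Impact-weighted outages: all names and numbers below are invented examples.
outages = [
    # (application, outage_minutes, people_blocked)
    ("legacy-product-lookup", 420, 6),    # fails often, but few users affected
    ("customer-holdings", 35, 800),       # fails rarely, but everyone is blocked
    ("call-routing", 120, 250),
]

for app, minutes, people in outages:
    lost_person_minutes = minutes * people
    print(f"{app:22} {minutes:5} outage minutes -> {lost_person_minutes:7} person-minutes lost")

# Ranking by person-minutes lost, rather than by raw outage minutes,
# points to the systems with the biggest business impact.
worst = max(outages, key=lambda o: o[1] * o[2])
print("Biggest impact:", worst[0])
```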

Though they had data before I got there, they weren’t using it effectively. They weren’t looking at trends in total outage minutes; the pie chart showed each system’s proportion of the whole, not whether the total was increasing or decreasing over time. And because they didn’t understand the impact of each outage, they wasted time chasing insignificant problems.

When I presented the data in a different way, it led to a different set of questions, and more data gathering. That data eventually helped this group of managers focus their problem-solving (and stop pointing the roving finger of blame).

As a problem-solver, when you don’t have data, all you have to go on is your intuition and experience. If you’re lucky, you may come up with a fix that works. But most good problem-solvers don’t rely on luck. In some cases, you may have a good hunch about what the problem is; back up your hunches with data. Either way, I’m not talking about a big measurement program. You need “good enough” and “just enough” data to get started. Often there’s already some useful data, as there was for the call center I helped.

But what kind of data do you need?  Not all problems involve factors that are easily counted, like outage minutes, number of stories completed in a sprint, or number of hand-offs to complete a feature.

If you are looking at perceptions and interactions, you’ll probably use qualitative data. Qualitative data focuses on experiences and qualities that we can observe but cannot easily measure. Nothing wrong with that; it’s what we have to go on when the team is discussing teamwork, relationships, and perceptions. Of course, there are ways to measure some qualitative factors, but subjective reports are often sufficient (and less costly). Often, you can gather this sort of data quickly in a group meeting.

If you are using quantitative data, it’s often best to prepare data relevant to the focus prior to the problem-solving meeting.  Otherwise, you’ll have to rely on people’s memory and opinion, or spend precious time looking up the information you need to understand the issue.

When I’m thinking about what data would be useful to understand a problem, I start with a general set of questions:

  • What are the visible symptoms?
  • What other effects can we observe?
  • Who cares about this issue?
  • What is the impact on that person/group?
  • What is the impact on our organization?

These questions may lead closer to the real problem, or at least confirm the direction. Based on what I find, I may choose where to delve deeper, and I get more specific as I explore the details of the situation:

  • When does the problem occur?
  • How frequently does it occur?
  • Is the occurrence regular or irregular?
  • What factors might contribute to the problem situation?
  • What other events might influence the context?
  • Does it always happen, or is it an exception?
  • Under what circumstances does the problem occur?
  • What are the circumstances under which it doesn’t occur?

How you present data matters, and may mean the difference between effective action and inaction, as was the case with the call center I helped.

In a retrospective (which is a special sort of problem-solving meeting), data can make the difference between superficial, ungrounded quick fixes and the deeper understanding that leads to more effective action, whether your data is qualitative or quantitative.

Here are some examples of how I’ve gathered data for retrospectives and other problem-solving meetings.

Qualitative data

Spider or radar chart
  Examples: Use of XP practices; satisfaction with various factors; adherence to team working agreements; level of various factors (e.g., training, independence).
  Notes: Shows both clusters and spreads. Highlights areas of agreement and disagreement. Points toward areas for improvement.

Leaf chart
  Examples: Satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale.
  Notes: Uses a pre-defined rating scale to show the frequency distribution within the group. Similar to a bar chart, but typically used for qualitative data.

Sailboat (Jean Tabaka)
  Examples: Favorable factors (wind), risks (rocks), and unfavorable factors (anchors).
  Notes: Metaphors such as this can prompt people to get past habitual thinking.

Timeline
  Examples: Project, release, or iteration events over time. Events may be categorized using various schemes, for example positive/negative, technical/non-technical, or levels within the organization (team, product, division, industry).
  Notes: Shows patterns of events that repeat over time. Reveals pivotal events (with positive or negative effects). Useful for prompting memories and for showing that people experience the same event differently.

Table
  Examples: Team skills profile (who has which skills, where there are gaps).
  Notes: Shows relationships between two sets of information. Shows patterns.

Trend
  Examples: Satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale.
  Notes: Shows changes over time.

Quantitative data

Pie chart
  Examples: Defects by type, module, or source; severity of issues.
  Notes: Shows a frequency distribution.

Bar chart
  Examples: Bugs found in testing by module alongside bugs found by customers by module.
  Notes: Shows a frequency distribution, especially when there is more than one group of things to compare. Similar to a histogram, but used for categories rather than continuous data.

Histogram
  Examples: Distribution of the length of outages.
  Notes: Shows the frequency of continuous data (not categories).

Trend
  Examples: Defects; outages; stories completed; stories accepted/rejected.
  Notes: Shows movement over time. Trends are often more significant than absolute numbers in spotting problems. Trends may point you to areas for further investigation, which may become a retrospective action.

Scatter plot
  Examples: Size of project and amount over budget.
  Notes: Shows the relationship between two variables.

Time series
  Examples: Outage minutes over a period of time; throughput.
  Notes: Shows patterns and trends over time. Use when the temporal order of the data might be important, e.g., to see the effects of events.

Frequency table
  Examples: Defects; stories accepted on the first, second, or third demo.
  Notes: A frequency table may be a preliminary step for other charts, or stand on its own.

Data table
  Examples: Impact of not-ready stories.
  Notes: Shows the same data for a number of instances.
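Most of the quantitative charts above take only a few lines of code once the data is collected. As an illustration only (the monthly totals below are invented, not real client numbers), here is a minimal Python/matplotlib sketch of the trend chart that was missing in the call-center story: total outage minutes per month.

```python
# Trend of total outage minutes per month; labels and totals are invented.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
total_outage_minutes = [310, 275, 340, 180, 150, 95]

plt.plot(months, total_outage_minutes, marker="o")
plt.title("Total outage minutes per month")
plt.xlabel("Month")
plt.ylabel("Outage minutes")
plt.ylim(bottom=0)               # keep the baseline at zero so the trend isn't exaggerated
plt.savefig("outage_trend.png")  # or plt.show() in an interactive session
```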

Solving Symptoms

Recently, I attended two retrospectives.  Different teams, different states, different facilitators. I’m usually on the other side, leading retrospectives.

Both retrospectives followed the “make lists” pattern. One made two lists: “What worked well” and “What didn’t work well.” The other made three lists: “What worked well,” “What didn’t work well,” and “Issues or questions.”

Once the lists were complete, participants voted on which issues to address and broke into small groups. These groups were called “problem-solving” groups. They were really “symptom-solving” groups.

There may be some agile coaches out there who guide the team to find patterns and underlying causes from these lists. Too often, that doesn’t happen, and the discussion never goes beyond solving symptoms. Too many people teaching Scrum or Agile short-change retrospectives, teaching new ScrumMasters and coaches to make lists rather than to help the team think, learn, and decide together.

Two lists, three lists, even four lists are not sufficient. Lists focus on symptoms, as seen from individual points of view. These lists do not build shared understanding. They do not reveal underlying causes, patterns, or structures that influence behavior.

When teams “solve” symptoms, the problems come back, or the team piles on layers of rules and processes designed to change behavior. (I visited one team that had over 20 working agreements, almost all aimed at the symptom level.)

It is not surprising to me that teams who run this sort of retrospective usually find it less than useful.

I’m not saying you have to do retrospectives the way I do them. But please, design and lead your retrospectives to dig beneath the surface, analyze, and search beyond symptoms. Otherwise, you are wasting your time.

Promoting Double Loop Learning in Retrospectives

“The thinking that got us here isn’t the thinking that’s going to get us where we need to be.” (attributed to Albert Einstein)

I have  this niggling concern about retrospectives.

I have no doubt that retrospectives that are too short, don’t result in an action or experiment, or fail to delve beneath the surface are a waste of time. (I suspect many retrospectives fall into this category, since many people teach that an entire retrospective consists of Keep/Drop/Add or some variant thereof. That is seldom sufficient for deep or creative thinking.)

But what about earnest retrospectives that focus on an area of concern, examine data, analyze underlying issues and result in action?  I worry that some of those fall short, too.  Why?  Because the thinking that got us here isn’t the thinking that will get us where we need to be.

People work out of their existing mental models. When they examine their current actions, they may achieve incremental improvements. But they may also take a potentially useful new practice and kill it with a thousand compromises, shaping the new practice to fit the old mental model.

This can happen even when people are trying to learn a new way of working. The first OO program I wrote looked remarkably procedural; I was trying to wrap my head around the new paradigm and hadn’t quite gotten there yet. In a retrospective, if people try to improve their agile practices, they may improve them right back to serial development. Or people may make a genuine effort to improve, but they only know what they know, and the possibilities for improvement they can see are within the bounds of their current thinking.

So the task, then, is to examine the thinking and expand possibilities.

Single and Double Loop Learning

Single loop learning asks, “How can we do what we are doing better?”

Double loop learning asks, “Why do we think this is the right thing to do?” It involves scrutinizing values, thinking, and assumptions.

Transformational improvement and significant learning come from making beliefs, assumptions, and thinking explicit, testing them, and experimenting.

Teams may need a little help to make the jump into the second learning loop. As teams examine their practices, ask questions that help them surface assumptions and test them:

  • What would have to be true for [a particular practice] to work?
  • [practice or action] makes sense when __________.
  • [practice or action] will work perfectly when __________.
  • [practice or action] will work well enough when __________.
  • [practice or action] will be harmful when __________.
  • What do we know to be true? How do we know that?
  • What do we assume is true?  Can we confirm that?
  • Why do we believe that?
  • What is untrue, based on our investigation?
  • What do we say our values are?
  • If an outsider watched us, what would he say our values are?
  • What is the gap?  How can we make the gap smaller?
  • How could we make things worse?

Chewing on a good subset of these questions usually helps a team see their assumptions, and take a different view on what has worked, what hasn’t worked as expected, and the reasons why.

Then, they are in a better position to choose a more effective action, or design an experiment that will help them learn what action to take.

Agile Retrospectives: A Primer

From time to time, I hear from people who aren’t realizing value from their retrospectives.

When I probe to understand the situation, I understand why they aren’t getting results–the process they are using isn’t designed to actually help the team think, learn, and decide together.

So here’s a primer for Agile Retrospectives, a process to help teams do just that.

Five Tips for Retrospective Leaders and Meeting Moderators

This article first appeared on stickyminds.com

Few people enjoy meetings that waste time in swirling discussions. Fewer still like meetings where their ideas and opinions are solicited and then ignored. Retrospective leaders (and anyone else who leads group discussions) need the tools to help groups think, discuss, and decide effectively. Below are five tips to help you make the most of time spent in retrospectives (and every other meeting).

1. Let the Group Members Do the Work

Some facilitators have the idea that their job is to stand at the front of the room and do most of the work—writing flip charts, interpreting data, and pointing the group to the right decision. Au contraire! As a retrospective leader, it’s your job to provide a structure that will help the group do the work. Do not do the work of the group for them. If you do it, they won’t own it.

One simple way to put the work back with the group involves flip charting a brainstormed list. Rather than writing down the ideas as group members call them out, ask people to write their ideas on large sticky notes (and write big, using a dark marker). Then, post the sticky notes. This method has three additional advantages: 1) it’s easier to group ideas on movable sticky notes; 2) it makes it easier for introverts to get a word in edgewise; and 3) it keeps people engaged in the process.

2. Record Faithfully

When you take up the marker to record for the group, record faithfully. I watched in horror as one facilitator wrote down only ideas he agreed with. Another facilitator wrote down his interpretations, which didn’t match what group members said. For example, the retrospective leader wrote down “commit to commit” when a team member suggested the team formally agree to attend the daily stand-up meeting. Another facilitator failed to capture one participant’s idea, even after she had brought it up three times.

When the facilitator ignores or misinterprets a participant’s idea, that participant feels like she hasn’t been heard and is likely to tune out the rest of the meeting.

When you are holding the pen, it’s your job to capture ideas from the group, not insert your own ideas. Sometimes, participants don’t express their ideas in a flip-chart-ready way. In that case, ask the participant to summarize the idea in just a few words. If that fails to produce a succinct statement, then summarize concisely and ask if it’s okay to capture the shorter summary.

3. Use Parallel Processing

Computer professionals know how to design programs to take advantage of parallel processing. Retrospective leaders can use parallel processing to work lists efficiently. For example, suppose the group has identified three problem areas that are impeding the group. Rather than tackle them serially, break into pairs or small groups to analyze root causes for each. Then bring the group back together to look for common root causes.

Parallel processing is also useful when there are one or two people in the group who tend to dominate the conversation. Working in small groups reduces their impact on the whole group. Plus, many people who won’t speak up in a larger group will speak up in a small one, which helps balance participation.

4. Let the Group Members Draw Conclusions

Our human brains are wired to make sense of data, so it’s not surprising that retrospective leaders fall prey to the temptation to inform the group of what they see. After one team posted a timeline, the retrospective leader lectured on her perceptions of what was going on during the project while the team just sat there. Even when the facilitator is spot on (and this one wasn’t), the team most likely will reject the facilitator’s view. It’s like criticizing your brother-in-law; it’s fine for your spouse to say his brother drinks too much, but watch out if you say it.

Some retrospective leaders protest that it takes too long for the group to process the data—that it’s more efficient for the facilitator to do the job. It may be true that it takes less time for the facilitator to relay her own interpretation of the data to the group, but it’s only more effective if the facilitator doesn’t care whether the group actually buys into and acts on the interpretation.

5. Test for Agreement

Groups can go on and on discussing issues, mulling over concerns, and answering questions. It’s important to have a thorough analysis, and it’s also important to come to a decision. So after the team has asked clarifying questions related to a decision, test the agreement.

Testing for agreement isn’t the same as making a final decision. Testing for agreement creates a data point and an assessment of how much more discussion is really needed. One way to test for agreement is the “Fist of Five.” Each person signals her support for a proposal or option using a six-point scale that requires no ballot other than one hand. The votes are as follows:

Five fingers = I strongly support
Four fingers = I support with minor reservations
Three fingers = I’ll go with the will of the group
Two fingers = I have serious reservations
One finger = I do not support
Fist = I’ll block

(I advise against using the middle finger to indicate lack of support.)

If everyone in the group expresses strong agreement (four or five fingers), you know that you need to note reservations and mitigate risks. But the group probably doesn’t need a lengthy discussion of the topic. However, if there’s only lukewarm support (three fingers), then more discussion is warranted—possibly to identify more favorable alternatives.

If most of the people in the group show three fingers or less, it’s time to move to a different option rather than wasting time discussing an option no one wants.
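For readers who like to see the thresholds written down, here is a small Python sketch of one way the Fist of Five tally can be turned into a next step. The cut-offs are my own illustration of the guidance above, not a fixed rule.

```python
# Fist of Five tally: 0 = fist (block), 1 = do not support, ... 5 = strongly support.
def next_step(votes):
    if 0 in votes:
        return "blocked: hear the blocker's concerns before going further"
    if all(v >= 4 for v in votes):
        return "agreed: note reservations, mitigate risks, and move on"
    if sum(1 for v in votes if v <= 3) > len(votes) / 2:
        return "little support: move to a different option"
    return "lukewarm support: discuss further or look for better alternatives"

print(next_step([5, 4, 4, 5, 3]))  # -> lukewarm support: discuss further ...
```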

Practice these tips, and pretty soon people will notice that meetings seem to work better when you lead them.

Seven Ways to Revitalize Your Sprint Retrospectives

This article first appeared on ScrumAlliance.org.

Retrospectives are an integral part of Scrum. But too often, when I talk to Scrum teams, they tell me that they’ve stopped doing retrospectives. “We’ve run out of things to improve,” one ScrumMaster said. Another complained that after six sprints, they were saying the same things over and over at every retrospective.

If your retrospectives have gone stale, here are seven things you can do to inject new energy.

Rotate leadership

Rather than always have the ScrumMaster lead the retrospective, rotate leadership among team members. Ask each team member to try one new adaptation when he or she leads the retrospective. By doing this, you’ll experiment with new activities and build the team’s group process savvy.

Change the questions

Rather than asking the two standard questions (“What did we do well?” and “What could be improved in the next sprint?”), start by asking, “What happened during the iteration?” Even when people work in the same team room, they see things from a different perspective. Ask about surprises and challenges. Then ask about insights. When you have a list of insights, ask about possible improvements.

Vary the process

If you’ve been leading group discussions, try using structured activities to help the team think together. Reconstruct the timeline of the project to help the team see patterns. Create a histogram that shows how satisfied team members are with the process they used and the product they created during the last sprint. Try using a fishbone diagram to explore root causes. Use dot voting to select improvements for the next sprint.

Include different perspectives

For the retrospective after a release, include people from the extended project community—people who aren’t part of the team, but who play a critical part in deploying and supporting the software. Focus on understanding and improving the interface between the Scrum team and other parts of the organization.

Change the focus

If you’ve been concentrating on improving how the team runs the Scrum process itself, switch the focus to engineering practices or teamwork. If you’ve been focusing on a narrow goal (“How can we improve the build process?”), try a broad goal for the next retrospective.

Try appreciative inquiry

Instead of looking at where to improve, look at what’s working well and how you can build on that. Use short, pair interviews to explore questions such as, “When were you at your best on our last sprint?” “Who else was involved?” and “What conditions were present?” After the interviews, put the pairs together in groups of four or six (two pairs or three pairs together) to find common themes.

Analyze recurrent themes

If your team is bringing up the same issues at each retrospective, examine the list of issues. If nothing changes as a result of your team’s retrospectives, analyze the factors behind the inaction. Are the issues related to organizational policies, systems, and structures? You may not be able to change organizational factors right away, but you can use your influence, and the team can change its response. Are the improvements the team chooses included in the next sprint plan? Plans for change that are kept separate from daily work fall by the wayside.

Retrospectives are the key mechanism for examining and improving methods and teamwork. If they aren’t working for you, inspect and adapt!

Looking Back, Moving Forward: Retrospectives Help Teams Inspect and Adapt

This article first appeared on stickyminds.com.

Not long ago, I received a call from someone who wanted to hold a retrospective. “Tell me about your goals for the retrospective,” I prompted. The requestor proceeded to describe what amounted to a mini-witch hunt. If you really want to wreak havoc with a team, try having a retrospective for these reasons:

  • To motivate another group to fix the way they do their work
  • To make it eminently clear that Person X isn’t doing his/her job—in public
  • To definitively assign blame for poor design, missed deadlines, injecting bugs, and other disappointing results
  • To force two individuals to solve a long-standing conflict—in public
  • To provide feedback on Person Y’s poor performance—in public
  • To prove that if those people had done better, everything would be fine

On the other hand, if you want to learn from experience, build on what works, gain perspective, and decide what to do differently, a retrospective can work for you. There are plenty of reasons in favor of well-run retrospectives, which help teams to:

  • Take a “whole system” view of their methods and practices
  • Discuss issues before they build up to a crisis
  • Look at what’s working and build on successes
  • Create experiments to evolve and improve practices
  • Fix what’s within their own control, rather than waiting for management to take action
  • Raise the visibility of impediments that managers need to work across the organization to fix

What follows is the outline of a successful retrospective.

Set the Stage

Lay the groundwork for the session by reviewing the goal and agenda. Create an environment for participation by checking in and establishing working agreements. Some people feel this preparation isn’t real work, but believe me: when you skip this part of a retrospective, you may save a little time, but you’ll pay for it later, because people will be less trusting and less likely to participate.

Gather Data

Review objective and subjective information to create a shared picture of the iteration. When the group members see the iteration from many points of view, they’ll have greater insight and will be more likely to move beyond their personal views to see the big picture of how the team works.

Generate Insights

Step back, and look at the picture the team has created. Use activities that help people think together to delve beneath the surface. When people think together from shared data, they are more likely to arrive at ideas that the whole group supports.

Decide What to Do

Prioritize the team’s insights and choose a few improvements or experiments that will make a difference for the team. Be sure to identify concrete, small steps that the team can take in the next iteration—one colleague calls them “now actions.” And make sure that the action steps are folded into the overall work plan, not kept to the side in an “improvement plan.” When improvement is part of the regular plan, it’s seen as a normal part of work, not something extra that the team will get to if they have time.

Close the Retrospective

Summarize how the team will follow up on plans and commitments. Thank people for their hard work, and do a little retrospective on the retrospective so you can improve your retrospective process, too.

This may look like a lot to cover in one meeting, but each step plays an important part, and needn’t take a long time. For a two-week iteration, you can cover these steps in an hour or so. For a slightly longer project (a month or so), dedicate a half-day to deciding what to do better next time. For a project that lasted a year, you’ll want to spend more time. Shortchanging your retrospective means shortchanging your chance to do better next time. Improvement doesn’t happen by hoping; teams need to dedicate time to thinking and learning.

Successful organizations know how to evolve to meet ever-changing expectations, rather than holding onto what used to work. Becoming comfortable with change, learning how to try something new, and measuring how well it works are critical business skills. And retrospectives are a great way to learn those skills on a team level.

Eight Reasons Retrospectives Fail (And What You Can Do about Them)

This article first appeared on stickyminds.com.

I’ve seen retrospectives help teams make major improvements. Yet, each time I talk to a group about retrospectives, someone always tells me, “We tried retrospectives, and they don’t work for us.” Why?

My inquiries revealed eight common reasons behind retrospective failures. Some failures happen before the retrospective starts, some happen during the retrospective, and some relate to how teams follow through on retrospective results. The good news is that most of the problems are relatively easy to fix.

1. No Preparation

Every significant working session or meeting requires preparation, and retrospectives are no exception. Showing up for a meeting with no idea of how the group will use the time is a waste of everyone’s time and results in unbounded discussions that lead nowhere. Take time to prepare an agenda that will help the team reach their goal.

2. No Focus

Every retrospective needs a focus. The focus describes the territory the team will explore, without specifying an outcome. For example, many teams start with a focus of “continuous improvement.” That may suffice for a while, but after a few retrospectives the team may run out of things to say. Sharpening the focus will help the team delve deeper. Choose a focus that reflects the challenges of the previous iteration or period of work. For example, if the team members found that the stories they selected were under-defined, they might focus on “improving our work with the product owner to groom the backlog.” (And invite the product owner to come to the retrospective.)

3. Failing to Gather Data

Many teams start their retrospectives by asking: “What did we do well?”; “What should we do differently?”; or some other variation. These aren’t bad questions, but they shouldn’t be the first questions. These questions ask for analysis and conclusion, which require data. Before talking about what to change, talk about what happened. Choose data related to the focus of the retrospective. Depending on the type of data, it may help to pull the information together prior to the retrospective meeting.

When the team skips gathering data, each team member is analyzing his or her own perceptions. When the team members review data together, they are creating a more complete picture of the iteration and are all working from the same data.

4. One or Two People Dominating the Conversation

It’s easy for one or two dominant personalities to control an unstructured discussion and push their ideas onto the group. The result is that the team as a whole doesn’t fully accept the retrospective outcome. Rather than leave the field open, use pair or small-group discussions and activities to ensure that everyone has a chance to participate and contribute to the result.

5. Focusing Only on Impediments That Are Outside the Control of the Team

We all know that many of the problems teams face are systemic problems–problems that management needs to fix. When team members only focus on issues outside their team, they can become demoralized. Worse, they come to view themselves as hapless victims. I find there’s plenty they can do to improve their own processes and methods–much more than most teams initially recognize. Focus on choosing improvements the team can do themselves or on which they have strong influence (in which case, the action is creating and executing an influence plan). Even when the team doesn’t have control, they can choose to respond differently.

6. Biting Off More than the Team Can Chew

Some teams generate long lists of things they need to improve, and then they try to do them all at once. Too much change can overwhelm the team and leave too little time to work on the product. Choose one or two actions that the team can work on in the next iteration.

7. Choosing Actions the Team Doesn’t Have Energy For

When it comes to deciding what improvement or experiment to work on for the next iteration, words matter. I often ask the team to rate the candidate actions on two scales: which one is most important and which one do they have the most energy for. The rankings are often quite different.

The team may recognize that something is important but not want to work on it. There are lots of reasons for this: They may have tried before and failed, the task may be too difficult or time-consuming given the other work they have to do, or the work may be plain unpleasant. In any case, when the team doesn’t have energy to work on an improvement, chances are pretty good it won’t get done. Go with the task the team has the energy to complete.

8. Keeping a Separate “Improvement Plan”

The most common reason I see for “do nothing” retrospectives is that the team keeps one plan for “real work” and another plan for improvements. Guess what happens to the improvement plan? When the improvement work is considered separately from “real work,” it doesn’t get done. Improving the team’s capability is real work, so put it in the real plan. That way it will be considered when the team makes commitments to working on features and will be in front of the team throughout the iteration. Write a story card for the chosen improvement, and take it into the next iteration planning meeting.

There are some organizations where retrospectives truly won’t work.

In organizations where there is a pervasive culture of blame, people may be too frightened to bring up issues. In that case, retrospectives may do more harm than good. When teams are facing relentless deadlines and not working at a sustainable pace, there may be so much pressure to produce that teams feel they can’t step back and look at the way they work or afford time to learn new skills or make changes. Both of these are systemic problems and are too big for team retrospectives to solve.

Fortunately, the remedies for most retrospective failures are quick and straightforward.

Making Retrospective Changes Stick

This article first appeared on Stickyminds.com.

Recently, a reader wrote to me with a concern about retrospectives. “We make decisions,” he wrote, “but we don’t have the discipline to carry them out. The team is starting to feel like retrospectives are a waste of time.”

If the team fails to resolve issues and make improvements after their retrospectives, they are wasting their time. But the problem may not be with the retrospective; the problem might be with how the team views the changes they’ve identified in the retrospective. I’ll share my story of two personal changes to illustrate what I mean.

A Tale of Two Changes

Last fall, I decided to make two changes in my life. First, I decided to save some money by giving up my personal trainer and going to the gym on my own. I thought this would be easy because I know how to design a good workout. I’m familiar with all the equipment in the gym, and after working with a trainer for years, I know enough exercises and variations to avoid falling into a rut.

I also decided to lose ten pounds. I expected losing weight would be quite difficult. In my thin days, I’d listened as friends complained about losing weight. I’d watched other friends go on and off diets and witnessed the gustatory gyrations of a colleague following Atkins as she traveled twenty weeks a year. I knew it would be hard, but I had to try.

It’s now been a bit over six months since I resolved to make those changes, and I can report that I followed through on one decision and the other fell by the wayside. I failed on the “easy” change. After making it to the gym once in three months, I re-hired my trainer. However, I exceeded my goal on the “difficult” change by losing 25 pounds.

What enabled me to succeed at the difficult goal? What allowed me to fail at the easy one?

Lesson #1

I didn’t have a well-formed goal for the first change. I saved money by not paying a trainer whether I went to the gym or not; I guess you could say that’s success. But there was a second, implicit part of the goal, which was to stay in shape. I might have done better if I’d stated my goal differently, perhaps as “Achieve exercise independence.” Teams need to look at the implicit goals and values behind their retrospective actions to make sure that both explicit and implicit goals are aligned.

Even with a better goal statement, I suspect the outcome wouldn’t have changed unless I addressed Lesson #2.

Lesson #2

I thought the first goal would be easy. Because I thought it was a no-brainer, I didn’t use my brain to plan the change in a way that would propel me to success. When teams think they face an easy change, they may not recognize how hard it is to change even a simple habit. And by treating the change as simple and easy, they make it easy to fail.

Brainful Change

I succeeded in my goal to lose ten pounds by giving myself structures and supports that would help me reach it, and by developing a strategy for the times I knew it would be hardest to hold to my resolve.

Feedback

I used a point system to track how much I was eating and exercising. Tracking made me much more aware of my eating actions and helped me make choices. I also bought a scale so I could measure my weight and see my progress.

When the team members are deciding what to do in their retrospective, set aside a few minutes to consider how they’ll evaluate success and track progress.

One team chose a goal of increasing their automated unit testing by writing at least one test for each story they worked on. They added a column on their task board to track the unit tests for each story. A team that wanted to improve code quality by pairing created a tracking sheet and made a check mark each time they caught a defect that would have been missed without another pair of eyes.

Structure

I established a “weigh-in day” to help hold myself accountable.

A structure can be anything that creates an opportunity for the team to do what they’ve committed to do. The team who started pairing created and posted a pairing schedule. A team who wanted to improve their design skills did cooperative reading and set up a weekly lunch to share key ideas.

Support

I am lucky to have a husband who will eat pretty much anything I put on his plate and is willing to grill whatever I bring home from the store. (Plus, I suspect he had a vested interest in seeing me lose a few pounds, so he wasn’t going to work against my goal. Just guessing.)

Support can be an “atta boy,” or it can be access to books, training, coaching, and experts. The team who wanted to improve their refactoring purchased books and supported each other by walking through their refactorings with each other.

A Counterbalance

That was fine when I was at home, but I travel, and that’s where the extra pounds came from in the first place.

As soon as the waitress set the plate in front of me, I’d cut the portions down to size: a serving of meat is the size of a deck of cards. A cup of pasta is about the size of a tennis ball. I didn’t have to rely on will power to stop eating; I had a handy heuristic and a specific action to help me stick to my food plan.

When you review retrospective actions, think about what will make it hard to stick to the resolve. Use a Force Field Analysis to identify the factors that will propel the change forward and those that will restrain it. What can the team do to strengthen the drivers and reduce the restraining factors? (See the hand-drawn chart of a force field analysis below.)

Force Field Analysis

Most changes aren’t no-brainers, even when they sound simple on the surface. Save a little time in your retrospective to identify how the team can use feedback, structure, and support to help them make the change. Consider it insurance for the investment you’ve made in a retrospective.