Tag Archives: problem-solving

Using Data in Problem-Solving

Several years ago, I was called to help an organization that was experiencing system outages in their call center. After months of outages and no effective action, they appointed an Operations Analyst to collect data and get to the bottom of the problem.

Once they had data, the managers met monthly to review it. At the beginning of the meeting, the Operations Analyst presented a pie chart showing the “outage minutes” (the number of minutes a system was unavailable) from the previous month. It was clear from the chart which system was the biggest source of outages for the month.

The manager for that system spent the next 40 minutes squirming as the top manager grilled him. At the end of the meeting, the top manager sternly demanded, “Fix it!”

By the time I arrived to help, they had many months of data, but it wasn’t clear whether anything had improved. I dove in.

I looked at trends in the total number of outage minutes each month. I plotted the trends for each application and created time series for each application to see if there were any temporal patterns. That’s as far as I could get with the existing data. To home in on the biggest offenders, I needed to know not just the number of minutes a system was down, but how many employees and customers couldn’t work when a particular system was down. One system had a lot of outage minutes, but only a handful of specialists who supported an uncommon legacy product used it. Another system didn’t fail often, but when it did, eight hundred employees were unable to access holdings for any customers.
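For readers who like to see the mechanics, the aggregation itself is simple. Here is a minimal sketch in Python with pandas of the kind of summarizing I describe; the outage log, system names, and column names are invented for illustration and are not the call center’s actual data.

```python
import pandas as pd

# Invented outage log for illustration: one row per outage.
outages = pd.DataFrame({
    "system": ["billing", "billing", "legacy", "crm", "crm", "crm"],
    "date": pd.to_datetime(["2010-01-05", "2010-02-11", "2010-02-20",
                            "2010-01-07", "2010-02-03", "2010-03-15"]),
    "outage_minutes": [12, 30, 480, 5, 8, 6],
})

# Trend in total outage minutes per month: is the overall number rising or falling?
monthly_total = outages.resample("M", on="date")["outage_minutes"].sum()

# Month-by-system matrix: per-application trends and temporal patterns.
per_system = (outages
              .groupby([pd.Grouper(key="date", freq="M"), "system"])["outage_minutes"]
              .sum()
              .unstack(fill_value=0))

print(monthly_total)
print(per_system)
```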

Though they had data before I got there, they weren’t using it effectively. They weren’t looking at trends in total outage minutes… the pie chart showed the proportion of the whole, not whether the total number was increasing or decreasing over time. Because they didn’t understand the impact, they wasted time chasing insignificant problems.

When I presented the data  in a different way, it led to a different set of questions, and more data gathering.  That data eventually helped this group of managers focus their problem-solving (and stop  pointing the roving finger of blame).

As a problem-solver, when you don’t have data, all you have to go on is your intuition and experience. If you’re lucky you may come up with a fix that works. But most good problem solvers don’t rely on luck. In some cases, you may have a good hunch what the problem is. Back up your hunches with data. In either case, I’m not talking about a big measurement program. You need good enough and “just enough” data to get started. Often there’s already some useful data, as there was for the call center I helped.

But what kind of data do you need?  Not all problems involve factors that are easily counted, like outage minutes, number of stories completed in a sprint, or number of hand-offs to complete a feature.

If you are looking at perceptions and interactions, you’ll probably use qualitative data. Qualitative data focuses on experiences and qualities that we can observe but cannot easily measure. Nothing wrong with that. It’s what we have to go on when the team is discussing teamwork, relationships, and perceptions. Of course, there are ways to measure some qualitative factors, but subjective reports are often sufficient (and less costly). Often, you can gather this sort of data quickly in a group meeting.

If you are using quantitative data, it’s often best to prepare data relevant to the focus prior to the problem-solving meeting.  Otherwise, you’ll have to rely on people’s memory and opinion, or spend precious time looking up the information you need to understand the issue.

When I’m thinking about what data would be useful to understand a problem, I start with a general set of questions:

What are the visible symptoms?

What other effects can we observe?

Who cares about this issue?

What is the impact on that person/group?

What is the impact on our organization?

These questions may lead closer to the real problem, or at least confirm the direction. Based on what I find, I may choose where to delve deeper, and get more specific as I explore the details of the situation:

When does the problem occur?

How frequently does it occur?

Is the occurrence regular or irregular?

What factors might contribute to the problem situation?

What other events might influence the context?

Does it always happen, or is it an exception?

Under what circumstances does the problem occur?

What are the circumstances under which it doesn’t occur?

How you present data can make a big difference, and may mean the difference between effective action and inaction, as was the case with the call center I helped.

In a retrospective—which is a special sort of problem-solving meeting—data can make the difference between superficial, ungrounded quick fixes and developing the deeper understanding that leads to more effective action—whether your data is qualitative or quantitative.

Here are some examples of how I’ve gathered data for retrospectives and other problem-solving meetings.

Qualitative

• Spider or Radar Chart
Examples: Use of XP practices; satisfaction with various factors; adherence to team working agreements; level of various factors (e.g., training, independence).
Notes: Shows both clusters and spreads. Highlights areas of agreement and disagreement. Points towards areas for improvement.

• Leaf Charts
Examples: Satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale.
Notes: Use a pre-defined rating scale to show the frequency distribution in the group. Similar to bar charts, but typically used for qualitative data.

• Sailboat (Jean Tabaka)
Examples: Favorable factors (wind); risks (rocks); unfavorable factors (anchors).
Notes: Metaphors such as this can prompt people to get past habitual thinking.

• Timelines
Examples: Project, release, or iteration events over time. Events may be categorized using various schemes, for example: positive/negative; technical and non-technical; levels within the organization (team, product, division, industry).
Notes: Shows patterns of events that repeat over time. Reveals pivotal events (with positive or negative effects). Useful for prompting memories and for showing that people experience the same event differently.

• Tables
Examples: Team skills profile (who has which skills, where there are gaps).
Notes: Shows relationships between two sets of information. Shows patterns.

• Trends
Examples: Satisfaction; motivation; safety; severity of issues; anything for which there is a rating scale.
Notes: Changes over time.

Quantitative

• Pie Charts
Examples: Defects by type, module, or source; severity of issues.
Notes: Shows frequency distribution.

• Bar Charts
Examples: Bugs found in testing by module alongside bugs found by customers by module.
Notes: Frequency distribution, especially when there is more than one group of things to compare. Similar to histograms, but typically used for quantitative data.

• Histograms
Examples: Distribution of the length of outages.
Notes: Frequency of continuous data (not categories).

• Trends
Examples: Defects; outages; stories completed; stories accepted/rejected.
Notes: Shows movement over time. Often trends are more significant than absolute numbers in spotting problems. Trends may point you to areas for further investigation—which may become a retrospective action.

• Scatter Plots
Examples: Size of project and amount over budget.
Notes: Shows the relationship between two variables.

• Time Series
Examples: Outage minutes over a period of time; throughput.
Notes: Shows patterns and trends over time. Use when the temporal order of the data might be important, e.g., to see the effects of events.

• Frequency Tables
Examples: Defects; stories accepted on the first, second, or third demo.
Notes: A frequency table may be a preliminary step for other charts, or stand on its own.

• Data Tables
Examples: Impact of not-ready stories.
Notes: Shows the same data for a number of instances.
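Most of these charts can be drawn on a flip chart, and many take only a few lines of plotting code if you prefer software. As one example, here is a minimal sketch of a spider/radar chart in Python with matplotlib; the practice names and ratings are invented, purely to show the mechanics.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented categories and 1-5 ratings from two team members (not real data).
categories = ["TDD", "Pairing", "CI", "Refactoring", "Collective ownership"]
ratings = {
    "Person A": [4, 3, 5, 2, 3],
    "Person B": [2, 4, 4, 3, 5],
}

# One spoke per category; repeat the first angle to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for person, values in ratings.items():
    closed = values + values[:1]      # close the polygon
    ax.plot(angles, closed, label=person)
    ax.fill(angles, closed, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.show()
```

Plotting each person’s ratings as a separate polygon is what makes the clusters, the spreads, and the areas of disagreement visible.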

Fill in the blanks

I’ve been noticing what’s missing lately. In some ways, it’s harder to see what’s not there than what is. But there’s lots of useful information in what isn’t said, as well as what is.

For example:

A manager, talking about one of the people who reported to him, said:

“He’s difficult to manage.”

What’s missing?

“He’s difficult (for me) to manage.”

“(When he does X), he’s difficult (for me) to manage.”

“(When he does X,) he’s difficult (for me) to manage (because I don’t understand his actions).”

“(When he does X), he’s difficult (for me) to manage (because I don’t understand his actions and I don’t know what to do).”

There may be another follow-on sentence that hints at the crux of the matter. That sentence might be…

And I’m worried that if I can’t bring him around, I’ll miss my goals and my boss will think I’m not competent.

And I have judgements about that behavior because I was criticized for that when I was in school.

And I feel threatened.

And I feel I have to defend my ideas.

I know what I’m asking doesn’t make sense, but my boss told me to do it.

It may have been more comfortable for the manager to say the first sentence, as he did.  He may even believe it.

As long as the manager deletes parts of the sentence, it’s easy for him to see the other person as the problem. As long as the problem resides entirely with the other person, there’s not much he can do to improve the situation (other than fire the “difficult to manage” person).  But the deletions contain important information that could help him improve the situation.

What examples would you add?

Best at argument != Best ideas

I was talking to my friend Penny the other day about a team she coaches.

There’s a really smart guy on the team. I’ll call him Bob. Most of the time Bob is an asset to the team. But when the team needs to decide on a technical solution under time pressure, he’s not.

“But Bob is a smart guy,” you may say. “How is it he’s not an asset? Won’t he have the best ideas?”

When it comes time to solve a technical problem, Bob is always first to offer his idea. Then Bob dominates the conversation with a constant stream of words, leaving no opening for anyone else to insert facts, ideas, or points of view. When someone does find a voice and interrupts the torrent, Bob cuts him off, declaring, “I’m not finished.”

When Bob does finish and another team member asks a question, Bob implies that the other person doesn’t get it, and might be too stupid to see the brilliance of Bob’s idea.

When another team member proposes a different idea, Bob shreds it. He points out the flaws in the other person’s idea, while pointing to the strengths of his own idea.

When he does let others speak, it’s pretty clear that he isn’t listening to learn. Bob is figuring out how to score his next point.

I don’t believe Bob has bad intentions. I believe he wants to be helpful, and believes he is. Bob is helping the team in many ways, but he’s also hurting the team. Here’s how:

1. The team doesn’t have enough ideas. Due to Bob’s style, there is seldom more than one idea (Bob’s) that receives serious consideration. That’s not enough. Bob is smart, but so are the other people on the team. Bob is simply the most extraverted and the best at argument and debate. That’s not the same as having the best ideas.

2. Over time, Bob’s style will wear down the other team members. It’s really not terribly satisfying to be browbeaten, or have all your ideas shot down. At some point, other people will stop offering ideas, and acquiesce rather than endure another argument with Bob.

3. As long as Bob’s ideas prevail, others don’t have a chance to develop their ideas. They are deprived of the opportunity to learn, think about problems and risks, and increase their capability. Over the long run, that’s bad for the individuals involved, bad for the team, and bad for the organization.

Penny has given Bob feedback on the effects of his behavior. It’s made some impact, but when there’s pressure to come up with a solution, Bob, as most people do, falls back on his default behavior. Chances are, Penny won’t get Bob to change that. Bob’s behavior is driven by his natural tendencies, and by years of cultural exposure that taught him that competition and argument are the way to find the best ideas.

However, Penny can change the process so that the team has sufficient ideas to consider, and that credible ideas receive due consideration.

Here’s how:

Separate generating ideas, explaining ideas, exploring ideas, and evaluating ideas. As it is now, all of these are mashed together (and done mostly by Bob).

Equalize participation. From what Penny has told me, I suspect Bob is a strong extravert and the other team members are introverts. That means Bob is very comfortable thinking out loud, while the other team members need a bit of time to organize their thoughts. Before they have time to do that, Bob is on a roll. One way to make room for more participation is to start the process with a few minutes of silent brainstorming. Then, ask each person to explain (orally or through sketching) the essentials of his or her best idea.

Apply the Rule of Three. If you don’t have at least three ideas, you don’t have enough ideas, and you probably don’t understand the problem. You may end up deciding that the first idea that came up is the best one…but you may not, or you may refine the first idea based on further discussion.

Test for agreement. Some teams get carried away with voting. But when decisions are routinely hijacked by one individual, it can help to test for agreement using a gradient of agreement or a fist of five.

Establish a small set of tests for technical solutions. In this team, some of the tests that make sense might be:

The solution …

  • is the simplest thing that can work.
  • doesn’t add to technical debt.
  • can be implemented in the timeframe required by service level agreements.

Bob may have the best ideas on the team. We don’t really know, because no one else’s ideas are fully considered. We do know he doesn’t have a perfect record. Some of his fixes don’t work the first time. Some of his fixes break something else. If the team had a process to consider and refine ideas, that might not happen as much.

It may sound like it will take more time to separate generating ideas from explaining, exploring, and evaluating them. It may seem like a lot of effort to find more than one idea and test the ideas for soundness and test the level of support for a given idea.

But in years of observing teams, I find that slowing down and separating the steps of choosing a solution helps the team speed up. A mashup process forced by a dominant individual may appear to save time in the very short term. That’s seldom true if you account for all the time costs and other effects incurred.

Fixing the Quick Fix

Here in the United States, our business culture tends to be action-oriented. We value the ability to think fast and act decisively. These qualities can be strengths. However, like most strengths, they can also be a weakness. Taking action when you don’t know the facts can lead to irreparable harm. Deciding too quickly before you’ve examined your options can lead you down the wrong path. Fixing fast may assuage symptoms but leave the underlying problem to fester.

Treating symptoms may work with the common cold, but with many technical and organizational problems it’s a prescription for disaster. Before you chase a solution, slow down and make sure you are looking at the underlying problem, not just soothing symptoms. (Sometimes, as with the common cold, you do need relief from symptoms in order to make any progress.) Of course, we can’t always tell right away whether we are fixing a symptom or an underlying issue. But, if the problem keeps coming back, you can be sure that you’ve been dancing around manifestations of the problem, not tackling the problem itself.

Start by Questioning Your Questions

Every question contains assumptions. While a question opens one avenue of inquiry, it closes others. The questions you ask constrain your thinking about the problem and your eventual solution. For example, in one company, the executives weren’t satisfied with the speed with which the IT department delivered projects. They sacked the VP of the IT department and brought in a new one with a reputation for decisive management action. The new VP immediately started asking questions, which seems like a good sign, until you know her questions:

Where is the dead wood?

How can we get the testers and developers to work harder and move faster?

These questions assume the source of the problem: lazy and incompetent people. But, what if the reason projects are late has something to do with the fact that people are assigned to several projects at the same time or that priorities change so quickly that teams never reach “done” before they are pulled to work on the latest urgent issue? The VP has already narrowed her inquiry and will only arrive at “solutions” that involve firing the “dead wood” and whipping those deemed “live wood.”

Examining our questions is critical, because not only do our perceptions influence the language we use, but also the language we use influences what we perceive.

Starting with a more general set of questions will help reduce perceptual bias and uncover the facts of a problem situation:

What are the visible symptoms?

What other effects can we observe?

Who cares about this issue?

What is the impact on that person/group?

What is the impact on our organization?

What other factors might contribute to the problem situation?

What other events might influence the context?

These questions may lead you closer to the real problem, or at least help you see whether the area you are looking at is the most fruitful for alleviating pain. Based on this, choose where to delve deeper, and get more specific as you explore the details of the situation:

When does the problem occur?

How frequently does it occur?

Is the occurrence regular or irregular?

Does it always happen, or is it an exception?

Under what circumstances does the problem occur?

What are the circumstances under which it doesn’t occur?

What else is happening around the same time? Are those factors connected?

At this point, you may start to feel you have a pretty good idea of what the problem is, so …

Back Up Your Hunches with Data

These questions will give you some clues as to the problem, but data will confirm your hunch (or point you in a different direction). I’m not talking about a measurement program here, but simple, “good enough” data gathered from observation or existing sources. Once you have data, look at it from several vantage points, or you may miss something important.

For example, at a large financial services company, operations analysts collected data on the number of minutes during which each application was unavailable during core business hours. They presented the data at a monthly management meeting using a pie chart. The chart clearly showed which application had the largest portion of “outage minutes” for the month. Presenting the data in a pie chart facilitated putting the manager of the offending application on the hot seat at the management meeting. But it didn’t help much with understanding the problem, or knowing where to look for solutions.

By the time I arrived to help them solve their problems, they had many months of data, and looking at it differently provided useful information.

I used the questions above to guide how I viewed the data. I looked at the total number of outage minutes each month. I looked at the trends for each application. I looked at outages in time series. But the data didn’t answer the question, what is the impact on our organization? I set out to answer that question, and found that not all outages were equal. A day-long outage in a specialized application only affected ten people. A five-minute outage in another system meant everyone in the department—several hundred people—couldn’t access any applications, but it had only happened once. A short outage in an application used by one-third of the department, happening every day, accounted for most of the outage minutes over time. Knowing this didn’t solve the problem, but it told them where to focus their fixing.
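A crude but useful way to make the “what is the impact?” question concrete is to weight each application’s outage minutes by the number of people who can’t work while it is down. Here is a minimal sketch in Python; the application names and numbers are invented and only loosely echo the story above.

```python
import pandas as pd

# Invented summary per application; names and numbers are illustrative only.
apps = pd.DataFrame({
    "application": ["specialized app", "department-wide system", "everyday app"],
    "outage_minutes_per_month": [480, 5, 220],
    "people_idled_per_outage": [10, 800, 300],
})

# Raw outage minutes point at the specialized app; person-minutes lost
# can point somewhere else entirely.
apps["person_minutes_lost"] = (apps["outage_minutes_per_month"]
                               * apps["people_idled_per_outage"])

print(apps.sort_values("person_minutes_lost", ascending=False))
```

A ranking like this answers “where is the most work being lost?” rather than “which application had the most outage minutes?”, which is a better guide to where to focus.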

Quick-fixers may bemoan the amount of time I spent cleaning the data, looking at it this way and that, asking more questions, and gathering more data. It took about a week. Of course, it did take a bit more time to repeat this process at the application level. Still, taken in total, the time spent asking questions and gathering data was far less than the time that had passed while management tried to understand the situation with data presented in a way that hid useful information—and less than the outage minutes for all affected employees for one month.

Generate At Least Three Candidate Solutions

Now, it’s time to look for possible solutions. Quick-fix thinking conditions us to choose the first idea that’s even remotely plausible. Don’t dismiss your first idea, but don’t stop there either. Develop at least three candidate solutions. You may go back to your first idea, but by developing additional options, you’ll understand the problem better. If you have difficulty thinking of more than one candidate solution, turn your thought process upside down and ask, “How could we make the situation worse?” That paradoxical question almost always jiggles more ideas loose.

As part of developing a solution, identify at least ten things that could go wrong with each candidate solution. Looking at the downsides might send you back to generating more options. Sometimes the best solution isn’t the most elegant; it’s the one with the fewest or least objectionable downsides.

Once you have several options to choose from, choose one and put it in motion. Chances are it won’t work out quite as planned—and that’s an opportunity for learning. Observe and gather data, re-adjust, and try again.

Every solution carries the seed of the next problem. That’s a given. When you apply systematic thinking to the problem, it’s less likely that the next problem will compound the problem you were trying to fix in the first place.

This article originally appeared on stickyminds.com.

Bridging Structural Conflict: Same and Different

No two people or groups are the same, but their differences don’t have to force them apart.

I recently talked to two groups who were feuding. On one side were the development teams, tasked with delivering new functionality every two weeks. On the other were the operations folks, who were charged with keeping the environment stable and available. Simmering resentment was getting in the way of working together. People were talking through proxies and hurling insults.

This is not the first time I’ve seen development and operations at odds. Conflicts like this are almost always structural. For example, the conflict may arise from the way the company is organized to accomplish work, different and conflicting goals, or different professional points of view. But when people don’t realize the source of the conflict, it tends to get personal. People on each side of the conflict start talking about “those people,” as if the people on the other side are stupid, bad, or wrong.

Welcome to the fundamental attribution error. Humans tend to explain others’ behavior as resulting from personal characteristics and ignore the role of external circumstances. It’s seldom true that people have evil motives, are intentionally blocking progress, or are as dumb as dirt. The people employed in our field are reasonably intelligent, well-intentioned, and come to work wanting to do a good job.

Some structural conflicts dissolve if people are organized differently or when professional concerns are harmonized for complementary action, such as working together to create a software feature. When testers, GUI developers, database developers, and middleware developers come together in a cross-functional team, their differences can be a source of strength rather than conflict.

But, the structural conflict between the development (dev) and operations (ops) groups I was working with wasn’t going away. Dev and ops do fundamentally different kinds of work, with different rhythms, requiring different skills. However, these two groups did (as they do in most organizations) need to find a way to dial back the conflict, appreciate each other, and work together reasonably well.

In cases like this, I find it helps to look at how the groups are similar and different. So, I asked the groups to work together to make a chart listing similarities and differences (their “Same” and “Different” lists).

As the list of differences grew longer, I felt a niggling worry that there wouldn’t be a difference the two groups could negotiate. So far, the differences they’d listed weren’t going away; they were inherent in the work. The items in the Same list were important—that’s the common ground where people can land when conflict arises—but this list was very abstract and “Mom and Apple Pie.” There wasn’t enough to bridge a gaping chasm.

Then, one of the ops specialists busted loose. He started listing all the ways the dev teams failed to prepare for production turnover.

And, there it was (with a little reframing to get the blame out)—an area where the two groups could work together. Redefining “done” and making a production checklist gave them a chance to work together on a small, bounded project. Working together on a shared goal would help each group understand the other’s context, know each other as people, and understand how their different concerns added up to a similarity: “Our work is critical to the business.”

There’s some wrangling yet to come before these two groups stand down, join hands, and sing “Kumbayah.” Some people may even be reluctant to give up their belief that “those people” are idiots. It is convenient to have someone to blame, rather than recognize that some conflicts can’t be resolved and that this implies nothing about intelligence or motives. It’s not likely—or desirable—that the two groups become of one mind on all things. They do different work; what’s necessary is that they find a way to cooperate and coordinate in a way that meets the goals of delivery and stability.

If you find your group in conflict with another, you can try this exercise to bring some needed coherence. First, discern that it is a structural conflict. Conflicts of this sort tend to happen when there are different departments or goals, different professional interests, different types and rhythms of work, and when the teams do not have an interdependent goal at the level where they do their work. Everyone in a company may have a goal of “producing valuable products profitably,” but at a department level, groups may be at odds. For example, both developers and auditors want the company to be successful, but they have different roles in meeting that goal, which often feel at odds—i.e., structural conflict.

Here’s another check. Some structural conflicts are self-imposed, as when testers and developers report to different managers and are measured differently. Testers and developers do have an interdependent goal: It takes both testing and development skills to deliver a complete and reliable product. If you bring together both on a cross-functional team, the conflict usually dissolves.

If you feel it really is a structural conflict, find someone neutral to facilitate the session. Get both groups in the room. Avoid recrimination and blame, and establish the following ground rules to keep the discussion productive:

No labels. That means neither group can say the other group is lazy, sloppy, or a bunch of slackers. Positive labels aren’t much better; they don’t give specific information, and they imply that one side is empowered to evaluate the other. The power-difference message comes through whether the labels are negative or positive.

No characterizations about motives or intelligence. Remember the fundamental attribution error? People and groups that are in conflict develop stories about the “others.” Over time, these can evolve into harsh judgments about the others’ motivation, intelligence, and general fitness as human beings. The judgments show up in statements such as “They don’t care about quality,” “They don’t get it,” and worse.

When you catch yourself saying something like “They don’t care about X,” rebuild the sentence as “They have a different perspective on X.” Rebuild “They don’t get it” to “They don’t see it the same way I do”—which might just prompt you to think, “And I bet I don’t see it the way they do. I wonder what they see that I don’t.”

Make the list. Write down all the ways the groups are the same and how they are different.

Consider these questions to help the group process the list:

Which differences can be negotiated or changed?

Which ones are most significant?

Which similarities and differences help us do our work? Which ones get in the way?

Would it help us do our work if we were more the same? More different?

Not all differences make a difference, and not all differences can (or should) be eliminated. We need different skills, different points of view, and different professional concerns to create valuable products.

The point is to find something that the two groups can work on together to build a bridge. Then, when conflict arises again (and it will) they’ll have some common ground to land on (the similarities) and at least one experience of working together.

Sometimes, fate intervenes to give two groups a common goal—like a fire or flood in the office. After fighting fire or flood, groups tend to see each other as real human beings, not the sum of misattributed characterizations. I don’t wish fire or flood on anyone, so look around for other natural opportunities for the groups to work together—but don’t start fires.

This article originally appeared in Better Software Magazine.

Seeing System Problems: Expand Your Field of Vision

One of the biggest mistakes people make is attributing system problems to individuals (and individual problems to the system).  If you try to solve the problem on the wrong level, you are doomed to fail.

Here’s a simple yet classic example of trying to solve a systemic problem on the individual level.

Bob Sutton recently posted a piece on Team Guidelines. (I have some other reactions to the post, which I’ll cover some other time.)  The list starts with Show Respect, which includes “Show up to meetings on time.”  One can deduce from this that people aren’t showing up on time for meetings–hence an exhortation to individuals to be respectful and arrive on time.

People showing up late for meetings is a common problem; I see it in almost every organization I visit.

When you look at the problem on the individual level, and as disconnected events, it limits the range of solution options. Thus the ground rules, the feedback directed at individuals, and the non-solving of the problem.

[Image: “Show some respect!” Showing up late for meetings framed as an individual matter of respect.]

But if you want to understand the problem, look at the shape of the problem across the system:

[Image: a wider view of the “late to meetings” problem.]

A wider view shows interconnections, complexity, and effects beyond a single meeting. From this view, the “showing up late” problem has much more serious effects than annoying and inconveniencing other meeting participants.

Taking a wider view shows that “showing up late for meetings”  isn’t a trivial matter.  Maybe if more companies took the broader view, they’d actually try to fix the problem, rather than telling people to “show respect.”  For the most part, this behavior has little to do with intentional or unintentional disrespect (an exception described here, again, thanks to Bob Sutton).  We’ll look at that another time.

What are  you missing when you miss the wider view?

The Confusing Field of Coaching

I noticed at the recent agile conference that there were lots of people who billed themselves as agile coaches, and several sessions on coaching. Seemed like more of both than in past years.

I consider myself a coach, too, though not with a capital C.  I usually coach managers or teams, and sometimes coaches. Mostly, I’m a consultant and coaching is part of the work I do in that role.  But some people lay claim to “coach” as their job description.  And some of those people have training from a coaching school.

All this, and a little story my friend Johanna told about an experience she had with a coach, got me thinking about the different sorts of problems people bring to coaches, and the confusion that results when the coach is a “coaching process” type of coach and the problem is a skills-based problem (which requires content knowledge, in addition to process knowledge), or a problem that calls not only for a coaching model but for a bunch of other models as well.

Back when she had a corporate job, my friend Johanna Rothman had the opportunity to work with a coach on a problem she was experiencing at work. It must have been an enlightened workplace, because they employed Johanna AND coaches, whom they dispatched when a manager needed a bit of help. Johanna’s hope was that the coach could help her with the specific problem, which she hadn’t been able to figure out on her own.

Johanna explained the problem to the coach.  The coach responded, “The answers are inside you.”

Johanna tried explaining the problem again.  The coach answered, “The answers are inside you.”

The answers were not inside Johanna (at that time…I bet they are now). She needed specific information, direction, and guidance to develop a new skill that would enable her to solve the problem. The response Johanna received to the problem she described was woo-woo nonsense. It was no help at all. The coach was trying to be helpful, I’m sure. And she was acting out of a coaching model, just not one that fit the situation.

The Range of Coaching Practice

If we’re talking about a skill—whether it’s TDD, interpersonal feedback, object-oriented design, or influencing change across the organization—the answer is not inside you. If you are shifting from a serial mental model of software development to an iterative/incremental mental model of software development, the answer is not inside you. Willingness to learn is inside you. The desire to maintain good working relationships is inside you. The yearning for pride in work is inside you. The desire to see the organization improve is inside you.

The specific skill is not.

You need teaching, training, and direction, along with coaching and feedback. A coach in these situations needs to have task-specific (content) knowledge, in addition to coaching skills. And those coaching skills are likely different from the skills a life coach or goal coach brings to the table—unless they worked in the content field prior to studying a coach curriculum or taking up the coach label.

Life coaching—finding the answer inside you—is useful when you have a life problem; when you need a skill, you need skill coaching.

Another friend, Don Gray, recently helped three people understand how an interaction blew up. As they unwound personalities and communication styles, two of them heard some information their default preference didn’t deal (well) with. He helped them recognize how their communication preferences helped them and hindered them. He helped them see additional options. To do this, he needed coaching models, plus content knowledge on communication, human interaction, personality, and cognition. Rare indeed. The answers may have been inside these people, but it took more than a coaching model to bring them out.

And of course, some times the answers are inside us.

Satir coaching assumes that each of us has the resources to be happy and successful as a human—but may not be using all our resources to their full potential. Jerry Weinberg’s fab book, More Secrets of Consulting: The Consultant’s Tool Kit, is inspired by Satir’s self-esteem toolkit, and the book is tremendously helpful. I’ve studied the Satir model for many years; it informs much of the work I do with individuals and groups (and certainly how I live my life).

Likewise, the Solution-Focused Coaching model assumes that the person being coached has some experience solving the problem for which they have sought coaching. This model assumes that the coachee has all the competencies needed to come to a solution. I had a little experience of this at the previous Retrospective Facilitators’ Gathering in Tisvilde, Denmark. Josef Scherer offered a session on Solution-Focused Coaching, and since I was a little stuck in my writing practice, I volunteered to be coached. It helped me a lot—the answer was inside me. But this sort of coaching wouldn’t have helped if my problem was that I didn’t know how to structure a coherent sentence.

There are other Coaching models:  GROW, Achieve, and many more. More than you can shake a stick at (just google “coaching models”).

When someone is stuck, they may need a jiggle, in the form of a reframe or a prompt to remember what they do know about solving the problem. When someone is struggling with an interpersonal issue or a life issue, the answer may lie within and need a little help from inner resources to come out.

But sometimes, the person needs context, information, demonstration, a straight answer, or a skill.

Related:  A Coaching Toolkit

A Coaching Toolkit

As a coach, your job is not to solve or do—it’s to support other people as they develop skills and capabilities and as they solve problems on their own. When it comes to coaching, one size does not fit all. You need to have a variety of practices in your toolkit in order to approach each situation and individual differently. Here are some of the approaches I use when coaching other people.

Provide Context

Sometimes all a person needs is some context. Knowing how a specific task or skill fits into the work of the team or supports the product helps people make better decisions. And knowing the importance of an activity can motivate people to do tasks they don’t normally enjoy. For example, a person may not like test-first development when he first tries it, but when he understands how it contributes to clean code and good design, he may be more willing to stick with it.

Frame the Problem

Sometimes people need help framing the problem. When people are learning a new skill or a new way of thinking they don’t always have a clear understanding of the problem they’re trying to solve. Ask them questions to help them consider and verbalize different aspects of the problem—the what, where, when, who, and how. Having a clear problem statement is (at least) half the battle.

Generate More Options

In other cases, a team member may choose a solution that you know will not be effective. How do you help without being directive? Well, it helps to know that people always choose what they perceive to be the best option available. Always. The trouble is, sometimes people don’t have enough good options to choose from—the only options they can think of either won’t work or work only in the short term. To help them come up with a longer list of options, ask questions. These questions might include:

  • What other ways could we accomplish the same goal?
  • What would happen if we did this part differently?

Rather than reject an option (or worse, dismiss the person), walk through the option with him or her. Start by saying, “You could do that—and here are some of the risks I see.” Generate additional options together. You can offer the first option, then move to jointly generating alternatives. Between you, come up with at least three options. Having only two options is a dilemma; it forces a choice between “your way” and “my way.”

Provide Real-Time Feedback

Many times, when performing a new skill, people need to hear some real-time feedback to get a feel for how what they are doing is affecting the project. Help them by offering course corrections and confirmation. Just remember that feedback is information that enables different choices; it’s not criticism or evaluation. Describe what you see or hear and state the impact.

Ask Questions

Sometimes people just get stuck. A few well-chosen questions can prompt new thinking. Here are some that work well for me:

  • If you did that, what would you gain? / If you did that, what would the collateral consequences be?
  • What are three things that could go wrong with that approach?
  • What else have you tried?
  • What are you hoping to accomplish?
  • Who else is affected by this?
  • Who else / what else will be affected by this solution?

Catch People Doing Something Right

You don’t have to wait until something is going wrong to provide coaching. Notice when people are performing a new skill correctly and comment on it. If the moment seems right, use the opportunity to explore the root causes of the success. When people know more about the steps and circumstances that lead to good results, they can consciously recreate them.

Demonstrate

Some individuals learn best by seeing it done. In those cases, demonstrating a new skill for them might be your best option. For example, you might teach Test-Driven Development (TDD) by demonstrating with FitNesse. As you demonstrate, ask if your pace is too slow, too fast, or just right. If you only ask whether you’re going too fast, the other person may be embarrassed to admit he isn’t keeping up.

Review

Other people learn best by trying it themselves first and then reviewing it with the coach. Always start by stating what works and making global comments about the work product. Only then should you talk about the problems or issues. If there are classes of issues, discuss those rather than pointing out each instance of the problem.

Provide Information

Coaches are a source of information—and sometimes that’s all the other person needs. Depending on your own skill level, ask questions to understand the problem the person is trying to solve. After you understand the problem, offer examples of what has worked before or what factors they might want to consider. It’s common for people who are learning a new skill to think they need one thing when they really need another.

Bring In an Expert

No one expects you to have all the answers. So when you don’t have the answer, don’t hesitate to bring in another knowledgeable person. You’ll solve the problem sooner and model that it’s okay to ask for help.

Listen

One of the most powerful (and underutilized) coaching practices is listening. Being a sounding board as someone talks through a problem or proposed course of action lets the other person hear their own logic. And as people talk, they often come up with new ideas or see weaknesses on their own. Listening also conveys that you are interested in them, not just in showing off your expertise.

Coaches look for opportunities to help build skills and capabilities. The more coaching approaches you have available to you, the more opportunities you will see—and use.

This article originally appeared on scrumalliance.org.

Facing Up to the Truth

(c) 2001-2010 Esther Derby

“There is nothing either good or bad, but thinking makes it so.”

William Shakespeare’s Hamlet, Prince of Denmark, Act II, Scene 2

The other day I was skimming the Harvard Management Update when a section in bold red print caught my eye: “Why don’t more organizations stop and think? Because they don’t want to face the truth.” The article went on to say that the ability to “face the truth” is a critical business skill, and that failure to do so can have organizational and bottom-line consequences. Does this sound familiar? You and I see these consequences in software when projects spin out of control and shaky products are shipped “on time” in spite of poor quality.

What is “the truth”? Truth is a big word, so let’s settle for something more mundane: the current situation or the current state.

First let’s acknowledge that organizations can’t face truth; organizations are configurations of people and can’t really act as one human person would. But we, the people who make up organizations, can grapple with concepts like truth. So why don’t we? If it’s that important, we should all face up to the current situation, right? What makes it so hard for us?

Let’s look at two projects that didn’t go as hoped for, and how their sponsors faced the situation.

Martha was a new vice president in a software company that was growing by acquisition. Martha saw an opportunity to consolidate accounting and customer functions across acquired companies. She made the business case to her boss, Ben, chartered a project she named “One-Account,” and started the search for a project manager.

The hiring market was tight, and Martha couldn’t find anyone with the level of experience and skill she wanted for the salary she was able to offer. After interviewing a dozen candidates, she settled for a bright young man named Steve, even though he didn’t have much experience.

Pretty soon it became obvious that Steve didn’t have the skills to handle the large and complex project Martha had hired him for. Steve wasn’t able to manage scope or build even a basic plan.

“I can’t go upstairs and tell Ben this,” Martha thought. “If I tell him, he’ll think I’m a fake and a failure. I talked him into this, after all. We haven’t actually missed any dates,” she rationalized, “and we aren’t over budget, so we’re not really off track…”

When colleagues started suggesting that Martha needed to step in and put the project back on track, she countered by justifying her current situation. “I really did my best to find a more experienced project manager, but Steve was the fourth person I made an offer to, and by that point…what was I supposed to have done?”

The project continued to wallow as Steve frantically hired more contractors to work on the ever-increasing scope. Martha started moving resources from other projects and initiatives to cover the wildly inflating budget. “It’s all coming from my own budget, and I’ve got the One-Account project covered, so technically we’re not really over budget,” she told herself.

Martha’s boss, Ben, looked at his current situation, and realized he had a vice president who wasn’t able to face the situation and take action. Ben fired Martha.

Several time zones away, Jackson found himself in a similar spot. His organization was building a new Web application, the first for his company. He hired a project manager, Stacey, who had a good résumé and who seemed like a good fit for the organization. She was a nice person and did a good job building the initial plan.

Jackson felt things were going okay, so he turned his attention to a problem brewing with a subsidiary elsewhere.

When Jackson came back, he found that Stacey’s project team was still having planning meetings, but there were no results or tangible signs of progress. The delivery date had been moved out. When the team talked about delivery, they were pretty vague. “Sometime in maybe the fourth quarter,” he’d hear, “or maybe early next year.”

“This project isn’t going the way I want it to,” thought Jackson. “Stacey did well at the planning stage, but she isn’t able to define concrete deliverables so people can make progress. I sure like Stacey and I want her to be successful. I need to do something to put things back on track.” Jackson started by coaching Stacey, meeting with her three times a week and giving her more direction. Still, the project wasn’t turning around.

Jackson sat down and had a long talk with Stacey. It wasn’t an easy conversation for either of them. Jackson realized that he wouldn’t be doing Stacey any favors by keeping her on in a position that was turning out to be a poor fit. Stacey moved into a role where she was more comfortable, and Jackson took over management of the project.

On the face of it, both Martha and Jackson faced similar problems—an important project that wasn’t going as they wanted. And Martha and Jackson were each aware of the gap between the desired state and the current reality.

The difference was that Martha became wrapped up in her fears about what the situation might mean for her career, and her beliefs about failure. With all that emotion swirling around, there wasn’t much room left for her to think clearly about what to do. Jackson, on the other hand, looked at the facts as just that: facts—information about the difference between the current state and what he wanted. Does this mean we should suppress our emotions? No, as managers, we need to learn how to manage our own emotional state, so we can focus on solving the problem.

The current situation can seem “bad” when things are not going the way we hoped they would. But really, the situation just is. The ability to “face the truth” and take effective action rests on the ability to be in a mental state where our emotions and fears aren’t running us. And managers like Jackson have learned to face the current situation as neither good nor bad—it just is what it is. From that perspective, we can gauge where we are in relation to where we want to be, and take action to close the gap.

This column originally appeared in STQE magazine, December 2001.

Three States in Problem Solving

“Nothing is more dangerous than an idea, when it’s the only one you have.”

Emile-Auguste Chartier

There are three states in problem solving.

  • Not enough ideas
  • Too many ideas
  • Just the right number of ideas

In the first case (stuck) the task is to generate ideas.

In the second case (stuck in churn) the task is to prune the number of ideas.

In the third case (just the right number), the task is to test and refine the ideas, then implement and adjust.

Stuck in Neutral

Fixing the Quick Fix