Estimating hard-to-measure benefits

Last week, I wrote a post about decisions that look only at easy-to-count costs and ignore hard-to-count benefits.

Here’s one method for estimating hard-to-count benefits, subjective impact analysis:

1. Identify the proposed course of action.
2. Determine what’s important to the person who makes the decision.
3. Ask that person who they consider credible sources related to the issue.
4. Create a short interview protocol.
5. Interview the people the decision maker identified as credible.
6. Summarize and present the results.

Here’s part of a subjective impact analysis I did for a client a billion years ago (suitably scrubbed).

The client was a VP in an IT operation. The business was livid about the production problems and outages that followed when the IT group installed new functionality.

When I looked at the data about outages, it was pretty clear that the worst problems came from the hand-off between the development folks and operations folks. I poked around, and found that the two groups weren’t working together.

So I suggested a little experiment: a “readiness review” prior to installing a change, where the development people and the operations people would sit down together and walk through the installation. And I created a half-page checklist so the development people would actually write down critical information for the ops folks.

Then we tried the experiment. We held a “readiness review” for a smallish update to the production systems. The meeting took about 45 minutes.

After the “readiness review,” the IT managers told the VP they were concerned about adding another meeting that took away from development time. The developers complained about the burdensome documentation (half a page).

The costs were obvious.

The benefits were not so obvious.

So I did a subjective impact analysis to show the benefits (if there were any).

Here’s what the VP valued:
• Avoiding production problems
• Improving the application
• Taking a proactive stance

I asked a cross-section of the people who participated in the review these questions:

1. What gaps and issues came up in the review that could have caused problems had they been discovered in production?

• What was the biggest problem discovered in the review?
• On a scale of 1 – 10 (1 = negligible impact, 10 = disaster), how big would that problem have been?

2. What was the team able to do to improve the application because of the review?

3. What problems did the team fix before they hit in production?

Then I summarized the data and presented it to the sponsor. Here’s the data from the first question.

Problems Discovered in the Readiness Review
(1 = negligible impact, 10 = disaster)

10 ****
9 *
8 ***
7 **
6 *
5 *
4 **
3
2 **
1

(This represents each person’s assessment of only the biggest problem, so it’s a subset of the problems. For the most part, the development folks identified lower impact problems, the ops folks higher impact problems. Very interesting…)
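If you wanted to build a tally like the one above programmatically, here’s a minimal Python sketch. The ratings list is reconstructed from the star counts in the table (the original individual responses aren’t in the post), so treat it as illustrative:

```python
from collections import Counter

# Severity ratings (1 = negligible impact, 10 = disaster), one per
# interviewee -- reconstructed to match the tallies in the table above.
ratings = [10, 10, 10, 10, 9, 8, 8, 8, 7, 7, 6, 5, 4, 4, 2, 2]

counts = Counter(ratings)

# Print the histogram from most to least severe, one star per response.
for score in range(10, 0, -1):
    print(f"{score:>2} {'*' * counts[score]}")
```

The same tally works just as well in a spreadsheet; the point is only that a dozen-ish one-line answers are enough to make the benefits visible at a glance.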

This little data table helped people see the benefits of the readiness review, as well as the costs.

(Obviously, there were lots of other issues with this organization. This was a starting point to bring the level of conflict down enough for people to look at underlying problems.)

I did this subjective impact analysis after the fact, but you could easily adapt it to estimate benefits before making a decision.