Re-reading the article in 2010, it seems that there’s still a place for such a satisfaction survey. Even if you are delivering incrementally, it’s useful to get a sense of trends in the response to your product over time.
It’s 10 a.m. You’re about to ship to five beta sites. You’ve met the date, you’re within budget, and the defect counts have been steadily declining for the last four weeks. Still, you’re a little nervous. How will the customers
react to this new release?
Carrie was unhappy to learn during beta testing that the product her team had been building for insurance underwriting reps had drifted from what the users had wanted.
“It’s too slow,” the beta test group said. “It takes forever to pull up the records when I have a customer on the phone!”
“But in test it performed within the parameters in the requirements,” Carrie protested. “What changed?”
Just a small design tradeoff that affected the default display order of the records. It had seemed like a reasonable decision at the time.
If you’re a project manager, you know how important it is to have visibility into the product and the project. It’s bad news to find out in a beta test that you have built a product that doesn’t meet your customer’s expectations. How can you tell if you’re still on track to meet customer expectations?
Agile methods are one option. The agile school advocates close customer contact and frequent deliverables that are directly tied to business value. Still, many projects follow more traditional models: the customers (or their surrogates) are involved up front in defining features and then come back in during testing. A lot can happen during the time in between. If you’re not using an agile method and don’t have the luxury of having a customer assigned full-time to your team, how can you stay in touch with what the customer expects? One option is to measure customer satisfaction as you design and build the product.
Take a page from marketing and use a survey to catch a glimpse of customer reactions before the product ships. During development, ask the people who are closest to the product—sponsors, designers, and customers (or surrogates)—how they feel about the project.
You probably identified a set of system attributes during initial project definition. Suppose your customers identified these four qualities that were most important to them: easy to use, fast, always available, and secure. Of course, these descriptions are too vague to be helpful in actually building a system. You probably negotiated more specific targets for the product as a whole and for specific functions within the product. But for the purpose of a satisfaction survey, these more general attributes are about the right level. For each attribute, create a bipolar scale, with the desired attribute on one end of the scale and its opposite at the other. For example: the opposite of “fast” is “slow”; the opposite of “easy to use” is “difficult to use.” Here’s what a simple survey might look like:
How do you rate the WidgetMaster system at this time?
|Difficult to use| |Easy to use|
|Slow| |Fast|
|Low availability| |High availability|
|Insecure| |Secure|
What pleases you about the system at this time?
What displeases you about the system?
Then ask the cross-section of people you've chosen to fill in the survey, rating the version of the system currently under development. Be sure to leave plenty of room for comments at the bottom of the survey.
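If you collect the ratings electronically, tallying them is trivial. Here is a minimal sketch of what that might look like; the 1-to-5 scale, the sample ratings, and the `summarize` helper are all assumptions for illustration, not part of the survey above:

```python
from statistics import mean, stdev

# Hypothetical responses on a 1-5 bipolar scale, where 5 is the
# desired end of the scale (e.g. "easy to use"). Each list holds
# one rating per respondent.
responses = {
    "easy to use":      [4, 5, 2, 4, 5],
    "fast":             [3, 3, 4, 3, 3],
    "always available": [5, 4, 5, 5, 4],
    "secure":           [4, 4, 4, 5, 4],
}

def summarize(responses, spread_threshold=1.0):
    """Average each attribute and flag any whose ratings vary widely --
    wide variation can mean one stakeholder group is unhappy."""
    report = {}
    for attribute, ratings in responses.items():
        spread = stdev(ratings) if len(ratings) > 1 else 0.0
        report[attribute] = {
            "average": round(mean(ratings), 2),
            "spread": round(spread, 2),
            "follow_up": spread > spread_threshold,
        }
    return report

for attribute, stats in summarize(responses).items():
    flag = "  <- follow up" if stats["follow_up"] else ""
    print(f"{attribute:18} avg {stats['average']:.2f} "
          f"spread {stats['spread']:.2f}{flag}")
```

The spread threshold is a judgment call: the point is simply to surface attributes where respondents disagree sharply, so you know where to go ask questions in person.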
You won’t obtain scientifically precise data from your survey. However, you can find out how people feel about the project, and that ties directly to their expectations for the system.
When you first start using a survey, the comments section will provide the best clues. You may also want to investigate whether the ratings vary widely. Big variations may be a sign that one stakeholder or group is unhappy with the direction the system is going. Follow up; you may learn something valuable.

Jodi used a survey as a window on satisfaction for an application she was building for customer service reps. She was surprised when the average ratings on "ease of use" took a dive, and she decided to investigate. It turned out that the customers had recently reviewed the screen and dialogue designs for the customer record retrieval function.
Supervisors were happy with the screen navigation because it was directive—helpful for new employees who weren’t familiar with the system. For them, it meant fewer questions and fewer errors from new customer service reps. Experienced customer service reps weren’t as happy. They were forced to respond to prompts that they no longer needed. For them the detailed navigation was a burden. They wanted to be able to take shortcuts and skip through prompts for functions that were second nature to them.
Armed with this information, Jodi was able to implement an alternate navigation scheme that better met the needs of experienced customer service agents. On the next survey, satisfaction was back up.
Best of all, Jodi was able to discover the dissatisfaction long before the system went into beta testing, when it would have been difficult to make the changes the agents wanted.
You can’t always “fix” the cause of dissatisfaction…but you can begin to manage expectations so that neither you nor the customer has an unpleasant surprise on delivery day.
Why wait until beta to discover how your customers will react to the system? Finding out how they are reacting to the work of building the system can give you valuable clues to help steer your project toward meeting expectations. This simple tool can help you develop visibility into customer satisfaction.
This column originally appeared on Stickyminds.com in 2002.