The team began by thoroughly reviewing the existing program, which had been outsourced for many years at a cost of $1 million a year. Everyone had always assumed it was a good program that provided robust data, so it had never been challenged or investigated, and many important decisions had been based upon it. Our assessment was that his current program did not meet the company’s information needs at any level.
Starting with his clear and well-defined objectives, we designed a new program that could be run by his internal staff. The results were immediately apparent. His new knowledge management program provided information that was broader in scope and deeper in detail, much more accurate, much more useful, and was ongoing instead of only once a year. With continuous data he was able to customize reports to his liking – monthly, YTD, Rolling 12s, year over year, etc. At any time, he could add topical questions to the survey and get immediate feedback on hot button issues.
The cost of the new program? $300,000 a year. Executives were able to make better decisions about more things, use what they were able to learn to increase customer loyalty, and save $700,000 a year ever after.
When leaders get interested enough to start asking the right questions of the right people, remarkable things happen.
One thing we know for certain about satisfaction is that it is defined in many different ways.
If we are going to measure satisfaction, we will first need to create a working definition so a single meaning is shared by all. Shall we define it as the fulfillment of our wishes or as the pleasure derived from said gratification? Is it the agreeable feeling that shows up when we get something we want? Or is it delight, contentment, peace of mind, a sense of relief? It might even be peptides which activate the body’s opiate receptors, causing an analgesic effect.
Joan Giese and Joseph Cote wrote an article for the Academy of Marketing Science.
In it, their review of the existing literature showed the most pervasive problem about satisfaction is that no one agrees on how to define it.
One definition we should avoid is this one from a marketing textbook:
“Satisfaction is defined as a consumer’s responsive process subsequent to a particular consumption experience which is evolved through a discrepancy between some form of pre-experience performance standard with the actual performance of the product as perceived by the consumer.” Sounds like something that came out of a bad focus group, doesn’t it?
Some think it is important to treat satisfaction as an absolute measurement while others insist that real satisfaction is measured relative to what our expectations are.
If satisfaction is relative, this casts a very different light on things. As an equation, the outcome now depends on the interactive effect of two variables: what I expected would happen and what actually did happen. Have you noticed how what you expect will happen is a great deal less than what you fervently wish would happen?
When we treat satisfaction as relative, it is best expressed as the gap between expectations and reality. If I expect A-level service and get B-level service, I am not satisfied. If I expect C and get B, I am.
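The relative view can be sketched as a toy model. The letter-grade scale and the comparison rule below are my own illustration of the gap idea, not a standard formula:

```python
# Toy model of satisfaction-as-a-gap: letter grades mapped to numbers.
# The grade scale and comparison rule are illustrative assumptions.
GRADE = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def satisfied(expected, actual):
    """Satisfaction depends on the gap, not on the service level itself."""
    return GRADE[actual] >= GRADE[expected]

print(satisfied("A", "B"))  # False: expected A-level service, got B
print(satisfied("C", "B"))  # True: expected C, got B
```

Note that the same B-level service produces opposite results depending only on what was expected, which is the whole point of the relative view.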
If I gave you a $20 bill, would you be satisfied or not?
If you had forgotten loaning it to me, you would be quite satisfied. If I owed you $100 and told you I couldn’t pay the other $80, you would be unhappy. This argument says there is no satisfaction inherent in the $20 bill.
It is easy to see that the lower our expectations, the easier they are to exceed, and thus, the higher our satisfaction. And the higher our expectations, the harder they are to meet, never mind surpass. A philosopher friend says satisfaction is easy to achieve: expect the worst from everyone we meet and we will rarely be disappointed.
So is satisfaction a cognitive construct or an emotional one?
People like to take sides on this, lining up in one camp or another against the heretics. Some see satisfaction as a mental process involving knowledge and reason. Others see it as an instinctive state deriving from circumstances. Each measures satisfaction according to what they believe to be a crucial distinction.
I found it is more useful to think of satisfaction as having both cognitive and emotional components:
“I think <these things> and so I feel <this way>.”
Satisfied with what?
Salesperson, purchase experience, how the product works, support? The appropriate focus of satisfaction varies by product and service type and by context. People’s interpretations vary widely, too.
Those who understand the real nitty-gritty of questionnaire design know that the meaning of all items varies according to the other information in the questionnaire and to the research context. Satisfaction has got a shelf life, too, so mind the expiry date.
Most companies measure satisfaction with a single point of contact about a single incident.
This fails to take into account how our thoughts and feelings change over time as we use the product more and its strengths and weaknesses are revealed to us. How many times have you been over the moon when your shiny new thing arrived, but the damn thing fell apart a week later?
Most satisfaction research tries to measure several things.
Say for example we measure the extent to which people are satisfied with price, performance, value, ease of doing business, shipping costs, and tech support.
What if a company’s representative solved my problem, but was rude about it? For you, the rudeness was so offensive that it overrode the solution. For me, I was so happy my problem was solved that I didn’t care how rude they were. Some concepts are trickier than others.
All concepts are not valued equally by customers.
Price might be my primary concern, which means I don’t care much about the warm and fuzzy stuff. Price is the very last concern of big spenders who want the star treatment and are willing to pay for it. Murray! Two down front!
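One way to picture unequal weighting is a simple weighted average. The ratings, attributes, and weights below are invented for illustration; they only show how two customers rating the identical experience can land in very different places:

```python
# Illustrative only: two customers rate the same experience identically
# but weight the attributes differently, so overall satisfaction diverges.
ratings = {"price": 9, "performance": 6, "support": 4}

def overall(ratings, weights):
    """Weighted-average satisfaction; weights are assumed to sum to 1."""
    return sum(ratings[k] * w for k, w in weights.items())

bargain_hunter = {"price": 0.7, "performance": 0.2, "support": 0.1}
big_spender   = {"price": 0.1, "performance": 0.3, "support": 0.6}

print(overall(ratings, bargain_hunter))  # about 7.9 -- quite satisfied
print(overall(ratings, big_spender))     # about 5.1 -- not so much
```

Same experience, same ratings, very different verdicts. An overall score that ignores the weights hides this entirely.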
When do you measure satisfaction?
Like anything else, the farther away we are from the incident, the less complete the recall. What did you have for lunch eighty-three days ago? Before we overhauled their program, one company measured satisfaction as much as twelve months after the customer’s purchase experience.
Maybe you believe that satisfaction with a purchase is only one small step in some more comprehensive satisfaction Gestalt. If you want to know how customers like using your product, give them enough time to get a good feel for it. Fit the schedule to the experience.
How the Great Depression set the stage for customer satisfaction.
For ten long, lean years after the worst economic downturn in world history, money was tight and goods were in short supply. Because people were willing to take whatever they could get, many businesses copped a like-it-or-lump-it attitude that lasted for decades. If you know a little automobile history, you know that American carmakers cared little about product quality and even less about customer satisfaction (“Any color so long as it’s black”). In 1970, only a million imported cars were registered in the United States. Last year imports accounted for $200 billion in sales.
In his article, The Rise of Customer Satisfaction Research, Ray Poynter says the 1980s brought management consultants touting customer satisfaction as the new enlightened path to success. Early adopter companies grabbed the best seats on the satisfaction bandwagon.
The next big thing was the American Customer Satisfaction Index.
In 1994 the National Quality Research Center at the University of Michigan launched the ACSI. They did some very heavy lifting when they conducted nationwide telephone interviews with more than 80,000 consumers. Using econometric modeling techniques, they produced customer satisfaction scores for more than 200 companies in 43 industries operating in ten economic sectors. The publicity surrounding the NQRC’s ACSI prodded more companies to do their own satisfaction research.
And then along came Fred Reichheld.
This brilliant man has a set of credentials longer than your arm and mine put together: honors graduate of Harvard’s MBA program, Bain partner, founder of Bain’s Loyalty Practice, author, one of the world’s leading authorities on business loyalty, and a frequent speaker at major forums and to groups of senior executives.
Reichheld felt traditional customer satisfaction surveys were too complicated and provided too little value, so he began a two-year study of Enterprise Rent-A-Car’s satisfaction research. In his HBR article, “The One Number You Need to Grow,” he reported that the number of customers who recommended their rental car company to a friend or colleague correlated directly with that company’s growth.
The idea was a big hit and companies everywhere threw out their old satisfaction research and adopted the Net Promoter Score. Business executives loved it because it was fast, cheap, easy, and famous.
NPS may be a popular tool but it has some very serious shortcomings.
The Net Promoter Score:
- Does not apply to all products and services. Lost in the sixteen years since he said it is this caveat Reichheld gave HBR: “Although the ‘would recommend’ question generally proved to be the most effective in determining loyalty and predicting growth, that wasn’t the case in every single industry.” Generally. Not in every industry.
- Does not tell us why people gave the scores they did. What is a 9 and why is it night and day better than an 8? And why does 6=0?
- Does not tell us where to improve. Acknowledging the validity of this particular criticism, some NPS advocates have added this open-ended question: “What can we do to raise our score by one point?”
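The scoring scheme behind these criticisms is easy to lay out. In the standard NPS calculation, 9s and 10s are promoters, 7s and 8s are passives, and everything from 0 through 6 is a detractor, so a 6 really does count the same as a 0:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 'would recommend' ratings.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, so a 6 counts exactly like a 0,
    and a 7 or 8 counts as nothing at all.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Ten customers spread one point apart produce a starkly negative score.
print(nps([9, 9, 8, 8, 7, 7, 6, 6, 5, 5]))  # 2 promoters - 4 detractors -> -20.0
```

Notice that shifting a single customer from 6 to 7, or from 8 to 9, moves the score by ten points, while a shift from 5 to 6 or from 9 to 10 moves it not at all. The cut points do a lot of silent work.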
How is the Net Promoter Score like the Myers-Briggs Type Indicator?
- Prepackaged solutions that are easily administered, scored, and analyzed by people with little or no special training.
- All-or-nothing measurements with no middle ground.
Businesses use them because they’re valuable and they’re valuable because businesses use them.
This circular logic is a perpetual motion machine. Popularity is interpreted as an indication of accuracy and confused with utility, which leads to wider use and less inclination to question the underpinnings – ad nauseam.
Four professors at MIT Sloan were skeptical of the legendary metric.
Timothy Keiningham, Lerzan Aksoy, Bruce Cooil, and Tor Andreassen set out to find the answers to three questions:
- Does the Net Promoter metric really predict loyalty?
- Does the NPS really link to growth?
- Is it really better than other commonly used metrics?
After a two-year-long study of 8,000 customers they concluded:
- NPS explained customer behavior in no more than 20% of the cases. Ouch! Even non-statisticians know this means 80% of the explanation comes from anything but the NPS.
- The best predictor of share of spending was past share of spending, not intent to recommend.
- The best predictor of retention was intent to repurchase, not intent to recommend.
The authors take care to point out they cannot imagine any scenario where NPS would be called the superior metric and further, they do not believe there is any “silver bullet.”
Where do I stand on NPS?
I believe it opened the door to thinking about simpler measurements while also revealing the folly of trying to get too cute. Just how many of us do you think would like to take a one-question final exam? Show of hands here, how many want to go to a one-question job interview?
Satisfaction and CRM.
Customer Relationship Management began as a management philosophy that believed the best way for a company to achieve its goals was by satisfying customers’ wants, needs, and expectations.
In a few organizations, it referred to the written principles, practices, and guidelines they follow when interacting with their customers.
It wasn’t long before CRM’s meaning shifted even further, coming to describe the systems companies use to manage interactions with customers. The notion of a philosophy or principles got shoved aside by marketers and disappeared entirely.
Old hands and wags call it Customer Relationship Manipulation.
- Search CRM now and you’ll find it has become the common denominator of dozens of automated sales tools.
- Companies use CRM primarily as a selling tool, often with automated pitches via email, social media, and direct mail. You see it every day – an offer especially for you (not really).
- Marketing-focused organizations have repeatedly made the mistake of placing customer satisfaction research under CRM. Marketing-led research invariably produces abnormally high satisfaction scores.
Many companies are satisfied if their customers are satisficed.
Satisfice is a portmanteau word (breakfast + lunch = brunch) that combines satisfy and suffice. Herbert Simon, Nobel Prize winner and one of the founding fathers of artificial intelligence, coined the term to describe the process of searching through the available alternatives until we find one that is good enough.
The Economist says Simon maintained that individuals cannot get access to all the information needed to maximize the benefits of taking a particular course of action. There is far too much to assimilate and far too much to digest. Even if they could collect it all, he said, they would be unable to process it properly due to human cognitive limits. So they settle for something that is “good enough.”
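Simon's search process can be sketched in a few lines. The airline names and scores below are hypothetical; the point is only that a satisficer stops at the first option clearing a threshold, where a maximizer would insist on examining every option for the best:

```python
def satisfice(options, good_enough):
    """Return the first option whose score meets the threshold.

    This is Simon's satisficing: stop searching at the first alternative
    that is good enough, rather than scanning everything for the optimum.
    Option names and scores here are illustrative assumptions.
    """
    for name, score in options:
        if score >= good_enough:
            return name
    return None  # nothing cleared the bar

flights = [("Airline A", 55), ("Airline B", 72), ("Airline C", 90)]
print(satisfice(flights, good_enough=70))  # Airline B -- C is better, but B sufficed
```

A maximizer would have kept looking and found Airline C; the satisficer never sees it, and by Simon's account, never needed to.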
If your company believes your best customers are those who are just settling for your products, then you shouldn’t be spending one nickel on Customer Satisfaction studies.
Advocates of satisficing point to the millions of airline passengers who endure long lines, cramped seats, and generally shabby treatment because the low fares are enough to tip the scales. One wonders why airlines even bother with customer satisfaction studies that say passengers hate just about everything associated with airline travel. Unless, of course, they can brag about how their NPS is not as bad as yours, and when did that become something to be proud of?
How closely are satisfaction and attitude related?
So much so that it is often difficult to distinguish between the two. Satisfaction is usually related to a single incident. Attitude is an accrued set of experiences that leads to the formation of a belief, mindset, or sentiment towards a company, a product, or a service.
It would be difficult to prove that measures for satisfaction and attitude are two distinctly different and entirely unrelated behavioral constructs. If I recently had a satisfying experience, won’t I like your company more, not less?
The more you know about satisfaction research, the more you know that there is little or no evidence of how satisfaction scores are related to the bottom line.
Very few companies bother to measure the ROI because they can’t find an easy way to figure out what actually works.
When we ask for the list of factors companies use to measure customer satisfaction, we get things just like those on your company’s satisfaction surveys.
When we ask to see the data that indicate the extent to which each contributes (or doesn’t) to their scores, we get bupkis.
When I first learned the number one predictor of customer satisfaction with insurance companies is how they handled my claim, it made very good sense to me.
When I asked what the number one predictor is for people who don’t have any claims, no one knew.
Are you getting this kind of information from your research gatekeepers? Nine out of ten businesses are not, and they should be.
If you are never able to look at satisfaction the same way again, I will be satisficed. If you ask me to come and help you figure out how much you can count on the purported accuracy of the information you’re getting, I will be delighted.
June 10th to 15th, David is giving away his newest book, Take a Closer Look, Volume 2, on Amazon.