The Net Promoter Score (NPS) has gained a great deal of recognition in recent years because of its simplicity. However, it is critical for CEOs to understand some of the pitfalls of the NPS before deciding to use it.

The NPS is based on a single question: how likely are you to recommend your service provider to others? Customers are segmented based on their responses to this recommend question. Those who rate it a 9 or 10 (on a 10-point scale) are defined as Promoters, while those who rate the advocacy question from 1 to 6 are categorized as Detractors. The NPS is the percentage of Promoters minus the percentage of Detractors.
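
Here is a minimal sketch of the calculation in Python; the sample ratings are invented for illustration:

```python
def nps(ratings):
    """Compute the Net Promoter Score from a list of 1-10 ratings.

    Promoters rate 9 or 10; Detractors rate 1 through 6; those who
    rate 7 or 8 (Passives) count toward the total but toward neither group.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# A hypothetical sample of ten survey responses:
print(nps([10, 9, 9, 8, 7, 7, 6, 5, 9, 10]))  # 30.0
```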

The advocacy question ("The Ultimate Question", as labelled by Fred Reichheld) has been a part of every self-respecting customer satisfaction survey for decades, alongside questions such as intention to continue and share of wallet. And rightly so. However, Reichheld asks organizations to measure only the advocacy question and then use it to compute the NPS. Here are some important pieces of information that CEOs must understand.

  1. While advocacy is a good thing to measure, it is not the only thing to measure. Research shows that there are many manifestations of loyalty (advocacy being one of them), and each of them impacts shareholder value in a distinct way.
  2. Would any CEO measure the performance of his or her company on just one financial aspect? It is highly unlikely. The CEO will want to look at cash flows, operating margins, return on investment and several other items before judging how good or bad the firm's performance has been. Why then should the same CEO come to conclusions about customers based on just one question? As noted above, there are many aspects of loyalty besides advocacy that impact profitability, such as intention to continue, complaints, the number of services used, and share of wallet.
  3. One might argue that a person who strongly advocates will rate the other loyalty aspects highly too. However, that argument does not hold. There are customers who may recommend a service provider but not continue with it. The reasons could be many; "the service is not good for me but may be good for someone else" is one possible reason. Such a customer will churn away from the service provider but still recommend it, which means the organization's cost of acquisition actually goes up! Similar complications exist between recommend and share of wallet and other manifestations of loyalty. It may also be argued the other way: why not focus on intention to continue, or on value for money? Those who rate these aspects positively are also likely to recommend!
  4. There is also the problem of the artificial definitions of Promoters, Passives and Detractors. Segmenting customers into Promoters (those rating 9 or 10) and Detractors (those rating 1 through 6) is quite arbitrary. Why is a customer who gives an 8 on advocacy categorized as a Passive and not a Promoter? A person who rates a "1" on advocacy is treated the same as one who rates a "6", yet the two are quite different in the intensity of their "likely to recommend" feeling. There is no real scientific basis for these categorizations.
  5. There are other issues relating to these arbitrary categorizations as well. For example, an NPS of 20 could consist of either 60% Promoters and 40% Detractors, or 30% Promoters and 10% Detractors. Which is better? The NPS cannot answer that question; it treats both situations as the same, yet they are very different. In the former case (60% Promoters and 40% Detractors) the issue could be indiscriminate selling or inconsistent service; in the latter (30% Promoters and 10% Detractors) it could be low involvement or a lack of excitement in the service. Organizations taking decisions based on the NPS alone could end up creating inappropriate products and services that do not improve customer satisfaction at all. (The sketch after this list illustrates the ambiguity numerically.)
  6. One aspect of advocacy or recommendation that has not been researched adequately but is important is the cost of recommendation. When asked to recommend the purchase of a nuclear reactor, for example, the responses are likely to be very thoughtful and considered: for the person recommending, the cost of recommending the wrong manufacturer would be very high in terms of credibility and trust. On the other hand, the cost of recommending a grocery store or mobile service is quite low, and the decision to recommend is therefore far more casual. This aspect is completely absent from the NPS.
  7. In a recent study in the financial services space, we found that on the advocacy attribute ("how likely are you to recommend the service"), more than 80% said they would recommend the service. We then asked the same customers whether they had actually recommended the service to anyone in the past three months. Less than 10% said yes. It is far easier to say "I will recommend" than to actually recommend a product or service. This is often true of the other measures of loyalty as well, and it is one of the key arguments database marketers use against loyalty survey research. It is also possible that very few people actually go out and seek advice unless the product or service is a high-ticket item, a very high-involvement category, or one where the risk of buying is high. Given this gap between intended and actual behaviour, and the fact that there are many manifestations of loyalty, organizations should focus on measuring the strength of the relationship.
  8. We are often concerned about the NPS for small brands. Unless the brand's products and services are significantly different (the iPhone at the time of its launch, for instance), one is unlikely to see a significant correlation between a high NPS and market share or market growth. This is primarily because larger brands are likely to have far more Promoters in absolute terms than smaller brands, even when the smaller brand's NPS is much higher. One simple example, shown below, makes the point (assuming for simplicity that neither brand has Detractors, so the Promoter percentage equals the NPS):

| Brand | Market share | NPS | Promoters per 100 customers in the market |
|-------|--------------|-----|-------------------------------------------|
| A | 5% | 50 | 2.5 |
| B | 50% | 30 | 15 |

Brand A is likely to do very well only when there are significant paradigm shifts or when the NPS for the larger brands turns negative.
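
The claims in points 5 and 8 are easy to check numerically. Here is a short Python sketch using the invented figures from the examples above; the simplifying no-Detractor assumption for point 8 is ours:

```python
# Point 5: two very different customer mixes yield the same NPS.
mix_a = {"promoters": 60, "detractors": 40}   # percentages of customers
mix_b = {"promoters": 30, "detractors": 10}
for mix in (mix_a, mix_b):
    print(mix, "-> NPS =", mix["promoters"] - mix["detractors"])
# Both lines print NPS = 20, yet the two mixes call for very
# different remedies.

# Point 8: Promoters per 100 customers in the overall market,
# assuming no Detractors (so the Promoter percentage equals the NPS).
for brand, share_pct, nps_score in (("A", 5, 50), ("B", 50, 30)):
    promoters = share_pct * nps_score / 100
    print(f"Brand {brand}: {promoters} Promoters per 100 market customers")
# Brand A: 2.5, Brand B: 15.0 -- the larger brand dominates on
# sheer numbers despite its lower NPS.
```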


Here is what Professor Claes Fornell – the world's leading authority on customer satisfaction measurement and customer asset management (and the person behind the American Customer Satisfaction Index) – says in his book The Satisfied Customer: Winners and Losers in the Battle for Buyer Preference:

“… aside from the fallacy of assuming that such recommendations will occur regardless of how satisfied customers are (very few dissatisfied customers recommend products they are unhappy with), this has led to foolish measurement practices. What’s done is usually something like the following: First calculate the percentage of respondents who say that they are very likely to recommend a given product (say, those who score a nine or ten). Next, take those that score very low (say, three to one) and calculate their percentage of the total. Now, you have the percentage of people who are very likely to recommend your product and the percentage of people who are not. Then take the difference between these two percentages. If that number is positive you have more customers who are likely to recommend your product than customers who aren’t.

What’s wrong with this? At first glance, it might sound reasonable. The problem has to do with how the numbers are assigned: A perfectly good scale is ruined to the point that it generates very little useful information. A competent measurement methodology looks to minimize error. But here, the opposite is done. Instead of getting precision, random noise is produced. From a single scale, we have not only converted something continuous to something binary, but we have done it three times (percent of customers likely to recommend, percent of customers not likely to do so and the difference between them). Each time, we have created a new estimate. All estimates contain error. Going from a continuous scale to a binary one introduces even more error.

If that’s not enough, taking the difference between the two estimates with error leads to exponentially greater error. In the end, we have produced a large amount of random noise, but very little information. When it comes to looking at changes over time, we further compound the problem. For each time period comparison, there are now six estimates and the final calculation is the percentage difference of customers that are likely to recommend. I have seen published reports sold for several thousands of dollars in which almost all the reported change is due to random noise. For managers, it’s bad enough to chase the numbers they can’t affect, but to chase randomly moving targets can do a great deal of harm to individual and company performance.

Yet, it is not uncommon to find approaches of this kind. General Electric and Microsoft have both used some variant of them. It’s not that these companies do not have competent statisticians or market researchers, but such decisions are often made at an organizational level where even rudimentary knowledge of measurement properties is slim.”
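
Fornell's compounding-error argument is easy to demonstrate with a small simulation. The sketch below is ours, under invented assumptions (an unchanged underlying satisfaction distribution, 500 respondents per survey wave, ratings rounded and clamped to a 1-10 scale); it shows the dichotomised difference score wobbling from wave to wave while the plain mean of the same ratings barely moves:

```python
import random

def survey_wave(n=500, mu=7.5, sigma=1.8):
    """One wave of n ratings drawn from the SAME underlying
    distribution, rounded and clamped to the 1-10 scale.
    Returns the wave's NPS and its mean rating."""
    ratings = [min(10, max(1, round(random.gauss(mu, sigma)))) for _ in range(n)]
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n, sum(ratings) / n

random.seed(1)
waves = [survey_wave() for _ in range(5)]
# The customer base never changes between waves, yet the NPS
# (a difference of two dichotomised estimates) drifts far more,
# relative to its scale, than the mean of the very same ratings:
print("NPS by wave: ", [round(nps) for nps, _ in waves])
print("Mean by wave:", [round(mean, 2) for _, mean in waves])
```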

Note: If you found this article interesting, you may also find “The Little Book of Big Customer Satisfaction Measurement” interesting. All royalties from the book go to charity.