Why “The One Number You Need To Grow” is one number you should probably avoid

In 2003, Frederick Reichheld published a Harvard Business Review article titled “The One Number You Need to Grow.”  Reichheld’s article described a method for computing a simple, easy-to-understand customer satisfaction metric called the Net Promoter Score, and it ushered in a flavor-of-the-month management practice that has left a bad taste in the mouths of academics and serious marketing researchers for half a decade.

Net Promoter Score is calculated by asking customers a single question: “On a scale of 0 to 10, how likely is it that you would recommend our company to a friend or colleague?”  Customers are categorized into three groups: Promoters (rating a 9 or 10), Passives (rating a 7 or 8), and Detractors (rating 0 through 6). The percentage of Detractors is then subtracted from the percentage of Promoters to obtain an organization’s Net Promoter Score.  That is, NPS = % Promoters – % Detractors, reflecting the margin by which Promoters outnumber Detractors; hence, “Net Promoter.”
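
For concreteness, here’s a quick sketch of that calculation in Python; the ratings below are invented purely for illustration, not real survey data:

```python
# Minimal sketch of the NPS calculation described above.
# The ratings list is made-up illustrative data.

def net_promoter_score(ratings):
    """Compute NPS from 0-10 likelihood-to-recommend ratings."""
    promoters = sum(1 for r in ratings if r >= 9)   # 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # 0 through 6
    # Passives (7 or 8) count toward the total but contribute nothing either way.
    return 100.0 * (promoters - detractors) / len(ratings)

# 4 Promoters, 3 Passives, 3 Detractors: NPS = 40% - 30% = 10
ratings = [10, 9, 9, 10, 8, 7, 7, 6, 3, 5]
print(net_promoter_score(ratings))  # 10.0
```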

NPS has a number of practical and statistical problems, making it more valid for some industries than for others.  For the online dating industry in particular, Net Promoter completely fails to measure what it’s supposed to measure.  Here’s why:

  • Social stigma sometimes prevents online dating customers from telling their friends anything at all. Recall that the NPS question is, “How likely is it that you would recommend us to a friend or colleague?”  Because of that stigma, you are likely counting a significant number of would-be Promoters as Passives or Detractors: some customers would happily recommend your service, IF they were willing to admit they use it.  But they’re not.
  • Customer satisfaction with subscription services varies over time. Subscribers are more likely to recommend an online dating service early in the subscription (when there are still plenty of fish in the sea) than later, after they’ve exhausted the profile inventory.  Ultimately this leads management to manipulate the timing of the survey in order to maximize Net Promoter Score.  (Smart management will keep moving the survey closer to the beginning of the subscription, thus showing steady improvement in NPS.)
  • Research shows that Net Promoter Score has the lowest predictive ability among commonly used measures of customer satisfaction. Some measures simply accomplish their purpose better than others do.  Net Promoter Score is meant to give managers an operational customer-satisfaction metric, and quite simply it’s the worst one on the lot.
  • NPS isn’t actionable. Proponents of Net Promoter argue that its simplicity outweighs any concerns about its validity or precision (an argument that, ironically, lacks face validity).  The reality is this: whether your Net Promoter Score is 15%, 50%, or 75%, NPS doesn’t tell you what to do to move the needle. Other customer satisfaction scales provide the (handy) advantage of indicating the appropriate lever to pull in order to get results.
  • NPS is difficult to compare across business units. Net Promoter Scores can vary widely from industry to industry based on the factors discussed above — making NPS a fundamentally flawed cross-unit operating metric.  I have seen companies with Net Promoter Scores higher than 70 and others with scores lower than -40.  (Yes, negative 40.)  All of these companies were customer-focused organizations; it just so happens that the distribution of NPS ratings can look very, very different across industries.
  • Collapsing variance only reduces the amount of available information, period. Downgrading a measure from a higher level of measurement to a lower one can only leave you with less information than you started with.  In the case of NPS, collapsing an 11-point scale into three categories and then subtracting the percentage of Detractors from the percentage of Promoters throws away useful information (see the sketch after this list).  This is something of a technical point, but an important one.
  • Détente. In conditions where the users providing Net Promoter ratings interact directly with the agents judged on that measure, agents can negotiate for higher ratings.  As an example: in a previous position I was tasked with centralizing the Analytics role into a shared service.  My supervisor at the time informed me that the group would be judged by its performance on a Net Promoter metric.  I happily agreed, pointing out that I could simply tell each requestor that unless they agreed beforehand to rate our centralized Analytics service a 9 or 10, we wouldn’t do the work at all: under NPS arithmetic, a 7 or 8 counts for nothing.

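To make the information-loss point concrete, here is a small, purely hypothetical sketch showing two very different customer bases that collapse to the identical NPS:

```python
# Sketch of the information-loss point: two very different rating
# distributions (both invented) produce exactly the same NPS.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

group_a = [10] * 5 + [0] * 5  # polarized: half love you, half are furious
group_b = [8] * 10            # uniformly lukewarm: nobody loves or hates you

print(net_promoter_score(group_a))  # 0.0
print(net_promoter_score(group_b))  # 0.0 -- same score, very different customers
```

The raw distribution (or even a simple mean and standard deviation) would distinguish these two situations immediately; the single net score cannot.
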
I hope this helps make the case that Net Promoter Score is a fundamentally flawed metric whose simplicity brings a host of problems along with it.  In particular, because of social stigma and the fact that it’s a subscription business, NPS is a particularly bad operational metric for the online dating industry.  My recommendation? Don’t catch “Anyone Can Do That” syndrome; instead, consult a professional.
