CSAT, NPS, and CES: 3 Easy Ways to Measure Customer Experience (CX)

Aug 5, 2016

Have you ever wondered how to measure a customer’s experience? I’ve thought about it, specifically about how to measure the experience with services from government agencies. This is a complex topic because government services can be vastly different from each other: they range from issuing fishing and hunting permits and business licenses to administering Social Security benefits, unemployment insurance, job training, and food inspection, to providing medical and mental health services to veterans.


Honestly, I was overwhelmed. Luckily, I got in touch with Kelly J. Ohaver, Customer Experience Manager for the City of Centennial, Colorado. This introductory blog spells out three customer experience metrics that Kelly shared with me, along with some examples and graphs.

Customer Satisfaction (CSAT)

Let’s start with the simplest and most well-known metric: customer satisfaction—or CSAT. It’s a measurement that looks at how well a product or service experience meets a customer’s expectation. A customer satisfaction survey can help you determine your customers’ overall level of satisfaction.

For example, one of the CSAT survey questions the City of Centennial uses is, “How satisfied were you with the service you received?” Survey respondents are then asked to rate their experience on a scale of 1 to 5, where 1 equals ‘Very Satisfied’ and 5 equals ‘Very Dissatisfied.’ CSAT is expressed as a percentage ranging from 1% to 100%, where 1% is the lowest level of customer satisfaction and 100% is the highest.
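
To make the arithmetic concrete, here’s a minimal Python sketch of one common way to turn those 1-to-5 ratings into a CSAT percentage: count the share of respondents who chose one of the top two ratings (Very Satisfied or Satisfied). The top-two-box convention is an assumption on my part for illustration, not necessarily the exact formula the City uses.

```python
def csat_score(responses):
    # CSAT sketch: share of respondents who chose 'Very Satisfied' (1) or
    # 'Satisfied' (2) on the 1-5 scale above, expressed as a percentage.
    # Top-two-box is a common convention, assumed here for illustration.
    satisfied = sum(1 for r in responses if r in (1, 2))
    return 100 * satisfied / len(responses)

# Six example responses on the 1-5 scale: four of them are 1s or 2s
print(csat_score([1, 2, 2, 3, 5, 1]))  # ~66.7%
```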


Net Promoter Score (NPS)

Net Promoter Score (NPS) is another way to measure a customer’s experience. This metric builds on the idea of word of mouth because it focuses on whether someone would recommend a service to a friend or colleague. A recommended service indicates a positive experience, and vice versa. The metric spans the whole journey: from when a customer first identified a need for a service, through every service interaction, to the customer’s perception of the overall experience.

NPS is determined by a single calculation based on the question, “How likely is it that you would recommend [brand/product/service] to a friend or colleague?” (the City of Centennial tailored its question as, “How likely is it that you would recommend the service(s) provided to you by the City of Centennial to a friend or colleague?”). Respondents answer on a scale of 0 to 10, and the calculation results in a score ranging from -100 to +100.

Responses are categorized as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

Here’s a chart of how to calculate NPS; subtract the percent of Detractors from the percent of Promoters to determine the Net Promoter Score (note: if you have more Detractors than Promoters, it will result in a negative score):

Chart shows that the percent of Promoters minus the percent of Detractors equals the Net Promoter Score.
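
Here’s a short Python sketch of that calculation, assuming the responses are the 0-to-10 answers to the “likelihood to recommend” question:

```python
def nps_score(responses):
    # NPS sketch: percent Promoters (9-10) minus percent Detractors (0-6).
    # Passives (7-8) count toward the total but toward neither group.
    total = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / total

# Ten example responses: 40% Promoters - 20% Detractors = +20
print(nps_score([10, 9, 9, 10, 8, 7, 7, 5, 6, 8]))
```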

Customer Effort Score (CES)

Customer Effort Score (CES) is the last metric I found, and it’s powerful because it focuses on one specific attribute or feature of a service. The rationale is that effort drives loyalty: if service providers can reduce the effort a customer has to expend to get a service, they’ll build customer loyalty, and loyal customers are engaged customers who will return. The following is an example of CES in action.

The City of Centennial uses two questions to measure and analyze CES. The first is, “To what extent do you agree with the following statement: The City of Centennial made it easy for me to handle my issue,” rated on a scale of 1 (Strongly Agree) to 5 (Strongly Disagree). The follow-up question is, “What specific technologies, business processes, and/or employee behaviors made doing business with the City easy or difficult?”

Here’s a chart to express CES; subtract the percent of those who disagree with the statement from the percent of those who agree to determine the Customer Effort Score:

Chart shows that the percent of Agree minus the percent of Disagree equals the Customer Effort Score.
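
Here’s a matching Python sketch, assuming responses on the 1-to-5 scale above; treating 1-2 as “agree” and 4-5 as “disagree” (with 3 as neutral) is my assumption for illustration:

```python
def ces_score(responses):
    # CES sketch: percent who agree minus percent who disagree with the
    # "made it easy" statement (1 = Strongly Agree, 5 = Strongly Disagree).
    # Grouping 1-2 as agree and 4-5 as disagree is assumed for illustration;
    # neutral responses (3) count toward the total only.
    total = len(responses)
    agree = sum(1 for r in responses if r <= 2)
    disagree = sum(1 for r in responses if r >= 4)
    return 100 * (agree - disagree) / total

# Eight example responses: 62.5% agree - 25% disagree = 37.5
print(ces_score([1, 1, 2, 2, 3, 4, 5, 2]))
```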

My discussion with Kelly is just a first foray into this topic. Clearly, there’s a lot more to learn about customer experience metrics and how government agencies use them to measure a customer’s experience. As a follow-up to this blog, I’ll examine how agencies apply these metrics in real life and share my findings. Stay tuned.