Tuesday, February 10, 2009

What is the proper way to measure callback rate?

There are many different ways to measure callback rate... probably as many as there are companies that measure it.

From my experience as a consultant, no two companies have measured it the same way, whether because of differences in data cleansing (or the lack of it) or in the time window within which a callback "counts".

So which way is the right way? I think each company should tailor the metric to its business, but many are not doing this properly.

First and foremost, callback rate should be measured continuously over time rather than at a single cutoff. Knowing what the callback rate is on day 1, 2, 3, or 35 can surface unique business intelligence that can be acted upon. What I have seen from my experience is that customers call back on days 1-3 for one reason, on days 5-15 for another, and on days 30-35 for quite another. Picking one particular day as a snapshot does a disservice to all of these groups.
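To make that "continuous" view concrete, here is a minimal sketch in Python of how a day-by-day callback-rate curve could be computed from a raw call log. The record layout, the sample data, and the callback_rate_by_day helper are all my own illustration, not any particular vendor's reporting logic.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical contact records: (customer_id, contact_time).
contacts = [
    ("c1", datetime(2009, 2, 1, 9, 30)),
    ("c1", datetime(2009, 2, 3, 14, 0)),   # callback on day 2
    ("c2", datetime(2009, 2, 1, 11, 0)),
    ("c2", datetime(2009, 2, 20, 10, 0)),  # callback on day 18
    ("c3", datetime(2009, 2, 2, 16, 0)),   # no callback
]

def callback_rate_by_day(contacts, max_days=35):
    """Return {day: cumulative callback rate} for days 1..max_days,
    treating each customer's earliest contact as the initial call."""
    by_customer = defaultdict(list)
    for cust, ts in contacts:
        by_customer[cust].append(ts)

    initial_calls = 0
    days_to_first_callback = []
    for times in by_customer.values():
        times.sort()
        initial_calls += 1
        if len(times) > 1:
            days_to_first_callback.append((times[1] - times[0]).days)

    rates = {}
    for day in range(1, max_days + 1):
        called_back = sum(1 for d in days_to_first_callback if d <= day)
        rates[day] = called_back / initial_calls
    return rates

rates = callback_rate_by_day(contacts)
print(rates[3], rates[35])  # e.g. rate within 3 days vs. within 35 days
```

Plotting the full curve, rather than picking one day, is what lets the day 1-3, 5-15, and 30-35 populations show up as separate bumps.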

Secondly, data cleansing is the only way you can ensure that a callback is being counted for the same issue rather than a new one. This proves to be a challenging task, and it is often not worth the time investment to do continuously. Therefore, I would recommend that this exercise be done once or twice a year.
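As a toy illustration of that cleansing step, the sketch below assumes each call already carries an issue or disposition code and simply compares the callback's code with the original call's. Real cleansing is far messier (free-text notes, miscoded dispositions), and the record layout and classify_callbacks helper here are hypothetical.

```python
# Classify each callback as "same issue" or "new issue" by comparing the
# issue code logged on the original call with the code on the callback.

records = [
    # (customer_id, call_index, issue_code)
    ("c1", 0, "billing_dispute"),
    ("c1", 1, "billing_dispute"),   # same issue -> a true repeat contact
    ("c2", 0, "password_reset"),
    ("c2", 1, "shipping_delay"),    # different issue -> arguably not a callback
]

def classify_callbacks(records):
    first_issue = {}
    results = []
    for cust, idx, issue in sorted(records, key=lambda r: (r[0], r[1])):
        if idx == 0:
            first_issue[cust] = issue
        else:
            same = issue == first_issue.get(cust)
            results.append((cust, idx, "same issue" if same else "new issue"))
    return results

print(classify_callbacks(records))
```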

Thursday, February 5, 2009

Corporate Executive Board introduces Customer Effort Score!

In 2008, Corporate Executive Board trademarked "Customer Effort Score" based on their research. Some of the findings included:

  • 80% of CEB's members used Customer Satisfaction Surveys as their primary customer experience metric.
  • 12% of CEB's members used Net Promoter.
  • 8% used something else - usually a large number of metrics.

Corporate Executive Board (CEB) then went further to examine how well these metrics predicted repurchase and increased spend.

  • CSAT scores were found to be the least predictive and introduced a large number of false positives and false negatives. On repurchase, 20% of satisfied customers would not repurchase, while 28% of unsatisfied customers would. Likewise, 45% of satisfied customers would not spend more, but 11% of unsatisfied customers would spend more.
  • Similarly with Net Promoter Score (NPS), 14% of promoters would not repurchase, while 19% of detractors would. Additionally, 27% of promoters would not spend more and 7% of detractors would spend more.

CEB then looked for a common denominator that could explain these misses and determined that the level of effort a customer must put into the experience clearly predicted future behavior, both repurchase and increased spend. Specifically, 94% of customers who reported a low level of effort said they would repurchase from those companies, and 88% of customers who reported a low level of effort said they would increase their spending because of that perceived simplicity.

CEB went on to construct the measurement for customer effort with an objective and a subjective component. The objective component measured callbacks within 14 days: the analysis showed that 70% of customers who made 2 or 3 calls rated their effort as "moderate to high", while only 30% of those who made a single call gave that rating. The subjective component was a customer survey.
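The subjective survey piece is CEB's own instrument, but the objective piece is mechanically simple. Below is a rough sketch, assuming a call log keyed by customer, of counting calls that fall within 14 days of the first contact; flagging two or more calls as a high-effort candidate is my reading of the finding above, not CEB's published scoring.

```python
from datetime import datetime, timedelta

# Hypothetical call log: customer_id -> list of call timestamps.
call_log = {
    "c1": [datetime(2009, 2, 1), datetime(2009, 2, 9)],
    "c2": [datetime(2009, 2, 1)],
    "c3": [datetime(2009, 2, 1), datetime(2009, 2, 4), datetime(2009, 2, 12)],
}

def calls_within_window(timestamps, window_days=14):
    """Count calls that fall within `window_days` of the first call."""
    first = min(timestamps)
    cutoff = first + timedelta(days=window_days)
    return sum(1 for t in timestamps if t <= cutoff)

for cust, times in call_log.items():
    n = calls_within_window(times)
    flag = "high-effort candidate" if n >= 2 else "low effort"
    print(cust, n, flag)
```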

Companies that can track customer effort at the customer, issue, and agent level are much better positioned to solve for customer effort.

I would highly recommend getting hold of CEB's research on the Customer Effort Score, including how the subjective component is derived.

Tuesday, February 3, 2009

Customer Satisfaction/Net Promoter Surveys - Post-Survey Execution is the most important part!

From my personal experience reaching out to several companies, many do not have a good plan for acting on the results once the surveys come back.

Survey Aggregation: I have seen on many occasions, especially with Net Promoter, that results are reported as a single silver-bullet number. The number fluctuates from month to month, and companies scramble to explain why it changed without even checking whether the change is statistically significant. A quick significance check is sketched below.
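Here is a small sketch of one common approach to that check: treat each respondent as +1 (promoter), 0 (passive), or -1 (detractor), estimate a standard error for each month's NPS, and compare the month-over-month change to that error with a z-test. The survey counts are made up, and this is just one reasonable test, not the only one.

```python
import math

def nps_and_se(promoters, passives, detractors):
    """Point estimate and standard error of NPS, scoring each respondent
    as +1 (promoter), 0 (passive), or -1 (detractor)."""
    n = promoters + passives + detractors
    p_prom, p_det = promoters / n, detractors / n
    nps = p_prom - p_det
    var = p_prom + p_det - nps ** 2        # variance of the +1/0/-1 score
    return nps, math.sqrt(var / n)

# Hypothetical survey returns for two consecutive months.
nps1, se1 = nps_and_se(promoters=220, passives=150, detractors=130)
nps2, se2 = nps_and_se(promoters=240, passives=140, detractors=120)

diff = nps2 - nps1
z = diff / math.sqrt(se1 ** 2 + se2 ** 2)
print(f"NPS moved {diff * 100:+.1f} points, z = {z:.2f}")
print("Statistically significant at 95%" if abs(z) > 1.96 else "Likely noise")
```

With these made-up numbers the score moves six points yet the z-statistic stays well under 1.96, which is exactly the kind of month-to-month wobble that gets over-explained in business reviews.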

Time lapse: It is downright humorous to see companies collect their numbers throughout the month and then, even when the numbers are ready on the 1st or 2nd, not summarize them until a Monthly Business Report typically held 12 or more business days into the month.

"Leading the witness": The surveys usually ask questions that lead to filtering, either by the company or the customer. For example, companies will not ask an open ended question that will quickly define the experience, instead they will ask about what they feel defines the experience upfront. Ideally, they should allow the customer to begin without any boundaries and then coach and lead based on their initial response to determine what the independent variables are that defined the experience.

Best Practices: I have seen two companies that are ahead of the rest. More on this later.

Customer Satisfaction/Net Promoter Scales - 4,5 or 10 point?

I’d like to share an excerpt from the white paper “American Customer Satisfaction Index (ACSI) Methodology” written by Russ Merz, Ph.D., Research Director at ForeSee Results.
---------------------------------------------------------
"A common basis for recommending 5-point scales often rests on the assumed inability of people to reliably discriminate more than 5 levels on a scale, where offering more than 5 levels would introduce error into the measurement and offer weaker correlations and lower explanatory power. Research has clearly shown that people can handle more than 5 pieces of information at one time, particularly depending on their experience in a given area and ability.

A 10-point scale is within the capabilities of most people with little experience, and in areas of professional expertise people are able to and will make much finer distinctions.

Because customer satisfaction data is negatively skewed (customers less frequently use the lower ends of scales), a 5-point scale is really closer to a 3-point scale, and a 10-point scale behaves more like a 7-point scale. Since most customers don’t really use the lower ends of scales (values 1 and 2 on a 5-point scale) and mostly use values 3, 4, and 5, a 5-point scale offers little opportunity to differentiate positive responses. This negative skewness introduces error into the measurement process and a loss of critical, meaningful information compared with a 10-point scale.

Societal norms and the fact that customers typically “like” companies they do business with tend to limit the number of customers who use the very lower ends of response scales. In most cases, if a customer is so completely dissatisfied as to have the need to use the lower ends of the scale, they will leave and stop doing business with the company. As a result, the 5-point scale effectively turns into a 2- or 3-point scale due to limited response at values 1 and 2.

This “compression effect” also militates against the common assumption that 5-point scales offer a mid-point that can be considered as the “average response”, a characteristic not present in 10-point scales. The mid-point argument is only valid if respondents use, or at least contemplate, all points of the scale, and as discussed above, they do not, and responses are consequently negatively skewed.

The use of 10-point scales significantly enhances the information that is transmitted in the surveying process.

There is one area in which 10-point scales are not appropriate relative to 5-point scales – that is when there is a desire to label each response point within the scale (e.g. 1=poor, 2=not so good, 3=satisfactory, 4=good, 5=outstanding). There are several arguments for not attaching labels to response categories, most notably
1) added error due to violation of the interval/ratio data assumption, where it can no longer be assumed that the distance between 1 and 2 is the same as the distance between 2 and 3, and so forth, and
2) respondent burden and increased questionnaire length."

The four-point scale's strongest selling point is that with no "middle ground" (#3 on a five-point scale), you force your customers into telling you whether you were good or bad. The five-point scale's strength, on the other hand, is that if someone is feeling complacent about the service they received, they can let you know with that #3 (or "neither satisfied nor dissatisfied"). The ten-point scale allows your customers to give you a wider distribution, so you could perhaps see the needle move in more subtle ways over time, but I've heard that some analysts do not like it because it is so broad that some customers start picking one extreme or the other just to get through the survey. Another possible criticism is that while two customers might both tell you they were "mostly satisfied" on a four-point scale, one might give you a six and the other an eight on a ten-point scale; by broadening the choices, you may actually have less opportunity to predict consumer behavior or build effective models using multiple regression techniques.

One of the drawbacks of a four- or five-point scale is that some customers will never give the highest score, which adds a bias and makes it hard to distinguish a truly great customer experience. Those same customers may give you a "9" on a ten-point scale, from which you can take away that you delivered a valuable customer experience.
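As a back-of-the-envelope illustration of the "compression effect" described in the excerpt, the toy simulation below draws a negatively skewed satisfaction distribution, maps it onto a 5-point and a 10-point scale, and counts how many scale values it takes to cover 95% of responses. The distribution shape and the 95% cutoff are arbitrary choices of mine, purely for illustration.

```python
import collections
import random

random.seed(1)

# Simulate a negatively skewed "true" satisfaction score on [0, 1]:
# most customers cluster near the top, few at the bottom.
latent = [max(0.0, min(1.0, 1 - random.expovariate(6))) for _ in range(10_000)]

def to_scale(x, points):
    """Map a latent score in [0, 1] to a 1..points rating."""
    return min(points, int(x * points) + 1)

for points in (5, 10):
    ratings = [to_scale(x, points) for x in latent]
    counts = collections.Counter(ratings)
    # How many scale values does it take to cover 95% of responses?
    covered, used = 0, 0
    for value, c in counts.most_common():
        covered += c
        used += 1
        if covered >= 0.95 * len(ratings):
            break
    print(f"{points}-point scale: {used} values cover 95% of responses")
```

Under this assumed distribution the 5-point scale effectively collapses to about three usable values while the 10-point scale retains five or six, which is the gist of the white paper's argument.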

For other white papers from ForeSee Results, please visit
http://www.foreseeresults.com/White_Papers.html