Friday, October 16, 2009

My comments on customer surveys seem to have struck a nerve

Just to set the record straight, let me say again that Customer Surveys are an important ingredient of any Customer Satisfaction or Customer Loyalty program. That said, I think customer surveys do have significant gaps, and failing to consider or address those gaps could send you and your company down the wrong path.

Based on the amount of e-mail I have received, which has been quite polarized (nearly 50-50 for and against), it is time I provided more context and specifics for my generalization. Just to be perfectly clear, I respect and admire those who administer customer surveys. In fact, I am one of your biggest fans. Unfortunately, many times your company is not giving you the resources to make your insights more valuable so that you can have a stronger impact on the bottom line. In other words, I am on your side.

These gaps are the result of limitations or constraints in the survey or the supporting mechanisms. These limitations include, but are not limited to:

  • Cost of the survey, the analysis, and the action plan
  • Timing Delays: Experience to survey and survey to reporting
  • Unintended filtering by the way questions are worded and captured
  • Context missing from the survey results
  • Ability to connect to pre- and post-customer behavior from the experience
  • Selection bias by agent
  • Follow-up actions by the company and with the customer

Costs

Reaching a critical mass and collecting a relevant sample of surveys requires a lot of time and money. One company that performs customer surveys for its clients charges $42 per completed survey. With low response rates, it can take 1-2 hours of outbound calls to capture a handful of surveys. Companies have different plans and frequencies for data collection, but you are looking at $20,000+ just for 5000 surveys. Once those surveys are complete, report compilation and analysis require more time, and often this is not a full-time job but rather an ad hoc collateral duty. As a result, one often looks at trends of aggregated data that aren't actionable. This penny-wise mindset leads to a lackadaisical, less than comprehensive approach that fails to provide the full potential insights that can build improvements that the customer values.
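As a rough illustration, the cost structure above can be sketched as simple arithmetic. The $42-per-completed-survey fee is from the example above; the analyst hours and hourly rate in this sketch are assumed figures, not numbers from any real program.

```python
def survey_program_cost(completed_surveys, cost_per_completed=42.0,
                        analyst_hours=40, analyst_rate=50.0):
    """Rough program cost: collection fees plus ad hoc analysis time.

    cost_per_completed is the per-survey fee cited in the text;
    analyst_hours and analyst_rate are assumed for this sketch.
    """
    collection = completed_surveys * cost_per_completed
    analysis = analyst_hours * analyst_rate
    return collection + analysis

print(survey_program_cost(1000))  # 1000 * 42 + 40 * 50 = 44000.0
```

Even this toy model makes the point: collection fees dominate, so skimping on the (comparatively cheap) analysis step is exactly the penny-wise trap described above.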

Timing Delays: Experience to survey and survey to reporting

The process behind customer surveys introduces timing delays that can destroy data integrity and value.

Experience to Survey: Imagine a customer who calls in on the 9th for one reason and has a good experience, has a less than desirable experience on the 10th for another reason, and then receives a survey on the 11th regarding the experience on the 9th. Does the customer know the survey refers to the call on the 9th? How can the experience on the 10th not influence the rating for the 9th?

Survey to Reporting: Survey results are often compiled and reported monthly. If a survey from an experience on the 1st is collected on the 3rd but isn't reported until the Monthly Business Review that occurs on the 12th business day of the following month, you are looking at feedback that is about 45 days old. This highlights an opportunity to improve the process by which surveys are synthesized and acted upon, by shortening cycle times and speeding report dissemination. The information within surveys should reach the hands of call center managers and the respective product managers as soon as possible after the survey has been completed.
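The 45-day figure can be checked with a short date calculation. The specific months below are hypothetical (chosen to match the post's timeframe), and the helper ignores public holidays; it is a sketch of the arithmetic, not a scheduling tool.

```python
from datetime import date, timedelta

def nth_business_day(year, month, n):
    """Date of the n-th weekday (Mon-Fri) of a month; holidays ignored."""
    d = date(year, month, 1)
    count = 0
    while True:
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            count += 1
            if count == n:
                return d
        d += timedelta(days=1)

experience = date(2009, 9, 1)             # experience on the 1st
review = nth_business_day(2009, 10, 12)   # 12th business day of the next month
print((review - experience).days)         # 45 days between experience and review
```

By the time the Monthly Business Review sees the number, a month and a half of customer behavior has already happened.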

Filtering

Filtering comes in many disguises. It can come from the way a question is worded to "lead the witness". It can also result from the way data is collected. For example, I spent some time analyzing the process behind outbound Net Promoter collection for a major telecom company. I discovered that the outsourcer left 36% of the customer feedback on the cutting room floor because of constraints in the data collection. That data would have been useful for performing correlations, detecting cause and effect, and spotting issues that create other issues.

Filtering also happens through reporting where data is over-aggregated so that it fits on a PowerPoint slide. A monthly scorecard could show a Net Promoter score of +2 one month, and +4 the next month, but there is typically little information on why it changed, which takes us to our next point.
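To make the over-aggregation point concrete, here is a sketch of the standard Net Promoter calculation (promoters rate 9-10, detractors 0-6 on a 0-10 scale). The two months of ratings are invented for illustration; they show that the same headline score can come from very different response mixes.

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Invented data: two very different months with identical headline scores
month_a = [10] * 30 + [8] * 44 + [3] * 26   # 30% promoters, 26% detractors
month_b = [9] * 42 + [7] * 20 + [5] * 38    # 42% promoters, 38% detractors
print(net_promoter_score(month_a), net_promoter_score(month_b))  # 4 4
```

Both months score +4 on the scorecard slide, yet one has far more detractors to investigate; the aggregate alone tells you nothing about why.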

Context

Context is an important ingredient in understanding your customer and the results of your survey. There will be times when a customer gives a negative rating because the customer was already lost or may not be worth salvaging. Conversely, there may be times when a customer gives you a positive rating only because an agent did not follow policies. There will be other times when a customer is "satisfied" but not necessarily "loyal". A customer may have thought the agent was professional and said "no" gracefully, but that doesn't resolve their frustration with your processes, policies, and product. Having that context is critical to making improvements that the customer values.

Context can also be important in terms of length of the call, time on hold, number of transfers, and the number of times a customer had to repeat information. Context is also critical in terms of reporting and your subsequent action plan.

Customer Behavior - Before and After

Any survey, Net Promoter score, or other customer satisfaction or loyalty activity that doesn't include customer behavior analysis is missing the mark if you can't look at account-level detail and see how customers responded to an experience in terms of spending, frequency, and share of wallet. Unfortunately, this happens more often than not. The ability to find the accounts and the experience quickly (most likely with speech analytics that has the metadata available) would expedite the customer behavior analysis and facilitate greater use of this approach.

Selection bias by the Agent

When agents have the ability to direct a customer to an IVR survey, they have a tendency to drive positive selection; in other words, they select customers who are likely to provide positive feedback. At one company we found that agents selected customers for a post-call survey after their request was fulfilled 89% of the time (compared to 67% of the time for the control group). This can lead you to pat yourself on the back when it is unwarranted.
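One way to check whether a gap like 89% vs. 67% reflects real selection bias rather than noise is a two-proportion z-test. The rates are from the example above; the sample sizes per group are assumptions made purely for this sketch.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two proportions (pooled SE)."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# 89% agent-selected vs. 67% control; 500 surveys per group is assumed
z = two_proportion_z(0.89, 500, 0.67, 500)
print(round(z, 1))  # ~8.4, far above the ~1.96 threshold for significance
```

At anything like these sample sizes, the gap is overwhelmingly unlikely to be chance: the agents really are cherry-picking.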

After Action

If a customer takes the time to answer a survey, hopefully your company is committed to (i) acting on constructive feedback, and (ii) acknowledging the customer's feedback. If the company is not in a position to do both, then senior management should question its commitment to the survey program. The after action is really the most important part: it is the end to which the survey is the means.

Addressing these gaps can transform the line of bearing into a fix.

Companies put a lot of time and expense into gathering intelligence on their customers' psyche. However, a gap remains between the company's perception and the customer's reality, and that gap can often be exacerbated by customer surveys. The more one knows about the customer, the more likely their $10 choice can be the result of a company's $10 million decision. The more one knows about how the customer thinks and about their experiences, the more confident companies will be in those decisions.
