
Measuring the next core service: customer service

How to develop measures and metrics for customer service


In a previous article about measuring basic administration, I explained how a former client once called me to help him realign his measurements and metrics. The processes we established together a few years earlier met his needs when implemented, but they became dangerously malformed over time. We started the effort by analyzing and addressing the problems with the infrastructure team and then moved on to the development team.

In order to ensure the problems did not occur again, my client and I decided to use broad measurement categories linked to specific metrics. These categories covered everything from budgetary compliance to quality assurance. We also implemented a method to measure the deformation caused by the metrics. This allowed us to make corrections as the business changed, rather than waiting for a disaster.

My client did not sit idly by during this process. He worked with me in the evenings, laying out his vision and choosing among dozens of measurements and hundreds of metrics. He also exerted tremendous influence on his team. The sudden influx of so many "soft" metrics played havoc with team morale. He called in years of favors to keep things moving forward while we proved our case.

Meanwhile, oblivious to his sacrifice, I set out to work on our most challenging task to date: developing measures and metrics for customer service.

Customer service's "current state"
My client asked me to start by sorting through a mountain of survey data the company collected each quarter. According to the "surveys," the customer service organization hovered somewhere between getting lynched and being burned at the stake on any given day. Customers consistently rated the CS organization as "unreliable, unfriendly, and unresponsive." Most respondents supported either outsourcing the organization or replacing it entirely.

Despite this, their call closure rate looked acceptable. Data gathered at the end of a call indicated high satisfaction with the service. Problems escalated quickly, but tended to bog down once they hit the infrastructure or development organization. Resolution of anything beyond tier-two support took an average of two weeks.

Customers consistently blamed customer service for all of the stability problems. They did not understand how a jurisdictional disagreement between two people could keep a system down for over a week. It did not matter to them whether the fault lay with an unauthorized transport or with a network card's bad memory chip. They knew when systems were "down" and that the people they called for help could do nothing for them.

The customer service team felt they were already doing the job of both development and infrastructure. Like many such organizations, they also believed they deserved a chance to "advance" into the rest of the IT organization. Their frustration with the lack of professional opportunities also translated into sullen behavior off the phones.

Selecting measurements
This initial assessment did not provide my client and me with a clear direction. We needed to somehow address the frustration felt by all sides. We also needed to recognize the achievements of customer service without feeding the "we already do their jobs" mentality. Finally, we needed to unravel the customer satisfaction metrics.

We settled on the following measurements: budget, customer satisfaction, follow-through, and professional development. These four measurements would, we hoped, guide us toward a greater understanding of the situation. We thought at the time, and later knew for sure, that we might have to change them as we learned more.

Designing metrics for the budget required a certain amount of hand waving. We assumed the current budget, including an unfilled FTE slot, met the company's current service requirements. If the company expanded in terms of people, systems, or IT tools, the CS budget needed to expand in step. If the company raised or lowered the service requirement, it had a corresponding effect on the budgetary goal.
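To make the hand waving a little more concrete, here is a minimal Python sketch of how such a scaled budget target might be computed. The function, the baseline figures, and the service_level_factor are my own illustrative assumptions, not values from the engagement described here.

# Hypothetical sketch of a budget metric that scales with company size and
# service requirements. All figures and weights are illustrative only.

def budget_target(baseline_budget, baseline_headcount, current_headcount,
                  baseline_systems, current_systems, service_level_factor=1.0):
    """Return an expected CS budget, scaled from an agreed baseline."""
    headcount_growth = current_headcount / baseline_headcount
    systems_growth = current_systems / baseline_systems
    # Average the growth drivers, then adjust for a raised or lowered
    # service requirement (e.g., 1.1 for a tighter SLA, 0.9 for a relaxed one).
    growth = (headcount_growth + systems_growth) / 2
    return baseline_budget * growth * service_level_factor

# Example: staff grew from 400 to 500 and systems from 120 to 150,
# with the service requirement unchanged.
print(budget_target(800_000, 400, 500, 120, 150))  # -> 1000000.0

In practice the growth drivers and their weights would come out of the baseline agreement itself rather than a fifty-fifty average; the point is only that the target moves with the business instead of staying frozen.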

We scrapped the current customer satisfaction measurements. The company that designed the original survey instruments used leading questions, divergent metrics, and a very unclear focus. To replace those instruments, I hauled out my college textbooks on survey design. A week of work later, we had two linked instruments: one applied immediately after the call and one randomly administered every quarter. Since they used the same language, we could make an item-by-item comparison. This allowed us to clearly spot sentiment drift over time.
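As a rough illustration of that item-by-item comparison, the following Python sketch flags questions whose quarterly score falls well below the immediate post-call score. The question keys, scores, and drift threshold are invented for the example.

# Hypothetical sketch: compare per-item averages from the post-call survey
# with the quarterly survey to flag sentiment drift. Keys, scores, and the
# threshold are illustrative.

post_call = {"responsiveness": 4.2, "friendliness": 4.0, "reliability": 3.9}
quarterly = {"responsiveness": 3.1, "friendliness": 3.8, "reliability": 2.7}

DRIFT_THRESHOLD = 0.5  # flag items that fall this far below the post-call score

def sentiment_drift(post_call, quarterly, threshold=DRIFT_THRESHOLD):
    """Return items where quarterly sentiment has dropped below the post-call score."""
    flagged = {}
    for item, immediate_score in post_call.items():
        drop = immediate_score - quarterly.get(item, immediate_score)
        if drop >= threshold:
            flagged[item] = round(drop, 2)
    return flagged

print(sentiment_drift(post_call, quarterly))
# -> {'responsiveness': 1.1, 'reliability': 1.2}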

We defined follow-through as the ability of customer service to communicate with their customers about unresolved issues. The customer service manager accepted responsibility for getting updates on all tickets sent to people outside his organization. Customer service personnel accepted responsibility for contacting customers and letting them know what was going on. Both could fail this metric only if they failed to communicate: lack of response from the infrastructure or development team did not count against their measurement.
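A minimal sketch of how such a follow-through score might be computed follows, assuming each escalated ticket records whether CS gave the customer a status update; the field names are hypothetical. Note that the upstream team's responsiveness is deliberately ignored, mirroring the rule above.

# Hypothetical sketch of the follow-through metric: CS is scored only on
# whether it contacted the customer about each escalated ticket, not on
# whether the other teams ever responded. Field names are illustrative.

tickets = [
    {"id": 101, "escalated": True,  "customer_contacted": True,  "upstream_responded": False},
    {"id": 102, "escalated": True,  "customer_contacted": False, "upstream_responded": False},
    {"id": 103, "escalated": True,  "customer_contacted": True,  "upstream_responded": True},
    {"id": 104, "escalated": False, "customer_contacted": False, "upstream_responded": False},
]

def follow_through_rate(tickets):
    """Share of escalated tickets where CS gave the customer a status update."""
    escalated = [t for t in tickets if t["escalated"]]
    if not escalated:
        return 1.0  # nothing to follow up on counts as full marks
    contacted = sum(1 for t in escalated if t["customer_contacted"])
    return contacted / len(escalated)

print(follow_through_rate(tickets))  # -> 0.666..., regardless of upstream_responded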

Note that in the above case, we created a dissonance between CS, development, and infrastructure. We measured CS based on follow-through with the customers, but did not penalize the other groups if they failed to respond. This kind of dissonance is very common in management environments. Fortunately, we knew going in that this could happen. On the first iteration, we "solved" it by adding a communications metric to the infrastructure and development teams' capacity and growth measurements, respectively.

Finally, we turned our attention to professional development. We decided to measure CS on the development of both operational procedure and technical knowledge, and certifications counted toward both. We also instituted an effort to train CS in the ITIL methodology. As they learned the method, we welcomed their input on how to improve the process. Each procedural change put into practice increased the value of the metric; rejected procedural ideas caused no change. This allowed us to harness their institutional knowledge, and it gave them a clear sense of control over their environment.
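One way to operationalize that rule, sketched in Python with weights of my own choosing, is to score only adopted changes and completed certifications while leaving rejected ideas neutral:

# Hypothetical sketch of the professional-development metric: adopted
# procedural changes and completed certifications raise the score, while
# rejected proposals leave it unchanged. Weights are illustrative.

def professional_development_score(adopted_changes, rejected_changes,
                                   certifications, change_weight=2, cert_weight=3):
    """Score rises with adopted process improvements and certifications;
    rejected ideas are tracked but deliberately not penalized."""
    _ = rejected_changes  # kept for context, intentionally unscored
    return adopted_changes * change_weight + certifications * cert_weight

# Example: five adopted procedure changes, three rejected ideas, two ITIL certifications.
print(professional_development_score(5, 3, 2))  # -> 16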

Moving forward
At the end of this process, my client and I had working metrics for the three primary services provided by most IT organizations. Now we needed to gather data and ensure the measurements had the effect we wanted.

During the data-gathering phase, my client asked me to go back and address specific service areas in greater depth. In particular, he wanted clearer metrics for disaster planning, e-mail administration, security, and server administration. These four areas spanned the three core services, creating the potential for highly malformed measurements and intradepartmental conflict.

What problems do you see with the described measures and metrics in your own organization? How would you change them to meet your own needs? What else could you measure to create similar effects? Post your thoughts in the discussion area following this article, or send us some mail at itmanager@techrepublic.com.
