“There are three kinds of lies: lies, damned lies, and metrics.” – Scott Lowe, with assistance from Benjamin Disraeli (1804-1881)
If your company is like most, metrics matter (maybe too much, sometimes!). Many organizations use metrics to measure productivity and to make sure that their investment in people and systems is contributing to the bottom line. However, the metrics an organization selects say a lot about it and scream (loudly) to employees about what is and isn’t important. The very metrics that you choose to measure performance can be used against you in a number of different ways.
Metrics management is far from being an IT-only problem. In fact, every area of the organization should make an effort to choose metrics that will promote the kinds of activity and behaviors that are desired and that help an organization to meet its goals. Failure to do so dooms more than just IT.
We’ve all heard the horror stories about call centers that track handlers’ time on the phone so closely that the real outcome – honest customer satisfaction – looks like an afterthought. After all, if you’re working with someone whose sole goal is getting you off the phone as quickly as possible, the chances are higher that you’ll be given incomplete or incorrect information that requires a follow-up call to correct. Although that employee’s average call times might be stellar, additional metrics – eventual customer satisfaction, the customer’s need to re-contact – may tell a different story. For example, if the data shows that every contact handled by that call center staffer required the customer to initiate a second contact, that is neither efficient nor good customer service. Worse, by focusing on that single metric, the employee is being actively encouraged to do the bare minimum necessary to get the person off the phone.
I use this example because it’s so simplistic; in reality, many call centers have their statistics and metrics down to a science and know what they’re doing.
In IT, that’s not always the case. The department is always under pressure to move in many different directions at the same time. There is, and always will be, the need to provide first-class support while, at the same time, IT staffers are being pushed to handle more value-add project work. Interrupt-driven support work is a productivity killer for project work: every interruption forces the employee to stop and refocus, leading to a fractured thought process and high levels of frustration.
Why do I mention this? If you use simplistic metrics alone to measure your staff performance, you run the risk of reducing their ability and willingness to focus on project work. After all, if your only metric is total number of support calls handled and that’s the primary metric by which staff members are evaluated, they’re going to focus on handling as many easy calls as humanly possible. Again, you get what you measure.
On the other hand, if you use a pure outcomes-based metric – such as customer satisfaction – you may have a person who handles one call per month and does it really, really well. The best metrics are ones that balance inputs and outputs. Inputs: How much time is something taking? How many requests are being handled? Outputs: How satisfied is the customer with the end result? How many follow-up calls does the customer have to make to get a situation resolved?
At the same time, some kind of metric around project work should be developed as well.
I’m looking at this from a small/medium sized IT department standpoint in which staff members have to wear both support and project hats. In these situations, I make the following specific recommendations for helping to define appropriate metrics:
One, determine the percentage of time that employees must spend supporting existing systems and subtract it from 100; what remains is available for project work. Let’s assume it’s an even 50/50 split. Even though that’s not typical, it makes for easy math.
Now, consider using estimated project time combined with user satisfaction as a metric. For the project time, if the employee indicates that one full week of work is required to complete a project, give the employee two weeks of calendar time (at a 50% project allocation, one week of work spreads over two) and follow it up with a survey to the user. If the employee hits the time estimate and the user is satisfied, then you have success! Use a 1 to 5 scale for the time estimate: if the estimate is met or beaten, the employee gets a 5; if the project is late, the scale slides down toward 1, the worst score. Use a 1 to 5 scale for each survey question as well, with 5 being the best. Now, suppose you send a survey with three questions, each worth up to 5 points. (The downside to this method: it could encourage employees to overestimate the time necessary to complete a project.) The two scenarios below will provide you with clear evidence of what actions need to be taken to improve performance (that’s really the point of metrics, after all).
- Scenario one: Employee A: Hit the mark on time, but received 2, 3 and 4 as survey scores. Add these up and multiply by 5 (the score for hitting the time mark). Total points: 45. Here, you might ask the employee to provide longer time estimates and take more time to make sure that the user is satisfied with the end results.
- Scenario two: Employee B: Was a few days late, so the time score is 4. The user survey returned scores of 4, 5 and 5. After multiplication (14 x 4), the total project score is 56. In this case, the employee was a few days late, so the score was a bit lower, but the user was much more satisfied. Here, you might work with the employee on time estimates but encourage him to keep up the good work on the projects themselves. If you were to use the “on time” metric alone, this employee would not have fared as well as Employee A, even though the outcome was better. However, if Employee B had missed the project target by a month and received only a 1 as a result, it would be a clear indication that the person can’t hit targets, which is unacceptable no matter how good the survey results.
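As a rough sketch, the project-scoring scheme above can be expressed in a few lines of Python. The function names are my own, and the exact rate at which lateness erodes the time score is an assumption – the scenarios only establish that hitting the estimate earns a 5, “a few days late” maps to a 4, and a month late maps to a 1.

```python
def time_score(days_late, step_days=3):
    """Score the time estimate on a 1-5 scale: 5 for meeting or beating
    the estimate, sliding down toward 1 as the project runs later.
    Losing one point per `step_days` late is an illustrative assumption."""
    if days_late <= 0:
        return 5
    steps = -(-days_late // step_days)  # ceiling division
    return max(1, 5 - steps)

def project_score(days_late, survey_scores):
    """Sum the 1-5 survey answers and multiply by the time score."""
    return sum(survey_scores) * time_score(days_late)

# Scenario one: on time (time score 5), surveys 2 + 3 + 4 = 9
print(project_score(0, [2, 3, 4]))    # 45

# Scenario two: a few days late (time score 4), surveys 4 + 5 + 5 = 14
print(project_score(3, [4, 5, 5]))    # 56

# A month late drives the time score to the floor of 1: 14 * 1
print(project_score(30, [4, 5, 5]))   # 14
```

One design note: multiplying rather than adding the two components means a very late project drags down even glowing survey results, which matches the intent that missed targets are unacceptable regardless of outcome.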
On the support front, combine the number of calls closed with the user satisfaction for each call. Suppose you have a satisfaction scale of 1 to 5, with 5 being the best. Here is what the metrics for two employees might look like; you’ll note that the employee who appears “more productive” may not be the best performer.
- Employee A: 20 calls closed, average of 3 on the satisfaction scale = a score of 60.
- Employee B: 15 calls closed, average of 5 on the satisfaction scale = a score of 75.
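The support calculation is simply calls closed multiplied by average satisfaction. A minimal sketch, with a hypothetical helper name, reproduces the two employees above:

```python
def support_score(calls_closed, satisfaction_ratings):
    """Multiply calls closed by the average 1-5 satisfaction rating."""
    average = sum(satisfaction_ratings) / len(satisfaction_ratings)
    return calls_closed * average

# Employee A: 20 calls closed, average satisfaction of 3
print(support_score(20, [3] * 20))   # 60.0

# Employee B: 15 calls closed, average satisfaction of 5
print(support_score(15, [5] * 15))   # 75.0
```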
The examples here are just that: Examples. They’re intended for demonstration purposes to help you think about different ways to measure overall employee and service performance. As you can see, with the right metrics, you can better gauge remediation steps that can be taken to improve the overall performance of your entire department.