
Improve end user satisfaction by measuring e-mail use, impact

Keeping good metrics on e-mail can give you friends in the user community.

E-mail (and its cousin, instant messaging) has so radically reshaped the business world that we do not even think about it anymore. Well, the users do not think about it much anymore. E-mail remains one of the most important services an IT staff offers, so ensuring its constant availability and security is a very high priority for most organizations. However, e-mail's ubiquity makes it difficult to measure. What do our users do with it? How do they gauge server reliability and stability? Do our users want us to improve the service, or leave it alone so they can get their jobs done?

While I was gathering measurements and metrics data for a client on a recent project, he posed these questions, and a handful more, about his e-mail administration staff. His was not an idle interest; I took the hint and immediately started work on defining the inquiry's scope and building measurements that would, I hoped, open up the black box.

What do we need to measure?

At first blush, the problem seemed simple. E-mail systems were, after all, the bread and butter of my original consulting practice. Check for uptime, check for bounces, and check problem reports from inside the organization.

However, when I diagrammed the system on the board, I noticed something. The metrics I usually used told me a great deal about service availability, but nothing at all about utilization, functionality, or impact. If I gathered the usual data, I would be able to tell my client within a hair's breadth how well the system delivered mail…and not a single thing about what the mail did for the company.

One of the IT articles of faith states that e-mail is important. When it goes down, our user communities haul out the torches and pitchforks. To what extent, though, do they really use it? How many of the features do they use, for what, and to what degree?

These questions forced me to face something I knew, but never really wanted to articulate. E-mail services long ago left the realm of strictly technical measures. My beloved SMTP headers and availability charts barely touched the surface of what we really needed to know.

E-mail was no longer just a tool. Somewhere along the way, it had metamorphosed into a service, with customers who used that service in ways we could only dimly imagine. To measure that, we needed to apply customer-service-style measurements.

Measurements and metrics

Determining those measurements, and the metrics we could apply to them, was an even larger leap for me than the initial conception. Every time I tried to think about it, I kept coming back to charts of uptime and bad header reports. Eventually, though, I sat down in a room with two whiteboards. On one, I listed every report I wanted to write. On the other, I listed the areas I knew I would have to measure for an organization that provides a service rather than a tool. I kept erasing and refilling the first board, brainstorming until its reports and metrics covered every measurement area on the second. Eventually I arrived at a compromise of three areas: availability, budget, and functionality.

In the beginning, the availability metrics looked a lot like my old reports. However, I realized that uptime and header troubles did not tell the whole story, so I built a survey instrument to determine not only what the "technical" uptime was, but also what people thought about it. Did they feel that the system enhanced or delayed their communications? How frequently did they blame the e-mail system for delays? How often did the system, as a whole, fail to provide them with timely service? That last question proved particularly interesting: a few months after we implemented the instrument, the responses revealed a delay in the document management system that our technical reports had missed.
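To make that concrete, here is a minimal sketch of how a blended availability score might be computed from both sources. The function name, the 1-5 survey scale, and the weights are my own illustrative assumptions, not figures from the actual engagement:

    # Hypothetical sketch: blend measured uptime with perceived availability.
    # The weights and the 1-5 survey scale are illustrative assumptions.

    def availability_score(uptime_pct, survey_ratings, uptime_weight=0.6):
        """Combine technical uptime (0-100) with user perception.

        survey_ratings: 1-5 answers to "the e-mail system provided me
        with timely service" (5 = strongly agree).
        """
        if not survey_ratings:
            return uptime_pct
        # Map the 1-5 survey average onto the same 0-100 scale.
        perceived = (sum(survey_ratings) / len(survey_ratings) - 1) / 4 * 100
        return uptime_weight * uptime_pct + (1 - uptime_weight) * perceived

    # 99.9% measured uptime with lukewarm perception (average 3.2 of 5)
    # drags the blended score down to roughly 82.
    print(availability_score(99.9, [3, 4, 3, 2, 4]))  # 81.94

The point of the weighting is that a system can post excellent technical uptime and still score poorly if users perceive it as slow, which is exactly how the document management delay surfaced.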

Budget is always a concern in every organization, but I was at a bit of a loss as to how to relate it to the e-mail system. After all, the basic infrastructure budget covered all of the e-mail system's expenses. So, rather than look for a positive, I broke one of the unspoken rules of leadership metrics: I tied the budget metric to a negative. If the system required unbudgeted funds for additional expenses, thereby cutting into other parts of the infrastructure budget, it lost points from a baseline. It could earn those points back by returning to baseline spending for a period of time.
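As a minimal sketch of that scoring scheme (the baseline, penalty, and recovery values and the monthly cadence are assumptions of mine, not the numbers from the engagement), the metric might look like this:

    # Hypothetical sketch of the negative budget metric. The baseline,
    # penalty, and recovery values are illustrative assumptions.

    BASELINE = 100            # score while the system stays within budget
    PENALTY_PER_OVERRUN = 10  # points lost in a month with unbudgeted spend
    RECOVERY_PER_MONTH = 5    # points regained per clean month

    def budget_score(monthly_overruns):
        """monthly_overruns: unbudgeted spend per month (0 = within budget)."""
        score = BASELINE
        for overrun in monthly_overruns:
            if overrun > 0:
                score -= PENALTY_PER_OVERRUN
            else:
                # Points come back only by returning to baseline spending.
                score = min(BASELINE, score + RECOVERY_PER_MONTH)
        return score

    # Two overrun months followed by three clean months of partial recovery.
    print(budget_score([0, 2500, 1200, 0, 0, 0]))  # 100-10-10+5+5+5 = 95

Capping recovery at the baseline keeps the metric strictly negative: the system can never score better than it would by simply staying within budget.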

Functionality gave me similar fits. What does it mean to measure the functionality of an e-mail system? Eventually I decided that what it really meant was that we needed to be involved in how our users actually used the system. We needed to find a way to put our knowledge of how these tools worked at their disposal.

So, I designed a set of metrics that showed how much the e-mail administration team worked with the community. How many usage questions did we answer? How often did the team reach out to users to determine what they needed? How many of those contacts resulted in training for the user or in the implementation of a new technical solution?
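Here is a short sketch of how those counts might roll up, assuming each interaction is logged as a record; the field names and the definition of a "conversion" are my assumptions, not the team's actual tracking system:

    # Hypothetical sketch: roll the team's logged interactions up into the
    # three functionality counts. Field names are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        initiated_by_team: bool  # proactive outreach vs. inbound question
        led_to_training: bool
        led_to_solution: bool

    def functionality_metrics(contacts):
        return {
            "usage_questions_answered":
                sum(1 for c in contacts if not c.initiated_by_team),
            "outreach_contacts":
                sum(1 for c in contacts if c.initiated_by_team),
            "contacts_converted":
                sum(1 for c in contacts
                    if c.led_to_training or c.led_to_solution),
        }

    log = [
        Contact(False, True, False),  # inbound question, ended in training
        Contact(True, False, True),   # outreach producing a new solution
        Contact(True, False, False),  # outreach, no follow-up needed
    ]
    print(functionality_metrics(log))
    # {'usage_questions_answered': 1, 'outreach_contacts': 2,
    #  'contacts_converted': 2}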

At first, the team strongly disliked that last set of measurements. It forced them out of their shells, pushed them to make contacts in the user community, and even required them to learn a bit about their own products. It also forced my client to be much more involved with his team's creative process. However, after two months they began to realize that their lives were a lot easier when they had friends in the user community. Problems appeared, as they always did. Rumors flew fast and furious, as they did every day. But the e-mail team, in its role as the communication experts, was a part of the process rather than its target.
