In business, a certain level of controlled chaos helps you adapt quickly to change. As IT managers, however, you need to strike a careful balance between chaos and order so that you can both adapt and get where you need to go. To strike this balance, you can use many tools, including rewards, praise, goal assignment, and metrics.
In my experience, metrics are the most difficult of these to apply correctly. Defining an appropriate metric for a specific position is hard enough. Unfortunately, once you have defined the metric, the role or task you measure may well shift, requiring you to either reassign or redesign the metric. If you fail to do so, the individuals you measure end up being held accountable for things they cannot control. This is a lesson I was fortunate enough to learn from watching others' mistakes rather than making them myself.
A real-life roles/metrics lesson
My client (a mid-size international firm) suffered a string of remarkably bad project failures. They asked me to review the projects, assess the situation, and make recommendations about how to solve the problem. Their PMI-certified project managers resented my intrusion, as I would have if I were in their shoes.
I spent three weeks talking to the project teams and reviewing project documentation. Everything seemed in order. Artifacts were filed on time. Most of the tasks were completed satisfactorily. But the teams seemed nervous and angry about something. Worse, as the projects progressed, the teams' work slowly drifted away from the projects' focus. Programmers spent their time huddled in their cubes. Infrastructure folks ran ticket after ticket, trying to catch up with old and new problems. The operations teams fought to keep everything running. In all six projects, work simply stopped at some point, usually around the time of the pilot.
This puzzled me. So I started to dig back through my interview notes, borrowing a technique from qualitative analysis to sort the data. What emerged was a predictable pattern of rising and falling excitement and influence. At the beginning of every project, during the role assignment phase, each team expressed strong interest in the outcome of the project. That interest waned as work progressed. Eventually it stopped, as did the work.
How did they get away with it?
This level of analysis told me what happened, but not why. However, it did suggest a place to start. More precisely, it suggested a question: What was different about the beginning of the project that made it generate a high level of visible commitment?
Project beginnings always generate excitement. You try to maintain that excitement with constant reviews, rewards for achieving milestones, and other positive reinforcements. In theory, you also reward the end of a project with some kind of tangible benefit, including the satisfaction of a job well done.
Looking back over my client's reward structure, something immediately stood out. As employees of the company, each IT member had, as a metric of success for that year, a "number of projects participated in." Not successfully completed or even just completed/terminated, but merely participated in. In fact, many of the IT staff participated in seven or eight projects a year—the vast majority of which were never finished.
In essence, this metric created a conflict for each employee. In the role of project team member, the employee was responsible for seeing the project through to completion; as an employee, he or she received pay and promotions for the number of projects participated in. The project managers, skilled in their positions, were careful to front-load a great deal of the planning and testing in each project, work that could be done in pieces around the team members' daily tasks. Not coincidentally, projects tended to fail toward the deployment phases, when the project would suddenly demand a significant portion of each employee's time.
The project participation metric did not align with the company's goals for the role of a project team member. This misalignment suggested that a very predictable behavior pattern would emerge. That this pattern emerged and remained unchecked for almost a year did not come as any great shock. Businesses typically review their internal rewards and metrics at most once a year, when people receive their annual reviews.
For this client, I strongly recommended strengthening the project manager role and changing the metric to count successfully completed projects. This aligned the metric on the individual employee with his or her role as a project team member. Stronger project managers, with the ability to directly review employee participation, led to the creation of additional project-focused metrics.
Other applications of the idea
Over the years I have encountered other situations where a metric, designed with the best of intentions, creates unintended conflict by measuring something that is:
- Not the responsibility of the role being measured.
- In conflict with another role held by the individual.
- Directly counter to the intention of the role being measured.
Take the extremely common example of the animosity between SAP Basis administrators and programming teams. Basis administrators typically answer to metrics about uptime and customer availability. Programmers answer to metrics about how quickly they turn functions around. This leads programmers to obey their metrics and push things out the door, straight into the laps of the Basis administrators, who sometimes have to spend days straightening out the resulting mess. The programmers get high marks for on-time delivery while the Basis team receives low marks or works late into the night, every night. On teams where the programmers are measured on "problem-free delivery" rather than speedy delivery, however, this animosity slowly fades: The metric the programmers receive praise on now matches what the customer really needs.
Similarly, look at the constant friction between local and corporate IT. Although every organization has its own political and personal reasons for these conflicts, the truth is that in nine out of 10 cases I encounter, local IT is measured on:
- Their ability to support their site.
- Their willingness to champion their site's needs.
- Their relationships with the people on site.
Although you might expect local IT staff to act as members of the corporate IT team, and their role may even nominally include that, the metrics by which you measure them emphasize their relationship with their local site. This puts them in an awkward position, one quickly resolved in favor of whatever yields the best return on their own time.
Going too far
Roles and metrics create a propensity toward a particular kind of behavior. They can't force people into robot-like patterns. By working to ensure that your metrics match the outcomes you expect from specific roles, you set people up for success. However, it's still up to the individual to seize that opportunity.
Similarly, misaligned roles and metrics don't ensure failure; they simply establish a context in which failure is more likely than not. Individual employees may rise above that context or succumb to it, depending on a wide variety of factors, including leadership, personality type, and their own personal goals.