Editor’s note: This article was originally published October 29, 2001.

The dilemma

Mary was assigned to a project for the Finance Division when the prior project manager resigned. It was a tough spot, but Mary did an admirable job in bringing the project to completion. Although the solution was ultimately implemented, some question remained as to whether the project was successful.

I attended the project conclusion meeting, and I could tell it did not go as Mary had expected. Mary came prepared with a set of metrics, but the business client did not accept them at face value.

“Mary, it appeared that there was a difference of opinion on what the facts were on this project,” I began. “You attempted to initiate a fact-based discussion. Why do you think it didn’t work out that way?”

“There was a lot of emotion built up over the course of the project,” Mary replied. “I came into the project late and didn’t realize the level of dissatisfaction some of the people felt.”

“It sounded like they were also challenging the validity of some of your numbers and whether they were relevant,” I noted. “For example, you said that the project completed on schedule, but the client said that the solution was implemented without adequate testing.”

“That may or may not be the case,” Mary replied. “We all agreed that we would implement now and fix any problems going forward.”

“Yes, but the client said they were pushed into that decision because they could not afford to miss this monthly financial close cycle,” I said.

I used this example to voice my overall advice for the future. “Mary, you have the right idea about project metrics. If you measure how you are doing, you will be in a much better position to have a fact-based discussion about the overall project success or failure. But you’ve missed a couple of important points.”

Mentor advice

It’s important here to distinguish between facts and statistics. Statistics, or metrics, can be gathered on myriad combinations of project and solution characteristics.

But be careful when using them to declare the success or failure of your project. First, make sure you have an agreement with the client on the set of metrics that will be used to declare success. Second, make sure the metrics are balanced and broad enough to represent the reality of the project experience.

If there is a disagreement on the metrics gathered and what they mean, you will be challenged right away. You also run the risk of losing all credibility if you attempt to show that a project was successful by displaying only a narrow set of favorable metrics.

That’s what happened to Mary during her project conclusion meeting. Her business client was not happy with the way the project was run, and they were not happy with the solution that was implemented. So, not surprisingly, they did not want to consider a set of metrics that indicated the project was a success. A wider range of metrics would include:

  • Actual costs expended vs. the original budget and actual delivery date vs. the original schedule.
  • Quantitative metrics that describe the solution’s performance, including response time and defects.
  • Qualitative metrics that describe client satisfaction with the solution, including ease of use, look and feel, etc.
  • Satisfaction feedback that describes the client’s satisfaction with how the project team performed, including how quickly the team responded to problems, how well they communicated, how well they partnered, etc.

Mary should have proposed a balanced set of metrics that covered cost, schedule, performance of the solution, and performance of the team. She should then have made sure there was agreement with the project sponsor on how those metrics would be used to determine how successful the project was.
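For teams that track this kind of data in a spreadsheet or script, the four metric categories above can be captured in a simple scorecard. The sketch below is purely illustrative; the class name, fields, scales, and sample values are all hypothetical, not something from Mary's project.

```python
# Hypothetical "balanced scorecard" sketch covering the four metric
# categories discussed above: cost, schedule, solution performance,
# and satisfaction. All names and numbers are illustrative.
from dataclasses import dataclass, field


@dataclass
class ProjectScorecard:
    # Cost and schedule: actuals vs. the original baseline
    budget: float
    actual_cost: float
    planned_days: int
    actual_days: int
    # Quantitative solution metric (e.g., open defects at go-live)
    open_defects: int
    # Qualitative scores agreed with the sponsor up front (1-5 scale)
    satisfaction: dict = field(default_factory=dict)

    def cost_variance_pct(self) -> float:
        """Percent over (+) or under (-) the original budget."""
        return 100.0 * (self.actual_cost - self.budget) / self.budget

    def schedule_variance_days(self) -> int:
        """Days late (+) or early (-) against the original schedule."""
        return self.actual_days - self.planned_days

    def summary(self) -> dict:
        """One view of all categories, so no single metric dominates."""
        return {
            "cost_variance_pct": round(self.cost_variance_pct(), 1),
            "schedule_variance_days": self.schedule_variance_days(),
            "open_defects": self.open_defects,
            **self.satisfaction,
        }


card = ProjectScorecard(
    budget=100_000, actual_cost=112_500,
    planned_days=90, actual_days=90,
    open_defects=14,
    satisfaction={"ease_of_use": 2.5, "team_responsiveness": 4.0},
)
print(card.summary())
```

The point of bundling the categories into one summary is the same point made above: a project that looks good on schedule alone (zero days of variance here) can still show cost overruns, open defects, and low satisfaction scores, and the client sees all of it at once.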
