One way that management gauges the progress and success of a development effort is by looking at a variety of metrics relating to individual productivity and the quality of program modules. But once the initial development is complete, managers turn their attention to how well an application performs in production.

Unfortunately, some organizations use development metrics to assess an application that’s in production and maintenance mode, which doesn’t offer an accurate picture of your application or your team. To make sure that your development efforts aren’t measured by inaccurate or irrelevant statistics—and that your team isn’t held accountable for issues that are out of its control—you need to analyze the metrics that are applied to your apps once they’re placed into production. Let’s look at some considerations to keep in mind as you evaluate the production and maintenance metrics your organization uses.

What matters in production
Developers are sometimes held to production statistics that include system uptime (in the form of application availability), help desk service level agreements (SLAs), and project completion measurements other than on-time and on-budget stats. However, these stats may not be the best indication of how you are really doing. For example, if you deliver an enhancement release containing known issues that lead to application downtime, will your team be held accountable even though you weren’t tasked with resolving those issues? Further, in development, you might be held to an estimate of how long it will take to produce a deliverable. But in maintenance, especially a production fix, you may not have time for the type of analysis it takes to produce an accurate estimate. And if you are working on code that’s already in production, the priority may well be speed of delivery at the expense of all other considerations. Just make sure you aren’t later judged against the very factors that were deliberately deprioritized.

To help you determine whether metrics are valid and appropriately targeted, here are some questions you should ask about your production metrics:

  • Are maintenance metrics and new development metrics approached differently?
  • Do you understand what portions of complicated metrics, such as those associated with SLAs, affect you directly and indirectly?
  • Do uptime SLAs include the right caveats for developers? For example, will you be held accountable for production blunders? Will you be held accountable for something that an outside vendor is responsible for?
  • What is measured for quick maintenance fixes? When do complicated or widespread problems officially become projects, with accompanying project measurement stats? When do application-specific issues become enhancements to the original design, more suited for a new release of the application? Are known bugs ever rolled into enhancement projects?
  • How are response-time metrics calculated? Are you accountable for the total time a help desk ticket is open or only for when you “own” the ticket?
  • Are you penalized for delays associated with “batching” of fixes?
  • How realistic are the uptime SLAs? Is the cost to meet the target justified? If your uptime capability is high to begin with, the cost to improve it further can rise exponentially. With that in mind, are you likely to get the resources you will need (time or personnel) to meet the objective?
  • Do project metrics reflect the development methodology? For example, if you moved to an extreme programming approach, does project management modify performance indicators to reflect that shift?
  • Who controls the development of metrics? How do marketing metrics affect you? What control do you have over the outcome of these metrics?
  • How will flaws in the information architecture be accounted for? Will you continue to take a hit on items that can’t be changed but that cause downtime or delays?
  • Do you have a well-honed change control system? Because changes are often the cause of downtime, can applications placed into production be easily rolled back?
  • How instrumental is QA in application and infrastructure development? Does QA still play a role after an application is in production and under maintenance?
  • Does your organization do a good job of capacity planning and disaster recovery planning? Will you be held accountable for failures in these areas?
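When weighing the realism of an uptime SLA, it helps to translate the percentage into an actual error budget. A minimal sketch (the function name and the chosen targets are illustrative, not from any particular SLA) showing why each additional "nine" cuts the allowed downtime tenfold while the cost to achieve it typically rises much faster:

```python
# Translate an uptime SLA percentage into allowed downtime per year.
# Each extra "nine" shrinks the permitted downtime by a factor of 10,
# which is why the cost to improve high availability rises so sharply.
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted by an uptime SLA."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% uptime -> {allowed_downtime_hours(target):.2f} h/yr of downtime")
```

Running this shows the budget collapsing from roughly 87.6 hours at 99% to about 5 minutes at 99.999%, which is the context you need when asking whether the resources offered match the target demanded.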
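The response-time question above turns on how ownership intervals are summed. A minimal sketch of the "only while you own the ticket" calculation, assuming the help desk system records chronological assignment events (the event structure and team names here are hypothetical; real ticketing systems vary):

```python
from datetime import datetime

def owned_seconds(events, team):
    """Sum only the intervals during which `team` owned the ticket.

    `events` is a chronological list of (timestamp, owner) assignment
    events, ending with a (timestamp, None) entry when the ticket closes.
    """
    total = 0.0
    # Pair each event with the next one to get each ownership interval.
    for (start, owner), (end, _next_owner) in zip(events, events[1:]):
        if owner == team:
            total += (end - start).total_seconds()
    return total

events = [
    (datetime(2024, 1, 1, 9, 0), "help_desk"),
    (datetime(2024, 1, 1, 10, 0), "dev_team"),  # handed to developers
    (datetime(2024, 1, 1, 12, 0), "ops"),       # escalated to operations
    (datetime(2024, 1, 1, 15, 0), None),        # ticket closed
]
print(owned_seconds(events, "dev_team") / 3600)  # prints 2.0 (hours)
```

Here the ticket was open for six hours total, but the development team owned it for only two; a metric that charges you for all six measures something your team doesn't control.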

Collecting the right information and applying it quickly to improve performance is the true value of any metric, no matter what part of IT it’s applied to. Make sure the metrics being used to measure the quality (and quantity) of your maintenance work provide an accurate reflection of how you and your team are doing.