During lessons learned sessions, a common practice is to highlight what went well on a project and identify areas for improvement. Agile practitioners maintain that project teams should conduct retrospectives (a.k.a. lessons learned sessions) after every release and iteration; in waterfall projects, this activity usually occurs at the end of the project. Regardless of your preferred methodology, I recommend comparing actual duration and effort to the baselined values as part of a lessons learned session.
Project teams often spend time at the end of a project documenting lessons learned, but few measure their actual performance and record it for future estimation.
During effort estimation, project teams develop either a bottom-up or a top-down estimate. Bottom-up estimates require a significant investment of time to define scope and build an accurate estimate. Top-down approaches rely on analogous estimates and expert opinion to gauge duration at a high level. If you start collecting actual performance data and comparing actual results against baseline estimates, you can build an analogous estimation tool that is grounded in bottom-up data from past projects.
In my system implementations, a common practice was to develop an estimation matrix that categorized deliverables as reports, interfaces, conversions, enhancements, and forms (or screens) and assigned each a complexity rating of low, medium, or high. Low-complexity items were assigned 8 hours of effort; medium items, 16 to 24 hours; and high items, 32 to 80 hours. The ranges were adjusted based on the information available. The key benefit was that the matrix provided a starting point for estimation based on basic information.
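The matrix described above is essentially a lookup table. Here is a minimal sketch of how it might be expressed in code; the hour ranges come from the article, but the names `ESTIMATION_MATRIX` and `estimate_hours` are illustrative, not part of any tool mentioned here.

```python
# Complexity ratings mapped to effort ranges in hours, per the matrix above.
ESTIMATION_MATRIX = {
    "low": (8, 8),       # low items: a flat 8 hours
    "medium": (16, 24),  # medium items: 16 to 24 hours
    "high": (32, 80),    # high items: 32 to 80 hours
}

def estimate_hours(complexity: str) -> float:
    """Return the midpoint of the effort range for a complexity rating."""
    low, high = ESTIMATION_MATRIX[complexity.lower()]
    return (low + high) / 2

# A rough starting estimate for one deliverable of each complexity.
total = sum(estimate_hours(c) for c in ["low", "medium", "high"])  # 8 + 20 + 56
```

With only a category and a complexity rating per deliverable, a table like this yields a defensible first-pass effort number, which is exactly the "starting point" benefit described above.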
Over the past few years, I started collecting metrics against the Microsoft Project schedule so I could develop a better estimate matrix based on actual project data.
Figure A depicts a custom table I used to track and categorize actual project data for future comparison. In this table, I added custom fields to categorize the deliverables, assign a complexity rating, and flag the tasks to include in my estimation analysis.
My estimation table.
Figure B includes a close-up view of the interface analysis.
Interface actual and estimate comparison.
In this section, I can quickly see where I underestimated the code phases of specific interfaces. Interface 1 had a baseline duration of 2 days, but it took 6 days to complete the work. Since the data is recorded in both days and hours, I can refine my estimates in either unit. By examining the actual task data, I can start building better estimation metrics. It also helps to understand the root cause of why a medium-complexity interface moved from 2 days to 6 days; based on that root cause, I may adjust the matrix to produce better estimates.
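The Interface 1 example above boils down to a simple variance calculation, sketched here with the numbers from the text (the variable names are mine):

```python
# Interface 1 from the example: baselined at 2 days, completed in 6 days.
baseline_days = 2
actual_days = 6

# Absolute and percentage variance against the baseline.
variance_days = actual_days - baseline_days                    # 4 days over
variance_pct = (actual_days - baseline_days) / baseline_days * 100  # 200% over
```

A 200% overrun on a medium-complexity interface is a strong signal either that the complexity rating was wrong or that the matrix's medium range needs widening.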
After I export the data to Excel, I can update my matrix, shown in Figure C.
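Updating the matrix from exported actuals amounts to grouping the actual hours by category and complexity and recomputing each cell's range. Here is a hedged sketch of that step; the sample rows and field layout are hypothetical, standing in for whatever the Excel export actually contains.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows exported from the tracking table to Excel:
# (category, complexity, actual_hours). Values are illustrative only.
actuals = [
    ("interface", "medium", 16),
    ("interface", "medium", 48),  # e.g., the interface that ran 2 days -> 6 days
    ("report", "low", 8),
    ("report", "low", 10),
]

# Group observed effort by (category, complexity).
groups = defaultdict(list)
for category, complexity, hours in actuals:
    groups[(category, complexity)].append(hours)

# Refresh each matrix cell with the observed range and average effort.
updated_matrix = {
    key: {"min": min(h), "max": max(h), "avg": mean(h)}
    for key, h in groups.items()
}
```

Each time a project closes, re-running this grouping over the accumulated actuals tightens the ranges, which is how the matrix keeps improving from bottom-up data.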
I am not suggesting you track every task for a lessons learned session, but I do recommend tracking the major activities required to produce specific IT deliverables. On your next project, you'll conduct similar effort estimation activities, and by comparing estimates against actuals for specific deliverables, you will continue to build a better estimation matrix.
In my next column, I’ll show how to track and export this data using a variety of views and maps in Microsoft Project.
How do you track your estimates and build more objective estimation tools? Let me know in the forums.