The first rule of estimating time requirements for an IT consulting job is: Don't do it.
Okay, that's not exactly right. In fact, you should always estimate the time required for a project — if you bill by the job, that's how you'll determine your price. Even if you bill by the hour, you'll want to get a handle on exactly what you're committing yourself to in terms of scheduling. Just don't share that estimate with your client, if you can help it. No matter how many qualifications you put on it, no matter how many times you remind them that it's only an estimate, just by putting a number or a date out there, you've created an expectation.
But sometimes clients insist on getting a "rough timeframe" — or, if you bill by the hour, a rough cost estimate. Back in my early days of consulting, I'd just pull a number out of my, um, hat — a number that was usually based on the best case scenario. You know, the one where nothing goes wrong. For some reason, I was rarely correct.
A while back, Tom Mochal posted about using the PERT technique, in which you average the worst case, best case, and most likely case (heavily weighting the most likely case) to come up with a decent estimate. Personally, I'd forget about the optimistic case and divide by 5 instead. The probability of the best scenario is equal to the probability that Murphy's Law will be suspended — and we should all know from experience that Murphy's Law is more reliable than gravity.
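The two weighting schemes can be sketched in a few lines of Python (the function names are mine, not from Tom's post):

```python
def pert_estimate(best, likely, worst):
    """Classic PERT: weighted average with the most likely case counted 4x."""
    return (best + 4 * likely + worst) / 6

def pessimist_estimate(likely, worst):
    """The variant above: throw out the optimistic case and divide by 5."""
    return (4 * likely + worst) / 5

# Say an activity is 10 hours if everything goes right, 20 hours most
# likely, and 60 hours if Murphy shows up:
print(pert_estimate(10, 20, 60))     # 25.0
print(pessimist_estimate(20, 60))    # 28.0
```

Notice that dropping the best case nudges the estimate upward, which is the whole point: the optimistic scenario was only ever dragging the average toward a number you'd never hit.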
But even so, how do you come up with the numbers to plug in as worst case and most likely case? For the worst case, you have to stop somewhere. I mean, just think of how many things could go wrong. In the most critical stage of the project, you could be taking your morning stroll and be hit by a falling piece of frozen airliner sewage and not wake up from the coma for six months. Do you build that into your worst case? Of course not! Because the coma could last for seven months. Seriously, though, you can only include the most likely delays in your worst case scenario. How's that for a paradox?
When computing the most likely timeframe, I follow a formula something like this for each activity:
T = (G * U) / R
- T is the resulting time estimate for the activity.
- G is my best guess for how long it "feels" like it will take.
- U is the uncertainty factor.
- R is my reliability quotient.
G is the least scientific quantity here. Unless your work is highly repetitive (in which case, you should write a program to do it), it's difficult to say with any precision exactly how long any activity will take. That's why I call it G, for Guess. That's also why this equation has the U and the R, which I'll explain below.
U, for Uncertainty, is a multiplier for unknowns. How much of this activity involves new research? How much has already been solved? Here's a guide:
- (U = 1) I've performed this activity myself before, and I know what I'm doing.
- (U = 2) Many others have done this before, so I'll be able to Google my way through.
- (U = 4) I read somewhere that somebody did this before.
- (U = 8) Nobody has done this before, but all the pieces should be in place.
- (U = 16) We'll have to invent how to get from point A to point B.
If the solution involves learning a new programming language or other major technology, multiply U by a factor commensurate with your ability to learn new languages, etc.
I often find that when my estimates are wildly off, it's because I failed to get U right. I'll mistakenly assume that some part of the problem must have already been solved before, only to discover that we're actually breaking new ground. The more research that has to go into a project, the less reliable your initial estimate will be — exponentially. But sometimes you can reasonably lower the value of U by doing a little prototyping up front.
R, for Reliability, takes your past performance at estimation into account. At the end of each project, compute the ratio of each activity's G value to the actual number of hours spent on it. Average these in with the same ratios from all your other past projects, and you have R. R may seem to represent the probability that your best guess will be right, but it isn't — it's actually far more useful than that, because it's a predictor of how far off your best guess is likely to be. Thus, if you have a history of taking twice as long as your best guess, then R will be 0.5 — which multiplies your final estimate by 2. It's the old adage "take your best guess and multiply by n," except that n is determined by just how bad your guesses have been in the past.
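Putting the pieces together, here's a minimal sketch in Python of computing R from past guess-versus-actual data and then applying T = (G * U) / R (the history numbers below are invented for illustration):

```python
def reliability(history):
    """R: average of guess/actual ratios across past activities."""
    return sum(guess / actual for guess, actual in history) / len(history)

def estimate(guess, uncertainty, r):
    """T = (G * U) / R"""
    return guess * uncertainty / r

# Past activities as (guessed hours, actual hours).
# A chronic pattern of taking twice as long gives R = 0.5.
history = [(10, 20), (8, 16), (5, 10)]
r = reliability(history)

# New activity: feels like 12 hours (G), Googling my way through (U = 2).
print(estimate(guess=12, uncertainty=2, r=r))  # 48.0
```

Dividing by an R below 1 inflates the estimate, which is exactly the corrective you want if your track record says your gut runs optimistic.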
Now you have an estimate that you're likely to be able to live up to. But we never like to disappoint a client, so when they ask, double it. After all, we're still subject to Hofstadter's Law: "It always takes longer than you expect, even when you take Hofstadter's Law into account."
Chip Camden has been programming since 1978, and he's still not done. An independent consultant since 1991, Chip specializes in software development tools, languages, and migration to new technology. Besides writing for TechRepublic's IT Consultant blog, he also contributes to [Geeks Are Sexy] Technology News and his two personal blogs, Chip's Quips and Chip's Tips for Developers.