IT consultants can avoid disaster when they have to break something to fix it by identifying and planning for the potential disruptions to users and client systems.
The old saying goes, "You can't make an omelet without breaking some eggs." Often in consulting work, before we can fix a problem or create something new, we have to disassemble previous work that was inadequately designed to accommodate future needs. As when a doctor rebreaks a bone that was improperly set, these operations can prove temporarily painful to the client. Our duty as consultants includes minimizing that pain as much as possible.
First, we must identify exactly what we are proposing to break. Then we can create a plan for minimizing the pain it will cause. You'd be surprised (at least, I am) how often ostensibly responsible people charge right into a project without even considering the disruptions they'll cause. Here are some of the possibilities, along with suggestions on how to make the process less painful for users:
- Users forced offline. You're going to have to take down vital resources for a period of time. Obviously, you want to minimize this by creating a parallel system, if possible, that you can ready and then drop into place. If that's not an option, schedule the outage to be as short and as unobtrusive as possible.
- User processes and procedures need to change. The old way of doing things won't survive the change. To create an orderly transition, you need a clear plan for educating the users and getting them through the transition. You don't want them to come into work one day and find out "that's not how we do it anymore."
- Development system builds broken. Sometimes you need to make such pervasive changes that, until you're done, the entire system will not build. Ideally, use a distributed version-control system (DVCS) to isolate this effort until you're ready to merge it back all at once (although even then you may find that your best laid schemes gang aft agley). Some clients may not be ready to implement a DVCS, though (and some don't use any VCS at all), which can make forking and merging more trouble than it's worth. In that case, segregate your proposed changes into the sheep (harmless) and the goats (breaks things), and schedule a time to introduce and resolve the goats as close together as possible. Make sure everyone who will be affected by the breakage knows when that will happen.
- Tests broken. Developers often forget about the poor testers. Automated tests often rely on the names of window classes and other details of execution that, when changed, cause no heartburn to an end user but wreak havoc in the test suite. Make sure that adjusting the tests is part of the implementation plan, not an afterthought. Worst of all is when you change something for which no tests exist: you're playing Russian roulette without any idea of the number of bullets in your revolver.
- Documentation broken. Don't forget that someone has to change the manuals for any operational change you make. That, too, should be part of the development plan. I know, I know: "what manuals?"
- Client code broken. This is technically a special case of "user processes and procedures need to change," but it's much more severe. My clients who develop software for programmers have the distinction of being able to release a new product that breaks lots of other products all at once. Changes of this nature not only require new user practices going forward but also force retroactive changes to prior work. Naturally, you want to avoid this at nearly all costs. Unfortunately, it's not always avoidable: users want to port to a new platform or take advantage of a new technology that is incompatible with the old way of doing things. In that case, you've got to give them plenty of warning, documentation, and education on how best to cope with the impending change. Typically, a deprecation phase is a good idea: in one release, you document that such-and-such a practice will become obsolete in an upcoming release, and you give them alternatives to replace it. Then, two or three releases later, you finally retire support for the old approach.
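One way to soften the test breakage described under "Tests broken" is a level of indirection: have tests look up UI elements through stable logical names rather than hard-coded window class names, so a rename touches one lookup table instead of every test. Here's a minimal Python sketch; the class names and the `UI_MAP` table are invented for illustration:

```python
# Map stable logical names to the volatile implementation details
# (window class names, control IDs, etc.) that automated tests need.
UI_MAP = {
    "login_button": "WndClass_LoginBtn_v2",  # renamed from v1; only this line changed
    "user_field": "WndClass_EditUser",
}


def find_window_class(logical_name: str) -> str:
    """Tests call this instead of embedding window class names directly.

    When the product renames a window class, only UI_MAP changes;
    the tests themselves keep referencing the logical name.
    """
    return UI_MAP[logical_name]


# A test references the stable logical name, not the raw class name:
assert find_window_class("login_button") == "WndClass_LoginBtn_v2"
```

The same idea applies to any detail of execution the tests depend on: the more of them that flow through one table, the smaller the blast radius of a breaking change.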
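The deprecation phase described in the last bullet can be made concrete in code: keep the old entry point working, but have it warn callers and delegate to the replacement until support is finally retired. A minimal Python sketch, where the function names `old_report` and `new_report` and the release number are hypothetical:

```python
import warnings


def new_report(data):
    """The replacement API, introduced alongside the deprecation notice."""
    return sorted(data)


def old_report(data):
    """Legacy API: still works during the transition, but warns so
    callers can migrate before support is removed a few releases later."""
    warnings.warn(
        "old_report() is deprecated and will be removed in release 3.0; "
        "use new_report() instead",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller, not this shim
    )
    return new_report(data)


# Existing callers keep working and see the warning when warnings are
# enabled (e.g. running under python -W default).
print(old_report([3, 1, 2]))  # → [1, 2, 3]
```

The shim costs almost nothing to maintain, and it turns a hard break into the gradual, well-announced transition the deprecation phase is meant to provide.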
In each of these categories (and, I would venture, in any others that you could add), the keys to avoiding disaster are identification and planning. Know the ground on which you're going to fight, so that you can create a strategy that will make the most of it and avoid unpleasant surprises.
Another thing to remember: ask why. "Why do we have to break this?" Because we need this new feature. "Why do we need this feature?" And so on. Act like a four-year-old: "Why? Why?" Once you've distilled the requirement down to its bottom-line business value, you can rationally compare that value to its business cost. But first, you must possess a reasonable knowledge of both.
I've also found a universal tendency to lump priorities together even when they have few or no interdependencies. This mass of proposed implementations gets a project name, which creates the illusion that we must do all of it or none of it. Often, the Pareto principle holds: 20% of "the project" causes 80% of its pain, and that 20% isn't all that important. Don't hesitate to ask "why" about each individual piece.