Learn how to know when it's time to refactor in Test-Driven Development, and how TDD can work well even when maintaining poorly designed software.
One of the steps in the cycle of Test-Driven Development (TDD) is to make the smallest code change that will pass the new test you've just added. Emil Gustafsson points out that perhaps the simplest change is better than the smallest change.
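The distinction between "smallest" and "simplest" can be made concrete with a sketch. The example below is hypothetical (the `discounted_total` function and the discount rule are my own invention, not from the article): the test comes first and fails, then a simple, general change makes it pass.

```python
import unittest

# Step 1 (red): write the failing test first for a hypothetical
# new requirement -- totals over 100 get a 10% discount.
class DiscountTest(unittest.TestCase):
    def test_discount_applies_over_100(self):
        self.assertEqual(discounted_total(200), 180)

# Step 2 (green): the *smallest* change would be to hard-code
# `return 180`; this general expression is arguably the *simplest*
# change, and it passes the same test.
def discounted_total(amount):
    if amount > 100:
        return amount * 0.9
    return amount

if __name__ == "__main__":
    unittest.main()
```

Hard-coding the answer is a legitimate TDD move too; the point is that "simplest" sometimes buys you a step toward the real design that "smallest" would not.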
To anyone who has maintained non-trivial code that's been around for more than a couple of years, this step may still seem counter-intuitive. We've all seen plenty of counterexamples where somebody added the minimum code to "make it do X," without considering how that change would impact other features or the overall design of the system. As all of these small, unrelated changes accumulate in a piece of software, it begins to resemble a skeleton that compensates for deformities by creating additional, even more grotesque deformities.
The key to avoiding this trouble is in the next step of the TDD cycle: refactoring. Once the test passes, you figure out how the new code is supposed to fit into the design and gradually migrate it to that point, running the tests all the while to make sure you don't break anything. It's important to continue taking "baby steps" in this process. Sometimes, after making a trivial change to pass the test, the refactoring feels like starting over with "Okay, now how do we really do it?" Here you can fall into the same trap that making the simplest possible change is meant to avoid: introducing too much complexity at once. To help eliminate that disconnect, I like to make that simplest possible change elegant in its own right -- the way I would write it if I had no other requirements. It then acts as the first small step into the refactoring process, rather than being the stupidest way to code it. Then I proceed to deal with all the other test failures my change has provoked. After all, we do have tests for everything, right?
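As a sketch of what those baby steps can look like, suppose the simplest change that passed a new test was a hypothetical `discounted_total` function with magic numbers in it. Refactoring in small, test-protected steps might mean first naming the concepts, then restating the rule in terms of them -- behavior unchanged at every step:

```python
# Hypothetical refactoring example: each step preserves behavior,
# so the existing tests stay green after every change.

DISCOUNT_THRESHOLD = 100   # step 1: name the magic numbers
DISCOUNT_RATE = 0.10

def discounted_total(amount):
    # step 2: express the rule in terms of the named concepts;
    # the arithmetic is identical to the pre-refactor version
    if amount > DISCOUNT_THRESHOLD:
        return amount * (1 - DISCOUNT_RATE)
    return amount
```

Each step is small enough that if a test breaks, you know exactly which change caused it -- which is the whole point of keeping the steps small.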
I find that TDD works better if you've done it throughout the life of a project. You have the tests for all expected functionality at your fingertips. Even if they take a while to run, and even if your changes mean modifying a whole bunch of tests, the confidence you have in your modifications means that you can move ahead more quickly overall and introduce fewer bugs in the process.
It's another story when most of the project's prior development did not employ TDD. Even if it has a large test suite, you don't have any guarantee that it covers all the modifications that developers made in the past. You have to read the existing code to understand not only how it does what it does, but why. Even if the design is brilliant, all of the whys may not be obvious. Even when it's your own code.
More often, though, a project that has not previously used TDD (or something similar) will bear an overly complex design. Starting with the big picture, building out all the theoretical abstractions, and only then creating the instances you need means you end up with a lot of abstractions you'll never use (YAGNI). Besides consuming initial development time, those unneeded abstractions make the code harder to maintain by providing one more thing to break -- or at least to consume the mental cycles of the maintainer who is trying not to break it.
You can still use TDD to your advantage on projects that haven't used it before. You just need to draw a clear line between what you know and what you don't know. You'll probably need to research the existing requirements of the system, and you may need to write some tests for those features. Refactoring may involve breaking down a monolithic design. Schedule more time for QA and acceptance testing. Despite all these additional time requirements, employing TDD for your new changes will still result in a higher quality:time ratio, and will begin to build a stronger foundation for future modifications.
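One common way to draw that line between what you know and what you don't is a characterization test: before changing legacy code, write tests that pin down what it actually does today. The example below is hypothetical (the `parse_price` function stands in for any undocumented legacy routine):

```python
import unittest

def parse_price(text):
    # Stand-in for legacy code whose rationale is undocumented:
    # strip whitespace, drop a leading "$", parse as a float.
    cleaned = text.strip().lstrip("$")
    return round(float(cleaned), 2)

class ParsePriceCharacterization(unittest.TestCase):
    def test_current_behavior_is_pinned_down(self):
        # These assertions document *observed* behavior, not a spec.
        # If a later refactoring breaks one, you know you've crossed
        # the line from "what we know" into "what we don't."
        self.assertEqual(parse_price("$19.99"), 19.99)
        self.assertEqual(parse_price("  5 "), 5.0)

if __name__ == "__main__":
    unittest.main()
```

With the current behavior locked down, you can refactor or extend the legacy code using the normal TDD cycle, adding new tests only for the new behavior.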
Keep your engineering skills up to date by signing up for TechRepublic's free Software Engineer newsletter, delivered each Tuesday.