
Test-driven development lets you eliminate bugs as you code

In software development, bugs are a fact of life, but test-driven development can reduce them to a minor annoyance. See how to set up the process for your next project.


Project post mortems often focus on how many bugs were discovered and how they were managed. Take any process evaluation under CMM, Six Sigma, etc., and, in most cases, you will find a lot of talk about minimizing and managing defects.

However, despite the importance attached to producing bug-free software, a lot of time is spent removing bugs rather than preventing them in the first place. When it comes to unit testing, most developers test their work after they have completed it, not as they develop it. No one can blame the coders, as project schedules often provide minimal time for coding. This happens because people like to believe they have a great design, so coding to it will be simple. Unfortunately, it often does not work out that way.

The test-driven alternative
Test-driven development (TDD) offers something new. It treats testing as a continuous process carried out as an integral part of developing code. TDD centers on unit tests written by the developer to ensure that his or her code works correctly and stays that way. It is more a way of working than a specific tool and doesn't rely on any particular product. TDD is certainly not restricted to Java; the concept applies just as well to any other programming language. However, it is particularly popular in the Java world, and I will stick to Java for the purposes of this article.

A fundamental difference between TDD and the usual practice of creating test plans before development starts is that, with TDD, the tests are not a soon-forgotten Word or Excel document. With TDD, if you are developing a Java class containing some complex logic, the tests also exist as proper Java code, and the developer is responsible for writing them. Developers often grumble about having to work with Word and Excel, but with test cases written in Java, even the most reluctant developer should have no problem writing tests.

Kent Beck, in his book Test Driven Development: By Example, describes the TDD cycle in these steps:
  • Add a little test.
  • Run all tests and fail.
  • Make a little change.
  • Run the tests and succeed.
  • Refactor to remove duplication.

An example
To demonstrate, let's look at a simple piece of code that takes a date string as input, formats it as expected, and returns a properly formatted string. We will follow the development cycle stated above.

We are using the JUnit testing framework for this example. JUnit assists us in executing the tests we have written and in doing some basic comparisons. All JUnit expects is that we follow a few rules while writing the tests.

We'll assume that our method can get the following strings as input:
  • null
  • MM-DD-YYYY
  • MM-D-YYYY
  • M-DD-YYYY
  • MM-DD-YY

For all these possibilities, we are expected to return a string in the format MM-DD-YYYY, or an empty string if the input is null. We prefix a 0 to a single-digit month or day and 20 to a two-digit year; for example, 3-15-2004 becomes 03-15-2004, and 12-15-04 becomes 12-15-2004. So we now know exactly what the code is expected to do. Let's follow step one: write a test.

Assumptions
FormatDate is the name of the class and formatDate is the static method we intend to test.

Since we are using the JUnit framework, we have to follow some of its conventions. As Listing A shows, the class that will hold the tests we write needs to extend the TestCase class and import some classes from the junit.framework package.
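Under JUnit 3.x conventions, the skeleton of such a test class looks roughly like the following sketch (the class name follows the one used later in this article):

import junit.framework.TestCase;

public class FormatDateTester extends TestCase {

    public FormatDateTester(String name) {
        super(name);
    }

    // Test methods, each named testXxx, will be added here.
}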

The code
We can now write the tests, even before we have written any of the actual code we intend to test. The tests we write will drive the actual development undertaken. Let's write a test method for each of the possibilities, as shown in Listing B.

This simple method, on execution, calls the formatDate method, passing null as the parameter. The assertNotNull method then verifies that the returned value is not null; if it is, the test fails. The other cases, shown in Listing C, work along similar lines. The assert* methods are provided by the JUnit framework to make comparing expected and actual results much easier.
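The test methods might look something like this sketch (the method names here are illustrative, not taken from the listings):

public void testNullDate() {
    // Null input should not come back as null; we expect an empty string.
    assertNotNull(FormatDate.formatDate(null));
}

public void testSingleDigitMonth() {
    // M-DD-YYYY input should be padded to MM-DD-YYYY.
    assertEquals("03-15-2004", FormatDate.formatDate("3-15-2004"));
}

public void testTwoDigitYear() {
    // MM-DD-YY input should be expanded to MM-DD-YYYY.
    assertEquals("12-15-2004", FormatDate.formatDate("12-15-04"));
}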

So we now have a test in place for each of these possibilities, covering everything the method formatDate is supposed to do. We write these tests in a class FormatDateTester, following standard JUnit conventions. Executing these tests with JUnit is pretty simple, as shown in Listing D.
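With JUnit 3.x, one simple approach is to launch the Swing-based test runner from the command line, assuming junit.jar and the compiled classes are on the classpath:

java junit.swingui.TestRunner FormatDateTester

Alternatively, a main method in the test class can hand the tests to a runner (a sketch, not necessarily what Listing D contains):

public static void main(String[] args) {
    // Run all testXxx methods in FormatDateTester with the text-based runner.
    junit.textui.TestRunner.run(new junit.framework.TestSuite(FormatDateTester.class));
}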

The method formatDate currently looks like this:
public static String formatDate(String strUnFormatted) {
    return null;
}

This is required only to ensure that the test case class does compile. On execution of these tests with the Swing-based test runner, you should see a display similar to Figure A.

Figure A
Failed tests


The red bar indicates that some tests failed. In our case, all the tests failed. That is good, as we have now completed steps one and two of the development cycle outlined earlier.

Now we need to make some changes to the code—or should I say actually write the code—and get it to pass all the tests.

First, we need to modify the formatDate method to get it to pass our tests. As you develop the formatDate code, keep running the test cases at regular intervals; you will almost certainly keep failing one or more of them at first. The minute you pass all the tests, stop coding. Driven by the tests we wrote earlier, the code now does exactly what it was expected to do, and it need not do any more.
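An implementation along the following lines would satisfy the tests described above (a sketch based on the input formats listed earlier; your final version may well differ):

public class FormatDate {

    public static String formatDate(String strUnFormatted) {
        // Null input comes back as an empty string.
        if (strUnFormatted == null) {
            return "";
        }
        String[] parts = strUnFormatted.split("-");
        if (parts.length != 3) {
            // Not one of the expected formats; return the input unchanged.
            return strUnFormatted;
        }
        String month = parts[0];
        String day = parts[1];
        String year = parts[2];
        // Prefix a 0 to a single-digit month or day.
        if (month.length() == 1) {
            month = "0" + month;
        }
        if (day.length() == 1) {
            day = "0" + day;
        }
        // Prefix 20 to a two-digit year.
        if (year.length() == 2) {
            year = "20" + year;
        }
        return month + "-" + day + "-" + year;
    }
}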

The last step in the development cycle is to refactor the code. Try to eliminate any redundant code and incorporate any performance optimization-related changes. Then, once again run your test suite. If you are still in the green (Figure B), your code is finished.

Figure B
Passed tests


Setup and teardown
In this example, we tested a static method provided by the class FormatDate, and there was no repetition of code across tests. However, if a situation arises where you have to instantiate a class in every test, you can remove that repetition by moving the repeated code into the setUp method. The framework automatically calls setUp before each test is executed. The tearDown method is called after each test finishes executing; you can use it to release resources.
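For instance, a test class that needs a fresh object in every test could be sketched like this (the Formatter class here is purely hypothetical and used only for illustration):

import junit.framework.TestCase;

public class FormatterTester extends TestCase {

    private Formatter formatter; // hypothetical class under test

    protected void setUp() {
        // JUnit calls this before each test method runs.
        formatter = new Formatter();
    }

    protected void tearDown() {
        // JUnit calls this after each test method finishes; release resources here.
        formatter = null;
    }

    public void testFormatterIsCreated() {
        assertNotNull(formatter);
    }
}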

The TDD safety net
Having proper tests in place is especially useful as the code undergoes bug fixes or is being tinkered with by somebody other than the original developer. You can run the tests to ensure that nothing that worked earlier has been broken due to any changes made to the code.

By and large, once the design is finalized and some development has been done, few muster the courage to make changes to the design. Refactoring can be carried out safely if you have a test harness in place that ensures your changes do not end up breaking anything.

Ideally, you should run the test suite at least once every day. See a green bar daily, and you can sleep well, assured that you haven't broken anything with that day's development.

Additional tools
As TDD usage has grown, many have found that JUnit alone is not always adequate; this has been especially true with regard to J2EE. So a slew of JUnit extensions has emerged that adapt the core concept to specific requirements. J2EEUnit, StrutsTestCase, and Cactus are a few examples.
