Integration testing will show you how well your modules get along

There is a point in software testing when it all comes together, literally. The integration test lets you test interfaces and look for bugs that unit testing didn't catch. Find out why you need this testing step and learn two approaches to doing it.

In many respects, integration testing should be the easiest type of software testing to perform. Think about why: if all developers understand the requirements in more or less the same way and validate that their programs are accurate during unit testing, the integration test will turn up far fewer problems than earlier rounds of testing. Also, you don't have to create all of your test cases from scratch. You can start with the combination of test cases and test data from unit testing and just add incremental cases to test logic specific to the interfaces.

We do integration testing because a set of modules that work fine individually rarely work together correctly the first time. There are a variety of reasons:
  • Differences in the understanding of business requirements between multiple developers cause them to assume different things should happen for the same test cases.
  • Fields are defined differently. For instance, one module assumes a field can hold 10 characters, while another module is programmed to hold 11.
  • There are different assumptions in field content. For example, one program expects a phone number to be 10 digits. Another expects the phone number to include the dashes.
  • The modules still have errors that were not uncovered in unit testing. Integration testing may add new test cases, some of which may have been difficult to generate during unit testing, that result in additional errors being uncovered.
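The field-content mismatch above is easy to see in miniature. Here is a hypothetical Python sketch (the function names and formats are invented for illustration): each module is internally consistent and would pass its own unit tests, but one produces a phone number with dashes while the other expects bare digits, so the bug only surfaces when they are wired together.

```python
def format_phone(raw: str) -> str:
    """Module A (hypothetical): returns a phone number with dashes."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return f"{digits[:3]}-{digits[3:6]}-{digits[6:]}"

def store_phone(phone: str) -> str:
    """Module B (hypothetical): expects exactly 10 digits, no dashes."""
    if len(phone) != 10 or not phone.isdigit():
        raise ValueError(f"expected 10 bare digits, got {phone!r}")
    return phone

# Each module passes its own unit tests, but the handoff fails:
formatted = format_phone("4045550123")   # "404-555-0123"
try:
    store_phone(formatted)
except ValueError as err:
    print("integration bug:", err)
```

Neither module is "wrong" in isolation; the defect lives in the interface, which is exactly what integration test cases are designed to exercise.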

From a terminology perspective, integration testing is sometimes referred to as “string testing” or “thread testing.” This usually refers to testing the programs for one business transaction. For the purposes of this column, however, I am referring to all of these as the generic integration test. There are two general approaches to integration testing: big bang and incremental.

Big bang testing
With the big bang approach to integration testing, you take all the unit-tested modules for a system and tie them together. This approach is great with smaller systems, but it can end up taking a lot of time for larger, more complex systems. One of the advantages of big bang is that you uncover more errors earlier in the testing process. In fact, you may uncover errors all over the place when the testing first begins. If the modules have been well tested in the unit test, this approach can also end up saving time. In addition, it can be quicker because you don’t have to create as many stub and driver programs. (We’ll explain these later.)

The biggest disadvantage of the big bang approach is that it is harder to track down the causes for the errors since all the modules, and thus complexity, are added at once. For instance, if a transaction is not processing correctly, you may have to track back through 10 modules to determine the cause.

A second disadvantage is that you cannot start integration testing until all the modules have been successfully unit tested. Some modules that work together may be completed early, but they cannot be integration tested until all the modules in the application are completed.

Incremental testing
With incremental testing, you test two related programs together and make sure that they’re working correctly. Then, you add other modules to the test one (or a few) at a time until all of them are successfully working together. This is usually the best approach for large, complex systems.

Incremental testing requires the creation of stub and driver programs. These are shell programs that do nothing but allow other programs to make calls to modules and databases. The difference is that a stub program is called by the program you are testing and a driver program calls your program. As an example, let’s say your Program A calls Program B, which updates a database with the information you pass to it. However, Program B is not ready to include in integration testing. So, you create a stub Program B that does nothing but accept your input parameters and display the fields. This allows you to test Program A now. Likewise, let’s say you were testing Program B, but the calling program, Program A, is not ready. In this case, you can create a driver Program A that has no logic other than to pass a predefined table of parameter input to Program B.
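The Program A / Program B scenario can be sketched in a few lines of Python. Everything here is hypothetical (the names, records, and validation rule are invented), but it shows the shape of both shells: the stub is called by the program under test and merely accepts and displays its parameters, while the driver calls the program under test with a predefined table of input.

```python
received = []  # everything the stub is handed, so we can inspect it

def program_b_stub(record: dict) -> None:
    """Stub for Program B: the real version updates a database; this
    shell just accepts the input parameters and displays them."""
    received.append(record)
    print("program_b_stub received:", record)

def program_a(record: dict, update=program_b_stub) -> None:
    """Program A, the module under test: applies its own logic, then
    calls Program B (here, the stub) with the result."""
    if "id" not in record:
        raise ValueError("record must carry an id")
    update(record)

def driver_for_program_b(target) -> None:
    """Driver standing in for Program A: no logic other than passing
    a predefined table of parameter input to Program B."""
    for rec in ({"id": 1, "name": "Ann"}, {"id": 2, "name": "Ben"}):
        target(rec)

program_a({"id": 7, "name": "Cora"})   # test A against the stubbed B
driver_for_program_b(program_b_stub)   # drive B (stubbed here) directly
```

As the real Program B becomes available, you replace the stub with the actual module and rerun the same cases, which is the heart of the incremental approach.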

You might ask whether there are different ways to put the modules together. In general, there are two: top down and bottom up.
  • Top down: With this approach, the general modules at the top of the overall logic path are tested first, using stubs for the called modules. Then, the called modules are added, using stubs for modules they might call. This continues until the entire application is tested. This can be easier to logically understand, but it defers the bulk of the testing complexity until later.
  • Bottom up: This approach is the opposite of top down. You start off with programs at the lower level and use driver programs to call them. Then you replace the drivers with the actual programs and move up the program hierarchy until all the modules are in place. Bottom up checks the bulk of the processing logic earlier in the testing process, but it is harder to see the bigger picture until later, when the more general programs are in place.
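A minimal top-down sketch, again with invented module names: the top of the logic path is tested first against a stub for the module it calls, and then the real module is swapped in and the same test rerun. (A bottom-up pass would run the same hierarchy in the other direction, using drivers to call `low_level` first.)

```python
def low_level(x: int) -> int:
    """Lowest module in the hierarchy: the real computation."""
    return x * 2

def mid_level(x: int, compute=low_level) -> int:
    """Middle module: calls down to low_level."""
    return compute(x) + 1

def top_level(x: int, step=mid_level) -> int:
    """Top module at the head of the logic path."""
    return step(x) * 10

def mid_stub(x: int) -> int:
    """Stub for mid_level: a canned answer, no real logic yet."""
    return 0

# Step 1 (top down): test top_level alone, stubbing the module it calls.
assert top_level(3, step=mid_stub) == 0

# Step 2: swap in the real mid_level (which could itself have been
# tested first with a stub for low_level) and retest.
assert top_level(3) == 70   # (3 * 2 + 1) * 10
```

The trade-off described above is visible even here: step 1 confirms the overall flow early, but the real processing in `low_level` and `mid_level` is not exercised until the stubs are replaced.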

Incremental testing is usually better for large, complex systems. The integration testing process can start earlier, as soon as related programs have been successfully unit tested. An incremental approach makes it easier to find errors since the application environment introduces only one (or a few) modules at a time. Finally, this approach results in more overall testing, since the earlier modules get tested repeatedly as you add new modules.

Now that you are ready to fit the pieces together and make sure they all play nicely, here are the main points to remember about integration testing:
  • During integration testing, you combine the individually unit-tested modules to see how they behave together.
  • There are two major approaches for integration testing—big bang and incremental. Both have advantages in certain cases. Although there is a tendency to put everything together in a big bang and see what happens, usually the incremental approach is better. It can be started earlier and is easier to control.
  • You can use the test cases from unit testing as the basis for the integration tests, along with other tests designed to exercise the interfaces.

What’s next?
When integration testing is completed, you have the basis of a system that works in a controlled environment and under limited circumstances. If you are building a Web application, for instance, you may have just proved that it works successfully for one person running on your internal network. You don’t know yet that it will work under the production load of 100 simultaneous users. You also don’t know yet if the security is sufficient or if the application could be recovered if a disaster occurred. These are examples of what you look at next in system testing. They are the most complex series of tests to perform, but they are also the most important in making sure that your application is ready for prime time.

Project management veteran Tom Mochal is director of internal development at a software company in Atlanta. Most recently, he worked for the Coca-Cola Company, where he was responsible for deploying, training, and coaching the IS division on project management and life-cycle skills. He's also worked for Eastman Kodak and Cap Gemini America and has developed a project management methodology called TenStep.