When I was a full-time software developer, I liked nothing better than sitting down and writing code. Like many of you, I could sit down early in the morning and write code for hours at a time. Of course, along with the coding comes the testing, which I never enjoyed as much. But it is during the testing process that you really find out what kind of programmer you are. Were the hundreds, or thousands, of lines of code just so much garbage, or did testing prove the beauty and logical strength of your code?
In fact, as I matured in the software development field, I realized that the software testing process shows much more than how good a programmer you are. It also shows how well you met your business requirements and how well you designed your application. As I moved into positions of greater project responsibility and eventually into project management, I made perhaps my greatest mental leap—the testing process has a life cycle of its own. For small development efforts, you may be able to get away with thinking about testing as you are writing the code. But for larger projects, the testing process begins during the analysis phase and continues through the end of the project.
The purpose of testing
It is worth stating the obvious about the reason we do testing at all. First, we want to ensure that the solution meets the business requirements. For instance, if a specific business transaction follows a logical path from A -> B -> C -> D, we want our testing to prove that this, in fact, happens. Second, we test to catch errors and defects. These may be programming errors or errors that were introduced earlier in the development life cycle. In other words, we need to prove that our solution is correct, while at the same time proving that there are no errors or defects. You may not catch any errors (ha!), but you still need to prove that the solution meets the business requirements.
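To make the first point concrete, here is a minimal sketch of what "proving the path" can look like in code. The order workflow and step names (A through D) are hypothetical stand-ins, not from any real system; the idea is simply that a test records the path a transaction actually takes and asserts it matches the path the business requires.

```python
# Hypothetical order workflow: each step records its stage name,
# so a test can prove the transaction follows the required path.
def process_order(order):
    order["path"].append("A")  # A: validate the order
    order["path"].append("B")  # B: reserve inventory
    order["path"].append("C")  # C: charge the customer
    order["path"].append("D")  # D: schedule the shipment
    return order

def test_transaction_follows_expected_path():
    order = process_order({"path": []})
    # The business requirement: the transaction goes A -> B -> C -> D.
    assert order["path"] == ["A", "B", "C", "D"], "transaction strayed from A -> B -> C -> D"

test_transaction_follows_expected_path()
print("path verified: A -> B -> C -> D")
```

A test like this does double duty: it proves the requirement is met today, and it catches any future change that silently reorders or drops a step.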
Some basic tenets of testing
Regardless of the type of project you are on, you want to catch as many errors as possible, as early in the life cycle as possible. I have seen projects that purposely blast through the analysis, design, and coding phases with a philosophy that they will fix all the problems in testing. This is usually disastrous. It’s not so bad to correct programming errors that you discover in your testing. It is much more difficult when you realize that a design error resulted in information missing from your database. It is even harder if you discover that your analysis missed an entire business process.
At the same time, you can’t ensure that your software is perfect. Even with the best tools, it’s impossible to test every logic path of every program. Therefore, you can’t ever be sure that your code is bug-free. There are times when software that has been stable for 10 years suddenly breaks through an unlikely combination of circumstances that never occurred before. (I call this type of problem a “subtle” bug.)
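A back-of-the-envelope calculation shows why exhaustive path testing is out of reach. Each independent branch point in a program can roughly double the number of possible execution paths, so the path count grows exponentially. The sketch below simply computes that growth; the figures are illustrative, not a claim about any particular codebase.

```python
# Rough illustration: each independent branch can double the number
# of possible execution paths, so full path coverage explodes fast.
def path_count(branches: int) -> int:
    return 2 ** branches

for n in (10, 20, 30):
    print(f"{n} independent branches -> {path_count(n):,} possible paths")
# Even a modest 30 independent branches yields over a billion paths,
# which is why testing aims for confidence, not exhaustiveness.
```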
Since you can’t build the perfect, bug-free system, the project team needs to understand the cost of testing and recognize when the cost exceeds the benefit. You can get into a lot of trouble if your testing is not rigorous enough. However, testing too much can be a waste of precious resources. You can’t deliver a perfect system, but you can determine how much testing makes sense given the characteristics of your project. Building an internal application to track customer complaints does not need the testing rigor that was required for the Apollo moon mission.
Coming soon: The testing process in more detail
If you were so inclined, you could write a thousand pages on the testing process. Builder readers will be glad to know that I am not so inclined. However, this is the first in a series of columns that will describe the overall testing process in some detail. These columns will provide information you can use on your projects today but will not get into the science of testing, which would bore you to tears. Future columns will cover topics such as:
- Thinking ahead: Defining your overall testing strategy
- Preparing for testing: Establishing the detailed testing plan within the context of the testing strategy
- Unit testing: Establishing the first line of defense against errors and defects, one program at a time
- Integration testing: Putting the unit-tested pieces together
- System testing: Making sure that the system meets all the requirements in terms of functionality and performance
- Acceptance testing: Having the clients validate that the system functions and operates up to their expectations
- Testing metrics: Measuring the effectiveness, completeness, and rigor of the testing process
- Testing tools: Automating parts of the testing process, a requirement on large projects
- Customizing the testing process: Identifying the development life cycle (package implementation, RAD, waterfall, etc.), which you must do before you can define your testing process
- Testing teams: Explaining the roles and responsibilities of software testers and how different companies organize the testing teams
As we journey through the testing life cycle, Builder will also publish a series of related templates you can use on your projects today. I hope the entire series provides you with a sense of how a proactive testing process will result in a higher-quality solution and a more effective and efficient development project.
Key points to remember
While future columns will cover different aspects of the testing process in more detail, here are a few points to keep in mind:
- Don’t think of testing as a series of events that take place after programming. Instead, take a holistic view of testing as a process that runs throughout the development life cycle.
- If the testing process is thought through early in the project, the actual execution will be much more efficient and effective.
- There is no such thing as a perfect solution. Each project team needs to determine how rigorous a testing process makes sense, based on the characteristics of the solution and the consequences of defects occurring in production.