Testing software is a difficult but necessary part of developing quality software. Find out why cutting corners could cost your organisation more than its reputation.

Almost as tedious as documenting your code is testing the end result to see if it works as advertised. Actually, testing your new piece of software to see if it does what you intended it to do isn’t really very hard at all. You know what it was supposed to do, so you feed in some data and see what happens. That should be fairly predictable, and then you can tweak the code until the desired result appears from your known inputs.

What’s really tricky is testing your code to see what it does that you never imagined it could do. Some code is easier to test for the unexpected. If you’re building a subroutine that does some mathematical manipulation, the range of possible numbers that could end up in the grinder can be generated and tested with simple tools. But do you check at this level for input data that is non-numerical, or can you assume that was done further up the line? If you check, you waste valuable cycles; if you don’t, and nobody else did, then you could produce some fabulous results, mostly contravening the known laws of the universe.
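
To make that concrete, here’s a minimal sketch in Python; the routine name reciprocal_root and its rules are invented purely for illustration, but the idea is to test the subroutine with both the inputs it was designed for and the rubbish it was not:

```python
import math
import unittest


def reciprocal_root(value):
    """Return 1/sqrt(value), rejecting anything it cannot sensibly handle."""
    if not isinstance(value, (int, float)) or isinstance(value, bool):
        raise TypeError("value must be a number")
    if value <= 0:
        raise ValueError("value must be positive")
    return 1.0 / math.sqrt(value)


class ReciprocalRootTests(unittest.TestCase):
    def test_known_inputs(self):
        # The predictable part: known input, known answer.
        self.assertAlmostEqual(reciprocal_root(4), 0.5)

    def test_rubbish_inputs(self):
        # Inputs the routine was never "supposed" to receive.
        for rubbish in ("four", None, [], 0, -1):
            with self.assertRaises((TypeError, ValueError)):
                reciprocal_root(rubbish)


if __name__ == "__main__":
    unittest.main()
```

Whether a check like this belongs in the subroutine itself or further up the line is a design decision, but writing the test at least forces someone to make that decision deliberately.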

Perhaps the hardest code to test is anything that interacts with nature’s most perfect random input generator: humans. As anyone who has ever written code for keyboard input will know, mere mortals do not operate within the bounds of what programmers call logic. If the software developer is Mr Spock, then surely the rest of the user community is the entire crew of the Enterprise, including all the aliens they ever encountered, and they’re likely to do almost anything with your code, least of all what your design brief covered.

Your code might not be able to do anything useful when presented with ridiculous data from the user or the database, but it mustn’t add to the general mayhem around it, or compound the problem by adding its own spurious output to the situation. Good code is always able to handle the unexpected in a predictable and orderly manner, the same way that nothing fazed Mr Spock while the rest of the crew ran around like headless chooks.

I learned this lesson well, a long time ago, when coding software for university students. My boss liked to add his “student test” to the standard regime of software testing. This test consisted of randomly banging on the keyboard at any point where input was anticipated, to see if the code would break. When I first saw this, I thought I was working for a raving lunatic, but after fixing my code to withstand his nonsense, and then discovering that real students could still break the software, I began to understand that end-users are the only real sticking point in the whole computing paradigm.

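Something like the student test can even be automated. The sketch below is only an approximation, and parse_student_id is a made-up stand-in for whatever input routine you are hardening, but it captures the spirit of banging on the keyboard at random and insisting the code fail only in the ways it promised to:

```python
import random
import string


def parse_student_id(raw):
    """Hypothetical input handler: accept an 8-digit student number."""
    cleaned = raw.strip()
    if not (cleaned.isdigit() and len(cleaned) == 8):
        raise ValueError("student ID must be exactly 8 digits")
    return int(cleaned)


def student_test(handler, rounds=10_000):
    """Bang on the keyboard at random and see whether the handler survives."""
    keyboard = string.printable  # letters, digits, punctuation, whitespace
    for _ in range(rounds):
        raw = "".join(random.choices(keyboard, k=random.randint(0, 40)))
        try:
            handler(raw)
        except ValueError:
            pass  # the one failure mode the handler promised to raise
        # Anything else (TypeError, IndexError, ...) propagates and fails loudly.


if __name__ == "__main__":
    student_test(parse_student_id)
    print("Survived the student test.")
```
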
Coding for every possible contingency is obviously impossible, but you need to at least make an attempt. How many times have you seen obvious errors in commercially available code, such as a crash when a file is read-only? The operating system returns a perfectly valid message to advise your code exactly why it couldn’t do what you asked, but for some reason, more than a few programmers can’t conceive that a hapless user would mark their files read-only.

Of course, the user probably didn’t mark their files read-only themselves, because they most likely have no idea how to do that. They do know how to back up their data onto a CD, though, without knowing that files parked on a CD may carry their read-only status across when restored to the hard drive. You have to know that, or more correctly, you have to deal with whatever message comes back to your code from the operating system or platform, regardless of whether it is reasonable or logical.
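
A minimal sketch of that attitude, using an invented save_report function, might look something like this: name the conditions you can reasonably expect, and relay whatever else the platform reports instead of crashing.

```python
def save_report(path, text):
    """Write the report, translating whatever the OS throws back into
    something the user can act on instead of a crash."""
    try:
        with open(path, "w", encoding="utf-8") as fh:
            fh.write(text)
    except PermissionError:
        # e.g. a file restored from CD that came back marked read-only
        print(f"Cannot write {path}: the file is read-only or access is denied.")
    except OSError as err:
        # Anything else the platform reports: disk full, bad path, and so on.
        print(f"Cannot write {path}: {err.strerror or err}")
```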

You can’t assume that the file the user chooses to open with your program was ever intended to work with your program. A slender few operating systems store intimate knowledge of the creating application along with the data files, so for the majority, you’d better check that what was opened is something you can handle. When you’re testing your code, you’ll also need to populate a valid data source with total rubbish, to make sure you can handle that reality gracefully.
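
One way to do that, assuming purely for the sake of the example that your program’s data files happen to be JSON, is to treat a file that won’t parse and a file that parses into rubbish as the same polite refusal:

```python
import json


def load_settings(path):
    """Open a user-chosen file and confirm it really is ours before trusting it."""
    try:
        with open(path, "r", encoding="utf-8") as fh:
            data = json.load(fh)
    except (OSError, UnicodeDecodeError, json.JSONDecodeError) as err:
        raise ValueError(f"{path} is not a settings file we can read: {err}") from err

    # A syntactically valid file can still be total rubbish, so check the shape too.
    if not isinstance(data, dict) or "version" not in data:
        raise ValueError(f"{path} parsed, but it is not one of our settings files")
    return data
```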

Things get even trickier if you’re writing an interface to deal with something other than the operating system or the end-user. The world of communications interfaces is indeed mysterious and challenging. Not least of your problems is that you must presume that whoever wrote the embedded code, or perhaps just the documentation, for the device you’re dealing with was an incompetent moron on work experience, until proven otherwise.
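
Defensive decoding is the usual antidote. The frame layout and checksum below are assumptions invented for the example rather than any real device’s protocol, but the principle stands: verify every field the documentation promised before believing any of it.

```python
def decode_frame(frame: bytes) -> bytes:
    """Decode a hypothetical device frame: STX, length, payload, checksum.

    The framing and checksum here are illustrative assumptions, not a real
    device protocol. Trust nothing until every field checks out.
    """
    if len(frame) < 4 or frame[0] != 0x02:
        raise ValueError("missing or malformed frame header")
    length = frame[1]
    if len(frame) != length + 3:  # header + length byte + payload + checksum
        raise ValueError("frame length does not match the length byte")
    payload, checksum = frame[2:-1], frame[-1]
    if sum(payload) % 256 != checksum:
        raise ValueError("checksum mismatch: the device (or its manual) lied")
    return bytes(payload)


if __name__ == "__main__":
    good = bytes([0x02, 3, 10, 20, 30, (10 + 20 + 30) % 256])
    print(decode_frame(good))  # the payload, verified

    try:
        decode_frame(good[:-1] + b"\x00")  # corrupt the checksum
    except ValueError as err:
        print("Rejected:", err)
```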

You can tell from these examples and suggestions that this article has so far only looked at testing executable code. The other side to software testing is checking the code before it’s compiled, but there are now so many excellent code-checkers out there that you shouldn’t be getting any surprise messages from your compiler, and they can also eliminate a large number of logic errors before they become compiled bugs.

There’s also a case to be made for writing your code in a tester-friendly way, or at least including optional code to help report what’s happening as the data passes through the algorithms. This can be much easier to trace than a full-blown debug that often overloads the tester with information unrelated to the bug that’s being chased. And once a bug is found and corrected, you’re back to square one, testing to make sure that the fix didn’t create a fault elsewhere.
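
In Python, for instance, the standard logging module gives you that kind of optional reporting: the trace calls stay in the code and cost next to nothing until a tester switches them on. The apply_discount routine here is invented purely to have something worth tracing:

```python
import logging

log = logging.getLogger("pricing")


def apply_discount(order_total, loyalty_years):
    # Trace the data as it passes through, but only when someone asks for it.
    log.debug("apply_discount: total=%s, loyalty_years=%s", order_total, loyalty_years)
    rate = min(0.05 * loyalty_years, 0.25)
    log.debug("apply_discount: chose rate %.2f", rate)
    discounted = round(order_total * (1 - rate), 2)
    log.debug("apply_discount: returning %s", discounted)
    return discounted


if __name__ == "__main__":
    # The tester-friendly switch: flip DEBUG on to watch the algorithm work.
    logging.basicConfig(level=logging.DEBUG)
    apply_discount(200.0, 3)
```

Because the trace lines are skipped when the level is above DEBUG, they can stay in the shipped code without bothering anyone.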

Fortunately, there are plenty of automated testing tools out there these days, and so long as the tools themselves aren’t bug-ridden, your percentage of error-free code will improve remarkably by using them wisely. If you can afford to hire third-party code testing labs, or employ your own quality control team, you can still save money by sending them code that is as clean as possible, using readily available tools.

Your first stop ought to be http://www.mtsu.edu/~storm/, which advertises itself as the “first-stop” on the Web for software testing. If you haven’t been to this site you’ll wonder why it isn’t already in your favourites, or automatically linked into every compiler’s help button. After you’ve trawled through the Storm Web site, click on over to Brett Pettichord’s “software testing hotlist” Web site at http://www.io.com/~wazmo/qa/.

At Pettichord’s site you’ll find extensive links to articles on automated software testing, tools, strategy, risk analysis and documented notorious software bugs. You know how easy it is to make a mistake when coding, since you do it daily if not hourly, and you also know how easy it is to correct those mistakes. But without testing you might easily miss even the simplest errors, and the developers who write the code for really critical systems such as jet fighters and ICBMs are just as vulnerable to making mistakes.

However, their mistakes often receive more publicity, such as the coding error that caused the Voyager probe to fly straight into the Sun. Or the programmer who forgot to test for every possibility with the weapons systems on a new air force bomber, resulting in the ability to release the bombs while flying upside down. The results of your coding errors might seem trivial in comparison, but the actual errors of logic within the program were probably no more complicated.

Getting the right specifications to determine processes and outcomes is just as important as writing the code that achieves those goals, and it comes before you even begin testing. However, what the customer really wants is often not readily apparent, regardless of whether a specification exists, until the code actually starts to take shape, and that realisation has seen many software houses moving towards what is generically termed “agile development”. Rather than working in a vacuum with your precious specifications and only emerging at the end with a finished program, agile aficionados don’t blame the client for not knowing precisely what they need when the project begins.

Instead, the developer works with constant feedback, delivering customer-relevant chunks of code for approval and sign-off before gluing the whole lot together towards the end of the project. This focuses the developer on the business values that really matter to the client and ensures they are taken into account at every step of the coding process. Rewrites are no longer regarded as being due to recalcitrant customers; rather, they are seen as the collaborative outcome of both the developer and the client discovering what makes the software deliver real value to the business, and at the end of the day that should surely be the goal of any business project, not just those that involve code building.

When your code has been tested until it bleeds and is finally given the good coding seal of approval, there’s an often-forgotten test that is hard to perform without assistance from outside testing companies. That’s the “load test” to see what happens when a lot of people use the software at the same time, all connected in some way via a database or transaction stream. If your software is designed for thousands of standalone users who are not connected to each other, you could be forgiven for not conducting much in the way of load testing.
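
If you want a rough first approximation before calling in the professionals, even a crude sketch like the one below, which fires a couple of hundred simulated users at a throwaway SQLite file standing in for whatever your software actually shares, will surface locking and contention problems that single-user testing never touches:

```python
import concurrent.futures
import sqlite3
import time


def one_user(user_id):
    """Simulate one user hitting the shared data store, as the real crowd would."""
    conn = sqlite3.connect("loadtest.db", timeout=10)
    try:
        conn.execute("INSERT INTO orders (user_id, created) VALUES (?, ?)",
                     (user_id, time.time()))
        conn.commit()
    finally:
        conn.close()


if __name__ == "__main__":
    setup = sqlite3.connect("loadtest.db")
    setup.execute("CREATE TABLE IF NOT EXISTS orders (user_id INTEGER, created REAL)")
    setup.commit()
    setup.close()

    # Hit the same file from many threads at once; any locking error or lost
    # update that surfaces here is exactly what the load test exists to expose.
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(one_user, range(200)))
    print("All simulated users completed.")
```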

However, if your code will be deployed in a shared and connected environment, you skip load testing at your peril, and you will pay more to have the load testing conducted in situ after the inevitable crash. Your coding errors might not end up on the evening news unless you work for NASA, but dodging the testing phase can be just as catastrophic for you and your clients. Like most onerous tasks, there are good reasons for software testing, so, like the footwear company says, “Just do it.”