This post was written by TechRepublic member Ravi.
I got back to my desk after a short break and found several new e-mails in my inbox. One name instantly caught my eye. It was a colleague who had been assigned to test a script I had written. There were three e-mails from him, all within the space of about six minutes. The first e-mail just said “The script does not work.” The second one, which was time-stamped about two minutes later, read, “I had not made the script executable! But now it seems to produce garbage!” And the third exclaimed, “Ravi, it is all fine! I was looking in the wrong directory!”
Though I was annoyed by the haste with which the tester had fired off these messages, I did take a moment to chuckle at the series. “Testers do try to do their bit to break the monotony of the day,” I thought to myself.
However, jokes aside, if you’re a developer, I’m sure you’ve come across some testers who seem to define testing as “testing the developer.” Their dictionary seems to read:
Testing n. the art of testing the developer, or his patience
You send them a program, and they dutifully put it to the test, either by double-clicking on the executable or by typing its name on the command line and hitting Enter. And then comes the pat response, “Ravi, the program does not work!”
Sometimes there is more than that terse statement: a brief line or two describing what happens when the program executes, or perhaps a more elaborate explanation. There may even be a screenshot, which hopefully shows you what’s wrong or at least where to start looking. Rarely, though, is there any mention of effort to analyze the problem, eliminate some factors, and isolate its cause.
Of course, the error could be a comma, a colon, or some other simple thing that you missed while tracking the football score rather than your code. In such cases, I’m sure you are suitably contrite, apologize, and try to make amends. However, just as often, and probably more so, difficulty arises because the tester is watching something other than the program’s execution or its results.
I’ve come across many instances where simple actions on the tester’s part could have cleared the reported obstacle, or at least explained the apparent failure of the program: checking whether the system has a reasonable amount of free disk space, reading the messages that you, thoughtful developer that you are, made the program display when it hits trouble, or viewing the log files that are, after all, meant to be read by the user.
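To make that concrete, here is a minimal sketch of the kind of one-minute sanity check I have in mind, assuming a UNIX command line; the log file name is hypothetical, standing in for whatever your program actually writes:

    # How much free space is left on this file system?
    # (-h gives human-readable sizes; use df -k on older systems)
    df -h .

    # What did the program say on its way down?
    # (myprogram.log is a hypothetical name)
    tail -n 20 myprogram.log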
The tester depends on the developer to think of all possible errors and trap them. He also expects the developer to leave enough of a trail to track an error back to its cause. However, I believe it is at least part of the tester’s duty to identify the problem correctly and, if possible, its cause.
The tester should, at the very least, accurately report the problem that he or she encounters when testing. Often a program, or a set of programs, passes through several intermediate steps before it finishes. If the run fails, surely the tester can attempt to determine which steps completed before the abort. This not only gives the developer a clue about what could have gone wrong; it also speeds up the resolution of the error and, with it, the conclusion of the development and testing cycle.
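Developers can make this easier, of course. A driver script that announces each step as it runs gives the tester something concrete to report. A minimal sketch, with hypothetical step names standing in for the real programs:

    #!/bin/sh
    # Announce each step so a failure can be pinned to one of them.
    # The step names below are hypothetical placeholders.
    for step in extract_data transform_data load_data
    do
        echo "Starting step: $step"
        ./"$step" || { echo "FAILED at step: $step" >&2; exit 1; }
    done
    echo "All steps completed."

Armed with a line like “FAILED at step: transform_data”, even the tersest tester can file a report that actually narrows the search.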
Apart from the occasional lack of cooperation from the tester, developers might come across situations where the tester appears to lack some basic skills. A developer colleague once told me about his experience with an inept tester. The project being tested ran on UNIX. It consisted of a bunch of programs and UNIX shell scripts bundled inside a proprietary package, which ran them in sequence. When the tester ran into problems, he contacted my colleague for assistance, and my colleague suggested that the best option was perhaps to step through the package and run each program or script manually, in sequence. My colleague was flabbergasted when the tester responded, “How do I run the script manually?”
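For the record, the answer is short. Assuming an ordinary UNIX shell, and with step1.sh as a hypothetical stand-in for one of the package’s scripts, running it manually looks like this:

    # Hand the script to the shell directly...
    sh ./step1.sh

    # ...or make it executable and invoke it by name
    # (skipping chmod was the mistake in those three e-mails)
    chmod +x step1.sh
    ./step1.sh

    # Tracing each command as it executes shows where things go wrong
    sh -x ./step1.sh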
While some questions can be raised about the project manager’s choice of tester, it isn’t unreasonable to expect the tester to have a basic knowledge of the environment in which the program or project runs.
It may not seem like it, but I do sympathize with testers. Theirs is an unenviable job: running the same program again and again, looking for errors that may never show up. And testers do catch the developer’s blunders before a client does. But if they would pay some heed to the areas I have mentioned, life could be a lot easier for us all.