After Hours

Developers: Are testers testing your patience?

TechRepublic member Ravi vents his frustration with testers who don't read messages or log files, fail to isolate or fully communicate the problems they come across, or lack basic knowledge of the environment they're testing in.
This post was written by TechRepublic member Ravi.

I got back to my desk after a short break and found several new e-mails in my inbox. One name instantly caught my eye. It was a colleague who had been assigned to test a script I had written. There were three e-mails from him, all within the space of about six minutes. The first e-mail just said “The script does not work.” The second one, which was time-stamped about two minutes later, read, “I had not made the script executable! But now it seems to produce garbage!” And the third exclaimed, “Ravi, it is all fine! I was looking in the wrong directory!”

Though I was annoyed by the haste with which the tester had shot off these messages, I did take a moment to chuckle over the series of e-mails. “Testers do try to do their bit to break the monotony of the day,” I thought.

However, jokes aside, if you’re a developer, I’m sure you’ve come across some testers who seem to define testing as “testing the developer.” Their dictionary seems to read:

Testing n. the art of testing the developer, or his patience

You send them a program, and they dutifully put it to the test -- either by double-clicking on the executable or typing in the name of the executable on the command line and hitting Enter. And then comes the pat response, “Ravi, the program does not work!”

Sometimes there’s something more than just that terse statement -- a brief line or two describing what happens when the program executes, or perhaps an even more elaborate explanation. There may even be a screenshot, which hopefully shows you what’s wrong or at least where to start looking. Rarely, though, is there any mention of efforts made to analyze the problem, eliminate some factors, and isolate its cause.

Of course, the error could be a comma, a colon, or some other simple thing that you missed while you were tracking the football score rather than your code. In such cases, I’m sure you are suitably contrite, apologize, and try to make amends in some way. However, just as often, if not more so, the difficulty arises because the tester is watching something other than the program’s execution or its results.

I’ve come across many instances where simple actions on the tester’s part could have helped him or her get past the reported obstacle, or at least explain the cause of the apparent failure of the program. This includes checking whether the system has a reasonable amount of free disk space, reading the messages that you -- the thoughtful developer that you are -- made the program display when it runs into problems, or viewing the log files that are, after all, meant to be read by the user.
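As a rough illustration, a minute at the shell prompt covers all three of those checks. The file names below are only examples -- substitute whatever the program under test actually uses:

    df -h .                      # is there a reasonable amount of free disk space?
    ls -l ./report_gen.sh        # does the file exist, and is it executable?
    tail -n 50 ./report_gen.log  # read the log file the program writes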

The tester depends on the developer to think of all possible errors and trap them. He or she also expects the developer to create enough of a trail to track an error back to its cause. However, I believe it’s at least a part of the tester’s duty to attempt to identify the problem correctly -- and, if possible, its cause.

The tester should, at the very least, accurately report the problem that he or she comes across when testing. Often a program, or a set of programs, passes through several intermediate steps before concluding. If the program runs into an error, surely the tester can attempt to determine which steps were completed before it aborted. This not only gives the developer a clue about what could have gone wrong; it also speeds up the resolution of the error and, with it, the conclusion of the development and testing cycle.
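Leaving such a trail costs the developer very little. Here is a minimal sketch of a shell script that announces each step and stops at the first failure, so the tester can report exactly how far the run got; the step commands and log path are, of course, illustrative:

    #!/bin/sh
    LOG=/tmp/nightly_run.log   # example log location

    step() { echo "STEP: $1" | tee -a "$LOG"; }
    fail() { echo "ERROR in step: $1 -- see $LOG" >&2; exit 1; }

    step "extract input archive"; tar xf input.tar      || fail "extract input archive"
    step "load data";             ./load_data.sh        || fail "load data"
    step "generate report";       ./generate_report.sh  || fail "generate report"
    step "finished"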

Apart from the occasional lack of cooperation from the tester, developers might come across situations where the tester appears to lack some basic skills. A developer colleague once told me about his experience with an inept tester. The project being tested ran on UNIX. It consisted of a bunch of programs and UNIX shell scripts that were put together inside a proprietary package, which ran them in sequence. When the tester ran into problems, he contacted my colleague, who suggested that the best option was perhaps to step through the package, running each program or script manually, in sequence. My colleague was flabbergasted when the tester responded, “How do I run the script manually?”
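For the record, running a script manually takes only a line or two at the shell prompt -- something like this, with the file names again being examples:

    chmod +x ./step1.sh   # make the script executable, if it isn't already
    ./step1.sh            # run it directly and watch its output
    sh ./step2.sh         # or hand it to the shell explicitly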

While some questions can be raised about the project manager’s choice of tester, it isn’t unreasonable to expect that the tester has a basic knowledge of the environment on which the program or project runs.

It may not seem like it, but I do sympathize with testers. They have an unenviable job, running the same program again and again, looking for errors that may never show up. And testers do catch the developer’s blunders before a client does. But if they paid some heed to the areas I have mentioned, life could be a lot easier for us all.

About

Sonja Thompson has worked for TechRepublic since October of 1999. She is currently a Senior Editor and the host of the Smartphones and Tablets blogs.

12 comments
programmer07

Funny that I should find this article in my inbox today, of all days. I've been a tester, programming instructor, developer, and am currently working as a systems analyst (known in-house as a software designer). Given that, I do sympathize with testers...to some degree. I must say, the testers on our team are top-shelf. My first meeting of the day was actually organized by our QA team. They thought we should meet to focus on the struggles that arose during our last release cycle. (How awesome is that?) A comment made by a member of another team was brought up during our meeting. The comment: "I really admire the way the different sectors of your team get along and communicate." When I was a tester, I did feel that developers and designers really did HATE testers. This doesn't hold true for my current team. I think testers could benefit from continual training, just as developers and designers do. Currently, I work for a company that develops commercial software...so the testers are required to "know their stuff". I have, unfortunately, worked in an in-house IT department without the luxury of a professional QA team. That's the group I thought of when I read this article. I recall that when I first transferred to that company, I had to give a lesson on "writing well-documented bugs". :-P

Tony Hopkinson

Anything else is dumb. OK, they might point out to you as a developer (or even an analyst) that you've just been very dumb, but it's within the team, not right out there in public. I love my testers, even the occasionally irritating ones.

john.bluis

> While some questions can be raised about the project manager's choice of tester, it isn't unreasonable to expect that the tester has a basic knowledge of the environment on which the program or project runs.

Having taken on responsibilities as tester, developer and project manager, I've learned that it is a very bad idea to assume anything. The best developers and testers are the ones who know what they don't know and what others might not know, so it can be communicated to the entire team.

dstadler66

I've worn both hats, about 90% developer. Testers are mostly honest people trying to do their best, I think. The problem can be lack of communication. Sometimes testers think that their *opinion* is a requirement, but they learn. The rare exceptions can be interesting to work with. I was on a project with a dedicated tester. The requirements tended to shift at light speed and the tester either would not or could not rewrite his tests with the requirements shifts - he'd just fail the code. So I wound up rewriting the tests when the requirements shifted because it took less time than dealing with the bug reports which followed each requirement shift.

bankerit

I have no formal training in IT, though I have handled many aspects of IT after a long stint in business. Most often I have been on my own, having to handle all aspects of the process, from identifying/understanding a business requirement to providing an IT solution. A large part of my experience was gained in Africa, before the days of the internet or e-mail. Hence, perhaps, my expectations of a tester are higher than those of others. However, while it may be appropriate or even required to ask the user to 'check if the unit is plugged in' when writing a user manual for a microwave oven, I do believe a developer is entitled to expect a little more from a tester. Thanks for all the comments. ravi

OldER Mycroft

Issuing it to the wrong staff for testing. It doesn't say much for the developer if they hand out testing assignments to folk who clearly are not up to the task. ;)

programmer07

so, unfortunately, we are often forced to "hand out testing assignments to folk who clearly are not up to the task." When I worked as an in-house programmer, there was no team of Professional Testers. However, SOX dictated that the code be tested by someone other than the developers. I didn't get to "pick and choose" my testers.

Tony Hopkinson

and testing coverage is always welcome. But the reason you need someone other than the developer to test is that they make a very different set of assumptions. Someone completely unfamiliar with the software can find some very interesting issues, especially on the usability front.

Tony Hopkinson

It's when the devs or the testers get lazy or overworked that the real problems occur. We are both detail-oriented roles; sometimes, when you get deep into a problem, the environment it's in is a given. How many questions on TR have you seen from devs that involve two hundred lines of code and "I get a syntax error in this"? Rejecting a fix for defect A because, while they were setting up for the test, they noticed a bug in a completely different area B -- now that drives me f'ing mental...

Bizzo

Testers aren't testers unless they have a test plan; otherwise they're just users. I've never been (officially) a tester, but I have been a developer for a number of years and have written numerous test plans.

I've come to the conclusion that, no matter what level of intelligence or competence the tester has, if your application or script or whatever produces a result that isn't documented, they consider it a fault. And in my opinion, rightly so. If you have a script for a tester to test, you can't assume they'll know where the input should come from, nor where the output should go. That has to be defined. The developer knows that if a panel pops up mid-processing with a message, then it's just a message -- click OK, or it will go away on its own -- but they haven't documented it in this release, because "it's always been there". If the tester hasn't been told, how are they to know?

That's where test plans come in: if the results deviate from the test plan, then either the application is at fault, or the test plan is incomplete and shouldn't have been released.

It's been years since I've done any proper development, and hence testing, so maybe I'm a bit out of touch with what developers expect from testers. In my day (pick up pipe and shuffle slippers), I said in my day, unit testing was done by the developer, and unless the test plan was signed off, they hadn't finished.

Sonja Thompson

TR member Ravi submitted this guest post, and I'm sure that quite a few developers will be able to relate to his frustration. On the flip side, I've been a tester once or twice for the dev guys here at TR, and I really REALLY appreciate it when they're patient with me. As I said to Ravi in an e-mail, even though I'm no longer an infant, there are still times when I need to be spoon-fed! What's your stance on this topic?

KSoniat

I've never been in a big enough shop that we had designated testers. We would swap programs with each other and try to "break" each other's code. Since we knew how it felt, we were somewhat gentle, and any errors found were well documented as to the cause.
