It’s happened again. No matter how carefully you’ve tested each capability of the language, the library, or the framework you use. No matter how religiously you’ve built unit tests for each component. When you finally bring it all together into an application masterpiece, you get a failure you do not understand.

You try every debugging technique you know; you rewrite and simplify the most suspect passages; you stub out or eliminate entire components. Perhaps this helps you narrow the failure down to a particular region, but you still have no idea what’s going wrong or why. If you have the sources to the language or the library, you may get a lot further than if they’re proprietary, but perhaps you still lack the knowledge or the documentation to be able to make enough sense of the failure to solve the problem.

It’s time to get some help. You post questions on the fora, or you contact the author/vendor directly, but they can’t reproduce the problem. (Somehow, you knew that would happen.) They want you to send them a reproducing case. You direct them to your entire application, and the problem never gets resolved, because it’s just too much trouble. The end.

Okay, we don’t like that ending. How can we rewrite it? In the case of paid support we can stomp, yell, and escalate to force the vendor to spend time on the problem; but if it turns out to be too difficult to get the entire app running and debuggable, then they can still plead “unreproducible.” There is only so much that a vendor can do. Even if they stay on the problem, it could take a very long time to get to the bottom of it. Fortunately, there’s something we can do to help the vendor help us: It’s called the Small Test Program (STP).

“Whoa! Wait a minute! We already removed everything extraneous when we were attempting to debug this!” I hear you cry.

That may be true, but our goal then was to rule out other causes. You can almost always do more by shifting the goal to reducing the footprint of the test case. The two goals sound almost the same, and they overlap a lot, but they don’t cover entirely the same ground. In the first case, we were trying to do everything we could to help ourselves solve the problem. In the second, we want to do everything we can to help the developer solve the problem. That means we need to take the following steps:

  • Remove reliance on a specific configuration. No doubt you’ve customized your development environment with all sorts of shortcuts and conventions to save yourself time; every one of those costs time, though, for someone who isn’t familiar with them. You either need to remove those dependencies and create a more vanilla example, or provide an instant setup for them that won’t be invasive. For instance, if you need the user to set certain environment variables, provide a script that sets them and then launches the app (a sketch of such a launcher follows this list). Preferably, eliminate the dependency on environment variables altogether; they can add to the confusion by being set in more than one place, or by not getting exported properly.
  • Eliminate all custom or third-party components that you can. You should have already done this, but it becomes even more important when submitting a failure. External components attract the finger of blame — as they should, because they often cause unforeseen problems. Rule them out. Furthermore, if the external components require installation and setup, that delays the developer from being able to look at the problem. Developers often have trouble getting these components to work on their system, which is all wasted time if they didn’t really need them to begin with.
  • Reduce the number of user steps required. If you think that one or two runs through the test case will reveal the problem, then your name must be Pollyanna. If they have to run your test a thousand times, every minute of elapsed execution time costs two work days. It’s actually more than that because people are human — every time the developers have to restart a long, arduous set of steps, they need a pause to sigh and wonder where they went wrong in life.
  • Clearly document the steps required. I don’t know how many times I’ve received something claiming to be the steps to reproduce a problem that reads “Run the app.” Unless the app is so simple that it requires no setup or interaction, and the failure is so obvious that not even [insert archetypal clueless person here] could miss it, this instruction will fail to reproduce the problem. No matter how apparent it may seem, include every step: every setup command, the command to launch the app, and every input required. If you followed the previous steps, this shouldn’t be much.
  • Reduce the number of lines of code executed as much as possible. Maybe the entire program runs in two seconds, but if it executes 30,000 lines of code, then that’s at least 30,000 possible causes that the developer may have to rule out. Furthermore, it complicates debugging. If you can get the entire program down to “step, step, kaboom!” then you’re gold.
  • Include clear indications of failure. Don’t presume that the developer will recognize immediately that your Weenie Widget is 10 pixels too short — tell them so in the steps. Ideally, the application should scream out “Here’s where I’m failing!” when it’s run. Use assertions, or at least a printf or message box.
  • Include clear indications of success. How many times have I solved a problem presented by a test program, only to run into another failure immediately afterward? Did I fix a problem that they weren’t reporting, and now I’m seeing the one they meant? Usually, they know about the second one, but they just didn’t bother to prevent it since they had reproduced a failure with the first one. This is bad form. Ideally, you want your test program to be tailor-made for inclusion in a test suite so the same problem doesn’t get reintroduced. For that to happen, it needs to cross the finish line with flying colors. Let there be no doubt that it was successful. A skeleton that announces both failure and success unambiguously follows this list.
  • Test your test. Run through the test as if you were the developer assigned to work on it to make sure you didn’t forget anything. Don’t run it on your development system, because your environment might be set up in a way that the developer’s isn’t. Use a virtual machine with a vanilla configuration to run the test and make sure it fails in exactly the way you intended. It could save you a few email round trips and avoid giving the impression that you don’t know what you’re doing.
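To make the configuration point concrete, here is a minimal sketch of the kind of launcher script described in the first item, written in Python. Everything in it is hypothetical: the environment variable names, the widget_home directory, and the stp.py program are stand-ins for whatever your test case actually needs.

    #!/usr/bin/env python3
    """Hypothetical launcher: set up the environment the STP needs, then run it."""
    import os
    import subprocess
    import sys

    def main():
        env = os.environ.copy()
        # Every setting the STP depends on lives right here, in one place,
        # so the developer never has to hunt through shell profiles.
        env["WIDGET_HOME"] = os.path.abspath("widget_home")
        env["WIDGET_LOG_LEVEL"] = "debug"

        # Launch the test program with that environment and pass its exit
        # status straight through.
        result = subprocess.run([sys.executable, "stp.py"], env=env)
        sys.exit(result.returncode)

    if __name__ == "__main__":
        main()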
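In the same spirit, here is a sketch of what the failure and success indications might look like. The measure_widget_height() function and the expected value are placeholders for whatever call actually exhibits your failure; the shape is what matters: it names the failure precisely and declares success explicitly, so the same file can drop straight into a regression suite.

    #!/usr/bin/env python3
    """Skeleton STP: fails loudly, succeeds unambiguously."""
    import sys

    EXPECTED_HEIGHT = 50  # pixels; placeholder value

    def measure_widget_height():
        # Placeholder for the real call that reproduces the bug.
        return 40

    def main():
        actual = measure_widget_height()

        # Clear indication of failure: say exactly what is wrong and by how much.
        if actual != EXPECTED_HEIGHT:
            print(f"FAIL: widget height is {actual}px, expected {EXPECTED_HEIGHT}px",
                  file=sys.stderr)
            sys.exit(1)

        # Clear indication of success: leave no doubt that the run passed.
        print("PASS: widget height is correct")
        sys.exit(0)

    if __name__ == "__main__":
        main()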

Why you should create an STP

Why should you put the extra effort into creating an STP? It’s their bug, after all. Let them find it and fix it.

Most of my clients are software developers, so I’ve looked at this issue from both sides. I’ve been the recipient of hundreds (perhaps thousands) of failures to solve over the last 20 years, and I’ve had to submit my share of them to numerous software providers. I can tell you from my experience that more than anything else — more than whether you pay the vendor to support the product or how much, more than all the screaming and yelling you can muster, more than all the flattery you can lay on them, more than any reputation they may have for responding in a timely manner — the single most influential factor in determining how quickly the developers will resolve your problem is how clearly and concisely you’ve demonstrated the failure.

So, the next time you need to submit a problem report, remember the immortal words of Steve Martin: “Let’s get small.”

Note: This TechRepublic post was originally published on January 10, 2011.
