
Talking Shop: Launching a quality assurance program

Tips for starting an informal QA program with a tech support angle


Tech support professionals appreciate quality assurance (QA) specialists who do their jobs well. When products ship with fewer defects, that usually translates into fewer support calls.

The problem is that many small- to medium-size businesses can't afford to hire full-time QA analysts, so QA testing is often left up to the developers or not done at all. This week, I invite tech support specialists looking for a career change to consider specializing in QA. To get your feet wet, here are a couple of tips to help start an informal QA program with a tech support angle.

A list by any other name
The long-term goals of a QA department include monitoring product development, ensuring compliance with standards, and identifying issues with the product for the development team. Put another way, QA makes sure that products work correctly. What QA doesn't catch, the help desk cleans up.

In its simplest form, all your QA effort needs to get started is a list. If you maintain a detailed record of all trouble calls made to your help desk, you're off to a good start, because many of the items in that database are, by definition, QA issues.

You'll hear the QA list called various things, including:
  • Defect Tracking Database
  • Incident List
  • Issues Management List
  • Problem Report

There are plenty of enterprise-capable, off-the-shelf applications available to help you track incidents. If you can't afford one, shop the shareware sites or craft one yourself using templates in Microsoft Access. At the very least, your system should be able to:
  • Assign a unique number to each incident entered.
  • Save notes or attachments from anyone who works on the incident.
  • Assign an incident to the appropriate person on the team.
  • E-mail a reminder (or the whole incident report) when the incident is assigned.

Even though you'll uniquely number your incidents, you'll want the ability to identify and rank each one according to type and severity, and to track the status of each incident for reporting purposes. The fields should include:
  • Type: Use the Type field to logically group kinds of bugs. For instance, in testing a Web site, your type options might include Interface, Navigation, or Back-end database.
  • Severity: The Severity options should range from low ("no hurry, but eventually") through medium ("within 10 days, please") to high ("emergency fix requested"). In some shops, there is a special category of issue called red-light; when a red-light alarm goes out, people are expected to drop everything to respond.
  • Status: Depending on your needs, this field should show Open, Closed, Under Research, On Hold, or the like.
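With those three fields in place, simple reports fall out naturally. Here's a sketch, assuming incidents are stored as records with the type, severity, and status values described above (the data and key names are invented for the example):

```python
from collections import Counter

# Sample incidents carrying the Type, Severity, and Status fields
incidents = [
    {"type": "Interface", "severity": "low", "status": "Open"},
    {"type": "Navigation", "severity": "high", "status": "Open"},
    {"type": "Back-end database", "severity": "medium", "status": "Closed"},
    {"type": "Interface", "severity": "high", "status": "Open"},
]

# Rank open incidents so high-severity ("emergency fix requested")
# items surface first
rank = {"high": 0, "medium": 1, "low": 2}
open_incidents = sorted(
    (i for i in incidents if i["status"] == "Open"),
    key=lambda i: rank[i["severity"]],
)

# Count all incidents by type for a quick summary report
by_type = Counter(i["type"] for i in incidents)
```

Sorting by severity and grouping by type are exactly the kinds of questions management will ask once the database grows: which fires are burning now, and which part of the product generates the most trouble.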

Setting rules
There's only one problem with establishing a QA database for incident tracking: getting your coworkers to use it. Even if the application is readily available on a network drive, people will still want to report bugs or make change requests by sending you (or the developers) e-mail. That's one of the inefficient behaviors the QA database is supposed to correct.

If you don't "book" all the incidents into the system, there are a couple of immediate consequences. First, changes can fall through the cracks. Second, you invite finger-pointing: a customer can claim, "I sent you an e-mail about that bug!" while a developer can say, "Hey, if it isn't in the incident database, I'm not fixing it."

On the other hand, if the incident database is an Access table on your desktop, e-mail requests may be the only way to get bug reports or change requests into your system. No matter which system you use, it won't do you much good if the testing and development teams don't use it.

Read all about it
Of course, establishing your company's first incident database doesn't automatically qualify you as a QA analyst. However, as the database grows and the company realizes the benefits of organizing its quality control efforts, someone may need to manage that database on a full-time basis, and it could be you.

To learn more about QA initiatives, check out these articles on TechRepublic: Lamont Adams' "The case for informal peer review in a development organization," Tim Landgrave's "The rebirth of quality assurance," and Jerry Loza's "Ideas for designing and conducting code reviews."

What's your QA quotient?
Share your QA success stories with fellow TechRepublic members. Post a note below or write to Jeff Davis.

 
