
Employ smoke tests at the start of your testing process

The most basic tests any software developer must run are smoke tests, which are a set of written, non-exhaustive tests that only deal with the most functional aspects of a software application or process.

 

Software development often happens in environments where the business is not primarily focused on marketing and delivering software products. A small insurance company, for example, is focused on selling insurance to customers; in an environment like this, "software shop" practices are easily discarded in favor of something along the lines of "code it now, fix it later." Thorough testing may barely exist and is considered only after it becomes clear (usually after an application crash) that comprehensive testing might be needed. This is putting the cart before the horse, and trying to get a testing strategy employed after the software has been delivered to its intended audience can be an overwhelming challenge.

Testing is a required step in any software development practice. There are different degrees of testing to employ, depending on the results you need. The most common tests are covered below.

  • Unit: Specific, lower-level tests run to ensure the behavior of an application is correct. They determine whether individual "units" of software are fit for use (a minimal example follows this list).
  • User acceptance testing (UAT): This ensures software meets the customer requirements and usually involves the customer doing the testing.
  • Load: A system "stress" test to ensure high volumes of transactions do not cause a system to malfunction. This is also known as performance testing. (Soak tests are load tests run over an extended period of time.)
  • Regression: Repeatable tests that validate the feature-by-feature functionality of a software product.
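
For instance, here is a minimal, pytest-style sketch of a unit test; the calculate_premium function is a made-up illustration, not code from any real application:

```python
# A hypothetical premium calculation "unit" and tests that pin down its behavior.
def calculate_premium(base_rate: float, risk_factor: float) -> float:
    """Return the annual premium for a given base rate and risk factor."""
    return round(base_rate * risk_factor, 2)


def test_calculate_premium_applies_risk_factor():
    # A unit test exercises one small piece of behavior in isolation.
    assert calculate_premium(100.0, 1.5) == 150.0


def test_calculate_premium_rounds_to_cents():
    assert calculate_premium(99.99, 1.333) == 133.29
```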

Running tests during a build process is also a key ingredient of continuous integration (CI). Of these, the most basic tests you must run are smoke tests.

What are smoke tests?

Smoke tests are a set of written, non-exhaustive tests that deal with the most functional aspects of a software application or process; smoke tests do not handle anything at a granular level. The goal is to let you know whether your code is toxic on a higher and more basic level. Does it build? Does everything compile? Did I get my file where I expected it?
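
As a minimal sketch, a smoke test at this level might do nothing more than run the build and confirm the expected output landed where it should; the build command and output path below are hypothetical placeholders, not a prescription:

```python
import os
import subprocess
import sys

# Hypothetical build command and expected artifact; substitute your own.
BUILD_COMMAND = ["msbuild", "PolicyApp.sln", "/p:Configuration=Release"]
EXPECTED_OUTPUT = os.path.join("bin", "Release", "PolicyApp.exe")

# Does it build? Does everything compile?
try:
    build = subprocess.run(BUILD_COMMAND, capture_output=True)
except FileNotFoundError:
    print("SMOKE TEST FAILED: build tool not found")
    sys.exit(1)

if build.returncode != 0:
    print("SMOKE TEST FAILED: the build did not complete cleanly")
    sys.exit(1)

# Did I get my file where I expected it?
if not os.path.isfile(EXPECTED_OUTPUT):
    print("SMOKE TEST FAILED: expected output not found at " + EXPECTED_OUTPUT)
    sys.exit(1)

print("SMOKE TEST PASSED")
```

A non-zero exit code is the whole point: a build server can treat it as a stop sign before any deeper testing begins.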

The term originated in hardware testing, where a device would be plugged in and run for a period of time. If it didn't smoke, it passed. If it smoked or caught fire, it failed. In a figurative sense, the purpose of smoke testing software is exactly the same.

A common situation

Enterprise-level technology groups can suffer from having been around for years and from having bought too many solutions from too many vendors to meet their business needs. This is not necessarily the result of a CIO's bad decisions, but rather the nature of a company that grows and must adapt to the new demands of that growth. In smaller companies, one or two main technologies may play host to most of the applications being developed, which makes it easier to move to newer technologies. A large company that maintains multiple technologies while trying to move into the 21st century, however, is the norm and the definition of heterogeneous.

If a company has been around since VB6 was the coolest thing on the planet, chances are it has moved on to something more like .NET 3.5 or is trying out .NET 4.0. If the company has grown quickly, upgrading a sophisticated VB6 COM application is highly improbable given new user deadlines; such a task requires a rewrite in a language designed for this century. If a legacy application is stable, decisions are often made in the vein of "if it ain't broke, don't fix it" to keep it alive. Guess what? Keeping it alive creates a new fracture in an already semi-volatile period of growth.

From a testing standpoint, this can mean there is no straightforward way to run a thorough end-to-end test of any process that bridges different solutions across multiple technologies. Translation: There's a high probability that no mature testing framework exists to accommodate them. Eyeball checks of data result sets, gut feelings, and prayers are sometimes employed during major software releases, but these unscientific methods are highly unreliable, especially as the complexity of the application being released increases.

A real-world scenario

For example, a mainframe serves as a piecemeal system's backbone and as the system's originating data source. The data within the mainframe must be pulled out and loaded into a relational database for easier manipulation, and those relational tables are then used to display information to a user through a front-end application.

In this scenario, there are four general steps involved in the overall process:

  1. Files must be generated from the mainframe and placed in a location where another application can see and process them.
  2. Processing jobs (also known as migration) import the mainframe data files into a series of relational database base tables for further processing.
  3. Secondary (application-level) jobs process the updates to those base tables, glossing up the data and turning it into information for the applications that use it.
  4. Front-end applications display the information and allow it to be manipulated as dictated by business demands.

So... how do you begin to test this mess? (Smoke testing is not setting a match to it and walking away, as tempting as that might be.) The easiest way is to take each major piece of the pie and start sectioning it off. Begin by listing the high-level activities that must be checked before the next piece can complete. (I find it's important to do this exercise with a marker on a whiteboard first.)

In this scenario, the verbs serve as key-ins to where the smoke tests should be (see Table A).

Table A

Action (Verb) | Smoke Test | Metric
Files must be generated from the mainframe and placed in a location where some other application can see them. | Did I receive a file where I thought it should be? | Y/N
Migration jobs import the files into base tables in a relational database. | Did each job run successfully? | Y/N (each job, overall migration)
Application-level jobs take base table information and gloss it up, making it more app friendly. | Did each "gloss" job run successfully? | Y/N (each job, overall glossing)
Front-ends see no change in behavior from new data. | Can I load the applications successfully? | Y/N

In each of these tests, no actual validation of the data occurs to make sure the data is not junk. If a job runs, it means the data formats were correct enough to process — that's it. In this example, these are one-shot tests that either pass or fail. This means the smoke tests are not run over and over again to verify something can happen.
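
Pulling Table A together into a script, a minimal sketch of these one-shot checks might look like the following; the extract file path, job scripts, and front-end URL are all hypothetical placeholders:

```python
import os
import subprocess
import sys
from urllib.request import urlopen

# Hypothetical locations, jobs, and URL; substitute your own.
MAINFRAME_DROP_FILE = "/data/inbound/mainframe_extract.dat"
MIGRATION_JOBS = ["migrate_policies.sh", "migrate_claims.sh"]
GLOSS_JOBS = ["gloss_policies.sh", "gloss_claims.sh"]
FRONT_END_URL = "http://intranet.example.com/policy-app/"

failures = []

# 1. Did I receive a file where I thought it should be?
if not os.path.isfile(MAINFRAME_DROP_FILE):
    failures.append("Missing mainframe extract: " + MAINFRAME_DROP_FILE)

# 2. and 3. Did each migration and "gloss" job run successfully?
for job in MIGRATION_JOBS + GLOSS_JOBS:
    result = subprocess.run(["/bin/sh", job], capture_output=True)
    if result.returncode != 0:
        failures.append("Job failed: %s (exit code %d)" % (job, result.returncode))

# 4. Can I load the applications successfully?
try:
    with urlopen(FRONT_END_URL, timeout=10) as response:
        if response.status != 200:
            failures.append("Front end returned HTTP %d" % response.status)
except OSError as exc:
    failures.append("Front end unreachable: " + str(exc))

# One-shot pass/fail: report and exit non-zero so a build or release can halt.
if failures:
    print("SMOKE TEST FAILED:")
    for failure in failures:
        print("  - " + failure)
    sys.exit(1)

print("SMOKE TEST PASSED")
```

Nothing here inspects the data itself; the script only answers the yes/no questions from the table and stops the line if any answer is no.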

Conclusion

If your company has a technology department or division where software is being crafted, you are in the software business. Getting an idea of how to employ testing is a crucial, required step in making that software functional. If you don't know where to start, or the train has already left the station, begin by asking basic, high-level questions about how your software should behave and write smoke tests for those activities. Then you can tackle more intricate levels of testing once it becomes clear how each area of your application should perform.
