
Application Development: Unit testing your ColdFusion code

Testing code is not the most exciting activity for a programmer, but it is essential to a professionally developed application. As this example shows, testing does not have to be a chore. In fact, it should be an integral part of development.


Every developer knows they should test their code. But many don't, because testing isn't as satisfying as banging out new code, and it isn't as gratifying as solving new problems. Sometimes it's simply not in the schedule. But the simple fact is that testing early and often will greatly reduce your stress levels when deployment time comes around.

Moving past those who don't test at all, we come to a group of developers who do test. And by "test," I mean running the program a few times and looking for errors. While this is better than nothing, it certainly isn't rigorous or repeatable.

Good testing means you'll deliver a higher-quality product to your customer, whether that customer is an external client or an internal corporate department. If that's not enough incentive, good tests make maintenance much easier, because whenever you change, add, or remove code, you run the risk of introducing bugs. It is nice to be able to run a set of tests as soon as you change something and find out immediately whether you've broken anything obvious.

In this article, I'd like to try to convince the non-testers that making time for organized and deliberate tests is a straight line to fewer bugs and easier maintenance, and to show the casual testers that it's a very short leap to more robust and repeatable testing.

Benefits of being small and focused
Part of writing good tests is writing code that is easy to test. This may sound absurdly obvious, but it is actually something that many developers overlook. Creating small pieces of code with a specific purpose makes testing that code much easier.

Show me a code file that contains three database queries and an HTML table to display the query data, and I'll show you a fuse (to borrow the Fusebox term for a code file) that's difficult to test because there's too much going on. To test this fuse, you have to test each query, and then test the display code. If the test results in an error, what is the cause? Bad SQL? Couldn't connect to the database? Missing a closing table cell? Referencing an invalid query column? It could be any or all of these.

A better approach is to create separate, small, and specific files: three query files, one for each query, and one display file for the output. Now you can test each of these on its own, making it much easier to nail down any errors. The true purpose of a test file (a test harness) is to create a fully isolated environment in which to execute and test each part of your code.
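To make this concrete, here is a minimal sketch of what one of those query fuses might look like. The file name, query name, column names, request.dsn datasource variable, and attributes.productID parameter are all hypothetical:

<!--- qry_getproduct.cfm: retrieves a single product record; nothing else --->
<cfquery name="product" datasource="#request.dsn#">
  SELECT productID, productName, price, quantity, hasSpecialOptions
  FROM products
  WHERE productID = <cfqueryparam value="#attributes.productID#" cfsqltype="CF_SQL_INTEGER">
</cfquery>

A file this small has only one job, so when a test against it fails, there's only one place to look.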

I code using the Fusebox methodology (www.fusebox.org), which involves creating (you guessed it) small files dedicated to a specific purpose. I recommend Fusebox highly, but whatever your coding approach, try to make your files as tight and specific as possible.

Basic testing
Let's look at a simple display file. This file takes a query result set and outputs the data. It also makes a few decisions on what to output based on the values of some of the data fields. You can see the file, named dsp_productdetails.cfm, in Listing A.
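Listing A isn't reproduced here, but based on that description, the display fuse might look something like this sketch. The query name (product) and column names (quantity, hasSpecialOptions, and so on) are assumptions for illustration:

<!--- dsp_productdetails.cfm: displays one product record --->
<cfoutput query="product">
  <h2>#productName#</h2>
  <p>Price: #dollarFormat(price)#</p>
  <cfif quantity EQ 0>
    <p>This product is out of stock.</p>
  <cfelseif quantity LT 10>
    <p>Low quantity in stock: only #quantity# left.</p>
  </cfif>
  <cfif hasSpecialOptions>
    <p>Special options are available for this product.</p>
  </cfif>
</cfoutput>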

Let's begin with a query where there's plenty of the product in stock and there are no special options. You create another file and call it tst_dsp_productdetails.cfm. Of course, you can call your test file anything you like, but I find that naming it the same as your code file except with a tst_ prefix makes working with the files easier. The test harness file, tst_dsp_productdetails.cfm, is shown in Listing B.
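Again, the listing itself isn't reproduced here, but a minimal harness along these lines would fit the description that follows (using the same hypothetical query and column names as above):

<!--- tst_dsp_productdetails.cfm: test harness for the display fuse --->
<h3>Test: plenty in stock, no special options</h3>

<cf_querysim>
product
productID, productName, price, quantity, hasSpecialOptions
1|Widget|19.95|500|0
</cf_querysim>

<cftry>
  <cfinclude template="dsp_productdetails.cfm">
  <cfcatch type="Any">
    <p>Test FAILED: <cfoutput>#cfcatch.message#</cfoutput></p>
  </cfcatch>
</cftry>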

First, I label the test so when it outputs to the screen I know what test conditions I'm using. Then I set up an artificial query result set for the display file to use. Once again, you want to test the file in isolation, away from any other code in the application, and that means no real queries. The cf_querysim tag is used in place of a real query. You can also build up a fake result set using CFML's native query functions, but I find cf_querysim much easier to deal with, as I explained in a previous article.

Next, I include the code file that I want to test, and wrap it in a cftry/cfcatch block to capture any errors. That's all there is to it. If I run the test harness, I'll see the standard output, just as expected. The test passed so I'm done, right?

No. So far I've tested just one of the logic pathways for the display fuse. You must be thorough and test as many pathways as you can. Listing C is another entry in my test harness file; this one tests the display when the query returns a product with a limited quantity.
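The new entry could be identical to the first except for its label and its simulated data, perhaps something like this:

<h3>Test: limited quantity in stock</h3>

<cf_querysim>
product
productID, productName, price, quantity, hasSpecialOptions
1|Widget|19.95|3|0
</cf_querysim>

<!--- followed by the same cftry/cfinclude block as before --->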

This time the output is almost the same, but now it includes a "low quantity in stock" message along with the product data, as expected. Next, let's add a test to check a product with zero quantity, as shown in Listing D.
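Once again, only the simulated data needs to change:

<h3>Test: product out of stock</h3>

<cf_querysim>
product
productID, productName, price, quantity, hasSpecialOptions
1|Widget|19.95|0|0
</cf_querysim>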

You see the "product out of stock" message. The remaining logic path is to test the display for a product with special options, as shown in Listing E.
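For this last pathway, the hypothetical hasSpecialOptions flag is the value that changes:

<h3>Test: product with special options</h3>

<cf_querysim>
product
productID, productName, price, quantity, hasSpecialOptions
1|Widget|19.95|500|1
</cf_querysim>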

Run this and you'll see the output for a product with special options. So far, so good: you've tested the main logic pathways. If you want to be extra thorough, you could also test for an empty query, or for the situation where the query doesn't exist. Depending on what you expect to happen in those cases, you can modify your display file accordingly.

More robust testing
You could go even further with your tests. What about storing the output that you expect the code file to generate and using that as a baseline? Then, when you run the test, you can compare the test output against the baseline to see if they match (Listing F). If they do, you know things are good. If they don't, you know you've got problems.

If the baseline comparison fails, the test harness throws an error of type "Baseline Match Failure". This approach may border on obsessive, but it goes that much further toward verifying that your code behaves exactly as expected.
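Listing F isn't reproduced here, but the comparison might be sketched like this. It assumes the expected output was saved ahead of time to a text file; the baselines directory and file name are hypothetical:

<!--- capture the fuse's output instead of sending it to the browser --->
<cfsavecontent variable="testOutput">
  <cfinclude template="dsp_productdetails.cfm">
</cfsavecontent>

<!--- read the stored baseline and compare --->
<cffile action="read"
        file="#expandPath('baselines/dsp_productdetails_instock.txt')#"
        variable="baseline">

<cfif trim(testOutput) NEQ trim(baseline)>
  <cfthrow type="Baseline Match Failure"
           message="Output of dsp_productdetails.cfm does not match its stored baseline">
</cfif>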

Now consider an application with 500 code files and 500 corresponding test harnesses. Clearly, running them all, one at a time, would drive anyone nuts. But the process can be automated. You can probably imagine code that recurses through your directory tree and runs each test harness on its own. If there are no errors in a harness, it is reported as "passed". If there are errors, it is reported as "failed", along with the error information. With the press of one button and a little waiting, you can see test results for your entire application. You can even customize the process further: perhaps running only the tests in one directory, or rerunning only the tests that failed last time.
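A bare-bones runner, limited to a single directory to keep the sketch short, might look like this. It assumes every harness follows the tst_ naming convention and rethrows (rather than displays) any errors it catches, so that failures surface here:

<!--- run_tests.cfm: runs every test harness in this directory --->
<cfdirectory action="list" directory="#expandPath('.')#"
             filter="tst_*.cfm" name="testFiles">

<cfloop query="testFiles">
  <cftry>
    <!--- suppress the harness's own output; we only want pass/fail --->
    <cfsavecontent variable="harnessOutput">
      <cfinclude template="#testFiles.name#">
    </cfsavecontent>
    <cfoutput><p>#testFiles.name#: passed</p></cfoutput>
    <cfcatch type="Any">
      <cfoutput><p>#testFiles.name#: FAILED (#cfcatch.message#)</p></cfoutput>
    </cfcatch>
  </cftry>
</cfloop>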

A good suite
I hope this article has helped explain why I think testing is so important. I also hope I've shown that with just a little effort, you can write tests that are both helpful and easy to run. A good suite of tests will make your development and maintenance easier, and allow you to deliver a higher quality product to your customer.
