Testing a complex new system, whether it's composed primarily of various technologies, enhanced processes, or both, seems like something "beneath" the average IT leader. There are rafts of project management specialists, testing packages, and consulting houses dedicated to various aspects of testing, and one could almost be forgiven for assuming that this is an area best left to the experts. While many an IT leader has had minimal involvement in testing and been successful, in most dramatic project failures, stronger leadership involvement in testing sits at the top of the "if only we had" list.
When a highly visible system fails outright or requires complex and costly rework to limp through its first few quarters, the blame will fall squarely on IT leadership, not on the person or software in the trenches. Rather than effectively outsourcing your testing efforts to subordinates and hoping for the best, or wondering "what the heck happened" as your CFO hounds you over millions in lost or missing revenue, make IT leadership's involvement and input a deliberate part of the testing process.
Trust your gut
A problem with most failed or difficult implementations is that they build and/or test the wrong thing, particularly when technology is involved. Many dramatic ERP failures at their core were really business reengineering projects that were left to the techies since software seemed to be a major piece of the project, when it should have been the grease in a well-oiled machine. As an IT leader, you should be familiar enough with the various functions and components of your business to do a rapid "sniff test" on all your major projects and get a sense of whether there is appropriate business involvement or if there is a technology ship adrift in the ocean without proper business "steering."
Seek to identify a handful of key processes or scenarios and gather people from outside IT to exercise these processes in your new system and validate the results. While testing is a bit late in the game to be seeking this involvement for the first time, better now than after you flick the proverbial switch on go-live day, when what amounts to the same "testing" happens at much higher stakes.
Another key failure of what superficially seems like rigorous testing is not using legitimate data and business scenarios. Sure, it's time-consuming and expensive to stage a dry run of loading all your "real" data, but it's far more costly to discover you can't ship and invoice products hours after you go live. If you're implementing an order entry system, your team should be able to pick up a handful of real orders and successfully enter them in your test system. If you're releasing a new website, your team should be able to transition live content to the new platform, interact with relevant interfaces, and subject the test system to realistic loads.
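The "replay a handful of real orders" check above can be automated in a few lines. This is a minimal sketch, not a real integration: `enter_order` stands in for whatever API or UI automation your test environment actually exposes, and the order data and stub below are invented for illustration.

```python
# Hypothetical sketch: replay real orders against a test system and
# verify the totals match the source of record. `enter_order` is an
# assumed callable standing in for your test system's entry point.

def replay_orders(orders, enter_order, tolerance=0.01):
    """Enter each real order into the test system and compare totals.

    Returns a list of (order_id, expected, actual) for any mismatch
    larger than `tolerance`.
    """
    mismatches = []
    for order in orders:
        actual_total = enter_order(order)  # total computed by the test system
        if abs(actual_total - order["total"]) > tolerance:
            mismatches.append((order["id"], order["total"], actual_total))
    return mismatches

# Example with a stubbed test system that mis-prices one order:
real_orders = [
    {"id": "SO-1001", "total": 1250.00},
    {"id": "SO-1002", "total": 87.50},
]

def stub_enter_order(order):
    # Pretend the new system drops a line-item discount on SO-1002.
    return order["total"] if order["id"] != "SO-1002" else 97.50

print(replay_orders(real_orders, stub_enter_order))
# -> [('SO-1002', 87.5, 97.5)]
```

Even a crude harness like this surfaces the "can't invoice what we shipped" class of defect before go-live rather than after.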
Even the most low-budget theater production stages full dress rehearsals, with everything present except the audience. Your financial stakes are probably much higher, and you simply cannot afford not to do the same.
Find your metrics
As your team performs a multitude of realistic test scenarios, with real data and deep business involvement, you'll likely identify a handful of metrics and indicators that show how the system is performing. Perhaps there is a critical time requirement for a process or a financial indicator (revenue is usually a good one) that everyone checks to verify success. Capture these metrics and monitor them closely as you go live. From your experience testing real data, you should have a ballpark idea of what volumes to expect on go-live day, and while you may not meet them exactly, if you are outside a 30-40% range, that is an excellent indicator that something may be amiss.
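That baseline comparison is simple enough to script. A minimal sketch, assuming you have captured hourly volumes during testing; the metric names, baseline figures, and the 35% threshold (inside the rough 30-40% band above) are all illustrative:

```python
# Hypothetical sketch: flag go-live metrics that drift well outside the
# volumes observed during realistic testing. Baseline figures are made up.

def flag_outliers(baseline, live, threshold=0.35):
    """Return metrics whose live value deviates from the test baseline
    by more than `threshold` (as a fraction of the baseline)."""
    flagged = {}
    for name, expected in baseline.items():
        actual = live.get(name, 0)
        deviation = abs(actual - expected) / expected
        if deviation > threshold:
            flagged[name] = (expected, actual, round(deviation, 2))
    return flagged

test_baseline = {
    "orders_per_hour": 400,
    "invoices_per_hour": 380,
    "revenue_per_hour": 52000,
}
go_live_day_one = {
    "orders_per_hour": 430,     # within range
    "invoices_per_hour": 110,   # cratered -- something downstream is amiss
    "revenue_per_hour": 15000,
}

print(flag_outliers(test_baseline, go_live_day_one))
```

Here orders look healthy while invoicing and revenue have collapsed, exactly the early-warning pattern that tells leadership to intervene in the first hours rather than the first quarter.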
Testing is exacting and excruciating work and, like an insurance policy, is a tempting area to cut costs until you need it most. Many IT projects that affect critical business functions like supply chain, invoicing, and the like are akin to performing open-heart surgery on your business, and failure can have very real and dire financial consequences. While IT leaders need not be poring over test scripts and installing load testing applications, having a working understanding of what processes and data are being exercised and ensuring your testing actually emulates real-world scenarios may save you hours of heartache and millions of dollars.
Patrick Gray works for a global Fortune 500 consulting and IT services company and is the author of Breakthrough IT: Supercharging Organizational Value through Technology as well as the companion e-book The Breakthrough CIO's Companion. He has spent over a decade providing strategy consulting services to Fortune 500 and 1000 companies. Patrick can be reached at firstname.lastname@example.org, and you can follow his blog at www.itbswatch.com. All opinions are his and may not represent those of his employer.