
10+ priorities for testing critical systems

Testing is essential to make sure that systems function as expected, but the process can be complex and overwhelming. Rick Vanover looks at some testing strategies that will help you focus on what's important and make your installations and upgrades go smoothly.

IT folks grumble at arduous inventories of test plans and scenarios, but the fact is that testing should be made a priority for critical systems. So what can we do to make testing effective and thorough? Here are 10-plus things to stress in your test environments to avoid surprises and produce credible test results.

#1: Make your test environment represent the live environment

Having a test environment that's quite different from the live environment is not effective. A good example is a Windows Active Directory domain. An empty test domain with no real configuration does not represent a production domain with highly customized Group Policy settings, complex DNS configuration, multiple domain trusts, many group memberships, and a large number of account objects. Virtualization is a good option here: You can promote a domain controller on a virtual machine, move it to an isolated network for the testing, and then remove it from the live domain.

#2: Have multiple disciplines of the test achieve the same result

In outlining the steps of a test, identify components that can be tested two different ways to obtain the same result. For example, if you are considering moving to a new version of Windows Active Directory, perform both an Active Directory authoritative restore and a system backup restore within the test environment to ensure that each brings the system back to a workable state. This can be beneficial if, in the real world, one mechanism fails. Another strategy is to have one person prepare the test plan and another person implement it, to ensure that the plan is clear and that nothing is taken for granted or assumed in the testing.

#3: Test the rollback!

For test plans that revolve around an upgrade or enhancement to an existing system, you should test the reversion process. You can test this in multiple ways, depending on the context of the upgrade. Some strategies include removing a hard drive in a RAID 1 configuration (the removed drive remains unchanged), a full restore from backup, the upgrade's uninstall functionality, database backups, or simply using new equipment only, with the current system turned off during the upgrade.
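As one small illustration of the database-backup strategy, the sketch below replays a backup SQL dump into a throwaway SQLite database and checks that the expected tables came back. The file name and table names are made up for illustration; a real rollback test would drive your platform's own restore tooling against a scratch server, never production.

```python
# Hypothetical sketch: verify that a database backup restores cleanly by
# replaying it into a throwaway SQLite database and checking for expected
# tables. backup.sql and the table names are illustrative placeholders.
import sqlite3

def verify_restore(backup_sql_path: str, expected_tables: set) -> bool:
    scratch = sqlite3.connect(":memory:")      # throwaway target, never production
    with open(backup_sql_path) as dump:
        scratch.executescript(dump.read())     # replay the backup
    found = {row[0] for row in scratch.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    missing = expected_tables - found
    if missing:
        print(f"Restore incomplete, missing tables: {sorted(missing)}")
    return not missing

if __name__ == "__main__":
    ok = verify_restore("backup.sql", {"orders", "customers"})
    print("Backup verified" if ok else "Backup FAILED verification")
```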

#4: Don't proceed without the testing

If situations arise that cut into the test phase, take a stance that the testing is an important part of the overall project. Depending on the situation, this may be a difficult case to make or it may have political consequences. If it boils down to someone else deciding you can't do the testing but you'll have to take the blame if it does not work, raise the red flag!

#5: Remember the goal of testing: No surprises during a go-live

Surprises are the last thing you want during a go-live. Thorough (representative) testing helps prevent "learning experiences" once the new system is in use. Of course, testing can't be 100 percent identical to the actual environment, so there is always the risk of something new arising. For example, if you test a new version of a software product with a simplified security model in which every user has more permissions than required, the security model will need adjustments to meet operational requirements when you go live. This can cost valuable time and introduce risk. Thorough testing would produce a documented procedure for the security configuration, or scripts that configure the live system the same way the test environment was configured.
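One way to keep the two environments in step is to drive the security configuration from the same versioned file in both places instead of clicking through consoles twice. The sketch below only illustrates that idea; the file layout, group names, and apply_permission() stub are invented, and the stub would be replaced by the real directory, database, or vendor call.

```python
# Hypothetical sketch: apply one versioned security model to both the test and
# live systems. The config file layout and apply_permission() stub are
# illustrative placeholders for the real administrative call.
import json
import sys

def apply_permission(group: str, resource: str, level: str) -> None:
    # Stand-in for the real call (directory API, database GRANT, vendor CLI, ...).
    print(f"GRANT {level:10s} on {resource:20s} to {group}")

def main(config_path: str) -> None:
    with open(config_path) as f:
        # Expected shape: {"Finance-RO": [{"resource": "GL", "level": "read"}], ...}
        model = json.load(f)
    for group, grants in model.items():
        for grant in grants:
            apply_permission(group, grant["resource"], grant["level"])

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "security-model.json")
```

Because the same file and script run in test and in production, the go-live security configuration matches what was actually tested.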

#6: Use pre-existing resources and testing standards

We may not all be certified testers, but we can leverage existing resources to deliver a credible test for our IT environments. Some good starting points include the Standard Performance Evaluation Corporation and a quick Internet search for sample test plans. If you do not have rigid requirements for testing, you will have some freedom in developing your test plan. Be sure to give the plan careful thought and make it comprehensive. The Sara Ford blog on MSDN gives a good perspective on how to develop a test specification, which is slightly different from a test plan.

#7: Assume nothing

Sure, your testing will provide an exercise in the rudimentary tasks associated with your environment -- but some small pieces of functionality may be affected by an upgrade. Depending on your project, this can include extra options, permissions changes, and log file changes. This can come into play if you have built monitoring around a system's log file behavior. If there is a slight change in the way the log is written after an upgrade, the monitoring system may need a review. By going through the steps, even for the elementary tasks, you reduce the risk of little things getting in the way of the project as a whole.
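For the log-file case in particular, a small before-and-after check can catch format drift before the monitoring silently breaks. The sketch below is a generic illustration; the regular expression and log path are assumptions and would need to match whatever format your monitoring actually expects.

```python
# Hypothetical sketch: confirm that log lines still match the pattern the
# monitoring system depends on after an upgrade. The regex and log path are
# illustrative; substitute the real expected format and location.
import re
import sys

EXPECTED = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} (INFO|WARN|ERROR) \S+ .+$")

def check_log_format(path: str, sample_size: int = 200) -> int:
    bad = 0
    with open(path, errors="replace") as log:
        for i, line in enumerate(log):
            if i >= sample_size:
                break
            if not EXPECTED.match(line.rstrip("\n")):
                bad += 1
                print(f"Line {i + 1} does not match the expected format: {line.rstrip()}")
    return bad

if __name__ == "__main__":
    mismatches = check_log_format(sys.argv[1])
    sys.exit(1 if mismatches else 0)
```

Run it against the test system's log before and after the upgrade; any mismatches are a cue to review the monitoring rules before go-live.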

#8: Use project management to coordinate testing

Having project management and a management sponsor will give credibility to your testing. It will allow other areas in the organization to understand that the testing is essential, and your management will have a better idea of the testing steps. Simply saying that you're testing the new version of XYZ is not as effective as walking management through the project plan, sharing the status of the test plan, and collaborating on the testing with multiple parties. Ensure that the test plan document is available to the project management or management sponsor for an ongoing view into the progress; this will give them a good idea of the work and challenges related to the testing you have laid out.

#9: Ensure that test failures are repeatable

Almost every test plan will incur some part of a test that results in a failure. With test systems, many administrators may be testing at once or changing configurations, which may affect the testing. Should a failure occur during the testing, note it and attempt to repeat it. Further, ask other testers to perform the test to see if it fails for them as well. If the failure or issue is critical to the overall success of the project, engage the product's support resources to identify the issue if possible. Depending on the scope of the failure, the overall project may not need to be stopped, and this identification process can bring expectations in line with the end state.
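A low-tech way to separate a genuine defect from test-environment noise is to rerun the failing step several times and record each outcome. The sketch below is generic; run_test_step() is a made-up stand-in for whatever command, restore, or query actually failed in your test plan.

```python
# Hypothetical sketch: rerun a failing test step several times to see whether
# the failure is repeatable or intermittent. run_test_step() is a stand-in for
# the real command or check that failed during testing.
import subprocess
import sys

def run_test_step() -> bool:
    # Replace with the actual step, e.g. a restore script or smoke-test command.
    result = subprocess.run([sys.executable, "-c", "print('simulated test step')"],
                            capture_output=True)
    return result.returncode == 0

def repeat_failure(attempts: int = 5) -> None:
    outcomes = [run_test_step() for _ in range(attempts)]
    failures = outcomes.count(False)
    print(f"{failures}/{attempts} attempts failed")
    if failures == attempts:
        print("Failure is repeatable -- document it and engage vendor support.")
    elif failures:
        print("Failure is intermittent -- check for concurrent testers or configuration drift.")
    else:
        print("Could not reproduce -- record the original conditions before moving on.")

if __name__ == "__main__":
    repeat_failure()
```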

#10: Test with a different environment

If you're making the effort to provide quality testing, think ahead to some of the challenges you may face. These may include smaller systems running more roles, a doubling or halving of your workload, integrating another company, or a change to a core part of your IT environment. This may be perceived as scope creep in the test process, but if you engage project management and your direct management, you may be able to make the case to allocate time and resources to test these other scenarios.

#11: Hold onto your test environment

If you've gone to all the effort of creating a full test environment, why not hang onto it for ongoing testing? This could be a test environment that is used to test version updates and core functionality changes or to provide a training environment. Just be aware that there may be licensing considerations with a test environment for continued use.

How do you test?

There are many ways to approach test environments, but incorporating these tips into your testing strategies will help equip you for successful installations and upgrades. Share your own testing priorities below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

8 comments
Tony Hopkinson

Be very careful what you mean by this. Say it to a management type, and any surprise, no matter how trivial, will introduce doubt. It's hard, sometimes impossible (well, impractical) to predict all the impacts of a change. What you should try to do is verify critical functionality. An example might be that two segments of a network can still see each other, but one device which was bodged in had a weird and wonderful route which no longer works. Be clear about what you are and have tested for. Otherwise you'll get rolled back in an unnecessary panic and have to do it all again.

stewart.noakes

I appreciate that when writing an article about priorities for testing it's hard to get everything in there, but I do feel that a very key area has been missed around the management of testing: visualisation. You need to be able to visualise the coverage of testing, and the clustering of defects, in order to track and predict problems. This is very important for regression testing and for making key decisions during the testing cycle. The visualisation can be linked to requirements or to the code itself, so that you can see what has and hasn't been tested. Very often teams engage in testing without really being able to define the target of completion or see the results of their work in terms of completion of target goals. Setting these goals (such as 80% code coverage or 100% of mandatory requirements) is fundamental to a mature, efficient and effective testing effort. Stewart

ls2141

This article comes just in time for me. I am working on a large project with a new QA tester, and the QA Manager is pressed for time to train them adequately. So I am having to dig into more detail about what a great test plan should include. Thanks!

PM Hut

I think #5 sums it all up. Usually during the testing phase people do everything on the test machine, but when they go live, quite often you have problems, maybe because of hardware/software differences on the production machine. #9 is always a nightmare; sometimes you see a bug and you cannot reproduce it anymore, not to mention that some testers pretend not to see bugs.

b4real

Always a good idea to put time-critical objectives in testing!

Neon Samurai

I've a client that can't afford a physical dev server to test everything against before it goes to the production server. I use a VM duplicate of production as my dev to test as much as I can, though his eagerness has led to production changes before I had a chance to confirm them first. The time isn't a cost, it's a necessary step before the next step. VMs help with financial limitations, and planning your changes ahead of time should account for testing of those time-critical objectives.

robo_dev

Thou shalt not test on a production network. I've seen 'simple painless testing' bring down a global SAP instance. You cannot test on a production network, period.

Network Impairment Simulation: On the test network you need to create conditions similar to the real-world network conditions. Simulating production network conditions can be tricky and costly, but devices like network impairment delay simulators and latency simulators are effective tools to create a real-world environment in the lab.

Network Impact Analysis (or how to keep your telecom people from screaming at you): Performing a network impact analysis using a protocol analyzer can identify potential performance bottlenecks and also reveal security issues. A network impact analysis can identify both 'what your app does to the network' and 'what the network does to your app' (with apologies to John F. Kennedy).

b4real

You speaketh the truth!