By Peter Becker
In May 2001, Sun Microsystems sought to improve the reliability of the application its worldwide sales staff uses to design hardware configurations and create customer quotes. The application’s hardware configuration options and rules are highly complex and change frequently as products are introduced and discontinued. Quotes are transmitted directly to the factory floor for assembly, making software defects that allow invalid configurations very costly. My company tested the software to improve its functional integrity.
One of the first things our team realized was that the configuration engine project had a complex organizational structure, which impeded clear communications. A third-party vendor provided software development, and separate marketing groups and engineering specialists defined requirements for each product line. An in-house QA group handled testing.
Because of the varied groups involved, the resulting requirements documents frequently conflicted and always assumed the audience knew a great deal about the products. An aggressive release schedule that demanded a new version of the configurator every month compounded the challenges for the test engineers.
Model-based testing offers some order
My company, Software Prototype Technologies, employs a process called model-based testing to test software systems rapidly and effectively. The idea behind the approach is simple: you build a graphical diagram (or model) that represents the business rules and system interface behaviors you plan to test.
You derive the model directly from the functional and design specification documents, which enforces clarity and rigor in defining what the system is expected to do. Once the model is complete, you use software to design detailed automated test scripts from it. The design algorithms produce test scripts that provide far more effective functional coverage than traditional manual test case design and implementation. What’s more, the same model can be used to create test data specifications and other documentation traditionally produced by hand by the test team. In model-based testing, you build the model and let software generate a complete and highly effective test specification.
Start with the specs
Using a set of specialized Microsoft Visio templates, you translate all of the functional rules and logic from the specification into the graphical model. To build the model, you identify “causes” and their “effects” and link them together in a logical relationship. Causes include user actions, data values, and other states that represent inputs to the system, and effects are the expected results. The logical relationships are represented as standard logical “operators” (e.g., and, or, nor).
Typically, all of the business logic and interface rules for a complete business event are placed in a single model. While the modeling format is simple, it is also rigorous. All causes and effects must be linked. Consequently, any inconsistencies or omissions in the specification documents become glaringly apparent when you try to create the model. This was the case at Sun.
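Although the models themselves are drawn in the Visio templates, the underlying structure is easy to picture in code. The sketch below uses invented cause and effect names to show how causes link to effects through logical operators, and how an unlinked node exposes a gap in the specification:

```python
# Minimal sketch of a cause-effect model. The rule and node names are
# invented for illustration; real models are drawn graphically in the
# Visio templates rather than written as code.

causes = {"order_entered", "quantity_positive", "item_in_catalog"}
effects = {"order_accepted", "error_displayed"}

# Each effect is linked to its causes through a logical operator.
relations = {
    "order_accepted": ("AND", ["order_entered", "quantity_positive", "item_in_catalog"]),
    # Nothing in the (imaginary) spec says when "error_displayed" appears,
    # so it cannot be linked.
}

linked_causes = {c for _, cause_list in relations.values() for c in cause_list}
unlinked = (causes - linked_causes) | (effects - set(relations))
print(unlinked)  # {'error_displayed'} -- an omission the modeling step exposes
```

The unlinked effect is exactly the kind of gap that becomes glaringly apparent during modeling and that feeds the ambiguity reviews described below.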
The application tested at Sun was an interactive hardware configuration and quote tool containing an extensive set of complex rules about how Sun hardware can be configured and connected. It addressed issues such as which storage devices can be connected to which processors and what constitutes a valid combination of CPUs and memory quantities.
The product managers who wrote these rules tended to leave out a lot of detail, assuming that the programmers and testers had sufficient background knowledge. Consequently, the modeling team needed to process a significant inventory of existing specifications as well as new functions and rules for pending product releases. This information was typically scattered across a variety of documents, and in many cases the specifications were incomplete or contained conflicting information about the system requirements.
At Sun, missing causes, missing effects, and unclear interactions between causes and effects were quickly documented in what is called an ambiguity review.
For example, if a specification states:
“Add input value A to value B. This number must be positive,” it raises the following questions:
- Which number must be positive?
- What happens if the value is not positive?
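To make this concrete, here is a sketch, with invented names rather than anything from Sun’s actual specifications, of how the statement falls apart when you try to translate it into causes and effects:

```python
# Invented names for illustration only -- not taken from the Sun specifications.

# Question 1: which number must be positive? The sentence supports two readings.
reading_1 = {"sum_positive": "A + B > 0"}
reading_2 = {"a_positive": "A > 0", "b_positive": "B > 0"}

# Question 2: an effect can be linked for the positive case, but the spec
# never says what happens otherwise, so the negative case has no effect.
effects = {
    "value_accepted": ("AND", list(reading_1)),
    # "???": behavior when the value is not positive -- unspecified
}
```

Neither question can be answered from the specification alone, so both go into the ambiguity review.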
During the specification review process at Sun, our team saw that many of the application’s quality problems stemmed from the specifications themselves. In fact, most defects originate in the specifications, not the code.
As ambiguities were identified, our team documented each one and conveyed the information to the product manager who could resolve the issue. Getting Sun’s product managers to sit down and resolve the ambiguities in their specifications was a particular challenge. They are busy engineers, and writing detailed configuration specifications is not their first priority. To some extent, our test engineers had to become analysts in their own right, working closely with the product managers to rewrite the specifications.
It was a productive partnership—the Sun product managers didn’t need modeling expertise in order to solve the problem, yet the project benefited from rigorous software engineering at a key leverage point in the development cycle.
Automatically generate test scripts
When the test model was complete, our team used commercial, off-the-shelf software to generate detailed test scripts directly from the model. While, in principle, any test design software that evaluates logic structures could be applied, my company uses the Caliber-RBT product, which generates test scripts based on stuck-at-one, stuck-at-zero coverage criteria. Running the test design tool directly against the completed test model made detailed test descriptions immediately available and eliminated the need for a separate, manual test case design effort. The algorithms used to design the test cases are similar to those used in testing computer chips, and they provide extensive functional test coverage with a remarkably small number of test scripts.
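The generated scripts and their format are the tool’s own, but the idea behind stuck-at-style test design is easy to sketch. The rough illustration below uses a hypothetical configuration rule (not one of Sun’s, and not the Caliber-RBT algorithm itself): for each cause, it picks an input combination in which flipping only that cause flips the effect, so a cause “stuck” at true or false would be caught.

```python
# Rough illustration of stuck-at-style test selection over a boolean rule.
# The rule is hypothetical and this is not the Caliber-RBT algorithm --
# just the underlying idea of sensitizing each cause.
from itertools import product

CAUSES = ["cpu_ok", "memory_ok", "waiver"]

def effect(inputs):
    """Hypothetical rule: config_valid = (cpu_ok AND memory_ok) OR waiver."""
    return (inputs["cpu_ok"] and inputs["memory_ok"]) or inputs["waiver"]

def sensitizing_tests(effect, causes):
    """For each cause, find an input combination where flipping only that
    cause flips the effect; a cause stuck at true or false fails the pair."""
    tests = []
    for target in causes:
        for values in product([False, True], repeat=len(causes)):
            case = dict(zip(causes, values))
            flipped = dict(case, **{target: not case[target]})
            if effect(case) != effect(flipped):
                tests.append((target, case, effect(case)))
                break
    return tests

for target, inputs, expected in sensitizing_tests(effect, CAUSES):
    print(f"sensitize {target}: inputs={inputs} expected_valid={expected}")
```

Each selected case and its flipped counterpart form a pair of tests that together distinguish correct logic from a fault in that one cause, which is how this style of design covers the functionality with a small number of scripts.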
Watch defect levels plummet
During the first couple of releases, defects continued to surface, especially in areas that had not yet been tested with the model-based methodology. After several release cycles, however, production bugs dropped sharply, as shown in Figure A. More importantly, the severity of the post-implementation defects has been reduced to minor cosmetic issues.
[Figure A: production defects by release]
This decrease in defects is directly attributable to two components of model-based testing. First, the ambiguity reviews identified descriptions in the specifications that might be misunderstood by the programmer and thus coded incorrectly; resolving those ambiguities before coding is complete gives the programmers accurate information from which to write the code. Second, the test cases produced by the automated test design process were extremely thorough in exercising the system’s functionality. Together, these techniques give you a powerful and consistent process for removing defects from software.
President and founder of Software Prototype Technologies, Peter Becker has over 35 years of experience in software development. He has spent the last 20 years associated with software quality assurance and testing.