Way back before the turn of the century, when we were all concerned about Year 2000 readiness, my CIO devised a simple tool that provided a variety of benefits. The tool mapped the Y2K readiness status at the company’s different locations, offered clear communication on what needed to be done, and—perhaps most important—immediately caught the attention of the executive level.
The tool was a simple stoplight chart that mapped the criteria for various Y2K preparations and rated each category as red (in need of immediate remediation), yellow (requiring attention but not critical), or green (no remediation needed). The chart tool was so successful that our IT organization has adapted it for technology audits and security processes.
What prompted the ratings system
When my organization introduced the tool, we had more than 50 autonomous divisions with over 400 plants and offices and over 120,000 employees worldwide. Many divisions had multiple IT data centers that ranged from classic mainframe operations to midrange computing shops (AS/400, HP/3000) to strictly client/server-based computing environments.
Staffing varied with the size of the data center. There were large staffs in the mainframe operations but often only one or two dedicated IT pros in the midrange and client/server shops. For the most part, the company used Microsoft Exchange for interdivision e-mail. Virtually all locations ran accounting software applications, and many used ERP or MRP systems to run the manufacturing plants.
We also had a wide variety of specific applications. Each division’s IT leader reported directly to division management, with the corporate CIO at the next level. All this complexity required an easy and straightforward tracking mechanism geared to Y2K compliance issues, as well as IT audits and security projects. The color-coded stoplight chart fit the bill.
How the stoplight chart works
The chart’s color ratings system makes it very easy to summarize performance in matrix form. Users simply list the departments and the categories and then develop the ratings (red, yellow, or green).
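As a minimal sketch, the matrix can be represented as a mapping from each location to its category ratings. The category ratings for Location A below follow the examples given later in the article; the ratings shown for Location D are illustrative assumptions.

```python
# Sketch of the stoplight matrix: each location maps each audit
# category to a rating of "red", "yellow", or "green".
# Location A's ratings follow the article; Location D's are assumed.
ratings = {
    "Location A": {
        "security": "red",                      # separated-employee access not disabled in time
        "networks": "yellow",                   # hardware bought without IT approval
        "software licensing": "yellow",         # incomplete software inventory
        "contingency planning": "green",
    },
    "Location D": {
        "security": "green",
        "networks": "green",
        "software licensing": "green",
        "contingency planning": "green",
    },
}

def worst(location):
    """Return the worst single category rating for a location."""
    severity = {"green": 0, "yellow": 1, "red": 2}
    return max(ratings[location].values(), key=severity.get)
```

A quick scan with `worst()` reproduces the at-a-glance quality of the chart: `worst("Location A")` returns `"red"`, while `worst("Location D")` returns `"green"`.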
The color-coded approach works well because it grabs the attention of nontechnical managers and executives who sometimes don’t understand what a firewall really does or why the organization needs one.
After all, everyone associates red with bad and green with good to go. Reducing the evaluation to a color expresses the result concisely so the qualitative result is clear, even if the technical details are not.
In Figure A, it’s easy to see how quickly the technology trouble spots pop out. Location C obviously needs immediate attention. Similarly, Location A has a serious problem. Locations B and E need some help to improve, and Location D is in good shape.
In the chart graphic, Location A is red overall because we decided that security was so important that a red-rated item in that category would cause an overall red rating. The main issue at Location A was that they didn’t disable access to computer systems for separated employees within the required time limits. Failure to gain IT approval for hardware purchases resulted in the yellow in networks, and an incomplete software inventory led to the yellow in software licensing compliance.
The red code at Location C was also due to the lack of both a contingency plan and published data security policies (hence the reds in contingency planning and security). That location was also red in business processes and computer operations because there were no documented procedures for version control or for changing production programs. Three reds automatically meant an overall red for this poorly performing location.
Determining the right ratings
A key aspect of this tool is determining the appropriate rating for the categories. For example, we had three subcategories we felt were so important that a red in any of them resulted in an overall red rating. After that, it took two red ratings to get an overall red.
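The rollup rules described above can be sketched in code. The article identifies security as one category where a single red forces an overall red; the other members of the critical set here are hypothetical, and the yellow/green fallback rules are assumptions added for illustration.

```python
# Sketch of the overall-rating rollup: a red in any critical
# subcategory forces an overall red, as does having two or more
# reds anywhere. The critical set (beyond security) and the
# yellow/green fallback are assumptions, not from the article.
CRITICAL = {"security", "contingency planning", "computer operations"}

def overall_rating(category_ratings):
    """Roll per-category colors up into one overall color."""
    reds = {cat for cat, color in category_ratings.items() if color == "red"}
    if reds & CRITICAL:      # any critical category rated red
        return "red"
    if len(reds) >= 2:       # two reds anywhere also force red
        return "red"
    if reds or "yellow" in category_ratings.values():
        return "yellow"      # assumed: remaining problems roll up to yellow
    return "green"
```

Under these rules, a location with a single red in security is red overall, while a single red in a non-critical category rolls up to only yellow.
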
However you set the thresholds, think the system through so the rules you decide on can be applied consistently across departments. By tying specific criteria to the ratings, you’ll avoid business unit resistance to the tool—you’ll be able to easily explain why division X got a red on security and division Y didn’t. If there’s no clear-cut difference, you’re forced to fall back on more subjective criteria.
Specific criteria can reduce resistance to the ratings system, but be prepared for some resistance nonetheless. As business units quickly understand that red is bad, they immediately begin lobbying to get any rating except red to ease their pain and keep the boss off their backs. This can lead to long negotiations and unproductive discussions about ratings rather than solutions.
We were able to avoid most of this hurdle by publishing the criteria for the ratings at the same time we distributed the chart. Laying out the rules—what issues and problems constituted a red and not a green—helped spur remediation rather than debate over the rating. In the audit process, the business locations being evaluated knew exactly what problems caused a red rating and why.
The ratings criteria were also preapproved by the CIO, and people were told that up front. That way, business leaders who wanted changes made had to talk to the CIO.
Here are several more tips to make your chart work well:
- Spend time developing criteria that are as objective as possible, but leave some wiggle room because nobody can foresee every circumstance.
- Get buy-in and approval on the criteria from high authority—the higher, the better.
- Publish the criteria before the evaluations occur to give people a chance to fix obvious problems.
- Get results out quickly. We issued our final reports within three weeks of completing the evaluation work.
- Give recognition to departments that do things right.
The chart ratings system costs little time or effort to implement: it can be distributed through e-mail and at meetings, and it requires little instruction to use.
Once the chart system was set up and distributed, the business units responded immediately. Executives who had never expressed any reaction to technology audit reports in the past were suddenly calling to find out what their people needed to do to “get green.” Suddenly, upper management was asking questions, and people were taking action to correct problems. Important issues that were previously ignored got attention and got fixed.
Making needed adjustments
The chart has proved worthwhile and effective. However, the ratings system isn’t perfect, and we’ve made some adjustments during the past few years.
After the first year of using the tool for audits, we modified the criteria and added a new category, software licensing, because licensing tracking had become an important concern. The modifications we made to the chart were based on changing conditions and feedback from the first year of operation. It’s a good idea to do an annual criteria review and update of the chart. If your business conditions change rapidly, you may want to review the chart every six months.
Yet it’s also important not to change the criteria too often. The departments being evaluated need to understand the rules, and if the criteria change too frequently, no one will have a chance to catch up. Frequent changes also make it difficult to track progress and make comparisons over time.
The stoplight ratings system can effectively get people to take action and fix problems. Developing criteria that focus on critical issues will get everyone thinking about and acting on those issues—and that bodes well for any organization, whether the tech focus is security, IT audits, or a unique crisis such as Y2K.