
Maintaining focus on communications during a crisis is important

Users are sometimes suspicious when crises are averted too easily.


One evening while working in the order processing division of a Northeastern company, I received a page. The events triggering that page set off a chain reaction of fear and uncertainty that eventually reached the highest levels of the company. These events, which I could have prevented had I paid attention, eventually cost my project team hundreds of man-hours and nearly cost my company the contract.

In the wee hours of one winter morning, my team and I worked steadily to replace 500 desktops before the order entry department returned to work. We had four hours and 100 desktops to go. Since the team could deploy and test 50 an hour, I didn't think we would run into any trouble. Then my pager went off. Checking it, I saw the dreaded "911" followed by my client counterpart's extension.

Telling my team to keep working, I grabbed a phone. My counterpart (let's call him Phil) picked up on the third ring.

"Shannon? That you?"

"Yep. What'ya need?"

"Someone just hacked our Web site. I've cut the Internet connection and cut the primary connection from the DMZ to our internal network. Want to come up here and give me a hand getting this mess straightened out?"

As the client team and my company's server team filtered in for the morning, my counterpart and I finished our analysis. It looked like every part of the security system had performed flawlessly. The "break-in" had penetrated only the outer Web server and sent spam mail to the Webmaster and sales contact e-mail addresses. The routers remained secure, as did the internal network. Relatively content, we shot off e-mail to our respective managers and called it a day.

Dealing with fear and uncertainty
Our respective managers walked into the situation with nothing more than a stack of log analysis reports and an "everything is fine" e-mail to go on. My manager called my counterpart's manager, and they contacted the CIO with relatively similar stories. He, in turn, discussed the situation with the C-level staff. Then the CIO directed the head of security to send out a report to everyone in the company about how the IT team "rapidly and successfully contained" a security breach. By the time the message reached the fourth link in the chain, details proved scarce.

In the absence of information, people were convinced we had to be covering something up. How could everything be fine? The evil hackers were loose in the network! The users blamed them for everything from the printer not working properly to a delay on one of the WAN circuits.

In response, the CIO ordered a "visible, all-hands effort" to review the data. He felt a public show of action would help to quell the rumors born in the wake of the ambiguous e-mail. I signed on to work with the IT team, while my deployment team went out to be a visible presence. For whatever reason, when my manager asked if we could handle it, I said yes.

Some amount of fear and uncertainty is unavoidable in these situations. People's understanding of exactly what security breaches mean remains rudimentary at best. However, by forcing our managers to guess from our notes what really happened, we started a chain reaction no one could control.

If, instead, we had provided our managers with a bulleted summary in the following format, we could have given them the tools they needed to resolve the problem:
  • <Server or router name> - penetrated/not penetrated - action taken to resolve - current status

For example:
  • OutRout - penetrated (admin privileges stolen) - image refreshed and router patched - restored and waiting for next attack

In an ideal world, the status report would also include a visible demarcation, such as a thick black line, showing where the DMZ protected the servers. With this information, our managers could have given the executive staff and the users a much clearer picture in their communications.
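For illustration only, here is a minimal sketch of how such a summary could be produced from incident data. The device names, fields, and the dmz_protected flag are hypothetical stand-ins for whatever inventory and log-analysis output a team already has; nothing here comes from the actual incident.

```python
# Minimal sketch (illustrative only): turn per-device incident findings into
# the kind of bulleted status summary described above. Device names, fields,
# and values are hypothetical examples, not data from the actual incident.

from dataclasses import dataclass
from typing import List


@dataclass
class DeviceStatus:
    name: str            # server or router name
    penetrated: bool     # was the device compromised?
    detail: str          # short explanation (e.g., "admin privileges stolen")
    action: str          # action taken to resolve
    status: str          # current status
    dmz_protected: bool  # does this device sit behind the DMZ?


def bullet(d: DeviceStatus) -> str:
    state = "penetrated" if d.penetrated else "not penetrated"
    if d.detail:
        state += f" ({d.detail})"
    return f"  * {d.name} - {state} - {d.action} - {d.status}"


def format_summary(devices: List[DeviceStatus]) -> str:
    """List exposed devices first, then draw a visible demarcation line,
    then list the devices the DMZ protects."""
    exposed = [d for d in devices if not d.dmz_protected]
    protected = [d for d in devices if d.dmz_protected]
    lines = [bullet(d) for d in exposed]
    lines.append("  " + "-" * 50 + "  <-- DMZ boundary")
    lines.extend(bullet(d) for d in protected)
    return "\n".join(lines)


if __name__ == "__main__":
    print(format_summary([
        DeviceStatus("OutRout", True, "admin privileges stolen",
                     "image refreshed and router patched",
                     "restored and waiting for next attack",
                     dmz_protected=False),
        DeviceStatus("OrderDB", False, "",
                     "logs reviewed, no changes needed",
                     "normal operation",
                     dmz_protected=True),
    ]))
```

Even a manually typed version of this output gives managers something concrete to forward up the chain, instead of leaving them to guess from raw log reports.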

Dealing with doubt
Our second mistake, both as consultants and as managers, was failing to articulate why we needed extra bodies. We moved quickly without assessing the real source of the problem. The non-IT clients did not need us to be present; they did not trust us, present or not, to tell them the truth about what had happened.

The circling consulting companies quickly smelled an opportunity. Within a week, the CIO and CEO had sat through three flashy presentations preying on their doubts about the IT team. My own customer rep showed up with a very similar presentation, which did nothing to enhance our credibility. We later hired one of these firms for a security audit that turned into a fiasco.

Later I realized we could have solved this problem very early in the cycle. In our initial reaction, we underestimated the importance of presenting a third-party opinion to the C-level executives. A quick phone call to one of my coworkers would have brought a high-level expert onto our side almost immediately. In turn, this would have backed Phil's efforts with the weight of a paid consultant's testimony.

I fumbled the second chance to correct this mistake when I agreed to pull my people off their regular duties. The client needed reassurance and authoritative action, not willing bodies, and both needed to come from a third party, or at least from an objective one.

Fast forward
Once things settled down, we pulled together a meeting to discuss what had happened. Our technical response bordered on flawless. Unfortunately, our communications "strategy," combined with our unwillingness to step back and consider why we were acting as we did, led us into a long nightmare of recriminations.

What signs would you look for to determine if you were headed into a similar situation? What could you do to prevent it? Post a comment in the discussion following this article, and let us know what you think.
