Why you can't get management on board

Chad Perrin breaks down the psychology of upper managers who are willing to take a gamble on security by refusing to allocate the funds needed to implement a strong security policy. What do you think of this reasoning? Is it irresponsibility or just human nature?

You can read a lot about specific techniques and general principles of security here at TechRepublic's IT Security Weblog. As long as you have the authority and desire to secure your information technology resources, you can make use of such knowledge to help protect against malicious security crackers, catastrophic mistakes, and simple bad luck. How do you get the authority, though? Even IT professionals who should have the authority often don't, because they lack the support of upper management.

Somehow, you have to get management "on board" by making those to whom you answer aware of the importance of good security practice. Without that support, you will probably find your authority limited when it comes time to make changes in network use policy, technology implementation, and resource access control.

The obvious answer I might give is that management should be reading TechRepublic's IT Security Weblog. That's an improbable and overly simplistic solution, though — we all know how likely upper management is to bother dirtying its hands by regularly reading about IT security. That's your job.

Maybe, if you print this out and set it on that stubborn manager's desk, it'll open up his or her eyes. On the other hand, maybe it would cost you your job. Sometimes, there's just no way to get through to management.

CYA

Luckily, things are beginning to break our way just a little bit. Executives tend to want to at least appear to pay attention to security matters these days. It's not a universal trend — many executives still aren't paying attention — but with the many high-profile security breaches that are in the news, and the growing awareness of the very real financial dangers that can arise as the result of a breach, you're more likely to be able to get management on your side now than you were ten (or even five) years ago.

Unfortunately, the CYA mentality is not consistently reliable as a means of getting management to support good security policies in the organization. Why implement deep security policy reform, just to cover oneself in case of disaster, when it is so much easier to go through the motions of following "industry best practice" and ultimately shift blame around?

Especially in the upper echelons of the executive job market, it's often easy to move from one organization to another when the current one falls on hard times. If any of the blame for a downturn can be pinned on an executive, he or she may find it harder to move up in the world by jumping ship. Taking a CYA approach to security, however, can help insulate an executive against that kind of reputation damage without requiring the kind of hard decisions that deep security policy reform sometimes demands.

Be careful about using the CYA impulse to justify changes to security policy. You may find orders coming down to you to implement measures that don't really improve security — that may even make security worse — but do provide a way for upper management to shift blame to someone else if something goes wrong. That someone else may even be you.

Selling a negative

Many of us IT professionals prefer working with technology over working with human beings. Unfortunately, getting the people above us on the organizational hierarchy chart to buy into what we know to be the best solutions to the problems that face us often requires some salesmanship. If they don't just turn over all decision-making to you, you'll have to sell them on your ideas.

If you talk to the people in the sales department about how to sell something, you may learn that it's easier to sell someone something he or she wants than to sell that person the absence of something he or she doesn't want. To improve your chances of getting management on board with a new security initiative, it helps to sell a positive rather than a negative. It's much easier to sell a car than insurance, which is part of the reason auto insurance carriers usually make their first sale to the state legislature; if auto insurance is a legally required part of owning a car, every time someone sells a car, someone else gets to sell auto insurance.

Part of the reason for this is that, whenever you try to sell a negative (the absence of something bad), the prospective buyer weighs that negative as a trade-off against other things. Freedom from unauthorized access to the customer database? Well, what are the chances that would happen anyway, and where else could we be spending this money? Trying to sell a negative induces people to calculate the odds, to make a gamble, to look for the easy way out. They'll almost always calculate the odds poorly, because security spending looks like money thrown away if the threat it was meant to protect against never arises.

How do you even know if it's working? A lot of the time, the best you can do to prove a security measure is working is to say, "Well, security hasn't been compromised." Would it have been compromised anyway? There's usually no way to tell, so security measures almost always look like a way to burn money.

How people think

Let's imagine an example. You have two groups of people. You'll offer a chance to get some money to Group A, and you'll threaten to take some money from Group B.

Group A has two choices. To everyone in Group A, you offer these two options:

  1. You can have $100.
  2. You can have a 50% chance of getting $200, and a 50% chance of getting nothing.

Statistical studies have shown that the most common response is to choose the sure thing, and most of the members of Group A will walk away with $100.

Group B also has two choices. To everyone in Group B, you offer these two options:

  1. You can give me $100.
  2. You can have a 50% chance of giving me $200, and a 50% chance of keeping all your money.

You can even give everyone $100 before you offer that choice, and the results will still be pretty much the same, according to those same statistical studies — the most common response is to take the gamble, and most of the members of Group B will take the 50% chance of keeping all the money.

The trend is clear: it's much easier to sell a positive (free money, as a sure thing) than a negative (protection, at a cost, against great loss that may not occur).
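
If you want to check the math, a quick back-of-the-envelope simulation makes the point: the sure thing and the gamble have the same expected value in both groups, so the split in behavior comes down to risk attitude, not arithmetic. The sketch below (in Python) is purely illustrative; the names and the simulation are mine, not something taken from the studies mentioned above.

    import random

    TRIALS = 100_000

    def average(outcomes):
        """Mean payoff across all simulated trials."""
        return sum(outcomes) / len(outcomes)

    # Group A: a sure $100 gain versus a 50/50 shot at $200 or nothing.
    sure_gain   = [100 for _ in range(TRIALS)]
    gamble_gain = [200 if random.random() < 0.5 else 0 for _ in range(TRIALS)]

    # Group B: a sure $100 loss versus a 50/50 shot at losing $200 or nothing.
    sure_loss   = [-100 for _ in range(TRIALS)]
    gamble_loss = [-200 if random.random() < 0.5 else 0 for _ in range(TRIALS)]

    print("Group A sure thing:", average(sure_gain))    # exactly 100
    print("Group A gamble:    ", average(gamble_gain))  # roughly 100
    print("Group B sure thing:", average(sure_loss))    # exactly -100
    print("Group B gamble:    ", average(gamble_loss))  # roughly -100
    # All four choices average out to the same $100 swing, so the
    # different decisions people make reflect risk attitude, not math.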

What that means

This behavior is actually beneficial when even a small loss might spell disaster, or when no gain at all might similarly spell disaster. If you simply cannot afford to spend $100 on security, you're better off taking your chances on a 50% gamble on a greater loss, and if you simply cannot afford to fail to profit today, a $100 sure thing is better than a 50% chance at wild, undreamed-of successes. If you aren't balanced precariously over a pit of failure, though, and have some flexibility — if you can afford, say, $150 in losses, but $200 would finish you — you're better off making the $100 payment for security against a $200 loss. By the same token, if gaining $200 would provide you with benefits you'll never achieve by receiving $100, and getting nothing today doesn't really hurt you, you're better off taking the 50% chance at $200.
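
To make that reasoning concrete, here is a rough decision sketch on the loss side. The function, its parameters, and the "ruin threshold" are illustrative assumptions of mine, not a formula from any study: pay the sure cost when the gamble's worst case would sink you and the sure cost wouldn't, take the gamble when even the sure cost is unaffordable, and otherwise compare expected outcomes.

    def choose(sure_cost, risky_loss, risk, ruin_threshold):
        """Decide between a sure cost and a gamble on a larger loss,
        given the largest loss the organization can survive."""
        # If the gamble's worst case would finish you but the sure cost
        # would not, pay for security.
        if risky_loss > ruin_threshold >= sure_cost:
            return "pay the sure cost"
        # If even the sure cost is more than you can survive, the gamble
        # is the only option that leaves a chance of coming out intact.
        if sure_cost > ruin_threshold:
            return "take the gamble"
        # With enough cushion either way, compare expected losses; on a
        # tie, the sure cost avoids the variance.
        return "pay the sure cost" if sure_cost <= risky_loss * risk else "take the gamble"

    # The example above: you can absorb $150 in losses, but $200 would finish you.
    print(choose(sure_cost=100, risky_loss=200, risk=0.5, ruin_threshold=150))
    # -> pay the sure cost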

People don't usually think this way, though. It takes consideration to realize these secondary effects. Snap judgments usually tend toward taking the chance on losses, and taking the sure thing on gains. Those who make their snap judgments the other way usually do so even when they shouldn't, and fail spectacularly, which of course just reinforces the tendency for the rest of us to make the "normal" snap judgment.

Most companies can afford the expense of good security policy, though, and can't afford to lose out on a security gamble. Your challenge is either to help management understand this or to somehow spin security solutions as a positive sale rather than a negative one.

Good luck.

About Chad Perrin

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
