Security

Coloring outside the lines: Security breakdowns don't follow rules

One of the biggest obstacles to improving security is the tendency people have to color inside the lines without even thinking about the ways security crackers can break the rules.

What do you think of when you hear "IT security"? Do you think of strong passwords, buffer overflows, and encryption? These are, of course, important factors to consider, but they are only part of the story.

Security is really not that simple. Security is not something you can buy as an off-the-shelf product, or something you can get just by checking all the boxes on a checklist. Security is an ongoing effort to forestall and counteract the efforts of security crackers, and to defend yourself and your resources against the vagaries of chance, which can conspire to cause the same problems a security cracker would, purely by accident.

Security also means refraining from boneheaded maneuvers that accidentally violate your own privacy, as in the case of government agencies inadvertently publishing their secret policies on the Internet or using a black rectangle on a separate document layer to "hide" text before sharing the document with the world. Never forget one of the most obvious security no-nos: writing your password on a sticky note and sticking it to your monitor (or even hiding it under the keyboard). Sure, that would not help someone trying to crack your security over the network, but the guy in the next cubicle could read it at a glance.

All of this put together adds up to an important bit of personal enlightenment: to recognize the full spectrum of potential threats to your security, you need to "think outside the box." Skilled security crackers — the guys who actually recognize vulnerabilities and develop new exploits targeting those vulnerabilities — do what they do by coloring outside the lines. Just like Alexander the Great cutting through the Gordian knot, security crackers can compromise security simply by refusing to play by the same rules as everyone else. Those of us who want to maintain our security, but fail because we accidentally created a vulnerability, often fail because we not only refuse to color outside the lines but do not even see the lines, or the wide open world beyond them.

That does not mean that coloring inside the lines is bad. What it means is that we cannot counteract the efforts of malicious security crackers if we refuse to acknowledge that there is a world outside the box in which we think, outside the lines within which we color.

A simple example of where people focus so much on coloring inside the lines that they miss the point of security is that of categorization. What kinds of bugs do you consider to be security bugs? What kinds of bugs are not security bugs? Obviously a buffer overflow that can allow arbitrary data to be written to executable memory is a security bug — but what about stability issues?
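To make the contrast concrete, here is a minimal sketch of that classic bug class in C; the function names and the fixed-size buffer are hypothetical, chosen only for illustration:

    #include <stdio.h>
    #include <string.h>

    /* Classic stack buffer overflow: strcpy() copies as many bytes as
     * the caller supplies, ignoring the 16-byte buffer. Input longer
     * than 15 characters overwrites adjacent stack memory, potentially
     * including the function's return address. */
    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);                    /* the bug: unbounded copy */
        printf("Hello, %s\n", buf);
    }

    /* The bounded version removes the overflow; truncation then becomes
     * an ordinary correctness concern rather than a security hole. */
    void greet_safely(const char *name)
    {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        greet_safely(argc > 1 ? argv[1] : "world");
        return 0;
    }

Everyone agrees greet() is a security bug. The harder question is what to make of the bugs that follow.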

Do you consider a memory leak that gradually fills up memory on an MS Windows system, as you run the program over and over again, to be a security issue? At first it just leaves ghosts of itself behind, each holding a copy of application state, until the computer slows down and eventually needs to be restarted. On the other hand, as RAM fills up, the OS starts writing memory out to the page file. If some of the data the application handles is sensitive, private data, it may get written to the pagefile on the hard drive and forgotten there. It may still be on that hard drive somewhere after the computer has been restarted several times.
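One defense against exactly that failure mode, sketched below under the assumption of a POSIX system (where mlock() pins pages in physical RAM; MS Windows offers VirtualLock() for the same purpose), is to make sure sensitive buffers are never eligible for swapping in the first place:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        char secret[64];

        /* Pin the buffer in physical RAM so the kernel never writes it
         * out to swap (the rough equivalent of the Windows pagefile). */
        if (mlock(secret, sizeof secret) != 0) {
            perror("mlock");
            return 1;
        }

        /* ... work with the sensitive data ... */
        snprintf(secret, sizeof secret, "hypothetical passphrase");

        /* Scrub before unlocking. Note that a plain memset() can be
         * optimized away; see the scrubbing example later on. */
        memset(secret, 0, sizeof secret);
        munlock(secret, sizeof secret);
        return 0;
    }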

What about a crashing bug that causes your application to fail? Surely, it is just annoying — right? You lose half an hour's work because the application fell over before you got a chance to save your work. On the other hand, there are times when malicious security crackers actually want nothing more than to crash your software. This type of security cracking activity is known as a denial of service (DoS) attack.
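Crash-as-DoS bugs often come down to trusting input. The hypothetical message handler below shows the pattern: a length field in the input is believed without question, and one missing sanity check is the difference between a service and a crash:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A hypothetical handler for messages whose first four bytes claim
     * the payload length. Without the validation check below, a message
     * claiming a huge payload would make malloc() fail or memcpy() read
     * past the real input, crashing the process: a remotely triggerable
     * denial of service even though no data is stolen or corrupted. */
    void handle_message(const unsigned char *msg, size_t msg_len)
    {
        if (msg_len < 4)
            return;

        /* Big-endian payload length, as claimed by the sender. */
        size_t claimed = ((size_t)msg[0] << 24) | ((size_t)msg[1] << 16)
                       | ((size_t)msg[2] << 8)  |  (size_t)msg[3];

        /* The fix: validate the claim against reality before using it. */
        if (claimed > msg_len - 4)
            return;

        unsigned char *body = malloc(claimed ? claimed : 1);
        if (body == NULL)             /* another crash vector if skipped */
            return;
        memcpy(body, msg + 4, claimed);

        /* ... process the payload ... */
        free(body);
    }

    int main(void)
    {
        /* Malformed message: claims 16 MB of payload, carries two bytes. */
        unsigned char evil[] = { 0x01, 0x00, 0x00, 0x00, 0xAA, 0xBB };
        handle_message(evil, sizeof evil);
        puts("survived the malformed message");
        return 0;
    }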

Things can get scarier, though. Once again, if your application is handling sensitive data, you may be more vulnerable than you thought. What if the application drops a core dump file on the hard drive in a location that can be accessed more easily by a malicious security cracker than if the data was merely in volatile memory? Now you have to worry not only about losing work because someone decided to exploit a crashing bug in your application, but also about someone else getting access to some of what you were doing after the fact.
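On POSIX systems, one simple precaution for a process that handles secrets is to tell the OS never to write a core dump for it at all. A minimal sketch (on MS Windows, the analogous knob is the error-reporting and minidump configuration rather than a resource limit):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Forbid core dumps for this process so a crash cannot scatter
         * in-memory secrets across the filesystem. */
        struct rlimit no_core = { 0, 0 };
        if (setrlimit(RLIMIT_CORE, &no_core) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* ... handle sensitive data; a crash now leaves no core file ... */
        return 0;
    }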

What if you are using an application that handles sensitive data, and the whole operating system crashes? People think of RAM as volatile memory that is cleared the moment the computer is turned off, but the truth is that it takes a little while for the power to drain from your RAM, and thus for its contents to completely clear. If your application does not get a chance to zero out its memory when shutting down because the OS crashed, all the sensitive data the application was managing might still be in RAM for a few minutes after the crash. If you step away from your desk to use the restroom, someone else might have the tools to read the contents of RAM. The application itself might not have an exploitable bug, but this is definitely a locally exploitable vulnerability.
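Even the zeroing step has a subtle trap: an optimizing compiler is allowed to delete a memset() of a buffer that is never read again, quietly leaving the secret in RAM. The sketch below assumes a platform that provides explicit_bzero() (glibc and the BSDs do; MS Windows offers SecureZeroMemory(), and C23 standardizes memset_explicit() for the same job):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char secret[32] = "hypothetical session key";

        /* ... use the secret ... */

        /* memset(secret, 0, sizeof secret) could be removed entirely by
         * dead-store elimination. explicit_bzero() is guaranteed to
         * survive optimization and actually clear the buffer. */
        explicit_bzero(secret, sizeof secret);

        puts("secret scrubbed");
        return 0;
    }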

There are plenty of real-world examples of bugs in a system — bugs that are not normally associated with security — having surprising security consequences. Two in particular come to mind: an air traffic control system that crashed, and a restaurant that burned down:

  • A new MS Windows server in California nearly caused an 800-airplane pile-up in 2004 when a bug caused the air traffic control system to shut down. It was "just" a stability issue (the system would shut down after roughly fifty days unless it was manually rebooted at a "safe" time before then), but it could very well have caused one of the worst commercial airline disasters in history if luck had not been with the pilots that day. The arithmetic behind that fifty-day figure is sketched after this list.
  • It looks like MS Windows Update caused a fire at a cafe in Santa Rosa, CA in October 2009, too. Lives were not lost, but livelihoods were. Again, it was not a security issue in the sense that a buffer overflow or unnecessarily open network port would be a security issue, but lives were at stake because of unexpected behavior.
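The roughly-fifty-day figure in the first example is the fingerprint of a well-known bug class: a 32-bit counter of milliseconds runs out after about 49.7 days. The millisecond timer is consistent with published accounts of the incident, but the code below is only an illustration of the arithmetic, not the actual system:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 32-bit millisecond counter (the style of uptime tick used by
         * GetTickCount() on MS Windows) wraps after 2^32 milliseconds. */
        uint64_t wrap_ms   = (uint64_t)1 << 32;          /* 4,294,967,296 */
        double   wrap_days = wrap_ms / 1000.0 / 60 / 60 / 24;

        printf("32-bit millisecond counter wraps after %.1f days\n",
               wrap_days);                               /* 49.7 days */
        return 0;
    }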

Does it become a security issue when someone takes advantage of the kinds of problems represented in these two cases to intentionally cause fires or endanger airliners full of passengers? If so, then it was a security issue from the very beginning — because the best way to deal with an attack on your security is to prevent it. Waiting until after someone finds a way to turn a bug that is not immediately associated with security into a security incident that threatens lives is a remarkably short-sighted way to secure a system.

We allow lines to be drawn around where we are going to do our coloring so that when we are done we have a coherent picture. That does not mean we should pretend it is impossible for the crayon to cross the line. It is all too easy to get into a rut where we become blind to the possibilities outside of the everyday, intended way to use the tools we have in our hands. When we are trying to ensure that others do not destroy what we have worked hard to create, we ignore such possibilities only at our peril.

You do not have to color outside the lines, but you should at least acknowledge that they are just lines, and that those willing to break the rules when they wish to destroy all our hard work will not be held in check by those lines.

About Chad Perrin

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
