
How should we handle security notifications?

A team of researchers at Carnegie Mellon University studied the statistical relationship between rates of identity fraud and laws that require customers to be notified when there has been an information security breach. The team's findings are reported in the paper, "Do Data Breach Disclosure Laws Reduce Identity Theft?", available as a PDF download. If you think anything like a security professional, this should raise a question in your mind:

What should breach notification laws achieve?

The breach

When the security of a database of sensitive information is breached, it can lead to real problems. It wouldn't be sensitive information if it couldn't do any damage in the wrong hands. A common example is personally identifying information that can be used for identity fraud.

We should, of course, concern ourselves with reducing the incidence of such breaches. There are a number of ways to work toward that goal, such as:

  • Deter would-be criminals from attempting to commit identity fraud in the first place.
  • Induce those we trust with our personally identifying information to guard it carefully.
  • Trust fewer people with our personally identifying information.
  • Increase the difficulty of using our personally identifying information for identity fraud.

The blame

One way to induce those we trust with our personally identifying information to guard it carefully is to impose strict penalties on those who fail to do so. One might be forgiven for believing that breach notification laws serve such a purpose, but I don't believe they do, for a number of reasons. Among them is the fact that the very legal existence of a corporation makes it difficult to pin blame on any individual when agents of that corporation treat others' personally identifying information negligently.

Worse yet, while they masquerade as means of providing greater security for sensitive information, one of the most significant effects of security "standards" is to give corporations that handle sensitive information a way to show they performed due diligence, regardless of whether any positive security benefit was realized. For many corporations, such compliance standards serve more as an indicator of when they can stop paying attention to security than as a means of improving it. They work that way because a corporation, even when forced to notify customers that their personally identifying information was accessed by an unauthorized party, can say, "We performed due diligence and did everything we were supposed to do. Sometimes these things just happen. It's not our fault."

Breach notification

As such, if breach notification laws are meant to induce corporations to better protect the personally identifying information entrusted to them, they fail in that regard.

Notification serves another purpose, though. Would you like to know when some miscreant gets his or her hands on your credit card information? At least that would give you the option of cancelling that credit card, and you should cancel it before someone else uses it to make purchases.

Breach notification laws serve the same purpose as the part of your company's security policy that requires you to perform filesystem integrity audits: not to prevent a breach, but to mitigate the damage a breach may cause. In other words, the purpose of breach notification is damage control. Breach notification laws should be regarded as a means of protecting people by giving them the opportunity to take steps in their own defense.
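To make the analogy concrete, here is a minimal sketch of such a filesystem integrity audit in Python (the script name, paths, and command-line interface are hypothetical, not a description of any particular tool): record a baseline of file hashes, then later compare the current state of the filesystem against that baseline. Like breach notification, it does nothing to prevent a compromise; it only helps you discover one so you can limit the damage.

    # integrity_audit.py (hypothetical name): a minimal filesystem integrity audit.
    # Usage (hypothetical paths):
    #   python integrity_audit.py baseline /etc baseline.json
    #   python integrity_audit.py audit /etc baseline.json
    import hashlib
    import json
    import os
    import sys

    def hash_file(path):
        # Return the SHA-256 digest of a file's contents, read in chunks.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot(root):
        # Walk the directory tree and hash every regular file.
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    hashes[path] = hash_file(path)
        return hashes

    if __name__ == "__main__":
        mode, root, baseline_file = sys.argv[1], sys.argv[2], sys.argv[3]
        if mode == "baseline":
            # Record the trusted state of the filesystem.
            with open(baseline_file, "w") as f:
                json.dump(snapshot(root), f)
        else:
            # Compare the current state against the recorded baseline.
            with open(baseline_file) as f:
                old = json.load(f)
            new = snapshot(root)
            for path in sorted(set(old) | set(new)):
                if path not in new:
                    print("REMOVED", path)
                elif path not in old:
                    print("ADDED  ", path)
                elif old[path] != new[path]:
                    print("CHANGED", path)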

Vulnerability notification

The same principle applies to vulnerability notification. When someone discovers a vulnerability in an application, that person has some options. Among them:

  • Exploit the vulnerability to engage in malicious security cracking activity. Obviously, this is the wrong answer.
  • Inform the software vendor, and otherwise keep quiet. This is the approach most software vendors would have you take. Everyone's favorite example of a software vendor, Microsoft, is adamant that this is the "correct" approach: the vendor (e.g., Microsoft) should be informed whenever a vulnerability in its software (e.g., MS Windows) is discovered, and the reporting party should keep silent so the vendor can develop a patch without any more people than necessary finding out the vulnerability exists.
  • Inform the public. There are those who suggest that this is the most important thing you can do with information about security vulnerabilities, because the software users are the people most at risk. They need to know there's a vulnerability, regardless of any attempts to patch it, so they can limit the danger to their own systems until they get a chance to patch the software (if ever).

The notion that only the vendor should ever be notified is predicated on one or both of two ideas. The first is that public disclosure could damage the vendor's reputation, and the vendor surely doesn't want that. The second, and the one cited by vendors like Microsoft when they want to convince security researchers to keep their mouths shut about vulnerabilities, is that announcing the discovery of a vulnerability lets the "bad guys" know it exists, thus theoretically increasing the risk to users of the affected software.

Unfortunately, this idea that informing the public involves informing malicious security crackers as well is a particularly insidious example of the fallacy of security through obscurity. The truth is that vulnerabilities are open to discovery by anyone with the skill to find them, and any time you keep quiet about a vulnerability you ensure that, if malicious security crackers find that vulnerability, only malicious security crackers will know about it.

A common compromise is to inform the vendor, then give the vendor a certain period of time to provide a fix before announcing the vulnerability to the public. The length of this grace period is open to debate. Microsoft would like it to be as long as possible. Users of MS Windows who can apply a temporary workaround for a given vulnerability, so they're protected for however long it takes Microsoft to produce a patch, would surely like to know right away. The reasons for a time limit are many and varied, but the most obvious is to encourage the vendor to produce a patch quickly, which is not something for which Microsoft and similar corporate vendors are well known.

To wait, or not to wait?

Corporations have no right to hide behind any grace period. The customers — the software users and the people whose personally identifying information was accessed by malicious security crackers — have a right to know when their software is defective, when their finances are at risk, and so on. They need this information if they are to effectively limit the damage to themselves.

On the other hand, a lot of end users of software will never get around to doing anything about a vulnerability until there's a patch (if even then), and a lot of potential targets of identity fraud will never do anything to limit the damage to their reputations (including financial reputation, naturally). Similarly, if evidence of a security breach in a customer database is detected, a company may discover through additional research that the initial discovery was a false alarm. As such, perhaps in practice some grace period is reasonable.

A week is more than enough in most cases. Major open source software projects routinely turn out security fixes in less than a day; there's no reason a software vendor shouldn't be able to match that record. The fact that vendors are allowed to take months, or even years in some cases, to fix a glaring security problem is the reason they often take months or even years to fix such problems. The tighter the schedule under which they must operate, the higher the priority security issues will become.
