Security

How should we handle security notifications?

A team of researchers at Carnegie-Mellon University studied the statistical relationship between rates of identity fraud and laws that require customers to be notified when there has been an information security breach. The team's findings are reported in the paper, "Do Data Breach Disclosure Laws Reduce Identity Theft?", available as a PDF download. If you think anything like a security professional, this should raise a question in your mind:

What should breach notification laws achieve?

The breach

When the security of a database of sensitive information is breached, this can lead to some real problems. It wouldn't be sensitive information if it couldn't do any damage in the wrong hands. A common example is personally identifying information that can be used for purposes of identity fraud.

We should, of course, concern ourselves with reducing the incidence of such breaches. There are a number of approaches to that goal, such as:

  • Deter would-be criminals from attempting to commit identity fraud in the first place.
  • Induce those we trust with our personally identifying information to guard it carefully.
  • Trust fewer people with our personally identifying information.
  • Increase the difficulty of using our personally identifying information for identity fraud.

The blame

One way to induce those we trust with our personally identifying information to guard it carefully is to impose strict penalties on those who do not guard it so carefully. One might be excused for making the mistake of believing that breach notification laws serve such a purpose, but I really don't believe they do, for a number of reasons. Among them is the fact that the very legal existence of a corporation makes it difficult to pin blame on any individual for negligent treatment of others' personally identifying information by agents of that corporation.

Worse yet, while they masquerade as means of providing greater security for sensitive information, one of the most significant effects of security "standards" is to give corporations that handle sensitive information a way to show they performed their due diligence, regardless of whether any positive security benefits were realized. Such security compliance standards, for many corporations, serve more as an indicator of when they get to stop paying attention to security than as a means of improving security. They work that way because a corporation, even when it is forced to notify customers that their personally identifying information was accessed by an unauthorized person, can say, "We performed due diligence, did everything we were supposed to. Sometimes, these things just happen. It's not our fault."

Breach notification

As such, if breach notification laws are meant to induce corporations to better protect the personally identifying information entrusted to them, they fail in that regard.

Notification serves another purpose, though. Would you like to know when some miscreant gets his or her hands on your credit card information? At least that would give you the option of cancelling that credit card -- and you should cancel it before someone else uses it to make purchases.

The purpose of breach notification laws should be regarded as the same as the purpose of your company's security policy when it requires you to perform filesystem integrity audits. That purpose isn't to prevent a breach, but to mitigate the damage the breach may cause. In other words, the purpose of breach notification is damage control. Breach notification laws should be regarded as a means of protecting people by giving them the opportunity to take steps in their own defense.
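
To make the analogy concrete, here is a minimal sketch of what a filesystem integrity audit does: record a baseline of cryptographic hashes, then compare the current state of the filesystem against that baseline to spot unauthorized changes after the fact. This is an illustrative sketch only, not any particular tool's interface; the baseline.json filename and the command-line modes are assumptions made for the example.

    import hashlib
    import json
    import os
    import sys

    def hash_file(path, chunk_size=65536):
        """Return the SHA-256 digest of one file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def snapshot(root):
        """Map every readable regular file under root to its SHA-256 digest."""
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path):
                    try:
                        hashes[path] = hash_file(path)
                    except OSError:
                        continue  # skip files we aren't allowed to read
        return hashes

    def compare(baseline, current):
        """Report files added, removed, or modified since the baseline was taken."""
        added = sorted(set(current) - set(baseline))
        removed = sorted(set(baseline) - set(current))
        modified = sorted(p for p in baseline if p in current and baseline[p] != current[p])
        return added, removed, modified

    if __name__ == "__main__":
        # Usage (hypothetical): python integrity_audit.py baseline /etc
        #                       python integrity_audit.py audit /etc
        mode, root = sys.argv[1], sys.argv[2]
        baseline_path = "baseline.json"  # assumed location for the stored baseline
        if mode == "baseline":
            with open(baseline_path, "w") as f:
                json.dump(snapshot(root), f, indent=2)
        else:
            with open(baseline_path) as f:
                baseline = json.load(f)
            added, removed, modified = compare(baseline, snapshot(root))
            for label, paths in (("added", added), ("removed", removed), ("modified", modified)):
                for path in paths:
                    print(label + ": " + path)

Dedicated tools such as Tripwire or AIDE perform essentially this comparison, with far more care taken to protect the baseline itself from tampering. Note that none of this prevents a breach; it only tells you that one happened, so you can start limiting the damage.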

Vulnerability notification

The same principle applies to vulnerability notification. When someone discovers a vulnerability in an application, that person has some options. Among them:

  • Exploit the vulnerability to engage in malicious security cracking activity. Obviously, this is the wrong answer.
  • Inform the software vendor, and otherwise keep quiet. This is the approach most software vendors would have you take. Everyone's favorite example of a software vendor, Microsoft, is very adamant that this is the "correct" approach -- that the vendor (e.g., Microsoft) should be informed whenever a vulnerability in its software (e.g., MS Windows) is discovered, and the reporting party should keep silent so that Microsoft can develop a patch for the vulnerability without any more people than necessary finding out the vulnerability exists.
  • Inform the public. There are those who suggest that this is the most important thing you can do with information about security vulnerabilities, because the software users are the people most at risk. They need to know there's a vulnerability, regardless of any attempts to patch it, so they can limit the danger to their own systems until they get a chance to patch the software (if ever).

The notion that only the vendor should ever be notified is predicated on one or both of two ideas. The first is that it could damage the vendor's reputation, and the vendor surely doesn't want that. The second, and the one cited by vendors like Microsoft when they want to convince security researchers to keep their mouths shut about vulnerabilities, is that announcing vulnerability discovery lets the "bad guys" know the vulnerability exists -- thus theoretically increasing the risk to users of the affected software.

Unfortunately, this idea that informing the public involves informing malicious security crackers as well is a particularly insidious example of the fallacy of security through obscurity. The truth is that vulnerabilities are open to discovery by anyone with the skill to find them, and any time you keep quiet about a vulnerability you ensure that, if malicious security crackers find that vulnerability, only malicious security crackers will know about it.

A common compromise is to inform the vendor, then give the vendor a certain period of time in which to provide a fix before announcing the vulnerability to the public. The length of this grace period is open to debate. Microsoft would like it to be as long as possible. Users of MS Windows who are able to put a temporary workaround in place for a given vulnerability, so that they're protected for however long it takes to get a patch from Microsoft, would surely like to know right away. The reasons for a time limit are many and varied, but probably the most obvious is to encourage the vendor to produce a patch quickly -- not something for which Microsoft and similar corporate vendors are well known.

To wait, or not to wait?

Corporations have no right to hide behind any grace period. The customers -- the software users and the people whose personally identifying information was accessed by malicious security crackers -- have a right to know when their software is defective, when their finances are at risk, and so on. They need this information if they are to effectively limit the damage to themselves.

On the other hand, a lot of end users of software will never get around to doing anything about a vulnerability until there's a patch (if even then), and a lot of potential targets of identity fraud will never do anything to limit the damage to their reputations (including financial reputation, naturally). Similarly, if evidence of a security breach in a customer database is detected, a company may discover through additional research that the initial discovery was a false alarm. As such, perhaps in practice some grace period is reasonable.

A week is more than enough in most cases. Major open source software projects routinely turn out security fixes in less than a day; there's no reason a software vendor shouldn't be able to achieve the same. The fact that they're allowed to take months, or even years in some cases, to fix a glaring problem with security is the reason they often take months or even years to fix such problems. The tighter the schedule under which they must operate, the higher the priority security issues will become.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

22 comments
therealbeadweaver2002

Upon further thought, I have come to ONE conclusion. The EULA needs to be changed. IN EVERY CASE I have dealt with where a vulnerability is found, the EULA states that the software maker cannot be held responsible for loss or damage, etc. IF the software makers were held accountable for their vulnerabilities, then you bet they would disappear within a SHORT time.

Marquisem

Here's the thing: forensics on breach incidents and software fixes for vulnerabilities are both time-consuming to do correctly. You can't evaluate the extent of most breaches overnight. It can, and does, take months. Even some government regulators are advising their regulatees that they should expect a delay, because the forensics leading to the root cause can take time. Likewise, if I have a choice between a patch that fixes Problem A but takes six months to be released, and a patch that fixes Problem A but also causes Problems B, C, and D and arrives a week after Problem A is discovered, I'll take the former over the latter. We all need to be a little more realistic. And no, I don't work for Microsoft, Cisco, Apple, ChoicePoint or TJX.

dawgit

Most methods currently in use are meant to pacify the masses, and do very little in the way of improving the infrastructure. There needs to be a multifaceted approach to solving these situations. Some of them should be how to prevent such losses, with some teeth behind doing so. An examination of those incidents that do happen, with a thorough and public review, an after-action review (with input allowed). Of course the individuals involved should be notified, so that they can try to minimize potential losses. The unfortunate part is that this doesn't seem to be getting done. Another sad statement of our times? -d

apotheon

Security notification policy is a matter of intense debate in some circles. How it should be handled is necessarily dependent on a number of factors, including what you hope to achieve by notification, and who there is to notify. I made a case for almost immediate notification. Do any of you have a counterargument?

alaniane

It's simple debugging. Most security vulnerabilities are not deliberately built into the system; they are bugs. The bug could be in the original specs or in the implementation. If the software is not spaghetti code, then you can isolate the code giving the errant results rather quickly. It shouldn't take months to find the bug, even in complex code. The problem most of the time is that developers tend not to want to go back over their code and look for bugs. It's boring work and doesn't generate new revenue. They like developing new projects, and the company only receives revenue from new projects, not bug fixes. If you're taking months to figure out what the bug fix is going to affect, then your code isn't worth maintaining. You need to redo it.

apotheon

"[i]You can't evaluate the extent of most breaches overnight. It can and does, take months.[/i]" Evidence suggests that this supposed limitation on how quickly breaches and vulnerabilities can be assessed is more a function of priorities than actual capabilities. Open source projects do it in less than a week all the time. The fact that some major closed source software vendors routinely take months to do the same thing doesn't mean it can't be done more quickly. The greater alacrity of patch development in open source projects doesn't lend itself to a greater incidence of additional problems, as far as anyone has been able to determine, either. In fact, the opposite seems to be true -- the incidence of security patches creating new problems seems to be far greater, for instance, for Microsoft than for the FreeBSD project. "[i]We all need to be a little more realistic.[/i]" I'm not sure how we can be more realistic than taking the example of what is done on a regular basis by many software development projects as an indicator of what can and should be accomplished in the pursuit of greater security. Furthermore, if we're going to be "realistic", we should recognize that regardless of whatever difficulties a vendor may encounter, [b]the users must take priority over the vendor[/b] in the considerations of security researchers. Being left unprotected for more than a week after a vulnerability was discovered, with no way to know there's a vulnerability against which users must protect themselves short of discovering vulnerabilities themselves, is unacceptable. I, for one, am not interested in being [b]unwittingly[/b] hung out to dry while it takes three months for a vendor to develop a patch. I want to know that I'm at risk, and how I'm at risk, so I can implement some protective measures.

apotheon

Among the most important considerations is prevention -- prevention of further harm to those whose data has already been compromised, and prevention of future compromises. The trend, however, is toward CYA behavior. Everyone wants to be able to point to a checklist and say "We did all of that, so it's not our fault." Nobody seems to be willing to go above and beyond the "compliance" checklist.

bikingbill

If I receive poor service say from my garage, common courtesy requires that I first ask the garage to remedy the situation. Only if they fail to do so do I take the matter further. Applying the same principle, the software supplier should be given the opportunity to remedy their fault. On the other hand, last year my bank card was cloned. Within 12 hours, the bank had 'phoned me personally to ask about some unusual transactions which were of course fraudulent. Applying this principle, the software supplier should 'phone me as soon as it finds a problem in its software (even if it will take several days to fix it). Needless to say, this has never happened. Therefore, I think it is reasonable to first notify the software supplier. But until they put in place a reliable process of notifying their customers of the problem, it is legitimate to "go public" almost immediately and give people the opportunity to either secure or switch off exposed systems.

dawgit

It's sad to even think about: all those checklists and guidelines are meant to be a starting point to be built on, not the end point of liability. There once was a time when striving for excellence was greatly admired and encouraged, not discouraged and despised. Today it's the 'Standard' to achieve mediocrity. Efforts at the concept of betterment are subjects of ridicule. -d

tuomo

I don't know if it's still happening, but at the time I worked with mainframes, any and all problems were immediately relayed to "users" and the (often temporary) fix was (almost) guaranteed in 24h. Now - the user wasn't the end user, who couldn't do anything anyway, but as a systems programmer I got flooded some days, which created its own problems in 24x7 systems: when and what to fix, and is this really something which is relevant to our infrastructure? And maybe the fix makes something else fail - how to test, etc.? To the subject, I still think 24h to get an (intelligent) answer from the vendor and 48h to fix (at least in a temporary way) should be enough. Then it is up to the "users" if and when they use the fixes. I know, not many end users can or will make those decisions, but maybe over time both sides will get better? And even if it looks like the right thing to tell the world about a problem, I still think it is just fair to give the vendor a chance. Now - I know not everything can be fixed in 48h, or even temporarily fixed while working through those 48h, but that's IT for you. Maybe the next time the system is better designed upfront and made more maintainable, which (too) many systems seem not to be today?

Dumphrey

I do not see many closed source operations having the nimbleness to achieve this, but then again, there has been no real pressing need for them to do so... No stick and no carrot, just an open road. I would say a month is a more than fair amount of time between informing the vendor and going public.

therealbeadweaver2002

If you take your car to the garage, and they find a major defect in the design of the car, the first thing they should do is try to fix the defect. THEN, they should notify the maker of the auto and inform them both about the defect and the steps THEY took to fix it. IF the garage feels the responsibility to notify OTHER GARAGES about the defect and fix, then so be it. If the garage feels that the defect is so bad as to be dangerous, and they should be warning the public in general, then they should do so. But if it is just an obscure defect that poses no IMMEDIATE danger, then the garage should give the automaker time to FIX the defect before going public. In the case of automobiles, the time would be long enough for them to do a recall, a couple of months at least... SOFTWARE is a different story. Like someone said, open source programmers get fixes out in a day or two.

apotheon

That's exactly my take on the matter. Do you have any specific suggestions for determining how long between notifying the vendor and "going public"? Does my one week grace period make sense to you? Judging by what you said, I wonder if you think one week might be too long.

Neon Samurai

I've a friend who is a great meter when I want to consider something from a business angle. Anything I suggest usually starts him with: "Great, that's really cool... so how does it make me more money?" I get the same response when I ponder things with the management at work: "So how does this increase profits?" - and if I can't draw a straight line from idea to more money... I took the logical business/accountant step and equated accountability to financial cost. If the company could be held accountable but without any effect on their gross profit, would they care? It would be nice to see more true accountability instead of token demonstrations from business, though.

Dumphrey

But keep in mind that this is a man who flat-out refuses to buy $15 replacement batteries for UPS units, saying they are a waste of money... He and my grandfather would get along like peanut butter and jelly.

alaniane

disaster contingency document to show management why they can be hung out to dry if they don't take the necessary precautions. If you can cook the books and get away with it, most management nowadays will cook those books. Knowing that the CEO can do real time for cooking the company's books prevents many of them from trying it. Make the C-level managers financially liable for security breaches and you will see them have their staff plug them in a hurry.

Dumphrey

We do not need to buy support contracts on this equipment (Exchange server and Ironport spam filter) because what's the percentage they will actually fail in the first few years? Very low. He knows that if they die, it will cost 5x what the support contracts cost, and is willing to make the gamble to "save money". This seems to be a common mentality among management from what I have seen. But it's interesting to note that these same people who do not see a need for service contracts (read: instant replacement) refuse to carry only minimum coverage on their own car... Double standards where their personal money is not at risk. The people paying for the security many times lack the knowledge to make informed decisions about the value of a proposal, and many times refuse to believe their IT staff is actually presenting them with a reasonable proposal. Unless your accounting department is above average, most people will be lucky to get the minimum to satisfy [insert acronym here] compliance.

therealbeadweaver2002

It is all about accountABILITY. If the company can point to the MINIMUM requirements and say "see, we complied" and get off, then why SHOULD THEY go further? I say hold them accountable EVEN IF they comply, and more so if they don't.

Neon Samurai

The company will be minimally compliant because not being so may cost more in lost data, fines, or lawsuits. The company will only be minimally compliant because being anything more increases the cost. The path of least expenditure: it's all about the profit margin and minimizing anything which cuts into it. Hurray, corporate law.

seanferd

not being in the U.S. :D

dawgit

But I didn't think they'd have the same sense of humour as I do. So I usually just keep it to myself. (Any fix I might manage to come up with, that is; the bug I forward to them, then wait... and wait... to see if they came up with something close to what I did.) -d

seanferd

The garage should not attempt to fix defects or design flaws in an auto. They may not be able to do it correctly, and why should the owner pay for a repair to something that was the manufacturer's fault? In software, if it is open source, go ahead and send your notification and patch to the software maintainers; they'll appreciate it, and they will check it, make any necessary corrections, patch their software or add the patch to their repository, and notify users. For closed source software, you had better darn well be one of the few who has legal access to the source code or the right to decompile the binaries. Don't ever tell, for example, MS that you've reverse engineered their software and written a patch for them. If you do, look out.