
Security 101, Remedial Edition: Obscurity is not security

Chad Perrin reinforces his argument that obscurity is not security by defending open source security solutions against claims that they are inherently more vulnerable.

I know I've addressed this security issue before — many times, in fact. Apparently, it needs to be said again:

Obscurity is not security!

In TechRepublic's IT News Digest blog, Arun Radakrishnan wrote about Red Hat's decision to open the source of its security certificate system, in the article Does open sourcing security framework lead to more secure software?

In the article, he references not only Red Hat's announcement, but also a ZDNet post by Dana Blankenhorn, who decided to take on what he calls the "open source security meta-hole". Blankenhorn's comments imply that merely making the source code for a piece of software available somehow compromises that software's security. He never actually makes a case for that line of reasoning (probably because it rests entirely on assumptions, and not at all on any actual understanding of principles of security and software design), but he does link to a ZDNet UK article discussing the uninformed security concerns of Australian Taxation Office CIO Bill Gibson (not to be confused with speculative fiction author William Gibson) and the open source community's reactions to those concerns.

In a ZDNet Australia interview, Bill Gibson said:

We are very, very focused on security and privacy and the obligations that we have as an agency to ensure that we protect those rights of citizens' information in that respect. So, we've continued to have concerns about the security related aspects around open source products. We would probably need to make sure that we will be very comfortable — through some form of technical scrutiny — of what is inside such a product so that there was nothing unforeseen there.

In my experience, there are basically three types of people telling the world how to keep its computing environments secure:

  1. There are truly knowledgeable security experts such as Bruce Schneier and Phil Zimmermann, people who articulate security principles for the rest of us to help us understand how best to protect ourselves, and who develop legendary security solutions like the Blowfish cipher and PGP. These people universally understand one of the most basic, important principles of security, Kerckhoffs' Principle, which states that a cryptosystem should be secure even if everything about the system except its key is public knowledge. A reformulation known as Shannon's Maxim states:

    The enemy knows the system.

    The lesson to take from this is simple: the effectiveness of your security policy should not depend on the secrecy of the policy, because the policy can always be discovered or reverse-engineered. These are the security experts who understand the value of peer review. They tend to understand that the benefits of security through visibility far outweigh any unwarranted fear of losing the obscurity of the system. (A short code sketch of Kerckhoffs' Principle follows this list.)

  2. There are those supposed security experts who, regardless of whether they understand Kerckhoffs' Principle, exhort others to use systems whose implementation details are kept secret. The justification is that this secrecy somehow reduces the likelihood that someone will crack the system by examining its implementation details. These people are typically either plagued by a conflict of interest (they want to sell closed-source software, and can't do that while telling people their software would be safer as a popular open source project) or not nearly as knowledgeable as they think they are.

  3. There are, finally, people who hear some security-unconscious CIO's uninformed statements in an interview and run with them, without bothering to actually read up on the subject at all.

It's harsh, but it's true.
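
To make Kerckhoffs' Principle concrete, here is a minimal sketch using only Python's standard library. It is an illustration of the principle, not a recommendation of any particular scheme: the HMAC-SHA256 algorithm is completely public and peer reviewed, and the key is the only secret in the entire system.

    # Kerckhoffs' Principle in miniature: the algorithm (HMAC-SHA256) is
    # publicly documented; the key is the only secret.
    import hashlib
    import hmac
    import secrets

    key = secrets.token_bytes(32)  # the ONLY secret in the system

    def sign(message: bytes) -> bytes:
        """Produce an authentication tag with a fully public algorithm."""
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        """Compare in constant time, so the check itself leaks nothing."""
        return hmac.compare_digest(sign(message), tag)

    tag = sign(b"approve transfer #1001")
    assert verify(b"approve transfer #1001", tag)      # genuine message
    assert not verify(b"approve transfer #9999", tag)  # forgery rejected

An attacker who reads every line of this sketch, as Shannon's Maxim assumes, gains nothing: forging a valid tag still requires the key.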

Educate yourself. Understand that hiding the implementation details of your security system doesn't help anyone but the "bad guys": it prevents the "good guys" out there in the general public from helping you improve the system, while malicious security crackers go right on using the same reverse-engineering, vulnerability fuzzing, and stress-testing techniques they have always used to find chinks in the armor. Only the most obvious security issues in an implementation (like a complete lack of input validation in a typical Web application) are very easy to find by reading source code, and errors that simplistic can be found in moments by other techniques anyway.
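
Consider, for example, the classic missing-validation hole sketched below in Python, with a hypothetical users table standing in for a typical Web application's database. The flaw is obvious to anyone reading the source, but an attacker probing the running application with crafted input would find it just as quickly, no source code required.

    # A missing-input-validation flaw: obvious in a source review, and
    # equally easy to find by probing the running app with crafted input.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # hypothetical demo database
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    def lookup_unsafe(name):
        # Attacker-controlled text is spliced straight into the SQL.
        query = "SELECT * FROM users WHERE name = '%s'" % name
        return conn.execute(query).fetchall()

    def lookup_safe(name):
        # Parameterized query: the driver treats the input purely as data.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    print(lookup_unsafe("' OR '1'='1"))  # dumps every row in the table
    print(lookup_safe("' OR '1'='1"))    # returns nothing; no such user

Hiding the source of lookup_unsafe would not have protected it for an afternoon; opening it invites someone to point out the one-line fix.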

Not only does open source software provide a development process more likely to result in secure software, but it also places security software like GnuPG, Nessus, ClamAV, OpenSSH, WinSCP, and PuTTY in the hands of people everywhere who might otherwise never use them. As a security professional interested in helping as many people as possible better protect themselves from the malicious security crackers (and unscrupulous, privacy-invading corporations) of the world, I hold open source software near and dear to my heart. Because of that, I tend to get a little annoyed when people spread such nonsense as the notion that open source software is somehow inherently less secure.

Sure, maybe I'm biased, but in this case, it's because I value actual security over the mere illusion of it.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
