The many eyes that matter for security are the friendly eyes

Security through obscurity is an illusion, and formal principles of information security have codified this fact in modern terms for more than a century.

In his recent TR Out Loud article, Linux vs. Windows: Suspending logic and reason for blind faith, Donovan Colbert argues that the "many eyes" explanation of the security benefits of open source software is logically weak. The "many eyes" theory of security itself comes from three sources.

1. Linus' Law

One of the sources for the "many eyes" concept is what Eric S. Raymond calls Linus' Law, referring to Linux creator Linus Torvalds:

Given enough eyeballs, all bugs are shallow.

In more specific terms:

Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

Linus' Law does not pertain specifically to security vulnerabilities. Rather, it applies to all software bugs, including security flaws. A number of studies have suggested that code review reduces bug rates in released software, and some also show a correlation between low bug rates and open source development processes. While these studies do not necessarily prove that the "many eyes" principle is the key factor, there is at least a strong relationship in practice between open development and safer, less buggy software.

2. Kerckhoffs' Principle

More directly relevant to matters of security is Kerckhoffs' Principle. The famous 19th century cryptographer Auguste Kerckhoffs wrote foundational articles on military cipher design, and from those articles arose what may be the single most widely known principle of information security:

A cryptosystem should remain secure even if everything about it other than the key is public knowledge.

Claude Shannon, now known as the Father of Information Theory, held that all developers and users of secure systems should proceed from a single assumption. That assumption became known as Shannon's Maxim, and it is a reformulation of Kerckhoffs' Principle:

The enemy knows the system.

The upshot is that the security of a system's design should in no way depend on the secrecy of the design itself. Because system designs can be intercepted, stolen, sold, independently derived, reverse engineered from observation of the system's behavior, or simply leaked by incompetent custodians, no reasonable person can assume a design will remain secret.
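
To make the principle concrete, consider a minimal sketch in Python, using only the standard library. The key and message here are invented for illustration; the point is that the algorithm (HMAC with SHA-256) is an openly published standard, so the attacker is assumed to know every detail of the system except the key:

    import hashlib
    import hmac
    import secrets

    # The "system" is public knowledge: HMAC and SHA-256 are openly
    # specified standards. Per Shannon's Maxim, assume the enemy knows
    # all of this. Security rests entirely in the secrecy of the key.
    key = secrets.token_bytes(32)

    message = b"attack at dawn"
    tag = hmac.new(key, message, hashlib.sha256).digest()

    # A verifier holding the same key can authenticate the message; an
    # adversary who knows everything except the key cannot forge a
    # valid tag for a tampered message.
    assert hmac.compare_digest(
        tag, hmac.new(key, message, hashlib.sha256).digest()
    )

Publishing this code costs its users nothing. Only the key must stay secret, which is exactly what Kerckhoffs and Shannon prescribed.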

3. Security through visibility

As explained in an earlier TechRepublic article, the principle of security through visibility connects three factors:

  • improved attention to secure design of a system
  • the benefits of peer review
  • widespread access to the design of a system

Widespread access to a system's design means more people can review it, and if some fixed percentage of the people with access are inclined to review the design, then the more accessible it is, the more people will review it. Given that reviewing the design of a system contributes to discovering and fixing vulnerabilities, as shown in studies on the effectiveness of code reviews as a tool for reducing software bug rates, increased review means increased security benefits.
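
As a back-of-the-envelope illustration of that proportionality, here is a trivial sketch; the review rate and audience sizes are invented numbers, not measurements from any study:

    # Hypothetical: if a fixed fraction of the people with access to a
    # design actually review it, reviewers scale linearly with access.
    review_rate = 0.02  # assume 2% of those with access review it

    for audience in (50, 5_000, 500_000):
        reviewers = audience * review_rate
        print(f"{audience:>7,} with access -> ~{reviewers:,.0f} reviewers")

The same review rate yields a lone reviewer for a closely held design and thousands of reviewers for a widely published one.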

Pulling it all together

The upshot is that "many eyes" is a gross oversimplification. Taking only the minimal summary of the ideas involved (that many eyes on the source of a piece of software ensure greater security), one might be excused for leaping to the conclusion that it is a flawed argument. The more complete explanation, however, explicitly supports the open source approach as consistent with rigorous principles of security, and implicitly leads the thoughtful reader to further appreciation of the open source model's benefits for security.

For instance, while there is some benefit to the process of detecting security vulnerabilities in having access to the source of a software system, there is not nearly so much as people imagine. Reverse engineering techniques serve quite well to discover the most common types of vulnerabilities. An entire class of software known as "fuzzers" exists, its sole purpose being to throw abusive input at a target application and observe its behavior under that stress to quickly detect potential security vulnerabilities.
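
To illustrate the idea, here is a toy fuzzer in Python. The parse_record target and its planted bug are hypothetical stand-ins invented for this sketch; real fuzzers such as AFL and libFuzzer are vastly more sophisticated about input generation and coverage, but the core loop (mutate an input, run the target, watch for crashes) looks much like this:

    import random

    def parse_record(data: bytes) -> int:
        # Hypothetical record format: a length byte, then the payload.
        if len(data) < 2:
            return 0
        length = data[0]
        payload = data[1:]
        # Planted bug: trusts the length byte instead of validating it
        # against the actual payload size, so a corrupt length reads
        # past the end of the payload.
        return sum(payload[i] for i in range(length))

    def mutate(seed: bytes) -> bytes:
        # Flip a few random bytes in a known-good input.
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    seed = b"\x05hello"  # well-formed: length byte 5, five-byte payload
    for _ in range(10_000):
        sample = mutate(seed)
        try:
            parse_record(sample)
        except Exception as exc:
            # A crash flags a potential vulnerability to investigate.
            print(f"crashing input {sample!r} triggered: {exc!r}")
            break

Note that nothing in this loop requires the target's source code: the fuzzer only feeds the program input and observes its behavior.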

Pretty much everybody has access to fuzzers and the other tools and techniques of vulnerability discovery, and anyone with some coding skill can write their own, because a basic fuzzer is not a difficult type of software to create. These are the tools that malicious security crackers use every day to find a way to exploit your favorite piece of software, whatever it might be. That is the ice cream, the slice of pie, and the whipped cream on the pie. Source code is just the cherry on top: largely unnecessary, but nice to have. Where access to source code becomes much more important is in figuring out why a particular vulnerability exists, and how to fix it.

The exception is source code so amateurish, so badly designed, that only the most inexperienced novices (and the most uninterested day-coders) would ever make the sorts of security mistakes likely to be found in it. In that case, one of two things happens:

  1. The source code is widely available. Good guys and bad guys both have increased access to it, and benefit from it. For a short time, there may be a flurry of exploits as the good guys race to get the problems identified and patched, after which the worst security vulnerabilities are addressed and the value of the source code to malicious security crackers is significantly diminished.
  2. The source code is kept secret. Only the inexperienced, unskilled amateurs who wrote the bad source code, and perhaps a very small number of somewhat more competent programmers, have access to it. Malicious security crackers are not immediately aware that the software was designed with all the airtight qualities of a screen door, and the better programmers "on the inside" are probably similarly unaware that their coworkers are incompetent and that whole areas of responsibility have been badly neglected where security is concerned. The vulnerabilities remain in place until the bad guys start to notice the security issues, and start to exploit them.

The major difference is that in the first case, openness ensures that the major problems tend to arise more quickly, and to be addressed more quickly, resulting in a more secure system in the long run. Meanwhile, in the second case, a stubborn adherence to the fallacy of security through obscurity results in an early veneer of security, a superficial and mistaken feeling of safety, that merely masks the ticking time bomb of a metric crapton (for lack of a better term) of security issues that will come back to haunt you when least expected.

The Donovan Colbert Argument

Donovan Colbert asked some rhetorical questions and made some statements suggesting that the "many eyes" approach to open source security does not constitute a valid argument.

Who are we afraid of? Not the privileged eyes who would bury and hide security flaws in code.

In point of fact, that is someone to fear. If we are smart, we fear their incompetence, and we fear their tendency to cover up security vulnerabilities rather than rush to fix them. They are, after all, developers of software whose decision makers believe the source code contains something that must be hidden from its users.

We're afraid of people on the outside.

This is true. It is only part of the story, however. We are, in fact, in a race with them, and while depriving them of source code may slightly hinder them, that is far from the only tool at their disposal. Denying that tool to friendly security researchers, however, eliminates many of our allies in this race, allies who could give us a substantial boost in our ability to beat the malicious security crackers to the punch. The likely best-case scenario when we try to keep the source secret (and hope it does not leak somehow) is no significant change in the pragmatic, in-effect security of the system. The worst-case scenario is that the bad guys find the problems first, and the good guys, badly outnumbered, must play a desperate game of catch-up while the user base is vulnerable and repeatedly victimized.

In fact, by itself, the "security through obscurity" security model is also a valid, inductive argument.

It is easy to make statements like this when one has not really investigated the matter in more depth than crafting facile counterarguments to discussion forum sound bites. Looking into it more deeply, with an understanding of the history, founding principles, most thoroughly tested practices, and most rigorously constructed arguments of information security, yields a far more sophisticated, nuanced understanding of the situation. The deeper one digs, the more obviously superficial it becomes to assume that keeping a system's design secret guarantees a meaningful disadvantage for the "enemy".

There is, in short, a very good reason that no cryptosystem is considered secure by any well-educated, respectable security expert unless and until it has been examined in excruciating detail through extensive peer review. The most accomplished security experts in the world not only do not mind sharing the designs of their secure systems with others; they demand it, and they distrust any system whose design is kept secret. That goes for everyone from Auguste Kerckhoffs to Bruce Schneier.

The fact of the matter is that those who believe keeping the source code of a software system secret can keep it secure are at best considering an overly simplistic view of the circumstances, and only about half of it at that. An experienced developer without much knowledge of security principles is only knowledgeable enough to be dangerous; a security professional who does not understand the principles of software design is knowledgeable enough to be dangerous to everyone willing to listen.

Someone who is neither a student of security principles nor a developer with an understanding of system architecture is prone to making naive mistakes when trying to assess the security benefits of a particular approach to software development, especially if they leap to easy conclusions or base an analysis on the marketing material of proprietary, closed source software vendors.

Donovan Colbert appears to have experience and knowledge of both software design and security. A third factor comes into play here, however: that of taking a well-reasoned, thoughtful approach that goes beyond the simplest correlative relationships between the most superficial details of a problem. I see none of that in his response to the ideas he labels the "many eyes" security model. His intent may have been honest, but his perspective lacks depth.

Ultimately, his argument is that both "many eyes" and "security through obscurity" are unproven assumptions. In truth, however, the "many eyes" theory of security review is the basis of the formal system of peer review that undergirds the entire academic information security field, while "security through obscurity" is wishful thinking that has been regarded as effectively disproven, on grounds of both logic and practical evidence, for more than a century.

Acknowledgment and dedication

I owe fellow TechRepublic contributor Sterling Camden thanks for bringing Colbert's article to my attention, and I dedicate this — as with most of my articles, even if I do not do so explicitly — to the readers I hope will not be led astray by the misconceptions, errors in judgment, and misrepresentations of fact that infest the public picture of IT security. It is because I recognize the importance of their security and the dangers of misinformation about security that I write articles like this.


Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
