Any security professional worth his salt should be familiar with Kerckhoffs’ principle, which states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. The same concept was expressed by Shannon’s maxim, “the enemy knows the system”. In either case, the implication is clear: Don’t rely on obscurity for security.

The term “security through obscurity” has become a pejorative one in professional security circles. As many people describe it, it refers to hiding the details of a set of security procedures because those procedures aren’t strong enough to stand on their own. One might define security through obscurity as security that relies on the stupidity of the enemy — which is generally regarded as a bad idea.

There are two sides to the “security through obscurity” coin:

  1. Intentional Security Through Obscurity: Security through obscurity may refer to an intentional act of trying to maintain or strengthen security by keeping security policies and procedures secret. This approach to security is behind such common vendor behavior as attempting to keep any and all vulnerability discoveries secret until after the vendor has the opportunity to release a patch (and spin the story to make the vendor sound good, of course). This occasionally has the effect of actually punishing security researchers for doing their jobs, and is generally more of a means of protecting the vendor than the end user. When security professionals talk about “security through obscurity”, this is usually what they mean.
  2. Accidental Security Through Obscurity: In a more casual sense, the term “security through obscurity” is sometimes used to refer to the idea that a less well-known, less common, and thus less inviting target appears more secure statistically, even if it is not more secure technically. This is the concept behind statements commonly made on the Microsoft Windows side of the Windows/Linux security debate, such as “Linux will have just as many security problems as Windows if it ever becomes as popular.” The argument is perhaps expressed more plainly by another formulation of the same idea: “Linux only looks more secure because it’s so unpopular that nobody bothers to attack it.”

There does, in fact, often seem to be a correlation between the security of an operating system and its popularity:

  1. MS Windows suffers a greater statistical incidence of breaches than MacOS X.
  2. MacOS X suffers a greater statistical incidence of breaches than (most) Linux distributions.
  3. Linux distros tend to suffer a greater statistical incidence of breaches than FreeBSD.
  4. FreeBSD suffers a greater statistical incidence of breaches than OpenBSD.

Correlation does not imply causation, however — and, even if it did, one could not be certain based on that data alone which way the causation ran. We do not know, based on nothing more than a correlation, which of the following is true:

  1. Does the popularity of MS Windows make it a bigger target, thus leading to a greater statistical incidence of security breaches?
  2. Does a poor technical design with regard to security contribute to greater popularity for MS Windows?
  3. Is there some single cause of both greater popularity and poorer technical security design?
  4. Is there some single cause of both greater popularity and higher profile as a target aside from popularity itself?
  5. Is this apparent correlation all the result of a biased sampling of operating systems?

Point 1 is the accidental obscurity argument. Point 2 suggests that secure design interferes with design that builds market share, echoing the suggestion sometimes offered that security and usability are to some extent incompatible with one another.

Point 3 might support an argument that making technically correct design decisions secondary to the mandates of a vendor’s marketing department is the real cause of reduced security for major vendors, as well as the real cause of those vendors’ software gaining significant market share. There is no implication here that marketability and security are incompatible — only that the people making decisions are good at making decisions for marketability and bad at making decisions for security. This seems to match the observations of many developers who are frustrated with their work environments, as well as those of people who grow frustrated with the direction of certain Linux distribution projects as those projects focus increasingly on “user friendly” operation, often at the expense of concerns they consider more technically correct.

Point 4 is a bit difficult to define clearly. It differs from point 3 in that it still assumes, like point 1, that the “size” (one might say “footprint” or “profile”) of the target is the primary determining factor in security breach statistics, rather than purely technical design characteristics. It resembles point 3, however, in that it does not establish a direct causal relationship between popularity and security breach statistics. Depending on the specific form of target profile used to justify this hypothesis, it may end up supporting the notion that technical design characteristics are a more significant factor than popularity, or the opposite; most likely, though, it would support neither in particular.

Point 5 is sort of an “escape clause”. It is the most direct route to invalidating any connection between the popularity of MS Windows and its increased security breach statistics as compared with other OSes. One could simply point to other OSes, not commonly considered in such comparisons, as exceptions to the perceived trend; if there are enough exceptions, the trend itself might be shown to be statistically insignificant.
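To make the correlation-versus-causation point concrete, consider a quick sketch, nothing more than a toy model in the spirit of point 3, with every name and number invented for illustration. A single hidden factor drives both popularity and breach counts; the two observed variables end up strongly correlated even though neither causes the other, and since Pearson’s r is symmetric, the statistic cannot even suggest a direction of causation:

    import random

    random.seed(0)

    # Toy model in the spirit of point 3: one hidden factor (say,
    # marketing-driven design priorities) drives BOTH popularity and
    # breach incidence. Neither observed variable causes the other.
    hidden = [random.gauss(0, 1) for _ in range(1000)]
    popularity = [h + random.gauss(0, 0.5) for h in hidden]
    breaches = [h + random.gauss(0, 0.5) for h in hidden]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    # Pearson's r is symmetric: the value is identical (around 0.8 here)
    # no matter which variable you treat as the "cause", despite there
    # being no causal link at all between the two observed variables.
    print(pearson(popularity, breaches))
    print(pearson(breaches, popularity))

Real breach statistics are messier than this, of course, but the limitation is the same: the numbers alone cannot distinguish points 1 through 4.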

I’ve examined all these possibilities, and a few more that are less obvious than these, at some length. I intentionally and constantly challenge my own beliefs about (and understanding of) security principles. Where security is concerned, it is my opinion that it is better to be right than to be perceived as being right, and even when I debate matters of security with someone I am always looking for signs that any opposing debaters might be right due to an insight I’ve missed. As things currently stand, however, I find that the evidence and the logical principles that apply seem to support the theory that popularity overshadows technical characteristics in its impact on security only below a certain threshold: the point at which popularity becomes great enough to matter at all.

If your system is so unpopular that someone who wants to breach security simply cannot find a vulnerability without using reverse engineering and fuzzing techniques to find it himself, then popularity is a factor in determining the actual security of the system for purposes of deciding whether it is acceptable to use. It is doubtful whether even a system as rare as Plan 9 fits into this category, let alone one with such widespread deployment as Linux. Anything more popular than that absurdly low threshold (something rarer even than Plan 9) suffers from the wide availability of information and established techniques for finding and exploiting the common vulnerabilities characteristic of that system.
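To illustrate what “finding it himself” entails at its most basic, here is a minimal fuzzing sketch in Python. Everything in it is hypothetical: fragile_parser merely stands in for whatever input-handling code an attacker might target, and real fuzzers are far more sophisticated than random strings in a loop.

    import random
    import string

    def naive_fuzz(target, trials=10000, max_len=64):
        """Feed random strings to `target`, collecting inputs that crash it."""
        crashes = []
        for _ in range(trials):
            data = "".join(
                random.choices(string.printable, k=random.randint(0, max_len))
            )
            try:
                target(data)
            except Exception as exc:  # treat any unhandled exception as a "crash"
                crashes.append((data, exc))
        return crashes

    def fragile_parser(s):
        """Hypothetical stand-in for whatever input-handling code is attacked."""
        key, value = s.split("=", 1)  # fails on any input lacking "="
        return {key: int(value)}      # fails on any non-numeric value

    found = naive_fuzz(fragile_parser)
    print(f"{len(found)} crashing inputs found")

The point is that none of this requires any published advisory or prior familiarity with the target; patience and automation substitute for popularity-driven information.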

OSes like OpenBSD, FreeBSD, and major Linux distributions are all well within the range of popularity where obscurity does not provide security, particularly considering the similarities between these systems, the commonality of software between them, and their ubiquity as Internet-connected server systems. Couple this with the fact that — in the case of open source projects like Linux distributions and open source BSD Unix systems — the matter of security through visibility is a significant factor, and the accidental security through obscurity argument starts looking pretty thin.

Let’s just assume for a moment that you have some staggering, undeniable argument, sublime in its logic and rock-solid in its evidentiary support, that the conclusions in the above paragraph are inaccurate. Let’s just assume that you know The Secret Proof that the only thing that makes an OS like Debian GNU/Linux or OpenBSD, or even OpenVMS, more secure than MS Windows is its relative obscurity in the home desktop computer market. Just for argument’s sake, I’ll go ahead and assume that such a rebuttal to the “security through visibility” and “security through obscurity doesn’t work” arguments actually exists. What then?

Well . . . then the question becomes:

Why does it matter if that’s the reason something like NetBSD suffers fewer security breaches per system in play than MS Windows, or even MacOS X? Isn’t the important factor, for security purposes, that a system is less likely to be breached?

If obscurity works, use it.