Design simplicity is an important element of open source security

Complexity can kill some of the security benefits of open source software development, especially for small projects.

In the design of a system, one of the strongest influences on that system's security is the complexity of its design. Studies suggest that for most nontrivial software there is a nearly linear, direct relationship between the number of lines of code and the number of bugs in the software, all else being equal. Right away, this tells us that if we can, we should design simpler software.

That "all else being equal" qualifier is an important part of the whole situation, however. Playing golf with your code -- trying to complete a program in as few (key)strokes as possible -- changes the conditions within which the relationship between line count and bug count takes shape. This is in part because golfing your programs down to minimal source code size can make the code much harder to read, and thus harder to maintain, improve, and debug.

So long as the structure of the code is still easy to follow, and the way it is laid out is optimized for readability and understandable design, succinctness can actually serve to make the code easier to read (and thus to maintain, improve, and debug) instead. The lesson to take from this is that trying to keep your program source code line count down is an activity that should be pursued by trying to simplify the design of the program, and not by simply reducing the number of characters in the source files.
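To make the contrast concrete, here is a small hypothetical example -- not from any real project: two C functions that count the set bits in an integer using the same algorithm, one golfed down to minimal keystrokes and one laid out for readability.

```c
/* Golfed for minimal character count: correct, but hostile to readers. */
int pc(unsigned n){int c=0;for(;n;n&=n-1)c++;return c;}

/* The same algorithm (Kernighan's bit count), laid out for readability. */
int popcount(unsigned int n)
{
    int count = 0;
    while (n != 0) {
        n &= n - 1;     /* clear the lowest set bit */
        count++;
    }
    return count;
}
```

Both are short, but only the second can be audited at a glance -- and that kind of succinctness is the kind that actually helps reviewers.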

Aside from the merely human effects of complexity on security, there are also the technical reasons. More complex designs mean more complex interactions between different parts of a program. As these interactions increase in complexity, the potential for unforeseen and unsafe effects to emerge from that complexity grows. The best way to manage this kind of problem when trying to design a system securely is to try to simplify the way different parts of the program interact with each other, and with the data the program handles.
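As a deliberately tiny illustration of that last point (hypothetical code, not drawn from any particular project), compare two ways of expressing the same operation: one couples routines through shared mutable state, so every interaction is implicit, while the other makes the interaction explicit through a parameter and a return value.

```c
/* Implicit interaction: coupled through shared mutable state. Any part of
   the program can change running_total, so reasoning about this routine
   means reasoning about everything else that touches the global. */
static int running_total = 0;

static void add_to_total(int x)
{
    running_total += x;
}

/* Explicit interaction: the data flow is visible at the call site, and
   nothing outside the function can affect the result. */
static int add(int total_so_far, int x)
{
    return total_so_far + x;
}
```

The second form keeps each interaction local and auditable, which is exactly the kind of simplification that limits unforeseen emergent behavior.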

These are factors of the influence of complexity on security that apply to any software design, whether designed by a number of large teams working on separate parts of the complete system or developed by a single motivated code hacker, whether developed using a waterfall model or an agile model, and whether open source or closed. Complexity and simplicity can have significant effects on the potential security benefits of the open source development model in particular, however.

Open source software offers a great many potential security benefits. Few, if any, of them offer any guarantees, but the potential is often realized as a strong probability of greater security. The power of dilettantism, also known as the principle of "many eyes", is of particular interest when considering the complexity of your software design.

Dilettantes are dabblers, casual participants who engage in some activity only when the spirit moves them, as opposed to professionals or passionate, dedicated volunteers. In the case of open source software, a dilettante might be someone who decides to look at the source code of an application mostly to see if there are funny comments in it and, while there, reads a little of the code itself; someone who just wants to tweak a regular expression or two to improve a program feature; or someone who comments out a section of unneeded code for personal use, to get better performance or to remove an annoying feature. Dilettantes also sometimes feel inspired to help out by investing a few hours in improving something they use regularly, without getting so involved as to become regular contributors.

The "many eyes" theory of open source software development suggests, in the words Eric Raymond attributed to Linus Torvalds, that "given enough eyeballs, all bugs are shallow." This is trivially true, in and of itself, for the same reason that an infinite number of monkeys banging away semi-randomly on typewriters are sure to eventually produce the complete works of William Shakespeare. Unfortunately, we do not have infinite eyes looking at our source code trying to fix bugs, so we need to think about just how much benefit our software gets from the broad accessibility of source code, and whether we actually get any measurable security benefit from it.

A number of factors play into how much attention our software gets, not only from dedicated regular contributors or corporate users with professional developers assigned to the job of ensuring the software meets the companies' needs, but also from those legendary legions of dilettantes with their collectively copious free time. The most obvious is popularity, of course; the more popular the software, the more people who have real software development skills will be using it, and thus the more potential contributors there are who have a stake in the success and quality of the software. The bigger the open source project gets, the more prestige there is in getting one's name in the core contributors' roster as well -- which may actually serve as a nice resume bullet point. What Unix software development shop would not be interested in hiring someone who contributed to the core code of a major subsystem of the Linux kernel? Obviously, such a person knows something about developing software for Unix.

The bigger a project is, the bigger the codebase can be and still attract a lot of attention from potential contributors. Bigger corporations and other organizations will be likely to use the software, and potentially modify it for their own purposes. The flip side of this is that smaller projects will generally attract less attention, and thus fail to benefit as much from that "many eyes" theory of software security.

Dilettantes are unlikely to spend a lot of time poring over source code looking for bugs to fix. Even if they did, for the most part security problems are not discovered by people reading source code. If you are a programmer with any nontrivial experience at the craft of coding, you have probably run across code like this before:

```c
while (arity == 1 || level = 'high') {
    refresh_register(i);
    i++;
}
```

You have also probably gone right past an error like the one in that while condition without noticing it, possibly for hours at a time while trying to figure out why the code is not behaving as expected. Hint: == checks for equality, but = assigns a value. Thanks to that assignment, the condition will always evaluate as true.
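Once spotted, the fix is trivial. The following sketch is hypothetical scaffolding -- refresh_register is a stub, and the multi-character constant 'high' is replaced with a single character for the sake of a portable example -- but it shows that with == restored, the condition can actually evaluate to false and the loop can terminate:

```c
static int refresh_count = 0;

/* Stub standing in for whatever the real refresh_register() does. */
static void refresh_register(int i)
{
    (void)i;
    refresh_count++;
}

/* Corrected condition: == compares, where the buggy version's = assigned.
   Returns the number of iterations performed. */
static int run_refresh_loop(int arity, char level)
{
    int i = 0;
    refresh_count = 0;
    while (arity == 1 || level == 'h') {
        refresh_register(i);
        i++;
        if (i > 1000)   /* guard: this hypothetical loop has no other exit */
            break;
    }
    return i;
}
```

With the original assignment in place, the condition would have been true on every pass regardless of the inputs.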

The harsh truth of the matter is that even when knowledgeable, experienced developers are looking for problems, they often look right past the most obvious and simplistic problems without even noticing them. Ultimately, it is most productive to think of source code not as the place to find bugs, but as the place to fix them. The methods for finding security vulnerabilities are somewhat different, and typically do not require the source code itself. Even when looking at code is helpful in finding a vulnerability, that purpose is often served just as well by disassemblers, hex editors, and other tools of reverse engineering as by the pre-compilation source code itself.

Once a vulnerability is discovered, however, delving into source code is the way to fix the problem. This is also where dilettantes are more likely to help out. If there is a known problem that needs fixing, the casual passer-by is more likely to actually try to pitch in and help. When they know what needs fixing, dilettantes experiencing a moment of inspiration can go right to the problem and start working on it, but when they do not know what needs fixing, they are unlikely to put in the time and effort to see if there are vulnerabilities and other bugs as a prelude to fixing the bugs. The result of having a lot of potential dilettantes handy is that, when bugs are found, they tend to get fixed much more quickly thanks to the free labor offered by helpful bystanders with coding skills.

Complexity is a huge problem for this kind of helpful dilettantism, however. Because complexity increases the time and effort that must go into understanding and altering source code in helpful ways, more-complex software design imposes higher barriers to entry for dilettantes to get involved in contributing. Larger, more popular projects might be able to get by despite significant complexity, thanks to the larger pool of prospective developers and the greater interest people are likely to have in making the software as good as it can be, but even they will suffer somewhat for their complexity because of the daunting effect it has on the interest of dilettantes.

For a smaller open source project, this kind of barrier to entry for dilettantes can be downright fatal to the probability of the "many eyes" theory of software security ever coming into play. An IRC bot that simulates the effects of rolling dice for roleplaying games, written during the 2010 Christmas vacation period by a tech-savvy gamer, is far less likely to attract the attention of legions of dilettante contributors than qmail (a popular mail transfer agent), for instance. If your plan for securing your software is to sprinkle some magical open source dust on it, you had better be working on a very popular project with a simple design that is easy for developers to reason about, and cleanly formatted source code.

This is of course not to say that the "many eyes" theory of software security is not valid. It certainly is. It just pays to take note of the other factors that play into the effectiveness of such security benefits of open source software development and to take steps to nudge those factors in beneficial ways toward encouraging those benefits. The most likely killer of the power of dilettantism for a small open source project is complexity, so keep your system simple or you are likely to get no more benefit from your obscure software's open source development model than you would from a closed source development model for the same obscure software.

There are still other benefits to be had, regardless of that power of dilettantism, of course -- but to ignore or even discourage such casual contributors from helping is a handicap nobody should be willing to impose on his or her software projects if it can reasonably be avoided. Keep your open source code, and your open source software design, simple; it may pay off when it matters most.

About

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
