In December 2003, someone made an unauthorized change to the Linux kernel source, purposely creating a root exploit. Had the flaw not been discovered and corrected so quickly, it could have become a serious problem.
One reason the discovery was so speedy is the management process for the Linux kernel's source code. Thousands of people contribute to the Linux kernel, and perhaps several dozen people work on different portions of it at any given time.
The open availability of the Linux source code also makes it possible for someone to identify, or purposely create, a weakness. Anyone can examine the source for vulnerabilities, or, in this case, deliberately insert an exploit.
But getting that code back into the master copy of Linux is a pretty big undertaking. The number of contributors requires a process to eventually synchronize all of the changes with the master system.
Surprisingly, a commercial source code management system, BitKeeper from BitMover, manages the master Linux kernel source, and the company also hosts the current Linux kernels. Because of how BitKeeper works, it isn't possible to modify files directly without someone taking notice.
The people (or individual) who tried to modify the Linux kernel did so by exploiting a different source code control system, not BitKeeper. They made the change directly on the file system of that other system, bypassing the checks and balances normally found in a shared source code control system. In addition, they attributed the change to a long-time Linux kernel contributor.
But someone quickly discovered the changes and fixed them, partly because the Linux source code is available to everyone for review. It took less than a day to find and correct the exploit, which was simply two lines of code. Two lines were all that was necessary to compromise an entire operating system.
I'm sure that there will be quite a lot of hype about the "dangers" of open source software, specifically the "insecurity" of Linux, in the weeks to come. But before you buy into the marketing fodder, know that putting backdoors into software occurs all of the time—whether it's open source or closed source.
For example, take the discovery that an older version of Borland's InterBase contained a backdoor, which also made its way into the open source version of InterBase, called Firebird. But an open source contributor was not the culprit; the introduction of the backdoor code to InterBase occurred at Borland.
Developers have since removed the code from the open source Firebird project, and Borland subsequently released a "fixed" version of InterBase. The vulnerability had existed since 1994.
I'm sure that the search for the parties responsible for the attempted exploit of Linux is well under way. Whether this incident becomes ammunition in the closed source versus open source debate remains to be seen.
Just remember that the only way to determine if someone has infiltrated a program is to have access to the source code. It doesn't take a lot of effort to put a backdoor into a program. However, if people can see the source code, there's a much better chance that someone will find the exploit—even if it's only two lines of code.
Jonathan Yarden is the senior UNIX system administrator, network security manager, and senior software architect for a regional ISP.