Understanding the market for buggy software: Complexity blurs line between bugs and features

The prevalence of bugs in mainstream software is difficult to deny, but the reason for it is poorly understood and subject to debate.

The question of why people accept buggy software, but not other products with similar levels of flawed design, is perhaps not a valid question. It is unrealistically optimistic to think that people would not accept "bugs" in products such as cars. People do not accept design flaws in software that, in practice, end up killing people in very obvious ways — and the same is true of automobiles. On the other hand, most American cars seem to start developing major mechanical issues somewhere around fifteen or twenty thousand miles.

In both cases, there are people who do not accept design flaws such as a manual transmission that always shifts roughly between first and second gear after the first ten thousand miles, or a Web browser with memory leaks. The major difference seems to be that in the automobile world people who care about reliability buy Japanese or German cars, while many people who care about reliability in the software world just complain a lot and refuse to accept the notion that switching to a competing software system is a reasonable option.

There are further problems with the question, and with trying to answer it. For instance, a driver is quite qualified to recognize that his car is backfiring a lot, or that her car's acceleration has become sluggish. The situation with software has been made much more complex, though, thanks to the upgrade business model so well exemplified by Microsoft Windows.

Thanks in part to the fact that computing hardware advances so rapidly, five years seems like a very long time between operating system versions to the common consumer, whereas cars that do not last twice as long are consigned to the junk heap of history along with the Yugo. This makes it easy for software vendors to convince people to buy upgraded versions of their products on a regular basis, ensuring a steady and lucrative revenue stream. For this to work, however, there have to be upgrades the common consumer can recognize, which means that for purposes of supporting that business model the single most important thing to develop is superficial "features".

This results in deprioritizing real security, real stability, and real functionality in favor of the appearance of security, stability, and functionality — especially since there is no equivalent to dramatic crash test footage to use in software marketing. True security is architectural privilege separation that cannot generally be circumvented; true stability is a system that behaves in a deterministic way, so that every single time you perform the same action the system does the same thing; real functionality is new ways to do things that improve your productivity and allow you to use the system to accomplish tasks that were not previously possible.
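
To make the difference concrete, consider what architectural privilege separation looks like at the level of code. The following is a minimal sketch for a Unix-like system, not a prescription; the "nobody" account and the imagined privileged step are illustrative assumptions:

    /* Minimal sketch of privilege separation in a Unix-like daemon:
     * do the one thing that requires root, then permanently drop to
     * an unprivileged account before touching untrusted input.
     * The process must be started as root for the calls to succeed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <pwd.h>
    #include <grp.h>

    int main(void)
    {
        /* ...perform the one task that genuinely requires root here,
         * such as binding a port below 1024... */

        struct passwd *pw = getpwnam("nobody"); /* illustrative account */
        if (pw == NULL) {
            perror("getpwnam");
            return EXIT_FAILURE;
        }

        /* Drop supplementary groups, then the group ID, then the user
         * ID; dropping the user ID first would leave the other calls
         * without permission to succeed. */
        if (setgroups(0, NULL) != 0 ||
            setgid(pw->pw_gid) != 0 ||
            setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            return EXIT_FAILURE;
        }

        /* From this point on, a bug that allows arbitrary code
         * execution runs as "nobody" rather than root, regardless of
         * what any pop-up dialog asked the user. */
        printf("now running as uid %d\n", (int)getuid());
        return EXIT_SUCCESS;
    }

The point is the structure rather than the specific calls: the boundary is enforced by the operating system, and no later code path can talk its way around it.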

The appearance of these things, without their substance, is achieved by piling up features like a mad bricklayer to trick the user into believing he or she is getting such improvements: pop-up dialogs that ask you to enter a password to perform a given task, providing the appearance of privilege separation; certifying drivers so that those drivers that do not bear an OS vendor's logo on the packaging will not be used by the system, providing the appearance of taking steps toward protecting system stability; a card-deck graphical representation of an application pager, giving the impression of a new way of organizing and managing open applications without truly changing the user's actual workflow or productivity.

This entire software development, marketing, and business model creates a situation in which complexity has to grow for the user to feel like the new version of the software is better than the old, even if the user is not going to find the vast majority of these features the least bit relevant. The truth is that reducing complexity by focusing on what the user actually needs would provide a better product for that person, but the industry is dominated by the notion that complexity is part of "improvement".

With this increasing complexity, we get more bugs. More to the point, though, people accept these bugs. A big part of the reason is that a system that grows ever more complex in its operation, even as it tries to appear less complex to use, becomes effectively non-deterministic: it second-guesses the user's intent and makes decisions on the user's behalf.

With the increase in features, we get a decrease in the predictability of the system's operation. Bugs manifest as behavior we did not expect, given the assumption that the system will work the way we want; the fact that intentional features also produce unexpected behavior confuses the issue. Ultimately, users have no simple, quick, and easy way to differentiate between bugs and features. By contrast, adding a six-disc changer to your car will not cause unexpected vehicle behavior like an exhaust leak; even if it did, you would notice fairly quickly, take it back to whoever installed it, and get the problem solved.

The desire to build market dominance motivates vendors like Microsoft, who deal in the business of increasing complexity, to pretend there are no bugs: they claim that users are just using the system incorrectly, that users should defer to their computers when there are decisions to be made and simply accept the system's behavior as correct, and that "it's not a bug, it's a feature!" Under such circumstances, to the extent that users allow themselves to be indoctrinated, it becomes nigh-impossible to differentiate between unexpected behavior resulting from bugs and unexpected behavior resulting from the system working "correctly".

In short, it seems obvious that one of the biggest reasons most software consumers simply accept buggy software is that, all too often, they have been subjected to an environment where they cannot reasonably be sure that anything is a bug at all. Many are afraid to even question a lot of the system's behavior for fear of being told that it's not a bug at all, and they have simply been using the system in the "wrong" way. Call it Redmond Syndrome, if you like — Stockholm Syndrome for computer users.

Among the reasons that vendors can get away with this are a few key points:

  • As C.A.R. Hoare pointed out, there are two ways to build software: you can make it so simple there are obviously no bugs, or you can make it so complex that there are no obvious bugs. The relevance of this to the pursuit of a business model dependent upon increasing the complexity of software should be clear.
  • Closed source software is more difficult to examine to determine what is necessary and what is not; what is a genuinely intended feature and what is not; what could reasonably be separated from the core system as a removable module; whether changes in a new version are real functionality improvements or merely superficial features; and even whether the software is acting against the user's best interests.
  • Vendors that grow too large and develop too strong a stranglehold on their market niches also gain the ability to behave anticompetitively, squeezing competitors out of the market through underhanded tactics that have nothing to do with their customers' best interests. For instance, in cases where it is nearly impossible to eliminate MS Windows from an organization's networks, Microsoft might offer deep discounts on licensing if the organization eliminates competing OSs instead. Similarly, pressuring hardware distributors to offer only MS Windows as the default install in exchange for cheaper licensing ensures that many consumers will just come to regard the OS as part of the computer, never questioning its ubiquity.

Let us pick out the parts of this that are particularly relevant to security:

  1. Complexity is the handmaiden of the common copyright-enforced business model in the software industry. Complexity both creates, and obscures, bugs. Any bug is a potential security vulnerability, from denial of service to arbitrary code execution.
  2. Real, architectural security is harder to build into something that already exists than superficial security features are, because the former requires refactoring and rewriting while the latter just involves tacking on some new code. For this reason, an upgrade-dependent business model has an obvious, necessary bias toward the latter: slapping band-aids on sucking chest wounds (see the sketch after this list).
  3. Verifiable, open source software can improve confidence in its quality amongst those who know how to make use of social network effects to gain some assurance of security, as well as those who know how to audit the software themselves, provided the software's quality is actually good. Conversely, closed source is generally necessary for a vendor to maintain customers' confidence despite poor quality that would be obvious if the source code were available.
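
For contrast with the privilege-dropping sketch earlier in the article, here is roughly what the superficial approach described in item 2 tends to look like; every name in it is hypothetical:

    #include <stdio.h>

    /* Hypothetical privileged action; it runs with whatever
     * privileges the process already holds. */
    static void delete_config(const char *path)
    {
        printf("deleting %s\n", path);
    }

    /* The tacked-on "security feature": a yes/no prompt. */
    static int confirm(const char *prompt)
    {
        char answer[8];
        printf("%s [y/N] ", prompt);
        if (fgets(answer, sizeof answer, stdin) == NULL)
            return 0;
        return answer[0] == 'y' || answer[0] == 'Y';
    }

    int main(void)
    {
        /* The prompt provides the appearance of protection, but any
         * code path that calls delete_config() directly bypasses it,
         * and the process never sheds a single privilege. */
        if (confirm("Really delete critical.cfg?"))
            delete_config("critical.cfg");
        return 0;
    }

Adding the prompt took a dozen lines and no redesign; making the same program actually safe would require restructuring which code holds which privileges and when, which is exactly the refactoring an upgrade-driven schedule discourages.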

Finally, of course, there is a bit of a chicken-and-egg problem that arises from a self-fulfilling prophecy of sorts. The feature-growth business model of software vendors increases the complexity of the software, and simultaneously depends on that complexity for its success. As that complexity grows, the job of securing the software becomes much more difficult. People therefore come to think of software security as "hard", want nothing to do with having to secure their own systems, and simply accept that there will be regular security failures in their software.

The end result is that we have an entire industry wherein failure is expected, accepted, and considered the norm, even though such failure is anything but inevitable in principle. Barring a miracle, alternatives to the current model will not make it into the mainstream until we open the eyes of the common consumer to the fact that such alternatives exist and are worth considering: alternatives that can improve the security, stability, functionality, and usability of our software.

As I said in my segment of Michael Kassner's article, and said again here, the answer to the question of why people accept buggy software is not simple. Entire books could be written about the subject, and it could be argued that such books have been written. For that very reason, this article only skims the surface, covering some of the reasons most relevant to security. Hopefully, it will motivate some readers to educate consumers in the software market to choose more wisely, and help others on the provider side break free of the problem of buggy software by examining and improving their business models.

About Chad Perrin

Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.
