
Buggy software: Why do we accept it?

Consumers would never accept a car or other traditional good that is flawed, yet they are willing to buy software that is. Why do you think that is?

During one of the breakout sessions at TechRepublic's Live 2010 Conference this past week, I was questioning why we put up with software that has bugs and vulnerabilities. To IT-security types like me, it's a concern. Eliminate bugs and you shut the door on most malware.

After that particular breakout session, Toni Bowers, Head Blogs Editor for TR, and I talked about my concerns. She suggested that I pass what I learned on to you. So, here goes. I cajoled the "software-savvy" TR writers into answering the following question:

Consumers would never accept a car or other traditional good that is flawed, yet they are willing to buy software that is. Why do you think that is?

Here are their answers. I hope you find them as interesting as I do:

Chad Perrin

The question of why software vendors produce buggy software and why consumers accept it has no simple answer. The reasons range from incompetence combined with overconfidence at one extreme to the demands of the dominant business model at the other. Here are some of my thoughts:

  • The dominant business model in the software industry is one that creates and relies on otherwise unnecessary complexity. That complexity both creates bugs and hides them from view. Paraphrasing C. A. R. Hoare, there are two ways to build software: Make it so simple that there are obviously no bugs, or make it so complex that there are no obvious bugs. The former is much more difficult and does not lend itself well to enticing people to upgrade to the next version.
  • People are so focused on feature marketing that they do not stop to think about bugs until it is too late. After generations of this, and of the problem getting worse all the time, end users have developed a sort of Stockholm Syndrome with regard to buggy software. They believe it is normal, expected, and inescapable.
  • Features and bugs act very similarly a lot of the time once software exceeds a particular level of complexity. They do things that are surprising, or at least unexpected. People grow used to this until they become inured to surprise without the surprising behavior being reduced at all -- in fact, it only gets worse. "It's a feature, not a bug" starts to sound reasonable and believable.
Chip Camden

Having worked in auto parts for several years, I can tell you that very few cars roll off the assembly line without any flaws. That's why they have a thing called recalls.

Furthermore, a serious flaw in an automobile can cost someone's life. That usually isn't the case with software, and where it is the case (medical, missile guidance, aircraft navigation), then the extra expense of a higher attention to flawlessness is considered worthwhile.

Ultimately, it's market-driven. We could make software that performed to much more exacting tolerances, but it would be much more costly. The buying public is content to pay a near-zero cost for "good enough" rather than putting a dent in their wallets for "flawless." [Editor's note: You can read more from Chip Camden in TechRepublic's IT Consultant blog.]

Erik Eckel

I think the software industry is very different from almost any other. Vendors must try to write software that will work on multiple platforms (Linux, Windows, Mac) and be used by a variety of users with widely varying skill levels at companies in numerous industries. That's a pretty tall order.

Imagine trying to make a car that could be driven by a 5'4" woman or a 6'5" man; that could run on gasoline, diesel, or propane; and that could carry up to eight people or 6,000 pounds of payload. Oh, and it must get 28 miles to the gallon, cost less than $25K, and go 100,000 miles between tune-ups.

You couldn't do it!

So, I feel for software manufacturers. The Intuits, Microsofts, Apples, and Symantecs of the world have a wide constituency to satisfy. Someone's always going to be complaining.

I think the 37signals guys may have it best. In their current best-seller Rework, they note that one of the keys to their success is saying no to customers and limiting the number of features they include in their software programs.

I think there's a lesson there for all of us. [Editor's note: You can read more from Erik in TechRepublic's Macs in Business blog.]

Jack Wallen

The answer is very simple: Marketing. If you told the average consumer (the people who buy the bulk of computers) that there was an operating system out there far superior, safer, and more reliable than the one they use AND that it was free, they would react with surprise. Their first question might be "Why didn't we know about that?" The reason is that Microsoft is a HUGE company with a HUGE PR budget and the ability to shove advertising down consumers' throats.

To continue with your analogy:

Tesla has a roadster that is 100% electric, can go over 300 miles on a single charge, and can go from 0 to 60 in 3.7 seconds -- yet the majority of people don't know about it. Why? Marketing. If one Linux company could start to produce witty, well-done television commercials, things would quickly change.

But think about this: Linux has done fairly well for itself without having to spend a penny on advertising (relatively speaking). Word of mouth has been a powerful ally to the Linux operating system. However, to raise it to a higher level, PR and marketing will have to come into play. [Editor's note: You can read more by Jack Wallen in TechRepublic's Linux and Open Source blog.]

Justin James

Some thoughts that come to mind (as someone struggling with a phone heralded by others and the media as a "miracle phone," yet plagued with problems):

  • "No warranty, express or implied" is attached to every piece of software ever made and is enforceable. Consumers know that they have zero rights, so they feel happy when it works.
  • "Gadget lust" blinds people to issues. People don't want to admit that they bought a piece of junk, so they just deal with the problems and tell everyone how much they love the software/device/etc.
  • In corporate environments, the people who live with the bad software are often not the people who picked it. Those who did select it sweep the problems under the rug because it makes them look bad, or they feel it's a question of "stupid users" who "just don't get it."
  • Too many problems do not appear until whatever initial return period or contract-cancellation period is over.
  • People expect to have problems.
  • People assume that the problems are their own fault ("I'm too dumb to use this right!").
  • In corporate environments, many products require a lengthy and expensive integration process; there is no way to accurately judge their quality until that is done, and afterward, it is often not clear if the base product or the integration work is the root cause of problems. To make matters worse, once you dump, say, $150,000 into customizing a $200,000 package that you spent $50,000 on hardware to support, do you really want to say, "gee, it looked good when we started, but this is a dud, let's dump it"?

Overall, it's a combination of people feeling helpless on the user end of things, and the decision makers being unwilling or unable to do anything about it once a commitment is made. [Editor's note: You can read more by Justin James in TechRepublic's Programming and Development blog.]

Patrick Gray

I think there are two factors at work that would cause me to question your premise:

Perceptions of software "flaws" are often based more on market saturation than technical elegance.

Most mainstream technical products (hardware and software) seem to have a higher incidence of flaws because they have a larger user base. This is the classic "Windows is buggy versus [a more obscure OS]" argument.

I don't think Windows is inferior; it's just a mass-market product and thus gets used and abused by the highest percentage of the population. Mac OS X is now getting hit with malware because it has gained traction and more people use the software, not because of some inherent flaw.

There are considerations that outweigh product flaws, chief among them getting valuable features early.

I think technical elegance often plays second fiddle to other concerns at both a corporate and a personal level. Why? We want new features and are willing to put up with partially baked software. This extends to your automotive analogy as well.

I bought a new motorcycle from BMW in its first model year (ever hear the old bromide never to buy the first model year vehicle?). The bike has had four recalls, including replacing the front axle (a front axle failure at 80 mph would be bad). Despite this product having flaws, the trade-off of having an extra year's riding was worth it to me.

If we all wanted perfect and bug-free code, first and foremost, we'd probably all be running MS DOS or a text-based Linux that hadn't had any features added in a decade. [Editor's note: You can read more by Patrick Gray in TechRepublic's IT Leadership blog.]

Rick Vanover

While software quality should be the first priority in deciding whether or not we implement something, many times IT customers have their hands tied. Simply forgoing a piece of software because no offering fully meets their needs is not an option.

The natural alternative is to develop something in-house, but that, too, may be cost-prohibitive. This is an age-old battle: our hands are tied in ways that push us along to new products, and history has done nothing but continually confirm this for us.

One example is the file server market: Novell NetWare is still a superior file server product to Windows NT, 2000, 2003, or 2008, yet we all know which way the market and the broader supported configurations went. There is no simple answer on how we can address this, in my opinion. [Editor's note: You can read more by Rick in TechRepublic's Network Administrator blog.]

Final thoughts

It seems we the users want the latest and greatest software, even if it means accepting buggy code. Do you agree with the TR gurus? I know we are all anxious to learn your opinions, so let them fly.

Chad Perrin wanted me to mention that he has a lot more to say about this subject. Please look for his article in the IT Security blog of TechRepublic.

Comments
tanernew

I think it is a normal process. I work in the automotive industry. In mass production, the probability of a bug is very low, but there are still some problems that cannot be foreseen. It is similar for MS, Apple, IBM, Oracle, etc. products. But if you request custom software from a software company, it is like requesting a refrigerator with a unique design for your home. If you don't have a lot of money and ask for that refrigerator at the price of a regular one, then you'll get a lot of bugs. So:
  • Mass-produced software has fewer bugs, but there are still problems.
  • If software is produced for only one company, then design, documentation, development, and other costs must be borne by that one company.
  • If you request custom software and pay a regular price, then you know what you'll get.

dcolbert

Chip nailed it on the head. I've argued for a long time that many bugs are the result of a developer staying out at the sports bar too late, having that extra beer, and coming in to work a little off his game the next day. Which is also the basic process by which we end up with math equations that throw interstellar probes helplessly off course -- or that cause a doctor to amputate a limb when he was supposed to remove a pancreas. Unless you've got someone who can literally walk on water to write your code, or design the O-rings on your space shuttle, or build the structural integrity of your balcony, then human error is *bound* to enter the formula.

Placing the blame on Microsoft or on overly complex software design engineering is just irresponsible hyperbole from the members of the Temple of The Penguin. Anyone who has worked with *any* platform has seen a kernel panic, a SIGSEGV error, a guru meditation, a bomb, a BSOD, or any number of other ways for code to report, "Something didn't work like I expected it to." That's because there are bugs and errors in anything a human writes, designs, engineers, or creates. The answer is so self-evident, it is almost silly to ask the question in the first place. The next time your car breaks down, think of it as an RROD or a BSOD, and the light should come on over your head.

To be fair, a couple of other responses to the original question were pretty good analyses of additional reasons *why* we "tolerate" the bugs. That doesn't change the fact that we tolerate varying degrees of serious design flaws in all kinds of purchases on a daily basis -- but there may be incidental reasons *why* we put up with those flaws. Those reasons may be very wide-ranging and run the gamut from logical and reasonable to irrational and unreasonable.

geekware

I almost feel like a traitor writing this because I am a hardcore perfectionist; however, even though I am a perfectionist, I strongly disagree with the general attitude displayed here. My first question is: have you actually written software yourself? Do you realize how many thousands of lines of cryptic-looking text make up a modern piece of software? As a small-scale developer, I sometimes find it frustrating when users keep wondering why this couldn't work a little differently, or why an error happens when you do that. They wonder why this couldn't happen automatically, or that be done in a more streamlined, wizard-like way. True, each of their requests could be met, but they have no idea how complex or mind-boggling it might be to add one seemingly simple feature. Catch the point? I think we should be thankful and surprised for every feature that does work, and view each bug as a mere indicator of how much modern technology is pushing the frontier of human ability. Having said all that, I must admit that I run Linux and Mac because I can't stand the clunkiness and bugginess of Microsoft. Let's keep striving for perfection, but have mercy. The average person has no idea how complex making a complete piece of software is.

Tony Hopkinson

For mass production to make sense, you get the design and the tools and the process right before you produce anything. Software's value is that you don't have to do that. You are trying to equate copying an executable with developing it.

AnsuGisalas

In a physical commodity industry, every product is made in a separate process, and you have error ratios. In software, every product of a production run is the same product, copied. Except for bad disks and bad downloads, every error is systemic. There are no ratios, or every ratio is one. How the automobile industry prevents systemic error applies to the software industry too, though.

Neon Samurai

The two primary people discussing modularization and complexity are BSD users, and they are discussing things in rather platform-agnostic terms. This has nothing to do with the penguin kernel or specific distributions based on it. Nor is anyone claiming that any OS platform or development model results in completely bug-free code. Actually, the discussion has been relatively clear of the usual "my daddy can pee further than your daddy" stuff. Why do you have to throw out an unwarranted snide jab regarding software platform choice?

Sterling chip Camden

People are willing to forgive, overlook, or work with problems that they can understand. Part of the problem is that most people can't understand the complexity of most software, so they can't appreciate why it doesn't "just work". That's partially because the software is more complicated than it needs to be, as apotheon pointed out. Sometimes it's because the problem it solves is more complicated than the user realizes. And sometimes it's because the way the user uses the product is only one of thousands of ways it can be used -- although the user thinks his/her way is the obvious use case. When it doesn't satisfy that case, they immediately think "this piece of crap." Now, the way to minimize that problem is to stick to a simpler design that doesn't try to do too much or protect the user from himself. Rather than anticipating specific use cases, design something that does one thing well but can be easily combined with other components to solve larger problems. That's the Unix philosophy. If it doesn't work, it's because the user didn't combine the pieces correctly -- but they can, if they're willing to learn.
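A minimal sketch of that composition idea, in Ruby with hypothetical names: each piece does exactly one job, and the "application" is just the glue that combines them.

    # Hypothetical example: report the ten most common words in a file.
    def read_lines(path)            # one job: read the text
      File.readlines(path, chomp: true)
    end

    def words(lines)                # one job: split lines into words
      lines.flat_map { |line| line.downcase.scan(/[a-z']+/) }
    end

    def tally(items)                # one job: count occurrences
      items.each_with_object(Hash.new(0)) { |item, counts| counts[item] += 1 }
    end

    # Glue: compose the small pieces into the larger result.
    top_ten = tally(words(read_lines("example.txt"))).sort_by { |_, n| -n }.first(10)
    top_ten.each { |word, count| puts "#{count}\t#{word}" }

Swap any one piece for a better one (a smarter tokenizer, a different source of lines) and the rest stays untouched; that separability is what keeps each piece simple enough to understand and test.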

Sterling chip Camden

I've written numerous compilers, interpreters, libraries, and general frameworks; a few report writers; a couple of text editors; code generators; visual design tools; and numerous vertical market applications. And you're right -- you're not going to write one of those without any bugs.

Tony Hopkinson

I've been programming since 1976, and you identified a key problem as some sort of given. Why is it cryptic? It doesn't have to be! One of the things I pride myself on (in modern environments) is writing the most readable code I can for other programmers. Badly named entities are well over 50% of all bugs. The original idea of a wizard was to automate repetitive (fragile or long-winded) operations. Unfortunately, they have become a crutch for the incompetent, allowing someone with no understanding of the underlying principles to produce code. Whether it's the right code seems to be considered irrelevant. Complexity is a matter of perspective. You are a hardware guy: you want a network in a PC, you plug a card in, and how the damn thing works should be irrelevant. What happens in software is they use a wizard or a template for a card that doesn't do the job, then start soldering extra wires and components onto it, and then jam the bugger into the printer port. Now I'll agree we should be grateful and surprised if that works, but that doesn't make it any less stupid an idea. The point of the article was that if you want a USB, printer, video, mouse, keyboard, sound, and network combo card, there are going to be problems, and maybe if we had one card for each we'd drop a lot of the apparent complexity.

Justin James

Been programming for a long while now. :) For me, there are two major sources of bugs:
  • Poor specs, which lead to the kinds of bugs where the application does not meet user expectations.
  • Lack of time, which leads to bugs where valid input produces invalid output (including error messages, application crashes, etc.).
A secondary source of bugs is the testing process... the more people and the more time I have for testing, the better it will be. If my only resource for testing is myself, well, good luck. I sometimes overlook test cases in functional testing (who doesn't?), and that's when bugs slip through the cracks. J.Ja

sysdev

I have been programming (in over 40 languages) since 1965. Back then, and for many years after that, bugs were not tolerated. One of the systems was an order processing system for the Sears Catalog (yes, I know it went away). The system was written in assembler language and ran under OS/360 on an IBM 360 mainframe. At the time, Sears was one of IBM's largest customers. The changes required were not to fix bugs, but to handle different business rules and options across the country. There were millions of lines of code, and my team of up to 10 spent most of our time attempting to tweak the code to be smaller (less memory -- the design limit at the time was 90K) and faster (it processed millions of orders on a very short time schedule, running on IBM 360/40 machines).

The next system for which I had major responsibility was also for Sears and was called the Service Inquiry System. This was one of the first CICS systems anywhere, and it was one of the first two at Sears. It was also a very large system, also in assembler language, and it was tested extensively by both the developers and the users until it was 100% bug free. After installation, the only changes required were again due to different business rules across the country and changes in business needs as time passed. Bugs were simply not tolerated.

I began consulting in the mid-seventies, and one of the first projects I was involved with was with the Veterans Administration in Washington. That was where I first heard the phrase "good enough for government work" and the beginning of systems I encountered with known bugs. Bug-free systems are possible. Yes, it does cost time and money, but being bug free saves both time and money in the long run.

apotheon

Have you actually written software yourself? In a word: Yes. In fact, of the seven people who were quoted in Michael's article, I know for a fact that at least four (more than half) have written a fair bit of code. Two of them are more "programmers" than other IT task professionals, and one (me) has been a "programmer" more than some other type of IT task professional for quite a while (and may be again, depending on how the career develops). As for the fourth -- I don't know whether Jack has ever been primarily a programmer. Of everyone who contributed to Michael's article, only one did not at least imply complexity as a major factor in bugginess of software -- and that one person didn't dispute it either. Furthermore, that one person is one of the two guys whose relevant experience with software and why people choose what they choose is not related to writing software. Regardless of this, everybody involved knows a fair bit about software, bugginess, and user choice. Simply dismissing everybody's responses because you don't want to consider whether their analyses might have some merit to them is kind of short-sighted and narrow-minded, in my opinion. There really are options other than just assuming that all software absolutely must be incredibly buggy because it all has to be unbelievably complex all the time. Conceding defeat is what's not an option for some of us. Catch the point? I think we should be thankful and surprised for every feature that does work, and view each bug as a mere indicator of how much modern technology is pushing the frontier of human ability. My take might be a bit more extreme than that of others in that list, but I think we should be thankful and surprised for every bug -- err, I mean feature -- that commercial software vendors leave out of their offerings. edit: I just realized I mistyped something, making the intent of a sentence somewhat confusing, in the original version of this comment.

Ocie3

FWIW, I've written computer programs that exceeded 100,000 lines in structured C, and others which were relatively large and complex in Fortran, as well as some RPG, RPG2, and "COBOL maintenance" (ugh!). Sometimes when I read the comments posted in this discussion I am tempted to demand documentation, or at least one example, of the writer's allegations, i.e., "which software was that"? Not that I have never found bugs in software, and I have occasionally used an application which had a poorly-designed and implemented user interface, regardless of whether it reliably produced valid output from valid input. There is one application installed on my computer that arguably qualifies as "buggy", and I would happily replace it if, in fact, I used it more than once or twice in the course of any given sequence of six months. I would not use any application of that type often enough to justify the expense of replacing it, so I grit my teeth and use it anyway.

Michael Kassner

I chose those contributors for the very reason that they understand software. You can go to the link at the end of each comment and see for yourself.

Neon Samurai

The distinction was made earlier between software that has a reasonable number of bugs versus buggy software where complexity has driven it well beyond an acceptable level of programming errors. Software won't ever be perfect but the average for consumer quality code could be higher than it is.

apotheon

"Why do you have to throw out an unwarranted snide jab regarding software platform choice?" This might sound a little OS-partisan, but I think it needs to be said: In my experience, people who "like" to defend MS Windows against all comers tend to see statements that seem like apt descriptions of problems with MS Windows as direct, almost personal attacks, even when there is no particular "attack" intended, nor any particular focus on MS Windows and/or its users. This is, probably about 52% of the time (to pull a number out of my fourth point of contact), the major incipient event of MS Windows vs. Others flame wars at TR.

Slayer_

Also, you said "further," which measures time; you have to say "farther," which measures distance :D. Unless of course you're saying your daddy can piss so far it moves into another timezone... damn!

Ocie3

That approach apparently works for UNIX and operating systems derived from it, or developed with the same model, for the hardware on which they were designed to run. (IMHO, whether that is the best approach possible remains to be seen.) When I examine the contents of C:\Windows\system32, and look at the display of MS Sysinternals Process Explorer, what I see is an OS that is composed of many pieces. Like Linux, it has a kernel and a command shell. Unlike Linux per se, the Windows GUI is evidently implemented in the kernel. Microsoft calls the Windows NT design the Component Object Model (COM). So if you want to argue, go argue with them. I didn't write it or choose to describe it as such.

With regard to UNIX OSes in particular, the only people who were intended to actually install, configure, and use them were system administrators and operators -- not bankers, doctors, lawyers, welders, plumbers, secretaries, clerks, writers, and other members of the community who are not trained for that work, and who should not be expected to learn how to "combine the pieces correctly". That is someone else's job, and they have their own to do, one which that sysadmin is unlikely to be able to do nearly as well. He or she might be able to "combine the pieces correctly" for a UNIX-like OS, but are they consequently qualified to be a pipefitter?

Today, many application programs are not just one monolithic executable, and it has been a long time since those which implement the most complex systems were created as such. It is common for software to be subdivided into individual components that are designed, and eventually implemented, by two or more people working as a team. Experienced developers will tell you that, for any given project, there is an optimum size for the development team, which is not necessarily related to how many files the software per se will have. A smaller team will often produce more in the same amount of time than a larger one can, if only because too much time becomes devoted to coordinating the activities of team members. Too many people writing source code for too many pieces is more likely than not to produce buggy software, i.e., software that has more flaws than it is feasible to identify and correct. It is not easy to ascertain why that has often proven true. Sometimes the best choice is to abandon the work and begin again, perhaps with a better design and, hopefully, lessons learned in the course of the first effort. "C" is the name of a programming language because it wasn't the first and only endeavor of the people who created it to write one like it.

Textbooks have been written about software development, and I don't intend to write one here. "Keep It Simple, Stupid" is not always feasible for writing programs with the tools that we have today, and the paradigms that we have today, to address the uses that we need to make of computing technology today. For example, systems that can be, and traditionally have been, designed and built with "individual components" are often better understood as a whole, which is why the Inuit are the best aircraft mechanics. They don't divide an airplane into subsystems, such as the propulsion system, the airframe, the avionics, etc., and just work on it within that frame of reference. Rather, everything they do is done in consideration of how it affects the aircraft as a whole. It would be interesting to see whether they can create software the same way, or perhaps they could maintain it better because they can see it in relation to the enterprise as a whole.

With regard to "buggy software", both Justin James and Chip Camden offer the best reasons for why buyers and end users "put up with it". In my experience, every context in which flawed software is used is different, and it also depends upon the nature of the flaws.

apotheon

It's interesting the way dcolbert started by oversimplifying the "answer" to the question so egregiously and, in the process, both praised you for hitting his answer and essentially insulted you for believing that particular answer isn't the entire answer -- and that what some of the rest of us said is also just as true, as is the idea that the Unix philosophy is a better approach to avoiding bugs than Microsoft's monolithic feature-heap philosophy. Frankly, I think you're being particularly clear-headed about the whole thing. edit: typo

dcolbert

The Unix philosophy shifts the complexity to the user's responsibility -- which is a more effective solution for the far smaller number of people willing to invest the time necessary to master the additional complexity required to chain those far simpler apps together to achieve the desired results. I mean, the last 30 years shows that one paradigm of app design is clearly preferred by the massive number of end users, and it isn't the *nix approach. :) As a developer, I can understand why simple, interoperational apps that operate more like OOP code objects than an actual app would be preferred -- but that approach will never have massive appeal to PC end users. The complexity of apps is a necessary evil. On the other hand, I can also see that end users will never understand the complexity involved in even simple pieces of code. End users routinely ask my development group to design custom apps and literally say, "it is just a simple app, it shouldn't take more than a couple of hours to code, right?" -- when in fact that simple app may represent days of code for two of my developers.

apotheon

"Now I'll agree we should be grateful and surprised if that works, but that doesn't make it any less stupid an idea." Brilliant! I had a good belly laugh at that.

Sterling chip Camden

That parenthetical remark is key. Why? Because anticipating all use cases is exponentially more difficult than solving the problem as stated. However, I agree with apotheon that simplicity circumvents this problem. Rather than shuffling the responsibility for finding problems to the test cases, we should be nailing down the design to satisfy the smallest set of requirements that meet the user's need, leaving all other possibilities open, but unaddressed. Trying to address too many situations is the source of most unnecessary complexity.

geekware

First of all, congratulations on the effort you put into programming in assembly language. That earns you a lot of credit in my eyes. I am currently learning assembly language, and I find that it can be quite difficult on modern computer architectures. I have only written very small utility-type programs in assembly language. I totally agree with you that bug-free software is possible. But one thing you did not mention is how much these companies were spending on their software. Average consumers simply would not be willing to pay for this kind of rigorous development. And they probably don't even care that much if there are a few bugs; by this I mean they are more worried about flash and features than stability.

Splash101

It is still pride and integrity that should rule the developer's stature. I was sure that this (as you stated, bugs not tolerated) was the norm at the onset of this great big internet debacle. But something has been lost. It's developers like you that I admire. I'm sure you would fix any bug ASAP if one was discovered, and not just say "oh, we're going to fix that in our next upgrade release, or new paid version." That was UNHEARD of in those days. I hope that you can pass on more of the great stamina that you have in protecting your trade. GOOD FOR YOU! Thanks

geekware

It was good for me to read what you had to say, and I am sure that all of you contributors have more experience writing software than I do. I am quite young (18) and still growing my roots in the field of writing complex software. Most of my clients right now only have moderately complex issues, and the really complex stuff is just geek things that I use myself. So it may be that as I gain experience and learn better design practices, my viewpoint will change a little. Thanks for putting up with the thoughts of "greenhorns" like me.

Justin James

... I re-read my original response, and I never addressed where bugs come from in the first place. I thought he just wanted to know why we tolerate it in the software packages we use, so I ignored that end of it (which is another article or series of articles all by itself). J.Ja

Tony Hopkinson

I could tell you, but then I'd have to kill you.. :p Can't see any of our employers being impressed if we published their code and then pointed out all the faults. :( One thing you have to take into account is the cookie-cutter revolution that came with the PC boom. There's a big difference in the approach taken for, say, accounts processing on a banking mainframe and a basic CRUD GUI on a PC. At the front end, if we hit save in a Word doc and it throws it away, that's irritating, but as long as it works most times, we live with it. It would be a very different picture if your transaction processor 'randomly' lost or boogered them up. What I can say for definite is that the constant drive for adding features as cheaply as possible necessitates a dramatic loss of quality -- not just in terms of current bugs, but in fixing them and adding the next feature. We all know that if, a good way into a project, you discover a catastrophic omission, there's a serious cost associated with rectifying it. What happens now is they spend money not on avoiding the original mistake but on patching the consequences of it. They do that 'successfully', therefore the omission never occurred. You don't have to be a software guru to figure out that a few successive versions of that approach will leave you in a complete tip.

apotheon

"Sometimes when I read the comments posted in this discussion I am tempted to demand documentation, or at least one example, of the writer's allegations, i.e., 'which software was that'?" Why don't you yield to temptation? Tell us what you find so objectionable, or at least difficult to believe. "There is one application installed on my computer that arguably qualifies as 'buggy', and I would happily replace it if, in fact, I used it more than once or twice in the course of any given sequence of six months. I would not use any application of that type often enough to justify the expense of replacing it, so I grit my teeth and use it anyway." What piece of software is this? edit: TR auto-formatting is particularly naughty lately

apotheon

I really have no idea how far my father can piss, so I'll just have to take your word for it.

Slayer_

Glad my daddy can definitely piss farther than yours, and in more degrees :D

apotheon

I think Skitt's Law is quite relevant here: "Any post correcting an error in another post will contain at least one error itself." The word "further" is not about time, especially considering that time is just another dimension along with height, length, and width. Rather, while "farther" refers to dimension, "further" refers to degree.

Neon Samurai

and.. er.. I .. uh.. was meaning.. timewise.. yeah.. that's it.. ;D

apotheon

My first reaction to the Wikipedia reference was that it fails the NPOV test. You caught the "original research" test as well. It's certainly a fairly comprehensive (for the context) explanation of your thoughts, but not a Wikipedia entry.

Ocie3

It might, but I would need to add citations of the literature. The whole is a blend of personal experience, the experience of colleagues, and academic case studies. The remarks are a pretty rough draft. .... Thank you, nonetheless. :-)

apotheon

Are you so unfamiliar with the subjects you discuss that you didn't get the literality of my statement and its relevance to Wikipedia? If not -- I have no idea where you object.

santeewelding

Do you adjust your eyeglasses lower on your nose, looking over the rims, to say that shilt?

apotheon

It quite egregiously fails the NPOV test.

Sterling chip Camden

... and I try not to fall into the trap of thinking that the answers are simple at all here. The simpler the software, the fewer bugs it will have:

    #!/usr/bin/env ruby
    print "hello world"

Oops, that won't run from cron unless you give cron the full path to ruby. Just an example of how the complexity of relying on the environment can break even the simplest case. But -- the relationship between features and complexity does not have to be linear (or, in most cases, exponential). As you pointed out, certain business models provide incentives to add unnecessary complexity, or to avoid removing incidental complexity. That's where the overabundance of bugs comes from -- but not all bugs.
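For what it's worth, one way around that particular environment trap, sketched under the assumption that the interpreter actually lives at /usr/bin/ruby (it may not on a given system): name the interpreter by absolute path in the shebang, or set PATH explicitly in the crontab, rather than relying on env's PATH lookup under cron's minimal environment.

    #!/usr/bin/ruby
    # Hypothetical variant: cron runs jobs with a stripped-down PATH, so this
    # shebang points at the interpreter's absolute path rather than asking
    # /usr/bin/env to find "ruby" on a PATH that may not include it.
    print "hello world"

Of course, that just trades one environmental assumption for another, which is rather the point: even "hello world" has to agree with its environment about something.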

apotheon

"The Unix philosophy shifts the complexity to the user's responsibility." Poppycock. Applications can just as easily -- more easily, actually -- be constructed by way of glue code and small tools that each do one thing well as they can by a monolithic, complex uber-program approach. The end user need not ever do any of that application assembly, and as such the end user need never have any of that complexity shifted to him or her. "I mean, the last 30 years shows that one paradigm of app design is clearly preferred by the massive number of end users, and it isn't the *nix approach." No -- what it shows is that a particular business model has become the dominant means of developing and distributing commercial software, though that dominance is now being challenged a little at a time. Even during all that time that business model has been dominant, though, there are particular niches where it has very thoroughly been dominated by a Unix-philosophy-based model. You seem to have failed the "logic of causality" test: correlation does not imply causation, though your argument is predicated upon the opposite assumption. "As a developer, I can understand why simple, interoperational apps that operate more like OOP code objects than an actual app would be preferred -- but that approach will never have massive appeal to PC end users. The complexity of apps is a necessary evil." Again: poppycock. It will, however, always be a necessary evil for a particular business model, which actually thrives on the bugginess of software.

apotheon

She would use a unit test framework -- and of course the framework itself would have a test suite, built using the same unit test framework (since it would be perfect).

Ocie3

Wouldn't she have to test her perfect bug tester too? :-)

apotheon

If someone was perfect, (s)he would test for bugs anyway -- because failing to do so would be imperfection.

apotheon

"Rather than shuffling the responsibility for finding problems to the test cases, we should be nailing down the design to satisfy the smallest set of requirements that meet the user's need, leaving all other possibilities open, but unaddressed." I just have one addition to that: ". . . then testing for bugs anyway."
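As a minimal sketch of what "testing for bugs anyway" can look like in practice, here is a hypothetical method and a couple of checks written with Minitest, the test framework that ships with Ruby:

    require "minitest/autorun"

    # Hypothetical unit under test: count whitespace-separated words.
    def word_count(text)
      text.split.length
    end

    class TestWordCount < Minitest::Test
      def test_counts_words_separated_by_whitespace
        assert_equal 3, word_count("keep it simple")
      end

      def test_empty_string_has_zero_words
        assert_equal 0, word_count("")
      end
    end

Even a design nailed down to the smallest set of requirements benefits from cheap checks like these; they document the requirements as much as they guard against regressions.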

HAL 9000

The mass-market software that the majority of people use has not improved to the extent that it should have since the mainframe days. It's the development method of places like Microsoft, who farm out sections of the code or functions to different groups and then, when the different modules are assembled, set about making it work. All of the big software makers for the Windows platform are guilty of this, so I'm not specifically deriding M$ for this action, just the results. ;)

The cost argument is ridiculous: when you look at the number of copies that the different makers sell, it would only add at most a few $ per copy over the production run of those applications. Back in the mainframe days the cost of software was extremely high, and when you added in the cost of the in-house developers who had to be in place to rewrite code, it got even higher.

The PC, on the other hand, is just like the old Space Invaders chip. Back in the days before PCs could be used to play games, or were for that matter a high-market-penetration device, the cost of a replacement Space Invaders chip was several thousand dollars. After the PC revolution, the same chip, integrated into plug-in cartridges for early personal computers like the Vic 20 and so on, dropped in price to several cents. The difference was that the developers could only realistically expect to sell several million worldwide when these were being fitted to dedicated game machines like those used in the amusement arcades, but when millions of people in a single country could buy a PC and a game cartridge, the development cost was spread over thousands of millions of end users and got so cheap that it wasn't worth making a dedicated chip; it was cheaper to just run the base code on that microprocessor-controlled device. Of course, the increased RAM and faster CPUs that these new devices used also made it possible to remove the need for a dedicated chip and control circuitry, so that the previously underpowered systems could play the game in an acceptable manner.

While M$ is the big boy in town and measures its product sales in the billions of copies or more, even the small bit players expect to sell lots of copies to make their products price-competitive with the M$ offerings. Contrast that with the Lucas-developed software for Industrial Light and Magic: that software has a very small market and a horrendously high price tag to go along with it. Even then, when rendering some video it still takes a very high-end unit several weeks to render 30 seconds of video. And that software, which may have bugs, is not cheap, and it has worked well all of the time that I have been involved with it. The people who work with it all day every day are more than happy with its current development, and their complaints are more related to the supplied feature set, not bugs in the code that cause the software to crash or otherwise produce undesirable results. That piece of software costs in the millions of $ per copy, and it is still economical to justify its purchase and use for those few companies who use it. If it were for a large market, the costs would be defrayed over hundreds of billions of copies and wouldn't even rate a cent per copy in development costs. So I find the continual cost arguments silly, but maybe that's just me. :D

While I completely agree with your arguments about the M$ business model, the hardware makers are just being led by their noses and, unlike M$ and the other big boys in the game, only have their front trotters in the food trough. The big boys not only have their front trotters and rear trotters in the food trough but have their curly tails under the food, so they remain practically invisible to people outside the slops trough. The hardware makers just don't see a great return for their outlay in developing good code for drivers for their hardware and, in the case of most of them who are not in a monopoly situation, are far more concerned that the competition will discover their "secrets" if they were to release the source code and similar things to the open source community, who would do the job much better than the hardware makers possibly could. Basically it all boils down to the money trail and the way that all parties involved jealously guard their tiny slice of that pie. Col

apotheon

Not all software has gotten worse. It's actually quite possible to write portable code without taking that "close enough" approach. The major problem is that at companies like Microsoft and its "partners" (i.e. third-party developers for the Microsoft platforms) the assumption is that the OS handles the portability, and all you need to do is write code for that platform on your own system and, if it works there, it's fine. The truth of the matter is that Microsoft Windows' portability across hardware platforms is crap. Different hardware acts differently, and Microsoft's hardware abstraction doesn't take that into account very well. This is, in large part, because it's closed source software with its "secrets" jealously guarded, but the drivers are written by third-party hardware manufacturers. One of these things needs to change if Microsoft wants a platform whose hardware abstraction works well enough and stably enough that third-party application developers can make software that isn't buggy for no reason other than the fact that two MS Windows systems will interact differently with the applications (thanks to the hardware differences).

Of course, the Linux project isn't really handling things in an ideal manner, either. While it's open source software, allowing hardware manufacturers to write stable drivers to hook into its hardware abstraction in a stable manner, the Linux project seems to have almost zero interest in maintaining stable APIs. This means that, despite otherwise having an ideal situation for encouraging development of a stable environment for bug-free software, hardware vendors find that there's a lot of effort involved in maintaining current drivers. Microsoft does the same thing, but new OS versions tend to be far less common, even when one takes into account both the server and desktop/workstation Microsoft OS lines, so in practice this particular issue is far more of a problem for Linux-based systems than for Microsoft OSes. Hardware vendors could of course solve this problem for themselves very easily by simply opening the source to their drivers once they write a first, complete, stable driver, but most vendors are pretty stupid about this sort of thing, and it never occurs to them to deviate from Microsoft's business model.

By contrast, the major open source BSD Unix systems (Dragonfly BSD, FreeBSD, NetBSD, and OpenBSD) are hitting all the right points: open source, stable APIs, stable software, et cetera. They even use copyfree licenses, maximizing the legal reusability of the code in these systems. Still, hardware vendors won't release drivers under open source licenses, and they tend to overlook BSD Unix systems because they aren't as buzzword-compliant as Linux-based systems.

apotheon

If you're making software for broad distribution (i.e., end-user desktop systems), the cost per customer gets vanishingly small. Part of the reason it cost so much (per customer) to develop bug-free software for mainframe systems is that there were far fewer customers for the software, so distributing the cost amongst consumers meant dividing it by a far smaller number. If you want to know something about the reason there isn't more effort put into ensuring the software is bug-free, follow the money (as with any other situation where someone is doing something other than making a best effort to do a good job). As the canonical example for end-user desktop software, Microsoft is an ideal demonstration. Rather than spend money on fixing things up and making it work well, Microsoft just piles on more features to encourage more customers to buy it -- and Microsoft's profit margins, thanks in part to that and in part to the sheer number of licenses it sells, are obscenely high. It would be less profitable to actually print money. edit: punctuation typo

HAL 9000

That was before IBM, hand in hand with Microsoft, destroyed everything. Back then you had to have a staff of programmers/developers, or whatever you wanted to call them, because what worked on one unit would very rarely work on another model unit, or even sometimes on the same model mainframe. There were hours of debugging code to make a new mainframe install work with the existing software, and it was joked that by the time your developers had finished porting some app to your new hardware it was time to replace it and begin the process all over again. OK, it wasn't quite that bad, but you didn't dream of changing platforms without a major development cycle being involved. M$ got things to the way that they are today by saying that Close Enough is Good Enough, which allowed the one application to work on more than just one platform. Back in the early days of PCs the CPU was very important, and something that worked on an Intel probably didn't work on an AMD or Cyrix and vice versa. But then again, it wasn't uncommon to have hardware issues where a part that worked with others from the last manufacturing batch wouldn't work when you reordered from another manufacturing batch. While things are better now on the hardware front, the software has got considerably worse, but it's more likely to work on all of the different hardware and software platforms that are currently in use. ;) Col

Ocie3

I agree that, with regard to computer hardware and software, "average consumers" are first concerned about the cost. But no one likes software which is difficult to use and which is unreliable with regard to its output.

That said, I was trained and began working as a programmer in 1972, and our attitude towards "bugs" indeed was that they were not tolerated. Of course, we created "commercial" software which was used by our employers. In that environment, the output must be valid and its integrity without question. There was also a trend to license software to other companies which did not want to develop it themselves.

There was no such thing as a "PC" at first. When microcomputers began to be bought by the general public, a stampede of entrepreneurs began developing applications for them. Why they did not observe the same professional standards, I really don't know. Certainly, the ones with whom I became acquainted were of my generation, and usually they did release relatively "bug free" software. One challenge was that there was a large variety of hardware which ran MS-DOS, PC-DOS, or DR-DOS, and the OS per se did not absolutely guarantee that all software that ran problem-free on one microcomputer running that OS would also run problem-free on all other microcomputers which ran the same OS.

For that matter, even programs that ran on a mainframe OS would not necessarily run without changes on another mainframe which ran "the same OS". I knew fellows whose job was to "install" ("adapt" is more accurate) software that was developed on one mainframe system so that it would also run on another mainframe system which was, ostensibly, the "same" hardware and OS. Often they compiled the source code on the target computer system and ran the binaries with a test dataset, then examined the output to see whether everything came out right. If not, then they had to find out why and rectify it -- if they could.

There were differences among the output of compilers, too, even ones written to the same "standard". For one PC application that I developed, I compiled the source code first with the Borland C compiler, until it produced a clean compile and did not have any output errors. Then I would re-compile the same source code with Microsoft C. Once it produced a clean compile, then ordinarily the resulting program would produce the same output as it did after being compiled by Borland C. But the first compile with Microsoft C usually had "exceptions" because of differences between the respective compilers. I was not satisfied until the same source code compiled, and the program ran and produced the same output, regardless of the compiler.

From the "average" consumer-user's point of view, there is no difference between a "bug" that is a flaw in the software, and a "bug" that arises from a mismatch between either the software and the OS, or the software and the hardware. Security vulnerabilities are a flaw of another kind.

apotheon

By "corporate" I take it you mean "working as a corporate code monkey". I'll run with that: While I haven't done corporate development work, per se, I have done other corporate technical work. In one of the worst cases of dealing with impersonal corporate incompetence that I've ever seen or even heard about second hand, it took about four and a half months before I "tore [my] boss a new one." My tolerance and restraint are apparently somewhat greater than you suspect. Of course, the reason it took so long is based on the fact that I didn't want to quit or get fired before completing my first six months in that hell hole, for purposes of maintaining a resume that doesn't look too bad.

Tony Hopkinson

I take it you've never done corporate development. All those compromises you don't like, and are sure must exist, are just the tip of the iceberg. I'd give you a week before you tore your boss a new one.

apotheon

The truth of the matter is that, of the six other people who answered Michael's call for input on the matter of buggy software, at least half are probably much more accomplished developers than me. Contemplating the achievements of someone like Sterling, in particular, makes my head spin sometimes. I know there are areas where I'm more knowledgeable than others among them, though, just as there are areas where each of them is far more knowledgeable than me, to say nothing of the fact that the half of them that are more accomplished developers than me are simply more practiced and able to whip up some code more quickly and naturally than me, all else being equal. We all have something to learn. Being eager to learn it is often what sets the good and the great apart from the mediocre (and worse).

Tony Hopkinson

A humbling moment is when you look at some code and think "what idiot wrote this", and then you see your name at the top. :p My own design practice, developed over decades, is to get the basics right. Good names, encapsulation, consistent approach, self-documenting, and never be clever for its own sake. The more comprehensible your code is, the less complex it will appear.

apotheon

I tried to address both ends, at least a little. It's far too big a (pair of) topic(s) to address in any real depth around here, though.

apotheon

Sorry to hear about your troubles with CD burning software and hardware. I might recommend some alternatives, but frankly, with as rarely as you use it, I doubt it would be worth your while to look into it for now. What about the other request for information?

Ocie3

NTI CD-Maker 6 Gold. The system integrator installed it on the computer before I took delivery. The CD-Maker interface behaves in unpredictable ways. For example, while I am preparing to "burn" a CD, if I choose an option which has longer filenames than the default, CD-Maker might or might not implement that option. Sometimes, without any notice or evident reason, it will change my choice to another option instead (usually back to the default) after I am no longer looking at the list (and before burning the CD). So I must double-check every setting, and sometimes must close CD-Maker and relaunch it before it will work as it apparently should. The separate application for making CD covers, labels, etc., apparently works if I use its features in only one sequence (which is also not obvious), but its output is unreliable regardless. It crashes if I "save" the work in progress at some points, but not at others.

So I documented the bugs and reported them to the developer. They recommended that I buy the most recent version(s). I replied that, in my experience, a developer who releases buggy software will always release buggy software. IOW, the most current version might have the bugs in the previous version fixed, but it is likely to have too many new, uncorrected bugs that were not in previous versions, too. Either I would buy a competing developer's software, or nothing to replace the existing software, because it is not worth spending the money with regard to how often I use it. The relatively few CDs would be pretty expensive per unit, which might sometimes be worth it, considering the value of their content.

Recently, I discovered that the Sony CD-R/W drive evidently no longer functions correctly. It burned a CD a few weeks ago and verified the integrity of the files afterward. I could and did access and display files that were on the disc. But now it won't read the disc, and neither will the DVD drive. It also won't read other discs which the DVD drive will read. So I'm considering whether it is worth the cost to replace it instead of applying the funds to buying a new complete system. Maybe I can buy a CD drive now and simply transfer it to a new box later. Doing that depends mostly on whether current mainboards use the same interface for the DVD and CD drives (thus, the same cables). It would also make sense only if I buy a desktop box to replace the one I have, and not a laptop. If I buy a laptop, I doubt that I would want to continue running the desktop unit, too.