I am going to say something that will not surprise my regular readers: I am not a fan of rapid application development, agile programming techniques, "test in production," perpetual betas, the continuous patch cycle, and other current programming trends. I am not the only one, either. I think that this paradigm of programming is incredibly broken. I am of the opinion that patches should be rare, not because the developer should not fix bugs, but because bugs should be rare. I wholeheartedly support only a few minor version releases, and long cycles between major releases.
I also am fully aware that "this is not how things are done."
The current business environment is all about "now, now, NOW!" As a result, we have become accustomed to getting software that is slow, buggy, and insecure from Day 1, and that (hopefully) slowly improves over its lifetime. Our best case scenario is software that is only mildly broken but usable, and that eventually morphs into what it should have been from the start. The worst case scenario is a system in which, as it evolves, new features are added, and existing features are fixed, backwards compatibility is broken, security holes allow sensitive data to be stolen or maliciously modified, and users become frustrated and angry.
Right now, the development cycle is predicated on the first mover principle: get a product, regardless of quality, out the door, dazzle users with the feature set, and hope they stick around long enough for the problems to be fixed. Sadly, what ends up happening is that another competitor invariably pops up, so instead of fixing the existing bugs, new features are cranked out in an effort to keep the existing user base and maintain growth. The new features are buggy too, the old features become buried, and the code gets crustier and kludgier until it is a warm pile of compost. Eventually, the only way to fix the problems is a complete tear down and rebuild.
At the end of the day, I have a responsibility to my customers. That responsibility is to solve the why of their business process, whether it is selling products online, properly determining their sales, marking hazardous shipments in accordance with federal law, or whatever. The why is not "join the latest technical revolution" or "support a favorite software vendor" or "make decisions based upon my emotional reactions to certain things." Usability, reliability, and security must always be primary in my development process. Unusable software does not get used, or is used incorrectly, which means that the why is not being accomplished. Unreliable software loses data, wastes the users' time, and often makes the users afraid to do anything (we have all used an application where we knew that certain operations would crash it, so we ignored those features). Insecure software can destroy my customers' businesses, and the resulting lawsuits can put my employers out of business as well.
Microsoft did exactly all of this for over twenty years: they put the development of new features ahead of perfecting existing features and producing reliable, secure code. It actually worked for them, too. They achieved market domination. The end result, though, was hundreds of millions of computers with severe vulnerabilities. Many of the features that helped Microsoft achieve business success (Office macros, IIS/ASP, ActiveX, and more) had holes in them so big you could drive a Mack truck through them. Microsoft has been the butt of endless jokes as a result. In response, Microsoft ceased development on a large number of new features to concentrate on fixing code, and has had to completely reengineer their development process to put reliability and security at the forefront of their efforts.
What I find most interesting is that this is exactly the path every other developer seems to be going down now! The "Web 2.0" crowd especially, but OSS is headed in this direction as well. One project I keep an eye on has not, as far as I am aware, put out a production release in years, but keeps cranking out "alpha" and "beta" releases. There never seems to be any actual final, production release of any version; they run both minor and major versions through "beta" constantly. Google seems to have 75% of its products in "beta" at any given time, some of them on the market for years. What I hear in many comments and discussions is that "we need to make the best of what we have today." No, we really do not. We need to write the best code possible today.
Here is an example of what I mean: today, I made hotel reservations with a Hyatt hotel on their Web site. I went back to change my reservation, because I had made a minor mistake in the days. Their reservation lookup form allows me to enter the credit card number to look up my information if I do not have my credit card handy. Except there is one huge flaw: it does not use HTTPS! So not only is my credit card number being sent in plain text (along with my first and last name), but it is going to be stored in my browser's auto-fill system in plain text! This is a failure on every level. All because someone did not think about what they were doing when they wrote code and no one gave their code a second review. All in the name of rolling out a feature to help the user. I am sorry, but I would have preferred to have to call customer support or find my confirmation number than to give out my credit card number over a plain text connection and have it stored in my browser.
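The Hyatt mistake is cheap to prevent in code. As a minimal sketch (the function names are my own, and Python stands in for whatever the site actually runs), a form renderer can refuse to collect sensitive data over an unencrypted endpoint, and can set `autocomplete="off"` so the browser does not squirrel the card number away in its auto-fill history:

```python
from urllib.parse import urlparse

def is_safe_for_sensitive_data(action_url: str) -> bool:
    """Return True only if the form's submission target is encrypted.

    Card numbers and names posted over plain HTTP travel in clear text
    and can be read by anyone on the network path.
    """
    return urlparse(action_url).scheme == "https"

def render_card_field(action_url: str) -> str:
    """Render a credit card input, refusing to emit one for an
    unencrypted endpoint. autocomplete="off" asks the browser not to
    store the submitted value in its auto-fill system."""
    if not is_safe_for_sensitive_data(action_url):
        raise ValueError("refusing to collect card data over plain HTTP")
    return '<input type="text" name="card" autocomplete="off">'
```

A second review of the Hyatt form would have caught this in the five seconds it takes to notice `http://` in the form's action.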
I find this very ironic. Just as Microsoft works to tighten down the screws as hard as they can, without breaking Windows' backwards compatibility (and other features which help make it unreliable and insecure), Java starts growing its sandbox, developers begin to advocate putting out buggy releases and fixing them later, and the rest of the world in general transitions to releasing code quickly without proper review and testing.
George Ou has discovered that tens of thousands of defaced web sites had something in common: most of them had a common hosting solution, and the hack appears to have occurred through sloppy ASP code. Worst of all, the suspected security hole had been known since April 2005. Let me repeat that for extra clarity. A bad programmer possibly wrote bad code that was known to be bad for about fourteen full months without being fixed, leading to a massive security breach and thousands of defaced Web sites. If I were that developer I would be writing my will right now, buying cans of baked beans by the crate, and making plans to hide in the mountains for the next century. If I were the company that employed this programmer, I would be calling John Edwards and asking him to contact the spirit of Johnnie Cochran for legal advice. It is that serious. Talk about negligence of a criminal nature! This is why I am such a hard case about these things. Do you want to be the systems administrator who did not patch the managed code environment that was known to be buggy, allowing hackers to destroy your business? Do you want to be the CIO of a company that outsourced their development to a third party vendor with no internal review of the code, allowing a bug to linger for years that caused millions of dollars to be redirected to some bank in the Bahamas? Do you want to be the programmer who did a copy/paste on some code from some website that contained a buffer overrun exploit, giving hackers root on your servers?
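The classic route into sloppy server-side code like that ASP is string-built SQL. As a sketch (in Python, with sqlite3 standing in for whatever database the defaced sites actually used), here is the difference between the vulnerable pattern and the parameterized query that closes the hole:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (name TEXT, body TEXT)")
conn.execute("INSERT INTO pages VALUES ('home', 'Welcome!')")

def lookup_vulnerable(name: str):
    # String-built SQL: attacker-controlled input becomes part of the
    # query text itself. name = "x' OR '1'='1" matches every row.
    return conn.execute(
        "SELECT body FROM pages WHERE name = '%s'" % name).fetchall()

def lookup_safe(name: str):
    # Parameterized query: the driver treats the value strictly as
    # data, so injection syntax is just an ordinary non-matching string.
    return conn.execute(
        "SELECT body FROM pages WHERE name = ?", (name,)).fetchall()
```

With a hostile input like `"x' OR '1'='1"`, the first function hands back the entire table; the second returns nothing. The fix is one line, which is exactly what makes a fourteen-month-old known hole so inexcusable.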
Look at the why of some projects I have worked on: "selling products online, properly determining their sales, marking hazardous shipments in accordance with federal law." Imagine what happens if I write buggy code for any of these projects. Credit card numbers could be stolen. A federal investigation could be launched into the company's violation of SOX, or millions of dollars in bonuses and other incentive compensation could be paid out incorrectly. The company's transportation network could be shut down by the federal government while they audit the HAZMAT controls. The code I write must be bug free. The lives of real people and millions of dollars are on the line.
A short essay by Professor Kurt Wiesenfeld entitled "It Was a Rookie Error" is something I read during college, and I keep its ideas in mind whenever I write code. When programmers write bad code, people can die, companies can collapse, millions of dollars can disappear, or lives can be ruined. If this sounds overly dramatic, think again. Sure, you may only be working on a small personal Web site today. But if your small personal Web site gives root to a hacker, someone else's Web site that may be much more sensitive could be compromised. You may just be writing a "quick and dirty" script, but "quick and dirty" code tends to linger forever and have real systems built on top of it. Your small driver for an obscure device may end up in a system for air traffic control; if it crashes the OS, people die.
So the next time you think about releasing untested code, or code with known bugs, for the sake of getting a new feature in front of users, think again. "Good enough" is never good enough in the realm of real IT.