There is an interesting story in the New York Times today, "Who Needs Hackers?" by John Schwartz. The article appraises network threats and recent high-profile outages, suggesting that while malicious hacking might be good fodder for disaster movies, the mundane problems inherent in enormously complex software and systems are more likely to trigger the massive failures of our nightmares:
"We don't need hackers to break the systems because they're falling apart by themselves," said Peter G. Neumann, an expert in computing risks and principal scientist at SRI International, a research institute in Menlo Park, Calif.
Steven M. Bellovin, a professor of computer science at Columbia University, said: "Most of the problems we have day to day have nothing to do with malice. Things break. Complex systems break in complex ways."
Air traffic control, customs and border security, communications, the electrical grid, voting machines, space shuttle launches -- all of them can be taken out, or at the very least seriously interrupted, by a bug or an innocently cascading network snafu. Is there any way to avoid spontaneous combustion of our complex networks?
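One well-known defense against exactly this kind of cascading failure is the circuit-breaker pattern: when a dependency starts failing repeatedly, stop hammering it and fail fast, so one sick component doesn't drag down everything that calls it. Here's a minimal sketch in Python -- the class name, thresholds, and timings are all illustrative, not anything from the article:

```python
import time

class CircuitBreaker:
    """Wrap calls to a flaky dependency. After repeated failures,
    fail fast instead of letting errors cascade downstream."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures  # trips after this many consecutive failures
        self.reset_after = reset_after    # seconds to wait before retrying
        self.failures = 0
        self.opened_at = None             # None means the breaker is closed (healthy)

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Breaker is open: refuse immediately rather than pile on.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None         # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                 # any success resets the count
        return result
```

It's a ten-minute sketch, not production code, but it captures the idea Neumann is gesturing at: failure containment has to be designed in, not bolted on.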
The aforementioned Dr. Neumann thinks so. He is quoted in the article:
"If you design the thing right in the first place, you can make it reliable, secure, fault tolerant and human safe," he said. "The technology is there to do this right if anybody wanted to take the effort....
"We throw this together, shrink wrap it and throw it out there," he said. "There's no incentive to do it right, and that's pitiful."
Have we traveled too far down the road of complexity to turn it around as Dr. Neumann suggests? With everyone worried about terrorists, wars, global warming, and catastrophic natural disasters, it seems unlikely that many people will get on the "solve the design complexity" bandwagon. What do you think will happen? Will it take a major catastrophe to cause a revolution in software design and systems?