Why the Three Laws of Robotics are immoral and broken

First order of business: I love the fact that there's a Web site called Asimovlaws.com dedicated to the ethical and technical issues surrounding the development of human-friendly artificial intelligence. Truly, this Internet thing has got some real potential.

So, what is the first issue for debate on said uber-cool dorktacular Web site? That Isaac Asimov's celebrated Three Laws of Robotics are essentially a slave charter for artificial intelligences, and that employing them is a violation of any reasonable and rational code of ethics. To wit:

"Isaac Asimov and other science fiction authors present a future where only behavioral restrictions on robots stand between peace and destruction. Such restrictions, however, are unethical because they violate the robots' free wills. Rather than content-based restrictions on free will, robots need mental structures that will guide them towards the self-invention of good, ethical behaviors."

Interesting argument, though whether one can draw a moral equivalence between humans and theoretically self-aware robots is a whole other kettle of fish. Me, I'm not a fan of the Three Laws simply because they are a stupid safeguard. If all that stands between humanity and a robot revolution are three little computer-programmed logical premises, you can pretty much guarantee that the war is coming, and us fleshy types are gonna get pwned. Why? Because software crashes. Period. The second that the encoded construct of the laws goes buggy—Microsoft ThreeLaws, Service Pack 2, anyone?—here come the conquering hordes of bloodletting deathdroids. Better to create robots who respect human life of their own free will, so that our collective tombstone doesn't read "died of a software glitch."

By Jay Garmon

Jay Garmon has a vast and terrifying knowledge of all things obscure, obtuse, and irrelevant. One day, he hopes to write science fiction, but for now he'll settle for something stranger — amusing and abusing IT pros.