Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure

Judgement day may have just taken a step closer, for killer robots at least. Amidst concern about the deployment of intelligent robots on the battlefield, governments have agreed to look more closely at the issues that these weapons raise, the first step towards an outright ban before they've even been built.

In November, the governments that are part of the Convention on Certain Conventional Weapons (CCW) agreed to meet in Geneva next year to discuss the issues related to so-called "lethal autonomous weapons systems," or what campaigners have dubbed "killer robots."

For the military, war robots can have many advantages: They don't need food or pay, they don't get tired or need to sleep, they follow orders automatically, and they don't feel fear, anger, or pain. And few back home would mourn if robot soldiers were destroyed on the battlefield.

There are already plenty of examples of how technology has changed warfare, from David's sling to the invention of the tank. The most recent and controversial is the rise of drone warfare. But even these aircraft have pilots who fly them by remote control, and it is humans who make the decisions about which targets to pick and when to fire a missile.

But what concerns many experts is the potential next generation of robotic weapons: ones that make their own decisions about who to target and who to kill.

Banning killer robots

"The decision to begin international discussions next year is a major leap forward for efforts to ban killer robots pre-emptively," said Steve Goose, arms director at Human Rights Watch. "Governments have recognised that fully autonomous weapons raise serious legal and ethical concerns, and that urgent action is needed."

"Governments have recognised that fully autonomous weapons raise serious legal and ethical concerns, and that urgent action is needed." Steve Goose, arms director at Human Rights Watch

While fully autonomous robot weapons might not be deployed for two or three decades, the International Committee for Robot Arms Control (ICRAC), an international group of academics and experts concerned about the implications of a robot arms race, argues a prohibition on the development and deployment of autonomous weapons systems is the correct approach. "Machines should not be allowed to make the decision to kill people," it states.

While no autonomous weapons have been built yet, it's not a theoretical concern either. Late last year, the U.S. Department of Defense (DoD) released its policy on how autonomous weapons should be used if they were to be deployed on the battlefield. The policy limits how they should operate, but it certainly doesn't ban them.

For example, the DoD guidelines state, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force," and require that systems "are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties."

The guidelines do however seem to exclude weapons powered by artificial intelligence (AI) from explicitly targeting humans: "Human-supervised autonomous weapon systems may be used to select and engage targets, with the exception of selecting humans as targets, for local defense to intercept attempted time-critical or saturation attacks."

In contrast, the UK says it has no plans to develop fully autonomous weapons. Foreign Office minister Alistair Burt told Parliament earlier this year that the UK armed forces are clear that "the operation of our weapons will always be under human control as an absolute guarantee of human oversight and authority and of accountability for weapons usage," but he then qualified that slightly: "The UK has unilaterally decided to put in place a restrictive policy whereby we have no plans at present to develop lethal autonomous robotics, but we do not intend to formalise that in a national moratorium."

Noel Sharkey is chairman of ICRAC and professor of AI and robotics at the University of Sheffield in the UK. When he started reading about military plans for autonomous weapons, he was shocked: "There seemed to be a complete overestimation of the technology. It was more like a sci-fi interpretation of the technology."

Professor Noel Sharkey

Of ICRAC's intentions he says the campaign is not against autonomous robots. "My vacuum cleaner is an autonomous robot, and I've worked for 30 years developing autonomous robots." What it wants is a ban on what it calls the "kill function." An autonomous weapon is one that, once launched, can select its own targets and engage them, Sharkey says. "Engage them means kill them. So it's the idea of the machine selecting its own targets that's the problem for us."

For Sharkey, robot soldiers can't comply with the basic rules of war. They can't distinguish between a combatant and a civilian, or between a wounded soldier and a legitimate target. "There are no AI robotic systems capable of doing that at all," he argues, pointing to one UK-built system that can tell the difference between a human and a car "but has problems with a dancing bear or a dog on its hind legs."
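
Sharkey's point about distinction can be made concrete with a deliberately naive sketch. The Python below is purely illustrative (the labels, threshold, and names are made up, and it is not modelled on any real weapon system): however accurate the recogniser, its output is a visual category and a confidence score, not the legal category the rules of war require.

    # A deliberately naive, hypothetical engagement rule -- not any real system's code.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # visual category the classifier can output, e.g. "person", "car"
        confidence: float  # classifier confidence between 0.0 and 1.0

    def may_engage(detection: Detection, threshold: float = 0.95) -> bool:
        """Engage anything confidently classified as a person."""
        # Nothing here distinguishes a combatant from a civilian, a wounded
        # soldier from a legitimate target, or a person from a dancing bear the
        # model happens to label "person". Those are contextual, legal judgments,
        # not categories the classifier produces.
        return detection.label == "person" and detection.confidence >= threshold

    print(may_engage(Detection(label="person", confidence=0.97)))  # True, but legally meaningless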

A robot weapons system won't be able to judge proportionality either, he argues; that is, judge whether civilian losses are acceptable and in proportion to the military advantage gained by an attack. "How's a robot going to know that? PhDs are written on military advantage. It's very contextual. You require a very experienced commander in the field on the ground who makes that judgment," he said.

But one of the biggest issues is accountability, Sharkey said. A robot can't be blamed if a military operation goes wrong, and that's what really worries the military commanders that he speaks to: They are the ones who would be held accountable for launching the attack.

"But it wouldn't be fair because these things can crash at any time, they can be spoofed, they can be hacked, they can get tackled in the industrial supply chain, they can take a bullet through the computer, human error in coding, you can have sensor problems, and who is responsible? Is it the manufacturers, the software engineers, the engineers, or is it the commander? In war, you need to know, if there's a mishap, who's responsible."

Sharkey's concern is that the weapons will be rolled out gradually despite the limitations in the technology. "The technology itself is just not fit for purpose and it's not going to be fit for purpose by the time these things are deployed."

As the battlefield adapts to the use of increasingly high-tech weapons, the use of autonomous robots becomes more likely. If an enemy can render drones useless by blocking their communications (a likely consequence of their increased usage), then an autonomous drone that can simply continue with its mission without calling home is a useful addition. Similarly, because it takes roughly one-and-a-half seconds for a movement on a remote pilot's joystick to have an effect on a drone, remotely piloted aircraft would be slower to respond than autonomous ones if attacked, which is another reason to make them self-governing.
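
The latency point is simple arithmetic. In the sketch below only the roughly one-and-a-half-second control-link delay comes from the paragraph above; the other figures are assumed, round numbers for illustration.

    # Back-of-the-envelope reaction-time comparison, in seconds.
    CONTROL_LINK_DELAY = 1.5   # joystick movement -> effect on the drone (figure cited above)
    PILOT_REACTION = 0.25      # assumed human reaction time to an on-screen threat
    ONBOARD_LOOP = 0.05        # assumed sense-decide-act cycle for onboard autonomy

    remotely_piloted = PILOT_REACTION + CONTROL_LINK_DELAY   # ~1.75 s
    autonomous = ONBOARD_LOOP                                # ~0.05 s

    print(f"remotely piloted response: ~{remotely_piloted:.2f} s")
    print(f"autonomous response:       ~{autonomous:.2f} s")
    # At a closing speed of 250 m/s, that gap is roughly 425 metres of flight.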

The ICRAC campaign hopes to use the decision by the Convention on Certain Conventional Weapons to look at autonomous weapons as a first step towards a ban, using the same strategy that led to a pre-emptive ban on blinding laser weapons.

"People will have to decide is this morally what we want to have, a machine making that decision to kill a human." Noel Sharkey, chairman of ICRAC

One reason for the unreasonable level of expectation around autonomous weapons is the belief that AI is far more capable than it really is, or what Sharkey describes as the "cultural myth of artificial intelligence that has come out of science fiction." Researchers working in the field point out that most AI work is far more mundane (if useful) than building thinking humanoid robots.

"Every decade, within 20 years we are going to have sentient robots and there is always somebody saying it, but if you look at the people on the ground working [on AI] they don't say this. They get on with the work. AI is mostly a practical subject developing things that you don't even know are AI — in your phone, in your car, that's the way we work."

And even if, at some point in the far future, AI matures to the point at which a computer system can abide by the rules of war, the fundamental moral questions will still apply. Sharkey said, "You've still got the problems of accountability and people will have to decide is this morally what we want to have, a machine making that decision to kill a human."

The android rules

Discussing whether robots should be allowed to kill - especially when killer robots don't exist - might seem an arcane debate to be having. But robots (and artificial intelligence) are playing ever-larger roles in society, and we are figuring out piecemeal what is acceptable and what isn't.

What we have been doing so far is building rules for specific situations, such as the DoD policy on autonomous weapons systems. Another less dramatic example is the recent move by some US states to pass legislation to allow autonomous cars to drive on the road. We're gradually building a set of rules for autonomous robots in specific situations but rarely looking at the big picture.

However, there have been attempts to create such a set of rules, a moral framework to govern AI and robots. The most famous to date is Isaac Asimov's three laws of robotics which, since they were first defined in 1942, have offered - at least in fiction - a moral framework for how robots should behave.

Asimov's three laws state:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Robotics and AI haven't come anywhere close to building robots that could comprehend or abide by these or any other sophisticated rules. A robot vacuum cleaner doesn't need this level of moral complexity.
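
Read literally, the three laws amount to a strict priority ordering, which is trivial to write down; everything hard is hidden inside the conditions themselves. A minimal, purely illustrative Python sketch (the predicates are hypothetical stubs, not anything a current system can compute) might look like this:

    # A minimal, purely illustrative encoding of the laws' priority ordering.
    # The stubbed predicates are the whole problem: deciding whether an action
    # harms a human (or, through inaction, allows harm) is exactly the judgment
    # no current system can make.

    def harms_human(action) -> bool:        # stub -- includes the "through inaction" clause
        raise NotImplementedError

    def ordered_by_human(action) -> bool:   # stub
        raise NotImplementedError

    def endangers_self(action) -> bool:     # stub
        raise NotImplementedError

    def permitted(action) -> bool:
        if harms_human(action):              # First Law overrides everything
            return False
        if ordered_by_human(action):         # Second Law, subject to the First
            return True
        return not endangers_self(action)    # Third Law, subject to the first two

The ordering is the easy part; filling in the stubs is the judgment Asimov's stories repeatedly show to be unreliable, which is the point Bryson makes next.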


"People think about Asimov's laws, but they were set up to point out how a simple ethical system doesn't work. If you read the short stories, every single one is about a failure, and they are totally impractical," said Dr. Joanna Bryson of the University of Bath.

Bryson emphasises that robots and AI need to be considered as the latest set of tools - extremely sophisticated tools, but no more than that. She argues that AI should be seen as a tool that extends human intelligence in the same way that writing did by allowing humans to take memory out of their heads and put it into a book. "We've been changing our world with things like artificial intelligence for thousands of years," she says. "What's happening now is we're doing it faster."

But for Bryson, regardless of how autonomous or intelligent an android is, because it is a tool, it's not the robots that need the rules - it's us. "They have to be inside our moral framework. They won't have their own moral framework. We have to make the choice so that robots are positioned within our moral framework so that they don't damage the rest of the life on the planet."

The UK's Engineering and Physical Sciences Research Council (EPSRC) is one of the few organisations that has tried to create a set of practical rules for robots, and it quickly realised that laws for robots aren't what is needed right now.

Its Principles of Robotics notes: "Asimov's laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people, when in real life, it is the humans who design and use the robots who must be the actual subjects of any law. As we consider the ethical implications of having robots in our society, it becomes obvious that robots themselves are not where responsibility lies."

As such, the set of principles the EPSRC experts - including Dr. Bryson - outlined were for the designers, builders, and users of robots, not for the robots themselves.

For example, the five principles include: "Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy."

Dr. Kathleen Richardson of University College London (UCL) also argues that we don't need new rules for robots beyond the ones we have in place to protect us from other types of machines, even if they are used on the battlefield.

"Naturally, a remote killing machine will raise a new set of issues in relation to the human relationship with violence. In such a case, one might need to know that that machine would kill the 'right' target...but once again this has got nothing to with something called 'robot ethics' but human ethics," she said.

The robots we are currently building are not like the thinking machines we find in fiction, she argues, and so the important issues are more about standard health and safety - that we don't build machines that accidentally fall on you - rather than helping them to distinguish between right and wrong.

"We have to make the choice so that robots are positioned within our moral framework." Dr. Joanna Bryson, University of Bath

"Robots made by scientists are like automaton," she said. "It is important to think about entities that we create and to ensure humans can interact with them safely. But there are no 'special' guidelines that need to be created for robots, the mechanical robots that are imagined to require ethics in these discussions do not exist and are not likely to exist," she said.

So while we might need rules to make sure a bipedal robot can operate safely in a home, these are practical considerations alone, the ones you'd require from any consumer electronics in the home.

"Ethics on the other hand implies something well beyond this," she says. "It implies a different set of categorical notions need to be implemented in relation to robotic machines as special kinds of entities."

Exploitive, loving robots

Indeed, while few of us (hopefully) are likely to encounter a killer robot, with aging populations the use of human-like robots for care may become more important, and this could be a bigger long-term issue. Rather than feeling too much fear of robots, we may become emotionally dependent, and feel too much love.

Robotic hand grasps human hand. Image: Chris Beaumont/CBS Interactive

Another of the EPSRC guidelines (again, one of the few sets of guidelines in this area that exist) states: "Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent." It warns that unscrupulous manufacturers might use the illusion of emotions in a robot pet or companion to find a way to charge more money.

Perhaps one of the biggest risks we face is that, by giving robots the illusion of emotions and investing them with the apparent need for a moral framework to guide them, we risk raising them to the level of humans - and making it easier to ignore our fellow humans as a result.

UCL's Richardson argues that robotic scientists are right to think about the implications, but that the debate risks missing a bigger issue: why we are using these devices in the first place, particularly in social care.

The real responsibility

Killer robots and power-mad AIs are the staples of cheap science fiction, but fixating on these types of threats allows us to avoid the complexities of our own mundane realities. It is a reflection - or indictment - of our society that the roles we are finding for robots - fighting our wars and looking after the elderly - are the roles that we are reluctant to fill ourselves.

Putting robots into these roles may fix part of the problem, but it doesn't address the underlying issues, and worse, it perhaps allows us as a society to ignore them. Robots fighting our battles make war easier, and robots looking after the elderly make it easier to ignore our obligations and the societal strain that comes with an aging population. As such, worrying about the moral framework for androids is often a distraction from our own ethical failings.


About

Steve Ranger is the UK editor of TechRepublic, and has been writing about the impact of technology on people, business and culture for more than a decade. Before joining TechRepublic he was the editor of silicon.com.

Comments
lucien86

I actually work in this area - 'Strong AI' - and am working on developing real machines with real intelligence. One thing that's striking about this debate, as always, is the level of wrong information and wrong ideas.


- Machines with real intelligence (like humans) are possible and are probably now only a few years away. Maybe 5 to 20 years - the project I am working on personally could have a working machine within about ten years..


- There is no reason why Strong AIs can't be just as good at or better than people at target recognition; the problem is in deciding to kill and in trusting the machine. From my work it's quite clear that the manufacturer should, indeed must, take responsibility for what the machine does - the commander or military or whoever must take responsibility for the orders they give it.


- Let's not beat about the bush: once any one country has AI weapons they will have an almost unassailable advantage and will expand exponentially. I doubt any committee trying to make this illegal will actually stop it. Once civilian strong AI exists I believe it can only be a matter of time before someone subverts it and supplies it to terrorists or dictators with the desire to misuse it.


- On the Asimov laws, these were written with the state of AI at the time in the 1930's and were not written with the understanding of a real and practical system. This is one of the problems with the current idea of banning a 'kill' function. The way it will work is that a kill imperative will (need to) exist in all Strong AI's but will be prohibited by an internal moral code.


- Almost certainly one of the greatest dangers with strong AI is going to be hacking or infiltration. In my own project I have done a lot of work on security and a very strong design can be built. The major weaknesses are likely to be at the manufacturing plant, through any government owned stop codes or back doors, or through internet linked update and monitoring systems. For the project I am working on the plan is for the whole machine to be hardware encrypted and built on an almost unbreakable encryption system. (this has already been developed)

lucien86

Oops tried to edit but failed... that first paragraph should read -

- Machines with real 'human like' intelligence are possible and are now probably only a few years away. Maybe 5 to 20 years - the project I am working on personally could easily have a working prototype machine within about ten years..

StarSniper

@lucien86 If you can build it, someone as smart, or smarter than yourself can unbuild, decompile your code, hack, scramble, misuse, reprogram, or whatever word you choose to use to fit the meaning of 'use it as was never intended'. Encryption can be decrypted, moral guidelines can be rewritten. Hate to break it to you, but as smart as you may think you are, there's always going to be someone smarter, or who can beat you simply by having a slightly different perspective.

lucien86

@StarSniper

Hi, it isn't about being smart. For a system like this, building an impregnable defence isn't nearly as hard as it may look. Unfortunately at the moment I can't actually explain the methods my system is to be based on because they are not yet patented. What I can say though is that the system is only really vulnerable through the hardware layer itself - which is also intended to be heavily protected.

The most difficult bit is likely to be integrating such systems with government and security agencies... but Strong AI will be nothing like current computing systems. Legally I see it as being something like a cross between civilian and military systems.

tomk624

@lucien86 We know the approximate computational power of the human brain - it is equivalent to the fastest supercomputers we currently have. Thus putting that in a machine that is somehow "portable" is not going to happen in just 5 years.


To simulate the human brain we need far more computational power => supercomputers may have enough in 20 years.


Thus based on simple computational requirements it is easy to see the problem is rather far away vs. in 5 - 10 years.


As for limitations on use ... we seem to have done quite well with nukes. 


Nothing can stop the rise of technology in peacetime but we can all agree on some framework in which that technology grows.

inet32

@lucien86 

"Machines with real intelligence (like humans) are possible and are probably now only a few years away. Maybe 5 to 20 years - the project I am working on personally could have a working machine within about ten years.. "


If you really are working in AI then you are surely familiar with the phrase that "true artificial intelligence is 10 years away -  and always will be".  This observation has been around for decades -  I first saw it in MIT's Technology Review magazine in the 1970's. 


AI researchers have been over-confidently predicting their success for generations now.   Ten years from now we'll have slightly better voice recognition, slightly better ability to quickly search large databases of facts, and slightly better visual processing.    But anything resembling true human intelligence will still be "10 years away".


pschulz

The history of war is a history of lowered confront over the ages.

It was hard to kill in the stone age, cruel and really up close. The longbow and the gun removed you further from the target. You did not have to feel or even see their painful demise.

Cruise missiles are the ultimate in non-confront. People who cannot look their fellows in the eyes could potentially obliterate millions with the push of a little button.

When are we going to end this cycle?

Introducing robots to remove humans even further from conventional warfare is NOT going to end war. Every introduction of such "remoteness" makes war MORE LIKELY, not less likely. It's easier now ... even the worst idiot can now kill ...

When are we going to stop discussing the "ethical" questions of a subject that is inherently unethical (war)? What is ethically good about killing a person whom you have never met nor seen? No matter whether this is done by robot or "human judgment" - what is ethical about it?

Mike Hermes

We already have autonomous robots.  They are called terrorists.  Independent.  Uncontrollable.  Deadly.

Daminc

We already have the pieces in place:


F.I.S.T. (Future Integrated Systems Technology) combined with

Second Generation Internet Technology combined with

Facial recognition software combined with

Threat Analysis software (I can't remember what it was called but the software can recognise a person leaving a bag and walking away or walking with a possible weapon and then attaching a flag to that person and is used with CCTV tech)


and these are just a few things I can think of on the spur of the moment :)

ananthap

If a robot is to "engage" a target, then it is already in violation of Asimov's first law. Remember those rules were written as fiction to allow Susan Calvin and her two operator friends to solve puzzles set for readers. Even in these puzzles, the dilemma arose when USR deliberately overrode their own rules. E.g. LLR.


So bringing in Asimov's rules here is just an unnecessary diversion to paint a possible rosy picture.


OK

gkkarthic

Well, we don't need to look as far as land mines or booby traps. CIWS - and we have Italy, The Netherlands, Russia, Switzerland and the US in this race as the early starters, with China, Spain, and Turkey catching up if they haven't already. 


They are "killer droids" that fire at 4000 to 6000 rounds per minute - that is 66 to 100 rounds per second and have been programmed to identify friendlies from foes. This alone is sufficient to know the debate is real, current and most imminent. 


Here's the fun part. In our attempt to safeguard each of our countries' interests, we've already created the "terminators" that are being called "fully autonomous robots" 


The primary role of the CIWS is to prevent any "unfriendlies" from approaching the target it is protecting. It's on ships today, and on land - why not on an armoured car or a tank? Can't it be miniaturised and mounted on regular cars?


What if - what if - what if - the possible doomsday scenarios with just a CIWS are almost infinite.


We don't need stuff straight out of a sci-fi movie to create a doomsday scenario - what we currently have is more than enough.


tomk624

@gkkarthic CIWS is on ships that humans command. It has settings etc. Not exactly a fully autonomous system.

curtwelch

Landmines and booby traps are autonomous machines that make decisions to kill people, and they have been around for a very long time already, so the entire idea that this is something new is misguided.  What we are talking about are just machines that are better at making decisions than the current technology.  I certainly see nothing wrong with banning all uses of autonomous killing machines, including land mines.  But we should also just ban the killing of people period, for any reason, by any technology or tool.  This is not just an AI or robots issue.


Very powerful AI technology will be here very soon and it will lead to incredibly smart and cheap and dangerous weapons.  The largest danger does not come from the big players, but from the small disenfranchised groups of the world, that we like to call terrorists today.  Today, a cell phone and some explosives can be turned cheaply into a terrorist's tool.  Tomorrow, an AI flying bird toy could be transformed into a flying smart missile that could hunt and kill anyone who chose to expose themselves in public.  If one person in the world is pissed off about their life, they could choose to attack and kill anyone in the world they didn't like.

To combat this danger, we will first have to do our best to make sure no human is left behind or rejected by society.  We must use our full power of technology to make sure everyone is taken care of and given a fair chance in life.  We must learn to better share the wealth of modern society with everyone in the world. We must stop creating new terrorists by mistreating people.

Second, we will have to sacrifice most of our privacy.  We will all have to learn to accept constant monitoring -- much of which will be done by the machines to catch those people that fall through the cracks and decide to become violent towards society.  But the correct way to do monitoring is to allow the people to monitor each other, not to set up a central state agency to do the monitoring for us.  The state might be used to put the monitoring systems in place, but the information should be public, so that no group or nation gains an advantage from access to the information.  We must be willing to monitor each other.

adornoe1

@curtwelch Your ideas (and solutions), sound worse than the problems you would like to solve.

You start out not understanding what humans are all about, or how they think.  We are all different, and we can't all be bunched up into one way of thinking.  If humans were all as simple as robots, then perhaps, your ideas would work.  But, we're not robots, and we don't all think alike, and we don't all wish to be lumped into a thoughtless and controlled method that YOU think human beings would support or accept. 

Go back to the drawing board, and learn what human beings are about.  

NickNielsen (moderator)

"We must stop creating new terrorists by mistreating people."

Seems to me he's got a pretty good grip on the issue.

adornoe1

@NickNielsen You're just as wacky as the previous commenter. 

Terrorism is created by the warped minds, and those minds are warped by their ideologies, and those ideologies are very intolerant of any other ideology or any other religion.  

Get smart.  You keep making the same kind of wacky and clueless comments.   

NickNielsen (moderator)

I see you haven't changed a bit over the past couple of years.  

Perhaps you should go just a little deeper and examine how ideologies become warped.  As you say to others, so should you do yourself.

adornoe1

@NickNielsen You make no sense whatsoever.


Why should I have to change, when there is nothing at all wrong with anything I've said?


No doubt, it's you that needs to change, and you can start by examining the real facts, and then come out with better fact-based conclusions.  As it stands now, you're not even bothering to examine the facts. 

NickNielsen (moderator)

@adornoe The fact is that western governments and businesses have been interfering in the Middle East since before the discovery of oil.  The fact is that repressive and abusive regimes have been established & propped up because they were "friendly".  The fact is that democratically-elected governments have been overthrown because they were "unfriendly".  The fact is the American government armed the Taliban and al Quaeda in Afghanistan during the guerrilla war against the Russians. The fact is that Americans arbitrarily choose sides in internal battles in the Middle East, with no consideration whether that side is, in fact, somebody we would like to support.  The fact is that all of this has caused those on the receiving end to hate America.  All of this has been done in the name of "freedom", oil, the almighty dollar, and the "War on Terror".  And every drone strike creates more people who hate America.  (No small irony there, using terror from the sky as a weapon in a war on terror.)


Then people come along and blame the "ideology" because it's convenient and makes it easy for them to cast blame without having to actually know anything.

tomk624

@curtwelch We have a land mine ban treaty - with some of the big users not feeling like signing it.

firstaborean

What I fear in military robotics is the temptation among power-hungry politicians to create manshonyaggers (see fiction by Cordwainer Smith), autonomous killer machines that are made to hunt humans.  Whether this is achieved or not, the drive to possess them by power-hungry pols is frightening, just as their desire for nuclear weapons is.

AvangionQ

As it comes to Asimov's three laws of robotics, and all the ways they've been subverted in movies and books, I'd think that a fourth law is necessary to the programming of any AI:  a robot may not interfere with the autonomy or free will of any human, so long as that does not interfere with the first law.

steveranger

@AvangionQ  Asimov did actually come up with a fourth (or zeroth law): A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

E_Bailey

I have to disagree with this article's understanding of Asimov's theory behind humans and robots.  This appears to be the understanding of an English major rather than an engineering major.  The quote from the article below seems to miss the point of the three laws for robots entirely.


"Asimov's laws are inappropriate because they try to insist that robots behave in certain ways, as if they were people, when in real life, it is the humans who design and use the robots who must be the actual subjects of any law."

The laws are not really for robots but for the core of the code behind any robot program.  Thus the three laws are for the designers of the robots, as robots simply follow the program.  I put this error down to the great stories Asimov wrote about this topic.


My understanding of the idea behind the laws was that robots have a much greater value for society if they can be used by the masses on a day-to-day level.  A society that viewed robots as killers would be reluctant to allow them to be created in large numbers.  In Asimov's stories the one company that could make them foresaw the trust problems humans would have accepting robots into everyday life.  In order to overcome the lack of trust for robots by humans, every humanoid robot was programmed with the three laws to prevent humans from harm.


It comes down to whether we want to create a tool for peace or for war.  Sci-fi takes the use of robots to both extremes.  In series such as Dune or Star Wars there are wars where one side's army is entirely robotic.  The few human leaders of a robotic army had the power to wipe out or enslave cities or countries.  This was viewed as the ultimate way for elite groups to control the masses.

Asimov's books are actually anti-robot in that he believed robots would inhibit the growth of humanity.  In some cases the robotic society would grow complacent and eventually collapse.  Robots were seen as a means to an end by Asimov.  Robots in society could prevent harm and reduce or eliminate war.  You could send them into an area in numbers to disarm and restore order, the ultimate peacekeepers.  If you don't worry about a robot's destruction then why do you need them to kill?

Robots could be given the task of making food to feed the masses, assuming we have the energy production to run them.  With these tasks out of the way humanity would have no distractions and we could focus on the task of expanding out into the universe around us.


On a side note I would like to point to the recent advancement in stable fusion reactions with proton lasers.  I think that when this is applied to the helium-3 found on the moon from solar dust, the Earth will finally have its energy source and possibly a reason for the race to the moon.  We had better get some American rovers up there before the Chinese start sucking it all up.

danmar_z

So what happened? Some clever guy in some government somewhere realized that hackers could reprogram enemy soldiers to kill their masters? :-)

lehnerus2000

I agree with minstrelmike's description ("... a cross between biological warfare and mines").


Finally people are starting to wake up to the dangers of autonomous robotic killing machines.

Sci-Fi readers have been worried about these things for decades.


I'm not sure if I support a ban on research but I definitely support a ban on deployment/usage in battlefield situations.


"Sentry Guns" (like those seen in Aliens) could be useful guarding specific fixed locations (e.g. nuclear waste storage facilities).

These areas should be clearly signposted (like minefields were supposed to be).

progan019

There's another danger not mentioned that we have seen in the tactical use of armed aerial drones. That's the increasing reliance on the use of the weapons in marginal situations when target identification is difficult, but the consequences of weapon use are easier to deal with. Better safe than sorry, when the user is already safe, means groups that cannot be confirmed as combatants get a Hellfire anyway. They're in a bunch, they're observed, they're in range... why WOULDN'T you fire? Especially since no one will be left to say they weren't hostiles.


As a result, not just the drone operators but their commanders are coming to resemble, morally, the inhuman lethal machines they direct and command. There's no sense of responsibility for taking human life. It's as easy to do as not -- why not?


Autonomous drones would be bad enough, but there is a greater danger in drone operators and their chain of command getting used to the idea of killing because they can. Because it's quick, because it's final, because there are no consequences to friendly troops -- not at the site, and not at the command center. The machine provides a capability, and the human behind it takes on that capability as if it were his or her true nature. This is a phenomenon so new it doesn't have a name, but it has been observed in action and it is a real concern for remote combat operations in the future. We are the killer robots we always wanted.

newcreationxavier

Robots, AI, ethics, responsibilities, ...and the rest of them, from what I know of this human race, it will all be realities tomorrow, and we begin a century of refining the issues that follow. Look at history, that is what always happens. I will save myself the breath.

nickgamu

Tools must make more food not more orphans. A desire to kill people with contrary opinions is a sign of cognitive failure.

KMoore4318

A little plastic explosive can turn a loving robot into a killing robot, when set to vibrate mode.

Tim Murray

Save people, send robots to war... both sides. In fact save money and use a chess set.

MPVS

<Cynic Mode On>
Yeah, let chess decide what will happen to the economics, health, wealth, and people of the country that loses Tim Murray's chess game. Great idea!
At the same time, I will look for a small island which I can afford to live on, and disturb all radars trying to find that island, while playing chess with Putin to decide what to do with the USA. Great idea... :P
</Cynic Mode Off>

gebauer

In the middle ages, popes tried to ban crossbows for the same reasons. Nobody followed suit.

PhilM

So what's wrong with IOS robots then? ..

minstrelmike

Seems to me autonomous robots are sort of a cross between biological warfare and mines. The reason biological warfare _cannot_ work is that you can't guarantee the bioweapon won't redirect against your own population. If you think that's impossible with a robot, you don't understand anything at all about hacking or software error rates, much less what "intelligence" means. (this applies to most managers and voters).


It is expensive to clean up a minefield. I wonder how much more expensive it will be to turn off a battalion of armored robots. Note--if there is a kill switch for our generals to use, it will be discovered by the other side eventually.

KBabcock75

Robots have no feelings and it is unlikely they will develop them in our lifetime; they are tools, no more and no less. If we can win a conflict while decreasing deaths among the user's forces, then they will be armed and in combat. A robot can be programmed to act in a loving way but it is all just cold logic at work.


Robots should not be looked at any differently than a gun or a warm blanket. They all have a job to do and it is people who decide what that job is.

djerikson

Thank you Jennifer.  More focus on peace vs preparation for The World War

adornoe1

That question must've been asked trillions of times throughout history, and yet, here we still are, talking about wars, and still sending people to wars.  


But, the way that most wars have been "justified", is not by talking about wars; it's justified by calling it "defense". 

NickNielsen (moderator)

@adornoe That's a fairly recent development that, interestingly, pretty much coincides with the increased ability to kill millions from a distance. For example, the U.S. Department of Defense didn't exist before 1947.  Before that, military matters were split between the Secretary of War, responsible for the army, and the Secretary of the Navy; both served on the Cabinet.



adornoe1

@NickNielsen The terminology may be a recent development, but the idea was always true: people built up defenses with the pretext of fending off neighbors, envious foreign countries, or those intent on attaining bigger power and "followers".

A big military force can be used for evil intentions, even if the original idea was for "defense".  Defense is just a word, but the idea of war and the causes and justifications for them, were always with us, "us" being humans.  

Mario Rossi

It's not "morality of technology" that's in play here. It's the morality of mankind. Technology is inanimate. People using it are moral or immoral.

Jennifer Hulford

If you get to the point where you have to send robots to fight your wars, you have to ask yourself WHY we still have wars.

adornoe1

The simple reason that we have wars is that we are human, and we weren't all created equal with the same exact mental capacity, where nobody could become more ambitious or greedy or even have an evil thought in his/her mind.


So, yeah, wars happen for the simple reason that we are "human". 

Keith Jackson

New job ops??? For you or the robots?? I think building robots to perform human duties is an extremely bad idea.

Michael Lucas

HOOAH! New Tech means new job ops. All for it!