Leaders at the UN in Geneva have agreed to formally discuss guidelines for the design, development, and engineering of autonomous weapons. Here's why it matters.
Many current fears about AI and automation center on the idea that superintelligence could somehow "take over," turning streets around the globe into scenes from The Terminator. While there is much to be gained from discussing the safe development of AI, there's another, more imminent danger: autonomous weapons.
On Friday, after three years of negotiations, the UN unanimously agreed to take action. At the Fifth Review Conference of the UN Convention on Certain Conventional Weapons, countries around the world agreed to begin formal discussions—which will take place for two weeks at the 2017 UN convention in Geneva—on a possible ban of lethal, autonomous weapons. Talks will begin in April or August, and 88 countries have agreed to attend. This week, the number of countries that support a full ban on killer robots went from 14 to 19.
"By moving to a group of governmental experts to formalize the official process, it takes it from being led by these kind of outside academics, and means that they have to find government experts to handle it," said Mary Wareham, coordinator for the Campaign to Stop Killer Robots. "It raises the expectation that they're going to do something about this," she said, although what will be done is not yet clear.
"It is great to see universal recognition of dangers coming from weaponized artificial intelligence," said Roman Yampolskiy, director of the Cybersecurity lab at the University of Louisville. "It is my hope that, in the future, general danger coming from malevolent AI or poorly designed superintelligent systems will likewise be universally understood."
In an address to the UN—which included a briefing by the Campaign to Stop Killer Robots—Toby Walsh, professor of Artificial Intelligence at the University of New South Wales, highlighted the necessary steps involved in obtaining a ban on fully autonomous weapons.
Walsh referenced an initiative, announced on Tuesday by the IEEE—a group with half a million members in the tech space—that "defined ethical standards for those building autonomous systems." The IEEE report "contained a number of recommendations including: there must be meaningful human control over individual attacks, and the design, development, or engineering of autonomous weapons beyond meaningful human control to be used offensively or to kill humans is unethical."
Last year, Walsh wrote an open letter, signed by thousands of leading researchers from the AI community, voicing concerns about an AI arms race and what could happen if these lethal weapons—which can kill at superhuman speed—end up in the wrong hands.
Earlier this week, nine members of the US Congress also wrote a letter to the secretaries of state and defense, supporting a ban on autonomous weapons.
"This is a very important issue that has suddenly become urgent," said Vince Conitzer, computer science professor at Duke University. "Where it concerns AI, the border between science fiction and reality is getting blurry in places, and autonomous weapons are on the fast track to crossing over to the reality side. Now is the time to act on this."
Bonnie Docherty, who represents Human Rights Watch and Harvard Law School's International Human Rights Clinic, co-authored a report this week highlighting the dangers of fully autonomous weapons. While Docherty is disappointed that the talks will be limited to two weeks—she'd been hoping for four—she is still encouraged by the decision.
"This week is a key moment for international efforts to address the concerns raised by fully autonomous weapons," Docherty said. "We are pleased that the countries at this major disarmament forum have agreed to formalize discussions on lethal autonomous weapons systems, which should be an important step on the road to a ban."
Roughly a hundred countries were involved in the discussions at the UN, and most have been entirely on board. "They've been over and over saying 'yes, we need to go to the next level,'" said Wareham. Even China, she said, agreed that international law on the issue was critical.
Only one country—which got on board on Friday—"expressed skepticism, hesitation, and said it's premature," according to Wareham.
The country in question? Russia.
- AI will destroy entry-level jobs - but lead to a basic income for all (TechRepublic)
- The future of AI in the US: What it could look like in the Trump Administration (TechRepublic)
- Q&A: Former AAAI chair discusses future of AI research and what's coming up at AAAI next month (TechRepublic)
- Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)
- Obama's report on the future of artificial intelligence: The main takeaways (ZDNet)