Elon Musk believes nations’ competition for artificial intelligence (AI) superiority may lead to World War III. And he’s not the only tech leader worried about the dangers of AI.

More than 100 leaders of AI companies, including Musk, have signed an open letter to the United Nations Convention on Certain Conventional Weapons, voicing their concern that companies building AI systems may convert the technology into autonomous weapons. The letter specifically warns:

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

SEE: Cyberweapons are now in play: From US sabotage of a North Korean missile test to hacked emergency sirens in Dallas (free PDF) (TechRepublic)

Never mind killer robots

Scientists and engineers are publicly agreeing with those who signed the open letter–and even taking it a step further. Academics warn that complex AI systems are so unpredictable that well-intentioned robotic environments can, under certain conditions, turn adversarial–even dangerous.

“Research into complex systems shows how behavior can emerge that is much more unpredictable than the sum of individual actions,” writes Taha Yasseri, a research fellow in computational social science at the University of Oxford, in his column for The Conversation, Never mind killer robots–even the good ones are scarily unpredictable. “Ecosystems of relatively simple AI programs–what we call stupid, good robots–can surprise us, even when the individual bots are behaving well.”

SEE: IT leader’s guide to the future of artificial intelligence (Tech Pro Research)

Self-driving cars and their decision processes are one example of why this lack of predictability is disconcerting. This TechRepublic article asks what is supposed to happen when a self-driving car encounters another vehicle heading directly toward it and a pedestrian is in the only escape path. Who does the car’s AI system put in harm’s way? Now imagine a busy intersection where several autonomous vehicles are interacting with each other and facing the same circumstances.
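To make the dilemma concrete, here is a minimal, purely hypothetical sketch of the kind of cost-minimizing logic a planner might apply. The options and harm scores are invented for illustration and are not drawn from any real vehicle’s software:

```python
# Hypothetical sketch: a planner picks the maneuver with the lowest
# expected harm. The options and scores below are invented for
# illustration; real autonomous-vehicle stacks are far more complex.

def choose_maneuver(options):
    """Return the option with the lowest expected harm score."""
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"action": "brake_in_lane", "expected_harm": 0.8},  # likely head-on impact
    {"action": "swerve_right",  "expected_harm": 0.6},  # pedestrian in path
    {"action": "swerve_left",   "expected_harm": 0.7},  # oncoming traffic
]

print(choose_maneuver(options)["action"])  # -> swerve_right
```

The uncomfortable part is that someone has to assign those harm scores, and when several such planners interact at a busy intersection, each one’s “least harmful” choice changes the options available to the others.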

“These systems are often very sensitive to small changes and can experience explosive feedback loops,” explains Yasseri. “It is also difficult to know the precise state of such a system at any one time. All things that make these systems intrinsically unpredictable.”
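Yasseri’s point about sensitivity to small changes is easy to demonstrate. The sketch below (my own illustration, not code from the research) iterates the logistic map, a textbook feedback loop, from two starting states that differ by one part in a million; within a few dozen steps the trajectories bear no resemblance to each other:

```python
# Sensitivity to initial conditions, shown with the logistic map
# x_next = r * x * (1 - x), a textbook example of a feedback loop.
# Two starting states differing by 1e-6 diverge completely.

def trajectory(x, r=3.9, steps=40):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.500000)
b = trajectory(0.500001)

for step in (10, 20, 30, 40):
    print(f"step {step}: {a[step - 1]:.6f} vs {b[step - 1]:.6f}")
```

If the precise state of the system can’t be measured down to the sixth decimal place, its future behavior can’t be predicted–exactly the property Yasseri describes.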

Real-world examples of unpredictable behavior

Yasseri and fellow researchers Milena Tsvetkova, Ruth García-Gavilanes, and Luciano Floridi, all from the Oxford Internet Institute, report in their research paper Even good bots fight: The case of Wikipedia on their analysis of online bot activity, in particular the interactions between bots that automatically edit articles on Wikipedia. “These different bots are designed and exploited by Wikipedia’s trusted human editors…,” writes Yasseri in The Conversation article. “Individually, they all have a common goal of improving the encyclopedia. Yet their collective behaviour turns out to be surprisingly inefficient.”

In his column, Yasseri explains that the Wikipedia bots follow established rules but lack central management. He adds, “As a result, we found pairs of bots that have been undoing each other’s edits for several years without anyone noticing. And of course, because these bots lack any cognition, they didn’t notice it either.”
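A minimal sketch (mine, not the researchers’ code) shows how two individually sensible bots can fight forever. Each enforces one fixed rule, neither rule is wrong in isolation, and with no central coordinator they simply revert each other indefinitely:

```python
# Hypothetical sketch of two well-behaved Wikipedia-style edit bots
# that undo each other forever. Each enforces one fixed rule and has
# no awareness of the other; there is no central coordinator.

article = "The colours of the flag"

def bot_british(text):
    return text.replace("colors", "colours")   # prefers British spelling

def bot_american(text):
    return text.replace("colours", "colors")   # prefers American spelling

for round_ in range(4):
    article = bot_american(article)
    print(f"round {round_}: bot_american -> {article!r}")
    article = bot_british(article)
    print(f"round {round_}: bot_british  -> {article!r}")
# The edit war never converges: each bot keeps "correcting" the other.
```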

Chatbots are another example of unpredictable responses. This YouTube video may seem humorous, but imagine that type of exchange between physical robots whose motor responses are tied to key phrases; things could get interesting.
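The dynamic is the same as in the revert-bot sketch above, only conversational. As an invented illustration (not taken from the video), two keyword-triggered chatbots fed each other’s output quickly lock into a loop neither designer anticipated:

```python
# Invented illustration: two keyword-triggered chatbots fed each
# other's output. Each rule is harmless alone; together they loop.

RULES_A = {"hello": "Why did you say hello?", "why": "You tell me why!"}
RULES_B = {"why": "I asked why first.", "hello": "Hello to you too."}

def reply(rules, message):
    for keyword, response in rules.items():
        if keyword in message.lower():
            return response
    return "hello"  # default greeting restarts the cycle

message = "hello"
for turn in range(6):
    message = reply(RULES_A if turn % 2 == 0 else RULES_B, message)
    speaker = "A" if turn % 2 == 0 else "B"
    print(f"bot {speaker}: {message}")
# After two turns the bots settle into "You tell me why!" /
# "I asked why first." forever.
```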

Unpredictability increases with complexity and number of devices

The choice to study Wikipedia edit bots and chatbots was not by chance. The edit bots were chosen because they represent simple systems deployed in large quantities, whereas chatbots exemplify the interactions between a few sophisticated programs. The fact that unexpected conflicts emerged in both cases has Yasseri and the other researchers convinced that complexity, and therefore unpredictability, increases exponentially as more systems are added to the environment. He adds, “So in a future system with many sophisticated robots, the unexpected behavior could go beyond our imagination.”
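Part of the reason is simple combinatorics. The back-of-the-envelope sketch below (my own, with an assumed 10 internal states per agent) shows that pairwise interaction channels grow quadratically with the number of agents, while the joint state space grows exponentially–far faster than any one designer can test:

```python
# Back-of-the-envelope: with n agents, pairwise interaction channels
# grow quadratically, and the joint state space grows exponentially.

K = 10  # assumed number of internal states per agent (illustrative)

for n in (2, 5, 10, 20):
    pairs = n * (n - 1) // 2
    joint_states = K ** n
    print(f"{n:>3} agents: {pairs:>4} pairs, {joint_states:.1e} joint states")
```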

The researchers hint that an environment full of self-driving cars might be a perfect storm of autonomy. “We don’t know what will happen once we have a large, wild system of fully-autonomous vehicles,” explains Yasseri in his column. “They may behave very differently than a small set of individual cars in a controlled environment. And even more unexpected behaviour might occur when driverless cars ‘trained’ by different humans in different environments start interacting with each other.”

SEE: The Advanced Guide to Deep Learning and Artificial Intelligence Bundle (TechRepublic Academy)

Back to killer robots

The research of Yasseri, Tsvetkova, García-Gavilanes, and Floridi helps substantiate the concerns raised in the open letter to the United Nations, and adds another dimension: Predictability falls off as the number of robotic devices increases. “Think of the killer robots that Elon Musk and his colleagues are worried about,” concludes Yasseri. “A single killer robot could be very dangerous in [the] wrong hands. But what about an unpredictable system of killer robots? I don’t even want to think about it.”