But while the narrow AI systems of today aren't a threat to the human race, that doesn't mean we should have blind faith in their decisions.
Warnings that mankind is on the brink of developing The Terminator's Skynet and other homicidal AIs are 'nonsense' and will be for decades to come, according to a Microsoft research head.
The idea that humans are on the verge of developing an artificial intelligence whose abilities far outstrip our own is ridiculous, said Chris Bishop, Microsoft's director of research at Cambridge, highlighting the many limitations of AI systems today.
"This is a good moment for a little reality check," he told a public discussion hosted by The Royal Society in London this week.
While recent breakthroughs in machine learning have allowed computers to become as adept as the average person at recognising faces and objects and to make huge strides in areas such as voice recognition, Bishop cautioned against assuming that machines are outstripping human performance across the board.
"Yes, deep learning has achieved human-level performance in object recognition but what does that mean? It means the machine makes about the same number of errors as the human.
"The reason the machine is as good as the human at this is because it can distinguish between 157 varieties of mushroom, whereas it makes all kinds of stupid mistakes that humans wouldn't make."
Even some of the most celebrated examples of machine intelligence, such as a Google DeepMind system beating a world champion in the notoriously complex game of Go, need to be understood in context of the time and effort that went into building the system, he said.
"[Take] the Go example, where the machine has just about crept ahead of the best human. The machine saw at least 10,000-times as many Go games as the human saw. Human capabilities still far outstrip machines in many areas," echoing researchers who highlight the trouble robots have with picking up items and walking.
Another common confusion is to assume that, because machine learning systems can perform some of the individual tasks that people can, such as driving cars or writing photo captions, they are on the verge of matching the much more general abilities of humans.
This pursuit of a general human-level artificial intelligence dominated early research in the 1950s and 60s and enjoyed a resurgence in the 1980s, but was eventually abandoned in favor of a narrower focus: building systems that could learn to do a single task or a small group of tasks. This shift away from developing general-level intelligence was for good reason, said Maja Pantic, professor of affective and behavioural computing at Imperial College London.
"What people were thinking at the time was to build generic systems that would solve any possible problem. Then they realised this was completely impossible," she told the debate.
Microsoft's Bishop agreed that general artificial intelligence will not be developed until far into the future and that worries about such machines wiping out the human race are misplaced.
"What about Terminator and the rise of the machines and so on? Utter nonsense, yes. At best, such discussions are decades away."
What should we worry about?
While the narrow AI systems of today aren't a direct threat to the human race, that doesn't mean we should have blind faith in their decisions, the gathered AI researchers said.
"There are dangers and risks associated with AI," said Bishop.
"They're not killer robots running around zapping people with lasers but they're much more mundane."
Such is the complexity of the deep neural networks that underpin modern machine learning systems that understanding how a system arrived at a particular conclusion is impossible.
The opaque nature of neural nets increases the possibility that the machine's decisions could be subject to unknown biases, originating from the huge amount of data such systems are trained on.
"The system has learned from examination of large amounts of data to produce a solution and it must be a good solution otherwise we wouldn't have deployed it. But it could have learned subtle biases," said Bishop.
"There's no hope of opening up one of these deep neural networks and really understanding in terms of human rules what's going on. It's just too complex."
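Bishop's point about subtle bias can be sketched in a few lines. The sketch below is entirely invented for illustration (the data generator, the feature names, and the numbers are assumptions, not anything from the talk): a simple classifier is trained on data where an irrelevant feature happens to correlate with the label, quietly learns to rely on that feature, and loses accuracy once the correlation disappears.

```python
import math
import random

random.seed(0)

def make_data(n, spurious_corr):
    """Each example is ((true_signal, spurious_feature), label).
    true_signal matches the label 80% of the time; spurious_feature
    matches it with probability spurious_corr (the planted bias)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        true_signal = label if random.random() < 0.8 else 1 - label
        spurious = label if random.random() < spurious_corr else 1 - label
        data.append(((true_signal, spurious), label))
    return data

def train(data, epochs=100, lr=0.5):
    """Plain logistic regression fitted with stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def accuracy(data, w, b):
    hits = sum(
        ((w[0] * x[0] + w[1] * x[1] + b) > 0) == (y == 1) for x, y in data
    )
    return hits / len(data)

# Train where the spurious feature is almost perfectly predictive.
w, b = train(make_data(500, spurious_corr=0.95))

# The model leans on the spurious feature more than on the real signal,
# so its accuracy drops once that correlation no longer holds.
acc_biased = accuracy(make_data(500, spurious_corr=0.95), w, b)
acc_clean = accuracy(make_data(500, spurious_corr=0.5), w, b)
print(f"spurious weight {w[1]:.2f} vs real weight {w[0]:.2f}")
print(f"accuracy with correlation {acc_biased:.2f}, without {acc_clean:.2f}")
```

Nothing in this toy model is "wrong" by its training data, which is exactly the problem Bishop describes: the bias is invisible unless you ask where the data came from and test the system on data where the accidental correlation is broken.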
The resulting risks mean it's important to ask lots of questions about how machine learning systems are trained and the uses they are being put to, he said, stressing the need to ask: "Where does the data come from? Who controls the data? How's the data being used?
"Those are the things we should be thoughtful of because they matter today, and will do in the next year and so on, long before the killer robots arise."
More about AI...
- How to use AI to automatically schedule your appointments with x.ai (TechRepublic)
- 7 trends for artificial intelligence in 2016: 'Like 2015 on steroids' (TechRepublic)
- The original robot "butler": What Zuckerberg can learn from Carnegie Mellon's HERB (TechRepublic)
- Why robots still need us: David A. Mindell debunks theory of complete autonomy (TechRepublic)
- How AI and automation could hollow out the US job market (TechRepublic)
- Q&A: A powerful look at the future of AI, from its epicenter at Carnegie Mellon (TechRepublic)