At AAAI-16, AI guru and philosopher Nick Bostrom explored the broad landscape of machine intelligence, and the questions we need to be asking about our future.
Nick Bostrom is not a typical speaker for the Association for the Advancement of Artificial Intelligence (AAAI). The conference, held February 12-17 in Phoenix, Arizona, is one of the premier events in the AI world, drawing more than a thousand students, professors, and researchers, many of them specialists in computer science, engineering, and mathematics.
Bostrom is a philosopher.
Bostrom heads the Future of Humanity Institute, a multidisciplinary research program at Oxford devoted to the question of how humanity should prepare for its future, and he is considered a prophet when it comes to an "intelligent" future. He brought significant attention to the field of AI with his 2014 bestseller, Superintelligence: Paths, Dangers, Strategies, which outlines the possible paths to an age of superintelligence, defined as a general machine intelligence that exceeds human capability.
The birth of superintelligence, Bostrom told the audience, will be a monumental event in human history. "We can compare the rise of a superintelligence," he said, "to the rise of Homo sapiens in the first place."
But the ape-to-human analogy may, in fact, be too modest. The transition from human to machine intelligence, he said, is "even more radical" than the transition from animal to human.
Bostrom's keynote focused on what's possible when considering superintelligence. What's real? he asked. And, "what should we leave to the science fiction authors to explore?"
He urged the group to consider a "view of the landscape ahead. What are the practical implications if you zoom out?" This broad view, Bostrom said, can inform the questions we ask today.
He proposed three categories: the short-term future, which includes technological advances like self-driving cars; the long-term future, which includes things like AI assistants and humanoid robot companions; and the deep future, which could include a cure for aging, "uploading," "ancestor simulations," and more.
Bostrom's talk, which reflects the message of his book, was a call to action. He warned of the dangers that will come when superintelligence is reached. We cannot, he said, presume that superintelligent agents will adopt human values. The more likely scenario, he believes, is that such an agent will pose a threat to any humans who stand "in its way."
"If the robot becomes sufficiently powerful," said Bostrom, "it might seize control to gain rewards."
Ignoring this danger could prove fatal for humankind.
"Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb," Bostrom wrote in Superintelligence. With no adults around, it doesn't matter if most of them are sensible.
"Some little idiot is bound to press the ignite button just to see what happens," he said.
Bostrom is not alone in his concerns. The Future of Life Institute, whose advisory board includes Stephen Hawking and Elon Musk, recently received a $10 million donation from Musk to pursue the same goal: ensuring that AI benefits humanity. There is also the Leverhulme Centre for the Future of Intelligence and the Machine Intelligence Research Institute (MIRI) in Berkeley, California.
Informally, many people I spoke to at the conference agreed that Bostrom had an important message for the AI community. Vincent Conitzer, a professor at Duke University, said there was nothing wrong with a "breath of fresh air."
Yet there was an undercurrent of resistance as well. Many researchers believe the ideas Bostrom presented are too far off to be of immediate concern. Oren Etzioni, director of the Allen Institute for Artificial Intelligence, tweeted: "We run code while Bostrom runs arguments. Philosophy is not science or engineering—it is highly speculative."
Pushback also emerged in the Q&A session, where several audience members asked why Bostrom does not focus on a potential future in which machines and humans coexist.
Bostrom spoke with TechRepublic after his talk. He said he was "surprised by the acceptance" from the AAAI community, which, he believes, would not have been as welcoming a few years ago.
In his answer to the audience question, Bostrom stressed that the scenario must be considered in its proper context, which is very far in the future. An ideal future, he agreed, is one in which humans and machines coexist.
Yet he "would not want to make a plan that presupposes that machines will not replace humans."
He said, "You don't want to be in a situation where, in 30 years, they're here—and then we start thinking about the consequences."