Last week, President Obama began a national conversation around AI. Experts are impressed, but stress the need for clarity around safety when it comes to artificial general intelligence.
At last week's White House Frontiers Conference, President Obama outlined a vision for the development of AI: a plan that involves the active collaboration of the US government.
The event, co-hosted by the University of Pittsburgh and Carnegie Mellon University, was the first of its kind. Earlier that week, the White House had unveiled a report—Preparing for the Future of Artificial Intelligence—which outlined how the government can be involved in researching, developing, and regulating future technologies.
So how well does the American president understand AI? And how powerful will these policies be? TechRepublic spoke to several AI experts to find out. All five experts agreed that this is a big step in the right direction and welcomed the president's attention to supporting AI research, but they also identified a few key areas of concern.
Jobs displaced by AI
"I am not as optimistic as Obama that as many jobs will be created as destroyed by technology," said Toby Walsh, professor of AI at the University of New South Wales. "Just because this was true in the past, does not mean it is necessarily true in the future. There is no fundamental law of economics that requires this."
Susan Schneider, associate professor of philosophy at the University of Connecticut, emphasized the need for a national conversation on AI-driven unemployment. "Given that AI is projected to outmode so many people in the workplace," she said, "it's time for a national dialogue on universal basic income."
Artificial general intelligence
One area that a few of the experts agreed needed clarification was "artificial general intelligence," or AGI. This should be distinguished from "narrow AI," which refers to computers programmed to master a specific skill. In Wired, Obama wrote: "If you've got a computer that can play Go, a pretty complicated game with a lot of variations, then developing an algorithm that lets you maximize profits on the New York Stock Exchange is probably within sight."
"I do not think this is true at all," said Vincent Conitzer, professor of computer science at Duke University. "Go is a game with very clearly defined rules that does not require general knowledge about the world," he said. In contrast, in the financial markets, there will always be a need for broad and integrated understanding of how the world works, he said. "And this is very hard to replicate in AI systems."
Schneider also called President Obama's comments about AGI "a bit disappointing."
And while AI is currently used in activities such as financial trading, Conitzer said, we still need humans. "This is a good example of a very common fallacy in thinking about AI systems," he said, "where we are overly impressed by a system's performance on a narrow, cleanly defined problem, and then incorrectly conclude that similar performance in open-ended ambiguous real-world domains is just a short step away."
Why is clarification around AGI important? It has critical ramifications when it comes to safety. President Obama's plan to wait for benchmarks around AGI before taking action "falls short," said Schneider.
"Obama jokes that we can 'pull the plug' on superintelligent AI," said Schneider. "Of course, as those working on AI safety have warned, a truly advanced AI will be able to anticipate such moves."
"By the time we detect such benchmarks, it may be too late," Schneider said.
Conitzer talked about the two camps approaching the future of AI: one that worries about existential risk (the Nick Bostrom, Elon Musk, and Bill Gates camp), and another, the traditional AI research community, which he said "dismisses the idea that we need to worry about human-level general AI at this point."
"It appears that the report (and the President) are generally more on the latter side," he said.
But, Conitzer said, this is problematic if we are to believe the study cited in the report, in which half of AI researchers say there is a 50% chance of achieving AGI by 2040.
"Holy cow!" said Conitzer. "I am not among that half. But I do think that, if you really believe that that survey accurately captures the best prediction we can make about the future of AI...then the response of just 'monitoring progress' seems wholly inadequate."
"You need to either argue that the survey did not do a good job in generating a good prediction and that you have a better prediction, or you need to respond in a significant way," he said.
One way would be to fund research on keeping AI systems safe, Conitzer said, "or understanding the consequences and significance of human-level general AI."
"I am worried that instead of addressing the question head-on, many people just avoid it because they do not want to be seen as loopy crackpot futurists," he added, "which is not a good reason."
Conitzer also said that making "future superhuman general AI systems safe is a very different type of problem from making today's self-driving cars safe" and that it's "not responsible to just avoid making a judgment on whether superhuman general AI is somewhat likely to emerge in a few decades or not."
Walsh agreed with President Obama that government shouldn't over-regulate. Still, he is "disappointed that he is not advocating more oversight from the US government in two areas where regulation is already too late, autonomous cars and autonomous weapons."
"Tesla cannot be left to experiment on the public," he said. "And we need to curb the arms race that is starting to occur in the development of autonomous weapons that will undoubtedly destabilize global security."
"If Obama wanted one act to do before he finished as president, he could make a bold move here that would be remembered for centuries: Have the US [support banning] lethal, autonomous weapons at the CCW conference in the UN this December," Walsh said. "Be the President that ended killer robots."
Overall, the experts were excited about President Obama's embrace of AI research. "We cannot leave it up to the large tech companies," said Walsh. "Better algorithms to rank ads are not going to get us to thinking machines, greater prosperity and reduced inequality."
Marie desJardins, AI professor at the University of Maryland, Baltimore County and former conference chair of AAAI (the Association for the Advancement of Artificial Intelligence), said she was excited about the plans, although "as always, one hopes it's going to be a sustained interest in the big/hard problems, not just temporary hype."
Walsh echoed the point. "I would, of course, be even happier if he had backed this by putting actual dollars on the table for basic research."
The president's main job is "not to cause panic," said Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville. "And he does it well."
"If the next President is anywhere near as able to learn, digest, and synthesize such complex material," said Conitzer, "I think we are in luck."