"We have a new approach to organizing science," Demis Hassabis told a packed room at AAAI-16, the annual conference for the Association for the Advancement of Artificial Intelligence in Phoenix, Arizona.
Hassabis heads DeepMind, founded in 2010 and acquired by Google in 2014 for a reported $625 million. The mission of his company, he said, is to "solve intelligence."
In January 2016, DeepMind announced a major step toward this goal: AlphaGo, a program developed by the company, had become the first computer system to defeat a professional player at the ancient Chinese game of Go. The win came about ten years before experts predicted it would, said Hassabis.
Deep Blue, an artificial intelligence developed by IBM, famously mastered chess in 1997 when it beat world champion Garry Kasparov. But Deep Blue, Hassabis pointed out, was narrowly specialized—"it couldn't play tic-tac-toe without being totally reprogrammed."
In contrast, AlphaGo was created through a "prism of reinforcement learning," Hassabis said. It's general AI in the sense that it's "training from experience," he later told TechRepublic. "It can learn from data rather than being told what to do."
That differs from Deep Blue, which was programmed specifically for chess. In Deep Blue, "you program in the heuristics and rules and strategies," Hassabis said. "Our machine learned it."
Games, Hassabis said, "are the perfect platform for testing AI algorithms. There's unlimited training data, no testing bias, parallel testing, and you can record measurable progress."
Although there are other ways to reach AI goals, Hassabis believes we have much to learn from the brain, which, he said, is the "only existing proof we have that general intelligence is possible." So he's looked to neuroscience for inspiration—and, it should be noted, he has a research background in the field, having studied the hippocampus.
And although our knowledge of the brain is far from complete, the "exponential increase in understanding the brain" enabled by tools like optogenetics, connectomics, and two-photon microscopy "allow[s] a wealth of new ways to understand the mind," he said.
Go, Hassabis said, is the "most complex professional game man has ever devised." Why? Its rules are simple, yet the number of possible positions is enormous. And unlike chess, where players can often calculate their way through a position, Go relies heavily on intuition.
So how did DeepMind beat it?
Essentially, DeepMind trained a supervised-learning policy network to mimic expert human moves, then had the program play against itself, improving through reinforcement learning. What's important, beyond the win, is the how, said Hassabis. "We used general-purpose learning algorithms—they weren't hand-crafted for Go." The chart below gives a more detailed picture of AlphaGo's strategy:
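To make the self-play idea concrete, here is a minimal, illustrative sketch of reinforcement learning through self-play on a toy game (single-pile Nim). This is not AlphaGo's method—AlphaGo combined deep policy and value networks with Monte Carlo tree search—but it shows the core loop the article describes: a policy plays against a copy of itself, and winning moves are reinforced while losing moves are penalized. All names and parameters here are invented for the example.

```python
import math
import random
from collections import defaultdict

# Toy game: one pile of stones; players alternate removing 1-3 stones.
# Whoever takes the last stone wins.
MAX_TAKE = 3
START = 10

# A tabular stand-in for a "policy network": a preference score per
# (pile_size, move), turned into move probabilities via softmax.
prefs = defaultdict(float)

def policy(pile):
    """Return legal moves and their softmax probabilities for this pile size."""
    moves = [t for t in range(1, MAX_TAKE + 1) if t <= pile]
    m = max(prefs[(pile, t)] for t in moves)          # stabilize the softmax
    weights = [math.exp(prefs[(pile, t)] - m) for t in moves]
    total = sum(weights)
    return moves, [w / total for w in weights]

def play_self_game():
    """One game of the policy against itself; returns each side's moves and the winner."""
    pile, player = START, 0
    history = {0: [], 1: []}
    while True:
        moves, probs = policy(pile)
        take = random.choices(moves, probs)[0]
        history[player].append((pile, take))
        pile -= take
        if pile == 0:
            return history, player                    # last stone taken: this player wins
        player = 1 - player

def train(games=5000, lr=0.5):
    """Crude REINFORCE-style self-play: +1 for the winner's moves, -1 for the loser's."""
    for _ in range(games):
        history, winner = play_self_game()
        for player, moves in history.items():
            reward = 1.0 if player == winner else -1.0
            for pile, take in moves:
                prefs[(pile, take)] += lr * reward

random.seed(0)
train()

# From a pile of 3, taking all 3 stones wins immediately, so that move is
# reliably reinforced; the trained policy should strongly prefer it.
moves, probs = policy(3)
best = moves[max(range(len(moves)), key=probs.__getitem__)]
print(best)
```

The same structure scales up conceptually: replace the preference table with a neural network, and the +1/-1 bookkeeping with a policy-gradient update—which is the "general-purpose" character Hassabis emphasized, since nothing in the loop is specific to Nim (or to Go).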
Hassabis has another hurdle ahead for AlphaGo—beating Lee Sedol, whom he calls the "Roger Federer of Go." DeepMind will challenge Sedol in a live-streamed match in Seoul, South Korea, beginning March 9.
Most professionals, Hassabis said, don't think AlphaGo has a chance of winning.
"But," he said, "they're not programmers."
Hope Reese has nothing to disclose. She doesn't hold investments in the technology companies she covers.
Hope Reese is a journalist in Louisville, KY. Her writing has been featured in The Atlantic, The Boston Globe, The Chicago Tribune, Playboy, Undark Magazine, VICE, Vox, and other publications.