How Google's DeepMind beat the game of Go, which is even more complex than chess

At AAAI-16, Demis Hassabis, head of DeepMind, explains how the company's AI system, AlphaGo, beat one of the most complex games in history.

"We have a new approach to organizing science," Demis Hassabis told a packed room at AAAI-16, the annual conference for the Association for the Advancement of Artificial Intelligence in Phoenix, Arizona.

Hassabis is the head of DeepMind, founded in 2010 and bought by Google in 2014 for $625 million. The mission of his company, he said, is to "solve intelligence."

In January 2016, DeepMind announced a major victory toward this goal: its AlphaGo program had beaten a professional human player at Go, the ancient Chinese board game. The win came about ten years before experts predicted it would, said Hassabis.

Deep Blue, an artificial intelligence developed by IBM, famously mastered chess in 1997 when it defeated world champion Garry Kasparov. But Deep Blue, Hassabis pointed out, still had a limited ability: "it couldn't play tic-tac-toe without being totally reprogrammed."

In contrast, AlphaGo was created through a "prism of reinforcement learning," Hassabis said. It's general AI in the sense that it's "training from experience," he later told TechRepublic. "It can learn from data rather than being told what to do."

Demis Hassabis, head of Google's DeepMind
Image: Hope Reese/TechRepublic

That's different from Deep Blue, which was programmed specifically for chess. In Deep Blue, "you program in the heuristics and rules and strategies," Hassabis said. "Our machine learned it."
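To make the contrast concrete, here is a minimal sketch of the learning-from-experience approach, applied (fittingly) to the tic-tac-toe example from Hassabis's Deep Blue jab. This is a generic self-play learner in Python, not DeepMind's code; the Monte Carlo-style value update and the EPSILON/ALPHA constants are arbitrary illustrative choices. No tic-tac-toe strategy is programmed in; the value table is filled purely from game outcomes.

    import random
    from collections import defaultdict

    # Learned value of each (board, move) pair; boards are strings like "X.O......".
    Q = defaultdict(float)
    EPSILON, ALPHA = 0.1, 0.5  # exploration rate and step size (arbitrary choices)

    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in WIN_LINES:
            if b[i] != "." and b[i] == b[j] == b[k]:
                return b[i]
        return "draw" if "." not in b else None

    def choose(b):
        moves = [i for i, c in enumerate(b) if c == "."]
        if random.random() < EPSILON:               # explore a random move
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(b, m)])  # exploit learned values

    def play_and_learn():
        b, player, history = "." * 9, "X", []
        while winner(b) is None:
            m = choose(b)
            history.append((b, m, player))
            b = b[:m] + player + b[m + 1:]
            player = "O" if player == "X" else "X"
        result = winner(b)
        for state, move, p in history:              # credit each move with the outcome
            reward = 0.0 if result == "draw" else (1.0 if result == p else -1.0)
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

    for _ in range(50_000):                         # improve purely through self-play
        play_and_learn()

A Deep Blue-style program would instead start from hand-written heuristics for the specific game; change the game, and you rewrite the program.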

Games, Hassabis said, "are the perfect platform for testing AI algorithms. There's unlimited training data, no testing bias, parallel testing, and you can record measurable progress."

Although there are other ways to reach AI goals, Hassabis believes we have much to learn from the brain, which, he said, is the "only existing proof we have that general intelligence is possible." So he has looked to neuroscience for inspiration; notably, he has firsthand research experience studying the hippocampus.

And although we're nowhere near a complete understanding of the brain, techniques like optogenetics, connectomics, and two-photon microscopy are driving an "exponential increase in understanding the brain" that will, he said, "allow a wealth of new ways to understand the mind."

Go, Hassabis said, is the "most complex professional game man has ever devised." Why? Its rules are simple, yet the number of possible moves is enormous. And unlike chess, where strong play leans on explicit strategy and calculation, Go often relies on intuition.
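Some rough, commonly cited numbers (illustrative estimates, not figures from Hassabis's talk) show why that scale matters: the game tree grows roughly as branching_factor ** game_length, and Go's branching factor dwarfs chess's. In Python:

    import math

    # Rough, commonly cited estimates (illustrative, not from the talk):
    # chess offers ~35 legal moves per turn over ~80 plies; Go, ~250 over ~150.
    chess_tree = 35 ** 80
    go_tree = 250 ** 150

    print(f"chess game tree: ~10^{int(math.log10(chess_tree))}")  # ~10^123
    print(f"go game tree:    ~10^{int(math.log10(go_tree))}")     # ~10^359

That gap is why the exhaustive search that served Deep Blue well in chess is hopeless for Go.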

So how did DeepMind beat it?

Essentially, DeepMind first trained a policy network, via supervised learning, to mimic expert human players, then let the program play against itself, improving through reinforcement learning. What's important, beyond the win, is the how, said Hassabis. "We used general-purpose learning algorithms—they weren't hand-crafted for Go." The chart below gives a more detailed picture of AlphaGo's strategy:

A chart of AlphaGo's strategy
Image: Hope Reese/TechRepublic
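As a rough Python sketch of that two-stage recipe (a toy linear-softmax policy with synthetic data and a stubbed self-play game; the sizes, helpers, and data here are invented for illustration, standing in for AlphaGo's deep convolutional networks and a real Go engine):

    import numpy as np

    rng = np.random.default_rng(0)
    N_FEATURES, N_MOVES = 32, 9   # toy sizes, not AlphaGo's 19x19 board inputs

    W = np.zeros((N_FEATURES, N_MOVES))    # a linear "policy network"

    def policy(state):
        logits = state @ W
        p = np.exp(logits - logits.max())  # numerically stable softmax
        return p / p.sum()

    # Stage 1: supervised learning -- mimic (position, expert move) pairs.
    # Random stand-ins for the millions of human positions AlphaGo trained on.
    states = rng.normal(size=(1000, N_FEATURES))
    expert_moves = rng.integers(0, N_MOVES, size=1000)

    lr = 0.1
    for s, a in zip(states, expert_moves):
        p = policy(s)
        W += lr * np.outer(s, np.eye(N_MOVES)[a] - p)  # cross-entropy gradient step

    # Stage 2: reinforcement learning -- improve the policy through self-play.
    def self_play_game():
        """Stub: a real system would play full games of Go between the current
        policy and earlier versions of itself."""
        traj = [(s, rng.choice(N_MOVES, p=policy(s)))
                for s in rng.normal(size=(5, N_FEATURES))]
        return traj, rng.choice([1.0, -1.0])           # win/loss outcome

    for _ in range(200):
        traj, z = self_play_game()
        for s, a in traj:
            p = policy(s)
            W += lr * z * np.outer(s, np.eye(N_MOVES)[a] - p)  # REINFORCE update

The full system also trained a value network to evaluate positions and combined both networks with Monte Carlo tree search at play time.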

Hassabis has another hurdle ahead for AlphaGo: beating Lee Sedol, whom he calls the "Roger Federer of Go." DeepMind will challenge Lee in a live-streamed match in Seoul, South Korea, starting March 9.

Most professionals, Hassabis said, don't think AlphaGo has a chance of winning.

"But," he said, "they're not programmers."

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
