The Rivers Casino in Pittsburgh may not seem a likely setting for a major scientific breakthrough. But on Tuesday, it was: Libratus, an AI system developed by Carnegie Mellon University, beat four of the world's top human players in a 20-day competition of Heads-Up No-Limit Texas Hold'em poker.
Libratus, developed by Carnegie Mellon's Tuomas Sandholm, a professor of computer science, and Noam Brown, a Ph.D. student in computer science, competed against Dong Kim, Jimmy Chou, Daniel McAulay, and Jason Les in a competition called "Brains Vs. Artificial Intelligence: Upping the Ante," during which 120,000 hands were played.
"This is the last frontier," said Sandholm during a press conference on Tuesday. "This is a landmark in AI game-playing."
A key element of Libratus's success was the program's ability to improve after each day of play—learning from the human players. "After play ended each day, a meta-algorithm analyzed holes the pros had identified and exploited in Libratus' strategy," Sandholm said in a press release. "It then prioritized the holes and algorithmically patched the top three using the supercomputer each night."
This is very different from how learning has been used in the past in poker, Sandholm said. "Typically researchers develop algorithms that try to exploit the opponent's weaknesses," he said in the release. "In contrast, here the daily improvement is about algorithmically fixing holes in our own strategy."
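Sandholm's description suggests a simple nightly loop: estimate how much each hole costs, rank the holes, and fix only the most costly few. The sketch below is a hypothetical illustration of that idea, not Libratus's actual code; the hole names and cost figures are invented.

```python
# Hypothetical illustration of the nightly routine Sandholm describes:
# rank the exploited "holes" by estimated cost, then patch the top three.
# All hole names and cost numbers below are invented for this sketch.
def patch_top_holes(hole_costs, budget=3):
    """Return the `budget` most costly holes, in descending cost order."""
    ranked = sorted(hole_costs, key=hole_costs.get, reverse=True)
    return ranked[:budget]

hole_costs = {
    "overfolds_to_river_raises": 4.1,  # estimated cost per hand (made up)
    "predictable_small_bluffs": 2.7,
    "misprices_turn_bets": 1.9,
    "rare_line_on_flop": 0.4,
}

print(patch_top_holes(hole_costs))
```

The point of the prioritization step is practical: with limited supercomputer time overnight, fixing the three most expensive weaknesses yields more improvement than spreading effort across every hole.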
The victory marks another important milestone in AI's success in the game world. IBM's Deep Blue beat world chess champion Garry Kasparov in 1997 (Deep Blue, it should be noted, also originated at Carnegie Mellon University). And in March 2016, AlphaGo, Google DeepMind's machine learning platform, achieved a massive AI victory, defeating world champion Lee Sedol in the ancient Chinese game of Go, a victory that came about a decade earlier than most experts had predicted. The win was impressive because, unlike Deep Blue, which was explicitly programmed to play chess, AlphaGo taught itself through reinforcement learning. And with roughly 200 possible moves per turn, compared with about 20 in chess, Go is also a highly complex game that experts say depends on intuition.
Libratus's victory is seen as a major achievement by AI experts because of poker's unique challenge: incomplete information.
While top human players have lost to computers at chess and Go, Vincent Conitzer, computer science professor at Duke University, sees this win as different. "I consider beating top human players in Heads-up No-limit Texas Hold'em to be quite a breakthrough," he said. Poker, he said, is "a game of imperfect information." This means it's "more relevant to real-world strategic decision making, where usually nobody has the complete picture of everything going on, whether it is in business, politics, security, or even one's social life," said Conitzer.
Toby Walsh, AI professor at the University of New South Wales, echoed the point. "Poker is, in some ways, a greater challenge than chess or Go because it is a game of incomplete information," said Walsh. "You don't know what cards the other players have or what cards you or they will be dealt in the future. This means there are many more possibilities to consider than in chess or Go. There are also extra complexities due to betting and bluffing."
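Walsh's point about hidden cards can be made concrete with a toy example, a simplified three-card deal in the style of "Kuhn poker" (this is a standard textbook illustration of imperfect information, not anything from Libratus itself). States that differ only in the opponent's hidden card look identical to a player, so the player must pick one strategy for the whole group, known as an information set.

```python
from itertools import permutations

# Toy deal: one card each to two players from a three-card deck.
DECK = ["J", "Q", "K"]
deals = list(permutations(DECK, 2))  # (player 1's card, player 2's card)

# Group deals by what player 1 can actually observe: only their own card.
# Each group is one "information set" -- states the player cannot tell apart.
info_sets = {}
for my_card, opp_card in deals:
    info_sets.setdefault(my_card, []).append(opp_card)

for card, hidden in sorted(info_sets.items()):
    print(f"P1 holds {card}: opponent could hold {hidden}")
```

In chess or Go, every state is fully visible, so each position is its own decision point; in poker, a player must choose a single action policy over all the hidden possibilities at once, which is what makes betting and bluffing strategically rich.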
Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, said he also sees this as pointing to AI's capabilities in other domains. Beyond a victory in games, "it is much more interesting in terms of what AI can do for us in the domain of business deals, war strategy and interstate negotiations," he said.
It's important to note, said Walsh, that we still don't know exactly how Libratus works, although he expects it uses "good old-fashioned AI techniques like game tree search, abstraction and game theoretic analysis.
"Progress in AI is not just about deep learning," he added.
Walsh also noted that we shouldn't "get carried away" by the victory. "AI hasn't solved the game of poker," he said. "Libratus was only playing two person poker. These techniques will struggle to cope with the much larger game tree when more players are playing. It is likely years yet before machines can play poker as well with more players."
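A quick back-of-the-envelope count shows why adding players blows up the game so fast. The sketch below only counts ways to deal the hidden hole cards, ignoring betting rounds and community cards entirely, so it vastly understates the true game tree; it is an illustration, not a measure of Libratus's actual search space.

```python
from math import comb

def holecard_deals(players):
    """Number of ways to deal 2 hidden hole cards to each player from 52."""
    total, remaining = 1, 52
    for _ in range(players):
        total *= comb(remaining, 2)  # choose this player's two cards
        remaining -= 2
    return total

for p in (2, 3, 6):
    print(f"{p} players: {holecard_deals(p):,} possible deals")
```

Even before a single bet is made, moving from heads-up to a six-player table multiplies the number of hidden-card configurations by many orders of magnitude, which is why Walsh expects multiplayer poker to stay out of reach for years.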
Also, "just as AlphaGo could only play Go, Libratus can only play a special type of poker," said Walsh. "It has no sentience, no desires, no consciousness. It's not going to wake up and decide it wants to do anything else than play poker. That's not in its code. Libratus is an idiot savant at poker. And to adapt these ideas to anything else is going to take person-months or years of effort."
"Nevertheless, it's another step down the road to machines equaling and eventually surpassing humans at tasks we consider intelligent," Walsh said.
- How Google's DeepMind beat the game of Go, which is even more complex than chess (TechRepublic)
- Can Google's DeepMind beat world Go champ? Watch live this week (TechRepublic)
- IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next (TechRepublic)
- Google AlphaGo AI clean sweeps European Go champion (ZDNet)
- Google AI gets better at 'seeing' the world by learning what to focus on (TechRepublic)
- How Google's AI breakthroughs are putting us on a path to narrow AI (TechRepublic)
- Smart machines are about to run the world: Here's how to prepare (TechRepublic)
- Google AI beats humans at more classic arcade games than ever before (TechRepublic)
Hope Reese has nothing to disclose. She doesn't hold investments in the technology companies she covers.
Hope Reese is a journalist in Louisville, KY. Her writing has been featured in The Atlantic, The Boston Globe, The Chicago Tribune, Playboy, Undark Magazine, VICE, Vox, and other publications.