Earlier this year Google's DeepMind revealed AlphaGo Zero, a machine-learning system that in a short space of time taught itself to play the notoriously complex game of Go at a world-beating level.

AlphaGo Zero played “completely random” games against itself, and then learnt from the results.

In just three days it was able to defeat, by 100 games to 0, the version of AlphaGo that beat the Go world champion Lee Se-dol in March 2016, a victory hailed as a milestone for AI development. After 21 days of playing itself it had gone even further, besting AlphaGo Master, an online version of AlphaGo that won more than 60 straight games against top Go players, and within 40 days it was able to beat all other versions of AlphaGo.

At the time, DeepMind lead researcher David Silver said that achieving this level of performance in a domain as complicated as Go “should mean that we can now start to tackle some of the most challenging and impactful problems for humanity”.

But what is the significance of AlphaGo's extraordinary success at the game of Go, and how does it advance the practical capabilities of AI?

Go is more limited than the real world

Joseph Sirosh, Microsoft’s corporate VP for AI & research, said that while AlphaGo is an impressive demonstration, its real-world applications are limited.

“The thing about AlphaGo, we see it as a very unrealistic problem, because it is a completely self-contained problem,” he said.

“You can develop as much training data as you want, there’s no variability, it’s completely deterministic.”

Peter Norvig is director of research at Google and co-author of a seminal textbook on artificial intelligence. He agrees there are a limited number of possibilities in a game of Go compared with many real-world environments.

“It is true that Go is fully observable and deterministic (literally black and white): players can see the whole board, and they know exactly what will be the result of playing a stone,” he said.

“Many ‘real world’ problems (such as robot navigation) take place in partially-observable and non-deterministic environments. So in those two respects, Go is easier.”

However, Go is so complex that a deterministic environment is of limited help to computers, because the brute-force approach of running through all the possibilities doesn't work. Go offers roughly 250 legal moves per turn, compared with about 35 in chess. Over the course of a game of Go there are so many possible move sequences that searching through each of them in advance to identify the best play is computationally infeasible.
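The scale of that search problem can be illustrated with a rough calculation, using the commonly cited average branching factors (the exact figures vary from position to position):

```python
# Rough comparison of game-tree sizes: with branching factor b and
# search depth d, a full-width search examines on the order of b**d
# positions. The branching factors below are commonly cited averages.

def tree_size(branching_factor: int, depth: int) -> int:
    """Approximate number of positions in a full-width game tree."""
    return branching_factor ** depth

chess = tree_size(35, 10)   # looking 10 moves ahead in chess
go = tree_size(250, 10)     # looking 10 moves ahead in Go

print(f"chess, 10 plies: {chess:.1e}")   # ~2.8e+15
print(f"go,    10 plies: {go:.1e}")      # ~9.5e+23
print(f"go / chess ratio: {go / chess:.0e}")
```

Even at a shallow ten-move horizon, Go's tree is hundreds of millions of times larger than chess's, which is why exhaustive search is off the table despite the game being fully deterministic.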

“The uncertainty and thus the challenge in Go comes from predicting the future. A player doesn’t know what the opponent will do, and in fact players don’t know for sure what their own future moves will be,” said Norvig.

“That means that the Go environment becomes effectively nondeterministic: there is uncertainty about the ultimate outcome of a move.

“Predicting whether, say, a given Go move will effectively stop an opponent’s ladder before it reaches safety therefore turns out to be similar to the real-world problem of deciding whether slamming on the brakes will stop a car before it reaches the intersection — in both cases we rely partially on a model of the “physics” of the world, and partially on experience in past similar situations.

“Again, it is true that in Go we theoretically have a 100% accurate model of the physics of the world, but without near-infinite amounts of computation, we can’t make use of that model to reason dozens of moves into the future. A Go player (whether human or machine) has to rely on pattern recognition and experience.”

Data isn’t as readily available in the real world

Sirosh also says that the way AlphaGo gathers training data, by playing random matches of Go against itself, puts it in an advantageous position compared with a machine-learning system trying to master real-life tasks.

“AI and machine learning is constrained by what you can learn,” he said.

“If you’re in an environment where there is unlimited data available to learn, then you can be incredibly great at it, and there are many, many ways you can be great at it.

“The smarts about AI comes when you have limited data. Human beings like you and me, we actually learn with very limited data, we learn new skills with one-shot guidance.

“That’s really where AI needs to get to. That’s the challenge. We are working towards enabling true AI.”

Google’s Norvig says there is significance in demonstrating a system can explore and learn on its own, in a “rich and complex environment” without the need for external training data.

“In one sense it is true that AlphaGo has access to ‘unlimited training data’, because of the accurate model,” he said.

“But the way I look at it is that AlphaGo starts with no training data whatsoever, and has to explore and decide which positions are worth exploring further. No data is just given to it; it has to make good choices to create data.

“Starting from random play, it learns to channel its explorations effectively so that in 3 days of exploration it can play at world champion level, and in a few weeks greatly exceeds what all other expert players have done over centuries of dedicated study.

“Up to this month, most computer scientists would have said this is not possible.”
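The learning loop Norvig describes, starting from random play and generating its own training data through self-play, can be sketched in miniature. The toy below applies tabular Q-learning with self-play to the simple game of Nim (take one to three stones from a pile; taking the last stone wins). It illustrates the self-play idea only, not AlphaGo Zero's actual algorithm, which combines a deep neural network with Monte Carlo tree search.

```python
import random

# Self-play Q-learning on Nim: both "players" share one value table and
# learn from the perspective of whoever is to move, starting from random
# play and improving on data generated entirely by their own games.

random.seed(0)

def legal_actions(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def train(episodes=30000, alpha=0.5, epsilon=0.2, start=21):
    Q = {}  # (pile_size, action) -> value for the player to move
    for _ in range(episodes):
        pile = start
        while pile > 0:
            actions = legal_actions(pile)
            if random.random() < epsilon:
                a = random.choice(actions)          # explore
            else:                                   # exploit current knowledge
                a = max(actions, key=lambda x: Q.get((pile, x), 0.0))
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # we took the last stone: a win
            else:
                # The opponent moves next; their best outcome is our loss.
                target = -max(Q.get((nxt, b), 0.0) for b in legal_actions(nxt))
            old = Q.get((pile, a), 0.0)
            Q[(pile, a)] = old + alpha * (target - old)
            pile = nxt
    return Q

Q = train()
# From a pile of 5 the winning move is to take 1, leaving a multiple of 4.
best = max(legal_actions(5), key=lambda a: Q.get((5, a), 0.0))
print("best move from 5 stones:", best)
```

No game records are fed in: as in Norvig's description, the system starts from random moves and must create its own useful data, gradually channelling exploration toward positions worth learning about.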

Real-world applications

The success of AlphaGo and its variants won’t necessarily have a significant effect on enterprise, according to Sirosh, who views it as more of an academic achievement.

“AlphaGo is an interesting computer science accomplishment, this is algorithm development. [But] I don’t think it is necessarily a big meaningful step,” he said.

“It does allow you to explore a whole bunch of things, related AI algorithms, what are called reinforcement AI algorithms and so on, in that sense it does contribute to the whole thing.

“But when it comes to real-world applications in enterprises, I’m not sure AlphaGo makes by itself a significant difference.”

From Microsoft’s perspective, he says that pursuing research that will make it easier for people to chat to computers using text or speech will really transform what’s possible with AI.

“Really solving every language in every kind of context, being able to create conversational applications and doing so really well, I think that’s an incredibly important part of AI innovation, because no matter what, the vast majority of high-value interactions in this world happen using language.”

Microsoft’s focus on getting AI to understand language has been evident in a string of world-class results in language and speech recognition. Earlier this month, Microsoft’s Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers. These more accurate algorithms for understanding language support many of Microsoft’s core AI services, whether they are the speech and natural language APIs available via Azure Cognitive Services, the Microsoft Bot Framework collection of services for building chatbots, or the virtual assistant Cortana.

For his part, Norvig sees the potential for technologies developed to power AlphaGo Zero to be applied more widely.

“To what extent is this relevant to real-world applications in enterprises? Since the AlphaGo Zero result is brand new, we’ll have to wait and see,” he said.

“But the core technologies behind AlphaGo are certainly relevant to a variety of applications. Consider the problem of recommendations: an e-commerce site has a visitor, and has to choose what products to recommend/display to the visitor.”

In this example, he said, the site’s interaction with the visitor can be pictured as a series of turns, and on each turn there is a fixed set of moves to make: the site recommends one or more items from its inventory, and the visitor chooses a link to click on, or not.

“As in Go, the key to success is not in memorizing a specific sequence of moves, but in doing pattern recognition and generalizing over past experiences.”
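Norvig's recommendation example can be formalised as a multi-armed bandit, a standard building block of reinforcement learning. The sketch below uses an epsilon-greedy strategy; the product names and click probabilities are invented purely for the simulation.

```python
import random

# Epsilon-greedy bandit: each "turn" the site picks a product to show,
# observes a click (reward 1) or no click (reward 0), and updates its
# running estimate of that product's click-through rate.

true_click_rate = {"laptop": 0.05, "headphones": 0.12, "phone_case": 0.08}

counts = {p: 0 for p in true_click_rate}
values = {p: 0.0 for p in true_click_rate}  # running mean reward per product

def recommend(epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(list(true_click_rate))  # explore a random product
    return max(values, key=values.get)               # exploit the best so far

random.seed(42)
for _ in range(10000):
    product = recommend()
    clicked = 1 if random.random() < true_click_rate[product] else 0
    counts[product] += 1
    values[product] += (clicked - values[product]) / counts[product]

print(max(values, key=values.get))  # the product with the best estimate
```

As in Go, success comes not from memorising one fixed sequence of recommendations but from generalising over accumulated experience, here the running click-rate estimates, while still exploring alternatives.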
