Go is the world’s oldest and arguably most interesting game. The fundamental mechanics are easy to learn but can take years to master. Go, or Weiqi in its native China, is stark in its simplicity yet mathematically complex.

At its core, Go is a game that teaches lessons. The origins of the 3,000-year-old game are intertwined with Chinese intellectual, spiritual, and military history. To learn Go, one must also master proverbs that explain tactics, strategy, and a peaceful but engaged mindset.


Artificial intelligence has evolved dramatically in the two decades since IBM’s Deep Blue defeated chess legend Garry Kasparov. Anders Kierulf, developer of the SmartGo mobile and desktop app, explained to TechRepublic how DeepMind’s AlphaGo won:

[I am] very impressed by what Google has done with deep neural networks and reinforcement learning. Especially how they have bootstrapped learning: with 100,000 game records, learning to predict moves, then using that move prediction to generate a dataset of 30 million independent game positions that they can then use to learn positional evaluation. Rinse and repeat to get improved learning. This process is probably applicable to other situations where you have a way to generate sample data and can use what you’ve learned so far to evaluate that data.
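The bootstrap loop Kierulf describes can be sketched at toy scale. Everything below is a hypothetical illustration of the general idea (learn a policy from expert records, use it to generate new positions, learn to evaluate those positions, repeat), not DeepMind's actual code or data:

```python
import random

random.seed(0)  # deterministic toy run

def train_policy(game_records):
    """Supervised step: 'learn to predict moves' from expert game records.
    Here the 'policy' is just a table of the most common expert move per position."""
    counts = {}
    for position, move in game_records:
        counts.setdefault(position, {}).setdefault(move, 0)
        counts[position][move] += 1
    return {pos: max(moves, key=moves.get) for pos, moves in counts.items()}

def generate_positions(policy, n_games):
    """Self-play step: use the learned policy to generate new, independent
    positions (the '30 million positions' idea, at toy scale)."""
    positions = []
    for _ in range(n_games):
        pos = "start"
        for _ in range(3):  # a few moves per toy game
            move = policy.get(pos, "pass")  # fall back when position is unseen
            pos = f"{pos}->{move}"
            positions.append(pos)
    return positions

def train_value(positions):
    """Evaluation step: learn 'positional evaluation' from the generated
    positions. A real system trains a value network; here, placeholder scores."""
    return {pos: random.random() for pos in positions}

# Rinse and repeat: each iteration's policy feeds the next round of data.
records = [("start", "e5"), ("start", "e5"), ("start", "d4")]
policy = train_policy(records)
value = train_value(generate_positions(policy, n_games=2))
```

The key design point is the feedback loop: each stage consumes what the previous stage produced, so even a modest supervised starting point (the 100,000 game records) can bootstrap a much larger self-generated dataset.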

Now that AI has defeated a Go master, a more nuanced question has emerged: can machines also learn the deeper philosophical lessons of the ancient game?

We attempt to answer that question on this week’s episode of the TechRepublic podcast.

Our panel this week:


To listen to the episode, you can use the player below, listen directly on SoundCloud, subscribe to the podcast on iTunes, or grab the RSS feed and drop it in your favorite app or player.

Thanks for listening.

Read more about AI and Go