
Google's DeepMind 'Lab' opens up source code, joins race to develop artificial general intelligence

Earlier this week, DeepMind, Google's machine learning research arm, unveiled a new 'lab' on GitHub that will offer an open source environment for AI development. Here's what it means.

Image: DeepMind website


This has been a major week for open source AI. On Monday, nonprofit research group OpenAI announced Universe, a software platform that includes thousands of games and acts as a virtual "curriculum" to train AI on, OpenAI's Jack Clark told TechRepublic.

Following right on its heels came Google's announcement that DeepMind, its machine learning research arm, has created a "lab." The DeepMind Lab offers a 3D training environment, and its entire code library is now available on the open source hosting service GitHub.

With Universe, OpenAI aimed to address a primary challenge for AI research: the lack of "a large variety of realistic environments [for AI agents to] learn progressively more complex skills, and where we can accurately benchmark their progress," the company stated. By making Universe open source, the organization said it believed it could begin measuring and accelerating progress.

DeepMind was founded in 2010, and was bought by Google in 2014 for $625 million. As reported by TechRepublic, Demis Hassabis, the head of DeepMind, said the mission of his company is to "solve intelligence." So while DeepMind Lab is run by Google, a for-profit entity, it seems to have a similar goal to OpenAI's Universe: developing artificial general intelligence.

Greg Brockman, cofounder of OpenAI, told TechRepublic that these platforms offer "a new world to train reinforcement learning algorithms in, and new ways of evaluating their performance."

"DeepMind Lab, along with other virtual worlds like Microsoft's Project Malmo or Facebook's TorchCraft, points to a belief in the AI community that we're moving from simply classifying the world and towards building agents that act within it," Brockman said. "A truly useful agent will need to apply knowledge learned in one world to perform well in another world."

And the race to create an intelligent agent is heating up.

"We are starting to see competition to lead development of artificial general intelligence intensify," said Roman Yampolskiy, head of the University of Louisville's Cybersecurity Lab. "Data availability is a major bottleneck for deep neural networks researchers, which has now been addressed by three major players in that domain." (Facebook, Yampolskiy noted, also has code available on GitHub.)

In the announcement, DeepMind Lab noted that the "only known examples of general-purpose intelligence in the natural world arose from a combination of evolution, development, and learning, grounded in physics and the sensory apparatus of animals." Animal and human intelligence, according to DeepMind, is likely a result of our environment, and probably would not have evolved without specific circumstances. "Consider the alternative," the post said, "if you or I had grown up in a world that looked like Space Invaders or Pac-Man, it doesn't seem likely we would have achieved much general intelligence!"

SEE: OpenAI unveils 'Universe,' a platform that helps AI learn from complex environments (TechRepublic)

In November, DeepMind announced a partnership with game development studio Blizzard Entertainment to use the popular StarCraft II game as a testing platform for machine learning research. "StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world," DeepMind said in a post. "The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks."

Yampolskiy said he believes these types of environments may soon be turned into "a single testbed, perfect for testing cross-domain learning and decision making."

But these artificial environments, in which factors can be controlled, don't reflect the real world, Yampolskiy said. "The question is," Yampolskiy said, "Do things learned in the simulated worlds translate safely to the real world?"

Toby Walsh, professor of AI at the University of New South Wales, echoed this in his observation of Universe: "Many of the goals we need to address to get to general AI will not be solved," said Walsh, "even with this software platform."

While Walsh said the move to make the library open source is "great for the AI community," and that DeepMind Lab "provides a richer play world than OpenAI's Universe," he doubts that it can fulfill some of its promises. Specifically, Walsh doesn't think that a 3D world will make it "fundamentally easier to develop intelligence," as DeepMind claims.

Why? Because it is missing some of the important pieces that go into intelligence, Walsh said, such as natural language and "interaction with other intelligence, not just simple agents."

"AI is about more than playing games intelligently," said Walsh. "Games have proved to be a useful starting point, but they are unlikely to be an end point." For example, children, Walsh pointed out, move away from games to "richer learning environments" in the real world.


About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
