Training AI models is getting faster and taking less data, thanks to NVIDIA researchers

Unsupervised learning techniques can cut the time and data needed to train AI models across industries, according to a paper being presented at the Conference on Neural Information Processing Systems (NIPS).

NVIDIA researchers found that it's possible to significantly cut down the amount of time and data it takes to train artificial intelligence (AI) models by using generative adversarial networks (GANs), according to a blog post published Sunday.

At NIPS this week in Long Beach, CA, NVIDIA researcher Ming-Yu Liu will present a paper demonstrating how unsupervised learning and generative modeling can make machines more "imaginative," rendering how a wintry scene would look in the summer, for example.

The researchers trained one GAN on a photo of a winter scene, with overcast skies, bare trees, and snow covering the ground. A second GAN, trained only on a general understanding of what summer looks like, could then examine that specific winter image and render how it would appear in the summer.
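To make the mechanics concrete, here is a minimal PyTorch sketch of that adversarial setup: a generator tries to translate winter images into summer images, while a discriminator, trained on real summer photos, judges whether the translations pass. This is a hedged illustration, not NVIDIA's published model (the paper's architecture is considerably more elaborate), and the winter_batch and summer_batch tensors are random placeholders standing in for unpaired image collections.

```python
import torch
import torch.nn as nn

# Generator: maps a winter image to a candidate summer image (same size).
generator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Tanh(),
)

# Discriminator: scores whether a 64x64 image looks like a real summer photo.
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),    # 64x64 -> 32x32
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 1),  # single real/fake logit
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Placeholder unpaired batches: winter photos and *different* summer photos.
winter_batch = torch.rand(8, 3, 64, 64) * 2 - 1
summer_batch = torch.rand(8, 3, 64, 64) * 2 - 1

for step in range(100):
    # Train the discriminator: real summer -> 1, translated winter -> 0.
    fake_summer = generator(winter_batch).detach()
    d_loss = (bce(discriminator(summer_batch), torch.ones(8, 1))
              + bce(discriminator(fake_summer), torch.zeros(8, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: fool the discriminator into scoring its
    # translated winter images as real summer scenes.
    g_loss = bce(discriminator(generator(winter_batch)), torch.ones(8, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The key point is in the generator's loss: it never sees a matched winter/summer pair, only the discriminator's verdict on whether its output passes for summer, which is what makes the training unsupervised.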

SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)

Image: NVIDIA

The unsupervised learning developed by the team removes the need to capture the exact same scene in both winter and summer and to label that data, which would take significant time and manpower, Kimberly Powell, vice president of healthcare at NVIDIA, wrote in the post.

"The use of GANs isn't novel in unsupervised learning, but the NVIDIA research produced results — with shadows peeking through thick foliage under partly cloudy skies — far ahead of anything seen before," Powell wrote.

Using this technique can produce benefits in a number of areas, Powell noted. "In addition to needing less labeled data and the associated time and effort to create and ingest it, deep learning experts can apply the technique across domains," she wrote. "For self-driving cars alone, training data could be captured once and then simulated across a variety of virtual conditions: sunny, cloudy, snowy, rainy, nighttime, etc."
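As a hypothetical sketch of that augmentation workflow, suppose one translation generator per weather condition has already been trained along the lines above. The generators dict below uses identity modules as stand-ins for those trained networks, and captured_frames is random placeholder data rather than a real capture session; none of these names come from NVIDIA's tooling.

```python
import torch

conditions = ["sunny", "cloudy", "snowy", "rainy", "night"]

# Stand-ins for per-condition translation generators trained as sketched above.
generators = {c: torch.nn.Identity() for c in conditions}

# One real capture session (placeholder tensor: 16 frames, 3x64x64 each).
captured_frames = torch.rand(16, 3, 64, 64)

# Translate every captured frame into each virtual condition.
augmented = {
    condition: gen(captured_frames)
    for condition, gen in generators.items()
}

# Because translation preserves the scene layout, labels from the original
# capture (lane markings, bounding boxes, etc.) can carry over unchanged.
```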

NVIDIA has made several moves to advance AI recently. Last week, the company announced a partnership with GE Healthcare to deliver a host of new health products that use AI to improve patient care and data collection. It also recently unveiled what it billed as the world's first AI computer designed to drive fully autonomous vehicles by mid-2018.

The 3 big takeaways for TechRepublic readers

1. In a paper being presented at the Conference on Neural Information Processing Systems this week, NVIDIA researchers found that it's possible to significantly cut the time and data required to train AI models by using GANs.

2. The researchers trained one GAN on a photo of a winter scene, with overcast skies, bare trees, and snow covering the ground. A second GAN, trained only on a general understanding of summer, could then render how that specific winter image would look in the summer.

3. This technique can benefit AI models in a number of areas, including autonomous cars.


About Alison DeNisco Rayome

Alison DeNisco Rayome is a Staff Writer for TechRepublic. She covers CXO, cybersecurity, and the convergence of tech and the workplace.
