
Nvidia researchers create AI, deep-learning system to enable robots to learn from human demonstration

The paper detailing the method will be presented at a conference in Brisbane, Australia.

Nvidia researchers have created a deep-learning system that allows a robot to learn a task simply by observing a human perform it.

According to Nvidia, the deep learning and artificial intelligence method is designed to improve communication between humans and robots so the two can work together. The paper will be presented at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Researchers trained a series of neural networks on Nvidia Titan X GPUs. The networks handle perception, program generation, and execution. Simply put, a human can demonstrate a real-world task and the robot learns to perform it.


The robot views the task through a camera and infers the positions and relationships of the objects in the scene. A program-generation network then produces a plan that describes how to recreate those perceptions, and an execution network carries the plan out.

A flow chart of the method (image: nvidia-ai-training.png) illustrates this pipeline, running from perception to program generation to execution.
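To make that pipeline concrete, the following is a minimal sketch of a perception, program-generation, and execution chain written in PyTorch. The module names, layer sizes, object count, and action vocabulary are illustrative assumptions for a pick-and-place style task, not Nvidia's published architecture.

```python
# Minimal sketch of a perception -> program generation -> execution pipeline.
# All module names, shapes, and the action vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Maps a camera image to object positions and pairwise relationships."""
    def __init__(self, num_objects=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.positions = nn.Linear(32, num_objects * 3)             # (x, y, z) per object
        self.relations = nn.Linear(32, num_objects * num_objects)   # e.g. "A is on top of B"

    def forward(self, image):
        h = self.encoder(image)
        return self.positions(h), torch.sigmoid(self.relations(h))

class ProgramNet(nn.Module):
    """Turns the perceived scene state into a short sequence of action IDs (a plan)."""
    def __init__(self, state_dim, num_actions=8, plan_len=5):
        super().__init__()
        self.plan_len = plan_len
        self.num_actions = num_actions
        self.head = nn.Linear(state_dim, plan_len * num_actions)

    def forward(self, state):
        logits = self.head(state).view(-1, self.plan_len, self.num_actions)
        return logits.argmax(dim=-1)  # one action ID per plan step

class ExecutionNet(nn.Module):
    """Maps each plan step plus the current state to low-level motor commands."""
    def __init__(self, state_dim, num_actions=8, command_dim=7):
        super().__init__()
        self.embed = nn.Embedding(num_actions, 16)
        self.policy = nn.Linear(state_dim + 16, command_dim)

    def forward(self, state, action_id):
        return self.policy(torch.cat([state, self.embed(action_id)], dim=-1))

# Wire the three stages together for a single camera frame.
state_dim = 4 * 3 + 4 * 4                         # positions + pairwise relations for 4 objects
perceive, plan, execute = PerceptionNet(), ProgramNet(state_dim), ExecutionNet(state_dim)

image = torch.rand(1, 3, 128, 128)                # stand-in camera frame
positions, relations = perceive(image)
state = torch.cat([positions, relations], dim=-1)
program = plan(state)
for step in program[0]:
    command = execute(state, step.unsqueeze(0))   # would be sent to the robot controller
```

In this sketch, the perception network turns a camera frame into a compact scene state, the program network converts that state into a short, human-readable sequence of action IDs, and the execution network translates each step into motor commands for the robot.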

Nvidia said the work marks the first time synthetic data has been combined with an image-centric approach to train a robot.

A video highlighted how the neural network enabled a robot to see a task and then recreate it.


Originally published on ZDNet.

Image: iStockphoto/Jirsak

