Four ways machine learning is evolving, according to Facebook's AI engineering chief

Yangqing Jia, director of engineering for Facebook's AI platform team, on the changing field of machine learning.

Machine learning is slowly changing the world — helping cars to "see" the world around them and virtual assistants to understand our questions and commands.

Driving machine-learning research forward are companies like Facebook, Google and Baidu, each of which is identifying new applications for the technology.

But how is the field of machine learning changing and what factors are shaping its future direction?

Yangqing Jia, director of engineering for Facebook's AI platform team, spoke about the changing nature of the field at the recent AI Conference presented by O'Reilly and Intel AI in London.

Training datasets are getting too big for humans to handle

In supervised learning, the system learns by example, typically by analyzing labelled data, such as photos annotated to indicate whether they contain a cat.
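In code, the idea is compact. The following toy sketch (not Facebook's pipeline; the features and labels are synthetic stand-ins for annotated photos) trains a scikit-learn classifier on labelled examples and measures how well it generalizes:

```python
# A minimal supervised-learning sketch using scikit-learn.
# The random features stand in for image data; the 0/1 labels
# stand in for "no cat" / "cat" annotations. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))           # stand-in image features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in cat/no-cat labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # learn from labelled examples
print("held-out accuracy:", model.score(X_test, y_test))
```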

Training datasets are often massive and continue to grow: Facebook recently announced it had compiled 3.5 billion public images from Instagram, labelling each one using its attached hashtags.

"Data becomes a super important part in this AI ecosystem," said Jia.

"We know that lately, due to the internet era, we have a huge amount of data. That gives us a mass of data we can deal with."

The difficulty when datasets stretch to billions of images or videos is that manually labelling each one becomes too expensive and time-consuming.

"Data has become a gold mine but can we actually mine gold out of it?" said Jia.

Facebook's approach of automatically labelling images using attached metadata, such as hashtags, provides one way to tackle the problem of labelling such vast datasets. Another possibility for sustainably ramping up training data is to generate fresh examples from small samples using Generative Adversarial Networks (GANs).
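As a rough illustration of the hashtag approach, each image's tags can be matched against a curated mapping from hashtags to labels. The mapping below is invented for the example; Facebook's real taxonomy is far larger and more carefully curated:

```python
# A hypothetical sketch of weakly-supervised labelling from hashtags.
# The mapping is invented for illustration only.
HASHTAG_TO_LABEL = {
    "#cat": "cat", "#kitten": "cat",
    "#dog": "dog", "#puppy": "dog",
    "#sunset": "sunset",
}

def weak_labels(hashtags):
    """Map an image's hashtags to training labels, dropping unknown tags."""
    return sorted({HASHTAG_TO_LABEL[t] for t in hashtags if t in HASHTAG_TO_LABEL})

# Example: Instagram-style posts tagged with several hashtags.
print(weak_labels(["#kitten", "#cute", "#cat"]))  # ['cat']
print(weak_labels(["#sunset", "#nofilter"]))      # ['sunset']
```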

Either way, there is a strong incentive to keep growing training datasets, because machine-learning performance scales with data.

"We're at a golden age where the power of the model is not constrained by the amount of data it gets and as long as we can feed it with more data, that leads to better quality models," said Jia.

For example, by using one billion of its automatically labelled photos to train an image-recognition system, Facebook achieved a record accuracy of 85.4 percent on the ImageNet benchmark.

Training machine-learning systems can require datacenter scale

Alongside the need for massive datasets, machine learning also demands huge amounts of compute, a requirement that scales up as the volume of data grows.

"To train one typical ImageNet model it takes about one exaflop of compute," said Jia.

"To put that into perspective, that means if every person is doing one float [floating point] operation per second and you take the whole population of London, it currently takes 4,000 years for people to train that model."

While machine-learning models were initially trained and run on desktop computers, today training is typically carried out on vast arrays of specialized processors, such as Graphics Processing Units (GPUs) and Application-Specific Integrated Circuits (ASICs) like Google's Tensor Processing Units.

"Because of the massive amount of computation that is needed, we start basically building datacenter-scale clusters or environments for us to do the computation," said Jia.

Interest in machine learning is fuelling AI's own Moore's Law

Google DeepMind founder Demis Hassabis recently discussed the need for another dozen or so research breakthroughs before humanity would be close to developing general AI, a machine capable of understanding the world as well as any human, and with the same capacity to learn how to carry out a huge range of tasks.

However, the chances of such breakthroughs are increasing, with Facebook's Jia saying research into machine learning and AI is growing as rapidly as computing power advanced under Moore's Law.

"The beautiful part is that we're seeing growing interest in the AI field," he said.

Jia said this sustained surge in research was reflected by the year-on-year growth in citations of Yann LeCun's seminal paper on training neural networks, the brain-inspired mathematical models that underpin machine learning.

"It's exponential, it's very similar to Moore's Law," he said, adding this accelerating growth was reassuring "because all those challenges can probably be solved as long as we know there is somebody working on them".

There aren't enough tools for checking machine-learning quality

Jia also sounded a note of caution over the limited tooling available for checking the quality and reliability of trained machine-learning models.

Unlike in software development, where processes such as continuous integration (CI) help ensure the robustness of the finished software, it remains far harder to verify trained machine-learning models, he said.

"In classical software engineering, these kind of things are being done in a much better way. One example is continuous integration: we know that when we're writing software we have complex systems to version control our code, to make sure that things are being correctly tested and the quality adequately monitored.

"The question is how do we do build the modern-day SDK and modern-day CI for our AI systems? It becomes really difficult because the algorithms are changing, the data changes everyday, which in turn changes the models that we're going to be deploying."

Nevertheless, Jia is optimistic about the possibilities that machine-learning will open up, citing the prediction by Stanford professor Andrew Ng that "AI will be the new electricity", allowing us to rethink how we do software engineering, how we reason, and how we work.

"I can't wait to see how it can help bring our society to a new level," he said.
