AI pioneer: AI will definitely kill jobs, but that's OK

Louis Monier is on a mission to eliminate "boring jobs" and give them to machines. Is this a good thing?


If big data is overhyped, AI and deep learning are stratospherically so. Things were relatively controlled until Facebook started talking up its Messenger bots, at which point all rational talk (or thought) ended.

To get a little common sense on AI, I reached out to Louis Monier, perhaps most famous as the founder of the AltaVista search engine, the Google of its time, and currently the chief scientist at Import.io, a web-based platform for extracting data from websites without writing any code. Monier is one of the world's leading authorities on deep learning, with research roots going all the way back to Xerox PARC in the early 1980s.

While Monier acknowledges the "stunning applications" that AI facilitates, he's also cognizant of the perils it presents to outmoded labor markets. In Monier's worldview, AI gives more than it takes, but it definitely upsets the existing world in ways that will be painful to some.

Defining AI and deep learning

Given the breathless hype about AI, it's worth settling on a definition.

In traditional programming, an engineer carefully specifies what the computer should do: you try to anticipate every case, then use heuristics to tie those rules together. This has never worked well for some tasks. For example, imagine how many rules you'd need to describe a face, any face: with glasses on, with a hat, with face paint, in partial shadow, making a face, lying down, and so on.

The rules-based approach worked pretty well with text, but only up to a point.

Deep learning, Monier notes, is different. A neural network, he said, is just a very large function that can modify its internal parameters. You show it a lot of data, each example pairing a question (say, a picture) with a label/answer/truth (say, "cat"), and you repeat millions of times. Each time, the neural network adjusts its parameters (millions to billions of numbers) to better match the questions to the answers.

Over time, Monier continues, it evolves into a specialized engine that becomes really good at answering this particular type of question. So, when shown a picture that it has never seen, it will say "it's a cat" with high probability. The magic is that this learning takes place automatically, and the basic techniques are universal. They work for images, voice, text, and more.
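
To make Monier's description concrete, here's a toy sketch of that loop in plain Python. It's an illustration of the general idea, not Monier's code or a real neural network: a tiny model with a handful of adjustable parameters, nudged over and over until its answers match the labels it is shown.

```python
# Toy illustration: "learning" = repeatedly adjusting parameters so the
# model's answers match the labels. Data and model here are made up.
import numpy as np

rng = np.random.default_rng(0)

# 100 "questions" with 3 features each, and answers produced by hidden
# weights the model is supposed to recover.
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)          # the model's internal parameters, initially wrong
learning_rate = 0.1

for step in range(500):  # "repeat millions of times" -- 500 is enough here
    predictions = X @ w               # the model's current answers
    error = predictions - y           # how far off those answers are
    gradient = X.T @ error / len(y)   # direction that reduces the error
    w -= learning_rate * gradient     # nudge parameters toward the answers

print(np.round(w, 2))  # close to the hidden true_w after training
```

A real deep learning system does the same thing with millions to billions of parameters and far messier data, which is where the specialized frameworks and hardware come in.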

Why now?

A convergence of several factors helps explain why this approach to computing has taken off recently. First, Monier suggests, we now have a lot more data, thanks to the web. These days the race between large companies is not about software but about data, as "Data is the best defensible barrier," a thought popularized by Baidu's chief scientist, Andrew Ng.

In addition to this explosion of data, Monier details, we also have a few clever techniques for making these neural networks converge. "Nothing amazing," he insists. "It's college-level math, but you need to know what to apply and when. It's still a bit of an art."

SEE: Microsoft research chief: AI is still too stupid to wipe us out (and will be for decades) (TechRepublic)

Add a big boost in computing power to these first two trends, couple it with the plummeting price of that power, add a swelling population of people working on AI, and we have the perfect storm for AI's rise.

In 2012, participants in the ImageNet visual recognition contest were crushed by one team using deep learning (DL) for the first time, with half the error rate of the second-place finisher, marking a turning point for AI, Monier said. The next year all but one team used DL, and after that there was no question: only those with a death wish would do it "by hand."

And the number of people interested in AI continues to grow. These folks organize into communities and share. It's standard now to publish a paper and put the code, and sometimes some data, on GitHub. All of this leads to stunning applications, and it's obvious that we are just at the very beginning, fueled in part by fear of missing out (FOMO) in the executive suites of many companies.

Suffer the little workers

Deep learning clearly has the potential to destroy many kinds of jobs that humans do today. When I asked Monier if he had any ethical qualms about this potential, he was clear: "It will [destroy jobs], and I don't [have ethical qualms about it]."

He went on:

If we stick to the idea that employment is the only way to make a living, then we will face a massive crisis as low-skilled and medium-skilled jobs disappear. And, very few people will say, "I want to pay 3x more for this product or service in order to keep others employed." The current thinking is that the jobs that will survive will be either creative, or require a human touch (some aspects of healthcare for example). But, manufacturing, driving a cab or a semi truck, filling out paperwork, and (ah!) trading stock will go. They are not happy jobs, they are not jobs that people chose because of passion or a sense of mission. They are means of putting food on the table. I believe we will learn to decouple making a living from a job.

In Monier's AI world, then, the future involves "delegat[ing] to robots/AI the boring jobs, and we keep the good ones for ourselves." This, he insists, is "just like we have always done, from farm animals pulling the plow to steam power and so on."

SEE: More women developers? Hell yes, says Holberton School (TechRepublic)

Importantly, Monier is helping to shape this future by teaching at the Holberton School, something I've covered before. Explaining the different approach taken by the Holberton School, Monier tells me: "Most classes assume so many years of programming and statistics and linear algebra—we don't. We teach with the goal of giving regular programmers access to an amazing new toolbox that is evolving very quickly, and will make a huge difference in their career."

Basically, Holberton's approach is to "skip a lot of the math and low-level details" and instead train students in the high-level frameworks like Keras that "essentially turn deep learning into a game of Lego." The point, he stresses, is to "give students the right intuition of what is taking place, how to make certain high-level choices, where to get data and how to prepare it, and how to reuse existing models."
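
To give a flavor of what that "Lego" style looks like, here is a hedged sketch of a small classifier in Keras, using made-up data rather than anything from the Holberton curriculum: the model is assembled by stacking prebuilt layers, and the math underneath is handled for you.

```python
import numpy as np
from tensorflow import keras

# Made-up stand-in data: 256 tiny 28x28 grayscale "images", each labeled 0-9.
images = np.random.rand(256, 28, 28).astype("float32")
labels = np.random.randint(0, 10, size=256)

# Stack prebuilt layers like Lego bricks.
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),                        # unroll each image into a vector
    keras.layers.Dense(128, activation="relu"),    # one hidden layer of neurons
    keras.layers.Dense(10, activation="softmax"),  # one output per possible label
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The training loop ("show it data, repeat") is a single call.
model.fit(images, labels, epochs=3, batch_size=32)
```

None of this requires deriving the underlying calculus by hand, which is exactly the low-level detail Holberton skips in favor of building the right intuition.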

It's an approach that has students writing their own deep learning models by the end of the first class...and cutting out "boring" jobs in the process. It will be messy, but necessary, in Monier's opinion.

About Matt Asay

Matt Asay is a veteran technology columnist who has written for CNET, ReadWrite, and other tech media. Asay has also held a variety of executive roles with leading mobile and big data software companies.
