
Robots beware: Humans will still be bosses of machines, say Davenport and Kirby in new book

A new book outlines ways that work done by machines can augment, rather than replace, human work.

Image: Tom Davenport

While it is clear that machines will work alongside us in the near future, if they don't already, it is less obvious what, exactly, will happen to jobs for people. But in Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, a new book by Thomas H. Davenport and Julia Kirby, the question is less about who will be replaced and more about how humans and machines can work together.

The key? Augmentation.

TechRepublic spoke to Davenport, professor at Babson College, about how he sees these changes playing out.

There is so much anxiety about automation or robots replacing jobs. What would that look like?

I think for a lot of people the anxiety is heavily related to the fact that work is the only way we make our livelihoods in the US. Even beyond that, Freud said that to work and to love are the two things you need to be satisfied in life. There've been a few guaranteed-income experiments. The idealistic view would say that if you have a guaranteed income, you would spend time socializing, doing art, performing in dramas, and so on. What people actually do in the US, in the few experiments there've been, is watch more TV. I think there are plenty of reasons why we would like to give people something to do even if machines took over a lot of their activity. In Switzerland, they're voting on a guaranteed income in a couple of weeks. That will be the largest experiment in the world, if it is voted in.

And what about for students planning to enter the workforce? What questions are you hearing about future careers, and what advice are you offering?

Well, it's funny. I haven't taught this topic all that much at Babson, but yesterday I was teaching it to a bunch of PhD students at Harvard. What I mostly did before this was big data and analytics, and they thought that was what they were coming to hear, but I talked a fair amount about this whole issue of automation. They were very bullish before yesterday on data science as a career, but even that is somewhat threatened, just because machine learning can produce quantitative models a lot faster and, in many cases, better than any human. They were saying, "Well, is this still a good career for me?"

When I talk to IT people, they say, "Well, how about programming? Is that safe?" I point out that Bill Gates said, "Well, it's safe for a few years." It's hard not to defer to Bill Gates on issues like that.

There's just a lot of uncertainty about what will be a viable career for anybody these days. The confident feelings that we used to have, thinking, "well, I'm a doctor and that's always a safe role," or a lawyer, don't apply so much anymore.

Can you really say anything is safe, when you look at a bigger picture?

I think it's hard to say anything could be totally safe. We're relatively optimistic, compared to some people writing about this. The pace at which entire jobs get eliminated is quite slow. There are about as many bank tellers now in the US as there were in 1980, before ATMs and online banking took off. It's a slow process, but I think that it will certainly affect a number of jobs on the market. It may not be as bad as some people say, but nobody can afford to be complacent. In order to increase your chance of still having a job, you need to be proactive.

Image: Russ Campbell

Can you expand on that? What are the ways that you see humans and machines working together?

There are two basic ways to augment. One is to work alongside smart machines, and complement their activity. The other is to dip into what smart machines are unlikely to be able to do any time soon.

For the first set, working closely with machines, it's a day-to-day colleague sort of role. Just as with a human colleague, you'd know what they were good at and what they were not so good at, so you can step in when they're unable to do a task.

Then, there's the computer's boss role. I think of hedge fund managers or something as an archetypal role like that, where the trading may all be done by machines these days, but somebody's got to look at the whole portfolio and see how it's performing. Do we need more automation, less automation, different types of automation?

Then, there's the step-forward role, which is building these things and installing them, maintaining them, marketing them, and so on. IBM is hiring thousands of people to do that sort of thing.

SEE: How will AI impact jobs? High-powered panel tackles the big question

Those are all the augmentation categories for working closely with machines. There's also a "step aside" role for doing things they're unlikely to do, either because it's too hard or because it's such a niche role that few would be tempted to automate it. It wouldn't be economical.

Do you see jobs for lower-skilled workers just disappearing? Do you worry about the difference in the education that's going to be needed?

I do. Clearly, this has already started happening in our society. The returns to college degrees are very high these days, relative to non-college degrees. I think it will probably accelerate. There are a lot of roles involving, for example, manufacturing technology that don't get filled now. These don't necessarily require college degrees; they could be vocational. But we don't have a good vocational training system in the US. I think there are some possibilities for less well-educated people, but in general, it's not a happy story.

For example, there are three million truck drivers in the United States. If we create autonomous vehicles, what are all those people going to do? That's probably where guaranteed income would come in.

Will we need to rethink our education system?

First of all, mostly at the college level, and even the professional school level, it would be really nice if we told people something about these machines that they're going to work with. The vast majority of law schools say nothing about e-discovery or predictive coding, technologies that are now quite common in the legal profession. Same in medical schools.

Secondly, I think starting to make people aware of the options. Do you want to be in roles that involve working closely with computers, or not? If not, what kinds of viable possibilities are there? Encouraging people to follow their passions. We have these broad messages that everybody should do more STEM education, but in a lot of cases, those are the things that computers do better anyway.

I've also heard that all our emphasis on coding may be a little bit misguided.

Yeah, exactly. I think it's still not a bad idea for anybody to understand how computers work and the kind of logic of programming. As far as actually doing it and becoming a programmer, within 10 years there probably aren't going to be a huge number of jobs. Automated code generation is already somewhat of a reality today.

Alec Ross said that he would recommend that young people learn Chinese—what do you think?

Well, yeah, I told one of my kids, "I have three words for you"—the equivalent of "plastics," in The Graduate—I said, "Mandarin, statistics, and mineral science." He ended up becoming a TV comedy writer, which is probably a lot smarter, in terms of guaranteed jobs. Computers are really bad at comedy, so far anyway.

What are the big takeaways for businesses? How can managers start thinking about integrating smart machines with humans, and the new roles human workers could have?

I think it's hard for a senior manager to anticipate all that by himself or herself. What I recommend, if you are bold and can pull it off, is to say, "Look, a lot of this automation stuff is going to be happening, and we would like you to think about how to use it in your own part of the organization. We won't fire anybody or lay anybody off as a result of this automation."

So, you encourage people to use their creativity without fear that it's going to come back and bite them in the rear. A few companies that I've talked to have done that, and they're re-deploying workers to other things. I think it's a great way to get everybody mobilized on this augmentation issue. Unfortunately, it's pretty rare that an organization would make that kind of decision, instead of just trying to save the money.

When you think about jobs that do not exist today, but may exist in 10 years, what do you see?

At a high level, I think there would be a fair number of directors of automation, or chief cognitive officers. There will be a lot of machine-tenders of various types, assistants to automated technologies. Smart machines will be making lots of decisions, but somebody needs to figure out how well they're doing, improve them, and shut them down when they need to be shut down, and so on. Every field will be different.

In healthcare, there will be people who oversee a variety of automated technologies. For example, there have been some technologies available for a while, though slow to take off, in the automated anesthesiology area. What anesthesiologists have told me is that it's not that it's going to eliminate them, but you might have six different operating rooms, each with one of these machines in it, all monitored by a single anesthesiologist. It's kind of a machine-overseer role in a sense, kind of an anesthesiology factory, with a bunch of robots and so on. I think we'll see that in other domains, as well.

SEE: 6 ways the robot revolution will transform the future of work

There will be a lot of people who primarily do other things, like doctors, for example. General practitioners will have a lot more capabilities. They'll be able to access the knowledge of the best psychologists through a system. They'll have radiology interpretation machines that tell whether a lesion on a radiological image is likely to be cancerous and whether a biopsy is necessary, all things that used to require specialists. We've had this tendency toward increasing specialization, and I think that will sort of flatten out as some of the specialties get taken over by technology. One way to keep a job is to become an ultra-specialist. Instead of just being a general radiologist, you become an interventional radiologist, which is kind of half surgeon, half radiologist.

What do we need to be aware of when we develop technology that has the potential to become very powerful, when we put a lot of responsibility into these systems?

It's good to keep a way to turn them off. I heard a moral philosopher talk about the idea that we might have coded versions of moral philosophy embedded into computers. We don't have that now, but it's a really intriguing idea that—just as you download an operating system, you download a conscience to a machine.

There's been some recent research about ethics and robotics, about how people's behavior changes in the presence of machines.

It's an interesting idea. If you think about the computer as a co-worker, they're kind of pushy and egocentric. They insist that you take their view of the world. They're constantly taking over new jobs. They're gaining intelligence, getting smarter than you. They won't be easy co-workers for people to have. If they were people, we wouldn't like them very much.


About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.

