When Nick Carr started his blog Rough Type over a decade ago, he began questioning the internet's impact on our concentration, our relationships, and our world views. Today, with the proliferation of mobile phones, tablets, wearables, and other devices, we are more connected than ever, and those questions, and what their answers imply, are even more important.
Utopia is Creepy: And Other Provocations, released today, collects 80 posts from Carr's blog along with his essay "Is Google Making Us Stupid?" from The Atlantic. The collection, from a Pulitzer Prize finalist who has earned a reputation for sharp insight into technology and society, revisits some of the issues raised in his early writing and examines new ones, like driverless cars and robots.
TechRepublic spoke with Carr about these issues. The conversation below has been lightly edited for clarity.
You started your blog, Rough Type, largely about the internet in 2005. Over the last decade, what big changes have you noticed in how we interact with the online world?
A lot of the changes have been technical. If you look at how we defined personal computing in 2005, it was still very much built around personal computers—desktops and laptops. Now it's changed entirely, so that personal computing takes place over smartphones, through apps that are constantly connected. Thanks to the smartphone, we've gone from a world where computers were important in our lives but we weren't using them all the time to a world where we're connected to computers from the moment we wake up to the moment we go to sleep. When you use a technology that intensively, it changes the way people behave. It changes the way they think. It changes the way they interact with others. I think in those 10 years we've seen a radical expansion of the role of computer hardware and software in our personal lives.
Do you imagine that this will continue to become an even bigger part of our lives?
All the signs seem to point to a continuation. We'll become even more dependent on computers—in one form or another—and on apps and social media services to do things in our lives, to communicate and so on, though we may eventually bump up against some limits. I think the failure of Google Glass, for instance, would seem to indicate that there is a limit to the intrusiveness of computers and computer screens that people are willing to accept. There's a small group of very geeky people who have no problem with having a computer monitor in their field of vision pretty much at all times, but for most people, I think, it makes them feel disoriented and kind of strange.
What do you think of driverless cars? How do you expect them to affect the way we live?
What's become frustrating in discussions of self-driving cars, or autonomous cars, is the fuzziness of the definitions. For instance, look at what Uber's doing in Pittsburgh—they have experts at the wheel of their autonomous cars, ready to take over when the software can't handle a particular situation. That is very similar to what we've been seeing with Google, and even, in a way, with Tesla's Autopilot. It's certainly an expansion of the stuff we've already seen, but I don't think there's anything particularly new in what Uber's going to be doing.
SEE: An Interview with Nick Carr (CBS News)
The billion-dollar question is: Can we actually get rid of the human being entirely as a backup driver for everyday driving? I can certainly see ways to isolate particular driving chores in very limited settings, at particular speeds and so forth, but there's still a pretty big question about whether we can really create cars without steering wheels and without other controls, and expect them to be able to do everything that a driver does now. It may turn out to be possible, it may turn out to be trickier than we think, but clearly companies like Uber and Google and Ford and a lot of others see this as one [possibility] of the future of driving.
You write about automation, and the ways that humans are important in the equation. Can you illustrate this?
There was a very interesting story about Toyota a couple of years ago. Toyota has been a big leader in factory automation and factory technology in general, but it's been having quality problems in recent years. There've been a lot of recalls, and this is not only a business problem for Toyota. It goes exactly contrary to their culture, which is a culture of quality and of meticulous manufacturing. So they brought some new leadership into the company, and what they realized is that their dependence on factory automation and robots meant that they were losing their rich tradition of human expertise.
There were limitations to what robots could do, even in the factory setting. Robots certainly can be extraordinarily efficient and extraordinarily productive, and you don't have to pay them benefits and so forth. Yet robots are not able to examine in a critical fashion what they're doing in the way that human experts [can]. What Toyota decided is that they needed to bring back the human expertise, the human craftsman. They began to switch from robots to talented artisans in certain facets of their production. I think it started with the crankshaft line in one of their Japanese plants. What they found is that actually there is a way to achieve a partnership between human beings (human experts) and robots that makes both sides of the equation operate better.
What do you think of co-bots, or collaborative robots?
If you look at a lot of the studies about automation, many came from aviation, where there's a long history of human-computer interaction. It certainly suggests that the best way forward is not simply to assume that you want to replace everyone with software or with robots. It's more of a human-centered approach that says, "How can we get the best of the machine and the best of the human being?" These are not substitutes for one another. The technology is very good at some things and people are very good at some things. If you can figure out a way to automate processes without assuming that you need to get rid of people, I think you can get both efficiency and innovation and creativity over the long run.
There's a lot of pressure on the other side to simply go the replacement route. I think software programmers are often rewarded for thinking, "How can we reduce the role of human beings in a process?" When you proceed in that fashion it often becomes a self-fulfilling prophecy, because when you give people little to do, they never develop rich talent, they never develop rich insights.
How do you respond to those who say that automation will free humans for higher-level tasks?
I think the belief that we're freeing up humans for higher-level tasks is often a rhetorical fig leaf that covers up the fact that you really want to get rid of as many people as possible. You even see this in some of the utopian rhetoric that comes out of Silicon Valley: "Oh, in the future none of us will have jobs, but we'll all be artists or something." Well, that's highly dubious, and most people don't even want to be artists. It's very easy to use the term "free up people" in order to avoid saying we're going to fire people, which is really what you're going to do. I think the history of technology in the workplace shows us that there are jobs that are lost to technology and there are other jobs that are created by technology.
What I think will happen is that we will see some categories of jobs disappear, as we always have. We'll see some new categories of jobs appear, but the real question is: Will the future structure of the workforce be such that we have good, well-paying middle-class jobs, or will we see a de-skilling effect that reduces wages in a broad fashion? If I'm nervous about the labor effects of automation, it's not so much about widespread unemployment as about widespread underemployment, poor pay, and part-time work.
What kind of advice would you give to someone heading off to college today?
My first advice would be not to listen to other people's advice. As important as it is to make a living, you also want to do something that's fulfilling. I wouldn't base decisions on predictions by futurists, because they're usually wrong. If you look back even in the recent past, you can see a lot of predictions about particular jobs that turned out to be way off the mark. For instance, when ATMs became common, everybody thought, "Oh, that's the end for bank tellers." In fact, there are more bank tellers now than ever before—ATMs changed things, but not in the way people predicted.
More complex and less routine work is going to continue to hold the best prospects. If you decide you're going to make a career of picking heads of lettuce, you may find that there are robots that are going to be able to do that. On the other hand, if you decide you're going to be a small-scale farmer who grows crops or raises animals that are in demand at particular local restaurants, you may do fine.
It's a good point. I talked to Alec Ross, who thinks that the emphasis on coding, for example, may be misguided, since those jobs won't be available later on.
You're already seeing a considerable amount of automation of coding, so this idea that there's going to be an unlimited number of good jobs for coders is not true. I'm sure there are going to be plenty of good jobs, but I'm not sure that programming is going to absorb the entirety of the unemployed or underemployed. There are plenty of people who simply are not going to have much aptitude for, or interest in, programming, and we have to remember that as well. Not all people are the same, and not all people are going to do the same things.
Also see:
- Nick Carr's Big Switch (ZDNet)
- Nick Carr on the amorality of Web 2.0 (ZDNet)
- Hot tech books of 2016: Check out TechRepublic's top picks (TechRepublic)
- The best and worst books on tech and society I read in 2014 (TechRepublic)
- Amazon, robots and the near-future rise of the automated warehouse (TechRepublic)
- AI will destroy entry-level jobs - but lead to a basic income for all (TechRepublic)
Hope Reese has nothing to disclose. She doesn't hold investments in the technology companies she covers.
Hope Reese is a journalist in Louisville, KY. Her writing has been featured in The Atlantic, The Boston Globe, The Chicago Tribune, Playboy, Undark Magazine, VICE, Vox, and other publications.