The impact of our digital life has been explored by sociologists, psychologists, scientists—and now, a philosopher. While there are certain concrete ways that our screens are changing us, philosopher Michael P. Lynch believes that the very presence of the internet has created a "new way of knowing"—a way of accessing information that, he argues, carries a cost. When we rely on "Google-knowing," Lynch says, other forms of learning, creativity, and true understanding are at risk.
TechRepublic spoke to Lynch, professor of philosophy and the director of the Humanities Institute's Public Discourse Project at the University of Connecticut, whose book The Internet of Us: Knowing More and Understanding Less in the Age of Big Data is out today.
What is Google-knowing, and how is it different from traditional knowing?
By Google-knowing, I mean not just knowing by web search, although it includes that, but knowing via digital interface, whether that interface is our smartwatch or an app. And by now, most of this kind of knowing, what I'm calling Google-knowing, happens through apps on our phones.
What makes Google-knowing distinctive is that it has a combination of features that we really haven't seen before in one form of knowing. On the one hand, Google-knowing is increasingly becoming a way of knowing that we treat with default trust. Even my 10-year-old knows not to trust everything you see on the internet, and we tell each other that all the time. But that doesn't stop us from checking it whenever any matter of fact comes up, right?
How many times have we been at a dinner party, in a bar, or in a meeting, and somebody says something and there's a question about whether it's true, and there's a race to the smartphone? We treat digital interfaces as our go-to source of information.
How do you see this as a new form of perception?
We used to say that seeing is believing. Recently, in a sort of default-trust way, we think googling is believing. Also, our interface with it is increasingly seamless or transparent to us—a lot of that credit goes to the designers who have worked so hard to make our interface with digital reality seamless and transparent.
It is so transparent that we don't really think of ourselves as doing anything particularly conscious or active. Some of that has to do with the fact that we're integrating the technology more and more into our bodies, and closer to our bodies.
You also write that Google-knowing is dependent on others.
This other cluster of features is associated with, let's say, asking for directions on the street, or reading something in a book. The internet is like a giant testimony machine. We consult it for the opinions of other people. Sometimes those opinions are aggregated in certain ways, useful ways. Sometimes they're not. Sometimes we're just checking to see where the opinions are.
How is it intrinsically different from, say, carrying around an encyclopedia that had all the same information in it? Is it just faster, or is the actual content distorted in some way when we access it online?
It's certainly the case that the features Google-knowing has are not in themselves different from other ways of knowing. What's different is the fact that these are linked together in one way of knowing. We've never before had the ability to get the opinions of millions of other people in this seamless, almost automatic, default sort of way.
You might say, "well, it's just the same thing speeded up"—but that overlooks the fact that the speed has broken down certain barriers.
The written word really allowed us to transcend time in a certain sense. If I read the thoughts of others, I can go back in time and see, at least in part, what they were thinking. What the speed of the internet allows us to do is to reach out to all those people across gulfs of space, and that really is a radical game-changer. It's the combination of these features that is different.
You say an overreliance on this form of knowing is weakening our other senses. What kind of creative thinking are we losing? Is there a way to get some of that back?
One of the things that I'm very concerned about is the fact that, like everything else in life, when we find something that works really well for certain purposes, we tend to get so excited about it, so reliant on it, that we tend to overvalue it. We think it solves more problems than it actually does.
Google-knowing has given us lots of benefits, but it doesn't allow us to synthesize those facts all by itself. It can give us more facts, more stuff into the hopper, but in and of itself it doesn't tell us how to understand what we're processing. That is something that in a sense involves a whole different set of cognitive abilities, some of which are connected to creativity.
Right now we're talking a lot about the analytics we're able to run on massive sets of data. The internet of things is producing more data, to which we can then apply mathematical techniques to find incredibly helpful and predictive correlations. That knowledge of how things hook together is the sort of thing that can only come through what I call understanding. Understanding is not the kind of cognitive ability that's being exercised when you're just passively receiving information.
We can't lose sight of how understanding ties to knowing, and we need to figure out how our digital platforms can be used to facilitate understanding. They can't be used to reproduce it.
You talk about the internet promoting a passive way of knowing. Are there inherent values embedded in the internet?
The passivity is exemplified by the sort of things we all do on our social media accounts: we retweet things, we "like" things. That seems like a little bit of activity, but actually we're just passively saying, "here you go. Here's some more." It's almost as if we're standing at an assembly line just watching things go by and saying "yup, keep going."
Sharing is something I do a lot of, but it's also not particularly active. The internet is the greatest fact-checker, but also the greatest bias-confirmer, ever invented. What does that lead to? One of the problems it leads to is what I call epistemic overconfidence. We think that since so much information is out there, we know more than we actually do.
You talk about companies like Google and Amazon that are collecting all this massive data. How are our intellectual lives online being affected and kind of monetized by companies?
I think you just hit upon something that's absolutely crucial for us to be aware of as we move into this new, more integrated relationship with the internet. There are big companies like The New York Times and so forth that produce public content that a lot of people read and pay attention to, but the media now includes everyone's Twitter feeds and Facebook pages.
We've got a democratization of the media: all these different voices that can speak up. What we aren't often aware of is what you're talking about, which is that the big data firms can help manufacture consent just by controlling how information flows around the internet, not the content itself but how the information flows. Companies like Google and Amazon are helping to produce, distribute, and process the energy of our time, which is information.
Can you talk about Google Glass? What do you worry about with a world in which the internet becomes a seamless part of our lives?
I think it's really interesting that when they introduced that, one of the things that they said was, "what we think is really cool about this is that it helps get technology out of the way." Right, you're not fumbling with your phone. But the weird thing is—and this is true of all these technologies—attempts to get technology out of the way are literally putting it in the way. Literally putting it between us and everything we interact with. It compounds the sorts of problems we're already seeing with Google-knowing. You're filtering everything through it.
Hope Reese has nothing to disclose. She doesn't hold investments in the technology companies she covers.
Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.