The intersection of technology and society has created a new world filled with AI algorithms that determine everything from what we see on Facebook newsfeeds to the ads that we see online.

Learning how to navigate these potential hazards was the focus of a session this week at IdeaFestival 2017 in Louisville, KY. Emily Dreyfuss, senior writer for Wired, discussed the impact of AI algorithms and what we can do about it.

“What I’d like to do is tell you about how I, over the past couple of years, have come to the conclusion that the mantra that you’re all familiar with, ‘move fast and break things,’ that animates Silicon Valley, has outlived its appropriateness,” Dreyfuss said. “In looking at the world that we have created and that we’re all living in today, I’ve come to the conclusion we need to rethink it because ‘move fast and break things’ has enabled us to move fast and lose control of our creations.”

SEE: How machine learning’s hype is hurting its promise (TechRepublic)

The abundance of information about our friends and co-workers that we can find on social media is one of the many obvious ways that technology is changing our lives and relationships. The less obvious ways are invisible but permeate every aspect of our lives, including jobs and healthcare.

“I’m talking, of course, about algorithms. Artificial intelligence, machine learning, algorithms, these are all pretty much the same thing,” Dreyfuss said.

The many faces of AI

AI is already out in the world in many forms. Google's AlphaGo is one example: an AI that Google trained to play the complicated game Go. The program beat the world's best human player by using a move that no human player would ever have made, Dreyfuss explained.

“The computer was able to see into the future so much better than humans were, and it was the most brilliant and unprecedented game that had ever been won,” she said.

An algorithm is, of course, a bit of code. It's a program to which humans give parameters, and the data fed to it is processed within the rules that are set up. "The thing that's crazy about AI and crazy about AlphaGo, is that the people who created it have no idea where it came up with that move. They have no idea how it figured that out because algorithms and AI are not like my telepresence robot. They are not like the [Sony] Walkman. They move past us. They can do things beyond our own expectations. They are everywhere. Invisible to us," she said.
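To make that definition concrete, here is a minimal, hypothetical sketch of an algorithm in the sense Dreyfuss describes: a human sets the rules, data is fed in, and the output reflects those rules. Every name and number below is invented for illustration and does not come from any real product.

```python
# An "algorithm" is just rules a human wrote down, applied to whatever
# data it is fed. The code itself is neutral; the chosen rules are not.

def rank_posts(posts, blocked_topics):
    """Score posts by engagement, hiding any whose topic a human
    rule-maker flagged as off-limits."""
    visible = [p for p in posts if p["topic"] not in blocked_topics]
    return sorted(visible, key=lambda p: p["likes"], reverse=True)

posts = [
    {"title": "Makeup tutorial", "topic": "beauty", "likes": 900},
    {"title": "Game review",     "topic": "gaming", "likes": 350},
    {"title": "News clip",       "topic": "news",   "likes": 120},
]

# Same data, different human-set rules, different "reality" shown:
print([p["title"] for p in rank_posts(posts, blocked_topics=set())])
print([p["title"] for p in rank_posts(posts, blocked_topics={"beauty"})])
```

The point of the toy example is that nothing in the code looks biased; the bias lives entirely in which topics a person decided to block.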

Other examples Dreyfuss shared include Microsoft’s creation of a chatbot named Tay. Microsoft unleashed Tay on Twitter and within 24 hours, she was a Twitter troll of the worst kind, quoting Mein Kampf and behaving as an actual Nazi. Microsoft executives were so embarrassed that they removed Tay from Twitter within a day, Dreyfuss said.

The reason that Tay the chatbot became a Nazi was because she was interacting with humans who taught her that Nazis exist, and that harassment is a real thing. "This is how algorithms work. They don't exist without us, they're fed by us, which means that we imbue them with our own biases and our own values. We create them overtly to quote unquote, 'Make the world a better place.' That's what our technology is for. Inherent in that statement is our belief of what is good. So we put our values in there but we also put a little bit of our bias in there, and that's what happened to Tay," Dreyfuss said.

It also happened earlier this year on YouTube. "YouTube makes a ton of money by having young YouTube stars who do videos and upload their thing. A lot of these big stars, young stars in particular, started realizing that their videos, when they had any LGBT content, were disappearing. They had no idea why. When Google looked into it, they didn't deny it. Google looked at it and said, 'Oh, shoot. You're right. The computer is deleting LGBT videos.' When people asked Google, 'Why are you doing that?' Google said, 'We don't know.'

“That’s the risk of algorithms. The fact is, the reason why that was happening is that Google had put somewhere in there, a rule that LGBT was something controversial. I think it turned out that they have a restricted mode for children, and they had defined gay content as risky, and so children shouldn’t see it. Which meant that even a gay person doing a makeup video counts as too risky to be seen. Their algorithm was making the decision to remove that,” Dreyfuss explained.

It’s very difficult to go back and look at an algorithm and see where bias exists. This is why, when you create the rules and feed the data, you have to be extremely thoughtful, she said.

"Another prevalent bias in our algorithms right now is about to be really big because of the iPhone 8 that just came out. It's in computer vision, facial recognition technology. Facial recognition technology is on iPads, it's on iPhones, it's really big in the new iPhone, and it learns by looking at pictures and faces. So it can learn what a human face is. But these algorithms train on data that goes way back in the history of photography, of Kodak and Hollywood, and that is a history of discrimination, where most of the faces were white. As a result of that, computer vision has a hard time seeing non-white faces. Computer vision is not biased. The human beings who tried to create this technology are not biased, but the data they fed it is biased," she said.
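The mechanism Dreyfuss describes can be shown with a deliberately tiny, hypothetical sketch: a "detector" trained mostly on one group learns a rule that works for that group and fails on the underrepresented one. All of the numbers and names below are invented for illustration, not taken from any real facial recognition system.

```python
# Toy illustration of training-data bias: the learned rule simply
# averages the training samples, so whoever dominates the training
# set dominates the rule.

def train_threshold(samples):
    """Learn a single feature threshold as the mean of the training data."""
    return sum(samples) / len(samples)

# Skewed training set: nine samples from group A, one from group B.
group_a = [0.8, 0.82, 0.78, 0.81, 0.79, 0.8, 0.83, 0.77, 0.8]
group_b = [0.3]
threshold = train_threshold(group_a + group_b)  # lands near group A

def detect_face(feature_value, threshold):
    # The learned rule amounts to: "faces look like the training set."
    return feature_value >= threshold

print(detect_face(0.8, threshold))  # group-A-like input: detected
print(detect_face(0.3, threshold))  # group-B-like input: missed
```

Neither the training function nor the detector contains a biased line of code; the skew in the training data alone produces the skewed outcome.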

Facebook’s algorithms determine what content you see, and what you don’t see. Following someone’s page or liking a company’s brand isn’t enough to make that show up in your newsfeed if Facebook’s algorithms have determined that you aren’t interested in that update.

Earlier this summer, someone committed murder on Facebook Live. The video was live for hours, and people were furious that Facebook allowed it to be visible for so long.

"There's two answers to that. One answer is that, technically, Facebook's algorithm, though it is advanced, and though it makes these decisions about who your friends are and what it wants to show you, and how to target ads to you based on your demographic, it's not advanced enough to watch live video as it's happening," she said. And the second answer is a limit of computer vision: algorithms cannot tell a real gun from a fake gun in a video game.

Fake news and election impact

The fake news of the 2016 election is another variable that has had a vast effect on society. Dreyfuss said, "Facebook has been accused of not being thoughtful about how it allowed fake news to be targeted to specific people who were voting in the 2016 US election. And at first, Facebook, Mark Zuckerberg, said, 'I don't think that's a thing. I'm really not worried about it.' But, recently, when it was found that these proponents of fake news actually bought ads on Facebook that allowed them to directly target their swarms to certain voters in certain areas in America, Zuckerberg said, 'Whoa, okay, you guys, there's like, indisputable evidence now, we're going to do something to fix this.' And they added new reporting tools for fake news, and they are announcing a whole new team. They hired a ton of journalists to help and they really seemed to be taking it more seriously, especially in the wake of this news that happened last week about Facebook's algorithm, which is in charge of how ads work."

“The thing to understand about Facebook is even if we love it because it lets us see where our friends from college ended up and see their babies, that is not really how Facebook makes its money. And Facebook doesn’t really care actually, in terms of the bottom line about that. Facebook makes its money by putting ads in front of our faces, and we give Facebook a ton of details about our lives so that they can target the ads directly at us,” she said.

"And this algorithm was found to have created a category that allowed people to target ads directly to anti-Semites and every other kind of bigotry that exists in the world. Why are these things possible? Because Facebook didn't know ahead of time that they would be. Facebook, when it says they reflect society, what that shows is that they did not look forward and think about how, if that's the case, how is our society then going to be empowered to abuse the tools they give us?"

Don’t be controlled by algorithms

We no longer live in the 80s, when “move fast and break things” only meant egos being damaged by failed companies. Now, breaking things means lives and democracies are being impacted and lost, she said.

Algorithms can tell anyone anything they want, as information can be tailored to be what someone wants it to be. Moving fast, innovating and thinking forward and thoughtfully is something everyone can do to combat the invasion of algorithms in our lives.

Turning off push notifications is one way to combat how algorithms impact our lives, since it is subconsciously stressful to hear your phone ping or vibrate and know you need to look at it. Another way is being more thoughtful about what is tweeted or written on social media, especially in the wake of a terrorist attack. Even a well-meaning tweet saying a terrorist attack is horrible spreads fear, which is the entire point of terrorism, she said.

“There’s all these ways to be a little more mindful. And on Facebook I think, just knowing when you log into Facebook, and see those adorable baby pictures…just being aware of the fact that this is a simulated, curated reality that you’re not curating, that Facebook is curating for you, I think will help you to be empowered to hold technologists to account, because we are their customers. We are their users. Without us, they are not making products.”
