
Why deepfakes are a real threat to elections and society

Experts predict that deepfake videos will be the newest way false information is spread. Some researchers even have a wager going on whether they will impact the midterm elections.

Deepfakes are a new breed of fake videos that use artificial intelligence (AI) to make a falsified video virtually undetectable by swapping out someone's face and voice with an imposter's. The consensus among researchers is that deepfakes will eventually be used to impact a political election, whether this year or in the near future.

This is much more than a Photoshopped meme or a fake news story. With deepfake videos, algorithms are used to recognize actual audio or visual aspects of a person and then, just as with a fake photo, an actual video of that person is doctored to replace what they really said or did with a false video clip that perfectly mimics them. It's nearly impossible to know that the video isn't real.
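To make the mechanics concrete, here is a deliberately crude, single-frame sketch in Python using OpenCV. It is not how real deepfakes are produced (those train neural networks on many hours of footage), but it shows the basic face-replacement step in miniature; the file names are placeholders.

```python
# Toy illustration only: a crude single-frame face swap with OpenCV.
# Real deepfakes train neural networks on thousands of frames; this
# sketch just shows the basic "replace one face with another" idea.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    """Return (x, y, w, h) of the first face found in the image."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0]  # assumes at least one face was detected

target = cv2.imread("target_frame.jpg")  # frame from the video being doctored
source = cv2.imread("source_face.jpg")   # the face being inserted

tx, ty, tw, th = first_face(target)
sx, sy, sw, sh = first_face(source)

# Resize the source face to fit the target's face region.
patch = cv2.resize(source[sy:sy+sh, sx:sx+sw], (tw, th))

# Blend the patch in so the seams are less obvious.
mask = 255 * np.ones(patch.shape, patch.dtype)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped_frame.jpg", swapped)
```

A swap this simple is easy to spot; the deepfake techniques described below are what make the result nearly undetectable.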

SEE: Cybersecurity and the 2018 Midterms (TechRepublic Flipboard magazine)

Social media platforms such as Facebook, Twitter, YouTube, and Reddit are prime candidates for deepfake creators to target.

It's such a concern that the September congressional hearings with Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey included questions about deepfake videos, how they manipulate the public, and what the companies are doing about it.

The threat even led the Defense Advanced Research Projects Agency (DARPA) at the Pentagon to embark upon a Media Forensics project to identify deepfakes and other deceptive images.

Deepfakes gained attention earlier this year when BuzzFeed created a video that supposedly showed Obama mocking Trump. The truth was that deepfake technology was used to superimpose Obama's face onto footage of Hollywood filmmaker Jordan Peele.

While deepfakes began as a clumsy way to misrepresent celebrities in spoofs and sexually explicit videos, creating a truly undetectable deepfake video remains very complicated.

Only a few labs around the world have that capacity, because the tools needed to create convincing deepfake videos are expensive, though much less so than in the past.

"Sophisticated multimedia editing used to require significant human expertise and time, even with the best commercial tools. Today, we are seeing tools come directly from the research community that allow for photorealistic manipulation and special effects that used to cost millions of dollars to create. While these tools are an asset to content creators such as those in Hollywood, they are lowering the bar for those that want to use them for adversarial purposes, said Matt Turek, DARPA program manager.

Not ready for primetime

Despite this, some researchers have a friendly wager on whether deepfakes will have an impact by the end of this year, with a political candidate becoming the subject of a deepfake video that receives more than 2 million views before it is determined to be fake.

Tim Hwang, director of the Ethics and Governance of AI Initiative at the Harvard Berkman Klein Center and the MIT Media Lab, started the wager to spark a debate over whether his colleagues believed deepfakes would become a threat before the end of 2018, and possibly impact the midterm elections. Hwang said he is in the camp that doesn't believe deepfakes will have a major impact before the end of the year.

"It's not ready for primetime yet," Hwang said of deepfakes. "I think people who want to spread disinformation are pragmatic in what's the easiest way to have the biggest effect. And right now, machine learning isn't like that."

SEE: Midterm elections 2018: How 7 states are fighting cybersecurity threats from Russia and other attackers (free PDF) (TechRepublic)

Rebecca Crootof, executive director of the Information Society Project and a research scholar and lecturer in law at Yale Law School, said she wagered "yes" that deepfakes could have a serious impact by the end of 2018.

"It's not a matter of if, it's a matter of when—and when we learn that it happened. Chances are, we will only learn that a deepfake affected an election after the election takes place," Crootof said.

It's all in the blinks

Some researchers are working on ways to combat deepfakes. Siwei Lyu, director of the Computer Vision and Machine Learning Lab at the University at Albany, SUNY, has researched digital media forensics for 15 years, and in June he co-wrote a paper outlining a way to tell whether a video is a deepfake. His discovery: it's all in the blinks. If the person in a video doesn't blink much, that's suspicious.

His team is seeking other ways to detect fakes, but he is keeping those methods confidential so they don't help deepfake creators find ways to dodge detection.

"We just got interested in this deepfake phenomenon earlier this year. The first thing we did is actually got a piece of the deepfake software and we actually played with the software, we actually improved it a little bit. Because we always believed to understand, to detect any faulty media we need to have a better understanding of the generation process," Lyu said.

"We have an improved version of the software, the algorithm, and we synthesized about 50 different sequences of those videos. We try a bunch of ways to detect that video, you know to tell the difference between the fake video and the real video," he continued.

SEE: Guidelines for building security policies (Tech Pro Research)

Lyu said that by spending so many hours watching and studying deepfake videos, his team began to pick out small differences. At first, though, those differences registered only as a feeling: watching the videos left him uncomfortable and a bit uneasy.

Never underestimate the importance of intuition. "I couldn't pin it down until one day, after probably viewing them for [a long time], I got really tired," Lyu said. "Then suddenly I realized, the faces in those fake videos seem to be never blinking. That's the uneasy feeling that I related to an early experience of when I was a kid, playing with other kids, doing staring contests. We would just stare at each other, without blinking, to see who is going to blink first. Each time I did that I felt very uncomfortable when I was a kid."

"At the very beginning I thought this may be just a particular artifact of one video we synthesized, so I went back and watched all the videos we synthesized, and it seems that to be very consistent with videos longer than 10 seconds, sometimes 20 seconds or 30 seconds, and the figures in those videos, they don't blink," he said.

Adversarial training to avoid detection

The creators of deepfakes use adversarial training to learn how to beat the fake detector techniques, said Paul Resnick, founder and acting director of the Center for Social Media Responsibility at the University of Michigan.

"The idea is, suppose we have some automated detection that's developed and it looks at all the characteristics at people, like, it looks at if the skin tone's correct, and are people breathing at the right rate, and if the pulse in the forehead is the same as the pulse in the neck, and whatever things that you can imagine that you might put into a detector. But the attacker will be able to use that detector and train against it. So they'll be able to build their faking techniques that automatically check to make sure that the detector is not able to detect that they're fake," Resnick said.

"So they can sort of train their generator of fakes by having it automatically try to run the detectors. So that's part of what makes me pessimistic about being able to have effective detectors that are based solely of the contents of the video, because the attackers are eventually gonna get sophisticated enough to use the detectors as part of their training process for making their attack, or making their fakes," he said.

Since there are ways to get around software that detects fakes, digital signatures on videos, along with knowing where a video came from and who created it, will be key to stopping the spread of deepfakes, Resnick said.
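The provenance idea is simple to sketch. The hypothetical Python example below uses the `cryptography` package to sign a video's bytes with an Ed25519 key at publication time and verify them later; changing even one byte of the file makes verification fail. The file path is a placeholder.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # held by the camera or publisher
verify_key = signing_key.public_key()       # published for anyone to check

with open("clip.mp4", "rb") as f:
    video_bytes = f.read()

# At publication time: sign the exact bytes and distribute the signature
# alongside the video file.
signature = signing_key.sign(video_bytes)

# Later, any viewer can confirm the clip is unchanged and really came
# from the holder of the signing key.
try:
    verify_key.verify(signature, video_bytes)
    print("Signature valid: file unmodified since signing.")
except InvalidSignature:
    print("Signature check FAILED: file was altered or key is wrong.")
```

Unlike content-based detectors, this check doesn't care how realistic the fake looks; it only answers whether the file matches what a trusted source originally published.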

SEE: Deepfakes are a threat to national security, say lawmakers (CNET)

The GAN approach

Another researcher studying deepfake videos is Bobby Chesney, professor and associate dean of the University of Texas School of Law. Chesney and Danielle Keats Citron co-wrote a paper in July, "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security."

"Danielle and I are trying to focus on true deep fakes, particularly GANs, and we take the view that we have not yet reached the day when true deep fakes are circulating with intent to deceive, though that day is looming," Chesney said.

GAN stands for "generative adversarial network." The approach brings two neural networks to bear at the same time: one network learns the patterns in a digital media clip, such as a politician's face, and the second serves as a judge, trying to figure out whether an image or video clip is real or not. The second network's feedback lets the first improve the believability of the deepfake video. Because this is all done with machine learning and AI, it operates at a speed and scale that humans cannot mimic, Chesney explained.
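As a rough illustration of that two-network setup, the toy PyTorch sketch below alternates between training the judge network to separate real samples from generated ones, and training the generator against the judge's feedback. The miniature models here stand in for the far larger networks used on actual video.

```python
import torch
import torch.nn as nn

# Toy networks: G is the forger, D is the judge. Real systems use deep
# convolutional networks; these flat layers keep the sketch short.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)

    # 1) Train the judge: real samples should score 1, generated ones 0.
    fake = G(torch.randn(n, 64)).detach()  # detach: don't update G here
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + \
             bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the forger: update G so the judge scores its output as real.
    fake = G(torch.randn(n, 64))
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Each call to train_step() plays one round of the forger-vs-judge game;
# over many rounds the generated samples get harder to distinguish.
```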

DARPA's Turek added that, "GANs enable a computer to automatically generate manipulations. Now, with the right training, we can have a computer automatically generate what used to take a graphic artist several hours, if not days, to create by hand."

A new kind of blackmail

The problem is that while high-quality deepfakes are currently difficult to make, they will soon become easier to create. Once that happens, people with malicious intent could use deepfakes to destroy the reputations of political candidates and other high-profile individuals, who are particularly at risk. And once a video goes viral, it's nearly impossible to stop.

"Right now there are labs out there that can do some really amazing fakery," Chesney said, "Access to that is not yet widespread. What is primarily available is not-so-sophisticated stuff that won't as readily pass the eyes and ears test."

Crootof said the danger of deepfakes is that "they allow for new kinds of blackmail, electoral manipulation, and inflaming extant social tensions. Also, as Bobby Chesney and Danielle Citron have noted, they increase the possibility of a 'liar's dividend.' Once the public is aware of the possibility of deepfakes, it allows liars to claim that an accurate video is just a deepfake.

"Most critically, they risk further eroding trust in sources of information, thereby contributing to the continued fragmentation of our public discourse," she said.

SEE: Campaign 2018: Election Hacking (CBS News and CNET)

Search remains for silver bullet solution

Currently, no sure-fire way to detect a deepfake exists. "At present, there doesn't seem to be a silver bullet. All of the suggested solutions - more critical analysis in education, technological watermarking, legal bans, and ongoing surveillance by a trusted independent third-party entity - to combat deepfakes are either insufficient to prevent most problems or raise their own set of (possibly worse) issues," Crootof said.

Instead, Crootof expects this will play out much like altered photographs - where people will become increasingly aware of the possibility of deepfakes, and lose faith in what they see.

With the rate of advancement in image and video editing tools, Turek believes that in the next few years manipulations may no longer be limited to a single image or video. "We could face the threat of entire events being fabricated with images, videos, and audio content coming from multiple views and locations, providing overwhelming amounts of false evidence," he said. "One could imagine with widespread dissemination that this could provoke riots, cause political unrest, or even prompt militaries to act, all on bad information."

The ramifications of this are unprecedented. "This is, of course, a serious concern, not only for the Department of Defense and military but to our nation in general," Turek said. "We rely heavily on visual media in everything from news reporting to law enforcement to open source content used to help understand trends happening around the world. If our trust is undermined and we can no longer have confidence in the provenance of our media, we will have difficulty believing all forms of communications."

It will lead to the public not trusting videos in general. Resnick said, "In the longer term, I don't think it's likely that the public will be fooled a lot of the time, because once it becomes well known that you can't trust video, there'll be an adjustment that people make. They won't assume that anything that they've seen is a real video. Just because you've seen it with your eyes in a video isn't enough on its own to conclude that it really happened."
