Why AI-faked videos are a danger to national and business security

If you thought fake news was bad, just wait until hackers get their hands on AI-powered face swapping tech, says G2 Crowd CRO Michael Fauscette.

Michael Fauscette, CRO of G2 Crowd, tells TechRepublic Senior Producer Dan Patterson that new AI tech, which can easily create fake news, can be a boon to hackers. The following is an edited transcript of the interview.

Dan Patterson: Artificial intelligence can do a lot of things, including swap your face. Let's talk a little bit about... it may seem like a funny novelty, but serious threats are involved with AI-based face-swapping. How does this technology work?

Michael Fauscette: Well, it's something that's evolved very quickly over the last year or so. It started as a side project, somebody playing around with technology based on artificial intelligence and machine learning. A face-swap algorithm lets you cut somebody's face out of a video and put it onto somebody else's body. Of course, as you'd expect, the first attempts were pretty rough and pretty easy to spot, but in the world of systems that teach themselves to get better, they've gotten better very quickly. So now you start to see fairly realistic videos portraying a person's face on a different body, and even the sound, which can be very accurate.

Dan Patterson: When you say taught themselves, or learned, is this the AI using 'recursive learning' to improve?

Michael Fauscette: That's correct. It's really just machine learning, or deep learning, that takes an algorithm that performs some activity and then, over time, improves itself based on different factors and, of course, feedback. I'll tell you, just as an experiment, we took one of our folks on the marketing team, someone without any real video-editing skills beyond some things he'd done on his own personally. Within a day, he had produced a fairly convincing video of one of my research specialists talking, by cutting out his face, putting it on a different body, and then putting words in his mouth, so to speak.
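The learning loop Fauscette describes is, in the deepfake tools derived from that original code, typically a shared encoder paired with one decoder per subject; swapping happens by decoding one person's encoding with the other person's decoder. The toy numpy sketch below is only an illustration of that structure, under heavy simplifying assumptions: random vectors stand in for real face images, and the encoder and decoders are plain linear layers trained by gradient descent, nothing like a production model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" data: flattened patches standing in for two subjects' images.
faces_a = rng.normal(0.3, 0.1, size=(200, 64))
faces_b = rng.normal(0.7, 0.1, size=(200, 64))

dim, latent = 64, 16
W_enc = rng.normal(0, 0.1, (dim, latent))    # shared encoder
W_dec_a = rng.normal(0, 0.1, (latent, dim))  # decoder for subject A
W_dec_b = rng.normal(0, 0.1, (latent, dim))  # decoder for subject B

lr = 0.01
for _ in range(500):
    # Alternate between subjects; the encoder is shared, decoders are not.
    for X, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        Z = X @ W_enc          # encode
        X_hat = Z @ W_dec      # reconstruct
        err = X_hat - X
        # Gradient steps on the squared reconstruction error.
        grad_dec = Z.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

# The "swap": encode subject A's frames, decode with subject B's decoder.
swapped = (faces_a @ W_enc) @ W_dec_b
```

Because the decoders never see each other's subject, each one learns to render only its own person; feeding it the other person's encoding is what produces the face swap, and the "teaching itself" Fauscette mentions is just this reconstruction error shrinking over many iterations.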

Dan Patterson: So what are the potential risks or dangers to business, politics, and other industries?

Michael Fauscette: If you think about the saying that perception is reality, you start to open up some really dangerous situations for business and, as you said, for government. Take the business case first. Over the last couple of years, ransomware attacks have increased, both against city governments, like the recent Atlanta example, and of course against businesses held ransom through the insertion of malicious software. Now take that further: I could put together a convincing video that shows your CEO taking a bribe, or giving a bribe, or doing something with an escort, or whatever. There are all these potential cases that set you up to look like you were doing something you wouldn't normally do.

SEE: Zuckerberg skips global hearing on fake news, irking politicians (CNET)

And of course, over time, experts could prove it wrong, but the hit to your stock price, for example, doesn't recover quite as easily as you might think. So in that moment, it could look very bad. For government, it's even worse: public officials taking bribes or making racist comments, officials appearing in places and doing things they wouldn't normally have done, or even in compromising positions; spies or criminals or soldiers doing horrific things in a war zone. There are all sorts of scenarios there that work in the moment, even in an election. You could have a candidate doing something really out of character, something nobody would expect, but it has the impact of creating a disruption in the election that could cause them to lose. You can see there are just a lot of scenarios there that sound upsetting, to put it mildly.

Dan Patterson: As a journalist, where trust with the audience and with editors is a critical component of the job, I can see a lot of potentially horrific uses for this type of technology. Right now, the tech is packaged as kind of cutesy AR apps, but how widely available is this technology for weaponized purposes? I'm thinking specifically of the Shadow Brokers' NSA cyber-weapons leak, although that was framed as cyber weapons to begin with. This type of tech seems like it's very easily weaponized. How accurate is that?

Michael Fauscette: Well, yeah, it's very accurate. The whole thing really started on Reddit, or a subreddit I should say, with some code that was posted there, along with some videos, of course. That was eventually shut down, but the code was already in the wild. It became an open source project and ended up on different websites where you can download the code and improve it. Once the code is in the wild, a lot of people have access to it, and it just keeps showing up in different places. You can't really eliminate it, there's really no way to control it, and it just continues to get better with each handoff.

SEE: Deepfakes are a threat to national security, say lawmakers (CNET)

Dan Patterson: Paint a picture for us of the next, say, 18 to 36 months. Is there any way that this type of technology gets deployed for positive reasons, or are we looking at some sort of crazy fake news future where fake humans are blended in with fake news?

Michael Fauscette: Unfortunately, I don't see a real positive use case for the technology, and I do think it's going to improve. I believe it can very well create some very bad situations, whether from a business perspective, financially, or from a governmental perspective. You can think of all sorts of scenarios there.

We'll get better at a couple of things. We'll get better at building technology to identify fakes, but the problem with that, of course, is that the damage can be done in the moment, before you get to the point of identifying it. We could perhaps use some sort of digital certificates to provide evidence of the provenance of certain videos. But again, the moment is the issue, and so for a while at least, there's a big risk around this and the whole fake news problem.
