TechRepublic's Dan Patterson talked with Michael Fauscette, CRO of software solutions company G2 Crowd, about the threat posed by video face-swapping technology.
Watch the video, or read a transcript of their conversation below:
Patterson: Michael, let's talk a little bit about AI-based face swapping. It may seem like a funny novelty, but there are serious threats involved. How does this technology work?
Fauscette: Well, it's something that's evolved very quickly over the last, I would say, probably a year. It originally started as just a project, somebody playing around with technology based on artificial intelligence and machine learning, using a face-swap algorithm that would let you cut somebody's face out of a video and put it onto somebody else's body.
Of course, as you'd expect, the first trials of that were pretty rough and pretty easy to spot. But in the world of things that teach themselves how to be better, they've taught themselves how to be better very quickly. And so now you start to see things that are fairly realistic: videos portraying a person's face on a different body, and even sound that's very accurate.
SEE: Artificial intelligence: A business leader's guide (free PDF) (TechRepublic)
Patterson: So, when you say taught themselves, or learned, is this the AI using recursive learning to improve?
Fauscette: That's correct. It's really just machine learning, or deep learning, that takes an algorithm that does some activity and then, over time, improves itself based on different factors and feedback, of course. I'll tell you, just as an experiment, we took one of our folks on the marketing team, someone who didn't have any particular video editing skills, who'd just done some things on his own personally.
And within a day, he produced a fairly convincing video of one of my research specialists talking, by cutting out his face, putting it on a different body, and then putting words in his mouth, so to speak.
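The self-improvement loop Fauscette describes can be sketched in miniature. This is purely illustrative, not actual deepfake code: a real face-swap model adjusts millions of parameters, but the principle is the same as this single-parameter toy, where each pass uses error feedback to get a little closer to the target.

```python
# Toy illustration of feedback-driven learning: a single parameter is
# repeatedly nudged toward a target, shrinking its error on every pass,
# the way a model's output goes from "pretty rough" to realistic.

def train(target, steps=100, lr=0.1):
    """Nudge an estimate toward `target` using error feedback."""
    estimate = 0.0
    errors = []
    for _ in range(steps):
        error = estimate - target   # feedback: how wrong are we right now?
        errors.append(abs(error))
        estimate -= lr * error      # adjust in the direction that reduces error
    return estimate, errors

final, errors = train(target=5.0)
# Early error is large; after many iterations it is nearly zero.
```

Each iteration shrinks the remaining error by a fixed fraction, which is why early outputs are easy to spot as fakes while later ones are not.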
Patterson: So, what are the potential risks or dangers to business, politics, and other industries?
Fauscette: Well, if you think about the saying that perception is reality, I think you start to open up some really dangerous situations for business, and then, as you said, for government. But take the business case first. Over the last couple of years, ransomware attacks have increased, both in city government, like the Atlanta example recently, and of course in business, where companies are held ransom based on some insertion of malicious software.
Well now, take that further: I could put together a convincing video that shows your CEO taking a bribe, or giving a bribe, or doing something with a woman, or an escort, or whatever. There are all these different potential cases that set you up to look like you were doing something that you wouldn't normally do.
And of course, over time, with experts you could prove it wrong, but the hit to your stock price, for example, doesn't recover quite as easily as you might think. And so, in that moment, it could look very bad. Government, even worse, right? You have public officials taking bribes, or making racial comments, or officials appearing in places and doing things that they wouldn't normally have done.
Or even in compromising positions. Spies, or criminals, or soldiers doing horrific things in a war zone. There are all sorts of scenarios there that, again, in the moment ... Or even an election. You have a candidate doing something that was really out of character, that nobody would expect, but it has the impact of creating disruption in the election. It could cause them to lose. You can see there are just a lot of scenarios there that sound upsetting, I guess is the way to put it.
SEE: Nine ways to disappear from the internet (free PDF) (TechRepublic)
Patterson: Yeah. As a journalist, where trust with the audience and with editors is a critical component of the job, I can see a lot of potentially horrific uses for this type of technology. Right now, the tech is packaged as kind of cutesy AR apps. But how widely available is this technology for weaponized purposes? I'm thinking specifically of the Shadow Brokers NSA leak, although those were framed as cyberweapons to begin with. This type of tech seems like it's very easily weaponized. How accurate is that?
Fauscette: Well yeah, I think it's very accurate. The whole thing really started with a subreddit, and some code that was posted there, with some videos, of course. That was eventually shut down, but the code was already in the wild. And then of course it became an open source project, and it ended up on different websites where you can download the code and improve it.
And so once code is in the wild, a lot of people have access to it, and it just keeps showing up in different places. You can't really eliminate it, and there's really no way to control it. And it just continues to get better with each hand-off.
Patterson: Michael Fauscette, CRO of G2 Crowd, paint a picture for us of the next 18 to 36 months. Is there any way this type of technology gets deployed for positive reasons, or are we looking at some sort of crazy fake news future where fake humans are blended in with fake news?
Fauscette: Unfortunately, I don't see a real positive use case for the technology. I think it is going to improve, and I do believe that can very well create some very bad situations, either from a business perspective, or financially, or from a governmental perspective, and you can think of all sorts of scenarios there. I think we'll get better at a couple of things. We'll get better at building technology to try to identify these fakes.
But the problem, of course, is that in the moment, the damage can be done before you get to the point of identifying it. We may even get some sort of digital certificate that we could use to provide evidence of the provenance of certain videos. But again, the moment is the issue. And so I think for a while at least, there's a big risk around that and the whole fake news problem.
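The provenance idea Fauscette raises can be sketched as follows. This is a hypothetical illustration, not a deployed standard: it uses a shared secret and HMAC for brevity, where a real scheme would use public-key certificates issued to publishers. The key and variable names are invented for the example.

```python
# Hypothetical sketch of video provenance: a publisher signs a hash of the
# original video, so anyone holding the signature can detect an altered copy.

import hashlib
import hmac

SECRET = b"publisher-signing-key"  # stand-in for a real private key

def sign(video_bytes: bytes) -> str:
    """Return a signature over the SHA-256 digest of the video."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, signature: str) -> bool:
    """Check a video against its published signature in constant time."""
    return hmac.compare_digest(sign(video_bytes), signature)

original = b"...original video frames..."
tag = sign(original)
```

An untouched copy verifies against the tag, while a doctored video fails, which is the evidence of provenance Fauscette describes. As he notes, though, verification only helps after the fact; it doesn't stop the in-the-moment damage.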
- How blockchain technology could prevent fake news from spreading (TechRepublic)
- Welsh police facial recognition software has 92% fail rate, showing dangers of early AI (TechRepublic)
- DeepMind research shows AI can make itself more human, and businesses should take notice (TechRepublic)
- Facebook's fake account crackdown: Our AI spots nudity, hate, terror before you do (ZDNet)
- The ethics lessons will continue until morality improves (ZDNet)
- Can humans get a handle on AI? (ZDNet)
Dan Patterson has nothing to disclose. He does not hold investments in the technology companies he covers.
Dan is a Senior Producer for CNET and CBS News.