IBM security researchers demonstrate how new artificial intelligence-powered facial recognition technology can trigger malware lurking within common applications.
TechRepublic's Dan Patterson spoke with Marc PH. Stoecklin, Principal RSM & Manager, CCSI at IBM Research, who demonstrated how new artificial intelligence-powered facial recognition technology can trigger malware lurking within common applications.
Marc PH. Stoecklin: What we show in this proof of concept is AI-powered malware delivered through a distribution channel that uses an unsuspicious, innocent-looking application. For this purpose we use a videoconferencing application that we call Talk. We download this application, the user opens it from his downloads, and it runs.
It's behaving normally. We have the sign-in screen. Now, the application can be used as if it was a normal application. Indeed, it is a normal application. It is a fully usable application at that point. However, what we're going to see now, if we're moving the laptop to look at Dan's face, the behavior will suddenly change.
SEE: IT leader's guide to the threat of fileless malware (Tech Pro Research)
What happened now is that the AI model picked up on Dan's face and derived a key from it. It used Dan's face, basically, as a key to determine how and when to unlock the malware. That makes it very evasive and very targeted: only Dan, using this application, is shown the malicious behavior.
The AI inspects what the webcam sees and tries to derive a key to unlock the malicious intent. Only if the specific person the AI has been trained to recognize appears in front of the webcam can the key be derived, and only then does the malicious behavior show up.
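The mechanism Stoecklin describes, deriving a decryption key from a recognized face so the payload only unlocks for one target, can be sketched in benign form. The sketch below is an illustration under stated assumptions, not IBM's actual implementation: the embedding vectors, the quantization step, and the XOR cipher are all hypothetical stand-ins (a real face-recognition model would produce high-dimensional embeddings, and real key derivation would use proper cryptography).

```python
import hashlib

def derive_key(embedding, precision=1):
    # Quantize the embedding so small recognition noise between frames
    # still maps the same face to the same key.
    quantized = tuple(round(x, precision) for x in embedding)
    return hashlib.sha256(repr(quantized).encode()).digest()

def xor_cipher(data, key):
    # Toy symmetric cipher for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Lock" time: the payload is encrypted against the target's embedding.
# The key itself is never stored -- only the locked bytes ship with the app.
target_embedding = [0.12, -0.48, 0.91, 0.33]   # hypothetical model output
payload = b"payload-placeholder"
locked = xor_cipher(payload, derive_key(target_embedding))

# Runtime: a live embedding close to the target quantizes to the same
# key, so the payload reconstructs; any other face yields garbage.
live_embedding = [0.118, -0.481, 0.912, 0.329]  # same face, sensor noise
print(xor_cipher(locked, derive_key(live_embedding)) == payload)

other_embedding = [0.85, 0.21, -0.33, 0.07]     # a different face
print(xor_cipher(locked, derive_key(other_embedding)) == payload)
```

This captures why the approach is evasive: without the target's face in view, the binary contains only ciphertext, and there is no key for an analyst to extract.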