Security

IBM introduces 'Adversarial Robustness Toolbox' to keep your AI from getting attacked

IBM machine learning researcher Maria-Irina Nicolae spoke with TechRepublic about how the company's new Adversarial Robustness Toolbox can protect AI from tampering.

At the 2018 RSA Conference, Maria-Irina Nicolae, machine learning researcher at IBM, talked with TechRepublic about the launch of a new tool to help companies test and improve their AI:

Nicolae: IBM announced the release of the Adversarial Robustness Toolbox today. This is a toolbox that is meant to help developers and researchers working on adversarial attacks against machine learning, so the toolbox features attacks and defense methods along with some metrics for evaluating the robustness of machine learning models.

So we had a demo today that relies entirely on features that you can find in the Adversarial Robustness Toolbox. What we are showing is how an attacker can tamper with an input, with an image, for a machine learning model. In this case we were looking at a visual recognition task: the machine learning model is trying to identify the objects in an image. The attacker introduces some very small noise perturbations to the image that might even be undetectable by humans, and these small perturbations make the machine learning model behave in an unexpected way. In this case, it will predict the wrong object in the image.
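The perturbation attack Nicolae describes can be illustrated with a self-contained sketch. This is plain NumPy against a toy model, not the toolbox's actual API; the data, the logistic-regression "victim," and the epsilon value are all invented for the example. The perturbation rule is the well-known Fast Gradient Sign Method (FGSM): nudge each input in the direction that increases the model's loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two well-separated 2-D clusters.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Train a small logistic-regression "victim" model by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return (sigmoid(x @ w + b) > 0.5).astype(int)

# FGSM-style evasion: step each input in the direction that increases
# the loss; sign() bounds the per-feature change by eps.
def fgsm(x, labels, eps):
    grad = (sigmoid(x @ w + b) - labels)[:, None] * w  # dLoss/dx per sample
    return x + eps * np.sign(grad)

clean_acc = np.mean(predict(X) == y)
X_adv = fgsm(X, y, eps=2.5)
adv_acc = np.mean(predict(X_adv) == y)
```

On image classifiers the same idea operates per pixel, which is why the change can be imperceptible to humans while still flipping the model's prediction; here the accuracy simply drops on the perturbed inputs.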


Yes, so this is an important problem for every system that uses AI to make decisions. If an attacker can tamper with the decision of the model and that decision has follow-up impact, say in a self-driving car, you can see how a liability problem arises right away.

So having this kind of attack, where someone tampers with the input of an AI that's in production and already working, is what we call an evasion attack: the attacker wants to stay inconspicuous and avoid detection, and that's the evasion part. Other types of attacks against machine learning models include tampering with the data that has been used to train the model. When this happens, the AI is compromised, and this is what we call a poisoning attack. The library will feature poisoning attacks and defenses against these types of attacks in the future.
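The poisoning attack Nicolae contrasts with evasion can be sketched the same way. Again this is an invented NumPy illustration, not the toolbox's API: the attacker slips mislabeled points into the training set (here, class-0 labels planted deep inside class 1's region), and the model trained on the corrupted data performs worse on clean test data.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    # Two 2-D clusters: class 0 around (-2,-2), class 1 around (2,2).
    X = np.vstack([rng.normal(-2.0, 0.5, (n, 2)),
                   rng.normal(2.0, 0.5, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train(X, y, steps=2000, lr=0.1):
    # Plain gradient-descent logistic regression.
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5).astype(int) == y)

X_train, y_train = make_data(50)
X_test, y_test = make_data(50)

# A model trained on clean data separates the clusters easily.
clean_acc = accuracy(*train(X_train, y_train), X_test, y_test)

# Poisoning: attacker-controlled points carry the wrong label and sit
# inside class 1's region, dragging the learned boundary with them.
X_poison = rng.normal(2.0, 0.5, (80, 2))
y_poison = np.zeros(80, dtype=int)
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, y_poison])

poisoned_acc = accuracy(*train(X_bad, y_bad), X_test, y_test)
```

Unlike evasion, the attack happens before deployment: every model trained on the tainted data inherits the compromise, which is why defenses here focus on vetting training data rather than filtering inputs.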


About Jason Hiner

Jason Hiner is Global Editor in Chief of TechRepublic and Global Long Form Editor of ZDNet. He's co-author of the book, Follow the Geeks.
