Hackers are using artificial intelligence and machine learning to improve their attacks. Here's how to safeguard against malicious AI, according to Forrester.
Building a slide deck, pitch, or presentation? Here are the big takeaways:
- Hackers have exploited AI technologies to take control of IoT devices, spy, and carry out malicious activities. — Forrester, 2018
- 64% of global enterprise security decision makers report that they are concerned about AI technologies. — Forrester, 2018
Enterprises are tapping artificial intelligence (AI) tools such as text analytics, facial recognition, and machine learning platforms to transform almost every aspect of the business. However, cybercriminals are using those same technologies to improve their malicious activities, according to a new report from Forrester.
Some 64% of global enterprise security decision makers report that they are concerned about AI technologies, according to Forrester data. Security leaders must familiarize themselves with AI tools, and the potential ways that hackers can use them to exploit enterprise networks, the report noted.
"Ignoring the coming storm of AI-powered exploitation will lead to an entire second generation of failure and exploitation across future networks and infrastructures," according to the report. "The time to prepare for and understand the nuances of these types of threats is now, before you're under attack."
SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)
Forrester recommended that enterprises take the following steps to protect their AI efforts from cybercriminals and to stop criminals from carrying out AI-powered attacks against the business:
1. Hire or contract an AI expert onto the security team
More than half of enterprises are now implementing or expanding AI efforts, according to Forrester. Marketers are using the technology to better understand customers and improve their experiences, while business and tech leaders are using these tools to gain insights from data. Meanwhile, other parts of the business are investing in physical robots and robotic process automation.
"Security leaders must make the effort to insert themselves into these initiatives and provide guidance on how the organization builds, uses, and secures AI technologies," the report stated. "Don't let security become an afterthought as it usually is with emerging technology adoption."
Forrester recommends security teams hire or contract with a dedicated AI expert who is tasked with ensuring the safety of all enterprise AI endeavors.
2. Inventory AI initiatives across the enterprise
Security leaders know that they must align their efforts with the business and show support for business objectives. As part of these efforts, ensure that you include relevant AI projects and planned activities, so that you can map out risks, security objectives, compliance objectives, relevant projects, and success metrics from the beginning.
"This will ensure that you aren't brought in too late to AI projects or, worse, not at all," the report stated.
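As a rough illustration, an inventory entry might capture each AI initiative alongside the dimensions the report suggests mapping out. The record structure and the example values below are assumptions for illustration, not Forrester's:

```python
# Hypothetical sketch of an AI-initiative inventory record. The fields mirror
# the dimensions named above (risks, security objectives, compliance
# objectives, success metrics); the structure itself is an illustrative
# assumption, not from the report.
from dataclasses import dataclass, field

@dataclass
class AIInitiative:
    name: str                              # e.g. "customer-churn model"
    owner: str                             # business unit accountable for it
    risks: list[str] = field(default_factory=list)
    security_objectives: list[str] = field(default_factory=list)
    compliance_objectives: list[str] = field(default_factory=list)
    success_metrics: list[str] = field(default_factory=list)

inventory = [
    AIInitiative(
        name="customer support chatbot",
        owner="marketing",
        risks=["training data poisoning", "abuse of the model to leak data"],
        security_objectives=["validate training data sources"],
        compliance_objectives=["log and retain customer interactions"],
        success_metrics=["time-to-review for new AI projects"],
    ),
]
```

Keeping a structured record like this per initiative makes it straightforward to spot projects that have no security objectives attached at all.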
3. Assess AI technologies as both weapons and attack vectors
As you inventory AI initiatives and perform a risk assessment, remember to assess the risks in two dimensions: how hackers will use weaponized AI technologies against you, and how cybercriminals will turn your organization's own adopted AI technologies into attack vectors into the firm.
"The former will require that you keep pace with cybercriminals by transforming your own security operations with AI technologies," the report stated. "For the latter, remember that the security team doesn't have any power to stop AI initiatives. Your job is to evaluate the risks and make recommendations."
The business ultimately makes the decision about what to do, based on its risk tolerance, the report noted.
4. Uphold data integrity as essential in an AI-powered world
AI systems make decisions based on their training data and algorithms. If that training data were corrupted by a malicious actor, every decision the system made would be corrupted as well. For example, if a cybercriminal hacked an inference engine that stops malware so that it flagged every internal connection to a specific website as malicious, the system could be flipped to treat all connections to any website as malicious, effectively DDoSing a network, the report noted.
"As you seek to protect AI technologies from cybercriminals, give equal attention to the confidentiality and integrity of the data," the report stated.
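The poisoning risk described above can be shown with a toy model. The following is a minimal, hypothetical sketch (not from the report) of a nearest-centroid "allow/block" classifier whose decisions invert once an attacker flips the training labels:

```python
# Toy illustration of training-data poisoning: a classifier that learns
# per-class average "threat scores" from labeled samples. All data and
# names here are illustrative assumptions, not from the Forrester report.

def train(samples):
    """Learn the mean score of each class from (score, is_malicious) pairs."""
    benign = [s for s, bad in samples if not bad]
    malicious = [s for s, bad in samples if bad]
    return sum(benign) / len(benign), sum(malicious) / len(malicious)

def classify(score, centroids):
    """Block if the score is closer to the learned malicious centroid."""
    c_benign, c_malicious = centroids
    return "block" if abs(score - c_malicious) < abs(score - c_benign) else "allow"

# Clean training data: low scores are benign traffic, high scores malware.
clean = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]

# Poisoned training data: an attacker flips every label, so the model
# learns that ordinary low-score traffic is "malicious".
poisoned = [(s, not bad) for s, bad in clean]

normal_traffic = 0.2
print(classify(normal_traffic, train(clean)))     # ordinary traffic allowed
print(classify(normal_traffic, train(poisoned)))  # same traffic now blocked
```

Nothing about the classifier's code changed between the two runs; only the integrity of its training data did, which is why the report treats data integrity as a first-class security concern.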
- Special report: How to implement AI and machine learning (free PDF) (TechRepublic)
- How AI-powered cyberattacks will make fighting hackers even harder (ZDNet)
- Machine learning: The smart person's guide (TechRepublic)
- Using AI to improve security in the data age (ZDNet)
- The malicious uses of AI: Why it's urgent to prepare now (TechRepublic)