In any debate, there are always at least two sides. That reasoning also applies to whether it is a good idea to use artificial intelligence technology to blunt the advantage of cybercriminals who are already using AI to improve their success rate.
In an email exchange, I asked Ramprakash Ramamoorthy, director of research at ManageEngine, a division of Zoho Corporation, for his thoughts on the matter. Ramamoorthy is firmly on the affirmative side for using AI to fight cybercrime. He said, “The only way to combat cybercriminals using AI-enhanced attacks is to fight fire with fire and employ AI countermeasures.”
Why choose AI in cybersecurity?
An obvious question is: Why add another expensive technology to a company’s cybersecurity platform, especially in a department that many upper management types consider to have a terrible return on investment? Ramamoorthy offered the following reasons:
- Enterprise security and privacy practices have become a representation of a business's trustworthiness. A security breach or loose privacy practices can damage an organization's reputation to the point of driving customers to competitors, no matter how competitive its offering is.
- It’s only fair that you put your best foot forward to stay on top of the cybersecurity game. Deploying evolving technologies like AI in your security practices sends a strong signal to customers that you take their security seriously and are in it for the long term.
Besides maintaining a good public image, Ramamoorthy said he believes AI can help an organization stay ahead of cyberattackers. The pandemic has democratized access to sensitive data: Confidential information is no longer restricted to private networks or corporate devices but can be accessed from anywhere on any device.
“This gives hackers multiple potential access points to access your confidential enterprise data illegally,” Ramamoorthy said. “Attackers use powerful techniques like AI to exploit unsuspecting end-users to gain access to privileged information by compromising said access points.”
Another weakness of traditional (non-AI) security approaches is that they have always relied on static thresholds. Attackers can game the system by flying under the radar of those thresholds.
With that in mind, Ramamoorthy asked why organizations aren’t using the same technology to fight back. The time is ripe for upping the security and privacy protection game with the help of AI. He offered several real-world cyberattack scenarios and explained how AI would assist cybercrime fighters.
- Example: An organization’s SIEM solution is set to alert when failed logins to access proprietary information reach 10 per minute. A brute-forcing attacker can still attempt nine failed logins per minute and walk away undetected.
Solution: AI can set elastic thresholds with minimal-to-no human intervention. It can also monitor login patterns and adjust thresholds based on variables such as time of day, day of the week, and recent trends in information access. For example, 9 AM on a Monday and 3 AM on a Saturday might need different thresholds.
- Example: An ill-configured threshold can lead to alert fatigue for whoever is responsible for monitoring SIEM system alerts.
Solution: AI can mitigate alert fatigue by identifying frequent, rare, and previously unseen patterns and setting alert priority accordingly.
- Example: It is nearly impossible for cybersecurity personnel to monitor access to every potential ransomware and phishing website.
Solution: AI can be deployed at endpoints to help identify and quarantine malicious websites, thereby enabling better data-access practices combined with techniques like multifactor authentication and zero-trust security.
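The elastic-threshold idea above can be illustrated with a minimal sketch. This is not code from any SIEM product; the function names and example data are hypothetical, and real systems would use far richer models. The sketch learns a separate failed-login threshold for each (weekday, hour) bucket from historical counts, so a quiet Saturday 3 AM gets a much lower alert line than a busy Monday 9 AM:

```python
from collections import defaultdict
from statistics import mean, stdev

def learn_thresholds(history, k=3.0):
    """Learn a per-(weekday, hour) failed-login threshold from history.

    history: iterable of (weekday, hour, failed_login_count) tuples.
    Returns {(weekday, hour): threshold}, where threshold = mean + k * stdev,
    so the alert line adapts to each time bucket instead of one static value.
    """
    buckets = defaultdict(list)
    for weekday, hour, count in history:
        buckets[(weekday, hour)].append(count)
    thresholds = {}
    for key, counts in buckets.items():
        spread = stdev(counts) if len(counts) > 1 else 0.0
        thresholds[key] = mean(counts) + k * spread
    return thresholds

def should_alert(thresholds, weekday, hour, count, fallback=10.0):
    """Alert when the observed count exceeds the learned bucket threshold.

    Unseen buckets fall back to a static default, mirroring the legacy rule.
    """
    return count > thresholds.get((weekday, hour), fallback)

# Hypothetical history: busy Monday (weekday 0) mornings, quiet Saturday
# (weekday 5) nights. Five failed logins at 3 AM Saturday now alerts, even
# though it would slip under a static 10-per-minute rule.
th = learn_thresholds([(0, 9, 8), (0, 9, 9), (0, 9, 10), (0, 9, 11),
                       (5, 3, 0), (5, 3, 0), (5, 3, 1), (5, 3, 0)])
print(should_alert(th, 5, 3, 5))   # low-traffic bucket: alerts
print(should_alert(th, 0, 9, 9))   # normal Monday-morning load: no alert
```

The same bucket statistics could also drive alert priority: the rarer an event is for its time bucket, the higher it is ranked, which is one way to address the alert-fatigue problem described above.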
Can AI improve the security of data stored in the cloud?
Ramamoorthy said he believes AI can ensure better security across the tech stack—from cloud deployments to endpoints accessing data. “Rule-based systems might not be able to catch security vulnerabilities across the stack and might need complex rules to be written and maintained over time,” Ramamoorthy said. “With AI, the thresholds are automatically set depending on the trend and seasonal patterns in the data.”
He continued, “At the cloud level, AI can limit access to privileged information and avoid various attacks like distributed denial of service, zero-day exploits, etc.”
What to look for in AI-security solutions
According to Ramamoorthy, it is important to ensure the selected AI solution covers the entire stack. SIEM products with AI-based UEBA (user and entity behavior analytics) tools can also help ensure the security of critical systems.
He also noted endpoint-protection products are starting to include AI-based features such as ransomware identification and malware mitigation.
Deploy AI capabilities sooner rather than later
Ramamoorthy suggested that using AI in cybersecurity is an excellent way to avoid being the lowest-hanging fruit on the digital tree, since few organizations currently employ AI cybersecurity solutions. The same can’t be said of cybercriminals, who are keen on AI and deploy more AI-enhanced cyberattack technology every day.
There is a reason Ramamoorthy used the examples he did. He explained why in his parting comments: “Embracing AI-based UEBA modules as part of an organization’s SIEM solution should be the first step, as it is a helpful way of monitoring users and entities, as well as identifying suspicious patterns early on.”