Top 5 barriers to AI security adoption

AI's immaturity and the lack of time and resources needed to implement the technology are the two top hurdles to adoption, according to a Cylance report.

Artificial intelligence (AI) holds a great deal of promise for helping cybersecurity professionals deal with more sophisticated and dangerous threats. But the technology faces several key obstacles to wide adoption in the field, according to a survey released Tuesday, conducted by SANS Institute and sponsored by Cylance.

Among the 261 cybersecurity professionals polled in late 2018 on their perceptions of AI, 35% cited the lack of maturity in AI as the top barrier they face in implementation. Further, 46% said they see AI as a technology that's still maturing, while just 5% said they believe it's highly mature.

Of those surveyed, technical staff expressed more confidence in the maturity of AI solutions than management did. That gap is a heads-up for cybersecurity professionals seeking to implement AI: given the perceived risks of AI relative to its maturity, management may look for, and expect, quantifiable returns on the investment.

SEE: Cybersecurity strategy research: Common tactics, issues with implementation, and effectiveness (Tech Pro Research)

Another 27% of respondents called out the lack of time and skilled resources necessary to implement AI as their top barrier. Next, 24% pointed to the lack of commitment by management along with an insufficient budget as their No. 1 obstacle. Among other respondents, 10% said that AI poses too much risk in cybersecurity, while 2% called AI just marketing hype.

[Chart: Barriers in adoption of AI for cybersecurity. Image: SANS Institute]

Based on respondents' feedback, the survey identified seven potential risks involved in using AI for cybersecurity:

  1. Loss of privacy due to the amount and type of data to be consumed
  2. Over-reliance on a single, master AI algorithm
  3. Lack of understanding of the limitations of the algorithms
  4. Insufficient protection of data and metadata
  5. Inadequate training solutions
  6. Lack of visibility into decisions made through AI
  7. The use of the wrong algorithms for a specific problem

Many of the risks shared by respondents point to concerns about the algorithms used by AI to suggest solutions and make decisions, while other risks focus on data privacy and security.

"AI could be compromised and act in [an] improper way such as giving wrong decisions [if the data is compromised in any way]," one respondent said.

Tips for implementing AI in cybersecurity

For cybersecurity staff and management looking to implement AI, the resulting survey report offered several pieces of advice:

  • Understand your current use cases. Are you seeking help with threat detection, malware prevention, or another area?
  • Keep your security experts handy. AI still has a long road ahead of it before it can match the skills and savviness of a human analyst.
  • Understand the data and its limitations. Determine which data sources are most suited for your use cases.
  • Establish trust and transparency. Your AI platform needs to be transparent so you can understand the decisions it makes.
  • Allow enough time to train the AI platform. Creating a system that delivers answers in seconds might demand weeks of work to prepare the data and train it (see the sketch after this list).
  • Make informed decisions about AI technology. Set up the right procurement process to ferret out false claims about a platform from vendors.
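
The report doesn't prescribe a specific toolchain, but a minimal sketch can make the data and training advice above more concrete. The example below, which assumes Python with scikit-learn and entirely hypothetical network-flow features, trains an unsupervised anomaly detector on traffic presumed to be benign and flags outliers for a human analyst to review, roughly the advanced threat detection use case most respondents cited.

```python
# Minimal sketch (not from the SANS/Cylance report): an unsupervised
# anomaly detector over hypothetical network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical "normal" traffic: bytes sent, bytes received, connection duration (s).
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))

# Train only on traffic assumed to be benign; the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# Score new observations: -1 flags a potential anomaly, 1 looks normal.
new_flows = np.array([
    [510, 790, 29],      # close to the training distribution
    [50000, 20, 600],    # large outbound transfer, long duration -> suspicious
])
print(model.predict(new_flows))  # expected output along the lines of [ 1 -1]
```

Even this toy example illustrates the survey's caveats: the detector is only as good as the "normal" data it was trained on, and its bare -1/1 verdicts offer little visibility into why a flow was flagged unless additional explanation tooling is added.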

If implemented properly, AI for cybersecurity does offer clear benefits, according to the survey. Some 29% of respondents said AI could better identify unknown threats, such as zero-day threats and advanced persistent threats. A full 69% said they currently use or plan to use AI for advanced threat detection and prevention.

The survey's target audience consisted of professionals working or active in cybersecurity and involved with, or interested in, the use of AI to improve their organization's security defenses. More than 60% of the respondents worked for organizations with 5,000 or fewer employees, contractors, and consultants.
