
How one AI security system combines humans and machine learning to detect cyberthreats

PatternEx's "active contextual modeling" system relies on human experts to detect risks and teach computers through supervised learning, and the company says it outperforms detection systems that rely on machine learning alone.


The risk of cyberattack is one of the most dangerous threats facing businesses today, and as new variants of attacks constantly emerge, teams of analysts rush to keep up. While many detection systems rely primarily on machine learning to catch attackers, a new AI system from PatternEx makes human analysts a vital part of its supervised machine learning pipeline.

PatternEx's AI system acts as the first "virtual" security analyst team, able to predict, detect, and stop attackers in real time. TechRepublic spoke with Uday Veeramachaneni, CEO of PatternEx, about how the system, which the company says is 10 times more effective than traditional systems, works.

SEE: Artificial Intelligence and IT: The good, the bad and the scary

"Machine learning is one way to get to AI," said Veeramachaneni. "The problem is that people talk about machine learning loosely." Most systems, he said, use unsupervised learning. Anomaly detection, relying on unsupervised learning, he said, is insufficient on its own, producing lots of false positives. So PatternEx's system relies heavily on a human security analyst to teach the system to detect threats.

"There's no other way to do this than to mimic an analyst," said Veeramachaneni. With the massive data produced around security breaches, the missing element is human insight. "Humans were able to figure it out," he said.

That's the premise for his company: taking the data already out there and figuring out if there will be an attack.

"We realized that the best way to do this was to emulate the human analyst, the only successful tool in the arsenal," said Veeramachaneni. "We simulated the analyst, at scale."

SEE: HPE launches Investigative Analytics, using AI and big data to identify risk

PatternEx calls its approach "active contextual modeling": a predictive model built on feedback from analysts.

For example, if an analyst can review a hundred "events" a day, the system will show that analyst the hundred rarest events. "The theory is that most attacks don't look like normal events," said Veeramachaneni. "When we show it to an analyst, they point out attacks."

The system then absorbs that feedback and figures out what kinds of behaviors lead the analyst to call something an attack, continuously analyzing behaviors as they arrive. When its predictions are wrong, the machine recomputes what the analyst would say, integrating more and more analyst feedback to refine its models.
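The loop described above is a form of active learning. A minimal sketch, where the event data, the rarity score, and the `analyst_label` oracle are all hypothetical stand-ins (in PatternEx's system the label comes from a real human, and the model is far richer than a single threshold):

```python
def rarity(event):
    # Assumption: a single number stands in for an unsupervised outlier score.
    return event["bytes_out"]

def analyst_label(event):
    # Stand-in for the human analyst's verdict: attack or not.
    return event["bytes_out"] > 800

def active_loop(events, budget=3):
    """Surface the rarest events to the 'analyst', collect labels, refit."""
    pool, labeled = list(events), []
    for _ in range(budget):
        # 1. Show the analyst the rarest unlabeled event.
        candidate = max(pool, key=rarity)
        pool.remove(candidate)
        # 2. Record the analyst's label (attack / benign).
        labeled.append((candidate, analyst_label(candidate)))
    # 3. "Retrain": here, just place a threshold between the two classes.
    attacks = [rarity(e) for e, y in labeled if y]
    benign = [rarity(e) for e, y in labeled if not y]
    threshold = (min(attacks) + max(benign)) / 2 if attacks and benign else None
    return threshold, labeled

events = [{"bytes_out": v} for v in [100, 200, 900, 950, 300]]
threshold, labeled = active_loop(events)
print(threshold)  # → 600.0: learned boundary between labeled attacks and benign events
```

Each pass through the loop spends the analyst's limited attention only on the events most likely to be attacks, which is how a single human's labels can scale across millions of events.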

The system produces a virtual analyst to predict breaches, which, in turn, can be extended to a global set of analysts for collaboration.

SEE: AI's largest-scale innovations aren't happening in cars or robots but in customer service

I asked whether PatternEx uses a similar method as Google DeepMind's AI system that mastered Go, and Veeramachaneni pointed out some key differences.

"In AI systems, you train, test, and deploy," he said. "DeepMind showed the deploy method. It trained in a lab, then deployed."

The rules for the game, he said, were defined. But "attackers don't play by rules. It becomes easier to use deep learning when you have labels. A right move versus a wrong move," he said. "For us, that doesn't exist."

So how do you train algorithms to detect threats? Their system, he said, is "engaging analysts to detect threats live. We're trying to aggregate the labeling process across companies so we can train models."

In an AI system's training phase, humans always need to supply the labels. PatternEx "can't automate that away," he said. "There will always be security analysts involved."

The analyst always needs to be involved, he said, because the threats are always changing. "Analysts are very intuitive—they know what's an attack. But they're not good at translating it to a rule-based system."

But while analysts will always be needed, companies won't need as many of them. What the company wants to do instead is prevent burnout, making analysts more efficient and better able to collaborate. "Analysts need to share information with each other," he said.

PatternEx's system also differs from most on the market today, which, Veeramachaneni said, "don't have objective measures of these systems. The incumbents are all static-rule based systems and don't show how to measure efficacy." Such systems either never update their models or do so only a few times a year.

PatternEx's system, by contrast, has three months of data showing that integrating human analysts is 10 times better at catching threats than relying on machine learning alone.


About

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
