AI and machine learning have the potential to take a bite out of cybercrime, but let's not forget the human factor.
Artificial Intelligence (AI) is being touted as a technology that will reduce cybercrime. Ed Bishop, cofounder and CTO at Tessian, agrees that AI will help but only if it is configured to protect people.
"Despite thousands of cybersecurity products, data breaches are at an all-time high," writes Bishop in his sponsored VentureBeat article To protect people, we need a different type of machine learning. "The reason: businesses have focused on securing the machine layer--layering defenses on top of their networks, devices, and finally cloud applications. But these measures haven't solved the biggest security problem--an organization's own people."
SEE: The ethical challenges of AI: A leader's guide (free PDF) (TechRepublic)
The human layer vs. the machine layer
Bishop believes that AI is being groomed for cybersecurity applications using traditional machine-layer methodology to detect threats. With traditional machine learning, data is fed directly into the model and compared to an operational baseline, which allows decisions to be made about whether the data falls within acceptable parameters. Bishop notes that being able to quickly and accurately detect threats--malicious programs or fraudulent activity--at the machine layer is invaluable.
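The machine-layer approach can be sketched in a few lines: compare each new observation against a fixed operational baseline and flag deviations. This is a minimal, hypothetical illustration of the general technique, not Tessian's implementation; the traffic numbers and the three-sigma threshold are assumptions for the sake of the example.

```python
# Minimal sketch of machine-layer anomaly detection: compare new
# observations against a fixed operational baseline. The numbers and
# the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

# Hypothetical baseline: requests per minute observed during normal operation
baseline = [98, 102, 100, 97, 103, 101, 99, 100]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, z_threshold: float = 3.0) -> bool:
    """Flag traffic that deviates more than z_threshold standard
    deviations from the baseline mean."""
    return abs(observation - mu) > z_threshold * sigma

print(is_anomalous(100))  # -> False: within normal range
print(is_anomalous(480))  # -> True: traffic spike flagged
```

Note that the baseline here is static: the model has no memory of who is acting or what they did before, which is exactly the limitation Bishop raises next.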
However, that approach, he suggests, is just more of the same and does not take into account the human layer. People have more control over company data and systems than ever before, and human behavior is far from static. For example:
- Humans are unique--no two are the same.
- Humans communicate with natural language, not static machine protocols.
- Human relationships and behaviors change over time.
"To predict whether an employee is about to leak sensitive data or determine whether they've received a message from a suspicious sender, for example, we can't simply give that raw email data to the model," explains Bishop. "It wouldn't understand the state or context within the individual's email history."
"There is no concept of 'state'--the additional variable that makes human-layer security problems so complex," continues Bishop.
SEE: Hiring Kit: Security architect (TechRepublic Premium)
How stateful machine learning can help
This is where stateful machine learning comes to the rescue. It looks at historical data and calculates important features by aggregating all of the relevant data points, which are then passed to the machine learning model. This is "a non-trivial task; features now need to be calculated outside of the model itself, which requires significant engineering infrastructure and a lot of computing power, especially if predictions need to be made in real-time," continues Bishop.
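The stateful step Bishop describes--aggregating an individual's history into features outside the model--might look something like the following. The event schema and the three features are hypothetical illustrations of the general pattern, not Tessian's actual feature set.

```python
# Sketch of stateful feature aggregation: per-sender history is
# accumulated outside the model, then turned into features at
# prediction time. The features shown are illustrative assumptions.
from collections import defaultdict

class EmailState:
    """Accumulates per-sender email history so features can be
    computed on demand for a model."""
    def __init__(self):
        # sender -> recipient -> count of past emails
        self.history = defaultdict(lambda: defaultdict(int))

    def record(self, sender: str, recipient: str) -> None:
        self.history[sender][recipient] += 1

    def features(self, sender: str, recipient: str) -> dict:
        sent = self.history[sender]
        total = sum(sent.values())
        return {
            "emails_to_recipient": sent[recipient],
            "share_of_traffic": sent[recipient] / total if total else 0.0,
            "is_new_recipient": sent[recipient] == 0,
        }

state = EmailState()
state.record("jane@corp.com", "eva@client.com")
state.record("jane@corp.com", "eva@client.com")
print(state.features("jane@corp.com", "eva@client.com"))
# -> {'emails_to_recipient': 2, 'share_of_traffic': 1.0, 'is_new_recipient': False}
```

Bishop's point about infrastructure follows directly: in production this history would live in a feature store or streaming pipeline, not in process memory, and it must be queryable fast enough for real-time predictions.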
It may not be trivial, but stateful machine learning, according to Bishop, is the only way to protect employees and the sensitive data they access.
Bishop writes that misdirected emails were the leading cause of online data breaches reported to regulators in 2019. "All it takes is a clumsy mistake, like adding the wrong person to an email chain, for data to be leaked," he writes. "And it happens more often than you might think. In organizations with over 10,000 workers, employees collectively send around 130 emails a week to the wrong person. That's over 7,000 data breaches a year." That statistic is reason enough for Bishop to use email security as his example of how stateful machine learning can help.
Jane sends her client Eva an email with the subject line "Project Update." Knowing several pertinent email data points would be helpful in determining whether the email is actually intended for Eva or was sent by mistake. The data points might include:
- The nature of Jane's relationship with Eva;
- Subjects typically discussed between the two people; and
- Ways in which Jane and Eva normally communicate.
"We also need to understand Jane's other email relationships to see if there is a more appropriate intended recipient for this email," adds Bishop. "We need to understand all of Jane's historical email relationships up until that moment."
The project that Eva and Jane were working on concluded six months ago, and Jane is now working with a new client, Evan. In a hurry, Jane accidentally sends an email meant for Evan to Eva, potentially exposing confidential information intended for Evan. "Six months ago, the stateful model might have predicted that a 'Project Update' email to Eva looked normal," writes Bishop. "But now it would treat the email as anomalous and predict that the correct and intended recipient is Evan. Understanding 'state,' or the exact moment in time, is absolutely critical."
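The time-sensitivity of "state" in the Jane/Eva/Evan scenario can be sketched with a recency-weighted relationship score: heavy contact six months ago is outweighed by light but current contact. The exponential-decay scoring and the 30-day half-life are illustrative assumptions, a stand-in for whatever features a production model actually uses.

```python
# Hypothetical sketch of how "state" changes a prediction over time.
# A recency-weighted score makes a recent relationship (Evan) outrank
# a dormant one (Eva), even if the dormant one had far more traffic.
from datetime import date

def relationship_score(email_dates: list, today: date,
                       half_life_days: float = 30.0) -> float:
    """Sum of exponentially decayed weights: each past email counts
    half as much for every half_life_days of age."""
    return sum(0.5 ** ((today - d).days / half_life_days) for d in email_dates)

today = date(2021, 6, 1)
eva_history = [date(2020, 12, 1)] * 50    # heavy contact, but 6 months ago
evan_history = [date(2021, 5, 28)] * 10   # light contact, current project

eva_score = relationship_score(eva_history, today)
evan_score = relationship_score(evan_history, today)
print(evan_score > eva_score)  # -> True: Evan is now the likelier recipient
```

Run against the same histories six months earlier, the ordering flips, which is Bishop's point: the same email to Eva is normal at one moment in time and anomalous at another.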
SEE: Artificial intelligence ethics policy (TechRepublic Premium)
Training and company policy have historically been ineffective when it comes to cybersecurity, and, according to Bishop, focusing on the machine layer of cybersecurity will also be ineffective, as people are unpredictable. The key is to focus on the human layer of cybersecurity.
- 9 data security trends IT departments should expect in 2021 (TechRepublic)
- How AI, ML, and automation can improve cybersecurity protection (TechRepublic)
- How to become a cybersecurity pro: A cheat sheet (free PDF) (TechRepublic)
- Internet and email usage policy (TechRepublic Premium)
- Artificial intelligence requires trusted data, and a healthy DataOps ecosystem (ZDNet)
- What is AI? Everything you need to know about Artificial Intelligence (ZDNet)
- Cybersecurity and cyberwar: More must-read coverage (TechRepublic on Flipboard)