Offices are starting to reopen, but hybrid work is still a reality for many organizations. And while the flood of job changes nicknamed the Great Reshuffle is predominantly among frontline workers, organizations everywhere are still dealing with new staff who don’t yet know company processes, whether they’re joining now without ever meeting their colleagues in person or coming into the office for the first time.
Businesses turned to technology for remote and hybrid working, but the initial focus was on productivity and supporting employees, with IT teams often coming back to security and compliance only after the urgency of going remote had passed. As well as protecting devices used at home for work from attackers, organizations wanted auditing and data loss prevention to make sure employees follow the right processes when they work with data.
SEE: Google Workspace vs. Microsoft 365: A side-by-side analysis w/checklist (TechRepublic Premium)
Insider risk isn’t just about disgruntled employees taking confidential data with them when they leave. More than half of insider threats are typically inadvertent, said Alym Rayani, general manager for Compliance and Privacy at Microsoft. In a CMU study, more than two-thirds of organizations (69%) had more than five malicious insider incidents in 2020, but even more had at least as many unintentional insider problems where data or access was misused inadvertently.
The changing work environment only exacerbates the problem, he suggested. “In compliance, it’s all about managing change, because nothing’s ever static, but this is more change than I think anybody’s ever been used to.”
“There’s employees leaving; there’s also employees joining. New employees who don’t understand all the protocols or the handbook and all the stuff that comes with joining the organisation may inadvertently do things that create risks, and you know, they didn’t mean to,” Rayani pointed out.
“On my team, we’ve hired three new people in the last month, and they’re learning how to deal with sensitive information.” Rayani’s group has access to information used for Microsoft’s financial reporting, which is subject to various regulations. “I actually just sent a note to one of my peers saying, ‘Let’s follow this automated protocol we have for how these users get access to this information, how it’s marked.’ And it’s not because those users are malicious, it’s because they’re learning how Microsoft treats this data.”
Help, not hinder
Insider risk management is about being able to spot, understand and act on potential threats from inside your organization without reducing productivity or browbeating employees who get it wrong. Instead, you want to use incidents to educate users and help them stay within policy. To do that, you have to know what’s normal for your organization and your employees. Is it suspicious if someone accesses thousands of files very quickly? That depends on whether they’re customer data files or files in a developer repository, where working with code can mean copying lots of files automatically, and on whether the person doing that is a developer.
The Insider Risk Management feature in Microsoft 365 E5 (available as an add-on for E3 subscriptions) uses machine learning to look for these kinds of patterns, including sequences of behaviour that can be subtle, like changing the sensitivity label on a document.
“If someone downgrades a document from confidential to public, they may do that because then they can transfer that document somewhere under the radar. It may not be obvious what that is leading to but when you start to put that signal together with other things that are happening, then you can understand what that correlation might look like,” he explained.
That might be a sign that someone is sending information outside the company (something Microsoft refers to as cumulative exfiltration)–or they might just be putting it onto a cloud storage service so they can look at it when they’re working from home or going to a doctor’s appointment. “If users are working differently, and you start to adapt to that, then you can understand what happens when a document was downgraded and then uploaded to a website.”
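To make the correlation idea concrete, here’s a minimal sketch in Python of how a downgrade-then-upload sequence could be scored more heavily than either event on its own. The event names, weights and 24-hour window are illustrative assumptions, not the signals or scoring Microsoft 365 Insider Risk Management actually uses.

```python
# Hypothetical sketch: correlating a label downgrade with a later external upload.
# Event names, weights and the time window are illustrative assumptions, not the
# actual signals or scoring used by Microsoft 365 Insider Risk Management.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Event:
    user: str
    action: str        # e.g. "label_downgrade", "upload_external", "file_copy"
    timestamp: datetime

# Weight individual actions low; weight the sequence (downgrade then upload) high.
ACTION_WEIGHTS = {"label_downgrade": 1.0, "upload_external": 2.0, "file_copy": 0.5}
SEQUENCE_BONUS = 5.0
WINDOW = timedelta(hours=24)

def risk_score(events: list[Event]) -> float:
    events = sorted(events, key=lambda e: e.timestamp)
    score = sum(ACTION_WEIGHTS.get(e.action, 0.0) for e in events)
    # Add a bonus when a downgrade is followed by an external upload within the window,
    # since either event alone is often benign but the sequence suggests exfiltration.
    for i, first in enumerate(events):
        if first.action != "label_downgrade":
            continue
        for later in events[i + 1:]:
            if later.action == "upload_external" and later.timestamp - first.timestamp <= WINDOW:
                score += SEQUENCE_BONUS
                break
    return score
```

The sequence bonus captures the point Rayani makes: a downgrade or a cloud upload is often harmless on its own, but the two together look much more like cumulative exfiltration.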
Rather than stopping users doing that and potentially blocking them from getting their jobs done, you may want to nudge them into better ways of working. “The best thing you can do is actually teach the user in the moment. If they do something like that, you could automatically send an email with a link to the handbook or link to training or a tip. You can use real-time situations to bring your organisation up to speed on how to handle data correctly.”
One way to understand user behaviour without reducing productivity is to prompt users to explain why they’re doing something. When you change a document label from confidential to public, it might be for convenience, or it might be because a secret project is being announced as a new product, so you want people to be able to find out the details.
Organizations can set policies to manage which documents can be relabelled and why. “If the organisation configures the information protection portal to require justification, then the user can put in ‘I wanted to get this one document to look at it on my phone as I go to the doctor.’ But say you have information related to reporting to the SEC, and it’s a lot of risk, you can say I never want something that is labelled this way to ever be able to be downgraded, and unfortunately that user is going to have to do it in a different way because it’s just so sensitive.”
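The spectrum Rayani describes, from blocking a downgrade outright, to requiring a justification, to simply nudging the user towards training, could be sketched roughly like this. The label names, the nudge message and the TRAINING_LINK URL are placeholders invented for illustration, not Microsoft’s actual configuration model.

```python
# Hypothetical sketch of a relabelling policy: the label sets, the three outcomes
# and the nudge are illustrative assumptions, not Microsoft's configuration model.
BLOCKED_DOWNGRADES = {"SEC Reporting"}   # labels that must never be downgraded
JUSTIFY_DOWNGRADES = {"Confidential"}    # labels that can be downgraded with a reason
TRAINING_LINK = "https://intranet.example.com/handling-sensitive-data"  # placeholder URL

def handle_relabel(user: str, label: str, new_label: str, justification: str | None) -> str:
    """Decide whether a label downgrade is blocked, allowed, or allowed with a nudge."""
    if label in BLOCKED_DOWNGRADES:
        return f"Blocked: documents labelled '{label}' cannot be downgraded."
    if label in JUSTIFY_DOWNGRADES:
        if not justification:
            return "Prompt: please provide a business justification for this change."
        # Teach in the moment: allow the change but point the user at the handbook.
        send_nudge(user, f"Tip on handling '{label}' documents: {TRAINING_LINK}")
        return f"Allowed with justification: {justification!r}"
    return "Allowed."

def send_nudge(user: str, message: str) -> None:
    print(f"email to {user}: {message}")   # stand-in for a real mail or Teams notification
```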
Patterns can also be seasonal: Employees in your accounting team may only look at key financial data once a quarter or even once a year. Rayani encourages organizations to turn on Insider Risk Management even if they don’t plan to use it immediately, because initially the system looks back at only ten days of data. “You allow the system to learn over time and to do pattern recognition, and to learn what’s outside the norm over a longer period of time.”
If the system spots behaviour that looks unusual but is actually one of those seasonal patterns, you can also create rule-based policies to avoid getting the same false positive every year.
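As a rough illustration of a seasonal baseline, the sketch below compares this period’s activity against the same period in previous quarters or years rather than the last few days; the function name, inputs and z-score threshold are assumptions for illustration, not the product’s model.

```python
# Hypothetical sketch: flag unusual activity against a seasonal baseline so that a
# quarterly spike in the accounting team does not trigger the same alert every quarter.
from statistics import mean, stdev

def is_anomalous(counts_same_period_prior: list[float], current_count: float,
                 z_threshold: float = 3.0) -> bool:
    """Compare this period's activity with the same period in earlier quarters/years."""
    if len(counts_same_period_prior) < 2:
        return False            # not enough history yet; let the system keep learning
    mu = mean(counts_same_period_prior)
    sigma = stdev(counts_same_period_prior) or 1.0
    return (current_count - mu) / sigma > z_threshold

# Quarter-end file access by a finance user: high against last week, normal for the season.
print(is_anomalous([950, 1020, 980], 1010))   # False: in line with previous quarter-ends
print(is_anomalous([950, 1020, 980], 5200))   # True: far outside the seasonal norm
```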
Setting priorities
When working habits are still in flux, machine learning means the system will learn the new normal as it happens, so you know when behaviour is really unusual rather than just unfamiliar. “We have a new capability to identify and alert higher when the machine learning model says, ‘this particular user’s activities are higher than average for your organisation.’ And of course, that organisation could be changing over time, as user behaviour changes as people on-board and off-board.
“What’s really important is, what is it in relation to what should be considered the norm for your organisation and when do you say ‘OK, this is so far out of the statistical norm for my organization that I really need to triage this and act fast on it.’”
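In broad strokes, “higher than average for your organisation” can be pictured as comparing each user’s recent activity with the distribution across all users and triaging the biggest outliers first. The robust median-based scoring and the cut-off below are assumptions chosen for the sketch, not how the product’s model works.

```python
# Minimal sketch of flagging users whose activity is far outside the organisational norm.
# The median/MAD scoring and the 3.5 cut-off are illustrative assumptions.
from statistics import median

def org_outliers(activity_by_user: dict[str, float], cutoff: float = 3.5) -> list[tuple[str, float]]:
    values = list(activity_by_user.values())
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0   # median absolute deviation
    scored = [(user, 0.6745 * (count - med) / mad) for user, count in activity_by_user.items()]
    # Triage: largest deviation from the organisational norm first.
    return sorted([(u, s) for u, s in scored if s > cutoff], key=lambda x: -x[1])

print(org_outliers({"ana": 120, "ben": 135, "chen": 110, "dee": 4100}))  # only "dee" is flagged
```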
It also learns from how security analysts create and triage results. That’s important to avoid the false positives that waste the time of your security and compliance team. “How can we help what is typically a small group of analysts or investigators more effectively identify and triage those risks, meaning getting to the right ones and doing it more quickly?”
SEE: Windows 11: Tips on installation, security and more (free PDF) (TechRepublic)
Microsoft 365 Insider Risk Management builds on the same techniques that SharePoint uses to automatically classify documents as sensitive or confidential. These trainable classifiers learn how users classify documents and need about 30 documents to create a pattern to follow.
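As a rough analogy for how a trainable classifier is seeded with a few dozen examples, the sketch below trains a simple text classifier with scikit-learn. The real Microsoft 365 classifiers run as a managed service and aren’t trained this way, and the example documents here are placeholders.

```python
# Rough sketch of the idea behind a trainable classifier: seed it with a small set of
# labelled examples and let it generalise. scikit-learn is used here as a stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# In practice you would supply roughly 30 real documents per category; these are placeholders.
docs = ["quarterly revenue forecast and SEC filing draft", "team lunch menu for friday"] * 15
labels = ["sensitive", "not sensitive"] * 15

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(docs, labels)
print(classifier.predict(["draft filing with revenue figures"]))   # expected: ['sensitive']
```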
Financial services customers already use those machine learning models in Microsoft 365 for communications compliance, monitoring internal phone calls and chats between brokers and dealers to prevent insider trading. Other regulated industries use it to protect assets and to detect code-of-conduct violations like sharing inappropriate content; in industries like healthcare, organizations are also required to track customer complaints.
“If something is wrong with a medication, or something is found in a product, they’re required to track and respond to those complaints,” Rayani explained. “We have a customer complaint classifier that finds those possible complaints and surfaces matches so that they can process and officially record those things for their regulatory requirements.”
But even industries that don’t have compliance and regulation requirements are now able to use communications compliance to improve customer satisfaction. “They’re adopting it to make sure that they’re doing right by their customers. They can identify those customer complaints over chat and other situations more easily, deal with them and make their customers happier and improve their brand.”
That’s different from the usual sentiment analysis, which looks at the tone of language to add context. Here, the classifier looks at the specific words people use, whether that’s ‘the seal was damaged’, ‘my medication was contaminated’ or other phrases you’d expect unhappy customers to use.
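The contrast with sentiment analysis shows up in a toy example: a phrase-based detector flags “Love the product, but the seal was damaged” even though its tone reads as positive. The phrase list and regex matching below are assumptions for illustration, not the actual complaint classifier.

```python
# Illustrative phrase-based complaint detection; the phrases and matching logic are
# invented for this sketch and are not the product's classifier.
import re

COMPLAINT_PHRASES = [
    r"seal was (damaged|broken)",
    r"medication was contaminated",
    r"side effects?",
    r"want (a refund|my money back)",
]
PATTERN = re.compile("|".join(COMPLAINT_PHRASES), re.IGNORECASE)

def looks_like_complaint(message: str) -> bool:
    return PATTERN.search(message) is not None

print(looks_like_complaint("Hi! Love the product, but the seal was damaged on arrival."))  # True
print(looks_like_complaint("Thanks so much, everything arrived in perfect condition."))    # False
```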
Leaving your customers unhappy is a different problem from users who are exposing data, accidentally or on purpose, but it’s still a risk some organizations want to manage, Rayani said. As with the more familiar insider risk management, the goal is to give customers the flexibility to monitor what they care about.
“They can determine their own risk thresholds, their compliance priorities, their goals. Some of our customers are just trying to meet mandatory regulatory requirements. Others want to use these tools to uphold a company culture, and others want to optimise for the customer experience—or all three.”