How Microsoft is trying to identify and eradicate disinformation

Diana Kelley, Microsoft's Cybersecurity Field CTO, explains how the company is addressing influence campaigns, using machine learning to detect them, and working to eliminate disinformation.


CNET's Dan Patterson interviewed Diana Kelley, Microsoft's Cybersecurity Field CTO, about how the company is addressing influence campaigns, using machine learning to detect them, and working to eliminate disinformation. The following is an edited transcript of the interview.

Campaign 2018: Election Hacking is a weekly series from TechRepublic sibling sites, CBS News & CNET, about the cyber-threats and vulnerabilities of the 2018 midterm election.

Dan Patterson: It seems as though there are coordinated attempts to undermine faith and confidence in institutions. Some of these are influence campaigns run by countries like China, Russia, North Korea, and Iran. But how do we defend against something that is so simple, yet also incredibly complex, and uses social media kind of like it's intended to be used?

Diana Kelley: Yeah. So, it's about trying to identify disinformation and eradicate it, and protecting the users themselves within the campaigns. Part of the Defending Democracy Program is something called AccountGuard, and it's free for anyone who's running for a federal, state, or local election, and for their campaigns. All you have to do is be signed up for Office 365 and opt in to the program. Then it can look across emails going into the campaign accounts and also, potentially, the personal accounts (again, only with opt-in), because attackers use multi-channel attacks: they try to get in on one side, then leverage that access and pivot to escalate their privileges.

So, we're working on those kinds of solutions to protect the candidates themselves, so that they aren't going to be hit with smear campaigns, their information doesn't get out, and their data and emails aren't stolen. I don't know if this has happened yet, but a concern I have is: what if emails are stolen, and what gets published is actually tampered-with data? Then you need to be able to go back and prove, "This was actually the email. What you're saying I said is not the truth."
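One way to make that kind of proof possible is to record a cryptographic digest of each message at send time; here is a minimal sketch of the idea in Python. Everything in it (the key, the functions, the workflow) is an illustrative assumption, not a real Microsoft or AccountGuard API.

```python
import hashlib
import hmac

# Hypothetical sketch: if a campaign records a keyed digest (HMAC) of each
# email body when it is sent, it can later show whether a published copy
# matches what was actually written. Names and workflow are assumptions.

SIGNING_KEY = b"campaign-archive-secret"  # in practice, stored securely

def sign_email(body: str) -> str:
    """Record an HMAC-SHA256 digest of the email body at send time."""
    return hmac.new(SIGNING_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_email(body: str, recorded_digest: str) -> bool:
    """Later, check a published copy against the recorded digest."""
    candidate = sign_email(body)
    return hmac.compare_digest(candidate, recorded_digest)

original = "Meet at HQ at 9am to review the ad budget."
digest = sign_email(original)

assert verify_email(original, digest)                # authentic copy matches
assert not verify_email(original + " Wire the funds.", digest)  # tampered copy fails
```

Real systems would use asymmetric signatures (as DKIM does for mail in transit) so verification doesn't require the secret key, but the keyed-digest version keeps the idea compact.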

SEE: Cybersecurity strategy research: Common tactics, issues with implementation, and effectiveness (Tech Pro Research)

Dan Patterson: Does machine learning have a role to play here?

Diana Kelley: Yes, it does. And, as I mentioned, machine learning really can help here; those models are getting much better. We're tuning our models over time to find social engineering attacks and to start to see the patterns that may indicate there's a compromise or malicious activity. Even when it's, at this point, a little under the radar, the machine learning models are helping us to see and detect it earlier.

Dan Patterson: In technology, it's very easy to identify the trends and the tools that we have now, but it's kind of harder to see the unknown unknowns, to borrow a Don Rumsfeldian phrase. What unknown unknowns, or at least, let me rephrase that, what known unknowns could be surfacing in the next couple of years? What threats might be on the horizon?

Diana Kelley: Well, there's the social part. If disinformation campaigns are working, then, because attackers are good at learning, "Oh, this is successful," they're going to keep running disinformation campaigns. That means, in social media, looking at how we can eliminate disinformation, or at least make it transparent and clear who the information is coming from. And this is not about taking away free speech; it's simply about transparency. If it's a made-up story, that's fine; stories are great, as long as you're reading fiction and you know it's fiction. So, focusing on that, I think, is a really important aspect.
