Dr. Richard Ford, Chief Scientist at Forcepoint, discusses how cyber attacks undermine confidence in our institutions, the dangers of "cyber fatigue," and how AI is where the battle for cyber safety will be won or lost.
TechRepublic's Dan Patterson sat down with Dr. Richard Ford, Chief Scientist at Forcepoint, and discussed cyber attacks, how they undermine institutional confidence, and how the human factor affects cyber risk.
Dan Patterson: Consumer or business, we have similar security risks. What's the problem? What challenges do we face and why is cyber one of the confounding problems in the age of the Internet?
Dr Richard Ford: Right. Cyber touches every part of your life. There's really no part of your life that a computer doesn't impact nowadays. When we worry about cyber, we're really worrying about well-being. If you really want to zoom out, computer security, cyber security, it's a means to an end. Look at why you want secure machines, why you want secure systems.
I don't even actually like the word "secure." I like the word "safety." Because when I say security, a whole bunch of different lights go off in your head. Safety is something that we get. We get what it means to be safe. When we say "secure" you suddenly get in the weeds. Suddenly we're thinking about ones and zeroes. You're thinking about Spectre and Meltdown and some very, very detailed parts of the CPU.
My job is about helping make you safe, about the promises of trust, about things doing what you expect them to do, about a predictable universe. It's something you "get." I think when we take that approach, you've got cyber security in the right box.
SEE: Cybersecurity spotlight: The ransomware battle (Tech Pro Research)
Dan Patterson: How we define risk and trust can shift person to person. But trust is deeply tied to institutions, to our activities, to safety, and to making sure our behavior belongs to us and is not hackable. I even struggle with not using the language of cyber to talk about the challenges of cyber. There are ambiguities in the relationship between humans and trust. So how do we define trust with institutions and safety with institutions?
Dr Richard Ford: When we talk about trust, it's essentially about unfulfilled promise. My bank trusts me, essentially, over my mortgage: the unfulfilled promise that I will eventually pay back my mortgage. It's the fulfillment of something that hasn't happened yet, and I think that's what you mean when you talk about trust.
Safety issues are when something you trust lets you down in an interesting way, or when you trust in the wrong thing, "I'm sure that this person I'm sending my bank account details to is really going to wire me $34 million." That's an example of misplaced trust, which cyber criminals play on all the time.
I like those words because they're organic, and to take security seriously, you have to put the human back in it. One of the reasons stories like Spectre and Meltdown boil over and then go cold after six weeks is that we forget about the human in the loop, and the potential to impact human beings, as opposed to the impact on ones and zeros on a computer somewhere (in an abstract sense).
Dan Patterson: Is our safety, our trust, tied to the news cycle or to our own boredom of the problem?
Dr Richard Ford: Cyber fatigue is a very real thing. There have been several papers about it; people have discussed this concept. It's a sort of burnout. There are interesting studies that show that when you get sufficiently fatigued in the cyber realm you actually say to yourself, "Oh, I just can't be bothered to use a new password." You start to make bad decisions, and that's actually detrimental to your overall security.
So how do you manage that? How do you balance keeping your shields up and doing the right thing against this encroaching cyber fatigue? In the news cycle, there's a breach in my news feed almost daily. I use Vienna, a little RSS reader. Vienna's great, and almost every day somebody's been breached, some credit card system. You become immune to the additive effect of all these breaches. It's actually one of the biggest problems with cyber: when we talk about breaches, it becomes the new normal. When that happens, we lose sight of the human impact and the risks we face, existential risks, because so much of our lives ties into cyber.
You have this strange position where you're fatigued because it's always top of mind, but you also don't care about it, because it's always there and always a problem.
Dan Patterson: How do I combat cyber fatigue? How do I combat these ... the word you used there, the big E word, existential. I don't want to think about existential challenges when it comes to protecting myself. So how do I address this on a practical level and on an existential level? How do I tackle these challenges?
Dr Richard Ford: Let's break those two questions apart. I'm going to take the easy one, the practical, then we'll pivot back to existential.
I think for the average user, there are a lot of things that you can do to address the risk you control. However, there are a lot of risks you don't control. For example, if a credit reporting agency gets breached, your information is out there. It happens, and you didn't have any real control.
For the home user, for the personal user, it's simple things: be aware, adopt healthy skepticism. We start with mindset. Then patch, patch, patch. It's so boring. Nobody wants to talk about patching. It's one of those things that just matters. As we start to bleed into corporate life, as you get more senior in an organization, you think about what you share online. Are you posting geo-tagged photographs that are giving a potential bad guy your location at all times? That's probably a bad thing to do.
In fact, one of the interesting aspects of this is that there's a blur that begins to exist especially in the eyes of the attacker. The attacker doesn't see "personal Dan" and "corporate Dan." The attacker sees just Dan, and will come at whichever side they can get into most effectively.
From a practical perspective, it's mindset, it's that very simple keeping everything right up to date and then thinking about what information you put out.
Existentially, it's harder because you're part of a much broader system. Changing that broader system is very difficult because it has massive momentum. The entire security industry has significant momentum around it. You also have the push toward IT-ization.
Going back to trust: trust is carried out by computer, rather than human-to-human. When we're done, I'm jumping into an Uber. That's trust mediated essentially by a computer. I'm just going to look for a little U on a car and see if it's going to work out for me because I have trust in the Uber system.
Dr Richard Ford: What I've been wrestling with the most, over the last couple of weeks, is how, in the near future, to get people to take this more seriously. I don't think we put enough gravitas around security. It's always there, but we don't recognize what a big threat we face.
In terms of actual threats, AI is going to be really interesting. It's interesting from a defensive standpoint and from an offensive standpoint. As the attackers think about new ways to leverage artificial intelligence, I think that's something that keeps me up at night, because the game could start going very fast once both sides start to deploy significant automation.
So there's the mindset: how do we move this up the priority list? This shouldn't be a problem for my kids, my grandkids to solve. It's a problem that our generation is going to have to solve. We have to get this right. In fact, it would be wonderful to be able to look back at this time a hundred years from now, because I think the decisions that we make will have lasting consequences.
You always look for the galvanizing event that's going to change our minds, or set things in motion. I'd argue I'm not certain what's going to be bigger than some of the things that we've already gone through, even in the last year.
Dan Patterson: When we talk about artificial intelligence, what are we really talking about? That's kind of a broad umbrella term. Can you break down the components of AI and why this is so important related to cyber?
Dr Richard Ford: "Artificial intelligence" gets used a lot in the marketing literature of cyber. Often what they're really talking about is either basic statistical techniques, simple statistical models, or small amounts of machine learning. AI takes it to the next level. It's essentially: can you create machines that learn how to do better on a particular problem space, how to optimize to solve a particular problem? Attacks unfold at machine speed, whereas a human analyst has to figure out what's going on, which is expensive and slow. An easy use for artificial intelligence would be as, and I like this expression, a cognitive prosthesis: something that doesn't make decisions for me, but enables me to make better decisions, or enables me to see the most important parts of the problem. Computers are good at taking a hundred million events and determining what's most important. Leveraging that is really exciting.
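The "cognitive prosthesis" idea above can be sketched very roughly: rather than deciding anything for the analyst, a simple model scores a large stream of events and surfaces the few that deviate most from normal behavior. This is a minimal, hypothetical illustration; the event names and the z-score approach are assumptions for the sake of the example, not Forcepoint's actual system.

```python
# Hypothetical sketch of AI-assisted event triage: surface the event types
# whose frequency deviates most from the mean, so a human analyst looks at
# a handful of items instead of millions. Event names are illustrative.
from collections import Counter
import math

def top_anomalies(events, k=3):
    """Rank event types by how far their frequency deviates from the mean."""
    counts = Counter(events)
    freqs = list(counts.values())
    mean = sum(freqs) / len(freqs)
    var = sum((f - mean) ** 2 for f in freqs) / len(freqs)
    std = math.sqrt(var) or 1.0
    # z-score each event type; a large |z| means unusually frequent or rare
    scored = {e: abs(f - mean) / std for e, f in counts.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

# A toy stream: mostly routine traffic, plus one rare, suspicious event type
stream = (["login_ok"] * 500 + ["dns_lookup"] * 480
          + ["priv_escalation"] * 40 + ["file_read"] * 510)
print(top_anomalies(stream, k=1))  # the rare event type surfaces first
```

A real system would use far richer features than raw frequency, but the division of labor is the point: the machine compresses the event stream, and the human makes the judgment call.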
The flip side is that once defenses are moved and morphed by computer, attackers will use computers too. It becomes very game theoretic, right? It's like two computers struggling over a chessboard for supremacy. It's a whole other level of complexity, and it's hard.
I don't want this to come across as entirely negative. Even today we're making progress, moving away from threat-centric thinking toward the more meta question of what security provides. What I really care about is my data. Security is not just stopping that threat coming through the door, but protecting your actual data. We need more data-centric views on security, more human-centric views, more of the human in the middle of the model. We're starting to see progress. I spend a lot of time researching and designing around how to make it more human-centric.
AI is going to be where this battle is won or lost. I also worry a lot about my AI being compromised. AI is just a computer program; it's going to have vulnerabilities, and those could be even worse. When I no longer understand how my system works, how do I know when it's not working properly? Those questions sound like they're circular, and to some extent they are, but those are the big questions we'll be grappling with over the next 10, 15, 20 years. It's going to be a very, very interesting decade, I think.
- Special report: How to implement AI and machine learning (free PDF) (TechRepublic)
- Ransomware: Why the crooks are ditching bitcoin and where they are going next (ZDNet)
- Ransomware: A cheat sheet for professionals (TechRepublic)
- Fake cryptocurrency scam delivers ransomware - and more malware when you pay up (ZDNet)
- Why SMBs are at high risk for ransomware attacks, and how they can protect themselves (TechRepublic)