Why humans are an intractable cybersecurity problem

Nations need to be aware of technological opportunities and the risks of artificial intelligence, social media, and big data, says Neil Walsh, UN Chief of Cybercrime.

TechRepublic's Dan Patterson spoke with Neil Walsh, Head of Cybercrime, Anti-Money Laundering and Counter-Financing of Terrorism Department for the United Nations, about the impact and influence that the UN has on technology and cybersecurity.

SEE: Guidelines for building security policies (Tech Pro Research)

Patterson: The internet is a powerful tool. But there are some key vulnerabilities to the internet, and one of those is not a technical vulnerability. It's the ability to divide people, and often when we look at algorithms, it's the ability to talk to key constituencies, without necessarily talking across those constituencies.

So what I'm getting to here is, we've seen norms change dramatically in the last, say 18 to 36 months.

How do you work with countries like the United States, where norms are dramatically different in the current administration than they were in the previous one, particularly when it comes to the relationship between the administration and some of these other actors that, you could say, troll or behave in a way that maybe exploits some of these loopholes on the web?

Walsh: It's a really good question. I don't think there's an easy answer to it. We work with all countries around the world. The US, for example, helps my organization deliver the work that we do, supporting us financially and giving us assistance. That's the same with lots of different countries, from east to west. On the norms side, the UN looks at norms, legislation, and policy across the world, and across lots of different bits of the business. On the cyber side, the way that we get into a good space is that we look at that broad, generic risk.

There is, for example, one case at the moment that we're really aware of, where over a hundred thousand paying members of a dark net pedophile forum pay to watch the live-streamed rape and abuse of children under the age of six months.

Now, that's horrendous on its own, and the impact that we can have in doing something about it goes way beyond that one specific investigation. More importantly, it shows that we have to work together, that we have to find those areas of consensus to do something about it. This will continue irrespective of politics, irrespective of which party's in power; these threats exist. They're gonna continue to grow, I suspect, in our lifetime. And this is why the Secretary-General is trying to get us to focus on what we do, and how we have that big piece of impact.

So, for example, we would look at how we train law enforcement, and how we keep children aware of threats online. In the European Commission, for example, one of the biggest threats that they see and talk about now online is not child abuse, not online terrorism, but the proliferation of fake news, however that may be described. And the key point in this is we have to help children understand what that means.

I was speaking to a school in Oslo, in Norway, back in December. A group of 14-year-olds, incredibly clever kids who really get it, and they're trying to understand: "What if I follow somebody online? What if I get a celebrity feed that I'm following, and they have 80 million, 100 million, 110 million followers, and you see something that is liked 20 million times. How do you know it's true? How do you make that distinction between something coming across your radar and establishing truth versus fiction?" So to that end, we're working, for example, with Qatar, which is funding an education program that we've designed to help kids from the youngest of ages, right the way through their lives, to understand what all of this threat looks like.

Then we establish what critical thinking is: that you make a decision, as a child or as an adult, on what's presented before you. I think, in our lifetime, in the next five, ten, twenty years, we'll see the rise of fake news and the rise of deep fakes, where someone could record this now and put entirely different words into my mouth, with the same accent, with the same speed of speech, saying something totally different. So we need to be really aware of that technological opportunity, but also the risk that goes with it, and how we get that balance right.
