Security

Black Hat 2018: Sneaker bots and their challenges

Josh Shaul, vice president of web security at Akamai, sat down with TechRepublic's Dan Patterson at Black Hat 2018 to talk about the sneaker sales market and its aftermarket.

Josh Shaul, vice president of web security at Akamai, sat down with TechRepublic's Dan Patterson at Black Hat 2018 to talk about the sneaker sales market and its aftermarket. The following is an edited transcript of the interview.

Josh Shaul: Akamai, as a business, is all about delivering content, internet content, to end users. We run hundreds of thousands of servers that are all around the world, physically close to all of the end users on the internet, and our business helps companies take content, whether that's their website, or streaming video, or a download, and get it very quickly to those end users that want to have access to it.

SEE: Network security policy (Tech Pro Research)

We also, on the flip side, protect those websites from those users attacking them, trying to abuse them, or misuse them, or knock them offline. Our business really helps the internet go: it makes the internet fast, makes the internet reliable, and makes the internet secure.

Stumbling into sneaker bots

We stumbled into sneaker bots. We had been building the business around understanding the difference between human and robot visitors to websites, and that was mostly trying to help our customers deal with these big attacks they were seeing, where big botnets had been loaded up with stolen usernames and passwords. Those usernames and passwords were being tested against thousands of websites.

We were trying to stop that testing of stolen usernames and passwords by these bots. We built some software that was designed to tell the difference between humans and robots, and that software started to get used in ways that we hadn't imagined it would be used in. One day we saw some huge increase in the activity in our systems, in the number of requests that we were processing to determine human versus robot, and as we dug in we realized that those requests were coming from a sneaker sale.
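The credential stuffing pattern described above, botnets replaying stolen username/password pairs against login pages, can be illustrated with a toy detector. This is purely a sketch; the class name, thresholds, and sliding-window approach are invented for illustration and are not Akamai's product, which is far more sophisticated.

```python
# Toy illustration of credential stuffing detection: a botnet replaying
# stolen username/password pairs produces a burst of failed logins from
# each source, far above normal human rates.
from collections import defaultdict, deque
import time


class StuffingDetector:
    def __init__(self, window_s=60, max_failures=5):
        self.window_s = window_s          # sliding window, in seconds
        self.max_failures = max_failures  # failures a human might plausibly make
        self.failures = defaultdict(deque)  # source IP -> failure timestamps

    def record_failure(self, ip, now=None):
        now = now if now is not None else time.time()
        q = self.failures[ip]
        q.append(now)
        # drop failures that have aged out of the sliding window
        while q and now - q[0] > self.window_s:
            q.popleft()

    def is_suspicious(self, ip):
        return len(self.failures[ip]) > self.max_failures


# A bot hammering the login endpoint trips the threshold quickly;
# a single human typo does not.
detector = StuffingDetector()
for _ in range(10):
    detector.record_failure("1.2.3.4", now=1000.0)
detector.record_failure("5.6.7.8", now=1000.0)
```

Real-world defenses layer many more signals (device fingerprints, IP reputation, behavioral data), since stuffing botnets deliberately spread their attempts across thousands of source addresses to stay under per-IP thresholds like this one.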

SEE: Identity theft protection policy (Tech Pro Research)

So, we started to do research. What's going on? Why is this sneaker sale generating so much robot activity? And we did the easy thing: We went to Google, and we typed in "sneaker bot," and the results were stunning. There were tons of tools available that you could buy to go and get the latest pair of sneakers from these limited-edition sneaker sales run by a bunch of different brands.

So, our research in that area was born, and we realized that there is a huge market for sneakers, for limited edition sneakers that people are really excited about, to the point that there's an aftermarket where folks are willing to pay lots of money, lots more than the original sale price, for these products. An entire industry had cropped up where bots go and help purchase the sneakers and then sell them aftermarket.

It doesn't seem to be anything illegal that's happening here at all. It seems like folks are just using legitimate website functionality in shopping, and then selling their goods later. All of that's normal legal activity, and that's really different from what we've traditionally dealt with in the information security industry.

SEE: Cybersecurity strategy research: Common tactics, issues with implementation, and effectiveness (Tech Pro Research)

There's almost always been a criminal element to it, where if you get caught you're going to go to jail. You have penalties on the other side. But here, when you have something like a botnet that's just buying product and reselling it, there is no penalty on the other side. Folks can operate in the open, and that creates a really different level of challenge than we've ever had before. There's no need to hide your presence, so it can be much more bold and much more sophisticated, and it tends to be a harder problem to stop.

These bots really are using the functionality of the web to exploit the functionality of the web. It's exactly the way it sort of comes together, and the tools that are out there are not exploits so much as they're tools that are built to mimic the user experience on a particular site, or the way a user would experience a particular site.

Building tools tailored for a particular target

The bot operators who are running these tools—they know there are companies like Akamai that have built products to detect the difference between a human and a robot, and what they're trying to do is build tools that are tailored for a particular target to then make it very hard for us to tell that what we're dealing with is a robot. They mimic human activity, they follow the path that a normal human would follow through the site, clicking through the right pages in the right order.

SEE: Infographic: Almost half of companies say cybersecurity readiness has improved in the past year (Tech Pro Research)

They really make it very difficult for us to determine the difference between human and robot, and the cat-and-mouse game that we're in is continuing to make sure that we can detect that it's a robot and that we can then take action based on that. And on the other side, the folks that are building these tools are trying to take the next step to make that harder for us.

Searching for humans, not robots

We have tried for a long time to find robots, and we were looking for all different signals that we could detect that would say this is a robot. Every time we did find a good signal, and we used it, and we started to take action based on it, the robots would figure out what we had learned, and then they would change, and they'd move ahead of us. Eventually, we realized we were losing the battle to figure out these robots, until someone had a brilliant idea that we should look for the humans instead of looking for the robots.

Now, it sounds like the same problem, but it turns out that the signs of humanness that are out there are a little easier to detect than the signs of robot. What we do is we look for interactions between users and machines: key presses, mouse movements, touch pad events, touchscreen events, accelerometer, and gyroscope events on a mobile device. We take all that signal from the way the user interacted with the machine, and we feed that into an analytics engine in the cloud, and that analytics engine has been trained to know what human movement looks like.

SEE: Research: Defenses, response plans, and greatest concerns about cybersecurity in an IoT and mobile world (Tech Pro Research)

So, for a human, the way you type, the way you move the mouse, has a very unique and organic feel to it. You can't move a mouse in a straight line. You can't move a mouse in a perfect curve. You can't have perfect acceleration. You won't have equal timing between key presses. You'll never push the same key twice for the same amount of time.

All of those tiny, tiny little factors can be analyzed and added up to say that, "yeah, this is really a human that's interacting with this machine" versus, "nah, a human couldn't do this." A human couldn't generate those movements, couldn't generate those key presses, couldn't generate that series in that order with that timing. That's what we typically use today to detect whether we're looking at a human, or we're looking at a robot.
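The kinds of checks described above can be sketched in a toy classifier. This is only an illustration of the idea, not Akamai's engine, which feeds these signals into a trained cloud analytics model; the function name, thresholds, and geometry here are all invented for the example.

```python
# Toy "humanness" checks on an interaction trace: scripted bots tend to
# produce perfectly even keystroke timing and dead-straight mouse paths,
# which real humans physically cannot.
def looks_human(mouse_points, key_intervals_ms):
    """mouse_points: list of (x, y) samples along the cursor path.
    key_intervals_ms: milliseconds between successive key presses."""
    # Humans never hold identical intervals between many key presses.
    if len(key_intervals_ms) > 2 and len(set(key_intervals_ms)) == 1:
        return False

    # Humans can't move a mouse in a perfectly straight line: measure how
    # far intermediate samples stray from the start-to-end chord.
    (x0, y0), (x1, y1) = mouse_points[0], mouse_points[-1]

    def deviation(point):
        x, y = point
        # perpendicular distance from point to the line through start/end
        num = abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
        den = ((y1 - y0) ** 2 + (x1 - x0) ** 2) ** 0.5 or 1.0
        return num / den

    deviations = [deviation(p) for p in mouse_points[1:-1]]
    if deviations and max(deviations) < 0.5:  # essentially a straight line
        return False
    return True


# A scripted bot: evenly spaced keystrokes, dead-straight mouse path.
bot_trace = looks_human([(0, 0), (50, 50), (100, 100)], [100, 100, 100])
# A human-ish trace: jittery path, irregular timing between keys.
human_trace = looks_human([(0, 0), (48, 61), (100, 100)], [92, 143, 108])
```

In practice the hard part is that bot authors know these checks exist and replay recorded human traces, which is why production systems rely on trained models over many correlated signals rather than fixed rules like these.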

It's hard to say criminals here, because these just seem to be legitimate business operations, and I'm sure some of them are enterprising people in their homes who had enough cash to buy a hundred pairs of sneakers, and are buying them and then selling them online, making a profit, and building a business that way.

SEE: Information security policy (Tech Pro Research)

At the same time, I'm sure there's a real business that's looking at this saying, "Hey, there's a huge amount of money to be made here, and if I can put up a couple of hundred thousand dollars for inventory I can go and make millions of dollars on the aftermarket very quickly."

So, I think there's a big mix. And, again, because the activity's not illegal, it brings a very different element to the party of folks that are just enterprising and looking for profit.

Industries that are affected the most

The industries that get hit very, very hard are the ones that have very high-value accounts: financial services, or any place where you have stored value, like a gift card, a membership card, or a loyalty program. They get hit really hard with credential stuffing attempts to figure out, can I get valid usernames and passwords, and then just pump the dollars out of those stored-value programs.

The other area that gets hit very hard is the inventory. It's inventory grabbing for limited-edition type of products. Whether those are sneakers, or they're tickets to a ballgame, or some kind of a concert type event that's really exciting, or any other product that's really limited in quantity and highly desirable, they get hit hard.

Again, because it's not illegal, there's a different type of actor, and it's also not the typical security scenario. Throughout the 20-plus years I've been in security, I've said a million times, "I don't have to actually outrun the bear. I just need to outrun you. I just need to be faster than the next guy." Because in that world, the attacker will always go after the easy target.

SEE: Security awareness and training policy (Tech Pro Research)

When it's this limited-edition item, there is no other target. There is no outrun you. You have to outrun the bear. They're going to keep coming and keep coming and keep coming. There's no other place to go. Nobody else makes those sneakers. Nobody else sells those tickets. It's a really dedicated sort of actor there that's the focus. Very different from what we've seen in the past in security typically.

You think about scraping and content scraping in that world. You put up your ebook. You wrote your great article. You publish your song. You're streaming some content, you know, the latest championship game, and bots or companies want to take that content and monetize it themselves, right? They'd like to take the work that you did and put it on their own site, and monetize it separately. So, they write bots for scraping that'll come and scrape your video content, your text, your images, and pull that stuff off and put it on a separate site.

They do that to monetize, but sometimes they also do it for more nefarious purposes: they scrape a site, set up a phishing copy, and lure people in with, "Hey, come into this site," to exploit them. That's very painful for distributed businesses.

SEE: System update policy (Tech Pro Research)

These businesses with independent marketing, individual groups that come together in franchises, may have lots of different legitimate websites for different countries, different regions, different distributors. Those are the places where it's really scary, because it's very hard to look at a website and know whether it's a good one or a bad one when there's not just one official company website anymore. Now you're dealing with hundreds. That's a lot of problems out there.

How to manage it vs. how to stop it

I think it is in a lot of ways like spam, and it's a problem of how you're going to manage it versus how you're going to stop it, because, again, like spam, it's not illegal to go and do this activity. So, it's really hard to make it go away. The name of the game is to break the economic model so that it's not profitable, it's not an interesting thing for folks to do anymore to build and run these bots.

We focus a lot on trying to do that by confusing them, by giving them the wrong content, by giving them the wrong prices, by sending them down some maze. "Yeah, absolutely, you just logged in, those credentials are great. Go and sell them off." We're trying to create confusion in that business model that makes it more costly for them to operate, and creates a less viable business for them in the end.
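The decoy tactic described here, telling a credential-stuffing bot that every guess succeeded so its stolen list gets poisoned, can be sketched with a minimal login handler. The function and field names are hypothetical, invented for this illustration; they are not a real API.

```python
# Hypothetical sketch of the deception tactic: instead of blocking a
# suspected bot outright, answer with a plausible but useless success so
# the operator can't tell which stolen credentials actually work.
def handle_login(username, password, is_bot, check_credentials):
    """Return a response dict. `is_bot` would come from an upstream
    human-vs-robot classifier; `check_credentials` is the real auth check."""
    if is_bot:
        # Always report success to suspected bots: every guess looks
        # "correct", which poisons the attacker's validated-credential list
        # and makes their resale business far less viable.
        return {"status": "ok", "session": "decoy-session"}
    if check_credentials(username, password):
        return {"status": "ok", "session": "real-session"}
    return {"status": "invalid"}
```

The design choice is economic rather than technical: a hard block tells the bot author exactly when detection improved, while a decoy success wastes their time and money silently, which is the point of breaking the business model.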

SEE: IT leader's guide to cyberattack recovery (Tech Pro Research)

That's a lot of the focus here. It is not so much making it go away, but making it painful. We're never going to be perfect. No security technology ever can be. You've got an adversary that's moving and shifting all the time and doing unpredictable things, and so you know that every once in a while they're going to find a way through, and they're going to get through.

Really, it's about how quickly can you put out the fire? How quickly can you stop the damage? How quickly can you respond to those scenarios and contain? That's an area of focus for us continuing on, of evolving our detection so that we can avoid a problem, but also getting really, really good at when a problem does happen, containing it, limiting the damage, and cleaning it up.

About Dan Patterson

Dan is a Senior Writer for TechRepublic. He covers cybersecurity and the intersection of technology, politics and government.
