Websites often strive to attract visitors as a way to generate interest, account signups, and hopefully income. But in addition to all the human beings who surf the web, bots are another common visitor to many sites. Automated programs designed to replicate human behavior, bots run the gamut from relatively benign to hostile and malicious. Released Tuesday, Imperva’s “2020 Bad Bot Report: The Bad Bots Strike Back” looks at how bad bots play a role in website activity and how website owners can protect themselves against these threats.


According to the IT security company Imperva, bad bots can threaten websites in a variety of ways.

  • Price scraping bots can gather and analyze product prices as a way to beat competitors in the marketplace.
  • Content scraping bots can steal and reuse website content.
  • Credential stuffing or credential cracking bots can use brute force attacks to ascertain user credentials. This type of attack can lead to account lockouts, financial fraud, and increased customer complaints.
  • Account creation bots can create free accounts that are then used to send spam and exploit promotions aimed at new users.
  • Card cracking bots can test stolen credit card numbers to see which ones work.
  • Denial of service bots can launch DoS attacks that overwhelm and crash websites.
  • Gift card balance checking bots can steal money from gift card accounts that contain a balance.
  • Denial of inventory bots hold items in shopping carts, thereby preventing access by valid customers.

In its research, Imperva found that traffic from bad bots accounted for one-quarter (24.1%) of all website traffic in 2019, a jump of 18% from the prior year. Advanced persistent bots (APBs) comprised almost three-quarters of all bad bot traffic. These APBs try to evade detection by cycling through random IP addresses, using anonymous proxies, and changing their identities.

Many industries are impacted by bad bots, but the financial services sector was the hardest hit in 2019, accounting for almost 48% of all bad bot traffic. Other industries in the top five bad bot list were education at 46%, IT and services at 45%, marketplaces at 40%, and government at 37%.

One trend in the land of bad bots is Bad Bots as-a-Service, according to Imperva. In this scenario, bots are sold as a tool to scrape data from websites, often those maintained by competitors. Businesses package these bots as intelligence services with such names as pricing intelligence, alternative data for finance, or competitive insights. Further, there’s been a growth in job postings for positions such as Web Data Extraction Specialist or Data Scraping Specialist.

“We closely monitor how malicious bots iterate to evade detection and commit a wide range of attacks, and this year’s findings have revealed the next evolution: Bad Bots as-a-Service,” Kunal Anand, CTO at Imperva, said in a press release.

“Bad Bots as-a-Service is an attempt by bot operators to legitimize their role and appeal to organizations facing increased pressure to stay ahead of competition,” Anand said. “It’s critical that businesses spanning all industries learn which threats are most pervasive in their field and take the necessary steps to protect themselves.”

To protect websites against bad bots, Imperva offers the following advice:

  1. Block or CAPTCHA outdated user agents and browsers. The default configurations for many tools and scripts contain user-agent string lists that are largely outdated. This won’t stop the more advanced attackers, but it might catch and discourage some. The risk in blocking outdated user agents and browsers is very low; most modern browsers force auto-updates on users, making it more difficult to surf the web using an outdated version. Here’s how Imperva recommends that organizations treat older browser versions: Firefox – block version 52 or older, CAPTCHA version 60 or older; Chrome – block version 57 or older, CAPTCHA version 65 or older; Internet Explorer – block or CAPTCHA version 10 or older; Safari – block or CAPTCHA version 9 or older.
  2. Block known hosting providers and proxy services. Even if the most advanced attackers move to other networks that are more difficult to block, many less sophisticated perpetrators can still use easily accessible hosting and proxy services. Disallowing access from these sources might discourage attackers from coming after your site, API, and mobile apps. Imperva recommends blocking these data centers: Digital Ocean, Gigenet, OVH Hosting, and Choopa, LLC.
  3. Block all access points. Be sure to protect exposed APIs and mobile apps—not just your website—and share blocking information between systems wherever possible. Protecting your website does little good if backdoor paths remain open.
  4. Carefully evaluate traffic sources. Do any have unusually high bounce rates? Do you see lower conversion rates from certain traffic sources? Both can be signs of bot traffic.
  5. Investigate traffic spikes. A traffic spike can look like a great win for your business, but can you trace it to a clear, specific source? A spike you can’t explain can be a sign of bad bot activity.
  6. Monitor for failed login attempts. Define your failed login attempt baseline, then monitor for anomalies or spikes. Set up alerts so you’re automatically notified if any occur. Advanced “low and slow” attacks don’t trigger user or session-level alerts, so be sure to set global thresholds.
  7. Monitor increases in failed validation of gift card numbers. An increase in failures, or even traffic, to gift card validation pages can be a signal that bots such as GiftGhostBot are attempting to steal gift card balances.
  8. Pay close attention to public data breaches. Newly stolen credentials are more likely to still be active. When large breaches occur anywhere, expect bad bots to run those credentials against your site with increased frequency.
  9. Evaluate a bot protection solution. Bad actors are working hard every day to attack websites across the globe. The tools used constantly evolve, traffic patterns and sources shift, and advanced bots can even mimic human behavior. Hackers who use bots to target your site are distributed around the world, and their incentives are high. Today, it’s almost impossible to keep up with all of the threats on your own. Industry analysts agree, which is why Gartner has added bot defense as a core requirement for Web Application Firewall (WAF) and Content Delivery Network (CDN) vendors. Your defenses need to evolve as fast as the threats.
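The user-agent thresholds in step 1 can be sketched as a simple gating check. The following is an illustrative sketch only, not Imperva’s tooling: the `classify_user_agent` helper, the policy table, and the treatment of the version-10 entry as Internet Explorer are assumptions made for the example; in practice this logic would live in a WAF rule or web framework middleware that can also issue the CAPTCHA challenge.

```python
import re

# Illustrative policy table (assumed, based on the thresholds listed above):
# browser -> (block at or below, CAPTCHA at or below)
POLICY = {
    "Firefox": (52, 60),
    "Chrome": (57, 65),
    "MSIE": (10, 10),    # Internet Explorer: block or CAPTCHA v10 and older
    "Safari": (9, 9),    # Safari: block or CAPTCHA v9 and older
}

# Per-browser version patterns. Chrome is checked before Safari because
# Chrome's user-agent string also contains the token "Safari".
PATTERNS = {
    "Firefox": re.compile(r"Firefox/(\d+)"),
    "Chrome": re.compile(r"Chrome/(\d+)"),
    "MSIE": re.compile(r"MSIE (\d+)"),
    "Safari": re.compile(r"Version/(\d+).*Safari"),
}

def classify_user_agent(ua: str) -> str:
    """Return 'block', 'captcha', or 'allow' for a raw User-Agent string."""
    for browser, pattern in PATTERNS.items():
        match = pattern.search(ua)
        if match:
            version = int(match.group(1))
            block_max, captcha_max = POLICY[browser]
            if version <= block_max:
                return "block"
            if version <= captcha_max:
                return "captcha"
            return "allow"
    return "captcha"  # unrecognized agents get a CAPTCHA challenge
```

Note that the outdated default user-agent strings shipped with many scripting tools (e.g., `python-requests/2.22.0`) match none of the browser patterns and fall through to the final branch, where they are challenged rather than silently allowed.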
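For the failed-login monitoring in step 6, one minimal way to implement a global threshold is a site-wide sliding-window counter. This is an assumed sketch — the class name, window size, and baseline values are invented for illustration — but it shows why a global threshold catches “low and slow” attacks that per-user or per-session alerts miss: no single account fails often enough to trip an alert, yet the site-wide total does.

```python
import time
from collections import deque

class FailedLoginMonitor:
    """Site-wide sliding-window counter for failed login attempts."""

    def __init__(self, window_seconds=3600, baseline_per_window=50,
                 alert_multiplier=3):
        self.window = window_seconds
        # Alert when failures exceed a multiple of the observed baseline.
        self.threshold = baseline_per_window * alert_multiplier
        self.failures = deque()  # timestamps of failures, across ALL accounts

    def record_failure(self, now=None):
        """Record one failed attempt; return True if an alert should fire."""
        now = time.time() if now is None else now
        self.failures.append(now)
        # Drop events that have aged out of the sliding window.
        while self.failures and self.failures[0] < now - self.window:
            self.failures.popleft()
        return len(self.failures) > self.threshold

monitor = FailedLoginMonitor(window_seconds=3600, baseline_per_window=50)
# A "low and slow" credential-stuffing run: many IPs and accounts, a few
# attempts each, but the site-wide count still crosses the global threshold.
alerts = [monitor.record_failure(now=t) for t in range(200)]
```

In this run the first failures stay under the threshold, while the later ones exceed the 150-failure global limit and would trigger the alert that per-account monitoring never fires.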
