Every business depends on data. It’s how companies evolve, improve, pivot, market and grow. The lure of data isn’t limited to marketing and other suit-and-tie staff. IT also requires data, not only to make decisions but to improve the technology used to make business happen.

That’s why you need tools to collect information about every one of your systems. One crucial service to gather data on is your web servers. After all, those web servers are how you sell your products and interact with the public.

How do you collect data about your websites? One way is to use the Apache Bench tool. Apache Bench is a tool used to load test websites and APIs. I’m going to show you how to install and use this handy tool, so you’ll have all the data you require about how your websites are performing.


What you’ll need

Apache Bench can be run from Linux, macOS and Windows. I’ll be demonstrating on Linux (specifically Ubuntu Server 20.04), but the command is the same regardless of platform. If you’re running Apache Bench on Linux, you’ll also need a user with sudo privileges.

How to install Apache Bench

In many cases, Apache Bench comes pre-installed alongside the Apache web server. If you’d prefer to run the command on a machine that doesn’t include Apache, you can install the ab command without the full server. Here’s how.

Log in to your Ubuntu Server and issue the command:

sudo apt-get install apache2-utils -y

If you’re on a Red Hat-based distribution, that command would be:

sudo dnf install httpd-tools -y

Once the tool is installed, you’re ready to benchmark.
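
Before running a benchmark, it's worth confirming the binary is actually on your PATH. A minimal check (the fallback message covers machines where the install hasn't happened yet):

```shell
# Quick sanity check that ab is available; prints its version string.
command -v ab >/dev/null 2>&1 && ab -V | head -n 1 || echo "ab not installed"
```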

How to use Apache Bench

I’m going to demonstrate using Apache Bench on a Nextcloud site hosted within a data center. The command can be run with several options:

  • -n – The number of requests to send to a site
  • -c – The number of concurrent requests to send to a site
  • -H – Use a custom header to append extra values to the request
  • -r – Do not exit on socket receive errors
  • -k – Enable the HTTP KeepAlive feature
  • -p – Use a file that contains data to POST
  • -T – The content-type header to be used for POST/PUT data
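
The -p and -T options work together when you want to load test an API endpoint rather than a page. Here's a minimal sketch, assuming a hypothetical payload.json body and a placeholder http://SERVER/api/ endpoint (substitute your own before trusting any results):

```shell
# Create a hypothetical JSON body to POST (payload.json is an assumed name)
cat > payload.json <<'EOF'
{"user": "demo", "action": "ping"}
EOF

# 50 total POST requests, 5 concurrent, body sent as application/json.
# http://SERVER/api/ is a placeholder; `|| true` keeps this sketch from
# aborting when the placeholder host is unreachable.
ab -n 50 -c 5 -p payload.json -T "application/json" http://SERVER/api/ || true
```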

I’m going to make use of the -n, -c, -H, -r and -k options like so (with SERVER standing in for your site’s address):

ab -n 100 -c 10 -H "Accept-Encoding: gzip, deflate" -rk http://SERVER/

The above command sends 10 concurrent requests at a time (out of 100 total requests), adds gzip and deflate encoding, does not exit on receive errors and enables the HTTP KeepAlive feature. You must include a path in the URL, otherwise you’ll receive an error. In other words, I couldn’t use the bare address http://SERVER for the ab command, but I could use http://SERVER/ (with / being the document root of the site). Of course, if your site uses a non-standard port, you’ll need to include that in the address.

The command will complete fairly quickly and will present results like this:

Server Software:        Apache/2.4.46
Server Hostname:
Server Port:            80

Document Path:          /nextcloud
Document Length:        318 bytes

Concurrency Level:      10
Time taken for tests:   0.083 seconds
Complete requests:      100
Failed requests:        0
Non-2xx responses:      100
Keep-Alive requests:    100
Total transferred:      79910 bytes
HTML transferred:       31800 bytes
Requests per second:    1200.28 [#/sec] (mean)
Time per request:       8.331 [ms] (mean)
Time per request:       0.833 [ms] (mean, across all concurrent requests)
Transfer rate:          936.66 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   4.1      0      22
Processing:     1    5   4.5      4      23
Waiting:        0    5   4.2      4      18
Total:          1    7   5.8      5      26

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      8
  75%     10
  80%     11
  90%     14
  95%     19
  98%     26
  99%     26
 100%     26 (longest request)

As you can see, you receive quite a bit of information about how your site performs under load. If your site is expected to handle a much heavier load, you can increase the numbers like so (with SERVER standing in for your site’s address):

ab -n 1000 -c 100 -H "Accept-Encoding: gzip, deflate" -rk http://SERVER/

That command will take longer to complete because we’re hitting it with more requests.
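
Rather than jumping straight to a heavy load, you can step through several concurrency levels in one pass to see where response times start to degrade. A sketch, where http://SERVER/ is a placeholder for your site's address:

```shell
# Ramp up the -c value and log the mean time per request at each step.
# `|| true` keeps the loop running if a given level (or the host) fails.
for c in 10 50 100; do
  echo "concurrency: $c" | tee -a ramp.log
  ab -n 1000 -c "$c" -rk "http://SERVER/" 2>/dev/null \
    | grep "Time per request" | tee -a ramp.log || true
done
```

Comparing the logged figures across levels shows roughly where your site stops scaling gracefully.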

How to interpret the results

We need to break the results down into sections. The important information from the first section is this:

Requests per second:    1200.28 [#/sec] (mean)
Time per request:       8.331 [ms] (mean)
Time per request:       0.833 [ms] (mean, across all concurrent requests)
Transfer rate:          936.66 [Kbytes/sec] received

The above information indicates:

  • Our test site handled 1200.28 requests per second
  • Each request took an average (mean) of 8.331 ms
  • Across all concurrent requests, the mean time per request was 0.833 ms
  • The transfer rate of requests received was 936.66 Kbytes/second

If I compare that to the results from my business site, I can see the Nextcloud site performs quite well. The results from the test against the externally hosted site were:

Requests per second:    553.73 [#/sec] (mean)
Time per request:       180.592 [ms] (mean)
Time per request:       1.806 [ms] (mean, across all concurrent requests)
Transfer rate:          487.85 [Kbytes/sec] received

One thing to take into consideration when comparing results is where the servers are located relative to where the testing is run. Comparing a site hosted in an on-premises data center against a site hosted by a third party is an apples-to-oranges comparison. You’re much better off comparing sites in similar locations: compare two different sites hosted within your on-premises data center, then compare two different sites on third-party hosting. From there you can look at the two sets of results and see why one site is outperforming the other.
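
If you save each run's output to a file (ab ... > site-a.log), pulling out the headline figure for a side-by-side comparison takes one awk pass. A self-contained sketch, where the here-docs stand in for real saved output:

```shell
# Stand-ins for two saved ab runs (in practice: ab ... > site-a.log, etc.)
cat > site-a.log <<'EOF'
Requests per second:    1200.28 [#/sec] (mean)
EOF
cat > site-b.log <<'EOF'
Requests per second:    553.73 [#/sec] (mean)
EOF

# Extract the requests-per-second number from each file and print it
for f in site-a.log site-b.log; do
  rps=$(awk -F: '/Requests per second/ {print $2}' "$f" | awk '{print $1}')
  echo "$f: $rps req/sec"
done
```

Appending a dated line like this to a running log file also makes it easy to spot performance regressions over time.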

The second section is the connection times, where you can compare the fastest, slowest, median and mean connection times for connection, processing and waiting. You probably won’t glean much useful information here, other than knowing your mean and median connection times. That could be useful information—especially when the goal is to make sure the site can scale to meet demand.

The third section gives you an overview of response times. These times are cumulative, so each row tells you what percentage of requests were served within the given time. In our example, 95% of the requests took 19 ms or less, 99% took 26 ms or less and the single longest request took 26 ms.

In the end, there’s plenty of information Apache Bench can offer. You’ll want to run it regularly on your sites to make sure they are performing as expected. Although other tools offer more extensive results, the ab command is fast, often pre-installed with Apache, and gives you just the right amount of data to help you make decisions on where to go next.

Subscribe to TechRepublic’s How To Make Tech Work on YouTube for all the latest tech advice for business pros from Jack Wallen.

Image: Gorodenkoff/Shutterstock