How are your servers today? Up and running? That's nice. How are they really, though? Are they running at optimal performance? Is the hardware you've chosen to serve up your company's e-commerce site up to snuff? How do you know? Magic? No, you don’t know. But I’ll let you in on a little secret. You can know. How? Linux and httperf.
Benchmarking tools have long been available to computer manufacturers and testing labs. Fortunately for the network-inclined, benchmarking tools have migrated over to the network, where they can back up (or debunk) your boasting and bragging. Better still, they can put your mind at rest that your server hardware is, in fact, perfectly up to the job at hand.
One of these benchmarking tools is httperf. Written by David Mosberger, it is a flexible utility for generating various HTTP workloads and measuring server performance on both micro and macro levels. In this Daily Feature, we'll install httperf and generate a few reports to illustrate how it works.
You have your choice: install via rpm or source. Source, you say? Good choice. Grab the latest, greatest tarball (as of this writing, it's 0.8), su to root, mv the file to a working directory such as /usr/local/src, and run the commands:
tar xvzf httperf-0.8.tar.gz
cd httperf-0.8
./configure
make
make install
which will unpack, build, and install a pretty little binary executable called httperf.
You could also opt to install via rpm. If you choose this route, download the latest rpm file and, as root, run the command (in this instance, we're installing 0.8):
rpm -ivh httperf-0.8-1.i386.rpm
which will install the binary executable httperf.
How it works
To put it simply, httperf overloads a server beyond its normal capacity. The mechanics of this are actually quite complex. Using the tool, you saturate the server with TCP connections. The problem is that once the server's maximum load is exceeded, the client side begins tying up resources waiting for replies and, since a client machine has finite resources, those resources would eventually run out. Because of this, the author included a flexible time-out function that allows you to declare how quickly a process waiting for a response will time out.
The httperf utility runs as a single-threaded process, using nonblocking I/O to communicate with the server and a separate process per client machine. One thing to remember is that httperf organizes its workload into sessions and calls: each test consists of one or more sessions, each made up of calls spaced out at certain intervals.
The command syntax is simple. Let's say we're going to run a very basic test on our Web server (IP address 192.168.1.100). The minimum test simply points httperf at the server.
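A minimal invocation might look like this (192.168.1.100 stands in for your own server's address):

```shell
# Send a single request for the default page on the target server.
# Substitute the address of the machine you actually want to test.
httperf --server 192.168.1.100
```

With no other options, httperf makes one connection, issues one call, and prints its summary statistics.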
The critical information to look for in the results is the connection rate, connection time, request rate, reply rate, CPU time, and net I/O. A test like this will produce quite zippy numbers (to say the least), but what you have to understand is that the test was run on a local machine.
What if you want more telling results? Let's hit an external machine with a more vigorous test. This test will do a number of things. First, it will use the --hog switch, which will use as many TCP ports as necessary.
To hog or not to hog?
If you are testing between UNIX and NT machines, you will have to use the --hog switch due to a TCP incompatibility between NT and UNIX.
Next we'll use the --server switch, which declares the IP address (or URL) of our target server. We'll also declare some specifics regarding the session with the --wsess switch. The --wsess switch allows us to dictate the total number of sessions to generate, the number of calls per session, and the user think time (in seconds) that separates consecutive calls. If we use --wsess=10,5,2, we're dictating 10 sessions at five calls per session with two seconds between each call. You can see how simple it is to scale your testing with this option. Run the command with --wsess=10000,1000,1 and your server will quickly, albeit painfully, be put to the test.
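To get a feel for how quickly those numbers scale, here's a back-of-the-envelope calculation for the heavier --wsess=10000,1000,1 example (the figures below are just the arithmetic implied by that setting):

```shell
# Workload implied by --wsess=10000,1000,1
SESSIONS=10000
CALLS=1000
THINK=1
# Total calls generated across the whole test.
TOTAL_CALLS=$((SESSIONS * CALLS))
# Calls within a session are separated by THINK seconds, so one session
# alone spends roughly (CALLS - 1) * THINK seconds just thinking.
SESSION_SECONDS=$(( (CALLS - 1) * THINK ))
echo "$TOTAL_CALLS calls; each session ~${SESSION_SECONDS}s of think time"
```

Ten million calls is no smoke test, so work your way up gradually.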
After the --wsess switch, we'll want to toss in the --rate switch, which tells the command how many sessions to generate per second. The default value for this switch is 0, meaning sessions will be generated sequentially rather than at a fixed rate. I typically use a rate of 1.
The last switch we'll throw is the --timeout switch. This switch specifies the amount of time (in seconds) that httperf will wait for a server response. The default value is infinity, but you probably won't want to hold up your processes too long during testing.
Let’s put the pieces together and whip up a good command.
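Assembled with a hypothetical target of www.example.com, the full command might look like this:

```shell
# --hog:     use as many local TCP ports as necessary
# --wsess:   10 sessions of 5 calls each, with 2 seconds of think time
# --rate:    generate one new session per second
# --timeout: give up on any reply that takes longer than 5 seconds
httperf --hog --server www.example.com --wsess=10,5,2 --rate=1 --timeout=5
```

Scale the --wsess numbers up and this turns from a gentle exercise into a genuine stress test.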
The command will return results in the same format as the first test.
Compare the differences between the two tests. Pretty dramatic, eh?
Let's face it. When you gotta know, you gotta know. Sure, you can rely on manufacturers to spit out specs for your server. You can read volume after volume of survey material. Or you can let the numbers speak for themselves.
Don’t forget to take a look at the httperf man page (run the command man httperf) for some of the many options not covered in this Daily Feature. What are you waiting for? Commence testing!
Jack Wallen is an award-winning writer for TechRepublic and Linux.com. He’s an avid promoter of open source and the voice of The Android Expert. For more news about Jack Wallen, visit his website jackwallen.com.