
Presenting the Pogo Linux Mini-Pro 1U server: More bang for your buck than a free Porsche!

Looking for an inexpensive yet powerful solution to your enterprise server needs? Look no further than Pogo Linux's Mini-Pro 1U server. Here, Jack Wallen, Jr. takes a look at this beast with an eye towards benchmarking.

Pogo Linux continues to amaze me. The company puts out high-quality Linux hardware at bargain-basement prices. Until now, the lion's share of its business has been in the desktop space. But as the IT game changes, so do the players, and Pogo Linux is now bent on selling mid- to high-end server systems.

The particular machine I was given (the Mini-Pro 1U server) was a monster of a machine that roared to life and heated up my office about 10 degrees (fortunately it's winter and I could use that magic 10). The specs on the machine read like a dream (for many small to midsize companies):
  • CPUs: dual 933 MHz Pentium III
  • Motherboard: Asus CUR-DLS Serverworks
  • RAM: 256 MB PC133 ECC Registered SDRAM
  • Hard Drive: 20 GB

Looks that kill
I would never be one to call a piece of hardware sexy. I will say, however, that Pogo Linux has created one clean-looking server. The 1U is lean, measuring only 1 3/4" tall x 16 3/4" wide x 22 3/4" deep. The front of the server displays a minimal design: two removable drive bays, a CD drive, a floppy drive, and power and reset buttons. There is very little room for confusion and lots of room for simple, straightforward function.

The back of the server sported the following connections:
  • Power
  • Two USB
  • Two Ethernet
  • Serial port
  • Parallel port

Standard fare, although a Gigabit Ethernet connection would have been nice.

Kernels
Not only is the hardware set up for serious server power, but the software has also been fine-tuned by Pogo Linux to perform. You will notice this the minute you power on the 1U server and see a choice of three different kernels: standard, enterprise, and SMP. All three are built from the 2.2.16-22 kernel tree, and each serves a different purpose.

Standard kernel
There is very little to say about this kernel. The standard kernel shipped on the 1U server is simply a standard Linux kernel with no optimizations made. This kernel should be used for low-end needs where power is not necessary, or for emergency uses.

Enterprise
This kernel has been configured for faster I/O throughput and better database handling.

SMP
This kernel was configured and compiled to take advantage of the dual-processor nature of the server.
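A quick way to confirm which of the three kernels actually booted is to ask the running system. (The "smp" suffix shown in the comment is an assumption based on Red Hat's kernel naming of the era; the exact string on the Pogo image may differ.)

```shell
# Print the running kernel's version string; booted into the SMP build,
# this would typically report something like "2.2.16-22smp".
uname -r

# Count the processors the running kernel can see; under the SMP kernel,
# both Pentium III CPUs should show up here.
grep -c '^processor' /proc/cpuinfo
```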

Performance
Probably the highlight of reviewing this server was its performance. Out of the box, and within 10 to 15 minutes, I had the following services up and running:
  • Apache
  • FTP server
  • SSH
  • Print services
  • StarServer (the only software I had to install)
  • Firewall

Not a shabby list for only 10 to 15 minutes of setup time!

Benchmarking
Let's face it, in today's market it's all about (or should be about) the numbers. “How well does this server perform?” should be the question on your mind. After doing some fairly heavy benchmarking, I can say this server stood on its own.

Using a benchmarking utility called httperf, I was able to score some pretty serious numbers with this unit.

The first command run:
httperf --hog --server www --num-conns 100 --rate 10 --timeout 5

causes httperf to create 100 connections (at a fixed rate of 10 per second) to host www, send a request for the root document (http://www/), receive the reply, close the connection, and then print some performance statistics. The statistics returned look like:
  • Total: connections 100 requests 100 replies 100 test-duration 9.902 s
  • Connection rate: 10.1 conn/s (99.0 ms/conn, <=1 concurrent connections)
  • Connection time [ms]: min 1.2 avg 1.3 max 1.8 median 1.5 stddev 0.1
  • Connection time [ms]: connect 0.2
  • Connection length [replies/conn]: 1.000
  • Request rate: 10.1 req/s (99.0 ms/req)
  • Request size [B]: 58.0
  • Reply rate [replies/s]: min 10.0 avg 10.0 max 10.0 stddev 0.0 (1 samples)
  • Reply time [ms]: response 0.9 transfer 0.2
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 3.65 system 6.13 (user 36.9% system 61.9% total 98.8%)
  • Net I/O: 32.2 KB/s (0.3*10^6 bps)
  • Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
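The reported Net I/O figure is easy to sanity-check: it is just the request and reply bytes moved per connection, divided by the test duration. A quick awk calculation, taking the sizes from the report above, the duration as 9.902 seconds (100 connections at roughly 10.1 conn/s), and assuming httperf's KB means 1,024 bytes, reproduces the number:

```shell
# Net I/O = (request bytes + reply bytes) * connections / duration.
awk 'BEGIN {
  req = 58.0; reply = 3202.0     # request size and total reply size [B]
  conns = 100; duration = 9.902  # connections and test duration [s]
  printf "%.1f KB/s\n", (req + reply) * conns / duration / 1024
}'
# prints 32.2 KB/s, matching the report
```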

The next test:
httperf --hog --server=www --wsess=10,5,2 --rate=1 --timeout=5 --ssl

causes httperf to generate a total of 10 sessions at a rate of one session per second. Each session consists of five calls that are spaced out by two seconds.
  • Total: connections 10 requests 50 replies 50 test-duration 17.022 s
  • Connection rate: 0.6 conn/s (1702.2 ms/conn, <=9 concurrent connections)
  • Connection time [ms]: min 8008.1 avg 8034.7 max 8220.3 median 8013.5 stddev 65.3
  • Connection time [ms]: connect 0.3
  • Connection length [replies/conn]: 5.000
  • Request rate: 2.9 req/s (340.4 ms/req)
  • Request size [B]: 58.0
  • Reply rate [replies/s]: min 1.8 avg 3.1 max 4.2 stddev 1.2 (3 samples)
  • Reply time [ms]: response 1.3 transfer 4.1
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=50 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 4.73 system 9.96 (user 27.8% system 58.5% total 86.3%)
  • Net I/O: 9.4 KB/s (0.1*10^6 bps)
  • Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
  • Session rate [sess/s]: min 0.00 avg 0.59 max 1.00 stddev 0.50 (10/10)
  • Session: avg 1.00 connections/session
  • Session lifetime [s]: 8.0
  • Session failtime [s]: 0.0
  • Session length histogram: 0 0 0 0 0 10
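The totals in this session run follow directly from the workload definition: 10 sessions of 5 calls each yields the 50 requests reported, the last session starts 9 seconds in (10 sessions at one per second), and its five calls, spaced 2 seconds apart, span about 8 more seconds, which lines up with the roughly 17-second test duration:

```shell
awk 'BEGIN {
  sessions = 10; calls = 5   # from --wsess=10,5,2
  think = 2; rate = 1        # 2 s between calls, 1 session/s
  printf "requests: %d\n", sessions * calls
  # Last session starts at (sessions-1)/rate seconds; its calls then
  # span (calls-1)*think more seconds.
  printf "approx duration: %d s\n", (sessions - 1) / rate + (calls - 1) * think
}'
```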

The next test:
httperf --hog --server=www --wsess=1000,5,2 --rate 1 --timeout 5

bumps up the previous test and generates a total of 1000 sessions at a rate of one session per second. Each session consists of five calls spaced out by two seconds. The results of this test:
  • Total: connections 1000 requests 1000 replies 1000 test-duration 99.902 s
  • Connection rate: 10.0 conn/s (99.9 ms/conn, <=1 concurrent connections)
  • Connection time [ms]: min 1.0 avg 1.1 max 8.2 median 1.5 stddev 0.5
  • Connection time [ms]: connect 0.1
  • Connection length [replies/conn]: 1.000
  • Request rate: 10.0 req/s (99.9 ms/req)
  • Request size [B]: 58.0
  • Reply rate [replies/s]: min 10.0 avg 10.0 max 10.2 stddev 0.0 (19 samples)
  • Reply time [ms]: response 0.8 transfer 0.2
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 33.25 system 66.64 (user 33.3% system 66.7% total 100.0%)
  • Net I/O: 31.9 KB/s (0.3*10^6 bps)
  • Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

What's very interesting about the above two tests is that in the longer, heavier run, the server's average reply time actually dropped (from 1.3 milliseconds to 0.8 milliseconds).

The final test was to modify the above test and push it off the scale. By having httperf create one million connections from each of four client machines, we could see how the server would handle long-term, constant load.

During this test, each CPU on the server never rose above 16 percent usage (although the clients each reported CPU usage ranging from 59.9 to 99.9 percent). This indicated that the processors were up to the job of handling heavy loads. But what did the numbers say?

The command for this test was:
httperf --hog --server=www --num-conns 1000000 --rate 10 --timeout 5

and the four test reports look like this:

Test client #1:
  • Maximum connect burst length: 5
  • Total: connections 1000000 requests 1000000 replies 998025 test-duration 99999.925 s
  • Connection rate: 10.0 conn/s (100.0 ms/conn, <=33 concurrent connections)
  • Connection time [ms]: min 0.2 avg 55.9 max 6006.1 median 1.5 stddev 382.4
  • Connection time [ms]: connect 1.3
  • Connection length [replies/conn]: 1.000
  • Request rate: 10.0 req/s (100.0 ms/req)
  • Request size [B]: 58.0
  • Reply rate [replies/s]: min 6.0 avg 10.0 max 14.0 stddev 0.4 (19998 samples)
  • Reply time [ms]: response 52.7 transfer 3.8
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=998025 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 31297.98 system 68629.89 (user 31.3% system 68.6% total 99.9%)
  • Net I/O: 31.8 KB/s (0.3*10^6 bps)
  • Errors: total 1975 client-timo 1975 socket-timo 0 connrefused 0 connreset 0
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Test client #2:
  • Maximum connect burst length: 12
  • Total: connections 1000000 requests 1000000 replies 999998 test-duration 100002.877 s
  • Connection rate: 10.0 conn/s (100.0 ms/conn, <=104 concurrent connections)
  • Connection time [ms]: min 0.4 avg 928.7 max 324585.1 median 10.5 stddev 3291.3
  • Connection time [ms]: connect 85.0
  • Connection length [replies/conn]: 1.000
  • Request rate: 10.0 req/s (100.0 ms/req)
  • Request size [B]: 64.0
  • Reply rate [replies/s]: min 6.2 avg 10.0 max 13.8 stddev 0.6 (19850 samples)
  • Reply time [ms]: response 479.5 transfer 364.3
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=999998 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 9621.78 system 50281.26 (user 9.6% system 50.3% total 59.9%)
  • Net I/O: 31.9 KB/s (0.3*10^6 bps)
  • Errors: total 2 client-timo 0 socket-timo 0 connrefused 0 connreset 2
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Test client #3:
  • Maximum connect burst length: 6
  • Total: connections 1000000 requests 999660 replies 993514 test-duration 100000.921 s
  • Connection rate: 10.0 conn/s (100.0 ms/conn, <=40 concurrent connections)
  • Connection time [ms]: min 0.2 avg 113.3 max 6705.7 median 1.5 stddev 570.1
  • Connection time [ms]: connect 43.3
  • Connection length [replies/conn]: 1.000
  • Request rate: 10.0 req/s (100.0 ms/req)
  • Request size [B]: 64.0
  • Reply rate [replies/s]: min 4.0 avg 9.9 max 15.6 stddev 0.5 (19990 samples)
  • Reply time [ms]: response 70.6 transfer 8.2
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=993514 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 26126.45 system 70125.23 (user 26.1% system 70.1% total 96.3%)
  • Net I/O: 31.7 KB/s (0.3*10^6 bps)
  • Errors: total 6486 client-timo 6486 socket-timo 0 connrefused 0 connreset 0
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Test client #4:
  • Maximum connect burst length: 12
  • Total: connections 1000000 requests 1000000 replies 999998 test-duration 100002.877 s
  • Connection rate: 10.0 conn/s (100.0 ms/conn, <=104 concurrent connections)
  • Connection time [ms]: min 0.4 avg 928.7 max 324585.1 median 10.5 stddev 3291.3
  • Connection time [ms]: connect 85.0
  • Connection length [replies/conn]: 1.000
  • Request rate: 10.0 req/s (100.0 ms/req)
  • Request size [B]: 64.0
  • Reply rate [replies/s]: min 6.2 avg 10.0 max 13.8 stddev 0.6 (19850 samples)
  • Reply time [ms]: response 479.5 transfer 364.3
  • Reply size [B]: header 312.0 content 2890.0 footer 0.0 (total 3202.0)
  • Reply status: 1xx=0 2xx=999998 3xx=0 4xx=0 5xx=0
  • CPU time [s]: user 9621.78 system 50281.26 (user 9.6% system 50.3% total 59.9%)
  • Net I/O: 31.9 KB/s (0.3*10^6 bps)
  • Errors: total 2 client-timo 0 socket-timo 0 connrefused 0 connreset 2
  • Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
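Taking the four reports together, the run attempted four million connections in all; summing each client's error line (per-client totals taken from the reports above) gives the overall failure count and success rate:

```shell
awk 'BEGIN {
  attempted = 4 * 1000000          # 1,000,000 connections per client
  errors = 1975 + 2 + 6486 + 2     # per-client error totals reported above
  printf "failed: %d\n", errors
  printf "success rate: %.2f%%\n", (attempted - errors) * 100 / attempted
}'
```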

The important numbers to note are the following:
  • Failed replies: ranged from 2 to 6,486 per client
  • Reply times: responses ranged from 52.7 milliseconds to 479.5 milliseconds; transfers ranged from 3.8 milliseconds to 364.3 milliseconds
  • Test completion time: roughly 27.8 hours
  • Reply rate: averages ranged from 9.9 to 10.0 replies per second
  • Net I/O: ranged from 31.7 KB/s to 31.9 KB/s

This was a fairly strenuous test, and the results may lead you to believe that the server didn't fare too well (when looking at numbers such as Net I/O). What you have to take into consideration is the size of the test. The server was constantly plowed by four machines for almost 28 hours, and the maximum average transfer time was still only 364.3 milliseconds (granted, the clients and server were in a single location, the TechRepublic building). To give a better idea of how distance affects the results, I ran a similar command from home and came up with a 78.6 millisecond response time and a 32.7 millisecond transfer time. Although the distance was a mere 10 miles, the transfer time still fell well into the low end of the primary test results. Not bad!

Minuses? Who said anything about minuses?
If I had to point out any shortcomings of the Pogo Linux Mini-Pro 1U server, it would be the lack of enough drive bays to allow RAID 5. With space for only two drives (fortunately, both removable), there is only enough hardware to accomplish RAID 1. For a server in a small to midsize enterprise, you are going to want RAID 5. You could settle for RAID 1 (mirroring across the two drives), but most IT pros will want RAID 5.

But I'm getting picky for the price; more than likely, if you are looking at high-end redundancy, you are going to choose a hardware RAID solution over a software one anyway.
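The capacity trade-off behind that complaint is simple arithmetic: RAID 1 mirrors a pair of drives, so two 20 GB disks yield 20 GB of usable space, while RAID 5 stripes data with parity and yields (n - 1) times the drive size, which is why it needs at least three bays:

```shell
awk 'BEGIN {
  drive = 20                       # GB per drive, as in the review unit
  # RAID 1: a two-drive mirror exposes the capacity of one drive.
  printf "RAID 1 (2 drives): %d GB usable\n", drive
  # RAID 5: one drive-equivalent of parity is spread across the set.
  printf "RAID 5 (3 drives): %d GB usable\n", (3 - 1) * drive
}'
```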

Conclusion
It's a tough market out there, and the IT economy isn't quite as strong as it was a year ago. For that reason, finding a good deal can make or break your budget, and the new Pogo Linux Mini-Pro 1U server fits the bill to a T. Great performance at a low cost: this machine stands up to daily use and the bottom line. What more could you ask for?

Take a look at the machine on the Pogo Linux site, and when you buy one, tell 'em Jack sent ya.

About

Jack Wallen is an award-winning writer for TechRepublic and Linux.com. He’s an avid promoter of open source and the voice of The Android Expert. For more news about Jack Wallen, visit his website getjackd.net.
