Purchasing decisions: Push the server envelope or hold out a bit?

In today’s technology landscape, there are more options than ever on server equipment. IT pro Rick Vanover shares his thoughts on purchasing decisions based on the different models available right now.

One thing I always try to do as an IT professional is make the best forward-looking decisions. In one category, we've come to a crossroads of sorts: the question of processor models and the corresponding server model selections that are available. I brought this up in an earlier post on the new 32nm and 45nm processor models, but how does this translate to server purchases right now? This matters because, in most situations, the average server we purchase today is far more powerful than the software requirements we put it up against. This is the case even in virtualized environments.

I’ll pick on Hewlett-Packard’s ProLiant series of servers for the time being. Like many other administrators, I find myself most frequently using the ProLiant DL380 server series. Right now, two models are “active” in the Intel processor line: the DL380 Generation 6 (G6) and Generation 7 (G7) are both available. There is always some amount of overlap between generations, but this is different.

The G6 model became available when the Intel Xeon 5500 (Nehalem) series of processors arrived, and the G7 followed when the Xeon 5600 (Westmere) series was offered with up to six cores per processor. To the purchaser, this can be confusing, and it is made more confusing because some of the same processor models (from the 5500/5600 series) can be offered on both the G6 and G7. Does this mean that the G6 will have a peculiarly short lifespan as a current server model? I believe not. The G6 became available in mid-2009, and the G7 was released not even one year later.

What does this mean for the customer? Simply put, there is marginally more choice. On the HP website, you can configure an HP ProLiant DL380 G6 or G7 server, but you may or may not be able to make an exact processor match between the two series. What you can do is configure comparable servers and compare prices; the difference may not be as large as you would expect.

For example, a very low-end DL380 G7 is $140 more expensive than a low-end G6. The difference is primarily the processors -- 2.00 GHz versus 1.86 GHz. Is the extra processor frequency worth the extra $140, approximately a 6% price difference? Pricing will of course vary by reseller channel, but again in the case of the online configuration, a premium server shows a slightly different gap: the highest-frequency base configuration of the G6 is approximately 15% less expensive, while trading off only two cores and 400 MHz of processor frequency.
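As a rough illustration of the comparison above, here is a back-of-the-envelope sketch in Python. The $140 gap, 6% premium, and clock speeds come from the example in this article; the implied G6 base price and the quad-core count are my own assumptions for illustration, not HP list prices.

```python
# Back-of-the-envelope price/performance comparison between two
# hypothetical DL380 configurations. A $140 gap at roughly a 6%
# premium implies a G6 base price near $2,333 (assumed, not quoted).

def premium_pct(cheaper: float, pricier: float) -> float:
    """Percent premium of the pricier configuration over the cheaper one."""
    return (pricier - cheaper) / cheaper * 100

def price_per_ghz_core(price: float, ghz: float, cores: int) -> float:
    """Crude price-per-GHz-per-core metric for comparing configurations."""
    return price / (ghz * cores)

g6_price = 2333.0          # assumed base price implied by the ~6% figure
g7_price = g6_price + 140  # low-end G7 from the example above

print(f"G7 premium: {premium_pct(g6_price, g7_price):.1f}%")  # -> 6.0%

# Assuming quad-core parts in both boxes (an assumption for illustration):
print(f"G6: ${price_per_ghz_core(g6_price, 1.86, 4):.0f} per GHz-core")
print(f"G7: ${price_per_ghz_core(g7_price, 2.00, 4):.0f} per GHz-core")
```

Interestingly, under these assumptions the faster G7 actually comes out slightly cheaper per GHz-core, which is the kind of detail a quick configure-and-compare exercise can surface.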

Other server brands are in similar situations now, where we can configure more options that may end up saving a lot in the long run or on a large-quantity purchase. What are your thoughts on the multiple server models that are available now? Is price king, or is loading up on cores still the only way to go? Share your comments below.


Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.


As it was in the beginning, is now and ever shall be? PCs have evolved from 4.77 MHz 8-bit machines to multi-core, multi-gigahertz screamers, but the challenge remains the same: accessing data for processing, sending results, and storing results when done. Back in the day, we waited for floppy drives to poke along at glacial speeds. Today, the drives and arrays are faster, the network speeds are faster, and the processor is still often waiting for I/O to load or transmit something. We've all seen it: the computer running slow while the HDDs thrash. It's all happening MUCH faster than it used to, but I/O is still the bottleneck, more often than not.


Server manufacturers have been holding off, but are now announcing capabilities for new systems. HP, IBM, and Oracle (Sun) are on the verge of launching their latest systems. The goals essentially come down to more processing power using less energy. I'm most impressed with the new IBM systems. The Power 795 offers 256 processor cores (4X the 595 model) in a 42U rack mount. This can effectively provide thousands of logical partitions (VMs). But that figure is supposed to quadruple again next year. Is it worth your wait? I suppose it depends on your needs and time frame. http://www-03.ibm.com/systems/power/hardware/795/specs.html


With virtualization, more cores is better most of the time, and great for the few applications that can really take advantage of multi-core technology. However, the biggest hurdle for virtualization is I/O: network and storage. Many times I see overkill on the server hardware, yet performance issues persist due to bottlenecks with storage and/or networking, with too small a pipe back to the SAN, or simply too many VMs on a SAN for the type of SAN implemented. It comes down to this: there are fewer really good system and storage experts. I rarely see anyone who properly knows how to spec out server hardware, and it's ten times worse when it comes to getting an enterprise storage solution done correctly. For most applications, simple is better. Depending on the situation, it can be better to use several cheaper, clustered, smaller machines than to try to put everything on one big machine, and some workloads, such as databases and other services with high I/O, need to stay on dedicated iron rather than in a VM. So in summary, the answer I use a lot: it depends.
