The pros and cons of tower, rack, and blade servers

Scott Lowe goes back to basics with this overview of tower, rack, and blade servers to help you make an educated decision about the best option for your data center.

There are three main choices when it comes to buying a new server: tower, rack, or blade. Here are the pros and cons of each kind of server, along with some of my experiences with each one.

Tower servers

Tower servers seem dated and look more like desktops than servers, but they can pack a punch. That said, if you have a lot of servers, you're probably not using a bunch of towers: they take up a lot of space and are tough to physically manage, since you can't easily stack them on one another. As organizations grow and move to rack servers, conversion kits can sometimes be purchased to turn a tower server into a rack-mount server.

As that implies, tower servers are found more often in smaller environments than anywhere else, although you might find them running point solutions in larger shops.

Tower servers are generally on the lower end price-wise, although they can expand pretty decently, and a fully expanded configuration can become really expensive.

Tower servers take up a lot of space and require either individual monitors, keyboards, and mice or a keyboard, video, mouse (KVM) switch that allows several machines to be managed with a single set of equipment. In addition, cabling can be no fun, especially if you have a lot of network adapters and other I/O needs: you'll have cables everywhere.

I don't buy a lot of tower servers these days, but they still have a place. My most recent tower server purchase was to serve as my backup system running Microsoft Data Protection Manager 2010.

Rack servers

If you run a data center of any reasonable size, you've probably used a lot of industry-standard 19-inch-wide rack servers. Sized in rack units, or "U" (one U is 1.75 inches of vertical rack space), rack servers range from 1U "pizza boxes" to 5U, 8U, and beyond. In general, the bigger the server, the more expansion opportunities are available.
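
To make the sizing arithmetic concrete, here is a minimal sketch in Python; the 42U full-height rack is an assumption for illustration, since nothing in the article specifies a rack size:

    # Rough rack-capacity arithmetic. The 42U full-height rack is an
    # assumption; the article does not name a rack size.
    RACK_UNIT_INCHES = 1.75   # one "U" of vertical rack space

    rack_height_u = 42        # assumed full-height rack

    for server_size_u in (1, 2, 5, 8):
        fits = rack_height_u // server_size_u
        print(f"{server_size_u}U server: "
              f"{server_size_u * RACK_UNIT_INCHES:.2f} inches tall; "
              f"{fits} fit in a {rack_height_u}U rack")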

Rack servers are extremely common and make their home inside standard racks alongside other critical data center equipment such as backup batteries, switches, and storage arrays. Rack servers make it easy to keep things neat and orderly, since most racks include cable management of some kind. However, rack servers don't really simplify the cabling morass; you still need a lot of cabling to make everything work -- it's just neater. I once worked in a data center in which I had to deploy 42 2U Dell servers into three racks. Each server needed dual power cables; keyboard, video, and mouse cables; and six (yes, six) network cables, with each of six colors denoting a specific network. It was a tough task to keep the cabling under control, to put it mildly, but because everything was racked, the built-in cable management made it easier.
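
Tallied up, the cable count in that deployment looks like this; a minimal sketch using only the figures given above:

    # Cable arithmetic for the 42-server deployment described above.
    servers = 42
    power_cables = 2      # dual power per server
    kvm_cables = 3        # keyboard, video, and mouse
    network_cables = 6    # six color-coded networks

    per_server = power_cables + kvm_cables + network_cables  # 11 per server
    total = servers * per_server                             # 462 overall
    print(f"{per_server} cables per server, {total} cables in total")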

Like tower servers, rack servers often need KVM capability in order to be managed, although some organizations simply push a monitor cart around and connect to video and USB ports on the front of the server so that they don't need to worry about KVM.

Rack servers are very expandable; some include 12 or more disks right in the chassis and support for four or more processors, each with multiple cores. In addition, many rack servers support large amounts of RAM, so these devices can be computing powerhouses.

Blade servers

There was a day when buying individual blade servers meant trading expansion possibilities for compactness. Although this is still true to some extent, today's blade servers pack quite a wallop. I recently purchased a half-height Dell M610 blade server with 96 GB of RAM and two six-core processors.

There is still some truth to the idea that blade servers trade away expansion compared with tower and rack options. Most tower and rack servers, for example, offer pretty significant expansion when it comes to PCI/PCI Express slots and additional disk drives, while many blade servers are limited to two to four internal hard drives. That said, organizations that use blade servers are likely to have shared storage of some kind backing the blade system.

Further, when it comes to I/O expansion, blade servers are a bit limited by their lack of expansion slots. Some blade servers boast PCI or PCI Express slots, but for most, expansion is achieved through specially designed mezzanine cards. In my case, the Dell M600 and M610 blades have three mezzanine positions. The first consists of dual Gigabit Ethernet adapters; the remaining two are populated based on organizational need. Our blades carry a second set of Gigabit Ethernet adapters in the second position and Fibre Channel adapters in the third. If necessary, I could also use four-port mezzanine cards in some configurations. So although a blade server doesn't have quite the I/O selection of other form factors, it's no slouch, either.

When raw computing power and server density are the key drivers, blade servers meet the need. For example, in my environment, I have a 10U Dell M1000e blade chassis that can support up to 16 servers, so each server uses the equivalent of 0.625U of rack space. On top of that, the blade chassis holds four Gigabit Ethernet switches and two Fibre Channel switches, which saves additional rack space since I don't need to rack-mount those devices to support different connectivity options. The chassis also has a built-in KVM switch, so I don't need to buy a third-party unit and cable it up.
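
That density advantage is easy to quantify; here is a minimal sketch based on the chassis figures above:

    # Rack-space arithmetic for the 10U, 16-blade chassis described above,
    # compared with the same server count in standalone 1U rack servers.
    chassis_height_u = 10
    blades_per_chassis = 16

    u_per_blade = chassis_height_u / blades_per_chassis
    print(f"Rack space per blade: {u_per_blade}U")       # 0.625U each
    print(f"16 blades: {chassis_height_u}U of rack space "
          f"vs 16 x 1U = 16U as rack servers")           # before switch/KVM savings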

Speaking of cabling, a blade environment generally has much less of it than tower or rack environments since a lot of the connectivity is handled internally. You'll end up with a neater server room as a result.

Another point: adding a new server consists of simply sliding it into an available slot in the chassis. There is no need to rack a new server and deal with a bunch of new cabling. The flip side of that density is heat: blade chassis can put out a lot of it, which makes heat dissipation a challenge.

From a cost perspective, blade servers require some initial infrastructure, such as the chassis, so the upfront cost is often higher than for servers of other types.

Bottom line

If you need one or two servers, a tower solution probably makes sense. If you need three to 24 servers or massive scalability, then rack servers are for you. When you need more than 24 servers, I advise you to consider a blade solution to meet your data center needs.

About

Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive w...

15 comments
dlovep

Everyone wants a supercar, a supercomputer, a super-rich dad (just in case)... so if price doesn't matter, then none of it really matters. Honestly speaking: if you don't need a rack in your server room, tower is the choice; if your server room is big enough and you still don't want a rack, tower is the choice. If your company is happy to pay for racks, air conditioning, cable mounts, etc., and gives you plenty of time to manage your cabling and KVM, then tower is not your choice. Rack is good for maintenance by us, bad for the company: you need a place dedicated to it, and don't tell me you can mount wheels on the bottom and move it once you see the giant water pipe coming out of the back and the thousands of Cat-5/6 cables. If your company or department never really "moves," this is the best choice. A blade server means your company has invested quite a large amount of money in this department, and your server room must be reasonably big compared with others; if you put a blade server on or under your desk you won't need a server room, but you won't be using its full power either.

DaPearls

Unless you are running a business out of your house, there is no need for towers in any business. Get yourself a small rack with proper UPS, cable, and power management and put in a few rack-mounted servers. This gives you a clean, secure environment. No more putting your rack servers underneath (or on top of) a desk in the office or squeezing them into an electrical closet. Also consider PaaS and SaaS: most SMBs can take advantage of the cloud to eliminate the capital investment in hardware, backups, electrical, etc., AND they don't have to deal with lifecycle issues.

NCristensen

In our DC we have around 250 servers, all in racks. We decided against blades in order to avoid being locked in by a particular vendor. Both the cable clutter and the keyboard/monitor cart belong to the distant past, since we went over to Cat5 KVM IP switches. While RDP is the prime tool for remote access, iLO is still far from beating the performance that my KVM IP switch provides!

Jesper_L

Really, is any serious company still using KVM switches today that are not based on Cat5 technology?

PassingWind

Tower servers can be managed remotely, via CLI or GUI, using secure standard tools. Tower servers can be located near their clients, with no more power or cooling issues than a desktop, saving a fortune on power supply, cooling, and data wiring back to a central cabinet. Remote tower servers can host one another's remote backup copies on a scalable 'buddy server' basis as the enterprise grows. Instead of everything converging on the central hub of your network, how about everything heading out to the periphery where the work is done and stored? Only the management terminal sits at the hub. Really peaceful; helps concentration... Horses for courses. If it fits this decentralized, scalable model, costs tumble.

donallsop

Depending on where the server is located, that may be an issue. Rack servers are LOUD.

dfrey_us

You might want to give a nod to the compactness that can be achieved using a rack server and virtualization... Instead of racking several servers, you can just rack one modest-size server and cut it up into several VMs. This way you can consolidate your data center down quite a bit.

rlyonsmd

My prior organization is considering blade servers versus rack servers. One thing that isn't mentioned in this article is the lifecycle. While both have the same hardware lifecycle (speaking from Dell experience, that is five years from date of purchase), you have to take into account equipment and O/S improvements over that time, along with application development, and see where you will be in five years. For example, if you need to replace a particular application in three to four years and want to use the latest O/S, will the blade servers be supported? If you have rack servers for applications, all you have to do is replace that particular server with a new one, load the newer O/S, and migrate the application. If the blade server doesn't support the newer O/S, you are stuck until it does, or you're moving back to a rack server based on immediate application-upgrade needs. While the statement of 3-24 servers for rack and 24 or more for blade seems like a guideline in the author's opinion, you need to take into account various factors when considering any switch or buildout.

GraemeLeggett

Once you buy a rack cabinet, besides hiding the tentacles of the cable monster, you get a modest degree of physical security that can be useful for a small company with limited office division. The UPS switches are tucked out of the way and no-one accidentally unplugs some peripheral. The backup cartridges are the other side of a locked door until the nominated person takes them away to be put in a safe. And if you buy a full height version, no-one will put cups of tea or a potted plant (that needs watering daily) on it.

PassingWind

Quote: "there is no need for towers in any business"

The 'server room' with everything in one place defeats scalability and increases the risk of a disaster. (Who put the server room under the water tank? Whose server room got hit by lightning?) The policy of dispersed, minimalist resources has its place where resilience really matters.

The only serious problem with dispersal is that vendors don't make as much profit out of it. It is indistinguishable from low-margin, low-hassle, high-volume consumer deployment, and to protect their more profitable business they insist small towers are only suitable for that market. But you get simple, robust 'fit and forget' quality hardware at a good price. If your design is truly scalable you can add (or lose) another department without impacting the rest of the enterprise, and modern software has been developed in just such an environment. So where several small, simple, dispersed servers at a few hundred dollars each can meet the need, that can be a better solution than one big, expensive, complicated server system in a central location.

Horses for courses. If you want a regular supply of small issues to keep your support system fresh, use small independent dispersed servers working cooperatively for the enterprise. If you want the next inevitable catastrophe after everybody has forgotten the last one, put all your eggs in one very safe basket and wait for the unforeseeable event.

ndary007

One of the issues to deal with in a heterogeneous environment is how to remotely access all vendors in a properly managed, secure way... For that you can use the AccessIT access management system, which I found can give you access from a single interface to all vendors (iLO, iDRAC, ILOM, RDP, VNC, SSH, Telnet, and many more), and you can add more vendors to the system yourself. http://www.minicom.com/kvm_accessit.htm#

mkoelsch

Any computer, server or not, can be noisy. You have to keep them cool, and with all the drives and processors some servers have, you are going to need fans. It is reality. The same can be said of a similarly equipped tower if it is being properly cooled.

Spitfire_Sysop

If you cluster the blades into a huge chunk of virtual hardware and then run VMs on top of that, you can simply assign how much hardware is used for each "server" without ever caring where it is actually running. If a blade goes down, you should be able to fix it while keeping all of your VMs up, in a perfect world.

jhoward

Not that there is any inherent reason to distrust coworkers, but the closed door on the rack does give a sense of security, real or otherwise. In a smaller office environment, even a small rack helps to contain the usual sprawl of equipment and invites you to clean up the spaghetti mess of cabling usually found in these environments. I have to admit, though, I nearly fell out of my seat about the plant thing. I worked at a company that was mostly retail a while back (tech was not their strong suit), and their office had this old mammoth custom-built tower from the late '90s that was about the size of a small rackable cage by today's standards. Needless to say, the office manager at the time kept this "hideous eyesore" hidden underneath a jungle of long-leafed plants that, as you said, needed constant watering.

blackepyon01

I have an old server tower on wheels from the 486 era (Full size AT MB, massive PS) that I've converted to mount a full size ATX MB and rebuilt the PS (large size case) with the guts from a modern power supply. I use it for my home machine which is also my graphics workstation. Lots of room for my dual video cards, RAID array, DVD drive, DVD burner, card reader, floppy (for legacy, I like to fart around with old machines), removable drive bay for backups, etc. Cools very nicely too. It sits beside my desk.
