
10 things to look for in a server chassis

A lot of server headaches, like overheating, service interruptions, and slow performance, can be caused by a chassis that doesn't meet your needs or suit your environment. You can avoid a host of problems by taking these 10 factors into consideration when selecting a server chassis.


It's easy to roll out the wrong server. That's why it happens all the time. Short-sighted administrators, budget-conscious managers, and even tight-fisted clients all contribute to the problem.

Many server issues -- whether the trouble is expansion limitations, overheating, service interruptions, or slow performance -- can be traced directly to the server chassis. To reduce the odds of having to perform expensive upgrades and even premature system replacements, carefully consider a server's chassis requirements up front.

Here are 10 things to look for in a server chassis. Just to be clear, by chassis I mean the server's actual case and its base components.

#1: Size

The first, and most obvious, chassis consideration is size. Will you be mounting the server in an existing rack or will the server be free-standing?

Occasionally, system builders will pack a server, with all its accompanying peripherals, in a standard mid-tower case. That's a bad idea. At 17 to 18 inches high (and six to eight inches wide), most mid-tower ATX cases support only three to six 5.25-inch bays and a pair of 3.5-inch bays. Mid-tower ATX chassis typically offer seven expansion card slots but accommodate only a standard PS/2 form-factor power supply.

Full towers, at 20 to 24 inches tall (and the same six to eight inches in width), typically add support for four to nine 5.25-inch bays and six to twelve 3.5-inch bays, essentially doubling the number of hard disks that can be installed. Although full-tower ATX cases usually support the same number of expansion card slots as a mid-tower chassis (seven), full-tower cases can make use of PS/2 or larger power supplies.

Rack-mount components, meanwhile, typically measure 19 inches wide. Rack height is measured in units (U): a standard 1U rack-mount server is 1.75 inches high, while a 2U server measures 3.5 inches in height.

1U systems typically support one or two CPUs, one or two riser cards, two or three expansion slots, and usually up to four IDE, SATA, or SCSI drives. 2U chassis, meanwhile, routinely add support for up to six or eight hard disks. Most 4U systems support up to four CPUs, eight or more hard drives, and six or more PCI/E expansion slots.
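The rack-unit arithmetic above is simple enough to sketch. This is a minimal illustration assuming the standard unit height of 1.75 inches; the function names are my own, not part of any spec.

```python
# Rack-unit math, assuming the standard EIA-310 unit height of 1.75 inches.
RACK_UNIT_INCHES = 1.75

def chassis_height_inches(rack_units: int) -> float:
    """Height of a chassis of the given U size, in inches."""
    return rack_units * RACK_UNIT_INCHES

def units_fitting(rack_height_inches: float) -> int:
    """How many 1U slots fit in a rack of the given usable height."""
    return int(rack_height_inches // RACK_UNIT_INCHES)
```

A standard full-height rack offers 42U of usable space, which works out to 73.5 inches.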

#2: Power

Power is the next consideration. The more components (CPUs, PCIe cards, network adapters, hard disks, etc.) a server includes, the more electricity it requires.

Exactly how much power a server requires is an inexact science, but full-tower servers typically require 300- to 800-watt power supplies. 1U rack units typically feature 350- to 600-watt power supplies. 2U models, meanwhile, often include support for twin 600- to 750-watt PSUs.

Be sure you select a power supply that can meet your anticipated needs. For help calculating power supply requirements, check out David Gilbert's TechBuilder article.
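A rough estimate is straightforward: sum the draw of each component and add safety headroom. The wattage figures below are illustrative assumptions, not vendor specs; check each component's datasheet for real numbers.

```python
import math

# Hypothetical per-component draw estimates, in watts.
TYPICAL_DRAW_WATTS = {
    "cpu": 95,                  # per socket
    "hard_disk": 15,            # per spindle
    "network_adapter": 10,
    "pcie_card": 25,
    "motherboard_and_ram": 80,
}

def recommended_psu_watts(components: dict, headroom: float = 0.3) -> int:
    """Sum estimated draw per component, add headroom (default 30%), round up."""
    total = sum(TYPICAL_DRAW_WATTS[name] * count
                for name, count in components.items())
    return math.ceil(total * (1 + headroom))
```

For example, a dual-CPU box with four disks, two NICs, and one expansion card estimates out to a 488-watt supply under these assumed figures, which lands comfortably in the ranges cited above.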

#3: Speed

At first glance, a server chassis doesn't appear to have much impact on the backplane, on performance, or on the speed with which applications operate and requests are answered. But if a chassis doesn't support multiple network adapters, a sufficient number of CPUs, or an adequate memory configuration, the server will become a bottleneck. When selecting a server chassis, be sure to identify a model that supports the number of CPUs, disks, and network adapters necessary to meet projected workloads and growth.

#4: Tool-less assembly

Review sales literature and technical specifications to confirm that the chassis you're buying supports tool-less replacement of fan modules, hard disk trays, and CD or DVD drives. When you're wrestling with replacing a failed component or adding a new drive, don't complicate the process by having to hunt down tools.

Even when you have the tools on hand, maneuvering them within the tight, electrified spaces of a crowded server rack is risky -- and no longer necessary. Most server models now feature tool-less assembly for commonly replaced components. Take advantage of this benefit by insisting upon tool-less components whenever possible.

#5: Ventilation

Servers require considerable power to fuel all the services they provide -- so they generate considerable heat. For the server to keep working properly, that heat must be dissipated.

Specify that full-tower models feature front and rear case fans as well as side fans and vents. 1U server systems should have at least four 56mm cooling fans for the system's hard disks, processor(s), and expansion cards. Power supplies should have their own cooling fans (typically twin 40mm or 56mm in 1Us). 2Us usually add a fifth 56mm (or larger) fan for cooling internal components.

When specifying server chassis requirements, look for models that include hardware-based support to alert you of low fan RPMs and failures. Promptly catching cooling fan errors is critical in helping identify the failure and preventing damage to a server's sensitive electrical components.
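The alerting logic itself is simple, whatever hardware exposes the readings. This is a minimal sketch; the RPM floor is a made-up figure, and in practice the readings would come from IPMI, the BMC, or (on Linux) the hwmon sysfs interface rather than a hand-built dict.

```python
# Hypothetical RPM floor for a chassis fan; real thresholds depend on the fan.
LOW_RPM_THRESHOLD = 2000

def failing_fans(readings: dict, floor: int = LOW_RPM_THRESHOLD) -> list:
    """Return the names of fans spinning below the floor (0 RPM = failed)."""
    return [name for name, rpm in sorted(readings.items()) if rpm < floor]
```

Feeding it a snapshot like `{"cpu_fan": 4200, "hdd_fan": 0, "rear_fan": 1800}` flags both the dead disk fan and the rear fan that is merely slowing down -- catching the latter before it becomes the former is the whole point of RPM alerting.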

#6: OS compatibility

It should go without saying, but incompatibilities do exist. Always confirm that the server chassis you select has hardware that's compatible with the operating system the unit will run. Microsoft has expanded hardware support to cover nearly every major server chassis and component, including most common motherboards, CPU and memory configurations, and disk controllers.

However, neither UNIX nor Linux boasts the same compatibility. If your open source friendly organization intends to leverage new equipment (particularly just-released components debuting freshly authored drivers) to power UNIX- or Linux-based systems, confirm that the hardware is certified for use on the operating system in question. Few things are more frustrating than rolling out a new box under a tight deadline only to discover its RAID controller isn't compatible with the OS. Avoid those headaches. Research a chassis' compatibility (for every component) before you place the order.

#7: Hot-swappable bays

Unless you enjoy working evenings and weekends, purchase only server chassis boasting hot-swappable disk drives. In most midsize and large enterprises, powering down a server during business hours to add disks simply isn't an option.

Where you can, opt for hot-swappable fans and optical drives, too. All components fail, so the more hot-swappable options you have, the less likely you are to have to wait until nonbusiness hours to replace a failed device.

#8: Expandability

It's easy to get caught up in the whirlwind that often accompanies an urgent new need within an organization. Thoughts can quickly turn to ensuring that a new box -- needed quickly to power a new SQL-based sales tool, for example -- adequately addresses the new tool's service requirements. Easily lost in conversation is consideration for any growth the sales department might experience.

For example, if the new server is designed to support 50 field agents, but the new initiative proves so successful that suddenly 100 agents are pounding the application (as well as a new groupware program loaded to improve communications in the field), the box can quickly become overwhelmed.

When scoping the proper size (as reviewed in #1), pay particular attention to the memory configuration and number of CPUs and hard disks a chassis supports. It's much more economical to add a pair of memory modules, a new CPU, and a pair of hard disks to an existing chassis than it is to purchase yet another new server box.
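That headroom question can be reduced to a simple check against the chassis' ceilings. All the numbers below (per-user resource costs, chassis limits) are hypothetical illustrations, not sizing guidance.

```python
def supports_growth(projected_users: int, per_user_need: dict,
                    chassis_max: dict) -> bool:
    """True if the chassis' maximum specs cover the projected user count."""
    return all(projected_users * need <= chassis_max[resource]
               for resource, need in per_user_need.items())
```

With an assumed 0.5 GB of RAM and 0.05 disks per user, a chassis maxing out at 64 GB and eight disks covers the original 50 or even 100 agents, but doubling again to 200 blows past the memory ceiling -- exactly the scenario the sales-tool example describes.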

#9: Redundancy

Power supplies fail. As system features and speeds increased and prices dropped, something had to give. The power supply is often among the shortchanged. Anticipate power supply failures by specifying that a chassis include two PSUs. Should one fail, being able to quickly transfer to the other, already-installed unit will buy you precious time to track down a replacement for the failed PSU (which can then assume the backup responsibilities).

The same is true with network interface cards. Having seen numerous network outages following strong thunderstorms (whose shocking lightning bolts seem particularly attracted to NICs connected to business-class DSL service), I routinely build server chassis with two NICs. Should one fail, you don't even need to crack the case to restore network connectivity; simply move the Ethernet cable to the backup port and update the server's software configuration, and service returns to normal.
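On Linux, you can go a step further than manually moving the cable: OS-level NIC bonding in active-backup mode fails over to the standby port automatically. This is a minimal sketch using iproute2; the interface names (eth0/eth1) and the address are placeholders for your own hardware and network, and the commands require root.

```shell
# Create a bond device in active-backup mode: one NIC carries traffic,
# the other stands by and takes over if the active link fails.
ip link add bond0 type bond mode active-backup

# Interfaces must be down before they can be enslaved to the bond.
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring up the bond and give it the server's address (example from TEST-NET-1).
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
```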

#10: A lock

Many smaller businesses place full-tower and even rack-based systems in insecure areas, often public spaces where all employees enjoy unfettered access. Guard against inadvertent -- and even intentional -- shutdowns and reboots by purchasing server chassis that boast lockable panels. By locking the panel providing access to a server's power and restart switches, you can eliminate accidental restarts caused by a well-meaning cleaning crew or a disgruntled employee looking for an early start to the weekend.

11 comments
longwayoff

As one of the folks who has had to learn about servers by guess & by golly instead of training, I found this very helpful. We have both M$ & not servers; the parts/OS compatibility issue was a nice heads-up. Thanks! Lynne

Chris.Reynolds

For a sole-charge neophyte at an SMB, trying to get up to speed in multiple areas very quickly, review articles like this are a real help. Kaitoa, well done.

pcrx_greg

This article is aimed at people that believe you can build a "powerful" server using a Celeron or Athlon processor and 512MB of RAM. You can build a server using these components, just don't expect very good performance. A good chassis is important to avoid performance issues and to allow for future expansion since server buyers usually expect their server to long outlast any desktop or workstation.

CG IT

I use 2U and 4U rackmounts. Depends on how I'm feeling that day how much room I need in the rack and how much $$ I got. if O/S was mentioned in the article....I dunno ......whoever wrote the article ought to have looked at the case mod article. Someone stuck a comp in a basketball.... that should have been a clue as to O/S compatibility with cases....

marketingtutor.

This is what comes from the type of people that need Microsoft to "certify" that they are a professional. What the heck is this rubbish. Anyone that can stack those little ABC blocks can figure this out, more so anyone that A. Knows what a "Server" is. B. Has any use for a "Server". C. Knows what a power supply is. Sorry, I'll try not to be TOOO rude, but this is absolute contrived nonsense that is just put out to come up with some content. This is the typical rubbish one would expect from someone brandishing a MCP title. What's next, we get the A+ certified guys coming in here and explaining how to be careful when handling static sensitive components and how to attach our ground straps, followed by the 10 paragraphs on "How to Boot Up your 'Personal Computer'". My apologies to you if you were honestly trying to put out a good article. In any case this really is below TR standards. I have been considering unsubscribing from TR, and now I certainly know what I will be doing after I post this. Ugh, what a waste of my time this article was. After this, I think I need to go take a long hot shower.

cparris

With all due respect, the M$ certifications are like having money in the bank. I, for one, have been looked over for a few job positions before simply because I didn't have the $400 for the MCSE certification test. I'll agree that some of the M$ certified people actually are, I hate to say it, idiots. Please keep in mind that TR is here for the experienced, knowledgeable professional and for the one that got stuck in the "computer person" role simply because they had enough common sense to check the power because the darn thing wouldn't come on. I admit that this was a little basic for me; however, for those that have no technical training at all it is a good "heads up" article. Finally, if you think you can write a better article about something, please do so instead of complaining about the content. Last time I checked this was a cooperative information source.

sekeris

We have this proverb in Greece; my modest translation in this case would be that certifications are the only way to prove a minimum amount of knowledge, which happens to be the criterion for getting a job. Then you have to prove that you're worth the job, because certifications only count when coupled with experience -- meaning that if you have the experience, you can be a smart ass and get the certification.

cparris

How many people do you know that attempt to get a job in this field before they are finished with school? How much experience do they have? A year tops? Or do you think there's more than that available to a person that early in their career? Yes, there are the drudgery jobs, but what about in their particular field? Oh, and last time I checked, being a smart ass wasn't taken as payment for the certification tests.

knura

The author makes very good points about what to look for in a server chassis. However, I am wondering how OS compatibility is connected in choosing a server chassis itself. The point about drivers for components such as motherboard, add on cards etc. is a separate issue and IMO not relevant to choice of chassis. So far (in my experience), a typical server grade chassis has accommodated all the components (motherboard, CPU, memory, RAID card etc.) that are compatible with my OS platform.

JodyGilbert

Have you ever been stuck with a server chassis that created problems or required an upgrade or system replacement? What other considerations would you add to this list?

rhaf

Contrary to other major vendors, we at Fujitsu-Siemens spend millions of dollars designing and testing various housings and components in different combinations. We are regulated not only by strict government laws and market requirements but, more importantly, by customer input. This article does touch on a few important subjects, namely heat generation and related ventilation issues. However, who is the real audience for this story? It can't be the major companies, as they couldn't be bothered with these "trivial" subjects. Unfortunately for them, this is also why they make many mistakes in their choices and spend far too much money due to those mistakes. So I must assume the article is intended for small and medium companies.

When deciding what kind of server you want, might I suggest that the person responsible for this choice first answers the following questions: "What do I need the server for? File/print serving, running a database, etc. What software will I need for this task (OS and application software)?" Knowing what you need, you can determine what specifications the required server needs to have. This is where it becomes tricky and gets to a subject which is not covered by this article: server sizing! Does the target audience for this article have any idea how to size, for instance, an Exchange or SQL server?

So now a potential server owner/buyer should have an idea what he or she needs. Then the next question becomes most important: "What will it cost my company if the server we use is not available (for whatever reason) for 1 hour, 1 day, or 1 week?" This covers a number of costs, like loss of production time (depending on how many users are connected, this can become a high cost in itself), cost for transport and repair or replacement, and the cost to your standing with your customers -- and this last one is a major point to think about! It can cost you your business if your server fails!

These answers together, and possibly a few more, form the basis of the level of "uptime" (the time that the server has to be productive for you to run a business successfully) required from your network in general and your server in particular. You now need to decide on your server protection strategy: backup to tape, backup to disk, redundant power supplies, redundant fans, hard disk RAID strategy, etc. How about server insurance in case of fire or theft? In case of a defect -- and you built your own server or bought a cheap "home-made" one -- can you still get the required components, and at short notice? If you need to take the unit in for repairs, will you get a replacement unit to cover the repair time? If so, how much time and effort will it cost to reinstall the temp server so you are productive again? There are so many issues concerning the choice of a server. The subjects concerning the chassis are just a very small part of the choice, if important at all.

I can give you one example of a local customer where things went wrong. The customer is a small transport company employing some 20 people and running, I believe, some 5 lorries (trucks). They decided that a server is no different from any other computer and bought a pretty hefty "home-made" tower server. The first problem they had was that they run dedicated transport/logistics software for which no really good sizing info was available, and also they had no idea how much computer performance they needed overall. They decided to send everybody home early on the Friday afternoon so they could start running the backup, which would take 12 hours. The plan was to install the new server at the beginning of the weekend and use the Sunday afternoon and evening to restore the data to the new server hardware. This meant they could only start testing everything on the Monday morning, during what are normal production hours.

Although at first all seemed well, on Monday afternoon the server "crashed" and from then on kept crashing at different intervals for no obvious reason. The local supplier supplied new hardware, but after losing the whole of Monday's production time, the problem started recurring. On Wednesday, we were called in and supplied one of our purpose-designed and purpose-built servers, and on Thursday morning the customer was up and running again. The difference in price between the two servers was in the region of some 10-14,000 dollars, which is a lot of money whichever way you look at it. I later asked the company director what he estimated the total cost of the downtime had been. He said he could not yet put a real figure on it, but it had meant that, due to the total unavailability of the transport and logistics software and data, 10 people had been told to stay at home on full pay. The five trucks, costing more than a house apiece, were standing idle at various locations, some holding cargo with nowhere to go; cargo had been left at customer sites and had perished (they transport fruit and flowers), and this not only meant hefty financial penalties from customers, but also the fact that customers transferred to other transport companies! So finally the director indicated that he could have given every person in the company the 14,000-dollar difference in purchase price and still bought the more expensive but working server, for far less money than they had lost now! A very, very expensive lesson to learn.

So again: a server chassis is the least of the troubles. Focus on your requirements and business needs, and let us professionals focus on designing and configuring the server hardware to support you. You will be able to sleep a lot better!