It's easy to roll out the wrong server. That's why it happens all the time. Short-sighted administrators, budget-conscious managers, and even tight-fisted clients all contribute to the problem.
Many server issues — whether the trouble is expansion limitations, overheating, service interruptions, or slow performance — can be traced directly to the server chassis. To reduce the odds of having to perform expensive upgrades and even premature system replacements, carefully consider a server's chassis requirements up front.
Here are 10 things to look for in a server chassis. Just to be clear, by chassis I mean the server's actual case and its base components.
#1: Size

The first, and most obvious, chassis consideration is size. Will you be mounting the server in an existing rack, or will it be free-standing?
Occasionally, system builders will pack a server, with all its accompanying peripherals, into a standard mid-tower case. That's a bad idea. At 17 to 18 inches high (and six to eight inches wide), most mid-tower ATX cases support only three to six 5.25-inch bays and a pair of 3.5-inch bays. Mid-tower ATX chassis typically have seven expansion card slots but accommodate only a standard PS/2 power supply.
Full towers, at 20 to 24 inches tall (and the same six to eight inches wide), typically support four to nine 5.25-inch bays and six to twelve 3.5-inch bays, essentially doubling the number of hard disks that can be installed. Although full-tower ATX cases usually offer the same seven expansion card slots as a mid-tower chassis, they can accommodate PS/2 or larger power supplies.
Rack-mount components, meanwhile, typically measure 19 inches wide. A standard 1U rack-mount server is 1.75 inches high, while a 2U server measures 3.5 inches in height.
1U systems typically support one or two CPUs, one or two riser cards, two or three expansion slots, and usually up to four IDE, SATA, or SCSI drives. 2U chassis, meanwhile, routinely add support for up to six or eight hard disks. Most 4U systems support up to four CPUs, eight or more hard drives, and six or more PCI/E expansion slots.
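The rack-unit arithmetic above is easy to sketch. A minimal Python helper (the 1.75-inch figure per U is the standard definition; nothing else is assumed):

```python
# Convert rack units (U) to physical height; 1U is defined as 1.75 inches.
RACK_UNIT_INCHES = 1.75

def chassis_height_inches(rack_units: int) -> float:
    """Return the nominal height of a rack-mount chassis in inches."""
    return rack_units * RACK_UNIT_INCHES

if __name__ == "__main__":
    for u in (1, 2, 4):
        print(f"{u}U = {chassis_height_inches(u)} inches")
```

Keep in mind that usable internal volume is always a bit less than the nominal figure once rails and airflow clearances are accounted for.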
#2: Power supply

Power is the next consideration. The more components (CPUs, PCIe cards, network adapters, hard disks, etc.) a server includes, the more electricity it requires.
Exactly how much power a server requires is as much art as science, but full-tower servers typically require 300- to 800-watt power supplies. 1U rack units typically feature 350- to 600-watt power supplies. 2U models, meanwhile, often include support for twin 600- to 750-watt PSUs.
Be sure you select a power supply that can meet your anticipated needs. For help calculating power supply requirements, check out David Gilbert's TechBuilder article.
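A rough power budget can be sketched by summing the peak draw of each component and adding headroom so the PSU never runs at its limit. The wattage figures below are illustrative assumptions, not measured values; treat this as a starting point, not a substitute for a proper PSU calculator:

```python
# Rough server power-budget estimate. All wattages are illustrative
# assumptions; check your components' actual specifications.
COMPONENT_WATTS = {
    "cpu": 95,          # per CPU, a typical server TDP (assumed)
    "hard_disk": 15,    # per spinning disk (assumed)
    "pcie_card": 25,    # per expansion card (assumed)
    "ram_module": 5,    # per DIMM (assumed)
    "motherboard": 50,  # board, fans, and miscellaneous (assumed)
}

def estimate_psu_watts(cpus=1, disks=2, pcie_cards=1, dimms=4, headroom=0.3):
    """Sum component draw and add headroom so the PSU isn't run at its limit."""
    total = (COMPONENT_WATTS["motherboard"]
             + cpus * COMPONENT_WATTS["cpu"]
             + disks * COMPONENT_WATTS["hard_disk"]
             + pcie_cards * COMPONENT_WATTS["pcie_card"]
             + dimms * COMPONENT_WATTS["ram_module"])
    return round(total * (1 + headroom))

if __name__ == "__main__":
    # A dual-CPU, four-disk configuration under these assumptions:
    print(estimate_psu_watts(cpus=2, disks=4, pcie_cards=2, dimms=8))
```

The 30 percent headroom is a common rule of thumb; redundant-PSU chassis should be sized so a single supply can carry the full load.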
#3: Performance

A server chassis, at first glance, doesn't appear to have much impact on the backplane, performance, or the speed with which applications operate and requests are answered. But if a chassis doesn't support multiple network adapters, a sufficient number of CPUs, or an adequate memory configuration, the server is going to become a bottleneck. When selecting a server chassis, be sure to identify a model that supports the number of CPUs, disks, and network adapters necessary to meet projected needs and growth requirements.
#4: Tool-less assembly
Review sales literature and technical specifications to confirm that the server chassis being purchased includes tool-less fan modules, hard disk trays, and CD or DVD drive bays. When you're wrestling with replacing a failed component or adding a new drive, don't complicate the process by having to locate tools.
Even when you have the tools on hand, maneuvering them within the tight, electrified spaces of a crowded server rack is risky, and it's no longer necessary. Most server models now feature tool-less assembly for commonly replaced components. Take advantage of this benefit by insisting on tool-less components whenever possible.
#5: Cooling

Servers require considerable power to fuel all the services they provide, so they also generate considerable heat. To keep a server working properly, that heat must be dissipated.
Specify that full-tower models feature front and rear case fans as well as side fans and vents. 1U server systems should have at least four 56mm cooling fans for the system's hard disks, processor(s), and expansion cards. Power supplies should have their own cooling fans (typically twin 40mm or 56mm in 1Us). 2Us usually add a fifth 56mm (or larger) fan for cooling internal components.
When specifying server chassis requirements, look for models that include hardware-based support to alert you to low fan RPMs and outright failures. Promptly catching cooling fan errors is critical to identifying the failure and preventing damage to a server's sensitive electrical components.
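On Linux, fan tachometer readings are typically exposed through the hwmon sysfs interface (files such as `fan1_input`, reporting RPM). A minimal monitoring sketch, assuming that interface is present and that 1,000 RPM is a sensible alert threshold for your fans (both assumptions; tune them for your hardware):

```python
# Minimal fan-RPM check against the Linux hwmon sysfs interface.
# The threshold and sysfs path are assumptions; tune both for your hardware.
from pathlib import Path

HWMON_ROOT = Path("/sys/class/hwmon")
MIN_RPM = 1000  # assumed alert threshold; consult your fan's spec sheet

def check_fans(hwmon_root=HWMON_ROOT, min_rpm=MIN_RPM):
    """Return a list of (sensor file, rpm) pairs reading below min_rpm."""
    failing = []
    for fan_file in sorted(hwmon_root.glob("hwmon*/fan*_input")):
        try:
            rpm = int(fan_file.read_text().strip())
        except (OSError, ValueError):
            continue  # sensor missing or unreadable; skip it
        if rpm < min_rpm:
            failing.append((str(fan_file), rpm))
    return failing

if __name__ == "__main__":
    for path, rpm in check_fans():
        print(f"ALERT: {path} reading {rpm} RPM (below {MIN_RPM})")
```

A script like this is a supplement, not a replacement, for the chassis's own hardware alerting, which works even when the OS is wedged.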
#6: OS compatibility
It should go without saying, but incompatibilities do exist. Always confirm that the server chassis you select has hardware that's compatible with the operating system the unit will run. Microsoft has expanded hardware support to work with most every major server chassis and component, including most common motherboards, CPU and memory configurations, and disk controllers.
However, neither UNIX nor Linux boasts the same compatibility. If your open source friendly organization intends to leverage new equipment (particularly just-released components debuting freshly authored drivers) to power UNIX- or Linux-based systems, confirm that the hardware is certified for use on the operating system in question. Few things are more frustrating than rolling out a new box under a tight deadline only to discover its RAID controller isn't compatible with the OS. Avoid those headaches. Research a chassis' compatibility (for every component) before you place the order.
#7: Hot-swappable bays
Unless you enjoy working evenings and weekends, purchase only server chassis boasting hot-swappable disk drives. In most midsize and large enterprises, powering down a server during business hours to add disks simply isn't an option.
Where you can, opt for hot-swappable fans and optical drives, too. All components fail, so the more hot-swappable options you have, the less likely you are to have to wait until nonbusiness hours to replace a failed device.
#8: Room to grow

It's easy to get caught up in the whirlwind that often accompanies an urgent new need within an organization. Thoughts can quickly turn to ensuring that a new box — needed quickly to power a new SQL-based sales tool, for example — adequately addresses the new tool's service requirements. Easily lost in conversation is consideration for any growth the sales department might experience.
For example, if the new server is designed to support 50 field agents, but the new initiative proves so successful that suddenly 100 agents are pounding the application (as well as a new groupware program loaded to improve communications in the field), the box can quickly become overwhelmed.
When scoping the proper size (as reviewed in #1), pay particular attention to the memory configuration and number of CPUs and hard disks a chassis supports. It's much more economical to add a pair of RAM chips, a new CPU, and a pair of hard disks to an existing chassis than it is to purchase yet another new server box.
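The sizing trade-off above can be made concrete with a little arithmetic. A sketch, assuming hypothetical per-agent RAM figures and chassis limits (none of these numbers come from a real workload):

```python
# Back-of-the-envelope capacity check: can the chassis absorb projected
# growth by adding parts, or is a new box required? All figures are
# hypothetical assumptions for illustration only.

def max_agents_supported(ram_gb, per_agent_ram_gb=0.5, base_ram_gb=4):
    """Agents a server can host given installed RAM (assumed 0.5 GB/agent,
    4 GB reserved for the OS and base services)."""
    return int((ram_gb - base_ram_gb) // per_agent_ram_gb)

if __name__ == "__main__":
    installed = 16    # GB of RAM in the box today (assumed)
    chassis_max = 64  # GB the chassis/motherboard supports (assumed)
    print("today:", max_agents_supported(installed))        # 24 agents
    print("maxed out:", max_agents_supported(chassis_max))  # 120 agents
```

If the maxed-out figure comfortably exceeds the growth projection, upgrading in place beats buying a second server; if not, the chassis was undersized from the start.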
#9: Redundant components

Power supplies fail. As system features and speeds increased and prices dropped, something had to give, and the power supply is often among the components shortchanged. Anticipate power supply failures by specifying that a chassis include two PSUs. Should one fail, being able to quickly transfer to the other, already-installed unit will buy you precious time to track down a replacement for the failed PSU (which can then assume the backup responsibilities).
The same is true with network interface cards. Having seen numerous network outages following strong thunderstorms (whose shocking lightning bolts seem particularly attracted to NICs connected to business-class DSL service), I routinely build server chassis with two NICs. Should one fail, you don't even need to crack the case to restore network connectivity; simply move the Ethernet cable to the backup port and update the server's software configuration, and service returns to normal.
#10: A lock
Many smaller businesses place full-tower and even rack-mounted systems in insecure areas, often public spaces where all employees enjoy unfettered access. Guard against inadvertent, and even intentional, shutdowns and reboots by purchasing server chassis that boast lockable panels. By locking the panel that provides access to power and restart switches, you can eliminate accidental restarts caused by a well-meaning cleaning crew and thwart a disgruntled employee looking for an early start to the weekend.