
Maximizing the configuration of rack-based server farms

Get the details of how one netadmin packed 48 servers into three cabinets and ran the 149 cables each cabinet required.


In my previous article, "Building out a rack-based server farm," I discussed the hardware we chose when we built our company’s initial Windows 2000 server farm. This hardware (Dell rack-dense servers, Dell cabinet enclosures, Lightwave Communications KVM switches, and more) allowed us to deploy 48 servers in three cabinets. Because our hosting provider charges us by the rack each month, packing that many servers into so little space produced substantial savings.

This article offers a closer look at how we tackled the challenging physical configuration of this hardware.

Physical configuration
Transitioning from a pile of hardware to a fully functional cabinet was not a trivial process when you consider that each server had the following:
  • Two network adapters, one on the motherboard and one in a PCI slot
  • Dual power supplies, so there were two separate power cables
  • A Dell Remote Assistant Card (DRAC) with its own network connection and power supply
  • A video cable, a mouse cable, and a keyboard cable

This meant we had to get a total of nine cables to each server. Multiply that by 16 servers per cabinet, and we had to run 144 cables through each cabinet just to support the servers. The KVM switch itself required another five cables, bringing the total to 149 cables per cabinet. This presented quite a challenge.
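For reference, here is a minimal sketch of that cable math in Python, using only the figures above (nine cables per server, 16 servers per cabinet, and five more for the KVM switch); the breakdown in the dictionary simply restates the bulleted list.

```python
# Rough cable math for one cabinet, based on the figures in this article:
# nine cables per server, 16 servers per cabinet, five more for the KVM switch.

SERVERS_PER_CABINET = 16
KVM_SWITCH_CABLES = 5

CABLES_PER_SERVER = {
    "nic_onboard": 1,
    "nic_pci": 1,
    "drac_network": 1,
    "power": 2,          # dual power supplies, two separate cables
    "drac_power": 1,
    "kvm": 3,            # video, keyboard, mouse
}

per_server = sum(CABLES_PER_SERVER.values())                        # 9
per_cabinet = per_server * SERVERS_PER_CABINET + KVM_SWITCH_CABLES  # 149

print(f"{per_server} cables per server, {per_cabinet} cables per cabinet")
```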

Fortunately, the Dell cabinet enclosures we ordered were very functional. They had a split back door, which allowed easy access from behind (especially in tight spaces). Also, the back, front, and side doors were removable, which provided additional flexibility. And once everything was installed, the cabinets could be coupled together for added stability.

However, we had an additional problem to deal with: power. With 16 servers, 16 DRACs, and the KVM switches, we needed six 20-amp circuits to support all of the equipment, which meant installing six separate power strips in the cabinet. These strips had locking power plugs, so our hosting provider had to install new power jacks under the floor.
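To show the kind of back-of-the-envelope budgeting this involved, here is a rough sketch of a circuit-load check. Every per-device current draw below is an assumed value for illustration only; the 80 percent figure is the common planning rule for continuous loads, not something measured on this project.

```python
# Illustrative circuit-budget check for one cabinet: 16 servers with dual
# power supplies, 16 DRAC power bricks, and the KVM gear, spread across six
# 20-amp power strips. ALL per-device amperages below are assumed values.

CIRCUITS = 6
CIRCUIT_AMPS = 20
USABLE_AMPS = CIRCUIT_AMPS * 0.8   # common 80% rule for continuous loads

loads = (
    [("server PSU", 2.0)] * 32 +   # 16 servers x 2 supplies (assumed 2 A each)
    [("DRAC brick", 0.5)] * 16 +   # assumed 0.5 A each
    [("KVM switch", 1.0)] * 2      # two KVMs per cabinet (assumed 1 A each)
)

per_circuit = sum(amps for _, amps in loads) / CIRCUITS
print(f"Average load per circuit: {per_circuit:.1f} A "
      f"(budget {USABLE_AMPS:.0f} A per 20-amp circuit)")
```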

We decided that the easiest way to install all of this cabling would be to have all of the servers, along with their rack-mount kits and cable management arms, already in place. Dell servers ship with a cable management arm that swings back and forth as the server is pushed into and pulled out of the cabinet, and the cables connecting to the server are tied to this arm. Because the arm moves with the server, a server can be worked on without completely disconnecting all of its cables, which made things a bit easier.

We ran three network cables from each server to the very middle of the cabinet at the rear, where we placed a 48-port patch panel. We ran the two power cables from each server to separate power strips, ensuring that each server drew from two different circuits. We ran a short cable from the back of each DRAC to its power supply, which we attached to the cabinet and plugged into a power strip. Finally, we ran a KVM cable from each server to its closest KVM switch. All cables were tied to the cabinet so they would stay neat and manageable, even as servers were serviced or the cabinet was moved.

During installation, I labeled each cable with the name of the server to which it was connected. All of this information is recorded in my infrastructure documentation, which I’ll discuss in my next article.
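As an illustration of the kind of record this produces, here is a small sketch that generates one label per cable for a server. The label format and the server name are hypothetical, not taken from my actual documentation.

```python
# Hypothetical label generator for the nine cables run to each server.
# The label wording and the example server name are illustrative only.

CABLE_RUNS = [
    "NIC1 (onboard) -> patch panel",
    "NIC2 (PCI) -> patch panel",
    "DRAC network -> patch panel",
    "Power A -> strip A",
    "Power B -> strip B",          # second circuit, per the dual-circuit layout
    "DRAC power -> power strip",
    "KVM video", "KVM keyboard", "KVM mouse",
]

def labels_for(server):
    """Return one printable label per cable for the given server."""
    return [f"{server}: {run}" for run in CABLE_RUNS]

for label in labels_for("WEB01"):   # "WEB01" is a made-up server name
    print(label)
```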

While these details required a lot of work up front, this design provides a great deal of flexibility and makes it easy to add servers to and remove them from the network. It also relies on common sense, which was key to my design: I wanted something that anyone walking into the room could administer easily, and I felt confident that my design accomplished this goal.

Lessons learned
Even though I am very happy with the end result of this configuration, I am just now beginning a second identical project, and I plan to do a few things differently to avoid problems I discovered in the first project once we were up and running:
  • First and foremost, I will be testing every patch cable before it is installed. This should have been obvious the first time around, but I didn’t do it.
  • Second, there are many different VLANs on our network, each represented by a different color of cable, both physically and in our network documentation. I plan to run a cable to each server for every possible VLAN (six total), as well as a backup cable in case one fails. While not all of the cables will be physically connected to anything, this will give us maximum flexibility while keeping the infrastructure well documented and easily manageable. (A rough cable recount for this layout appears after this list.)
  • Finally, I have decided to shift the servers slightly to leave 1U of space below each of the two KVMs in each cabinet. In my original cabinet, I did not leave this space, and the keyboard, monitor, and mouse cables tend to catch on the server directly below a KVM when that server is pushed into or pulled out of the cabinet.
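For the new layout, the cable arithmetic changes only in the network runs. The quick recount below assumes that the six VLAN cables plus one spare replace the two NIC runs per server and that the DRAC network run and everything else stay the same; those assumptions are mine, so treat the totals as a sketch rather than the final design.

```python
# Rough recount for the second project. Assumption (mine, not from the
# article): six VLAN runs plus one spare replace the two NIC runs per
# server, while the DRAC network run and all other cables stay the same.

SERVERS_PER_CABINET = 16
KVM_SWITCH_CABLES = 5

cables_per_server = (
    6 + 1      # six VLAN runs plus one spare (new design)
    + 1        # DRAC network
    + 2        # dual power supplies
    + 1        # DRAC power
    + 3        # KVM video, keyboard, mouse
)

per_cabinet = cables_per_server * SERVERS_PER_CABINET + KVM_SWITCH_CABLES
print(f"{cables_per_server} cables per server, {per_cabinet} per cabinet")
```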

While these problems aren’t major issues, they are annoyances that I want to fix in order to improve our infrastructure.

Overall, this was a very interesting project. I hadn’t worked on rack-based server farms like this before, and I learned a lot in the process.

Have a comment?
We look forward to getting your input and hearing about your experiences regarding this topic. Join the discussion below or send the editor an e-mail.

 
