At the end of December 2000, I accepted a position as a systems engineer for a financial data company. One of the first tasks I was assigned was to build out a brand new server farm and network infrastructure for Windows NT and 2000 services and products that were being developed for these platforms. In addition, I had to plan the migration of existing Windows-based products and services from other facilities to this one. Having the opportunity to start from the beginning with a new infrastructure allowed me to carefully plan and install both the hardware and the software in such a way that it would be easy to manage and easy to expand. This article will take a look at the hardware we chose and why we chose it.

Project requirements
My company decided to house its servers at a third-party hosting facility. As a result, all server infrastructure needed to be easily manageable from both a hardware and a software standpoint. Since we would not have any employees based at the hosting facility, remote server administration was a critical issue.

When I was initially assigned this project, I was asked to build out three racks of Windows machines. These were the stipulations:

  • The machines had to be remotely manageable.
  • I had to get as many machines as feasible into this space.
  • All of the machines had to be identical.

On the surface, this appeared to be a fairly straightforward project. However, doing it right without creating extra work for myself down the road required a lot of planning and preparation.

Selecting the hardware
Before I could do any real planning, I had to decide what hardware we would use as a standard build for our Windows environment. We already had a large investment in Dell hardware, and as one of Dell’s premier customers, we enjoyed a significant discount on any servers we bought from them. Add to the mix Dell’s excellent service and reliability and the fact that these servers would likely be out of our production environment before their warranty was up, and it is easy to see why we chose Dell over other hardware vendors for this project, as well.

Specifically, we chose the Dell PowerEdge 2450 server for the following reasons:

  • Size: At only 2U in height, the 2450 let me fit 16 servers in each cabinet along with the other necessary hardware.
  • Remote management features: The Dell 2450 can be upgraded with the Dell Remote Assistant Card (DRAC). The DRAC allows remote management of a server, and because it has its own power supply, it works even when the server is powered down. It operates by intercepting calls to the I/O ports and redirecting them to a dedicated network jack on the DRAC. Dell’s 1U-high servers do not currently offer this capability.
  • Experience: Many of our Red Hat Linux systems run on Dell 2450 hardware, so our group was already familiar with the hardware.
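The space math behind the 16-servers-per-cabinet figure is easy to check. A minimal sketch follows; the article gives only the 2U server height and the per-cabinet server count, so the 42U cabinet height is my assumption:

```python
# Rack-unit budget for one cabinet.
# Assumption: a standard 42U enclosed cabinet; only the 2U server
# height and the 16-servers-per-cabinet figure come from the article.
CABINET_U = 42   # assumed cabinet height in rack units
SERVER_U = 2     # Dell PowerEdge 2450 height
SERVERS = 16     # servers per cabinet

used_by_servers = SERVERS * SERVER_U      # rack units consumed by servers
remaining = CABINET_U - used_by_servers   # left for KVM, patch panel, cabling

print(f"{used_by_servers}U of servers, {remaining}U for supporting hardware")
```

Under that assumption, 16 servers consume 32U and leave roughly 10U per cabinet for the KVM gear, patch panel, and cable management discussed below.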

Setting up the cabinets
Now that the server hardware had been chosen and the 15 new servers I needed were ordered, it was time to determine exactly how we would physically build out the infrastructure and to select and order the supporting hardware.


The next big hardware decision was the choice between enclosed cabinets and open racks for these new machines. In my experience, I have generally used open racks, since they are both versatile and fairly inexpensive. To mount the Dell 2450 servers in an open rack, converter kits are available that work very nicely with Dell’s rack-mount kits.

I had decided on this approach until I visited another facility that was using a cabinet solution, and I instantly fell in love. Enclosed cabinets make server installation and cable management much easier, allow for mobility, and look really good when full. At that point, the decision was made: we would go with three enclosed cabinets provided by Dell.

In addition to these cabinets, I ordered a 1U high LCD monitor, keyboard, and mouse combination from Dell. This unit is in the center cabinet and provides console access to all of the servers in the cabinets.

Now that the major hardware was decided upon and ordered, it was time to make some decisions on the hardware that would support this new infrastructure and to decide how to incorporate everything into our existing environment.

Windows machines inherently need an attached VGA console, keyboard, and mouse; some machines will not even boot properly without them. Other areas in our environment already used Lightwave Communications’ PC Server Switch KVM (keyboard/video/mouse) units. These units work well because each port has its own processor and “keep alive” capability, which lets a server boot even when the KVM switch is not selected to that port. In addition, these units can be daisy-chained, so interconnecting new units with old ones in the future would be no problem. As a result, I decided to include these units in my design.

The final pieces of the infrastructure were the networking components, and I spent considerable time on this decision. I was torn between putting a 48-port switch in each cabinet and putting a 48-port patch panel in each cabinet; each approach has its pros and cons. A switch would let me conserve ports on our Cisco Catalyst 5500 backplane but, depending on network load, could introduce a blocking bottleneck at the uplink. A 48-port patch panel would give each server its own port directly on the network backplane, but backplane ports are expensive. In the end, I decided to put a 48-port patch panel in each cabinet.
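The switch-versus-patch-panel tradeoff largely comes down to port economics versus uplink bandwidth. A back-of-the-envelope comparison per cabinet might look like the following; all dollar figures are illustrative assumptions, not actual quotes from the time:

```python
# Hypothetical per-cabinet cost comparison: in-cabinet switch vs. patch panel.
# All prices below are placeholder assumptions for illustration only.
SERVERS = 16

# Option A: 48-port switch in the cabinet, one uplink to the Catalyst 5500.
switch_cost = 3000          # assumed switch price
core_ports_a = 1            # a single uplink conserves backplane ports
# Tradeoff: all 16 servers contend for that one uplink under load.

# Option B: 48-port patch panel, each server patched straight to the core.
panel_cost = 200            # assumed patch panel price
core_port_cost = 500        # assumed cost per Catalyst backplane port
core_ports_b = SERVERS      # one backplane port per server

cost_a = switch_cost + core_ports_a * core_port_cost
cost_b = panel_cost + core_ports_b * core_port_cost
print(f"Switch option:      ${cost_a}, {core_ports_a} backplane port(s)")
print(f"Patch panel option: ${cost_b}, {core_ports_b} backplane ports")
```

With numbers like these the patch panel costs more per cabinet but removes the shared-uplink bottleneck, which is the tradeoff the article describes.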

Additional minor hardware purchases included network patch cables, KVM cables, screws, and other small items for organizing the servers. All of these hardware purchases were carefully thought out to make sure that everything I put in could be easily replicated for future expansion.
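Since the goal was a repeatable build, the per-cabinet parts list effectively becomes a bill of materials that can be multiplied out for future expansion. One way to record it might look like this; only the 16-server count comes from the article, and the other items and quantities are illustrative assumptions:

```python
# Illustrative per-cabinet bill of materials for replicating the build.
# Only the server count (16) is from the article; other quantities are
# assumptions for the sake of the example.
cabinet_bom = {
    "PowerEdge 2450 server (2U)": 16,
    "48-port patch panel": 1,
    "KVM switch unit": 1,
    "network patch cable": 16,   # assumption: one per server
    "KVM cable set": 16,         # assumption: one per server
}

def scale(bom, cabinets):
    """Multiply a per-cabinet BOM out to an order for N identical cabinets."""
    return {item: qty * cabinets for item, qty in bom.items()}

order = scale(cabinet_bom, 3)
print(order["PowerEdge 2450 server (2U)"])  # 48 servers for three cabinets
```

Keeping the list in one place means any future cabinet is a straight multiplication rather than a fresh round of hardware decisions.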

In the end, I not only built out the first rack of servers, but I also created a standard by which we can easily build out any number of identical cabinets. In my next article, I will discuss the physical installation of this hardware, as well as some things I plan to tweak for my next iteration of the project.

What tips do you have for building out a server farm?

How does your rack-based server topology look? We look forward to getting your input and hearing about your experiences regarding this topic.