High density rack wiring - best practices?

By angry_white_male
We're seeing our datacenter migrate from 5U and 7U servers to 1U and 2U servers. With the older servers, things were fairly simple, but now we're cramming 12-15 servers into a 42U rack (with room for UPSes, a monitor, a keyboard drawer, a KVM, etc.).

Obviously this is creating a bit of a wiring problem, and I could use some ideas on how to manage the wiring better, especially the long, thick power cords and the KVM wiring. The 2U rack servers we're buying come with an articulated arm for better cable management, but these things are a pain to deal with and work around.

Is there a good best-practices guide out there? I'm not 100% sold on network KVMs because of the performance issues and expense.

TIA.

Virtualization Advice

by pmatvey84 In reply to What he said!!!!!

Virtualization is a great way to simplify infrastructure; we're hoping to implement it in our enterprise. What flavor did you end up using (e.g., ESX, VMware Server, Microsoft Virtual Server)?

What I did

by melbert09 In reply to High density rack wiring ...

Here are a couple of ways of saving space and making things look clean:

1. Custom-cut all your CAT5/CAT6 cable to length. It takes a while, but in the end it looks good, and you won't have excess cable stuffed under the servers in the rack.

2. USE the cable management arms that came with your servers. Make sure the power leads and network lines are routed through them and properly tied down. It makes pulling the servers out of the rack much easier.

3. Depending on the color of your racks, use color-coordinated zip ties and black Velcro straps to tie cabling off to the side.

4. Use a Raritan IP KVM, even if you don't use the IP part and just connect the monitor, keyboard, and mouse directly. It's worth it for the space savings and cleanliness, since it only uses CAT5 cabling.

5. If you can afford to, color-code your cabling by network. This makes tracing cables so much easier, and it looks nice as well.

Same boat

by cyberjunkie21 In reply to High density rack wiring ...

We're in the same boat as you. Unfortunately, our racks were planned without much forethought, so the cabling is a nightmare. One of our biggest hurdles now is power: we're reaching capacity quickly and are going to have to add more electrical service to the building. We have the Raritan KVMs as well, and it's quite awesome having everything go over CAT5.
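
To put some rough numbers on it, here's a quick back-of-the-envelope power budget in Python. The per-server wattage and circuit figures are assumed examples, not measurements from our racks:

servers_per_rack = 15        # 1U/2U boxes, per the original post
watts_per_server = 350       # assumed average draw per server
circuit_volts = 120          # typical North American branch circuit
breaker_amps = 20            # breaker rating for that circuit
# NEC limits continuous load to 80% of the breaker rating
usable_watts = 0.8 * breaker_amps * circuit_volts

total_watts = servers_per_rack * watts_per_server
print(f"Rack draw: {total_watts} W = {total_watts / circuit_volts:.1f} A at {circuit_volts} V")
print(f"One circuit carries {usable_watts:.0f} W continuous")
# 5250 W vs. 1920 W per circuit: a fully loaded rack needs three or
# more dedicated 20 A circuits before the UPS even enters the picture.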

Good tips - thanks all

by angry_white_male In reply to Same boat

I never thought about shortening the power cables - that's a fairly easy task. My boss will surely balk at the expense of new CAT5 KVMs... but that's what the next budget year is for.

Thanks everyone.

PDU, KVM, cable management arms, zip ties

by ceswanson In reply to High density rack wiring ...

We have an HP 10000 series 42U rack with HP servers ranging from 1U to 5U. We also use the HP IP KVM because of its smaller cables, and because we can put 16 servers on one KVM instead of needing two traditional 8-port KVMs with the huge cables.

Get vertical PDUs so you can use the shortest possible power cables. As was stated in another post, we ordered 3 ft and 6 ft power cables instead of using the 9-12 ft ones that came with the servers. You do get thinner wiring (the OEM cords are 14 AWG, the aftermarket ones 16 AWG), but they work fine - a typical 1U server draws only a few amps, well within a 16 AWG cord's rating. Using vertical PDUs and short power cables, I eliminate the routing of power cables down the rack.

On servers 2U and larger, I use the cable management arms that came with the HP servers. They work great, and they even lift up and pull back far enough to let me work on the back of the server or get to the inner section of the arm.

I use a patch panel near the top of the rack to help manage network cables. I then custom-make the cables to length, including any length that gets routed through the cable management arms.

Power wires loop out and to the plugs, KVM cables route to one side and up to the KVM switch, and network cables route to the other side and up to the patch panel.

So inside the rack, all the wires are routed to the sides and up, zip tied together. The bundle of network cables comes out the top and into the ceiling all nice and tight, and the power cord to the wall comes out the bottom. Very clean.

An extra cable-spaghetti-saving tip

by dzenizo In reply to PDU, KVM, cable managemen ...

I'd recommend putting the servers' network switches in each rack, instead of in the wiring rack as the usual structured-cabling guidelines recommend; this way the cable runs inside the rack are cleaner.
We usually put two 1U 24-port gigabit switches in the middle of the rack, facing backwards, for redundancy (they usually fit behind the slide-out console); run one cable from each of the two switches to each server for redundancy/load-sharing; and then run as many cables from the switches to the central switch as needed for trunk aggregation. This has saved us a LOT of labor cost, and as a bonus we get cleaner cabling. BTW, we have about 600 servers wired this way.
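
On the server side, the one-cable-to-each-switch setup is just NIC bonding/teaming. As a minimal sketch (assuming a Debian-style Linux server with the ifenslave package installed; the interface names and address are made up for illustration), the /etc/network/interfaces stanza could look like:

# Bond eth0 and eth1, with one uplink going to each rack switch.
# active-backup gives redundancy without needing any LACP support on,
# or coordination between, the two independent switches.
auto bond0
iface bond0 inet static
    address 10.0.0.10        # made-up example address
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100          # check link state every 100 ms

If you want both links carrying traffic instead of one sitting on standby, a mode like balance-alb also works across two independent switches; the trunk links from the rack switches up to the central switch are a separate aggregation configured on the switches themselves.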

One additional word about the cable management arms (they work great, BTW). If you standardize (highly recommended), you can pre-assemble the cables in the arms BEFORE mounting the arms; this will save you a lot of time and effort. Do it by making a "sample" arm, starting from the cables that go to the server and working back to the base of the arm, since that's the variable length that has to be managed.
Once you have the right measurements for one arm, you just copy the sample, install the arm, and route the cables from the base to the switches, PDUs, KVMs, SAN switches, etc.

A final thought about rack placement: avoid the "paranoia" of packing the racks side by side; leave a manageable space between them. In our case, an inch or two between racks doesn't make a difference in space requirements, and it helps with cooling and future needs, like removing a side panel to route cables more easily, or fitting a new rack that is a LITTLE wider than standard.

Wiring problem? I'd worry more about cooling than wiring!

by Why Me Worry? In reply to High density rack wiring ...

I'm not a big fan of blade or "pizza box" servers. Although they're designed to squeeze more servers into less space, they generate more heat per rack and fail sooner due to thermal shutdowns. If I were you, I'd worry more about the added cooling capacity the data center will need and less about managing wiring. Not to say that cable management isn't an issue, but systems that constantly overheat will be a bigger problem than keeping the wiring nice and neat.

Big AC unit

by angry_white_male In reply to Wiring problem? I'd worr ...

Last year we installed a 20-ton AC unit in a room with only 30 servers... believe me - cooling is not a problem in there!

The racks also need good airflow for interior cooling

by dzenizo In reply to Big AC unit

From my experience, I can tell you that good site cooling doesn't necessarily mean good rack-mounted server cooling!

I've seen the backs of racks so crammed with cables that they obstructed airflow, and the servers were shutting down from overheating even though the room was kept at a constant freezing temperature. I also witnessed one site that had backup and operating procedure sheets pasted across the front of the servers, using the intake suction to hold the papers in place!
Even in these days of ISO and SOX certifications, there are many "professional" sites that don't monitor the servers' internal environmental sensors.
So good airflow, aided by good wiring practices, IS a big factor.

Wrightline TOC cabinets

by sgregory217 In reply to High density rack wiring ...

If you have a raised floor with cooling under it, I recommend Wrightline's TOC cabinets, which have air manifold doors for intake and exhaust. They have proven pretty efficient for us (providing cooling for 36-38 1U Dell servers). We also use Server Technology's 84-receptacle PDU with custom-length power cords, and we run all our servers at 220 V to reduce the current draw. We've also found that the Sun servers running Windows run cooler and take less juice. We have a 48-port patch panel at the top of each rack, inside, and run the data and KVM-over-IP cabling opposite the power cords. The custom-length power cords have made a huge difference in cable management within the cabinet.
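
To illustrate the 220 V point with a rough example (the 400 W load below is an assumed figure, not a measured one): for a fixed power draw, current falls in proportion to voltage, since I = P / V.

power_watts = 400            # assumed example server load
for volts in (110, 220):
    print(f"{power_watts} W at {volts} V draws {power_watts / volts:.1f} A")
# 400 W at 110 V draws 3.6 A
# 400 W at 220 V draws 1.8 A
# Halving the per-server current roughly doubles how many servers
# fit on a circuit with a given breaker rating.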

TOC info
http://www.wrightline.com/productDetail.asp?ProductID=7&ProductCategoryID=8&SubCategoryID=0

PDU info
http://servertech.com/products/3PhasePowerDistributionMetering/SmartCDU84-DD/
