General discussion


High density rack wiring - best practices?

By angry_white_male
We're seeing our datacenter migrate from 5U and 7U servers to 1U and 2U servers. With the older servers, things were fairly simple, but now we're starting to cram 12-15 servers into a 42U rack (with room for UPSes, monitor, keyboard drawer, KVM, etc.).

Obviously this is creating a bit of a wiring problem, and we could use some ideas on how to better manage wiring, especially the long, thick power cords and KVM cabling. The 2U rack servers we're buying have an articulated arm for better cable management, but these things are a pain to deal with and work around.

Is there a good best-practices guide out there? I'm not 100% sold on networked KVMs because of performance issues and expense.


This conversation is currently closed to new comments.



My Experience

by danmcl In reply to High density rack wiring ...

I haven't seen many best-practice guides, only photos of what and what not to do!

I find that either reusable nylon zip ties or velcro ties are very useful. I prefer the velcro ties, but if expense is an issue with your purchasing department, the releasable nylon zip ties are just as good, just a bit harder to work with later on.

I put in a 1U cable management tray about halfway up the rack and bring ethernet to the top and power to the bottom or sides, depending on where the distribution units are.

I keep the power and ethernet on separate sides of the rack and have them tied on.

I also leave a bundle of cable tied up with the velcro ties (power and ethernet) for when I need to slide the server out. I see a lot of installations where people forget to do this and then end up unintentionally pulling the plug on a server.

Edit: FWIW, the server rack in question has 26 servers (a mix of 1U and 2U), as well as a bundle of structured cabling going back to a separate rack for patching into a Cisco Catalyst.

I inherited a very badly installed rack that was in a standard office (no A/C, etc.); the room was regularly at 40°C plus. Sorting out the cables at the back improved airflow significantly, and I saw a temperature drop of 15°C!


Good response

by NOW LEFT TR In reply to My Experience

I would say the same, having just gone through this process in April. It is well worth putting in the effort at the time.


Less is better

by rzrwire In reply to High density rack wiring ...

A couple of small things that make a BIG difference: buy shorter power cables, available via CDW and the like. You can get them as short as 3 ft, and a lot thinner than standard power cables. This saves a lot of space and hassle. Also, IP KVMs (we use the Raritan models) save a TON of space, especially compared to those long, heavy, awkward, standard KVM cables.


Avocent KVM switches help

by rurick In reply to High density rack wiring ...

One minor thing that we found helped: we moved to Avocent KVM switches. They don't use the traditional KVM cables; they use standard cat5 patch cables, which of course can easily be cut to the exact length you need. Just buy the KVM and a dongle for each server, and run cat5 cables between the two (only up to 30 feet or meters, can't remember which, but plenty of room to run between racks) and you are set to go. Also, these KVMs aren't but a few hundred more than a standard KVM fully loaded. WELL worth the cost.

Don't skimp on double-sided velcro wraps and zip ties. They are a lifesaver.

Also, custom-cut your power cords. Pick up a box of power plugs from the hardware store, figure out how long a power cord you need, and custom-make your cords to remove the excess length. (Make sure you are qualified, of course.)

We went from rat's nest to neat freak after all of the above.


Avocent KVM switches help.

by seckel109 In reply to Avocent KVM switches help

I wholeheartedly agree with rurick. We evaluated a number of KVM switches a couple of years ago, and the Avocent was by far the best. Expensive? Yes. Unobtrusive? Indeed. Support? Outstanding. This device, along with on-site fabricated cables (power and data), is just the ticket to make your data center a keeper.



by dave.bruner In reply to Avocent KVM switches help

I have to agree with the Avocent KVMs. We also use color-coded cat-5 cables to distinguish the KVM cabling from the network. This helps, and adds some color to the cabling!


Avocent sucks. They depend on a Windows server for management. Go Raritan!

by Why Me Worry? In reply to Avocent KVM switches help

Try Raritan KVM switches, as the management interface is built into the KVM unit itself and does not require a separate Windows server to manage any of them. I've used Raritan for years and they are by far the best compared to HP KVM switches and those from Avocent.


Avocent Sucks? Get your facts straight!

by loadedmind In reply to Avocent sucks. They depe ...

I would think that anyone posting so dramatic a statement would be better informed before doing so. Avocent is every bit as compatible with Linux, Solaris, and Unix as it is with Windows. In fact, the software will run on Solaris, SUSE, Red Hat, and then Windows... Where do you get your information, anyhow?!


Virtualization Man!!!!!!!!

by tomw In reply to High density rack wiring ...

It has been the answer for us: 65 servers down to less than half a rack, and cabling is not an issue. The savings on hardware maintenance, power, cooling, etc. make this an easy one to pitch.


What he said!!!!!

by bradl In reply to Virtualization Man!!!!!!! ...

We have 45 virtual servers on 4 physical 2U machines, with 8 NIC ports and 3 power cables each. iSCSI SAN and LAN cables are color-coded, wrapped in velcro strips, and tied to one side of the cabinet; power cables are wrapped and tied to the other side, plugged into a 3-phase APC power strip mounted vertically in the cabinet, with each power supply of a server plugged into a different phase. We are at 30% capacity for VMs, so we can grow quite a bit without adding any cables, UPS, cooling, etc. We're saving serious dollars and adding agility, and the VMs are faster and more reliable than comparable hardware. Go figure.
