Servers

Switches in server racks: Good idea or not?

Putting managed switches in server racks draws mixed reactions from network administrators. In this TechRepublic network blog, IT pro Rick Vanover highlights reasons both for and against the practice.

For large datacenters, there are two primary approaches to cabling servers: put patch panels in each server rack, or put switches in each rack. Each approach has its own benefits and management considerations. For organizations teetering one way or the other on in-rack switching, this collection of a few pros and cons for each approach may help guide the decision.

Reasons for having switching in server racks include the following:

    -Less risk of running out of ports in that rack. When switches are in a rack, there are more options, including adding a switch to accommodate the needs in that rack. The alternative may mean additional patch panels installed and run back to the core network.
    -Can delegate connectivity requirements. If chosen, server administrators can wire up their own cables to the port that is provided. In that sense, network teams can take a much smaller responsibility for the cabling from switch port to connecting device. Frequently, server administrators would like this flexibility and would generally accept the responsibility.
    -Potentially deliver as-needed service. Certain low-priority systems and racks may not need much in terms of connectivity. Simply putting in a single fiber run with a GBIC feeding a managed switch that provides 100 Mbit/s ports may suffice. It may not be worth the cost of more expensive switches or blade ports in a central patch panel to provide connectivity to less important systems.

Reasons against having switches located in server racks include the following:

    -Less risk of tampering. When the network team’s equipment sits in racks populated mostly by non-network servers, there is a risk that server administrators will swap ports or move things around; keeping switches out of those racks avoids that exposure.

    -Potentially underutilized switches. If switches are placed in each server rack, there is no guarantee that each switch will be fully utilized, whereas in a central patch panel and switch environment the entire connectivity footprint can be placed on designated equipment with minimal underutilization.

    -Less management overhead. Power control, the risk of someone else tampering with the switch, and more ground to cover across the switching footprint make in-rack switching harder to manage and control.

These are only a few points on the topic of in-rack switching. My goal is not to say which approach is better, as I have not established criteria to determine that. I am now looking for your comments on why you think in-rack switching is or is not a good way to go. Share your comments below based on what you are currently doing on this issue.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

Comments
firoz83_del

Hi, I want a rack for networking use, with a minimum of 90 users connected at a time. Can you suggest a rack size and switch model?

mike.dickin

My problem is with airflow. I like the idea of switches in the rack linked back to the network via fibre, as the density of cables when providing patch panels in server racks can make them difficult to manage, and as more and more servers with more ports come on board you tend to run out. However, if you mount the switch at the back of the rack where the server NICs are, then the switch blows air into the front of the rack, against the usual airflow through the rack. Is this something others have come across?

LarryD4

There are pros and cons to both methods, but I myself prefer a separate rack for the patch panel and switch. I think the choice is based more on who is designing the site and the way they need to operate. I know there are some great benefits to incorporating the switches in my server racks, but in my head it's all wrong. I need that division between the two. Our Avaya systems are integrated; they set up two cabinet racks, double long, with switches in the back and servers in the front. Which is fine for me, because the punchdown board is still on the wall. But I need to have that division between my servers and the wiring.

bggb29

We started putting switches in racks several years ago. It has its pluses and minuses. All racks have gig switches and VLANs. If we need to aggregate bandwidth we add ports to a trunk. Our VLANs are trunked into internal and DMZs, and we have multiple subnets in each trunk. The in-rack switches reduce the number of cables that run through the cabling troughs, which makes troubleshooting massive amounts of cable easier. We are not large enough for discrete server and network teams, so access to a switch is no problem. We also have a unique VLAN with no layer 3 connectivity; we labeled it "unused" and move all unused ports on a switch to that VLAN. So if a server-only admin tries to connect up a port without the network admin knowing about it, the server has access to nothing outside of the unused VLAN. We typically add old 100 Mb switches to a rack for out-of-band management. In racks with VMware ESX servers we put in two Cisco 3750s, aggregate bandwidth to the core for each trunk across the switches, then connect two pNICs from the servers into the two switches for redundancy.

paulgraydon

At the ISP I worked for, we provided hosting for customers alongside our own infrastructure. In the interests of administrative ease and efficiency, we were set up with end-of-cabinet patching across the board. NOC staff handled physical installs of our equipment and patching; networks and sysadmin coordinated (the departments sat next to each other) to find spare ports. With that kind of setup, customers never had any form of direct physical access to our switches, so they could never do anything to compromise them, which is especially important when they weren't the only customers in a rack. Due to power constraints we'd more often than not be unable to fill a rack with more equipment than we had spare ports, except for one of our platforms that required 3 Ethernet ports per server (front end, storage back end, iLO). I really don't see that the pros for cabinet switches are anything worth making a fuss about, and networks should be a lot more on top of switch management. "Certain low-priority systems and racks may not need much in terms of connectivity. Simply putting a single fiber run with a GBIC feeding a managed switch providing 100 Mbit/s ports may suffice. It may not be worth the cost of more expensive switches or blade ports in a central patch panel to provide connectivity to lesser important systems." Shove a cheap switch in the central patch panel. Job's a good'un. Even with full rows we always had spare space in the central racks for cheap switches.

bandman

For what it's worth, I admin a small/medium infrastructure. We've got three racks colocated in two different locations, and three offices in addition to those. Each of those offices has servers, and two of them have racks. In our primary site, we've got a blade chassis with 10 servers and three rack-mounted servers, plus a handful of other equipment (load balancers, routers, firewalls, etc.). I've got two switches in that rack to do load balancing / failover with bonded NICs. Each of the firewalls / load balancers / routers is set up with high availability clustering, and the infrastructure is as redundant as I can make it, or at least as redundant as I know how and can afford. At our backup site, we've got two racks, 10 meters apart with a single cross-connect between them. This necessitates having a switch in the less populated rack, and the more populated rack has two switches in a hot/cold standby config. At the offices, we've got patch panels in the machine room which are wired into the jacks on the walls. The patch panels are patched into switches which are reverse mounted (mounted in the back of the rack) in the same racks which house the servers at that office. Like I said, small/medium infrastructure, so I don't know how much I add to the conversation, but that's how we do it.

CG IT

except for small businesses which either use a 7' rack or a wall mounted rack [16U or less]. With the wall mounts, only the switches and routers are in it. The servers are towers and sit on the floor. Medium and large business usually have lockable cabinets for the servers and relay racks for the switches and routers [and the multitude of patch panels]. For DB servers, they are often in a locked room of their own with their own relay racks, switches, routers, and leased lines.

Neon Samurai

Is there a way to set up a pair of switches between the servers' primary/secondary NIC cables and the ISP's primary/backup ports? A single switch with a cold backup on hand is probably the most rational setup, requiring only a visit to swap in the good switch. Alternatively, two switches connected with a short cable: the primary plugged into the ISP primary and the secondary into the ISP backup, with the servers' primary NICs going into the primary switch and the secondary NICs into the secondary switch. This seems to cover everything except failure of the primary switch when the ISP does not switch to its backup. Does anyone have a better idea for a switch setup? It has to handle ISP-controlled toggling between two ports, failure of a switch, and toggling between two server NICs based on connection failure.

daileyml

An additional benefit of a top-of-rack switching design is flexibility. Having the capability to house application systems of different types within the same rack, connected to the Data Center distribution layer via high-speed uplinks, offers a great deal of flexibility in the placement of systems within the Data Center. For instance, it allows racks to contain "services" as opposed to systems, where the databases, application servers, and web front-ends for a particular service can be housed and secured in the same rack with inter-server communications remaining local to the rack. This segregation of communications is a valuable benefit--and may be a requirement depending on your security and Data Center architectures--if you are transmitting or storing sensitive data and information. In a properly designed Data Center, a top-of-rack deployment can also add a layer of security by allowing access-list controls to be deployed closer to the Data Center access layer systems, as opposed to trying to handle security for all ports at the larger distribution layer switches. While it is true that top-of-rack switching designs may result in underutilized switch ports in each rack, today's lower cost-per-port switch platforms, coupled with the new switch features available, make top-of-rack design an easy-to-manage and cost-effective solution, in my opinion. -Mike D http://www.daileymuse.com

b4real

Saving a run back to the main switching/patching area is a good idea.

Stephane.Brunelle

What it boiled down to for us was the cost per port, based on 24-port switches; it would have been even more obvious with 48-port units. We looked at the installation cost of the runs from the cabinets to the wiring closet (around $80 per run x 24, or $1,920, or $80 per server port) vs. bringing 4 runs for a trunk and installing the switch in the cabinet (4 x $80 + $1,000, or $1,320, or $66 per server port). We then multiplied this over 16 cabinets and the finance guys were happy with our solution :) As mentioned before, it is also a lot easier to troubleshoot cable issues since the feeds remain in the same cabinet and can easily be traced; that in itself saves us a lot of man-hours. Since not everything is fully documented, this also helped with server troubleshooting and location finding: we named the switches according to their physical location, so once a server is hooked up to a switch, you know its physical location too. Regards
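A minimal sketch of that per-port arithmetic, reusing the figures quoted in the comment ($80 per run, roughly $1,000 per in-rack switch) and assuming the four trunk runs occupy four of the 24 switch ports, which is what makes the $66-per-port figure work out:

```python
# Cost comparison sketch based on the figures in the comment above.
# Assumptions: $80 per installed cable run, ~$1,000 per in-rack 24-port
# switch, and 4 switch ports consumed by the uplink trunk.

RUN_COST = 80        # installed cost of one run back to the wiring closet
SWITCH_COST = 1000   # assumed cost of one in-rack switch
PORTS = 24           # ports per switch / server ports per cabinet
TRUNK_RUNS = 4       # runs (and switch ports) used for the uplink trunk
CABINETS = 16

# Option A: home-run every server port to central patching.
central_total = RUN_COST * PORTS                    # $1,920 per cabinet
central_per_port = central_total / PORTS            # $80 per server port

# Option B: in-rack switch fed by a 4-run trunk.
in_rack_total = RUN_COST * TRUNK_RUNS + SWITCH_COST      # $1,320 per cabinet
in_rack_per_port = in_rack_total / (PORTS - TRUNK_RUNS)  # $66 per server port

print(f"Central patching:  ${central_total} per cabinet, ${central_per_port:.0f} per port")
print(f"In-rack switching: ${in_rack_total} per cabinet, ${in_rack_per_port:.0f} per port")
print(f"Savings across {CABINETS} cabinets: ${(central_total - in_rack_total) * CABINETS}")
```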

nico.verschueren

I think you make a good point here. My opinion is that in a smaller environment you will be more inclined to put switches in the server rack, while a larger organisation will probably go for the option of having separate racks for the switches. In the organization I came from, the IT infrastructure was fairly large: a bit over 200 servers running across multiple data centres. Besides some small groups of servers running next to operator rooms (for specific functions), we had 2 main data centres. Each of those data centres consisted of 1 server room and 2 network rooms. The network was fully redundant, with core routers in all 4 network rooms, and servers that needed high availability had a network connection to both of the network rooms for that data centre. One of the advantages was the power distribution (we actually had our own multiple power distribution cabins). The two network rooms connected to a data centre are connected to different power distribution cabins, as are the 2 server rooms. So if one of the power distribution cabins failed, we still had full network functionality and possibly full server capacity (without having to depend on UPS and diesel engines), and better network interconnectivity without having to put fibres to all the server rooms. The other advantage was better change control. The server team could not do anything 'on the fly', so there was always a trace of what was changed and the CMDB could be kept up to date easily. One thing I did consider was having switches in the rack for iLO. That saves on the needed network cables and connectors, while failure of those is less important. And if you go onto UPS, it's easy to unplug those to save on power consumption.

windfreak2000

We used to use a primary/secondary Cisco switch design when I was at WebTV (now Microsoft) that involved changing the spanning tree parameters on the switches to facilitate automatic failover. The entire service-side infrastructure was redundant all the way out to the multiple ISP connections. All the routers were configured with HSRP and dual-connected to primary and secondary switches, down to the servers and NetApps.

b4real

When running back to central patching, servers with two interfaces can go to separate switches - a big plus in the redundancy department.

bandman

What I've got are fully redundant, identically configured switches: 48-port 3Com switches (segregated into two separate VLANs for DMZ/internal access). I run Linux servers, and I've bonded the network cards into one virtual interface, running eth0 from each machine into switch A (with red Cat5) and eth1 into switch B (with blue Cat5). I've also bridged my switches with a couple of dedicated cables to ensure connectivity if one of the NICs dies on a server. This is only part of the redundancy. Other devices which have only one NIC need to be replicated, one for each switch. I've got some Kemp LoadMaster 1500s set up in High Availability (HA) mode, one on each switch, and the same thing with my firewalls (Juniper NetScreen 5GTs with NSRP). As for routing, our colocation provider is using VRRP to send us data down two redundant connections. On their end, they've got a router with two interfaces, and both interfaces are handed to us. I've got two Cisco 2621s (two Fast Ethernet interfaces apiece), one for each handoff. This makes everything in the networking stack redundant. Making the computers redundant is a completely different topic :-) I hope this helped. Please drop me a line at standalone.sysadmin@gmail.com if you have any questions, and check out my blog at http://www.standalone-sysadmin.com if you'd like.

christopher.stewart

Problems with other people moving or installing network connections can be prevented by disabling unused ports, using port security with the MAC address, and labeling all connections with the equipment name and port number. I actually prefer a dedicated set of racks for routers and switches. If you plan correctly, you should have a switch chassis large enough for ample growth. If you need more ports, you buy another card.

nhon.yeung

One important consideration left out is that the switches located within the racks are often limited by their uplink connection. If you were to use a pair (for redundancy) of large chassis switches that patch back to the racks, it is more than likely that the backplane of these switches is much greater than that of any uplink, so traffic will be switched at near wire speed. This matters if, for instance, you had a rack dedicated to backups, where the uplink port can become a potential bottleneck.
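One way to quantify that uplink concern is to compare a top-of-rack switch's aggregate server-facing bandwidth against its uplink capacity. The figures below are illustrative assumptions, not taken from the comment:

```python
# Oversubscription sketch for a top-of-rack switch (all figures assumed).

server_ports = 40        # gigabit server-facing ports in use in the rack
port_speed_gbps = 1      # 1 Gbit/s per server port
uplinks = 2              # uplinks back to the core/distribution layer
uplink_speed_gbps = 10   # 10 Gbit/s per uplink

edge_gbps = server_ports * port_speed_gbps   # 40 Gbit/s of possible server traffic
uplink_gbps = uplinks * uplink_speed_gbps    # 20 Gbit/s available to leave the rack

# Anything above 1:1 means rack-to-core traffic can bottleneck at the uplink,
# which is the backup-rack scenario the comment describes. A chassis switch's
# backplane avoids that extra hop for traffic between central ports.
print(f"Oversubscription ratio: {edge_gbps / uplink_gbps:.1f}:1")
```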

bandman

I think it's great to see discussion of topics like this. I think everyone either assumes that everyone else knows the best practices or doesn't think things like this matter.

b4real

Stephane: I'm thrilled by your response. It does seem today that the costs involved are the 'trump' card for most solutions, outside of compliance reasons dictating otherwise. Further, you bring up a good point about easier troubleshooting. Thanks for the response.

bandman

What do you use for documenting your infrastructure? Do you use design pages which are essentially MSWord docs, or do you have a database for storing the info? I've been trying to learn what larger infrastructures use for that sort of thing.

windfreak2000

They have the added advantage of reducing the number of patch cables that need to be changed (the bane of any data center). What I typically see is that folks are in a hurry and don't always follow approved standards when patching equipment, particularly in a test lab environment (engineers are the worst when it comes to patching). So having a switch mounted in the top of a rack, pre-patched, can help minimize one side of the patch issues and isolate wiring messes to the equipment rack side.

Neon Samurai

When given the choice, I'll run redundant power cables out to two separate power circuits along with my redundant NIC run to separate switches. One branch of either can fail and I keep on ticking along the second branch.

pgit

If you please, could you lay out your bridging configuration between the two 3coms? btw ever see a switch duct-taped to the wall? A plumbing waste line air vent that opens directly above a server?

Neon Samurai

I have two servers, each with a primary and secondary NIC. With two switches, the part I was not sure of was using a short Cat6 to link the two together. What finally cancels out this setup is if the primary switch fails but the ISP's primary port remains active. In that case, I'm not getting data over my broken switch and my unbroken switch is not getting data out the disabled ISP backup port. Since I have a single point of failure with either a hot or cold backup, it's going to be a cold backup mounted in the rack and a visit to transfer all the network cables. Luckily, the rack is still small and manageable. I'm glad to hear that a short Cat6 bridging two switches does not cause grief.

nico.verschueren

In theory everything could be found in the CMDB, which is part of the Service Desk tool. All was in there, but in practice the network guys had their own DB (don't know what they used) and us server guys had a spreadsheet (Excel) with all the data we needed, because that was just easier for finding everything.

bandman

Another problem is that approved standards are rarely both. I've never read any generally published standards; it seems to be a site-by-site thing. Do you know of any "best practices" sites?

bandman

Ah. I'd probably do it exactly your way with the patch panels if I had to deal with more than one rack, too. Since I've only got one, I stick with the switches :)

brian

That's scary. I could copy your post word for word and it would be the exact configuration we just finished at a customer's site. It consisted of 4 racks full of Dell servers, with 0U PDUs along the right and left sides of the rack connected to separate circuits. The tape colors were different, though :) For the record, we used 48-port patch panels in the racks which fed back to the MDF network rack over cable trays (about 15 feet away). This doubled the number of Cat6 patch panels needed, but there was no need for expandability since the racks were pretty much full, and the only fiber was running between the SAN and servers within the same rack. IMHO either solution is fine as long as you have justification for it (ours was that the servers' Ethernet ports were connecting back into a stack of 4 Cisco 3750s, and Stackwise cables across 4 racks would be pretty ugly - plus I can label the ports on the patch panel for the servers).

bandman

The way we've run ours is to use vertical power strips on the right and left sides of the rack. They are connected to different circuits (and different power sources) for redundancy. Each server is connected to both, and the power cords are marked with electrical tape on both ends of the cables, using red for the right power source and blue for the left. The tape is a huge help when I'm physically working in the rack.

bandman

Sure. We've got two "3Com Baseline Switch 2948 Plus" switches. They have four SFP ports, and I've used two of these ports to bridge between the switches using regular SFP copper cables. In the configuration, my switches have two VLANs, and ports 47 and 48 belong to both. I've set up link aggregation; the first (and only) group is group id 1, the type is manual, and the ports are 47 and 48. Does that help?

pgit

It was a hack a plumber put in when they had an untraceable drain problem. Of course I moved everything. But this place, wow. It's been remodeled probably 20 times since new in the 1970's. No way it meets code. (sshhh!) The current occupants removed some walls and in a hallway-ish area left one of those old school silver fiber covered power lines dangling almost to the floor, from a hole about 8 feet up the wall. They'd cut it and removed the rest with the wall. It dangled there for months, until one day I happened to put a meter on it. It was live and 220V. The way this nightmare is constructed there was no way to find where the circuit breaker was. (there had been up to ten tenants at times) They could have put a meter on it and shut every 220 circuit breaker one at a time, of course. But what mayhem could that do to the two other tenants? Their "electrician" "fixed" it by yanking it up above a drop ceiling and taping a plastic bag over the end.

seanferd

The vent is indoors, over a server? WTH? Or would that be one of those small vents for sinks? If so, the server room is below a lavatory? Good grief.

bandman

I think that the determining factor would be how your provider determines that your link is down. If it is monitoring the status of the interface, then a down switch would trigger the failover, I would imagine. You would want to test during a maintenance window, of course.
