Storage

Bypass fibre channel switching for small storage environments

Every IT budget is under pressure to reduce costs. IT pro Rick Vanover outlines a few scenarios for small storage networking installations where the switch can be omitted, saving on the upfront purchase price.

There is nothing more frustrating than building out a solution only to discover that while the costs for servers and storage have been accounted for, the switching costs have not. I personally think we are in a transition period where fibre channel still makes sense in some situations where 10 Gigabit Ethernet (10 Gig-E) isn't yet a good fit. For small installations, there is an opportunity to skip the fibre channel switch and connect the servers directly to the storage. This is possible even in multi-node cluster configurations of two, three or more servers.

In the course of virtualizing small workloads, administrators may be presented with an opportunity to consolidate a number of servers to a small VMware or Hyper-V cluster. The most critical design element of virtualized servers is the shared storage implementation; however, it may appear wasteful to invest in a fibre channel switching infrastructure for a small cluster. By a small cluster, I'm referring to a cluster of two or three VMware or Hyper-V hosts. Depending on any number of factors, that could be between 10 and 80 virtual machines on the small cluster by widely accepted virtualization design strategies. A typical configuration is shown in Figure A below:

Figure A

This is a relatively straightforward storage configuration, but in a small configuration the switching components may not be needed. For smaller virtualization implementations where the Hyper-V or ESXi hosts are the only consumers of the storage, the switching components can be removed. The obvious benefit is reduced cost: fibre channel switch pairs can cost USD $30,000 or more for a new purchase, plus support agreements. This reduced-footprint configuration would look like Figure B below:

Figure B

There are a few considerations that need to go into a design such as this. Primarily, the number of ports is limited, and where they go is important. Most storage processors in the modular storage space allow four or more fibre channel ports for connectivity. Consider a dual-controller system without a switch where each controller has two fibre channel ports: two servers can connect directly to each controller. This still provides dual-path connectivity to each controller from each host. Additionally, extra fibre channel ports can be added to the storage processors to allow three, four or more servers to connect directly in lieu of a switch.
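
To make the port arithmetic concrete, here is a minimal sketch (Python, with illustrative port counts rather than any vendor's specifics) of how many hosts can be direct-attached while keeping dual paths:

# Minimal sketch: hosts that fit on direct-attach FC with dual-path access.
# Each host consumes one front-end port on every storage processor, so the
# controller with the fewest ports sets the limit.
def max_direct_attach_hosts(ports_per_controller):
    return min(ports_per_controller)

print(max_direct_attach_hosts([2, 2]))  # base dual-controller array: 2 hosts
print(max_direct_attach_hosts([4, 4]))  # with add-on port cards: 4 hosts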

The clear issue here is scalability. If there are additional consumers of the storage fabric, a switch may need to be added. This could include a tape drive, additional hosts or additional storage processors.
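
As a quick sanity check (the port counts and consumers here are assumptions for illustration), the decision comes down to whether everything that needs a fibre channel port still fits on the array's front-end ports:

# Sketch: does the fabric still fit without a switch?
front_end_ports_available = 4      # e.g. two ports on each of two controllers
hosts = 2                          # each host uses two ports (one per controller)
other_consumer_ports = 1           # e.g. a tape drive on a single port

ports_needed = hosts * 2 + other_consumer_ports
print("add a switch" if ports_needed > front_end_ports_available else "direct attach fits")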

Have you ever designed around avoiding the switch in small environments? Share your comments below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

11 comments
joshi_at

I'm not sure which FC switches you are pointing to, but my two (Brocades) were around 6k. So I have everything I need - redundancy AND support! Beware that not every configuration is supported, although it is working. Afaik EMC doesn't support direct connections of nodes to their storage products; it is working, but it isn't supported...

bby_cn2000

Only one question: in this scenario, can all of the storage's capacity be shared by the different hosts?

jmarkovic32

Even with a small implementation of a couple of hosts, you still need to think about growth right? DAS limits you to the number of physical connectors on the storage array. A SAN allows you to scale up to the maximum number of hosts that the array can support without having to worry about adding controllers or cabinets.

miked123456

I don't mean to be a jerk, but I fail to see the reasoning behind this method other than "I forgot to budget for switches." Why not just save some more money and build your server with a high-end controller and additional bolt-on storage? This solution appears to defeat the purpose of 'network storage'. In this situation, if you lose the server you have effectively lost the data, because you have no way to access it. Maybe it's too early and I'm being too cynical, but I would like to hear if anyone finds or is using this configuration.

Vegaskid

Removing the switches still allows shared storage between the hosts. It's the storage processors themselves that determine which hosts can see which LUNs. I think the whole point of this post is: if you need FC performance but have always been put off by the overall prohibitive cost, then you can still have it, with big savings. I know lots of small shops that have high performance requirements for an application that iSCSI cannot provide, and this lower cost approach to FC SAN works well.
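
A rough sketch of the LUN-masking point made in this comment; the WWPNs and LUN numbers below are made up for illustration:

# Without switch zoning, the array's LUN masking still decides which host
# (identified by its HBA WWPN) sees which LUNs. Values are fabricated.
lun_masking = {
    "10:00:00:00:c9:aa:bb:01": {0, 1},  # host A sees shared LUNs 0 and 1
    "10:00:00:00:c9:aa:bb:02": {0, 1},  # host B sees the same shared LUNs
    "10:00:00:00:c9:aa:bb:03": {2},     # host C is limited to LUN 2
}

def host_can_see(wwpn, lun):
    return lun in lun_masking.get(wwpn, set())

print(host_can_see("10:00:00:00:c9:aa:bb:01", 0))  # True, shared with host B
print(host_can_see("10:00:00:00:c9:aa:bb:03", 0))  # False, masked off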

b4real

Totally removed - I understand that. It is effectively a point-to-point pathing configuration. I don't want to go the FC-AL route, however.

sbarsanescu

For the scenarios described, one could approach it like that. However, for such a limited scenario, why not go iSCSI - over Gigabit LAN and save some more? I mean, if one doesn't have a lot of budget, then probably the SAN specs are such that latency/throughput would not be limited by iSCSI. And then, FC is more expensive than iSCSI, no?

b4real

Such a small environment likely doesn't have incredible I/O requirements but yes iSCSI would do - so you do have a point.

teeeceee

iSCSI would be the way to go. Even in this scenario you would need servers configured with expensive fiber ports. With iSCSI you can use less expensive NICs with existing gigabit switches for connection to the storage host. The advantage of the direct connection method would be eliminating any latency or bandwidth contention on the switch and iSCSI channels.

teeeceee

Yes, iSCSI can be a bottleneck on gig switches. For iSCSI to be feasible for a SQL Server model, dedicated NICs should be installed on both the storage node and the SQL Server node. Also, on the switch, teamed ports configured as VLAN trunks dedicated to the iSCSI traffic would work best. Of course, your switch would have to be capable of that configuration, using something like a Cisco Catalyst 3560 or better. However, for high IO to the SQL Server node, direct peer-to-peer channels would be better.

njcsamuels

Currently, we're trying to decide whether to purchase SAS or FC. See the link for a SAS controller below: http://www.hp.com/products1/serverconnectivity/storagesnf2/sas/ (FC: 4 Gb/s, 2 ports; SAS: 3 Gb/s, 4 ports). This will be for a SQL Server. We're running perfmon to try to decide which to choose. Our consultant won't consider iSCSI due to only having 1 Gb switches and prior experience with slow perf.
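
For what it's worth, a back-of-the-envelope comparison of the two adapters listed above (raw aggregate link rates only, assuming roughly 100 MB/s of payload per nominal Gb/s with 8b/10b encoding; real throughput depends on the controller and disks behind it):

# Rough upper-bound bandwidth comparison of the two options mentioned above.
options = {
    "FC, 4 Gb/s x 2 ports":  (4, 2),
    "SAS, 3 Gb/s x 4 ports": (3, 4),
}

for name, (gbps_per_port, ports) in options.items():
    aggregate_gbps = gbps_per_port * ports
    print(f"{name}: {aggregate_gbps} Gb/s aggregate, ~{aggregate_gbps * 100} MB/s")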
