Network and storage protocol options: Choose wisely

There is a plethora of network and storage protocols available today, each with its own benefits. IT pro Rick Vanover explains why these decisions matter both now and in the future.

Too many times, I find myself in theoretical discussions about whether the IT topic at hand is a new build or an upgrade to existing infrastructure. The new build option seems painless, right? There are no barriers, no pre-existing infrastructure limitations, and no bad decisions by the administrators who came before us.

The reality is that making decisions for new infrastructure builds is quite difficult. There are a number of network and storage protocols to choose from today, and that is aside from the network and storage product selection process itself. When it comes to storage protocols, one area that stands out as a big sticking point is Fibre Channel over Ethernet (FCoE). There are ongoing discussions about whether the FCoE standards matter. I would err on the side of them not mattering, since selecting a single vendor for storage networking equipment is usually the operating norm. But we must also step back and remember that FCoE takes a technology that isn't really that good and moves it forward onto our fastest storage network media. Social promotion, if you will.

The central point is that there are a lot of options for storage networking today. I'd choose something that leverages standard 10 Gigabit Ethernet instead of a solution that adapts Fibre Channel to a new media type. Among the 10 Gigabit options, I'd select iSCSI for most situations, with the occasional NFS use case. Also in the 10 Gigabit arena, ATA-over-Ethernet (AoE) can be a compelling storage solution without the traditionally high costs of a Fibre Channel SAN.
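As a rough illustration of why iSCSI is attractive for most situations, here is what attaching a Linux host to an iSCSI target looks like with the standard open-iscsi tools. The array address and IQN below are hypothetical placeholders, not from the article:

```shell
# Discover targets advertised by a hypothetical array at 10.0.0.50
# (sendtargets discovery is the common mode for iSCSI arrays)
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

# Log in to a discovered target; the IQN here is illustrative
iscsiadm -m node -T iqn.2010-01.com.example:array1 -p 10.0.0.50:3260 --login

# The LUN now appears as an ordinary block device on the host
lsblk
```

No HBAs, no Fibre Channel switches, no zoning: the entire storage attachment rides over the same Ethernet the servers already use, which is the appeal of the 10 Gigabit iSCSI approach.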

Using a standard 10 Gigabit Ethernet infrastructure appeals to me because server networking can be addressed by the same type of infrastructure. It also integrates with traditional Gigabit Ethernet environments more easily than other solutions do.

Of course, playing armchair architect comes down to getting the requirements in line and identifying how much money can be spent on the network and storage architecture.

How do you approach all of the network and storage protocol options for today’s new infrastructure builds? Share your comments below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

7 comments
georgedundon

You are right that FCoE moves a technology that isn't very good forward, but that technology is Ethernet. From an engineering perspective FC is significantly superior, and that is why it has a home in mission-critical storage. You can guarantee that every time you use an ATM or a POS terminal, every time you make a phone call, or do any normal day-to-day transaction that thousands of other people are doing at the same time, FC will be involved. Bandwidth is not the issue with storage; latency is. Any technology that uses TCP has RTT latency by definition. So when choosing the correct storage architecture it is important to choose the right tool for the job. iSCSI offers all of the benefits of SAN technology for applications that are not latency sensitive. CIFS/NFS is ideal for less latency-sensitive applications, especially where data sharing is desirable. FC and FCoE are the ideal tools for very high transaction rate applications like mission-critical databases. And most of all, ask a storage expert about storage issues and a software expert about software issues.

lpoehlitz

I agree that 10 Gbit is a great up-and-coming technology. iSCSI is super easy to deploy and manage. But if your application demands maximum performance (bandwidth), it seems like FC or FCoE would be the way to go. If using iSCSI where high performance or high bandwidth is needed, you would at least want to consider jumbo frames. Bigger packets mean fewer round trips and thus less latency, which generally results in greater bandwidth. The promise/advantage of FCoE is the convergence of data and storage on one network (thus fewer NIC/HBA cards). Indeed there are many design decisions, and understanding the applications and the organization is critical to making the right ones. Nice article.
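The jumbo frames suggestion above can be sketched on a Linux host; the interface name and target address are hypothetical, and every device in the path (NICs, switch ports, array ports) must be configured for the larger MTU or frames will be dropped:

```shell
# Raise the MTU to 9000 bytes (jumbo frames) on a storage NIC, here eth2
ip link set dev eth2 mtu 9000

# Confirm the interface picked up the new MTU
ip link show dev eth2

# Verify jumbo frames survive the whole path end to end:
# 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header;
# -M do forbids fragmentation, so an undersized hop fails loudly
ping -M do -s 8972 10.0.0.50
```

If the ping fails while a normal-sized ping succeeds, some device in the path is still at the default 1500-byte MTU, which is the most common jumbo-frame misconfiguration.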

albayaaabc

for nice to be handle the protocol so easy way to verify each one in seperate manner so each protocol for each operand is increase the speed of evalution so be cool and do it.

b4real

So, they really become different infrastructures; another notch in the kludge category.

tony_ansley

lpoehlitz, FCoE is not the only storage protocol that can be used in a converged network. Data Center Bridging is not only for FCoE; iSCSI works very well within a DCB environment. An optional feature of DCBX is the Application TLV, which (if implemented by switch and CNA vendors) lets iSCSI play by the same rules within a DCB architecture, giving it the same PFC and ETS capabilities that are required to provide a "fair-use" shared network between storage and other Ethernet protocols.

b4real

I've seen a number of independent benchmarks pretty much saying it doesn't make a difference.
