
New servers shipping with 10 Gbps interfaces: Time to upgrade?

The 10 Gigabit Ethernet upgrade is a tough one for many organizations to roll out. IT pro Rick Vanover highlights why built-in 10 Gbps interfaces are a big step in the right direction for the next generation of Ethernet technologies.

If you haven't noticed yet, a number of new servers and storage devices are shipping with built-in interfaces that support 10 Gbps networking. In my opinion, this is one of the tell-tale signs that 10 Gbps networking needs to become mainstream in the datacenter.

Dell, for example, now has a number of servers shipping with either converged network adapters (CNAs) or iSCSI interfaces at 10 Gbps. This is the case on the PowerEdge R810 as well as some other newer models. The R810 still has two built-in 1 Gbps Ethernet interfaces, but also includes two 10 Gbps interfaces through an OEM agreement with Emulex. I've also seen a number of other storage and networking products start to offer 10 Gbps interfaces, which makes me wonder what the state of the union is for endpoint connectivity to servers and storage. We'll save 10 Gbps to the desktop for another day!

This, of course, is much easier said than done in any datacenter. A number of issues surround a wholesale upgrade of the base network fabric: the initial investment, whether 10 Gbps is really needed, and how the storage network(s) are to be provisioned. Today's storage networks are a driving force behind why these server and storage products now ship with 10 Gbps interfaces.

Having the interfaces built in is a big plus. It avoids much of the back-and-forth configuration required with accessory cards, yet it can also be limiting, since (in this case) it provides only two interfaces. Should more be needed, we are back to the configuration stage. Further, we may end up neglecting the tried-and-true 1 Gbps interfaces. A lot of models come with four 1 Gbps interfaces, and we may in the end have to provision both 1 Gbps and 10 Gbps interfaces on new servers, effectively complicating our configuration.
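To make that provisioning headache concrete, here is a minimal sketch of the inventory step, assuming a Linux host and the standard /sys/class/net sysfs layout; the two-tier split into a 1 GbE team and a 10 GbE pair is my own illustration, not a vendor recipe:

```python
#!/usr/bin/env python3
"""Group a Linux host's NICs by reported link speed (a provisioning sketch).

Relies on /sys/class/net/<iface>/speed, which reports link speed in Mb/s;
virtual devices and downed links raise an error or report -1 and are skipped.
"""
from pathlib import Path


def link_speeds():
    """Return {interface_name: speed_in_mbps} for NICs that report a live link."""
    speeds = {}
    for iface in Path("/sys/class/net").iterdir():
        try:
            speed = int((iface / "speed").read_text().strip())
        except (OSError, ValueError):
            continue  # loopback/virtual device, or link down
        if speed > 0:
            speeds[iface.name] = speed
    return speeds


if __name__ == "__main__":
    # Assumed split for illustration: team the 1 GbE ports, pair the 10 GbE ports.
    tiers = {"1 GbE team": [], "10 GbE pair": [], "other": []}
    for name, mbps in sorted(link_speeds().items()):
        if mbps == 1000:
            tiers["1 GbE team"].append(name)
        elif mbps == 10000:
            tiers["10 GbE pair"].append(name)
        else:
            tiers["other"].append(name)
    for tier, nics in tiers.items():
        print(f"{tier}: {', '.join(nics) or 'none'}")
```

On a box like the R810 described above, you would expect two NICs in each tier; the actual teaming or bonding still has to be configured on top of an inventory like this.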

What is your take on the 10 Gbps interfaces being built-in on some servers? Good, bad, not ready yet? Share your comments below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

22 comments
nick.ferrar

10Gb is compelling for high-density VM environments; we're looking at it ourselves to move away from 6-12 1Gb uplinks from an ESX host + 2-4 FC for storage to just 2 x uplinks from a CNA, doing FCoE as well. Trouble is we'll need to put in a new core (Nexus 7000) and hang Nexus 5000s off those for the server connections. It's the server connections that don't really add up: you save money on the cards (a CNA is surprisingly cheap compared with HBAs), but the per-port cost of a Nexus 5000 is huge compared with 1Gb copper switches.

Tony_Scarpelli

I wonder if 10Gb Ethernet pushed down to smaller enterprises and companies opens a new era of distributed multiprocessing. Some organizations have the need for modeling or heavy mapping/video use. I wonder if a SETI-like setup will soon show up on business networks, perhaps in the off hours at first.

tw5000

The manufacturers shipping servers with 10 Gbps interfaces is a good thing, so long as they keep the 1 Gbps interfaces as well (as on the Dell model indicated, for instance). Upgrading the switches involved is where the real expense is. I seem to remember running tandem 1 Gbps interfaces at 100 Mbps for a while, until we managed to get the budget dollars together to upgrade the main server interface switch. Even at that time, 5 or so years ago, I would only have wanted a server with 1 Gbps interfaces, knowing that we would be moving to that going forward and not wanting to be stuck with slower speeds. That's the one thing this article doesn't specify: whether 10 Gbps is backwards compatible with 1 Gbps. I'd honestly want the faster speed available on any new server purchase, but only provided I could still use it on my current network. 3-5 years from now, I'd bet everything on the backbone, even for small shops, will be 10 Gbps.

tgabonowe

Technologically it's sound to have 10Gb, but when you are in the boardroom explaining the ROI to the numbers people and why the current 1Gb needs to be upgraded, it's going to be a tough sell. Let's face it, IT budgets are shrinking year in, year out, let alone replacing the entire backbone..... we will see as it unfolds...

websisc

Until we see a true ROI, we'll make do with teamed 1Gb NICs.

George@2ndfloorcomputers

Goodbye copper, hello fiber! Copper is past its due date, and yes, you can get 10G copper switches, but the price is huge, since you still have to lay new cables (no, the rumor is true: Cat 5e is dead). Time to go multimode (or single-mode if need be).

jhoward

As mentioned above, the up-front costs for 10Gb infrastructure are really exorbitant right now. Not to mention I know of quite a few companies who have only just recently (in the past 2 years) upgraded their 100Mb infrastructure to 1Gb. Without the core infrastructure to support the server interfaces, how do you present a business case? Seeing the 10Gb interfaces built in is a great step in commoditizing the technology on the server side; however, the gap in price between managed core switches that support 1Gb and those that support 10Gb is still just ridiculous, as any relatively new technology always is. I would like to revisit this in 12 months just to see how much, if at all, things have changed in this respect. I would also like to see how much this is driven by the increasing bandwidth needs of consumers, especially as 4G mobile networks bring home WAN speeds to mobile devices.

SgtPappy

....possibly be against this?

dgennello

In addition to being an SMB MSP, we recently expanded into the electronic security and surveillance market space. With many of the CCTV vendors moving toward megapixel CCTV cameras, 10Gb interfaces are exciting to us. Deploying 30+ 5MP cameras streaming video data to the server requires high-throughput NICs, and we will use the 10Gb interfaces.
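For illustration, here is a rough back-of-the-envelope estimate of that load; the per-camera bitrate and headroom factor are assumptions (real 5MP streams vary widely with codec, frame rate, and scene activity), not figures from the comment:

```python
# Back-of-the-envelope aggregate bandwidth for an IP camera deployment.
cameras = 30
mbps_per_camera = 25   # assumed: 5MP H.264 at full frame rate, high quality
headroom = 1.5         # assumed margin for bursts, retransmits, viewing clients

aggregate_mbps = cameras * mbps_per_camera * headroom
print(f"Aggregate ingest: {aggregate_mbps:.0f} Mb/s")  # ~1125 Mb/s

for link_mbps, name in [(1000, "1 GbE"), (10000, "10 GbE")]:
    print(f"{name}: {aggregate_mbps / link_mbps:.0%} utilized")
```

Under those assumptions a single 1 Gbps NIC is already past saturation (~112%), while a 10 Gbps interface sits near 11%, which is exactly the kind of gap that makes the built-in ports attractive here.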

FAST!!!

The server infrastructure is where 10Gb belongs, to support high-density VM deployments and network-attached iSCSI storage. Unless you are a small shop, there is no reason not to be investing in this.

kevnad1966

Yeah, 10Gb sounds nice, and in some high-end datacenters it is probably a requirement; other than that, an org would have to invest quite a sum in the supporting infrastructure for 10Gb, just in their own DCs.

pgit

...and isn't it somewhat worthless unless the supporting infrastructure is updated as well? I can see this getting its first toehold in the storage/retrieval arena, but what's the goal if the overall service to the end user is limited to 1Gb or even 100Mb? Decreased processor loads?

b4real

It is quite easy to deploy 4 x 1 Gbps instead of going for 10 Gbps.

b4real

Acquisition cost, ease of termination, and existing infrastructure make copper tough to move away from.

georgeb

We find 10G interesting for things like fan-in issues, where several front-end servers with 1G interfaces are talking to something like a Hadoop cluster. We put 10G interfaces on the backend boxes. Arista makes a 10GBASE-T switch that works over regular twisted-pair copper, and NICs are under $1K for dual-port 10GBASE-T. This is a much cheaper solution than optical 10G.

b4real

I'm not privy to the ROI, but had sensed that the upfront investment is the killer. There must be performance requirements at this point making 10 Gbps go in. Further, there is so much competition in the 1 Gpbs market, that price is driven down there because of it.

b4real

The core infrastructure is a huge investment. And historically, 10 Gbps has been the backplane, not the endpoint connectivity.

b4real

Hadn't thought about media applications like that. Definitely the case for LAN traffic. Are there any requirements for storage protocols in that space at 10 Gbps?

b4real

As this model is targeted at that application. But will we see it happen on other models? Further, for those who need this model: is it a boon or a bust?

b4real

But, built-in is a big step forward in making it happen.

dwerhart

10 gig interfaces are the standard for us in my organization - in the backbone. However, we have not migrated servers to 10 gig at this time. We will be moving that direction for our mission-critical servers - all of our wiring closets connect via 10 gig, and we plan on moving several of our servers to 10 gig as well. Why we want it: backbone aggregation, and bottlenecking is a possibility when a switch presents data at 48 ports running a 1 gig interface. We want to see fewer bottlenecks in our network at the layer 2 level and at the layer 3 level. Servers that work in this 10 gig environment are going to be the norm for us soon.
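To put rough numbers on that 48-port bottleneck, here is a simplified worst-case model; it assumes every edge port bursts toward the uplink at line rate at once, which real traffic rarely does:

```python
# Worst-case oversubscription of a wiring-closet switch uplink:
# aggregate edge bandwidth divided by uplink bandwidth.
def oversubscription(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    return (ports * port_gbps) / uplink_gbps

# 48 ports of 1 gig feeding a single 10 gig uplink
print(oversubscription(48, 1, 10))  # 4.8 -> 4.8:1 oversubscribed

# the same closet with a 2 x 10 gig aggregated uplink
print(oversubscription(48, 1, 20))  # 2.4 -> 2.4:1
```

A 4.8:1 ratio is usually tolerable for user access ports, but it is a real ceiling on a server-facing switch, which is why moving the servers themselves onto 10 gig links is the natural next step.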

pgit

I imagined the prime benefit would be maintaining bandwidth overhead rather than raw speed of data delivery. You're doing it right, upgrading everything to 10 gig across the board. I don't have any clients that can afford that; in fact, I think only one or two could even benefit from 10 gig.
