There's an Upper Midwest adage that is true more often than not: "No matter how big the garage, there is never enough room for the cars." Data center operators are suffering something similar: just replace garage with bandwidth and cars with data traffic.
What is driving the bandwidth demand?
In this report, Shreyas Shah, an experienced system architect at Xilinx, looks at what's driving the increased demand for bandwidth within a data center. It's not surprising that much of that demand comes from applications we all use.
- Social media: Social media content consists of several types of data, probably residing on different virtual and possibly different physical servers. Presenting the webpage's content (video, audio, images, etc.) in a timely fashion requires a great deal of data center interconnect bandwidth.
- Video on demand: The popularity of latency-sensitive video products and the desire to transfer video in real time are both major reasons why a data center's bandwidth needs are never enough.
- Big data analytics: Big data analytics and "just-in-time" targeted advertising consume huge amounts of bandwidth as data moves between servers and from servers to storage devices.
- Internet of Things: The impact from the Internet of Things is just starting to be felt. Every device that is connected to the internet (estimates are at 30 billion devices by 2020) will use bandwidth to supply data and receive command and control information.
- Internal backups: More data requires more backup capability, and with our ever-increasing reliance on the internet to function, not having access to an application and/or data could ruin a business.
- Server virtual machines (VMs): Virtualizing servers affects data center bandwidth because VM-to-VM traffic is high-volume and requires low-latency server-to-server connections.
- Network virtualization: Software-Defined Networking (SDN) and network virtualization place additional bandwidth demands on the network, making it a "convenience of SDN vs. more bandwidth for production" decision.
One example of the amount of effort being spent on upping available bandwidth in data centers is the quest for a new 100 GbE optical interface protocol and the technology to make it work. It is no small challenge to obtain readable 100 Gb/sec signals after a two-kilometer fiber optic cable run.
Another technology coming under pressure is Ethernet, the ubiquitous workhorse of interconnect technology. The Ethernet protocol oversees connections between servers, networking equipment, and storage devices in the data center. Ethernet technology also connects physically separated data centers.
As clever as network engineers are at finding innovative ways to mold Ethernet to their needs, the 34-year-old technology is just about wrung out when it comes to squeezing more bandwidth from its connections. One area under duress is the Spanning Tree Protocol (STP). The protocol is called Spanning Tree because, when diagrammed, the network looks like a tree's root system (Figure A).
On top are the core routers. All traffic is under their direction, flowing from edge devices up to the core routers and back down to other edge devices.
Because of the way engineers map Spanning Tree networks, they say the flow pattern is in a north-south direction. That is also a hint of what's wrong with Spanning Tree. All networked devices must send their traffic to the core routers; this approach quickly becomes a bottleneck when traffic loads are bumping up against design limits or if enough devices are added to the network.
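The detour STP forces can be sketched in a few lines of Python. This is a minimal toy model, not real STP: the three switch names are hypothetical, and a BFS tree rooted at the core stands in for STP's loop-free tree. Even though the two edge switches share a direct link, the tree-constrained path between them must climb through the core.

```python
from collections import deque

# Hypothetical topology: one core switch, two edge switches,
# plus a direct cross-link between the two edge switches.
links = {
    "core":  {"edge1", "edge2"},
    "edge1": {"core", "edge2"},
    "edge2": {"core", "edge1"},
}

def bfs_tree(graph, root):
    """Build a spanning tree (parent pointers) via BFS from the root,
    mimicking STP's single loop-free tree rooted at the core."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    return parent

def tree_path(parent, src, dst):
    """Path from src to dst that stays on the spanning tree:
    climb from each endpoint toward the root, splice at the meeting point."""
    def to_root(node):
        chain = []
        while node is not None:
            chain.append(node)
            node = parent[node]
        return chain
    up, down = to_root(src), to_root(dst)
    common = next(n for n in up if n in down)
    return up[:up.index(common)] + down[down.index(common)::-1]

tree = bfs_tree(links, "core")
print(tree_path(tree, "edge1", "edge2"))  # ['edge1', 'core', 'edge2']
```

The direct edge1-edge2 link sits unused (STP would block it to prevent a loop), so traffic takes two hops through the core instead of one. Multiply that by every server pair and the core becomes the choke point.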
The solution might be L2MP
Layer 2 Multipath (L2MP) is a technology of interest to data center network engineers who are trying to find all available bandwidth. Simply put, L2MP removes the need to pass traffic through the core routers. This is where the "east-west bandwidth" comes into play. Figure B shows the difference in traffic paths of L2MP compared to STP. The blue line depicts traffic flowing from one server to another in as direct a manner as possible.
The authors of a Cisco Press paper outline the operational benefits of L2MP:
- It eliminates the need for north-south bandwidth allocation;
- It eliminates the possibility of debilitating network loops between devices, which was the reason for using STP in the first place;
- It provides a single control plane for unknown unicast, unicast, broadcast, and multicast traffic; and
- It enhances mobility and virtualization in the FabricPath network with a larger OSI Layer 2 domain.
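The multipath idea behind those benefits can also be sketched in Python. This is a toy model under stated assumptions: a hypothetical two-spine fabric with made-up switch names, and a breadth-first enumeration standing in for the link-state (IS-IS) computation that FabricPath and TRILL actually use. The point is that leaf-to-leaf traffic has two equal-cost paths to spread across, neither of which is "the" core.

```python
from collections import deque

# Hypothetical two-spine fabric: each leaf switch connects to both spines,
# giving every leaf pair two equal-cost paths for east-west traffic.
links = {
    "spine1": {"leaf1", "leaf2"},
    "spine2": {"leaf1", "leaf2"},
    "leaf1":  {"spine1", "spine2"},
    "leaf2":  {"spine1", "spine2"},
}

def shortest_paths(graph, src, dst):
    """Enumerate all least-hop paths between two switches, the way a
    link-state protocol discovers equal-cost routes to load-balance over."""
    best, found = None, []
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break  # BFS order: anything longer than the best can't tie it
        if path[-1] == dst:
            best = len(path)
            found.append(path)
            continue
        for neighbor in graph[path[-1]]:
            if neighbor not in path:
                queue.append(path + [neighbor])
    return found

for path in shortest_paths(links, "leaf1", "leaf2"):
    print(" -> ".join(path))
```

Where the spanning tree would block one spine's links and funnel everything through a single root, the multipath fabric keeps both paths live and splits flows between them, which is exactly the east-west bandwidth the article describes.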
Those working on standards sometimes add a flourish to the (to most of us) dreadfully boring standards draft. The authors of the Cisco Press paper note that Ray Perlner added the following poem to the Transparent Interconnection of Lots of Links (TRILL) draft. It alludes to a network topology that is free of STP.
I hope that we shall one day see,
A graph more lovely than a tree.
A graph to boost efficiency,
While still configuration-free.
A network where RBridges can,
Route packets to their target LAN.
The paths they find, to our elation,
Are least cost paths to destination!
With packet hop counts we now see,
The network need not be loop-free!
RBridges work transparently,
Without a common spanning tree.
I do not profess to be a network engineer. However, I try to keep up on developments pertaining to digital networking, and yet, east-west bandwidth eluded me. I thought there might be others in the same predicament. Hopefully, that is no longer the case.
Information is my field...Writing is my passion...Coupling the two is my mission.