Network-switch vendors will go broke if they are relying on Facebook for sales. Facebook now makes all its own network gear under the auspices of the Open Compute Project (OCP). From the OCP website: “A set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software — together with trusted project validation and testing — in a truly open and collaborative community environment.”

Most who follow data-center networking figured it was a matter of time before Facebook would design and build all the network equipment needed in its data centers. The first piece of equipment to be redesigned was the Top Of Rack (TOR) switch. The new TOR switch was code-named Wedge — of note is the powerful server Facebook design engineers added to Wedge’s internal hardware. Facebook software developers dutifully created an operating system code-named FBOSS to run Wedge. It wasn’t much of a stretch to realize Facebook had other things in mind for the Wedge hardware design and FBOSS.

The data-center network architecture succumbed next. Facebook introduced Fabric a few months after Wedge. Yuval Bachar, hardware networking engineer at Facebook, explained the importance of Fabric and Wedge: “For both projects, we broke apart the hardware and software layers of the stack and opened up greater visibility, automation, and control in the operation of our network.”

The last piece of the puzzle

As tech pundits expected, Facebook’s data-center network overhaul was not yet finished. “Even with all that progress, we still had one more step to take,” said Bachar. “We had a TOR, a fabric, and the software to make it run, but we still lacked a scalable solution for all the modular switches in our fabric. So we built the first open modular switch platform.”

The platform Bachar referred to in his Feb. 11, 2015 press release is known as 6-pack (Figure A). Facebook describes 6-pack as a full-mesh, dual-backplane, non-blocking, two-stage switch with 12 independent switching elements, each capable of moving an impressive 1.28 Tbps. Bachar added, “We have two configurations: One configuration exposes 16x40GE ports to the front and 640G (16x40GE) to the back, and the other is used for aggregation and exposes all 1.28T to the back.”
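The figures Bachar quotes fit together arithmetically. A minimal sketch of that arithmetic (constant names are illustrative, not Facebook's; the numbers come from the announcement):

```python
# Sanity-check the 6-pack port arithmetic quoted above.
GE_PER_PORT = 40    # each front-facing port is 40 Gigabit Ethernet
FRONT_PORTS = 16    # line-card configuration: 16x40GE to the front
BACK_GBPS = 640     # plus 640G (16x40GE) to the back

front_gbps = FRONT_PORTS * GE_PER_PORT  # 16 * 40 = 640 Gbps
total_gbps = front_gbps + BACK_GBPS     # front + back capacity

print(front_gbps)          # 640
print(total_gbps / 1000)   # 1.28 -- the 1.28 Tbps per switching element
```

In other words, the aggregation configuration that "exposes all 1.28T to the back" is simply both 640G halves pointed at the backplane.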

Components in a 6-pack

The modular design starts with “the line card” (Figure B), a component nearly identical to Wedge. “Each element runs its own operating system on the local server and is completely independent, from the switching aspects to the low-level board control and cooling system,” continued Bachar. “This means we can modify any part of the system with no system-level impact, software or hardware.”

Figure B

The second 6-pack hardware module is called “the fabric card,” and contains two line-card boards with their business ends facing the back of the 6-pack hardware platform. The fabric card’s configuration allows a full mesh locally, meaning non-blocking connectivity is provided within the 6-pack switch. The fabric card also aggregates out-of-band management network traffic reachable via the external ports shown in Figure C.

Figure C

The two components come together in the 6-pack platform shown in Figure A. There are eight line cards and two fabric cards. The design in the schematic below (Figure D) offers an idea of how the network modules interconnect. (In Figure D, BP is Backplane, Fabric is Fabric card, and LC is Line card.)
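The card counts above also explain the 12 independent switching elements Facebook cites: eight line cards, plus two fabric cards each containing two line-card boards. A minimal sketch of that tally, with the full-mesh links each line card makes to the fabric elements (names are illustrative, not Facebook's internal identifiers):

```python
# Tally the 6-pack's independent switching elements and sketch its mesh.
LINE_CARDS = 8
FABRIC_CARDS = 2
BOARDS_PER_FABRIC_CARD = 2  # each fabric card holds two line-card boards

total_elements = LINE_CARDS + FABRIC_CARDS * BOARDS_PER_FABRIC_CARD
print(total_elements)  # 12

# In the full-mesh, two-stage design, every line card reaches every
# fabric element through the backplane (simplified model).
line_cards = [f"LC{i}" for i in range(1, LINE_CARDS + 1)]
fabric_elements = [f"F{j}" for j in range(1, FABRIC_CARDS * BOARDS_PER_FABRIC_CARD + 1)]
links = [(lc, f) for lc in line_cards for f in fabric_elements]
print(len(links))  # 32 backplane links in this simplified model
```

The independence Bachar stresses follows from this layout: because each element is its own server and OS, swapping any one node in the mesh leaves the rest untouched.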

Figure D

Bachar concluded his press release by mentioning that 6-pack, Wedge, and FBOSS are in production testing. And as promised, Facebook intends to contribute the 6-pack modular network switch design to the OCP.