PCI Express 3.0 is enterprise computing’s dominant standard for communication among memory, microprocessors, networking, and storage, but it’s facing new competition as its upcoming major update underwhelms some of the industry’s most important observers.
Officials of PCI-SIG, an industry group that controls the specification, talked about their fourth-generation plans more than five years ago and said version 4.0 would likely arrive by 2015. Now, fresh delays, combined with recent trends in big data, Internet of Things, and mobile computing, are pushing several top IT vendors to call for new approaches to data bottlenecks.
PCIe 4.0, as it’s known, will move data at 16 gigatransfers per second, double the rate of the current version. That rate can be achieved at half or a quarter of current voltage levels by using a burst method that sends large amounts of data over shorter periods, explained IBM’s Al Yanes, an engineer who chairs PCI-SIG.
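As a back-of-envelope illustration of what that doubling means for a full-width link, the sketch below uses the standard PCIe parameters of an x16 slot and 128b/130b encoding (figures from the PCIe specifications, not from this article):

```python
def pcie_link_bandwidth_gbps(gigatransfers_per_s, lanes):
    """Approximate usable link bandwidth in gigabits per second.

    PCIe 3.0 and later use 128b/130b encoding, so roughly 98.5%
    of the raw transfer rate carries payload bits.
    """
    return gigatransfers_per_s * lanes * (128 / 130)

gen3 = pcie_link_bandwidth_gbps(8, 16)   # PCIe 3.0: 8 GT/s per lane, x16 link
gen4 = pcie_link_bandwidth_gbps(16, 16)  # PCIe 4.0: 16 GT/s per lane, x16 link
print(f"PCIe 3.0 x16: {gen3:.1f} Gb/s; PCIe 4.0 x16: {gen4:.1f} Gb/s")
```

Because both generations use the same encoding, moving from 8 to 16 gigatransfers per second exactly doubles usable bandwidth, roughly 126 to 252 gigabits per second on an x16 link.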
Yanes announced last year that PCIe 4.0 would roll out early this year. Now, due to testing issues, “I think it’s going to be more in the third quarter before the cycle rolls out,” he said. There hasn’t been much demand until recently, he added.
SEE: Hardware Procurement Policy (Tech Pro Research)
Delays in computer industry standards are not normally a big deal; it’s common knowledge that such things take time to get right. But almost all major hardware companies except Intel are starting to work together on alternate plans. Those companies already use and support PCIe 3.0 and 3.1, but the information world is moving too fast to wait much longer, they argue. There is also a business motivation: Intel’s recent purchases of smaller companies in the hardware acceleration and programmable chip niches are leaving other hardware giants without much choice.
Three consortia, CCIX, Gen-Z, and OpenCAPI, could work together to force Intel’s cooperation and possibly render PCIe 4.0 moot. CCIX, which advocates pronounce “C-6,” stands out for a slightly different perspective: its goal is to work with PCIe 4.0, not against it.
CCIX, officially the Cache Coherent Interconnect for Accelerators, includes name brands such as AMD, Amphenol, ARM, Broadcom, Huawei, IBM, Mellanox, Micron, Qualcomm, Red Hat, Texas Instruments, and Xilinx.
“We are always interested in the data acceleration market… PCI Express is really good because it’s everywhere,” noted Gaurav Singh, vice president of architecture for Xilinx and chair of CCIX. However, “In order to improve the performance, we need to have a higher bandwidth on the interface, and we also need to eliminate the bottleneck,” he said.
Programming for PCI requires a lot of software overhead, and it suffers too much delay because of bit ordering and software drivers, Singh explained. As such, “What we’re doing with CCIX is we’re extending that memory domain into the accelerator. This is something that PCI does not have.”
“The CCIX approach is to say, ‘Let’s not throw the baby out with the bath water’,” Singh continued. “The same pins can be used for PCI Express, and it can also be used for CCIX.”
CCIX is already developing speeds of 25 gigabits and 56 gigabits per second, Singh stated; it works at PCIe 4.0 speed, too. The consortium expects to demonstrate its chips in the first half of 2018, he said.
Dell is part of the PCI-SIG. Dell and its new subsidiary EMC both widely use PCIe in servers and storage, yet Dell-EMC exemplifies a hardware giant that’s involved in both the Gen-Z and OpenCAPI organizations. “Anytime we can get more and more in and out of processors, we can build better storage platforms, we can build better servers, and our customers can get better results accordingly,” said Danny Cobb, EMC vice president of flash storage strategy and a company fellow.
“The transition from [PCI Express] generation two to generation three, there were some hiccups. Things didn’t just work as smoothly as we would like to see. But that’s life in the big city,” Cobb said. “For the longest time, PCIe hasn’t been viewed as the easiest thing to implement… PCIe just wasn’t there in terms of the distances it could support, the bandwidth it could deliver [or] multistream, multinode,” he said.
EMC has a precedent for doing what must be done to make high-end products work. For example, the company used PCIe alternatives such as RapidIO in previous versions of its high-end storage arrays and currently uses another alternative, InfiniBand, in similar products, including those involving flash drives.
What does it all mean for enterprise technologists?
“What customers need to do, and what they are doing, is start thinking about where they have information-intensive applications,” particularly those running atop high-speed transactional databases, Cobb said. “What I’m advising customers is to start to think about their real-time workloads,” which can include anything from serving instantly customized web advertisements to detecting security threats.
Intel, for its part, is sticking to the three-pronged approach of its proprietary QuickPath technology between processors, double-data rate memory, and PCIe for connecting to other systems. Asked about the three consortia, “I don’t think there’s anything they’re planning that we don’t have plans on our roadmap to go do,” said Rob Hays, vice president and general manager of strategic planning in the Intel data center group. Special versions of the Intel approach to hardware acceleration are being prepared for upcoming applications such as machine learning and 5G mobile communications, he added.
Veteran IT workers will grin upon hearing Intel’s PCIe hardware identifier number: 8086, the same number as the milestone processor Intel released in 1978, which kicked off its era of dominance in microprocessors. Whether Intel can maintain that lead 40 years later may come down to a chess match between PCIe 4.0 and the industry consortia, which are looking out for customers’ interests as well as their own.