Storage connection protocol testing offers lessons for real-world implementations

Non-Volatile Memory Express promises to expedite connections between SSDs and their hosts; however, testers are finding that real-world installation may not be a smooth process.


The Fibre Channel version of the Non-Volatile Memory Express standard coalesced last summer, and now interoperability testers are beginning to understand the unique challenges of making it work.

NVMe, as it's written in storage parlance, is a new way of connecting solid-state drives to host hardware, replacing the older Advanced Host Controller Interface. AHCI worked fine for traditional spinning media but is a limiting factor for solid-state performance.


Recent tests by the University of New Hampshire InterOperability Laboratory, long a partner of the Storage Networking Industry Association, found that working with NVMe is more complicated than working with other storage standards.

"The things that were unique about other technologies, they're all popping up," said the laboratory's David Woolf, senior engineer for data center technologies.

Woolf elaborated on several examples of NVMe challenges that could pose concerns for corporate workers who'll be performing real-world installations of the promising but as-yet unproven technology.

The NVMe protocol can run atop transport layers such as Fibre Channel, but it can also work over less-common alternatives such as InfiniBand and other Remote Direct Memory Access (RDMA) fabrics. "Each of those has its own little wrinkles," Woolf explained. "The tuning of that fabric, that in and of itself is a little bit of an art."
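On Linux, NVMe-over-Fabrics targets are typically attached with the standard nvme-cli tool, and the transport is chosen at connect time. The sketch below is illustrative only: the addresses, port numbers, and NVMe Qualified Name (NQN) are hypothetical placeholders, not values from any real deployment.

```shell
# Hypothetical sketch: attaching the same NVMe-oF subsystem over two
# different transports using nvme-cli. All addresses and the NQN are
# placeholders.

# Over an RDMA fabric (e.g. RoCE or InfiniBand); 4420 is the
# conventional NVMe-oF service port:
nvme connect --transport=rdma --traddr=192.0.2.10 --trsvcid=4420 \
    --nqn=nqn.2018-01.example:subsys1

# Over Fibre Channel, where addresses are WWNN/WWPN pairs for the
# remote target port and the local host port:
nvme connect --transport=fc \
    --traddr=nn-0x200000109b123456:pn-0x100000109b123456 \
    --host-traddr=nn-0x200000109b654321:pn-0x100000109b654321 \
    --nqn=nqn.2018-01.example:subsys1
```

The transport-specific "wrinkles" Woolf describes live below this layer: the connect commands look nearly identical, but an RDMA fabric must be tuned (and, on Ethernet, made lossless) before these connections perform well.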

Fibre Channel is widespread enough that tuning software already exists, but the same is not true for high-performance connections like RDMA, where the tuning is more custom, Woolf said. "If you're deploying this, you're going to want to pay attention to that because it's going to have a huge impact on you," he said.

Ethernet also presents challenges because it does not match Fibre Channel's sturdiness for high-performance storage applications, Woolf noted. "You're going to need to do things to enable Ethernet to be lossless in the way Fibre Channel is... to eliminate packets being dropped due to congestion. When you're talking storage, losing a packet is very serious."

It may seem contradictory to buy NVMe-enabled hardware and then connect it to an Ethernet network, but for some organizations that scenario may be viable. "If there's cost savings to be had there, all of a sudden they can be very significant especially if deployed in a large-scale network," Woolf observed.

Products certified for Fibre Channel are not yet posted on the laboratory's web page. Products from Cavium, Mellanox, and Toshiba are in the laboratory's first batch of NVMe over Ethernet certifications.

"The demands of performance-intensive cloud workloads are best addressed through higher IOPS, lower latencies and superior reliability--all of which NVMe SSDs provide," Toshiba said in a press release announcing its certification. Toshiba is working on software to use NVMe connections, the company stated.

Cavium is currently being acquired by Marvell Technology. Analyst Paige Tanner noted that Marvell would like to compete with bigger players such as Broadcom, which recently acquired networking specialist Brocade Communications Systems and its Fibre Channel products.


By Evan Koblentz

Evan became a technology reporter during the dot-com boom of the late 1990s. He published a book, "Abacus to smartphone: The evolution of mobile and portable computers" in 2015 and is executive director of Vintage Computer Federation, a 501(c)3 non-p...