It’s hard to measure the impact of a new vendor product, and EMC is especially difficult to gauge because the company is known for flashy product announcements. A great example: EMC once sponsored the Lotus F1 team and staged a semi-truck jump over a Formula 1 race car to promote its Redefine marketing campaign.

SEE: Cloud Data Storage Policy Template (Tech Pro Research)

Recently, EMC announced the DSSD D5 rack-scale storage platform. The solution targets Tier 0 workloads, meaning workloads that require the fastest available layer of persistent storage. I’ll try to cut through the marketing haze and offer the top five points of interest from the release.

1. The origin – DSSD D5

DSSD is the name of a company that EMC acquired in 2014. The privately held DSSD was working on a storage platform that would cater to large analytics platforms such as SAP HANA, and the DSSD name is a tip of the cap to that startup. According to VCE President Chad Sakac, the D5 name reflects the number of rack units the base solution consumes in a standard server rack: five.

2. Speeds and feeds

The speeds and feeds are as follows:

  • 10 million IOPS in 5U
  • 100 Gbps of bandwidth in 5U
  • 100 microseconds (µs) of latency
  • 144 TB in 5U
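Taking the published numbers at face value, a quick back-of-the-envelope check with Little's Law shows how much parallelism it takes to hit those figures. The arithmetic below is my own illustration, not an EMC specification:

```python
# Little's Law: average outstanding requests = throughput x latency.
# Plugging in the quoted D5 figures: 10 million IOPS at 100 microseconds.
iops = 10_000_000      # I/O operations per second
latency_s = 100e-6     # 100 microseconds, expressed in seconds

outstanding_ios = iops * latency_s
print(int(outstanding_ios))  # 1000
```

In other words, sustaining 10 million IOPS at 100 µs of latency implies roughly 1,000 I/Os in flight at once, which is why a shared, many-client design matters for this class of array.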

3. Connectivity

On my podcast, I spoke with DeepStorage founder and chief scientist Howard Marks about the DSSD D5. One question I had for Marks was how the DSSD solution is more appealing than flash-backed DRAM, also known as NVDIMM, from companies such as Micron or startups such as Plexistor.

Marks pointed out the shared nature of DSSD compared with its server-based NVDIMM competitors. NVDIMMs provide ultra-low-latency flash-backed memory on a per-server basis. In theory, a customer with the proper hardware drivers could use a solution such as VMware VSAN to pool that capacity into a virtual storage array. However, network latency would hamper the performance of an NVDIMM-based virtual SAN.

DSSD solves the shared-connectivity problem by leveraging a PCIe switch. Each server in the rack connects via a PCIe NVMe link, which reduces latency while providing the 100 Gbps of connectivity.
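For a sense of scale, here is a back-of-the-envelope calculation (my own, using only the figures from the announcement) of how long it would take to stream the full 144 TB through the quoted 100 Gbps of connectivity:

```python
# Time to read the entire array once at the quoted line rate.
capacity_bits = 144e12 * 8   # 144 TB expressed in bits
bandwidth_bps = 100e9        # 100 Gbps

seconds = capacity_bits / bandwidth_bps
print(round(seconds / 3600, 1))  # ~3.2 hours
```

A full-array scan in a few hours is the kind of figure that matters for the big data analytics workloads DSSD targets, where jobs routinely sweep most of the dataset.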

4. Access

Unlike traditional storage arrays, DSSD doesn’t present storage through the familiar LUN concept. Operating systems access DSSD-based storage in three ways.

  1. Traditional block storage via a kernel driver (currently available only for Linux).
  2. libHDFS, which allows native access for applications and databases such as Hadoop.
  3. The Flood Direct Memory API, the foundation on which interfaces such as libHDFS are built. Developers can create all-new applications, or integrate existing ones, to leverage DSSD directly via the Flood Direct Memory API.

SEE: All-flash arrays gaining popularity but with unique side effects (TechRepublic)

5. Use case

Plain and simple, any application that benefits from ultra-low-latency persistent storage can take advantage of this architecture. The usual suspects come to mind, such as big data analytics and financial applications. As the technology becomes more mainstream, companies like SAP and Oracle can extend the capability of their in-memory database solutions, since DSSD allows a tiered approach to data for in-memory use cases.

Your thoughts

This approach to storage is different from what we’ve seen from major storage vendors in the past. Is DSSD something truly special from a technology perspective, or is this another semi jumping over a Formula 1 car? Share your thoughts in the comments section.