Data Centers

Diamanti believes appliances can simplify containers for the enterprise

Containers are hot, but they still face some barriers to adoption. TechRepublic spoke with Diamanti VP Mark Balch about how appliances could help solve the problem.

Image: iStockphoto/Panatfoto

Once or twice a decade, the IT industry reaches an agreement that an emerging technology is the next big thing—and containers are clearly the next big platform for applications. For most enterprises, it's not a matter of "if" containers make it into production, but "when."

I wrote earlier this spring about Red Hat's unique opportunity to make money by simplifying open source for the enterprise. And, as the Red Hat Summit in San Francisco opens this week, it's clear that simplicity is still what is missing around containers. The container market is still very much the Wild West in terms of emerging open source (Kubernetes, DC/OS, Docker Swarm) and commercial platforms (Azure Container Service, AWS Lambda).

It's also a tough market for buyers to decipher as they place their architecture bets.

As organizations evolve their architectures beyond virtual machines and look for better ways to embrace the new container form factor, they're also looking for a little simplicity. One startup that hopes to crack the code on open source monetization and container simplification in one fell swoop is Diamanti. I spoke with Diamanti's vice president of products, Mark Balch, to learn more about the hardware innovation happening around containers, the network and storage barriers that have challenged container operations, and the company's latest efforts around Red Hat's OpenShift.

TechRepublic: What's so tough about networking and storage for containers, and why hasn't this problem been solved already?

Balch: The old world of storage and networking is big scale-up storage arrays and fat networks, with a ton of manual configuration for operators. Modern applications built using containers require much better agility and economics. Carrying legacy data center complexity forward means that human operators can no longer reason about all the moving parts in a timely manner.

SEE: Consider this operational challenge before implementing containers (TechRepublic)

The most common approach has been stovepiping each application on dedicated hardware clusters to guarantee performance, which is not only incredibly expensive due to poor utilization, but is also contradictory to containers' original goals of application and data portability.

Many new, containerized applications are data-intensive, often including real-time analytics built on data-tier workloads such as Cassandra, Kafka, MongoDB, and Elasticsearch. While scaling stateful applications has always been challenging, the problem is compounded by user expectations of instant service delivery at global scale.

When enterprises attempt to run modern, stateful applications as multi-tenant workloads on shared infrastructure, they typically find that the "noisy neighbor" problem, already challenging in the virtual machine world, becomes even more difficult. When you pack containers densely, with 3-5x more workloads on the same infrastructure, one app begins to conflict with another, and you start to get unpredictable behavior and performance. And all the apps we're targeting are data-driven, so I/O is a big part of the picture.

TechRepublic: Why an appliance?

Balch: A lot of our team came from Cisco. We were the Cisco UCS engineering team. We know servers, we know virtualization, and we know networking. The other portion of the team is from Veritas and VMware VSAN, so we also know storage.

We started with a clean sheet of paper, and what we came up with was a new container-optimized architecture that merges network and storage data paths on a PCIe I/O controller card, shipped inside a standard x86 server appliance along with software innovations in clustering and resource management. Because a lot of the operational challenges of containers boil down to an I/O problem—a data-movement and data-at-rest problem—we believe container networking and storage must be solved together on converged infrastructure. Every byte traveling between containers, through the fabric, and to storage media goes through our card. We sit underneath the operating system in the data plane, and the control plane is tightly integrated with mainstream open source orchestration software, including Kubernetes, Mesos, and Docker.
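Balch's description of control-plane integration is abstract, so here is a minimal sketch of what "specifying I/O requirements to the orchestrator" could look like in practice. This is purely illustrative: the annotation keys and the helper function below are invented for this example and are not Diamanti's actual Kubernetes API. The idea is simply that a pod manifest can carry per-container network and storage hints that a scheduler extension reads and enforces.

```python
# Hypothetical sketch: a Kubernetes pod manifest (built as a plain dict)
# carrying vendor-style network/storage QoS annotations. The
# "example.io/..." keys are illustrative placeholders, not a real API.

def pod_with_io_guarantees(name, image, net_mbps, storage_iops):
    """Build a pod manifest whose annotations a hypothetical scheduler
    extension could read to enforce per-container I/O guarantees."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": name,
            "annotations": {
                # Illustrative keys; a real platform defines its own.
                "example.io/network-bandwidth-mbps": str(net_mbps),
                "example.io/storage-iops": str(storage_iops),
            },
        },
        "spec": {"containers": [{"name": name, "image": image}]},
    }

pod = pod_with_io_guarantees("cassandra-0", "cassandra:3.11", 500, 20000)
print(pod["metadata"]["annotations"]["example.io/storage-iops"])  # prints 20000
```

In a real deployment this manifest would be submitted to the API server as YAML or JSON; the point of the sketch is only that the requirement travels with the workload rather than being configured by hand on the infrastructure.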

Scale-out virtualized and containerized applications are great candidates for Diamanti. We take care of network and storage interoperability, without the overhead and integration complexity of the port mappings and overlay networks that an enterprise would otherwise have to manage. We also address the weird one-offs that come with implementing persistent storage across a storage area network. We're offering persistent storage that looks like direct-attached storage, and the infrastructure enforces performance guarantees, in a form factor that plugs into existing data center racks.
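The "persistent storage that looks like direct-attached" idea maps naturally onto Kubernetes PersistentVolumeClaims bound to a storage class whose provisioner guarantees performance. As a hedged sketch, the claim below is built as a plain dict; the storage class name "guaranteed-local" is hypothetical, standing in for whatever class a converged platform would expose.

```python
# Hypothetical sketch: a PersistentVolumeClaim requesting storage from a
# class that promises direct-attached-like performance. The class name
# "guaranteed-local" is invented for illustration.

def pvc_for_local_like_storage(name, size_gi, storage_class="guaranteed-local"):
    """Build a PersistentVolumeClaim manifest (as a dict) against a
    performance-guaranteeing storage class."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

claim = pvc_for_local_like_storage("cassandra-data", 100)
```

The application pod then mounts the claim by name and never learns whether the bytes land on a local NVMe device or travel over a fabric; that indirection is what makes the storage portable while the platform enforces the guarantee underneath.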

TechRepublic: First you supported Kubernetes. Then DC/OS. Now OpenShift. What's the strategy behind these various open source frameworks and how do you support them in the appliance?

Balch: Interesting question because most people associate appliances with being proprietary.

We work with the user's orchestration platform of choice and require no code changes, kernel modules, or drivers. This is an open source world where users don't want to be locked in. We recently announced DC/OS support at MesosCon, and we also work with Kubernetes and Docker. No specialized tools or customization are needed to use Diamanti. We want to be users' network and storage of choice, while leaving them their choice of operating system, container format, and orchestration software.

Going into Red Hat Summit this week we're announcing support for OpenShift Container Platform, their popular PaaS for containers. The Red Hat-Diamanti solution supports containerized applications with rapid development-to-production rollouts and guaranteed high performance networking and storage resources. OpenShift leverages Kubernetes and works seamlessly with Diamanti's network, storage, and scheduling extensions. For the first time, OpenShift users can specify their applications' network and storage requirements natively and quickly deploy across environments knowing those needs will be fulfilled.

TechRepublic: Is it accurate to say that Diamanti is trying to be to containers what Nutanix is to virtual machines?

Balch: A lot of people look at our converged architecture and say, "You're Nutanix." But, there are fundamental differences.

For one, we don't require a hypervisor—we believe that users should be able to run containers on bare metal. We also target a completely different set of applications—modern, containerized apps—where quality of service and performance guarantees enable whole new levels of productivity and time to market.

Nutanix is aimed at legacy virtualized apps, what Gartner calls "mode 1," often where data sets fit on individual nodes. Diamanti is addressing the new application stack, also called "mode 2," where DevOps cycles are far more agile, the dataset spans across the cluster, and there's a lot more east-west movement of data, where IO becomes crucial.


About Matt Asay

Matt Asay is a veteran technology columnist who has written for CNET, ReadWrite, and other tech media. Asay has also held a variety of executive roles with leading mobile and big data software companies.
