Why your traditional virtualization vendor can't help you with containers

Containers are the next big thing, but traditional hyper-converged vendors may be constitutionally incapable of managing them at scale.

As container adoption soars, more and more operators are discovering the networking and storage challenges of scaling the containers that developers love. Containers are easy on a laptop, but not so much in production--or at scale.

Unfortunately, traditional hyper-converged infrastructure (HCI) vendors like VMware haven't kept pace with the changes brought about by the container revolution. Indeed, Mark Balch, vice president of products at Diamanti, a startup whose appliance (launched in beta, now generally available) tackles the storage and networking I/O challenges containers pose, told me in an interview that such vendors can't keep up. According to Balch, containers demand a new approach--and a new appliance.

Can't get there from here

TechRepublic: Gartner has forecast that hyper-converged will be a $5 billion market by 2019, and containers are an obvious "next big thing" opportunity that affects infrastructure. So why aren't the existing hyper-converged vendors already delivering appliances optimized for containers?

Balch: Vendors naturally try to retrofit older technology for newer use cases like containers. The problem for the hyper-converged infrastructure (HCI) product category is that these systems depend on hypervisors (the "hyper" in HCI) and fundamentally cannot function without that software overlay. Legacy HCI systems would have to be completely redesigned--essentially made into entirely new products without a hypervisor--to support bare-metal containers on the same infrastructure as the storage.

In addition, HCI has consistently neglected networking, with poor performance as the consequence. As a result, many users have turned away from HCI for broad classes of applications, such as analytics and transaction processing. Existing HCI vendors will need to address the performance and cost overhead of hypervisors, and implement native container networking, to effectively support container technology and the operational processes that come with it.

The end of the VM world as we know it

TechRepublic: Gartner recently called the virtualization market "mature," with most firms reporting that 75% or more of their data center is virtualized. What basic challenges do these companies face when they try to run containers on infrastructure designed for VMs? What is the basic difference--from an infrastructure perspective--between running VMs and running containers?

Balch: VMs were designed to replicate physical servers, and that carries a lot of overhead with it. Containers allow you to run multiple applications on the same OS, with much less overhead than the software overlay of a hypervisor. So basically, the utility of the hypervisor goes away. You don't need its overhead (financial and performance), and you don't need it for multi-tenancy, because containers on their own allow you to run multiple applications on the same operating system. Hypervisors were created for a different problem--one that has already run its course.
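
To make the shared-OS point concrete, here is a minimal sketch--assuming a Linux host with Docker and the docker Python SDK installed (my assumptions, not anything from the interview)--that launches two containers and shows each one reporting the host's own kernel:

```python
import platform

import docker  # the Docker SDK for Python: pip install docker

client = docker.from_env()

# The host's kernel release, e.g. "5.15.0-91-generic".
print("host kernel:", platform.release())

# Each container runs its application directly on that same kernel --
# no guest OS and no hypervisor layer, unlike a VM.
for i in (1, 2):
    out = client.containers.run("alpine:3.19", "uname -r", remove=True)
    print(f"container {i} kernel:", out.decode().strip())
```

All three lines print the same kernel release, which is the point: the isolation happens at the process level, not inside a simulated machine.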

Thinking different about networking

TechRepublic: Why should enterprises look at networking and storage differently for containers than for VMs?

Balch: Containers, on their own, break traditional networking, because traditional networking expects one application per operating system, reachable at a single IP address. Containers allow you to run multiple applications on a single OS, but there's no native way to give every container its own distinct IP address on the network. The industry has therefore responded with all sorts of software overlays to try to solve that problem. Virtual machines are subject to the same limitation, because a VM provides a single network interface to a single OS.
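
You can see the gap with a small sketch, again assuming Docker and the docker Python SDK (assumptions for illustration; this is not Diamanti's product). Containers on Docker's default bridge receive only host-local private addresses, hidden behind the host's single IP by NAT--precisely the kind of software workaround Balch is describing:

```python
import docker  # the Docker SDK for Python: pip install docker

client = docker.from_env()

# Two long-running containers on Docker's default bridge network.
containers = [
    client.containers.run("alpine:3.19", "sleep 60", detach=True)
    for _ in range(2)
]

for c in containers:
    c.reload()  # refresh cached attributes so NetworkSettings is populated
    ip = c.attrs["NetworkSettings"]["IPAddress"]
    # Private, host-local addresses (typically 172.17.0.x): the rest of
    # the network sees only the host's IP, via NAT.
    print(c.short_id, ip)

for c in containers:
    c.stop()
    c.remove()
```

The printed addresses are invisible to the rest of the data center; external traffic has to be port-mapped through the host.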

Diamanti gives a distinct network interface to every container. In this way, every container can be managed like a VM on the network, with a distinct address and without the overhead of a software overlay. This allows containers to plug into existing network environments, including load balancers and firewalls, and to use standard service discovery like DNS, without re-architecting the network. Similarly, the whole storage ecosystem was designed around the assumption that each OS has a single application running on it.

In order to operate containers with storage, you have to manually assign individual storage volumes to individual containers via each operating system. Diamanti automatically associates a storage volume with each individual container, regardless of the operating system it is running on. So each container is treated as a first-class citizen from a storage perspective, without the cost and complexity of software overlays and manual operations.
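
To illustrate the manual bookkeeping Balch describes, here is a minimal sketch--again assuming Docker and the docker Python SDK, with a hypothetical volume name--that hand-assigns a named volume to individual containers, the per-container plumbing that otherwise has to be scripted for every workload:

```python
import docker  # the Docker SDK for Python: pip install docker

client = docker.from_env()

# Create a named volume, then bind it by hand to each container that
# needs it -- per-container, per-host bookkeeping the operator owns.
client.volumes.create(name="app-data")  # "app-data" is a hypothetical name

client.containers.run(
    "alpine:3.19",
    "sh -c 'echo hello > /data/greeting'",
    volumes={"app-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

out = client.containers.run(
    "alpine:3.19",
    "cat /data/greeting",
    volumes={"app-data": {"bind": "/data", "mode": "ro"}},
    remove=True,
)
print(out.decode().strip())  # -> hello
```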
