In a previous post, I discussed the challenges of integrating Docker containers with VMware. While VMware would prefer you run containers inside a virtual machine (VM), a more common use case for containers is to run them on physical hardware as you would a VM.
Containers are an abstraction performed at the operating system (OS) level that allow for efficiencies over VMs. In this post, I'll explore some pros and cons of containers vs. VMs.
Basic high-level differences between containers and VMs
A VM is an abstraction of physical hardware. Each VM has a full server hardware stack, from virtualized BIOS to virtualized network adapters, storage, and CPU. Because the entire hardware stack is virtualized, each VM needs a complete OS, and each VM instantiation requires a full OS boot. The VM boot process is normally much quicker than that of a physical machine; however, it can still take seconds or minutes depending on the OS, the performance of the physical hardware, and the system load.
Instead of virtualizing the entire server hardware stack, container abstraction occurs at the OS level. In most container systems, the user space is abstracted. A typical example is application presentation systems such as Citrix XenApp. XenApp creates a segmented user space for each instance of an application. A typical use case for XenApp is the deployment of an office suite to dozens or thousands of remote workers. To accomplish this goal, XenApp creates sandboxed user spaces on a Windows Server for each connected user. While each user shares the same OS instance including kernel, network connection, and base file system, each instance of the office suite has a separate user space.
Since a separate kernel doesn't load for each user session, containers avoid the overhead associated with running multiple OSs; this is why containers use less memory and CPU than VMs running similar workloads. It's common to see XenApp support hundreds of users on a single server, whereas XenDesktop, which relies on full VMs, supports only dozens of users on the same hardware. Also, since containers are just sandboxed environments within an OS, a container can start in milliseconds.
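The startup difference is easy to feel even without a container runtime on hand. The sketch below is plain Python, not an actual container launch; it simply times ordinary process creation on an already-running kernel, which is roughly what starting a container amounts to once the image is in place. Contrast that with the seconds-to-minutes OS boot a VM requires.

```python
import subprocess
import sys
import time

# Starting a process on a running kernel involves no BIOS and no OS boot,
# just a fork/exec -- the same basic mechanism a container launch builds on.
start = time.monotonic()
subprocess.run([sys.executable, "-c", "pass"], check=True)
elapsed_ms = (time.monotonic() - start) * 1000

print(f"Process started and exited in {elapsed_ms:.0f} ms")
```

On a typical machine this lands in the tens of milliseconds, orders of magnitude below any OS boot.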
The challenges of using containers
Security: An advantage of VMs is that abstraction at the physical hardware level gives each VM its own kernel; these individual kernels limit the shared attack surface to the hypervisor. In theory, vulnerabilities in a particular OS version can't be leveraged to compromise other VMs running on the same physical host. Since containers share the same kernel, admins and software vendors need to take special care to keep security issues from spreading between adjacent containers.
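By way of illustration, container runtimes do expose controls for exactly this kind of care. A minimal Docker Compose sketch (the service name and image below are placeholders, not a recommendation) that reduces what a compromised container can do:

```yaml
services:
  webapp:                          # hypothetical service name
    image: example/webapp:1.0      # placeholder image
    read_only: true                # immutable root filesystem
    cap_drop:
      - ALL                        # drop all Linux capabilities by default
    security_opt:
      - no-new-privileges:true     # block privilege escalation via setuid binaries
```

None of this restores the kernel isolation a VM provides, but it shrinks the attack surface a neighboring container can reach.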
Management: Solutions such as Docker make container management easier, but many customers still find container management more of an art than a science. One customer who has been running Docker recently shared his experience and his frustrations with managing Docker in a production environment.
As container management platforms such as Docker continue to mature, data center managers should keep examining potential workloads for containers. Enterprise customers should start slowly.
I recommend deploying containers within VMs for specific workloads to gain experience with the technology in production. An example would be grouping internally facing web servers onto a single VM using a container technology such as Docker. Another option is to provide containers as development environments for new applications. The experience can be used to provide feedback to the community and understand how containers integrate into your data center operations.
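As a concrete sketch of the first option, two internally facing web servers grouped onto a single Docker host could be declared in a Compose file like the one below; the service names, images, and ports are illustrative, not prescriptive.

```yaml
services:
  intranet:
    image: nginx:stable    # internally facing intranet site
    ports:
      - "8080:80"
  docs:
    image: nginx:stable    # internal documentation server
    ports:
      - "8081:80"
```

Running both services inside one VM keeps the experiment contained while still exercising real container workflows: image updates, restarts, and port mapping.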
Has your organization considered using containers? If so, please share your experience in the comments.
Also see
- Containers: replacements or alternatives to virtual machines?
- Questions remain about the integration of containers within cloud virtual machines
- Why Docker... and why now?
- Just how hot is Docker?
Keith Townsend is a technology management consultant with more than 15 years of related experience designing, implementing, and managing data center technologies. His areas of expertise include virtualization, networking, and storage solutions for Fortune 500 organizations. He holds a BA in computing and an MS in information technology from DePaul University.