On August 19, 2015, Microsoft announced the availability of Windows Server 2016 Technical Preview 3 with support for containers and Docker. The announcement marks a milestone in Windows history: the team at Redmond contributed to the open source Docker Engine hosted on GitHub. Microsoft has narrowed the gap between the Docker and .NET communities, enabling developers to mix and match the capabilities of Linux and Windows to build complex distributed applications.
A brief timeline leading up to the milestone
Since November 2014, Microsoft and Docker have been working closely to bring containerization to Windows Server. At Build 2015, Mark Russinovich, CTO for Microsoft Azure, demonstrated the integration capabilities of Docker with Windows. He showed a microservices application that used a combination of Linux and Windows containers.
Just ahead of the Build conference, Docker announced the availability of its command line interface (CLI) for Microsoft Windows. In July 2015, Microsoft joined the Open Container Initiative (OCI) as a founding member and pledged support for a common container format and runtime.
Container capabilities in Windows Server 2016 TP3
The various capabilities for running containerized applications in the latest technical preview of Windows Server include:
- Docker Engine for Windows Server
- Docker Command Line Tools
- Visual Studio 2015 Tools for Docker
Microsoft had two choices for exposing containerization in Windows: a native Windows API or a Docker-compatible API. By choosing the latter, it can participate in the Docker ecosystem by supporting existing tools and extensions. Orchestration tools such as Docker Swarm, Kubernetes, and Mesosphere can instantly talk to the containers deployed on Microsoft Windows. Microsoft and Docker worked closely to implement Docker Engine on Windows, replacing Linux-specific system calls with Windows internal APIs.
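Because Windows exposes the standard Docker Remote API rather than a proprietary one, any client that already speaks that API can address a Windows host the same way it addresses a Linux host. A minimal sketch, assuming a daemon listening on the conventional unencrypted port (the actual TP3 endpoint address may differ):

```shell
# Query the Docker Remote API directly; the endpoint shape is the
# same whether the daemon runs on Linux or Windows Server TP3.
curl http://localhost:2375/containers/json

# The Docker CLI issues the equivalent call to the same API:
docker ps
```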
The Docker command line tools run natively on Windows, enabling DevOps teams to manage a Docker Engine running locally on Windows Server or on remote Linux hosts. The Docker command line interface (CLI) and PowerShell cmdlets bring automation capabilities to containers; administrators can choose either one to integrate with their automation scripts.
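To illustrate the two automation paths, here is a hedged sketch of the same task done with the Docker CLI and with the Containers PowerShell module from the technical preview (cmdlet names follow the TP3 preview documentation and may change before release; `demo` is a hypothetical container name):

```shell
# Docker CLI: start an interactive container from the TP3 base image.
docker run -it --name demo windowsservercore cmd

# Rough PowerShell equivalent, shown as comments (preview cmdlets):
#   New-Container -Name demo -ContainerImageName WindowsServerCore
#   Start-Container -Name demo
```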
Microsoft is trying its best to attract non-.NET developers, including Node.js, Python, and Ruby programmers, to use Visual Studio. With Visual Studio 2015 Tools for Docker, Microsoft's flagship IDE supports publishing code directly to a running Docker container. Developers can set breakpoints and step through the code deployed in containers. This integration delivers greater productivity to developers targeting containers.
It is important to understand that container support in Windows is primarily about tooling and compatibility, not about cross-platform applications: it is not possible to run a Linux container on Windows Server or vice versa. However, the tools running on either platform can manage containers on both.
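The kernel dependency shows up at image-build time: a Windows container image is built from a Windows base image, so a Linux host cannot run it, and the reverse holds as well. A hypothetical Dockerfile for a TP3 Windows Server Container might look like this (the base image name follows the preview's `windowsservercore` image; the feature installed is only an example):

```dockerfile
# Builds on the Windows Server Core base image from TP3;
# this image can only run on a Windows container host.
FROM windowsservercore
RUN powershell -Command "Install-WindowsFeature Web-Server"
CMD ["cmd"]
```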
Windows Server Containers vs. Hyper-V Containers
Microsoft's container philosophy revolves around Windows. The company is offering two varieties of containers: a lightweight solution called Windows Server Containers, and a virtualization-based implementation called Hyper-V Containers. Both flavors expose the Docker API and can be managed through the Docker CLI and tools.
- Windows Server Containers share the underlying OS kernel; this architecture enables faster startup and efficient packaging, and allows many containers to run on a single host. Because containers also share local data and APIs, the isolation between them is weaker, which makes this flavor less secure than hardware-backed virtualization.
According to Mark Russinovich, these containers are best for homogeneous applications that don't require strong isolation and security constraints. Large microservices applications composed of multiple containers can use Windows Server Containers for performance and efficiency.
- Hyper-V Containers offer the best of both worlds: virtual machines and containers. Since each container gets a dedicated copy of the Windows kernel and its own memory, Hyper-V Containers deliver better isolation and security than Windows Server Containers. The containers are more secure because interaction with the host operating system and other containers is minimal. This limited sharing of resources also increases startup time and the size of packaged containers.
Hyper-V Containers are preferred in multi-tenant environments such as public clouds. This approach is similar to VMware's Project Bonneville, which exposes vSphere through container APIs.
It's important to understand that both implementations — Windows Server Containers and Hyper-V Containers — are compatible with Docker.
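Since both flavors sit behind the same Docker API and use the same images, choosing between them becomes a deployment-time decision. In Docker releases for Windows that followed the preview, this is surfaced through the `--isolation` flag (shown here as a sketch; the flag was not necessarily present in TP3, and `my-windows-app` is a hypothetical image name):

```shell
# Same image, two isolation levels:
docker run --isolation=process my-windows-app   # Windows Server Container
docker run --isolation=hyperv  my-windows-app   # Hyper-V Container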
The bottom line
As we transition into the post-virtualization world, containers and microservices are becoming important. Though they are primarily used by web-scale startups such as Uber and Airbnb, enterprises are closely watching the trend. In the future, workloads will be a hybrid of virtualization and containerization.
Traditional virtualization vendors including VMware, Microsoft, and Red Hat are gearing up to support containers. VMware's Photon and Project Bonneville are signs that the virtualization leader wants to embrace containers. Microsoft has gone a step further with its close partnership with Docker. Windows Server Containers and Hyper-V Containers provide enterprise customers with the best of virtualization and containerization. Red Hat is moving in the same direction with Atomic Host and OpenShift.
The future deployment model looks complicated, with the infrastructure spanning on-premises and public cloud, and the applications packaged as virtual machines and containers. This trend will indeed change the definition of hybrid computing.
Janakiram MSV is the Principal Analyst at Janakiram & Associates and a guest faculty member at the International Institute of Information Technology. He is also a Google Qualified Cloud Developer, an Amazon Certified Solution Architect, an Amazon Certified Developer, an Amazon Certified SysOps Administrator, and a Microsoft Certified Azure Professional. His previous experience includes Microsoft, AWS, Gigaom Research, and Alcatel-Lucent.