This year container-based technology announcements went crazy. This isn’t the craze for packing data centers into shipping containers (that’s so last decade) — this is the Linux OS container craze. What’s a container, and why is everyone so excited?

A brief history of fitting IT supply to demand

Ten years ago, computing capacity requirements were calculated a full business year in advance. Accurately estimating what would happen over the next year was practically impossible, but something had to be done (and still is being done in the enterprise).

Five years ago you could at least figure out the requirements to run a distributed application and rent virtual machines (VMs) and object storage from a cloud provider, on-demand. It was a new way of working. Cloud computing gave us a solution to an impossible problem. It didn’t matter if it wasn’t well understood, or even very good — it was something, where before there was nothing.

The spread of VM rental brought its own problems, such as clunky scaling, a lack of portability, and vendor lock-in. Now container technology is spreading, and promising solutions to those problems.

The limited scaling of a VM

Richard Davies, CEO of cloud provider ElasticHosts, described how the cloud's widespread VM technology, while a vast improvement on bare metal, still wastes resources.

Davies summarized how cloud’s on-demand and scaling benefits are largely provided by “VMs running on top of a hypervisor — in our case, Linux KVMs. Customers can choose what size they want, they can stop and start them on demand, etcetera.”

This approach to scaling extends to the billing. "If you think about the traditional billing model of a cloud server with Amazon, with Google Compute Engine, with ElasticHosts, whoever it might be with, it's on-demand and it's scalable. The sense in which it's on-demand and scalable is that you can start a server of any size, and you pay for the size you started. You can say you want an 8 GB instance and you start an 8 GB instance. Every hour you pay for an 8 GB instance. When you turn it off, you stop paying."

Running a VM at a fixed size leads to wasted resources. "If you think about servers in the real world, if you boot a server with four cores and 8 GB of RAM, sometimes it will be running at 100% utilization — it will be putting 100% CPU through all four cores, and it will actually have all 8 GB of RAM used by software — and sometimes it won't. Sometimes it will be in a period of lower load. It might be idle overnight."

Davies said that whether those resources are used or not, the customer is still charged. “All IaaS providers today still bill you the whole size of that VM. They still bill you for 8 GB even if your software is only using 2. They still bill you for 4 cores even if your server is running at 50% CPU utilization.”
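The billing gap Davies describes is easy to put into numbers. The sketch below compares paying for a provisioned instance size every hour against a hypothetical usage-based model; all rates and utilization figures are made-up assumptions for illustration, not any provider's actual pricing.

```python
# Illustrative comparison of fixed-size VM billing vs. usage-based billing.
# The $/GB-hour rate and the utilization pattern are assumptions, not any
# provider's real numbers.

def fixed_bill(provisioned_gb, rate_per_gb_hour, hours):
    """Traditional IaaS billing: pay for the provisioned size every hour,
    regardless of how much of it the software actually uses."""
    return provisioned_gb * rate_per_gb_hour * hours

def usage_bill(used_gb_by_hour, rate_per_gb_hour):
    """Hypothetical usage-based billing: pay only for the RAM in use
    during each hour."""
    return sum(used_gb_by_hour) * rate_per_gb_hour

# An 8 GB instance that needs its full 8 GB during a six-hour daily peak,
# then idles at 2 GB for the remaining eighteen hours.
usage = [8] * 6 + [2] * 18
rate = 0.01  # assumed price in $/GB-hour

print("Fixed-size bill for the day:", fixed_bill(8, rate, 24))
print("Usage-based bill for the day:", usage_bill(usage, rate))
```

With this assumed load pattern, the usage-based bill is less than half the fixed-size bill, which is the inefficiency containers aim to recover by packing variable workloads onto shared kernels.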

An LXC container

LXC, Davies said, is an alternative to VMs. “LXC is another mechanism by which you can take a large physical machine and split it into a number of sub-servers. It’s not a virtualization approach — it’s not an approach where you simulate a set of hardware environments and run an entire OS in each of those environments.”

“It’s a containerization approach. What that means is a single Linux kernel that’s running on the hardware divides itself into a number of isolated containers. Each sub-server runs in each of those containers.”

One of the main differences between VMs and containers is hidden behind the scenes. “The sub-server doesn’t have its own Linux kernel — it has a part of the main Linux kernel. But it does have its own software, its own filing system, its own users, and everything else — it just doesn’t have a kernel of its own.”

Customers may not notice the difference between VMs and containers. “What a user gets looks extremely similar. They still have an IP, they still have root access, they can log in. When they log in, they see their SSH server running, they see their database server running, their web server running, and so forth.”

Surprisingly, container technology is not new. Containers like Solaris Zones, LXC (Linux Containers), and FreeBSD jails have been around for most of a decade. Some PaaS vendors have been using containers in their products for years, seeing them as more efficient than full virtualization. ActiveState Stackato uses LXC, Parallels Cloud Server uses OpenVZ, and Joyent’s SmartOS uses Solaris Zones.

Are containers better than VMs?

The realization that application hosting could be much easier using containers is now spreading. Stripping out some OS components does mean less complication and more flexibility, but it isn’t a replacement for the VM world — it’s a different approach to resource sharing.

This isn’t a containers vs. VMs fight, where only one can win. This is another option on the customer’s menu.

Disclaimer: TechRepublic and ZDNet are CBS Interactive properties.