While it remains to be seen just how and when enterprises will deploy Docker containers, it’s no longer a question of if. Yet there are still many open questions as to how to deploy them at scale.

Or were.

This week, Docker announced Swarm, which provides native clustering for Dockerized distributed apps. Why does this matter? Well, it “turns a pool of Docker hosts into a single, virtual host,” making them easier to manage at scale. There’s scale, however, and then there’s scale. For serious Docker clusters, the company recommends Apache Mesos to orchestrate Docker mega-clusters.

I sat down with Mesosphere senior vice president Matt Trifiro (@mtrifiro) to get the low-down on why Docker+Mesos is the latest in a long line of perfect partners, from peanut butter and jelly to milk and cookies.

TechRepublic: Containers are hot and, as Docker’s announcement of Swarm suggests, containers are even hotter when scaled with Apache Mesos. Why do you think containers are hot right now, and what is the connection between Docker and Apache Mesos?

Trifiro: Developers are super interested in Docker. But why now? Linux containers have been around since 2006, when Google introduced them into the Linux kernel. Sun Microsystems actually created the original idea of containers nearly a decade ago!

Docker’s secret sauce is that it simplified the creation of containers. Docker containers are now the emerging paradigm for packaging and deploying services.

And Docker is a smart company. We’ve been working closely with them for a long time on ways our engineers can partner to optimize the orchestration and scheduling of Docker containers at scale with Apache Mesos and the Mesosphere Datacenter Operating System (DCOS).

Because, as easy as it sounds to push a container into production, there’s actually a lot to it.

Yes, Docker makes it super easy for developers to package their app. But you also have to nail the operations side, particularly when you want to take an app to significant scale. It needs, in other words, to be just as easy to push an app into the cloud — be it AWS or whatever — and have it just do what it’s supposed to do, which is run as many times as it needs to and never go down and never page you.

That’s not easy.

And it’s why Docker and Mesos go so well together. Docker understands the very deep challenges around availability, scale, and performance that Mesos and Mesosphere solve, as well as the business demand from customers for our technology. On the operations side, enterprises want flexibility of choice in how they manage and scale containers in production.

So, we applaud Docker’s decision to provide an open system with a pluggable back end, rather than prescribing a single approach. And we believe that Mesos and the DCOS offer the most practical way for enterprises to operate containers at scale, so we are excited about our integration with Docker Swarm to support those Docker users.

TechRepublic: In what kinds of use cases would you use the Mesosphere Swarm integration for Docker orchestration instead of just going with Docker’s generic Swarm feature?

Trifiro: I can think of two clear use cases where Mesosphere is probably a better fit for the workload requirements than the generic Docker Swarm offering.

The first would be hyper-scale use cases. Any company looking to run containers at large scale in a highly automated environment across hundreds or thousands of servers — either on-premises or in the cloud — should look at using Swarm with our technology.

Mesosphere’s technology is the only publicly available container orchestration system proven at scale, running millions of containers at companies like Twitter, Groupon, and Netflix, as well as at some of the largest consumer electronics and financial services companies.

The other use case is what I’d call multi-tenant diversity of workloads.

Mesosphere’s technology is the only way for an organization to run a Docker Swarm workload in a highly elastic way on the same cluster as other types of workloads. For example, you can run Cassandra, Kafka, Storm, and Hadoop alongside Docker Swarm workloads on a single Mesosphere cluster. All of these workloads can share the same resources, elastically.

This makes much more efficient use of cluster resources and greatly reduces operational cost and complexity.
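To make the multi-tenant point concrete, here is a minimal sketch of how a long-running Docker workload is typically described to a Mesos cluster via Marathon, Mesosphere’s scheduler for long-running services. The app ID, image name, resource numbers, and ports are illustrative assumptions, not values from the interview:

```python
import json

# A sketch of a Marathon-style app definition for running a Docker
# container on a Mesos cluster. All concrete values below (id, image,
# cpus, mem, instances, ports) are hypothetical, for illustration only.
app_definition = {
    "id": "/web/frontend",          # hypothetical application path
    "cpus": 0.5,                    # CPU share requested per instance
    "mem": 256,                     # memory (MB) per instance
    "instances": 3,                 # Marathon keeps this many copies running
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "nginx:1.7",   # any Docker image works here
            "network": "BRIDGE",
            "portMappings": [
                # hostPort 0 asks Mesos to pick a free port, so many
                # instances can share one machine without conflicts
                {"containerPort": 80, "hostPort": 0},
            ],
        },
    },
}

# In practice, this JSON would be submitted to Marathon's REST API
# (e.g. POST http://<marathon-host>:8080/v2/apps); here we just render it.
print(json.dumps(app_definition, indent=2))
```

Because Mesos brokers the underlying CPU and memory, a definition like this can run side by side with Cassandra, Kafka, or Hadoop frameworks on the same machines, which is where the resource-sharing efficiency comes from.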

TechRepublic: Let’s go back to your suggestion that orchestrating containers isn’t simple. I thought one of the core tenets of Docker was simplicity. Can you provide more detail?

Trifiro: Pushing a container into production sounds like a simple idea, and it should be easy.

But “into production” means a lot of things.

How do I run it at scale? Where do I push it? Do I have to push it to every machine and worry about which machines it’s running on? What happens when a machine goes down or an entire top-of-rack switch goes belly up? How do you solve for all of those failure cases? How do you automate healing so nobody has to SSH into individual boxes after being paged in the middle of the night? And do I have to configure each machine?

These are easy questions to ask, but much harder to actually solve.

Mesosphere solves these problems automatically for developers using Docker containers. Developers want it to be just as easy to throw Docker containers into the cloud — whether that’s Amazon, your own hardware, a private cloud, or a public cloud — and have it just do what it’s supposed to do. And when you have a new version, it should let you roll that version out gracefully.
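The automated healing and graceful rollout described above are typically expressed declaratively in a Marathon-style app definition. A hedged sketch, with hypothetical endpoint and threshold values:

```python
import json

# Sketch of the health-check and upgrade settings a Marathon-style
# scheduler uses to automate healing and roll out new versions
# gracefully. All concrete values are illustrative assumptions.
app_update = {
    "id": "/web/frontend",          # hypothetical application path
    "instances": 3,
    "healthChecks": [
        {
            # The scheduler polls this endpoint; instances that fail
            # repeatedly are killed and rescheduled automatically, so
            # nobody has to SSH into a box in the middle of the night.
            "protocol": "HTTP",
            "path": "/health",
            "intervalSeconds": 10,
            "maxConsecutiveFailures": 3,
        }
    ],
    "upgradeStrategy": {
        # During a deployment, keep at least half of the instances
        # healthy while new-version instances come up alongside them.
        "minimumHealthCapacity": 0.5
    },
}

print(json.dumps(app_update, indent=2))
```

The point of the sketch is the design choice: the developer states the desired end state (how many instances, what "healthy" means, how much capacity to preserve mid-deploy), and the scheduler handles placement, restarts, and the rollout sequence.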

That’s what developers care about — they care about pushing code into production and just having it run without having to wear a pager or worry about burdening the operations teams. Mesosphere’s stack can run on any cloud or private infrastructure, from Amazon to Microsoft to OpenStack to VMware to bare metal. We want to ensure Docker Swarm application portability across any infrastructure on our platform.