VM-native or container-native? Your future depends on it

Despite the meteoric rise of containerization, virtual machines are still the very definition of cloud infrastructure for many people. Hardware VMs are one solution to the problem of securely sharing large servers among multiple customers, and that sharing is what makes possible the elasticity we've come to expect of cloud infrastructure. Being able to automatically provision an "instance" with 2GB or 256GB of RAM, or any size in between, and have it serving workloads in almost no time is a hallmark of the cloud. Indeed, we must credit VMs with making the cloud as we've known it possible.

The thing is, VMs are just one means of achieving that elasticity, and as we increasingly Dockerize our applications for speed and agility, it's becoming clear that VMs are an evolutionary dead end. Like dinosaurs, VHS, and even HD DVDs, they sit at the end of an evolutionary fork with no path forward.

Containers, including Docker, depend on a different approach to virtualization. Instead of virtualizing the hardware (CPUs, floppy drives, network adapters, and the like) and running whole guest operating systems on top of it, containerization isolates users and processes from one another within a shared operating system. Joyent CTO Bryan Cantrill's history of virtualization is more authoritative and entertaining than anything I can offer, and his work on both Solaris Zones and KVM gives him a unique perspective on containerization and hardware virtualization alike.

Docker is a type of containerization that's driving new thinking about application architecture and workflows. What Docker introduced on top of containerization is a new interface to the container. By treating the container as an application, rather than a whole Unix machine, Docker invented an approach called "application containers" that is lowering barriers to the kind of truly modern apps and automated ops we dream of building.

Unfortunately for many people, the most common approach to Docker deployments is to run containers inside hardware virtual machines. Layering these virtualization technologies is possible, but the added layers sap performance, increase complexity, and add cost.

Consider for a moment what happens when we tie the lifecycle of the container to the lifecycle of the VM. VMs have to be provisioned before containers can be, and scaling down means consolidating the remaining containers onto fewer VMs before the surplus VMs can be deprovisioned.

This may seem like a small problem, but it can turn into a huge additional cost. I learned how real it can be from an engineering director[1] who shared the story of his company's recent Dockerization with me. They undertook the effort with great excitement about both the development and operational benefits. It was, after all, a great opportunity to clean up cruft that had accumulated over years of operations. What they didn't anticipate, however, was the increase in costs: their cloud infrastructure bill jumped from $18,000 to $32,000 per month as a result.

The root cause was the difficulty of scaling their app when the containers run inside VMs. Scaling up was easy: provision more VMs, then provision containers into them. Scaling down was harder: stateless containers could be shut down quickly, but stateful containers had long lifecycles and couldn't easily be shut down or reprovisioned elsewhere. Those long-running containers prevented the team from shutting down VM capacity as demand fell, forcing them to pay for capacity that sat largely unused.
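To make that pinning problem concrete, here is a minimal sketch, with hypothetical container names and placement rather than the company's actual scheduler: a VM can only be deprovisioned once every container on it is gone, so a single long-running, stateful container keeps an entire VM, and its bill, alive.

```python
# Minimal sketch of the scale-down problem (hypothetical names and placement,
# not any real scheduler). A VM can be deprovisioned only when every container
# on it has drained, so one stateful container pins an entire VM of capacity.

vms = {
    "vm-1": [{"name": "web-1", "stateful": False}, {"name": "web-2", "stateful": False}],
    "vm-2": [{"name": "db-1", "stateful": True}, {"name": "web-3", "stateful": False}],
    "vm-3": [{"name": "queue-1", "stateful": True}],
}

def reclaimable(vms):
    """VMs that can be shut down once their stateless containers are drained."""
    return [vm for vm, containers in vms.items()
            if not any(c["stateful"] for c in containers)]

print(reclaimable(vms))  # ['vm-1']: vm-2 and vm-3 each stay up for a single container
```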

Unfortunately, it wasn't just infrastructure costs that increased. The added complexity of managing containers in VMs left the team even more resource-constrained and needing additional headcount. Revenue-generating projects are suffering now as the team scrambles to figure out what to do next.

I've also heard stories from performance-hungry users who've already given up on virtual machines and are running Docker on bare metal in private data centers or on leased hardware. For many of them this means a stripped-down Linux distribution, homemade deployment and update automation (aspirationally, anyway; more often than not it's a "planned" feature), and a cluster of hardware to manage. While this approach yields better application performance, it comes at a huge operational cost. Who builds the deployment and update automation, for example? Worse, these users are finding scaling even harder than cloud VM users do, since adding or removing bare metal takes far longer than adding or removing VMs.

So, if running containers on dedicated VMs or physical servers isn't the answer, what is? The better solution can be found in the structure of the cloud itself. If the automation tools that let cloud operators provision virtual machines could be redirected toward provisioning containers, if the network virtualization we've depended on to bring network interfaces to each virtual machine could be applied to containers, if we could enjoy the security of hardware virtual machines around each of our containers...would we ever hobble ourselves or our apps by running them in VMs again? Would we ever choose to add performance- and elasticity-sapping layers of management complexity?

The best solution is a container-native cloud that runs containers on bare metal, with no VMs. Importantly, the container lifecycle would be independent of the container host, so as your apps scale, you'd never have to pay for larger resources just because you don't want to kill off long-running containers. In a container-native cloud, you'd provision and pay for containers, they would last until you deleted them, and you'd never have to worry about the complexities of managing VM or server hosts.
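As a rough illustration of what that looks like in practice, here is a sketch using the Docker SDK for Python against a remote, Docker-compatible endpoint. The endpoint URL, image, and names are placeholders of my own, not Triton's actual interface; the point is that the container itself is the unit you provision, size, and pay for.

```python
# Illustrative sketch of container-native provisioning with the Docker SDK for
# Python. The endpoint URL and image name are placeholders, not a real service.
import docker

client = docker.DockerClient(base_url="tcp://cloud.example.com:2376", tls=True)

# The container is the unit you size and pay for; there is no VM to provision
# first or to consolidate containers onto later.
web = client.containers.run(
    "example/web-app:latest",
    detach=True,
    mem_limit="2g",
    name="web-1",
)
print(web.id)

# Scaling down is just removing the container.
web.stop()
web.remove()
```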

This is not idle speculation or science fiction: it's Joyent Triton, the open source, container-native data center automation solution for public and private clouds.


[1] I've been allowed to share this story only on the condition that I don't reveal its origin.



Post written by Casey Bisson