I have some time to while away before I get on a plane to head back home to my amazing wife and often-amazing zoo of animals. Am I at the Tokyo Zoo, or in some ancient temple looking for a speed date with spiritual enlightenment? Of course not. I came to the airport early to find good wifi and work on an OpenStack deployment lab I’m running in a few weeks with some co-workers. Good grief, I’m a geek. But anyway.
But before I get into my lab install I wanted to talk a little bit about something I saw way too much of at OpenStack Summit. For some reason, people have decided that they are going to try to make money by positioning Linux containers as ‘the next Virtualization’. Canonical/Ubuntu is the worst example of this, but they are certainly not the only one. To repeat a line I often use when I’m talking about the nuts and bolts of how containers work:
If you are trying to replace your virtualization solution with a container solution, you are almost certainly doing both of them wrong.
First off, at the end of the day it’s about money. And the biggest sinkhole inside a datacenter is not fully utilizing your hardware.
Think about how datacenter density has evolved:
- The good ole days – a new project meant we racked a new server so we could segregate it from other resources (read: use port 80 again). If the new project only needed 20% of the box we racked for it, we just warmed the datacenter with the other 80%.
- The Virtualization Wars – Instead of a new pizza box or blade, we spun up a new VM. This gives us finer-grained control of our resources. We are filling in those resource utilization gaps with smaller units. So that same 20% could be set up multiple times on the same pizza box, getting us closer to 100% resource consumption. But even then, admins tended to err on the side of wasted heat, and we were only using a fraction of the VM’s allocated resources.
- The Golden Age of Containers – Now we can confidently take a VM and run multiple apps on it (zomg! multiple port 80s!) So we can take that VM and utilize much more of it much more of the time without the fear that we’ll topple something over and crash a server or a service.
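The “multiple port 80s” problem above is easy to demonstrate. On a single host (or a single VM), only one process can own a given port, which is exactly why the old model meant a new server per project; per-container network namespaces give each app its own port space. A minimal sketch of the conflict, using only the Python standard library:

```python
import socket

# Two plain processes on one host cannot both bind the same TCP port:
# the second bind() fails with EADDRINUSE. Containers dodge this by
# giving each container its own network namespace and port space.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))          # let the OS pick a free port (stand-in for port 80)
port = a.getsockname()[1]
a.listen()

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))   # same port, same namespace -> OSError
except OSError as e:
    print("second bind failed:", e.errno)
finally:
    b.close()
    a.close()
```

Run two nginx containers side by side and each happily binds “its” port 80, because each sees a different port 80.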
This is where someone always shoves their hand in the air and says
<stink_eye>But I need better performance than a VM can give me so I’m running MY CONTAINERS on BAREMETAL.</stink_eye>
My response is always the same. “Awesome. Those use cases DO exist. But what performance do you need?”.
Here’s the short version.
A properly tuned KVM virtual machine can get you within 3-4% of baremetal speed.
Leaving out the VM layer of your datacenter means that once you consume the extra 3-4% of your baremetal system that KVM was using, you have to go rack another system to get past it. You lose a lot of the elastic scalability that virtualization gives you. You also lose a common interface for your systems, one that allows you to run relatively homogeneous solutions across multiple providers like your own datacenter and AWS.
Containers bring some of that flexibility back, but they only account for the Dev side of the DevOps paradigm. What happens when you need to harden an AWS system and all you care about are the containers?
Secondly, hypervisors and Linux containers are FUNDAMENTALLY DIFFERENT TECHNOLOGIES.
A hypervisor virtualizes hardware (with QEMU in the case of KVM), and runs a completely independent kernel on that virtual hardware.
A container isolates a process with SELinux, kernel namespaces, and kernel control groups (cgroups). There are still portions of the kernel that are shared among all the containers. And there is ONLY ONE KERNEL.
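You can see both halves of that sentence directly on any Linux box: a process’s namespace memberships are listed under /proc, and the shared kernel shows through in the release string every container reports. A minimal, Linux-only sketch using the standard library:

```python
import os

# Each Linux process belongs to a set of kernel namespaces; a container
# is just a process tree given private copies of some of them.
# /proc/self/ns lists this process's namespaces as symlinks.
print(sorted(os.listdir("/proc/self/ns")))  # e.g. ['cgroup', 'mnt', 'net', 'pid', 'uts', ...]

# The kernel itself is never duplicated: every container on a host
# reports the same kernel release, because there is only one kernel.
print(os.uname().release)
```

Contrast that with a KVM guest, where `uname -r` can report a completely different kernel than the host, because the guest really is running its own.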
All of that to say they are not interchangeable parts. They may feel that way to the end user. But they don’t feel that way to the end user’s application.
So take the time to look at what your application needs to do. But also take some time to figure out how it needs to do it. All of the use cases are valid under the right circumstances.
- Straight virtualization
- Straight baremetal
- containers on VMs
- containers on baremetal