Virtualization vs. Containers – a fight you should never have

I have some time to while away before I get on to a plane to head back home to my amazing wife and often-amazing zoo of animals. Am I at the Tokyo Zoo, or in some ancient temple looking for a speed date with spiritual enlightenment? Of course not. I came to the airport early to find good wifi and work on an OpenStack deployment lab I’m running in a few weeks with some co-workers. Good grief, I’m a geek. But anyway.

Auburn the beagle in her natural habitat

But before I get into my lab install I wanted to talk a little bit about something I saw way too much at OpenStack Summit. For some reason, people have decided that they are going to try to make money by making Linux Containers ‘the next Virtualization’. Canonical/Ubuntu is probably the worst offender here, but they are certainly not the only one. To repeat a line I often use when I’m talking about the nuts and bolts of how containers work:

If you are trying to replace your virtualization solution with a container solution, you are almost certainly doing both of them wrong.

First off, at the end of the day it’s about money. And the biggest sinkhole inside a datacenter is not fully utilizing your hardware.

Think about how datacenter density has evolved:

  1. The good ole days – a new project meant we racked a new server so we could segregate it from other resources (read: use port 80 again). If the new project only needed 20% of the box we racked for it, we just warmed the datacenter with the other 80%.
  2. The Virtualization Wars – Instead of a new pizza box or blade, we spun up a new VM. This gives us finer-grained control of our resources. We are filling in those resource utilization gaps with smaller units. So that same 20% could be set up multiple times on the same pizza box, getting us closer to 100% resource consumption. But even then, admins tended to err on the side of wasted heat, and we were only using a fraction of the VM’s allocated resources.
  3. The Golden Age of Containers – Now we can confidently take a VM and run multiple apps on it (zomg! multiple port 80s!) So we can take that VM and utilize much more of it much more of the time without the fear that we’ll topple something over and crash a server or a service.
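The “multiple port 80s” joke in step 3 is really the whole story. On one shared network stack, only one process can own a given port; containers solve this by giving each app its own network namespace, and so its own private port space. A minimal sketch of the underlying collision (using an ephemeral port rather than 80, so it runs unprivileged):

```python
import errno
import socket

# One shared network stack: only one process can own addr:port at a time.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = s1.getsockname()[1]

# A second app trying to grab the same port -- the pre-container problem.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    collided = False
except OSError as e:
    collided = (e.errno == errno.EADDRINUSE)

print("second bind rejected:", collided)   # True on a shared stack
s1.close()
s2.close()
```

Put each of those two apps in its own container (its own network namespace) and both can happily bind :80, with the container runtime mapping them out to distinct host ports.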

This is where someone always shoves their hand in the air and says

<stink_eye>But I need better performance than a VM can give me so I’m running MY CONTAINERS on BAREMETAL.</stink_eye>

My response is always the same. “Awesome. Those use cases DO exist. But what performance do you need?”

Here’s the short version.

A properly tuned KVM virtual machine can get you within 3-4% of bare-metal speed.

Leaving out the VM layer of your datacenter means that once you consume that extra 3-4% of your baremetal system that KVM was consuming, you have to go rack another system to get past it. You lose a lot of the elastic scalability that virtualization gives you. You also lose a common interface for your systems that allows you to have relatively homogeneous solutions across multiple providers like your own datacenter and AWS.

Containers bring some of that flexibility back, but they only account for the Dev side of the DevOps paradigm. What happens when you need to harden an AWS system and all you care about are the containers?

Secondly, hypervisors and Linux containers are FUNDAMENTALLY DIFFERENT TECHNOLOGIES.

A hypervisor virtualizes hardware (with QEMU in the case of KVM), and runs a completely independent kernel on that virtual hardware.

A container isolates a process with SELinux, kernel namespaces, and kernel control groups. There are still portions of the kernel that are shared among all the containers. And there is ONLY ONE KERNEL.
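That “ONLY ONE KERNEL” point is easy to see for yourself. Containers are isolated processes, so any process on the box (containerized or not) reports the host’s kernel release; a KVM guest, which boots its own kernel on virtualized hardware, can report something entirely different. A trivial illustration:

```python
import os

# SELinux labels, namespaces, and cgroups fence a container off from its
# neighbors, but every container still runs on the host's kernel. So this
# call returns the same release string inside any container on the box.
release = os.uname().release
print("shared kernel release:", release)

# Run the same line inside a KVM guest and you can get a completely
# different version -- the guest booted its own, independent kernel.
```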

All of that to say they are not interchangeable parts. They may feel that way to the end user. But they don’t feel that way to the end user’s application.

So take the time to look at what your application needs to do. But also take some time to figure out how it needs to do it. All of the use cases are valid under the right circumstances.

  • Straight virtualization
  • Straight baremetal
  • Containers on VMs
  • Containers on baremetal

A well-thought-out IT infrastructure is likely to have a combination of all of these things. Tools like OpenStack, Project Atomic, and CloudForms make that all much, much easier to do these days.

Open Source Networking — Are OTS hardware and Virtualized Appliances the future?

Working for a small company, I sometimes have to put on my networking hat (which I haven’t worn consistently in a LOOOOONG time). We are blessed enough to have an AMAZING networking consultant, but if something goes awry while he’s working his day job, the troubleshooting and band-aids are primarily my responsibility. I also typically have a voice when new infrastructure is designed and hardware is purchased. These things, and my love of all things geek, have me keeping at least one eye on the FOSS networking world.

Tomorrow evening RVaLUG (http://www.rvalug.org), my hometown Linux User Group, has the pleasure of having Scott Clark, Senior Director at Vyatta (http://www.vyatta.com and http://www.vyatta.org), in town to talk about his company and how their products work. In the hope of having some intelligent questions to ask, I’ve been taking a look at Vyatta as well as pondering on some of the broader concepts they use.

The term “virtualized networking” confuses me a little bit. It seems that there are 2 concepts in indirect competition. One thought is that you’re replacing datacenter switches and routers with virtual appliances. A second, related but different thought is that you’re securing an already virtualized network via standard networking principles in virtual appliances. #1 freaks out your local network admins. #2 means adding layers of encryption and additional networking stacks on a virtual network like Amazon’s AWS. Both present interesting challenges and different (?) solutions.

I’ve always assumed specialized networking equipment was, well…. specialized. I assumed those little blue boxes with all the flashing lights had specialized chipsets and algorithms running inside special chips that helped them move packets faster. Is that still the case? Or has a standard OTS (Off The Shelf) server today simply surpassed the needs of 10Gbps? This article (from Vyatta’s CEO </full disclosure>) talks about 20 Gbps line speeds using a single-core Nehalem box. 10Gbps routing for sub-$5k? Why yes, please. IF it’s as reliable as the $100k solutions out there now. Sadly, like email and telephones, networks have to “just work”.

To back this up even further, from http://www.vyatta.com/solutions:

The performance of off-the-shelf x86 processors has increased over 100x in the past 4 years resulting in readily available systems capable of performing 10Gbps routing and security.
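Some back-of-envelope arithmetic puts that claim in perspective. The brutal case for a software router isn’t throughput of big packets, it’s packet rate with minimum-size frames: 64 bytes on the wire plus 20 bytes of preamble and inter-frame gap. A quick sketch (the 3 GHz clock is an assumption, roughly a Nehalem-era core):

```python
# Worst-case packet rate at 10 Gbps: minimum Ethernet frames.
# 64-byte frame + 20 bytes preamble/inter-frame gap = 84 bytes per packet.
line_rate_bps = 10e9
wire_bytes_per_min_frame = 64 + 20

pps = line_rate_bps / (wire_bytes_per_min_frame * 8)
print(f"worst-case packets/sec at 10 Gbps: {pps:,.0f}")   # ~14.9 million

# A ~3 GHz core then gets on the order of 200 cycles per packet for
# lookup, rewrite, and forwarding -- which is why "10Gbps routing" on
# commodity x86 usually assumes realistic packet mixes, batching, or
# multiple cores rather than minimum-size frames at true line rate.
cycles_per_packet = 3e9 / pps
print(f"cycles per packet on one 3 GHz core: {cycles_per_packet:,.0f}")
```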

(Vyatta-specific) The “open core” model? Really? The open source version of VyattaOS is, at best, limited. They want you to use it in your development and test environments, but in production you should move to their “Subscription Edition” (their words, not mine). I’m pretty happily on record as not being a huge fan of the “open core” business model, and I just don’t understand it here, either. One of the questions I’ll ask (and follow up on, of course) is how degrading their FOSS offering can be a viable business model. Has anyone done it successfully yet?

Needless to say I’m looking forward to a great talk and discussion tomorrow.  Thanks again to Vyatta and Scott Clark for taking the time out to come to Richmond.