OpenStack Summit Day 2 – Wrapping my head around NFV

Day 3 isn’t technically over yet. But I’m exhausted. And jet lag is hard. So I’m sitting in one of the conference hotels in a very low chair with my laptop in my lap. Don’t judge.

One of the biggest ideas at OpenStack Summit this year is NFV (Network Function Virtualization). A straw poll of the talk descriptions reveals approximately 213245 talks on the topic this week in Tokyo.

A thousand years ago I worked for a crappy phone company in Richmond and I got to know how a telephone company works. I got to spend some time in telephone central offices and helped solve carrier-level problems. With NFV, I understood why people wanted to get into that world (there’s a TON of money sitting in those dusty places). But I didn’t quite understand the technical plan. What is going to be virtualized? Where is the ceiling for it? It just didn’t make engineering sense to me.

To help combat that ignorance I’ve gone to 4 or 5 of those 213245 sessions today. I’ve also asked pretty much every booth in the Marketplace ‘how are we doing this stuff?’. At the HP Helion booth, I got my engineering answer. My disconnect was that the logistics of defining a big pipe (OC-48 or something like that) in software would just be an exercise in futility. Going all the way up the software stack with that many packets would require a horizontal scale that wasn’t cost-effective.

Of course that’s not the goal. There IS a project named CORD (Central Office Re-architected as a Datacenter) that is intriguing. But it’s also very new, and mostly theory at this point.

But if we can take some of the equipment that is currently out in remote sites (central offices, cell towers) and virtualize it, we can move it into the datacenter instead of having it out in the wild. That makes maintenance and fault tolerance a lot cheaper. It also saves man-hours, since you don’t have to get in a truck and drive to the middle of nowhere to work on something.

Another added benefit is that it would disrupt the de facto monopoly that currently exists with the companies that provide that specialized equipment. Competition is a good thing.

That’s the gist. And it’s a good one. We can take commodity hardware and use it to virtualize specialized equipment that normally lives in remote locations. And we can virtualize it in a datacenter that’s easier and cheaper to get to.

OpenStack Summit Day 1 – The Big Tent is BIG, and Tokyo Lessons Learned

My morning started off with a few lessons learned about being in Tokyo, where I speak 0 words of the language.

  • Have cash, Japanese cash, if you plan on getting on Tokyo public transit. After learning this lesson I spent 45 minutes looking for a 7-11 (they use Citi ATMs, which are apparently easier for us gringoes) before getting in a cab to get me to the Summit on time. We passed 4 7-11’s in the first 1/2 mile of my trip. Of course.

    This guy laughed at me multiple times
  • It is a serious walking city. the walking. omg the walking. and then the walking.

But on to the OpenStack Summit stuff, of which there is a lot.

After getting registered, the day opened with the required keynote addresses. They are all on the schedule, so I won’t go into the who and what, but a few observations.

  1. The production quality is incredibly high. Like giant TV cameras on platforms high. Like 5 big monitors so those of us in the back can see too, high.

    big crowd filing in before the keynotes started
  2. The speakers were, on the whole, a little unpolished. They usually had good things to say, but could have used a few more dry runs for a crowd this big.
  3. ZOMG the crowd. Well over 5000 people from 56 countries. The big tent really is big these days. It is awesome, in a word. It is also the most inclusive conference I’ve ever attended. That is also very awesome.
  4. Double ZOMG THE HEAT. The conference is stretched out over 3 (4?) hotels plus a conference center. All of the thermostats seem to be set on ~81 Fahrenheit (Celsius?). Take that and toss in an overcrowded room full of sweaty geeks and things can get a little uncomfortable. Especially in the middle of the aisles. Especially especially after lunch.
  5. The Marketplace (vendor tables) is utter chaos. With that said, Mirantis easily wins this year.
  6. There is now an OpenStack Certification. Or there will be soon, at least. You can be a Certified OpenStack Administrator (COA). I don’t know how this is going to play with the existing Red Hat Certification, but I’m interested in finding out.
  7. OpenStack has a new way of visualizing its constituent parts. It is way, WAY better than the old wiki-style nastiness.
  8. Bitnami COO Erica Brescia took some pretty awesome shots at Docker Hub and its lack of curation. It’s the wild west out there, and it comes with consequences. I’m not a huge fan of Bitnami. But I am a huge fan of how Erica Brescia does her job.

My least favorite observation on the day was Canonical’s slogan for LXD. They had an ad on the splashes before the keynotes started, and it was something along the lines of “Ubuntu/Canonical has the fastest hypervisor on the planet with lxd $something $something $something”.

Hey Canonical, you are aware that containers and virtual machines are different things, right? So are you trying to re-define the word, or are you trying to pass off a container manager as a hypervisor? Huh? At any rate, it’s an awful slogan and even worse marketecture. I’m debating a drive-by of their booth tomorrow.

After lunch I went to a talk held by Mirantis where they compared a base install of their offering to a GA(ish?) release of RHEL OSP 7. They were more fair and balanced than I thought they would be. Their product, Fuel, is 3 or 4 years old at this point and very polished. OSP 7 uses OSP Director, which is based on TripleO. OSP 7 is Red Hat’s first release based on this installer. It suffers from exactly the warts you think it would.

With that said, I was surprised they had to pick some pretty small nits to make their presentation work. A lot of their documentation issues were already addressed. But they correctly identified the biggest areas of need for OSPd as Red Hat works to mature it in OSP 8 and beyond.

All in all Day 1 was great fun. I’m looking way forward to Day 2. On top of that I’m PRETTY SURE I can get to and return from the conference using Tokyo Public Transport.

OpenStack Summit Day 0 – a mode B train ride

This year I get to go to OpenStack Summit in Tokyo. It will be my first time visiting Japan. Right now I am in a very small hotel room at 3am (local time), wide awake because I went to bed at 7pm. Such is jet lag, I guess. My personal goal for this event is to create a short post daily with some initial thoughts / reactions / fun things I learned.

Day 0 was just getting here. I got on a plane in Richmond, VA just before 9am Eastern on Sunday morning. I got off a plane an hour outside of Tokyo at 3:40pm Monday. I was in the air for ~16 hours total. Time zones and datelines will forever befuddle me. I took the Narita Express train from the airport to downtown Tokyo.

The practical thing I learned is that GPS on your phone SUCKS in downtown Tokyo. I was going to walk the ~1 mile from Tokyo Station to my hotel. Google Maps took me in 4 different directions while thinking I was on 3 different roads before I gave up and got into a cab. I’m pretty sure I would still be walking if not for that nice man.

The other thing that jumped out at me was during the train ride in. Space is so much more utilized in Japan. At first I thought it was just sort of stacked up and haphazard. But as I rode along I began to see the organization and beauty in how the space in the Tokyo area is utilized. It’s pretty amazing.

It made me start thinking about my own house and the 5 acres of trees that it sits on. Not in a better/worse sort of way. Obviously I have different goals than someone who lives near downtown Tokyo. But when I give a talk about containers I talk a lot about them being the ‘next layer of density’ in computing. Bimodal IT is one of the biggest concepts in that area.

Over the next few days, I will definitely be a mode A guy walking around in a mode B country. Wish me luck!


Multi-Node OpenStack on your laptop in about an hour

OpenStack is just about the hottest thing going in IT today.  Started in 2010 as a joint project between Rackspace and NASA, it is quickly maturing into the premier solution for anyone who wants to make their infrastructure more self-service without having to go behind and clean up after developers all day every day.

Its biggest hurdle is still its learning curve and the associated pain you are almost guaranteed to suffer during your first install. After approximately 243535 tries, I have a pretty solid process to stand up an OpenStack demo setup across multiple nodes on your laptop. It could also be altered (if any alterations are even needed) to deploy across multiple physical or virtual systems in any environment.

Depending on your bandwidth to patch the servers, it takes about an hour, soup to nuts.

Which Flavor?

RHEL OSP 7 comes with an awesome tool called OSP Director, which is based on TripleO. It is essentially a canned OpenStack install that you then use to deploy production OpenStack nodes. It’s called an ‘undercloud’.

For two reasons, I’m not using OSP Director for this demo.

  1. It takes more time and more resources. If I were doing this in my environment and I was an everyday engineer, I’d totally use it. But this is an exercise in tire-kicking.
  2. I haven’t had time yet to play with it very much.

Instead I’m using RDO’s Quickstart tool, which is based on Packstack.

OpenStack in 60 seconds

The goal when OpenStack was started was to engineer a FOSS alternative to Amazon Web Services. What they came up with was an ever-growing list of services that each perform a task required (or optionally neat) to build out a virtualized infrastructure.

The services are all federated together with RESTful APIs. Python is the language of choice.

Core Services

  • Nova – compute services. The core brains of the operation and the initial product
  • Neutron – Software-defined Networking service. Nova also has some less flexible networking components built in.
  • Cinder – provides block devices to virtual machines (instances in OpenStack parlance)
  • Glance – manages images used to create instances
  • Swift – provides object/blob storage
  • Keystone – Identity and authentication services for all other services as well as users
  • Horizon – a Django-based web frontend that’s customizable and extensible
  • Heat – Orchestration services
  • Ceilometer – Telemetry

Optional Fun Services

  • Trove – Database-as-a-service
  • Ironic – Bare metal provisioning – treat your racked stuff like your virtual stuff
  • Sahara – Elastic Map Reduce, aka Big-Data-as-a-Service (?!?!)
  • $insert_awesome_project_here

All OpenStack modules that are currently canon are listed on their roadmap.


Cluster Setup

The demo setup I’ll be going through was set up on my laptop (Fedora 21) using good ol’ KVM virtual machines running RHEL 7. The laptop has 8 cores and 16GB of RAM total.

  • rdo0.localdomain ( – 4GB RAM, 2 VCPU
    • Controller Node (all services except Nova Compute)
  • rdo1.localdomain ( – 2GB RAM, 2 VCPU
    • Nova Compute Node
  • rdo2.localdomain ( – 2GB RAM, 2 VCPU
    • Nova Compute Node

Host OS Setup

NOTE – since these are VMs, the single NIC I assigned them was designated eth0. We all know the naming convention has changed in RHEL 7.

subscription-manager register --username=$RHN_USERNAME --password=$RHN_PASSWORD
subscription-manager attach --pool=$SUBSCRIPTION_MANAGER_POOL_ID
subscription-manager repos --disable=\* --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms
yum install -y   #the RDO repository release RPM
yum install -y openstack-packstack vim-enhanced
yum update -y
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
#confirm the network setup is working
ifdown eth0 && systemctl start network && ifup eth0 
#reboot to apply any patches that require it, etc.

The above snippet will

  • register your system to Red Hat via subscription-manager
  • attach the proper subscription pool (supplied by you)
  • enable the needed channels
  • install the RDO package repository
  • install a few things (I’m a Vim guy, feel free to edit)
  • disable NetworkManager (required for OpenStack) and replace it with a legacy network service script
  • activate the new network setup

Once this is set up on each host (I did it on one and cloned it twice to create the other VMs), you are ready to get OpenStack rolling.

Creating an answers file

On rdo0.localdomain, run the following command. It will generate a default answers file that you can then edit and keep up with over time as you deploy various OpenStack incarnations.

packstack --gen-answer-file rdo.txt

The following changes were made.

NOTE – if you create 2 answer files and diff them, you will see many other changes, as passwords are randomized each time.

# diff -u rdo.txt rdo-edited.txt 
--- rdo.txt 2015-08-23 15:41:45.041000000 -0400
+++ rdo-edited.txt 2015-08-21 20:17:05.538000000 -0400
@@ -64,7 +64,7 @@
 # Specify 'y' to install Nagios to monitor OpenStack hosts. Nagios
 # provides additional tools for monitoring the OpenStack environment.
 # ['y', 'n']
 # Comma-separated list of servers to be excluded from the
 # installation. This is helpful if you are running Packstack a second
@@ -84,7 +84,7 @@
 # List of IP addresses of the servers on which to install the Compute
 # service.
 # Specify 'y' to provision for demo usage and testing. ['y', 'n']
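Because the passwords are randomized on every run, a raw diff between two generated answer files is mostly noise. A small helper that filters out the password keys first (in packstack answer files they all end in `_PW`) makes the real edits stand out. This function is my own convenience, not part of packstack:

```shell
# diff two packstack answer files while ignoring the randomized *_PW lines
diff_answers() {
  diff <(grep -v '_PW=' "$1") <(grep -v '_PW=' "$2")
}
```

Run it as `diff_answers rdo.txt rdo-edited.txt`.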

And then you run packstack. Depending on your install targets and how much horsepower is available, this can take a while. On my laptop, it takes the better part of an hour.

packstack --answer-file=rdo.txt

Getting Networking Working

The next part of the setup will borrow heavily from this RDO blog post about setting up Neutron with an existing network.

After packstack does its thing, assuming you have a ‘Success!’ sort of output on your screen, you will then have a 3-node OpenStack cluster with 2 Nova Compute nodes and 1 node doing pretty much everything else. Unfortunately, out of the box you need to make a few tweaks so you can see your new instances from your libvirt networking (or the network in your lab or whatever your use case is).

NOTE – this needs to happen on each host

Create your bridge and set up your NIC

On my VMs the only NIC is named eth0 (a benefit of using a VM). So you may need to edit this slightly to fit your setup’s naming conventions.

We want to use a bridge device to get our VMs onto our network, so we create a device named br-ex. We then edit $YOUR_NIC to attach it to that bridge.

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex 
IPADDR= # Old eth0 IP since we want the network restart to not 
 # kill the connection, otherwise pick something outside your dhcp range
NETMASK= # your netmask
GATEWAY= # your gateway
DNS1= # your nameserver

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
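For reference, a working pair on my setup looks roughly like the following. The addresses are placeholders from the default libvirt network, so substitute your own; the `TYPE=OVSBridge`/`TYPE=OVSPort` and `DEVICETYPE=ovs` settings are what hand the devices over to Open vSwitch:

```shell
# /etc/sysconfig/network-scripts/ifcfg-br-ex  (placeholder addresses)
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.122.10
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=192.168.122.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  (the NIC joins the bridge)
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
```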

Tell Neutron about your bridge

We then run the following to tell Neutron to use a bridge called ‘br-ex’, and to use the proper plugins:

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan

You could probably just restart the Neutron services and be OK here, but I prefer belts and suspenders, so I reboot each node.

Define your software network

After the reboot, you should be able to ssh back into your normal IP address. We now have a host infrastructure that is ready to serve our OpenStack instances. So let’s define our SDN components so we can get going!

NOTE – This should be done on your controller node, rdo0.localdomain in my case

Provider network

# source keystonerc_admin
# neutron net-create external_network --provider:network_type flat --provider:physical_network extnet --router:external --shared

Public subnet and router

# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=,end= \
 --gateway= external_network
# neutron router-create router1
# neutron router-gateway-set router1 external_network

Private subnet

# neutron net-create private_network
# neutron subnet-create --name private_subnet private_network

Connect the two networks with the router

# neutron router-interface-add router1 private_subnet

Wrapping Up

And that’s it! You should now have 3 nodes ready to handle your OpenStack demo loads.

My current plan is to keep evolving this setup and re-deploying to do things like

  • take advantage of Ceph
  • Further distribute loads (maybe a virtual/physical hybrid setup)
  • handle multiple NICs
  • ???

If you have grief or ideas for improvement please feel free to comment below.

and you think the code is the hardest part

Well, you’re pretty much right. BUT.

I’ve been working, on and off, on a project called soscleaner since last December-ish. It’s a pretty straight-forward tool. It takes an existing sosreport and obfuscates data that people don’t typically like to release like hostnames and IP addresses. The novel part is that it maintains the relationships between obfuscated items and their counterparts. So a hostname or IP address is obfuscated with the same value in all of the files in an sosreport. It allows the person looking at the ‘scrubbed’ report to still perform meaningful troubleshooting.
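The mapping idea is simple enough to sketch. This is just my bash illustration of the concept (soscleaner itself is a Python application, and its real implementation differs): each real value gets assigned exactly one obfuscated token, so the relationships survive the scrubbing.

```shell
# sketch of consistent obfuscation: the same hostname always yields the
# same token, so cross-file relationships stay intact
declare -A HOSTMAP
HOSTCOUNT=0

obfuscate_host() {
  local real=$1
  if [[ -z ${HOSTMAP[$real]:-} ]]; then
    # first time we see this hostname: mint a new token and remember it
    HOSTCOUNT=$((HOSTCOUNT + 1))
    HOSTMAP[$real]="obfuscatedhost${HOSTCOUNT}.example.com"
  fi
  printf '%s\n' "${HOSTMAP[$real]}"
}
```

Call it twice with the same hostname and you get the same token back, which is the whole point.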

It’s not a big enough problem to get a true company or engineer’s attention, but it’s too big for a hack script. So I decided to try and tackle it. And I have to say that the current iteration isn’t too bad. It does what it’s supposed to do pretty reliably, and all of the artifacts to make it a ‘real program’ are in place. Artifacts like:

  • issue tracking
  • wiki
  • README and Licensing Decisions
  • publishing binary packages (RPMs in this case, right now)
  • publishing to language-specific repositories (PyPi in this case, since it’s a Python application)
  • creating repositories (see RPM’s link)
  • submitting it to a Linux distro (Fedora in this case, for now)
  • writing unittests (a first for me)
  • creating some sort of ‘homepage’
  • mailing lists

All of this has been an amazing learning experience, of course. But my biggest takeaway, easily, is that all of the things that wrap around the code to actually ‘publish’ an application are almost as hard as the coding itself. I am truly stunned, and I have a new appreciation now for the people who do it well every day.

git2changelog – converting your git logs into a spec file changelog automatically

Especially when it comes to application development, I’m a bit lazy and a general malcontent. I love solving the problem, but I hate dealing with the packaging and versioning and all of the stuff that makes something usable.  One of the things I always have trouble with is keeping track of my spec file changelog when I am rolling something into an RPM.

To help ease that I put together a small script that will take a git repository’s log between any two tags and output it in a format that is acceptable in an RPM spec file.

To do this I started with the Fedora Packaging Guidelines for Changelogs. This gave me the proper formatting to adhere to for my script.

Next I used the changelog in the sosreport package for inspiration. It’s available in its spec file.

The script I wrote is designed to run inside of a git repository. If you can come up with a better way to collate this data from the .git directory, please feel free to share. I’ve stuck it in a git repo so you can grab it if you want.
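The core trick is just git log’s pretty formats. Here is a stripped-down sketch of the idea (the real script’s flags and exact formatting differ, and the function name is mine): a Fedora-style `* date packager - version` header, then one `- subject : Commit hash` line per commit since the tag.

```shell
# emit the commits between a tag and HEAD as spec-style changelog lines
git2changelog_sketch() {
  local start_tag=$1
  local who
  who="$(git config user.name) <$(git config user.email)>"
  # header: "* Sat Jun 07 2014 Name <email> - HEAD:UNRELEASED"
  printf '* %s %s - HEAD:UNRELEASED\n' "$(date '+%a %b %d %Y')" "$who"
  # one line per commit between the tag and HEAD
  git log --no-merges --pretty='- %s : Commit %h' "${start_tag}..HEAD"
}
```

Run inside a repository, `git2changelog_sketch 0.1-8` produces output in the same shape as the sample below.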

The output for my soscleaner app looks like this:

$ ./git2changelog -b 0.1-8

* Sat Jun 07 2014 Jamie Duncan <> - HEAD:UNRELEASED
- 2f78c26 =review - added comment in _skip_file to likely remove a now useless if clause
- Merge pull request #11 from bmr-cymru/bmr-libmagic-fixes : Commit 4427b06
- Convert python to use native libmagic python-magic bindings : Commit 7db6a99
- Rename __init__ options argument for clarity : Commit f1353ea
- cleaning up the magic import - fixes #10 : Commit 554af49
- removing shebang from py module : Commit f70b856
- more cleanup : Commit 6ee1339

* Wed Jun 04 2014 Jamie Duncan <> - 0.1-12
- 47b3a47 =adding dist flag to spec file
- clean up : Commit 400a74a
- getting in with the guidelines : Commit ea1ad7c
- more spec refinements : Commit 64eb638
- more spec refinements : Commit 34eadd4
- making source ref in spec file a URL : Commit aefdd9b
- bringing spec file inline with Fedora standards : Commit 767e1d2
- updating macros in spec file for koji : Commit e4b46ea
- adding spec file : Commit e6738c0

* Tue Jun 03 2014 Jamie Duncan <> - 0.1-11
- ab0050b =updating changelog
- packaging cleanup : Commit d4a3428
- tweaking for rhn-support-tool : Commit 06a151f
- minor cleanup of an unused module and a repetitive line or 2 : Commit 7eb1726
- Update : Commit 97f047f
- cleaning up tarball paths. fixes #9 : Commit caa6536
- cleaning up tarball paths. fixes #9 : Commit e4e1cef
- updated : Commit f6c064a
- updated : Commit a070ebd
- fixing issue where checking compression type could error out because of capital letters where i thought it would always be capitalized : Commit cbf0e7d
- removing some cruft : Commit fdf82eb
- removed some old chmod bits that became un-needed when it became required to run as root : Commit cb15d97
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit ed5992e
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit 5c2dc32
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit 1df7e0d
- Merge branch 'master' of : Commit e462b7b
- minor cleanup : Commit 5fc8ce2
- Updating with better File Creation explanations : Commit bac8368
- Updating with better usage examples : Commit 6589b4c
- Updating : Commit 7dce4a9
- disallowing octects > 3 digits - to help cut down on false positives : Commit 319591f
- no longer matches IP addys starting with a 0 : Commit 0223d82

If you specify an ‘end tag’, then you won’t see the untagged commits in HEAD.

I don’t hate the word cloud anymore, maybe

Since it hit the scene in earnest a few years ago, I’ve despised the word ‘cloud’ in the context of what I do for a living. I’ve warned people prior to using it in presentations, proclaimed my joy for having not used the word and bashed it most every time it was mentioned.  I’m here to say that my position on ‘the cloud’ has matured now. I don’t hate the word. But I do hate how most people in the world are defining it.

My own definition has taken a long time to develop. I’ve known it was a powerful concept for some time. I’ve also known most of the IT world has been talking out of the side of their neck while they made their salaries by talking about it. I particularly enjoy people proclaiming they know all about the ‘next generation of cloud’ when we were all still defining the first one. At any rate, to define ‘cloud’, I first have to define ‘PaaS’ to my own satisfaction.

PaaS (noun) – short for Platform As A Service. It provides a complete application platform for users or developers (or both) by sufficiently abstracting those services away from their underlying technology platforms. 

A little wordy, I know, but we need to be specific here. Not only must a PaaS provide a platform, but it must do it in a way that abstracts developers and users from administration of the platform. It must also handle all of the tertiary services (DNS, port forwarding, scalability, inter-connectivity, etc.) that administrators usually have to handle after the fact.

PaaS lets developers develop, and lets admins admin.

So what is cloud?

cloud (noun) – an implementation of a PaaS solution that is seamlessly and automatically scalable to handle load demands as they grow and shrink.

So you start with a PaaS and you build it out so it will grow and shrink automatically as needed for its workload.

What is NOT a cloud?

  • provisioning virtual machines really quickly
  • setting up a PaaS that is brittle and confining for developers and users
  • writing 3 or 4 scripts to help automate your virtualization infrastructure
  • almost everything being marketed as a cloud today

To define a cloud you have to define PaaS. PaaS is defined as that slick layer of magic that abstracts the application away from everything the application runs on or in. A cloud is a seamlessly scalable instance of a good PaaS.  Easy, isn’t it? Step 3, profit!