Multi-Node OpenStack on your laptop in about an hour

OpenStack is just about the hottest thing going in IT today.  Started in 2010 as a joint project between Rackspace and NASA, it is quickly maturing into the premier solution for anyone who wants to make their infrastructure more self-service without having to go behind and clean up after developers all day every day.

Its biggest hurdle is still its learning curve and the associated pain you are almost guaranteed to suffer during your first install.  After approximately 243535 tries, I have a pretty solid process to stand up an OpenStack demo setup across multiple nodes on your laptop. It could also be altered (if any alterations are even needed) to deploy across multiple physical or virtual systems in any environment.

Depending on your bandwidth to patch the servers, it takes about an hour, soup to nuts.

Which Flavor?

RHEL OSP 7 comes with an awesome tool called OSP Director, which is based on TripleO. It is essentially a canned OpenStack install that you then use to deploy production OpenStack nodes. It’s called an ‘undercloud’.

For two reasons, I’m not using OSP Director for this demo.

  1. It takes more time and more resources. If I were doing this in my environment and I was an everyday engineer, I’d totally use it. But this is an exercise in tire-kicking.
  2. I haven’t had time yet to play with it very much.

Instead I’m using RDO’s Quickstart tool, which is based on Packstack.

OpenStack in 60 seconds

The goal when OpenStack was started was to engineer a FOSS alternative to Amazon Web Services. What they came up with is an ever-growing list of services, each of which performs a task that is required (or optionally neat) for building out a virtualized infrastructure.

The services are all federated together with RESTful APIs. Python is the language of choice.
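
To make that concrete, here is what talking to one of those APIs looks like from the command line. The host, tenant, and password below are placeholders, and newer Keystone deployments will want the v3 path instead of v2.0.

# ask Keystone for a token (v2.0 API, placeholder credentials)
curl -s -X POST http://rdo0.localdomain:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "CHANGEME"}}}'
# every other service (Nova, Neutron, Glance, ...) is driven the same way,
# passing the returned token in an X-Auth-Token header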

Core Services

  • Nova – compute services. The core brains of the operation and the initial product
  • Neutron – Software-defined Networking service. Nova also has some less flexible networking components built in.
  • Cinder – provides block devices to virtual machines (instances in OpenStack parlance)
  • Glance – manages images used to create instances
  • Swift – provides object/blob storage
  • Keystone – Identity and authentication services for all other services as well as users
  • Horizon – a Django-based web frontend that’s customizable and extensible
  • Heat – Orchestration services
  • Ceilometer – Telemetry

Optional Fun Services

  • Trove – Database-as-a-service
  • Ironic – Bare metal provisioning – treat your racked stuff like your virtual stuff
  • Sahara – Elastic-Map-Reduce-style Big-Data-as-a-Service (?!?!)
  • $insert_awesome_project_here

All OpenStack modules that are currently canon are listed on the project roadmap.


Cluster Setup

The demo setup I’ll be going through was set up on my laptop (Fedora 21) using good ol’ KVM virtual machines running RHEL 7. The laptop has 8 cores and 16GB of RAM total.

  • rdo0.localdomain ( – 4GB RAM, 2 VCPU
    • Controller Node (all services except Nova Compute)
  • rdo1.localdomain ( – 2GB RAM, 2 VCPU
    • Nova Compute Node
  • rdo2.localdomain ( – 2GB RAM, 2 VCPU
    • Nova Compute Node

Host OS Setup

NOTE – since these are VMs, the single NIC I assigned them was designated eth0. The naming convention has changed in RHEL 7, so substitute your own interface names as needed.

subscription-manager register --username=$RHN_USERNAME --password=$RHN_PASSWORD
subscription-manager attach --pool=$SUBSCRIPTION_MANAGER_POOL_ID
subscription-manager repos --disable=\* --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms
#install the RDO release RPM to enable the RDO package repository
yum install -y
yum install -y openstack-packstack vim-enhanced
yum update -y
#NetworkManager has to give way to the legacy network service for OpenStack
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
#confirm the network setup is working
ifdown eth0 && systemctl start network && ifup eth0
#reboot to apply any patches that require it, etc.

The above snippet will

  • register your system to Red Hat via subscription-manager
  • attach the proper subscription pool (supplied by you)
  • enable the needed channels
  • install the RDO package repository
  • install a few things (I’m a Vim guy, feel free to edit)
  • disable NetworkManager (required for OpenStack) and replace it with a legacy network service script
  • activate the new network setup

Once this is set up on each host (I did it on one and cloned it twice to create the other two VMs), you are ready to get OpenStack rolling.

Creating an answers file

On rdo0.localdomain, run the following command. It will generate a default answers file that you can then edit and keep up with over time as you deploy various OpenStack incarnations.

packstack --gen-answer-file rdo.txt

The following changes were made.

NOTE – if you create 2 answer files and diff them, you will see many other changes, as passwords are randomized each time.

# diff -u rdo.txt rdo-edited.txt 
--- rdo.txt 2015-08-23 15:41:45.041000000 -0400
+++ rdo-edited.txt 2015-08-21 20:17:05.538000000 -0400
@@ -64,7 +64,7 @@
 # Specify 'y' to install Nagios to monitor OpenStack hosts. Nagios
 # provides additional tools for monitoring the OpenStack environment.
 # ['y', 'n']
 # Comma-separated list of servers to be excluded from the
 # installation. This is helpful if you are running Packstack a second
@@ -84,7 +84,7 @@
 # List of IP addresses of the servers on which to install the Compute
 # service.
 # Specify 'y' to provision for demo usage and testing. ['y', 'n']
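
Those hunks fall around the Nagios, compute-hosts, and demo-provisioning settings. In the answers file, the relevant keys look like this (the values here are placeholders; use your own compute node IPs and preferences):

# placeholder values, not the ones from my actual answers file
CONFIG_NAGIOS_INSTALL=n
CONFIG_COMPUTE_HOSTS=192.168.122.11,192.168.122.12
CONFIG_PROVISION_DEMO=n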

And then you run packstack. Depending on your install targets and how much horsepower is available, this can take a while. On my laptop, it takes the better part of an hour.

packstack --answer-file=rdo.txt

Getting Networking Working

The next part of the setup borrows heavily from this RDO blog post about setting up Neutron with an existing network.

After packstack does its thing, assuming you have a ‘Success!’ sort of output on your screen, you will then have a 3-node OpenStack cluster with 2 Nova Compute nodes and 1 node doing pretty much everything else. Unfortunately, out of the box you need to make a few tweaks so you can see your new instances from your libvirt networking (or the network in your lab or whatever your use case is).
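
Before touching the network, it doesn’t hurt to do a quick sanity check from the controller (packstack drops a keystonerc_admin file there for you):

# on rdo0.localdomain
source keystonerc_admin
nova service-list       # both compute nodes should show up as enabled/up
neutron agent-list      # the DHCP, L3, and Open vSwitch agents should be alive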

NOTE – this needs to happen on each host

Create your bridge and set up your NIC

On my VMs the only NIC is named eth0 (a benefit of using a VM), so you may need to edit this slightly to fit your setup’s naming conventions.

We want to use a bridge device to get our VMs onto our network, so we create a device named br-ex. We then edit $YOUR_NIC to become a port on that bridge.

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex 
IPADDR= # Old eth0 IP since we want the network restart to not 
 # kill the connection, otherwise pick something outside your dhcp range
NETMASK= # your netmask
GATEWAY= # your gateway
DNS1= # your nameserver

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 

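For reference, here is a rough sketch of what those two files generally end up looking like when following that RDO post. The addresses are placeholders, and your NIC name may differ.

# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.122.10
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
DNS1=192.168.122.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (the NIC becomes an OVS port on br-ex)
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
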
Tell Neutron about your bridge

We then run the following to tell Neutron to use a bridge called ‘br-ex’ and to use the proper plugins:

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan

You could probably just restart the Neutron services and be OK here, but I prefer the belt-and-suspenders approach of a full reboot.
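
If you would rather skip the reboot, the lighter-weight option looks roughly like this (unit names per the RHEL 7 OpenStack packages):

# on every node running Open vSwitch
systemctl restart network
systemctl restart neutron-openvswitch-agent
# on the controller only
systemctl restart neutron-server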

Define your software network

After the reboot, you should be able to ssh back into your normal IP address. We now have a host infrastructure that is ready to serve our OpenStack instances. So let’s define our SDN components so we can get going!

NOTE – This should be done on your controller node, rdo0.localdomain in my case

Provider network

# source keystonerc_admin
# neutron net-create external_network --provider:network_type flat --provider:physical_network extnet --router:external --shared

Public subnet and router

# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=,end= \
 --gateway= external_network
# neutron router-create router1
# neutron router-gateway-set router1 external_network

Private subnet

# neutron net-create private_network
# neutron subnet-create --name private_subnet private_network

Connect the two networks with the router

# neutron router-interface-add router1 private_subnet
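
A quick check that everything is wired together looks something like this:

# still sourced as keystonerc_admin on the controller
neutron net-list
neutron subnet-list
neutron router-port-list router1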

Wrapping Up

And that’s it! You should now have 3 nodes ready to handle your OpenStack demo loads.

My current plan is to keep evolving this setup and re-deploying to do things like

  • take advantage of Ceph
  • Further distribute loads (maybe a virtual/physical hybrid setup)
  • handle multiple NICs
  • ???

If you have grief or ideas for improvement please feel free to comment below.

and you think the code is the hardest part

Well, you’re pretty much right. BUT.

I’ve been working, on and off, on a project called soscleaner since last December-ish. It’s a pretty straightforward tool. It takes an existing sosreport and obfuscates data that people typically don’t like to release, such as hostnames and IP addresses. The novel part is that it maintains the relationships between obfuscated items and their counterparts, so a hostname or IP address is obfuscated with the same value in every file of an sosreport. That lets the person looking at the ‘scrubbed’ report still perform meaningful troubleshooting.
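
To be clear about the idea, this is not soscleaner’s actual code, just a minimal shell sketch with made-up paths and a made-up domain pattern: build one substitution map up front, then apply that same map to every file, so each hostname always gets the same replacement.

# build a single hostname -> obfuscated-name map from the whole report...
grep -rhoE '[a-z0-9][a-z0-9.-]+\.example\.com' sosreport-dir/ | sort -u | \
  awk '{ printf "s/%s/host%04d.obfuscated.domain/g\n", $1, NR }' > hostmap.sed
# ...then apply that one map to every file, so the relationships survive
find sosreport-dir/ -type f -exec sed -i -f hostmap.sed {} +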

It’s not a big enough problem to get a whole company’s or a dedicated engineer’s attention, but it’s too big for a hack script. So I decided to try to tackle it. And I have to say that the current iteration isn’t too bad. It does what it’s supposed to do pretty reliably, and all of the artifacts to make it a ‘real program’ are in place. Artifacts like:

  • issue tracking
  • wiki
  • README and Licensing Decisions
  • publishing binary packages (RPMs in this case, right now)
  • publishing to language-specific repositories (PyPi in this case, since it’s a Python application)
  • creating repositories (see RPM’s link)
  • submitting it to a Linux distro (Fedora in this case, for now)
  • writing unittests (a first for me)
  • creating some sort of ‘homepage’
  • mailing lists

All of this has been an amazing learning experience, of course. But my biggest takeaway, easily, is that all of the things that wrap around the code to actually ‘publish’ an application are almost as hard as the coding itself. I am truly stunned, and I have a new appreciation for the people who do it well every day.

git2changelog – converting your git logs into a spec file changelog automatically

Especially when it comes to application development, I’m a bit lazy and a general malcontent. I love solving the problem, but I hate dealing with the packaging and versioning and all of the stuff that makes something usable.  One of the things I always have trouble with is keeping track of my spec file changelog when I am rolling something into an RPM.

To help ease that I put together a small script that will take a git repository’s log between any two tags and output it in a format that is acceptable in an RPM spec file.

To do this I started with the Fedora Packaging Guidelines for Changelogs. This gave me the proper formatting to adhere to for my script.

Next I used the changelog in the sosreport package for inspiration. It’s available in its spec file.
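
The heavy lifting boils down to git log with a custom format. A rough equivalent of what the script wraps looks like this (the tag names here are just examples):

# commits between two tags, shaped like the body of an RPM changelog entry
git log --no-merges --format='- %s : Commit %h' 0.1-8..0.1-12
# the script then prepends the '* Day Mon DD YYYY Name <email> - version' header
# that the Fedora guidelines call for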

The script I wrote is designed to run inside of a git repository, so I’ve stuck it in a git repo so you can grab it if you want. If you can come up with a better way to collate this data from the .git directory, please feel free to share.

Run against my soscleaner repo, the output looks like this:

$ ./git2changelog -b 0.1-8

* Sat Jun 07 2014 Jamie Duncan <> - HEAD:UNRELEASED
- 2f78c26 =review - added comment in _skip_file to likely remove a now useless if clause
- Merge pull request #11 from bmr-cymru/bmr-libmagic-fixes : Commit 4427b06
- Convert python to use native libmagic python-magic bindings : Commit 7db6a99
- Rename __init__ options argument for clarity : Commit f1353ea
- cleaning up the magic import - fixes #10 : Commit 554af49
- removing shebang from py module : Commit f70b856
- more cleanup : Commit 6ee1339

* Wed Jun 04 2014 Jamie Duncan <> - 0.1-12
- 47b3a47 =adding dist flag to spec file
- clean up : Commit 400a74a
- getting in with the guidelines : Commit ea1ad7c
- more spec refinements : Commit 64eb638
- more spec refinements : Commit 34eadd4
- making source ref in spec file a URL : Commit aefdd9b
- bringing spec file inline with Fedora standards : Commit 767e1d2
- updating macros in spec file for koji : Commit e4b46ea
- adding spec file : Commit e6738c0

* Tue Jun 03 2014 Jamie Duncan <> - 0.1-11
- ab0050b =updating changelog
- packaging cleanup : Commit d4a3428
- tweaking for rhn-support-tool : Commit 06a151f
- minor cleanup of an unused module and a repetitive line or 2 : Commit 7eb1726
- Update : Commit 97f047f
- cleaning up tarball paths. fixes #9 : Commit caa6536
- cleaning up tarball paths. fixes #9 : Commit e4e1cef
- updated : Commit f6c064a
- updated : Commit a070ebd
- fixing issue where checking compression type could error out because of capital letters where i thought it would always be capitalized : Commit cbf0e7d
- removing some cruft : Commit fdf82eb
- removed some old chmod bits that became un-needed when it became required to run as root : Commit cb15d97
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit ed5992e
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit 5c2dc32
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit 1df7e0d
- Merge branch 'master' of : Commit e462b7b
- minor cleanup : Commit 5fc8ce2
- Updating with better File Creation explanations : Commit bac8368
- Updating with better usage examples : Commit 6589b4c
- Updating : Commit 7dce4a9
- disallowing octects > 3 digits - to help cut down on false positives : Commit 319591f
- no longer matches IP addys starting with a 0 : Commit 0223d82

If you specify an ‘end tag’, you won’t see the untagged commits in HEAD.

I don’t hate the word cloud anymore, maybe

Since it hit the scene in earnest a few years ago, I’ve despised the word ‘cloud’ in the context of what I do for a living. I’ve warned people prior to using it in presentations, proclaimed my joy for having not used the word and bashed it most every time it was mentioned.  I’m here to say that my position on ‘the cloud’ has matured now. I don’t hate the word. But I do hate how most people in the world are defining it.

My own definition has taken a long time to develop. I’ve known it was a powerful concept for some time. I’ve also known most of the IT world has been talking out of the side of their neck even as they made their salaries by talking about it. I particularly enjoy people proclaiming they know all about the ‘next generation of cloud’ when we were all still defining the first one. At any rate, to define ‘cloud’, I first have to define ‘PaaS’ to my own satisfaction.

PaaS (noun) – short for Platform as a Service. It provides a complete application platform for users or developers (or both) by sufficiently abstracting that platform away from the underlying technology it runs on.

A little wordy, I know, but we need to be specific here. Not only must a PaaS provide a platform, it must do it in a way that abstracts developers and users from administration of the platform. It must also handle all of the tertiary services (DNS, port forwarding, scalability, inter-connectivity, etc.) that administrators usually have to handle after the fact.

PaaS lets developers develop, and lets admins admin.

So what is cloud?

cloud (noun) – an implementation of a PaaS solution that is seamlessly and automatically scalable to handle load demands as they grow and shrink.

So you start with a PaaS, and you build it out so it will grow and shrink automatically as needed for its workload.

What is NOT a cloud?

  • provisioning virtual machines really quickly
  • setting up a PaaS that is brittle and confining for developers and users
  • writing 3 or 4 scripts to help automate your virtualization infrastructure
  • almost everything being marketed as a cloud today

To define a cloud you have to define PaaS. PaaS is defined as that slick layer of magic that abstracts the application away from everything the application runs on or in. A cloud is a seamlessly scalable instance of a good PaaS.  Easy, isn’t it? Step 3, profit!

SELinux talk at RVaLUG – 20140419

This morning I gave what turned out to be a pretty well-received talk about SELinux. We got into the important definitions and pretty deep into how type enforcement works. Lots of practical examples and fun stuff.
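
If you missed it, the heart of type enforcement fits in a couple of commands (my go-to illustration, not necessarily the exact labs from the deck):

ps -eZ | grep httpd      # the web server runs as the httpd_t type
ls -Z /var/www/html      # its content is labeled httpd_sys_content_t
# the policy decides which types a process type may read, write, or execute;
# denials land in the audit log
ausearch -m avc -ts recent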

Of course, why spend hours coming up with a new slide deck when you can borrow from amazing work done by co-workers? :)

The slide deck I used was a slightly modified deck used (last I know of) for a Red Hat TAM Webinar last April.  It also came with a set of lab questions that we didn’t have time to go through today.

And of course, there is the SELinux Coloring Book.

The talk was long for a LUG meeting (right around 90 minutes plus a little follow-up), but the interaction was great and I think we had some good communication going.



getting crafty with my hospital stay

Healthcare and Insurance – My Snapshot

Last May I got sick. Like for-real sick, for the first time in my life. I had what was apparently a massive blood clot that was impeding the functioning of both of my lungs. The official term is a ‘bilateral pulmonary embolism’. In reality it meant that if I walked 50 feet I would pass out, have what looked like a seizure, and start throwing up all over myself in the emergency room while my wife screamed. I still owe that security guard a firm handshake and a bottle of his drink of choice. I liken it to getting hit by lightning. It came out of nowhere and laid me completely low in the span of 2 hours. There was no discernible warning and a root cause was never determined.

Last October I was tapped by my company to help work on HealthCare.gov. You might have heard of it. The initial launch didn’t go so well, but it made a pretty strong comeback here recently. It even managed to make some of the major news outlets, including Business Insider and CBS News.

Today I logged into my insurance company’s website to get some information, and out of morbid curiosity I started looking at the claims filed for me in the past year. This would cover the time I was actually in the hospital, the 6 months of follow-ups, and the maintenance for a condition I’d known about for a while but didn’t start addressing until I got sick (sleep apnea).

In the past 365 days, my insurance has been billed $47,752.13.
In the past 365 days, I have owed $768.09 for those billings.

My insurance has covered 98.4% of my medical bills in the past year.

The lessons I’ve learned today:

  1. If I didn’t have health insurance, I would be bankrupt.
  2. I’ve never felt more contempt for the people fighting insurance and healthcare reform in the United States.
  3. If you don’t have health insurance, I am truly fearful for you on multiple levels.
  4. I’ve never been more proud to have contributed professionally to something than the work I did during the last quarter of 2013 with the people working on HealthCare.gov.

kpatch – my kneejerk reaction

Oracle gobbled up a company called KSplice 50 ITYA (IT Years Ago – or July 2011). They then shoe-horned it into their downstream clone of RHEL so people could slip in kernel upgrades without rebooting systems, sort of like how magicians yank tablecloths out from under dishes on a table. It’s scary on any number of levels.

Now there is a new-ish project called kpatch that has the backing of Red Hat (full disclosure – I work for Shadowman). I’ve only had a little time to look at the incomplete documentation on how it works. That said, it looks to be a huge step forward over ksplice. From its Red Hat blog announcement:

With respect to granularity, kpatch works at the function level; put simply, old functions are replaced with new ones.  It has four main components:

  • kpatch-build: a collection of tools which convert a source diff patch to a hot patch module. They work by compiling the kernel both with and without the source patch, comparing the binaries, and generating a hot patch module which includes new binary versions of the functions to be replaced.
  • hot patch module: a kernel module (.ko file) which includes the replacement functions and metadata about the original functions.
  • kpatch core module: a kernel module (.ko file) which provides an interface for the hot patch modules to register new functions for replacement.  It uses the kernel ftrace subsystem to hook into the original function’s mcount call instruction, so that a call to the original function is redirected to the replacement function.
  • kpatch utility: a command-line tool which allows a user to manage a collection of hot patch modules.  One or more hot patch modules may be configured to load at boot time, so that a system can remain patched even after a reboot into the same version of the kernel.

That’s way cooler than just doing some fancy RAM voodoo and slipping new kernels in like ksplice.
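
As best I can tell from the docs so far, the workflow shapes up something like this (treat the exact flags and file names as illustrative):

# build a hot patch module from a source patch (this compiles the kernel twice)
kpatch-build fix-the-thing.patch
# load the resulting module; the core module redirects calls via ftrace
kpatch load kpatch-fix-the-thing.ko
# see what is currently applied
kpatch list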

But I still don’t see where it has a place on a company’s production server or in their security plans.

I believe that if a system cannot sustain the reboot of a single instance of Linux (physical or virtual), then there is a serious flaw in its architecture. To further that, I think something like kpatch could end up being a strong crutch for bad architects out there, allowing them to keep working in this flawed manner.

I know that my crazy idealism doesn’t represent the current reality everywhere (or almost anywhere). But if this is the only justification for its existence, then I think we could, and should, be using our cycles better somewhere else.

More details as I discover them.