OpenStack is just about the hottest thing going in IT today. Started in 2010 as a joint project between Rackspace and NASA, it is quickly maturing into the premier solution for anyone who wants to make their infrastructure more self-service without having to go behind and clean up after developers all day, every day.
Its biggest hurdle is still its learning curve and the associated pain you are almost guaranteed to suffer during your first install. After approximately 243,535 tries, I have a pretty solid process for standing up an OpenStack demo across multiple nodes on your laptop. It could also be altered (if any alterations are even needed) to deploy across multiple physical or virtual systems in any environment.
Depending on your bandwidth to patch the servers, it takes about an hour, soup to nuts.
RHEL OSP 7 comes with an awesome tool called OSP Director, which is based on TripleO. It is essentially a canned OpenStack install that you then use to deploy production OpenStack nodes. It’s called an ‘undercloud’.
I’m not using OSP Director for this demo, for two reasons:
- It takes more time and more resources. If I were doing this in my environment and I was an everyday engineer, I’d totally use it. But this is an exercise in tire-kicking.
- I haven’t had time yet to play with it very much.
Instead I’m using RDO’s Quickstart tool, which is based on Packstack.
OpenStack in 60 seconds
The goal when OpenStack was started was to engineer a FOSS alternative to Amazon Web Services. What they came up with is an ever-growing list of services, each performing a task that is required (or optionally neat) for building out a virtualized infrastructure.
The services are all federated together with RESTful APIs. Python is the language of choice.
- Nova – compute services. The core brains of the operation and the initial product
- Neutron – Software-defined Networking service. Nova also has some less flexible networking components built in.
- Cinder – provides block devices to virtual machines (instances in OpenStack parlance)
- Glance – manages images used to create instances
- Swift – provides object/blob storage
- Keystone – Identity and authentication services for all other services as well as users
- Horizon – a Django-based web frontend that’s customizable and extensible
- Heat – Orchestration services
- Ceilometer – Telemetry
Optional Fun Services
- Trove – Database-as-a-service
- Ironic – Bare metal provisioning – treat your racked stuff like your virtual stuff
- Sahara – Elastic-Map-Reduce-style Big-Data-as-a-Service (?!?!)
All OpenStack modules that are currently canon are listed on the project roadmap.
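To make the RESTful-federation point concrete, here is a sketch of the JSON body a Keystone v3 password authentication expects. The user, project, and password below are placeholders, not values from this demo; in a real environment you would POST this body to the Keystone endpoint (e.g. http://controller:5000/v3/auth/tokens) and get a token back in the X-Subject-Token response header.

```python
import json

# All names below are placeholders, not values from this demo environment.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"name": "Default"},
                    "password": "secret",
                }
            },
        },
        "scope": {"project": {"name": "demo", "domain": {"name": "Default"}}},
    }
}

# Every OpenStack service speaks this same JSON-over-HTTP dialect.
print(json.dumps(auth_request, indent=2))
```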
The demo setup I’ll be going through was built on my laptop (Fedora 21) using good ol’ KVM virtual machines running RHEL 7. The laptop has 8 cores and 16GB of RAM total.
- rdo0.localdomain (192.168.122.100) – 4GB RAM, 2 VCPU
  - Controller node (all services except Nova Compute)
- rdo1.localdomain (192.168.122.101) – 2GB RAM, 2 VCPU
  - Compute node (Nova Compute)
- rdo2.localdomain (192.168.122.102) – 2GB RAM, 2 VCPU
  - Compute node (Nova Compute)
Host OS Setup
NOTE – since these are VMs, the single NIC I assigned each of them is named eth0. We all know the naming convention has changed in RHEL 7, so substitute your NIC names in the commands below as needed.
```shell
subscription-manager register --username=$RHN_USERNAME --password=$RHN_PASSWORD
subscription-manager attach --pool=$SUBSCRIPTION_MANAGER_POOL_ID
subscription-manager repos --disable=\* --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y openstack-packstack vim-enhanced
yum update -y
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
# confirm the network setup is working
ifdown eth0 && systemctl start network && ifup eth0
# reboot to apply any patches that require it, etc.
```
The above snippet will
- register your system to Red Hat via subscription-manager
- attach the proper subscription pool (supplied by you)
- enable the needed channels
- install the RDO package repository
- install a few things (I’m a Vim guy, feel free to edit)
- disable NetworkManager (required for OpenStack) and replace it with a legacy network service script
- activate the new network setup
Once this is set up on each host (I did it on one VM and cloned it twice to create the other two), you are ready to get OpenStack rolling.
Creating an answers file
On rdo0.localdomain, run the following command. It will generate a default answers file that you can then edit and keep up with over time as you deploy various OpenStack incarnations.
```shell
packstack --gen-answer-file rdo.txt
```
I then made a handful of changes, centered on the Nagios install flag, the list of Compute hosts, and demo provisioning:
NOTE – if you create 2 answer files and diff them, you will see many other changes, as passwords are randomized each time.
```diff
# diff -u rdo.txt rdo-edited.txt
--- rdo.txt 2015-08-23 15:41:45.041000000 -0400
+++ rdo-edited.txt 2015-08-21 20:17:05.538000000 -0400
@@ -64,7 +64,7 @@
 # Specify 'y' to install Nagios to monitor OpenStack hosts. Nagios
 # provides additional tools for monitoring the OpenStack environment.
 # ['y', 'n']
 # Comma-separated list of servers to be excluded from the
 # installation. This is helpful if you are running Packstack a second
@@ -84,7 +84,7 @@
 # List of IP addresses of the servers on which to install the Compute
 # Specify 'y' to provision for demo usage and testing. ['y', 'n']
```
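Because the answers file is a flat ini file, the edits can also be scripted. Here is a sketch using Python’s configparser against a tiny stand-in file; the key names (CONFIG_NAGIOS_INSTALL, CONFIG_COMPUTE_HOSTS, CONFIG_PROVISION_DEMO) should match what Packstack generates, but verify them against your own rdo.txt before trusting this.

```python
import configparser

# Tiny stand-in for a generated answers file (real ones are hundreds of lines).
with open("answers-sample.txt", "w") as f:
    f.write(
        "[general]\n"
        "CONFIG_NAGIOS_INSTALL=y\n"
        "CONFIG_COMPUTE_HOSTS=192.168.122.100\n"
        "CONFIG_PROVISION_DEMO=y\n"
    )

cfg = configparser.ConfigParser()
cfg.optionxform = str  # keep Packstack's upper-case key names intact
cfg.read("answers-sample.txt")

# The demo edits: skip Nagios, point Compute at the two compute VMs, no demo tenant.
cfg["general"]["CONFIG_NAGIOS_INSTALL"] = "n"
cfg["general"]["CONFIG_COMPUTE_HOSTS"] = "192.168.122.101,192.168.122.102"
cfg["general"]["CONFIG_PROVISION_DEMO"] = "n"

with open("answers-edited.txt", "w") as f:
    cfg.write(f)
```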
And then you run Packstack against the answers file. Depending on your install targets and how much horsepower is available, this can take a while. On my laptop, it takes the better part of an hour.
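The run itself is a single command pointed at the edited file:

```shell
packstack --answer-file=rdo-edited.txt
```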
Getting Networking Working
The next part of the setup will borrow heavily from this RDO blog post about setting up Neutron with an existing network.
After Packstack does its thing, assuming you got a ‘Success!’ sort of output on your screen, you will have a 3-node OpenStack cluster with 2 Nova Compute nodes and 1 node doing pretty much everything else. Unfortunately, out of the box you need to make a few tweaks before you can reach your new instances from your libvirt network (or the network in your lab, or whatever your use case is).
NOTE – this needs to happen on each host
Create your bridge and set up your NIC
On my VMs the only NIC is named eth0 (a benefit of using a VM), so you may need to edit this slightly to fit your setup’s naming conventions.
We want to use a bridge device to get our instances onto our network, so we create a device named br-ex and then edit the NIC’s config file to attach it to the bridge as an OVS port.
```
[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.122.100 # old eth0 IP, so the network restart doesn't kill the
                       # connection; otherwise pick something outside your DHCP range
NETMASK=255.255.255.0  # your netmask
GATEWAY=192.168.122.1  # your gateway
DNS1=192.168.122.1     # your nameserver

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
```
Tell Neutron about your bridge
We then run the following to tell Neutron to map the ‘extnet’ physical network onto a bridge called ‘br-ex’, and to enable the flat and vlan type drivers alongside vxlan:

```shell
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
```
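Under the hood, openstack-config just sets a key in a section of an ini file. The second command above is roughly equivalent to this Python sketch (using a local stand-in file instead of /etc/neutron/plugin.ini):

```python
import configparser

# Local stand-in for /etc/neutron/plugin.ini
with open("plugin.ini", "w") as f:
    f.write("[ml2]\ntype_drivers = vxlan\n")

# Equivalent of:
#   openstack-config --set plugin.ini ml2 type_drivers vxlan,flat,vlan
cfg = configparser.ConfigParser()
cfg.read("plugin.ini")
cfg["ml2"]["type_drivers"] = "vxlan,flat,vlan"
with open("plugin.ini", "w") as f:
    cfg.write(f)
```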
You could probably just restart the Neutron services and be OK here, but I prefer the belts-and-suspenders approach: reboot each node.
Define your software network
After the reboot, you should be able to ssh back into your normal IP address. We now have a host infrastructure that is ready to serve our OpenStack instances. So let’s define our SDN components so we can get going!
NOTE – This should be done on your controller node, rdo0.localdomain in my case
```shell
# source keystonerc_admin
# neutron net-create external_network --provider:network_type flat --provider:physical_network extnet --router:external --shared
```
Public subnet and router
```shell
# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.122.10,end=192.168.122.20 \
  --gateway=192.168.122.1 external_network 192.168.122.0/24
```
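A quick sanity check (plain Python, nothing OpenStack-specific) that the floating-IP pool and gateway actually sit inside that CIDR:

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.122.0/24")
pool_start = ipaddress.ip_address("192.168.122.10")
pool_end = ipaddress.ip_address("192.168.122.20")
gateway = ipaddress.ip_address("192.168.122.1")

# Pool endpoints and gateway must all fall inside the external subnet.
assert pool_start in subnet and pool_end in subnet and gateway in subnet

pool_size = int(pool_end) - int(pool_start) + 1
print(f"{pool_size} floating IPs in the pool")  # 11
```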
```shell
# neutron router-create router1
# neutron router-gateway-set router1 external_network
```
Private network and subnet

```shell
# neutron net-create private_network
# neutron subnet-create --name private_subnet private_network 192.168.100.0/24
```
Connect the two networks with the router
```shell
# neutron router-interface-add router1 private_subnet
```
And that’s it! You should now have 3 nodes ready to handle your OpenStack demo loads.
My current plan is to keep evolving this setup and re-deploying it to do things like
- take advantage of Ceph
- further distribute loads (maybe a virtual/physical hybrid setup)
- handle multiple NICs
If you have grief or ideas for improvement please feel free to comment below.