VMWare – A Cautionary Tale for Docker?

Of course VMWare has made a ton of money over the last ~12 years. They won every battle in ‘The Hypervisor Wars’. Now, at the turn of 2015 it looks to me like they’ve lost the wars themselves.

What? Am I crazy? VMWare has made stockholders a TON of money over the years. There’s certainly no denying that. They also have a stable, robust core product. So how did they lose? They lost because there’s not a war to fight anymore.

Virtualization has become a commodity. The workflows and business processes surrounding virtualization are where VMWare has spent the lion’s share of its R&D budget over the years. And now that is the least important part of virtualization. With KVM being the default hypervisor for OpenStack, those workflows have been abstracted higher up the Operations tool chain. Sure, there will always be profit margins in commodities like virtualization. But the sizzle is gone. And in IT today, if your company doesn’t have sizzle, you’re a target for the wolves.

Of course Docker and VMWare are very different companies. Docker, Inc. has released its code as an open source project for ages. They also have an incredibly engaged (if not always listened-to) community around it. They had the genius idea, not of containers, but of making containers easily portable between systems. It’s a once-in-a-lifetime idea, and it is revolutionizing how we create and deliver software.

But as an idea, there isn’t a ton of money in it.  Sure Docker got a ton of VC to go out and build a business around this idea. But where are they building that business?

I’m not saying these aren’t good products. Most of them have value. But they are all business process improvements around their original idea (docker-style containers).

VMWare had a good (some would call great) run by wrapping business process improvements around their take on a hypervisor. Unfortunately they now find themselves trying to play catch-up as they shoehorn new ideas like IaaS and Containers into their suddenly antiquated business model.

I don’t have an answer here, because I’m no closer to internal Docker, Inc. strategy meetings than I am to Mars. But I do wonder if they are working on their next great idea, or if they are focused on taking a great idea and making a decent business around it. It has proven to be penny-wise for them. But will it be pound-foolish? VMWare may have some interesting insights on that.

 

RSS from Trello with Jenkins

Trello is a pretty nice web site. It is (sort of) a kanban board that is very useful when organizing groups of people in situations where a full agile framework would be too cumbersome. Kanban is used a lot in IT Operations. If you want a great story on it, go check out The Phoenix Project.

One thing Trello is lacking, however, is the ability to tap into an RSS-style feed for one or more of your boards. But, where there is an API, there’s a way. This took me about 30 minutes to iron out, and is heavily borrowed from the basic example in the documentation for trello-rss.

Step One – Three-legged OAuth

Trello uses OAuth. So you will need to get your developer API keys from Trello. You will also need to get permanent (or expiring whenever you want) OAuth tokens from them. This process is a little cloudy, but I found a post on StackOverflow that got me over the hump.
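If you only need a token for your own account, Trello’s docs also describe a simpler one-step flow: you build an authorization URL, open it in a browser once, and copy the token it hands back. Here is a sketch of building that URL; the parameters come from Trello’s token documentation, and the application name is just a placeholder.

```python
# Sketch: build the one-time Trello authorization URL for your API key.
# Visiting it in a browser grants a read-only, non-expiring token.
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode       # Python 2

def trello_authorize_url(api_key, app_name='my-rss-feed',
                         scope='read', expiration='never'):
    params = {
        'key': api_key,
        'name': app_name,
        'scope': scope,
        'expiration': expiration,
        'response_type': 'token',
    }
    return 'https://trello.com/1/authorize?' + urlencode(params)

print(trello_authorize_url('YOUR_API_KEY'))
```

The full three-legged flow (with a token secret) is still what the script below uses, but this shortcut is handy for quick experiments.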

Step Two – a little python

I created a little bit of python to handle this for me. Bear in mind it’s still VERY rough. My thought is to start to incorporate other Trello automation and time-savers into it down the road. If that happens I’ll stick it out on github.

#!/usr/bin/env python
from trello_rss.trellorss import TrelloRSS
from optparse import OptionParser
import sys

class TrelloAutomate:
    '''
    Used for basic automation tasks with Trello,
    particularly with CI/CD platforms like Jenkins.
    Author: jduncan
    Licence: GPL2+
    Dependencies (py modules):
    - httplib2
    - oauthlib / oauth2
    '''
    def __init__(self):
        # Python 2 hack so non-ASCII card titles don't break the feed
        reload(sys)
        sys.setdefaultencoding('utf8')
        # replace these placeholders with your keys/tokens from Step One
        self.oauth_token = 'my_token'
        self.oauth_token_secret = 'my_token_secret'
        self.oauth_apikey = 'my_api_key'
        self.oauth_api_private_key = 'my_api_private_key'

    def _get_rss_data(self):
        rss = TrelloRSS(self.oauth_apikey,
                        self.oauth_api_private_key,
                        self.oauth_token,
                        channel_title="My RSS Title",
                        rss_channel_link="https://trello.com/b/XXX/board_name",
                        description="My Description")
        rss.get_all(50)
        return rss.rss

    def create_rss_file(self, filename):
        data = self._get_rss_data()
        fh = open(filename, 'w')
        for line in data:
            fh.write(line)
        fh.close()

def main():
    parser = OptionParser(usage="%prog [options]", version="%prog 0.1")
    parser.add_option("-r", "--rss",
                      action="store_true",
                      dest="rss",
                      help="create the rss feed")
    parser.add_option("-f", "--file",
                      dest="filename",
                      default="trello.xml",
                      help="output filename. default = trello.xml",
                      metavar="FILENAME")
    (options, args) = parser.parse_args()
    trello = TrelloAutomate()
    if options.rss:
        trello.create_rss_file(options.filename)

if __name__ == '__main__':
    main()

Step Three – Jenkins Automation

At this point I could stick this little script on a web server and have it generate my feed for me with a crontab entry. But that would mean my web server would have to build content instead of just serving it. I don’t like that.

Instead I will build my content on a build server (Jenkins) and then deploy it to my web server so people can access my RSS feed easily.

Put your python on your build server

Get your python script to your build server, and make sure you satisfy all of the needed dependencies. You will know if you haven’t, because your script won’t work. 🙂 For one-off scripts like this I tend to put them in /usr/local/bin/$appname/. But that’s just my take on the FHS.

Create your build job

This is a simple build job, especially since it’s not pulling anything out of source control. You just tell it what command to run, how often to run it, and where to put what is generated.

trello-rss-1
The key at the beginning is to not keep all of these builds. If you run this frequently you could fill up lots of things on your system with old cruft from 1023483248 builds ago. I run mine every 15 minutes (you’ll see later) and keep output from the last 10.
trello-rss-2
Here I tell Jenkins to run this job every 15 minutes. The syntax is sorta’ like a crontab, but not exactly. The help icon is your friend here.
trello-rss-3
I have previously defined where to send my web docs (see my previous post about automating documentation). If you don’t specify a filename, the script above saves the RSS feed as ‘trello.xml’. I just take the default here and send trello.xml to the root directory on my web server.
trello-rss-4
And this is the actual command to run. You can see the -f and -r options I define in the script above. $WORKSPACE is a Jenkins variable that is the filesystem location for the current build workspace. I just output the file there.
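For reference, the build step boils down to a one-line shell command along these lines. The script path and filename here are my own conventions from above; adjust them to wherever you installed the script.

```shell
# Execute-shell build step: generate the feed into the Jenkins workspace
python /usr/local/bin/trello-rss/trello-rss.py -r -f $WORKSPACE/trello.xml
```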

Summary

So using a little python and my trusty Jenkins server, I now have an RSS Feed at $mywebserver/trello.xml that is updated every 15 minutes (or however often you want).

Of course this code could get way more involved. The py-trello module that it uses is very robust and easy to use for all of your Trello needs. I highly recommend it.

If I have time to expand on this idea I’ll post a link to the github where I upload it.

-jduncan

 

Multi-Node OpenStack on your laptop in about an hour

OpenStack is just about the hottest thing going in IT today.  Started in 2010 as a joint project between Rackspace and NASA, it is quickly maturing into the premier solution for anyone who wants to make their infrastructure more self-service without having to go behind and clean up after developers all day every day.

Its biggest hurdle is still its learning curve and the associated pain you are almost guaranteed to suffer during your first install.  After approximately 243535 tries, I have a pretty solid process to stand up an OpenStack demo setup across multiple nodes on your laptop. It could also be altered (if any alterations are even needed) to deploy it across multiple physical or virtual systems in any environment.

Depending on your bandwidth to patch the servers, it takes about an hour, soup to nuts.

Which Flavor?

RHEL OSP 7 comes with an awesome tool called OSP Director, which is based on TripleO. It is essentially a canned OpenStack install that you then use to deploy production OpenStack nodes. It’s called an ‘undercloud’.

For two reasons, I’m not using OSP Director for this demo.

  1. It takes more time and more resources. If I were doing this in my environment and I was an everyday engineer, I’d totally use it. But this is an exercise in tire-kicking.
  2. I haven’t had time yet to play with it very much.

Instead I’m using RDO’s Quickstart tool, which is based on Packstack.

OpenStack in 60 seconds

The goal when OpenStack was started was to engineer a FOSS alternative to Amazon Web Services. What they came up with is an ever-growing list of services, each performing a task that is required (or optionally just neat) to build out a virtualized infrastructure.

The services are all federated together with RESTful APIs. Python is the language of choice.
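Because everything is a RESTful API, every interaction starts the same way: you ask Keystone for a token, then pass that token to the other services. As a sketch, this is the shape of the JSON body you POST to Keystone’s v2.0 tokens endpoint (v2.0 was current at the time of writing; the credentials and controller hostname are placeholders).

```python
import json

# Sketch: the auth body POSTed to http://controller:5000/v2.0/tokens.
# Keystone answers with a token the other services' APIs will accept.
def keystone_v2_auth_body(username, password, tenant_name):
    return {
        'auth': {
            'tenantName': tenant_name,
            'passwordCredentials': {
                'username': username,
                'password': password,
            },
        }
    }

print(json.dumps(keystone_v2_auth_body('admin', 'secret', 'admin')))
```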

Core Services

  • Nova – compute services. The core brains of the operation and the initial product
  • Neutron – Software-defined Networking service. Nova also has some less flexible networking components built in.
  • Cinder – provides block devices to virtual machines (instances in OpenStack parlance)
  • Glance – manages images used to create instances
  • Swift – provides object/blob storage
  • Keystone – Identity and authentication services for all other services as well as users
  • Horizon – a Django-based web frontend that’s customizable and extensible
  • Heat – Orchestration services
  • Ceilometer – Telemetry

Optional Fun Services

  • Trove – Database-as-a-service
  • Ironic – Bare metal provisioning – treat your racked stuff like your virtual stuff
  • Sahara – Elastic-Map-Reduce-style Big-Data-as-a-Service (?!?!)
  • $insert_awesome_project_here

All OpenStack modules that are currently canon are listed on the project roadmap.

HOWTO

Cluster Setup

The demo setup I’ll be going through was set up on my laptop (Fedora 21) using good ol’ KVM Virtual Machines running RHEL 7. The laptop has 8 cores and 16GB of RAM total.

  • rdo0.localdomain (192.168.122.100) – 4GB RAM, 2 VCPU
    • Controller Node (all services except Nova Compute)
  • rdo1.localdomain (192.168.122.101) – 2GB RAM, 2 VCPU
    • Nova Compute Node
  • rdo2.localdomain (192.168.122.102) – 2GB RAM, 2 VCPU
    • Nova Compute Node

Host OS Setup

NOTE – since these are VMs, the single NIC I assigned them was designated eth0. We all know the naming convention has changed in RHEL 7, so adjust the device names to match your systems.

subscription-manager register --username=$RHN_USERNAME --password=$RHN_PASSWORD
subscription-manager attach --pool=$SUBSCRIPTION_MANAGER_POOL_ID
subscription-manager repos --disable=\* --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y
sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm
yum install -y openstack-packstack vim-enhanced
yum update -y
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl enable network
#confirm the network setup is working
ifdown eth0 && systemctl start network && ifup eth0 
#reboot to apply any patches that require it, etc.
reboot

The above snippet will

  • register your system to Red Hat via subscription-manager
  • attach the proper subscription pool (supplied by you)
  • enable the needed channels
  • install the RDO package repository
  • install a few things (I’m a Vim guy, feel free to edit)
  • disable NetworkManager (required for OpenStack) and replace it with a legacy network service script
  • activate the new network setup

Once this is set up on each host (I did it on one and cloned it twice to create the other VMs), you are ready to get OpenStack rolling.

Creating an answers file

On rdo0.localdomain, run the following command. It will generate a default answers file that you can then edit and keep up with over time as you deploy various OpenStack incarnations.

packstack --gen-answer-file rdo.txt

The following changes were made.

NOTE – if you create 2 answer files and diff them, you will see many other changes, as passwords are randomized each time.

# diff -u rdo.txt rdo-edited.txt 
--- rdo.txt 2015-08-23 15:41:45.041000000 -0400
+++ rdo-edited.txt 2015-08-21 20:17:05.538000000 -0400
@@ -64,7 +64,7 @@
 # Specify 'y' to install Nagios to monitor OpenStack hosts. Nagios
 # provides additional tools for monitoring the OpenStack environment.
 # ['y', 'n']
-CONFIG_NAGIOS_INSTALL=y
+CONFIG_NAGIOS_INSTALL=n
 
 # Comma-separated list of servers to be excluded from the
 # installation. This is helpful if you are running Packstack a second
@@ -84,7 +84,7 @@
 
 # List of IP addresses of the servers on which to install the Compute
 # service.
-CONFIG_COMPUTE_HOSTS=192.168.122.100
+CONFIG_COMPUTE_HOSTS=192.168.122.101,192.168.122.102
 
 # Specify 'y' to provision for demo usage and testing. ['y', 'n']
-CONFIG_PROVISION_DEMO=y
+CONFIG_PROVISION_DEMO=n
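If you rebuild often, editing the answers file by hand gets old. A minimal sketch of scripting the same three edits, assuming the simple KEY=VALUE layout packstack generates (comment lines are left alone):

```python
# Sketch: rewrite KEY=VALUE lines in a packstack answers file.
def edit_answers(text, overrides):
    out = []
    for line in text.splitlines():
        key, sep, _ = line.partition('=')
        if sep and not line.startswith('#') and key in overrides:
            line = '%s=%s' % (key, overrides[key])
        out.append(line)
    return '\n'.join(out)

# The same three changes shown in the diff above
overrides = {
    'CONFIG_NAGIOS_INSTALL': 'n',
    'CONFIG_COMPUTE_HOSTS': '192.168.122.101,192.168.122.102',
    'CONFIG_PROVISION_DEMO': 'n',
}

sample = 'CONFIG_NAGIOS_INSTALL=y\nCONFIG_COMPUTE_HOSTS=192.168.122.100'
print(edit_answers(sample, overrides))
```

In practice you would read rdo.txt, run it through `edit_answers`, and write the result back out before handing it to packstack.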

And then you run packstack. Depending on your install targets and how much horsepower is available, this can take a while. On my laptop, it takes the better part of an hour.

packstack --answer-file=rdo.txt

Getting Networking Working

The next part of the setup will borrow heavily from this RDO blog post about setting up Neutron with an existing network.

After packstack does its thing, assuming you have a ‘Success!’ sort of output on your screen, you will then have a 3-node OpenStack cluster with 2 Nova Compute nodes and 1 node doing pretty much everything else. Unfortunately, out of the box you need to make a few tweaks so you can see your new instances from your libvirt networking (or the network in your lab or whatever your use case is).

NOTE – this needs to happen on each host

Create your bridge and set up your NIC

On my VMs the only NIC is named eth0 (a benefit of using a VM), so you may need to edit this slightly to fit your setup’s naming conventions.

We want to use a bridge device to get our VMs onto our network, so we create a device named br-ex. We then edit $YOUR_NIC to turn it into an OVS port attached to that bridge.

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-ex 
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.122.100 # Old eth0 IP since we want the network restart to not 
 # kill the connection, otherwise pick something outside your dhcp range
NETMASK=255.255.255.0 # your netmask
GATEWAY=192.168.122.1 # your gateway
DNS1=192.168.122.1 # your nameserver
ONBOOT=yes

[root@rdo0 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
#TYPE=Ethernet
#BOOTPROTO=none
#DEFROUTE=yes
#IPV4_FAILURE_FATAL=no
#IPV6INIT=yes
#IPV6_AUTOCONF=yes
#IPV6_DEFROUTE=yes
#IPV6_FAILURE_FATAL=no
NAME=eth0
UUID=f16a4a9c-184c-403d-bfff-25092c9519b0
DEVICE=eth0
#ONBOOT=yes
#IPADDR=192.168.122.100
#PREFIX=24
#GATEWAY=192.168.122.1
#DNS1=192.168.122.1
#DOMAIN=localdomain
#IPV6_PEERDNS=yes
#IPV6_PEERROUTES=yes
#IPV6_PRIVACY=no
HWADDR=52:54:00:2f:f0:a2
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

Tell Neutron about your bridge

We then run the following to tell Neutron to use a bridge called ‘br-ex’, and to load the proper plugins:

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
reboot

You could probably restart Neutron and be OK here, but I prefer the belts and suspenders.

Define your software network

After the reboot, you should be able to ssh back into your normal IP address. We now have a host infrastructure that is ready to serve our OpenStack instances. So let’s define our SDN components so we can get going!

NOTE – This should be done on your controller node, rdo0.localdomain in my case

Provider network

# source keystonerc_admin
# neutron net-create external_network --provider:network_type flat --provider:physical_network extnet --router:external --shared

Public subnet and router

# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.122.10,end=192.168.122.20 \
 --gateway=192.168.122.1 external_network 192.168.122.0/24
# neutron router-create router1
# neutron router-gateway-set router1 external_network

Private subnet

# neutron net-create private_network
# neutron subnet-create --name private_subnet private_network 192.168.100.0/24

Connect the two networks with the router

# neutron router-interface-add router1 private_subnet

Wrapping Up

And that’s it! You should now have 3 nodes ready to handle your OpenStack demo loads.

My current plan is to keep evolving this setup and re-deploying to do things like

  • take advantage of Ceph
  • Further distribute loads (maybe a virtual/physical hybrid setup)
  • handle multiple NICs
  • ???

If you have grief or ideas for improvement please feel free to comment below.

and you think the code is the hardest part

Well, you’re pretty much right. BUT.

I’ve been working, on and off, on a project called soscleaner since last December-ish. It’s a pretty straight-forward tool. It takes an existing sosreport and obfuscates data that people don’t typically like to release like hostnames and IP addresses. The novel part is that it maintains the relationships between obfuscated items and their counterparts. So a hostname or IP address is obfuscated with the same value in all of the files in an sosreport. It allows the person looking at the ‘scrubbed’ report to still perform meaningful troubleshooting.
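The heart of that consistency trick is nothing more than a persistent mapping kept for the life of the run. Here is a minimal sketch of the idea (this is not soscleaner’s actual code; the class and addressing scheme are made up for illustration):

```python
import re

# Sketch: every real IP gets one fake IP, reused across all files,
# so relationships between hosts survive the obfuscation.
class Obfuscator:
    def __init__(self):
        self.ip_map = {}

    def _fake_ip(self, real_ip):
        if real_ip not in self.ip_map:
            # deterministic within a run: 10.0.0.1, 10.0.0.2, ...
            self.ip_map[real_ip] = '10.0.0.%d' % (len(self.ip_map) + 1)
        return self.ip_map[real_ip]

    def clean(self, text):
        return re.sub(r'\b(?:\d{1,3}\.){3}\d{1,3}\b',
                      lambda m: self._fake_ip(m.group(0)), text)

o = Obfuscator()
print(o.clean('eth0: 192.168.122.100'))   # first file in the report
print(o.clean('ssh to 192.168.122.100'))  # same substitute in the next
```

The real tool does the same bookkeeping for hostnames and is far more careful about edge cases, but the mapping is the novel part.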

It’s not a big enough problem to get a true company or engineer’s attention, but it’s too big for a hack script. So I decided to try and tackle it. And I have to say that the current iteration isn’t too bad. It does what it’s supposed to do pretty reliably, and all of the artifacts to make it a ‘real program’ are in place. Artifacts like:

  • issue tracking
  • wiki
  • README and Licensing Decisions
  • publishing binary packages (RPMs in this case, right now)
  • publishing to language-specific repositories (PyPi in this case, since it’s a Python application)
  • creating repositories (see RPM’s link)
  • submitting it to a Linux distro (Fedora in this case, for now)
  • writing unittests (a first for me)
  • creating some sort of ‘homepage’
  • mailing lists

All of this has been an amazing learning experience, of course. But my biggest takeaway, easily, is that all of the things that wrap around the code to actually ‘publish’ an application are almost as hard as the coding itself. I am truly stunned, and I have a new appreciation now for the people who do it well every day.

git2changelog – converting your git logs into a spec file changelog automatically

Especially when it comes to application development, I’m a bit lazy and a general malcontent. I love solving the problem, but I hate dealing with the packaging and versioning and all of the stuff that makes something usable.  One of the things I always have trouble with is keeping track of my spec file changelog when I am rolling something into an RPM.

To help ease that I put together a small script that will take a git repository’s log between any two tags and output it in a format that is acceptable in an RPM spec file.

To do this I started with the Fedora Packaging Guidelines for Changelogs. This gave me the proper formatting to adhere to for my script.
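That format boils down to a header line followed by dashed entries. A sketch of the formatting step (this is a made-up helper, not the actual git2changelog code):

```python
import time

# Sketch: format one RPM %changelog entry per the Fedora guidelines:
# '* Day Mon DD YYYY Name <email> - version' plus one '- msg' per change.
def changelog_entry(date, author, email, version, messages):
    header = '* %s %s <%s> - %s' % (
        time.strftime('%a %b %d %Y', date), author, email, version)
    lines = ['- %s' % m for m in messages]
    return '\n'.join([header] + lines)

entry = changelog_entry(
    time.strptime('2014-06-04', '%Y-%m-%d'),
    'Jamie Duncan', 'jamie.e.duncan@gmail.com', '0.1-12',
    ['clean up', 'adding dist flag to spec file'])
print(entry)
```

The interesting part of the real script is walking `git log` between two tags to collect the dates, authors, and messages that feed this formatter.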

Next I used the changelog in the sosreport package for inspiration. It’s available in its spec file.

The script I wrote is designed to run inside of a git repository, so I’ve stuck it in a git repo so you can grab it if you want. If you can come up with a better way to collate this data from the .git directory, please feel free to share.

https://github.com/jduncan-rva/git2changelog/

Run against my soscleaner repository, the output looks like this:

$ ./git2changelog -b 0.1-8

* Sat Jun 07 2014 Jamie Duncan <jamie.e.duncan@gmail.com> - HEAD:UNRELEASED
- 2f78c26 =review - added comment in _skip_file to likely remove a now useless if clause
- Merge pull request #11 from bmr-cymru/bmr-libmagic-fixes : Commit 4427b06
- Convert python to use native libmagic python-magic bindings : Commit 7db6a99
- Rename __init__ options argument for clarity : Commit f1353ea
- cleaning up the magic import - fixes #10 : Commit 554af49
- removing shebang from py module : Commit f70b856
- more cleanup : Commit 6ee1339

* Wed Jun 04 2014 Jamie Duncan <jamie.e.duncan@gmail.com> - 0.1-12
- 47b3a47 =adding dist flag to spec file
- clean up : Commit 400a74a
- getting in with the guidelines : Commit ea1ad7c
- more spec refinements : Commit 64eb638
- more spec refinements : Commit 34eadd4
- making source ref in spec file a URL : Commit aefdd9b
- bringing spec file inline with Fedora standards : Commit 767e1d2
- updating macros in spec file for koji : Commit e4b46ea
- adding spec file : Commit e6738c0

* Tue Jun 03 2014 Jamie Duncan <jamie.e.duncan@gmail.com> - 0.1-11
- ab0050b =updating changelog
- packaging cleanup : Commit d4a3428
- tweaking for rhn-support-tool : Commit 06a151f
- minor cleanup of an unused module and a repetitive line or 2 : Commit 7eb1726
- Update README.md : Commit 97f047f
- cleaning up tarball paths. fixes #9 : Commit caa6536
- cleaning up tarball paths. fixes #9 : Commit e4e1cef
- updated README.md : Commit f6c064a
- updated README.md : Commit a070ebd
- fixing issue where checking compression type could error out because of capital letters where i thought it would always be capitalized : Commit cbf0e7d
- removing some cruft : Commit fdf82eb
- removed some old chmod bits that became un-needed when it became required to run as root : Commit cb15d97
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit ed5992e
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit 5c2dc32
- removed xsos option that never existed in the first place. we should just use xsos for that. : Commit 1df7e0d
- Merge branch 'master' of github.com:jduncan-rva/soscleaner : Commit e462b7b
- minor cleanup : Commit 5fc8ce2
- Updating README.md with better File Creation explanations : Commit bac8368
- Updating README.md with better usage examples : Commit 6589b4c
- Updating README.md : Commit 7dce4a9
- disallowing octects > 3 digits - to help cut down on false positives : Commit 319591f
- no longer matches IP addys starting with a 0 : Commit 0223d82

If you specify an ‘end tag’, you won’t see the untagged commits in HEAD.

SELinux talk at RVaLUG – 20140419

This morning I gave what was a pretty well-received talk about SELinux. We got into the important definitions and pretty down deep into how type enforcement works. Lots of practical examples and fun stuff.

Of course, why spend hours coming up with a new slide deck when you can borrow from amazing work done by co-workers? 🙂

The slide deck I used was a slightly modified deck used (last I know of) for a Red Hat TAM Webinar last April.  It also came with a set of lab questions that we didn’t have time to go through today.

And of course, there is the SELinux Coloring Book.

The talk was long for a LUG meeting (right around 90 minutes plus a little follow-up), but the interaction was great and I think we had some good communication going.

20140419-jduncan-selinux_workshop-lab_redux

20140419-jduncan-selinux_workshop_redux

Healthcare and Insurance – My Snapshot

Last May I got sick. Like for-real sick, for the first time in my life. I had what was apparently a massive blood clot that was impeding the functioning of both of my lungs. The official term is a ‘bilateral pulmonary embolism’. In reality it meant that if I walked 50 feet I would pass out, have what looked like a seizure, and start throwing up all over myself in the emergency room while my wife screamed. I still owe that security guard a firm handshake and a bottle of his drink of choice. I liken it to getting hit by lightning. It came out of nowhere and laid me completely low in the span of 2 hours. There was no discernible warning and a root cause was never determined.

getting crafty with my hospital stay

Last October I was tapped by my company to help work on healthcare.gov. You might have heard of it. The initial launch didn’t go so well. But it made a pretty strong comeback here recently. It even managed to make some of the major news outlets.

BizJournal.com
Slashdot
ZDNet
Business Insider
CBS News
Yahoo
Bloomberg
CNN

Today I logged into my insurance company’s website to get some information, and I started looking at the claims filed for me in the past year out of morbid curiosity. This would cover the time I was actually in the hospital, the 6 months of follow ups, and the maintenance for a condition I’d known about for a while but didn’t start addressing until I got sick (sleep apnea).

In the past 365 days, my insurance has been billed $47,752.13.
In the past 365 days, I have owed $768.09 for those billings.

My insurance has covered 98.4% of my medical bills in the past year.

The lessons I’ve learned today:

  1. If I didn’t have health insurance I would be bankrupt.
  2. I’ve never felt more contempt for people fighting insurance and healthcare reform in the United States.
  3. If you don’t have health insurance, I am truly fearful for you on multiple levels.
  4. I’ve never been more proud to have contributed professionally to something than the work I did during the last quarter of 2013 with the people working on healthcare.gov.