RSS from Trello with Jenkins

Trello is a pretty nice web site. It is (sort of) a kanban board that is very useful when organizing groups of people in situations where a full agile framework would be too cumbersome. Kanban is used a lot in IT Operations. If you want a great story on it, go check out The Phoenix Project.

One thing Trello is lacking, however, is the ability to tap into an RSS-style feed for one or more of your boards. But, where there is an API, there’s a way. This took me about 30 minutes to iron out, and is heavily borrowed from the basic example in the documentation for trello-rss.

Step One – Three-legged OAuth

Trello uses OAuth. So you will need to get your developer API keys from Trello. You will also need to get permanent (or expiring whenever you want) OAuth tokens from them. This process is a little cloudy, but I found a post on StackOverflow that got me over the hump.
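
If it helps, here is the gist of that StackOverflow answer in code. This is a minimal sketch of the standard python-oauth2 three-legged flow pointed at Trello's documented OAuth endpoints; the key/secret placeholders, the app name, and the expiration=never choice are just illustrative, so adjust to taste.

import urlparse
import oauth2 as oauth

API_KEY = 'your_trello_developer_key'       # from trello.com/app-key
API_SECRET = 'your_trello_developer_secret'

request_token_url = 'https://trello.com/1/OAuthGetRequestToken'
authorize_url = 'https://trello.com/1/OAuthAuthorizeToken'
access_token_url = 'https://trello.com/1/OAuthGetAccessToken'

consumer = oauth.Consumer(API_KEY, API_SECRET)
client = oauth.Client(consumer)

# 1. grab a request token
resp, content = client.request(request_token_url, "GET")
request_token = dict(urlparse.parse_qsl(content))

# 2. a human approves the app once; 'expiration=never' is what gives you
#    the permanent token mentioned above
print(authorize_url + '?oauth_token=' + request_token['oauth_token'] +
      '&scope=read&expiration=never&name=trello-rss')
oauth_verifier = raw_input('Paste the verification code Trello shows you: ').strip()

# 3. trade the approved request token for the permanent access token
token = oauth.Token(request_token['oauth_token'],
                    request_token['oauth_token_secret'])
token.set_verifier(oauth_verifier)
client = oauth.Client(consumer, token)
resp, content = client.request(access_token_url, "POST")
access_token = dict(urlparse.parse_qsl(content))
print(access_token['oauth_token'])         # this pair is what the script
print(access_token['oauth_token_secret'])  # in Step Two needs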

Step Two – a little python

I created a little bit of python to handle this for me. Bear in mind it's still VERY rough. My thought is to start to incorporate other Trello automation and time-savers into it down the road. If that happens I'll stick it out on github.

#!/usr/bin/env python
from trello_rss.trellorss import TrelloRSS
from optparse import OptionParser
import sys


class TrelloAutomate:
    """
    Used for basic automation tasks with Trello,
    particularly with CI/CD platforms like Jenkins.
    Author: jduncan
    Licence: GPL2+
    Dependencies (py modules):
     - httplib2
     - oauthlib / oauth2
    """
    def __init__(self):
        # fill these in with the values you generated in Step One
        self.oauth_token = 'my_token'
        self.oauth_token_secret = 'my_token_secret'
        self.oauth_apikey = 'my_api_key'
        self.oauth_api_private_key = 'my_api_private_key'

    def _get_rss_data(self):
        # depending on your version of trello-rss you may also need to
        # hand it the OAuth token/secret from above - check its docs
        try:
            rss = TrelloRSS(self.oauth_apikey,
                            channel_title="My RSS Title",
                            description="My Description")
            return rss.rss
        except Exception as e:
            raise e

    def create_rss_file(self, filename):
        # render the feed and write it out as a single XML file
        data = self._get_rss_data()
        with open(filename, 'w') as fh:
            for line in data:
                fh.write(line)


def main():
    parser = OptionParser(usage="%prog [options]", version="%prog 0.1")
    parser.add_option("-r", "--rss", action="store_true", dest="rss",
                      default=False, help="create the rss feed")
    parser.add_option("-f", "--file", dest="filename", default="trello.xml",
                      help="output filename. default = trello.xml")
    (options, args) = parser.parse_args()
    trello = TrelloAutomate()
    if options.rss:
        trello.create_rss_file(options.filename)


if __name__ == '__main__':
    sys.exit(main())

Step Three – Jenkins Automation

At this point I could stick this little script on a web server and have it generate my feed for me with a crontab. But that would mean my web server would have to build content instead of just serving it. I don't like that.

Instead I will build my content on a build server (Jenkins) and then deploy it to my web server so people can access my RSS feed easily.

Put your python on your build server

Get your python script to your build server, and make sure you satisfy all of the needed dependencies. You will know if you haven’t, because your script won’t work. :) For one-off scripts like this I tend to put them in /usr/local/bin/$appname/. But that’s just my take on the FHS.

Create your build job

This is a simple build job, especially since it’s not pulling anything out of source control. You just tell it what command to run, how often to run it, and where to put what is generated.

The key at the beginning is to not keep all of these builds. If you run this frequently you could fill up lots of things on your system with old cruft from 1023483248 builds ago. I run mine every 15 minutes (you’ll see later) and keep output from the last 10.
Here I tell Jenkins to run this job every 15 minutes. The syntax is sorta’ like a crontab, but not exactly. The help icon is your friend here.
I have previously defined where to send my web docs (see my previous post about automating documentation). If you don’t specify a filename, the script above saves the RSS feed as ‘trello.xml’. I just take the default here and send trello.xml to the root directory on my web server.
And this is the actual command to run. You can see the -f and -r options I define in the script above. $WORKSPACE is a Jenkins variable that is the filesystem location for the current build workspace. I just output the file there.


So using a little python and my trusty Jenkins server, I now have an RSS Feed at $mywebserver/trello.xml that is updated every 15 minutes (or however often you want).

Of course this code could get way more involved. The py-trello module that it uses is very robust and easy to use for all of your Trello needs. I highly recommend it.

If I have time to expand on this idea I’ll post a link to the github where I upload it.



CI/CD Documentation for people who hate writing docs

I like solving problems. I hate writing up the documentation that comes along with it. Since I took a new position within Red Hat, I have found an increasing amount of my time taken up with writing docs. I decided the time had come for some innovation and workflow improvement.

Problem Statement

  • Our current document store of record is Google Drive. So any solution has to keep the final product in there. It can keep them in other locations, but this one is a requirement.
  • I don’t want to have to transcribe notes from calls and meetings. It’s annoying enough to take notes. It’s doubly-annoying to have to then transcribe them into another format for consumption by other people. A little clean-up is OK, but nothing far beyond that.
  • Copy/Paste into multiple platforms isn’t something I want to do. I want to take my notes, perform an action and have them published.
  • I need a universal format. PDF, HTML. Something.

My Solution

Available Tools

Internally, Red Hat uses the following self-service tools that I am utilizing for this time-saver.

  • DDNS (Dynamic DNS)
  • OpenStack
  • GitLab for version control
  • Jenkins for CI/CD
  • Google Drive for docs store


I looked around quite a bit before settling on asciidoctor to process asciidoc files for me. It takes extremely lightweight markup and uses it to render really pretty HTML. I'm a huge fan of it. I won't be providing a primer on it here, but the one from my bookmarks that I use the most is found on the asciidoctor website.

The biggest benefit is that I can generate it almost as fast as someone can talk. So after a meeting, it’s just a minute or two of clean up and clarification and BANG, I have a consumable record of the event.

The Workflow

Having settled on asciidoc as my format, the workflow cleared up a lot.

  1. Generate asciidoc files during meetings / events / whatever.
  2. Manage them per-project / customer in git repos on GitLab.
  3. When a push is made to a given repo, have GitLab trigger a Jenkins build job.
  4. The Jenkins build job will take the updated repo, render the finished HTML, and upload it to Google Drive as well as a secondary web server that I will maintain (my choice, not a hard requirement).
  5. Profit and World Domination


I ran into a few obstacles when I started bringing this to life:

  • I had never really used the more advanced features in GitLab.
  • I hadn't used Jenkins in years.
  • I am not familiar with the Google Drive API.

Dynamic DNS

I used our internal DynamicDNS for my web server and my Jenkins server. I don’t control any DNS zones inside Red Hat, so this was a quick and easy solution.

We have an internal registration page, as well as an RPM that configures a system. I just edit a file with the host, domain and hash and POOF, I have DDNS wherever I want it.

Setting up GitLab

The GitLab instance I’m using is 7.2.2, and is maintained by our internal IT team. So I won’t be covering how to set it up. I have done this in the past and it was dead simple, however. I followed their walkthrough and it worked like a champ.

Installing and Configuring Jenkins

We do have multiple internal Jenkins servers for our Engineers. However, I decided to go with my own so I could play around with plugins and break it without incurring the wrath of some project manager or delaying a major product release. The process was very straightforward. I followed their wiki to get it up and running in approximately 20 minutes.

Of course, adding a job to an existing Jenkins environment is possible, too. You just need the correct plugins installed.

Jenkins Plugins

I am utilizing a handful of Jenkins plugins to produce this workflow. Note: a few of these may come installed in a default install; I simply don't remember. Jenkins ships with quite a few plugins to enable the default configuration.

  • Git plugin (this may be pulled in with the GitLab plugin, but I installed it first while experimenting)
  • Gitlab plugin – for integration with our GitLab instance
  • Publish Over SSH plugin – for publishing to a simple web server

Helpful Tip I forgot about Jenkins

Make sure your Jenkins server has any needed build software installed. In my case, git and asciidoctor are very important. That is 20 minutes I’ll never get back.

Integrating with Google Drive

This turned out to be the biggest challenge. There is no good glue out there for this already. The biggest obstacle is OAuth, which is really designed for interactive users. I didn't want to enable Google's 'less secure apps' access, so I decided to try to tackle this.

This is the only place I had to write any new code. I ended up using PyDrive to access the Google Drive API more easily, because I'm not very familiar with the API itself. It worked well. Since GDrive is really an object store more than anything else, updating a document instead of just adding another copy of it takes a little extra work. The script is a first attempt to deal with that cleanly. I worked on it for about an hour, so there is no concept of polish there as of yet. Think of it more as a POC that it's doable.
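
The heart of that attempt is small enough to sketch here. This is not the repo verbatim - the function name and the title argument are made up for illustration - but the calls are PyDrive's documented ones, and the "search first, then update or create" shape is the whole trick:

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive


def publish_to_drive(local_path, title):
    """Upload local_path to Google Drive as `title`, replacing any existing copy."""
    # settings.yaml points at the client_secrets.json / credentials.json pair
    # generated during setup (the one-time bootstrap is covered below)
    gauth = GoogleAuth(settings_file='settings.yaml')
    gauth.LoadCredentialsFile('credentials.json')
    if gauth.access_token_expired:
        gauth.Refresh()       # PyDrive handles the OAuth token refresh
    else:
        gauth.Authorize()
    drive = GoogleDrive(gauth)

    # GDrive will happily store ten files with the same name, so search first
    query = "title = '%s' and trashed = false" % title
    matches = drive.ListFile({'q': query}).GetList()
    gfile = matches[0] if matches else drive.CreateFile({'title': title})
    gfile.SetContentFile(local_path)    # swap in the freshly rendered HTML
    gfile.Upload()                      # same file, new content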

The code is in a public Github repo. Ideas and code-heckling are welcome.

Gluing it all together

I now have my asciidoc code, GitLab, Jenkins, GDrive, and a web server. Now I need to glue them together to make my life easier.

The local git repo itself doesn’t get changed at all. No post_commit hooks, although that method would work as well I’m sure.

  1. Create a GitLab repo
    1. This is outside of this blog’s scope, and more importantly it’s dead easy.
  2. Getting Jenkins Connected
    1. Define a server to SSH your finished HTML to
      1. Manage Jenkins > Configure System
        1. Publish Over SSH
          1. Key
            1. a private key that will work on your web server. Since I’m using a VM from our internal OpenStack instance, I am using the same key I use for ‘cloud-user’ on those VM’s
          2. SSH Server
            1. Name – anything you like. I used ‘Web Docs Server’
            2. Hostname – I used the DDNS name I set up for this system
            3. Username – cloud-user (the key is already there)
            4. Remote Directory – /var/www/html
              1. Since this is a standard RHEL 7.1 install, that is where the default DocRoot is for Apache. Since I am the only one using this system and it has no external visibility, I chown'd /var/www/html to cloud-user so the publish step can write to it. I know that's a total hack, but this is also just a POC. I promise, I do know a little bit about web security.
  3. Setting up the Google Drive update code to work on your Jenkins server
    1. This was put together from several PyDrive tutorials, especially this one.
    2. Since this code uses OAuth2 to handle authentication, you have to set that up for your Google Account.
    3. Go into the Google API Control Panel and create a new Application. Their instructions are pretty solid.
      1. You will need the ‘Client ID’ and ‘Client Secret’ for that application.
      2. Inside the App Details, click the ‘Download JSON’. Save this file as ‘client_secrets.json’ (what PyDrive looks for by default)
      3. Create a file called ‘settings.yaml’ and populate it like the sample file in the links above. All you need to change are the values for your Client ID and Client Secret.
      4. At this point I used their demo code to generate an additional file named 'credentials.json' (a sketch of that one-time step is after this list). This is the active token that is referenced during the login session, and it is refreshed by the OAuth code in PyDrive. Take these 4 files and upload them somewhere easily read on your Jenkins server. I placed them all in /usr/local/bin/gdriveupdate. Make sure the gdriveupdate script itself is executable; it is what will be called during the Jenkins build.
        1. I’m not sure how long this token will be refreshed. I guess I will ultimately know that once the build fails because of it. Hopefully I’ll have conquered that little challenge by then. Feel free to file an issue on GitHub.
  4. Create a Jenkins job
    1. Fill in the GitLab Repository Name (user/project)
    2. Source Control Management
      1. Git
      2. Repository URL – the ssh compact version for your project
      3. Credentials
        1. I’m using an ssh key for this one. It’s associated with my Jenkins user credentials
      4. Repository Browser – gitlab
        1. URL – URL for your project
        2. Version – this auto-populated for me
    3. Build Triggers
      1. Build when a change is pushed to GitLab
        1. make note of the CI Service URL
      2. I took the default values
    4. Build Environment
      1. Select the Server you created previously
      2. Source Files – index.html (or more if you’re generating other stuff)
      3. Remote Directory – this will be created auto-magically in the remote server's root directory. You can name it anything that makes sense for your project
    5. Build – Execute Shell
      1. /usr/bin/asciidoctor -dbook index.adoc
        cp -r /usr/local/bin/gdriveupdate/* $WORKSPACE
        /usr/local/bin/gdriveupdate/gdriveupdate -f $WORKSPACE/index.html -g CSA_Philips_Home_Monitoring
      2. index.adoc is just the convention I’ve adopted. You can redirect the output name via the command line and call it whatever you like.
      3. Copying everything for the GDrive update into the build workspace is a total hack. I know that. PyDrive can't find the json and secrets files unless they're in the current working directory for some reason. It's some weird pathing issue that I don't yet feel like debugging in the project (more on that in the note after this list).
  5. Configure GitLab to trigger a Jenkins build
    1. This is based on the GitLab plugin for Jenkins documentation
      1. It is very version specific, but I found that the simple instructions for version 8.0 and higher worked just fine for me. You just have to create a web hook for push and merge events.
      2. The Jenkins URL for the webhook is in the Project Config page in the Build Trigger section where you select the GitLab option.
      3. Go to your GitLab project > Settings > Web Hooks
        1. Select Merge Request and Push events
        2. Paste in the URL from your Jenkins project.
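
For reference, the one-time credential bootstrap from step 3 above is roughly the stock PyDrive pattern rather than anything clever of mine. Run it once by hand and it leaves behind the credentials.json that the builds reuse:

from pydrive.auth import GoogleAuth

# Run this once, interactively, on the Jenkins box (not as a build step).
# It reads client_secrets.json via settings.yaml, walks you through the
# Google consent screen, and saves the resulting token for later builds.
gauth = GoogleAuth(settings_file='settings.yaml')
gauth.LoadCredentialsFile('credentials.json')
if gauth.credentials is None:
    gauth.CommandLineAuth()       # prints a URL; paste back the code Google gives you
elif gauth.access_token_expired:
    gauth.Refresh()
else:
    gauth.Authorize()
gauth.SaveCredentialsFile('credentials.json')

As for the pathing hack in the build step: as best I can tell, PyDrive resolves client_secrets.json relative to the current working directory unless the client_config_file setting in settings.yaml points at an absolute path, so that is probably a cleaner fix than copying everything into $WORKSPACE. I haven't switched yet.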


And that’s it. You write about 100 lines of Python to incorporate Google Drive, create a GitLab repo and Jenkins build job. You then link GitLab to Jenkins with a web hook. The Jenkins build then creates your HTML (or your desired format) from your asciidoc and uploads it to Google Drive and your web server.

Now, when I make a push to my GitLab repo after taking notes or writing docs for a given project or customer, the workflow kicks off and publishes my docs in both locations. The build takes < 10 seconds on average. And since it’s a push and not a poll-driven event, they are available almost instantly.


Technical Knowledge Needed – 8/10. You're not writing kernel modules, but you are gluing together several large tools.
Time Requirement – 4/10. This is less than a full day’s work once you have the answers in front of you. To generate the workflow took me about 2 days, all told.

Virtualization vs. Containers – a fight you should never have

I have some time to while away before I get on a plane to head back home to my amazing wife and often-amazing zoo of animals. Am I at the Tokyo Zoo, or in some ancient temple looking for a speed date with spiritual enlightenment? Of course not. I came to the airport early to find good wifi and work on an OpenStack deployment lab I'm running in a few weeks with some co-workers. Good grief, I'm a geek. But anyway.

Auburn the beagle in her natural habitat

But before I get into my lab install I wanted to talk a little bit about something I saw way too much of at OpenStack Summit. For some reason, people have decided that they are going to try to make money by making Linux Containers 'the next Virtualization'. Canonical/Ubuntu is the worst example of this, but they are certainly not the only one. To repeat a line I often use when I'm talking about the nuts and bolts of how containers work:

If you are trying to replace your virtualization solution with a container solution, you are almost certainly doing both of them wrong.

First off, at the end of the day it’s about money. And the biggest sinkhole inside a datacenter is not fully utilizing your hardware.

Think about how datacenter density has evolved:

  1. The good ole days – a new project meant we racked a new server so we could segregate it from other resources (read: use port 80 again). If the new project only needed 20% of the box we racked for it, we just warmed the datacenter with the other 80%.
  2. The Virtualization Wars – Instead of racking a new pizza box or blade, we spun up a new VM. This gives us finer-grained control of our resources. We are filling in those resource utilization gaps with smaller units. So that same 20% could be set up multiple times on the same pizza box, getting us closer to 100% resource consumption. But even then, admins tended to err on the side of wasted heat, and we were only using a fraction of the VM's allocated resources.
  3. The Golden Age of Containers – Now we can confidently take a VM and run multiple apps on it (zomg! multiple port 80s!) So we can take that VM and utilize much more of it much more of the time without the fear that we’ll topple something over and crash a server or a service.

This is where someone always shoves their hand in the air and says

<stink_eye>But I need better performance than a VM can give me so I'm running MY CONTAINERS on BAREMETAL.</stink_eye>

My response is always the same. “Awesome. Those use cases DO exist. But what performance do you need?”.

Here’s the short version.

A properly tuned KVM virtual machine can get you within 3-4% of bare-metal speed.

Leaving out the VM layer of your datacenter means that once you consume that extra 3-4% of your baremetal system that KVM was consuming, you have to go rack another system to get past it. You lose a lot of the elastic scalability that virtualization gives you. You also lose a common interface for your systems that allow you to have relatively homogeneous solutions across multiple providers like your own datacenter and AWS.

Containers bring some of that flexibility back, but they only account for the Dev side of the DevOps paradigm. What happens when you need to harden an AWS system and all you care about are the containers?

Secondly, hypervisors and Linux containers are FUNDAMENTALLY DIFFERENT TECHNOLOGIES.

A hypervisor virtualizes hardware (with QEMU in the case of KVM), and runs a completely independent kernel on that virtual hardware.

A container isolates a process with SELinux, kernel namespaces, and kernel control groups. There are still portions of the kernel that are shared among all the containers. And there is ONLY ONE KERNEL.
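
If you want to prove that single-kernel point to yourself, the standard library is enough:

import platform

# Run this on the bare-metal host, inside a KVM guest, and inside a container.
# The KVM guest reports whatever kernel its own image booted. Every container
# on the host reports the host's kernel, because a container is just an
# isolated process - there is only one kernel.
print(platform.release())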

All of that to say they are not interchangeable parts. They may feel that way to the end user. But they don’t feel that way to the end user’s application.

So take the time to look at what your application needs to do. But also take some time to figure out how it needs to do it. All of the use cases are valid under the right circumstances.

  • Straight virtualization
  • Straight baremetal
  • containers on VMs
  • containers on baremetal

A well-thought-out IT infrastructure is likely to have a combination of all of these things. Tools like OpenStack, Project Atomic, and CloudForms make that all much, much easier to do these days.

OpenStack Summit Day 3 – Closing Thoughts

Wow. I'm exhausted.

OpenStack Summit 2015, Tokyo Edition is over. It was amazing. I have a handful of ideas for follow up technical posts after I have time to get home and dig into them a little bit. But I want to get a few thoughts down on the conference as a whole while I’m sitting in my incredibly small room in Tokyo being too tired to go out on the town.

There could have been a container summit inside OpenStack Summit. Everywhere I turned, people were talking about containers. How to use them effectively and innovate around scaling them. It was awesome. These 2 technologies (IaaS and Containers) are going to collide somewhere not very far up the road. When they do it is going to be something to behold. I can't wait to be part of it.

The conference on the whole was incredible. I can't give enough credit to the team who put it all together. It was stretched out across (at least) 4 buildings on multiple floors, and it worked the vast majority of the time. The rooms were a little over-crowded for the biggest talks (or any talk that had the words 'container' or 'kubernetes' or 'nfv' in the title), and they tended to be a little too warm. The warmth seems to be common in most public areas in Japan. I guess that's just how they roll here.

Probably my biggest criticism of the conference is angled at most of the keynote speakers. They were, on the whole, not great. When I am at a large IT conference like this, I expect the keynote presentations to be motivational and polished. Too many of these were history lessons and needed a few more rounds in front of a mirror. There were exceptions of course (particular kudos to the IBM BlueBox folks!). But that was my biggest 'needs improvement' factor for OpenStack Summit Tokyo.

Out of 10, I would give this conference a solid 8. My score for Tokyo would be similar, if not higher.

I can’t wait to see what happens in Austin. I’m already working on ideas for talks. :)

OpenStack Summit Day 2 – Wrapping my head around NFV

Day 2 isn't technically over yet. But I'm exhausted. And jet lag is hard. So I'm sitting in one of the conference hotels in a very low chair with my laptop in my lap. Don't judge.

One of the biggest ideas at OpenStack Summit this year is NFV (Network Function Virtualization). A straw poll of the talk descriptions reveals approximately 213245 talks on the topic this week in Tokyo.

A thousand years ago I worked for a crappy phone company in Richmond and I got to know how a telephone company works. I got to spend some time in telephone central offices and helped solve carrier-level problems. With NFV, I understood why people wanted to get into that world (there’s a TON of money sitting in those dusty places). But I didn’t quite understand the technical plan. What is going to be virtualized? Where is the ceiling for it? It just didn’t make engineering sense to me.

To help combat that ignorance I've gone to 4 or 5 of those 213245 sessions today. I've also asked pretty much every booth in the Marketplace 'how are we doing this stuff?'. At the HP Helion booth, I got my engineering answer. My disconnect was thinking that defining a big pipe (OC-48 or something like that) in software would just be an exercise in futility. Going all the way up the software stack with that many packets would require a horizontal scale that wasn't cost-effective.

Of course that's not the goal. There IS a project named CORD (Central Office Re-architected as a Datacenter) that is intriguing. But it's also very new, and mostly theory at this point.

But if we can take some of the equipment that is currently out in the remote sites (central offices, cell towers) and virtualize it, we can then move it into the datacenter instead of having it out in the wild. That makes maintenance and fault-tolerance a lot cheaper. It also saves on man-hours since you don't have to get in a truck and drive out to the middle of nowhere to work on something.

Another added benefit is that it would disrupt the de facto monopoly that currently exists with the companies that provide that specialized equipment. Competition is a good thing.

That's the gist. And it's a good one. We can take commodity hardware and use it to virtualize specialized equipment that normally lives in the remote locations. And we can virtualize it in a datacenter that's easier and cheaper to get to.

OpenStack Summit Day 1 – The Big Tent is BIG, and Tokyo Lessons Learned

My morning started off with a few lessons learned about being in Tokyo, where I speak 0 words of the language.

  • Have cash, Japanese cash, if you plan on getting on Tokyo public transit. After learning this lesson I spent 45 minutes looking for a 7-11 (they use Citi ATMs, which are apparently easier for us gringos) before getting in a cab to get me to the Summit on time. We passed 4 7-11's in the first 1/2 mile of my trip. Of course.

    This guy laughed at me multiple times
  • It is a serious walking city. The walking. OMG, the walking. And then the walking.

But on to the OpenStack Summit stuff, of which there is a lot.

After getting registered, the day kicked off with the required keynote addresses. They are all on the schedule, so I won't go into the who and what, but a few observations.

  1. The production quality is incredibly high. Like giant tv cameras on platforms high. Like 5 big monitors so us in the back can see too, high.

    big crowd filing in before the keynotes started
  2. The speakers were, on the whole, a little unpolished. They usually had good things to say, but could have used a few more dry runs for a crowd this big.
  3. ZOMG the crowd. Well over 5000 people from 56 countries. The big tent really is big these days. It is awesome, in a word. It is also the most inclusive conference I’ve ever attended. That is also very awesome.
  4. Double ZOMG THE HEAT. The conference is stretched out over 3 (4?) hotels plus a conference center. All of the thermostats seem to be set on ~81 Fahrenheit (Celsius?). Take that and toss in an overcrowded room full of sweaty geeks and things can get a little uncomfortable. Especially in the middle of the aisles. Especially especially after lunch.
  5. The Marketplace (vendors tables) is utter chaos. With that said, Mirantis easily wins this year. They have
  6. There is now an OpenStack Certification. Or there will be soon, at least. You can be a Certified OpenStack Administrator (COA). I don’t know how this is going to play with the existing Red Hat Certification, but I’m interested in finding out.
  7. OpenStack has a new way of visualizing its constituent parts. It is way, WAY better than the old wiki-style nastiness.
  8. Bitnami COO Erica Brescia took some pretty awesome shots at Docker Hub and its lack of curation. It’s the wild west out there, and it comes with consequences. I’m not a huge fan of Bitnami. But I am a huge fan of how Erica Brescia does her job.

My least favorite observation on the day was Canonical's slogan for LXD. They had an ad in the splashes before the keynotes started and it was something along the lines of "Ubuntu/Canonical has the fastest hypervisor on the planet with lxd $something $something $something"

Hey Canonical, you are aware that containers and virtual machines are different things, right? So are you trying to re-define the word, or are you trying to pass off a container manager as a hypervisor? Huh? At any rate, it's an awful slogan and even worse marketecture. I'm debating a drive-by of their booth tomorrow.

After lunch I went to a talk held by Mirantis where they compared a base install of their offering to a GA(ish?) release of RHEL OSP 7. They were more fair and balanced than I thought they would be. Their product, Fuel, is 3 or 4 years old at this point and very polished. OSP 7 uses OSP Director, which is based on TripleO. OSP 7 is Red Hat's first release based on this installer. It suffers from exactly the warts you think it would.

With that said, I was surprised they had to pick some pretty small nits to make their presentation work. A lot of their documentation issues were already addressed. But they correctly identified the biggest areas of need for OSPd as Red Hat works to mature it in OSP 8 and beyond.

All in all Day 1 was great fun. I’m looking way forward to Day 2. On top of that I’m PRETTY SURE I can get to and return from the conference using Tokyo Public Transport.

OpenStack Summit Day 0 – a mode B train ride

This year I get to go to OpenStack Summit in Tokyo. It will be my first time visiting Japan. Right now I am in a very small hotel room at 3am (local time), wide awake because I went to bed at 7pm. Such is jet lag, I guess. My personal goal for this event is to create a short post daily with some initial thoughts / reactions / fun things I learned.

Day 0 was just getting here. I got on a plane in Richmond, VA just before 9am Eastern on Sunday morning. I got off a plane an hour outside of Tokyo at 3:40pm Monday. I was in the air for ~16 hours total. Time zones and datelines will forever befuddle me. I took the Narita Express train from the airport to downtown Tokyo.

The practical thing I learned is that GPS on your phone SUCKS in downtown Tokyo. I was going to walk the ~1 mile from Tokyo Station to my hotel. Google Maps took me in 4 different directions while thinking I was on 3 different roads before I gave up and got into a cab. I’m pretty sure I would still be walking if not for that nice man.

The other thing that jumped out at me was during the train ride in. Space is so much more utilized in Japan. At first I thought it was just sort of stacked up and haphazard. But as I rode by it I began to see the organization and beauty in how the space in the Tokyo area is utilized. It’s pretty amazing.

It made me start thinking about my own house and the 5 acres of trees that it sits on. Not in a better/worse sort of way. Obviously I have different goals than someone who lives near downtown Tokyo. But when I give a talk about containers I talk a lot about them being the ‘next layer of density’ in computing. Bimodal IT is one of the biggest concepts in that area.

Over the next few days, I will definitely be a mode A guy walking around in a mode B country. Wish me luck!