RSS from Trello with Jenkins

Trello is a pretty nice web site. It is (sort of) a kanban board that is very useful when organizing groups of people in situations where a full agile framework would be too cumbersome. Kanban is used a lot in IT Operations. If you want a great story on it, go check out The Phoenix Project.

One thing Trello is lacking, however, is the ability to tap into an RSS-style feed for one or more of your boards. But, where there is an API, there’s a way. This took me about 30 minutes to iron out, and is heavily borrowed from the basic example in the documentation for trello-rss.

Step One – Three-legged OAuth

Trello uses OAuth, so you will need to get your developer API keys from Trello. You will also need to get OAuth tokens from them, either permanent or expiring whenever you want. This process is a little cloudy, but I found a post on StackOverflow that got me over the hump.
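
If you would rather script the token dance than click through it by hand, here is a rough sketch of Trello's three-legged OAuth 1.0a flow using requests_oauthlib. Treat it as a sketch: the endpoint URLs and the scope/expiration parameters are my reading of Trello's docs at the time, so verify them, and the key/secret values are placeholders for your own developer credentials.

#!/usr/bin/env python
# Rough sketch of Trello's three-legged OAuth 1.0a dance. The endpoint URLs
# and the scope/expiration parameters are assumptions from my reading of
# Trello's docs -- double-check them before trusting this.
# (Python 2 to match the script below; swap raw_input for input on Python 3.)
from requests_oauthlib import OAuth1Session

API_KEY = 'MY_TRELLO_API_KEY'        # placeholder: your developer API key
API_SECRET = 'MY_TRELLO_API_SECRET'  # placeholder: your developer API secret

REQUEST_TOKEN_URL = 'https://trello.com/1/OAuthGetRequestToken'
AUTHORIZE_URL = 'https://trello.com/1/OAuthAuthorizeToken'
ACCESS_TOKEN_URL = 'https://trello.com/1/OAuthGetAccessToken'

# 1. Grab a temporary request token.
session = OAuth1Session(API_KEY, client_secret=API_SECRET)
request_token = session.fetch_request_token(REQUEST_TOKEN_URL)

# 2. Have a human approve the app. expiration=never asks Trello for a token
#    that does not expire; scope=read is enough for a feed.
print('Open this URL and approve the app:')
print('%s&scope=read&expiration=never&name=trello-rss' %
      session.authorization_url(AUTHORIZE_URL))
verifier = raw_input('Paste the verification code here: ').strip()

# 3. Trade the request token plus verifier for the long-lived access token.
session = OAuth1Session(API_KEY,
                        client_secret=API_SECRET,
                        resource_owner_key=request_token['oauth_token'],
                        resource_owner_secret=request_token['oauth_token_secret'],
                        verifier=verifier)
print(session.fetch_access_token(ACCESS_TOKEN_URL))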

Step Two – a little python

I created a little bit of python to handle this for me. Bear in mind it's still VERY rough. My thought is to start incorporating other Trello automation and time-savers into it down the road. If that happens I'll stick it out on github.

#!/usr/bin/env python
from trello_rss.trellorss import TrelloRSS
from optparse import OptionParser
import sys


class TrelloAutomate:
    '''
    Used for basic automation tasks with Trello,
    particularly with CI/CD platforms like Jenkins.
    Author: jduncan
    Licence: GPL2+
    Dependencies (py modules):
      - httplib2
      - oauthlib / oauth2
    '''
    def __init__(self):
        # Python 2 hack so unicode in card titles doesn't blow up on write
        reload(sys)
        sys.setdefaultencoding('utf8')
        # Fill these in with your own Trello OAuth credentials
        self.oauth_token = 'MY_TOKEN'
        self.oauth_token_secret = 'MY_TOKEN_SECRET'
        self.oauth_apikey = 'MY_API_KEY'
        self.oauth_api_private_key = 'MY_API_PRIVATE_KEY'

    def _get_rss_data(self):
        # Build the feed from the 50 most recent items on the board
        rss = TrelloRSS(self.oauth_apikey,
                        self.oauth_api_private_key,
                        self.oauth_token,
                        channel_title="My RSS Title",
                        rss_channel_link="https://trello.com/b/XXX/board_name",
                        description="My Description")
        rss.get_all(50)
        return rss.rss

    def create_rss_file(self, filename):
        data = self._get_rss_data()
        fh = open(filename, 'w')
        fh.write(data)
        fh.close()


def main():
    parser = OptionParser(usage="%prog [options]", version="%prog 0.1")
    parser.add_option("-r", "--rss",
                      action="store_true",
                      dest="rss",
                      help="create the rss feed")
    parser.add_option("-f", "--file",
                      dest="filename",
                      default="trello.xml",
                      help="output filename (default: trello.xml)",
                      metavar="FILENAME")
    (options, args) = parser.parse_args()
    trello = TrelloAutomate()
    if options.rss:
        trello.create_rss_file(options.filename)


if __name__ == '__main__':
    main()
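
Before wiring it into Jenkins, a quick sanity check never hurts: run the script by hand (something like ./trello-rss.py -r -f /tmp/trello.xml, adjusted for whatever you named the file) and make sure the XML it spits out looks sane in your feed reader of choice.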

Step Three – Jenkins Automation

At this point I could stick this little script on a web server and have cron generate my feed for me. But that would mean my web server would have to build content instead of just serving it. I don't like that.

Instead I will build my content on a build server (Jenkins) and then deploy it to my web server so people can access my RSS feed easily.

Put your python on your build server

Get your python script to your build server, and make sure you satisfy all of the needed dependencies. You will know if you haven’t, because your script won’t work. 🙂 For one-off scripts like this I tend to put them in /usr/local/bin/$appname/. But that’s just my take on the FHS.

Create your build job

This is a simple build job, especially since it’s not pulling anything out of source control. You just tell it what command to run, how often to run it, and where to put what is generated.

[screenshot: trello-rss-1]
The key at the beginning is to not keep all of these builds. If you run a job this frequently, you can fill up your disk with old cruft from 1023483248 builds ago. I run mine every 15 minutes (you'll see that later) and keep the output from the last 10 builds.
[screenshot: trello-rss-2]
Here I tell Jenkins to run this job every 15 minutes. The schedule syntax is sort of like a crontab (*/15 * * * * in this case), but not exactly, so the help icon is your friend here.
[screenshot: trello-rss-3]
I have previously defined where to send my web docs (see my previous post about automating documentation). If you don’t specify a filename, the script above saves the RSS feed as ‘trello.xml’. I just take the default here and send trello.xml to the root directory on my web server.
[screenshot: trello-rss-4]
And this is the actual command to run. You can see the -f and -r options I define in the script above. $WORKSPACE is a Jenkins variable that is the filesystem location for the current build workspace. I just output the file there.
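
For the curious, my execute-shell build step boils down to something like python /usr/local/bin/trello-rss/trello-rss.py -r -f $WORKSPACE/trello.xml (your path and script name will obviously vary), and the publish step then grabs trello.xml out of the workspace and ships it to the web server.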

Summary

So using a little python and my trusty Jenkins server, I now have an RSS Feed at $mywebserver/trello.xml that is updated every 15 minutes (or however often you want).

Of course this code could get way more involved. The py-trello module that it uses is very robust and easy to use for all of your Trello needs. I highly recommend it.

If I have time to expand on this idea I’ll post a link to the github where I upload it.

-jduncan

 

Virtualization vs. Containers – a fight you should never have

I have some time to while away before I get on a plane to head back home to my amazing wife and often-amazing zoo of animals. Am I at the Tokyo Zoo, or in some ancient temple looking for a speed date with spiritual enlightenment? Of course not. I came to the airport early to find good wifi and work on an OpenStack deployment lab I'm running in a few weeks with some co-workers. Good grief, I'm a geek. But anyway.

[photo: Auburn the beagle in her natural habitat]

But before I get into my lab install, I wanted to talk a little bit about something I saw way too much of at OpenStack Summit. For some reason, people have decided that they are going to try to make money by making Linux containers 'the next virtualization'. Canonical/Ubuntu is the worst example of this, but they are certainly not the only one. To repeat a line I often use when I'm talking about the nuts and bolts of how containers work:

If you are trying to replace your virtualization solution with a container solution, you are almost certainly doing both of them wrong.

First off, at the end of the day it’s about money. And the biggest sinkhole inside a datacenter is not fully utilizing your hardware.

Think about how datacenter density has evolved:

  1. The good ole days – a new project meant we racked a new server so we could segregate it from other resources (read: use port 80 again). If the new project only needed 20% of the box we racked for it, we just warmed the datacenter with the other 80%.
  2. The Virtualization Wars – Instead of a new pizza box or blade, we spun up a new VM. This gives us finer-grained control of our resources. We are filling in those resource utilization gaps with smaller units, so that same 20% could be set up multiple times on the same pizza box, getting us closer to 100% resource consumption. But even then, admins tended to err on the side of wasted heat, and we were only using a fraction of each VM's allocated resources.
  3. The Golden Age of Containers – Now we can confidently take a VM and run multiple apps on it (zomg! multiple port 80s!). So we can utilize much more of that VM, much more of the time, without the fear that we'll topple something over and crash a server or a service.

This is where someone always shoves their hand in the air and says:

<stink_eye>But I need better performance than a VM can give me so I'm running MY CONTAINERS on BAREMETAL.</stink_eye>

My response is always the same: "Awesome. Those use cases DO exist. But what performance do you actually need?"

Here’s the short version.

A properly tuned KVM virtual machine can get you within 3-4% of bare-metal speed.

Leaving the VM layer out of your datacenter means that once you consume the extra 3-4% that KVM was using, you have to go rack another system to get past it. You lose a lot of the elastic scalability that virtualization gives you. You also lose a common interface for your systems, one that allows you to run relatively homogeneous solutions across multiple providers like your own datacenter and AWS.

Containers bring some of that flexibility back, but they only account for the Dev side of the DevOps paradigm. What happens when you need to harden an AWS system and all you care about are the containers?

Secondly, hypervisors and Linux containers are FUNDAMENTALLY DIFFERENT TECHNOLOGIES.

A hypervisor virtualizes hardware (with QEMU in the case of KVM), and runs a completely independent kernel on that virtual hardware.

A container isolates a process with SELinux, kernel namespaces, and kernel control groups. There are still portions of the kernel that are shared among all the containers. And there is ONLY ONE KERNEL.
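
If you want to see that single-kernel point for yourself, here is a tiny sketch. It assumes Docker is installed and that a small image like 'fedora' is already pulled, so adjust to taste: both commands should print the same kernel release, because the container is just an isolated process on the host's kernel.

#!/usr/bin/env python
# Tiny demonstration that a container shares the host kernel. Assumes Docker
# is installed and a 'fedora' image is available locally -- swap in whatever
# image you have handy.
import subprocess

host_kernel = subprocess.check_output(['uname', '-r']).decode().strip()
container_kernel = subprocess.check_output(
    ['docker', 'run', '--rm', 'fedora', 'uname', '-r']).decode().strip()

print('host kernel:      %s' % host_kernel)
print('container kernel: %s' % container_kernel)
# Spoiler: they match. A VM run the same way would report its own kernel.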

All of that to say they are not interchangeable parts. They may feel that way to the end user. But they don’t feel that way to the end user’s application.

So take the time to look at what your application needs to do. But also take some time to figure out how it needs to do it. All of the use cases are valid under the right circumstances.

  • Straight virtualization
  • Straight baremetal
  • Containers on VMs
  • Containers on baremetal

A well-thought-out IT infrastructure is likely to have a combination of all of these things. Tools like OpenStack, Project Atomic, and CloudForms make that all much, much easier to do these days.

Old Sysadmin Dog and New Monitoring Tricks

For as long as I've been paid to know stuff about computers, I've been a fan of Zabbix (http://www.zabbix.com). It's been my go-to monitoring application because it's incredibly powerful, tunable, and isn't hard to get up and running (especially considering that power and tunability).

However, recently I've been presented with a new product that is making me re-think my stance that there is one best monitoring application in modern IT. I was introduced to New Relic (http://www.newrelic.com). Normally this 'cloudy', 'monitoring-as-a-service' application wouldn't get my attention, but I was made to use it recently at a customer site and I have to admit that it has some really good points.

Big Bang For Your Network Traffic

New Relic condenses the data down into a JSON object that you POST to their API. So if your JSON payload is 1000 characters and you send it once per minute, you have roughly 1 KB of data uploaded per minute, per server, for your monitoring solution. 1 KB per minute isn't bad at all for 1-minute granularity from your monitoring application.
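
To make that concrete, here is a minimal sketch of what one of those POSTs can look like. Fair warning: the endpoint URL, the X-License-Key header, the plugin GUID, and the agent/components payload shape below are my recollection of their Platform plugin API, so treat them as assumptions and check the current New Relic docs before copying anything.

#!/usr/bin/env python
# Minimal sketch of pushing one custom metric to New Relic. The endpoint,
# header name, and payload layout are assumptions based on my memory of the
# Platform plugin API -- verify against New Relic's docs for your account.
import json
import socket
import requests

LICENSE_KEY = 'MY_NEWRELIC_LICENSE_KEY'   # placeholder
ENDPOINT = 'https://platform-api.newrelic.com/platform/v1/metrics'

payload = {
    'agent': {'host': socket.gethostname(), 'version': '0.1.0'},
    'components': [{
        'name': socket.gethostname(),
        'guid': 'com.example.rhel_plugin',   # hypothetical plugin GUID
        'duration': 60,                      # seconds since the last POST
        'metrics': {'Component/Load/1min[load]': 0.42},
    }],
}

response = requests.post(ENDPOINT,
                         headers={'X-License-Key': LICENSE_KEY,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(payload),
                         timeout=10)
response.raise_for_status()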

For comparison, a Linux Zabbix agent monitoring 40-50 items at various frequencies averages 2-3 Kbps of traffic to its server.

Simple to Configure

You can monitor anything you like, with one caveat: as long as you can gather the data as a numeric value (integer/float), you can upload it to their API and track it over time. The server plugin is a little thin (I'd imagine because it has to work with any OS), so I wrote my own RHEL-centric plugin (end of personal plug).

Presentation is Key

The New Relic UI is really gorgeous.  It’s also intuitive and pretty powerful. You can customize dashboards for your plugin to present data in ways that make sense to you and your purpose.  All very web 2.0.

It’s Free, as in food.

You can register for a New Relic account at their free level and get a pretty usable monitoring platform. Of course $$ unlocks cooler features.

It’s Not ALL Cupcakes and Sunshine

New Relic is what I call a 'track and graph' monitoring application. Simply put, you can track data and graph it to analyze performance trends. With their application-level analytics they can do all sorts of neat stuff; I've seen some Java thread analysis that would knock your socks off. But from a DevOps perspective, New Relic can be likened to a lightweight, 1-minute-granularity replacement for 'sar' data and all of those scripts we've written over the years to track stuff.

The alerting is somewhat limited as well. You can set threshold alerts, but that's about all. There are no staggered alerts, and as far as I know there's no way to script alerts from within New Relic either.

Conclusions

New Relic is not a replacement for a monitoring solution as robust as something like Zabbix. It's not designed to be, and it doesn't try to be. From a DevOps perspective (and that's not even really its best use case), it can be a great way to get useful data on your systems quickly and with very little overhead. On top of that, all a box needs in order to use New Relic is outbound web access to their API, either directly or through an HTTP/HTTPS proxy.

Is Ubuntu getting itself ready for the big F word?

FULL DISCLOSURE: I’m an active member of the Fedora Project community, and I also happen to work at Red Hat, Inc. (not in a code development engineering role).

No.  Not that F word. In an open source project, especially one with large corporate backing, there’s an even worse utterance out there…

Fork.

Is the Ubuntu project setting itself up for a fork down the road? 

Obviously if I could tell the future I would be doing much more productive things than sorting through Canonical’s messy handling of their Linux distribution. Winning on the ponies at Aqueduct jumps to mind. At any rate, I have no idea if the SABDFL will hold his ship together or not, but there is evidence to suggest that it could easily be heading for a messy fork in the river.

What is Upstream? What is Downstream?

This has always confused me: where does "derived" fit in the standard model of how an open source project lives? Ubuntu is derived from Debian, but what is the actual relationship? Is it upstream from Debian? Downstream? Were they just standing in line together at the DMV one Saturday? It's a small issue, but it nags at me to no end.

The “Community” Organization

Back in the days of Dapper Drake, it was funny to refer to Mark Shuttleworth as the “Self-Appointed Benevolent Dictator For Life”. It was cute. It was quirky. It was sort of orangey-brown. Just like Ubuntu. But Ubuntu was about togetherness and community, and it was the coolest Linux distro out there at the time. Upstream drivers and a 6-month release cycle and holy crap, it supports my video card!

Ubuntu’s dictator also has the large task of staffing Ubuntu’s Community and Technology governing entities.

The Community Council's charter reads:

The social structures and community processes of Ubuntu are supervised by the Ubuntu Community Council. It is the Community Council that approves the creation of a new team or project, along with team leader appointments. The council is also responsible for the Code of Conduct and tasked with ensuring that community members follow its guidelines.

The Technical Board is responsible for:

The Ubuntu Technical Board is responsible for the technical direction that Ubuntu takes. It makes decisions on package selection, packaging policy, installation systems and processes, kernel, X server, library versions and dependencies. The board works with the relevant team to try to establish a consensus on the right direction to take.

Fast forward to 2012 and Canonical is trying to monetize Ubuntu, squeezing it into anything that someone has the guts to ask them about. Sadly, squeezing into places where Linux itself has been for ages is the most common use case I've been able to find (Ubuntu TV and Ubuntu in your car). You also have 8 years of Mark Shuttleworth picking the people on, and the direction of, the two major governing bodies within Ubuntu itself. Fun examples of this attempt to monetize Ubuntu can be found in the latest release's Amazon "integration" (shame on you, Amazon) and also in this bug, talking about searches run from the Unity dash.

Unity

I've used Unity for a grand total of 8 minutes. But I know that one of two things about it is true:

1. the minority of unsatisfied Unity users is exponentially more vocal than the satisfied majority

OR

2. there are a LOT of Ubuntu users out there that are NOT HAPPY WITH UNITY.

A search of Mark Shuttleworth's blog for "unity" shows one early concession that the original versions "sucked", but that they are now "well positioned". Whatever the community wants, it looks like Unity isn't going anywhere except into Ubuntu.

Pulling Bits out of the Community’s Hands

I’ve seen spin on this article and the blog post that caused it to be written, but I can only read it one way that makes sense to me.

Canonical (read: Mark Shuttleworth) believes that a small group of people in a closed environment can do better work than a large community.

I've "done" FLOSS for a while now, and if there is a single immutable truth it is this: open source is noisy and often messy. But it's in the noise and mess that you find the genius that changes the world. The off-hand idea on a minor listserv. The idea floated as "impossible" or "impractical", just like GCC was back in the day.

I don't care if he releases all of the code under the GPL after he makes it. When he decided to create skunkworks teams of his hand-selected people, Ubuntu stopped being a community project. And maybe it never was. I'm not saying that Ubuntu is invalidated as a product because it's not community-driven. I'm just saying stop talking the talk. One thing I do know is that if it happens within Fedora (a little response to this), it happens out in the light of day, on a mailing list or in IRC or in the web tools. If I were a contributor to the Ubuntu project, I'd be thinking seriously about where I offer up my time and talents.

Who can NOT afford Open Source Software?

I just read an interesting article (http://stop.zona-m.net/2012/01/who-can-afford-open-source/), where a maker of school administration software (I’m thinking like Blackboard?) in Italy thinks:

you must either have a big organization that supports the development of that software, or be yourself a big company that can afford to make money also in other ways.

It really makes me sad that this business owner has reached this particular conclusion. I’m not familiar with this product, of course, and I don’t speak Italian, but I imagine that the owner is selling licenses and guaranteeing support, etc. as a sole proprietor or with a couple of employees. He has his code in a vault, and is spending time on development and (likely the biggest time sink) support.

Imagine now, if you will:

This same person concentrates his efforts on developing an open source community around his product. If it's a good product and he is committed to open source, people will join the effort. Suddenly he has two contributors, then five, then 10, then 50, and he's steering his project instead of having to produce everything himself. His project is more robust because more eyes have seen the code. It's stronger because the group is large enough to effectively integrate test-driven development and CI principles.

So how does our project owner make money? Well, there are several open source business models (by no means an exhaustive or scientific list):

  • The Subscription Model (a la Red Hat) – You sell access to the software for a given time period, and can attach support SLAs as premiums. Proven to be effective.
  • The Support/Training Model – The software is open source, and the company sells installation and implementation expertise. They typically also offer training and customization. Proven to be effective by OpenNMS, Zabbix, and many others.
  • The Open Core Model – The core application is open source, and the company offers extensions or plugins that may be closed to customers, along with support and training. I have some personal issues with this model, but several companies (Zenoss, Vyatta, and others) are currently making money with it.

So at the end of the day this project could have:

  • a better code base
  • a more robust development environment
  • wider adoption
  • more contributors

and many other benefits by moving to a FOSS business model. Of course I'm not saying that this person should automatically do this. I'm not a business advisor by any stretch of the imagination. But I really don't think that his statement holds water. It is completely, and increasingly, feasible to make money with an open source software project.

Kicking off 2012 with some downstream fun

Red Hat makes it a not-impossible task to remove the Red Hat branding from their flagship product and make your own distribution. In comedically over-simplified terms, essentially you can:

  • download the freely-available RHEL source RPMs
  • de-brand them (essentially replace redhat-logos and a few other key packages)
  • build out all of your altered RPMs and make a distro out of them (WAY beyond the scope of this post)
  • release your distribution under the same open source license as RHEL

There are currently several players in the universe of downstream, RHEL-derived releases, and their situations are always changing. That is, of course, because they are all-volunteer efforts. Last year CentOS (http://www.centos.org) had some serious issues with their 6.0 release, and they have always had a reputation of being an opaque and non-communicative group. Scientific Linux seized on the CentOS issues and attempted to make some real inroads into their user base and community visibility. Another group actually formed in 2011, trying to take the lessons learned from CentOS and Scientific Linux and make a new distribution with transparency built into the community. So how is everyone doing today?

CentOS (http://www.centos.org)

In North America, CentOS has been the go-to RHEL-derived Linux distribution since before I ever got into IT. The releases are normally stable, but have been plagued in the past by delays in security updates and point upgrades, and almost always by a lack of communication from the CentOS team. There was talk in 2011 of CentOS finally moving to a rolling release methodology, but I can't find any confirmation that it ever happened. Their site also has no links I could find to 6.x documentation or process. This isn't to say that it isn't happening, it's just to say that they're not telling anyone about it. They do announce that 6.2 has been released, which brings them more or less up to date with their upstream source. With all of the turbulence, lack of communication, and confusion, the mirrors stay up, and I have to say I've got a 6.x CentOS ISO sitting on my desk… somewhere.

Scientific Linux (http://www.scientificlinux.org/)

Scientific jumped pretty hard at CentOS last year when it took CentOS a LONG time to get their 6.0 release out. For the first time, people who had never really considered anything other than CentOS were scratching their heads and wondering who this upstart from CERN was. Of course, SL is no upstart, having been around since 2004. The 2011 inroad into CentOS-land was relatively short-lived, however. In August, Troy Dawson announced that he was leaving the SL project to work for Red Hat on their new OpenShift project. Not long after that, rumors of a lack of team cohesion and direction started to bubble up. Currently, SL's most recent release is Scientific Linux 6.2 beta 2 (as of January 9, 2012).

AscendOS (http://www.ascendos.org/)

AscendOS is the new kid on the block in this neighborhood, hoping to have their first production release based on RHEL 6.3. I actually had a conversation via email with Andrew, one of the AscendOS team leaders. He confirmed that AscendOS is going through a lot of the growing pains of any new open source project. Those problems are, of course, the number of contributors and the amount of time those contributors can allocate to the project. AscendOS has some developer builds available for download, and the team is still actively working to refine their build process and environment. If you can and want to help an interesting new project, I highly encourage you to give them a look.

While these aren’t by any stretch all of the RHEL-derived downstream distributions, these are the ones that most interest me currently. Are there any other interesting ones out there?

(A few) random observations from day one of Ohio Linux Fest

I came up yesterday from Richmond, VA with two other Linux geeks, and we're all experiencing our first Ohio Linux Fest. A few observations from day one:

  • The Columbus Convention Center is HUGE. Getting from the hotel to the actual meeting rooms is sort of a spoof of The Two Towers in its own right. Definitely a journey.
  • The size of the convention center, so far, makes it feel a little less cozy compared to SELF this past year. I've no doubt that will change tomorrow.
  • The people here are top-notch, period.
  • The t-shirts are pretty cool, even if they are “Ohio State Red”.
  • The lack of free wifi in the convention center and the shortage of power plug spots aren't great.
  • The Red Roof across the street doesn’t have a great continental breakfast
  • Both local bars we’ve tried so far have been great
  • I’m really looking forward to tomorrow.
I’ll try to have something more substantive up later tonight, booze-willing. 🙂