RSS from Trello with Jenkins

Trello is a pretty nice website. It's (sort of) a kanban board that is very useful for organizing groups of people in situations where a full agile framework would be too cumbersome. Kanban is used a lot in IT Operations; if you want a great story on it, go check out The Phoenix Project.

One thing Trello is lacking, however, is the ability to tap into an RSS-style feed for one or more of your boards. But, where there is an API, there’s a way. This took me about 30 minutes to iron out, and is heavily borrowed from the basic example in the documentation for trello-rss.

Step One – Three-legged OAuth

Trello uses OAuth. So you will need to get your developer API keys from Trello. You will also need to get permanent (or expiring whenever you want) OAuth tokens from them. This process is a little cloudy, but I found a post on StackOverflow that got me over the hump.
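For posterity, here is roughly what that flow ends up looking like with the oauth2 module (one of the dependencies listed in the script below). This is a from-memory reconstruction of the StackOverflow approach, so treat it as a starting point rather than gospel; the endpoint URLs are Trello's OAuth 1.0a URLs, and the key placeholders are yours to fill in.

import urlparse
import oauth2 as oauth

request_token_url = 'https://trello.com/1/OAuthGetRequestToken'
authorize_url = 'https://trello.com/1/OAuthAuthorizeToken'
access_token_url = 'https://trello.com/1/OAuthGetAccessToken'

# your developer keys from https://trello.com/1/appKey/generate
consumer = oauth.Consumer('my_api_key', 'my_api_private_key')
client = oauth.Client(consumer)

# leg one: fetch a request token
resp, content = client.request(request_token_url, 'GET')
request_token = dict(urlparse.parse_qsl(content))

# leg two: have the user authorize it in a browser;
# expiration=never is what makes the resulting token permanent
print '%s?oauth_token=%s&name=trello-rss&expiration=never&scope=read' % (
    authorize_url, request_token['oauth_token'])
verifier = raw_input('paste the verification code Trello shows you: ')

# leg three: trade the authorized request token for an access token
token = oauth.Token(request_token['oauth_token'],
                    request_token['oauth_token_secret'])
token.set_verifier(verifier)
client = oauth.Client(consumer, token)
resp, content = client.request(access_token_url, 'GET')
access_token = dict(urlparse.parse_qsl(content))
print 'oauth_token:        %s' % access_token['oauth_token']
print 'oauth_token_secret: %s' % access_token['oauth_token_secret']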

Step Two – a little python

I created a little bit of python to handle this for me. Bear in mind it's still VERY rough. My thought is to start to incorporate other Trello automation and time-savers into it down the road. If that happens I'll stick it out on GitHub.

#!/usr/bin/env python
from trello_rss.trellorss import TrelloRSS
from optparse import OptionParser
import sys

class TrelloAutomate:
    '''
    Used for basic automation tasks with Trello,
    particularly with CI/CD platforms like Jenkins.
    Author: jduncan
    Licence: GPL2+
    Dependencies (py modules):
    - httplib2
    - oauthlib / oauth2
    '''
    def __init__(self):
        # force utf-8 so card titles with non-ascii characters
        # don't blow up the feed (a Python 2 trick)
        reload(sys)
        sys.setdefaultencoding('utf8')
        # fill these in with your own keys and tokens from Step One
        self.oauth_token = 'my_token'
        self.oauth_token_secret = 'my_token_secret'
        self.oauth_apikey = 'my_api_key'
        self.oauth_api_private_key = 'my_api_private_key'

    def _get_rss_data(self):
        rss = TrelloRSS(self.oauth_apikey,
                        self.oauth_api_private_key,
                        self.oauth_token,
                        channel_title="My RSS Title",
                        rss_channel_link="https://trello.com/b/XXX/board_name",
                        description="My Description")
        rss.get_all(50)  # grab up to the 50 most recent items
        return rss.rss

    def create_rss_file(self, filename):
        data = self._get_rss_data()
        with open(filename, 'w') as fh:
            fh.write(data)

def main():
    parser = OptionParser(usage="%prog [options]", version="%prog 0.1")
    parser.add_option("-r", "--rss",
                      action="store_true",
                      dest="rss",
                      help="create the rss feed")
    parser.add_option("-f", "--file",
                      dest="filename",
                      default="trello.xml",
                      help="output filename. default = trello.xml",
                      metavar="FILENAME")
    (options, args) = parser.parse_args()
    trello = TrelloAutomate()
    if options.rss:
        trello.create_rss_file(options.filename)

if __name__ == '__main__':
    main()

Step Three – Jenkins Automation

At this point I could stick this little script on a web server and have it generate my feed for me with a cron job. But that would mean my web server would have to build content instead of just serving it. I don't like that.

Instead, I will build my content on a build server (Jenkins) and then deploy it to my web server so people can access my RSS feed easily.

Put your python on your build server

Get your python script to your build server, and make sure you satisfy all of the needed dependencies. You will know if you haven’t, because your script won’t work. 🙂 For one-off scripts like this I tend to put them in /usr/local/bin/$appname/. But that’s just my take on the FHS.

Create your build job

This is a simple build job, especially since it’s not pulling anything out of source control. You just tell it what command to run, how often to run it, and where to put what is generated.

trello-rss-1
The key at the beginning is not to keep all of these builds. If you run a job this frequently you can fill up your system with old cruft from 1023483248 builds ago. I run mine every 15 minutes (you'll see later) and keep output from the last 10.
trello-rss-2
Here I tell Jenkins to run this job every 15 minutes. The syntax is sorta' like a crontab, but not exactly – every 15 minutes works out to something like */15 * * * *. The help icon is your friend here.
trello-rss-3
I have previously defined where to send my web docs (see my previous post about automating documentation). If you don’t specify a filename, the script above saves the RSS feed as ‘trello.xml’. I just take the default here and send trello.xml to the root directory on my web server.
trello-rss-4
And this is the actual command to run. You can see the -f and -r options I define in the script above. $WORKSPACE is a Jenkins variable that is the filesystem location for the current build workspace. I just output the file there.
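In case the screenshot is hard to read, the build step boils down to a one-liner along these lines (the path assumes my /usr/local/bin/$appname/ habit from earlier, and the script name is just whatever you saved it as):

python /usr/local/bin/trello-rss/trello-rss.py -r -f $WORKSPACE/trello.xml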

Summary

So using a little python and my trusty Jenkins server, I now have an RSS Feed at $mywebserver/trello.xml that is updated every 15 minutes (or however often you want).

Of course this code could get way more involved. The py-trello module that it uses is very robust and easy to use for all of your Trello needs. I highly recommend it.
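For a taste, listing your boards only takes a few lines. This is sketched from memory of the py-trello README (using the same four credentials as above), so double-check the parameter names against the project itself:

from trello import TrelloClient

client = TrelloClient(
    api_key='my_api_key',
    api_secret='my_api_private_key',
    token='my_token',
    token_secret='my_token_secret',
)
# print the name of every board this token can see
for board in client.list_boards():
    print board.name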

If I have time to expand on this idea I’ll post a link to the github where I upload it.

-jduncan


and you think the code is the hardest part

Well, you’re pretty much right. BUT.

I've been working, on and off, on a project called soscleaner since last December-ish. It's a pretty straightforward tool: it takes an existing sosreport and obfuscates data that people typically don't like to release, like hostnames and IP addresses. The novel part is that it maintains the relationships between obfuscated items and their counterparts, so a hostname or IP address is obfuscated with the same value in all of the files in an sosreport. That lets the person looking at the 'scrubbed' report still perform meaningful troubleshooting.
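To make that consistency idea concrete, here is a minimal toy sketch of the technique. The names and the fake 10.230.230.x range are illustrative only – this is not soscleaner's actual code or API:

import re

class ObfuscationMap(object):
    ''' toy version of the idea: every real IP maps to exactly one
        fake IP, so relationships survive across all files '''
    def __init__(self):
        self.ip_map = {}

    def obfuscate_ip(self, ip):
        # first sighting gets a new fake address;
        # every later sighting gets the same one back
        if ip not in self.ip_map:
            self.ip_map[ip] = '10.230.230.%d' % (len(self.ip_map) + 1)
        return self.ip_map[ip]

    def clean_line(self, line):
        return re.sub(r'\d{1,3}(?:\.\d{1,3}){3}',
                      lambda m: self.obfuscate_ip(m.group(0)),
                      line)

scrubber = ObfuscationMap()
print scrubber.clean_line('eth0 inet 192.168.1.10 gw 192.168.1.1')
print scrubber.clean_line('default via 192.168.1.1 dev eth0')
# 192.168.1.1 becomes the same fake address in both lines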

It's not a big enough problem to get a real company's or engineer's attention, but it's too big for a hack script. So I decided to try to tackle it. And I have to say that the current iteration isn't too bad. It does what it's supposed to pretty reliably, and all of the artifacts to make it a 'real program' are in place. Artifacts like:

  • issue tracking
  • wiki
  • README and Licensing Decisions
  • publishing binary packages (RPMs in this case, right now)
  • publishing to language-specific repositories (PyPi in this case, since it’s a Python application)
  • creating repositories (see RPM’s link)
  • submitting it to a Linux distro (Fedora in this case, for now)
  • writing unittests (a first for me)
  • creating some sort of 'homepage'
  • mailing lists

All of this has been an amazing learning experience, of course. But my biggest takeaway, easily, is that all of the things that wrap around the code to actually 'publish' an application are almost as hard as the coding itself. I am truly stunned, and I have a new appreciation for the people who do it well every day.

SELinux talk at RVaLUG – 20140419

This morning I gave a pretty well-received talk about SELinux. We got into the important definitions and pretty deep down into how type enforcement works. Lots of practical examples and fun stuff.

Of course, why spend hours coming up with a new slide deck when you can borrow from amazing work done by co-workers? 🙂

The slide deck I used was a slightly modified version of one used (last I know of) for a Red Hat TAM Webinar last April. It also came with a set of lab questions that we didn't have time to go through today.

And of course, there is the SELinux Coloring Book.

The talk was long for a LUG meeting (right around 90 minutes plus a little follow-up), but the interaction was great and I think we had some good communication going.

20140419-jduncan-selinux_workshop-lab_redux

20140419-jduncan-selinux_workshop_redux

kpatch – my kneejerk reaction

Oracle gobbled up a company called Ksplice 50 ITYA (IT Years Ago – or July 2011). They then shoehorned it into their downstream clone of RHEL so people could slip in kernel upgrades without rebooting systems, sort of like a magician yanking the tablecloth out from under the dishes on a table. It's scary on any number of levels.

Now there is a new-ish project called kpatch that has the backing of Red Hat (full disclosure – I work for Shadowman). I've only had a little time to look at the incomplete documentation on how it works. That said, it looks to be a huge step forward over ksplice. From its Red Hat Blog announcement:

With respect to granularity, kpatch works at the function level; put simply, old functions are replaced with new ones.  It has four main components:

  • kpatch-build: a collection of tools which convert a source diff patch to a hot patch module. They work by compiling the kernel both with and without the source patch, comparing the binaries, and generating a hot patch module which includes new binary versions of the functions to be replaced.
  • hot patch module: a kernel module (.ko file) which includes the replacement functions and metadata about the original functions.
  • kpatch core module: a kernel module (.ko file) which provides an interface for the hot patch modules to register new functions for replacement.  It uses the kernel ftrace subsystem to hook into the original function’s mcount call instruction, so that a call to the original function is redirected to the replacement function.
  • kpatch utility: a command-line tool which allows a user to manage a collection of hot patch modules.  One or more hot patch modules may be configured to load at boot time, so that a system can remain patched even after a reboot into the same version of the kernel.
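To make the function-level idea concrete with a (very) loose userspace analogy – this is Python, not kernel code, and nothing here resembles kpatch's actual ftrace mechanism – hot patching amounts to rebinding a function at runtime so every future caller lands on the fix:

class Kernel(object):
    ''' stand-in for a running kernel with a buggy function '''
    def sys_handler(self, call):
        return 'buggy result for %s' % call

def patched_sys_handler(self, call):
    ''' the replacement function shipped in a "hot patch module" '''
    return 'fixed result for %s' % call

kernel = Kernel()                  # the system is up, and stays up
print kernel.sys_handler('open')   # buggy result for open

# rebind the function so every future call lands on the fix --
# morally similar to kpatch hooking the old function's mcount
# call site via ftrace and redirecting to the replacement
Kernel.sys_handler = patched_sys_handler
print kernel.sys_handler('open')   # fixed result for open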

That’s way cooler than just doing some fancy RAM voodoo and slipping new kernels in like ksplice.

But I still don’t see where it has a place on a company’s production server or in their security plans.

I believe that if a system cannot sustain the reboot of a single instance of Linux (physical or virtual), then there is a serious flaw in its architecture. To further that, I think something like kpatch could end up being a strong crutch for the bad architects out there, allowing them to keep working in this flawed manner.

I know that my crazy idealism doesn't represent the current reality everywhere (or almost anywhere). But if this is the only justification for its existence, then I think we could, and should, be spending our cycles better somewhere else.

More details as I discover them.

My Own Private Cloud Part One – The Hardware

Purpose

I work from home. That means I often need (or at least desire) to be a relatively self-sustained node. For a lot of my work and research, that means I need a pretty well stocked and stable home lab environment. I’ve recently decided to refresh and improve my home/work lab. This series of posts will document what I have going on now (and why), and what I plan to have going on in the future.

In The Beginning

I love technology. I'm literally paid dollars to stay on top of it and help people get the most out of it. But for a long time my own home lab was a pretty pathetic creation. Before coming to work at Red Hat, it was nothing more than what I could fit on my laptop. Since coming to Red Hat, it has been a Dell T3500 Workstation running RHEL 6 and KVM. On and off, I would have a Satellite server up to help provision systems faster, but it wasn't a mainstay.

Initial Hypervisor / Lab System

Welcome the New (to me) blood

***Everything going forward has to be prefaced with the fact that I am a complete and utter cheapskate. </full_disclosure>.***

After hemming and hawing about it for a few months I decided to pull the trigger. I needed 3 things:

  • A control server / utility server
  • A second hypervisor (along with my T3500) so I can run high-availability tests with things like OpenStack RDO and also survive downtime.
  • A NAS so I can do at least NFS-style sharing

So off to Google shopping I go. I finally decided on the following:

Second Hypervisor

Utility Server

NAS Storage

  • Western Digital MyBook Live 3TB appliance
  • I know. No redundant disks. The stuff I really care about is backed up to multiple ‘net-based services. This lab isn’t to serve pictures and be a media hub for my house. It’s a research and development lab/playground.

Gigabit Switch

  • TrendNet 8-port gigabit switch
  • The first one died in a puff of smoke when I plugged in the power supply. Amazon shipped a quick replacement, which seems to be working really well.

After months on the request list, my company also approved my request for another hypervisor (a little late, but awesome). So now I have three.

Third Hypervisor

Next Up

So all of the boxes are unpacked and out with the recycling. Now what? Now we get to the fun part, is what. It took me a few iterations to arrive at a toolchain that I really liked and that worked well. The next few posts will be dedicated to that toolchain and how I got it all set up and glued together. Here's a pic of my not-very-well-organized-yet lab up and running.

My refreshed Home Computer Lab

Reason 924308 to GET RID OF YOUR BLACKBERRY

(full disclosure – no, I do not have the other 924307 reasons documented currently. But I have no doubt that's a conservative estimate)

I can't imagine, at this point in 2013, with Android and iOS comically dominant and "BYOD" principles rampantly adopted, that a company out there would still be holding on to their BES server and crates full of BlackBerry World Editions. I think that if I were sitting in an interview with someone and they pulled out a BlackBerry, I would be sorely tempted to get up and leave. And while walking through the lobby of their Web 1.0 company, I'd be tweeting about it on my Swype on-screen keyboard while simultaneously listening to Spotify (yes, RIM, multiple simultaneous programs running – it can happen).

I can hear the people who managed this stuff in places I used to work, with their Neanderthal mindsets: "Jamie, you idiot, we have to have a RIM server so we can control the security ourselves. That cloud stuff can't be secured!"

Normally I would just roll my eyes at them, and then turn around and keep dragging what I could of their company out of the dark ages. But Richmond, VA-based Risk Based Security was recently talking about a newly discovered and really, really pathetic vulnerability in RIM's on-premise server.

The RIM server would send my email username and password out on the network to my mail servers in cleartext.

Not their own BBM or similar outdated software. MY EMAIL ACCOUNT CREDENTIALS.

I mean, seriously?!

While I don’t wish anyone to lose their jobs, I do wish RIM would go ahead and finish folding already. You missed the boat by failing to innovate. At least die with some dignity. Don’t be another Kodak.


The Game Changers Are Arriving

Like people all across the planet, I have been watching the Raspberry Pi Project (http://www.raspberrypi.org) for a long time. $35 for a fully usable computer the size of a credit card.

Undoable.

Well, now they’ve done it. The first few have been bought for charity and the next 10,000 are being manufactured.

People aren't waiting around to do amazing things with these little wonders, either. Here is a video (from SCALE 10x, I think – http://www.socallinuxexpo.org/scale10x) of XBMC running pretty well on a Raspberry Pi board.

This is exciting, but imagine when this device, or some derivative, hits the 10,000,000 units sold mark. They’ll cost $10 at that point, and have twice the power.

  • They’ll be a low-cost upgrade to coats, embedding weather information and music players and Lord knows what else
  • Every school desk on the planet can have one screwed to the bottom. Universal computer access for students. On a budget.
  • Hackers will combine these with Arduinos (http://www.arduino.cc/) and similar devices and begin building out household automation.

That's the stuff that really gets me going. Attach a programmable microcontroller (like an Arduino), a handful of $10 low-speed wireless transmitters, a combination of cameras, accelerometers, and pressure switches (all made very small and cheap thanks to your smartphone), and a shelf full of these wonderful little micro-servers, and *poof*…

Disney's house of tomorrow finally arrives. Accelerometers can tell when the floor moves, so the lights come on when someone walks into a room and the door opens when they walk up to it. Cameras become wireless, cheap, and easy to place for security and for data on when people are in which rooms. *Poof* – heating your house just became more intelligent and efficient. Pressure sensors can act as barometers to gauge the weather, opening and closing shades and windows to cool your house most effectively in the summer. Toss in some passive RFID wristbands and we're halfway to Gattaca.

And this technology isn’t coming out of Oracle or Microsoft or Apple. It’s coming out of open source projects and hacker spaces. Vive la Revolution.