I don’t hate the word cloud anymore, maybe

Since it hit the scene in earnest a few years ago, I’ve despised the word ‘cloud’ in the context of what I do for a living. I’ve warned people prior to using it in presentations, proclaimed my joy at having not used the word, and bashed it almost every time it was mentioned. I’m here to say that my position on ‘the cloud’ has matured. I don’t hate the word. But I do hate how most people in the world are defining it.

My own definition has taken a long time to develop. I’ve known it was a powerful concept for some time. I’ve also known that most of the IT world has been talking out of the side of their neck while making their salaries by talking about it. I particularly enjoy people proclaiming they know all about the ‘next generation of cloud’ when we were all still defining the first one. At any rate, to define ‘cloud’, I first have to define ‘PaaS’ to my own satisfaction.

PaaS (noun) – short for Platform as a Service. It provides a complete application platform for users or developers (or both) by sufficiently abstracting that platform away from the underlying technology it runs on.

A little wordy, I know, but we need to be specific here. Not only must a PaaS provide a platform, but it must do it in a way that abstracts developers and users away from administration of the platform. It must also handle all of the tertiary services (DNS, port forwarding, scalability, inter-connectivity, etc.) that administrators usually have to handle after the fact.

PaaS lets developers develop, and lets administrators admin.
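To make that concrete: on a Heroku-style PaaS (Heroku comes up again later in this post), the entire ‘ops’ surface a developer touches can be a one-line process declaration. This is a hypothetical sketch, not pulled from any real project — the app name and command are made up:

```
# Procfile – tells the platform what to run; DNS, routing, and scaling
# are the platform's problem, not the developer's
web: gunicorn myapp.wsgi
```

Everything below that line is invisible to the developer, which is exactly the abstraction the definition above is asking for.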

So what is cloud?

cloud (noun) – an implementation of a PaaS solution that is seamlessly and automatically scalable to handle load demands as they grow and shrink.

So you start with a PaaS and you build it out so it will grow and shrink automatically as needed for its workload.
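The “grow and shrink automatically” part boils down to a decision the platform makes without a human in the loop. Here’s a minimal sketch of that decision in Python — the function name, capacity numbers, and floor value are all illustrative assumptions, not from any real product:

```python
import math

def desired_instances(current_load, capacity_per_instance, minimum=1):
    """Return how many instances are needed to serve current_load,
    never dropping below a floor so the service stays available."""
    if capacity_per_instance <= 0:
        raise ValueError("capacity_per_instance must be positive")
    # Round up: a fraction of an instance's worth of load still needs
    # a whole instance to serve it.
    return max(minimum, math.ceil(current_load / capacity_per_instance))

# Growing load scales the platform out...
print(desired_instances(current_load=950, capacity_per_instance=100))  # 10
# ...and shrinking load scales it back in, with no admin involved.
print(desired_instances(current_load=40, capacity_per_instance=100))   # 1
```

A real cloud layer would feed this kind of calculation with live metrics and act on the result; the point is that the scaling logic, not a person, drives the platform.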

What is NOT a cloud?

  • provisioning virtual machines really quickly
  • setting up a PaaS that is brittle and confining for developers and users
  • writing 3 or 4 scripts to help automate your virtualization infrastructure
  • almost everything being marketed as a cloud today

To define a cloud you have to define PaaS. PaaS is defined as that slick layer of magic that abstracts the application away from everything the application runs on or in. A cloud is a seamlessly scalable instance of a good PaaS.  Easy, isn’t it? Step 3, profit!


My Own Private Cloud Part One – The Hardware


I work from home. That means I often need (or at least desire) to be a relatively self-sustained node. For a lot of my work and research, that means I need a pretty well stocked and stable home lab environment. I’ve recently decided to refresh and improve my home/work lab. This series of posts will document what I have going on now (and why), and what I plan to have going on in the future.

In The Beginning

I love technology. I’m literally paid dollars to stay on top of it and help people get the most out of it. But for a long time my own home lab was a pretty pathetic creation. Before coming to work at Red Hat, it was nothing more than what I could fit on my laptop. Since coming to Red Hat it was a Dell T3500 Workstation running RHEL 6 and KVM. On and off, I would have a Satellite server up to help provision systems faster, but it wasn’t a mainstay.

Initial Hypervisor / Lab System

Welcome the New (to me) blood

***Everything going forward has to be prefaced with the fact that I am a complete and utter cheapskate. </full_disclosure>.***

After hemming and hawing about it for a few months, I decided to pull the trigger. I needed three things:

  • A control server / utility server
  • A second hypervisor (alongside my T3500) so I can run high-availability tests with things like OpenStack RDO and also survive downtime.
  • A NAS so I can do at least NFS-style sharing

So off to Google shopping I go. I finally decided on the following:

Second Hypervisor

Utility Server

NAS Storage

  • Western Digital MyBook Live 3TB appliance
  • I know. No redundant disks. The stuff I really care about is backed up to multiple ‘net-based services. This lab isn’t to serve pictures and be a media hub for my house. It’s a research and development lab/playground.

Gigabit Switch

  • TrendNet 8-port gigabit switch
  • The first one died in a puff of smoke when I plugged in the power supply. A quick replacement from Amazon later, and the new one seems to be working really well.

After months on the request list, my company also approved my request for a second hypervisor (a little late, but awesome). So now I have three.

Third Hypervisor

Next Up

So all of the boxes are unpacked and set out with the recycling. Now what? Now we get to the fun part, is what. It took me a few iterations to arrive at a toolchain that I really liked and that worked well. The next few posts will be dedicated to talking about that toolchain and how I got it all set up and glued together. Here’s a pic of my not-very-well-organized-yet lab up and running.

My refreshed Home Computer Lab

Starting your project with a super-micro-mini IT budget. Can it work?

A few months ago I wrote up a series of thoughts (and here, here, and here) about tools that you really shouldn’t wait on when starting up a new project. I totally stand by them.

Today I had a really great conversation with a proven IT/web heavyweight, and he’s starting up an IT project with as close to a non-existent IT budget as I’ve ever heard of. Their codebase is Python (lots of Django love), so it’s living in Google Project’s realm. Most of their other services (CI is what we primarily talked about) were running on Heroku (http://www.heroku.com/), which at the levels they currently need, is free.

So they’re pretty effectively leveraging IaaS and PaaS offerings at the low, free levels to get their company started. But are they really saving money or are they just deferring the cost with interest like all of our student loans?

With the math I’ve done, both in the past and recently, IaaS and PaaS are most cost-effective for high-intensity, low-duration operations. At 5AM, one of the biggest uses we’ve found for AWS is processing large chunks of scientific data. When you’re dealing with a service you want up 24/7/36[5-6], unless it’s AWFULLY lightweight, it doesn’t end up being a cost-effective candidate for pushing to your “cloud” provider. I think there are still a few years of truth in these statements. Eventually it will become too cheap not to do it, but that’s a little ways off, still. A company still needs its “40 Acres” somewhere.
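The back-of-the-envelope version of that math is easy to sketch. Every rate below is a made-up placeholder (hourly price, server price, lifespan, upkeep) — plug in real quotes before deciding anything — but the shape of the comparison holds:

```python
def monthly_cloud_cost(hourly_rate, hours_used_per_month):
    """Pay-as-you-go: you only pay for the hours you actually run."""
    return hourly_rate * hours_used_per_month

def monthly_owned_cost(purchase_price, lifespan_months, upkeep_per_month):
    """Owned hardware: amortized purchase price plus power/admin overhead."""
    return purchase_price / lifespan_months + upkeep_per_month

# High-intensity, low-duration job: 40 hours/month of number crunching.
burst = monthly_cloud_cost(hourly_rate=0.50, hours_used_per_month=40)       # $20
# The same instance left running 24/7 (~720 hours/month).
always_on = monthly_cloud_cost(hourly_rate=0.50, hours_used_per_month=720)  # $360
# A $3000 server amortized over 3 years with $50/month of upkeep.
owned = monthly_owned_cost(3000, 36, 50)                                    # ~$133

print(burst, always_on, owned)
```

At a 6% duty cycle the rented option wins by an order of magnitude; run it around the clock and owning the box is less than half the price. That crossover is the whole argument.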

So what about these guys? I’m entering into a little bit of speculation here, as I’ve never used Heroku, and I’ve only lightly used Google Project, but both of these are pretty highly customized environments. It would seem to me that the longer you live exclusively in these environments, the more your code will become dependent on their idiosyncrasies. I don’t think these environments are bad or undesirable. I’m just saying that I think they will diverge from a “vanilla” build by their very nature, and bringing an application back from one of these environments to run on bare metal or a “regular” virtual machine will be increasingly difficult.

So, provided that cost exists, the longer you defer it, the higher it will be. Is it worth it? I don’t know.

If someone has experience either way, I’d love to hear about it. I know I’m going to continue thinking about it…