GitHub vs. Gerrit

Julien Danjou, the technical lead for the OpenStack Ceilometer project, had some choice words to say about GitHub pull requests, which resonate very strongly with me:

The pull-request system looks like an incredibly easy way to contribute to any project hosted on GitHub. You're a click away from sending your contribution to any piece of software. But the problem is that no worthy contribution is the effort of a single click.

Making a proper and useful contribution to a piece of software is never done right the first time. There's a dance you will have to dance: a slowly rhythmed back and forth between you and the software maintainer or team. You'll have to keep dancing it until your contribution is correct and can be merged.

But as a software maintainer, you'll find that not everybody is going to follow you through this choreography, and you'll end up with pull requests that never get finished unless you wrap things up yourself. So in most cases the gain from a pull request isn't really any bigger than a good bug report.

This is where the social argument for GitHub breaks down. As soon as you're talking about projects bigger than a color theme for your favorite text editor, this feature is overrated.

After working on OpenStack for the last year, I'm completely spoiled by our workflow and how it enables developer productivity. Recently I went back to using plain git without Gerrit on a four-person side project, and it felt like developing in a thick sea of tar.

A system like Gerrit, with pre-merge interactive reviews, lets you build project culture quickly (it's possible to do that other ways, but I've seen Gerrit really facilitate it). The onus is on contributors to get a patch right before it's merged, and the review gives them the feedback to do it the right way. Coherent project culture is one of the biggest factors in attaining project velocity, since everyone is then working towards the same goals, with the same standards.
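For anyone who hasn't seen the Gerrit side of this, the contributor's loop looks roughly like the sketch below. It assumes the git-review plugin that OpenStack uses; the branch and commit message are made up for illustration.

```
# start a topic branch for the change (names here are invented)
git checkout -b fix-sample-units

# hack, then commit; the commit-msg hook that git-review sets up adds a Change-Id
git commit -a -m "Fix unit handling in sample meters"

# push the commit to Gerrit for review instead of merging it anywhere
git review

# reviewers leave comments, so rework the same commit rather than stacking new ones
git commit -a --amend

# push the next patch set; the Change-Id ties it to the same review
git review
```

That amend-and-resubmit loop is the dance: nothing lands until reviewers are happy, and every round of feedback is attached to the exact patch it applies to.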

How an Idea becomes a Commit in OpenStack

My talk from the OpenStack Summit is now up on YouTube; in it I walk people through the process of getting your idea into OpenStack. A big part of the explanation is what's going on behind the scenes with code reviews and our continuous integration system.

I'm hoping it pulls away some of the mystery of the process and provides a gentler on-ramp for new contributors. I'll probably be giving some version of this talk again at future events, so feedback (here or on YouTube) is appreciated.

How puppet rescued my botched server install

Saturday was a rainy day, so I decided to deal with swapping the root disk on my home server for an SSD I purchased a couple of weeks ago, part of my quest to get all my machines' root disks off spinning media. My home server is a built-from-parts machine that's long enough in the tooth that it won't boot from USB. So I found a stack of CD-Rs upstairs, of equally dubious age, burned an Ubuntu 12.10 server ISO, and started the install.

Things were chugging along quite well until the installer got to installing additional packages, at which point it bombed out (I blame the ancient CD-Rs). I was able to get it to at least install GRUB and get the thing to boot back onto the network.

What I found myself with was a super minimal install. It didn't yet have a normal sources.list, it didn't have openssh-server, it didn't even have an ssh client, and it didn't have any of the tools a normal minimal server install would include. It took about 30 minutes of manual typing to get the base apt repos in place and get to where I could ssh in from upstairs to drive the rest of the process.
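The manual slog was roughly the sort of thing below, reconstructed from memory as a sketch; the mirror URLs are just examples.

```
# put a real sources.list back (Ubuntu 12.10 is "quantal"); mirrors are examples
cat > /etc/apt/sources.list <<'EOF'
deb http://us.archive.ubuntu.com/ubuntu quantal main universe
deb http://us.archive.ubuntu.com/ubuntu quantal-updates main universe
deb http://security.ubuntu.com/ubuntu quantal-security main universe
EOF

apt-get update

# just enough tooling to get off the console and drive the rest over the network
apt-get install -y openssh-server openssh-client
```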

Bootstrapping a Puppet Master

This machine is my puppet master. I had a copy of the old root over on one of my software RAID arrays, so the moment I got that mounted, I copied back the /etc/puppet this machine should have and tried to just puppet my way up the rest of the way. I'd been on a month-long kick to puppetize my home infrastructure, so this was a promising direction.

It turns out puppeting up from nothing is a little harder when you are the puppet master, and the DNS server for the network as well. 🙂 So it was about another 30 minutes of manually installing what was needed to get my puppet master started. Once that was up, I got the first puppet agent run in, and it was epic: 45 minutes chugging away, pulling down all the policies I needed, applying packages and configs, all the kind of magic that saved me from spending my whole day trying to figure out how I had this server set up before.

It also showed me where my policy had holes. I've got xfs filesystems now, so xfsprogs needs to be in the base policy. My libvirt policy didn't actually install kvm, which never mattered before, but in the super minimal install it wasn't there. And I hadn't gotten around to managing my openvpn server yet; that's in there now.
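The fixes themselves are just a few more package resources in the relevant manifests. Below is a hypothetical sketch of the kind of declarations those gaps turned into (the real ones live in my own classes, which I'm not reproducing here); a one-off local apply is an easy way to sanity-check them.

```
# hypothetical example of the package resources those holes turned into
puppet apply -e "
  package { 'xfsprogs': ensure => installed }   # xfs filesystems need the userspace tools
  package { 'qemu-kvm': ensure => installed }   # the libvirt policy had assumed this was present
  package { 'openvpn':  ensure => installed }   # now managed instead of hand-installed
"
```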

If I were to do it again...

One thing I really need is a bootstrapping script for both puppet and the puppet master. Using puppet to manage your puppet master is cool and all, but there's a bit of a snake-eating-its-own-tail problem in getting started, which required a little more manual command slinging than I liked.
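Something like the sketch below is the rough shape of what I have in mind, assuming a Debian/Ubuntu box and an old copy of /etc/puppet sitting somewhere recoverable (the restore path is a placeholder, not my real layout).

```
#!/bin/bash
# hypothetical puppet master bootstrap sketch -- not a script I actually have yet
set -e

# the packages my botched install was missing
apt-get update
apt-get install -y puppet puppetmaster

# restore the existing policies from wherever the old /etc/puppet survived
rsync -a /mnt/oldroot/etc/puppet/ /etc/puppet/
service puppetmaster restart

# first agent run against ourselves; everything after this is puppet's problem
puppet agent --test --server "$(hostname -f)"
```

The plain agent half would be simpler still: install puppet, point it at the master, and let the first run do the rest.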

But had I not had so much of my server policy encoded in puppet, I'd still be typing commands now to get that box up and running. So I'm sold on the whole process, even for a smallish IT environment like mine: a few home servers and remote guests.