Category Archives: OpenStack

Robert Muth: Better Bash Scripting in 15 Minutes

Better Bash Scripting in 15 Minutes. The tips and tricks below originally appeared as one of Google’s “Testing on the Toilet” TOTT episodes. This is a revised and augmented version.

via Robert Muth: Better Bash Scripting in 15 Minutes.

Some good bits in here. We’ve implemented some of them in devstack, and I think a few more (like erroring on uninitialized variables and enforcing double brackets on all conditionals) would be helpful. It also makes me think about things to enforce in bash8.
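As a rough sketch of those two settings (not the exact devstack incantation), treating unset variables as errors and preferring double-bracket conditionals looks something like this:

#!/bin/bash
# Treat unset variables as an error (the "uninitialized" protection above).
set -o nounset    # same as: set -u
# These pair well with it: exit on error, and fail a pipeline if any stage fails.
set -o errexit
set -o pipefail

name=${1:-}       # explicit default, so nounset doesn't trip on a missing argument
# Double brackets: no word-splitting surprises, and pattern matching works.
if [[ -z $name ]]; then
    echo "usage: $0 <hostname>" >&2
    exit 1
fi
if [[ $name == devstack* ]]; then
    echo "$name looks like a devstack host"
fi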

Devstack Vagrant

Devstack is tooling for OpenStack to make it easy to bring up an OpenStack environment based on the latest git trees. It’s used extensively in upstream testing, and by many OpenStack developers to set up dev/test environments.

One of my personal challenges in working on Devstack was testing devstack itself. Relying on the upstream gate means we have a limited number of configurations, and when something goes wrong, iterating on a fix is hard. Even more importantly, the upstream gate is currently only a single node test environment.

A month ago I hacked out a new tool – devstack-vagrant (available on github).


Devstack vagrant provides a customized Vagrant environment that will build a 2 node devstack cluster under VirtualBox. The basic model is 2 devstack nodes (a controller and a compute) that bridge through a physical interface on your host. The bridged interface is set as the default route in the nodes so that 2nd level guests created on top of this 2 node devstack can route to the outside world.

The nodes start and build from official Ubuntu 12.04 cloud images, and are customized using the puppet provisioning support in vagrant. There are a few config variables you need to set in a config.yaml, including hostnames, bridge interface, and the password hash you want your stack user to have. Basically enough to bootstrap the environment and then run devstack from upstream git.
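If you want to try it, the flow is roughly the sketch below; the repository URL and the exact config.yaml key names here are illustrative, so check the project’s README for the real ones:

git clone https://github.com/sdague/devstack-vagrant.git    # illustrative URL
cd devstack-vagrant
openssl passwd -1 'mysecretpass'    # one way to generate a password hash for the stack user
$EDITOR config.yaml                 # fill in hostnames, the bridge interface, and that hash
vagrant up                          # builds and puppet-provisions the controller and compute nodes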

I added a bit of additional logic to the end of the normal devstack process that includes installing Ubuntu 12.04 and Fedora 20 cloud images into your glance, injecting the ssh public key for the stack user into the keyserver, and opening up ping and ssh in all the security groups.
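Done by hand with the CLI tools of that era, those post-stack steps look roughly like the following (commands and flags are from memory, so treat this as a sketch rather than gospel):

glance image-create --name ubuntu-12.04 --disk-format qcow2 \
    --container-format bare --is-public True < precise-server-cloudimg-amd64-disk1.img
nova keypair-add --pub-key ~/.ssh/id_rsa.pub stack          # register the stack user's ssh key
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0         # allow ping
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0          # allow ssh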

I still consider this an expert tool at this point, as in, if it breaks you get to keep all the pieces. However, this has been useful to me so far, and given the pull requests I got the other day, it is seemingly useful for others as well. Patches are definitely welcome. And if it looks like more folks want to contribute I’ll happily move it to stackforge.

One of the things I’d love to do is sort out a functioning libvirt backend for vagrant (there are 2, they are both a little wonky) because then the 2nd level guests could use nested KVM and not be ridiculously slow.
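For reference, checking whether a host can even offer nested KVM is quick (Intel shown; the AMD module is kvm_amd with the same parameter):

cat /sys/module/kvm_intel/parameters/nested    # Y (or 1) means nested virt is available
# if it reports N, reload the module with nesting turned on (assumes no guests are running)
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1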

This tool has already proved very useful to me, so hopefully it will be useful to others as well.

Why you should be reviewing more OpenStack code

“Read, read, read. Read everything—trash, classics, good and bad, and see how they do it. Just like a carpenter who works as an apprentice and studies the master. Read! You’ll absorb it.”

– William Faulkner

Icehouse 3 is upon us, and as someone on a bunch of core review teams, I’m hearing the steady drum beat of people asking how to get core reviewers to review their code. My standard response has been: “make sure you are also reviewing code”.

Why?

Understanding implicit style

While most projects use the hacking program to check for trivial style issues (wrong formatting), there are a lot of other parts of style that exist inside a project. What does a good function look like? When does the project handle exceptions vs. doing checks up front? What does spacing inside functions look like? What “feels” like Nova code, and what feels foreign and odd?

This is like being invited to a party at someone’s house for the first time. You walk in the door, and the first thing you do is look to the host, and the guests, and figure out whether people are wearing shoes in the house or not. And you follow suit if there seems to be a pattern. It’s about being polite and adapting to the local environment.

Because unless you read other people’s code, you’ll never understand these things. There are lots of patches I look at briefly, realize they are in some wacky style so foreign to the code at hand that I don’t have the energy to figure out what the author means, and move on.

Taking load off review teams

As a core reviewer, I currently have about 800 patches that I could +2 or -2. Given the rate of code coming in, that might as well be infinite. And it grows all the time.

By reviewing code, even when you don’t have approval authority, you’ll be helping the review teams weed out patches which aren’t in any way ready to move forward. That’s a huge time savings, and one that I appreciate.

Even something as simple as making sure authors provide good commit messages is huge. Because I’ll completely skip over reviews with commit messages that I can’t understand. That’s your opportunity to sell me on why I should spend the next 30 minutes looking at your code. A good commit message that is really clear about what problem the code hits, what the solution is, and why this implementation is the right approach will make me dive in.
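As a purely illustrative skeleton (not a template any project mandates; the trailer is just the convention OpenStack happens to use), that kind of commit message looks something like:

Fix <short summary of the user-visible problem>

Without this change, <what breaks, and under what conditions it shows up>.

This patch <what the change actually does>, taking this approach because
<why this implementation beats the alternatives considered>.

Closes-Bug: #<bug number>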

A terrible or unclear commit message will probably just make me ignore the code, because if the author didn’t care enough to explain that to me in the commit message, there are probably lots of issues in the code itself. Even if you and I had a conversation about this code last week, don’t assume I remember all of that. That was probably 50 code reviews ago for me, which means the context of that conversation has long since flushed from my brain.

If you review a bunch of code, you’ll understand how these things impact your ability to review it, and will naturally adapt how you write commits (including the message) to make the life of a reviewer easier.

Seeing the bigger picture

People tend to start contributing in just one corner of OpenStack, but OpenStack is a big, interconnected project. What you do in one corner can affect the rest of the project. If you aren’t reviewing code and changes happening at other layers of the project, it’s really hard to know how your piece fits into the larger picture.

Changes can look fine in the small, but have a negative impact on the wider project. If you are proactive in reviewing code more broadly you can see some of that coming, and won’t be surprised when a core reviewer -2s you because you were going in a different direction than the rest of the project.

Becoming a better programmer

When I started on the OpenStack project 2 years ago I hadn’t done python in a real way for years. My python was very rusty. But one of the first things I did was start reviewing a bunch of code, especially by some of the top people in the project.

There are some really smart and really skilled people in the OpenStack project. There are people that have been part of the python community for 15+ years. People that live and breathe python. Just reading their code makes you realize some of what can be done with the language, and what the “pythonic” way of doing things is. Nothing is better training for becoming a better python developer than learning from these folks.

Sometimes you’ll find a real issue, because no one is perfect. Sometimes you’ll find something you don’t understand, and can leave a comment as a question, which you’ll probably get an answer to. But all of it will be learning. And you will become a better developer.

It does make a difference

I’ll be 100% honest: with 800+ reviews I should be looking at, I play favorites. People that I see contributing a lot on the review side (not just volume, but real quality reviews that save me time) are people whose code I want to review, because they are contributing to the whole of the project, not just their little corner.

So that’s why you should review more code in OpenStack. It will really contribute to the project, make you a better developer, and through all this you’ll find your code is naturally aligning better with OpenStack and gets reviewed more often. Realize this is not an overnight fix, but a long term strategy for aligning with the community and becoming part of it.

OpenStack doesn’t need a leader, it just needs to evolve

Third, and perhaps the best argument against OpenStack needing a leader, is the open nature of the beast itself. It’s precisely because there’s no dominant leader that OpenStack remains so transparent and competitive – everyone’s contributions can be seen by everyone else, and this drives people to do even better.

Most likely, those who say that OpenStack needs a leader do so because of history – previous open-source projects like Java, Linux and Android have all had a ‘dictator’ at the helm, but that doesn’t necessarily mean it’s the best path for OpenStack.

via OpenStack doesn’t need a leader, it just needs to evolve | SiliconANGLE.

If you recall, Linux’s leadership and development model was largely dismissed by pundits, until it had 15 years of success under its belt. Then it became the gospel of how Open Source projects should run.

But everything evolves over time. It doesn’t really surprise me that the pundits see OpenStack’s leadership model as different, and immediately dismiss it. We’ve got 3.5 years under our belt. Maybe at 5 or 6 years everyone will say that all Open Source projects need to run like OpenStack.

Which would of course be wrong. While there are certain common threads between different Open Source communities, every community is different. Why? Because Communities are made of real people. Real people with different passions, strengths, weaknesses, biases, loves, constraints, and moments of brilliance. This isn’t something you can model with spheroid approximations of upstream developers. Replicating another project’s leadership model might be easy, but in most cases isn’t what your community actually needs.

Are there areas for improvement? Sure. There always are. But improvement is a watchword for OpenStack, something we apply everywhere: to code, to process, to communication.

So I agree, we don’t need a single leader. And the evolution that continues in OpenStack will be a key strength, not a weakness as the project goes forward.

OpenStack CI by the numbers

For the week of Monday Oct 20th to Sunday Oct 27th (partial, it’s still Sunday morning).

  • 34894 – # of test jobs run by the CI system
  • 25316 – # of devstack clouds created by the CI system
    • 8254 – # of large ops runs (devstack + fake virt driver + limited tempest test to drive it)
    • 940 – # of swift functional runs (devstack + swift functional tests)
    • 16122 – # of runs that do some level of devstack + tempest with libvirt qemu
  • 508536 – # of qemu guests started (successfully) in the devstack clouds
  • 128 – Max # of libvirt qemu guests started (successfully) in a single test run

Things that you figure out when you are playing with elastic search on a Sunday morning. One of the surprises for me was how much we use devstack outside of the base tempest run use case, and that our max # of guests spawned in a run is now over 100.

Update: Clark Boylan correctly pointed out that our guests are 2nd level, on platforms that don’t support nested kvm, and thus are libvirt/qemu guests not libvirt/kvm guests.

OpenStack Havana – the Quality Perspective

Like a lot of others, I’m currently trying to catch my breath after the incredible OpenStack Havana release. One of the key reasons that OpenStack is able to evolve as fast as it does, without the whole thing falling apart, is the incredible preemptive integration gate that we have (think continuous integration++).

In Havana, beyond just increasing the number of tests we run, we made some changes in the nature of what we do in the gate. These changes are easy to overlook, so I wanted to highlight some of my favorites, and give a perspective in everything that’s going on behind the scenes when you try to land code in OpenStack.

Parallel Test Runner

Every proposed commit to an OpenStack project needs to survive being integrated into a single node devstack install and hit with 1300 API & integration tests from Tempest, but until Havana these were run serially. Right before the Havana 3 milestone we merged parallel tempest testing for most of our jobs. This cut their run time in half, but more importantly it meant all our testing was defaulting to 4 simultaneous requests, as well as running every test under tenant isolation, where a separate tenant is created for every test group. Every time you ratchet up testing like this you expose new race conditions, which is exactly what we saw. That made for a rough RC phase (the gate was a sad panda for many days), but everyone buckled down to get these new issues fixed, which were previously only visible to large OpenStack installations. The result: everyone wins.

This work was a long time coming, and had been started in the Grizzly cycle by Chris Yeoh, and spearheaded to completion by Matt Treinish.
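For local experiments, the same parallel run is a couple of commands on a devstack node; this assumes the testr-based tooling Tempest used at the time and the tenant isolation option, so check tempest.conf for the real knob names:

cd /opt/stack/tempest
# tempest.conf should have tenant isolation enabled, so every test group
# gets its own tenant (allow_tenant_isolation = True at the time)
testr run --parallel    # one test worker per CPU by default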

Large Ops Testing

A really clever idea was spawned this summer by Joe Gordon: could we run Tempest tests on a devstack with a fake virt driver that would always “succeed”, and do so instantaneously? In doing so we could turn up the pressure on the OpenStack control plane without real virt drivers slowing down execution enough that bugs could hide. Again, the first time we cranked this to 11, lots of interesting results fell out, including some timeout and deadlock situations. All hands went on deck, the issues were addressed, and now Large Ops Testing is part of our arsenal, run on every single proposed commit.
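If you want to reproduce that setup locally, pointing devstack at the fake driver is a one-line knob; this assumes the VIRT_DRIVER setting and nova’s fake driver as they existed at the time:

# in your devstack localrc
VIRT_DRIVER=fake
# then build as usual; under the hood nova.conf ends up with
# compute_driver = fake.FakeDriver
./stack.sh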

Upgrade Testing

Most people familiar with OpenStack know Devstack, the opinionated installer for OpenStack from upstream git. Devstack actually forms the base of our QA system, because it can build a single node environment from git trees. Less well known is its sister tool, Grenade. Grenade uses 2 devstack trees (the last stable and master) to build an OpenStack at the previous version, inject some data, shut everything down, and try to restart it with the latest version of OpenStack. This ensures config files roll forward smoothly (or have specific minimal upgrade scripts in Grenade), database schemas roll forward smoothly, and that we don’t violate certain deprecation guarantees.

Grenade was created by Dean Troyer; I did a lot of work towards the end of Grizzly to get it into the gate, and Adalberto Medeiros took it the final mile in Havana, making it something that runs on every proposed commit.
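Conceptually, the flow Grenade drives looks like the sketch below; the real grenade.sh has far more detail, and the resource helpers here are stand-ins for its actual scripts:

cd /opt/stack/old/devstack && ./stack.sh     # build OpenStack from the last stable branch
create_resources                             # stand-in: boot guests, upload images, create volumes
./unstack.sh                                 # shut all the services down
cd /opt/stack/new/devstack && ./stack.sh     # bring the same environment back up on master
verify_resources                             # stand-in: check the injected data survived the upgrade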

New Tools for an Asynchronous World

September was the 30th anniversary of the GNU project. I remember some time in the late 90s reading or watching something about Richard Stallman and GNU Hurd. The biggest challenge of building a system with dozens of daemons sending asynchronous messages is having any idea what broke when something goes wrong. They just didn’t have the tools or methods to make consistent forward progress. Linux emerged with a simpler model which could make progress, and the rest is history.

If you zoom out on OpenStack, this is exactly what we are building: a data center OS microkernel. And as I can attest, debugging is often “interesting”. Without the preemptive integration system, we’d never be able to keep up our rate of change. However, as the number of integrated projects has increased, we’ve definitely seen emergent behavior that is not straightforward to track down.

Jobs in our gate will fail, seemingly at random. People unfamiliar with the situation will complain about “flakey tests” or a “flakey gate”, and just recheck their patch and see it pass on the second attempt. Most of the time neither the gate nor the tests are to blame, but the core of OpenStack itself: we managed to trigger a race condition that maybe shows up 1% of the time in our configuration. We have moved to a world where test results aren’t binary, pass or fail, but are better classified with a race percentage.

This is a problem we’ve been mulling over for nearly a year, and the solution that emerged is ElasticRecheck, a toolchain that uses Elastic Search over our test logs to check new failures against known failures. While finding a “fingerprint” for a failure is still a manual step, it was still of dramatic benefit for the release process. It got us out of thinking that there were only a couple of race conditions we were hitting, and into realizing there were dozens of very specific races, each with their own fix. It also gave us a systematic way of determining which race conditions were most impacting us, so they could be prioritized and fixed.

This work was spearheaded by Joe Gordon and Matt Treinish, and leveraged some background work that Clark Boylan and I had done early in the cycle. ElasticRecheck is exciting enough technology all by itself that it deserves its own detailed dive. But that is for another day.
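To give a flavor of what a fingerprint is without that deep dive: it is just a search expression over the fields of the indexed job logs, along the lines of this made-up example (the message text and field names are illustrative):

message:"Timed out waiting for server to become ACTIVE" AND filename:"console.html" AND build_status:"FAILURE"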

And many more…

These are just some of the sexiest highlights from the Havana release on the quality front.

The number of tests in Tempest that we run on every proposed patch has risen from 800 to 1300 during the cycle. This included new scenarios and a massive enhancement of coverage across all our services. 100 different developers contributed to Tempest during the Havana release (up from 60 in the Grizzly release), enhancing our integration suite. We’ve got a new stress framework which can provide load generation to burn in your cloud, which I expect will make an appearance in our gate during Icehouse.

The point being, lots of people, from lots of places, contributed heavily to make the Havana release the most solid release we’ve ever had from OpenStack. They did this not just with new features that make for good press releases, but also with contributions to the overall system that validates our software not once a day, not even once an hour, but on every single proposed patch.

So to everyone that contributed in this extraordinary effort: THANK YOU!

And I look forward, excitedly, to what we’ll create for the Icehouse release.

 

Gerrit queries to avoid OpenStack review overload

As with many OpenStack core reviewers, my review queue can be completely overwhelming, often 300 – 400 active reviews that I have +2 / -2 authority on. It’s really easy to get discouraged on a list that big. Fortunately there are ways to trim that down.

Gerrit provides a simple query language to select which reviews you see, using the query bar in the top right of the page.

The way this works is by adding criteria into the search box, which by default is ANDed together to get the final results. In the process these queries change the URL for Gerrit, so you can bookmark the resultant queries for easy access later.

Restricting to Single Project (and pulling your own stuff)

This query is basically what you get when you click on a project link:

status:open project:openstack/tempest

Nothing special, but you can go one step further by removing yourself from the list of reviews:

status:open project:openstack/tempest -owner:sdague@linux.vnet.ibm.com

This also demonstrates that we can have both positive criteria and negative criteria.
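Because the query is reflected in the URL, the result can be bookmarked directly; with the Gerrit version review.openstack.org was running at the time, the first query above turns into something roughly like:

https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z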

Little Lost Projects (don’t lose the little ones)

In addition to having +2 on nova, devstack, tempest, I’ve got it on a bunch of smaller projects, which I often forget I need to go review. You can build a single query that has all your little lost projects in a single list:

status:open (project:openstack-dev/hacking OR project:openstack-dev/grenade)

No Objections

You can also filter based on votes in the various columns. It’s not nearly as detailed as I’d like, but it is still useful. I have a basic query for No Objections on most projects that I review which looks something like this:

status:open project:openstack/tempest -Verified-1 -CodeReview-1 -CodeReview-2

This removes all reviews that have a current -1 in the Verified column, as well as any with a -1 or -2 in the CodeReview column. So patches with negative feedback are dropped from view. The top of your review list may contain patches that haven’t cleared CI yet, but that’s easy to see. There might also be Jenkins -2 reviews in this list, but gate-failed merges can usually use extra eyes.

I consider this a base list of patches that there is no reason I shouldn’t be reviewing.

Potential Merges

I’m typically up and at my computer at 7am EST, which is often a very slow time for zuul. So one of the things I look for is code that only requires one more +2 to merge on projects like Nova. Many of these are easy-to-review fixes, and getting them through clears the decks before the queue gets busy in the afternoon.

status:open -Verified-1 CodeReview+2 -CodeReview-1 -CodeReview-2 (project:openstack/nova OR project:openstack/python-novaclient)

Like the last one, we are filtering out all patches with negative feedback, but also requiring that there is an active +2 on the patch. I also make sure to do this for both nova and python-novaclient, which often gets lost in the noise.

Lost Patches

Especially in Nova it’s easy for a patch to get lost, as there are so many of them. I define lost as a patch that’s passed CI, but has no feedback in code review.

status:open -Verified-1 Verified+1 -CodeReview+2 -CodeReview+1 -CodeReview-1 -CodeReview-2 (project:openstack/nova OR project:openstack/python-novaclient) branch:master

These patches are often from newer folks on the project, and as such often need more time, so I typically only go after lost patches if I know I can set aside a solid hour for them. However, I try hard to get to this query at least once a week, to make sure things don’t get fully lost: a -1 will give the patch originator feedback to work on, and a +2 will make it far more likely to get the attention of other core reviewers when they are looking for mergeable code.

Experimenting with your own

The gerrit query language is somewhat limited (full docs are online), so it can’t do everything I’d like, but even just these few slices make it easier to get into a certain mindset for reviewing different slices of code. I have a toolbar folder full of bookmarks for these slices on different projects to do just that.

If you have other gerrit queries you regularly use, please leave a comment. Would love to see the ways other folks optimize gerrit for their workload.

OpenStack Infrastructure Bootcamp

It was a cool week for OpenStack gatherings. Down in Washington DC an OpenStack Security Book Sprint was happening, while up in New York City, 20 of us were gathered for an OpenStack Infrastructure Bootcamp.


Why do an infrastructure bootcamp? OpenStack, as a project, is really breaking some interesting new ground when it comes to software process flow and continuous integration. Unlike other projects, which test after code has landed in upstream master, we’ve got this incredible pre-merge test system that ensures that upstream master won’t be broken. It’s a system you need when you have over 550 contributors during a six month cycle. This is something beyond Continuous Integration as people normally think about it, though we realized we’re still quite lacking the words to describe it concisely.

This bootcamp was a great chance to go through that, in detail, and expose some of the areas where more contributors are needed to accelerate the project even further. We had all the “coremudgeons” of OpenStack infrastructure (Monty, Jim, Clark, and Jeremy), folks like myself that have landed some patches, or helped with specific efforts, and folks that were new to the whole thing, and just wanted to learn. Some of this I’d seen before, other bits I saw for the first time, and the whole system now makes more sense in my own head.

There were dinner and drinks after day one (the only day I could attend, sadly), and further ideas for improving the whole system flowed over beer, wine, food, and good company. I was struck again, during all of this, by just how amazing a community OpenStack is. We got 20 people together not to discuss or plan out features of OpenStack, but features and improvements for the systems that facilitate OpenStack development. The kinds of things we’re working towards are as advanced as semi-automatic failure correlation on build logs, to find statistically infrequent race conditions upstream, instead of ever letting them hit a user in production. Awesome stuff.

Extra special thanks to Monty Taylor for pulling this together. It was no small task, and this wouldn’t have been possible without all his hard work on logistics to make it happen.

How an Idea becomes a Commit in OpenStack

My talk from the OpenStack summit is now up on youtube, where I walked people through the process of getting your idea into OpenStack. A big part of the explanation is what’s going on behind the scenes with code reviews and our continuous integration system.

I’m hoping it pulls away some of the mystery of the process, and provides a more gentle on ramp to everything for new contributors. I’ll probably be giving some version of this again at future events, so feedback (here or on youtube) is appreciated.

The OpenStack Gate

The OpenStack project has a really impressive continuous integration system, which is one of its core strengths as a project. Every change proposed to our gerrit review system is subjected to a battery of tests, one that has grown dramatically with time, and after formal review by core contributors we run them all again before the merge.

These tests take on the order of 1 hour to run on a commit, which would make you immediately think the most code OpenStack could merge in a day is 24 commits. So how did Nova itself manage to merge 94 changes since Monday (not to mention all the other projects, which adds up to ~200 in 3 days)? The magic behind this is Zuul, the gatekeeper.

Zuul is a queuing system for CI jobs, written and maintained by the OpenStack infrastructure team. It does many cool things, but what I want to focus on is the gate queue. When the gate queue is empty (yes, it does happen sometimes), the job is simple: add a new commit, run the tests, and we’re off. What happens if there are already 5 jobs ahead of you in the gate? Let’s take a concrete example from nova.

Speculative Merge

By the time a commit has gotten this far, it’s already passed the test suites at least once, and has had at least 2 core contributors sign off on the change in code review. So Zuul assumes everything ahead of the change in the gate will succeed, and starts the tests immediately, cherry-picking this change on top of everything that’s ahead of it in the queue. For example, if changes A, B, and C enter the gate in that order, A is tested alone, B is tested with A applied, and C is tested with A and B applied, all in parallel.

That means merge time on the gate is effectively O(1): merging 10 changes takes about the same wall-clock time as merging 1. If the queue gets too big, we do eventually run out of devstack nodes, so the ability to run tests is not strictly constant time. On the run-up to grizzly-3, both of the cloud providers (HP and Rackspace) which contribute these VMs provided some extra quota to the OpenStack team to help keep things moving. So we had an elastic burst of OpenStack CI onto additional OpenStack public cloud resources, which is just fun to think about.

Speculation Can Fail

Of course, speculation can fail. Maybe change 3 doesn’t merge because something goes wrong in the tests. If that happens we kick the change out of the queue, and all the changes behind it have to be reset to pull change 3 out of the speculation. This is the dreaded gate reset: when gate resets happen, all the time spent on speculative tests behind the failure is lost, and the jobs need to restart.

Speculation failures largely fall into a few core classes:

Jenkins crashes – it doesn’t happen often, but Jenkins is software too, and OpenStack CI tends to drive software really hard, so we force out edge cases everywhere.

Upstream service failures – we try to isolate ourselves from upstream failures as much as possible. Our git trees pull from our gerrit, not directly from github. Our apt repository is a Rackspace local mirror, not generically upstream. And the majority of pip python packages come from our own proxy server. But if someone adds a new python dependency, or a new version of one is released and we don’t yet have it cached, we pass through to pypi for that pip install. On Tuesday pypi converted from HTTP to HTTPS and didn’t fully grok the load implications, which broke OpenStack CI (as well as lots of other python developers) for a few hours while pypi was effectively down from load.

Transient OpenStack bugs – OpenStack is complicated software, 7 core components interacting with each other asynchronously over REST web services. Each core component being a collection of daemons that interact with each other asynchronously. Sometimes, something goes wrong. It’s a real bug, but only shows up under very specific timing and state conditions. Because OpenStack CI runs so many tests every day (OpenStack CI may be one of the largest creators of OpenStack guests in the world every day), very obscure edge and race conditions can be exposed in the system. We try to track these as recheck bugs, and are making them high priority to address. By definition they are hard to track down (they expose themselves on maybe 1 out of 1000 or fewer test runs), so the logs captured in OpenStack CI are the tools to get to the bottom of these.

Towards an Even Better Gate

In my year working on OpenStack I’ve found the unofficial motto of the project to be “always try to make everything better”. Continuous improvement is not just left to the code, and the tests, but the infrastructure as well.

We’re trying to get more urgency and eyes on the transient failures, coming up with ways to discover the patterns from the 1 in 1000 fails. After you get two or three that fail in the same way it helps triangulate the core issue. Core developers from all the projects are making these high priority items to fix.

On the upstream service failures, the OpenStack infrastructure team already has proxies sitting in front of many of the services, but the pypi outage showed we probably need something even more robust to handle that kind of upstream service outage, possibly rotating between pypi mirrors in the fall-through case, or a better proxy model. The team is already actively exploring solutions to prevent that from happening again.

As always, everyone is welcomed to come help us make everything better. Take a look at the recheck bugs and help us solve them. Join us on #openstack-infra and help with Zuul. Check out what the live Zuul queue looks like. All the code for this system is open source, and available under either the openstack or openstack-infra github accounts. Patches are always welcome!