Tag Archives: openstack

Robert Muth: Better Bash Scripting in 15 Minutes

Better Bash Scripting in 15 Minutes. The tips and tricks below originally appeared as one of Google’s “Testing on the Toilet” TOTT episodes. This is a revised and augmented version.

via Robert Muth: Better Bash Scripting in 15 Minutes.

Some good bits in here. We’ve implemented some of them in devstack, and I think a few more (like erroring on uninitialized variables with set -u, and enforcing double brackets on all conditionals) would be helpful. It also makes me think about things to enforce in bash8.
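To make those two concrete, here is a minimal sketch (the function and branch names are just illustrative, not devstack code):

```bash
#!/usr/bin/env bash
# Fail fast when an unset variable is expanded, instead of silently
# substituting an empty string.
set -u

# Double brackets are a bash builtin: no word splitting of the operands,
# plus pattern and regex matching that single brackets don't offer.
check_branch() {
    local branch=$1
    if [[ $branch == master || $branch =~ ^stable/ ]]; then
        echo "release branch: $branch"
    else
        echo "feature branch: $branch"
    fi
}

check_branch "stable/icehouse"
```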

Why you should be reviewing more OpenStack code

“Read, read, read. Read everything—trash, classics, good and bad, and see how they do it. Just like a carpenter who works as an apprentice and studies the master. Read! You’ll absorb it.”

– William Faulkner

Icehouse 3 is upon us, and as someone who is on a bunch of core review teams, that means a steady drumbeat of people asking how they can get core reviewers to review their code. My standard response has been “make sure you are also reviewing code”.

Why?

Understanding implicit style

While most projects use the hacking program to check for trivial style issues (wrong formatting), there are a lot of other parts of style that exist inside a project. What does a good function look like? When does the project handle exceptions vs. doing checks up front? What does spacing inside functions look like? What “feels” like Nova code, and what feels foreign and odd?

This is like when you are invited to a party at someone’s house for the first time. You walk in the door, and the first thing you do is look at the host and the guests, figure out whether people are wearing shoes in the house, and follow suit if there looks to be a pattern. It’s about being polite and adapting to the local environment.

Because unless you read other people’s code, you’ll never understand these things. There are lots of patches I look at briefly, realize they are in some whacky style so foreign to the code at hand that I don’t have the energy to figure out what the author means, and move on.

Taking load off review teams

As a core reviewer, I currently have about 800 patches that I could +2 or -2. Given the rate of code coming in, that might as well be infinite. And it grows all the time.

By reviewing code, even when you don’t have approval authority, you’ll be helping the review teams weed out patches which aren’t in any way ready to move forward. That’s a huge time savings, and one that I appreciate.

Even if it’s something as simple as making sure authors provide good commit messages, that’s huge, because I’ll completely skip over reviews with commit messages that I can’t understand. The commit message is your opportunity to sell me on why I should spend the next 30 minutes looking at your code. A good commit message, one that is really clear about what problem this code hits, what the solution is, and why this implementation is the right approach, will make me dive in.
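For illustration, here is the shape of a commit message that would get me to dive in; the change, symptoms, and bug reference are entirely made up:

```
Fix stale block device mappings left by racing volume detach

During live migration, the volume attachment is looked up after the
guest has already moved, so a detach that lands mid-migration can leave
a stale block device mapping behind. This shows up as instances that
can never re-attach that volume.

Take the instance lock before the lookup and re-check the attachment
state afterwards. This closes the race window without adding a new API
call, which the alternative (polling the volume service) would require.

Closes-Bug: #<bug number>
```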

A terrible or unclear commit message will probably just make me ignore the code, because if the author didn’t care enough to explain that to me in the commit message, there are probably lots of issues in the code itself. Even if you and I had a conversation about this code last week, don’t assume I remember all of that. That was probably 50 code reviews ago for me, which means the context of that conversation has long since flushed from my brain.

If you review a bunch of code, you’ll understand how these things impact your ability to review, and you’ll naturally adapt how you write commits (including the message) to make the life of a reviewer easier.

Seeing the bigger picture

People tend to start contributing in just one corner of OpenStack, but OpenStack is a big, interconnected project. What you do in one corner can affect the rest of the project. If you aren’t reviewing code and changes happening at other layers of the project, it’s really hard to know how your piece fits into the larger picture.

Changes can look fine in the small, but have a negative impact on the wider project. If you are proactive in reviewing code more broadly you can see some of that coming, and won’t be surprised when a core reviewer -2s you because you were going in a different direction than the rest of the project.

Becoming a better programmer

When I started on the OpenStack project 2 years ago, I hadn’t done Python in a real way for years. My Python was very rusty. But one of the first things I did was start reviewing a bunch of code, especially code by some of the top people in the project.

There are some really smart and really skilled people in the OpenStack project. There are people who have been part of the Python community for 15+ years, people who live and breathe Python. Just reading their code makes you realize some of what can be done with the language, and what the “pythonic” way of doing things is. Nothing is better training for becoming a better Python developer than learning from these folks.

Sometimes you’ll find a real issue, because no one is perfect. Sometimes you’ll find something you don’t understand, and you can leave a comment as a question, which you’ll probably get an answer to. But all of it will be learning. And you will become a better developer.

It does make a difference

I’ll be 100% honest: with 800+ reviews I should be looking at, I play favorites. People that I see contributing a lot on the review side (not just volume, but real quality reviews that save me time) are people whose code I want to review, because they are contributing to the whole of the project, not just their little corner.

So that’s why you should review more code in OpenStack. It will really contribute to the project, make you a better developer, and through all this you’ll find your code naturally aligning better with OpenStack and getting reviewed more often. Realize this is not an overnight fix, but a long-term strategy for aligning with the community and becoming part of it.

OpenStack doesn’t need a leader, it just needs to evolve

Third, and perhaps the best argument against OpenStack needing a leader, is the open nature of the beast itself. It’s precisely because there’s no dominant leader that OpenStack remains so transparent and competitive – everyone’s contributions can be seen by everyone else, and this drives people to do even better.

Most likely, those who say that OpenStack needs a leader do so because of history – previous open-source projects like Java, Linux and Android have all had a ‘dictator’ at the helm, but that doesn’t necessarily mean it’s the best path for OpenStack.

via OpenStack doesn’t need a leader, it just needs to evolve | SiliconANGLE.

If you remember your history, Linux’s leadership and development model was largely dismissed by pundits until it had 15 years of success under its belt. Then it became the gospel for how Open Source projects should run.

But everything evolves over time. It doesn’t really surprise me that the pundits see OpenStack’s leadership model as different, and immediately dismiss it. We’ve got 3.5 years under our belt. Maybe at 5 or 6 everyone will say all Open Source projects need to run like OpenStack.

Which would of course be wrong. While there are certain common threads between different Open Source communities, every community is different. Why? Because communities are made of real people. Real people with different passions, strengths, weaknesses, biases, loves, constraints, and moments of brilliance. This isn’t something you can model with spheroid approximations of upstream developers. Replicating another project’s leadership model might be easy, but in most cases isn’t what your community actually needs.

Are there areas for improvement? Sure. There always are. But improvement is a watchword for OpenStack, something we apply everywhere: to code, to process, to communication.

So I agree, we don’t need a single leader. And the evolution that continues in OpenStack will be a key strength, not a weakness as the project goes forward.

Github vs. Gerrit

Julien Danjou, the project technical lead for the OpenStack Ceilometer project, had some choice words to say about Github pull requests, which resonate very strongly with me:

The pull-request system looks like an incredibly easy way to contribute to any project hosted on Github. You’re a click away from sending your contribution to any software. But the problem is that any worthy contribution isn’t the effort of a single click.

Doing any proper and useful contribution to a piece of software is never done right the first time. There’s a dance you will have to play. A slowly rhythmed back and forth between you and the software maintainer or team. You’ll have to dance it until your contribution is correct and can be merged.

But as a software maintainer, not everybody is going to follow you on this choreography, and you’ll end up with pull-requests you’ll never get finished unless you wrap things up yourself. So the gain from pull-requests here isn’t really bigger than a good bug report in most cases.

This is where the social argument for Github doesn’t hold anymore. As soon as you’re talking about projects bigger than a color theme for your favorite text editor, this feature is overrated.

After working on OpenStack for the last year, I’m completely spoiled by our workflow and how it enables developer productivity. Recently I went back to just using git without Gerrit to try to work on a 4-person side project, and it literally felt like developing in a thick sea of tar.

A system like Gerrit, and pre-merge interactive reviews, lets you build project culture quickly (it’s possible to do it other ways, but I’ve seen Gerrit really facilitate it). The onus is on the contributors to get it right before it’s merged, and they get the feedback to get a patch done the right way. Coherent project culture is one of the biggest factors in attaining project velocity, as then everyone is working towards the same goals, with the same standards.
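For anyone who hasn’t seen the Gerrit side of this, the contributor loop looks roughly like the following, using the standard git-review tool; the repository and branch names are just examples:

```bash
# Start a topic branch for the change.
git clone https://github.com/openstack/nova.git
cd nova
git checkout -b fix-my-bug

# Hack, commit (with a clear commit message), and push to Gerrit.
git commit -a
git review

# Reviewers comment; address the feedback and push a new patch set.
# Amending keeps the Change-Id, so Gerrit tracks it as the same review.
git commit -a --amend
git review
```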

How an Idea becomes a Commit in OpenStack

My talk from the OpenStack summit is now up on YouTube, where I walked people through the process of getting an idea into OpenStack. A big part of the explanation is what’s going on behind the scenes with code reviews and our continuous integration system.

I’m hoping it pulls away some of the mystery of the process and provides a gentler on-ramp to everything for new contributors. I’ll probably be giving some version of this again at future events, so feedback (here or on YouTube) is appreciated.

The OpenStack Gate

The OpenStack project has a really impressive continuous integration system, which is one of its core strengths as a project. Every change proposed to our Gerrit review system is subjected to a battery of tests on each commit, a battery that has grown dramatically over time, and after formal review by core contributors we run all of the tests again before the merge.

These tests take on the order of 1 hour to run on a commit, which might make you think the most code that OpenStack could merge in a day would be 24 commits. So how did Nova alone manage to merge 94 changes since Monday (not to mention all the other projects, which add up to ~200 in 3 days)? The magic behind this is Zuul, the gatekeeper.

Zuul is a queuing system for CI jobs, written and maintained by the OpenStack infrastructure team. It does many cool things, but what I want to focus on is the gate queue. When the gate queue is empty (yes, it does happen sometimes), the job is simple: add a new commit, run the tests, and we’re off. But what happens if there are already 5 jobs ahead of you in the gate? Let’s take a concrete example from Nova.

Speculative Merge

By the time a commit has gotten this far, it’s already passed the test suites at least once, and has had at least 2 core contributors sign off on the change in code review. So Zuul assumes everything ahead of the change in the gate will succeed, and starts the tests immediately, cherry-picking this change on top of everything that’s ahead of it in the queue.

That means that merge time on the gate is O(1): merging 10 changes takes the same time as merging 1 change. If the queue gets too big we do eventually run out of devstack nodes, so the ability to run tests is not strictly constant time. In the run-up to grizzly-3, both of the cloud providers (HP and Rackspace) that contribute these VMs provided some extra quota to the OpenStack team to help keep things moving. So we had an elastic burst of OpenStack CI onto additional OpenStack public cloud resources, which is just fun to think about.
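Conceptually, and glossing over a lot of what Zuul (which is Python, not shell) actually does, the speculative stacking works something like this sketch; gate_queue and start_test_jobs are hypothetical stand-ins:

```bash
# Hypothetical sketch: test every change in the gate on top of everything
# ahead of it, all in parallel, before anything has actually merged.
gate_queue=( "$@" )   # e.g. gerrit change refs, in queue order
base=master
for change in "${gate_queue[@]}"; do
    git checkout -b "speculative-$change" "$base"
    git cherry-pick "$change"                # stacked on the queue ahead of it
    start_test_jobs "speculative-$change" &  # hypothetical helper: kicks off the devstack runs
    base="speculative-$change"               # the next change builds on this state
done
wait  # the hour-long runs overlap, so wall-clock merge time stays roughly flat
```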

Speculation Can Fail

Of course, speculation can fail. Maybe change 3 doesn’t merge because something goes wrong in the tests. If that happens, we kick the change out of the queue, and all the changes behind it have to be reset to pull change 3 out of the speculation. This is the dreaded gate reset: when gate resets happen, all the time spent on speculative tests behind the failure is lost, and the jobs need to restart.
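Continuing the same hypothetical sketch, a gate reset means throwing away every speculative state behind the failure, re-stacking without the failed change, and starting those test runs over; queue_behind, last_good_state, and the helpers are stand-ins, not real Zuul interfaces:

```bash
# Hypothetical sketch: change 3 failed, so everything stacked behind it is
# rebuilt without it and re-tested from scratch (the expensive part).
base="$last_good_state"                       # speculative state just ahead of the failure
for change in "${queue_behind[@]}"; do
    abort_test_jobs "speculative-$change"     # stop the now-invalid runs
    git branch -f "speculative-$change" "$base"
    git checkout "speculative-$change"
    git cherry-pick "$change"                 # re-stack, minus the failed change
    start_test_jobs "speculative-$change" &   # and the hour of testing starts over
    base="speculative-$change"
done
```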

Speculation failures largely fall into a few core classes:

Jenkins crashes – it doesn’t happen often, but Jenkins is software too, and OpenStack CI tends to drive software really hard, so we force out edge cases everywhere.

Upstream service failures – we try to isolate ourselves from upstream failures as much as possible. Our git trees pull from our Gerrit, not directly from Github. Our apt repository is a local Rackspace mirror, not generically upstream. And the majority of the Python packages pip installs come from our own proxy server. But if someone adds a new Python dependency, or a version of one updates and we don’t yet have it cached, we pass through to pypi for that pip install. On Tuesday pypi converted from HTTP to HTTPS and didn’t fully grok the load implications, which broke OpenStack CI (as well as lots of other Python developers) for a few hours when pypi was effectively down from load.

Transient OpenStack bugs – OpenStack is complicated software: 7 core components interacting with each other asynchronously over REST web services, with each core component itself being a collection of daemons that interact with each other asynchronously. Sometimes, something goes wrong. It’s a real bug, but it only shows up under very specific timing and state conditions. Because OpenStack CI runs so many tests every day (OpenStack CI may be one of the largest creators of OpenStack guests in the world every day), very obscure edge and race conditions can be exposed in the system. We try to track these as recheck bugs, and are making them high priority to address. By definition they are hard to track down (they show up on maybe 1 out of 1000 or fewer test runs), so the logs captured in OpenStack CI are the tools to get to the bottom of these.

Towards an Even Better Gate

In my year working on OpenStack I’ve found the unofficial motto of the project to be “always try to make everything better”. Continuous improvement is not just left to the code, and the tests, but the infrastructure as well.

We’re trying to get more urgency and eyes on the transient failures, coming up with ways to discover the patterns in the 1-in-1000 failures. Once you get two or three that fail in the same way, it helps triangulate the core issue. Core developers from all the projects are treating these as high priority items to fix.

On the upstream service failures, the OpenStack infrastructure team already has proxies sitting in front of many of the services, but the pypi outage showed we probably need something even more robust to handle that kind of upstream outage, possibly rotating between pypi mirrors in the fall-through case, or a better proxy model. The team is already actively exploring solutions to prevent that from happening again.
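As a rough illustration of the fall-through idea (not what the infra team actually runs, and the mirror URL is made up), a CI node’s pip could prefer a local mirror, with upstream pypi configured as an extra index so packages the mirror hasn’t cached can still be found:

```
# /etc/pip.conf on a CI node (hypothetical mirror URL)
[global]
index-url = http://pypi.mirror.example.org/simple/
extra-index-url = https://pypi.python.org/simple/
```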

As always, everyone is welcome to come help us make everything better. Take a look at the recheck bugs and help us solve them. Join us on #openstack-infra and help with Zuul. Check out what the live Zuul queue looks like. All the code for this system is open source, and available under either the openstack or openstack-infra Github accounts. Patches are always welcome!

OpenStack Talk at MHVLUG

On Wed, Sept 5th, I’ll be giving the talk on OpenStack at MHVLUG. The last six months working on the project have been really spectacular: a great learning curve, really good community members, and very exciting potential for where the project is going to go. I’m quite looking forward to going back to work next week after this summer holiday, because I can’t wait to get back into the code.

I’ll provide my personal take on current trends in cloud computing, and hopefully spark a lot of in-room discussion. We’ll go from that industry lens to a deeper look at OpenStack. I’m a big believer that, like operating systems, web stacks, and virtualization, the essential infrastructure of cloud computing needs to be open source.

If you are in the Mid Hudson Valley next Wed, come check out my talk.