Category Archives: Software

Maize that fixes its own Nitrogen

For thousands of years, people from Sierra Mixe, a mountainous region in southern Mexico, have been cultivating an unusual variety of giant corn. They grow the crop on soils that are poor in nitrogen—an essential nutrient—and they barely use any additional fertilizer. And yet, their corn towers over conventional varieties, reaching heights of more than 16 feet.

A team of researchers led by Alan Bennett from UC Davis has shown that the secret of the corn’s success lies in its aerial roots—necklaces of finger-sized, rhubarb-red tubes that encircle the stem. These roots drip with a thick, clear, glistening mucus that’s loaded with bacteria. Thanks to these microbes, the corn can fertilize itself by pulling nitrogen directly from the surrounding air.

Source: The Indigenous Mexican Corn That Uses Air as Fertilizer – The Atlantic

Take 1: Holy crap this is cool. Corn is a huge staple grain, and it requires a lot of off-farm inputs to grow because it takes a lot of nutrients out of the ground.

Take 2: This maize matures in 8 months, instead of 3 months for commercial corn. Interesting. Dr. Sarah Taber pointed out on Twitter that this is a really critical point. Nitrogen fixation takes a lot of energy, and that energy has to come from somewhere. Modern varieties of maize may have had this trait bred out of them for a reason, so that they put their energy into sugar and maturation instead of into the ground. It may not be possible to keep this trait and have the maize mature any faster.

This is important, because the headlines for most articles on this make it sound like we’ve solved a hard problem in farm science and corn won’t need fertilizer in the future. That’s definitely not what the science says.

Take 3: The science behind verifying this is kind of amazing. You can’t tag nitrogen atoms to prove where they are coming from, so the team used 5 different independent methods, each providing circumstantial evidence that the maize is actually doing this.

Take 4: The IP generated by this goes into the public trust. This is done under the Nagoya Protocol to address indigenous peoples’ very real concerns about bio-piracy. Good on them!

Take 5: The url of the Atlantic piece is https://www.theatlantic.com/science/archive/2018/08/amaizeballs/567140/. Yes, they really did go there.

When algorithms surprise us

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.

This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

Source: Letting neural networks be weird • When algorithms surprise us

There are so many really interesting examples she has collected here, and they show us the power and danger of black boxes. In a lot of ways machine learning is just an extreme case of all software. People tend to write software on an optimistic path, and ship it after it looks like it’s doing what they intended. When it doesn’t, we call that a bug.

The difference between traditional approaches and machine learning is that debugging machine learning is far harder. You can’t just put in an extra if condition, because the logic to get an answer isn’t expressed that way; it’s expressed in 100,000 weights on a 4-layer convolutional network. Which means QA is much harder, and machine learning is far more likely to surprise you with unexpected wrong answers on edge conditions.

MQTT, Kubernetes, and CO2 in NY State

Back in November we decided to stop waiting for our Tesla Model 3 (ever-changing estimates) and bought a Chevy Bolt EV (which we could do right off the lot). A week later we had a level 2 charger installed at home, and a work order in for a time-of-use meter. Central Hudson’s current time-of-use peak period is just 2 – 7pm on weekdays, and everything else is considered off peak. That’s a very easy window to avoid charging in, but is avoiding it actually optimal, especially if you are trying to limit the CO2 footprint of your electricity? How would we find out?

The NY Independent System Operator (ISO) generates between 75% and 85% of the electricity used in the state at any given time. For the electricity they generate, they provide some very detailed views about what is going on.

There is no public API for this data, but they do publish CSV files at 5 minute resolution on a public site that you can ingest. For the current day they are updated every 5 to 20 minutes, so you can get a near real time view of the world. That shows a much more complicated mix of energy demand over the course of the day, which isn’t just about avoiding the 2 – 7pm window.
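
For reference, here is a minimal sketch of what that ingestion can look like in Python. The URL pattern and column layout below are my assumptions about the public NYISO CSV site, so verify them against the actual files before relying on this.

import csv
import io
from datetime import date

import requests

# Assumed URL pattern for the real-time fuel mix CSVs; check the public
# NYISO site for the exact path and the other data sets that are available.
FUEL_MIX_URL = "http://mis.nyiso.com/public/csv/rtfuelmix/{day}rtfuelmix.csv"


def fetch_fuel_mix(day=None):
    """Download a day's real-time fuel mix CSV and return the rows as dicts."""
    day = (day or date.today()).strftime("%Y%m%d")
    resp = requests.get(FUEL_MIX_URL.format(day=day), timeout=30)
    resp.raise_for_status()
    return list(csv.DictReader(io.StringIO(resp.text)))


if __name__ == "__main__":
    # Print the most recent handful of rows (timestamp, fuel category, MW).
    for row in fetch_fuel_mix()[-10:]:
        print(row)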

Building a public event stream

With my upcoming talk at IndexConf next week on MQTT, this jumped out as an interesting demonstration: turn these public polling data sets into a live MQTT stream, and add some calculation on top to estimate the CO2 currently emitted per kWh. The entire system is written as a set of microservices on IBM Cloud running in Kubernetes.

The services are as follows:

  • ny-power-pump – a polling system that is looking for newly published content and publishing it to an MQTT bus (a rough sketch of this loop follows the list)
  • ny-power-mqtt – A mosquitto MQTT server (exposed at mqtt.ny-power.org). It can be anonymously read by anyone
  • ny-power-archive – An MQTT client that’s watching the MQTT event stream and sending data to InfluxDB for time series calculations. It also exposes recent time series as additional MQTT messages.
  • ny-power-influx – the InfluxDB time series database.
  • ny-power-api – serves up a sample web page that runs an MQTT-over-websockets bit of JavaScript (available at http://ny-power.org)
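
To make the pump’s job concrete, here is a rough sketch of that polling-and-publishing loop, reusing the fetch_fuel_mix helper from the CSV example earlier. The broker host, topic names, and payload format are illustrative choices for this sketch, not necessarily what ny-power-pump actually uses.

import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)   # placeholder broker host
client.loop_start()

seen = set()

while True:
    for row in fetch_fuel_mix():    # see the CSV polling sketch above
        key = (row["Time Stamp"], row["Fuel Category"])
        if key in seen:
            continue
        seen.add(key)
        topic = "ny-power/upstream/fuel-mix/%s" % row["Fuel Category"]
        # Publish retained so late subscribers immediately see the last value.
        client.publish(topic, json.dumps(row), retain=True)
    time.sleep(300)                 # the source only updates every 5 to 20 minutes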

Why MQTT?

MQTT is a lightweight messaging protocol built around a publish / subscribe server. It’s extremely popular in the Internet of Things space because of how simple the protocol is, which lets it be embedded in microcontrollers like an Arduino.

MQTT has the advantage of being something you can just subscribe to, then take action only when interesting information arrives. For a slow-changing data stream like this, giving applications access to an open event stream lets them start doing something useful more quickly. It also drastically reduces network traffic: instead of constantly downloading and comparing CSV files, the application gets a few bytes only when something relevant changes.
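
As a sketch of how little code a consumer needs, here is a minimal paho-mqtt subscriber pointed at the public broker mentioned above. The topic filter is a guess at the hierarchy; subscribing to "#" is an easy way to explore what is actually published.

import paho.mqtt.client as mqtt


def on_connect(client, userdata, flags, rc):
    # Subscribe to everything under the assumed ny-power topic tree.
    client.subscribe("ny-power/#")


def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt.ny-power.org", 1883)
client.loop_forever()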

The Demo App

The demo app shows the current instantaneous fuel mix, as well as the estimated CO2 per kWh being emitted. That estimate is done through a set of simplifying assumptions based on 2016 historic data (explained here; any better assumptions would be welcomed).
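
The calculation itself boils down to a generation-weighted average of per-fuel emission factors. Here is a back-of-the-envelope sketch; the factors below are placeholder numbers for illustration, not the values actually derived from the 2016 data.

# Placeholder emission factors in lbs CO2 per kWh, keyed by fuel category.
LBS_CO2_PER_KWH = {
    "Natural Gas": 0.9,
    "Dual Fuel": 1.0,
    "Other Fossil Fuels": 2.0,
    "Nuclear": 0.0,
    "Hydro": 0.0,
    "Wind": 0.0,
    "Other Renewables": 0.0,
}


def co2_per_kwh(fuel_mix_mw):
    """Weighted average intensity for a dict of fuel category -> MW."""
    total_mw = sum(fuel_mix_mw.values())
    if not total_mw:
        return 0.0
    weighted = sum(mw * LBS_CO2_PER_KWH.get(fuel, 0.0)
                   for fuel, mw in fuel_mix_mw.items())
    return weighted / total_mw


print(co2_per_kwh({"Natural Gas": 4000, "Nuclear": 5000, "Hydro": 3000}))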

The demo app also includes an MQTT console, where you can see the messages coming in that are feeding it as well.

The code for the Python applications running in the services is open source here. The code for deploying the microservices will be open sourced in the near future, after some terrible hardcoding is removed (so others can more easily replicate it).

The Verdict

While NY State does have variability in its fuel mix, especially depending on how much wind is blowing, there is a pretty good fixed point: “finish charging by 5am”. That’s when natural gas generation ramps up to support people waking up in the morning. Completing charging before that means the grid is largely Nuclear, Hydro, and whatever Wind is available that day, with Natural Gas filling in some gaps.

Once I got that answer, I set my departure charging schedule in my Chevy Bolt. If the car had a more dynamic charge API, you could do better, and start charging once the CO2 intensity flatlined around 1am, or dropped below a certain threshold.
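
Purely as a thought experiment, if such an API existed, the threshold logic could be as simple as listening to the computed CO2 topic and toggling the charger. Everything here is hypothetical: the topic name, the payload field, and the start/stop hooks are stand-ins for an API the Bolt does not actually expose.

import json

import paho.mqtt.client as mqtt

THRESHOLD = 0.35   # arbitrary lbs CO2 / kWh cutoff for this sketch


def start_charging():
    print("would tell the car to start charging")   # hypothetical hook


def stop_charging():
    print("would tell the car to stop charging")    # hypothetical hook


def on_message(client, userdata, msg):
    co2 = json.loads(msg.payload)["co2_lbs_per_kwh"]   # assumed payload field
    if co2 < THRESHOLD:
        start_charging()
    else:
        stop_charging()


client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.ny-power.org", 1883)
client.subscribe("ny-power/computed/co2")   # assumed topic name
client.loop_forever()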

Learn more at IndexConf

On Feb 22nd I’ll be diving into MQTT the protocol, and applications like this one at IndexConf in San Francisco. If you’d love to discuss more about turning public data sets into public event streams with the cloud, come check it out.

Python functions on OpenWhisk

Part of the wonderful time I had at North Bay Python was also getting to represent IBM on stage for a few minutes as part of our sponsorship of the conference. The thing I showed during those few minutes was writing some Python functions running in OpenWhisk on IBM’s Cloud Functions service.

A little bit about OpenWhisk

OpenWhisk is an Apache Foundation open source project to build a serverless / function-as-a-service environment. It uses Docker containers as the foundation, spinning up either predefined or custom named containers, running them to completion, then exiting. It was started before Kubernetes, so it has its own Docker orchestration built in.

In addition to just the runtime, it also has pretty solid logging and interactive editing through the web UI. This becomes critical when you do anything more than trivial with cloud functions, because the execution environment looks very different from your laptop.

What are Cloud Functions good for?

Cloud Functions are really good when you have code that you want to run after some event has occurred, and you don’t want to maintain a daemon sitting around polling or waiting for that event. A good concrete instance of this is Github Webhooks.

If you have a repository where you’d like some things done automatically on a new issue or PR, doing it with Cloud Functions means you don’t need to maintain a full system just to run a small bit of code on these events.

They can also be used kind of like a web cron, so that you don’t need a full vm running if there is just something you want to fire off once a week to do 30 seconds of work.

Github Helpers

I wrote a few example uses of this for my open source work. Because my default mode for writing source code is open source, I have quite a few open source repositories on Github. They are all under very low levels of maintenance. That’s a thing I know, but others don’t. So instead of having PRs just sit in the void for a month, I thought it would be nice to auto-respond to folks (especially new folks) with the state of the world.

#
# main() will be invoked when you Run This Action.
#
# @param Cloud Functions actions accept a single parameter, which must be a JSON object.
#
# @return The output of this action, which must be a JSON object.
#

import github
from openwhisk import openwhisk as ow


def thank_you(params):
    p = ow.params_from_pkg(params["github_creds"])
    g = github.Github(p["accessToken"], per_page=100)

    # Count how many issues the reporter has opened on this repo before,
    # so that first-time reporters get a slightly different greeting.
    repo = g.get_repo(params["repository"]["full_name"])
    name = params["sender"]["login"]
    user_issues = repo.get_issues(creator=name)
    num_issues = len(list(user_issues))

    issue = repo.get_issue(params["issue"]["number"])

    if num_issues < 3:
        comment = """
I really appreciate finding out how people are using this software in
the wide world, and people taking the time to report issues when they
find them.
I only get a chance to work on this project on the weekends, so please
be patient as it takes time to get around to looking into the issues
in depth.
"""
    else:
        comment = """
Thanks very much for reporting an issue. Always excited to see
returning contributors with %d issues created. This is a spare time
project so I only tend to get around to things on the weekends. Please
be patient for me getting a chance to look into this.
""" % num_issues

    issue.create_comment(comment)


def main(params):
    # Only respond when an issue is first opened; ignore other webhook actions.
    action = params["action"]
    if action == "opened":
        thank_you(params)
        return { 'message': 'Success' }
    return { 'message': 'Skipped invocation for %s' % action }

Pretty basic: it responds back within a second or two of folks posting an issue, telling them what’s up. While you can do a lightweight version of this with GitHub’s native templates, using a cloud functions platform lets you be more specific to individuals based on their previous contribution rates. You can also see how you might extend it to do different things based on the content of the PR itself.
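
For instance, a hypothetical triage helper might pick a different reply based on the PR body. The payload fields follow the GitHub webhook format, but the keyword checks and the messages are just illustrations.

def triage_pr(params):
    """Pick a reply based on the pull request body (illustrative only)."""
    pr = params.get("pull_request", {})
    body = (pr.get("body") or "").lower()

    if not body:
        return ("Could you add a short description of what this change does "
                "and how you tested it? That makes review much faster.")
    if "typo" in body or "docs" in body:
        return "Thanks! Doc fixes are easy to review, I'll get to this soon."
    return "Thanks for the contribution, I'll take a look over the weekend."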

Using a Custom Docker Image

IBM’s Cloud Functions provides a set of docker images for different programming languages (Javascript, Java, Go, Python2, Python3). In my case I needed more content than was available in the Python3 base image.

The entire system runs on Docker images, so extending those is straightforward. Here is the Dockerfile I used to do that:

# Dockerfile for example whisk docker action
FROM openwhisk/python3action

# add package build dependencies
RUN apk add --no-cache git

RUN pip install pygithub

RUN pip install git+git://github.com/sdague/python-openwhisk.git

This builds on the base image and installs 2 additional Python libraries: PyGithub, to make GitHub API access (especially paging) easier, and a utility library I put up on GitHub to keep from repeating code that interacts with the OpenWhisk environment.

When you create your actions in Cloud Functions, you just have to specify the docker image instead of language environment.

Weekly Emails

My spare time open source work mostly ends up falling between the hours of 6 – 8am on Saturdays and Sundays, when I’m awake before the rest of the family. One of the biggest problems is figuring out what I should look at then, because if I spend an hour figuring that out, there isn’t much time left to do anything that requires code. So I set up 2 weekly emails to myself using Cloud Functions.

The first email looks at all the projects I own, and provides a list of all the open issues & PRs for them. These are issues coming in from other folks, that I should probably respond to, or make some progress on. Even just tackling one a week would get me to a zero issue space by the middle of spring. That’s one of my 2018 goals.

The second does a keyword search on Home Assistant’s issue tracker for components I wrote, or that I run in my house that I’m pretty familiar with. Those are issues that I can probably meaningfully contribute to. Home Assistant is a big enough project now, that as a part time contributor, finding a narrower slice is important to getting anything done.
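
The heart of both emails is a couple of PyGithub queries, roughly like the sketch below. The repository owner, keywords, and result limits are illustrative, and actually formatting and sending the email is left out.

import github

g = github.Github("<access token>")   # placeholder credential

report = []

# Email 1: everything open on my own repositories (PRs show up as issues too).
for repo in g.get_user().get_repos():
    for issue in repo.get_issues(state="open"):
        report.append("%s#%d: %s" % (repo.name, issue.number, issue.title))

# Email 2: Home Assistant issues matching components I care about.
for keyword in ("mqtt", "zwave"):   # illustrative keywords
    results = g.search_issues(
        "repo:home-assistant/home-assistant state:open %s" % keyword)
    for issue in results[:10]:
        report.append("HA #%d: %s" % (issue.number, issue.title))

print("\n".join(report))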

Those show up at 5am in my Inbox on Saturday, so it will be the top of my email when I wake up, and a good reminder to have a look.

The Unknown Unknowns

This was my first dive down the function-as-a-service rabbit hole, and it was a very educational one. The biggest challenge I had was getting into a workflow of iterative development. The execution environment here is pretty specialized, including a bunch of environmental setup.

I did not realize how truly valuable a robust web IDE and a detailed log server are in these environments. As someone who would typically just run a VM and put some code under cron, or run a daemon, I’m used to keeping all my normal tools. But the trade-off of getting rid of a server that you need to keep patched is sometimes worth it. I think that as we see a lot of new entrants into the function-as-a-service space, that is going to be what makes or breaks them: how good their tooling is for interactive debugging and iterative development.

Replicate and Extend

I’ve got a pretty detailed write up in the README for how all this works, and how you would replicate this yourself. Pull requests are welcomed, and discussions of related things you might be doing are as well.

This is code that I’ll continue to run to make my github experience better. The pricing on IBM’s Cloud Functions means that this kind of basic usage works fine at the free tier.

Slow AI

Charlie Stross’s keynote at the 34th Chaos Communications Congress in Leipzig is entitled “Dude, you broke the Future!” and it’s an excellent, Strossian look at the future we’re barreling towards, best understood by a critical examination of the past we’ve just gone through.

Stross is very interested in what it means that today’s tech billionaires are terrified of being slaughtered by psychotic runaway AIs. Like Ted Chiang and me, Stross thinks that corporations are “slow AIs” that show what happens when we build “machines” designed to optimize for one kind of growth above all moral or ethical considerations, and that these captains of industry are projecting their fears of the businesses they nominally command onto the computers around them.

Charlie Stross’s CCC talk: the future of psychotic AIs can be read in today’s sociopathic corporations

The talk is an hour long, and really worth watching the whole thing. I especially loved the setup explaining the process of writing believable near term science fiction. Until recently, 90% of everything that would exist in 10 years already did exist, the next 9% you could extrapolate from physical laws, and only really 1% was stuff you couldn’t imagine. (Stross makes the point that the current ratios are more like 80 / 15 / 5, as evidenced by Brexit and related upheavals, which makes his work harder.)

It matches well with Clay Shirky’s premise in Here Comes Everybody, that the first goal of a formal organization is its own future existence, even if its stated first goal is something else.

Syncing Sieve Rules in Fastmail, the hard way

I’ve been hosting my email over at Fastmail for years, and for the most part the service is great. The company understands privacy, contributes back to open source, and is incredibly reliable. One of the main reasons I moved off of Gmail was that its mail filtering system was not fine-grained enough to deal with my email stream (especially open source project emails). Fastmail supports sieve, which lets you write quite complex filtering rules. There was only one problem: syncing those rules.

My sieve rules are currently just north of 700 lines. Anything that complex is something that I like to manage in git, so that if I mess something up, it’s easy to revert to known good state.

No API for Sieve

Fastmail does not support any kind of API for syncing Sieve rules. There is an official standard for this, called MANAGESIEVE, but the technology stack Fastmail uses doesn’t support it. I’ve filed tickets over the years that mostly got filed away as future features.

When I first joined Fastmail, their website was entirely classic html forms. Being no slouch, I had a python mechanize script that would log in as me, then navigate to the upload form, and submit it. This worked well for years. I had a workflow where I’d make a sieve change, sync via script, see that it generated no errors, then commit. I have 77 commits to my sieve rules repository going back to 2013.

But, a couple of years ago the Fastmail team refreshed their user interface to a JavaScript-based UI (called Overture). It’s a much nicer UI, but it means it only works in a JavaScript-enabled browser. Getting to the form box where I can upload my sieve rules is about 6 clicks, and I stopped really tweaking the rules regularly because of the friction of updating them through clear / copy / paste.

Using Selenium for unintended purposes

Selenium is a pretty amazing web testing tool. It gives you an API to drive a web browser remotely. With recent versions of Chrome, there is even a headless Chrome driver, so you can do this without popping up a graphics window. You can drive this all from Python (or your language of choice).

An offhand comment by Nibz about using Selenium for something no one intended got me thinking: could I get this to do my synchronization?

Answer, yes. Also, this is one of the goofiest bits of code that I’ve ever written.

#!/usr/bin/env python3

import configparser
import os
import sys

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

config = configparser.ConfigParser()
config.read("config.ini")

chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(executable_path=os.path.abspath("/usr/local/bin/chromedriver"),
                          chrome_options=chrome_options)

driver.get("https://fastmail.fm")

timeout = 120
try:
    element_present = EC.presence_of_element_located((By.NAME, 'username'))
    WebDriverWait(driver, timeout).until(element_present)

    # Send login information

    user = driver.find_element_by_name("username")
    passwd = driver.find_element_by_name("password")
    user.send_keys(config["default"]["user"])
    passwd.send_keys(config["default"]["pass"])
    driver.find_element_by_class_name("v-Button").click()

    print("Logged in")

    # wait for login to complete
    element_present = EC.presence_of_element_located((By.CLASS_NAME, 'v-MainNavToolbar'))
    WebDriverWait(driver, timeout).until(element_present)

    # click settings menu to make elements visible
    driver.find_element_by_class_name("v-MainNavToolbar").click()

    # And follow to settings page
    driver.find_element_by_link_text("Settings").click()

    # Wait for settings page to render, oh Javascript
    element_present = EC.presence_of_element_located((By.LINK_TEXT, 'Rules'))
    WebDriverWait(driver, timeout).until(element_present)

    # Click on Rules link
    driver.find_element_by_link_text("Rules").click()

    # Click on edit custom sieve code
    element_present = EC.presence_of_element_located((By.LINK_TEXT, 'Edit custom sieve code'))
    WebDriverWait(driver, timeout).until(element_present)
    driver.find_element_by_link_text("Edit custom sieve code").click()

    print("Editing")

    # This is super unstable, I hate that we have to go by webid
    element_present = EC.presence_of_element_located((By.CLASS_NAME, 'v-EditSieve-rules'))
    WebDriverWait(driver, timeout).until(element_present)

    print("Find form")
    elements = driver.find_elements_by_css_selector("textarea.v-Text-input")
    element = elements[-1]

    # Find the submit button
    elements = driver.find_elements_by_css_selector("button")
    for e in elements:
        if "Save" in e.text:
            submit = e

    print("Found form")
    # And replace the contents
    element.clear()

    with open("rules.txt") as f:
        element.send_keys(f.read())

    # Click the Save button to submit the new rules
    submit.click()
    print("Submitted!")

except TimeoutException as e:
    print(e)
    print("Timed out waiting for page to load")
    sys.exit(1)

print("Done!")

Basic Flow

I won’t do a line by line explanation, but there are a few concepts that make the whole thing fall in line.

The first is the use of WebDriverWait. This is an OvertureJS application, which means that clicking parts of the screen triggers an Ajax interaction, and it may be some time before the screen “repaints”. That repaint could be a new page, a change to the existing page, or an element becoming visible. Find a thing, click a thing, wait for the next thing. There is a 5-click interaction before I get to the sieve edit form, then a Save button click to finish it off.
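
That pattern can be condensed into a small helper, something like the sketch below, so each navigation step reads as one line (the script above spells it out inline instead).

from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def wait_then_click(driver, by, locator, timeout=120):
    """Wait for an element to show up in the DOM, then click it."""
    WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located((by, locator)))
    driver.find_element(by, locator).click()


# e.g. wait_then_click(driver, By.LINK_TEXT, "Rules")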

Finding things is important, and sometimes hard. Because this is an OvertureJS application, div ids are pretty much useless, so I stared a lot at the Chrome inspector for what looked like stable classes to find the right things to click on. All of those could change with new versions of the UI, so this is fragile at best. Sometimes you just have to count, like finding the last textarea on the Rules page. Sometimes you have to inspect elements, like looking through all the buttons on a page to find the one that says “Save”.

Filling out forms is done with send_keys, which approximates typing by sending 1 character every few milliseconds. If you run non-headless it makes for an amusing animation. My sieve file is close to 20,000 characters, so it takes more than a full minute to put that content in one character at a time. But at least it’s a machine, so no typos.

The Good and the Bad

The good thing is this all seems to work, pretty reliably. I’ve been running it for the last week and all my changes are getting saved correctly.

The bad: you can’t have 2-factor auth enabled and use this, because unlike IMAP, where you can provision an app password for Fastmail, this is really logging in and pretending to be you clicking through the website and typing. There are no limited users for that.

It’s also slow: between the waits for each page repaint and the character-by-character typing, a full run takes a couple of minutes.

It’s definitely fragile; I’m sure an update to their site is going to break it, and then I’ll be back in the Chrome inspector figuring out how to make it work again.

But, on the upside, this let me learn a more general purpose set of tools for crawling and automating the modern web (which requires javascript). I’ve used this technique for a few sites now, and it’s a good technique to add to your bag of tricks.

The Future

Right now this script is in the same repo as my rules. This also requires setting up the selenium environment and headless chrome, which I’ve not really documented. I will take some time to split this out on github so others could use it.

I would love it if Fastmail would support MANAGESIEVE, or have an HTTP API to fetch / store sieve rules. Anything where I could use a limited app user instead of my full user. I really want to delete this code and never speak of it again, but a couple of years and closed support tickets later, and this is the best I’ve got.

If you know someone in Fastmail engineering and can ask them about having a supported path to programmatically update sieve rules, that would be wonderful. I know a number of software developers who have considered the switch to Fastmail, but stopped when they discovered that updating sieve rules can only be done in the web UI.

Updated (12/15/2017): via Twitter the Fastmail team corrected me that it’s not Angular, but their own JS toolkit called OvertureJS. The article has been corrected to reflect that.


Getting Chevy Bolt Charge Data with Python

Filed under: kind of insane code, be careful about doing this at home.

Recently we went electric, and got a Chevy Bolt to replace our 12-year-old Toyota Prius (which has been, and continues to be, a workhorse). I had a spot in line for a Tesla Model 3, but due to many factors, we decided to go test drive and ultimately purchase the Bolt. It’s a week in and so far so good.

One of the things GM does far worse than Tesla is making its data available to owners. There is quite a lot of telemetry captured by the Bolt, through OnStar, which you can see by logging into their website or app. But, no API (or at least no clear path to get access to the API).

However, it’s the 21st century. That means we can do ridiculous things with software, like use python to start a full web browser, log into their web application, and scrape out data….. so I did that.

The Code

#!/usr/bin/env python

import configparser
import os

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

config = configparser.ConfigParser()
config.read("config.ini")

chrome_options = Options()
# chrome_options.add_argument("--headless")
driver = webdriver.Chrome(executable_path=os.path.abspath("/usr/local/bin/chromedriver"),
                          chrome_options=chrome_options)

driver.get("https://my.chevrolet.com/login")

user = driver.find_element_by_id("Login_Username")
passwd = driver.find_element_by_id("Login_Password")
user.send_keys(config["default"]["user"])
passwd.send_keys(config["default"]["passwd"])
driver.find_element_by_id("Login_Button").click()

timeout = 120
try:
    element_present = EC.presence_of_element_located((By.CLASS_NAME, 'status-box'))
    WebDriverWait(driver, timeout).until(element_present)
    print(driver.find_element_by_class_name("status-box").text)
    print(driver.find_element_by_class_name("status-right").text)
except TimeoutException:
    print("Timed out waiting for page to load")

print("Done!")

This uses selenium, which is a tool used to test websites automatically. To get started you have to install selenium python drivers, as well as the chrome web driver. I’ll leave those as an exercise to the reader.

After that, the process looks a little like one might expect. Start with the login screen, find the fields for user/password, send_keys (which literally acts like typing), and submit.

The My Chevrolet site is an AngularJS site, which seems to have no stateful caching of the telemetry data for the car. Instead, once you log in you are presented with an overview of your car, and the page makes an async call through the OnStar network back to your car to get its data: charge level, charge state, estimated range. The OnStar network is a CDMA network with a proprietary protocol, and it ends up taking at least 60 seconds to return that call.

This means that you can’t just pull data out of the page once you’ve logged in, because the data isn’t there yet; there is a spinner instead. Selenium provides a WebDriverWait class for that, which will wait until an element shows up in the DOM. We can just wait for the status-box to arrive, then dump its text.

The output from this script looks like this:

Current
Charge:
100%
Plugged in(120V)
Your battery is fully charged.
Estimated Electric Range:
203 Miles
Estimated Total Range:
203 Miles
Charge Mode:
Immediate
Change Mode
Done!

Which was enough for what I was hoping to return.

The Future

Honestly, I really didn’t want to write any of this code. I would much rather get access to the GM API and do this the right way. Ideally I’d like using a Chevy Bolt in Home Assistant to be as easy as using a Tesla. With the Chrome inspector, I can see that the inner call is actually returning a very nice JSON structure back to the Angular app. I’ve sent an email to the GM developer program to try to get real access; thus far, black hole.

Lots of caveats on this code. The OnStar link and the My Chevrolet site are sometimes flaky, I don’t know why, so running something like this in a busy loop is probably not a thing you want to do. For about 2 hours last night I just got “there is no OnStar account associated with this vehicle”, which then magically went away. I’d honestly probably not run it more than hourly. I make no claims about the integrity of things like this.

Once you see the thing working, it can be run headless by uncommenting line 18 (the --headless option). Then it can be run on any Linux system, even one without graphics.

Again, this is one of the more ridiculous pieces of code I’ve ever written. It is definitely in a “currently seems to work for me” state, and I don’t expect it to be robust. I make no claims about whether or not it might damage anything in the process, though if logging into a website damages your car, GM has bigger issues.


Triple Bottom Line in Open Source

One of the more thought provoking things that came out of the OpenStack leadership training at Zingerman’s last year, was the idea of the Triple Bottom Line. It’s something I continue to ponder regularly.

The Zingerman’s family of businesses definitely exists to make money; there are no apologies for that. However, money is not the only bottom line they measure themselves against. The full bottom line they’ve defined for themselves is “Great Food, Great Service, Great Finance.” In practice this means you have to ensure that all three are being met, and not sacrifice the food and service just to make a buck.

If you look at Open Source through this kind of lens, a lot of the trade-offs that successful projects make start to make a lot more sense. The TBL for OpenStack would probably be something like: Code, Community, Contributors. Yes, this is about building great code to make a great cloud, but it’s also really critical to grow the community, and to mentor and grow individual contributors as well. Those contributors might stay in OpenStack, or they might go on to use their skills to help other Open Source projects be better in the future. All of these are measures of success.

This was one of the reasons we recently switched the development tooling in OpenStack (DevStack) to using systemd more natively. Not only did it solve a bunch of long-standing technical issues that had really ugly workarounds, but it also meant enhancing our contributors. Systemd and the journal are the default in every new Linux environment now, so skills that our contributors gained working with DevStack would directly transfer to any Linux environment. It would make them better Linux users in any context, not just OpenStack. It also makes the environment easier for people coming from the outside to understand, because it looks more like what they are used to.

While I don’t have enough data to back it up, it feels like this central idea is really important to success in Open Source: “In order to be successful in this project you must learn X, which will be useful in these other contexts outside of the project.” X has to be small enough to be learnable, but also useful in other contexts, so the time invested has larger payoffs. That’s what growing a contributor looks like: they don’t just become better at your project, they become a better developer for everything they touch in the future.

IoT & Home Assistant at OpenWest

I’m thrilled to be talking about the Internet of Things and Home Assistant at the OpenWest conference next week. The talk has come together quite nicely, and I’ll hopefully be giving it in a few more places over the coming year as well. The goal of the talk is to explain why the space is so complex, and why the only real path forward in the short / medium term is an open source hub at the heart of everything.

For those that can’t make it all the way to Utah, there is a trimmed-down article version of it up at opensource.com. The article seems to be doing well, and was #2 on the site for this week.

I will also be forever indebted to Benjamin Walker and his complete throwaway line “this is why we can’t have the internet of nice things” during his New York After Rent series (which is really incredible, and completely unrelated to any of this), which stuck in my brain for months afterwards and became the seed of inspiration for this talk.

Hacking Windmills

Staggs sat in the front seat and opened a MacBook Pro while the researchers looked up at the towering machine. Like the dozens of other turbines in the field, its white blades—each longer than a wing of a Boeing 747—turned hypnotically. Staggs typed into his laptop’s command line and soon saw a list of IP addresses representing every networked turbine in the field. A few minutes later he typed another command, and the hackers watched as the single turbine above them emitted a muted screech like the brakes of an aging 18-wheel truck, slowed, and came to a stop.

Source: Researchers Found They Could Hack Entire Wind Farms | WIRED

In a networked world, you need cybersecurity everywhere, especially when physical access is so easy to get. The BeyondCorp model of not trusting the network is a really good starting place for systems like this.