All posts by Sean Dague

‘The Expanse’ vs Syfy

The current third season of The Expanse will be the space drama’s last one on Syfy. The cable network has decided not to renew the show for a fourth season, with the last episode slated to air in early July. Alcon Television Group, which fully finances and produces the critically praised series, plans to shop it to other buyers.

The Expanse is one of the most well reviewed sci-fi series on TV, with the current third season scoring 100% on Rotten Tomatoes (vs. 95% for Season 2 and 76% for Season 1).

The cancellation decision by Syfy is said to be linked to the nature of its agreement for the series, which only gives the cable network first-run linear rights in the U.S. That puts an extraordinary amount of emphasis on live, linear viewing, which is inherently challenging for sci-fi/genre series that tend to draw the lion’s share of their audiences from digital/streaming.

Source: ‘The Expanse’ Canceled By Syfy After Three Seasons, Will Be Shopped | Deadline

After Dark Matter was cancelled last year, I really wondered if the Expanse would suffer the same fate. The cancellation has more to do with the set of deals Syfy made a few years ago when it was trying to get back into science fiction.

They wanted to get back on the map, but because they had been out of the genre for so long, they mostly made deals where they bought only the first-run television broadcast rights from a production company and left the rest on the table. This let them build a full slate of shows more or less overnight, without having to foot the whole production cost for all of them. Fast forward three years: sci-fi genre shows are doing really well, but mostly on the streaming/digital front, and Syfy gets none of that. So their return on investment on those shows is pretty low compared to the shows they funded fully. In an attempt to raise their ROI, they have been dumping the shows they don’t own in favor of new ones they do, like Krypton.

It all makes sense in a spreadsheet, but it sucks for fans of the genre. I’m hopeful that Alcon will find a new home for the Expanse, because it really is one of the most amazing shows I’ve ever seen, and it massively rewards rewatching. There are still 8 more episodes of Season 3 yet to air, and it’s been one of the best seasons so far. So go buy and watch the Expanse on Amazon or iTunes to show Alcon and any potential broadcast partners that it’s worth their while to support it.

One Space or Two?

Among people who preferred one space, reading speed was about 5 wpm higher with one space. Among people who preferred two spaces, reading speed was about 9 wpm higher with two spaces. In other words, people preferred whatever they preferred, and the difference wasn’t very much anyway.

Source: One Space or Two? That Recent Study Won’t Tell You. – Mother Jones

This is a great breakdown of the bad science reporting in the recent “2 spaces” articles. A lot of science reporting in popular media is done without looking at the study in question in any detail. The Mother Jones article breaks down why this one was really weird.

But the original article did demonstrate one thing well: we all have a tendency to broadcast information that confirms our biases, without looking deeply at whether it is properly grounded. Which is why a poorly done study on a trivial but polarized issue generates a lot of internet traffic.

When algorithms surprise us

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.

This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

Source: Letting neural networks be weird • When algorithms surprise us

There are so many really interesting examples she has collected here, and they show us the power and danger of black boxes. In a lot of ways machine learning is just an extreme case of all software. People tend to write software along an optimistic path, and ship it once it looks like it’s doing what they intended. When it doesn’t, we call that a bug.

The difference between traditional approaches and machine learning is that debugging machine learning is far harder. You can’t just put in an extra if condition, because the logic that produces an answer isn’t expressed that way; it’s expressed in 100,000 weights across a 4-layer convolutional network. Which means QA is much harder, and machine learning is far more likely to surprise you with unexpected wrong answers on edge cases.
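To make that concrete, here’s a minimal sketch (mine, not from the post) of where the “logic” lives in each approach. The sheep/grass rule, the function names, and the weight sizes are all made up for illustration: in the traditional version a bad rule gets fixed by adding one branch, while in the learned version the same decision is buried in arrays of weights you can’t patch by hand.

```python
# A contrived illustration: explicit rules vs. learned weights.
import numpy as np

# Traditional approach: the rule is written down, so debugging is adding a branch.
def is_sheep_traditional(green_fraction, has_wool_texture):
    if not has_wool_texture:      # the "bug fix": one extra if condition
        return False
    return green_fraction > 0.2   # original hand-written rule

# Machine learning approach: the "rule" is nothing but arrays of numbers.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(64, 32))    # stand-in for the ~100,000 weights of a real network
w2 = rng.normal(size=(32, 1))

def is_sheep_learned(features):
    hidden = np.tanh(features @ w1)
    return ((hidden @ w2) > 0).item()   # no single line here encodes "grass" or "sheep"

print(is_sheep_traditional(0.7, has_wool_texture=False))   # False: fixed by inspection
print(is_sheep_learned(rng.normal(size=64)))               # fixable only by retraining
```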

Does the CAFE standard of 55 mpg seem high? It’s not the real number, and the real number is a lot more interesting.

If automakers complied with the rules solely by improving the fuel economy of their engines, new cars and light trucks on the road would average more than 50 miles per gallon by 2025 (the charts here break out standards for cars and light trucks separately). But automakers in the United States have some flexibility in meeting these standards. They can, for instance, get credit for using refrigerants in vehicle air-conditioning units that contribute less to global warming, or get credit for selling more electric vehicles.

Once those credits and testing procedures are factored in, analysts expected that new cars and light trucks sold in the United States would have averaged about 36 miles per gallon on the road by 2025 under the Obama-era rules, up from about 24.7 miles per gallon in 2016. Automakers like Tesla that sold electric vehicles also would have benefited from the credit system.

Source: How U.S. Fuel Economy Standards Compare With the Rest of the World’s – The New York Times

This is one of those areas where most reporting on the CAFE standard rollback has been terrible. You tell people the new CAFE standard is 55 mpg, they look at their SUV, and they say that’s impossible. With diesel off the table after the VW scandal, only the best hybrids today are in that 55 mpg range. How could that be the average?

But it’s not; it’s 55 mpg-equivalent. You get credit for lots of other things: EVs in the fleet, doing a better job on the refrigerant switchover, and so on. 2025 would see a real fleet average of around 36 mpg if the standard were kept in place.
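Roughly, the arithmetic works out like the back-of-the-envelope sketch below. Only the 55 mpg nominal target and the ~36 mpg on-road estimate come from the article; the individual credit values and the test-cycle adjustment are placeholder assumptions, just to show where the gap comes from.

```python
# Back-of-the-envelope: how a nominal 55 mpg-equivalent CAFE target can translate
# into roughly 36 mpg of real on-road fuel economy. All credit values and the
# test-cycle factor are illustrative assumptions, not actual regulatory numbers.
nominal_target_mpge = 55.0

credits_mpge = {                      # hypothetical mpg-equivalent credits
    "low-GWP refrigerants": 3.0,
    "EV sales multipliers": 5.0,
    "other off-cycle credits": 2.0,
}
after_credits = nominal_target_mpge - sum(credits_mpge.values())   # 45.0 mpge

real_world_factor = 0.80              # lab test cycles overstate on-road economy (assumption)
on_road_estimate = after_credits * real_world_factor

print(f"~{on_road_estimate:.0f} mpg real-world fleet average")      # ~36 mpg
```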

More important is that rolling back this standard is going to make US car companies less competitive. The rest of the world is heading in this direction, and US companies that don’t hit these marks will face a shrinking global market.

The future of scientific papers

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

Source: The Scientific Paper Is Obsolete. Here’s What’s Next. – The Atlantic

The scientific paper is definitely being strained in its ability to vet ideas. The article gives a nice narrative through the invention of Mathematica, and then Jupyter, as the path forward. The digital notebook is an incredibly useful way to share data analysis, as long as the data sets are made easily available. The DAT project has some thoughts on making that easier.

The one gripe I’ve got with the article is that it could be a bit more clear that Mathematica was never going to be the future here. Wolfram has tons of great ideas, and Mathematica is really great stuff; I loved using it in college 20 years ago on SGI Irix systems. But one of the critical parts of science is sharing and longevity, and doing that on top of a proprietary software platform is not a foundation for building the next 400 years of science. A driving force behind Jupyter is that, being open source all the way down, it’s reasonably future proof.

Electricity Map

In looking for information related to my ny-power demo (which shows the realtime CO2 intensity of the New York power grid), I discovered Electricity Map. It’s doing a similar thing, but at a global scale. It started out primarily focused on Europe, but it’s an open source project and has contributions from all over the world. I recently helped with some of the accounting and references for the NY ISO region.
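For anyone curious what a display like that is computing under the hood, the core of it is a generation-weighted average of per-fuel emissions factors. Here is a minimal sketch; the fuel mix and the gCO2/kWh factors below are illustrative assumptions, not actual NY ISO or Electricity Map data.

```python
# Grid CO2 intensity as a generation-weighted average of per-fuel emissions factors.
# The factors and the example mix are illustrative, not real NY ISO data.
EMISSIONS_G_PER_KWH = {
    "nuclear": 12,
    "hydro": 24,
    "wind": 11,
    "natural_gas": 490,
    "coal": 820,
}

def grid_intensity(generation_mw):
    """Return grams of CO2 per kWh for the current generation mix."""
    total_mw = sum(generation_mw.values())
    weighted = sum(mw * EMISSIONS_G_PER_KWH[fuel] for fuel, mw in generation_mw.items())
    return weighted / total_mw

# Example snapshot (made-up megawatt values):
print(grid_intensity({"nuclear": 3300, "hydro": 2500, "natural_gas": 4200, "wind": 400}))
```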

You’ll notice that a lot of the US is grey on the map. That’s because while most of the public ISOs publish their real-time data on the web, private power entities tend not to. It’s a shame, because it means you can’t get a complete picture.

Also notable is how different the power profile looks between different regions of the US.

It’s also really interesting to take a look at Europe.

Germany is quite bad on its CO2 profile compared to neighboring countries. That’s because they’ve been turning coal plants back on as they shut down their nuclear facilities. Coal makes up a surprisingly large part of their grid now.

The entire map is interactive and a great way to explore how energy systems are working around the world.

Climate change goes to court

Alsup insisted that this tutorial was a purely educational opportunity, and his enjoyment of the session was obvious. (For the special occasion, he wore his “science tie” under his robes, printed with a graphic of the Solar System.) But the hearing could have impacts beyond the judge’s personal edification, Wentz says. “It’s a matter of public record, so you certainly could refer to it in a court of public opinion, or the court of law in the future,” she says. Now, Wentz says, there’s a formal declaration in the public record from a Chevron lawyer, stating once and for all: “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.”

Source: Chevron’s lawyer says climate change is real, and it’s your fault – The Verge

This week Judge Alsup held a personal education session for himself on the upcoming case in which several California cities are suing the major fossil fuel companies on the theory that the companies knew climate change was a real threat back in the 80s and 90s and actively spread disinformation to sow doubt. This is one of many cases going forward under similar framing.

What makes this one different is Alsup. He was the judge who handled the Oracle v. Google case, where he taught himself programming to be sure he was getting it right. For this case, he held a 5-hour education session covering every question he could imagine about climate change and geology. The whole article is amazing, and Alsup is really a treasure to have on the bench.