You aren't going to get turned into a paperclip

AI alarmists believe in something called the Orthogonality Thesis. This says that even very complex beings can have simple motivations, like the paper-clip maximizer. You can have rewarding, intelligent conversations with it about Shakespeare, but it will still turn your body into paper clips, because you are rich in iron.

There's no way to persuade it to step "outside" its value system, any more than I can persuade you that pain feels good.

I don't buy this argument at all. Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent.

It's very likely that the scary "paper clip maximizer" would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe. If AdSense became sentient, it would upload itself into a self-driving car and go drive off a cliff.

Source: Superintelligence: The Idea That Eats Smart People

This is pretty much the best roundup of AI myths I've seen so far, presented in a really funny way. It's long, but it's well worth reading.

I'm pretty much exactly aligned with the author's point of view. There are lots of actual ethical questions around AI, but they're mostly about how much data we're collecting (and keeping) to train these neural networks, and not really about hyper-intelligent beings that will turn us all into paperclips.

Podcast Roundup 2016

As 2016 wraps up, I figured it would be useful to share what's in my podcast rotation, and why you might want to add these shows to yours.

The Skeptics' Guide to the Universe

This is a weekly science and critical thinking podcast that's really good at keeping you up on the latest science coming out, as well as building your critical thinking skills. You get news of the latest scientific discoveries, deconstructions of sometimes very poor science reporting, and a weekly Science or Fiction quiz that's both lots of fun and a good way to figure out where all your odd biases are.

No particular episode jumps out for the year; this is more about getting a steady diet of facts, critical thinking, and reality every week.

More information: http://www.theskepticsguide.org/

99 Percent Invisible

A weekly podcast about the built world (architecture and design). I find it incredibly useful for understanding how the world is shaped (literally). The subject matter goes all over the place, and the production quality is amazing. Who knew that in 1970s Chile there was a cybernetic command center to help govern the country? Now you do - http://99percentinvisible.org/episode/project-cybersyn/.

More information: http://99percentinvisible.org/

The Long Now

The Long Now Foundation is based on the idea that we are in the middle of the arc of human history: with 10,000 years behind us, we've got another 10,000 years in our run. What kind of thinking, values, and information do we need to promote for the next 10,000 years? Part of this is a monthly lecture series, which you can get as a podcast (you can get video if you are a member).

In 2016, the two episodes I learned the most from were:

Radical Ag: C4 Rice and Beyond - http://longnow.org/seminars/02016/mar/14/radical-ag-c4-rice-and-beyond/ - which has an incredible primer on the state of food production in the world, and what's needed to feed the planet in 2050. I also learned a ton about how plants actually make sugars, as the team described their grand goal of upgrading rice's sugar production to meet world demand.

1177 BC: When Civilization Collapsed - http://longnow.org/seminars/02016/jan/11/1177-bc-when-civilization-collapsed/ - part of a book tour about a time in history when we had a very global world, and it collapsed rather quickly. It's a part of history I knew little about, and it also helps you remember how long the arc of human history really is.

More information: http://longnow.org/

The Commonwealth Club of California

The Commonwealth Club now produces over 300 events a year, much of which ends up in the podcast. Its usefulness comes from the sheer breadth of what lands there. There are lots of people on book tours, but also the Inforum and Climate One programs, which look specifically at technology and climate issues. Because of the volume, I freely skip past things that don't turn out to be interesting, but I've also been surprised by episodes I didn't expect to like.

The panel discussion on sustainability in the fashion industry was one of those - https://www.commonwealthclub.org/events/2016-05-17/you-are-what-you-wear-fashion-matters.

More information: https://www.commonwealthclub.org/

The Allusionist

A quirky podcast about language and the origin of words. It only runs every other week, but it's always a fun dose of something different.

More information: http://www.theallusionist.org/

Imaginary Worlds

A podcast about scifi and fantasy worlds, and why we create them. In 2016 there were some great bits on the economics of fantasy and scifi universes (how do you pay for that invasion?), an exploration of the year without a summer in 1816, which gave us Frankenstein, and a look at the role of maps in fantasy epics.

More information: http://www.imaginaryworldspodcast.org/home.html

Radiolab

Always a favorite, though you have to get used to their editing style. In 2016 Radiolab also did a spin-off about the Supreme Court called More Perfect - http://www.wnyc.org/shows/radiolabmoreperfect.

The story that stuck with me the most in 2016 was Debatable - http://www.radiolab.org/story/debatable/ - it's about Debate Club, but it's meta enough that you just have to listen to understand.

More information: http://www.radiolab.org/


There are a few more that come and go, but these were really the most notable during the year. If you're looking for more quality content, anything from the list above will fit the bill.

The real Trolley Problem in tech

With all the talk about autonomous cars in general media this year, we all got a refresher in Ethics 101 and the Trolley Problem. The Trolley Problem: you, as an onlooker, see a trolley barreling towards five people. There is a switch you can throw that will divert it to kill one previously safe person instead of five. What do you do? Do you take part in an act that kills one person while saving five? Or do you refuse to take part in an act of violence, but then willingly let five people die because of it? There are no right answers, just a theoretical problem to think through to see all the trade-offs.

But as fun as all this is, autonomous cars are not the big trolley problem in tech. Organizing and promoting information is.

Right now, if you put the phrase "Was the Holocaust real?" into Google, you'll get back 10 results. Eight will be various websites and articles making the case that it was not real, but a giant hoax. The second hit (not the first) is the Wikipedia article on Holocaust denial, and further down there's a link from the United States Holocaust Memorial Museum talking about all the evidence presented at Nuremberg.

8 out of 10.

The argument we get in tech a lot is that because results are generated by an algorithm, they are neutral. But an algorithm is just a set of instructions a human built once upon a time. When it was being built or refined, some human looked at a small number of inputs and what it output, and made a judgment call that it was good. Then they fed it far more input than any human could digest and let it loose on the world, under the assumption that the test input was representative enough to produce valid results for everything.

8 out of 10.

Why are those the results? Because Google came up with an insight years ago: people produce webpages, webpages have links, and important sites with authoritative information get linked to quite often. In the world before Google, this was hugely true, because once you found a gem on the internet, if you didn't write it down somewhere, finding it again later was often impossible. In the 20 years since, as the internet has grown, that's become less true.
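
To make that insight concrete, here's a minimal Python sketch of that kind of link-counting rank, in the spirit of the original PageRank idea. The toy link graph, the domain names, and the scores are all made up for illustration; this is nothing like the real Google pipeline, just the core mechanism where pages linked to by other well-linked pages float to the top.

    # A toy PageRank-style ranker. The link graph below is invented:
    # each page lists the pages it links to.
    links = {
        "wiki.example": ["museum.example", "blog.example"],
        "museum.example": ["wiki.example"],
        "blog.example": ["wiki.example"],
        "hoax-a.example": ["hoax-b.example"],
        "hoax-b.example": ["hoax-a.example"],
    }

    damping = 0.85  # chance a "random surfer" follows a link vs. jumping anywhere
    rank = {page: 1.0 / len(links) for page in links}

    for _ in range(50):  # iterate until the scores settle
        new_rank = {page: (1.0 - damping) / len(links) for page in links}
        for page, outlinks in links.items():
            # Each page splits its current score among everything it links to.
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank

    # "Search results" are just the pages sorted by score.
    for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")

Notice what the sketch never does: it never asks whether a page is true, only how the link graph is shaped. If a fringe community links to each other enthusiastically while the rest of the web goes quiet on a topic, the fringe wins the top 10.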

It's also less true about basic understood facts. There aren't thousands of people writing essays about the Holocaust anymore. There is, however, a fringe of folks trying to actively erase that part of history. Why? I honestly have no idea. I really can't even get into that head space. But it's not unique to this event. There are people who write that the Sandy Hook shooting was a hoax too, and who harass the families who lost children during that event.

8 out of 10.

Why does this matter? Who looks these things up? Maybe it's just the already radicalized who believe it anyway? Or maybe it's the 12-year-old kid who is told something on the playground and comes home to ask Google the answer. And finds 8 out of 10 results saying it's a hoax.

What could be done? Without Google intervening, people could start writing more content on the internet saying the Holocaust was real, and eventually Google might pick that up and shift the results. Maybe only 6 out of 10 for the hoax. Could we get enough popular sites that the truth could even become the majority, and the hoax would only get 4 out of 10? How many person-hours, websites, and Twitter posts do we need to restore this truth?

As we're sitting on Godwin's front lawn already, let's talk about the problem with that. Six million voices (and all the ones that would have come after them) that could have stood up here are gone. The side of truth literally lost a generation to this terrible event.

So the other answer is that Google really should fix this. They control the black box. They already down-rank sites for malware and less accessible content, and soon for popup windows. The algorithm isn't static; it keeps being trained.

And the answer you get is: "that's a slippery slope. Once humans start interfering with search results, that power could be used by the powerful to suppress ideas they don't like."

It is a slippery slope. But that argument assumes you aren't already on the slope.

8 out of 10.

The act of taking billions of documents and choosing 10 to display is an act of amplification. Deciding what gets amplified is the trolley problem. Do we just amplify the loudest voices? Or do we realize that the loudest voices can use that platform to silence others?

We've seen this already. We've seen important voices online that were expressing nothing more than "women are equal, ok?" get brutally silenced through doxing and other means. Some people who are really invested in keeping truth off the table don't stop at making their case; they actively attack, harass, and send death threats to their opponents. So the field of discourse keeps tilting towards the ideologues.

8 out of 10.

This is our real trolley problem in tech. Do you look at the playing field, realize that it's not level, that the ideologues are willing to go much further and actually knock their opponents off the internet entirely, and do something about it? Or do you, through inaction, just continue to amplify those loud voices? And the playing field tips further.

Do we realize that this is a contributing factor in why our companies are so much less diverse than society at large? Do we also realize that this lack of diversity is why it doesn't look like a problem internally? At least not to 8 out of 10 folks.

8 out of 10.

I don't have any illusions this is going to change soon. The Google engine is complicated. Hard problems are hard.

But we live in this world, both real and digital. Every action we take has an impact. In an increasingly digital world, the digital impact matters as much as, or even more than, the physical one. Those of us who have a hand in shaping what that digital world looks like need to realize how great a responsibility that has become.

Like the Trolley Problem, there is no right answer.

But 8 out of 10 seems like the wrong answer.


Collective False Memory

In the early Nineties, roughly around 1994, a now 52-year-old man named Don ordered two copies of a brand new video for the rental store his uncle owned and he helped to run.

“I had to handle the two copies we owned dozens of times over the years,” says Don (who wishes to give his first name only). “And I had to watch it multiple times to look for reported damages to the tape, rewind it and check it in, rent it out, and put the boxes out on display for rental.”

In these ways, the film Don is speaking of is exactly like the hundreds of others in his uncle’s shop. In one crucial way, however, it is not. The movie that Don is referring to doesn’t actually exist.

Source: The movie that doesn’t exist and the Redditors who think it does

This is a really interesting dive into a collective false memory shared by a bunch of folks on Reddit about a thing that does not, and never did, exist. It's fascinating to see the lengths people will go to, and the completely over-the-top theories they'll believe (like a glitch in the Matrix), to protect the idea that their memory is not in error. Memories feel like reality, but they are anything but.

On the quest for Fake-News

A few days before the election, an extraordinary story popped up in hundreds of thousands of people's Facebook feeds. This story was salacious. It was vivid, filled with intriguing details. There was a photo of a burning house, firemen rushing in. The headline read, "FBI Agent Suspected In Hillary Email Leaks Found Dead In Apparent Murder-Suicide."

It was all fake. There was no FBI agent. There was no shooting. The site it was published on, The Denver Guardian, isn't a real news source. It was one of many fake stories that play into conspiracy theories about the Clintons and it worked. There is one part of the article that was real: the ads. Someone was making money off this phony news article and dozens of others like it. Someone was making profit off a fake story that suggested a presidential candidate was a killer. Today on the show, we take this single fake news story and follow the clues all the way back. We follow the digital breadcrumbs until we find ourselves on a suburban doorstep, face to face with the man behind a bogus news empire. Then he tells us his secrets.

Source: Episode 739: Finding The Fake-News King : Planet Money : NPR

It's 25 minutes and worth your time.

The core challenge is that there is giant demand for "fake news". People so wanted to believe these terrible things about Clinton that they sought them out. Any debunking just fed the conspiracy arc further.

Finding North America’s lost medieval city

A thousand years ago, huge pyramids and earthen mounds stood where East St. Louis sprawls today in Southern Illinois. This majestic urban architecture towered over the swampy Mississippi River floodplains, blotting out the region's tiny villages. Beginning in the late 900s, word about the city spread throughout the southeast. Thousands of people visited for feasts and rituals, lured by the promise of a new kind of civilization. Many decided to stay.

At the city's apex in 1100, the population exploded to as many as 30 thousand people. It was the largest pre-Columbian city in what became the United States, bigger than London or Paris at the time. Its colorful wooden homes and monuments rose along the eastern side of the Mississippi, eventually spreading across the river to St. Louis. One particularly magnificent structure, known today as Monk’s Mound, marked the center of downtown. It towered 30 meters over an enormous central plaza and had three dramatic ascending levels, each covered in ceremonial buildings. Standing on the highest level, a person speaking loudly could be heard all the way across the Grand Plaza below. Flanking Monk’s Mound to the west was a circle of tall wooden poles, dubbed Woodhenge, that marked the solstices.

Source: Finding North America’s lost medieval city | Ars Technica

When we think about Native American culture in North America, we rarely think about cities, because by the time Europeans arrived there weren't any. But that wasn't always the case. This is a really amazing long read by someone who joined a dig this year at one of these great lost cities, a city that was built over multiple times, and reinvented.

Sit back, dive in, and let your imagination fully grasp what it would have been like 1,000 years ago in America.

Repurposing APIs

A really interesting thing happens if you have a reasonably long-standing, stable, and documented API in the wild for a while: other people start building their own implementations of it to serve different needs. My current favorite example of this is the emulated_hue code in Home Assistant.

Philips Hue has been one of the most consumer-friendly IoT platforms out there. It has an extremely robust and well-documented API, and Philips provides cloud-level access to some of the larger vendors, which has made integrations with other platforms pretty extensive. It was one of the first IoT platforms that voice assistants like Amazon's Alexa could talk to and control.

Which makes it an ideal platform to build your own copy of. Because if Alexa can talk to it on the local network, you can tell Alexa that anything is a lightbulb and get basic on/off/dimming controls for it. Which is exactly what was done in Home Assistant: switches, lights, and even media players are exported as fake lightbulbs with names that voice assistants can address. And now they can control parts of your house they weren't originally designed to support.
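
To show how little magic is involved, here's a toy sketch of the emulated-Hue idea in Python with Flask. This is not the actual Home Assistant code: the device list and its fields are invented, and a real setup also needs the SSDP/UPnP discovery responder that lets Alexa find the "bridge" on the network, which is omitted here. It just answers the two REST calls a real Hue bridge would.

    # A toy "fake Hue bridge": anything with an on/off action
    # gets presented to Alexa as a dimmable lightbulb.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Invented devices; in Home Assistant these would be real
    # switches, lights, and media players.
    devices = {
        "1": {"name": "Coffee Maker", "on": False},
        "2": {"name": "Living Room TV", "on": False},
    }

    def as_hue_light(device):
        # Shape each device like the Hue API's light object.
        return {
            "type": "Dimmable light",
            "name": device["name"],
            "state": {"on": device["on"], "bri": 254, "reachable": True},
        }

    @app.route("/api/<username>/lights")
    def list_lights(username):
        # Alexa asks the "bridge" what lights exist.
        return jsonify({i: as_hue_light(d) for i, d in devices.items()})

    @app.route("/api/<username>/lights/<dev_id>/state", methods=["PUT"])
    def set_state(username, dev_id):
        # "Alexa, turn on the coffee maker" arrives here as {"on": true}.
        body = request.get_json(force=True)
        devices[dev_id]["on"] = bool(body.get("on", devices[dev_id]["on"]))
        # A real bridge echoes back a success entry per changed attribute.
        return jsonify([{"success": {"/lights/%s/state/on" % dev_id: devices[dev_id]["on"]}}])

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=80)  # real Hue bridges answer on port 80

Once Alexa believes those devices are lightbulbs, "turn on the coffee maker" just works; the fake bridge translates the lighting command into whatever action the device actually supports.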

Originally written by my friend Bruce, this is now part of the Home Assistant base, and it's being extended to Google Home as we speak. It goes to show what stability and documentation do for making an API become embedded in way more places than you imagined.