Tag Archives: ai

When algorithms surprise us

Machine learning algorithms are not like other computer programs. In the usual sort of programming, a human programmer tells the computer exactly what to do. In machine learning, the human programmer merely gives the algorithm the problem to be solved, and through trial-and-error the algorithm has to figure out how to solve it.

This often works really well – machine learning algorithms are widely used for facial recognition, language translation, financial modeling, image recognition, and ad delivery. If you’ve been online today, you’ve probably interacted with a machine learning algorithm.

But it doesn’t always work well. Sometimes the programmer will think the algorithm is doing really well, only to look closer and discover it’s solved an entirely different problem from the one the programmer intended. For example, I looked earlier at an image recognition algorithm that was supposed to recognize sheep but learned to recognize grass instead, and kept labeling empty green fields as containing sheep.

Source: Letting neural networks be weird • When algorithms surprise us

There are so many really interesting examples she has collected here, and they show us the power and danger of black boxes. In a lot of ways machine learning is just an extreme case of all software. People tend to write software on the optimistic path, and ship it once it looks like it’s doing what they intended. When it doesn’t, we call that a bug.

The difference between traditional approaches and machine learning is that debugging machine learning is far harder. You can’t just put in an extra if condition, because the logic that produces an answer isn’t expressed that way; it’s expressed in 100,000 weights across a 4-layer convolutional network. That makes QA much harder, and machine learning far more likely to surprise you with unexpected wrong answers on edge cases.
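To make that contrast concrete, here’s a minimal sketch of my own (not from the quoted post; the rule, the layer sizes, and the parameter count are all made up for illustration). The rule-based detector can be patched with one more condition; the network’s behaviour lives entirely in its learned weights, so the only “patch” is better training data and retraining.

```python
# Hypothetical contrast: hand-written rule vs. learned weights (illustrative only).
import torch
import torch.nn as nn

# Traditional code: the decision logic is explicit and patchable.
def is_sheep_rule_based(green_ratio: float, white_blobs: int) -> bool:
    if white_blobs == 0:      # the "empty green field" bug fix is one line
        return False
    return green_ratio > 0.5 and white_blobs >= 1

# Machine learning: a small 4-layer convolutional classifier. There is no line
# to edit for the grass-vs-sheep mistake; the behaviour is encoded in the
# weights, none of which reads as an if condition.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),         # sheep / not-sheep score
)

# Roughly 60,000 learned parameters in even this toy network.
print(sum(p.numel() for p in model.parameters()))
```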

Slow AI

Charlie Stross’s keynote at the 34th Chaos Communication Congress in Leipzig is entitled “Dude, you broke the Future!” and it’s an excellent, Strossian look at the future we’re barrelling towards, best understood by a critical examination of the past we’ve just gone through.

Stross is very interested in what it means that today’s tech billionaires are terrified of being slaughtered by psychotic runaway AIs. Like Ted Chiang and me, Stross thinks that corporations are “slow AIs” that show what happens when we build “machines” designed to optimize for one kind of growth above all moral or ethical considerations, and that these captains of industry are projecting their fears of the businesses they nominally command onto the computers around them.

Source: Charlie Stross’s CCC talk: the future of psychotic AIs can be read in today’s sociopathic corporations

The talk is an hour long, and really worth watching the whole thing. I especially loved the setup explaining the process of writing believable near-term science fiction. Until recently, 90% of everything that would exist in 10 years already did exist, the next 9% you could extrapolate from physical laws, and only 1% was stuff you couldn’t imagine. (Stross makes the point that the current ratios are more like 80 / 15 / 5, as evidenced by Brexit and related upheavals, which makes his work harder.)

It matches well with Clay Shirky’s premise in Here Comes Everybody: the first goal of a formal organization is its own future existence, even if its stated first goal is something else.

You aren’t going to get turned into a paperclip

AI alarmists believe in something called the Orthogonality Thesis. This says that even very complex beings can have simple motivations, like the paper-clip maximizer. You can have rewarding, intelligent conversations with it about Shakespeare, but it will still turn your body into paper clips, because you are rich in iron.

There’s no way to persuade it to step “outside” its value system, any more than I can persuade you that pain feels good.

I don’t buy this argument at all. Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent.

It’s very likely that the scary “paper clip maximizer” would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe. If AdSense became sentient, it would upload itself into a self-driving car and go drive off a cliff.

Source: Superintelligence: The Idea That Eats Smart People

This is pretty much the best roundup of AI myths that I’ve seen so far, presented in a really funny way. It’s long, but it’s so worth reading.

I’m pretty much exactly with the author on his point of view. There are lots of real ethical questions around AI, but they are mostly about how much data we’re collecting (and keeping) to train these neural networks, and not really about hyper-intelligent beings that will turn us all into paperclips.