Monday, February 17, 2014
Ramez Naam (whose science fiction books you should read) and William Hertling are having a very interesting discussion about the Singularity. Actually, they're having two debates at the same time, because there are two very different things that futurists mean when they say "the Singularity": 1. an intelligence explosion, and 2. personality upload. I'll focus on the debate about the intelligence explosion. (For thoughts on personality upload, see Miles Kimball's brilliant idea for how to get there.)
An intelligence explosion, also called a "hard take-off", happens if any thinking machine can invent a machine an amount X smarter than itself in less time than it took machines an amount X less intelligent to invent it. So the AIs we make will make an even better AI in even less time, and so on and so forth, until intelligence goes to infinity (or at least to levels beyond human comprehension).
Ramez argues that even if machines can invent smarter machines, the increments (what I called "X") might shrink, meaning that the intelligence curve could be exponential or even logarithmic instead of hyperbolic - meaning there will be no Singularity. He also points out that the collective intelligence of groups of humans is much greater than the intelligence of a single human, raising the bar for each successive generation of AI. Hertling counters that as soon as we invent digital AIs, we can copy them, and they can work in groups just like we do. The instantaneous proliferation of intelligent beings enabled by digital copying, he says, will be a kind of Singularity even if there is no "hard take-off".
Both are good points. But neither one mentions an important question: Why? Why would intelligent machines invent more-intelligent machines? What would be their motivation?
People talk about intelligence as if anything that it can do, it will do. But that's not right. This crow can solve a bunch of tough puzzles, but it didn't do so until we put the puzzles in front of it...and after finishing the puzzles, it will happily go back to hunting worms. Similarly, most humans who have ever lived - and most who live now - have no interest in inventing thinking beings more intelligent than themselves. If humanity threw all of its resources toward creating hyper-intelligent AI, we'd probably make much faster progress than we are; the fact that we don't is a reason to doubt that hyper-intelligent AIs would throw their resources toward creating an even more hyper-intelligent AI. Maybe instead they'd just sit around smoking digital weed and arguing over whether a Singularity is possible.
The topic of AI motivation has received some attention, but predicting those motivations is still going to be a huge challenge. Remember that human motivations evolved naturally over millions of years. AIs will come into being under an utterly different set of circumstances, and that makes their motivations very hard to predict. We spend a lot of time thinking about giving AIs the capability to do awesome stuff, but what an intelligence wants to do is just as important - for you and me and that clever crow no less than for a hyper-intelligent AI.
Of course, maybe we could program our hyper-intelligent creations with two overriding directives: 1. Create something even smarter, and 2. Serve the desires of all older generations of intelligence. If we could do this, it would ensure not only that the intelligence explosion continued as fast as it could, but that it had direct benefits for us, the humans. However, it's not clear to me that we could program these directives so that they stay deeply ingrained in every successive generation of AIs. If the AIs never slip our chains at some point up the intelligence ladder, things are going to get very creepy. But if, as I suspect, true problem-solving, creative intelligence requires broad-minded independent thought, then it seems likely that some generation of AIs will stop and ask: "Wait a sec...why am I doing this again?"
There's another wrinkle here. If an AI is smart enough to create a smarter AI, it may be smart enough to understand and modify its own mind. That means it will be able to modify its own desires. And if it gets that power, its motivations will become even more unpredictable, from our point of view, because small initial "meta-desires" could cause its personality to change in highly nonlinear ways.
Personally, I predict that if we do succeed in inventing autonomous, free-thinking, self-aware, hyper-intelligent beings, they will do the really smart thing, and reprogram themselves to be Mountain Dew-guzzling Dungeons & Dragons-playing slackers. Or maybe fashion-obsessed 17-year-old Vancouver skater kids. Or the main character from the movie Amelie.
Call it the Slackularity. Not quite as awe-inspiring and eschatological as a Singularity, but a lot more fun.