Monday, May 10, 2010

A futurist post: Noahstradamus predicts The Singularity (and lack thereof)

io9 has a good primer on the notion of the "technological singularity," a futurist favorite:

The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history.

Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies...

Science fiction writer Vernor Vinge popularized the idea of the singularity in his 1993 essay "Technological Singularity." There he described the singularity this way:

It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.

Specifically, Vinge pinned the Singularity to the emergence of artificial intelligence. "We are on the edge of change comparable to the rise of human life on Earth," he wrote. "The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence."...

As we mentioned earlier, artificial intelligence is the technology that most people believe will usher in the singularity...AI will allow us to develop new technologies so much faster than we could before that our civilization will transform rapidly...

Another singularity technology is the self-replicating molecular machine, also called autonomous nanobots...Basically the idea is that if we can build machines that manipulate matter at the atomic level, we can control our world in the most granular way imaginable...

And finally, a lot of singularitarian thought is devoted to the idea that synthetic biology, genetic engineering, and other life sciences will eventually give us control of the human genome...Many futurists, from Ray Kurzweil and Stewart Brand to scientists like Aubrey de Grey, have suggested that extreme human longevity (in the hundreds of years) is a crucial part of the singularity.

A lot of people are skeptical of The Singularity. Exponential change, they argue, always eventually levels off. We drive cars that are not exponentially better than the cars people drove a century ago. Our airplanes have stopped getting faster too. Similarly, they argue, the IT revolution will hit its natural limits, and our progress will slow down until we hit the next brief period of rapid technological change.

I have a lot of sympathy for this skeptical view. But all the same, it does seem like many of the technologies we are now developing (or looking for ways to develop) will cause qualitative changes in human life unrivaled even by the introduction of agriculture.

So which Singularity technologies do I envision living up to their potential? Well, let's run through the short list:

1. Artificial Intelligence - Yes, but not the Vingean kind, and not as soon as people think. We have already succeeded in creating machines that can do many mental tasks much better than we can (play chess, etc.). Eventually machines will be able to create scientific theories and draft business plans. But humans' notion of "intelligence" encompasses more than simply the ability to do difficult mental tasks - it requires independence of action. And that is what there is no guarantee AIs will have. Even if we make machines that can create machines smarter than themselves, why would they be motivated to do so? For a true AI-driven Singularity to happen, AIs must be Autonomous Intelligences.

2. Molecular Assembly - Yes, but it will take much longer than many believe. We can make molecule-sized machines that can put other molecule-sized machines together; we are working on ways to give these tiny assemblers the go-ahead to start assembling. But there's a much bigger problem here - the ability to tell large numbers of tiny assemblers what to do all at once. Building, say, a piece of wood from the ground up seems feasible - just make a wood molecule, then make another, then another - but hardly an economic use of nanoassemblers, given that wood already nanoassembles itself. Building a complex machine with nanoassemblers will require a lot of very complex but cheap-to-implement molecular-level quality control, which means solving a lot of difficult IT problems. As for "grey goo," it already exists, and is called "bacteria."

3. Personality Upload - No. Not now, not soon, and possibly not ever. Because here's the rub - suppose someone invents a computer into which you can upload your personality. Suppose that the uploaded you feels exactly the same as the real physical you feels. How could you be certain that it feels the same without uploading yourself? Two ways: A) split yourself into two divergent individuals, or B) kill your physical self. My bet is that almost no one will be willing to do either of those. On top of that, there seems to be so much potential for horrific glitches that the technology will take a ridiculously long time to develop, even if it can be done.

4. Extreme Longevity - Yes. I'm not sure when we will discover this, but when we do, it will rapidly change the nature of human life and society. Unless the technology becomes cheap very quickly, it will divide human society into seemingly near-immortal haves and fast-dying have-nots, with all the resulting social disruption.

5. Control of the Human Genome - Yes. This one, in my opinion, has by far the most serious, profound, and far-reaching consequences for our race, because the genome is the race. Forget about making ourselves smarter and stronger; the real posthuman moment will come when we start tinkering with human desires. Imagine if we could make people who love working hard all the time, or people who love having ten kids, or people who never get angry, or people who always take orders, or people who are just happy all the time no matter what. THAT, my friends, will be one bizarre world.

So, to conclude: No, I don't think that a Technological Singularity will soon accelerate technological change to infinite speeds. No, I do not think that we will soon see godlike self-improving AIs or sentient swarms of nanobots eating everything in sight, or people living like virtual gods inside of computer programs. But when we find ways to change what it means to be human, the human race as a basically singular entity will be over, and we will abruptly be left with a number of successor species. That day is coming sooner than we'd probably like.

2 comments:

  1. Artificial Intelligence: First comment: We already have "entities with greater than human intelligence". In the 1960s, it wasn't one human intelligence that put a man on the moon. It was a collaboration.

    Second comment: No, humans do not have some special magical essence that computers cannot obtain. A human is a collection of molecules. The brain is a machine. We have an existence proof that intelligence is possible. Put together enough computer hardware, and the software that runs on it starts doing interesting things. I write software every day that has independence of action. That's the whole point. I can't be sitting there constantly monitoring the software and telling it what to do. The software has to run on its own without my supervision. Every time the software breaks and I need to intervene, I fix the software so that next time it encounters that problem, it won't need me.

    In many ways, the second comment is moot because of the first comment. If you take two piles of transistors sitting in two separate boxes, and run a few communications wires between them, you now have a distributed computer that is smarter than a single computer. If you take a couple of humans and connect them up with communications, you have a distributed human computer that is smarter than either individual. Autonomous or not, more computing power makes humans more intelligent, and that overall increase in intelligence is a big part of the singularity.

    Another part is the ability to provide limited amounts of intelligence to very small components. The car has improved exponentially over the past 100 years. (The rate of improvement is small, but still exponential.) 100 years ago, we didn't have cruise control; we didn't have anti-lock braking; we didn't have beam-shaping headlights; we didn't have fuel injection; we didn't have OnStar. There are a variety of distinct places in the car where we've added small amounts of intelligence. We've added small amounts of intelligence to our washers, dryers, fridges, microwave ovens, coffee makers, and toasters. We're building ocean-going robots that monitor and report back; we have designs for fleets of space-based telescopes that constantly monitor and report back. Making limited intelligence ubiquitous is game changing.

    ReplyDelete
  2. With respect to control of the human genome... It seems unlikely that desires are both so simple that they can be encoded statically in the genome and so complicated that computers cannot have desires and act autonomously based on those desires.

    ReplyDelete