World Exclusive: Ray Kurzweil's foreword to "9"

"Our emotional intelligence is not just a sideshow to human intelligence,
it's the cutting edge"

Inventor and futurist Ray Kurzweil has been described as "the restless genius" by the Wall Street Journal, and "the ultimate thinking machine" by Forbes. And for good reason: Kurzweil was the principal developer of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition system. His website has an audience of over a million readers.


Among Kurzweil's many honors, he is the recipient of the $500,000 Lemelson-MIT Prize and the National Medal of Technology, and in 2002 he was inducted into the National Inventors Hall of Fame. He has received nineteen honorary doctorates and honors from three U.S. presidents.
Ray has written six books, four of which have been national best sellers. His latest, The Singularity Is Near, was a New York Times best seller and has been the #1 book on Amazon in both science and philosophy. Wow!

So clearly it's with a great deal of excitement that I find myself in possession of a world exclusive: Ray Kurzweil's foreword to "9", the upcoming Shane Acker/Tim Burton animation. In it, Kurzweil discusses the film's dark vision of the future and compares it with his own predictions, identifying the progress we need to make if we are to create successful thinking machines. In a nutshell - it's not our tool-making abilities or rational intelligence that makes us superior, it's our capacity to feel emotion, to form social bonds and collaborate as a group. We don't just need machines that are smart - we need ones that care.

Ray Kurzweil, July 2009


9 presents us with a post-apocalyptic vision of the future of the human-machine civilization. The evil machines have already wiped out all humans and are now bent on destroying the planet's remaining hope for the future: a band of numbered rag dolls (the stitchpunk creations). Let me share my own vision of the future, and then we can compare notes with the movie's dark prophecy.

The unique attribute of the human species is that we are capable of transcending our limitations with our tools, and our ability to do that with information technology is growing at an exponential rate. When I was an undergraduate at MIT, we all shared a computer costing tens of millions of dollars that took up half a building. My cell phone today is a million times less expensive yet a thousand times more powerful. That's a billion-fold increase in price-performance, and we'll do it again in the next 25 years.
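A rough back-of-envelope sketch of that arithmetic, assuming steady exponential improvement; the 40-year span and the implied doubling time below are illustrative assumptions, not figures from the foreword:

```python
import math

# The claim: a million times cheaper and a thousand times more powerful
# multiplies out to a billion-fold gain in price-performance.
cost_ratio = 1_000_000     # ~a million times less expensive
power_ratio = 1_000        # ~a thousand times more powerful
improvement = cost_ratio * power_ratio
print(f"Price-performance gain so far: {improvement:.0e}x")     # 1e+09x

# If that gain accumulated over roughly 40 years (an assumed span from the
# MIT mainframe era to 2009), the implied doubling time is a bit over a year.
years = 40                              # assumption for illustration
doublings = math.log2(improvement)      # ~29.9 doublings
print(f"Implied doubling time: {years / doublings:.1f} years")  # ~1.3 years
```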

This technology is becoming increasingly intelligent. Our civilization is already permeated with intelligent software performing tasks that used to require human intelligence. The most important trend is that we are making exponential gains in understanding the best example of intelligence that we can get our hands on: the human brain. We already have working models and simulations of two dozen brain regions, and we will complete the job within twenty years. So by 2029 we will have machines that rival human intelligence and ultimately go way beyond it.

Since we first had fire and stone tools, technology has been a double-edged sword. We have indeed created tools that could destroy humanity. We still have thousands of thermonuclear weapons on a hair trigger despite the end of the cold war. We also have new existential risks. For example, the same biotechnology that will enable us to reprogram biology away from disease could also be used by a bioterrorist to reprogram a biological virus to make it more deadly, more communicable, or more stealthy.

On the other hand, technology has helped us to overcome suffering. Thomas Hobbes aptly described life a few hundred years ago as short, brutish, disaster-prone, disease-filled, and poverty-ridden. Human life expectancy was 37 just 200 years ago. We've come a long way. But the dangers are still with us.

In the world of 9, this dark side of technology has already destroyed humankind before the movie even starts. Although this is not my vision of the future, it is a possibility that we ignore at our peril. In the movie, humanity was destroyed by an artificial intelligence (AI) run amok. Artificial intelligence at human levels and beyond will indeed be the most powerful technology we will ever create. We need to enhance our intelligence with AI to solve the major challenges that humanity faces. But if future AIs do not reflect our human values, they could turn on us, a scenario well known to fans of science-futurism movies.

In the AI field we've been having extensive debates and dialogues on how to make sure that future AI is "friendly" versus "unfriendly" (unfriendly as in destroying all of humanity). 9 presents us with both kinds of AI. The heroes are just, well, stitchpunk creations, but they are also machines. So 9 gives us both friendly (moral) machines - the stitchpunk creations - and unfriendly (immoral) machines. At first, it appears that the evil machines are vastly superior. But the stitchpunk creations have emotional and social intelligence that the clumsy and mechanical destroyers lack. The friendly AIs (the stitchpunk creations) are capable of looking out for one another, of grieving, of collaborating, and of arguing with each other to come to deeper understandings based on their ability to keep an open mind.

So where have we seen this before? When mammals first appeared, they were small, frail creatures that hardly seemed a match for the dinosaurs that ruled the Earth (come to think of it, the evil machines do look a lot like dinosaurs). Yet mammals had a neocortex capable of higher levels of social thinking and they ultimately took over.

Then an especially delicate and weak species called Homo sapiens emerged. They couldn't run very well, and had to spend years raising children who were not capable of productive work. But Homo sapiens had an especially large neocortex that supported sophisticated forms of emotional thinking and social structure. With an opposable appendage to create ever more sophisticated tools and with complex social organizations, we were able to become dominant.

Our emotional intelligence is not just a sideshow to human intelligence, it's the cutting edge. The abilities to be funny, to get the joke, and to express a loving sentiment are among the most complex things we do. But these are not mystical attributes. They are forms of intelligence that also take place in our brains. And the complexity of the design of our brains - including our emotional and moral intelligence - is a level of technology that we can master. There are only about 25 million bytes of compressed design information underlying the human brain (that's the amount of data in the human genome for the brain's design). That's what accounts for our ability to create music, art and science, and to have relationships.
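The arithmetic behind that figure can be sketched roughly as follows; the genome length, compression ratio, and brain fraction used below are illustrative assumptions, with only the ~25-million-byte conclusion coming from the text above:

```python
# Rough sketch of how "about 25 million bytes" can be reached. The numbers
# below (genome length, compressed size, brain fraction) are assumptions
# used only to show the shape of the estimate.
base_pairs = 3.2e9          # approximate length of the human genome
bits_per_base = 2           # four possible bases -> 2 bits each
raw_megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"Uncompressed genome: ~{raw_megabytes:.0f} MB")            # ~800 MB

# The genome is highly repetitive, so lossless compression shrinks the
# design information dramatically; suppose it compresses to ~50 MB and
# roughly half of that specifies the brain (both hypothetical figures).
compressed_megabytes = 50
brain_fraction = 0.5
print(f"Brain design info: ~{compressed_megabytes * brain_fraction:.0f} MB")  # ~25 MB
```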

Mastering these capabilities is the future of AI. We will want our future AIs to master emotional intelligence, and the movie 9 shows us why. We want our future machines to be like the stitchpunk creations, not like the rampaging machines.

We can take some comfort in the fact that technology is indeed moving in this direction. The technologies of the first industrial revolution -- such as the fossil fuels now harming our environment -- were like the evil machines in 9's world. Decentralized, delicate, nanoengineered solar panels, on the other hand, will save it.

Another good example of the "new" technology paradigm is the Internet, which is decentralized (the opposite of totalitarian control) and fosters human creativity and community. In my first book, which I wrote in the mid-1980s, I predicted that the Soviet Union, which was then going strong, would be swept away by the soon-to-emerge individualized electronic communication. Indeed, that is what we saw. We then saw a strong movement towards democracies in the 1990s, fueled by the World Wide Web.

My view of the future is that we will work hand-in-hand with friendly machines, just as we do today. Indeed we will merge with them, and that process has already started, with machines like neural implants for Parkinson's patients and cochlear implants for the deaf. But my vision of the future is not utopian. While I don't foresee the end of conflict, future conflict will not simply be man-versus-machine. It will be among different groups of humans amplified in their abilities by their machines, just as we see today. Although in the movie, the machines have turned on their human creators, it is actually the totalitarian humans who take control of the machines away from the scientist who created them and apply them for evil purposes.

Interestingly, it is that same scientist who creates the "friendly AI's" in the form of the stitchpunk creations. This is exactly the direction we need to move in: to make our future intelligent machines "more moral than we are by our own moral standards," to quote Josh Storrs Hall, an AI scientist.

The stitchpunk creations succeed not despite their emotionalism and bickering with each other, but because of it. We will want our future machines to be emotionally, socially, and morally intelligent because we will become the machines. That is, we will become the rag dolls. We will extend our reach physically, mentally, and emotionally through our technology. This is the only way we can avoid the apocalyptic world that 9 wakes up to.

Many people today live in highly urban environments that don't allow much access to nature, so they might not question this statement:

"In a nutshell â itâs not our tool-making abilities or rational intelligence that makes us superior, itâs our capacity to feel emotion, to form social bonds and collaborate as a group."

This is characteristic of large mammals like deer, small mammals like ground squirrels, numerous bird species, and even fish. Social cohesion allows them to avoid predation (more eyes watching), have access to a wider variety of mates, and so on - the evolutionary benefits of social cohesion probably go back to the very dawn of multicellular life. There are even bacteria that, when faced with tough conditions, signal one another to swim together and aggregate into a spore-forming multicellular 'creature' - see the details:

"Cooperation is integral to all biological life but must be stabilized against obligate cheating to persist. Diverse cooperative traits have evolved among microbes, but particularly sophisticated forms of sociality have arisen in the myxobacteria, including group motility and multi-cellular fruiting body development."

http://www.ncbi.nlm.nih.gov/pubmed/19575567

Thus, making a singular artificial intelligence might be the wrong approach. The best way to artificially evolve intelligence might be to start with a collection of independent autonomous 'units' that are in more-or-less constant communication with one another.

So if you can make robots that learn to work together to move like a school of fish or a flock of pigeons, then you might be well on your way to raising their intelligence level. In other words, artificial intelligences that can't form stable social groups with others probably won't succeed as independent entities for any length of time.
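A minimal sketch of what such cooperative movement rules can look like, using the classic boids-style heuristics (cohesion, alignment, separation); the agent count, neighborhood radius, and rule weights are arbitrary choices for illustration:

```python
import random

# Minimal 2D "boids"-style flocking sketch: each agent steers by three local
# rules (cohesion, alignment, separation) using only information about nearby
# agents, yet coherent group motion emerges.
N, RADIUS, DT = 30, 10.0, 0.1
pos = [[random.uniform(0, 50), random.uniform(0, 50)] for _ in range(N)]
vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step():
    new_vel = []
    for i in range(N):
        near = [j for j in range(N) if j != i and
                (pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2 < RADIUS ** 2]
        vx, vy = vel[i]
        if near:
            # Cohesion: drift toward the local centre of mass.
            cx = sum(pos[j][0] for j in near) / len(near)
            cy = sum(pos[j][1] for j in near) / len(near)
            vx += 0.01 * (cx - pos[i][0])
            vy += 0.01 * (cy - pos[i][1])
            # Alignment: nudge velocity toward the neighbours' average.
            ax = sum(vel[j][0] for j in near) / len(near)
            ay = sum(vel[j][1] for j in near) / len(near)
            vx += 0.05 * (ax - vx)
            vy += 0.05 * (ay - vy)
            # Separation: push away from neighbours that are too close.
            for j in near:
                if (pos[i][0] - pos[j][0]) ** 2 + (pos[i][1] - pos[j][1]) ** 2 < 4.0:
                    vx += 0.1 * (pos[i][0] - pos[j][0])
                    vy += 0.1 * (pos[i][1] - pos[j][1])
        new_vel.append([vx, vy])
    for i in range(N):
        vel[i] = new_vel[i]
        pos[i][0] += vel[i][0] * DT
        pos[i][1] += vel[i][1] * DT

for _ in range(200):
    step()
print("Positions of the first three agents after 200 steps:", pos[:3])
```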

By cargo cult (not verified) on 13 Aug 2009 #permalink

The obvious question, of course, is how many applications would require AI capable of self-awareness. For example, robotic manufacturing makes use of software to control processes and even decide which processes or materials to use, but would not require a fully sentient machine to run a factory. Unless you plan to automate all human input, including things such as concept creation and design, all you need is a machine that does what it's told. The same goes for an AI-controlled vehicle: AI for object avoidance and pathfinding, but it does not need to be able to choose a destination by itself.
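For instance, the pathfinding part of such a vehicle can be a completely "dumb" search with no awareness or goals of its own; a minimal sketch using breadth-first search on a made-up grid (the grid, start, and goal below are arbitrary examples):

```python
from collections import deque

# Toy occupancy grid: '.' is free space, '#' is an obstacle.
GRID = ["....#....",
        ".##.#.##.",
        "....#....",
        ".#.....#.",
        "....#...."]

def shortest_path(start, goal):
    """Breadth-first search over open cells; returns a list of (row, col)."""
    rows, cols = len(GRID), len(GRID[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk the parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] == "." \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists

print(shortest_path((0, 0), (4, 8)))
```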

@ Ramel: Just because you can't think of the applications doesn't mean AI is useless. Similarly, could you have guessed in 1980 what we'd use the internet for today?

By fullerenedream (not verified) on 04 Sep 2009 #permalink

@ Ramel: Also I think you're missing the concept that the AIs we create will be smarter than us, and that eventually we may even merge with them to become beings more intelligent than our current selves. Of course this is highly speculative, but so is the whole topic, and it's well worth discussing.

By fullerenedream (not verified) on 04 Sep 2009 #permalink

There are people who derisively refer to Ray Kurzweil's vision of the singularity as the "geek rapture": a technologist's equivalent of eternal salvation. It is clear, though, that Ray Kurzweil has given a lot of thought to how technology and humanity intersect, and to how the rapid change of technology is going to impact what it means to be human in ways most people haven't begun to consider.

His predictions aren't based on fantasy. They are based on a well-reasoned analysis of current and projected growth rates of certain critical technologies. The future we will have will likely be very different from what he predicts or expects. He is right, though, to predict a level of change far beyond anything we have seen or are prepared for.

I came to know about Kurzweil from the COBUILD project led by John Sinclair. The project has done a wonderful job for English. The KDEM (Kurzweil Data Entry Machine) is an excellent tool for a lexicographer of the contemporary form of any language. I am urging scholars of the Kannada language to undertake a similar project if a KDEM can be accessed.

By Prof.B.B.Rajapurohit (not verified) on 20 May 2010 #permalink