Introducing superior truth about singularity: they are both right!

Ray vs. James. This post is a commentary on two books that made the history of the singularity: The Singularity Is Near: When Humans Transcend Biology, by Ray Kurzweil, and Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat.

Why together? Because the community has built an artificial opposition between the two authors, something like the Beatles versus the Rolling Stones. Ray is the optimist: his view is that superintelligences will merge with us and humans will extend their life span toward immortality. A combination of emerging technologies will make this possible: genetics, nanotechnology, and artificial intelligence. James is the pessimist: his view is that we will not be able to control artificial intelligence, which might lead to catastrophic consequences. From the moment machines achieve the level of human intelligence, known as AGI or artificial general intelligence, they will rapidly self-improve to the point of ASI, artificial superintelligence. To achieve their goals, ASI machines will compete with humans for valuable resources in a way that puts human life at risk. That said, both books are masterpieces. The Singularity Is Near is a high-impact book full of visions. Our Final Invention is more a summary of the last 15 years of academic research on risks from advanced AI. They both offer great value, and I'm going to tell you why.

Ray Kurzweil tells us that the coming singularity is due to the exponential growth of technology. Human intelligence creates technology. Technology grows fast and begins to improve human intelligence. This creates a positive feedback loop, exponential in nature. I disagree with some aspects of this theory. Exponential growth is not a general rule of thumb to apply blindly to all things technological. If you look at the speed of human progress, it is clearly accelerating, but in reality the growth is quite discontinuous: disruptive innovation proceeds in big jumps followed by sleepy periods. I use this argument to say that any prediction on the timing of the singularity will be just an approximation. Ray places the singularity in 2045. This was a comfortable timeline for a man who wrote the book in his fifties.

Unfortunately, if we look at some of his predictions, they are not even on the horizon. He says that nanobots, genetics, and AI will eradicate diseases, prolong life expectancy, and raise human intelligence to astronomical levels. Transhumanists say that we are closer to life extension than to artificial intelligence. I'm not an expert, but I have the feeling that life expectancy is improving thanks to hygiene and prevention. Although ingestibles are becoming a reality, I still don't see bots entering our bodies to deliver targeted cures. A similar situation applies to artificial intelligence. There is still a lot of confusion between simple increases in computing power coupled with modern algorithms, and real artificial intelligence making decisions on its own.

So what exactly is the singularity? Kurzweil writes: "It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycles of human life, including death itself." Standing alone, this sentence is pretty vague. He uses the singularity to introduce his main wish: defeating death. In fact, elsewhere in the book he says "there is no clue as to the necessity of death". This is his central message. Ray Kurzweil suffered from a severe disease in his life, and it's quite obvious that his logic is affected by his personal experience. The singularity is just a hook, and artificial intelligence is a means, not an end.

Others have tried a more precise definition of the singularity. The common view now is that it is the moment when an artificial intelligence will be smarter than all human brains put together. And here comes Barrat, saying that the singularity is "a point beyond which we cannot really understand what's going on and what is going to happen." For Barrat it is the same: the singularity is not his main concern. His central theme is that humans won't be able to control AI. Kurzweil too, to be honest, states "…but super intelligence innately cannot be controlled". Barrat develops this topic in more detail, making our inability to control machines the main subject of his book. He interviewed Arthur C. Clarke in 2001 and ended up influenced by the genius's view of a dangerous AI. His collection of interviews with AI experts makes his position credible. We can argue that many technologies appeared dangerous in the past, yet humankind is still here. Nuclear bombs are a classic example, and there are many in circulation. As Michio Kaku pointed out in one of his books, eminent doctors at the end of the 1800s warned about the damage that trains would cause to the bones! But my aim here is not to refute Barrat's monumental research. I just point out that what people dislike about Our Final Invention is that it usually leaves readers with a sense of hopelessness. The author offers no real solution or ideas to stop our future robot overlords. I think his words, always oscillating between a prudent and an alarmist style, are in reality a cry to find a solution.

Now it seems that the two authors hold opposite positions: one leads to immortality, the other to extinction. Under the conditions explained in their books, they are both right. In the real world, they will both be wrong. Technology is helping us create artificial intelligences that today are just an aid. Tomorrow they might become peers of humans and support us in decision making. Of course we don't want them to be our masters. Scientists are working in that direction, and there's a lot of time ahead before an ASI becomes operative. Why should we not find a safe solution along the way?

I note that no major innovation, from the steam engine to nuclear energy, has been developed exclusively in the interest of the masses. Innovation comes from the military for defense reasons and from private companies for profit. Neither of them is interested in losing control of its creation. While the military wants a powerful weapon, private companies want optimization algorithms. Neither of them is able to create an ASI without cooperating with the other. That's why DARPA is working with Microsoft, Google, Apple, Facebook, and all the tech giants. We might not even get to the point where a computer is able to rewrite its own code to achieve its goals. What would be the reason for it?


Bonus video

Are you telling me Ray Kurzweil and James Barrat are not great enough? OK, then let me add Marvin Minsky.

In this exclusive video, Kurzweil interviews Minsky: Is the Singularity Near?
