I recently started reading Ray Kurzweil's new book The Singularity is Nearer: When We Merge with AI. Kurzweil is a computer scientist who has written numerous books on AI, including The Singularity is Near, the predecessor to his latest work.
'The singularity' is an event that Kurzweil believes technological development is guiding our society towards. He describes this event as a "transition that will be utterly transformative for humanity."1
In essence, the singularity, as the subtitle of the book states, refers to the point at which humans merge with AI:
...we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us...[and] expand our intelligence and consciousness so profoundly that it's difficult to comprehend.2
There are three trends that contribute to the singularity:
The falling cost of computing power.
A greater understanding of human biology, in particular the brain.
Engineering being carried out at increasingly smaller scales.
According to Kurzweil, the evolution of our universe can be categorised into distinct epochs that involve improvements to intelligence and information processing. The fifth epoch, which involves the three aforementioned trends, will culminate in the singularity:
...we will directly merge biological human cognition with the speed and power of our digital technology. This is brain-computer interfaces. Human neural processing happens at a speed of several hundred cycles per second, as compared with several billion per second for digital technology. In addition to speed and memory size, augmenting our brains with nonbiological computers will allow us to add many more layers to our neocortices - unlocking vastly more complex and abstract cognition than we can currently imagine.3
As I have written previously, the current AI hype is driven by two prospects:
Generative AI will significantly improve our society by enabling unparalleled increases in productivity and growth.
Generative AI, and large language models (LLMs) in particular, constitutes AGI, or at least puts us on the path to AGI and beyond.
Kurzweil's view entertains both prospects to some degree. He believes that merging with AI will result in huge improvements to our own intelligence, and that this would involve achieving AGI, whereby we reinvent "the intelligence that nature gave us on a more powerful digital substrate, and then [merge] with it."4
However, while Kurzweil views the mastering of language by AI as a means to achieving "the breathtaking generality of the human neocortex", current LLMs still fall short of this in numerous ways:
Contextual memory. Current models struggle to keep track of all the ideas in a given conversation or written text because "the demands of remembering the context for an entire chapter or book by brute force spiral rapidly out of control".5
Common sense. The ability to "imagine situations and anticipate their consequences in the real world" is something that current models struggle with since they are not equipped with "a robust model of how the real world works, and training data rarely includes such implicit knowledge."6
Social interaction. With today's AI models, "social nuances like an ironic tone of voice are not well represented in the text databases" they are trained on, which means that they lack the ability to recognise the beliefs of others, exercise empathy or infer the motivations of others.7
Kurzweil therefore sees LLMs as providing only a potential path to AGI. AI will need to improve in the areas he highlights before it is possible to build the "powerful digital substrate" that we can merge with.
Additionally, Kurzweil's vision does not see AI as a competitor to humans. Rather, he sees AI becoming "an extension of ourselves" whereby "the nonbiological portions of our minds will provide thousands of times more cognitive capacity than the biological parts."8
I am interested to see if he addresses the predictions and arguments of others in this space, including those from Stuart Russell and Nick Bostrom, who warn about the existential risk of AI. Does the path toward the singularity involve a chance of us losing control of AI and getting 'paperclipped'?
Regardless, Kurzweil's latest book adds to the narrative driving the current AI hype cycle, and in time we will see how correct his predictions are.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.1.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.1.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.8.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.11.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.55.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.56.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), p.56.
Ray Kurzweil, The Singularity is Nearer: When We Merge With AI (The Bodley Head 2024), pp.9-10.
Interesting, I don't think Kurzweil has shifted much since he was promoting this in the 90s (when I saw him, Bill Joy and a few others speak about neural nets and the AI of the time). I'm glad he's not like the hopium peddlers selling that AGI is just around the corner though, because that's absolute rubbish.
The real question is, was the book worth it to you?