This post lists some of the best books on AI that I have read (so far). The list is in no particular order, nor is it exhaustive.
The books covered are:
Artificial Intelligence: A Guide for Thinking Humans (Penguin Random House 2019) by Melanie Mitchell
Human Compatible: AI and the Problem of Control (Penguin Random House 2019) by Stuart Russell
Algorithms Are Not Enough: Creating General Artificial Intelligence (MIT Press 2020) by Herbert L. Roitblat
The Coming Wave: AI, Power and the 21st Century's Greatest Dilemma (The Bodley Head 2023) by Mustafa Suleyman
Supremacy: AI, ChatGPT and the Race That Will Change the World (St Martin's Press 2024) by Parmy Olson
Please leave a comment with your thoughts on these books if you have read them, or if there are other books on AI that you like.
Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
If somebody who knows very little about AI asked me for a book to get them started, this is the one I would recommend.
Melanie Mitchell's Artificial Intelligence: A Guide for Thinking Humans explains the most important concepts around AI in an accessible manner. She describes how neural networks function, covers topics like reinforcement learning, and also delves into issues around AI ethics and safety.
Mitchell is a professor at the Santa Fe Institute, and her research focuses on conceptual abstraction and analogy-making in artificial intelligence systems.
In the final chapter of her book, Mitchell addresses two interesting (and currently quite relevant) questions:
How far are we from creating general human-level AI?
How terrified should we be about AI?
Although the book was published in 2019, I think her answers still hold up today.
On the first question regarding AGI, Mitchell believes that whilst AI can be used for many narrow tasks, general intelligence has still not been achieved:
What we do know is that general human-level AI will require abilities that AI researchers have been struggling for decades to understand and reproduce - commonsense knowledge, abstraction and analogy, among others - but these abilities have proven to be profoundly elusive. Other major questions remain: Will general AI require consciousness? Having a sense of self? Feeling emotions? Possessing a survival instinct and fear of death? Having a body?
On the second question, Mitchell argues that while we should be worried about AI, our concerns should not predominantly be focused on the rather speculative risks associated with artificial superintelligence (ASI). Instead, they should be focused on the more immediate, proven risks of current AI systems:
In any ranking of near-term worries about AI, superintelligence should be far down the list. In fact, the opposite of superintelligence is the real problem. Throughout this book, I've described how even the most accomplished AI systems are brittle; that is, they make errors when their inputs vary too much from the examples on which they've been trained. It's often hard to predict in what circumstances an AI system's brittleness will come to light. In transcribing speech, translating languages, describing the content of photos, driving in a crowded city - if robust performance is critical, then humans are still needed in the loop. I think the most worrisome aspect of AI systems in the short term is that we will give them too much autonomy without being fully aware of their limitations and vulnerabilities.
Human Compatible: AI and the Problem of Control by Stuart Russell
I am still trying to understand AI x-risk (see my post on this below), and so far this book has been one of the most informative on the topic.
Stuart Russell's Human Compatible: AI and the Problem of Control takes a measured, less hyperbolic look at AI development and the consequences it could have for humanity. In particular, Russell examines the risks of creating another entity more intelligent than humans and what we should do about them.
Russell is a computer science professor at Berkeley and also co-wrote (with Peter Norvig) what many consider the standard textbook on AI, Artificial Intelligence: A Modern Approach.
Central to the argument in Human Compatible is the concept of control. If we lose control of a superintelligent AI, then we expose ourselves to severe risks:
The reinforcement learning algorithms that optimize social media click-through have no capacity to reason about human behavior - in fact, they do not even know in any meaningful sense that humans exist. For machines with much greater understanding of human psychology, beliefs, and motivations, it should be relatively easy to gradually guide us in the directions that increase the degree of satisfaction of the machine's objectives. For example, it might reduce our energy consumption by persuading us to have fewer children, eventually - and inadvertently - achieving the dreams of anti-natalist philosophers who wish to eliminate the noxious impact of humanity on the natural world.
Algorithms Are Not Enough: Creating General Artificial Intelligence by Herbert L. Roitblat
The hype around OpenAI's language models, in particular the release of GPT-4 in March 2023, has triggered more conversations around whether it would be possible to create AI with a general intelligence akin to that of humans.
In Algorithms Are Not Enough: Creating General Artificial Intelligence, Herbert L. Roitblat argues that current AI systems like large language models (LLMs) do not provide a path toward artificial general intelligence (AGI). He does so by exploring what human intelligence is, what AI is, why current AI is not a precursor to AGI and what it would take to create AGI.
Roitblat is a data scientist with expertise in machine learning and natural language processing, among other areas, and a founder of several AI companies.
A key point made in Roitblat's book is that current AI systems are limited to solving 'path problems':
Solving [path problems] requires finding a path through a "space" that consists of all of the "moves" the system could make. Some combination of moves will solve the problem, and the computer's task is to find the specific path through the available moves that does actually solve it. Computational intelligence is the process of finding the sets of operations and their order (the path) necessary to solve a problem.
Current AI systems are therefore optimised for finding the 'correct' path from an input to a desired output. Even LLMs are, fundamentally, designed to use their learned understanding of language to predict the best response to a natural language prompt.
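To make the idea of a path problem concrete, here is a minimal sketch (my own illustration, not from Roitblat's book) of breadth-first search through a toy problem space. The `solve_path_problem` function, the `moves` callback, and the numeric puzzle are all assumptions chosen purely for demonstration:

```python
from collections import deque

def solve_path_problem(start, goal, moves):
    """Breadth-first search: find a sequence of states from start to goal.

    `moves` maps a state to the states reachable in one step; the
    'problem space' is everything reachable from `start`.
    """
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path  # the specific path that solves the problem
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # the goal is unreachable in this problem space

# Toy path problem: reach 10 from 1, where the available "moves"
# are adding 1 or doubling. BFS finds the shortest such path.
print(solve_path_problem(1, 10, lambda n: [n + 1, n * 2]))
# -> [1, 2, 4, 5, 10]
```

Note that the search can only operate over a problem space that has already been defined for it; as the next passage explains, constructing the space in the first place is what Roitblat argues such systems cannot do.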
But general intelligence, as Roitblat contends, is not limited to the ability to solve path problems. It also includes the ability to solve problems requiring non-linear types of thinking that are currently outside the scope of AI systems.
In particular, the general intelligence that humans are capable of involves the ability to identify new problems, think of ways to solve them (i.e., identifying the correct path) and then execute the solution:
The key part of natural intelligence is the apparent ability to construct problem spaces, not just find paths through one that has already been constructed. But natural intelligence also has other properties. Natural intelligence is not concerned with finding the optimal solution to problems. Rather, natural intelligence is willing to jump to conclusions that cannot be "proven" to be correct in any sense of the word.
The Coming Wave: AI, Power and the 21st Century's Greatest Dilemma by Mustafa Suleyman
Rather than AI itself being the source of existential risk, a more compelling argument is that the existential risk comes from humans doing bad things with AI.
This is what Mustafa Suleyman focuses on in The Coming Wave: AI, Power and the 21st Century's Greatest Dilemma. He looks at the coming wave of technologies like AI and the unique problems they will present to humanity.
Suleyman is the CEO of Microsoft AI and cofounder of DeepMind (which was acquired by Google in 2014).
The book defines 'waves' as the proliferation of transformative technologies as they become more affordable and more widely used. But for Suleyman, 'the coming wave' specifically refers to the proliferation of AI and synthetic biology:
Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen. And yet their rapid proliferation also threatens to empower a diverse array of bad actors to unleash disruption, instability, and even catastrophe on an unimaginable scale. This wave creates an immense challenge that will define the twenty-first century: our future both depends on these technologies and is imperiled by them.
The proliferation of these technologies could be either really good or really bad for society. This tension gives rise to the 'containment problem', which is about humanity getting the best out of these technologies while avoiding the worst:
How do we keep a grip on the most valuable technologies ever invented as they get cheaper and spread faster than any in history?
Suleyman's view is that, as things currently stand, such containment is NOT possible, meaning that the negatives of these technologies will end up outweighing the positives. He therefore argues that we must find a way to make containment possible.
Supremacy: AI, ChatGPT and the Race That Will Change the World by Parmy Olson
Much of the current AI hype is arguably being driven by two tech companies: OpenAI and Google DeepMind.
The story of these two companies, and their mission to build AGI, is the subject of Parmy Olson's Supremacy: AI, ChatGPT, and the Race That Will Change the World. It deals with the backstories of their respective founders (Sam Altman and Demis Hassabis) and their important involvement in the current generative AI hype cycle.
Olson is a columnist at Bloomberg covering technology and is also known for her research on the hacktivist network Anonymous, presented in her 2012 book We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency.
The main argument presented by Olson in her latest work is that the mission of both OpenAI and Google DeepMind to build AGI has (at least so far) ended up contributing to the power and dominance of Big Tech:
To build the most powerful software in history, they needed money and computing power, and their best source was Silicon Valley. Over time, both Altman and Hassabis decided they needed the tech giants after all. As their efforts to create superintelligent AI became more successful and as strange new ideologies buffeted them from different directions, they compromised their noble goals. They handed over control to companies who rushed to sell AI tools to the public with virtually no oversight from regulators, and with far-reaching consequences.
Her chapters on how OpenAI entered into its partnership with Microsoft and then released ChatGPT are particularly illustrative of this point. And in the penultimate chapter, Olson sums up the problem with the development of AI being subject to the motives of profit-seeking tech leviathans:
The most transformative technology in recent history was being developed by handfuls of people who were turning a deaf ear to its real-world side effects, who struggled to resist the desire to win big. The real dangers weren't so much from AI itself but from the capricious whims of the humans running it.