Thomas Hobbes was an English philosopher who lived through the English Civil War in the mid-1600s. The political and religious backdrop to that conflict was a key influence on his work.
In 1651, he published his book Leviathan, a text which has gone on to have a great influence on modern political philosophy. Written as the Civil War unfolded, it offers "an analysis of the breakdown in civil society, and the construction of a political philosophy that would obviate the causes of such a breakdown."1
The central thesis presented by Hobbes in Leviathan is that the only way to achieve social cohesion and avoid the brutalities of a state of nature is by having one absolute sovereign power that everyone submits to:
The only way to erect...a common power, as may be able to defend [men] from the invasion of foreigners, and the injuries of one another, and thereby secure them in such sort, as that by their own industry, and by the fruits of the earth, they may nourish themselves and live contentedly; is to confer all of their power and strength upon one man, or upon one assembly of men, that may reduce all their wills, by plurality of voices, unto one will...This is the generation of that great Leviathan, or rather (to speak more reverently) of that Mortal God, to which we owe under the Immortal God, our peace and defence.2
His theory consists of three essential elements:
The Nature of Man. Hobbes believed that human nature is competitive, selfish, and violent. If "any two men desire the same thing, which nevertheless they cannot both enjoy, they become enemies; and in the way to their end, (which is principally their own conservation, and sometimes their delectation only,) endeavour to destroy or subdue each other."3 Therefore, it was in the nature of man "to make themselves masters of other men's persons, wives, children, and cattle."4
State of Nature. Accordingly, the consequence of these motives is that, absent any common power, we are left in a 'state of nature'. In other words, it is everyone for themselves: a harsh existence of "continual fear, and danger of violent death", in which "the life of man [is] solitary, poor, nasty, brutish, and short" as each pursues his own self-preservation.5 Left in this state, there is no possibility of anything resembling a well-functioning society, and the "notions of right and wrong, justice and injustice have there no place."6
Sovereign. To escape this state of nature, Hobbes advocated a powerful sovereign who controls individuals to prevent conflict and ensure stability. This is the leviathan, a commonwealth (in Latin, civitas), which secures everyone from the perils of the harsh world in which we live.7
One interpretation of Hobbes's view is that the relinquishment of individual autonomy to an absolute sovereign is necessary for peace. And this concept, relinquishment in return for safety, can also be observed in how many people treat the technological tools of modern times.
In his book Homo Deus, Yuval Noah Harari describes how modernity is dictated in large part by the 'Data Religion' or 'dataism'. Under this doctrine, "the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing."8 It also professes that artificial algorithms will "eventually decipher and outperform biochemical algorithms."9
Furthermore, dataists believe that we need not, and ultimately cannot, "find meaning within ourselves"; we need only "record and connect our experience to the great data flow, and the algorithms will discover its meaning and tell us what to do."10 In essence, our feelings are no longer the best algorithms, and we should instead increase our reliance on super-sized computer algorithms powered by huge amounts of data and computing power. These algorithms, after all, "know exactly how you feel, they also know a million other things about you that you hardly suspect."11
What Harari's thesis highlights is how reliant we are becoming on modern technology, especially the current crop of AI systems. There is a growing belief that such systems can deliver on our wants and desires in a wide variety of contexts with the requisite reliability.
In this way, AI could be seen as a modern leviathan. Under this supposed data religion, people entrust more and more aspects of their lives to AI. The expectation is that, because these systems are said to be highly capable, this entrustment will provide people with everything they need. We are trading agency for progress.
But, of course, there are drawbacks.
There are clear risks in being too reliant on AI, and the legal industry offers a cautionary example. In a 2025 judgment, the English High Court warned that lawyers who submit AI-generated material to court without proper verification could face contempt of court proceedings or even criminal investigation. In that judgment, Dame Victoria Sharp P acknowledged that while AI "is a powerful technology" and "can be a useful tool in litigation",12 its use should come with "an important proviso."13 In particular:
Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained.
[...]
In the context of legal research, the risks of using artificial intelligence are now well known. Freely available generative artificial intelligence tools, trained on a large language model such as ChatGPT are not capable of conducting reliable legal research. Such tools can produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.
Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).14
Even Sam Altman has recognised the perils of AI as a leviathan. In a long post on X regarding OpenAI's GPT-5 rollout, he expressed concern that people "have used technology including AI in self-destructive ways." He went on to say:
A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today.
If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.
I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.
All of this raises the question of the 21st century, which Jamie Susskind put very articulately in his book Future Politics:
...to what extent should our lives be directed and controlled by powerful digital systems - and on what terms?15
If we embrace the current crop of AI systems as leviathans, we will not really be trading agency for progress: these systems are not reliable enough to deliver the capabilities needed for the most delicate and consequential use cases. True companionship seems a long shot.
But more importantly, even if these machines were so capable, should we be treating them as leviathans? If we continue to give up more of our agency to algorithms, then perhaps, eventually, we are "reduced from engineers to chips, then to data, and eventually we might dissolve within the data torrent like a clump of earth within a gushing river."16 And maybe that is how AI eventually replaces humanity.
Thomas Hobbes, Leviathan (OUP 2008), p.xx.
Thomas Hobbes, Leviathan (OUP 2008), p.114.
Thomas Hobbes, Leviathan (OUP 2008), p.83.
Thomas Hobbes, Leviathan (OUP 2008), p.83.
Thomas Hobbes, Leviathan (OUP 2008), p.84.
Thomas Hobbes, Leviathan (OUP 2008), p.85.
Thomas Hobbes, Leviathan (OUP 2008), p.114.
Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker London 2016), p.367.
Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker London 2016), p.367.
Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker London 2016), p.386.
Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker London 2016), p.392.
Ayinde v London Borough of Haringey, and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), para. 4.
Ayinde v London Borough of Haringey, and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), para. 5.
Ayinde v London Borough of Haringey, and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), paras. 5-7.
Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech (OUP 2018), p.2.
Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harvill Secker London 2016), p.395.