Los Angeles has been leveled in a nuclear strike launched by a rogue artificial intelligence. Ten years later, the cleanup continues, the government has banned AI, and American military leaders testify in Washington about an ongoing battle against restive elements in New Asia — a continent now populated by humans, robots, and a race of anthropomorphised AI known as Simulants. The US military's hope of victory against these raging artificial machines rests in a machine of its own: a sky-darkening drone named Nomad, which hovers over the landscape to search for and obliterate the West's artificial enemies and the mythic engineer responsible for bringing them to life.
This plot line, from the film The Creator (2023), is fanciful in all the ways imagined futures tend to be. The orchestrated drama is interpersonal, of course, but staged amidst a global struggle mirroring the rhetoric of our present, with AI development fuelling an international arms race between great powers. What the film reminds us, though, is that whatever the technological promises of AI, our future will be negotiated through the most traditional means: politics. In other words, safeguarding tomorrow will rely on our management of one of the most powerful and lasting artificial agents: states.
This is, at least in part, the argument made by David Runciman in his new book, The Handover: How We Gave Control of Our Lives to Corporations, States and AIs. For Runciman, our world has long been an artificial one — constructed of non-human entities that intervene, shape, and preserve what we've come to accept as the status quo. He highlights states and corporations as the most potent artificial forces — synthetic entities that retain both the independence and the durability to exert a kind of superagency over the world they inhabit. His book thus highlights our enduring and precarious balance: as we've handed decision-making power to artificial agents, they have tended to expand their jurisdiction and exert persistent influence over the world we now share. The question is whether we — actors on their stage — are better or worse for it.
In some ways, The Handover is not unlike Frankenstein's monster: not only because Mary Shelley's famous tale is a warning about post-Enlightenment scientific hubris, but also because Runciman's book is a work of bricolage — a stitching together of thoughts across time and disciplines. The final assembly aims to awaken in the reader a kind of attentive focus on our everyday monsters — those states and corporations — that act (or fail to act) beyond our control.
Runciman, known widely for his writing on politics and democracy, and for his series of engaging podcasts — History of Ideas (and now Past Present Future) — cites the political philosopher Thomas Hobbes and his book Leviathan as the impetus for this project. Leviathan is a canonical text in philosophy and political theory and contains the line Hobbes is best known for, describing man's life in the state of nature as "nasty, brutish, and short." Hobbes believed that humankind, left to its own natural freedom, would descend into conflict. Avoiding this war of all against all, Hobbes wrote, would require a kind of common — and ultimate — authority, something he referred to as the sovereign. In short, Leviathan outlined why humans are better off relinquishing our natural freedoms for the stability afforded by a coercive and powerful state. For Runciman, however, it is Hobbes' opening paragraph that introduces the state's enduring model — "the great Leviathan … which is but an artificial man" — and highlights how this most powerful artificial agent still structures life for us today.
A professor in the Department of Politics at Cambridge, Runciman spends considerable ink cataloguing his terms of reference, tracing similarity and divergence between states, corporations, and — nearly halfway through the book — nascent forms of artificial intelligence. These sections are not without merit, but they suffer from the peril of plain speech: terms so commonly used that they lose precision. Runciman's treatise also suggests a historical metanarrative about how humans have built artificial entities for particular purposes. The net effect of these sections is a complex of nested arguments suggesting that the state and the corporation are emergent, necessary, and even twinned outgrowths of human endeavour — endeavours that, lined up across four centuries of history, define what we've come to see as progress.
In this sense, The Handover does important work in at least two domains. First, it offers a story about how the state and corporation emerged as powerful artificial agents — first in competition, then in a kind of symbiosis. Second, the book works to reframe the debates about AI — the latest and perhaps most profound artificial agent. Tomorrow's balancing act is not merely between human and machine, but between humans, their machines (states and corporations), and their machines' machines (AIs). Runciman's contribution here is refreshing and helpful.
One glaring incongruence, however, concerns not how, but why.
While both the state and corporation appear as responses to specific problems, our so-called "Age of AI" is born not of necessity but of opportunity. AI has been marketed as an emancipatory transformation — intelligence aimed at freeing us from tedious labour so that we might pursue creative — uniquely human? — ends. "I am excited about a world where AI is an extension of human will and an amplifier of our abilities," Sam Altman, CEO of OpenAI, said in an interview on the Lex Fridman podcast last spring. In other words, we haven't designed AIs to escape the state of nature; we — in the collective sense — don't design these latest AIs at all: corporations do.
While the average user might benefit from increased productivity, of course, we should think of this as a kind of capital service produced by, for, and through the private sector — a tool neither owned nor independently operated by the user. As such, AI development risks deepening the private sector's hold over the stories that shape our social, economic, and political lives.
Reading between the lines, we ought to worry about the diminished (and diminishing) degree of control we have over this latest form of artificial agent. After all, in theory we consent to the state and — for those in representative systems — exert voting power over its direction and interests. Through the state, we also work to control the modern corporation, often through taxation and regulation. But we are another full step removed from the AI tools those corporations are now racing to build. As AI's emergence coincides with profound mistrust of central institutions (from government to the media), Runciman's book reminds us how vulnerable we may be to the overtures of Big Tech and the corporate world if we fail to engage our older agents in the struggle to restrain their artificial progeny. If we fail to exert our own agency, these private actors won't only build tomorrow's tools; they will also determine how we should use them.
“These artificial creations are all around us. Many are so familiar that we fail to notice how artificial they are. Not just states and corporations, but schools and universities, town councils and city halls, sports clubs and religious organisations, political parties and radical cooperatives: all require that we build structures that allow them to go beyond the human frailties of their members, who may leave, or lose interest, or fall out, or get sick, or die.”
Most concerning, however, is how our status quo has been shaped by these artificial agents in ways that corrode our ability to respond, individually and collectively. Technology's essence is its intervening power — it is potent because cutting-edge capabilities alter how we should understand scale[1] and because the externalities of technological change can resound — and sometimes lie in wait — throughout time. This is why, in the end, The Handover is a story about politics, which means we're really discussing power, which leaves us with a few enduring questions.
First, we ought to wonder about the role of the technocrat. Towards the end of the book, Runciman writes:
“Anyone who can control the machines and how they operate will have hugely enhanced powers. The remainder of humanity – the vast majority – will be relatively powerless. Knowledge of how the world works will no longer be shareable. It will be the jealously guarded preserve of a tiny elite. A few people’s imaginations – Elon Musk and his like – will be all important. Everyone else’s will be increasingly irrelevant.”
While a technocrat is thought to control a technology, this idea of control may be dubious in our age of AI (particularly if we think Artificial General Intelligence is possible). Instead, we may end up with a kind of techno-aristocracy, with the owners of AI-producing corporations serving as de facto barons and baronesses. This future may look a lot like our past.
Second, if we are perched on the precipice of transformative AI, what are the implications of misaligned incentives with these new artificial actors? As Runciman writes:
“We have become immensely skilled at deploying our intelligence to make things work better, more efficiently, more securely, and yet we seem incapable of applying the same levels of thought and organisation to the preservation of the planet,” adding that “what we have is a mismatch between the drives of these artificial persons and the needs of the planet.”
This is an instance of structure overwhelming agency. Capitalism doesn't determine what we do in our everyday lives, but the structure of capitalism's attention economy does impede individuals' ability to advocate, petition, or press for reform. Climate change offers an essential example: the structural incentives of our market economies corrode our ability to leverage our modern human intelligence to work cooperatively against the extractive status quo.
“This is the terrible quandary of modern life. We need states to have a life of their own. As a result, they do have a life of their own. As a result of that, they don’t always do what we want, or need.”
In this same vein, I spend a lot of time thinking about whether AI's emergence is merely the latest iteration of modern capitalism's land grab. Runciman has noted that the largest (usually tech) companies today are, at heart, advertising firms. These insights matter for at least two reasons. One, the size and influence of private technology companies make them potent agents in the world and amplify their power to shape the story, and thus our expected “future.” Two, just as we are born as citizens into the modern social contract with the state, AIs will emerge into a world shaped by market influences. If these companies are also the progenitors of AI tools, why wouldn't they design them in line with their corporate interests?
One of the challenges for any writer is finding a balance between past, present, and future. And The Handover is neither prescriptive nor prospective: it is illustrative. The book concludes with a kind of begrudging assertion that democracy, precisely because it is lumbering and inefficient, remains the most reasonable mechanism to shape a future we want to inherit.
In his previous book, How Democracy Ends, Runciman highlighted the potential of citizen juries, weighted voting systems, and regulatory oversight committees as changes that might rejuvenate our aged artificial (political) machines. These are debates that couldn't fit into the latest book, but they speak to the need to restore political power to humans in an increasingly mechanised world.
In sum, The Handover might be seen as a kind of reclamation project — preserving an overlooked history, particularly given the breathlessness of our current discussion about AI. Seeing the precarity of our current moment as a battle between human and artificial agents ignores our lengthy history of consciously or unconsciously handing decision-making power to non-human entities. That's why, in casting our eyes forward, Runciman warns that our future will be born of negotiation between states, corporations, and AIs. In other words, we are descendants of an endless intergenerational struggle, duty-bound to confront the ways artificial agents may threaten our social compact — frayed as it may be — to ensure we retain a seat at that negotiating table.
For more: You can listen to my discussion with David Runciman on the Intelligence Squared podcast.
[1] Technology enables small changes with systemic effects: e.g., a single line of code, written in Silicon Valley, can influence the shape and management of global financial markets.