Two+ years is more than a digital lifetime. It has been too long — and I’ve been churning through too many ideas.
So … I’m back!
Since May 2021, I’ve submitted and defended my dissertation at Oxford, moved across the Atlantic (from London to Ottawa), and joined Canada’s Department of National Defence. These changes came with their own version of culture shock (think ice skating to work in February 2022) and a lengthy period of intellectual soul-searching. I struggled (and still struggle) to balance the strictures of my new responsibilities with the intellectual bounty of my previous work as a journalist and academic. So, of course, within the first six months I decided I would (surprise, surprise) write a book.
The result is my current manuscript (very much under construction), The Price of Certainty: Artificial Intelligence, Decision-Making, and the Future of Conflict, for McGill-Queen’s University Press. The book is a departure from my previous work, which focused on civil wars and insurgent organizations, but it harnesses much of what I’ve read, thought, and written about leadership and decision-making. The book is a way to address my own concern that the breathless public discourse around AI technologies has obscured critical questions about how these tools might shape the way humans make sense of and act in the world. In some ways, the book’s genesis goes back years and remains connected to ideas I’d explored in this newsletter (here and here).
But I’m also thinking about how to build a community of readers and thinkers around key debates and ideas in our pending ‘age of AI.’ I use “pending” because whatever materializes should be shaped by what we want these tools to deliver. So often, as narrated by industry insiders or the media outlets bleeding PR farms for “expert perspectives,” these technologies appear to be happening to us, instead of being tools designed by us. [The second “us” here is a particular class of technocrat, seemingly excited and terrified about what they’re building, see: Exhibit A]. It remains trite but apt to note that AI may still be a solution in search of a problem.1
For the last couple of years, I’ve read extensively and thought — perhaps too much — about what our technological future(s) might look like. And how the tech-industry narratives are working overtime to convince us of a particular vision of that future. At stake in this kind of corporate imagining is not merely the creation of new technologies, but the rebalancing of our social and political systems.
With colleagues, I’m exploring how AI tools may be responsible for the largest transfer of power from the public to the private sector in human history, while also thinking about tangible steps we might take to stabilize our information ecosystems (particularly as AI tools threaten to produce mis- and disinformation at industrial scale). I’ll share more about these projects as they develop. These are just a few of many live-wire discussions, but they can breed a kind of intellectual helplessness, as if we’re each battling currents too powerful for even the strongest swimmers.
Which is why I’m back.
My book project and many of these puzzles demand a kind of intellectual and imaginative commitment — a dedication to connecting the ideas and arguments of the past to the challenges of our present and the potential consequences for our future.
Over the next few months, I’ll be publishing a series of (book) review essays, in which I’ll summarize each book’s key arguments and place them in conversation with other thinkers, ideas, and debates to open space for discussion. To facilitate this, I’m launching a parallel newsletter, We Will Survive As Ruins. I sincerely hope you’ll join me and (please, please) invite others.2
First Look: My first review essay will explore David Runciman’s new book, The Handover: How We Gave Control of Our Lives to Corporations, States, and AIs. David is a Professor of Politics at the University of Cambridge and the author of Confronting Leviathan (2021) and How Democracy Ends (2018), among others. He is also the former host of the London Review of Books’ Talking Politics podcast and the wonderful History of Ideas series (which I’ve written about previously, here). He currently hosts Past, Present, and Future, which expands his study of the history of ideas. Most exciting: the first review essay for the newsletter will coincide with my video interview with David for Intelligence Squared, with a companion podcast episode to follow (more details, and links, on this soon).
“A Solution In Search of a Problem basically means you are so focused on creating a product or solution that you believe is innovative and valuable, that you build without first identifying a clear and pressing problem. In the context of startups, it can be a critical mistake that leads to wasted time, resources, and ultimately, failure,” as noted by Daivik Goel. There are many who believe the opposite, and even the inverse: according to Paul Graham, the founder of Y Combinator (which OpenAI’s Sam Altman previously helmed), AI provides solutions to more problems than its designers ever imagined.
We Will Survive As Ruins will be a free newsletter to begin, but I am leaving open an option to pledge support. A pledge is just as it sounds — it won’t cost you anything up front — but it gives me a sense of who might be willing to contribute if I migrate the newsletter to a paid subscription model in the future.