A while back, I read The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma, by DeepMind and Inflection AI co-founder Mustafa Suleyman, and I thought I would write up a quick review for anyone who hasn’t read it yet. The book focuses on two technological waves. The first is AI, which has of course captured the cultural imagination over the last couple of years. The second, synthetic biology, is perhaps less prevalent in popular discussions of technology. Of these two technologies, Suleyman writes:
Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen. And yet their rapid proliferation also threatens to empower a diverse array of bad actors to unleash disruption, instability, and even catastrophe on an unimaginable scale. This wave creates an immense challenge that will define the twenty-first century: our future both depends on these technologies and is imperiled by them. (p. 7)
According to Suleyman, AI is already effectively ubiquitous, with algorithms applied across industries to optimize and automate work; LLMs have just moved this ubiquity into the realm of language- and image-based work products, which heretofore have been the sole province of humans. The rapid advancement and growing sophistication of these models means we’re entering a phase of what Suleyman calls “‘artificial capable intelligence’ (ACI), the point at which AI can achieve complex goals and tasks with minimal oversight” (p. 77).
Synthetic biology, meanwhile, is poised for a similar technological explosion, due to our growing capability to manipulate DNA using gene editing technologies like CRISPR. As a writing scholar, I was struck by Suleyman’s metaphor here: “If [gene] sequencing is reading, synthesizing is writing. And writing doesn’t just involve reproducing known strands of DNA; it also enables scientists to write new strands, to engineer life itself” (p. 83). The connection between AI and synthetic biology should now be clear: if AI can “write” in the sense of producing probabilistically meaningful sequences of symbols, and writing also includes probabilistically meaningful sequences of DNA, then we’re looking at a point at which AI could, with relative independence, write life itself.
When Suleyman argues that these combined technologies are crucial for the future, he mainly means the potential for them to address issues like climate change and the spread of disease. But their potential is much more complex. Suleyman articulates four features of the coming wave:
Shifts in power: AI and synthetic biology “represent a colossal transfer of power away from traditional states and militaries toward anyone with the capacity, and motivation, to deploy these devices” (p. 106).
Technological acceleration: “Put simply, innovation in the ‘real world’ could start moving at a digital pace, in near-real time, with reduced friction and fewer dependencies. You will be able to experiment in small, speedy, malleable domains, creating near-perfect simulations, and then translate them into concrete products. And then do it again, and again, learning, evolving, and improving at rates previously impossible” (p. 108).
Omni-use: AI in particular “will be an on-demand utility that permeates and powers almost every aspect of daily life, society, the economy: a general-purpose technology embedded everywhere,” from education to research to industry to the military (p. 111).
Autonomy: “The new wave of autonomy heralds a world where constant intervention and oversight are increasingly unnecessary. What’s more, with every interaction we are teaching machines to be successfully autonomous. In this paradigm, there is no need for a human to laboriously define the manner in which a task should take place. Instead, we just specify a high-level goal and rely on a machine to figure out the optimal way of getting there. Keeping humans ‘in the loop,’ as the saying goes, is desirable, but optional” (p. 113).
These combined features of the coming wave lead Suleyman to argue for containment: not just policies, but a widespread, international commitment to control the rate of change and the means of access, in order to avoid both misuse by bad actors and a potential worst-case scenario posed by an artificial general intelligence (AGI) that turns against humanity. Luckily, he doesn’t spend too much time on this latter point because he is (rightly, I think) more concerned about near-term abuses of emerging technologies, such as militarized drones used against citizens, AI-driven misinformation, state-sponsored propaganda, leaks from bio labs, and labor market disruptions (pp. 160-182). Such questions are much more pressing than potential Terminator-esque futures.
Suleyman believes that nation-states are still in the best position to effect containment. To his credit, he does recognize that nation-states are fragile, and that technology poses a threat to their existence. Still, he argues that we need to thread the needle between failed liberal democracies and authoritarian regimes that will abuse omni-use technologies: “Responding effectively to one of the most far-reaching and transformative events in history will require mature, stable, and most of all trusted governments to perform at their best. States that work really, really well. That is what it will take to ensure that the coming wave delivers the great benefits it promises. It’s an incredibly tall order” (p. 159).
He urges the international community to work towards ten steps to ensure effective containment:
Investing in technical safety research and implementation
Auditing technology companies and state actors
Buying time by leveraging “choke points” such as a limit in the number of available computer chips (p. 251) to slow the rate of progress
Involving technology critics in the development process to help support ethical deployment
Supporting businesses with social benefit missions
Supporting healthy nation-states that can reform and regulate (p. 258) technologies
Creating international treaties that aim at technological containment
Supporting cultures of learning in technology corporations
Involving a broad popular base of individuals and communities in debates and developments
Aiming for “coherence” among the previous nine steps, “ensuring that each element works harmoniously with the others, that containment is a virtuous circle of mutually reinforcing measures and not a gap-filled cacophony of competing programs” (p. 274).
It may be tempting to dismiss Suleyman’s book as just another hype mechanism. After all, as a technologist and CEO of Inflection AI, he has a vested interest in touting the profoundly transformative potential of emerging technologies—and indeed, Inflection AI’s latest LLM may be sophisticated enough to compete with GPT-4 and Gemini. A cynical reader might see The Coming Wave as simply more salesmanship.
However, speaking personally, I identify with Suleyman’s ethos in this book: to me, he appears to be threading a difficult needle between embrace and rejection, between uncritical cheerleading and Luddite-like criticism. I feel the need to strike a similar balance in my work with faculty across disciplines, in which I regularly propose critical engagement with AI, not simple endorsement or rejection. I think Suleyman and I both agree that the technology is not only here to stay, but rapidly advancing to such an extent that critical engagement is crucial.
I also think that if he were merely in hype mode, Suleyman might not be so apt to think multi-dimensionally about the political, economic, scientific, regulatory, and cultural factors that impact the development of these technologies and that might limit their potential containment. He does have a strong grasp of those many factors, and I highly recommend reading his book if you want to learn more about them.
That said, I also think he missed a few crucial points about the coming wave that bear considering:
First, I am particularly worried about the coming invisibility of AI. As the technology proliferates and integrates into our daily lives, it will become increasingly quotidian. The more accepted and invisible it becomes, the less open it is to critical engagement, and the less need people, especially everyday people, will see to engage in the business of containment.
Second, as the technologies become more prevalent, they have the potential to change our ways of thinking. This goes beyond just propaganda and manipulation, although those are important. Even seemingly mundane uses of AI have the potential to contribute to new cultural scripts that influence the ways we understand technology, politics, knowledge, and the environment. The more automated AI becomes, the less control we humans may have over the production of such cultural scripts, which means we have all the more need to teach, learn, and practice critical ways of reading taught in the humanities.
Finally, while Suleyman does make some passing references to climate change and energy consumption, I think he underplays the threat AI poses to the climate. This is understandable: I, too, hope that AI can help us devise useful solutions to climate catastrophe, but so long as it consumes water and energy at staggering levels, it looks poised to be more of a problem than a solution.
Despite these reservations, I think The Coming Wave is a good read for anyone who wants to think big-picture about the potential impacts of emerging technologies and possible means of containment.