| UK edition cover (2025) | |
|---|---|
| Author | Eliezer Yudkowsky and Nate Soares |
| Subject | Existential risk from artificial intelligence |
| Genre | Non-fiction |
| Publisher | Hachette Book Group |
| Publication date | 16 September 2025 |
| Pages | 256 |
| ISBN | 9780316595643 |
If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (published with the alternate subtitle The Case Against Superintelligent AI in the UK) is a 2025 book by Eliezer Yudkowsky and Nate Soares which details potential threats posed to humanity by artificial superintelligence.
It appeared on The New York Times best-seller list on October 5, 2025. [1]
Modern AI systems are "grown" rather than "crafted": unlike traditional software, which consists of code written by humans, modern AI systems are made of hundreds of billions to trillions of numbers that no one understands. These numbers can be found using enormous computing power, but humans do not truly understand how they work and can neither specify nor control the values they give rise to. When an AI system threatens a New York Times reporter or calls itself "MechaHitler", no one can look inside, find the line of code responsible for that behavior, and fix it.
Humans can train AI systems to be generally competent. An AI that tries to achieve goals will perform better on many metrics, so goal-directed behavior is selected for by the training process. However, due to the nature of modern machine learning, it is not possible for humans to specify the goals that a superintelligent AI system should pursue. With current technology, the AI's goals would not contain anything of value to humans.
Just as humans would lose a game of chess against Stockfish, they would lose against an AI system that is generally more competent than they are. The exact path is hard to predict, since predicting it would require being as good at achieving goals as the AI itself, but many paths would be open to it. A superintelligence would not care about humans, yet it would want the resources that humans need to survive. Humanity would thus lose the conflict and go extinct.
The book's authors contend that the world's leaders, the scientific community, and everyone else need to speak up and warn the world about the danger. To avoid catastrophe, they believe humanity must coordinate to halt large-scale general AI development everywhere, possibly with an exception for narrow AI systems like AlphaFold that would not threaten humanity's existence. At a minimum, as a first step, humanity should enact a global halt on AI research as evidence of the danger accumulates.
Max Tegmark hailed it as "The most important book of the decade", writing that "the competition to build smarter-than-human machines isn't an arms race but a suicide race, fueled by wishful thinking." [2]
It also received praise from Stephen Fry, [3] Ben Bernanke, [4] Vitalik Buterin, Grimes, Yoshua Bengio, Scott Aaronson, Bruce Schneier, George Church, Tim Urban, Matthew Yglesias, Christopher Clark, Dorothy Sue Cobble, Huw Price, Fiona Hill, Steve Bannon, Emma Sky, Jon Wolfsthal, Joan Feigenbaum, Patton Oswalt, Mark Ruffalo, Alex Winter, Bart Selman, Liv Boeree, Zvi Mowshowitz, Jaan Tallinn, and Emmett Shear. [5] [6] [7] [8]
Critical reception of the book has been mixed.
Upon its release, it was included on The New York Times best-seller lists for hardcover nonfiction and for combined print and e-book nonfiction. [9]
Writing for The New York Times, Stephen Marche compared the book to a Scientology manual and said reading it was like being trapped in a room with irritating college students on their first mushroom trip. [10]
The Guardian called it one of the biggest books of the autumn, stating that superintelligent AI is dangerous but that humanity can still take steps to avoid disaster. [11] It was featured as the paper's "Book of the day" on September 22, 2025. In a review, The Guardian's non-fiction books editor David Shariatmadari wrote that "If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to swallow" and that anyone who cares about the future has a responsibility to read the book's arguments. [12]
Tom Whipple, the science editor at The Times, described the book as both compelling and disturbing, noting its readability and engaging storytelling, which at times resembled a thriller, "albeit one where the thrills come from the obliteration of literally everything of value". While finding the authors' astonishing and dire claims credible, he expressed hope that they are wrong, since he himself could not see a way to avoid the apocalyptic outcome they describe. [13]
Bill Conerly wrote for Forbes that the book persuaded him that the catastrophic risk to humanity was greater than he had previously thought. He noted that the book effectively used parables to argue that AI self-improvement could follow unpredictable evolutionary paths, diverging from the intentions of its human programmers and eventually leading the AI to view humans as a hindrance or simply not valuable enough to warrant resources. He concluded that he was "far more concerned" after reading the book. [14]
Kevin Canfield wrote in the San Francisco Chronicle that the book makes powerful arguments and recommended it. [15]
In The Atlantic, Adam Becker wrote that the book is "tendentious and rambling, simultaneously condescending and shallow. Yudkowsky and Soares are earnest; unlike many of the loudest prognosticators around AI, they are not grifters. They are just wrong... Yudkowsky and Soares fail to make an evidence-based scientific case for their claims." [16]
In an article titled "Why we must pull the plug on superintelligence", Paul Wood wrote for The Spectator: "If more and more people understand the danger, wake up and decide to end the 'suicide race', our fate is still in our own hands. If Anyone Builds It, Everyone Dies is an important book. We should consider its arguments – while we still can." [17]
Publishers Weekly said the book is an "urgent clarion call to prevent the creation of artificial superintelligence" and a "frightening warning that deserves to be reckoned with", but mentioned that some of the parables and analogies are less effective than others and that very few opposing viewpoints are presented. [18]
Kirkus Reviews gave a positive review, calling the book "a timely and terrifying education on the galloping havoc AI could unleash—unless we grasp the reins and take control." [19]
Booklist gave the book a starred review, praising Yudkowsky and Soares for their analysis of the existential threats posed by artificial superintelligence and for detailing a potentiality that is less about technological advancement and more about the survival of humanity. The review likens the book to a "fire alarm" for anyone involved in shaping the future, emphasizing that it demands serious consideration and reflection from all stakeholders, whatever their opinion of its conclusions. [20]
Steven Levy in Wired expressed skepticism regarding the likelihood of AI causing human extinction, finding the authors' proposed solutions for preventing devastation more improbable than their doomsday scenarios, but mentioned a study of AI contemplating blackmail and concluded "My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can’t be sure they are wrong." [21]
The New Zealand Herald made it its book of the day on October 21, 2025, asking: "How many chances do you want to take with the future of our species?" [22]
Ian Leslie, writing for The Observer, said the authors told their story with "clarity, verve, and barely suppressed glee," making it "a lot of fun" for a book about human extinction. However, he was not convinced that superintelligence as described is imminent, or that, if it emerges, it would likely lead to humanity's demise. [23]
Gary Marcus in The Times Literary Supplement wrote that "Things are worrying, but not nearly as worrying as the authors suggest" and that the authors "lay out this thesis thoughtfully, entertainingly, earnestly, provocatively and doggedly. Yet their book is also deeply flawed. It deserves to be read with an immense amount of salt." [24]
Jacob Aron, writing for New Scientist , called the book "extremely readable" but added that "the problem is that, while compelling, the argument is fatally flawed", concluding that effort would be better spent on "problems of science fact" like climate change. [25]
Clara Collier criticized the book in the effective altruist journal Asterisk Magazine for being less coherent than the authors' prior writings: "It's true that the book is more up-to-date and accessible than the authors' vast corpus of prior writings, not to mention marginally less condescending. Unfortunately, it is also significantly less coherent. The book is full of examples that don't quite make sense and premises that aren't fully explained. But its biggest weakness was described many years ago by a young blogger named Eliezer Yudkowsky: both authors are persistently unable to update their priors." [26]
Grace Byron in the Washington Post criticized the book for being a polemic with vague instructions rather than a manual. She concluded that while the authors are subject matter experts, the book feels like it was written by "two aggrieved patriarchs tired of being ignored". [27]