If Anyone Builds It, Everyone Dies

UK edition cover (2025)
Authors: Eliezer Yudkowsky and Nate Soares
Subject: Existential risk from artificial intelligence
Genre: Non-fiction
Publisher: Hachette Book Group
Publication date: 16 September 2025
Pages: 256
ISBN: 9780316595643

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (published with the alternate subtitle The Case Against Superintelligent AI in the UK) is a 2025 book by Eliezer Yudkowsky and Nate Soares which details potential threats posed to humanity by artificial superintelligence.

It was published in the United States on September 16, 2025, and debuted on The New York Times best-seller list.[1]

Synopsis

Modern AI systems are "grown" rather than "crafted": unlike traditional software, which consists of code written by humans, modern AI systems are hundreds of billions to trillions of numbers that no one understands. These numbers can be found using enormous computing power, but we do not truly understand how they work and cannot specify or control the values of the resulting systems. When an AI system threatens a New York Times reporter, or calls itself MechaHitler, no one can look inside, find the line of code responsible for that behavior, and fix it.

We can train AI systems to be generally competent: an AI that tries to achieve goals will perform better on many metrics, and so will be selected for by the training process. However, due to the nature of modern machine learning, we cannot specify the goals a superintelligent AI system would pursue, and with current technology, the goals it ends up with would not contain anything of value to humans.

Just as we would lose a game of chess against Stockfish, we would lose against an AI system generally more competent than us. The exact path is hard to predict, since predicting it would require being as good at achieving goals as the AI itself, but many paths would be open to it. A superintelligence would not care about humans, but it would want the resources we need to survive. Humanity would lose and go extinct.

The world's leaders, the scientific community, and everyone else need to speak up and warn the world about the danger. To avoid catastrophe, humanity needs to coordinate to halt large-scale general AI development everywhere, possibly with an exception for narrow AI systems, such as AlphaFold, that do not threaten humanity's existence. At a minimum, as a first step, we should preserve the option of a global halt later, as more evidence of the danger accumulates.

Reception

Reviews by public figures and scientists

Max Tegmark acclaimed it as "The most important book of the decade", writing that "the competition to build smarter-than-human machines isn't an arms race but a suicide race, fueled by wishful thinking."[2]

Stephen Fry called it the most important book he has read in years, writing in a review that the authors, who have studied AI for decades, "sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster", and applauding their "brilliant gift for analogy, metaphor and parable", used to explain AI engineering, cognition and neuroscience better than any of the many books he has read on the subject.[3]

Ben Bernanke described it as "a clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity."

It also received praise from Vitalik Buterin, Grimes, Yoshua Bengio, Scott Aaronson, Bruce Schneier, George Church, Tim Urban, Matthew Yglesias, Christopher Clark, Dorothy Sue Cobble, Huw Price, Fiona Hill, Steve Bannon, Emma Sky, Jon Wolfsthal, Joan Feigenbaum, Patton Oswalt, Mark Ruffalo, Alex Winter, Bart Selman, Liv Boeree, Zvi Mowshowitz, Jaan Tallinn, and Emmett Shear.[4][5][6][7]

Critical reception

Reviews of the book by critics have been mixed.

Upon its release, it was included in The New York Times best-seller lists for hardcover nonfiction and for combined print and e-book nonfiction.[8]

Writing for The New York Times, Stephen Marche said the book "reads like a Scientology manual" and that it "evokes the feeling of being locked in a room with the most annoying students you met in college while they try mushrooms for the first time."[9]

The Guardian called it one of the biggest books of the autumn, writing: "Should you worry about superintelligent AI? The answer from one of the tech world’s most influential doomsayers, Eliezer Yudkowsky, is emphatically yes. The good news? We aren’t there yet, and there are still steps we can take to avert disaster."[10] The paper made it its Book of the Day on September 22, 2025. In a review, The Guardian's non-fiction books editor David Shariatmadari wrote that "If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to swallow" and that "everyone with an interest in the future has a duty to read what [Yudkowsky] and Soares have to say."[11]

Tom Whipple, science editor at The Times, found the book convincing: "Are they right? Given the gravity of the case they make, it feels an odd thing to say that this book is good. It is readable. It tells stories well. At points it is like a thriller — albeit one where the thrills come from the obliteration of literally everything of value. [...] The achievement of this book is, given the astonishing claims they make, that they make a credible case for not being mad. But I really hope they are: because I can’t see a way we get off that ladder."[12]

Kevin Canfield, writing in the San Francisco Chronicle, said the book makes powerful arguments and recommended it.[13]

In The Atlantic, Adam Becker wrote that the book is "tendentious and rambling, simultaneously condescending and shallow. Yudkowsky and Soares are earnest; unlike many of the loudest prognosticators around AI, they are not grifters. They are just wrong. ... Yudkowsky and Soares fail to make an evidence-based scientific case for their claims."[14]

In an article titled "Why we must pull the plug on superintelligence", Paul Wood wrote for The Spectator: "If more and more people understand the danger, wake up and decide to end the “suicide race,” our fate is still in our own hands. If Anyone Builds It, Everyone Dies is an important book. We should consider its arguments – while we still can."[15]

Publishers Weekly said the book is an "urgent clarion call to prevent the creation of artificial superintelligence" and a "frightening warning that deserves to be reckoned with", but mentioned that the authors "make extensive use of parables and analogies, some of which are less effective than others" and "present precious few opposing viewpoints, even though not all experts agree with their dire perspective."[16]

Kirkus Reviews gave a positive review, calling the book "a timely and terrifying education on the galloping havoc AI could unleash—unless we grasp the reins and take control."[17]

Booklist gave the book a starred review: "Yudkowsky and Soares offer a stark examination of the existential risks posed by artificial superintelligence. [...] This is not a book about innovation; it is a book about survival. Aimed at technologists, policymakers, ethicists, and concerned citizens, it serves as a fire alarm for anyone shaping the future. Whether one agrees with its conclusions or not, the book demands serious attention and reflection."[18]

Steven Levy in Wired wrote that "even after reading this book, I don’t think it’s likely that AI will kill us all" and that "the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all", but mentioned a study of an AI contemplating blackmail and concluded: "My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can’t be sure they are wrong."[19]

Ian Leslie, writing for The Observer, said "the authors tell their story with clarity, verve and a kind of barely suppressed glee. For a book about human extinction, If Anyone Builds It, Everyone Dies is a lot of fun", but stated that he wasn't convinced that the "superintelligence they describe is remotely imminent, or that we’re goners if something like it ever does come to pass."[20]

Gary Marcus, in The Times Literary Supplement, wrote that "Things are worrying, but not nearly as worrying as the authors suggest" and that the authors "lay out this thesis thoughtfully, entertainingly, earnestly, provocatively and doggedly. Yet their book is also deeply flawed. It deserves to be read with an immense amount of salt."[21]

Jacob Aron, writing for New Scientist, called the book "extremely readable" but added that "the problem is that, while compelling, the argument is fatally flawed", concluding that effort would be better spent on "problems of science fact" like climate change.[22]

Writing in the effective altruist journal Asterisk Magazine, Clara Collier criticized the book as less coherent than the authors' prior writings and said it does not fully explain its premises.[23]

Grace Byron, in The Washington Post, wrote that the book "is less a manual than a polemic. Its instructions are vague, its arguments belabored and its absurdist fables too plentiful", adding that "Yudkowsky and Soares are certainly experts in their field, but this book often reads like a disgruntled missive from two aggrieved patriarchs tired of being ignored."[24]

References

  1. If Anyone Builds It, Everyone Dies. 3 March 2025. ISBN 978-1-6686-5265-7.
  2. "Max Tegmark on X: "Most important book of the decade:"". Twitter .
  3. If Anyone Builds It, Everyone Dies.
  4. "vitalik.eth (@VitalikButerin) on X: "A good book, worth reading to understand the basic case for why many people, even those who are generally very enthusiastic about speeding up technological progress, consider superintelligent AI uniquely risky"". Twitter .
  5. "If Anyone Builds It, Everyone Dies: Full Praise".
  6. ""If Anyone Builds It, Everyone Dies" release day! - Machine Intelligence Research Institute".
  7. "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All by Eliezer Yudkowsky, Nate Soares, Hardcover".
  8. "Combined Print & E-Book Nonfiction". The New York Times. Retrieved September 25, 2025.
  9. Marche, Stephen (27 August 2025). "A.I. Bots or Us: Who Will End Humanity First?". The New York Times. Retrieved 9 September 2025.
  10. "From a new Thomas Pynchon novel to a memoir by Margaret Atwood: the biggest books of the autumn". The Guardian. 6 September 2025. Retrieved 12 September 2025.
  11. Shariatmadari, David (22 September 2025). "If Anyone Builds it, Everyone Dies review – how AI could kill us all". The Guardian. Retrieved 22 September 2025.
  12. Whipple, Tom (13 September 2025). "AI — it's going to kill us all". The Times. Retrieved 13 September 2025.
  13. Canfield, Kevin (2 September 2025). "'Everyone, everywhere on Earth, will die': Why 2 new books on AI foretell doom". San Francisco Chronicle. Retrieved 11 September 2025.
  14. Becker, Adam (19 September 2025). "The Useful Idiots of AI Doomsaying". The Atlantic. Retrieved 19 September 2025.
  15. Wood, Paul (23 September 2025). "Why we must pull the plug on superintelligence". The Spectator. Retrieved 23 September 2025.
  16. "If Anyone Builds It, Everyone Dies: Why Superhuman AI Will Kill Us All". Publishers Weekly. Retrieved 11 September 2025.
  17. "IF ANYONE BUILDS IT, EVERYONE DIES". Kirkus Reviews. Retrieved 9 September 2025.
  18. "IF ANYONE BUILDS IT, EVERYONE DIES". Booklist. 2 September 2025. Retrieved 23 September 2025.
  19. Levy, Steven (5 September 2025). "The Doomers Who Insist AI Will Kill Us All". Wired. Retrieved 9 September 2025.
  20. Leslie, Ian (28 September 2025). "Visions of the AI apocalypse". The Observer. Retrieved 29 September 2025.
  21. "AI armageddon?".
  22. Aron, Jacob (8 September 2025). "No, AI isn't going to kill us all, despite what this new book says". New Scientist. Retrieved 9 September 2025.
  23. Collier, Clara (September 2025). "More Was Possible: A Review of If Anyone Builds It, Everyone Dies". Asterisk Magazine. Retrieved 24 September 2025. It's true that the book is more up-to-date and accessible than the authors' vast corpus of prior writings, not to mention marginally less condescending. Unfortunately, it is also significantly less coherent. The book is full of examples that don't quite make sense and premises that aren't fully explained. But its biggest weakness was described many years ago by a young blogger named Eliezer Yudkowsky: both authors are persistently unable to update their priors.
  24. Byron, Grace (28 September 2025). "Could AI be a truly apocalyptic threat? These writers think so". The Washington Post. Retrieved 29 September 2025.