If Anyone Builds It, Everyone Dies

UK edition cover (2025)
Authors: Eliezer Yudkowsky and Nate Soares
Subject: Existential risk from artificial intelligence
Genre: Non-fiction
Publisher: Hachette Book Group
Publication date: 16 September 2025
Pages: 256
ISBN: 9780316595643

If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All (published with the alternate subtitle The Case Against Superintelligent AI in the UK) is a 2025 book by Eliezer Yudkowsky and Nate Soares which details the potential threats posed to humanity by artificial superintelligence.

The book was published in the United States on 16 September 2025. [1]

Reception

Reviews by public figures and scientists

Max Tegmark acclaimed it as "The most important book of the decade", writing that "the competition to build smarter-than-human machines isn't an arms race but a suicide race, fueled by wishful thinking." [2]

Stephen Fry wrote in a review: "Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I've ever read, and I've waded through scores of them" and called it the most important book he's read for years. [3]

Ben Bernanke described it as "A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity."

It also received praise from Vitalik Buterin, Grimes, Scott Aaronson, Bruce Schneier, George Church, Tim Urban, Dorothy Sue Cobble, Huw Price, Jon Wolfsthal, Mark Ruffalo, Patton Oswalt, Alex Winter, and Emmett Shear. [4] [5]

Critical reception

Reviews of the book by critics have been more mixed.

Writing for The New York Times , Stephen Marche said the book "reads like a Scientology manual" and that it "evokes the feeling of being locked in a room with the most annoying students you met in college while they try mushrooms for the first time". [6]

The Guardian wrote: "Should you worry about superintelligent AI? The answer from one of the tech world’s most influential doomsayers, Eliezer Yudkowsky, is emphatically yes. The good news? We aren’t there yet, and there are still steps we can take to avert disaster." [7]

Tom Whipple, science editor at The Times, found the book convincing: "Are they right? Given the gravity of the case they make, it feels an odd thing to say that this book is good. It is readable. It tells stories well. At points it is like a thriller — albeit one where the thrills come from the obliteration of literally everything of value. [...] The achievement of this book is, given the astonishing claims they make, that they make a credible case for not being mad. But I really hope they are: because I can’t see a way we get off that ladder." [8]

Steven Levy in Wired wrote that "even after reading this book, I don’t think it’s likely that AI will kill us all" and that "the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will murder us all", but cited a study of AI contemplating blackmail and concluded: "My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can’t be sure they are wrong." [9]

Kevin Canfield, writing in the San Francisco Chronicle, found the book's arguments powerful and recommended it. [10]

Jacob Aron, writing for New Scientist, called the book "extremely readable" but added that "the problem is that, while compelling, the argument is fatally flawed", concluding that effort would be better spent on "problems of science fact" like climate change. [11]

Publishers Weekly said the book is an "urgent clarion call to prevent the creation of artificial superintelligence" and a "frightening warning that deserves to be reckoned with", but mentioned that the authors "make extensive use of parables and analogies, some of which are less effective than others" and "present precious few opposing viewpoints, even though not all experts agree with their dire perspective." [12]

Kirkus Reviews gave a positive review, calling the book "a timely and terrifying education on the galloping havoc AI could unleash". [13]

References

  1. If Anyone Builds It, Everyone Dies. 3 March 2025. ISBN 978-1-6686-5265-7.
  2. "Max Tegmark on X: "Most important book of the decade:"". Twitter.
  3. "If Anyone Builds It, Everyone Dies".
  4. "vitalik.eth (@VitalikButerin) on X: "A good book, worth reading to understand the basic case for why many people, even those who are generally very enthusiastic about speeding up technological progress, consider superintelligent AI uniquely risky"". Twitter.
  5. "If Anyone Builds It, Everyone Dies: Full Praise".
  6. Marche, Stephen (27 August 2025). "A.I. Bots or Us: Who Will End Humanity First?". The New York Times. Retrieved 9 September 2025.
  7. "From a new Thomas Pynchon novel to a memoir by Margaret Atwood: the biggest books of the autumn". The Guardian. 6 September 2025. Retrieved 12 September 2025.
  8. Whipple, Tom (13 September 2025). "AI — it's going to kill us all". The Times. Retrieved 13 September 2025.
  9. Levy, Steven (5 September 2025). "The Doomers Who Insist AI Will Kill Us All". Wired. Retrieved 9 September 2025.
  10. Canfield, Kevin (2 September 2025). "'Everyone, everywhere on Earth, will die': Why 2 new books on AI foretell doom". San Francisco Chronicle. Retrieved 11 September 2025.
  11. Aron, Jacob (8 September 2025). "No, AI isn't going to kill us all, despite what this new book says". New Scientist. Retrieved 9 September 2025.
  12. "If Anyone Builds It, Everyone Dies: Why Superhuman AI Will Kill Us All". Publishers Weekly. Retrieved 11 September 2025.
  13. "IF ANYONE BUILDS IT, EVERYONE DIES". Kirkus Reviews. Retrieved 9 September 2025.