In AI safety, P(doom) is the probability of existentially catastrophic outcomes (so-called "doomsday scenarios") as a result of artificial intelligence. [1] [2] The exact outcomes in question vary from one estimate to another, but generally refer to the existential risk from artificial general intelligence. [3]
Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton [4] and Yoshua Bengio [5] began to warn publicly of the risks of AI. [6] In a 2023 survey, AI researchers were asked to estimate the probability that future AI advances could lead to human extinction or similarly severe and permanent disempowerment within the next 100 years; the mean response was 14.4% and the median 5%. [7]
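The gap between the mean and the median reflects a right-skewed distribution: a minority of very high estimates pulls the mean well above the typical response. A minimal sketch with hypothetical numbers (not the actual survey data) illustrates the effect:

```python
import statistics

# Hypothetical responses in percent, chosen only to illustrate skew;
# these are NOT the 2023 survey's actual data.
responses = [0, 1, 2, 5, 5, 5, 10, 20, 40, 80]

print(statistics.mean(responses))    # 16.8 -- pulled upward by a few high estimates
print(statistics.median(responses))  # 5.0  -- the typical respondent
```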
Name | P(doom) | Notes
---|---|---
Elon Musk | c. 10–30% [8] | Businessman and CEO of X, Tesla, and SpaceX |
Lex Fridman | 10% [9] | American computer scientist and host of Lex Fridman Podcast |
Marc Andreessen | 0% [10] | American businessman |
Geoffrey Hinton | 10–20% (all-things-considered); >50% (independent impression) [11] | "Godfather of AI" and 2024 Nobel Prize laureate in Physics
Demis Hassabis | >0% [12] | Co-founder and CEO of Google DeepMind and Isomorphic Labs; 2024 Nobel Prize laureate in Chemistry
Lina Khan | 15% [6] | Former chair of the Federal Trade Commission
Dario Amodei | c. 10–25% [6] [13] | CEO of Anthropic |
Vitalik Buterin | 10% [1] [14] | Co-founder of Ethereum
Yann LeCun | <0.01% [15] [Note 1] | Chief AI Scientist at Meta |
Eliezer Yudkowsky | >95% [1] | Founder of the Machine Intelligence Research Institute |
Nate Silver | 5–10% [16] | Statistician, founder of FiveThirtyEight |
Yoshua Bengio | 50% [3] [Note 2] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms |
Daniel Kokotajlo | 70–80% [17] | AI researcher and founder of AI Futures Project, formerly of OpenAI |
Max Tegmark | >90% [18] | Swedish-American physicist, machine learning researcher, and author, known for the mathematical universe hypothesis and for co-founding the Future of Life Institute
Holden Karnofsky | 50% [19] | Executive Director of Open Philanthropy |
Emmett Shear | 5–50% [6] | Co-founder of Twitch and former interim CEO of OpenAI |
Shane Legg | c. 5–50% [20] | Co-founder and Chief AGI Scientist of Google DeepMind |
Emad Mostaque | 50% [21] | Co-founder of Stability AI |
Zvi Mowshowitz | 60% [22] | Writer on artificial intelligence, former competitive Magic: The Gathering player |
Jan Leike | 10–90% [1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI |
Casey Newton | 5% [1] | American technology journalist |
Roman Yampolskiy | 99.9% [23] [Note 3] | Latvian computer scientist |
Grady Booch | 0% [1] [Note 4] | American software engineer
Dan Hendrycks | >80% [1] [Note 5] | Director of Center for AI Safety |
Toby Ord | 10% [24] | Australian philosopher and author of The Precipice |
Connor Leahy | >90% [25] | German-American AI researcher and co-founder of EleutherAI
Paul Christiano | 50% [26] | Head of research at the US AI Safety Institute |
Richard Sutton | 0% [27] [Note 6] [28] | Canadian computer scientist and 2025 Turing Award laureate |
Andrew Critch | 85% [29] | Co-founder of the Center for Applied Rationality
David Duvenaud | 85% [30] | Former Anthropic Safety Team Lead |
Eli Lifland | c. 35–40% [31] | Top competitive superforecaster and co-author of AI 2027
Paul Crowley | >80% [32] | Computer scientist at Anthropic |
Benjamin Mann | 0–10% [33] | Co-founder of Anthropic |
There has been debate about the usefulness of P(doom) as a term, in part because of the lack of clarity about whether a given prediction is conditional on the existence of artificial general intelligence, what time frame it assumes, and what precisely counts as "doom". [6] [34]
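One way to see why conditionality matters is the law of total probability: an unconditional estimate mixes the risk given that artificial general intelligence (AGI) is built with the probability that it is built at all. This decomposition is an illustration, not one used by the cited sources:

$$P(\text{doom}) = P(\text{doom} \mid \text{AGI})\,P(\text{AGI}) + P(\text{doom} \mid \neg\text{AGI})\,P(\neg\text{AGI})$$

Two forecasters who both say "10%" may therefore disagree substantially if one means the conditional term P(doom | AGI) and the other the unconditional left-hand side.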