P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence. [1] [2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence. [3]
Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton [4] and Yoshua Bengio [5] began to warn of the risks of AI. [6] In a 2023 survey, AI researchers were asked to estimate the probability that future AI advancements could lead to human extinction or similarly severe and permanent disempowerment within the next 100 years. The mean of the responses was 14.4%, and the median was 5%. [7] [8]
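The gap between the mean and the median reflects a right-skewed spread of answers: most respondents give low probabilities, while a few very high estimates pull the mean upward. A minimal sketch of this effect follows; the responses below are invented for illustration and are not the actual survey data.

```python
# Illustrative only: invented responses showing how a handful of high estimates
# can pull the mean well above the median, as in the survey figures cited above.
from statistics import mean, median

responses = [0.0, 0.01, 0.02, 0.05, 0.05, 0.05, 0.10, 0.15, 0.30, 0.71]
print(f"mean:   {mean(responses):.1%}")    # 14.4%, pulled up by a few high answers
print(f"median: {median(responses):.1%}")  # 5.0%, the middle of the distribution
```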
Name | P(doom) | Notes |
---|---|---|
Dario Amodei | 10–25% [6] | CEO of Anthropic |
Elon Musk | 10–20% [9] | Businessman and CEO of X, Tesla, and SpaceX |
Paul Christiano | 50% [10] | Head of research at the US AI Safety Institute |
Lina Khan | 15% [6] | Chair of the Federal Trade Commission |
Emmett Shear | 5–50% [6] | Co-founder of Twitch and former interim CEO of OpenAI |
Geoffrey Hinton | 10–50% [6] [Note 1] | AI researcher, formerly of Google |
Yoshua Bengio | 20% [3] [Note 2] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms |
Jan Leike | 10–90% [1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI |
Vitalik Buterin | 10% [1] | Co-founder of Ethereum |
Dan Hendrycks | 80%+ [1] [Note 3] | Director of the Center for AI Safety |
Grady Booch | 0% [1] [Note 4] | American software engineer |
Casey Newton | 5% [1] | American technology journalist |
Eliezer Yudkowsky | 95%+ [1] | Founder of the Machine Intelligence Research Institute |
Roman Yampolskiy | 99.9% [11] [Note 5] | Latvian computer scientist |
Marc Andreessen | 0% [12] | American businessman |
Yann LeCun | <0.01% [13] [Note 6] | Chief AI Scientist at Meta |
Toby Ord | 10% [14] | Australian philosopher and author of The Precipice |
Demis Hassabis | 0–25% [15] [16] | Co-founder and CEO of Google DeepMind and Isomorphic Labs |
Emad Mostaque | 50% [17] | Co-founder of Stability AI |
There has been some debate about the usefulness of P(doom) as a term, in part due to the lack of clarity about whether a given prediction is conditional on the existence of artificial general intelligence, what time frame it covers, and what precisely counts as "doom". [6] [18]
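The conditioning issue in particular can move an estimate substantially. As a minimal sketch (the probabilities below are illustrative assumptions, not figures attributed to anyone cited here), an unconditional P(doom) is the conditional estimate scaled by how likely the forecaster thinks artificial general intelligence is to arrive within the stated time frame:

```python
# Hypothetical numbers chosen purely for illustration.
p_agi_by_2100 = 0.5        # assumed chance AGI is built within the time frame
p_doom_given_agi = 0.20    # assumed chance of doom *if* AGI is built

# Law of total probability, treating doom without AGI as negligible here.
p_doom = p_agi_by_2100 * p_doom_given_agi
print(f"P(doom | AGI) = {p_doom_given_agi:.0%}, unconditional P(doom) = {p_doom:.0%}")
```

Under these assumptions, the same forecaster could report "20%" or "10%" depending on which quantity they mean, which is part of why critics consider the bare number ambiguous.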
Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.
Geoffrey Everest Hinton is a British-Canadian computer scientist, cognitive scientist, cognitive psychologist, and Nobel Prize winner in Physics, known for his work on artificial neural networks, which earned him the title of "Godfather of AI".
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, whose dominance has relied on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have long been popular in science fiction, but recent advances have made the threat seem more plausible. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Human extinction or omnicide is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.
Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.
A global catastrophic risk or a doomsday scenario is a hypothetical event that could damage human well-being on a global scale, even endangering or destroying modern civilization. An event that could cause human extinction or permanently and drastically curtail humanity's existence or potential is known as an "existential risk".
The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.
Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
Yoshua Bengio is a Canadian computer scientist, and a pioneer of artificial neural networks and deep learning. He is a professor at the Université de Montréal and scientific director of the AI institute MILA.
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
Shane Legg is a machine learning researcher and entrepreneur. With Demis Hassabis and Mustafa Suleyman, he cofounded DeepMind Technologies, and works there as the chief AGI scientist. He is also known for his academic work on artificial general intelligence, including his thesis supervised by Marcus Hutter.
Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.
On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Émile P. Torres is an American philosopher, intellectual historian, author, activist, and postdoctoral researcher at Case Western Reserve University. Their research focuses on eschatology, existential risk, and human extinction. Along with computer scientist Timnit Gebru, Torres coined the acronym neologism "TESCREAL" to criticize what they see as a group of related philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.
PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely and keep them under democratic control. The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma.
TESCREAL is an acronym neologism proposed by computer scientist Timnit Gebru and philosopher Émile P. Torres that stands for "transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism". Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins. They say this is a movement that allows its proponents to use the threat of human extinction to justify expensive or detrimental projects and consider it pervasive in social and academic circles in Silicon Valley centered around artificial intelligence. As such, the acronym is sometimes used to criticize a perceived belief system associated with Big Tech.
Connor Leahy is a German-American artificial intelligence researcher and entrepreneur known for cofounding EleutherAI and being CEO of AI safety research company Conjecture. He has warned of the existential risk from artificial general intelligence, and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute.