P(doom)

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence. [1] [2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence. [3]

Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton [4] and Yoshua Bengio [5] began to warn publicly of the risks posed by AI. [6] A 2022 survey of AI researchers, which had a 17% response rate, found that a majority of respondents believed there is at least a 10% chance that our inability to control AI could cause an existential catastrophe. [7]

Sample P(doom) values

Name | P(doom) | Notes
Dario Amodei | 10-25% [6] | CEO of Anthropic
Elon Musk | 10-20% [8] | Businessman and CEO of X, Tesla, and SpaceX
Paul Christiano | 50% [9] | Head of research at the US AI Safety Institute
Lina Khan | 15% [6] | Chair of the Federal Trade Commission
Emmett Shear | 5-50% [6] | Co-founder of Twitch and former interim CEO of OpenAI
Geoffrey Hinton | 10% [6] [Note 1] | AI researcher, formerly of Google
Yoshua Bengio | 20% [3] [Note 2] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms
Jan Leike | 10-90% [1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI
Vitalik Buterin | 10% [1] | Co-founder of Ethereum
Dan Hendrycks | 80%+ [1] [Note 3] | Director of the Center for AI Safety
Grady Booch | c. 0% [1] [Note 4] | American software engineer
Casey Newton | 5% [1] | American technology journalist
Eliezer Yudkowsky | 99%+ [10] | Founder of the Machine Intelligence Research Institute
Roman Yampolskiy | 99.9% [11] [Note 5] | Latvian computer scientist
Marc Andreessen | 0% [12] | American businessman
Yann LeCun | <0.01% [13] [Note 6] | Chief AI Scientist at Meta

Criticism

There has been some debate about the usefulness of P(doom) as a term, in part because of a lack of clarity about whether a given estimate is conditional on the existence of artificial general intelligence, what time frame it assumes, and what exactly "doom" means. [6] [14]
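One source of that ambiguity is whether a stated figure is unconditional or conditioned on AGI arriving within some time frame. The sketch below is a minimal, hypothetical worked example of that distinction; the factorization and the numbers in it are illustrative assumptions, not any listed researcher's published methodology, though the reasoning summarized in Note 2 has roughly this form.

```latex
% A hypothetical worked example only: the time frame T and the two
% probabilities of 0.5 are illustrative assumptions, not a published method.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
An unconditional estimate can be factored as
\[
  P(\text{doom}) = P(\text{AGI within } T) \times
                   P(\text{catastrophe} \mid \text{AGI within } T),
\]
so a forecaster who assigns $0.5$ to each factor reports
$P(\text{doom}) = 0.5 \times 0.5 = 0.25$, while one quoting only the
conditional term would state $0.5$ despite holding the same beliefs.
\end{document}
```

Under this illustrative reading, two figures in the table above can differ substantially while encoding similar underlying beliefs.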

Notes

  1. Conditional on AI not being "strongly regulated", with a time frame of 30 years.
  2. Based on an estimated "50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale."
  3. Up from roughly 20% two years earlier.
  4. Equivalent to "P(all the oxygen in my room spontaneously moving to a corner thereby suffocating me)".
  5. Within the next 100 years.
  6. "Less likely than an asteroid wiping us out".

References

  1. Railey, Clint (2023-07-12). "P(doom) is AI's latest apocalypse metric. Here's how to calculate your score". Fast Company.
  2. Thomas, Sean (2024-03-04). "Are we ready for P(doom)?". The Spectator. Retrieved 2024-06-19.
  3. "It started as a dark in-joke. It could also be one of the most important questions facing humanity". ABC News. 2023-07-14. Retrieved 2024-06-18.
  4. Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times. ISSN 0362-4331. Retrieved 2024-06-19.
  5. "One of the "godfathers of AI" airs his concerns". The Economist. ISSN 0013-0613. Retrieved 2024-06-19.
  6. Roose, Kevin (2023-12-06). "Silicon Valley Confronts a Grim New A.I. Metric". The New York Times. ISSN 0362-4331. Retrieved 2024-06-17.
  7. "2022 Expert Survey on Progress in AI". AI Impacts. 2022-08-04. Retrieved 2024-06-19.
  8. Tangalakis-Lippert, Katherine. "Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway". Business Insider. Retrieved 2024-06-19.
  9. "ChatGPT creator says there's 50% chance AI ends in 'doom'". The Independent. 2023-05-03. Retrieved 2024-06-19.
  10. "TIME100 AI 2023: Eliezer Yudkowsky". Time. 2023-09-07. Retrieved 2024-06-18.
  11. Altchek, Ana. "Why this AI researcher thinks there's a 99.9% chance AI wipes us out". Business Insider. Retrieved 2024-06-18.
  12. Marantz, Andrew (2024-03-11). "Among the A.I. Doomsayers". The New Yorker. ISSN 0028-792X. Retrieved 2024-06-19.
  13. Williams, Wayne (2024-04-07). "Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees". TechRadar. Retrieved 2024-06-19.
  14. King, Isaac (2024-01-01). "Stop talking about p(doom)". LessWrong.
  15. "GUM & Ambrose Kenny-Smith are teaming up again for new collaborative album 'III Times'". DIY. 2024-05-07. Retrieved 2024-06-19.