Statement on AI risk of extinction

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk: [1] [2] [3]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At release time, the signatories included over 100 professors of AI, among them Geoffrey Hinton and Yoshua Bengio, Turing Award laureates and the two most-cited computer scientists, as well as the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields. [1] [2] [4] Media coverage emphasized the signatures of several tech leaders; [2] other outlets subsequently raised concerns that the statement could be motivated by public relations or regulatory capture. [5] The statement was released shortly after the Future of Life Institute's open letter calling for a pause on giant AI experiments.

The statement is hosted on the website of the Center for AI Safety, an AI research and advocacy non-profit. It was released with an accompanying text stating that it is still difficult to speak up about extreme risks from AI, and that the statement aims to overcome this obstacle. [1] The center's CEO Dan Hendrycks stated that "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" are all examples of "important and urgent risks from AI... not just the risk of extinction" and added, "[s]ocieties can manage multiple risks at once; it's not 'either/or' but 'yes/and.'" [6] [4]

The Prime Minister of the United Kingdom, Rishi Sunak, retweeted the statement and wrote, "The government is looking very carefully at this." [7] When asked about the statement, the White House Press Secretary, Karine Jean-Pierre, commented that AI "is one of the most powerful technologies that we see currently in our time. But in order to seize the opportunities it presents, we must first mitigate its risks." [8]

Among the well-known signatories are: Sam Altman, Bill Gates, Peter Singer, Daniel Dennett, Sam Harris, Grimes, Stuart J. Russell, Jaan Tallinn, Vitalik Buterin, David Chalmers, Ray Kurzweil, Max Tegmark, Lex Fridman, Martin Rees, Demis Hassabis, Dawn Song, Ted Lieu, Ilya Sutskever, Martin Hellman, Bill McKibben, Angela Kane, Audrey Tang, David Silver, Andrew Barto, Mira Murati, Pattie Maes, Eric Horvitz, Peter Norvig, Joseph Sifakis, Erik Brynjolfsson, Ian Goodfellow, Baburam Bhattarai, Kersti Kaljulaid, Rusty Schweickart, Nicholas Fairfax, David Haussler, Peter Railton, Bart Selman, Dustin Moskovitz, Scott Aaronson, Bruce Schneier, Martha Minow, Andrew Revkin, Rob Pike, Jacob Tsimerman, Ramy Youssef, James Pennebaker and Ronald C. Arkin. [9]

Skeptics of the statement point out that AI has failed to reach predicted milestones, such as those once forecast for self-driving cars. [4] Skeptics also note that many signatories continued to fund AI research, [3] and that companies would benefit from a public perception that AI algorithms are far more advanced than is currently possible. [3] Skeptics, including from Human Rights Watch, have argued that scientists should focus on the known risks of AI rather than being distracted by speculative future risks. [10] [3] Timnit Gebru has criticized elevating the risk of AI agency, especially by the "same people who have poured billions of dollars into these companies." [10] Émile P. Torres and Gebru both argue against the statement, suggesting it may be motivated by TESCREAL ideologies. [11]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Nick Bostrom: Philosopher and writer (born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now-dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This is in contrast to narrow AI, which is designed for specific tasks. AGI is considered one of various definitions of strong AI.

AI takeover: Hypothetical outcome of artificial intelligence

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth, with computer programs or robots effectively taking control of the planet away from the human species. Stories of AI takeovers are popular throughout science fiction, but recent advances have made the threat seem more credible. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Human extinction: Hypothetical end of the human species

Human extinction is the hypothetical end of the human species, either by population decline due to extraneous natural causes, such as an asteroid impact or large-scale volcanism, or via anthropogenic destruction (self-extinction), for example by sub-replacement fertility.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation.

Gary Marcus: American cognitive scientist

Gary Fred Marcus is an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI).

Future of Life Institute: International nonprofit research institute

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Existential risk from artificial general intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco. Its mission is to develop "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI has developed several large language models and advanced image generation models, and has previously released open-source models. Its release of ChatGPT has been credited with catalyzing widespread interest in AI.

Timnit Gebru: Computer scientist

Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias and data mining. She is a co-founder of Black in AI, an advocacy group that has pushed for greater Black representation in AI development and research. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.

The Center for AI Safety (CAIS) is a nonprofit organization based in San Francisco that promotes the safe development and deployment of artificial intelligence (AI). CAIS's work encompasses research in technical AI safety and AI ethics, advocacy, and support for growing the AI safety research field.

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 20,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.

Émile P. Torres: American philosopher, historian, and author

Émile P. Torres is an American philosopher, intellectual historian, author, and postdoctoral researcher at Case Western Reserve University. Their research focuses on eschatology, existential risk, and human extinction. Along with computer scientist Timnit Gebru, Torres coined the acronym "TESCREAL" to criticize what they see as a group of related philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill with the goal of reducing the risks of frontier artificial intelligence models, the largest and most powerful foundation models. If passed, the bill will also establish CalCompute, a public cloud computing cluster for startups, researchers and community groups.

TESCREAL: Multiple philosophies used to advocate for AGI

TESCREAL is an acronym neologism, proposed and advocated by computer scientist Timnit Gebru and philosopher Émile P. Torres, standing for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins. Gebru and Torres allege this movement allows its proponents to use the threat of human extinction to justify societally expensive or detrimental projects. They consider it pervasive in social and academic circles in Silicon Valley centered around artificial intelligence. As such, the acronym is sometimes used to criticize a perceived belief system associated with Big Tech.

References

  1. "Statement on AI Risk". Center for AI Safety. May 30, 2023.
  2. Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 2023-05-30.
  3. Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". The Washington Post. ISSN 0190-8286. Retrieved 2024-07-03.
  4. Vincent, James (2023-05-30). "Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement". The Verge. Retrieved 2024-07-03.
  5. Wong, Matteo (2023-06-02). "AI Doomerism Is a Decoy". The Atlantic. Retrieved 2023-12-26.
  6. Lomas, Natasha (2023-05-30). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved 2023-05-30.
  7. "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  8. "President Biden warns artificial intelligence could 'overtake human thinking'". USA Today. Retrieved 2023-06-03.
  9. "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-03-18.
  10. Ryan-Mosley, Tate (2023-06-12). "It's time to talk about the real AI risks". MIT Technology Review. Retrieved 2024-07-03.
  11. Torres, Émile P. (2023-06-11). "AI and the threat of 'human extinction': What are the tech-bros worried about? It's not you and me". Salon. Retrieved 2024-07-03.