Statement on AI risk of extinction

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk: [1] [2] [3]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At release time, the signatories included over 100 professors of AI, among them the two most-cited computer scientists and Turing laureates Geoffrey Hinton and Yoshua Bengio, as well as the scientific and executive leaders of several major AI companies and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields. [1] [2] [4] Media coverage emphasized the signatures from several tech leaders, [2] and some other outlets subsequently raised concerns that the statement could be motivated by public relations or regulatory capture. [5] The statement was released shortly after an open letter calling for a pause on AI experiments.

The statement is hosted on the website of the AI research and advocacy non-profit Center for AI Safety. It was released with an accompanying text which states that it is still difficult to speak up about extreme risks of AI and that the statement aims to overcome this obstacle. [1] The center's CEO Dan Hendrycks stated that "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" are all examples of "important and urgent risks from AI... not just the risk of extinction" and added, "[s]ocieties can manage multiple risks at once; it's not 'either/or' but 'yes/and.'" [6] [4]

Among the well-known signatories are: Sam Altman, Bill Gates, Peter Singer, Daniel Dennett, Sam Harris, Grimes, Stuart J. Russell, Jaan Tallinn, Vitalik Buterin, David Chalmers, Ray Kurzweil, Max Tegmark, Lex Fridman, Martin Rees, Demis Hassabis, Dawn Song, Ted Lieu, Ilya Sutskever, Martin Hellman, Bill McKibben, Angela Kane, Audrey Tang, David Silver, Andrew Barto, Mira Murati, Pattie Maes, Eric Horvitz, Peter Norvig, Joseph Sifakis, Erik Brynjolfsson, Ian Goodfellow, Baburam Bhattarai, Kersti Kaljulaid, Rusty Schweickart, Nicholas Fairfax, David Haussler, Peter Railton, Bart Selman, Dustin Moskovitz, Scott Aaronson, Bruce Schneier, Martha Minow, Andrew Revkin, Rob Pike, Jacob Tsimerman, Ramy Youssef, James Pennebaker and Ronald C. Arkin. [7]

Reception

The Prime Minister of the United Kingdom, Rishi Sunak, retweeted the statement and wrote, "The government is looking very carefully at this." [8] When asked about the statement, the White House Press Secretary, Karine Jean-Pierre, commented that AI "is one of the most powerful technologies that we see currently in our time. But in order to seize the opportunities it presents, we must first mitigate its risks." [9]

Skeptics of the letter point out that AI has failed to reach certain predicted milestones, such as those around self-driving cars. [4] Skeptics also argue that signatories of the letter were continuing to fund AI research, [3] and that companies would benefit from a public perception that AI algorithms are far more advanced than is currently possible. [3] Skeptics, including from Human Rights Watch, have argued that scientists should focus on the known risks of AI instead of being distracted by speculative future risks. [10] [3] Timnit Gebru has criticized elevating the risk of AI agency, especially by the "same people who have poured billions of dollars into these companies." [10] Émile P. Torres and Gebru both argue against the statement, suggesting it may be motivated by TESCREAL ideologies. [11]

See also

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth, with computer programs or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advances have made the threat appear more plausible. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.

Roman Vladimirovich Yampolskiy is a Latvian computer scientist at the University of Louisville, mostly known for his work on AI safety and cybersecurity. He holds a PhD from the University at Buffalo (2008). He is the founder and current director of Cyber Security Lab, in the department of Computer Engineering and Computer Science at the Speed School of Engineering of the University of Louisville.

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions.

Timnit Gebru is an Eritrean Ethiopian-born computer scientist who works in the fields of artificial intelligence (AI), algorithmic bias, and data mining. She is a co-founder of Black in AI, an advocacy group that has pushed for greater Black representation in AI development and research. She is the founder of the Distributed Artificial Intelligence Research Institute (DAIR).

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

The Center for AI Safety (CAIS) is a nonprofit organization based in San Francisco that promotes the safe development and deployment of artificial intelligence (AI). CAIS's work encompasses research in technical AI safety and AI ethics, advocacy, and support to grow the AI safety research field.

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.

Émile P. Torres is an American philosopher, intellectual historian, author, and postdoctoral researcher at Case Western Reserve University. Their research focuses on eschatology, existential risk, and human extinction. Along with computer scientist Timnit Gebru, Torres coined the acronym neologism "TESCREAL" to criticize what they see as a group of related philosophies: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely and keep them under democratic control. The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma.

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist". Specifically, the bill would apply to models which cost more than $100 million to train and were trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations. SB 1047 would apply to all AI companies doing business in California, regardless of where the company is located. The bill creates protections for whistleblowers and requires developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also establish CalCompute, a University of California public cloud computing cluster for startups, researchers and community groups.

TESCREAL is an acronym neologism proposed by computer scientist Timnit Gebru and philosopher Émile P. Torres that stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism". Gebru and Torres argue that these ideologies should be treated as an "interconnected and overlapping" group with shared origins. They say this is a movement that allows its proponents to use the threat of human extinction to justify expensive or detrimental projects and consider it pervasive in social and academic circles in Silicon Valley centered around artificial intelligence. As such, the acronym is sometimes used to criticize a perceived belief system associated with Big Tech.

Connor Leahy is a German-American artificial intelligence researcher and entrepreneur known for cofounding EleutherAI and being CEO of AI safety research company Conjecture. He has warned of the existential risk from artificial general intelligence, and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute.

References

  1. 1 2 3 "Statement on AI Risk". Center for AI Safety. May 30, 2023.
  2. 1 2 3 Roose, Kevin (2023-05-30). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN   0362-4331 . Retrieved 2023-05-30.
  3. 1 2 3 4 Gregg, Aaron; Lima-Strong, Cristiano; Vynck, Gerrit De (2023-05-31). "AI poses 'risk of extinction' on par with nukes, tech leaders say". Washington Post. ISSN   0190-8286 . Retrieved 2024-07-03.
  4. 1 2 3 Vincent, James (2023-05-30). "Top AI researchers and CEOs warn against 'risk of extinction' in 22-word statement". The Verge. Retrieved 2024-07-03.
  5. Wong, Matteo (2023-06-02). "AI Doomerism Is a Decoy". The Atlantic. Retrieved 2023-12-26.
  6. Lomas, Natasha (2023-05-30). "OpenAI's Altman and other AI giants back warning of advanced AI as 'extinction' risk". TechCrunch. Retrieved 2023-05-30.
  7. "Statement on AI Risk | CAIS". www.safe.ai. Retrieved 2024-03-18.
  8. "Artificial intelligence warning over human extinction – all you need to know". The Independent. 2023-05-31. Retrieved 2023-06-03.
  9. "President Biden warns artificial intelligence could 'overtake human thinking'". USA TODAY. Retrieved 2023-06-03.
  10. 1 2 Ryan-Mosley, Tate (12 June 2023). "It's time to talk about the real AI risks". MIT Technology Review. Retrieved 2024-07-03.
  11. Torres, Émile P. (2023-06-11). "AI and the threat of "human extinction": What are the tech-bros worried about? It's not you and me". Salon. Retrieved 2024-07-03.