Pause Giant AI Experiments: An Open Letter

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. [1] It received more than 30,000 signatures, including academic AI researchers such as Yoshua Bengio and Stuart Russell, and technology figures such as Elon Musk, Steve Wozniak and Yuval Noah Harari. [1] [2] [3]

Motivations

The letter was published a week after the release of OpenAI's large language model GPT-4. It asserts that current large language models are "becoming human-competitive at general tasks", referencing a paper on early experiments with GPT-4 that described the model as showing "sparks of AGI". [4] The letter presents AGI as posing numerous important risks, especially amid race-to-the-bottom dynamics in which some AI labs may be incentivized to neglect safety in order to deploy products more quickly. [5]

It asks that AI research be refocused on making today's powerful AI systems "more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal". The letter also recommends stronger governmental regulation, independent audits before training AI systems, as well as "tracking highly capable AI systems and large pools of computational capability" and "robust public funding for technical AI safety research". [1] FLI suggests using the "amount of computation that goes into a training run" as a proxy for how powerful an AI system is, and thus as a possible threshold. [6]
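The FAQ does not fix a specific number, but training compute can be estimated from model size and training data. The sketch below is illustrative only and is not from the letter or FLI: it uses the widely cited approximation of roughly 6 floating-point operations per parameter per training token, and the model size, token count, and 1e25 FLOP threshold are hypothetical examples rather than values endorsed by the letter.

    # Illustrative sketch only: estimating training compute as a crude proxy
    # for model capability, using the common ~6 * parameters * tokens rule.
    # The threshold below is an assumed example, not a figure from FLI.

    def training_flops(parameters: float, training_tokens: float) -> float:
        """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
        return 6 * parameters * training_tokens

    EXAMPLE_THRESHOLD_FLOPS = 1e25  # hypothetical cutoff for a "giant" training run

    # Hypothetical model: 100 billion parameters trained on 2 trillion tokens.
    estimate = training_flops(1e11, 2e12)
    print(f"Estimated training compute: {estimate:.2e} FLOPs")
    print("Exceeds example threshold" if estimate > EXAMPLE_THRESHOLD_FLOPS
          else "Below example threshold")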

Reception

The letter received widespread coverage, with support coming from a range of high-profile figures. As of July 2024, no pause had taken place; instead, as FLI pointed out on the letter's one-year anniversary, AI companies have directed "vast investments in infrastructure to train ever-more giant AI systems". [7] The letter was nonetheless credited with generating a "renewed urgency within governments to work out what to do about the rapid progress of AI" and with reflecting the public's increasing concern about the risks posed by AI. [8]

Eliezer Yudkowsky wrote that the letter "doesn't go far enough" and argued that it should call for an indefinite pause. He fears that solving the alignment problem might take several decades and that a sufficiently intelligent misaligned AI could cause human extinction. [9]

Some IEEE members gave various reasons for signing the letter, such as that "There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm." [10] One AI ethicist said the letter raises awareness of issues such as voice cloning, but considered it unactionable and unenforceable. [11]

The letter has been criticized for diverting attention from more immediate societal risks such as algorithmic bias. [12] Timnit Gebru and others argued that it was sensationalist and amplified "some futuristic, dystopian sci-fi scenario" rather than the harms AI causes today. [11]

Microsoft co-founder Bill Gates chose not to sign the letter, stating that he does not think "asking one particular group to pause solves the challenges". [13] Sam Altman, CEO of OpenAI, commented that the letter was "missing most technical nuance about where we need the pause" and stated that "An earlier version of the letter claimed OpenAI is training GPT-5 right now. We are not and won't for some time." [14] Reid Hoffman argued the letter amounted to "virtue signalling" with no real impact. [15]

List of notable signatories

Listed below are some notable signatories of the letter. [1]

Related Research Articles

Max Tegmark (Swedish-American cosmologist)

Max Erik Tegmark is a Swedish-American physicist, machine learning researcher and author. He is best known for his book Life 3.0 about what the world might look like as artificial intelligence continues to improve. Tegmark is a professor at the Massachusetts Institute of Technology and the president of the Future of Life Institute.

Stuart J. Russell (British computer scientist and author, born 1962)

Stuart Jonathan Russell is a British computer scientist known for his contributions to artificial intelligence (AI). He is a professor of computer science at the University of California, Berkeley, where he holds the Smith-Zadeh Chair in Engineering and founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI). From 2008 to 2011 he was also an adjunct professor of neurological surgery at the University of California, San Francisco. With Peter Norvig, Russell co-authored the standard textbook of the field, Artificial Intelligence: A Modern Approach, which is used in more than 1,500 universities in 135 countries.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), by contrast, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one definition of strong AI.

AI takeover (hypothetical outcome of artificial intelligence)

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth, with computer programs or robots effectively taking control of the planet away from the human species. Possible scenarios include the replacement of the entire human workforce through automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Stories of AI takeovers have long been popular in science fiction, but recent advances have made the concern more concrete. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure that future superintelligent machines remain under human control.

Jaan Tallinn (Estonian programmer and investor)

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.

This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.

Gary Marcus (American cognitive scientist, born 1970)

Gary Fred Marcus is an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI).

Sam Altman (American entrepreneur and investor, born 1985)

Samuel Harris Altman is an American entrepreneur and investor best known as the chief executive officer of OpenAI since 2019. He is also the chairman of clean energy companies Oklo Inc. and Helion Energy. Altman is considered to be one of the leading figures of the AI boom. He dropped out of Stanford University after two years and founded Loopt, a mobile social networking service, raising more than $30 million in venture capital. In 2011, Altman joined Y Combinator, a startup accelerator, and was its president from 2014 to 2019.

Future of Life Institute (international nonprofit research institute)

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable. The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

Yoshua Bengio (Canadian computer scientist)

Yoshua Bengio is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning. He is a professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

D. Scott Phoenix

D. Scott Phoenix is an American entrepreneur, cofounder and former CEO of Vicarious, an artificial intelligence research company funded with $250 million from Elon Musk, Mark Zuckerberg, and others, which was acquired by Intrinsic, an Alphabet company, in 2022.

OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.

Life 3.0 (2017 book by Max Tegmark on artificial intelligence)

Life 3.0: Being Human in the Age of Artificial Intelligence is a 2017 non-fiction book by Swedish-American cosmologist Max Tegmark. Life 3.0 discusses artificial intelligence (AI) and its impact on the future of life on Earth and beyond. The book discusses a variety of societal implications, what can be done to maximize the chances of a positive outcome, and potential futures for humanity, technology and combinations thereof.

Emad Mostaque (Bangladeshi-British businessperson and former hedge fund manager)

Mohammad Emad Mostaque is a British-Bangladeshi business executive, mathematician, and former hedge fund manager. He founded Stability AI, one of the companies behind Stable Diffusion, and served as its CEO until 23 March 2024.

Grok (chatbot developed by xAI)

Grok is a generative artificial intelligence chatbot developed by xAI. Based on the large language model (LLM) of the same name, it was launched in 2023 as an initiative by Elon Musk. The chatbot is advertised as having a "sense of humor" and direct access to X. It is currently available and free to use through the X.com platform.

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely, and keep them under democratic control. The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma.

Connor Leahy is a German-American artificial intelligence researcher and entrepreneur known for cofounding EleutherAI and being CEO of AI safety research company Conjecture. He has warned of the existential risk from artificial general intelligence, and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute.

References

  1. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-19.
  2. Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN 0362-4331. Retrieved 2024-08-20.
  3. Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN 0261-3077. Retrieved 2024-08-20.
  4. Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (2023-04-12). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv: 2303.12712 [cs.CL].
  5. "MPs warned of AI arms race to the bottom | Computer Weekly". ComputerWeekly.com. Retrieved 2023-04-13.
  6. Support (2023-03-31). "FAQs about FLI's Open Letter Calling for a Pause on Giant AI Experiments". Future of Life Institute. Retrieved 2023-04-13.
  7. Aguirre, Anthony (2024-03-22). "The Pause Letter: One year later". Future of Life Institute. Retrieved 2024-07-19.
  8. "Six months after call for AI pause, are we closer to disaster?". euronews. 2023-09-21. Retrieved 2024-07-19.
  9. "The Open Letter on AI Doesn't Go Far Enough". Time. 2023-03-29. Retrieved 2023-04-13.
  10. "'AI Pause' Open Letter Stokes Fear and Controversy - IEEE Spectrum". IEEE . Retrieved 2023-04-13.
  11. 1 2 Anderson, Margo (7 April 2023). "'AI Pause' Open Letter Stokes Fear and Controversy - IEEE Spectrum". IEEE Spectrum. Retrieved 2024-07-03.
  12. Paul, Kari (2023-04-01). "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian. ISSN   0261-3077 . Retrieved 2023-04-14.
  13. Rigby, Jennifer (2023-04-04). "Bill Gates says calls to pause AI won't 'solve challenges'". Reuters. Retrieved 2023-04-13.
  14. Vincent, James (April 14, 2023). "OpenAI's CEO confirms the company isn't training GPT-5 and 'won't for some time'". The Verge.
  15. Heath, Ryan (22 September 2023). "The great AI "pause" that wasn't". Axios.