PauseAI

Formation: May 2023
Founder: Joep Meindertsma
Founded at: Utrecht, Netherlands
Type: Advocacy group, nonprofit
Purpose: Mitigating the existential risk from artificial general intelligence and other risks of advanced artificial intelligence
Region: International
Website: pauseai.info

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely and keep them under democratic control. [1] The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma. [2] [3] [4]


Proposal

PauseAI's stated goal is to "implement a pause on the training of AI systems more powerful than GPT-4". Its website outlines proposed steps for achieving this goal. [1]

Background

During the late 2010s and early 2020s, a rapid improvement in the capabilities of artificial intelligence models, known as the AI boom, was underway. It included the release of the large language model GPT-3, its more powerful successor GPT-4, and the image generation models Midjourney and DALL-E. This led to increased concern about the risks of advanced AI, prompting the Future of Life Institute to release an open letter calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The letter was signed by thousands of people, including AI researchers such as Yoshua Bengio and Stuart Russell and industry figures such as Elon Musk. [5] [6] [7]

History

Founder Joep Meindertsma first became worried about the existential risk from artificial general intelligence after reading philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies. He founded PauseAI in May 2023, putting his job as the CEO of a software firm on hold. Meindertsma argued that progress in AI alignment research was lagging behind progress in AI capabilities, saying that "there is a chance that we are facing extinction in a short frame of time", and that this urgency drove him to organise people to act. [3] [8] [4]

PauseAI's first public action was a protest in front of Microsoft's Brussels lobbying office in May 2023, during an event on artificial intelligence. [4] In November of the same year, members protested outside the inaugural AI Safety Summit at Bletchley Park. [9] Meindertsma regarded the Bletchley Declaration signed at the summit, which acknowledged the potential for catastrophic risks stemming from AI, as a small first step, but argued that "binding international treaties" are needed, citing the Montreal Protocol and the treaties banning blinding laser weapons as examples of previous successful global agreements. [3]

In February 2024, members of PauseAI gathered outside OpenAI's headquarters in San Francisco, in part in response to OpenAI removing from its usage policy a clause that prohibited the use of its models for military purposes. [10]

On 13 May 2024, protests were held in thirteen countries ahead of the AI Seoul Summit, including the United States, the United Kingdom, Brazil, Germany, Australia, and Norway. Meindertsma said that those attending the summit "need to realize that they are the only ones who have the power to stop this race". Protesters in San Francisco held signs reading "When in doubt, pause" and "Quit your job at OpenAI. Trust your conscience". [11] [12] [3] [13] Two days later, Jan Leike, head of the "superalignment" team at OpenAI, resigned, citing his belief that "safety culture and processes [had] taken a backseat to shiny products". [14]

See also

Artificial general intelligence
AI takeover
AI safety
AI Safety Summit
AI Seoul Summit
Center for AI Safety
Connor Leahy
Existential risk from artificial intelligence
Future of Life Institute
GPT-4
Grok (chatbot)
Jaan Tallinn
Open Letter on Artificial Intelligence (2015)
OpenAI
P(doom)
Pause Giant AI Experiments: An Open Letter
Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047)
Timeline of artificial intelligence
xAI (company)
Yoshua Bengio

References

  1. 1 2 "PauseAI Proposal". PauseAI. Retrieved 2024-05-02.
  2. Meaker, Morgan. "Meet the AI Protest Group Campaigning Against Human Extinction". Wired . ISSN   1059-1028 . Retrieved 2024-04-30.
  3. 1 2 3 4 Reynolds, Matt. "Protesters Are Fighting to Stop AI, but They're Split on How to Do It". Wired. ISSN   1059-1028 . Retrieved 2024-08-20.
  4. 1 2 3 "The rag-tag group trying to pause AI in Brussels". Politico . 2023-05-24. Retrieved 2024-04-30.
  5. Hern, Alex (2023-03-29). "Elon Musk joins call for pause in creation of giant AI 'digital minds'". The Guardian. ISSN   0261-3077 . Retrieved 2024-08-20.
  6. Metz, Cade; Schmidt, Gregory (2023-03-29). "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society'". The New York Times. ISSN   0362-4331 . Retrieved 2024-08-20.
  7. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-08-20.
  8. "Could AI lead us to extinction? This activist group believes so". euronews. 2023-06-14. Retrieved 2024-11-06.
  9. "What happens in Bletchley, stays in…". Islington Tribune . Retrieved 2024-05-08.
  10. Nuñez, Michael (2024-02-13). "Protesters gather outside OpenAI office, opposing military AI and AGI". VentureBeat. Retrieved 2024-08-20.
  11. Gordon, Anna (2024-05-13). "Why Protesters Are Demanding Pause on AI Development". TIME. Retrieved 2024-08-20.
  12. Rodriguez, Joe Fitzgerald (2024-05-13). "As OpenAI Unveils Big Update, Protesters Call for Pause in Risky 'Frontier' Tech | KQED". www.kqed.org. Retrieved 2024-08-20.
  13. "OpenAI launches new AI model GPT-4o, a conversational digital personal assistant". ABC7 San Francisco. 2024-05-14. Retrieved 2024-08-20.
  14. Robison, Kylie (2024-05-17). "OpenAI researcher resigns, claiming safety has taken "a backseat to shiny products"". The Verge. Retrieved 2024-08-20.