Connor Leahy

Connor Leahy is a German-American [1] artificial intelligence researcher and entrepreneur known for co-founding EleutherAI [2] [3] and serving as CEO of the AI safety research company Conjecture. [4] [5] [6] He has warned of the existential risk from artificial general intelligence, and has called for regulation such as "a moratorium on frontier AI runs" implemented through a cap on compute. [7]

Career

In 2019, Leahy reverse-engineered GPT-2 in his bedroom, and later co-founded EleutherAI to attempt to replicate GPT-3. [2]

Leahy is sceptical of reinforcement learning from human feedback as a solution to the alignment problem. "These systems, as they become more powerful, are not becoming less alien. If anything, we're putting a nice little mask on them with a smiley face. If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding." [8]

He was one of the signatories of the 2023 open letter from the Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." [9] [10]

In November 2023, Leahy was invited to speak at the inaugural AI Safety Summit. He worried that the summit would fail to deal with the risks from "god-like AI" stemming from the AI alignment problem, arguing that "If you build systems that are more capable than humans at manipulation, business, politics, science and everything else, and we do not control them, then the future belongs to them, not us." He co-founded the campaign group ControlAI to advocate for governments to implement a pause on the development of artificial general intelligence. [4] Leahy has likened the regulation of artificial intelligence to that of climate change, arguing that "it's not the responsibility of oil companies to solve climate change", and that governments must step in to solve both issues. [3]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.


Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his participation in the development of Skype and file-sharing application FastTrack/Kazaa.

Anthropic PBC is a U.S.-based artificial intelligence (AI) public-benefit startup founded in 2021. It researches and develops AI to "study their safety properties at the technological frontier" and use this research to deploy safe, reliable models for the public. Anthropic has developed a family of large language models (LLMs) named Claude as a competitor to OpenAI's ChatGPT and Google's Gemini.


The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.


Yoshua Bengio is a Canadian computer scientist, most noted for his work on artificial neural networks and deep learning. He is a professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (MILA).

OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.

In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.

Regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence (AI). It is part of the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions worldwide, including for international organizations without direct enforcement power like the IEEE or the OECD.

AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.


EleutherAI is a grass-roots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI, was formed in a Discord server in July 2020 by Connor Leahy, Sid Black, and Leo Gao to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Institute, a non-profit research institute.

The Center for AI Safety (CAIS) is a nonprofit organization based in San Francisco that promotes the safe development and deployment of artificial intelligence (AI). CAIS's work encompasses research in technical AI safety and AI ethics, advocacy, and support to grow the AI safety research field.

Pause Giant AI Experiments: An Open Letter is the title of a letter published by the Future of Life Institute in March 2023. The letter calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4", citing risks such as AI-generated propaganda, extreme automation of jobs, human obsolescence, and a society-wide loss of control. It received more than 30,000 signatures, including academic AI researchers and industry CEOs such as Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak and Yuval Noah Harari.

The Alignment Research Center (ARC) is a nonprofit research institute based in Berkeley, California, dedicated to the alignment of advanced artificial intelligence with human values and priorities. Established by former OpenAI researcher Paul Christiano, ARC focuses on recognizing and comprehending the potentially harmful capabilities of present-day AI models.

On May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the following short Statement on AI Risk:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely, and to keep them under democratic control. The movement was established in Utrecht in May 2023 by software entrepreneur Joep Meindertsma.

P(doom) is a term in AI safety that refers to the probability of catastrophic outcomes (or "doom") as a result of artificial intelligence. The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.

References

  1. "Memes tell the story of a secret war in tech. It's no joke". ABC News. 2024-02-17. Retrieved 2024-07-01.
  2. Smith, Tim (March 29, 2023). "'We are super, super fucked': Meet the man trying to stop an AI apocalypse".
  3. Pringle, Eleanor. "Asking Big Tech to police AI is like turning to 'oil companies to solve climate change,' AI researcher says". Fortune. Retrieved 2024-08-06.
  4. Stacey, Kiran; Milmo, Dan (2023-10-20). "Sunak's global AI safety summit risks achieving very little, warns tech boss". The Guardian. ISSN 0261-3077. Retrieved 2024-07-01.
  5. "Superintelligent AI: Transhumanism etc". Financial Times. 2023-12-05. Retrieved 2024-08-06.
  6. Werner, John. "Can We Handle Ubertechnology? Yann LeCun And Others On Controlling AI". Forbes. Retrieved 2024-08-06.
  7. Perrigo, Billy (2024-01-19). "Researcher: To Stop AI Killing Us, First Regulate Deepfakes". TIME. Retrieved 2024-07-01.
  8. Perrigo, Billy (2023-02-17). "Bing's AI Is Threatening Users. That's No Laughing Matter". TIME. Retrieved 2024-07-20.
  9. Evans, Greg (2023-03-29). "Elon Musk & Steve Wozniak Sign Open Letter Calling For Moratorium On Some Advanced A.I. Systems". Deadline. Retrieved 2024-07-01.
  10. "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-07-01.