Dario Amodei

Amodei in 2023
Born: 1983 (age 40–41)
Citizenship: United States
Alma mater: Stanford University; Princeton University (PhD)
Known for: Co-founder / CEO of Anthropic

Scientific career
Fields: Artificial intelligence
Institutions: Baidu; Google; OpenAI; Anthropic
Thesis: Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits (2011)
Doctoral advisors: Michael J. Berry; William Bialek
Website: https://darioamodei.com

Dario Amodei (born 1983) is an Italian-American artificial intelligence researcher and entrepreneur. He is the co-founder and CEO of Anthropic, the company behind the Claude series of large language models. He was previously vice president of research at OpenAI.[1][2]

Education

Amodei grew up in San Francisco and graduated from Lowell High School.[3] He began his undergraduate studies at Caltech, where he worked with Tom Tombrello as one of Tombrello's Physics 11 students. He later transferred to Stanford University, where he earned his undergraduate degree in physics.[4] He earned a PhD in physics from Princeton University, where he studied the electrophysiology of neural circuits.[5] He was then a postdoctoral scholar at the Stanford University School of Medicine.[6]

Career

From November 2014 until October 2015, Amodei worked at Baidu; he then worked at Google.[7] In 2016, he joined OpenAI.[8]

In 2021, Amodei and his sister Daniela founded Anthropic along with other former senior members of OpenAI. The Amodei siblings were among those who left OpenAI over differences about the company's direction.[9]

In July 2023, Amodei warned a United States Senate judiciary panel of the dangers of AI, including the risks it poses in the development and control of weaponry.[10]

In September 2023, Amodei and his sister Daniela were named among the TIME 100 Most Influential People in AI (TIME100 AI).[11]

In November 2023, the board of directors of OpenAI approached Amodei about replacing Sam Altman as CEO and potentially merging the two companies. Amodei declined both offers.[12]

In October 2024, Amodei published an essay titled "Machines of Loving Grace",[13] in which he lays out a vision of how AI could radically improve human welfare if its risks are successfully managed.[14]

Related Research Articles

Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.

Nick Bostrom (philosopher and writer, born 1973)

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test. He was the founding director of the now dissolved Future of Humanity Institute at the University of Oxford and is now Principal Researcher at the Macrostrategy Research Initiative.

Geoffrey Hinton (British computer scientist, born 1947)

Geoffrey Everest Hinton is a British-Canadian computer scientist, cognitive scientist, and cognitive psychologist known for his work on artificial neural networks, which earned him the title of "Godfather of AI".

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI.

John Hopfield (American scientist, born 1933)

John Joseph Hopfield is an American physicist and emeritus professor at Princeton University, most widely known for his 1982 study of associative neural networks and for the development of the Hopfield network. Before its invention, research in artificial intelligence (AI) was in a period of decline known as the AI winter; Hopfield's work revitalized large-scale interest in the field.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems, whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and may be associated with a technological singularity.

Jaan Tallinn (Estonian programmer and investor)

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his role in developing Skype and the file-sharing application FastTrack/Kazaa.

Anthropic PBC is a U.S.-based artificial intelligence (AI) public-benefit startup founded in 2021. It researches and develops AI systems to "study their safety properties at the technological frontier" and uses this research to deploy safe, reliable models for the public. Anthropic has developed a family of large language models (LLMs) named Claude as a competitor to OpenAI's ChatGPT and Google's Gemini.

Andrew Ng (American artificial intelligence researcher)

Andrew Yan-Tak Ng is a British-American computer scientist and technology entrepreneur focusing on machine learning and artificial intelligence (AI). Ng co-founded and headed Google Brain and was formerly Chief Scientist at Baidu, where he built the company's Artificial Intelligence Group into a team of several thousand people.

Holden Karnofsky (American nonprofit executive)

Holden Karnofsky is an American nonprofit executive. He is a co-founder and Director of AI Strategy of the research and grantmaking organization Open Philanthropy. Karnofsky co-founded the charity evaluator GiveWell with Elie Hassenfeld in 2007 and is vice chair of its board of directors.

Fei-Fei Li (Chinese-American computer scientist, born 1976)

Fei-Fei Li is a Chinese-American computer scientist known for establishing ImageNet, the dataset that enabled rapid advances in computer vision in the 2010s. She is the Sequoia Capital professor of computer science at Stanford University and former board director at Twitter. Li is a co-director of the Stanford Institute for Human-Centered Artificial Intelligence and a co-director of the Stanford Vision and Learning Lab. She served as the director of the Stanford Artificial Intelligence Laboratory from 2013 to 2018.

Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.

Shane Legg is a machine learning researcher and entrepreneur. With Demis Hassabis and Mustafa Suleyman, he co-founded DeepMind Technologies, where he works as Chief AGI Scientist. He is also known for his academic work on artificial general intelligence, including his thesis supervised by Marcus Hutter.

In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.

Andrej Karpathy (Czechoslovak-born AI researcher, born 1986)

Andrej Karpathy is a Slovak-Canadian computer scientist who served as the director of artificial intelligence and Autopilot Vision at Tesla. He co-founded and formerly worked at OpenAI, where he specialized in deep learning and computer vision.

Paul Christiano is an American researcher in the field of artificial intelligence (AI), with a specific focus on AI alignment, which is the subfield of AI safety research that aims to steer AI systems toward human interests. He serves as the Head of Safety for the U.S. Artificial Intelligence Safety Institute inside NIST. He formerly led the language model alignment team at OpenAI and became founder and head of the non-profit Alignment Research Center (ARC), which works on theoretical AI alignment and evaluations of machine learning models. In 2023, Christiano was named as one of the TIME 100 Most Influential People in AI.

Daniela Amodei is an Italian-American artificial intelligence researcher and entrepreneur. She is the President and co-founder of the artificial intelligence company Anthropic.

Jan Leike is an AI alignment researcher who has worked at DeepMind and OpenAI. He joined Anthropic in May 2024.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, is a 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so advanced that they are not yet known to exist". Specifically, the bill would apply to models that cost more than $100 million to train and were trained using a quantity of computing power greater than 10^26 integer or floating-point operations. SB 1047 would apply to all AI companies doing business in California, regardless of where the company is located. The bill creates protections for whistleblowers and requires developers to perform risk assessments of their models prior to release, under the supervision of the Government Operations Agency. It would also establish CalCompute, a University of California public cloud computing cluster for startups, researchers, and community groups.

References

  1. Roose, Kevin (July 11, 2023). "Inside the White-Hot Center of A.I. Doomerism". The New York Times.
  2. Oreskovic, Alexei (July 11, 2023). "Anthropic CEO A.I. risks: short, medium, and long-term". Fortune.
  3. "Lowell Alumni Newsletter Winter 2008 by Lowell Alumni Association". Issuu. August 23, 2014.
  4. Fuller-Wright, Liz (September 12, 2023). "TIME Magazine's TIME100 artificial intelligence list honors six Princetonians". Princeton University.
  5. Amodei, Dario (2011). Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits (Thesis).
  6. "Dario Amodei, PhD". The Hertz Foundation.
  7. "Dario Amodei". LinkedIn.
  8. Hao, Karen (February 17, 2020). "The messy, secretive reality behind OpenAI's bid to save the world". MIT Technology Review.
  9. Goldman, Sharon (April 7, 2023). "As Anthropic seeks billions to take on OpenAI, 'industrial capture' is nigh. Or is it?". VentureBeat.
  10. "Anthropic's Amodei Warns US Senators of AI-Powered Weapons". Bloomberg. July 25, 2023.
  11. "TIME100 AI 2023: Dario and Daniela Amodei". Time. September 7, 2023.
  12. Dastin, Jeffrey (November 21, 2023). "OpenAI's board approached Anthropic CEO about top job and merger". Reuters.
  13. Amodei, Dario (October 11, 2024). "Machines of Loving Grace". Retrieved November 18, 2024.
  14. Sullivan, Mark (October 17, 2024). "Anthropic CEO Dario Amodei pens a smart look at our AI future". Fast Company.