Dario Amodei | |
---|---|
Born | 1983 (age 40–41) |
Citizenship | United States |
Alma mater | Stanford University (BS); Princeton University (PhD) |
Known for | Co-founder / CEO of Anthropic |
Scientific career | |
Fields | Artificial intelligence |
Institutions | Baidu; Google; OpenAI; Anthropic |
Thesis | Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits (2011) |
Doctoral advisors | Michael J. Berry; William Bialek |
Website | https://darioamodei.com |
Dario Amodei (born 1983) is an Italian-American artificial intelligence researcher and entrepreneur. He is the co-founder and CEO of Anthropic, the company behind the Claude series of large language models. He was previously vice president of research at OpenAI. [1] [2]
Amodei grew up in San Francisco and graduated from Lowell High School. [3] He began his undergraduate studies at Caltech, where he worked with physicist Tom Tombrello as one of Tombrello's Physics 11 students. He later transferred to Stanford University, where he earned his undergraduate degree in physics. [4] He earned a PhD in physics from Princeton University, where he studied the electrophysiology of neural circuits. [5] He was then a postdoctoral scholar at the Stanford University School of Medicine. [6]
From November 2014 until October 2015, Amodei worked at Baidu; he then worked at Google. [7] In 2016, he joined OpenAI. [8]
In 2021, Amodei and his sister Daniela founded Anthropic along with other former senior members of OpenAI. The Amodei siblings were among those who left OpenAI over disagreements about the company's direction. [9]
In July 2023, Amodei warned a United States Senate Judiciary subcommittee of the dangers of AI, including the risks it poses in the development and control of weaponry. [10]
In September 2023, Amodei and his sister Daniela were named among the TIME 100 Most Influential People in AI (TIME100 AI). [11]
In November 2023, the board of directors of OpenAI approached Amodei about replacing Sam Altman as CEO and potentially merging the two startups. Amodei declined both offers. [12]
In October 2024, Amodei published an essay titled "Machines of Loving Grace", [13] in which he lays out a vision of how AI could radically improve human welfare, provided its risks are successfully managed. [14]