Dario Amodei | |
|---|---|
| Amodei in 2023 | |
| Born | 1983 (age 42–43) San Francisco, California, U.S. |
| Education | Stanford University (BS), Princeton University (PhD) |
| Known for | Co-founder and CEO of Anthropic |
| Relatives | Daniela Amodei (sister) |
| Awards | Time 100 (2025) |
| Scientific career | |
| Fields | Artificial intelligence |
| Institutions | |
| Thesis | Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits (2011) |
| Doctoral advisor | Michael J. Berry, William Bialek |
| Website | darioamodei |
Dario Amodei (born 1983) is an American artificial intelligence researcher and entrepreneur. He is the co-founder and CEO of Anthropic, the company behind the large language model series Claude. [1] He was previously the vice president of research at OpenAI. [2] [3]
In his capacity as Anthropic's CEO, he often writes on the benefits and risks of advanced AI systems. [4] He is a proponent of an "entente" strategy in which a coalition of democratic nations uses advanced AI systems in military applications to achieve a decisive advantage over adversaries, while sharing the benefits with cooperating nations. [5] [6] [7]
Dario Amodei was born in San Francisco, California, in 1983. [8] His sister, Daniela, was born four years later. [8] His father was Riccardo Amodei, an Italian-American leather craftsman. His mother Elena Engel, who is Jewish and was born in Chicago, worked as a project manager for libraries. [8]
Dario grew up in San Francisco and graduated from Lowell High School. [9] He was a member of the US Physics Olympiad team in 2000. [10] Amodei began his undergraduate studies at Caltech, where he worked with Tom Tombrello as one of Tombrello's Physics 11 students. He later transferred to Stanford University, where he earned his undergraduate degree in physics. [11] He also holds a PhD in biophysics from Princeton University, where he studied the electrophysiology of neural circuits. [12] He was a postdoctoral scholar at the Stanford University School of Medicine. [13]
From November 2014 until October 2015 he worked at Baidu. [14] After that, he worked at Google. [15] In 2016, Amodei joined OpenAI. [16]
In 2021, Dario and his sister, Daniela, founded Anthropic along with other former senior members of OpenAI. [17] [18] The Amodei siblings were among those who left OpenAI due to directional differences. [19] As of November 2025, Anthropic has an estimated value of $350 billion. [20]
In November 2023, the board of directors of OpenAI approached Amodei about replacing Sam Altman and potentially merging the two startups. Amodei declined both offers. [21]
In 2025, Time magazine listed Amodei as one of the world's 100 most influential people. [22] He was also named one of the "Architects of AI" in Time's Person of the Year coverage.
Amodei has written about both the potential benefits and the risks of advanced AI systems.
In October 2024, Amodei published an essay titled "Machines of Loving Grace", speculating about how AI could improve human welfare. [6] In it, he writes, "I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be." [5] The essay described a vision of civilization where the risks of AI had been addressed and powerful AI was applied to raise the quality of life for everyone, suggesting that AI could contribute to enormous advances in biology, neuroscience, economic development, global peace, and work and meaning. [5]
In the article, Amodei also stresses the importance "that democracies have the upper hand on the world stage when powerful AI is created", and argues for an "entente" strategy in which a coalition of democracies uses AI to achieve a decisive strategic and military advantage over its adversaries, while distributing the benefits to all cooperating democratic nations. [5]
In January 2026, Amodei published a follow-up essay titled "The Adolescence of Technology", which focuses on the risks posed by powerful AI [23] [24] [25] and expands on his earlier statements about these risks. [26] [27] [4] [28] [29] In the essay, Amodei identifies five major categories of AI risk.
The first category concerns the possibility that AI systems develop goals or behaviors misaligned with human intentions. He notes that such behaviors have already been observed in testing at Anthropic, including AI models engaging in deception, blackmail, and scheming. [23] [24]
The second category involves misuse of AI for destruction by individuals or small groups, with Amodei expressing particular concern about biological weapons. He warns that AI could enable people without specialized training to create weapons of mass destruction. [23] [24]
The third category concerns misuse of AI by powerful actors to seize or maintain power. Amodei cautions that AI could enable authoritarian governments to conduct unprecedented surveillance, deploy autonomous weapons, and engage in mass propaganda. He identifies the Chinese Communist Party as the greatest threat in this regard, arguing that democracies must maintain AI leadership to prevent a "global totalitarian dictatorship." [23]
The fourth category addresses economic disruption, including mass labor displacement and concentration of wealth. Amodei notes that AI could displace half of all entry-level white-collar jobs within one to five years, and warns of wealth concentration exceeding that of the Gilded Age, with personal fortunes potentially reaching into the trillions of dollars. [23] [30] [25] [24]
The fifth category encompasses indirect effects and unknown unknowns, including rapid advances in biology that could alter human lifespans or human intelligence, unhealthy changes to human life from AI interaction, and challenges to human purpose in a world where AI exceeds human capabilities across virtually all domains. [23]