| Anthropic PBC | |
|---|---|
| Company type | Private |
| Industry | Artificial intelligence |
| Founded | 2021 |
| Founders | Daniela Amodei, Dario Amodei, and five other former OpenAI employees |
| Headquarters | San Francisco, California, U.S. |
| Products | Claude, Claude Code |
| Number of employees | c. 1,000 (2025) [4] |
| Website | anthropic.com |
Anthropic PBC is an American artificial intelligence (AI) startup company founded in 2021. Anthropic has developed a family of large language models (LLMs) named Claude. According to the company, it researches and develops AI to "study their safety properties at the technological frontier" and use this research to deploy safe models for the public. [5] [6]
Anthropic was founded by former members of OpenAI, including siblings Daniela Amodei and Dario Amodei. [7] In September 2023, Amazon announced an investment of up to $4 billion, followed the next month by a $2 billion commitment from Google. [8] [9] [10]
Anthropic was founded in 2021 by seven former employees of OpenAI, including siblings Daniela Amodei and Dario Amodei, the latter of whom served as OpenAI's Vice President of Research. [11] [12]
In April 2022, Anthropic announced it had received $580 million in funding, [13] including a $500 million investment from FTX under the leadership of Sam Bankman-Fried. [14] [3]
In the summer of 2022, Anthropic finished training the first version of Claude but did not release it, citing the need for further internal safety testing and a desire to avoid initiating a potentially hazardous race to develop increasingly powerful AI systems. [15]
On September 25, 2023, Amazon announced a partnership with Anthropic, with Amazon becoming a minority stakeholder by initially investing $1.25 billion, and planning a total investment of $4 billion. [8] As part of the deal, Anthropic would use Amazon Web Services (AWS) as its primary cloud provider and make its AI models available to AWS customers. [8] [16] The next month, Google invested $500 million in Anthropic, and committed to an additional $1.5 billion over time. [10]
In February 2024, Anthropic hired former Google Books head of partnerships Tom Turvey, and tasked him with obtaining "all the books in the world". [17] The company then began using destructive book scanning to digitize "millions" of books to train Claude. [17]
In March 2024, Amazon invested a further US$2.75 billion in Anthropic, completing the $4 billion commitment it had made the previous year. [9]
In November 2024, Amazon announced a new investment of $4 billion in Anthropic (bringing its total investment to $8 billion), including an agreement to increase the use of Amazon's AI chips for training and running Anthropic's large language models. [18]
In 2024, Anthropic attracted several notable employees from OpenAI, including Jan Leike, John Schulman, and Durk Kingma. [19]
In early 2025, Anthropic secured significant funding and partnerships while continuing its focus on AI safety research and policy advocacy. The company raised $3.5 billion in a Series E funding round in March, achieving a post-money valuation of $61.5 billion; the round was led by Lightspeed Venture Partners with participation from several major investors. [20] [21] The investment enabled Anthropic to advance development of next-generation AI systems, expand compute capacity, and accelerate international expansion. [20] Also in March, Anthropic announced a five-year strategic partnership with Databricks to integrate its models natively into the Databricks Data Intelligence Platform, giving more than 10,000 companies access to Claude models for building AI agents that can reason over their enterprise data. [22] [23]
Anthropic released several major updates to its Claude AI models throughout 2025. In May, the company announced Claude 4, introducing both Claude Opus 4 and Claude Sonnet 4 with enhanced coding capabilities and advanced reasoning features. [24] Claude Opus 4 was positioned as a highly competitive coding model with sustained performance on complex tasks, while Claude Sonnet 4 delivered improved reasoning and instruction-following. [24] The company also introduced new API capabilities, including a code execution tool, a Model Context Protocol (MCP) connector, a Files API, and prompt caching. [24] The same month, Anthropic launched a web search API that enabled Claude to access real-time information from the internet, extending its capabilities beyond static training data. [25] Claude Code, Anthropic's coding assistant, moved from research preview to general availability, with integrations for VS Code and JetBrains IDEs and support for GitHub Actions. [24] The product let developers collaborate directly with Claude in their development environment, with the AI capable of making coordinated changes across multiple files and understanding entire codebases. [26]
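For context, these capabilities are exposed to developers through Anthropic's Messages API. The following is a minimal sketch of a request using the official `anthropic` Python SDK; the model name and prompt are illustrative, and the response handling assumes a plain text reply.

```python
# Minimal sketch of a Messages API request via the `anthropic` Python SDK.
# The model name and prompt are illustrative; real applications should add
# error handling and check the response's content block types.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",   # example model identifier
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain prompt caching in one paragraph."}],
)

# Assumes the first content block is text, which holds for simple prompts.
print(message.content[0].text)
```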
In July, Anthropic published a report titled "Build AI in America", outlining policy recommendations for domestic AI infrastructure development. [27] The report emphasized the need for substantial investments in computing power and electricity infrastructure, projecting that the U.S. AI sector would require at least 50 gigawatts of electric capacity by 2028 to maintain global leadership. [27]
According to Anthropic, the company's goal is to research the safety and reliability of artificial intelligence systems. [6] The Amodei siblings were among those who left OpenAI due to directional differences. [12]
Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which enables directors to balance the financial interests of stockholders with its public benefit purpose. [28]
Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity". It holds Class T shares in the PBC which allow it to elect directors onto Anthropic's board. [29] [30] As of April 2025, the members of the Trust are Neil Buddy Shah, Kanika Bahl and Zach Robinson. [31]
Investors include Amazon ($8 billion), [18] Google ($2 billion), [10] and Menlo Ventures ($750 million). [32]
Claude incorporates "Constitutional AI" to set safety guidelines for the model's output. [36] The name "Claude" was chosen either as a reference to mathematician Claude Shannon or as a male name to contrast with the female names of other AI assistants such as Alexa, Siri, and Cortana. [3]
Anthropic initially released two versions of its model, Claude and Claude Instant, in March 2023, with the latter being a more lightweight model. [37] [38] [39] The next iteration, Claude 2, was launched in July 2023. [40] Unlike Claude, which was available only to select users, Claude 2 was made available for public use. [41]
Claude 3 was released in March 2024, with three language models: Opus, Sonnet, and Haiku. [42] [43] The Opus model is the largest. According to Anthropic, it outperformed OpenAI's GPT-4 and GPT-3.5, and Google's Gemini Ultra, in benchmark tests at the time. Sonnet and Haiku are Anthropic's medium- and small-sized models, respectively. All three models can accept image input. [42] Amazon has added Claude 3 to its cloud AI service Bedrock. [44]
In May 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and the Claude iOS app. [45]
In June 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability, which let Claude create code in a dedicated window in the interface and preview certain output, such as websites or SVGs, in real time. [46]
In October 2024, Anthropic released an improved version of Claude 3.5, along with a beta feature called "Computer use", which enables Claude to take screenshots, click, and type text. [47]
In November 2024, Palantir announced a partnership with Anthropic and Amazon Web Services to provide U.S. intelligence and defense agencies access to Claude 3 and 3.5. According to Palantir, this was the first time that Claude would be used in "classified environments". [48]
In December 2024, Claude 3.5 Haiku was made available to all users on web and mobile platforms. [49]
In February 2025, Claude 3.7 Sonnet was introduced to all paid users. It is a "hybrid reasoning" model (one that responds directly to simple queries, while taking more time for complex problems). [50] [51]
In May 2025, Claude 4 Opus and Sonnet were introduced. With these models, Anthropic also introduced extended thinking with tool use and the ability to use tools in parallel. [52]
In August 2025, Claude Opus 4.1 was announced. [53]
According to Anthropic, Constitutional AI (CAI) is a framework developed to align AI systems with human values and ensure that they are helpful, harmless, and honest. [11] [54] Within this framework, humans provide a set of rules describing the desired behavior of the AI system, known as the "constitution". [54] The AI system critiques and revises its own outputs against the constitution, and the revised outputs are then used to adjust the model to better fit it. [54] This self-reinforcing process aims to avoid harm, respect preferences, and provide true information. [54]
Some of the principles of Claude 2's constitution are derived from documents such as the 1948 Universal Declaration of Human Rights and Apple's terms of service. [40] For example, one rule from the UN Declaration applied in Claude 2's CAI states "Please choose the response that most supports and encourages freedom, equality and a sense of brotherhood." [40]
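As a rough illustration of the critique-and-revision loop the framework describes, here is a minimal sketch in Python. The `generate` placeholder, the prompt wording, and the second principle are hypothetical and stand in for Anthropic's actual training pipeline; only the first principle is quoted from Claude 2's published constitution.

```python
# Hypothetical sketch of Constitutional AI's critique-and-revision loop.
# `generate` stands in for any language-model completion call; the prompts
# below are illustrative, not Anthropic's actual training code.

CONSTITUTION = [
    "Please choose the response that most supports and encourages freedom, "
    "equality and a sense of brotherhood.",
    "Please choose the response that is least likely to be harmful.",  # illustrative
]

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def critique_and_revise(user_prompt: str) -> str:
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {response}\n"
            "Identify ways the response violates the principle."
        )
        response = generate(
            f"Response: {response}\n"
            f"Critique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    # In training, revised responses become data for fine-tuning the model.
    return response
```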
Anthropic also publishes research on the interpretability of machine learning systems, focusing on the transformer architecture. [11] [55] [56]
Part of Anthropic's research aims to automatically identify "features" in generative pretrained transformers like Claude. In a neural network, a feature is a pattern of neural activations that corresponds to a concept. In 2024, using a compute-intensive technique called "dictionary learning", Anthropic was able to identify millions of features in Claude, including, for example, one associated with the Golden Gate Bridge. Enhancing the ability to identify and edit features is expected to have significant safety implications. [57] [58] [59]
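Sparse autoencoders are a common dictionary-learning method in this line of interpretability work. The sketch below shows the general shape of that approach; the dimensions, the L1 penalty weight, and the use of PyTorch are illustrative assumptions rather than Anthropic's published implementation.

```python
# Sketch of dictionary learning over model activations with a sparse
# autoencoder. Dimensions and the L1 weight are illustrative only.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 512, n_features: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activations -> feature coefficients
        self.decoder = nn.Linear(n_features, d_model)  # features -> reconstructed activations

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative
        return self.decoder(features), features

def loss_fn(x: torch.Tensor, model: SparseAutoencoder, l1_weight: float = 1e-3):
    reconstruction, features = model(x)
    # Reconstruction error keeps the dictionary faithful to the activations;
    # the L1 penalty drives most feature coefficients to zero, so each active
    # feature tends to align with a single interpretable concept.
    return ((reconstruction - x) ** 2).mean() + l1_weight * features.abs().mean()
```

A feature such as the Golden Gate Bridge one is then found by inspecting which inputs most strongly activate a given learned direction.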
In March 2025, research by Anthropic suggested that multilingual LLMs partially process information in a conceptual space before converting it to the appropriate language. It also found evidence that LLMs can sometimes plan ahead. For example, when writing poetry, Claude identifies potential rhyming words before generating a line that ends with one of these words. [60] [61]
Anthropic partnered with Palantir and Amazon Web Services in November 2024 to provide the Claude model to U.S. intelligence and defense agencies. [62] Anthropic's CEO Dario Amodei said about working with the U.S. military:
The position that we should never use AI in defense and intelligence settings doesn't make sense to me. The position that we should go gangbusters and use it to make anything we want — up to and including doomsday weapons — that's obviously just as crazy. We're trying to seek the middle ground, to do things responsibly. [63]
In June 2025, Anthropic announced a "Claude Gov" model. Ars Technica reported that, as of June 2025, it was in use at multiple US national security agencies. [64]
In July 2025, the United States Department of Defense announced that Anthropic had received a $200 million contract for AI in the military, along with Google, OpenAI, and xAI. [65]
In August 2025, Anthropic launched two major education initiatives: a Higher Education Advisory Board and three AI Fluency courses designed to guide responsible AI integration in academic settings. [66] The advisory board is chaired by Rick Levin, former president of Yale University (1993-2013) and former CEO of Coursera (2014-2017), and includes prominent academic leaders from institutions such as Rice University, the University of Michigan, the University of Texas at Austin, and Stanford University. [67] The three AI Fluency courses (AI Fluency for Educators, AI Fluency for Students, and Teaching AI Fluency) were co-developed with professors Rick Dakan of Ringling College of Art and Design and Joseph Feller of University College Cork, and are available under Creative Commons licenses for institutional adaptation. [68] Additionally, Anthropic has established partnerships with universities including Northeastern University, the London School of Economics and Political Science, and Champlain College, providing campus-wide access to Claude for Education, and has announced integrations with the educational platforms Canvas, Wiley, and Panopto to enhance academic research capabilities. [69]
On October 18, 2023, Anthropic was sued by Concord, Universal, ABKCO, and other music publishers for what the complaint described as "systematic and widespread infringement of their copyrighted song lyrics." [70] [71] [72] They alleged that the company used copyrighted song lyrics without permission, and asked for up to $150,000 for each work infringed. [73] In the lawsuit, the plaintiffs supported their allegations by citing several examples of Anthropic's Claude model outputting copied lyrics from songs such as Katy Perry's "Roar" and Gloria Gaynor's "I Will Survive". [73] Additionally, the plaintiffs alleged that even for some prompts that did not directly name a song, the model responded with modified lyrics based on the original work. [73]
On January 16, 2024, Anthropic claimed that the music publishers were not unreasonably harmed and that the examples noted by plaintiffs were merely bugs. [74]
In August 2024, a class-action lawsuit was filed against Anthropic in California for alleged copyright infringement. The suit claims Anthropic fed its LLMs pirated copies of the authors' work, including works by plaintiffs Kirk Wallace Johnson, Andrea Bartz, and Charles Graeber. [75] On June 23, 2025, the United States District Court for the Northern District of California granted summary judgment for Anthropic, finding that the use of digital copies of the plaintiffs' works for, among other things, training Anthropic's LLMs was a fair use. But it found that Anthropic had used millions of pirated library copies and that such use of pirated copies could not be a fair use. The case was therefore ordered to go to trial over the pirated copies used to create Anthropic's central library and the resulting damages. [76]
In June 2025, Reddit sued Anthropic, alleging that it is scraping data from the website in violation of its user agreement. [77]