Formation | 2014
---|---
Founder | Paul Allen
Type | Non-profit research institute
Tax ID no. | 82-4083177
Location | Seattle, Washington, U.S.
CEO | Ali Farhadi
Key people | Peter Clark, Yejin Choi, Noah Smith, Hannaneh Hajishirzi, Dan Weld, Chris Bretherton, Ani Kembhavi, Jes Lefcourt
Website | allenai.org
The Allen Institute for AI (abbreviated AI2) is a 501(c)(3) non-profit scientific research institute founded in 2014 by the late Microsoft co-founder and philanthropist Paul Allen. The institute seeks to conduct high-impact AI research and engineering in service of the common good.[1] AI2 is based in Seattle and also has an active office in Tel Aviv, Israel.[2]
Oren Etzioni was appointed by Paul Allen in September 2013 to direct research at the institute.[3] After leading the organization for nine years, Etzioni stepped down from his role as CEO on September 30, 2022.[4] He was replaced in an interim capacity by Peter Clark, the lead researcher on the institute's Aristo project.[5] On June 20, 2023, AI2 announced Ali Farhadi as its next CEO, effective July 31, 2023.[5]
AI2 has contributed to the development of open-source artificial intelligence by releasing fully open large language models, datasets, and model-training assets.
On May 11, 2023, AI2 announced it was developing OLMo, an open language model aiming to match the performance of other state-of-the-art language models.[14] In February 2024, 1B and 7B parameter variants of the model were open-sourced, including code, model weights with intermediate snapshots and logs, and the contents of the Dolma training dataset,[15] making it the most open state-of-the-art model available.[16][17] In November 2024, AI2 released the second iteration of OLMo, OLMo 2, with the initial release including 7B and 13B parameter models.[18][19] In March 2025, AI2 released a 32B variant of OLMo 2, claiming to have released "the first fully-open model (all data, code, weights, and details are freely available) to outperform GPT3.5-Turbo and GPT-4o mini".[20]
In addition to the fully open OLMo family of models, AI2 has developed Tulu, a family of instruction-tuned models and open post-training recipes that build on open-weights base models (e.g., Meta's Llama) to provide fully transparent alternatives to proprietary instruction-tuning methods.[21] AI2 released the first iteration of Tulu in June 2023,[22] with subsequent iterations released in November 2023 (Tulu 2[23]) and November 2024 (Tulu 3[24]).