List of large language models

A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.

List

For the training cost column, 1 petaFLOP-day equals 1 petaFLOP/sec × 1 day, or 8.64×10^19 FLOP (floating-point operations). Only the cost of the largest model in a series is shown. The number of parameters is measured in billions, [a] and the training cost is measured in petaFLOP-days.
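Where only parameter and token counts are published, this column can be sanity-checked with the common heuristic C ≈ 6·N·D FLOP for a model with N parameters trained on D tokens. A minimal sketch in Python; the 6·N·D rule is an approximation, not how every tabulated figure was derived:

    # Sketch: estimate training compute and convert it to petaFLOP-days.
    PETAFLOP_DAY = 1e15 * 86400  # = 8.64e19 FLOP

    def training_cost_pfd(n_params: float, n_tokens: float) -> float:
        """Approximate training cost in petaFLOP-days via C ~= 6*N*D."""
        return 6 * n_params * n_tokens / PETAFLOP_DAY

    # GPT-3: 175B parameters, 300 billion tokens -> ~3,646 petaFLOP-days,
    # close to the 3640 listed in the 2020 table below.
    print(f"{training_cost_pfd(175e9, 300e9):,.0f} petaFLOP-days")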

2018

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
GPT-1 | Jun 11 | OpenAI | 0.117B | Unknown | 1 [1] | MIT [2] | First GPT model, decoder-only transformer. Trained for 30 days on 8 P600 GPUs. [3]
BERT | Oct 2018 | Google | 0.340B [4] | 3.3 billion words [4] | 9 [5] | Apache 2.0 [6] | An early and influential language model. [7] Encoder-only and thus not built to be prompted or generative. [8] Training took 4 days on 64 TPUv2 chips. [9]

2019

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
T5 | Oct 2019 | Google | 11B [10] | 34 billion tokens [10] | Unknown | Apache 2.0 [11] | Base model for Google projects like Imagen. [12]
XLNet | Jun 2019 | Google | 0.340B [13] | 33 billion words | 330 | Apache 2.0 [14] | An alternative to BERT; designed as encoder-only. Trained on 512 TPU v3 chips for 5.5 days. [15]
GPT-2 | Feb 2019 | OpenAI | 1.5B [16] | 40 GB [17] (~10 billion tokens) [18] | 28 [19] | MIT [20] | Trained on 32 TPUv3 chips for 1 week. [19]

2020

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
GPT-3 | May 2020 | OpenAI | 175B [21] | 300 billion tokens [18] | 3640 [22] | Proprietary | A fine-tuned variant of GPT-3, termed GPT-3.5, was made available to the public through ChatGPT in 2022. [23]

2021

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
GPT-Neo | Mar 2021 | EleutherAI | 2.7B [24] | 825 GiB [25] | Unknown | MIT [26] | The first of a series of free GPT-3 alternatives released by EleutherAI. GPT-Neo outperformed an equivalent-size GPT-3 model on some benchmarks, but was significantly worse than the largest GPT-3. [26]
GPT-J | Jun 2021 | EleutherAI | 6B [27] | 825 GiB [25] | 200 [28] | Apache 2.0
Megatron-Turing NLG | Oct 2021 [29] | Microsoft and Nvidia | 530B [30] | 338.6 billion tokens [30] | 38,000 [31] | Unreleased | Trained for 3 months on over 2000 A100 GPUs on the NVIDIA Selene supercomputer, for over 3 million GPU-hours. [31]
Ernie 3.0 Titan | Dec 2021 | Baidu | 260B [32] | 4 TB | Unknown | Proprietary
Claude [33] | Dec 2021 | Anthropic | 52B [34] | 400 billion tokens [34] | Unknown | Proprietary | Fine-tuned for desirable behavior in conversations. [35]
GLaM (Generalist Language Model) | Dec 2021 | Google | 1200B [36] | 1.6 trillion tokens [36] | 5600 [36] | Proprietary
Gopher | Dec 2021 | Google DeepMind | 280B [37] | 300 billion tokens [38] | 5833 [39] | Proprietary

2022

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
LaMDA (Language Models for Dialog Applications) | Jan 2022 | Google | 137B [40] | 1.56T words, [40] 168 billion tokens [38] | 4110 [41] | Proprietary
GPT-NeoX | Feb 2022 | EleutherAI | 20B [42] | 825 GiB [25] | 740 [28] | Apache 2.0
Chinchilla | Mar 2022 | Google DeepMind | 70B [43] | 1.4 trillion tokens [43][38] | 6805 [39] | Proprietary
PaLM (Pathways Language Model) | Apr 2022 | Google | 540B [44] | 768 billion tokens [43] | 29,250 [39] | Proprietary | Trained for ~60 days on ~6000 TPU v4 chips. [39]
OPT (Open Pretrained Transformer) | May 2022 | Meta | 175B [45] | 180 billion tokens [46] | 310 [28] | Non-commercial research [d] | GPT-3 architecture with some adaptations from Megatron. The team's training logbook was published. [47]
YaLM 100B | Jun 2022 | Yandex | 100B [48] | 1.7 TB [48] | Unknown | Apache 2.0
Minerva | Jun 2022 | Google | 540B [49] | 38.5B tokens from webpages filtered for math content and from arXiv [49] | Unknown | Proprietary | For solving "mathematical and scientific questions using step-by-step reasoning". [50]
BLOOM | Jul 2022 | Large collaboration led by Hugging Face | 175B [51] | 350 billion tokens (1.6 TB) [52] | Unknown | Responsible AI
Galactica | Nov 2022 | Meta | 120B | 106 billion tokens [53] | Unknown | CC BY-NC 4.0
AlexaTM (Teacher Models) | Nov 2022 | Amazon | 20B [54] | 1.3 trillion [55] | Unknown | Proprietary [56]

2023

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
Llama | Feb 2023 | Meta AI | 65B [57] | 1.4 trillion [57] | 6300 [58] | Non-commercial research [e]
GPT-4 | Mar 2023 | OpenAI | Unknown [f] (according to rumors: 1760) [60] | Unknown | Unknown, estimated 230,000 | Proprietary
Cerebras-GPT | Mar 2023 | Cerebras | 13B [61] | Unknown | 270 [28] | Apache 2.0
Falcon | Mar 2023 | Technology Innovation Institute | 40B [62] | 1 trillion tokens, from RefinedWeb (filtered web text corpus) [63] plus some "curated corpora" [64] | 2800 [58] | Apache 2.0 [65]
BloombergGPT | Mar 2023 | Bloomberg L.P. | 50B | 363 billion tokens from Bloomberg's proprietary data sources, plus 345 billion tokens from general-purpose datasets [66] | Unknown | Unreleased | Designed for financial tasks. [66]
PanGu-Σ | Mar 2023 | Huawei | 1085B | 329 billion tokens [67] | Unknown | Proprietary
OpenAssistant [68] | Mar 2023 | LAION | 17B | 1.5 trillion tokens | Unknown | Apache 2.0
Jurassic-2 [69][70] | Mar 2023 | AI21 Labs | Unknown | Unknown | Unknown | Proprietary
PaLM 2 (Pathways Language Model 2) | May 2023 | Google | 340B [71] | 3.6 trillion tokens [71] | 85,000 [58] | Proprietary | Used in the Bard chatbot. [72]
YandexGPT | May 17, 2023 | Yandex | Unknown | Unknown | Unknown | Proprietary
Llama 2 | Jul 2023 | Meta AI | 70B [73] | 2 trillion tokens [73] | 21,000 | Llama 2 | Trained over 3.3 million GPU (A100) hours. [74]
Claude 2 | Jul 2023 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Used in the Claude chatbot. [75]
Granite 13b | Jul 2023 | IBM | Unknown | Unknown | Unknown | Proprietary | Used in IBM Watsonx. [76]
Mistral 7B | Sep 2023 | Mistral AI | 7.3B [77] | Unknown | Unknown | Apache 2.0
YandexGPT 2 | Sep 7, 2023 | Yandex | Unknown | Unknown | Unknown | Proprietary
Claude 2.1 | Nov 2023 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Used in the Claude chatbot. Has a context window of 200,000 tokens, or ~500 pages. [78]
Grok-1 [79] | Nov 2023 | xAI | 314B | Unknown | Unknown | Apache 2.0 | Used in the Grok chatbot. Grok-1 has a context length of 8,192 tokens and has access to X (Twitter). [80]
Gemini 1.0 | Dec 2023 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Multimodal model, comes in three sizes. Used in the chatbot of the same name. [81]
Mixtral 8x7B | Dec 2023 | Mistral AI | 46.7B | Unknown | Unknown | Apache 2.0 | Outperforms GPT-3.5 and Llama 2 70B on many benchmarks. [82] Mixture-of-experts model, with 12.9 billion parameters activated per token [83] (see the sketch after this table).
DeepSeek-LLM | Nov 29, 2023 | DeepSeek | 67B | 2T tokens [84]: Table 2 | 12,000 | DeepSeek | Trained on English and Chinese text. Used roughly 10^24 training FLOP for the 67B model and roughly 10^23 for the 7B. [84]: Figure 5
Phi-2 | Dec 2023 | Microsoft | 2.7B | 1.4T tokens | 419 [85] | MIT | Trained on real and synthetic "textbook-quality" data over 14 days on 96 A100 GPUs. [85]
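Several entries in this list quote both a total and an "active" parameter count for mixture-of-experts (MoE) models, such as Mixtral 8x7B above. The active count per token is roughly the shared (non-expert) weights plus the weights of the experts actually routed to. A minimal sketch in Python; the ~1.6B "shared" figure is an assumption back-solved from the published 46.7B-total/12.9B-active numbers, not a documented value:

    # Sketch: parameters active per token in a mixture-of-experts model.
    # active ~= shared + top_k * per_expert, where "shared" covers
    # embeddings, attention, and other non-expert weights.
    def active_params(total: float, shared: float, n_experts: int, top_k: int) -> float:
        per_expert = (total - shared) / n_experts  # weights of one expert FFN
        return shared + top_k * per_expert

    # Mixtral-8x7B-like: 46.7B total, 8 experts, 2 routed per token,
    # assumed ~1.6B shared -> ~12.9B active, matching the figure above.
    print(f"{active_params(46.7e9, 1.6e9, 8, 2) / 1e9:.1f}B active")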

2024

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
Gemini 1.5 | Feb 2024 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Multimodal model based on a MoE architecture. Context window above 1 million tokens. [86]
Gemini Ultra | Feb 2024 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary
Gemma | Feb 2024 | Google DeepMind | 7B | 6T tokens | Unknown | Gemma Terms of Use [87]
OLMo | Feb 2024 | Allen Institute for AI | 7B [88] | 2T tokens [89] | Unknown | Apache 2.0
Claude 3 | Mar 2024 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Includes three models: Haiku, Sonnet, and Opus. [90]
DBRX | Mar 2024 | Databricks and MosaicML | 136B | 12T tokens | Unknown | Databricks Open Model [91][92]
YandexGPT 3 Pro | Mar 28, 2024 | Yandex | Unknown | Unknown | Unknown | Proprietary
Fugaku-LLM [93] | May 2024 | Fujitsu, Tokyo Institute of Technology, Tohoku University, RIKEN, etc. | 13B | 380B tokens | Unknown | Fugaku-LLM Terms of Use [94] | The largest model trained using CPUs only; trained from scratch on 380 billion tokens using 13,824 nodes of the Fugaku supercomputer. [93][95]
Chameleon | May 2024 | Meta AI | 34B [96] | 4.4 trillion | Unknown | Non-commercial research [97]
Mixtral 8x22B [98] | Apr 17, 2024 | Mistral AI | 141B | Unknown | Unknown | Apache 2.0
Phi-3 | Apr 23, 2024 | Microsoft | 14B [99] | 4.8T tokens [citation needed] | Unknown | MIT | Marketed by Microsoft as a "small language model". [100]
Granite Code Models | May 2024 | IBM | Unknown | Unknown | Unknown | Apache 2.0
YandexGPT 3 Lite | May 28, 2024 | Yandex | Unknown | Unknown | Unknown | Proprietary
Qwen2 | Jun 2024 | Alibaba Cloud | 72B [101] | 3T tokens | Unknown | Various
DeepSeek-V2 | Jun 2024 | DeepSeek | 236B | 8.1T tokens | 28,000 | DeepSeek | 1.4M hours on H800 GPUs. [102]
Nemotron-4 | Jun 2024 | Nvidia | 340B | 9T tokens | 200,000 | NVIDIA Open Model [103][104] | Trained for 1 epoch on 6144 H100 GPUs between December 2023 and May 2024. [105][106]
Claude 3.5 | Jun 2024 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Initially, only one model, Sonnet, was released. [107] In October 2024, Sonnet 3.5 was upgraded and Haiku 3.5 became available. [108]
Llama 3.1 | Jul 2024 | Meta AI | 405B | 15.6T tokens | 440,000 | Llama 3 | The 405B version took 31 million GPU-hours on H100-80GB, about 3.8×10^25 FLOP [109][110] (see the sketch after this table).
Grok-2 | Aug 14, 2024 | xAI | Unknown | Unknown | Unknown | xAI Community License Agreement [111][112] | Originally closed-source, then re-released as "Grok 2.5" under a source-available license in August 2025. [113][114]
OpenAI o1 | Sep 12, 2024 | OpenAI | Unknown | Unknown | Unknown | Proprietary | First LLM described as a "reasoning model". [115][116][better source needed]
Sarvam-1 | Oct 24, 2024 | Sarvam AI | 2B | ~2T tokens | Unknown | Sarvam AI Research | Supports 10 Indic languages and English. [117][118]
YandexGPT 4 Lite and Pro | Oct 24, 2024 | Yandex | Unknown | Unknown | Unknown | Proprietary
Mistral Large | Nov 2024 | Mistral AI | 123B | Unknown | Unknown | Mistral Research | Upgraded over time; the latest version is 24.11. [119]
Pixtral | Nov 2024 | Mistral AI | 123B | Unknown | Unknown | Mistral Research | Multimodal. There is also a 12B version, which is under the Apache 2.0 license. [119]
OLMo 2 | Nov 2024 | Allen Institute for AI | 32B [120][121] | 6.6T tokens [121] | 15,000 [121] | Apache 2.0
Phi-4 | Dec 12, 2024 | Microsoft | 14B [122] | 9.8T tokens | Unknown | MIT | Marketed by Microsoft as a "small language model". [123]
DeepSeek-V3 | Dec 2024 | DeepSeek | 671B | 14.8T tokens | 56,000 | MIT | Used 2.788M training hours on H800 GPUs. [124] Originally released under the DeepSeek License, then re-released under the MIT License as "DeepSeek-V3-0324" in March 2025. [125]
Amazon Nova | Dec 2024 | Amazon | Unknown | Unknown | Unknown | Proprietary | Includes three models: Nova Micro, Nova Lite, and Nova Pro. [126]
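The Llama 3.1 entry above reports both total GPU-hours and total FLOP, which allows a consistency check of the training cost column and an implied per-GPU throughput. A minimal sketch in Python; the ~989 TFLOP/s dense BF16 peak assumed for an H100 SXM is used only to gauge utilization:

    # Sketch: cross-check the Llama 3.1 405B figures
    # (31 million H100-hours, ~3.8e25 FLOP).
    PETAFLOP_DAY = 8.64e19      # FLOP in one petaFLOP-day
    H100_BF16_PEAK = 989e12     # assumed dense BF16 peak, FLOP/s

    total_flop = 3.8e25
    gpu_hours = 31e6

    print(f"{total_flop / PETAFLOP_DAY:,.0f} petaFLOP-days")  # ~440,000, as tabulated
    per_gpu = total_flop / (gpu_hours * 3600)  # sustained FLOP/s per GPU
    print(f"{per_gpu / 1e12:.0f} TFLOP/s per GPU, "
          f"{per_gpu / H100_BF16_PEAK:.0%} of assumed peak")  # ~341 TFLOP/s, ~34%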

2025

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
DeepSeek-R1 | Jan 20 | DeepSeek | 671B | Not applicable | Unknown | MIT | No additional pretraining; trained with reinforcement learning on top of V3-Base. [127][128]
Qwen2.5 | Jan 26 | Alibaba | 72B | 18T tokens | Unknown | Various | 7 dense models with parameter counts from 0.5B to 72B; Alibaba also released 2 MoE variants. [129]
MiniMax-Text-01 | Jan 14 | Minimax | 456B | 4.7T tokens [130] | Unknown | Minimax Model
Gemini 2.0 | Feb 5 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Three models released: Flash, Flash-Lite, and Pro. [132][133][134]
Grok 3 | Feb 19 | xAI | Unknown | Unknown | Unknown | Proprietary | Training compute claimed to be "10x the compute of previous state-of-the-art models". [135]
Claude 3.7 | Feb 24 | Anthropic | Unknown | Unknown | Unknown | Proprietary | One model, Sonnet 3.7. [136]
YandexGPT 5 Lite Pretrain and Pro | Feb 25 | Yandex | Unknown | Unknown | Unknown | Proprietary
GPT-4.5 | Feb 27 | OpenAI | Unknown | Unknown | Unknown | Proprietary | OpenAI's largest non-reasoning model at the time. [137]
Gemini 2.5 | Mar 25 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Three models released: Flash, Flash-Lite, and Pro. [138]
YandexGPT 5 Lite Instruct | Mar 31 | Yandex | Unknown | Unknown | Unknown | Proprietary
Llama 4 | Apr 5 | Meta AI | 400B | 40T tokens | Unknown | Llama 4
OpenAI o3 and o4-mini | Apr 16 | OpenAI | Unknown | Unknown | Unknown | Proprietary | Reasoning models. [141]
Qwen3 | Apr 28 | Alibaba Cloud | 235B | 36T tokens | Unknown | Apache 2.0 | Multiple sizes, the smallest being 0.6B. [142]
Claude 4 | May 22 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Includes two models, Sonnet and Opus. [143]
Sarvam-M | May 23 | Sarvam AI | 24B | Unknown | Unknown | Apache 2.0 | Hybrid reasoning model fine-tuned on the Mistral Small base; optimized for math, programming, and Indian languages. [144][145]
Grok 4 | Jul 9 | xAI | Unknown | Unknown | Unknown | Proprietary
Param-1 | Jul 21 | BharatGen | 2.9B [146] | 5T tokens "focus[ed] on India's linguistic landscape" [146] | Unknown | Apache 2.0
GLM-4.5 | Jul 29 | Z.ai | 355B | 22T tokens [148][g] | Unknown | MIT | Released in 355B and 106B sizes. [149]
GPT-OSS | Aug 5 | OpenAI | 117B | Unknown | Unknown | Apache 2.0 | Released in 20B and 120B sizes. [150]
Claude 4.1 | Aug 5 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Includes one model, Opus. [151]
GPT-5 | Aug 7 | OpenAI | Unknown | Unknown | Unknown | Proprietary | Includes three models: GPT-5, GPT-5 mini, and GPT-5 nano. GPT-5 is available in ChatGPT and the API, and includes reasoning abilities. [152][153]
DeepSeek-V3.1 | Aug 21 | DeepSeek | 671B | 15.639T | Unknown | MIT | Based on DeepSeek-V3 (trained on 14.8T tokens); further trained on 839B tokens from the extension phases (630B + 209B). [154] A hybrid model that can switch between thinking and non-thinking modes. [155]
YandexGPT 5.1 Pro | Aug 28 | Yandex | Unknown | Unknown | Unknown | Proprietary
Apertus | Sep 2 | ETH Zurich and EPF Lausanne | 70B | 15 trillion [156] | Unknown | Apache 2.0 | The first LLM compliant with the European Union's Artificial Intelligence Act. [157]
Claude Sonnet 4.5 | Sep 29 | Anthropic | Unknown | Unknown | Unknown | Proprietary
GLM-4.6 | Sep 30 | Z.ai | 357B | Unknown | Unknown | Apache 2.0
Alice AI LLM 1.0 | Oct 28 | Yandex | Unknown | Unknown | Unknown | Proprietary
Gemini 3 | Nov 18 | Google DeepMind | Unknown | Unknown | Unknown | Proprietary | Models released: Deep Think and Pro. [162]
Olmo 3 [163] | Nov 20 | Allen Institute for AI | 32B | 5.9T tokens [164] | Unknown | Apache 2.0 | Includes 7B and 32B parameter versions, alongside reasoning and instruction-following models. [164]
Claude Opus 4.5 | Nov 24 | Anthropic | Unknown | Unknown | Unknown | Proprietary | Largest model in the Claude family. [165]
DeepSeek-V3.2 | Dec 1 | DeepSeek | 685B | Unknown | Unknown | MIT | Uses a custom DeepSeek Sparse Attention (DSA) mechanism. [166][167][168]
GPT-5.2 | Dec 11 | OpenAI | Unknown | Unknown | Unknown | Proprietary | Reported to have solved an open problem in statistical learning theory that human researchers had left unresolved. [169]
GLM-4.7 | Dec 22 | Z.ai | 355B | Unknown | Unknown | Apache 2.0

2026

Name | Release date [b] | Developer | Number of parameters | Corpus size | Training cost (petaFLOP-days) | License [c] | Notes
Qwen3-Max-Thinking | Jan 26 | Alibaba Cloud | Unknown | Unknown | Unknown | Proprietary | Proprietary reasoning model with adaptive tool use, test-time scaling, and iterative self-reflection. [170]
Kimi K2.5 | Jan 27 | Moonshot AI | 1040B | 15T tokens | Unknown | Modified MIT | Multimodal MoE with 32B active parameters, derived from Kimi K2. [171] Can use "Agent Swarm" technology to coordinate up to 100 parallel sub-agents. [172][173]
Step-3.5-Flash | Feb 12 | StepFun | 196B | Unknown | Unknown | Apache 2.0 | MoE model with 11B active parameters out of 196B total. [174][175][176]
Claude Opus 4.6 | Feb 5 | Anthropic | Unknown | Unknown | Unknown | Proprietary
GPT-5.3-Codex | Feb 5 | OpenAI | Unknown | Unknown | Unknown | Proprietary
GLM-5 | Feb 12 | Z.ai | 754B | Unknown | Unknown | MIT
Claude Sonnet 4.6 | Feb 17 | Anthropic | Unknown | Unknown | Unknown | Proprietary
Param-2 | Feb 17 | BharatGen | 17B | ~22T tokens | Unknown | BharatGen Research [177] | Mixture-of-experts model and successor to Param-1; supports many more Indic languages. Trained on H100 GPUs for 24 days. [178]
Sarvam-105B | Feb 18 [h] | Sarvam AI | 105B | Unknown | Unknown | Apache 2.0 | India's first independently trained foundation model, released in 105B and 30B versions. A mixture-of-experts model using only 10.3B active parameters at a time. [180] Interprets Indic languages and Hinglish. [181][182]
Sarvam-30B | Feb 18 [h] | Sarvam AI | 30B | ~16T tokens | Unknown | Apache 2.0
GPT-5.4 | Mar 5 | OpenAI | Unknown | Unknown | Unknown | Proprietary
Mistral Small 4 | Mar 17 | Mistral AI | 119B | Unknown | Unknown | Apache 2.0 | MoE model with 6B active parameters out of 119B total. [183][184]
MiMo-V2-Pro | Mar 18 | Xiaomi | 1000B [185] | Unknown | Unknown | Proprietary | Mixture-of-experts (MoE) model with more than 1 trillion parameters (43 billion active). Designed for agentic scenarios. Initially available on OpenRouter under the codename "Hunter Alpha" before official release. [186]
Gemma 4 | Apr 2 | Google DeepMind | 31B | Unknown | Unknown | Apache 2.0 | Released in 31B, 26B A4B (3.8 billion active parameters), E4B (4 billion effective parameters), and E2B variants. [187][188]
GLM-5.1 | Apr 7 | Z.ai | 754B | Unknown | Unknown | MIT | MoE model designed for agentic coding. [189][190]
Qwen3.6 (Qwen3.6-35B-A3B) | Apr 15 | Alibaba Cloud | 35B | Unknown | Unknown | Apache 2.0 | MoE model with 3B active parameters out of 35B total. [191][192]
Claude Opus 4.7 | Apr 16 | Anthropic | Unknown | Unknown | Unknown | Proprietary
GPT-5.5 | Apr 23 | OpenAI | Unknown | Unknown | Unknown | Proprietary
DeepSeek-V4-Flash | Apr 24 | DeepSeek | 284B | 32T | Unknown | MIT | Preview release. [193]
DeepSeek-V4-Pro | Apr 24 | DeepSeek | 1.6T | Unknown | Unknown | MIT | Preview release. [193]

Notes

  a. In many cases, researchers release or report on multiple versions of a model having different sizes. In these cases, the size of the largest model is listed here.
  b. This is the date that documentation describing the model's architecture was first released.
  c. This is the license of the pre-trained model weights. In almost all cases the training code itself is open-source or can be easily replicated. LLMs may be licensed differently from the chatbots that use them; for the licenses of chatbots, see List of chatbots.
  d. The smaller models, including 66B, are publicly available, while the 175B model is available on request.
  e. Facebook's license and distribution scheme restricted access to approved researchers, but the model weights were leaked and became widely available.
  f. As stated in the technical report: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method ..." [59]
  g. Corpus size was calculated by combining the 15 trillion tokens and the 7 trillion tokens pre-training mix.
  h. An early checkpoint of the model was released in January. [179]

References

  1. "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on 2023-03-18. Retrieved 2023-03-18.
  2. "finetune-transformer-lm". GitHub. Archived from the original on 19 May 2023. Retrieved 2 January 2024.
  3. Radford, Alec (11 June 2018). "Improving language understanding with unsupervised learning". OpenAI . Retrieved 18 November 2025.
  4. Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (11 October 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv: 1810.04805v2 [cs.CL].
  5. Prickett, Nicole Hemsoth (2021-08-24). "Cerebras Shifts Architecture To Meet Massive AI/ML Models". The Next Platform. Archived from the original on 2023-06-20. Retrieved 2023-06-20.
  6. "BERT". March 13, 2023. Archived from the original on January 13, 2021. Retrieved March 13, 2023 via GitHub.
  7. Manning, Christopher D. (2022). "Human Language Understanding & Reasoning". Daedalus. 151 (2): 127–138. doi: 10.1162/daed_a_01905 . S2CID   248377870. Archived from the original on 2023-11-17. Retrieved 2023-03-09.
  8. Patel, Ajay; Li, Bryan; Rasooli, Mohammad Sadegh; Constant, Noah; Raffel, Colin; Callison-Burch, Chris (2022). "Bidirectional Language Models Are Also Few-shot Learners". arXiv: 2209.14500 [cs.LG].
  9. Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (11 October 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv: 1810.04805v2 [cs.CL].
  10. Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". Journal of Machine Learning Research. 21 (140): 1–67. arXiv: 1910.10683. ISSN 1533-7928.
  11. google-research/text-to-text-transfer-transformer, Google Research, 2024-04-02, archived from the original on 2024-03-29, retrieved 2024-04-04
  12. "Imagen: Text-to-Image Diffusion Models". imagen.research.google. Archived from the original on 2024-03-27. Retrieved 2024-04-04.
  13. "Pretrained models — transformers 2.0.0 documentation". huggingface.co. Archived from the original on 2024-08-05. Retrieved 2024-08-05.
  14. "xlnet". GitHub. Archived from the original on 2 January 2024. Retrieved 2 January 2024.
  15. Yang, Zhilin; Dai, Zihang; Yang, Yiming; Carbonell, Jaime; Salakhutdinov, Ruslan; Le, Quoc V. (2 January 2020). "XLNet: Generalized Autoregressive Pretraining for Language Understanding". arXiv: 1906.08237 [cs.CL].
  16. "GPT-2: 1.5B Release". OpenAI. 2019-11-05. Archived from the original on 2019-11-14. Retrieved 2019-11-14.
  17. "Better language models and their implications". openai.com. Archived from the original on 2023-03-16. Retrieved 2023-03-13.
  18. 1 2 "OpenAI's GPT-3 Language Model: A Technical Overview". lambdalabs.com. 3 June 2020. Archived from the original on 27 March 2023. Retrieved 13 March 2023.
  19. 1 2 "openai-community/gpt2-xl · Hugging Face". huggingface.co. Archived from the original on 2024-07-24. Retrieved 2024-07-24.
  20. "gpt-2". GitHub. Archived from the original on 11 March 2023. Retrieved 13 March 2023.
  21. Wiggers, Kyle (28 April 2022). "The emerging types of language models and why they matter". TechCrunch. Archived from the original on 16 March 2023. Retrieved 9 March 2023.
  22. Table D.1 in Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". arXiv: 2005.14165v4 [cs.CL].
  23. "ChatGPT: Optimizing Language Models for Dialogue". OpenAI. 2022-11-30. Archived from the original on 2022-11-30. Retrieved 2023-01-13.
  24. "GPT Neo". March 15, 2023. Archived from the original on March 12, 2023. Retrieved March 12, 2023 via GitHub.
  25. Gao, Leo; Biderman, Stella; Black, Sid; Golding, Laurence; Hoppe, Travis; Foster, Charles; Phang, Jason; He, Horace; Thite, Anish; Nabeshima, Noa; Presser, Shawn; Leahy, Connor (31 December 2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling". arXiv: 2101.00027 [cs.CL].
  26. Iyer, Abhishek (15 May 2021). "GPT-3's free alternative GPT-Neo is something to be excited about". VentureBeat. Archived from the original on 9 March 2023. Retrieved 13 March 2023.
  27. "GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront". www.forefront.ai. Archived from the original on 2023-03-09. Retrieved 2023-02-28.
  28. Dey, Nolan; Gosal, Gurpreet; Zhiming; Chen; Khachane, Hemant; Marshall, William; Pathria, Ribhu; Tom, Marvin; Hestness, Joel (2023-04-01). "Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster". arXiv: 2304.03208 [cs.LG].
  29. Alvi, Ali; Kharya, Paresh (11 October 2021). "Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World's Largest and Most Powerful Generative Language Model". Microsoft Research. Archived from the original on 13 March 2023. Retrieved 13 March 2023.
  30. Smith, Shaden; Patwary, Mostofa; Norick, Brandon; LeGresley, Patrick; Rajbhandari, Samyam; Casper, Jared; Liu, Zhun; Prabhumoye, Shrimai; Zerveas, George; Korthikanti, Vijay; Zhang, Elton; Child, Rewon; Aminabadi, Reza Yazdani; Bernauer, Julie; Song, Xia (2022-02-04). "Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model". arXiv: 2201.11990 [cs.CL].
  31. Rajbhandari, Samyam; Li, Conglong; Yao, Zhewei; Zhang, Minjia; Aminabadi, Reza Yazdani; Awan, Ammar Ahmad; Rasley, Jeff; He, Yuxiong (2022-07-21), DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale, arXiv: 2201.05596
  32. Wang, Shuohuan; Sun, Yu; Xiang, Yang; Wu, Zhihua; Ding, Siyu; Gong, Weibao; Feng, Shikun; Shang, Junyuan; Zhao, Yanbin; Pang, Chao; Liu, Jiaxiang; Chen, Xuyi; Lu, Yuxiang; Liu, Weixin; Wang, Xi; Bai, Yangfan; Chen, Qiuliang; Zhao, Li; Li, Shiyong; Sun, Peng; Yu, Dianhai; Ma, Yanjun; Tian, Hao; Wu, Hua; Wu, Tian; Zeng, Wei; Li, Ge; Gao, Wen; Wang, Haifeng (December 23, 2021). "ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation". arXiv: 2112.12731 [cs.CL].
  33. "Product". Anthropic. Archived from the original on 16 March 2023. Retrieved 14 March 2023.
  34. Askell, Amanda; Bai, Yuntao; Chen, Anna; et al. (9 December 2021). "A General Language Assistant as a Laboratory for Alignment". arXiv: 2112.00861 [cs.CL].
  35. Bai, Yuntao; Kadavath, Saurav; Kundu, Sandipan; et al. (15 December 2022). "Constitutional AI: Harmlessness from AI Feedback". arXiv: 2212.08073 [cs.CL].
  36. Dai, Andrew M; Du, Nan (December 9, 2021). "More Efficient In-Context Learning with GLaM". ai.googleblog.com. Archived from the original on 2023-03-12. Retrieved 2023-03-09.
  37. "Language modelling at scale: Gopher, ethical considerations, and retrieval". www.deepmind.com. 8 December 2021. Archived from the original on 20 March 2023. Retrieved 20 March 2023.
  38. Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; et al. (29 March 2022). "Training Compute-Optimal Large Language Models". arXiv: 2203.15556 [cs.CL].
  39. Table 20 and page 66 of PaLM: Scaling Language Modeling with Pathways. Archived 2023-06-10 at the Wayback Machine.
  40. Cheng, Heng-Tze; Thoppilan, Romal (January 21, 2022). "LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything". ai.googleblog.com. Archived from the original on 2022-03-25. Retrieved 2023-03-09.
  41. Thoppilan, Romal; De Freitas, Daniel; Hall, Jamie; Shazeer, Noam; Kulshreshtha, Apoorv; Cheng, Heng-Tze; Jin, Alicia; Bos, Taylor; Baker, Leslie; Du, Yu; Li, YaGuang; Lee, Hongrae; Zheng, Huaixiu Steven; Ghafouri, Amin; Menegali, Marcelo (2022-01-01). "LaMDA: Language Models for Dialog Applications". arXiv: 2201.08239 [cs.CL].
  42. Black, Sidney; Biderman, Stella; Hallahan, Eric; et al. (2022-05-01). GPT-NeoX-20B: An Open-Source Autoregressive Language Model. Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models. Vol. Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models. pp. 95–136. Archived from the original on 2022-12-10. Retrieved 2022-12-19.
  43. Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; Sifre, Laurent (12 April 2022). "An empirical analysis of compute-optimal large language model training". Deepmind Blog. Archived from the original on 13 April 2022. Retrieved 9 March 2023.
  44. Narang, Sharan; Chowdhery, Aakanksha (April 4, 2022). "Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance". ai.googleblog.com. Archived from the original on 2022-04-04. Retrieved 2023-03-09.
  45. Susan Zhang; Mona Diab; Luke Zettlemoyer. "Democratizing access to large-scale language models with OPT-175B". ai.facebook.com. Archived from the original on 2023-03-12. Retrieved 2023-03-12.
  46. Zhang, Susan; Roller, Stephen; Goyal, Naman; Artetxe, Mikel; Chen, Moya; Chen, Shuohui; Dewan, Christopher; Diab, Mona; Li, Xian; Lin, Xi Victoria; Mihaylov, Todor; Ott, Myle; Shleifer, Sam; Shuster, Kurt; Simig, Daniel; Koura, Punit Singh; Sridhar, Anjali; Wang, Tianlu; Zettlemoyer, Luke (21 June 2022). "OPT: Open Pre-trained Transformer Language Models". arXiv: 2205.01068 [cs.CL].
  47. "metaseq/projects/OPT/chronicles at main · facebookresearch/metaseq". GitHub. Retrieved 2024-10-18.
  48. Khrushchev, Mikhail; Vasilev, Ruslan; Petrov, Alexey; Zinov, Nikolay (2022-06-22), YaLM 100B, archived from the original on 2023-06-16, retrieved 2023-03-18
  49. Lewkowycz, Aitor; Andreassen, Anders; Dohan, David; Dyer, Ethan; Michalewski, Henryk; Ramasesh, Vinay; Slone, Ambrose; Anil, Cem; Schlag, Imanol; Gutman-Solo, Theo; Wu, Yuhuai; Neyshabur, Behnam; Gur-Ari, Guy; Misra, Vedant (30 June 2022). "Solving Quantitative Reasoning Problems with Language Models". arXiv: 2206.14858 [cs.CL].
  50. "Minerva: Solving Quantitative Reasoning Problems with Language Models". ai.googleblog.com. 30 June 2022. Retrieved 20 March 2023.
  51. Ananthaswamy, Anil (8 March 2023). "In AI, is bigger always better?" . Nature. 615 (7951): 202–205. Bibcode:2023Natur.615..202A. doi:10.1038/d41586-023-00641-w. PMID   36890378. S2CID   257380916. Archived from the original on 16 March 2023. Retrieved 9 March 2023.
  52. "bigscience/bloom · Hugging Face". huggingface.co. Archived from the original on 2023-04-12. Retrieved 2023-03-13.
  53. Taylor, Ross; Kardas, Marcin; Cucurull, Guillem; Scialom, Thomas; Hartshorn, Anthony; Saravia, Elvis; Poulton, Andrew; Kerkez, Viktor; Stojnic, Robert (16 November 2022). "Galactica: A Large Language Model for Science". arXiv: 2211.09085 [cs.CL].
  54. "20B-parameter Alexa model sets new marks in few-shot learning". Amazon Science. 2 August 2022. Archived from the original on 15 March 2023. Retrieved 12 March 2023.
  55. Soltan, Saleh; Ananthakrishnan, Shankar; FitzGerald, Jack; et al. (3 August 2022). "AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model". arXiv: 2208.01448 [cs.CL].
  56. "AlexaTM 20B is now available in Amazon SageMaker JumpStart | AWS Machine Learning Blog". aws.amazon.com. 17 November 2022. Archived from the original on 13 March 2023. Retrieved 13 March 2023.
  57. 1 2 "Introducing LLaMA: A foundational, 65-billion-parameter large language model". Meta AI. 24 February 2023. Archived from the original on 3 March 2023. Retrieved 9 March 2023.
  58. 1 2 3 "The Falcon has landed in the Hugging Face ecosystem". huggingface.co. Archived from the original on 2023-06-20. Retrieved 2023-06-20.
  59. "GPT-4 Technical Report" (PDF). OpenAI . 2023. Archived (PDF) from the original on March 14, 2023. Retrieved March 14, 2023.
  60. Schreiner, Maximilian (2023-07-11). "GPT-4 architecture, datasets, costs and more leaked". THE DECODER. Archived from the original on 2023-07-12. Retrieved 2024-07-26.
  61. Dey, Nolan (March 28, 2023). "Cerebras-GPT: A Family of Open, Compute-efficient, Large Language Models". Cerebras. Archived from the original on March 28, 2023. Retrieved March 28, 2023.
  62. "Abu Dhabi-based TII launches its own version of ChatGPT". tii.ae. Archived from the original on 2023-04-03. Retrieved 2023-04-03.
  63. Penedo, Guilherme; Malartic, Quentin; Hesslow, Daniel; Cojocaru, Ruxandra; Cappelli, Alessandro; Alobeidli, Hamza; Pannier, Baptiste; Almazrouei, Ebtesam; Launay, Julien (2023-06-01). "The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only". arXiv: 2306.01116 [cs.CL].
  64. "tiiuae/falcon-40b · Hugging Face". huggingface.co. 2023-06-09. Retrieved 2023-06-20.
  65. UAE's Falcon 40B, World's Top-Ranked AI Model from Technology Innovation Institute, is Now Royalty-Free. Archived 2024-02-08 at the Wayback Machine, 31 May 2023.
  66. Wu, Shijie; Irsoy, Ozan; Lu, Steven; Dabravolski, Vadim; Dredze, Mark; Gehrmann, Sebastian; Kambadur, Prabhanjan; Rosenberg, David; Mann, Gideon (March 30, 2023). "BloombergGPT: A Large Language Model for Finance". arXiv: 2303.17564 [cs.LG].
  67. Ren, Xiaozhe; Zhou, Pingyi; Meng, Xinfan; Huang, Xinjing; Wang, Yadao; Wang, Weichao; Li, Pengfei; Zhang, Xiaoda; Podolskiy, Alexander; Arshinov, Grigory; Bout, Andrey; Piontkovskaya, Irina; Wei, Jiansheng; Jiang, Xin; Su, Teng; Liu, Qun; Yao, Jun (March 19, 2023). "PanGu-Σ: Towards Trillion Parameter Language Model with Sparse Heterogeneous Computing". arXiv: 2303.10845 [cs.CL].
  68. Köpf, Andreas; Kilcher, Yannic; von Rütte, Dimitri; Anagnostidis, Sotiris; Tam, Zhi-Rui; Stevens, Keith; Barhoum, Abdullah; Duc, Nguyen Minh; Stanley, Oliver; Nagyfi, Richárd; ES, Shahul; Suri, Sameer; Glushkov, David; Dantuluri, Arnav; Maguire, Andrew (2023-04-14). "OpenAssistant Conversations – Democratizing Large Language Model Alignment". arXiv: 2304.07327 [cs.CL].
  69. Wrobel, Sharon. "Tel Aviv startup rolls out new advanced AI language model to rival OpenAI". The Times of Israel . ISSN   0040-7909. Archived from the original on 2023-07-24. Retrieved 2023-07-24.
  70. Wiggers, Kyle (2023-04-13). "With Bedrock, Amazon enters the generative AI race". TechCrunch. Archived from the original on 2023-07-24. Retrieved 2023-07-24.
  71. Elias, Jennifer (16 May 2023). "Google's newest A.I. model uses nearly five times more text data for training than its predecessor". CNBC. Archived from the original on 16 May 2023. Retrieved 18 May 2023.
  72. "Introducing PaLM 2". Google. May 10, 2023. Archived from the original on May 18, 2023. Retrieved May 18, 2023.
  73. 1 2 "Introducing Llama 2: The Next Generation of Our Open Source Large Language Model". Meta AI. 2023. Archived from the original on 2024-01-05. Retrieved 2023-07-19.
  74. "llama/MODEL_CARD.md at main · meta-llama/llama". GitHub. Archived from the original on 2024-05-28. Retrieved 2024-05-28.
  75. "Claude 2". anthropic.com. Archived from the original on 15 December 2023. Retrieved 12 December 2023.
  76. Nirmal, Dinesh (2023-09-07). "Building AI for business: IBM's Granite foundation models". IBM Blog. Archived from the original on 2024-07-22. Retrieved 2024-08-11.
  77. "Announcing Mistral 7B". Mistral. 2023. Archived from the original on 2024-01-06. Retrieved 2023-10-06.
  78. "Introducing Claude 2.1". anthropic.com. Archived from the original on 15 December 2023. Retrieved 12 December 2023.
  79. xai-org/grok-1, xai-org, 2024-03-19, archived from the original on 2024-05-28, retrieved 2024-03-19
  80. "Grok-1 model card". x.ai. Retrieved 12 December 2023.
  81. "Gemini – Google DeepMind". deepmind.google. Archived from the original on 8 December 2023. Retrieved 12 December 2023.
  82. Franzen, Carl (11 December 2023). "Mistral shocks AI community as latest open source model eclipses GPT-3.5 performance". VentureBeat. Archived from the original on 11 December 2023. Retrieved 12 December 2023.
  83. "Mixtral of experts". mistral.ai. 11 December 2023. Archived from the original on 13 February 2024. Retrieved 12 December 2023.
  84. DeepSeek-AI; Bi, Xiao; Chen, Deli; Chen, Guanting; Chen, Shanhuang; Dai, Damai; Deng, Chengqi; Ding, Honghui; Dong, Kai (2024-01-05), DeepSeek LLM: Scaling Open-Source Language Models with Longtermism, arXiv: 2401.02954
  85. Hughes, Alyssa (12 December 2023). "Phi-2: The surprising power of small language models". Microsoft Research. Archived from the original on 12 December 2023. Retrieved 13 December 2023.
  86. "Our next-generation model: Gemini 1.5". Google. 15 February 2024. Archived from the original on 16 February 2024. Retrieved 16 February 2024. This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we've also successfully tested up to 10 million tokens.
  87. "Gemma" via GitHub.
  88. "OLMo: Open Language Model | Ai2". allenai.org. Retrieved 2026-03-17.
  89. Groeneveld, Dirk; Beltagy, Iz; Walsh, Pete; Bhagia, Akshita; Kinney, Rodney; Tafjord, Oyvind; Jha, Ananya Harsh; Ivison, Hamish; Magnusson, Ian (2024-06-07), OLMo: Accelerating the Science of Language Models, arXiv, doi:10.48550/arXiv.2402.00838, arXiv:2402.00838, retrieved 2026-03-17
  90. "Introducing the next generation of Claude". www.anthropic.com. Archived from the original on 2024-03-04. Retrieved 2024-03-04.
  91. "Databricks Open Model License". Databricks . 27 March 2024. Retrieved 6 August 2025.
  92. "Databricks Open Model Acceptable Use Policy". Databricks . 27 March 2024. Retrieved 6 August 2025.
  93. 1 2 "Release of "Fugaku-LLM" - a large language model trained on the supercomputer "Fugaku"". Fujitsu. 10 May 2024. Retrieved 20 April 2026.
  94. "Fugaku-LLM Terms of Use". 23 April 2024. Retrieved 6 August 2025 via Hugging Face.
  95. "Fugaku-LLM/Fugaku-LLM-13B · Hugging Face". huggingface.co. Archived from the original on 2024-05-17. Retrieved 2024-05-17.
  96. Dickson, Ben (22 May 2024). "Meta introduces Chameleon, a state-of-the-art multimodal model". VentureBeat.
  97. "chameleon/LICENSE at e3b711ef63b0bb3a129cf0cf0918e36a32f26e2c · facebookresearch/chameleon". Meta Research. Retrieved 6 August 2025 via GitHub.
  98. AI, Mistral (2024-04-17). "Cheaper, Better, Faster, Stronger". mistral.ai. Archived from the original on 2024-05-05. Retrieved 2024-05-05.
  99. "Phi-3". azure.microsoft.com. 23 April 2024. Archived from the original on 2024-04-27. Retrieved 2024-04-28.
  100. Bilenko, Misha (2024-04-23). "Introducing Phi-3: Redefining what's possible with SLMs". Microsoft Azure Blog. Retrieved 2026-03-19.
  101. "Qwen2". GitHub . Archived from the original on 2024-06-17. Retrieved 2024-06-17.
  102. DeepSeek-AI; Liu, Aixin; Feng, Bei; Wang, Bin; Wang, Bingxuan; Liu, Bo; Zhao, Chenggang; Dengr, Chengqi; Ruan, Chong (2024-06-19), DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model, arXiv: 2405.04434
  103. "NVIDIA Open Models License". Nvidia . 16 June 2025. Retrieved 6 August 2025.
  104. "Trustworthy AI". Nvidia . 27 June 2024. Retrieved 6 August 2025.
  105. "nvidia/Nemotron-4-340B-Base · Hugging Face". huggingface.co. 2024-06-14. Archived from the original on 2024-06-15. Retrieved 2024-06-15.
  106. "Nemotron-4 340B | Research". research.nvidia.com. Archived from the original on 2024-06-15. Retrieved 2024-06-15.
  107. "Introducing Claude 3.5 Sonnet". www.anthropic.com. Retrieved 8 August 2025.
  108. "Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku". www.anthropic.com. Retrieved 8 August 2025.
  109. "The Llama 3 Herd of Models" (July 23, 2024) Llama Team, AI @ Meta
  110. "llama-models/models/llama3_1/MODEL_CARD.md at main · meta-llama/llama-models". GitHub. Archived from the original on 2024-07-23. Retrieved 2024-07-23.
  111. "LICENSE · xai-org/grok-2 at main". 5 November 2025. Retrieved 18 November 2025 via Hugging Face.
  112. "xAI Acceptable Use Policy". xAI . 2 January 2025. Retrieved 18 November 2025.
  113. Weatherbed, Jess (14 August 2024). "xAI's new Grok-2 chatbots bring AI image generation to X". The Verge . Retrieved 18 November 2025.
  114. Ha, Anthony (24 August 2025). "Elon Musk says xAI has open sourced Grok 2.5". TechCrunch . Retrieved 18 November 2025.
  115. "Introducing OpenAI o1". openai.com. Retrieved 8 August 2025.
  116. Paul, Katie; Tong, Anna (13 September 2024). "OpenAI launches new series of AI models with 'reasoning' abilities". Reuters .
  117. Jindal, Siddharth (24 October 2024). "Sarvam AI Launches Sarvam-1, Outperforms Gemma-2 and Llama-3.2". Analytics India Magazine. Archived from the original on 25 July 2025. Retrieved 20 April 2026.
  118. "LICENSE.md · sarvamai/sarvam-1". 23 October 2024. Retrieved 20 April 2026 via Hugging Face.
  119. 1 2 "Models Overview". mistral.ai. Retrieved 2025-03-03.
  120. "OLMo 2: The best fully open language model to date | Ai2". allenai.org. Retrieved 2026-03-17.
  121. OLMo, Team; Walsh, Pete; Soldaini, Luca; Groeneveld, Dirk; Lo, Kyle; Arora, Shane; Bhagia, Akshita; Gu, Yuling; Huang, Shengyi (2025-10-08), 2 OLMo 2 Furious, arXiv, doi:10.48550/arXiv.2501.00656, arXiv:2501.00656, retrieved 2026-03-17
  122. "Phi-4 Model Card". huggingface.co. Retrieved 2025-11-11.{{cite web}}: CS1 maint: url-status (link)
  123. "Introducing Phi-4: Microsoft's Newest Small Language Model Specializing in Complex Reasoning". techcommunity.microsoft.com. Retrieved 2025-11-11.{{cite web}}: CS1 maint: url-status (link)
  124. deepseek-ai/DeepSeek-V3, DeepSeek, 2024-12-26, retrieved 2024-12-26
  125. Feng, Coco (25 March 2025). "DeepSeek wows coders with more powerful open-source V3 model". South China Morning Post . Retrieved 6 April 2025.
  126. Amazon Nova Micro, Lite, and Pro - AWS AI Service Cards, Amazon, 2024-12-27, retrieved 2024-12-27
  127. deepseek-ai/DeepSeek-R1, DeepSeek, 2025-01-21, retrieved 2025-01-21
  128. DeepSeek-AI; Guo, Daya; Yang, Dejian; Zhang, Haowei; Song, Junxiao; Zhang, Ruoyu; Xu, Runxin; Zhu, Qihao; Ma, Shirong (2025-01-22), DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, arXiv: 2501.12948
  129. Qwen; Yang, An; Yang, Baosong; Zhang, Beichen; Hui, Binyuan; Zheng, Bo; Yu, Bowen; Li, Chengyuan; Liu, Dayiheng (2025-01-03), Qwen2.5 Technical Report, arXiv: 2412.15115
  130. MiniMax; Li, Aonian; Gong, Bangwei; Yang, Bo; Shan, Boji; Liu, Chang; Zhu, Cheng; Zhang, Chunhao; Guo, Congchao (2025-01-14), MiniMax-01: Scaling Foundation Models with Lightning Attention, arXiv: 2501.08313
  131. MiniMax-AI/MiniMax-01, MiniMax, 2025-01-26, retrieved 2025-01-26
  132. Kavukcuoglu, Koray (5 February 2025). "Gemini 2.0 is now available to everyone". Google. Retrieved 6 February 2025.
  133. "Gemini 2.0: Flash, Flash-Lite and Pro". Google for Developers. Retrieved 6 February 2025.
  134. Franzen, Carl (5 February 2025). "Google launches Gemini 2.0 Pro, Flash-Lite and connects reasoning model Flash Thinking to YouTube, Maps and Search". VentureBeat. Retrieved 6 February 2025.
  135. "Grok 3 Beta — The Age of Reasoning Agents". x.ai. Retrieved 2025-02-22.
  136. "Claude 3.7 Sonnet and Claude Code". www.anthropic.com. Retrieved 8 August 2025.
  137. "Introducing GPT-4.5". openai.com. Retrieved 8 August 2025.
  138. Kavukcuoglu, Koray (25 March 2025). "Gemini 2.5: Our most intelligent AI model". Google. Retrieved 23 September 2025.
  139. "meta-llama/Llama-4-Maverick-17B-128E · Hugging Face". huggingface.co. 2025-04-05. Retrieved 2025-04-06.
  140. "The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation". ai.meta.com. Archived from the original on 2025-04-05. Retrieved 2025-04-05.
  141. "Introducing OpenAI o3 and o4-mini". openai.com. Retrieved 8 August 2025.
  142. Team, Qwen (2025-04-29). "Qwen3: Think Deeper, Act Faster". Qwen. Retrieved 2025-04-29.
  143. "Introducing Claude 4". www.anthropic.com. Retrieved 8 August 2025.
  144. Yadav, Nandini (2025-05-26). "Indian AI startup launches Sarvam-M model: What is it, why is everyone talking about it". India Today. Retrieved 2026-03-18.
  145. "Sarvam-M: Open Source Hybrid Indic LLM | Sarvam AI". Sarvam AI. 2025-05-23. Retrieved 2026-03-18.
  146. Pundalik, Kundeshwar; Sawarkar, Piyush; Sahoo, Nihar; Shinde, Abhishek; Chanda, Prateek; Goswami, Vedant; Nagpal, Ajay; Singh, Atul; Thakur, Viraj (2025-07-16), PARAM-1 BharatGen 2.9B Model, arXiv, doi:10.48550/arXiv.2507.13390, arXiv:2507.13390, retrieved 2026-03-18
  147. "README.md · bharatgenai/Param-1". 24 February 2026. Retrieved 12 April 2026 via Hugging Face.
  148. "GLM-4.5: Reasoning, Coding, and Agentic Abililties". z.ai. Retrieved 2025-08-06.
  149. "zai-org/GLM-4.5 · Hugging Face". huggingface.co. 2025-08-04. Retrieved 2025-08-06.
  150. Whitwam, Ryan (5 August 2025). "OpenAI announces two "gpt-oss" open AI models, and you can download them today". Ars Technica . Retrieved 6 August 2025.
  151. "Claude Opus 4.1". www.anthropic.com. Retrieved 8 August 2025.
  152. "Introducing GPT-5". openai.com. 7 August 2025. Retrieved 8 August 2025.
  153. "OpenAI Platform: GPT-5 Model Documentation". openai.com. Retrieved 18 August 2025.
  154. "deepseek-ai/DeepSeek-V3.1 · Hugging Face". huggingface.co. 2025-08-21. Retrieved 2025-08-25.
  155. "DeepSeek-V3.1 Release | DeepSeek API Docs". api-docs.deepseek.com. Retrieved 2025-08-25.
  156. "Apertus: Ein vollständig offenes, transparentes und mehrsprachiges Sprachmodell" (in German). Zürich: ETH Zürich. 2025-09-02. Retrieved 2025-11-07.
  157. Kirchner, Malte (2025-09-02). "Apertus: Schweiz stellt erstes offenes und mehrsprachiges KI-Modell vor". heise online (in German). Retrieved 2025-11-07.
  158. "Introducing Claude Sonnet 4.5". www.anthropic.com. Retrieved 29 September 2025.
  159. "GLM-4.6: Advanced Agentic, Reasoning and Coding Capabilities". z.ai. Retrieved 2025-10-01.
  160. "zai-org/GLM-4.6 · Hugging Face". huggingface.co. 2025-09-30. Retrieved 2025-10-01.
  161. "GLM-4.6". modelscope.cn. Retrieved 2025-10-01.
  162. "A new era of intelligence with Gemini 3". Google. 18 November 2025. Retrieved 5 January 2026.
  163. "Olmo 3: Charting a path through the model flow to lead open-source AI". Ai2. 20 November 2025.
  164. Olmo, Team; Ettinger, Allyson; Bertsch, Amanda; Kuehl, Bailey; Graham, David; Heineman, David; Groeneveld, Dirk; Brahman, Faeze; Timbers, Finbarr (2025-12-15), Olmo 3, arXiv, doi:10.48550/arXiv.2512.13961, arXiv:2512.13961, retrieved 2026-03-17
  165. "Introducing Claude Opus 4.5". www.anthropic.com. Retrieved 8 January 2026.
  166. Binder, Matt (3 December 2025). "DeepSeek v3.2: What it is, how it compares to ChatGPT, how to try it". Mashable . Retrieved 12 April 2026.
  167. "DeepSeek-V3.2 Release". DeepSeek API Docs. 1 December 2025. Retrieved 12 April 2026.
  168. "DeepSeek-V3.2: Efficient Reasoning & Agentic AI". Hugging Face . 1 December 2025. Retrieved 12 April 2026.
  169. "Advancing science and math with GPT-5.2". openai.com. Retrieved 4 January 2026.
  170. "Pushing Qwen3-Max-Thinking Beyond its Limits". Qwen. 25 January 2026. Archived from the original on 6 February 2026. Retrieved 6 February 2026. We further enhance Qwen3-Max-Thinking with two key innovations: (1) adaptive tool-use capabilities [...]; and (2) advanced test-time scaling techniques [...]. [...] We limit [parallel trajectories] and redirect saved computation to iterative self-reflection guided by a "take-experience" mechanism.
  171. Team, Kimi; Bai, Yifan; Bao, Yiping; Charles, Y.; Chen, Cheng; Chen, Guanduo; Chen, Haiting; Chen, Huarong; Chen, Jiahao (2026-02-03), Kimi K2: Open Agentic Intelligence, arXiv, doi:10.48550/arXiv.2507.20534, arXiv:2507.20534, retrieved 2026-03-18
  172. Team, Kimi; Bai, Tongtong; Bai, Yifan; Bao, Yiping; Cai, S. H.; Cao, Yuan; Charles, Y.; Che, H. S.; Chen, Cheng (2026-02-02), Kimi K2.5: Visual Agentic Intelligence, arXiv, doi:10.48550/arXiv.2602.02276, arXiv:2602.02276, retrieved 2026-03-18
  173. "Kimi K2.5: Chat with Kimi K2.5 for Free". Kimi K2.5. Retrieved 2026-03-18.
  174. Jiang, Ben (3 February 2026). "Compact AI model from China's StepFun outshines rivals from DeepSeek, Moonshot". South China Morning Post . Archived from the original on 4 February 2026. Retrieved 14 April 2026.
  175. "Step 3.5 Flash: Fast Enough to Think. Reliable Enough to Act". StepFun . 12 February 2026. Retrieved 20 April 2026.
  176. "stepfun-ai/Step-3.5-Flash". 14 March 2026. Retrieved 14 April 2026 via Hugging Face.
  177. "LICENSE · bharatgenai/Param2-17B-A2.4B-Thinking". 16 February 2026. Retrieved 12 April 2026 via Hugging Face.
  178. "bharatgenai/Param2-17B-A2.4B-Thinking" . Retrieved 2026-03-08 via Hugging Face.
  179. "sarvamai/sarvam-1-v0.5 · Hugging Face". huggingface.co. Retrieved 2026-03-08.
  180. "sarvamai/sarvam-105b · Hugging Face". huggingface.co. Retrieved 2026-03-08.
  181. Kumar, Abhijeet (19 February 2026). "Why Sarvam's new 105B model marks a shift in India's sovereign AI ambitions". Business Standard .
  182. Singh, Jagmeet (2026-02-18). "Indian AI lab Sarvam's new models are a major bet on the viability of open source AI". TechCrunch. Retrieved 2026-03-18.
  183. Marquez, Javier (17 March 2026). "Una IA para reunir todas las funciones posibles: la apuesta de Mistral con Small 4 es hacer más con menos cosas" [An AI to bring together all possible functions: Mistral's bet with Small 4 is to do more with less]. Xataka (in Spanish). Retrieved 20 April 2026.
  184. "Introducing Mistral Small 4". Mistral AI . Retrieved 20 April 2026.
  185. "Xiaomi Launches Powerful AI Model MiMo-V2 Pro With 1 Trillion Parametres, 1 Million Token Context Window". NDTV Profit . 19 March 2026.
  186. "Mystery AI model revealed to be Xiaomi's following suspicions it was DeepSeek's". Reuters. 18 March 2026. Retrieved 3 April 2026.
  187. Whitwam, Ryan (2 April 2026). "Google announces Gemma 4 open AI models, switches to Apache 2.0 license". Ars Technica . Retrieved 3 April 2026.
  188. Mann, Tobias (2 April 2026). "Google battles Chinese open weights models with Gemma 4" . Retrieved 3 April 2026.
  189. Franzen, Carl (7 April 2026). "AI joins the 8-hour work day as GLM ships 5.1 open source LLM, beating Opus 4.6 and GPT-5.4 on SWE-Bench Pro". VentureBeat . Retrieved 12 April 2026.
  190. "GLM-5.1: Towards Long-Horizon Tasks". Z.ai. Retrieved 12 April 2026.
  191. "A Chinese AI called 'Qwen3.6-35B-A3B,' which is more powerful than Gemma4, has been released as an open model". Gigazine  [ jp ]. 17 April 2026. Retrieved 17 April 2026.
  192. "README.md · Qwen/Qwen3.6-35B-A3B". 15 April 2026. Retrieved 17 April 2026 via Hugging Face.
  193. Butts, Dylan (24 April 2026). "China's DeepSeek releases preview of long-awaited V4 model as AI race intensifies". CNBC.