Mistral AI

Company type: Private
Industry: Artificial intelligence
Founded: 28 April 2023
Founders
  • Arthur Mensch
    (Co-Founder & CEO)
  • Guillaume Lample
    (Co-Founder & Chief Scientist)
  • Timothée Lacroix
    (Co-Founder & CTO)
Headquarters: Paris, France
Products
  • Mistral 7B
  • Mixtral 8x7B
  • Mistral Medium
  • Mistral Large
  • Mistral Large 2 (123B)
  • Mixtral 8x22B
  • Codestral 22B
  • Codestral Mamba (7B)
  • Mathstral (7B)
  • Mistral NeMo 12B
  • Mistral Embed
Website: mistral.ai

Mistral AI is a French company specializing in artificial intelligence (AI) products. Founded in April 2023 by former employees of Meta Platforms and Google DeepMind, [1] the company has quickly risen to prominence in the AI sector.

The company focuses on producing open source large language models, [2] emphasizing the foundational importance of free and open-source software, and positioning itself as an alternative to proprietary models. [3]

In October 2023, Mistral AI raised €385 million. [4] By December 2023, it was valued at over $2 billion. [5] [6] [7]

In June 2024, Mistral AI announced a new funding round of €600 million ($645 million), significantly boosting its valuation to €5.8 billion ($6.2 billion). [8] This round was led by the venture capital firm General Catalyst, with participation from existing investors. [9]

Mistral AI has published three open-source models available as downloadable weights. [10] Three more models—Small, Medium, and Large—are available via API only. [11] [12]

Based on valuation, the company is in fourth place in the global AI race and in first place outside the San Francisco Bay Area, ahead of several of its peers, such as Cohere, Hugging Face, Inflection, Perplexity and Together. [13] Mistral AI aims to "democratize" AI by focusing on open-source innovation. [14]

History

Mistral AI was co-founded in April 2023 by Arthur Mensch, Guillaume Lample and Timothée Lacroix.[citation needed]

Prior to co-founding Mistral AI, Arthur Mensch worked at Google DeepMind, Google's artificial intelligence laboratory, while Guillaume Lample and Timothée Lacroix worked at Meta Platforms. [15] The co-founders met as students at École polytechnique. Mistral is named after a strong wind that blows in southern France. [16]

In June 2023, the start-up carried out a first fundraising of €105 million ($117 million), with investors including the American fund Lightspeed Venture Partners, Eric Schmidt, Xavier Niel and JCDecaux. The Financial Times estimated its valuation at the time at €240 million ($267 million).

On 27 September 2023, the company made its language model "Mistral 7B" available under the free Apache 2.0 license. The model has 7 billion parameters, a small size compared to its competitors.

On 10 December 2023, Mistral AI announced that it had raised €385 million ($428 million) in its second fundraising round. This round of financing notably involved the Californian fund Andreessen Horowitz, BNP Paribas and the software publisher Salesforce. [17]

On 11 December 2023, the company released the Mixtral 8x7B model, which has 46.7 billion parameters but uses only 12.9 billion per token thanks to its mixture-of-experts architecture. The model handles five languages (French, Spanish, Italian, English and German) and, according to its developers' tests, outperforms Meta's Llama 2 70B model. A version fine-tuned to follow instructions, called "Mixtral 8x7B Instruct", was also offered. [18]

On 26 February 2024, Microsoft announced a new partnership with the company to expand its presence in the rapidly evolving artificial intelligence industry. Under the agreement, Mistral's language models will be available on Microsoft's Azure cloud, while the multilingual conversational assistant Le Chat will be launched in the style of ChatGPT. [19]

On 10 April 2024, the company released the mixture-of-experts model Mixtral 8x22B, offering high performance on various benchmarks compared to other open models.[citation needed]

On 16 April 2024, reporting revealed that Mistral was in talks to raise €500 million, a deal that would more than double its current valuation to at least €5 billion. [20]

Models

Open Weight Models

Mistral 7B

Mistral 7B is a 7.3B-parameter language model using the transformer architecture. It was officially released on September 27, 2023, via a BitTorrent magnet link [21] and Hugging Face, [22] under the Apache 2.0 license. The release blog post claimed the model outperforms LLaMA 2 13B on all benchmarks tested, and is on par with LLaMA 34B on many of them. [23]

Mistral 7B uses grouped-query attention (GQA), a variant of the standard attention mechanism in which groups of query heads share a single set of key and value heads, reducing the size of the key-value cache and speeding up inference. [24]
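The grouping can be illustrated with a toy NumPy sketch. This is not Mistral's implementation; the head counts, sequence length, and dimensions below are arbitrary values chosen for illustration:

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Toy GQA: groups of query heads share one key/value head."""
    n_q_heads, seq, d = q.shape
    n_kv_heads = k.shape[0]
    group = n_q_heads // n_kv_heads          # query heads per shared KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                      # the KV head this query head shares
        scores = q[h] @ k[kv].T / np.sqrt(d) # scaled dot-product, (seq, seq)
        scores -= scores.max(axis=-1, keepdims=True)
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)   # softmax over key positions
        out[h] = w @ v[kv]
    return out

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4, 16))  # 8 query heads, 4 tokens, head dim 16
k = rng.normal(size=(2, 4, 16))  # only 2 KV heads: 4 query heads share each
v = rng.normal(size=(2, 4, 16))
out = grouped_query_attention(q, k, v)
print(out.shape)  # (8, 4, 16)
```

Because only 2 rather than 8 key/value heads must be cached during generation, the KV cache shrinks by 4x in this toy setup; that memory saving is the practical trade GQA makes against full multi-head attention.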

Both a base model and an "instruct" model were released, with the latter receiving additional tuning to follow chat-style prompts. The fine-tuned model is intended for demonstration purposes only, and does not have guardrails or moderation built in. [23]

Mixtral 8x7B

Much like Mistral's first model, Mixtral 8x7B was released via a BitTorrent link posted on Twitter on December 9, 2023, [2] with a Hugging Face release and a blog post following two days later. [18]

Unlike the previous Mistral model, Mixtral 8x7B uses a sparse mixture-of-experts architecture. The model has 8 distinct groups of "experts", giving it a total of 46.7B parameters. [25] [26] Each token uses only 12.9B of them, so the model runs with the speed and cost of a 12.9B-parameter model. [18]
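The sparse routing described above can be sketched in a few lines of NumPy. This is an illustrative toy, not Mixtral's code; the 8-expert pool mirrors the description, but the top-2 routing, dimensions, and expert shapes are assumptions for demonstration:

```python
import numpy as np

def moe_layer(x, w_gate, experts, top_k=2):
    """Toy sparse MoE: each token is routed to top_k of len(experts) experts."""
    logits = x @ w_gate                        # router scores, (tokens, n_experts)
    chosen = np.argsort(logits, axis=-1)[:, -top_k:]
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, chosen[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()               # softmax over the chosen experts only
        for w, e in zip(weights, chosen[t]):
            out[t] += w * (x[t] @ experts[e])  # weighted sum of top_k expert outputs
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 16, 8, 4
x = rng.normal(size=(tokens, d))
w_gate = rng.normal(size=(d, n_experts))                       # router weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # 8 expert networks
y = moe_layer(x, w_gate, experts, top_k=2)
print(y.shape)  # (4, 16)
```

Only the two selected experts' weights participate in each token's forward pass, which is why total parameter count (all experts) and active parameter count (per token) can differ so sharply.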

Mistral AI's testing shows the model beats both LLaMA 70B, and GPT-3.5 in most benchmarks. [27]

In March 2024, research conducted by Patronus AI comparing performance of LLMs on a 100-question test with prompts to generate text from books protected under U.S. copyright law found that OpenAI's GPT-4, Mixtral, Meta AI's LLaMA-2, and Anthropic's Claude 2 generated copyrighted text verbatim in 44%, 22%, 10%, and 8% of responses respectively. [28] [29]

Mixtral 8x22B

Similar to Mistral's previous open models, Mixtral 8x22B was released via a BitTorrent link on Twitter on April 10, 2024, [30] with a release on Hugging Face soon after. [31] The model uses an architecture similar to that of Mixtral 8x7B, but with each expert having 22 billion parameters instead of 7 billion. In total, the model contains 141 billion parameters, as some parameters are shared among the experts. [31]

Mistral Large 2

Mistral Large 2 was announced on July 24, 2024, and released on Hugging Face. Unlike the previous Mistral Large, this version was released with open weights. It is available for free under the Mistral Research License, and under a separate commercial license for commercial purposes. Mistral AI claims that it is fluent in dozens of languages, including many programming languages. The model has 123 billion parameters and a context length of 128,000 tokens. Its benchmark performance is competitive with Llama 3.1 405B, particularly on programming-related tasks. [32] [33]

Codestral 22B

Codestral is Mistral's first code-focused open-weight model, launched on 29 May 2024. It is a lightweight model built specifically for code-generation tasks. As of its release date, it surpassed Meta's Llama 3 70B and DeepSeek Coder 33B, another code-focused model, on the HumanEval FIM benchmark (78.2%–91.6%). [34] Mistral claims Codestral is fluent in more than 80 programming languages. [35] Codestral has its own license, which forbids its use for commercial purposes. [36]

Mathstral 7B

Mathstral 7B is a model with 7 billion parameters released by Mistral AI on July 16, 2024. It focuses on STEM subjects, achieving a score of 56.6% on the MATH benchmark and 63.47% on the MMLU benchmark. [37] The model was produced in collaboration with Project Numina, [38] and was released under the Apache 2.0 License. It has a context length of 32k tokens. [37]

Codestral Mamba 7B

Codestral Mamba is based on the Mamba 2 architecture, a state-space design whose inference cost scales linearly with sequence length, which allows it to generate responses even with longer inputs. [38] Unlike Codestral, it was released under the Apache 2.0 license. While previous releases often included both a base model and an instruct version, only the instruct version of Codestral Mamba was released. [39]

API-Only Models

Unlike Mistral 7B, Mixtral 8x7B and Mixtral 8x22B, the following models are closed-source and only available through the Mistral API. [40]
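A minimal way to call these hosted models is a plain HTTP request to Mistral's chat-completions endpoint. The sketch below builds an OpenAI-style payload; the endpoint path and field names follow Mistral's public API documentation at the time of writing, and the model name and prompt are illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(model, prompt):
    """Build a chat-completions payload in the OpenAI-compatible
    format Mistral's API accepts (fields per the public docs)."""
    return {
        "model": model,  # e.g. "mistral-small-latest" (illustrative alias)
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model, prompt):
    """Send the request; requires MISTRAL_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since the weights are not published, this request/response interface (or an SDK wrapping it) is the only way to run the API-only models.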

Mistral Large

Mistral Large was launched on February 26, 2024, and Mistral claims it is second in the world only to OpenAI's GPT-4.

It is fluent in English, French, Spanish, German, and Italian, with Mistral claiming it understands both grammar and cultural context, and it provides coding capabilities. As of early 2024, it was Mistral's flagship AI. [41] It is also available on Microsoft Azure.

In July 2024, Mistral Large 2 was released, replacing the original Mistral Large. [42] Unlike the original model, it was released with open weights. [33]

Mistral Medium

Mistral Medium is trained in various languages, including English, French, Italian, German and Spanish, as well as code, and scores 8.6 on MT-Bench. [43] It is ranked above Claude and below GPT-4 on the LMSys ELO Arena benchmark. [44]

The number of parameters and the architecture of Mistral Medium are not known, as Mistral has not published public information about them.

Mistral Small

Like the Large model, Mistral Small was launched on February 26, 2024. It is intended to be a lightweight model with low latency and better performance than Mixtral 8x7B. [45]

References

  1. "France's unicorn start-up Mistral AI embodies its artificial intelligence hopes". Le Monde. 2023-12-12. Retrieved 2023-12-16.
  2. "Buzzy Startup Just Dumps AI Model That Beats GPT-3.5 Into a Torrent Link". Gizmodo. 2023-12-12. Retrieved 2023-12-16.
  3. "Bringing open AI models to the frontier". Mistral AI. 27 September 2023. Retrieved 4 January 2024.
  4. Metz, Cade (10 December 2023). "Mistral, French A.I. Start-Up, Is Valued at $2 Billion in Funding Round". The New York Times.
  5. Fink, Charlie. "This Week In XR: Epic Triumphs Over Google, Mistral AI Raises $415 Million, $56.5 Million For Essential AI". Forbes. Retrieved 2023-12-16.
  6. "A French AI start-up may have commenced an AI revolution, silently". Hindustan Times. December 12, 2023.
  7. "French AI start-up Mistral secures €2bn valuation". Financial Times.
  8. Kharpal, Arjun (2024-05-24). "CEOs of AI startups backed by Microsoft and Amazon are the new tech rockstars". CNBC. Retrieved 2024-06-13.
  9. "Tripling Down on Mistral AI". General Catalyst. Retrieved 2024-06-13.
  10. "Open-weight models | Mistral AI Large Language Models". docs.mistral.ai. Retrieved 2024-01-04.
  11. "Endpoints | Mistral AI Large Language Models". docs.mistral.ai.
  12. "Endpoints and benchmarks | Mistral AI Large Language Models". docs.mistral.ai. Retrieved 2024-03-06.
  13. Bratton, Laura (2024-06-12). "OpenAI's French rival Mistral AI is now worth $6 billion. That's still a fraction of its top competitors". Quartz. Retrieved 2024-06-13.
  14. Webb, Maria (2024-01-02). "Mistral AI: Exploring Europe's Latest Tech Unicorn". Techopedia. Retrieved 2024-06-13.
  15. "France's unicorn start-up Mistral AI embodies its artificial intelligence hopes". Le Monde. 12 December 2023.
  16. Schechner, Sam. "The 9-Month-Old AI Startup Challenging Silicon Valley's Giants". The Wall Street Journal. Retrieved 2024-03-31.
  17. "Mistral lève 385 M€ et devient une licorne française" [Mistral raises €385M and becomes a French unicorn]. Le Monde Informatique. 11 December 2023.
  18. "Mixtral of experts". Mistral AI. 2023-12-11. Retrieved 2024-01-04.
  19. Bableshwar (2024-02-26). "Mistral Large, Mistral AI's flagship LLM, debuts on Azure AI Models-as-a-Service". techcommunity.microsoft.com. Retrieved 2024-02-26.
  20. "Mistral in talks to raise €500mn at €5bn valuation". Financial Times. Retrieved 2024-04-19.
  21. Goldman, Sharon (2023-12-08). "Mistral AI bucks release trend by dropping torrent link to new open source LLM". VentureBeat. Retrieved 2024-01-04.
  22. Coldewey, Devin (27 September 2023). "Mistral AI makes its first large language model free for everyone". TechCrunch. Retrieved 4 January 2024.
  23. "Mistral 7B". Mistral AI. 27 September 2023. Retrieved 4 January 2024.
  24. Jiang, Albert Q.; Sablayrolles, Alexandre; Mensch, Arthur; Bamford, Chris; Chaplot, Devendra Singh; Casas, Diego de las; Bressand, Florian; Lengyel, Gianna; Lample, Guillaume (2023-10-10). "Mistral 7B". arXiv: 2310.06825v1 [cs.CL].
  25. "Mixture of Experts Explained". huggingface.co. Retrieved 2024-01-04.
  26. Marie, Benjamin (2023-12-15). "Mixtral-8x7B: Understanding and Running the Sparse Mixture of Experts". Medium. Retrieved 2024-01-04.
  27. Franzen, Carl (2023-12-11). "Mistral shocks AI community as latest open source model eclipses GPT-3.5 performance". VentureBeat. Retrieved 2024-01-04.
  28. Field, Hayden (March 6, 2024). "Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst". CNBC. Retrieved March 6, 2024.
  29. "Introducing CopyrightCatcher, the first Copyright Detection API for LLMs". Patronus AI. March 6, 2024. Retrieved March 6, 2024.
  30. @MistralAI (April 10, 2024). "Torrent" (Tweet) via Twitter.
  31. "mistralai/Mixtral-8x22B-v0.1 · Hugging Face". huggingface.co. Retrieved 2024-05-05.
  32. Mistral AI (2024-07-24). "Large Enough". mistral.ai. Retrieved 2024-07-24.
  33. "mistralai/Mistral-Large-Instruct-2407 · Hugging Face". huggingface.co. Retrieved 2024-08-24.
  34. Mistral AI (2024-05-29). "Codestral: Hello, World!". mistral.ai. Retrieved 2024-05-30.
  35. Sharma, Shubham (2024-05-29). "Mistral announces Codestral, its first programming focused AI model". VentureBeat. Retrieved 2024-05-30.
  36. Wiggers, Kyle (2024-05-29). "Mistral releases Codestral, its first generative AI model for code". TechCrunch. Retrieved 2024-05-30.
  37. Mistral AI (2024-07-16). "MathΣtral". mistral.ai. Retrieved 2024-07-16.
  38. David, Emilia (2024-07-16). "Mistral releases Codestral Mamba for faster, longer code generation". VentureBeat. Retrieved 2024-07-17.
  39. Mistral AI (2024-07-16). "Codestral Mamba". mistral.ai. Retrieved 2024-07-16.
  40. "Pricing and rate limits | Mistral AI Large Language Models". docs.mistral.ai. Retrieved 2024-01-22.
  41. Mistral AI (2024-02-26). "Au Large". mistral.ai. Retrieved 2024-03-06.
  42. "Models | Mistral AI Large Language Models". docs.mistral.ai. Retrieved 2024-08-24.
  43. Mistral AI (2023-12-11). "La plateforme". mistral.ai. Retrieved 2024-01-22.
  44. "LMSys Chatbot Arena Leaderboard - a Hugging Face Space by lmsys". huggingface.co. Retrieved 2024-01-22.
  45. Mistral AI (2024-02-26). "Au Large". mistral.ai. Retrieved 2024-03-06.