llama.cpp

Original author: Georgi Gerganov
Developers: Georgi Gerganov and community
Initial release: March 10, 2023 [1]
Repository: github.com/ggml-org/llama.cpp
Written in: C++, C
Type: Library for large language models
License: MIT License [2]

llama.cpp is an open source software library that performs inference on various large language models such as Llama. [3] It is co-developed alongside the GGML project, a general-purpose tensor library. [4]

Command-line tools are included with the library, [5] alongside a server with a simple web interface. [6] [7]
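
As an illustration of how a frontend can talk to the bundled server, the sketch below sends a completion request over its HTTP API. It assumes a server already running locally on the default port 8080 (for example, started with the llama-server binary and a GGUF model) and uses the server's /completion endpoint; the prompt text is a placeholder.

# Minimal sketch: query a locally running llama.cpp server over HTTP.
# Assumes the bundled server is listening on its default port, 8080.
import json
import urllib.request

payload = {"prompt": "What is a tensor?", "n_predict": 64}
request = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["content"])  # generated text

The server also exposes an OpenAI-compatible chat completions endpoint, so existing client libraries can be pointed at it.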

Background

Towards the end of September 2022, Georgi Gerganov started work on the GGML library, a C library implementing tensor algebra. Gerganov designed the library with an emphasis on strict memory management and multi-threading. The creation of GGML was inspired by Fabrice Bellard's work on LibNC. [8]

Before llama.cpp, Gerganov worked on a similar library called whisper.cpp, which implemented Whisper, a speech-to-text model by OpenAI. [9]

Development

Georgi Gerganov began developing llama.cpp in March 2023 as an implementation of the Llama inference code in pure C/C++ with no dependencies. This improved performance on computers without a GPU or other dedicated hardware, which was a goal of the project. [3] [10] [11] llama.cpp gained traction with users who lacked specialized hardware, as it could run on a CPU alone.

While the library was initially designed for CPUs, support for GPU and NPU backends was later added. [12] As of August 2025, it has more than 85,000 stars on GitHub. [13]

On April 30, 2024, support for FlashAttention was introduced.

On April 10, 2025, libmtmd was introduced, reinvigorating support for multimodal models, which had previously stagnated.

On December 17, 2025, full acceleration on Android and ChromeOS devices was introduced via a new GUI binding, [14] which unlocks native app development beyond the previous approach of cross-compiling command-line tools [10] [15] [16] and running them in an adb shell.

Architecture

llama.cpp supports multiple hardware targets, including x86, ARM, Metal, BLAS, BLIS, zDNN, ZenDNN, SYCL, MUSA, CUDA, HIP, CANN, OpenCL, RPC, and Vulkan (version 1.2 or greater). [17] [18] [19] [20] These backends make up the GGML tensor library, which is used by the front-end, model-specific llama.cpp code. [21] llama.cpp also makes use of several CPU instruction set extensions for optimization.

llama.cpp supports a variety of features aimed at inference on edge devices.

In addition, llama.cpp supports a variety of features and APIs for frontend communication.

GGUF file format

GGUF
Filename extension: .gguf
Magic number: 0x47 0x47 0x55 0x46
Developed by: Georgi Gerganov and community
Initial release: August 22, 2023 [24]
Latest release: v3 [25]
Type of format: Machine-learning tensors

The GGUF (GGML Universal File) [26] file format is a binary format that stores both tensors and metadata in a single file, and is designed for fast saving and loading of model data. [27] It was introduced in August 2023 by the llama.cpp project to better maintain backwards compatibility as support was added for other model architectures. [12] [28] It superseded previous formats used by the project, such as GGML.

GGUF files are typically created by converting models developed with a different machine learning library such as PyTorch. [27]

Design

GGUF focuses on quantization, the act of reducing precision in the model weights. This can lead to reduced memory usage and increased speed, albeit at the cost of reduced model accuracy. [29] [28]

GGUF supports 2-bit to 8-bit quantized integer types, [30] common floating-point data formats such as float32, float16, and bfloat16, and 1.58-bit quantization. [5]
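
As a concrete illustration of the arithmetic involved, the sketch below implements a simple block-wise symmetric 8-bit scheme in the spirit of GGUF's quantized integer types; the block size, scale rule, and function names are illustrative assumptions rather than the exact GGUF encoding.

# Illustrative sketch of block-wise symmetric 8-bit quantization, in the
# spirit of (but not identical to) GGUF's quantized integer formats.
# Each block of 32 float32 weights is stored as one float32 scale plus
# 32 int8 values: 36 bytes instead of 128.
import numpy as np

BLOCK_SIZE = 32  # illustrative block size, not a GGUF-defined constant

def quantize_block(w):
    """Map a block of float32 weights to (scale, int8 values)."""
    scale = float(np.max(np.abs(w))) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero block
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return scale, q

def dequantize_block(scale, q):
    """Recover approximate float32 weights from the quantized block."""
    return scale * q.astype(np.float32)

weights = np.random.randn(BLOCK_SIZE).astype(np.float32)
scale, q = quantize_block(weights)
error = np.abs(weights - dequantize_block(scale, q))
print("worst-case rounding error in this block:", float(error.max()))

The printed rounding error is the accuracy cost described above, traded for roughly a 3.6x reduction in storage for this block layout.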

GGUF contains information necessary for running a GPT-like language model such as the tokenizer vocabulary, context length, tensor info and other attributes. [31]

Byte-level structure (little-endian)

Bytes     Description [32]
4         GGUF magic number, currently set to 0x47 0x47 0x55 0x46
4         GGUF version, currently set to 3
8         UINT64 tensor_count: number of tensors
8         UINT64 metadata_kv_count: number of metadata key-value pairs
Variable  Metadata block, containing metadata_kv_count key-value pairs
Variable  Tensors info block, containing tensor_count values
Variable  uint8_t tensor_data[], weight bits block
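
A minimal sketch of reading these fixed-size header fields, assuming only the layout in the table above (the file name is a placeholder):

# Minimal sketch: parse the fixed-size GGUF header fields listed in the
# table above. All multi-byte integers are little-endian.
import struct

with open("model.gguf", "rb") as f:  # placeholder path
    assert f.read(4) == b"GGUF", "not a GGUF file"         # magic number
    (version,) = struct.unpack("<I", f.read(4))            # currently 3
    (tensor_count,) = struct.unpack("<Q", f.read(8))       # UINT64
    (metadata_kv_count,) = struct.unpack("<Q", f.read(8))  # UINT64

print(f"GGUF v{version}: {tensor_count} tensors, "
      f"{metadata_kv_count} metadata key-value pairs")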

Metadata block

// example metadata
general.architecture: 'llama',
general.name: 'LLaMA v2',
llama.context_length: 4096,
...,
general.file_type: 10,  // (typically indicates quantization level, here "MOSTLY_Q2_K")
tokenizer.ggml.model: 'llama',
tokenizer.ggml.tokens: ['<unk>', '<s>', '</s>', '<0x00>', '<0x01>', '<0x02>', '<0x03>', '<0x04>', '<0x05>', '<0x06>', '<0x07>', '<0x08>', ...],
...

Tensors info block

// n-th tensor
name: GGUF string,      // ex: "blk.0.ffn_gate.weight"
n_dimensions: UINT32,   // ex: 2
dimensions: UINT64[],   // ex: [ 4096, 32000 ]
type: UINT32,           // ex: 10 (typically indicates quantization level, here "GGML_TYPE_Q2_K")
offset: UINT64          // starting position within the tensor_data block, relative to the start of the block
// (n+1)-th tensor
...
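
Tensor names here, like the keys in the metadata block, use GGUF's string encoding, which the GGUF specification [32] defines as a little-endian UINT64 byte length followed by that many UTF-8 bytes. A minimal sketch of decoding one (the helper name is illustrative):

# Minimal sketch: decode one GGUF string, the type used for metadata keys
# and tensor names. A GGUF string is a little-endian UINT64 byte length
# followed by that many UTF-8 bytes.
import struct
from typing import BinaryIO

def read_gguf_string(f: BinaryIO) -> str:  # illustrative helper
    (length,) = struct.unpack("<Q", f.read(8))
    return f.read(length).decode("utf-8")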

References

  1. "Initial release · ggerganov/llama.cpp@26c0846". GitHub. Retrieved 15 May 2024.
  2. "llama.cpp/LICENSE at master · ggerganov/llama.cpp". GitHub.
  3. Connatser, Matthew. "How this open source LLM chatbot runner hit the gas on x86, Arm CPUs". theregister.com. Retrieved 15 April 2024.
  4. Gerganov, Georgi (17 May 2024). "ggerganov/ggml". GitHub.
  5. Mann, Tobias (14 Jul 2024). "Honey, I shrunk the LLM! A beginner's guide to quantization – and testing it". theregister.com.
  6. Alden, Daroc. "Portable LLMs with llamafile [LWN.net]". lwn.net. Retrieved 30 July 2024.
  7. Mann, Tobias (15 December 2024). "Intro to speculative decoding: Cheat codes for faster LLMs". theregister.com.
  8. "Bringing Whisper and LLaMA to the masses with Georgi Gerganov (Changelog Interviews #532)". Changelog. 22 March 2023. Retrieved 28 July 2024.
  9. "ggerganov/whisper.cpp". GitHub .
  10. 1 2 Edwards, Benj (13 March 2023). "You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi". arstechnica.com. Retrieved 15 April 2024.
  11. 1 2 Wiest, Isabella Catharina; Ferber, Dyke; Zhu, Jiefu; van Treeck, Marko; Meyer, Meyer, Sonja K.; Juglan, Radhika; Carrero, Zunamys I.; Paech, Daniel; Kleesiek, Jens; Ebert, Matthias P.; Truhn, Daniel; Kather, Jakob Nikolas (2024). "Privacy-preserving large language models for structured medical information retrieval". npj Digital Medicine. 7 (257): 257. doi:10.1038/s41746-024-01233-2. PMC   11415382 . PMID   39304709.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  12. 1 2 Rajput, Saurabhsingh; Sharma, Tushar (4 June 2024). "Benchmarking Emerging Deep Learning Quantization Methods for Energy Efficiency". 2024 IEEE 21st International Conference on Software Architecture Companion (ICSA-C). pp. 238–242. doi:10.1109/ICSA-C63560.2024.00049. ISBN   979-8-3503-6625-9.
  13. 1 2 "ggerganov/llama.cpp". GitHub .
  14. ggml-org. "llama.cpp/docs/android.md at master · ggml-org/llama.cpp". GitHub. Retrieved 2025-12-20.
  15. Hood, Stephen. "llamafile: bringing LLMs to the people, and to your own computer". Mozilla Innovations. Retrieved 28 July 2024.
  16. "Democratizing AI with open-source language models". lwn.net. Retrieved 28 July 2024.
  17. Gerganov, Georgi; Nguyen, Xuan Son; Slaren (August 13, 2024). "Introduction to ggml". Huggingface.
  18. Kluska, Piotr; Castelló, Adrián; Scheidegger, Florian; I. Malossi, A. Cristiano; Quintana-Ortí, Enrique (June 2024). "QAttn: Efficient GPU Kernels for mixed-precision Vision Transformers" (PDF). Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
  19. Jianyu, Zhang; Hengyu, Meng; Ying, Hu; Yu, Luo; Xiaoping, Duan; Majumder, Abhilash (July 2024). "Run LLMs on Intel GPUs Using llama.cpp". The Parallel Universe. No. 57. Intel. pp. 34–37.
  20. Bolz, Jeff (February 11–13, 2025). "Machine Learning in Vulkan with Cooperative Matrix 2" (PDF). Cambridge, UK: The Khronos Group/Nvidia.
  21. Pounder, Les (25 March 2023). "How To Create Your Own AI Chatbot Server With Raspberry Pi 4". tomshardware.com. Retrieved 16 April 2024.
  22. Larabel, Michael. "Llamafile 0.7 Brings AVX-512 Support: 10x Faster Prompt Eval Times For AMD Zen 4". www.phoronix.com.
  23. Walkowiak, Bartosz; Walkowiak, Tomasz (2024). "Implementation of language models within an infrastructure designed for Natural Language Processing" (PDF). International Journal of Electronics and Telecommunications. 70 (1): 153–159. doi:10.24425/ijet.2024.149525 . Retrieved 8 May 2024.
  24. "GGUF by ggerganov · Pull Request #2398 · ggerganov/llama.cpp". GitHub.
  25. "ggml/docs/gguf.md at master · ggerganov/ggml". GitHub.
  26. "ggerganov/llama.cpp/gguf-py/README.md". GitHub. Retrieved 10 November 2024.
  27. 1 2 "GGUF". huggingface.co. Retrieved 9 May 2024.
  28. 1 2 Mucci, Tim (3 July 2024). "GGUF versus GGML". www.ibm.com. Retrieved 26 July 2024.
  29. Labonne, Maxime (29 November 2023). "Quantize Llama models with GGUF and llama.cpp". Medium. Towards Data Science. Retrieved 9 May 2024.
  30. Cabezas, Darío; Fonseca-Delgado, Rigoberto; Reyes-Chacón, Iván; Vizcaino-Imacaña, Paulina; Morocho-Cayamcela, Manuel (2024). "Integrating a LLaMa-based Chatbot with Augmented Retrieval Generation as a Complementary Educational Tool for High School and College Students". Proceedings of the 19th International Conference on Software Technologies. pp. 395–402. doi:10.5220/0012763000003753. ISBN   978-989-758-706-1.
  31. Dong, Bo; Lin, Jun; Yu, Zhentao; Xu, Zhenzhong; Luo, Yu; Chang, Hanwen; Shen, Haihao (July 2024). "Accelerating GGUF Models with Transformers". The Parallel Universe. No. 57. Intel. pp. 28–33.
  32. "GGUF specification (ggml/docs/gguf.md at master · ggml-org/ggml)".