Small language model

Small language models, or compact language models, are artificial intelligence language models designed for natural language processing tasks, including language understanding and text generation. Unlike large language models, small language models are much smaller in scale and scope.

Typically, a large language model's number of training parameters is in the hundreds of billions, with some models even exceeding a trillion parameters. Such models are vast because they encode a large amount of information, which allows them to generate better content. However, this requires enormous computational power, making it impractical for an individual to train a large language model on a single computer with a single graphics processing unit.

Small language models, on the other hand, use far fewer parameters, typically ranging from a few thousand to a few hundred million. This makes them more feasible to train and host in resource-constrained environments such as a single computer or even a mobile device. [1] [2] [3] [4]
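The scale difference can be made concrete with a rough back-of-the-envelope calculation. The sketch below is illustrative only (the specific parameter counts and the assumption of 2 bytes per weight, i.e. 16-bit precision, are examples, not figures from the cited sources), and it counts only weight storage, ignoring activations and runtime overhead:

```python
def model_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in gigabytes at a given numeric precision.

    Counts parameters only; activations, optimizer state, and runtime
    overhead are ignored.
    """
    return num_params * bytes_per_param / 1e9

# A hypothetical 100-million-parameter small model at 16-bit precision:
print(model_memory_gb(100_000_000, 2))       # 0.2 GB -> feasible on a phone

# A hypothetical 175-billion-parameter large model at 16-bit precision:
print(model_memory_gb(175_000_000_000, 2))   # 350.0 GB -> needs many GPUs
```

This is why a model in the hundred-million-parameter range can be hosted on consumer hardware, while a hundreds-of-billions-parameter model cannot fit in the memory of any single commodity device.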

Most contemporary (2020s) small language models use the same architecture as a large language model, but with a smaller parameter count and sometimes lower arithmetic precision. Parameter count is reduced by a combination of knowledge distillation and pruning. Precision can be reduced by quantization. Work on large language models mostly translates to small language models: pruning and quantization are also widely used to speed up large language models. Some notable models are: [2]
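To illustrate how quantization reduces precision, the following sketch shows one common scheme, symmetric per-tensor 8-bit quantization (an illustrative example of the general technique; real systems use more refined variants, such as per-channel scales):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights onto the signed 8-bit range [-127, 127].

    Returns the quantized tensor and the scale needed to recover
    approximate float values.
    """
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.75], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# w_hat closely approximates w while using a quarter of float32 storage
```

Each weight now occupies one byte instead of four, trading a small rounding error for a fourfold reduction in memory, which is one reason quantized models run well on resource-constrained devices.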

(Phi-4 14B is marginally "small" at best, but Microsoft does market it as a small model.) [5]

Language model with small pre-training dataset

Traditional AI language systems need enormous computers and vast amounts of data. Pre-training matters: even tiny models show significant performance improvements when pre-trained, and performance increases with larger pre-training datasets. Classification accuracy improves when the pre-training and test datasets share similar tokens. Shallow architectures can replicate deep-model performance through collaborative learning. [6]

References

  1. Rina Diane Caballar (31 October 2024). "What are small language models?". IBM.
  2. John Johnson (25 February 2025). "Small Language Models (SLM): A Comprehensive Overview". Huggingface.
  3. Kate Whiting. "What is a small language model and how can businesses leverage this AI tool?". The World Economic Forum.
  4. "SLM (Small Language Model) with your Data". Microsoft. 11 July 2024.
  5. "Introducing Phi-4: Microsoft's Newest Small Language Model Specializing in Complex Reasoning". techcommunity.microsoft.com.
  6. Gross, Ronit D.; Tzach, Yarden; Halevi, Tal; Koresh, Ella; Kanter, Ido (2025). "Tiny language models". arXiv: 2507.14871 [cs.CL].