In machine learning, a neural scaling law is an empirical scaling law that describes how neural network performance changes as key factors are scaled up or down. These factors typically include the number of parameters, training dataset size, [1] [2] and training cost.
In general, a deep learning model can be characterized by four parameters: model size, training dataset size, training cost, and the post-training error rate (e.g., the test set error rate). Each of these variables can be defined as a real number, usually written as $N, D, C, L$ (respectively: parameter count, dataset size, computing cost, and loss).
A neural scaling law is a theoretical or empirical statistical law between these parameters. Other quantities, such as numerical precision or inference compute, have their own scaling laws, discussed below.
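As a concrete illustration of what such an empirical law looks like, here is a minimal sketch of fitting a single-variable power law, with an irreducible-loss term, to hypothetical (dataset size, loss) measurements; the data points and fitted constants are made up for illustration.

```python
# Minimal sketch: fitting a power law L = a * D**(-b) + c to hypothetical
# (dataset size, loss) measurements. All data points below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def power_law(D, a, b, c):
    return a * D ** (-b) + c

# Hypothetical measurements: training-set sizes and observed test losses.
D = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
L = np.array([2.63, 2.53, 2.45, 2.38, 2.32])

(a, b, c), _ = curve_fit(power_law, D, L, p0=[5.0, 0.2, 2.0], maxfev=10000)
print(f"fitted: L ≈ {a:.2f} * D^(-{b:.3f}) + {c:.2f}")
```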
In most cases, the model's size is simply the number of parameters. However, one complication arises with the use of sparse models, such as mixture-of-expert models. [3] With sparse models, during inference, only a fraction of their parameters are used. In comparison, most other kinds of neural networks, such as transformer models, always use all their parameters during inference.
The size of the training dataset is usually quantified by the number of data points within it. Larger training datasets are typically preferred, as they provide a richer and more diverse source of information from which the model can learn. This can lead to improved generalization performance when the model is applied to new, unseen data. [4] However, increasing the size of the training dataset also increases the computational resources and time required for model training.
With the "pretrain, then finetune" method used for most large language models, there are two kinds of training dataset: the pretraining dataset and the finetuning dataset. Their sizes have different effects on model performance. Generally, the finetuning dataset is less than 1% the size of pretraining dataset. [5]
In some cases, a small amount of high quality data suffices for finetuning, and more data does not necessarily improve performance. [5]
Training cost is typically measured in terms of time (how long it takes to train the model) and computational resources (how much processing power and memory are required). It is important to note that the cost of training can be significantly reduced with efficient training algorithms, optimized software libraries, and parallel computing on specialized hardware such as GPUs or TPUs.
The cost of training a neural network model is a function of several factors, including model size, training dataset size, the training algorithm complexity, and the computational resources available. [4] In particular, doubling the training dataset size does not necessarily double the cost of training, because one may train the model several times over the same dataset (each pass being an "epoch").
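As a rough illustration, for dense transformer models a common rule of thumb estimates about 6 FLOPs per parameter per training token (the same constant reappears in the Chinchilla analysis below); the sketch and its model and dataset sizes are illustrative.

```python
# Rough training-cost estimate for a dense transformer, using the common
# approximation of ~6 FLOPs per parameter per training token.
def training_flops(n_params: float, n_tokens: float, epochs: int = 1) -> float:
    return 6.0 * n_params * n_tokens * epochs

# Illustrative numbers: a 1B-parameter model on 20B unique tokens.
one_epoch = training_flops(1e9, 20e9)             # ~1.2e20 FLOPs
two_epochs = training_flops(1e9, 20e9, epochs=2)  # doubles compute without new data
print(f"{one_epoch:.2e} FLOPs for 1 epoch, {two_epochs:.2e} FLOPs for 2 epochs")
```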
The performance of a neural network model is evaluated based on its ability to accurately predict the output given some input data. Common metrics for evaluating model performance include the test loss and bounded measures such as accuracy and precision. [4]
Performance can be improved by using more data, larger models, different training algorithms, regularizing the model to prevent overfitting, and early stopping using a validation set.
When the performance is a number bounded within the range $[0, 1]$, such as accuracy or precision, it often scales as a sigmoid function of cost.
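A minimal sketch of fitting such a bounded metric; the sigmoid is taken in log-compute, as such curves are usually plotted, and all data points are made up for illustration.

```python
# Minimal sketch: fitting accuracy (bounded in [0, 1]) to a sigmoid in
# log10(compute). All data points below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_in_log_cost(C, k, logC_mid, top):
    return top / (1.0 + np.exp(-k * (np.log10(C) - logC_mid)))

C = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
acc = np.array([0.12, 0.25, 0.52, 0.74, 0.86])

(k, logC_mid, top), _ = curve_fit(sigmoid_in_log_cost, C, acc, p0=[1.0, 20.0, 0.9])
print(f"midpoint at C ≈ 1e{logC_mid:.1f} FLOPs, asymptotic accuracy ≈ {top:.2f}")
```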
The 2017 paper [2] is a common reference point for neural scaling laws fitted by statistical analysis on experimental data. Previous works before the 2000s, as cited in the paper, were either theoretical or orders of magnitude smaller in scale. Whereas previous works generally found the test loss to scale like $L \propto D^{-\alpha}$ with $\alpha \in \{0.5, 1, 2\}$, the paper found $\alpha \in [0.07, 0.35]$.
Of the factors they varied, only the task can change the exponent $\alpha$. Changing the architecture, optimizer, regularizer, or loss function only changes the proportionality factor, not the exponent: for the same task, two different architectures yield different proportionality factors but the same $\alpha$. They also found that, for a given architecture, the number of parameters necessary to reach the lowest levels of loss at a fixed dataset size grows like $N \propto D^{\beta}$ for another exponent $\beta$.
They studied machine translation with LSTMs, generative language modelling with LSTMs, ImageNet classification with ResNets, and speech recognition with two hybrid architectures (LSTMs complemented by either CNNs or an attention decoder), reporting a fitted exponent for each.
A 2020 analysis [10] studied statistical relations between these quantities over a wide range of values and found similar scaling laws over wide ranges of model size and compute, and over multiple modalities (text, video, image, text-to-image, etc.).
In particular, it reports a separate fitted scaling law for each modality (Table 1 of [10]).
The scaling law of loss against compute, $L(C)$, was confirmed during the training of GPT-3 (Figure 3.1 of [11]).
One particular scaling law ("Chinchilla scaling") states that, for a large language model (LLM) autoregressively trained for one epoch, with a cosine learning rate schedule, we have: [13]

$$C = C_0 N D, \qquad L = \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} + L_0,$$

where the variables are

- $C$, the cost of training the model, in FLOPs;
- $N$, the number of parameters in the model;
- $D$, the number of tokens in the training set;
- $L$, the average negative log-likelihood loss per token (nats/token) achieved by the trained model on the test set;

and the statistical parameters are

- $C_0 = 6$, i.e. roughly 6 FLOPs per parameter per training token;
- $\alpha = 0.34$, $\beta = 0.28$, $A = 406.4$, $B = 410.7$, and $L_0 = 1.69$, where $L_0$ is interpreted as the irreducible loss of the data distribution.
However, Besiroglu et al. [15] claim that the statistical estimation is slightly off, and report somewhat different fitted parameter values after redoing the fit.
The statistical laws were fitted over experimental data from more than 400 models, with $N$ ranging from 70 million to over 16 billion parameters and $D$ from 5 billion to 500 billion tokens.
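As a sketch, the fitted loss surface under the constants quoted above can be written directly; the constants are the paper's reported fit, while the example model and token counts are illustrative.

```python
# Chinchilla-style parametric loss L(N, D) = L0 + A / N**alpha + B / D**beta,
# using the fitted constants quoted above (Hoffmann et al., 2022).
A, B, L0 = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted test loss (nats/token) for n_params trained on n_tokens."""
    return L0 + A / n_params**ALPHA + B / n_tokens**BETA

# Example: roughly Chinchilla itself (70B parameters, 1.4T tokens).
print(round(chinchilla_loss(70e9, 1.4e12), 3))
```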
Since there are 4 variables related by 2 equations, imposing 1 additional constraint and 1 additional optimization objective allows us to solve for all four variables. In particular, for any fixed $C$, we can uniquely solve for the remaining variables that minimize $L$. This provides the optimal choices for any fixed training budget:

$$N_{opt}(C) = G\left(\frac{C}{6}\right)^{a}, \qquad D_{opt}(C) = G^{-1}\left(\frac{C}{6}\right)^{b}, \qquad G = \left(\frac{\alpha A}{\beta B}\right)^{\frac{1}{\alpha + \beta}}, \quad a = \frac{\beta}{\alpha + \beta}, \quad b = \frac{\alpha}{\alpha + \beta}.$$

Plugging in the numerical values gives the "Chinchilla efficient" model size and training dataset size, as well as the achievable test loss, for any training budget. Similarly, we may find the optimal training dataset size and training compute budget for any fixed model parameter size, and so on.
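A sketch of the resulting compute-optimal allocation, using the closed-form minimizer above together with the constants quoted earlier; the example budget, roughly that of Gopher, is taken from the table below.

```python
# Compute-optimal N and D for a fixed training budget C (in FLOPs), using the
# closed-form minimizer of L = L0 + A/N**alpha + B/D**beta subject to C = 6*N*D,
# with the Hoffmann et al. (2022) fitted constants quoted above.
A, B = 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

G = (ALPHA * A / (BETA * B)) ** (1.0 / (ALPHA + BETA))
a_exp = BETA / (ALPHA + BETA)   # exponent a in the text
b_exp = ALPHA / (ALPHA + BETA)  # exponent b in the text

def chinchilla_optimal(C: float):
    n_opt = G * (C / 6.0) ** a_exp
    d_opt = (1.0 / G) * (C / 6.0) ** b_exp
    return n_opt, d_opt

# Example: a training budget of ~5.76e23 FLOPs (roughly the Gopher budget).
n_opt, d_opt = chinchilla_optimal(5.76e23)
print(f"N_opt ≈ {n_opt:.2e} parameters, D_opt ≈ {d_opt:.2e} tokens")
```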
There are other estimates for the "Chinchilla efficient" model size and training dataset size. The above is based on a statistical model of $L(N, D)$. One can also directly fit a statistical law for $N_{opt}(C)$ and $D_{opt}(C)$ without going through this detour, which yields the values tabulated below:
| $N_{opt}$ (parameters) | $C$ / FLOP | $C$ / FLOPs of training Gopher | $D_{opt}$ (tokens) |
|---|---|---|---|
| 400 million | 1.92e+19 | 1/29968 | 8.0 billion |
| 1 billion | 1.21e+20 | 1/5706 | 20.2 billion |
| 10 billion | 1.23e+22 | 1/2819 | 205.1 billion |
| 67 billion | 5.76e+23 | 1 | 1.5 trillion |
| 175 billion | 3.85e+24 | 6.7 | 3.7 trillion |
| 280 billion | 9.90e+24 | 17.2 | 5.9 trillion |
| 520 billion | 3.43e+25 | 59.5 | 11.0 trillion |
| 1 trillion | 1.27e+26 | 221.3 | 21.2 trillion |
| 10 trillion | 1.30e+28 | 22515.9 | 216.2 trillion |
The Chinchilla scaling law analysis for training transformer language models suggests that for a given training compute budget ($C$), to achieve the minimal pretraining loss for that budget, the number of model parameters ($N$) and the number of training tokens ($D$) should be scaled in equal proportions, $N_{opt}(C) \propto C^{0.5}$ and $D_{opt}(C) \propto C^{0.5}$. This conclusion differs from the analysis conducted by Kaplan et al., [14] which found that $N$ should be increased more quickly than $D$, with $N_{opt}(C) \propto C^{0.73}$ and $D_{opt}(C) \propto C^{0.27}$.
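To see concretely what the difference in exponents implies, a small sketch; the exponents are those quoted above, while the 100x budget increase and the resulting growth factors are purely illustrative.

```python
# Illustrative comparison of how a 100x increase in compute is allocated
# under N ∝ C^0.5 (Chinchilla-style) versus N ∝ C^0.73 (Kaplan-style).
def scale_up(exponent: float, compute_multiplier: float) -> float:
    """Factor by which a quantity grows when compute grows by the multiplier."""
    return compute_multiplier ** exponent

for name, exp_n in [("Chinchilla", 0.50), ("Kaplan", 0.73)]:
    n_growth = scale_up(exp_n, 100.0)
    d_growth = scale_up(1.0 - exp_n, 100.0)  # D picks up the remaining exponent
    print(f"{name}: 100x compute -> {n_growth:.0f}x parameters, {d_growth:.0f}x tokens")
```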
This discrepancy can primarily be attributed to the two studies using different methods for measuring model size: Kaplan et al. counted only non-embedding parameters, and fitted their law largely on smaller models, for which embedding parameters make up a substantial fraction of the total, which skews the fitted exponents. [16]
Secondary effects also arise from differences in hyperparameter tuning and learning rate schedules between the two studies. [17]
As Chinchilla scaling has been the reference point for many large-scale training runs, there has been a concurrent effort to go "beyond Chinchilla scaling", meaning to modify the training pipeline so as to obtain the same loss with less effort, or to deliberately train for longer than is "Chinchilla optimal".
Usually, the goal is to make the scaling-law exponent larger, so that the same loss can be reached with much less compute. For instance, filtering the training data can make the scaling-law exponent larger. [18]
Another strand of research studies how to deal with limited data, since according to Chinchilla scaling laws, the training dataset size for the largest language models already approaches what is available on the internet. One study [19] found that augmenting the dataset with a mix of "denoising objectives" constructed from the dataset improves performance. Another [20] studies optimal scaling when all available data is already exhausted (such as for rare languages), so that one must train multiple epochs over the same dataset (whereas Chinchilla scaling assumes only one epoch). The Phi series of small language models were trained on textbook-like data generated by large language models, for which data is limited only by the amount of compute available. [21]
Chinchilla optimality was defined as "optimal for training compute", whereas in actual production-quality models there will be a great deal of inference after training is complete. "Overtraining" a model on more data than is training-compute-optimal produces a smaller model at a given performance level, and therefore better performance per unit of inference cost. [22] The LLaMA models were overtrained for this reason. Subsequent studies discovered scaling laws in the overtraining regime, for dataset sizes up to 32x larger than Chinchilla-optimal. [23]
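A rough sketch of why overtraining can pay off once inference is counted; the 6 FLOPs-per-parameter-per-training-token and 2 FLOPs-per-parameter-per-inference-token figures are common approximations, and the particular model and token counts (including the assumption that the two models reach comparable loss) are illustrative only.

```python
# Rough lifetime-compute comparison between a training-compute-optimal model
# and a smaller "overtrained" model assumed to reach comparable quality.
# Uses the common approximations of ~6*N*D FLOPs for training and ~2*N FLOPs
# per token at inference; all model and token counts are illustrative.
def lifetime_flops(n_params: float, train_tokens: float, inference_tokens: float) -> float:
    return 6 * n_params * train_tokens + 2 * n_params * inference_tokens

INFERENCE_TOKENS = 1e13  # hypothetical total tokens served over the model's lifetime

# Hypothetical pair of models assumed to reach comparable loss:
compute_optimal = lifetime_flops(70e9, 1.4e12, INFERENCE_TOKENS)
overtrained = lifetime_flops(13e9, 8.0e12, INFERENCE_TOKENS)

print(f"compute-optimal: {compute_optimal:.2e} lifetime FLOPs")
print(f"overtrained:     {overtrained:.2e} lifetime FLOPs")
```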
A 2022 analysis [24] found that many scaling behaviors of artificial neural networks follow a smoothly broken power law functional form:

$$y = a + b x^{-c_0} \prod_{i=1}^{n} \left(1 + \left(\frac{x}{d_i}\right)^{1/f_i}\right)^{-c_i f_i}$$
in which $x$ refers to the quantity being scaled (e.g. $C$, $N$, $D$, number of training steps, number of inference steps, or model input size) and $y$ refers to the downstream (or upstream) performance evaluation metric of interest (e.g. prediction error, cross entropy, calibration error, AUROC, BLEU score percentage, F1 score, reward, Elo rating, solve rate, or FID score) in zero-shot, prompted, or fine-tuned settings. The parameters $a, b, c_0, c_1, \ldots, c_n, d_1, \ldots, d_n, f_1, \ldots, f_n$ (with $n$ the number of breaks) are found by statistical fitting.
On a log–log plot, when $x$ is not too large and $a$ is subtracted out from the y-axis, this functional form looks like a series of linear segments connected by arcs; the transitions between the segments are called "breaks", hence the name broken neural scaling laws (BNSL).
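A direct transcription of this functional form into code, with one break as an example; the constants passed in are placeholders, not fitted values from the paper.

```python
# Broken neural scaling law (BNSL) functional form, transcribed from the
# expression above. The constants used below are placeholders for fitting.
import numpy as np

def bnsl(x, a, b, c0, c, d, f):
    """y = a + b*x^(-c0) * prod_i (1 + (x/d_i)^(1/f_i))^(-c_i*f_i).

    c, d, f are sequences of equal length n (the number of breaks).
    """
    x = np.asarray(x, dtype=float)
    y = b * x ** (-c0)
    for c_i, d_i, f_i in zip(c, d, f):
        y = y * (1.0 + (x / d_i) ** (1.0 / f_i)) ** (-c_i * f_i)
    return a + y

# Example with one break: behaves like x^(-c0) before the break near d_1,
# and like x^(-(c0 + c_1)) well after it.
print(bnsl([1e3, 1e6, 1e9], a=0.1, b=5.0, c0=0.2, c=[0.3], d=[1e6], f=[0.5]))
```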
The scenarios in which the scaling behaviors of artificial neural networks were found to follow this functional form include large-scale vision, language, audio, video, diffusion, generative modeling, multimodal learning, contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration, out-of-distribution detection, adversarial robustness, distillation, sparsity, retrieval, quantization, pruning, fairness, molecules, computer programming/coding, math word problems, arithmetic, emergent abilities, double descent, supervised learning, unsupervised/self-supervised learning, and reinforcement learning (single agent and multi-agent).
The architectures for which the scaling behaviors of artificial neural networks were found to follow this functional form include residual neural networks, transformers, MLPs, MLP-mixers, recurrent neural networks, convolutional neural networks, graph neural networks, U-nets, encoder-decoder (and encoder-only) (and decoder-only) models, ensembles (and non-ensembles), MoE (mixture of experts) (and non-MoE) models, and sparse pruned (and non-sparse unpruned) models.
Other than scaling up training compute, one can also scale up inference compute (or "test-time compute" [25]). As an example, the Elo rating of AlphaGo improves steadily as it is allowed to spend more time on its Monte Carlo Tree Search per play (Figure 4 of [26]). For AlphaGo Zero, increasing Elo by 120 requires either 2x model size and training, or 2x test-time search. [27] Similarly, a language model for solving competition-level coding challenges, AlphaCode, improved consistently (log-linearly) in performance with more search time. [28]
For Hex, 10x training-time compute trades for 15x test-time compute. [8] For Libratus playing heads-up no-limit Texas hold 'em, Cicero playing Diplomacy, and many other abstract games of partial information, inference-time search improves performance at a similar tradeoff ratio, up to a 100,000x effective increase in training-time compute. [27]
In 2024, the OpenAI o1 report documented that o1's performance consistently improved with both increased train-time compute and test-time compute, and gave numerous examples of test-time compute scaling in mathematics, scientific reasoning, and coding tasks. [29] [30]
One method for scaling up test-time compute is process-based supervision, where a model generates a step-by-step reasoning chain to answer a question, and another model (either human or AI) provides a reward score for some of the intermediate steps, not just the final answer. Process-based supervision can be scaled arbitrarily by using a synthetic reward score without another model, for example by running Monte Carlo rollouts and scoring each step in the reasoning chain according to how likely it is to lead to the right answer. Another method uses revision models, which are trained to solve a problem multiple times, each time revising the previous attempt. [31]
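A schematic sketch of the Monte Carlo step-scoring idea described above; `sample_completion` and `is_correct` are hypothetical stand-ins for a language-model sampler and an answer checker, not part of any particular library.

```python
# Schematic sketch of Monte Carlo step scoring: each prefix of a reasoning
# chain is scored by how often sampled continuations from that prefix reach
# the correct final answer. `sample_completion` and `is_correct` are
# hypothetical stand-ins supplied by the caller.
from typing import Callable, List

def score_steps(
    question: str,
    steps: List[str],
    sample_completion: Callable[[str], str],
    is_correct: Callable[[str], bool],
    n_rollouts: int = 16,
) -> List[float]:
    scores = []
    prefix = question
    for step in steps:
        prefix = prefix + "\n" + step
        hits = sum(is_correct(sample_completion(prefix)) for _ in range(n_rollouts))
        scores.append(hits / n_rollouts)  # estimated P(correct | reasoning so far)
    return scores
```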
Vision transformers, similar to language transformers, exhibit scaling laws. A 2022 study trained vision transformers across a range of parameter counts, image-set sizes, and computing budgets (measured in TPUv3-core-days). [32]
After training, the model is finetuned on the ImageNet training set. Let $L$ be the error probability of the finetuned model classifying the ImageNet test set. They found that the minimal achievable $L$ follows a smooth scaling law in the training compute.
Ghorbani, Behrooz et al. [33] studied scaling laws for neural machine translation (specifically, English as source and German as target) in encoder-decoder Transformer models, trained until convergence on the same datasets (thus they did not fit scaling laws for computing cost $C$ or dataset size $D$). They varied the sizes of the encoder and the decoder separately, and reported three main findings.
The authors hypothesize that source-natural datasets have uniform and dull target sentences, and so a model that is trained to predict the target sentences would quickly overfit.
Another study [35] trained Transformers for machine translation across a range of model and dataset sizes. They found that the Kaplan et al. (2020) [14] scaling law applied to machine translation, and that the BLEU score scales approximately exponentially with the cross-entropy loss.
Hernandez, Danny et al. [36] studied scaling laws for transfer learning in language models. They trained a family of Transformers in three ways: pretraining on English text, then finetuning on Python code; pretraining on a mix of English text and non-Python code, then finetuning on Python code; and training on Python code from scratch.
The idea is that pretraining on English text should help the model achieve low loss on a test set of Python code. Suppose a model with parameter count $N$ achieves loss $L$ after being finetuned on $D_F$ Python tokens. Its "transferred token count" is defined as $D_T$ if another model with the same $N$, trained on Python from scratch, achieves the same loss $L$ after training on $D_F + D_T$ Python tokens.
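A sketch of how the transferred token count could be read off numerically, assuming one has a fitted from-scratch loss curve at the same model size; the power-law curve and all constants here are illustrative assumptions, not the paper's fit.

```python
# Illustrative sketch: given a fitted loss curve for models of a fixed size
# trained from scratch on Python, find the "transferred token count" D_T such
# that a from-scratch run on D_F + D_T tokens matches the finetuned model's
# loss. The loss curve and its constants are illustrative assumptions.
from scipy.optimize import brentq

def loss_from_scratch(d_tokens: float) -> float:
    # Assumed power-law loss curve at a fixed model size (illustrative).
    return 1.5 + 60.0 * d_tokens ** (-0.3)

def transferred_tokens(d_finetune: float, finetuned_loss: float) -> float:
    # Solve loss_from_scratch(d_finetune + d_t) == finetuned_loss for d_t.
    return brentq(lambda d_t: loss_from_scratch(d_finetune + d_t) - finetuned_loss,
                  0.0, 1e15)

# Example: finetuned on 1e8 Python tokens and reaching loss 1.62.
print(f"D_T ≈ {transferred_tokens(1e8, 1.62):.2e} tokens")
```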
They found that the transferred token count scales as a power law in both the finetuning dataset size $D_F$ and the parameter count $N$, with different fitted coefficients for pretraining on English text and for pretraining on English and non-Python code.
Kumar et al. [37] study scaling laws for numerical precision in the training of language models. They train a family of language models with weights, activations, and KV cache in varying numerical precisions, in both integer and floating-point types, to measure the effect on loss as a function of precision. For training, their scaling law accounts for lower precision by folding its effect into an overall "effective parameter count" that governs loss scaling. This illustrates how training in lower precision degrades performance by reducing the true capacity of the model, in a manner that varies exponentially with the number of bits.
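As a hedged illustration only, the sketch below assumes a saturating-exponential mapping from bit-width to effective parameters and plugs it into the Chinchilla-style loss quoted earlier; the mapping, the constant GAMMA, and the model and token sizes are assumptions made for illustration, not the parameterization fitted in the paper.

```python
# Illustrative sketch: folding training precision into an "effective parameter
# count" that enters a Chinchilla-style loss. The saturating-exponential
# mapping and all constants below are assumptions for illustration only.
import math

A, B, L0 = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28
GAMMA = 2.5  # assumed sensitivity of effective capacity to bit-width

def effective_params(n_params: float, precision_bits: float) -> float:
    return n_params * (1.0 - math.exp(-precision_bits / GAMMA))

def loss(n_params: float, n_tokens: float, precision_bits: float) -> float:
    n_eff = effective_params(n_params, precision_bits)
    return L0 + A / n_eff**ALPHA + B / n_tokens**BETA

for bits in (16, 8, 4):
    print(f"{bits}-bit training: predicted loss {loss(1e9, 20e9, bits):.3f}")
```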
For inference, they find that extreme overtraining of language models past Chinchilla-optimality can lead to models being more sensitive to quantization, a standard technique for efficient deep learning. This is demonstrated by observing that the degradation in loss due to weight quantization increases as an approximate power law in the token/parameter ratio seen during pretraining, so that models pretrained on extreme token budgets can perform worse in terms of validation loss than those trained on more modest token budgets if post-training quantization is applied. Other work examining the effects of overtraining includes Sardana et al. [38] and Gadre et al. [39]
Xiao et al. [7] considered the parameter efficiency ("density") of models over time. The idea is that, over time, researchers discover models that use their parameters more efficiently, so that the same performance can be achieved with fewer parameters.
A model has an actual parameter count $N$, defined as the actual number of parameters in the model, and an "effective" parameter count $N_{\text{eff}}$, defined as how many parameters it would have taken a previous well-known model to reach the same performance on some benchmarks, such as MMLU. $N_{\text{eff}}$ is not measured directly; rather, the actual model performance is measured and then plugged back into a previously fitted scaling law, such as the Chinchilla scaling law, to obtain the $N_{\text{eff}}$ that would be required to reach that performance according to that previously fitted law.
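A sketch of how such an effective parameter count could be obtained by inverting a previously fitted scaling law; here the Chinchilla-style loss quoted earlier stands in for the reference benchmark performance, and the measured loss, actual parameter count, and token count are illustrative.

```python
# Illustrative sketch: estimate an "effective" parameter count by inverting a
# previously fitted reference scaling law (here, the Chinchilla-style form
# with the constants quoted earlier). All measured values below are illustrative.
A, B, L0 = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28

def effective_params(measured_loss: float, n_tokens: float) -> float:
    # Invert L = L0 + A/N**alpha + B/D**beta for N, given L and D.
    residual = measured_loss - L0 - B / n_tokens**BETA
    return (A / residual) ** (1.0 / ALPHA)

n_actual = 7e9                        # actual parameter count of the newer model
n_eff = effective_params(2.0, 1e12)   # loss it reaches and tokens it was trained on
print(f"N_eff ≈ {n_eff:.2e}, density ≈ {n_eff / n_actual:.2f}")
```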
A densing law states that the maximum density achieved by available models, $\rho = N_{\text{eff}}/N$, grows exponentially over time, $\ln \rho_{\max} \propto t$, where $t$ is real-world time, measured in days.