In machine learning, knowledge distillation or model distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have more knowledge capacity than small models, this capacity might not be fully utilized, and evaluating such a model can be just as computationally expensive whether or not its capacity is fully used. Knowledge distillation transfers knowledge from the large model to a smaller one without loss of validity. As smaller models are less expensive to evaluate, they can be deployed on less powerful hardware (such as a mobile device). [1]
Model distillation is not to be confused with model compression, which describes methods to decrease the size of a large model itself, without training a new model. Model compression generally preserves the architecture and the nominal parameter count of the model, while decreasing the bits-per-parameter.
Knowledge distillation has been successfully used in several applications of machine learning such as object detection, [2] acoustic models, [3] and natural language processing. [4] Recently, it has also been introduced to graph neural networks applicable to non-grid data. [5]
Transferring knowledge from a large model to a small one needs to teach the latter without loss of validity. If both models are trained on the same data, the smaller model may have insufficient capacity to learn a concise knowledge representation compared to the large model. However, some information about that representation is encoded in the pseudolikelihoods the large model assigns to its outputs: when the model correctly predicts a class, it assigns a large value to the output variable corresponding to that class, and smaller values to the other output variables. The distribution of values among the outputs for a record provides information on how the large model represents knowledge. Therefore, the goal of economical deployment of a valid model can be achieved by training only the large model on the data, exploiting its better ability to learn concise knowledge representations, and then distilling such knowledge into the smaller model by training it to learn the soft output of the large model. [1]
Given a large model as a function of the vector variable $\mathbf{x}$, trained for a specific classification task, typically the final layer of the network is a softmax in the form

$$ y_i(\mathbf{x}|t) = \frac{\exp\left(\frac{z_i(\mathbf{x})}{t}\right)}{\sum_j \exp\left(\frac{z_j(\mathbf{x})}{t}\right)} $$

where $t$ is the temperature, a parameter which is set to 1 for a standard softmax. The softmax operator converts the logit values $z_i(\mathbf{x})$ to pseudo-probabilities: higher temperature values generate softer distributions of pseudo-probabilities among the output classes. Knowledge distillation consists of training a smaller network, called the distilled model, on a data set called the transfer set (which is different from the data set used to train the large model), using cross-entropy as the loss function between the output $y(\mathbf{x}|t)$ of the distilled model and the output $\hat{y}(\mathbf{x}|t)$ of the large model on the same record (or the average of the individual outputs, if the large model is an ensemble), using a high value of softmax temperature $t$ for both models [1]

$$ E(\mathbf{x}|t) = -\sum_i \hat{y}_i(\mathbf{x}|t) \log y_i(\mathbf{x}|t) . $$
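To make the procedure concrete, the following is a minimal sketch of the soft-target cross-entropy above, written in PyTorch (an assumption; any framework would do). The function name, the temperature value, and the example logits are illustrative, not from the source.

```python
# Minimal sketch of the distillation loss E(x|t) described above (PyTorch assumed).
# `teacher_logits` and `student_logits` stand in for the outputs of the large and
# distilled models on the same transfer-set batch; all values are illustrative.
import torch
import torch.nn.functional as F

def soft_cross_entropy(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between the teacher's and the student's softened outputs."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)    # teacher pseudo-probabilities at temperature t
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)   # student log pseudo-probabilities at temperature t
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Example: a batch of 2 records with 3 classes.
teacher_logits = torch.tensor([[4.0, 1.0, -2.0], [0.5, 3.0, 0.0]])
student_logits = torch.tensor([[3.0, 0.5, -1.0], [0.0, 2.5, 0.5]])
print(soft_cross_entropy(student_logits, teacher_logits))
```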
In this context, a high temperature increases the entropy of the output, therefore providing more information for the distilled model to learn from compared to hard targets, while at the same time reducing the variance of the gradient between different records, thus allowing a higher learning rate. [1]
If ground truth is available for the transfer set, the process can be strengthened by adding to the loss the cross-entropy between the output of the distilled model (computed with $t = 1$) and the known label $\bar{y}$

$$ E(\mathbf{x}|t) = -t^2 \sum_i \hat{y}_i(\mathbf{x}|t) \log y_i(\mathbf{x}|t) - \sum_i \bar{y}_i \log y_i(\mathbf{x}|1) $$

where the component of the loss with respect to the large model is weighted by a factor of $t^2$ since, as the temperature increases, the gradient of the loss with respect to the model weights scales by a factor of $\frac{1}{t^2}$. [1]
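A sketch of this combined loss, again assuming PyTorch; the function name and temperature are illustrative, and `labels` is a hypothetical tensor of integer class indices for the transfer set.

```python
# Sketch of the combined loss when ground-truth labels are available: the soft
# term is weighted by t^2, the hard term uses the student output at t = 1.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0):
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_term = -(soft_targets * F.log_softmax(student_logits / temperature, dim=-1)).sum(dim=-1).mean()
    hard_term = F.cross_entropy(student_logits, labels)   # student output at t = 1 vs. known label
    return (temperature ** 2) * soft_term + hard_term

# Usage with a batch of 2 records, 3 classes, and known labels 0 and 1.
loss = distillation_loss(torch.randn(2, 3), torch.randn(2, 3), torch.tensor([0, 1]))
```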
Under the assumption that the logits have zero mean, it is possible to show that model compression is a special case of knowledge distillation. The gradient of the knowledge distillation loss $E$ with respect to the logit $z_i$ of the distilled model is given by

$$ \frac{\partial E}{\partial z_i} = -\frac{1}{t}\left(\hat{y}_i - y_i\right) = -\frac{1}{t}\left( \frac{e^{\frac{\hat{z}_i}{t}}}{\sum_j e^{\frac{\hat{z}_j}{t}}} - \frac{e^{\frac{z_i}{t}}}{\sum_j e^{\frac{z_j}{t}}} \right) $$

where $\hat{z}_i$ are the logits of the large model. For large values of $t$ this can be approximated as

$$ -\frac{1}{t}\left( \frac{1+\frac{\hat{z}_i}{t}}{N + \frac{1}{t}\sum_j \hat{z}_j} - \frac{1+\frac{z_i}{t}}{N + \frac{1}{t}\sum_j z_j} \right) $$

where $N$ is the number of output classes, and under the zero-mean hypothesis $\sum_j z_j = \sum_j \hat{z}_j = 0$ it becomes

$$ -\frac{\hat{z}_i - z_i}{N t^2} , $$

which is the derivative of $\frac{1}{2 N t^2}\left(z_i - \hat{z}_i\right)^2$, i.e. the loss is equivalent to matching the logits of the two models, as done in model compression. [1]
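The approximation can be checked numerically; the following NumPy sketch compares the exact gradient of $E$ with the logit-matching gradient for a large temperature and zero-mean logits. The logit values and temperature are illustrative.

```python
# Numeric check of the large-t approximation above, with zero-mean logits.
import numpy as np

def softmax(z, t):
    e = np.exp(z / t)
    return e / e.sum()

z_hat = np.array([2.0, -0.5, -1.5])   # teacher logits (zero mean)
z = np.array([1.0, 0.5, -1.5])        # student logits (zero mean)
t, n = 100.0, len(z)

grad_exact = (softmax(z, t) - softmax(z_hat, t)) / t   # dE/dz_i = (y_i - y^_i) / t
grad_logit_matching = (z - z_hat) / (n * t ** 2)       # derivative of (z_i - z^_i)^2 / (2 N t^2)
print(grad_exact)
print(grad_logit_matching)   # nearly equal for large t
```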
The Optimal Brain Damage (OBD) algorithm is as follows: [6]

Do until a desired level of sparsity or performance is reached:
    Train the network (by methods such as backpropagation) until a reasonable solution is obtained
    Compute the saliencies for each parameter
    Delete some lowest-saliency parameters
Deleting a parameter means fixing the parameter to zero. The "saliency" of a parameter $w_i$ is defined as $\frac{1}{2} \frac{\partial^2 L}{\partial w_i^2} w_i^2$, where $L$ is the loss function. The second derivative $\frac{\partial^2 L}{\partial w_i^2}$ can be computed by second-order backpropagation.
The idea of optimal brain damage is to approximate the loss function in a neighborhood of the optimal parameter $w^*$ by a Taylor expansion:

$$ L(w) \approx L(w^*) + \frac{1}{2} \sum_i \frac{\partial^2 L}{\partial w_i^2} (w_i - w_i^*)^2 $$

where $\frac{\partial L}{\partial w_i} = 0$, since $w^*$ is optimal, and the cross-derivatives $\frac{\partial^2 L}{\partial w_i \partial w_j}$ are neglected to save compute. Thus, the saliency of a parameter approximates the increase in loss if that parameter is deleted, i.e. set from $w_i^*$ to zero.
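As a small illustration, the sketch below estimates the OBD saliencies on a toy quadratic loss using finite-difference second derivatives; a real implementation would use second-order backpropagation, and the loss and parameter values here are purely illustrative.

```python
# Sketch of the OBD saliency s_i = (1/2) * d^2L/dw_i^2 * w_i^2 on a toy loss.
import numpy as np

def loss(w):
    # Toy quadratic loss with its minimum at w* = (0.2, 2.0, 0.5); it stands in
    # for the trained network's loss around its optimum.
    return 0.5 * (3.0 * (w[0] - 0.2) ** 2 + 0.1 * (w[1] - 2.0) ** 2 + (w[2] - 0.5) ** 2)

def saliencies(w, eps=1e-4):
    s = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        d2 = (loss(w + e) - 2 * loss(w) + loss(w - e)) / eps ** 2   # d^2 L / d w_i^2
        s[i] = 0.5 * d2 * w[i] ** 2
    return s

w_star = np.array([0.2, 2.0, 0.5])   # "trained" parameters (illustrative)
print(saliencies(w_star))            # prune the lowest-saliency parameters first
```

In this toy example the lowest-saliency parameter is the first one even though the second has the largest magnitude, illustrating that saliency depends on the curvature of the loss as well as on the parameter value.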
A related methodology was model compression or pruning, where a trained network is reduced in size. This was first done in 1965 by Alexey Ivakhnenko and Valentin Lapa in Ukraine. [7] [8] [9] Their deep networks were trained layer by layer through regression analysis. Superfluous hidden units were pruned using a separate validation set. [10] Other neural network compression methods include Biased Weight Decay [11] and Optimal Brain Damage. [6]
An early example of neural network distillation was published by Jürgen Schmidhuber in 1991, in the field of recurrent neural networks (RNNs). The problem was sequence prediction for long sequences, i.e., deep learning. It was solved by two RNNs. One of them (the automatizer) predicted the sequence, and another (the chunker) predicted the errors of the automatizer. Simultaneously, the automatizer predicted the internal states of the chunker. Once the automatizer managed to predict the chunker's internal states well, it would start fixing the errors, and eventually the chunker became obsolete, leaving just one RNN in the end. [12] [13]
The idea of using the output of one neural network to train another neural network was also studied as the teacher-student network configuration. [14] In 1992, several papers studied the statistical mechanics of teacher-student configurations with committee machines [15] [16] or where both teacher and student are parity machines. [17]
Compressing the knowledge of multiple models into a single neural network was called model compression in 2006: compression was achieved by training a smaller model on large amounts of pseudo-data labelled by a higher-performing ensemble, optimizing to match the logit of the compressed model to the logit of the ensemble. [18] The knowledge distillation preprint of Geoffrey Hinton et al. (2015) [1] formulated the concept and showed some results achieved in the task of image classification.
Knowledge distillation is also related to the concept of behavioral cloning discussed by Faraz Torabi et al. [19]