Batch normalization (also known as batch norm) is a method used to make training of artificial neural networks faster and more stable through normalization of the layers' inputs by re-centering and re-scaling. It was proposed by Sergey Ioffe and Christian Szegedy in 2015. [1]
The reasons behind the effectiveness of batch normalization remain under discussion. It was believed that it can mitigate the problem of internal covariate shift, where parameter initialization and changes in the distribution of the inputs of each layer affect the learning rate of the network. [1] Recently, some scholars have argued that batch normalization does not reduce internal covariate shift, but rather smooths the objective function, which in turn improves the performance. [2] However, at initialization, batch normalization in fact induces severe gradient explosion in deep networks, which is only alleviated by skip connections in residual networks. [3] Others maintain that batch normalization achieves length-direction decoupling, and thereby accelerates neural networks. [4]
Each layer of a neural network has inputs with a corresponding distribution, which is affected during the training process by the randomness in the parameter initialization and the randomness in the input data. The effect of these sources of randomness on the distribution of the inputs to internal layers during training is described as internal covariate shift. Although a clear-cut precise definition seems to be missing, the phenomenon observed in experiments is the change in the means and variances of the inputs to internal layers during training.
Batch normalization was initially proposed to mitigate internal covariate shift. [1] During the training stage of networks, as the parameters of the preceding layers change, the distribution of inputs to the current layer changes accordingly, such that the current layer needs to constantly readjust to new distributions. This problem is especially severe for deep networks, because small changes in shallower hidden layers will be amplified as they propagate within the network, resulting in significant shift in deeper hidden layers. Therefore, the method of batch normalization is proposed to reduce these unwanted shifts to speed up training and to produce more reliable models.
Besides reducing internal covariate shift, batch normalization is believed to introduce many other benefits. With this additional operation, the network can use a higher learning rate without vanishing or exploding gradients. Furthermore, batch normalization seems to have a regularizing effect, improving the network's generalization properties and reducing the need for dropout to mitigate overfitting. It has also been observed that the network becomes more robust to different initialization schemes and learning rates when batch normalization is used.
In a neural network, batch normalization is achieved through a normalization step that fixes the means and variances of each layer's inputs. Ideally, the normalization would be conducted over the entire training set, but to use this step jointly with stochastic optimization methods, it is impractical to use the global information. Thus, normalization is restrained to each mini-batch in the training process.
Let us use $B$ to denote a mini-batch of size $m$ of the entire training set. The empirical mean and variance of $B$ could thus be denoted as

$\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i$ and $\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu_B\right)^2$.
For a layer of the network with $d$-dimensional input, $x = \left(x^{(1)}, \ldots, x^{(d)}\right)$, each dimension of its input is then normalized (i.e. re-centered and re-scaled) separately,

$\hat{x}_i^{(k)} = \frac{x_i^{(k)} - \mu_B^{(k)}}{\sqrt{\left(\sigma_B^{(k)}\right)^2 + \epsilon}}$, where $k \in [1, d]$ and $i \in [1, m]$; $\mu_B^{(k)}$ and $\sigma_B^{(k)}$ are the per-dimension mean and standard deviation, respectively.

$\epsilon$ is added in the denominator for numerical stability and is an arbitrarily small constant. The resulting normalized activation $\hat{x}^{(k)}$ has zero mean and unit variance, if $\epsilon$ is not taken into account. To restore the representation power of the network, a transformation step then follows as

$y_i^{(k)} = \gamma^{(k)}\hat{x}_i^{(k)} + \beta^{(k)}$,

where the parameters $\gamma^{(k)}$ and $\beta^{(k)}$ are subsequently learned in the optimization process.
Formally, the operation that implements batch normalization is a transform $BN_{\gamma^{(k)},\beta^{(k)}}: x^{(k)}_{1\ldots m} \to y^{(k)}_{1\ldots m}$ called the Batch Normalizing transform. The output of the BN transform, $y^{(k)} = BN_{\gamma^{(k)},\beta^{(k)}}\left(x^{(k)}\right)$, is then passed to other network layers, while the normalized output $\hat{x}_i^{(k)}$ remains internal to the current layer.
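For concreteness, the training-time transform can be sketched in a few lines of NumPy. This is a minimal illustration rather than a production implementation; the function name batchnorm_forward and the value of eps are arbitrary choices.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for a mini-batch x of shape (m, d).

    Each of the d dimensions is re-centered and re-scaled using the
    mini-batch mean and variance, then scaled by gamma and shifted by beta.
    """
    mu = x.mean(axis=0)                     # per-dimension mean  mu_B
    var = x.var(axis=0)                     # per-dimension variance  sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalized activation
    y = gamma * x_hat + beta                # scale and shift
    return y, x_hat, mu, var

# Example: a mini-batch of m = 4 examples with d = 3 features.
x = np.array([[1.0, 2.0, 3.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 2.0],
              [3.0, 3.0, 0.0]])
gamma = np.ones(3)
beta = np.zeros(3)
y, x_hat, mu, var = batchnorm_forward(x, gamma, beta)
print(x_hat.mean(axis=0))  # approximately zero in every dimension
print(x_hat.var(axis=0))   # approximately one in every dimension
```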
The described BN transform is a differentiable operation, and the gradient of the loss l with respect to the different parameters can be computed directly with the chain rule.
Specifically, $\frac{\partial l}{\partial y_i^{(k)}}$ depends on the choice of activation function, and the gradient against other parameters could be expressed as a function of $\frac{\partial l}{\partial y_i^{(k)}}$:

$\frac{\partial l}{\partial \hat{x}_i^{(k)}} = \frac{\partial l}{\partial y_i^{(k)}}\,\gamma^{(k)}$,

$\frac{\partial l}{\partial \left(\sigma_B^{(k)}\right)^2} = \sum_{i=1}^{m}\frac{\partial l}{\partial\hat{x}_i^{(k)}}\left(x_i^{(k)} - \mu_B^{(k)}\right)\cdot\left(-\frac{1}{2}\right)\left(\left(\sigma_B^{(k)}\right)^2 + \epsilon\right)^{-3/2}$,

$\frac{\partial l}{\partial\mu_B^{(k)}} = \sum_{i=1}^{m}\frac{\partial l}{\partial\hat{x}_i^{(k)}}\,\frac{-1}{\sqrt{\left(\sigma_B^{(k)}\right)^2 + \epsilon}} + \frac{\partial l}{\partial\left(\sigma_B^{(k)}\right)^2}\,\frac{\sum_{i=1}^{m}-2\left(x_i^{(k)} - \mu_B^{(k)}\right)}{m}$,

$\frac{\partial l}{\partial x_i^{(k)}} = \frac{\partial l}{\partial\hat{x}_i^{(k)}}\,\frac{1}{\sqrt{\left(\sigma_B^{(k)}\right)^2 + \epsilon}} + \frac{\partial l}{\partial\left(\sigma_B^{(k)}\right)^2}\,\frac{2\left(x_i^{(k)} - \mu_B^{(k)}\right)}{m} + \frac{\partial l}{\partial\mu_B^{(k)}}\,\frac{1}{m}$,

$\frac{\partial l}{\partial\gamma^{(k)}} = \sum_{i=1}^{m}\frac{\partial l}{\partial y_i^{(k)}}\,\hat{x}_i^{(k)}$,

and $\frac{\partial l}{\partial\beta^{(k)}} = \sum_{i=1}^{m}\frac{\partial l}{\partial y_i^{(k)}}$.
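The same chain-rule expressions translate directly into code. The sketch below recomputes the forward quantities for clarity and returns the gradients with respect to the layer input and the learned parameters; the function name is again an illustrative choice.

```python
import numpy as np

def batchnorm_backward(dl_dy, x, gamma, eps=1e-5):
    """Gradients of the loss through a batch normalization layer.

    dl_dy : gradient of the loss w.r.t. the BN output y, shape (m, d)
    Returns gradients w.r.t. the layer input x and the parameters gamma, beta.
    """
    m = x.shape[0]
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    std_inv = 1.0 / np.sqrt(var + eps)
    x_hat = (x - mu) * std_inv

    dl_dxhat = dl_dy * gamma
    dl_dvar = np.sum(dl_dxhat * (x - mu), axis=0) * -0.5 * std_inv**3
    dl_dmu = np.sum(-dl_dxhat * std_inv, axis=0) + dl_dvar * np.mean(-2.0 * (x - mu), axis=0)
    dl_dx = dl_dxhat * std_inv + dl_dvar * 2.0 * (x - mu) / m + dl_dmu / m
    dl_dgamma = np.sum(dl_dy * x_hat, axis=0)
    dl_dbeta = np.sum(dl_dy, axis=0)
    return dl_dx, dl_dgamma, dl_dbeta
```

Comparing the returned gradients against finite-difference estimates is a standard way to check such an implementation.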
During the training stage, the normalization steps depend on the mini-batches to ensure efficient and reliable training. However, in the inference stage, this dependence is not useful any more. Instead, the normalization step in this stage is computed with the population statistics such that the output could depend on the input in a deterministic manner. The population mean, $E\left[x^{(k)}\right]$, and variance, $\operatorname{Var}\left[x^{(k)}\right]$, are computed as:

$E\left[x^{(k)}\right] = E_B\left[\mu_B^{(k)}\right]$ and $\operatorname{Var}\left[x^{(k)}\right] = \frac{m}{m-1}\,E_B\left[\left(\sigma_B^{(k)}\right)^2\right]$.
The population statistics are thus a complete representation of the mini-batches.
The BN transform in the inference step thus becomes

$y^{(k)} = BN^{\text{inf}}_{\gamma^{(k)},\beta^{(k)}}\left(x^{(k)}\right) = \gamma^{(k)}\,\frac{x^{(k)} - E\left[x^{(k)}\right]}{\sqrt{\operatorname{Var}\left[x^{(k)}\right] + \epsilon}} + \beta^{(k)}$,

where $y^{(k)}$ is passed on to future layers instead of $x^{(k)}$. Since the parameters are fixed in this transformation, the batch normalization procedure is essentially applying a linear transform to the activation.
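In practice, the population statistics are commonly approximated by running averages maintained during training and then frozen for inference, rather than computed exactly over all mini-batches as above. The following sketch illustrates this common approximation; the class name and the momentum value are arbitrary choices.

```python
import numpy as np

class BatchNormInference:
    """Minimal sketch of statistics tracking during training and their use at inference."""

    def __init__(self, d, momentum=0.9, eps=1e-5):
        self.gamma = np.ones(d)
        self.beta = np.zeros(d)
        self.running_mean = np.zeros(d)   # estimate of E[x]
        self.running_var = np.ones(d)     # estimate of Var[x]
        self.momentum = momentum
        self.eps = eps

    def train_step(self, x):
        mu, var = x.mean(axis=0), x.var(axis=0)
        # Exponential moving averages approximate the population statistics.
        self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mu
        self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

    def inference(self, x):
        # Fixed statistics: the transform is now a deterministic linear map.
        scale = self.gamma / np.sqrt(self.running_var + self.eps)
        return scale * x + (self.beta - scale * self.running_mean)
```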
Although batch normalization has become popular due to its strong empirical performance, the working mechanism of the method is not yet well-understood. The explanation made in the original paper [1] was that batch norm works by reducing internal covariate shift, but this has been challenged by more recent work. One experiment [2] trained a VGG-16 network [5] under 3 different training regimes: standard (no batch norm), batch norm, and batch norm with noise added to each layer during training. In the third model, the noise has non-zero mean and non-unit variance, i.e. it explicitly introduces covariate shift. Despite this, it showed similar accuracy to the second model, and both performed better than the first, suggesting that covariate shift is not the reason that batch norm improves performance.
Using batch normalization causes the items in a batch to no longer be iid, which can lead to difficulties in training due to lower quality gradient estimation. [6]
One alternative explanation [2] is that the improvement with batch normalization is instead due to it producing a smoother parameter space and smoother gradients, as formalized by a smaller Lipschitz constant.
Consider two identical networks, one without batch normalization layers and the other with them; the behaviors of these two networks are then compared. Denote the loss functions as $L$ and $\hat{L}$, respectively. Let the input to both networks be $x$, and the output be $y$, for which $y = Wx$, where $W$ is the layer weights. For the second network, $y$ additionally goes through a batch normalization layer. Denote the normalized activation as $\hat{y}$, which has zero mean and unit variance. Let the transformed activation be $z = \gamma\hat{y} + \beta$, and suppose $\gamma$ and $\beta$ are constants. Finally, for the $j$-th unit, denote the standard deviation of its pre-normalization activation $y_j \in \mathbb{R}^m$ over a mini-batch as $\sigma_j$.
First, it can be shown that the gradient magnitude of a batch normalized network, $\left\|\nabla_{y_j}\hat{L}\right\|$, is bounded, with the bound expressed as

$\left\|\nabla_{y_j}\hat{L}\right\|^2 \le \frac{\gamma^2}{\sigma_j^2}\left(\left\|\nabla_{y_j}L\right\|^2 - \frac{1}{m}\left\langle\mathbf{1}, \nabla_{y_j}L\right\rangle^2 - \frac{1}{m}\left\langle\nabla_{y_j}L, \hat{y}_j\right\rangle^2\right)$.

Since the gradient magnitude represents the Lipschitzness of the loss, this relationship indicates that a batch normalized network could achieve greater Lipschitzness comparatively. Notice that the bound gets tighter when the gradient $\nabla_{y_j}L$ correlates with the activation $\hat{y}_j$, which is a common phenomenon. The scaling by $\frac{\gamma^2}{\sigma_j^2}$ is also significant, since the variance $\sigma_j^2$ is often large.
Secondly, the quadratic form of the loss Hessian with respect to activation in the gradient direction can be bounded as

$\left(\nabla_{y_j}\hat{L}\right)^T \frac{\partial\hat{L}}{\partial y_j\,\partial y_j}\left(\nabla_{y_j}\hat{L}\right) \le \frac{\gamma^2}{\sigma_j^2}\left(\frac{\partial\hat{L}}{\partial y_j}\right)^T \frac{\partial L}{\partial y_j\,\partial y_j}\left(\frac{\partial\hat{L}}{\partial y_j}\right) - \frac{\gamma}{m\sigma_j^2}\left\langle\nabla_{y_j}L, \hat{y}_j\right\rangle\left\|\frac{\partial\hat{L}}{\partial y_j}\right\|^2$.

The scaling by $\frac{\gamma^2}{\sigma_j^2}$ indicates that the loss Hessian is resilient to the mini-batch variance, whereas the second term on the right hand side suggests that it becomes smoother when the Hessian and the inner product $\left\langle\nabla_{y_j}L, \hat{y}_j\right\rangle$ are non-negative. If the loss is locally convex, then the Hessian is positive semi-definite, while the inner product is positive if $\hat{y}_j$ is in the direction towards the minimum of the loss. It could thus be concluded from this inequality that the gradient generally becomes more predictive with the batch normalization layer.
It then follows to translate the bounds related to the loss with respect to the normalized activation to a bound on the loss with respect to the network weights:

$\hat{g}_j \le \frac{\gamma^2}{\sigma_j^2}\left(g_j^2 - m\mu_{g_j}^2 - \lambda^2\left\langle\nabla_{y_j}L, \hat{y}_j\right\rangle^2\right)$, where $g_j = \max_{\|X\|\le\lambda}\left\|\nabla_W L\right\|^2$ and $\hat{g}_j = \max_{\|X\|\le\lambda}\left\|\nabla_W \hat{L}\right\|^2$, and $\mu_{g_j}$ denotes the mean of the entries of $\nabla_{y_j}L$.
In addition to the smoother landscape, it is further shown that batch normalization could result in a better initialization with the following inequality:

$\left\|W_0 - \hat{W}^*\right\|^2 \le \left\|W_0 - W^*\right\|^2 - \frac{1}{\left\|W^*\right\|^2}\left(\left\|W^*\right\|^2 - \left\langle W^*, W_0\right\rangle\right)^2$, where $W^*$ and $\hat{W}^*$ are the local optimal weights for the two networks, respectively.
Some scholars argue that the above analysis cannot fully capture the performance of batch normalization, because the proof only concerns the largest eigenvalue, or equivalently, one direction in the landscape at all points. It is suggested that the complete eigenspectrum needs to be taken into account to make a conclusive analysis. [4]
Since it is hypothesized that batch normalization layers could reduce internal covariate shift, an experiment is set up to measure quantitatively how much covariate shift is reduced. First, the notion of internal covariate shift needs to be defined mathematically. Specifically, to quantify the adjustment that a layer's parameters make in response to updates in previous layers, the correlation between the gradients of the loss before and after all previous layers are updated is measured, since gradients could capture the shifts from the first-order training method. If the shift introduced by the changes in previous layers is small, then the correlation between the gradients would be close to 1.
The correlation between the gradients is computed for four models: a standard VGG network, [5] a VGG network with batch normalization layers, a 25-layer deep linear network (DLN) trained with full-batch gradient descent, and a DLN network with batch normalization layers. Interestingly, it is shown that the standard VGG and DLN models both have higher correlations of gradients compared with their counterparts, indicating that the additional batch normalization layers are not reducing internal covariate shift.
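The measurement itself is easy to reproduce in miniature. The sketch below uses a toy two-layer linear network with a squared loss instead of the VGG and DLN models from the experiment: it compares the gradient of the second layer's weights before and after a gradient step on the first layer, and reports their cosine similarity as a crude proxy for the gradient correlation described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a two-layer linear network: prediction = X @ W1 @ W2.
n, d, h = 256, 10, 16
X = rng.normal(size=(n, d))
y = rng.normal(size=(n, 1))
W1 = rng.normal(scale=0.3, size=(d, h))
W2 = rng.normal(scale=0.3, size=(h, 1))

def grads(W1, W2):
    """Mean-squared-error gradients w.r.t. both weight matrices."""
    hidden = X @ W1
    pred = hidden @ W2
    d_pred = 2.0 * (pred - y) / n
    dW2 = hidden.T @ d_pred
    dW1 = X.T @ (d_pred @ W2.T)
    return dW1, dW2

dW1_before, dW2_before = grads(W1, W2)

# Update only the earlier layer, then look at the later layer's gradient again.
lr = 0.1
W1_updated = W1 - lr * dW1_before
_, dW2_after = grads(W1_updated, W2)

# Cosine similarity close to 1 means the update of the earlier layer barely
# changed the gradient seen by the later layer, i.e. little "shift".
cos = (dW2_before.ravel() @ dW2_after.ravel()) / (
    np.linalg.norm(dW2_before) * np.linalg.norm(dW2_after))
print(f"gradient correlation: {cos:.3f}")
```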
Even though batchnorm was originally introduced to alleviate gradient vanishing or explosion problems, a deep batchnorm network in fact suffers from gradient explosion at initialization time, no matter what it uses for nonlinearity. Thus the optimization landscape is very far from smooth for a randomly initialized, deep batchnorm network. More precisely, if the network has $L$ layers, then the gradient of the first layer weights has norm $> c\lambda^L$ for some $\lambda > 1$ and $c > 0$ depending only on the nonlinearity. For any fixed nonlinearity, $\lambda$ decreases as the batch size increases. For example, for ReLU, $\lambda$ decreases to $\pi/(\pi - 1) \approx 1.467$ as the batch size tends to infinity. Practically, this means deep batchnorm networks are untrainable. This is only relieved by skip connections in the fashion of residual networks. [3]
This gradient explosion on the surface contradicts the smoothness property explained in the previous section, but in fact they are consistent. The previous section studies the effect of inserting a single batchnorm in a network, while the gradient explosion depends on stacking batchnorms typical of modern deep neural networks.
Another possible reason for the success of batch normalization is that it decouples the length and direction of the weight vectors and thus facilitates better training.
By interpreting batch norm as a reparametrization of weight space, it can be shown that the length and the direction of the weights are separated and can thus be trained separately. For a particular neural network unit with input $x$ and weight vector $w$, denote its output as $f(w) = E_x\left[\phi\left(x^T w\right)\right]$, where $\phi$ is the activation function, and denote $S = E\left[xx^T\right]$. Assume that $E[x] = 0$, and that the spectrum of the matrix $S$ is bounded as $0 < \mu = \lambda_{min}(S)$ and $L = \lambda_{max}(S) < \infty$, such that $S$ is symmetric positive definite. Adding batch normalization to this unit thus results in

$f_{BN}(w, \gamma, \beta) = E_x\left[\phi\left(BN\left(x^T w\right)\right)\right] = E_x\left[\phi\left(\gamma\,\frac{x^T w - E_x\left[x^T w\right]}{\sqrt{\operatorname{var}_x\left[x^T w\right]}} + \beta\right)\right]$, by definition.

The variance term can be simplified such that $\operatorname{var}_x\left[x^T w\right] = w^T S w$. Assume that $x$ has zero mean and $\beta$ can be omitted, then it follows that

$f_{BN}(w, \gamma) = E_x\left[\phi\left(\gamma\,\frac{x^T w}{\|w\|_S}\right)\right]$, where $\|w\|_S = \sqrt{w^T S w}$ is the induced norm of $S$.

Hence, it could be concluded that $f_{BN}(w, \gamma) = f\left(\tilde{w}\right)$, where $\tilde{w} = \gamma\frac{w}{\|w\|_S}$, and $\gamma$ and $\frac{w}{\|w\|_S}$ account for its length and direction separately. This property could then be used to prove the faster convergence of problems with batch normalization.
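The decoupling can be checked numerically: the batch-normalized pre-activation depends only on the direction of $w$, so rescaling $w$ leaves it unchanged and the length is carried entirely by $\gamma$. The snippet below is a small illustration of this invariance, with mini-batch statistics standing in for the expectations.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))            # zero-mean inputs (rows are samples)
w = rng.normal(size=5)

def bn_preactivation(X, w, eps=1e-12):
    """Batch-normalized pre-activation BN(x^T w), with gamma = 1 and beta = 0."""
    z = X @ w
    return (z - z.mean()) / np.sqrt(z.var() + eps)

a = bn_preactivation(X, w)
b = bn_preactivation(X, 7.3 * w)          # rescale the weight vector
print(np.allclose(a, b))                  # True: only the direction of w matters
```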
With the reparametrization interpretation, it could then be proved that applying batch normalization to the ordinary least squares problem achieves a linear convergence rate in gradient descent, which is faster than the regular gradient descent with only sub-linear convergence.
Denote the objective of minimizing an ordinary least squares problem as

$\min_{\tilde{w}} f_{OLS}(\tilde{w}) = \min_{\tilde{w}} E_{x,y}\left[\left(y - x^T\tilde{w}\right)^2\right] = \min_{\tilde{w}}\left(2\tilde{w}^T u + \tilde{w}^T S\tilde{w}\right)$, where $u = E[-yx]$, $S = E\left[xx^T\right]$, and an additive constant independent of $\tilde{w}$ has been dropped.
Since $\tilde{w} = \gamma\frac{w}{\|w\|_S}$, the objective thus becomes

$\min_{w \ne 0,\,\gamma} f_{OLS}(w, \gamma) = \min_{w \ne 0,\,\gamma}\left(2\gamma\,\frac{u^T w}{\|w\|_S} + \gamma^2\right)$, where $w = 0$ is excluded to avoid a zero denominator.
Since the objective is convex with respect to $\gamma$, its optimal value could be calculated by setting the partial derivative of the objective against $\gamma$ to 0. The objective could be further simplified to be

$\min_{w \ne 0} \rho(w) = \min_{w \ne 0}\left(-\frac{\left(u^T w\right)^2}{w^T S w}\right)$.
Note that this objective is a form of the generalized Rayleigh quotient

$\rho(\tilde{w}) = \frac{\tilde{w}^T B\tilde{w}}{\tilde{w}^T A\tilde{w}}$, where $B \in \mathbb{R}^{d\times d}$ is a symmetric matrix and $A \in \mathbb{R}^{d\times d}$ is a symmetric positive definite matrix.
It is proven that gradient descent on the generalized Rayleigh quotient converges at a linear rate that depends on $\lambda_1$, the largest eigenvalue of $B$, $\lambda_2$, the second largest eigenvalue of $B$, and $\lambda_{min}$, the smallest eigenvalue of $A$. [7]
In our case, $B = uu^T$ is a rank one matrix, and the convergence result can be simplified accordingly. Specifically, consider gradient descent steps of the form $w_{t+1} = w_t - \eta_t\nabla\rho(w_t)$ with step size $\eta_t$ and starting point $w_0$; then the gap $\rho(w_t) - \rho(w^*)$ shrinks geometrically in $t$, i.e. the iterates converge at a linear rate.
The problem of learning halfspaces refers to the training of the Perceptron, which is the simplest form of neural network. The optimization problem in this case is
$\min_{\tilde{w}} f(\tilde{w}) = \min_{\tilde{w}} E_{z}\left[\varphi\left(\tilde{w}^T z\right)\right]$, where $z = -yx$ and $\varphi$ is an arbitrary loss function.

Suppose that $\varphi$ is infinitely differentiable and has a bounded derivative. Assume that the objective function $f$ is $\zeta$-smooth, and that a solution $\tilde{w}^*$ exists and is bounded such that $0 < \left\|\tilde{w}^*\right\| < \infty$. Also assume $z$ is a multivariate normal random variable. With the Gaussian assumption, it can be shown that all critical points lie on the same line, for any choice of loss function $\varphi$. Specifically, the gradient of $f$ could be represented as

$\nabla_{\tilde{w}} f(\tilde{w}) = c_1(\tilde{w})\,\mu + c_2(\tilde{w})\,\Sigma\tilde{w}$, where $c_1(\tilde{w}) = E_z\left[\varphi^{(1)}\left(\tilde{w}^T z\right)\right]$, $c_2(\tilde{w}) = E_z\left[\varphi^{(2)}\left(\tilde{w}^T z\right)\right]$, $\mu = E[z]$, $\Sigma$ is the covariance of $z$, and $\varphi^{(i)}$ is the $i$-th derivative of $\varphi$.

By setting the gradient to 0, it thus follows that the bounded critical points $\tilde{w}^*$ can be expressed as $\tilde{w}^* = g_*\,\Sigma^{-1}\mu$, where $g_*$ depends on $\tilde{w}^*$ and $\varphi$. Combining this global property with length-direction decoupling, it could thus be proved that this optimization problem converges linearly.
First, a variation of gradient descent with batch normalization, Gradient Descent in Normalized Parameterization (GDNP), is designed for the objective function $f(w, \gamma)$ in the normalized parameterization, such that the direction and length of the weights are updated separately. GDNP uses a stopping criterion based on the gradient with respect to the direction, together with an adaptive step size for the direction updates. In each step, as long as the stopping criterion is not met, the direction $w$ is updated by a gradient step, and the length component $\gamma$ is then updated with the classical bisection algorithm, run for a fixed number of bisection iterations. After a fixed total number of iterations, the final output of GDNP is the weight vector assembled from the resulting direction and length. The GDNP algorithm thus slightly modifies the batch normalization step for the ease of mathematical analysis.
It can be shown that in GDNP the partial derivative of $f$ with respect to the length component $\gamma$ converges to zero at a linear rate, with a rate that depends on the two starting points of the bisection algorithm on the left and on the right. Further, for each iteration, the norm of the gradient of $f$ with respect to the direction $w$ also converges linearly. Combining these two inequalities, a bound could thus be obtained for the gradient with respect to the combined weight $\tilde{w} = \gamma\frac{w}{\|w\|_S}$, such that the algorithm is guaranteed to converge linearly.
Although the proof stands on the assumption of Gaussian input, it is also shown in experiments that GDNP could accelerate optimization without this constraint.
Consider a multilayer perceptron (MLP) with one hidden layer and $N$ hidden units with mapping from input $x$ to a scalar output described as

$F(x) = \sum_{i=1}^{N} a_i\,\phi\left(w_i^T x\right)$, where $w_i$ and $a_i$ are the input and output weights of unit $i$ correspondingly, and $\phi$ is the activation function and is assumed to be a tanh function.

The input and output weights could then be optimized with

$\min_{W, a} \tilde{f}(W, a) = E_{x,y}\left[\ell\left(F(x), y\right)\right]$, where $\ell$ is a loss function, $W = \left(w_1, \ldots, w_N\right)$, and $a = \left(a_1, \ldots, a_N\right)$.
Consider fixed $a$ and optimizing only $W$; it can be shown that the critical points of $\tilde{f}$ for a particular hidden unit $i$, denoted $\hat{w}_i$, all align along one line depending on the incoming information into the hidden layer, such that

$\hat{w}_i = c_i\,v$, where $c_i$ is a scalar, $i = 1, \ldots, N$, and $v$ is a fixed vector determined by the statistics of the information entering the hidden layer.
This result could be proved by setting the gradient of $\tilde{f}$ to zero and solving the system of equations.
Apply the GDNP algorithm to this optimization problem by alternating optimization over the different hidden units. Specifically, for each hidden unit, run GDNP to find its optimal direction and length. With the same choice of stopping criterion and stepsize as in the halfspace case, the gradient with respect to each hidden unit's weights converges to zero at a linear rate.
Since the parameters of each hidden unit converge linearly, the whole optimization problem has a linear rate of convergence. [4]