Vanishing gradient problem

In machine learning, the vanishing gradient problem is encountered when training neural networks with gradient-based learning methods and backpropagation. In such methods, during each training iteration, each neural network weight receives an update proportional to the partial derivative of the loss function with respect to the current weight. [1] The problem is that as the network depth or sequence length increases, the gradient magnitude is typically expected to decrease (or grow uncontrollably), slowing the training process. [1] In the worst case, this may completely stop the neural network from further learning. [1] As one example of this problem, traditional activation functions such as the hyperbolic tangent have derivatives in the range (0, 1], and backpropagation computes gradients using the chain rule. This has the effect of multiplying n of these small numbers to compute gradients of the early layers in an n-layer network, meaning that the gradient (error signal) decreases exponentially with n while the early layers train very slowly.
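As a minimal numerical illustration of this effect (the layer width, initialization scale, and depths below are arbitrary choices), the following NumPy sketch multiplies the layer Jacobians of a randomly initialized tanh network via the chain rule and measures the norm of the gradient reaching the first layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_chain_gradient(n_layers, width=64):
    """Norm of d(output)/d(input) for a random tanh network,
    accumulated layer by layer via the chain rule."""
    x = rng.normal(size=width)
    J = np.eye(width)
    for _ in range(n_layers):
        W = rng.normal(size=(width, width)) / np.sqrt(width)  # a common init scale
        pre = W @ x
        x = np.tanh(pre)
        J = np.diag(1.0 - np.tanh(pre) ** 2) @ W @ J  # layer Jacobian times running product
    return np.linalg.norm(J, 2)

for n in (2, 10, 50):
    print(n, tanh_chain_gradient(n))  # the norm shrinks rapidly with depth
```

Each layer contributes a factor of at most 1 from the tanh derivative, so the product of Jacobians typically shrinks geometrically with depth.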


Backpropagation allowed researchers to train supervised deep artificial neural networks from scratch, initially with little success. Hochreiter's diplom thesis of 1991 formally identified the reason for this failure in the "vanishing gradient problem", [2] [3] which not only affects many-layered feedforward networks, [4] but also recurrent networks. [5] [6] The latter are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time-step of an input sequence processed by the network (the combination of unfolding and backpropagation is termed backpropagation through time).

When activation functions whose derivatives can take on larger values are used, one risks encountering the related exploding gradient problem.

Prototypical models

This section is based on the paper On the difficulty of training Recurrent Neural Networks by Pascanu, Mikolov, and Bengio. [6]

Recurrent network model

A generic recurrent network has hidden states $h_1, h_2, \ldots$, inputs $u_1, u_2, \ldots$, and outputs $x_1, x_2, \ldots$. Let it be parametrized by $\theta$, so that the system evolves as
$$(h_t, x_t) = F(h_{t-1}, u_t, \theta).$$
Often, the output $x_t$ is a function of $h_t$, as some $x_t = G(h_t)$. The vanishing gradient problem already presents itself clearly when $x_t = h_t$, so we simplify our notation to the special case with
$$x_t = F(x_{t-1}, u_t, \theta).$$
Now, take its differential:
$$\begin{aligned}
dx_t &= \nabla_\theta F(x_{t-1}, u_t, \theta)\, d\theta + \nabla_x F(x_{t-1}, u_t, \theta)\, dx_{t-1} \\
     &= \nabla_\theta F\, d\theta + \nabla_x F \left( \nabla_\theta F\, d\theta + \nabla_x F\, dx_{t-2} \right) \\
     &= \cdots \\
     &= \left( \nabla_\theta F + \nabla_x F\, \nabla_\theta F + \nabla_x F\, \nabla_x F\, \nabla_\theta F + \cdots \right) d\theta.
\end{aligned}$$
Training the network requires us to define a loss function $L(x_T)$ to be minimized. [note 1] Minimizing it by gradient descent gives
$$dL = \nabla_x L(x_T) \left( \nabla_\theta F + \nabla_x F\, \nabla_\theta F + \cdots \right) d\theta \quad \text{(loss differential)}$$
$$\Delta\theta = -\eta \left[ \nabla_x L(x_T) \left( \nabla_\theta F + \nabla_x F\, \nabla_\theta F + \cdots \right) \right]^T$$

where $\eta$ is the learning rate.

The vanishing/exploding gradient problem appears because there are repeated multiplications of the form
$$\nabla_x F\, \nabla_x F\, \nabla_x F \cdots$$

Example: recurrent network with sigmoid activation

For a concrete example, consider a typical recurrent network defined by
$$x_t = F(x_{t-1}, u_t, \theta) = W_{rec}\, \sigma(x_{t-1}) + W_{in}\, u_t + b$$

where $\theta = (W_{rec}, W_{in})$ is the network parameter, $\sigma$ is the sigmoid activation function, [note 2] applied to each vector coordinate separately, and $b$ is the bias vector.

Then $\nabla_x F(x_{t-1}, u_t, \theta) = W_{rec}\, \operatorname{diag}(\sigma'(x_{t-1}))$, and so
$$\nabla_x F(x_{t-1}) \cdots \nabla_x F(x_{t-k}) = W_{rec}\, \operatorname{diag}(\sigma'(x_{t-1})) \cdots W_{rec}\, \operatorname{diag}(\sigma'(x_{t-k})).$$
Since $|\sigma'| \le \tfrac{1}{4}$, the operator norm of the above multiplication is bounded above by $(\|W_{rec}\| / 4)^k$. So if the spectral radius of $W_{rec}$ is $\gamma < 4$, then at large $k$, the above multiplication has operator norm bounded above by $(\gamma / 4)^k \to 0$. This is the prototypical vanishing gradient problem.
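This bound can be checked numerically. A minimal sketch (the state dimension and horizon are arbitrary choices) that normalizes W_rec to operator norm 2; note the rigorous finite-k bound uses the operator norm, while the spectral-radius version of the statement is asymptotic:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, k = 8, 30                              # state dimension and number of steps (arbitrary)
W = rng.normal(size=(d, d))
W *= 2.0 / np.linalg.norm(W, 2)           # normalize so that ||W_rec|| = 2 < 4

x = rng.normal(size=d)
J = np.eye(d)
for _ in range(k):
    s = sigmoid(x)
    J = W @ np.diag(s * (1 - s)) @ J      # one factor  W_rec · diag(sigma'(x_{t-1}))
    x = W @ s                             # evolve the state (zero input and bias)

print(np.linalg.norm(J, 2), (2 / 4) ** k)  # product norm vs. the bound (||W||/4)^k
```

Because each diagonal factor has norm at most 1/4 and the operator norm is submultiplicative, the product's norm is guaranteed to sit below the printed bound.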

The effect of a vanishing gradient is that the network cannot learn long-range effects. Recall the loss differential:
$$dL = \nabla_x L(x_T) \left( \nabla_\theta F + \nabla_x F\, \nabla_\theta F + \nabla_x F\, \nabla_x F\, \nabla_\theta F + \cdots \right) d\theta.$$
The components of $\nabla_\theta F$ are just components of $\sigma(x_{t-1})$ and $u_t$, so if the inputs are bounded, then $\|\nabla_\theta F\|$ is also bounded by some $M > 0$, and so the $k$-th term in the sum decays as $M (\gamma/4)^k$. This means that, effectively, $dL$ is affected only by the first $O(\ln M / \ln(4/\gamma))$ terms in the sum.

If $\gamma \ge 4$, the above analysis does not quite work. [note 3] For the prototypical exploding gradient problem, the next model is clearer.

Dynamical systems model

[Figure: Bifurcation diagram of the one-neuron recurrent network. The horizontal axis is b, and the vertical axis is x. The black curve is the set of stable and unstable equilibria. Notice that the system exhibits hysteresis, and can be used as a one-bit memory.]

Following (Doya, 1993), [7] consider this one-neuron recurrent network with sigmoid activation:
$$x_{t+1} = (1 - \epsilon)\, x_t + \epsilon\, \sigma(w x_t + b + w' u_t)$$
At the small $\epsilon$ limit, the dynamics of the network becomes
$$\frac{dx}{dt} = -x(t) + \sigma(w x(t) + b + w' u(t))$$
Consider first the autonomous case, with $u = 0$. Set $w = 5.0$, and vary $b$ in $[-3, -2]$. As $b$ decreases, the system has 1 stable point, then has 2 stable points and 1 unstable point, and finally has 1 stable point again. Explicitly, the stable points are $(b, x) = \left( \ln\frac{x}{1-x} - 5x,\; x \right)$.
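The equilibrium structure above can be verified numerically. A sketch (the scan grid is an arbitrary choice) that counts equilibria of dx/dt = -x + sigmoid(5x + b) by locating sign changes of the right-hand side:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def equilibria(b, w=5.0):
    """Count fixed points of dx/dt = -x + sigmoid(w*x + b), located by
    scanning [0, 1] for sign changes of the right-hand side."""
    xs = np.linspace(0.0, 1.0, 200000)
    f = sigmoid(w * xs + b) - xs
    return np.count_nonzero(f[:-1] * f[1:] < 0)

for b in (-2.0, -2.5, -3.0):
    print(b, equilibria(b))  # 1 equilibrium, then 3 (two stable + one unstable), then 1
```

Scanning over [0, 1] suffices because any fixed point satisfies x = sigmoid(...), which lies in (0, 1).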

Now consider $\Delta x(T) / \Delta x(0)$ and $\Delta x(T) / \Delta b$, where $T$ is large enough that the system has settled into one of the stable points.

If $x(0)$ puts the system very close to an unstable point, then a tiny variation in $x(0)$ or $b$ would make $x(T)$ move from one stable point to the other. This makes $\Delta x(T) / \Delta x(0)$ and $\Delta x(T) / \Delta b$ both very large, a case of the exploding gradient.

If $x(0)$ puts the system far from an unstable point, then a small variation in $x(0)$ would have no effect on $x(T)$, making $\Delta x(T) / \Delta x(0) = 0$, a case of the vanishing gradient.

Note that in this case, $\Delta x(T) / \Delta b$ neither decays to zero nor blows up to infinity. Indeed, it is the only well-behaved gradient, which explains why early research focused on learning or designing recurrent network systems that could perform long-range computations (such as outputting the first input they see at the very end of an episode) by shaping their stable attractors. [8]
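The two regimes can be seen in a finite-difference estimate of the sensitivity of x(T) to x(0). A sketch, assuming b = -2.5 (so the unstable point sits at x = 0.5), Euler integration, and arbitrary horizon and step sizes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def x_T(x0, b=-2.5, w=5.0, T=100.0, dt=0.01):
    """Euler-integrate dx/dt = -x + sigmoid(w*x + b) from x0 for time T."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (sigmoid(w * x + b) - x)
    return x

eps = 1e-6
near = (x_T(0.5 + eps) - x_T(0.5 - eps)) / (2 * eps)  # straddles the unstable point
far = (x_T(0.9 + eps) - x_T(0.9 - eps)) / (2 * eps)   # deep inside one basin
print(near, far)  # near is huge (exploding); far is essentially zero (vanishing)
```

Starting just above and just below the unstable point sends the trajectories to different attractors, so the finite difference blows up; starting well inside one basin, both trajectories converge to the same attractor and the finite difference collapses.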

For the general case, the intuition still holds ( [6] Figures 3, 4, and 5).

Geometric model

Continue using the above one-neuron network, fixing $w = 5$, $x(0) = 0.5$, and consider a loss function defined by $L(x(T)) = (0.855 - x(T))^2$. This produces a rather pathological loss landscape: as $b$ approaches $-2.5$ from above, the loss approaches zero, but as soon as $b$ crosses $-2.5$, the attractor basin changes, and the loss jumps to 0.50. [note 4]

Consequently, attempting to train $b$ by gradient descent would "hit a wall in the loss landscape", and cause an exploding gradient. A slightly more complex situation is plotted in [6], Figure 6.
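The jump in the loss can be reproduced numerically. A sketch using the same Euler integration as above (the horizon and step size are arbitrary choices), starting from x(0) = 0.5:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(b, w=5.0, x0=0.5, T=100.0, dt=0.01):
    """L(x(T)) = (0.855 - x(T))^2 for the one-neuron network."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (sigmoid(w * x + b) - x)
    return (0.855 - x) ** 2

print(loss(-2.49), loss(-2.51))  # near zero on one side of -2.5, about 0.5 on the other
```

For b just above -2.5 the trajectory settles at the upper attractor near 0.855; for b just below, it settles at the lower attractor near 0.145, and the loss jumps to roughly (0.855 - 0.145)^2 ≈ 0.50.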

Solutions

To overcome this problem, several methods have been proposed.

RNN

For recurrent neural networks, the long short-term memory (LSTM) network was designed to solve the problem (Hochreiter & Schmidhuber, 1997). [9]

For the exploding gradient problem, Pascanu et al. (2012) [6] recommended gradient clipping: dividing the gradient vector $g$ by $\|g\| / g_{max}$ whenever $\|g\| > g_{max}$. This restricts the gradient vectors within a ball of radius $g_{max}$.
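A minimal sketch of this clipping rule (the threshold value is an arbitrary choice):

```python
import numpy as np

def clip_gradient(g, threshold=1.0):
    """Rescale g onto the ball of radius `threshold` when its norm exceeds it."""
    norm = np.linalg.norm(g)
    if norm > threshold:
        g = g * (threshold / norm)
    return g

print(clip_gradient(np.array([3.0, 4.0])))  # norm 5 -> rescaled to norm 1
print(clip_gradient(np.array([0.1, 0.2])))  # small gradients pass through unchanged
```

The rescaling preserves the gradient's direction, changing only its magnitude.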

Batch normalization

Batch normalization is a standard method for solving both the exploding and the vanishing gradient problems. [10] [11]
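A minimal sketch of the normalization step itself (in practice the scale gamma and shift beta are learned parameters, and running statistics are used at inference time):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature of a batch to zero mean and unit variance,
    then apply a scale and shift (after Ioffe & Szegedy)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

x = np.array([[1.0, 100.0], [3.0, 300.0]])
out = batch_norm(x)
print(out.mean(axis=0), out.std(axis=0))  # per-feature mean ~0, std ~1
```

Keeping pre-activations in a fixed range prevents them from drifting into the saturated regions of the activation function, where derivatives are tiny.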

Multi-level hierarchy

A multi-level hierarchy of networks (Schmidhuber, 1992) is pre-trained one level at a time through unsupervised learning and fine-tuned through backpropagation. [12] Here each level learns a compressed representation of the observations that is fed to the next level.

Deep belief network

Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. The network is then trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al. (2006) involves learning the distribution of a high-level representation using successive layers of binary or real-valued latent variables. It uses a restricted Boltzmann machine to model each new layer of higher-level features. Each new layer guarantees an increase in the lower bound of the log likelihood of the data, thus improving the model, if trained properly. Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an "ancestral pass") from the top-level feature activations. [13] Hinton reports that his models are effective feature extractors over high-dimensional, structured data. [14]

Faster hardware

Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this "is basically what is winning many of the image recognition competitions now", but that it "does not really overcome the problem in a fundamental way" [15] since the original models tackling the vanishing gradient problem by Hinton and others were trained on a Xeon processor, not on GPUs. [13]

Residual connection

Residual connections, or skip connections, refer to the architectural motif of $x \mapsto f(x) + x$, where $f$ is an arbitrary neural network module. This gives the gradient $\nabla f + I$, where the identity term does not suffer from the vanishing or exploding gradient. During backpropagation, part of the gradient flows through the residual connections. [16]

Concretely, let the neural network (without residual connections) be $f_L \circ f_{L-1} \circ \cdots \circ f_1$; then with residual connections, the gradient of the output with respect to the activations at layer $\ell$ is $\prod_{i=\ell+1}^{L} (I + \nabla f_i)$, whose expansion contains the identity term. The gradient thus does not vanish in arbitrarily deep networks.
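A hedged numerical comparison (the dimensions, depth, and Jacobian scale are arbitrary choices): with small per-layer Jacobians, the plain product of Jacobians vanishes while the residual product does not:

```python
import numpy as np

rng = np.random.default_rng(2)
d, L = 8, 40
jacobians = [0.1 * rng.normal(size=(d, d)) for _ in range(L)]  # small per-layer Jacobians

plain = np.eye(d)
residual = np.eye(d)
for J in jacobians:
    plain = J @ plain                       # gradient through f_L ∘ ... ∘ f_1
    residual = (np.eye(d) + J) @ residual   # gradient with skip connections

print(np.linalg.norm(plain, 2), np.linalg.norm(residual, 2))  # tiny vs. order one
```

Each residual factor stays close to the identity, so the accumulated gradient keeps an order-one component regardless of depth.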

Feedforward networks with residual connections can be regarded as an ensemble of relatively shallow nets. In this perspective, they resolve the vanishing gradient problem by being equivalent to ensembles of many shallow networks, for which there is no vanishing gradient problem. [17]

Other activation functions

Rectifiers such as ReLU suffer less from the vanishing gradient problem, because they only saturate in one direction. [18]
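A quick check of the derivative ranges involved (the sample points are arbitrary): the sigmoid derivative is at most 1/4 everywhere, so products of many such factors shrink fast, while the ReLU derivative is exactly 1 on the active half:

```python
import numpy as np

z = np.linspace(-3.0, 3.0, 7)
sig = 1.0 / (1.0 + np.exp(-z))
d_sigmoid = sig * (1.0 - sig)        # bounded by 1/4 everywhere
d_relu = (z > 0).astype(float)       # exactly 1 wherever the unit is active

print(d_sigmoid.max(), d_relu.max())
```

A chain of active ReLU units therefore passes the error signal through undiminished, whereas a chain of sigmoid units attenuates it by at least a factor of 4 per layer.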

Weight initialization

Weight initialization is another approach that has been proposed to reduce the vanishing gradient problem in deep networks.

Kumar suggested that the distribution of initial weights should vary according to the activation function used, and proposed initializing the weights of networks with the logistic activation function using a Gaussian distribution with zero mean and a standard deviation of 3.6/sqrt(N), where N is the number of neurons in a layer. [19]

Recently, Yilmaz and Poli [20] performed a theoretical analysis of how gradients are affected by the mean of the initial weights in deep neural networks using the logistic activation function, and found that gradients do not vanish if the mean of the initial weights is set according to the formula max(−1, −8/N). This simple strategy allows networks with 10 or 15 hidden layers to be trained very efficiently and effectively using standard backpropagation.
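A sketch of both initializations. The helper name is hypothetical; Kumar's rule specifies a zero mean and standard deviation 3.6/sqrt(N), while the Yilmaz–Poli rule specifies the mean max(−1, −8/N), and the 1/sqrt(N) standard deviation paired with it below is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def init_logistic_layer(n_in, n_out, scheme="kumar"):
    """Hypothetical helper sketching the two initializations discussed above."""
    if scheme == "kumar":           # zero mean, std 3.6/sqrt(N)  [19]
        return rng.normal(0.0, 3.6 / np.sqrt(n_in), size=(n_out, n_in))
    if scheme == "yilmaz_poli":     # mean max(-1, -8/N)  [20]; the std here is an assumption
        mean = max(-1.0, -8.0 / n_in)
        return rng.normal(mean, 1.0 / np.sqrt(n_in), size=(n_out, n_in))
    raise ValueError(scheme)

W1 = init_logistic_layer(100, 100, "kumar")
W2 = init_logistic_layer(100, 100, "yilmaz_poli")
print(W1.std(), W2.mean())  # roughly 0.36 and -0.08 for N = 100
```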

Other

Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid [21] to solve problems like image reconstruction and face localization.

Neural networks can also be optimized by using a universal search algorithm on the space of a neural network's weights, e.g., random guessing or, more systematically, a genetic algorithm. This approach is not based on gradients and avoids the vanishing gradient problem. [22]
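A toy sketch of the random-guess end of this spectrum (the network shape, target function, and guess count are arbitrary choices); no gradient is ever computed, so none can vanish:

```python
import numpy as np

rng = np.random.default_rng(4)

def predict(w, x):
    """Tiny one-hidden-layer tanh network with weights packed in a flat vector."""
    h = np.tanh(np.outer(x, w[:4]) + w[4:8])
    return h @ w[8:12]

x = np.linspace(-1.0, 1.0, 20)
y = 2.0 * x                          # target: fit y = 2x

best_w, best_err = None, np.inf
for _ in range(5000):                # pure random guessing over weight space
    w = rng.normal(size=12)
    err = np.mean((predict(w, x) - y) ** 2)
    if err < best_err:
        best_w, best_err = w, err

print(best_err)  # decreases as more guesses are drawn
```

Such search scales poorly with the number of weights, which is why it is practical only for very small networks.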


Notes

  1. A more general loss function could depend on the entire sequence of outputs, as $L(x_1, \ldots, x_T, u_1, \ldots, u_T)$, for which the problem is the same, just with more complex notation.
  2. Any activation function works, as long as it is differentiable with bounded derivative.
  3. Consider $W_{rec} = \gamma I$ and $b = 0$, with $\gamma > 4$ and $u_t = 0$. Then $W_{rec}$ has spectral radius $\gamma > 4$, and $\nabla_x F(x_{t-1}) \cdots \nabla_x F(x_{t-k}) = \gamma^k\, \operatorname{diag}(\sigma'(x_{t-1})) \cdots \operatorname{diag}(\sigma'(x_{t-k}))$, which might go to infinity or zero depending on the choice of $x_t$.
  4. This is because at $b = -2.5$, the two stable attractors are $x = 0.145$ and $x = 0.855$, and the unstable attractor is $x = 0.5$.


    References

    1. Basodi, Sunitha; Ji, Chunyan; Zhang, Haiping; Pan, Yi (September 2020). "Gradient amplification: An efficient way to train deep neural networks". Big Data Mining and Analytics. 3 (3): 198. arXiv:2006.10560. doi:10.26599/BDMA.2020.9020004. ISSN 2096-0654. S2CID 219792172.
    2. Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (Diplom thesis). Institut f. Informatik, Technische Univ. Munich.
    3. Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient flow in recurrent nets: the difficulty of learning long-term dependencies". In Kremer, S. C.; Kolen, J. F. (eds.). A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press. doi:10.1109/9780470544037.ch14. ISBN   0-7803-5369-2.
    4. Goh, Garrett B.; Hodas, Nathan O.; Vishnu, Abhinav (15 June 2017). "Deep learning for computational chemistry". Journal of Computational Chemistry. 38 (16): 1291–1307. arXiv: 1701.04503 . Bibcode:2017arXiv170104503G. doi:10.1002/jcc.24764. PMID   28272810. S2CID   6831636.
    5. Bengio, Y.; Frasconi, P.; Simard, P. (1993). The problem of learning long-term dependencies in recurrent networks. IEEE International Conference on Neural Networks. IEEE. pp. 1183–1188. doi:10.1109/ICNN.1993.298725. ISBN   978-0-7803-0999-9.
    6. Pascanu, Razvan; Mikolov, Tomas; Bengio, Yoshua (21 November 2012). "On the difficulty of training Recurrent Neural Networks". arXiv:1211.5063 [cs.LG].
    7. Doya, K. (1992). "Bifurcations in the learning of recurrent neural networks". [Proceedings] 1992 IEEE International Symposium on Circuits and Systems. Vol. 6. IEEE. pp. 2777–2780. doi:10.1109/iscas.1992.230622. ISBN   0-7803-0593-0. S2CID   15069221.
    8. Bengio, Y.; Simard, P.; Frasconi, P. (March 1994). "Learning long-term dependencies with gradient descent is difficult". IEEE Transactions on Neural Networks. 5 (2): 157–166. doi:10.1109/72.279181. ISSN   1941-0093. PMID   18267787. S2CID   206457500.
    9. Hochreiter, Sepp; Schmidhuber, Jürgen (1997). "Long Short-Term Memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID   9377276. S2CID   1915014.
    10. Ioffe, Sergey; Szegedy, Christian (1 June 2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". International Conference on Machine Learning. PMLR: 448–456. arXiv: 1502.03167 .
    11. Santurkar, Shibani; Tsipras, Dimitris; Ilyas, Andrew; Madry, Aleksander (2018). "How Does Batch Normalization Help Optimization?". Advances in Neural Information Processing Systems. 31. Curran Associates, Inc.
    12. J. Schmidhuber., "Learning complex, extended sequences using the principle of history compression," Neural Computation, 4, pp. 234–242, 1992.
    13. Hinton, G. E.; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets" (PDF). Neural Computation. 18 (7): 1527–1554. CiteSeerX 10.1.1.76.1541. doi:10.1162/neco.2006.18.7.1527. PMID 16764513. S2CID 2309950.
    14. Hinton, G. (2009). "Deep belief networks". Scholarpedia. 4 (5): 5947. Bibcode:2009SchpJ...4.5947H. doi: 10.4249/scholarpedia.5947 .
    15. Schmidhuber, Jürgen (2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv: 1404.7828 . doi:10.1016/j.neunet.2014.09.003. PMID   25462637. S2CID   11715509.
    16. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE. pp. 770–778. arXiv: 1512.03385 . doi:10.1109/CVPR.2016.90. ISBN   978-1-4673-8851-1.
    17. Veit, Andreas; Wilber, Michael; Belongie, Serge (20 May 2016). "Residual Networks Behave Like Ensembles of Relatively Shallow Networks". arXiv: 1605.06431 [cs.CV].
    18. Glorot, Xavier; Bordes, Antoine; Bengio, Yoshua (14 June 2011). "Deep Sparse Rectifier Neural Networks". Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. PMLR: 315–323.
    19. Kumar, Siddharth Krishna (2017). "On weight initialization in deep neural networks". arXiv:1704.08863 [cs.LG].
    20. Yilmaz, Ahmet; Poli, Riccardo (1 September 2022). "Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean". Neural Networks. 153: 87–103. doi:10.1016/j.neunet.2022.05.030. ISSN   0893-6080. PMID   35714424. S2CID   249487697.
    21. Sven Behnke (2003). Hierarchical Neural Networks for Image Interpretation (PDF). Lecture Notes in Computer Science. Vol. 2766. Springer.
    22. "Sepp Hochreiter's Fundamental Deep Learning Problem (1991)". people.idsia.ch. Retrieved 7 January 2017.