Residual neural network

A residual block in a deep residual network. Here, the residual connection skips two layers.

A residual neural network (also referred to as a residual network or ResNet) [1] is a deep learning architecture in which the layers learn residual functions with reference to the layer inputs. It was developed in 2015 for image recognition, and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) of that year. [2] [3]


As a point of terminology, "residual connection" refers to the specific architectural motif of $x \mapsto f(x) + x$, where $f$ is an arbitrary neural network module. The motif had been used previously (see §History for details). However, the publication of ResNet made it widely popular for feedforward networks, appearing in neural networks that are seemingly unrelated to ResNet.

The residual connection stabilizes the training and convergence of deep neural networks with hundreds of layers, and is a common motif in deep neural networks, such as transformer models (e.g., BERT, and GPT models such as ChatGPT), the AlphaGo Zero system, the AlphaStar system, and the AlphaFold system.

Mathematics

Residual connection

In a multilayer neural network model, consider a subnetwork with a certain number of stacked layers (e.g., 2 or 3). Denote the underlying function performed by this subnetwork as $H(x)$, where $x$ is the input to the subnetwork. Residual learning re-parameterizes this subnetwork and lets the parameter layers represent a "residual function" $F(x) = H(x) - x$. The output $y$ of this subnetwork is then represented as:

$y = F(x) + x$

The operation of "$+\, x$" is implemented via a "skip connection" that performs an identity mapping to connect the input of the subnetwork with its output. This connection is referred to as a "residual connection" in later work. The function $F(x)$ is often represented by matrix multiplication interlaced with activation functions and normalization operations (e.g., batch normalization or layer normalization). As a whole, one of these subnetworks is referred to as a "residual block". [1] A deep residual network is constructed by simply stacking these blocks.
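
A minimal code sketch may make the re-parameterization concrete. The following module uses PyTorch (an assumption; the article does not prescribe a framework) with hypothetical layer sizes, and implements $y = F(x) + x$ where $F$ is two fully connected layers:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # The residual function F: two linear layers with a nonlinearity.
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = F(x) + x: the "+ x" is the identity skip connection.
        return self.f(x) + x

x = torch.randn(8, 64)        # a batch of 8 inputs of width 64
y = ResidualBlock(64)(x)      # same shape as x: torch.Size([8, 64])
```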

Long short-term memory (LSTM) has a memory mechanism that serves as a residual connection. [4] In an LSTM without a forget gate, an input $x_t$ is processed by a function $F$ and added to a memory cell $c_t$, resulting in $c_{t+1} = c_t + F(x_t)$. An LSTM with a forget gate essentially functions as a highway network.

To stabilize the variance of the layers' inputs, it is recommended to replace the residual connections $x + f(x)$ with $x + f(x)/\sqrt{L}$, where $L$ is the total number of residual layers. [5]
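
A sketch of this scaling under the same assumptions as above (PyTorch, hypothetical layer sizes), with the total depth $L$ passed in as num_layers:

```python
import math
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block with the 1/sqrt(L) scaling described above (a sketch;
    num_layers stands for the total number of residual layers L)."""
    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.scale = 1.0 / math.sqrt(num_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x + f(x)/sqrt(L) keeps the variance of block outputs roughly constant
        # as depth grows, instead of accumulating variance at every block.
        return x + self.scale * self.f(x)
```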

Projection connection

If the function $F$ is of type $F: \mathbb{R}^n \to \mathbb{R}^m$ where $n \neq m$, then $F(x) + x$ is undefined. To handle this special case, a projection connection is used:

$y = F(x) + P(x)$

where $P(x)$ is typically a linear projection, defined by $P(x) = Mx$, where $M$ is an $m \times n$ matrix. The matrix $M$ is trained via backpropagation, as is any other parameter of the model.
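
A sketch of a projection connection (again PyTorch, with hypothetical dimensions), where the skip path is the learned linear map $P(x) = Mx$ rather than the identity:

```python
import torch
import torch.nn as nn

class ProjectionResidualBlock(nn.Module):
    """Sketch of a projection connection for mismatched dimensions: F maps
    n -> m, so the skip path uses a learned m x n matrix instead of identity."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )
        self.proj = nn.Linear(in_dim, out_dim, bias=False)  # P(x) = Mx, trained by backprop

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = F(x) + P(x); both terms now live in R^m, so the sum is well defined.
        return self.f(x) + self.proj(x)
```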

Signal propagation

The introduction of identity mappings facilitates signal propagation in both forward and backward paths. [6]

Forward propagation

If the output of the $\ell$-th residual block is the input to the $(\ell+1)$-th residual block (assuming no activation function between blocks), then the $(\ell+1)$-th input is:

$x_{\ell+1} = F(x_\ell) + x_\ell$

Applying this formulation recursively, e.g.:

$x_{\ell+2} = F(x_{\ell+1}) + x_{\ell+1} = F(x_{\ell+1}) + F(x_\ell) + x_\ell$

yields the general relationship:

$x_L = x_\ell + \sum_{i=\ell}^{L-1} F(x_i)$

where $L$ is the index of a residual block and $\ell$ is the index of some earlier block. This formulation suggests that there is always a signal that is directly sent from a shallower block $\ell$ to a deeper block $L$.
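
The telescoping sum can be checked numerically. The following sketch (PyTorch, with hypothetical toy linear layers standing in for the residual functions $F$) verifies that the output of a stack of residual blocks equals an earlier input plus the sum of all subsequent residual branches:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
blocks = [nn.Linear(16, 16) for _ in range(5)]    # five toy residual functions F_i

x = torch.randn(16)
activations = [x]
for f in blocks:                                   # x_{i+1} = x_i + F_i(x_i)
    activations.append(activations[-1] + f(activations[-1]))

# Telescoping identity: x_L = x_0 + sum_i F_i(x_i)
lhs = activations[-1]
rhs = activations[0] + sum(f(a) for f, a in zip(blocks, activations[:-1]))
print(torch.allclose(lhs, rhs, atol=1e-6))         # True
```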

Backward propagation

The residual learning formulation provides the added benefit of mitigating the vanishing gradient problem to some extent. However, it is crucial to acknowledge that the vanishing gradient issue is not the root cause of the degradation problem, which is tackled through the use of normalization. To observe the effect of residual blocks on backpropagation, consider the partial derivative of a loss function $\mathcal{E}$ with respect to some residual block input $x_\ell$. Using the equation above from forward propagation for a later residual block $L$: [6]

$\frac{\partial \mathcal{E}}{\partial x_\ell} = \frac{\partial \mathcal{E}}{\partial x_L} \frac{\partial x_L}{\partial x_\ell} = \frac{\partial \mathcal{E}}{\partial x_L} \left( 1 + \frac{\partial}{\partial x_\ell} \sum_{i=\ell}^{L-1} F(x_i) \right) = \frac{\partial \mathcal{E}}{\partial x_L} + \frac{\partial \mathcal{E}}{\partial x_L} \frac{\partial}{\partial x_\ell} \sum_{i=\ell}^{L-1} F(x_i)$

This formulation suggests that the gradient computation of a shallower layer, $\frac{\partial \mathcal{E}}{\partial x_\ell}$, always has a later term $\frac{\partial \mathcal{E}}{\partial x_L}$ that is directly added. Even if the gradients of the $F(x_i)$ terms are small, the total gradient $\frac{\partial \mathcal{E}}{\partial x_\ell}$ resists vanishing due to the added term $\frac{\partial \mathcal{E}}{\partial x_L}$.
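
The same effect can be observed with automatic differentiation. In the sketch below (PyTorch autograd, with hypothetical toy blocks whose residual branches are deliberately shrunk), the gradient at the input stays close to the gradient at the output even after many blocks, because of the identity term:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
depth, dim = 50, 16
blocks = [nn.Linear(dim, dim) for _ in range(depth)]

# Shrink every residual branch so its gradient contribution is tiny.
for f in blocks:
    nn.init.normal_(f.weight, std=1e-3)
    nn.init.zeros_(f.bias)

x0 = torch.randn(dim, requires_grad=True)
x = x0
for f in blocks:
    x = x + torch.tanh(f(x))      # x_{i+1} = x_i + F_i(x_i)

loss = x.sum()
loss.backward()

# Even after 50 blocks with near-zero branches, the gradient at the input is
# approximately dE/dx_L (here a vector of ones), thanks to the "+1" identity term.
print(x0.grad)
```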

Variants of residual blocks

Two variants of convolutional residual blocks. Left: a basic block that has two 3x3 convolutional layers. Right: a bottleneck block that has a 1x1 convolutional layer for dimension reduction, a 3x3 convolutional layer, and another 1x1 convolutional layer for dimension restoration.

Basic block

A basic block is the simplest building block studied in the original ResNet. [1] This block consists of two sequential 3x3 convolutional layers and a residual connection. The input and output dimensions of both layers are equal.
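
A sketch of a basic block in PyTorch (an assumption, as above; the channel count is hypothetical), following the convolution, batch normalization, and ReLU ordering of the original ResNet:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Sketch of the basic ResNet block: two 3x3 convolutions with batch norm,
    plus an identity skip connection (stride 1, equal input/output channels)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual addition, then the final ReLU
```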

Block diagram of ResNet (2015). It shows a ResNet block with and without the 1x1 convolution. The 1x1 convolution (with stride) can be used to change the shape of the array, which is necessary for residual connection through an upsampling/downsampling layer.

Bottleneck block

A bottleneck block [1] consists of three sequential convolutional layers and a residual connection. The first layer in this block is a 1x1 convolution for dimension reduction (e.g., to 1/4 of the input dimension); the second layer performs a 3x3 convolution; the last layer is another 1x1 convolution for dimension restoration. The models of ResNet-50, ResNet-101, and ResNet-152 are all based on bottleneck blocks. [1]
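
A sketch of a bottleneck block under the same assumptions (PyTorch, hypothetical channel counts, 4x channel reduction as in ResNet-50):

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """Sketch of a bottleneck block: 1x1 reduce -> 3x3 -> 1x1 restore,
    with an identity skip connection."""
    def __init__(self, in_channels: int):
        super().__init__()
        width = in_channels // 4   # e.g. 256 -> 64
        self.conv1 = nn.Conv2d(in_channels, width, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, in_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))    # 1x1: dimension reduction
        out = self.relu(self.bn2(self.conv2(out)))  # 3x3 convolution
        out = self.bn3(self.conv3(out))             # 1x1: dimension restoration
        return self.relu(out + x)                   # identity skip, then final ReLU
```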

Pre-activation block

The pre-activation residual block [6] applies the activation functions before applying the residual function $F$. Formally, the computation of a pre-activation residual block can be written as:

$x_{\ell+1} = x_\ell + F(\phi(x_\ell))$

where $\phi$ can be any activation (e.g. ReLU) or normalization (e.g. LayerNorm) operation. This design reduces the number of non-identity mappings between residual blocks, and was used to train models with 200 to over 1000 layers. [6]
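
A sketch of a pre-activation block (PyTorch, hypothetical sizes), where $\phi$ is batch normalization followed by ReLU and the skip path is a pure identity:

```python
import torch
import torch.nn as nn

class PreActBlock(nn.Module):
    """Sketch of a pre-activation residual block: normalization and activation
    are applied *before* each convolution, so the skip path is a pure identity."""
    def __init__(self, channels: int):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x_{l+1} = x_l + F(phi(x_l)), with phi = BN + ReLU
        out = self.conv1(self.relu(self.bn1(x)))
        out = self.conv2(self.relu(self.bn2(out)))
        return x + out
```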

Since GPT-2, transformer blocks have been mostly implemented as pre-activation blocks. This is often referred to as "pre-normalization" in the literature of transformer models. [7]
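
A sketch of a pre-normalization ("pre-LN") transformer block (PyTorch, hypothetical sizes; a recent PyTorch version is assumed for batch_first), with residual connections around both the attention and MLP sublayers:

```python
import torch
import torch.nn as nn

class PreNormTransformerBlock(nn.Module):
    """Sketch of a pre-normalization transformer block: LayerNorm is applied
    before attention and before the MLP, with identity skip connections."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around attention
        x = x + self.mlp(self.norm2(x))                    # residual around the MLP
        return x
```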

The original ResNet-18 architecture. Up to 152 layers were trained in the original publication (as "ResNet-152").

Applications

Originally, ResNet was designed for computer vision. [1] [8] [9]

The Transformer architecture includes residual connections.

All transformer architectures include residual connections. Indeed, very deep transformers cannot be trained without them. [10]

The original ResNet paper made no claim on being inspired by biological systems. However, later research has related ResNet to biologically-plausible algorithms. [11] [12]

A study published in Science in 2023 [13] disclosed the complete connectome of an insect brain (specifically that of a fruit fly larva). This study discovered "multilayer shortcuts" that resemble the skip connections in artificial neural networks, including ResNets.

History

Previous work

Residual connections were noted in neuroanatomy, such as by Lorente de Nó (1938). [14] :Fig 3 McCulloch and Pitts (1943) proposed artificial neural networks and considered those with residual connections. [15] :Fig 1.h

In 1961, Frank Rosenblatt described a three-layer multilayer perceptron (MLP) model with skip connections. [16] :313,Chapter 15 The model was referred to as a "cross-coupled system", and the skip connections were forms of cross-coupled connections.

During the late 1980s, "skip-layer" connections were sometimes used in neural networks. [17] [18] For example, Lang and Witbrock (1988) [19] trained a fully connected feedforward network where each layer skip-connects to all subsequent layers, like the later DenseNet (2016). In this work, the residual connection had the form $x \mapsto F(x) + P(x)$, where $P$ is a randomly-initialized projection connection. They termed it a "short-cut connection".

The long short-term memory (LSTM) cell can process data sequentially and keep its hidden state through time. The cell state $c_t$ can function as a generalized residual connection.

Degradation problem

Sepp Hochreiter discovered the vanishing gradient problem in 1991 [20] and argued that it explained why the then-prevalent forms of recurrent neural networks did not work for long sequences. He and Schmidhuber later designed the LSTM architecture to solve this problem, [4] [21] which has a "cell state" that can function as a generalized residual connection. The highway network (2015) [22] [23] applied the idea of an LSTM unfolded in time to feedforward neural networks. ResNet is equivalent to an open-gated highway network.

Standard (left) and unfolded (right) basic recurrent neural network

During the early days of deep learning, there were attempts to train increasingly deep models. Notable examples included AlexNet (2012), which had 8 layers, and VGG-19 (2014), which had 19 layers. [24] However, stacking too many layers led to a steep reduction in training accuracy, [25] known as the "degradation" problem. [1] In theory, adding additional layers to deepen a network should not result in a higher training loss, but this is what happened with VGGNet. [1] If the extra layers could be set as identity mappings, however, the deeper network would represent the same function as its shallower counterpart. There is some evidence that the optimizer is not able to approach identity mappings for the parameterized layers, and the benefit of residual connections was to allow identity mappings by default. [26]

In 2014, the state of the art was training deep neural networks with 20 to 30 layers. [27] The ResNet team attempted to go deeper by empirically testing various methods for training deeper networks, until they arrived at the ResNet architecture. [28]

Subsequent work

DenseNet (2016) [29] connects the output of each layer to the input of each subsequent layer:

$x_{\ell+1} = F(x_1, x_2, \dots, x_\ell)$
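
A sketch of this connectivity pattern (PyTorch, hypothetical channel counts): each layer receives the concatenation of all earlier feature maps rather than their sum:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """Sketch of a DenseNet-style layer: its input is the concatenation of the
    outputs of all earlier layers."""
    def __init__(self, in_channels: int, growth: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, features):
        # x_{l+1} = F([x_1, ..., x_l]): concatenate every previous feature map.
        return self.f(torch.cat(features, dim=1))

# Usage: keep a running list of feature maps and feed the whole list to each layer.
x = torch.randn(1, 16, 8, 8)
features = [x]
for i in range(3):
    layer = DenseLayer(in_channels=16 + i * 12, growth=12)
    features.append(layer(features))
```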

Stochastic depth [30] is a regularization method that randomly drops a subset of layers and lets the signal propagate through the identity skip connections. Also known as DropPath, this regularizes training for deep models, such as vision transformers. [31]
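
A sketch of stochastic depth / DropPath (PyTorch; drop_prob is a hypothetical hyperparameter): during training, the residual branch is randomly zeroed per sample while the identity path is always kept:

```python
import torch

def drop_path(residual: torch.Tensor, drop_prob: float, training: bool) -> torch.Tensor:
    """Sketch of stochastic depth (DropPath): randomly drop the residual branch
    for each sample during training and rescale the surviving branches."""
    if not training or drop_prob == 0.0:
        return residual
    keep_prob = 1.0 - drop_prob
    # One Bernoulli mask per sample in the batch, broadcast over the other dims.
    shape = (residual.shape[0],) + (1,) * (residual.ndim - 1)
    mask = torch.bernoulli(torch.full(shape, keep_prob, device=residual.device))
    return residual * mask / keep_prob

# Inside a residual block's forward pass, one would use:
#   return x + drop_path(self.f(x), drop_prob=0.1, training=self.training)
```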

ResNeXt block diagram.

ResNeXt (2017) combines the Inception module with ResNet. [32] [8]

Squeeze-and-Excitation Networks (2018) added squeeze-and-excitation (SE) modules to ResNet. [33] An SE module is applied after a convolution, and takes a tensor of shape (height, width, channels) as input. Each channel is averaged over the spatial dimensions, resulting in a vector of shape (channels). This is then passed through a multilayer perceptron (with an architecture such as linear-ReLU-linear-sigmoid) before it is multiplied with the original tensor.
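
A sketch of an SE module (PyTorch, which uses a channel-first layout rather than the (height, width, channels) order above; the reduction ratio is a hypothetical choice):

```python
import torch
import torch.nn as nn

class SEModule(nn.Module):
    """Sketch of a squeeze-and-excitation module: global-average-pool each
    channel, pass through a small linear-ReLU-linear-sigmoid MLP, and rescale."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeeze = x.mean(dim=(2, 3))              # average each channel: shape (b, c)
        scale = self.mlp(squeeze).view(b, c, 1, 1)
        return x * scale                          # reweight the original tensor
```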

References

  1. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (10 Dec 2015). Deep Residual Learning for Image Recognition. arXiv:1512.03385.
  2. "ILSVRC2015 Results". image-net.org.
  3. Deng, Jia; Dong, Wei; Socher, Richard; Li, Li-Jia; Li, Kai; Fei-Fei, Li (2009). "ImageNet: A large-scale hierarchical image database". CVPR.
  4. Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
  5. Hanin, Boris; Rolnick, David (2018). "How to Start Training: The Effect of Initialization and Architecture". Advances in Neural Information Processing Systems. 31. Curran Associates, Inc. arXiv: 1803.01719 .
  6. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2015). "Identity Mappings in Deep Residual Networks". arXiv:1603.05027 [cs.CV].
  7. Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (14 February 2019). "Language models are unsupervised multitask learners" (PDF). Archived (PDF) from the original on 6 February 2021. Retrieved 19 December 2020.
  8. Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "8.6. Residual Networks (ResNet) and ResNeXt". Dive into Deep Learning. Cambridge: Cambridge University Press. ISBN 978-1-009-38943-3.
  9. Szegedy, Christian; Ioffe, Sergey; Vanhoucke, Vincent; Alemi, Alex (2016). "Inception-v4, Inception-ResNet and the impact of residual connections on learning". arXiv: 1602.07261 [cs.CV].
  10. Dong, Yihe; Cordonnier, Jean-Baptiste; Loukas, Andreas (2021). "Attention is not all you need: pure attention loses rank doubly exponentially with depth". arXiv: 2103.03404 [cs.LG].
  11. Liao, Qianli; Poggio, Tomaso (2016). Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex. arXiv: 1604.03640 .
  12. Xiao, Will; Chen, Honglin; Liao, Qianli; Poggio, Tomaso (2018). Biologically-Plausible Learning Algorithms Can Scale to Large Datasets. arXiv: 1811.03567 .
  13. Winding, Michael; Pedigo, Benjamin; Barnes, Christopher; Patsolic, Heather; Park, Youngser; Kazimiers, Tom; Fushiki, Akira; Andrade, Ingrid; Khandelwal, Avinash; Valdes-Aleman, Javier; Li, Feng; Randel, Nadine; Barsotti, Elizabeth; Correia, Ana; Fetter, Fetter; Hartenstein, Volker; Priebe, Carey; Vogelstein, Joshua; Cardona, Albert; Zlatic, Marta (10 Mar 2023). "The connectome of an insect brain". Science. 379 (6636): eadd9330. bioRxiv   10.1101/2022.11.28.516756v1 . doi:10.1126/science.add9330. PMC   7614541 . PMID   36893230. S2CID   254070919.
  14. De Nó, Rafael Lorente (1938-05-01). "Analysis of the Activity of the Chains of Internuncial Neurons". Journal of Neurophysiology. 1 (3): 207–244. doi:10.1152/jn.1938.1.3.207. ISSN 0022-3077.
  15. McCulloch, Warren S.; Pitts, Walter (1943-12-01). "A logical calculus of the ideas immanent in nervous activity". The Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259. ISSN   1522-9602.
  16. Rosenblatt, Frank (1961). Principles of neurodynamics. perceptrons and the theory of brain mechanisms (PDF).
  17. Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning internal representations by error propagation", Parallel Distributed Processing. Vol. 1. 1986.
  18. Venables, W. N.; Ripley, Brain D. (1994). Modern Applied Statistics with S-Plus. Springer. pp. 261–262. ISBN   9783540943501.
  19. Lang, Kevin; Witbrock, Michael (1988). "Learning to tell two spirals apart" (PDF). Proceedings of the 1988 Connectionist Models Summer School: 52–59.
  20. Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science, advisor: J. Schmidhuber.
  21. Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation . 12 (10): 2451–2471. CiteSeerX   10.1.1.55.5709 . doi:10.1162/089976600300015015. PMID   11032042. S2CID   11598600.
  22. Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (3 May 2015). "Highway Networks". arXiv: 1505.00387 [cs.LG].
  23. Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (22 July 2015). "Training Very Deep Networks". arXiv: 1507.06228 [cs.LG].
  24. Simonyan, Karen; Zisserman, Andrew (2015-04-10). "Very Deep Convolutional Networks for Large-Scale Image Recognition". arXiv: 1409.1556 [cs.CV].
  25. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification". arXiv: 1502.01852 [cs.CV].
  26. He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Identity Mappings in Deep Residual Networks". In Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max (eds.). Computer Vision – ECCV 2016. Vol. 9908. Cham: Springer International Publishing. pp. 630–645. doi:10.1007/978-3-319-46493-0_38. ISBN   978-3-319-46492-3 . Retrieved 2024-09-19.
  27. Simonyan, Karen; Zisserman, Andrew (2015-04-10). "Very Deep Convolutional Networks for Large-Scale Image Recognition". arXiv: 1409.1556 [cs.CV].
  28. Linn, Allison (2015-12-10). "Microsoft researchers win ImageNet computer vision challenge". The AI Blog. Retrieved 2024-06-29.
  29. Huang, Gao; Liu, Zhuang; van der Maaten, Laurens; Weinberger, Kilian (2016). Densely Connected Convolutional Networks. arXiv: 1608.06993 .
  30. Huang, Gao; Sun, Yu; Liu, Zhuang; Weinberger, Kilian (2016). Deep Networks with Stochastic Depth. arXiv: 1603.09382 .
  31. Lee, Youngwan; Kim, Jonghee; Willette, Jeffrey; Hwang, Sung Ju (2022). "MPViT: Multi-Path Vision Transformer for Dense Prediction": 7287–7296. arXiv:2112.11010.
  32. Xie, Saining; Girshick, Ross; Dollar, Piotr; Tu, Zhuowen; He, Kaiming (2017). "Aggregated Residual Transformations for Deep Neural Networks": 1492–1500. arXiv:1611.05431.
  33. Hu, Jie; Shen, Li; Sun, Gang (2018). "Squeeze-and-Excitation Networks": 7132–7141.