Feedforward neural network

In a feedforward network, information always moves in one direction; it never goes backwards.
Simplified example of training a neural network for object detection: The network is trained on multiple images depicting either starfish or sea urchins, which are correlated with "nodes" that represent visual features. The starfish match with a ringed texture and a star outline, whereas most sea urchins match with a striped texture and oval shape. However, the instance of a ring-textured sea urchin creates a weakly weighted association between them.
Subsequent run of the network on an input image (left): [1] The network correctly detects the starfish. However, the weakly weighted association between ringed texture and sea urchin also confers a weak signal to the latter from one of two intermediate nodes. In addition, a shell that was not included in the training gives a weak signal for the oval shape, also resulting in a weak signal for the sea urchin output. These weak signals may produce a false positive for sea urchin.
In reality, textures and outlines would not be represented by single nodes, but rather by associated weight patterns of multiple nodes.

Feedforward refers to the recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. [2] Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. [3] However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation [4] [5] [6] [7] [8] or backpropagation through time. Thus neural networks cannot contain feedback like negative feedback or positive feedback, where the outputs feed back into the very same inputs and modify them, because this would form an infinite loop that cannot be unrolled in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks. [9]
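To make the inputs-to-output flow concrete, here is a minimal sketch of a single feedforward pass in Python with NumPy. The layer sizes, random weights, and the choice of a logistic activation are illustrative assumptions, not taken from any particular source.

    import numpy as np

    def sigmoid(v):
        # Logistic activation: squashes the weighted sum into (0, 1).
        return 1.0 / (1.0 + np.exp(-v))

    # Illustrative 3-input, 4-hidden, 2-output network with random weights.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))    # hidden-layer weights
    W2 = rng.normal(size=(2, 4))    # output-layer weights

    x = np.array([0.5, -1.0, 2.0])  # one input vector

    # Information flows strictly forward: input -> hidden -> output.
    hidden = sigmoid(W1 @ x)
    output = sigmoid(W2 @ hidden)
    print(output)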


Mathematical foundations

Activation function

The two historically common activation functions are both sigmoids, and are described by

$y(v_i) = \tanh(v_i) \quad \text{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}.$

The first is a hyperbolic tangent that ranges from −1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$-th node (neuron) and $v_i$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).

In recent developments of deep learning, the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.
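As a numerical illustration of these alternatives, the sketch below evaluates the activations named above at a few sample points; the helper names and test inputs are illustrative, while the formulas are the standard definitions.

    import numpy as np

    def tanh_act(v):
        return np.tanh(v)                 # ranges from -1 to 1

    def logistic(v):
        return 1.0 / (1.0 + np.exp(-v))   # ranges from 0 to 1

    def relu(v):
        return np.maximum(0.0, v)         # rectifier: max(0, v)

    def softplus(v):
        return np.log1p(np.exp(v))        # smooth approximation of the rectifier

    v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    for f in (tanh_act, logistic, relu, softplus):
        print(f.__name__, f(v))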

Learning

Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation.

We can represent the degree of error in an output node $j$ in the $n$-th data point (training example) by $e_j(n) = d_j(n) - y_j(n)$, where $d_j(n)$ is the desired target value for the $n$-th data point at node $j$, and $y_j(n)$ is the value produced at node $j$ when the $n$-th data point is given as an input.

The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $n$-th data point, given by

$\mathcal{E}(n) = \frac{1}{2} \sum_{\text{output node } j} e_j^2(n).$

Using gradient descent, the change in each weight $w_{ji}$ is

$\Delta w_{ji}(n) = -\eta \, \frac{\partial \mathcal{E}(n)}{\partial v_j(n)} \, y_i(n),$

where $y_i(n)$ is the output of the previous neuron $i$, and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, $\frac{\partial \mathcal{E}(n)}{\partial v_j(n)}$ denotes the partial derivative of the error $\mathcal{E}(n)$ according to the weighted sum $v_j(n)$ of the input connections of neuron $j$.

The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n) \, \phi'(v_j(n)),$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)} \, w_{kj}(n).$

This depends on the change in weights of the $k$-th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function. [10]
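Putting the update rules above together, the following is a minimal sketch of one backpropagation step for a two-layer network with logistic activations, written in Python with NumPy. The network shape, input, target, and learning rate are illustrative assumptions.

    import numpy as np

    def logistic(v):
        return 1.0 / (1.0 + np.exp(-v))

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))    # hidden weights (4 hidden, 3 inputs)
    W2 = rng.normal(size=(2, 4))    # output weights (2 outputs, 4 hidden)
    eta = 0.5                       # learning rate

    x = np.array([0.5, -1.0, 2.0])  # n-th data point
    d = np.array([1.0, 0.0])        # desired target values d_j(n)

    # Forward pass: compute induced local fields v and outputs y.
    v1 = W1 @ x;  y1 = logistic(v1)
    v2 = W2 @ y1; y2 = logistic(v2)

    # Output error e_j(n) = d_j(n) - y_j(n).
    e = d - y2

    # Output layer: -dE/dv_j = e_j * phi'(v_j).
    delta2 = e * y2 * (1.0 - y2)

    # Hidden layer: -dE/dv_j = phi'(v_j) * sum_k delta_k * w_kj.
    delta1 = (y1 * (1.0 - y1)) * (W2.T @ delta2)

    # Weight updates: Delta w_ji = eta * delta_j * y_i.
    W2 += eta * np.outer(delta2, y1)
    W1 += eta * np.outer(delta1, x)

For the logistic function the derivative $\phi'(v)$ simplifies to $y(1 - y)$, which is why the sketch reuses the stored outputs instead of recomputing the derivative from $v$.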

History

Timeline

Linear regression

Perceptron

If using a threshold, i.e. a step activation function, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function. [31]

Two-layer neural network capable of calculating XOR. Numbers in neurons represent their explicit threshold. Numbers annotating arrows represent the weight of the inputs. If the threshold of 2 is met, then a value of 1 is used for the weight multiplication to the next layer. Not meeting the threshold results in 0 being used. The bottom layer of inputs is not always considered a real neural network layer.
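As a sketch of how such a network computes XOR in code, the snippet below uses the classic construction the figure suggests: a hidden AND unit with threshold 2 feeding a weight of −2 into an output unit with threshold 1. The exact weights are an assumption based on the standard textbook version of this network.

    def step(v, threshold):
        # Linear threshold unit: fire (1) only if the weighted sum
        # meets the threshold.
        return 1 if v >= threshold else 0

    def xor_net(x1, x2):
        # Hidden unit: AND of the inputs (weights 1 and 1, threshold 2).
        h = step(1 * x1 + 1 * x2, threshold=2)
        # Output unit: inputs with weight 1 each, hidden unit with
        # weight -2, threshold 1.
        return step(1 * x1 + 1 * x2 - 2 * h, threshold=1)

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, xor_net(x1, x2))  # prints the XOR truth table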

Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule . It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.
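A minimal sketch of the delta rule for a single linear threshold unit follows; the training data (the logical AND function), initialization, and learning rate are illustrative choices.

    # Train a single linear threshold unit (perceptron) with the delta rule.
    # Training data: the logical AND function (an illustrative choice).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias (acts as a trainable threshold)
    eta = 0.1        # learning rate

    for epoch in range(20):
        for (x1, x2), target in data:
            y = 1 if w[0] * x1 + w[1] * x2 + b >= 0 else 0
            error = target - y              # the error drives the update
            w[0] += eta * error * x1
            w[1] += eta * error * x2
            b += eta * error

    print(w, b)  # the learned weights separate AND after training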

Multilayer perceptron

A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network consisting of fully connected neurons (hence the synonym fully connected network (FCN) sometimes used), typically with a nonlinear activation function, organized in at least three layers, and notable for being able to distinguish data that is not linearly separable. [32]

Other feedforward networks

1D convolutional neural network feed forward example

Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
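As a sketch of the feedforward computation in a 1D convolutional layer (as in the figure above), the snippet below slides a small kernel over an input sequence; the kernel values and input sequence are illustrative.

    import numpy as np

    x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 1.0])  # input sequence
    k = np.array([0.5, 1.0, 0.5])                   # convolution kernel

    # Each output is a weighted sum over a sliding window of the input,
    # so information still flows strictly forward through the layer.
    out = np.array([x[i:i + len(k)] @ k for i in range(len(x) - len(k) + 1)])
    print(out)  # equivalent to np.convolve(x, k[::-1], mode='valid')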

See also

References

  1. Ferrie, C.; Kaiser, S. (2019). Neural Networks for Babies. Sourcebooks. ISBN 978-1492671206.
  2. Zell, Andreas (1994). Simulation Neuronaler Netze [Simulation of Neural Networks] (in German) (1st ed.). Addison-Wesley. p. 73. ISBN 3-89319-554-8.
  3. Schmidhuber, Jürgen (2015-01-01). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv: 1404.7828 . doi:10.1016/j.neunet.2014.09.003. ISSN   0893-6080. PMID   25462637. S2CID   11715509.
  4. Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors (Masters) (in Finnish). University of Helsinki. p. 6–7.
  5. Kelley, Henry J. (1960). "Gradient theory of optimal flight paths". ARS Journal. 30 (10): 947–954. doi:10.2514/8.5282.
  6. Rosenblatt, Frank (1961). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington DC.
  7. Werbos, Paul (1982). "Applications of advances in nonlinear sensitivity analysis" (PDF). System modeling and optimization. Springer. pp. 762–770. Archived (PDF) from the original on 14 April 2016. Retrieved 2 July 2017.
  8. Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". In David E. Rumelhart, James L. McClelland, and the PDP research group (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986.
  9. Achler, T. (2023). "What AI, Neuroscience, and Cognitive Science Can Learn from Each Other: An Embedded Perspective". Cognitive Computation.
  10. Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation (2 ed.). Prentice Hall. ISBN   0-13-273350-1.
  11. Merriman, Mansfield. A List of Writings Relating to the Method of Least Squares: With Historical and Critical Notes. Vol. 4. Academy, 1877.
  12. Stigler, Stephen M. (1981). "Gauss and the Invention of Least Squares". Ann. Stat. 9 (3): 465–474. doi: 10.1214/aos/1176345451 .
  13. Schmidhuber, Jürgen (2022). "Annotated History of Modern AI and Deep Learning". arXiv: 2212.11279 [cs.NE].
  14. Bretscher, Otto (1995). Linear Algebra With Applications (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
  15. Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900 . Cambridge: Harvard. ISBN   0-674-40340-1.
  16. McCulloch, Warren S.; Pitts, Walter (1943-12-01). "A logical calculus of the ideas immanent in nervous activity" . The Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259. ISSN   1522-9602.
  17. Rosenblatt, Frank (1958). "The Perceptron: A Probabilistic Model For Information Storage And Organization in the Brain". Psychological Review. 65 (6): 386–408. CiteSeerX   10.1.1.588.3775 . doi:10.1037/h0042519. PMID   13602029. S2CID   12781225.
  18. Joseph, R. D. (1960). Contributions to Perceptron Theory, Cornell Aeronautical Laboratory Report No. VG-11 96--G-7, Buffalo.
  19. Rosenblatt, Frank (1962). Principles of Neurodynamics. Spartan, New York.
  20. Ivakhnenko, A. G. (1973). Cybernetic Predicting Devices. CCM Information Corporation.
  21. Ivakhnenko, A. G.; Grigorʹevich Lapa, Valentin (1967). Cybernetics and forecasting techniques. American Elsevier Pub. Co.
  22. Amari, Shun'ichi (1967). "A theory of adaptive pattern classifier". IEEE Transactions. EC (16): 279-307.
  23. Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors (Masters) (in Finnish). University of Helsinki. p. 6–7.
  24. Linnainmaa, Seppo (1976). "Taylor expansion of the accumulated rounding error". BIT Numerical Mathematics. 16 (2): 146–160. doi:10.1007/bf01931367. S2CID   122357351.
  25. Ostrovski, G.M., Volin,Y.M., and Boris, W.W. (1971). On the computation of derivatives. Wiss. Z. Tech. Hochschule for Chemistry, 13:382–384.
  26. Schmidhuber, Juergen (25 Oct 2014). "Who Invented Backpropagation?". IDSIA, Switzerland. Archived from the original on 30 July 2024. Retrieved 14 Sep 2024.
  27. Anderson, James A.; Rosenfeld, Edward, eds. (2000). Talking Nets: An Oral History of Neural Networks. The MIT Press. doi:10.7551/mitpress/6626.003.0016. ISBN   978-0-262-26715-1.
  28. Werbos, Paul J. (1994). The Roots of Backpropagation : From Ordered Derivatives to Neural Networks and Political Forecasting. New York: John Wiley & Sons. ISBN   0-471-59897-6.
  29. Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (October 1986). "Learning representations by back-propagating errors" . Nature. 323 (6088): 533–536. Bibcode:1986Natur.323..533R. doi:10.1038/323533a0. ISSN   1476-4687.
  30. Bengio, Yoshua; Ducharme, Réjean; Vincent, Pascal; Janvin, Christian (March 2003). "A neural probabilistic language model". The Journal of Machine Learning Research. 3: 1137–1155.
  31. Auer, Peter; Harald Burgsteiner; Wolfgang Maass (2008). "A learning rule for very simple universal approximators consisting of a single layer of perceptrons" (PDF). Neural Networks. 21 (5): 786–795. doi:10.1016/j.neunet.2007.12.036. PMID   18249524. Archived from the original (PDF) on 2011-07-06. Retrieved 2009-09-08.
  32. Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems. 2 (4): 303–314.