Hebbian theory

Hebbian theory is a neuropsychological theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior. [1] The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows:

Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased. [1]

The theory is often summarized as "Neurons that fire together, wire together." [2] However, Hebb emphasized that cell A needs to "take part in firing" cell B, and such causality can occur only if cell A fires just before, not at the same time as, cell B. This aspect of causation in Hebb's work foreshadowed what is now known about spike-timing-dependent plasticity, which requires temporal precedence. [3]

The theory attempts to explain associative or Hebbian learning, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. It also provides a biological basis for errorless learning methods for education and memory rehabilitation. In the study of neural networks in cognitive function, it is often regarded as the neuronal basis of unsupervised learning.

Hebbian engrams and cell assembly theory

Hebbian theory concerns how neurons might connect themselves to become engrams. Hebb's theories on the form and function of cell assemblies can be understood from the following: [1]:70

The general idea is an old one, that any two cells or systems of cells that are repeatedly active at the same time will tend to become 'associated' so that activity in one facilitates activity in the other.

Hebb also wrote: [1]:63

When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.

D. Alan Allport posits additional ideas regarding cell assembly theory and its role in forming engrams, along the lines of the concept of auto-association, described as follows:

If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly inter-associated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become 'auto-associated'. We may call a learned (auto-associated) pattern an engram. [4]:44

Work in the laboratory of Eric Kandel has provided evidence for the involvement of Hebbian learning mechanisms at synapses in the marine gastropod Aplysia californica. Experiments on Hebbian synapse modification mechanisms at the central nervous system synapses of vertebrates are much more difficult to control than are experiments with the relatively simple peripheral nervous system synapses studied in marine invertebrates. Much of the work on long-lasting synaptic changes between vertebrate neurons (such as long-term potentiation) involves the use of non-physiological experimental stimulation of brain cells. However, some of the physiologically relevant synapse modification mechanisms that have been studied in vertebrate brains do seem to be examples of Hebbian processes. One such study [5] reviews results from experiments indicating that long-lasting changes in synaptic strengths can be induced by physiologically relevant synaptic activity working through both Hebbian and non-Hebbian mechanisms.

Principles

From the point of view of artificial neurons and artificial neural networks, Hebb's principle can be described as a method of determining how to alter the weights between model neurons. The weight between two neurons increases if the two neurons activate simultaneously, and decreases if they activate separately. Nodes that tend to be either both positive or both negative at the same time have strong positive weights, while those that tend to be opposite have strong negative weights.

The following is a formulaic description of Hebbian learning (many other descriptions are possible):

    w_{ij} = x_i x_j

where w_{ij} is the weight of the connection from neuron j to neuron i and x_i the input for neuron i. Note that this is pattern learning (weights updated after every training example). In a Hopfield network, connections w_{ij} are set to zero if i = j (no reflexive connections allowed). With binary neurons (activations either 0 or 1), connections would be set to 1 if the connected neurons have the same activation for a pattern.
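As a concrete illustration, the following minimal Python sketch applies the rule as an incremental update, \Delta w_{ij} = \eta x_i x_j; the learning rate \eta, the toy pattern, and the function name are illustrative assumptions rather than part of Hebb's formulation:

    import numpy as np

    def hebbian_update(w, x, eta=0.1):
        """One Hebbian pattern-learning step: strengthen w[i, j] in
        proportion to the coactivation x[i] * x[j] (outer product)."""
        w = w + eta * np.outer(x, x)
        np.fill_diagonal(w, 0.0)  # Hopfield convention: no reflexive connections
        return w

    # Example: units 0 and 1 are repeatedly co-active, unit 2 stays silent
    w = np.zeros((3, 3))
    for _ in range(5):
        w = hebbian_update(w, np.array([1.0, 1.0, 0.0]))
    print(w)  # w[0, 1] and w[1, 0] have grown; weights touching unit 2 stay 0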

When several training patterns are used, the expression becomes an average of the individual ones:

    w_{ij} = \frac{1}{p} \sum_{k=1}^{p} x_i^k x_j^k

where w_{ij} is the weight of the connection from neuron j to neuron i, p is the number of training patterns, and x_i^k the k-th input for neuron i. This is learning by epoch (weights updated after all the training examples are presented), with the averaging applicable to both discrete and continuous training sets. Again, in a Hopfield network, connections w_{ij} are set to zero if i = j (no reflexive connections).
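A similar Python sketch, under the same caveats, of this epoch form for a small Hopfield-style network (the bipolar patterns and network size are arbitrary choices):

    import numpy as np

    def hopfield_weights(patterns):
        """Epoch Hebbian learning: w_ij = (1/p) * sum_k x_i^k x_j^k,
        averaged over all p training patterns, with the diagonal zeroed."""
        X = np.asarray(patterns, dtype=float)  # shape (p, n)
        w = X.T @ X / X.shape[0]               # average of the outer products
        np.fill_diagonal(w, 0.0)               # no reflexive connections
        return w

    # Two bipolar (+1/-1) patterns stored in a 4-unit network
    w = hopfield_weights([[1, -1, 1, -1],
                          [1, 1, -1, -1]])
    print(w)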

A variation of Hebbian learning that takes into account phenomena such as blocking and many other aspects of neural learning is the mathematical model of Harry Klopf. [6] Klopf's model reproduces a great many biological phenomena and is also simple to implement.

Relationship to unsupervised learning, stability, and generalization

Because of the simple nature of Hebbian learning, based only on the coincidence of pre- and post-synaptic activity, it may not be intuitively clear why this form of plasticity leads to meaningful learning. However, it can be shown that Hebbian plasticity does pick up the statistical properties of the input in a way that can be categorized as unsupervised learning.

This can be mathematically shown in a simplified example. Let us work under the simplifying assumption of a single rate-based neuron of rate y(t), whose inputs have rates x_1(t) \dots x_N(t). The response of the neuron y(t) is usually described as a linear combination of its inputs, \sum_i w_i x_i, followed by a response function f:

    y = f\left( \sum_{i=1}^{N} w_i x_i \right)

As defined in the previous sections, Hebbian plasticity describes the evolution in time of the synaptic weight w_i:

    \frac{dw_i}{dt} = \eta \, x_i y

Assuming, for simplicity, an identity response function f(a) = a, we can write

    \frac{dw_i}{dt} = \eta \, x_i \sum_{j=1}^{N} w_j x_j

or in matrix form:

    \frac{d\mathbf{w}}{dt} = \eta \, \mathbf{x} \mathbf{x}^T \mathbf{w}

As before, if training is done by epoch, an average \langle \cdot \rangle over the discrete or continuous (time) training set of \mathbf{x} can be taken:

    \frac{d\mathbf{w}}{dt} = \left\langle \eta \, \mathbf{x} \mathbf{x}^T \mathbf{w} \right\rangle = \eta \left\langle \mathbf{x} \mathbf{x}^T \right\rangle \mathbf{w} = \eta C \mathbf{w}

where C = \langle \mathbf{x} \mathbf{x}^T \rangle is the correlation matrix of the input, under the additional assumption that \langle \mathbf{x} \rangle = 0 (i.e. the average of the inputs is zero). This is a system of coupled linear differential equations. Since C is symmetric, it is also diagonalizable, and the solution can be found, by working in the basis of its eigenvectors, to be of the form

    \mathbf{w}(t) = k_1 e^{\eta \alpha_1 t} \mathbf{c}_1 + k_2 e^{\eta \alpha_2 t} \mathbf{c}_2 + \dots + k_N e^{\eta \alpha_N t} \mathbf{c}_N

where k_i are arbitrary constants, \mathbf{c}_i are the eigenvectors of C, and \alpha_i their corresponding eigenvalues. Since a correlation matrix is always positive semi-definite, the eigenvalues are all non-negative, and for any non-degenerate input at least one is strictly positive, so the above solution is exponentially divergent in time. This is an intrinsic problem: this version of Hebb's rule is unstable, since in any network with a dominant signal the synaptic weights will increase or decrease exponentially. Intuitively, this is because whenever the presynaptic neuron excites the postsynaptic neuron, the weight between them is reinforced, causing an even stronger excitation in the future, and so forth, in a self-reinforcing way. One may think a solution is to limit the firing rate of the postsynaptic neuron by adding a non-linear, saturating response function f, but in fact, it can be shown that for any neuron model, Hebb's rule is unstable. [7] Therefore, network models of neurons usually employ other learning theories such as BCM theory, Oja's rule, [8] or the generalized Hebbian algorithm.
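The divergence, and the alignment with the dominant eigenvector discussed below, can be checked numerically. The following Python sketch integrates dw/dt = \eta C \mathbf{w} with Euler steps on synthetic inputs; \eta, the step size, and the input statistics are arbitrary assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    # Zero-mean synthetic inputs; the first coordinate has the largest variance
    X = rng.normal(size=(10_000, 3)) * np.array([2.0, 1.0, 0.5])
    C = X.T @ X / len(X)                # input correlation matrix <x x^T>

    w = rng.normal(size=3)
    eta, dt = 0.1, 0.1
    for _ in range(200):                # Euler steps of dw/dt = eta * C @ w
        w = w + dt * eta * (C @ w)

    top = np.linalg.eigh(C)[1][:, -1]   # eigenvector of the largest eigenvalue
    print(np.linalg.norm(w))            # the norm has grown exponentially
    print(abs(w / np.linalg.norm(w) @ top))  # direction is ~1.0, i.e. aligned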

Regardless, even for the unstable solution above, one can see that, when sufficient time has passed, one of the terms dominates over the others, and

    \mathbf{w}(t) \approx e^{\eta \alpha^* t} \mathbf{c}^*

where \alpha^* is the largest eigenvalue of C and \mathbf{c}^* its eigenvector. At this time, the postsynaptic neuron performs the following operation:

    y \approx e^{\eta \alpha^* t} \, \mathbf{c}^* \cdot \mathbf{x}

Because, again, \mathbf{c}^* is the eigenvector corresponding to the largest eigenvalue of the correlation matrix between the x_i's, this corresponds exactly to computing the first principal component of the input.

This mechanism can be extended to performing a full PCA (principal component analysis) of the input by adding further postsynaptic neurons, provided the postsynaptic neurons are prevented from all picking up the same principal component, for example by adding lateral inhibition in the postsynaptic layer. We have thus connected Hebbian learning to PCA, which is an elementary form of unsupervised learning, in the sense that the network can pick up useful statistical aspects of the input, and "describe" them in a distilled way in its output. [9]
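As a sketch of one such stabilized variant, Oja's rule adds a decay term that keeps the weight vector normalized, so a single linear neuron converges to the first principal component instead of diverging (the learning rate and the synthetic input statistics are again arbitrary assumptions):

    import numpy as np

    rng = np.random.default_rng(1)

    # Zero-mean inputs with a clearly dominant direction of variance
    X = rng.normal(size=(5_000, 3)) * np.array([3.0, 1.0, 0.3])

    w = rng.normal(size=3)
    eta = 0.01
    for x in X:                     # Oja's rule: dw = eta * y * (x - y * w)
        y = w @ x                   # linear postsynaptic response
        w += eta * y * (x - y * w)  # the -y*w decay keeps ||w|| near 1

    pc1 = np.linalg.eigh(X.T @ X / len(X))[1][:, -1]
    print(np.linalg.norm(w))        # stays bounded, close to 1
    print(abs(w @ pc1))             # close to 1: w has converged to the first PC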

Limitations

Despite the common use of Hebbian models for long-term potentiation, Hebb's principle does not cover all forms of synaptic long-term plasticity. Hebb did not postulate any rules for inhibitory synapses, nor did he make predictions for anti-causal spike sequences (presynaptic neuron fires after the postsynaptic neuron). Synaptic modification may not occur only between activated neurons A and B, but at neighboring synapses as well. [10] All forms of heterosynaptic and homeostatic plasticity are therefore considered non-Hebbian. An example is retrograde signaling to presynaptic terminals. [11] The compound most commonly identified as fulfilling this retrograde transmitter role is nitric oxide, which, due to its high solubility and diffusivity, often exerts effects on nearby neurons. [12] This type of diffuse synaptic modification, known as volume learning, is not included in the traditional Hebbian model. [13]

Hebbian learning account of mirror neurons

Hebbian learning and spike-timing-dependent plasticity have been used in an influential theory of how mirror neurons emerge. [14] [15] Mirror neurons are neurons that fire both when an individual performs an action and when the individual sees [16] or hears [17] another perform a similar action. The discovery of these neurons has been very influential in explaining how individuals make sense of the actions of others, by showing that, when a person perceives the actions of others, the person activates the motor programs which they would use to perform similar actions. The activation of these motor programs then adds information to the perception and helps predict what the person will do next based on the perceiver's own motor program. A challenge has been to explain how individuals come to have neurons that respond both while performing an action and while hearing or seeing another perform similar actions.

Christian Keysers and David Perrett suggested that as an individual performs a particular action, the individual will see, hear, and feel the performing of the action. These re-afferent sensory signals will trigger activity in neurons responding to the sight, sound, and feel of the action. Because the activity of these sensory neurons will consistently overlap in time with those of the motor neurons that caused the action, Hebbian learning predicts that the synapses connecting the neurons responding to the sight, sound, and feel of an action with the neurons triggering the action should be potentiated. The same is true while people look at themselves in the mirror, hear themselves babble, or are imitated by others. After repeated experience of this re-afference, the synapses connecting the sensory and motor representations of an action become so strong that the motor neurons start firing to the sound or the vision of the action, and a mirror neuron is created.

Evidence for that perspective comes from many experiments showing that motor programs can be triggered by novel auditory or visual stimuli after repeated pairing of the stimulus with the execution of the motor program (for a review of the evidence, see Giudice et al., 2009 [18]). For instance, people who have never played the piano do not activate brain regions involved in playing the piano when listening to piano music. Five hours of piano lessons, in which the participant is exposed to the sound of the piano each time they press a key, proved sufficient to trigger activity in motor regions of the brain when the participant later listened to piano music. [19] Consistent with the fact that spike-timing-dependent plasticity occurs only if the presynaptic neuron's firing predicts the post-synaptic neuron's firing, [20] the link between sensory stimuli and motor programs also seems to be potentiated only if the stimulus is contingent on the motor program.

Related Research Articles

Chemical synapse

Chemical synapses are biological junctions through which neurons' signals can be sent to each other and to non-neuronal cells such as those in muscles or glands. Chemical synapses allow neurons to form circuits within the central nervous system. They are crucial to the biological computations that underlie perception and thought. They allow the nervous system to connect to and control other systems of the body.

Long-term potentiation

In neuroscience, long-term potentiation (LTP) is a persistent strengthening of synapses based on recent patterns of activity. These are patterns of synaptic activity that produce a long-lasting increase in signal transmission between two neurons. The opposite of LTP is long-term depression, which produces a long-lasting decrease in synaptic strength.

In neuroscience, synaptic plasticity is the ability of synapses to strengthen or weaken over time, in response to increases or decreases in their activity. Since memories are postulated to be represented by vastly interconnected neural circuits in the brain, synaptic plasticity is one of the important neurochemical foundations of learning and memory.

In neurophysiology, long-term depression (LTD) is an activity-dependent reduction in the efficacy of neuronal synapses lasting hours or longer following a long patterned stimulus. LTD occurs in many areas of the CNS with varying mechanisms depending upon brain region and developmental progress.

Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts the connection strengths based on the relative timing of a particular neuron's output and input action potentials. The STDP process partially explains the activity-dependent development of nervous systems, especially with regard to long-term potentiation and long-term depression.

Neural circuit

A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated. Multiple neural circuits interconnect with one another to form large scale brain networks.

Metaplasticity is a term originally coined by W.C. Abraham and M.F. Bear to refer to the plasticity of synaptic plasticity. Until that time synaptic plasticity had referred to the plastic nature of individual synapses; this new form refers to the plasticity of the plasticity itself, hence the term metaplasticity. The idea is that the synapse's previous history of activity determines its current plasticity. This may play a role in some of the underlying mechanisms thought to be important in memory and learning, such as long-term potentiation (LTP) and long-term depression (LTD). These mechanisms depend on the current synaptic "state", as set by ongoing extrinsic influences such as the level of synaptic inhibition, the activity of modulatory afferents such as catecholamines, and the pool of hormones affecting the synapses under study. Recently, it has become clear that the prior history of synaptic activity is an additional variable that influences the synaptic state, and thereby the degree of LTP or LTD produced by a given experimental protocol. In a sense, then, synaptic plasticity is governed by an activity-dependent plasticity of the synaptic state; such plasticity of synaptic plasticity has been termed metaplasticity. Little is known about metaplasticity, but despite its difficulty of study, much research is currently underway on the subject because of its theoretical importance in brain and cognitive science. Most research of this type is done via cultured hippocampus cells or hippocampal slices.

In neuroscience, homeostatic plasticity refers to the capacity of neurons to regulate their own excitability relative to network activity. The term homeostatic plasticity derives from two opposing concepts: 'homeostatic' and plasticity, thus homeostatic plasticity means "staying the same through change". In the nervous system, neurons must be able to evolve with the development of their constantly changing environment while simultaneously staying the same amidst this change. This stability is important for neurons to maintain their activity and functionality to prevent neurons from carcinogenesis. At the same time, neurons need to have flexibility to adapt to changes and make connections to cope with the ever-changing environment of a developing nervous system.

Oja's learning rule, or simply Oja's rule, named after Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the standard Hebb's Rule that, through multiplicative normalization, solves all stability problems and generates an algorithm for principal components analysis. This is a computational form of an effect which is believed to happen in biological neurons.

BCM theory, BCM synaptic modification, or the BCM rule, named for Elie Bienenstock, Leon Cooper, and Paul Munro, is a physical theory of learning in the visual cortex developed in 1981. The BCM model proposes a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, and states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity. According to the BCM model, when a pre-synaptic neuron fires, the post-synaptic neuron will tend to undergo LTP if it is in a high-activity state, or LTD if it is in a lower-activity state. This theory is often used to explain how cortical neurons can undergo either LTP or LTD depending on the different conditioning stimulus protocols applied to pre-synaptic neurons.

The generalized Hebbian algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network for unsupervised learning with applications primarily in principal components analysis. First defined in 1989, it is similar to Oja's rule in its formulation and stability, except it can be applied to networks with multiple outputs. The name originates because of the similarity between the algorithm and a hypothesis made by Donald Hebb about the way in which synaptic strengths in the brain are modified in response to experience, i.e., that changes are proportional to the correlation between the firing of pre- and post-synaptic neurons.

In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research.

Coincidence detection is a neuronal process in which a neural circuit encodes information by detecting the occurrence of temporally close but spatially distributed input signals. Coincidence detectors influence neuronal information processing by reducing temporal jitter and spontaneous activity, allowing the creation of variable associations between separate neural events in memory. The study of coincidence detectors has been crucial in neuroscience with regards to understanding the formation of computational maps in the brain.

In neuroethology and the study of learning, anti-Hebbian learning describes a particular class of learning rule by which synaptic plasticity can be controlled. These rules are based on a reversal of Hebb's postulate, and therefore can be simplistically understood as dictating reduction of the strength of synaptic connectivity between neurons following a scenario in which a neuron directly contributes to production of an action potential in another neuron.

Nonsynaptic plasticity

Nonsynaptic plasticity is a form of neuroplasticity that involves modification of ion channel function in the axon, dendrites, and cell body that results in specific changes in the integration of excitatory postsynaptic potentials and inhibitory postsynaptic potentials. Nonsynaptic plasticity is a modification of the intrinsic excitability of the neuron. It interacts with synaptic plasticity, but it is considered a separate entity from synaptic plasticity. Intrinsic modification of the electrical properties of neurons plays a role in many aspects of plasticity from homeostatic plasticity to learning and memory itself. Nonsynaptic plasticity affects synaptic integration, subthreshold propagation, spike generation, and other fundamental mechanisms of neurons at the cellular level. These individual neuronal alterations can result in changes in higher brain function, especially learning and memory. However, as an emerging field in neuroscience, much of the knowledge about nonsynaptic plasticity is uncertain and still requires further investigation to better define its role in brain function and behavior.

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.

In neuroscience, synaptic scaling is a form of homeostatic plasticity, in which the brain responds to chronically elevated activity in a neural circuit with negative feedback, allowing individual neurons to reduce their overall action potential firing rate. Where Hebbian plasticity mechanisms modify neural synaptic connections selectively, synaptic scaling normalizes all neural synaptic connections by decreasing the strength of each synapse by the same factor, so that the relative synaptic weighting of each synapse is preserved.

An artificial neural network's learning rule or learning process is a method, mathematical logic, or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network; it works by updating the weights and bias levels of a network when the network is simulated in a specific data environment. A learning rule may accept existing conditions of the network, and will compare the expected result and actual result of the network to give new and improved values for weights and biases. Depending on the complexity of the actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.

Homosynaptic plasticity

Homosynaptic plasticity is one type of synaptic plasticity. Homosynaptic plasticity is input-specific, meaning changes in synapse strength occur only at post-synaptic targets specifically stimulated by a pre-synaptic target. Therefore, the spread of the signal from the pre-synaptic cell is localized.

Heterosynaptic plasticity

Synaptic plasticity refers to a chemical synapse's ability to undergo changes in strength. Synaptic plasticity is typically input-specific, meaning that the activity in a particular neuron alters the efficacy of a synaptic connection between that neuron and its target. However, in the case of heterosynaptic plasticity, the activity of a particular neuron leads to input unspecific changes in the strength of synaptic connections from other unactivated neurons. A number of distinct forms of heterosynaptic plasticity have been found in a variety of brain regions and organisms. These different forms of heterosynaptic plasticity contribute to a variety of neural processes including associative learning, the development of neural circuits, and homeostasis of synaptic input.

References

  1. Hebb, D.O. (1949). The Organization of Behavior. New York: Wiley & Sons.
  2. Löwel, Siegrid; Singer, Wolf (1992). "Selection of Intrinsic Horizontal Connections in the Visual Cortex by Correlated Neuronal Activity". Science. 255 (5041): 209–212. doi:10.1126/science.1372754. PMID 1372754. The exact sentence is: "neurons wire together if they fire together".
  3. Caporale N; Dan Y (2008). "Spike timing-dependent plasticity: a Hebbian learning rule". Annual Review of Neuroscience. 31: 25–46. doi:10.1146/annurev.neuro.31.060407.125639. PMID   18275283.
  4. Allport, D.A. (1985). "Distributed memory, modular systems and dysphasia". In Newman, S.K.; Epstein R. (eds.). Current Perspectives in Dysphasia. Edinburgh: Churchill Livingstone. ISBN   978-0-443-03039-0.
  5. Paulsen, O; Sejnowski, T (1 April 2000). "Natural patterns of activity and long-term synaptic plasticity". Current Opinion in Neurobiology. 10 (2): 172–180. doi:10.1016/s0959-4388(00)00076-3. PMC   2900254 . PMID   10753798.
  6. Klopf, A. H. (1972). Brain function and adaptive systems—A heterostatic theory. Technical Report AFCRL-72-0164, Air Force Cambridge Research Laboratories, Bedford, MA.
  7. Euliano, Neil R. (1999-12-21). "Neural and Adaptive Systems: Fundamentals Through Simulations" (PDF). Wiley. Archived from the original (PDF) on 2015-12-25. Retrieved 2016-03-16.
  8. Shouval, Harel (2005-01-03). "The Physics of the Brain". The Synaptic basis for Learning and Memory: A theoretical approach. The University of Texas Health Science Center at Houston. Archived from the original on 2007-06-10. Retrieved 2007-11-14.
  9. Gerstner, Wulfram; Kistler, Werner M.; Naud, Richard; Paninski, Liam (2014). "Chapter 19: Synaptic Plasticity and Learning". Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press. ISBN 978-1107635197. Retrieved 2020-11-09.
  10. Horgan, John (May 1994). "Neural eavesdropping". Scientific American. 270 (5): 16. Bibcode:1994SciAm.270e..16H. doi:10.1038/scientificamerican0594-16. PMID   8197441.
  11. Fitzsimonds, Reiko; Mu-Ming Poo (January 1998). "Retrograde Signaling in the Development and Modification of Synapses". Physiological Reviews. 78 (1): 143–170. doi:10.1152/physrev.1998.78.1.143. PMID   9457171. S2CID   11604896.
  12. López, P; C.P. Araujo (2009). "A computational study of the diffuse neighbourhoods in biological and artificial neural networks" (PDF). International Joint Conference on Computational Intelligence.
  13. Mitchison, G; N. Swindale (October 1999). "Can Hebbian Volume Learning Explain Discontinuities in Cortical Maps?". Neural Computation. 11 (7): 1519–1526. doi:10.1162/089976699300016115. PMID   10490935. S2CID   2325474.
  14. Keysers C; Perrett DI (2004). "Demystifying social cognition: a Hebbian perspective". Trends in Cognitive Sciences. 8 (11): 501–507. doi:10.1016/j.tics.2004.09.005. PMID   15491904. S2CID   8039741.
  15. Keysers, C. (2011). The Empathic Brain.
  16. Gallese V; Fadiga L; Fogassi L; Rizzolatti G (1996). "Action recognition in the premotor cortex". Brain. 119 (Pt 2): 593–609. doi: 10.1093/brain/119.2.593 . PMID   8800951.
  17. Keysers C; Kohler E; Umilta MA; Nanetti L; Fogassi L; Gallese V (2003). "Audiovisual mirror neurons and action recognition". Exp Brain Res. 153 (4): 628–636. CiteSeerX   10.1.1.387.3307 . doi:10.1007/s00221-003-1603-5. PMID   12937876. S2CID   7704309.
  18. Del Giudice M; Manera V; Keysers C (2009). "Programmed to learn? The ontogeny of mirror neurons" (PDF). Dev Sci. 12 (2): 350–363. doi:10.1111/j.1467-7687.2008.00783.x. hdl: 2318/133096 . PMID   19143807.
  19. Lahav A; Saltzman E; Schlaug G (2007). "Action representation of sound: audiomotor recognition network while listening to newly acquired actions". J Neurosci. 27 (2): 308–314. doi:10.1523/jneurosci.4822-06.2007. PMC   6672064 . PMID   17215391.
  20. Bauer EP; LeDoux JE; Nader K (2001). "Fear conditioning and LTP in the lateral amygdala are sensitive to the same stimulus contingencies". Nat Neurosci. 4 (7): 687–688. doi:10.1038/89465. PMID   11426221. S2CID   33130204.

Further reading