In neuroscience and machine learning, three-factor learning is the combination of Hebbian plasticity with a third modulatory factor to stabilise and enhance synaptic learning.[1] This third factor can represent various signals such as reward, punishment, error, surprise, or novelty, and is often implemented through neuromodulators.[2]
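Schematically, where classical Hebbian plasticity changes a weight $w_{ij}$ based only on presynaptic activity $x_j$ and postsynaptic activity $y_i$, a three-factor rule gates the same Hebbian term by a modulatory signal $M(t)$ (the notation here is illustrative; specific rules vary across the literature):

$$
\Delta w_{ij} \propto x_j\,y_i \quad\text{(two-factor, Hebbian)}
\qquad\longrightarrow\qquad
\Delta w_{ij} \propto M(t)\,x_j\,y_i \quad\text{(three-factor)}
$$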
Three-factor learning introduces the concept of eligibility traces, which flag synapses for potential modification pending the arrival of the third factor. This mechanism aids temporal credit assignment by bridging the gap between rapid neuronal firing and the slower behavioural timescales on which feedback arrives.[3] Experimental evidence supports a biological basis for three-factor learning rules.[4][2] The approach also addresses the instability of classical Hebbian learning by minimizing autocorrelation and maximizing cross-correlation between inputs.[1]
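The interplay between eligibility traces and a delayed third factor can be illustrated with a minimal simulation. The following Python sketch is a toy illustration, not an implementation from the cited sources: the Bernoulli spike model, the single delayed reward, and all parameter values are assumptions. Coincident pre- and postsynaptic spikes charge a decaying trace, and the weight changes only when the reward signal arrives while the trace is still nonzero.

```python
import numpy as np

# Minimal sketch of a three-factor rule with an eligibility trace.
# All names and parameter values are illustrative assumptions.

rng = np.random.default_rng(0)

T = 200          # simulation steps (arbitrary time units)
tau_e = 20.0     # eligibility-trace time constant
eta = 0.1        # learning rate
w = 0.5          # synaptic weight
e = 0.0          # eligibility trace

for t in range(T):
    pre = rng.random() < 0.1    # presynaptic spike (Bernoulli)
    post = rng.random() < 0.1   # postsynaptic spike (Bernoulli)
    # The Hebbian coincidence charges the trace instead of changing w directly.
    e += -e / tau_e + float(pre and post)
    # Third factor: a single reward delivered at t == 100, after most coincidences.
    m = 1.0 if t == 100 else 0.0
    # The weight changes only when the third factor and the trace overlap.
    w += eta * m * e

print(f"final weight: {w:.3f}")
```

Because the trace decays with time constant tau_e, coincidences that occurred long before the reward contribute little to the update, which is one simple way such a rule assigns credit across the delay between firing and feedback.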
See also:

- Computational neuroscience
- Hebbian theory
- Spike-timing-dependent plasticity
- Neural coding
- Homeostatic plasticity
- Spiking neural networks
- Bienenstock–Cooper–Munro (BCM) theory
- Synaptic weight
- Bayesian Confidence Propagation Neural Network (BCPNN)
- Neural network models
- Fear processing in the brain
- Predictive coding
- Spike response model
- Researchers in the field: Kathryn Mary Murphy, Claudia Clopath, Tim P. Vogels, Massimiliano Garagnani, Ilana B. Witten, Wulfram Gerstner, Alexei Koulakov