Binding neuron

A binding neuron (BN) is an abstract concept of processing of input impulses in a generic neuron based on their temporal coherence and the level of neuronal inhibition. Mathematically, the concept can be implemented by most neuronal models, including the well-known leaky integrate-and-fire model. The BN concept originated in 1996 and 1998 papers by A. K. Vidybida. [1] [2]

Description of the concept

For a generic neuron the stimuli are excitatory impulses. Normally, more than a single input impulse is necessary to excite the neuron up to the level at which it fires and emits an output impulse. Let the neuron receive input impulses at consecutive moments of time t_1, t_2, …, t_n. In the BN concept, the temporal coherence between the input impulses is characterized by the inverse of the time interval containing all of them: the shorter the span between the first and the last impulse, the higher their temporal coherence.

A high degree of temporal coherence between input impulses suggests that, in the external medium, all the impulses could have been created by a single complex event. Correspondingly, if a BN is stimulated by a highly coherent set of input impulses, it fires and emits an output impulse. In the BN terminology, the BN binds the elementary events (input impulses) into a single event (output impulse). The binding happens if the input impulses are coherent enough in time, and does not happen if they lack the required degree of coherence.

Inhibition in the BN concept (essentially, the slow somatic potassium inhibition) controls the degree of temporal coherence required for binding: the higher the level of inhibition, the higher the degree of temporal coherence necessary for binding to occur.
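
A minimal sketch in Python can illustrate this decision rule. It assumes, purely for illustration and not as a definition taken from the cited papers, that temporal coherence is measured by the inverse of the time span covering all input impulses and that the coherence required for binding grows linearly with the level of inhibition; the function names and parameter values are hypothetical.

```python
def temporal_coherence(arrival_times):
    """Inverse of the time span containing all input impulses (arbitrary units)."""
    span = max(arrival_times) - min(arrival_times)
    return float("inf") if span == 0 else 1.0 / span

def binds(arrival_times, inhibition_level, base_coherence=1.0):
    """Return True if the impulses are coherent enough to be bound into an output spike.

    The required coherence grows with the inhibition level; the linear form
    used here is an illustrative assumption, not taken from the BN papers.
    """
    required = base_coherence * (1.0 + inhibition_level)
    return temporal_coherence(arrival_times) >= required

# A tightly grouped volley binds; the same volley fails under stronger inhibition.
volley = [10.0, 10.2, 10.5]                  # arrival times, ms
print(binds(volley, inhibition_level=0.0))   # True
print(binds(volley, inhibition_level=10.0))  # False
```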

Figure: Scheme of signal processing in accordance with the binding neuron concept. t_1, t_2, …, t_n are the moments of receiving the input impulses.

The emitted output impulse is treated as an abstract representation of the compound event (the set of temporally coherent input impulses); see the scheme above.

Origin

"Although a neuron requires energy, its main function is to receive signals and to send them out that is, to handle information." --- this words by Francis Crick point at the necessity to describe neuronal functioning in terms of processing of abstract signals [3] The two abstract concepts, namely, the "coincidence detector" and "temporal integrator" are offered in this course, [4] [5] The first one expects that a neuron fires a spike if a number of input impulses are received at the same time. In the temporal integrator concept a neuron fires a spike after receiving a number of input impulses distributed in time. Each of the two takes into account some features of real neurons since it is known that a realistic neuron can display both coincidence detector and temporal integrator modes of activity depending on the stimulation applied, . [6] At the same time, it is known that a neuron together with excitatory impulses receives also inhibitory stimulation. A natural development of the two above mentioned concepts could be a concept which endows inhibition with its own signal processing role.

In neuroscience there is the idea of the binding problem. For example, during visual perception, such features as form, color and stereopsis are represented in the brain by different neuronal assemblies. The mechanism ensuring that those features are perceived as belonging to a single real object is called "feature binding". [7] The experimentally supported view is that precise temporal coordination between neuronal impulses is required for the binding to occur. [8] [9] [10] [11] [12] [13] This coordination mainly means that signals about different features must arrive at certain areas in the brain within a certain time window.

The BN concept reproduces, at the level of a single generic neuron, the requirement necessary for feature binding to occur, which was formulated earlier at the level of large-scale neuronal assemblies. Its formulation was made possible by the analysis of the response of the Hodgkin–Huxley model to stimuli similar to those real neurons receive under natural conditions; see "Mathematical implementations" below.

Mathematical implementations

Hodgkin–Huxley (H-H) model

The Hodgkin–Huxley model is a physiologically substantiated neuronal model that operates in terms of transmembrane ionic currents and describes the mechanism of generation of the action potential.

In the paper [14] the response of the H-H model was studied numerically for stimuli composed of many excitatory impulses distributed randomly within a time window W. Each impulse arriving at moment t_i adds an excitatory postsynaptic potential of fixed magnitude; the arrival moments t_1, …, t_N_P are random, distributed uniformly within the interval [0, W], where N_P is the total number of impulses the stimulus is composed of. The corresponding stimulating current applied in the H-H equations is expressed through the capacitance of a unit area of excitable membrane. The probability of generating an action potential was calculated as a function of the window width W. Different constant potassium conductances were added to the H-H equations in order to create certain levels of inhibitory potential. The dependencies obtained, when recalculated as functions of the inverse window width 1/W, which is analogous to the temporal coherence of impulses in the compound stimulus, have a step-like form. The location of the step is controlled by the level of the inhibitory potential, see Fig. 1. Due to this type of dependence, the H-H equations can be treated as a mathematical model of the BN concept.
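
The structure of this numerical experiment can be sketched as follows. The function neuron_fires below is a hypothetical placeholder standing in for the full H-H simulation with an added constant potassium conductance, and all numerical values are illustrative assumptions; only the sampling of arrival times uniformly within [0, W] and the estimation of firing probability over many trials follow the description above.

```python
import random

def neuron_fires(arrival_times, inhibition_level=0.0):
    """Hypothetical stand-in for 'the stimulus elicited an action potential'.

    In [14] this role is played by the full H-H model with an added constant
    potassium conductance; here a toy rule is used instead: the neuron fires
    if all impulses fall within a span that shrinks as inhibition grows.
    """
    span = max(arrival_times) - min(arrival_times)
    return span < 2.0 / (1.0 + inhibition_level)   # milliseconds, toy values

def firing_probability(window_w, n_impulses=10, inhibition_level=0.0, trials=2000):
    """Estimate the firing probability for impulses drawn uniformly from [0, W]."""
    fired = 0
    for _ in range(trials):
        times = [random.uniform(0.0, window_w) for _ in range(n_impulses)]
        fired += neuron_fires(times, inhibition_level)
    return fired / trials

# firing probability drops as the window widens, i.e. as coherence decreases
for w in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"W = {w:4.1f} ms   p = {firing_probability(w):.2f}")
```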

Fig. 1. Firing probability (fp) of a Hodgkin–Huxley-type neuron, stimulated with a set of N_P input impulses, as a function of the temporal coherence of the impulses. The curves from left to right correspond to increasing potassium conductance, that is, to an increasing degree of inhibition.

Leaky integrate-and-fire neuron (LIF)

The leaky integrate-and-fire (LIF) neuron is a widely used abstract neuronal model. If a similar problem is stated for the LIF neuron with an appropriately chosen inhibition mechanism, then step-like dependencies similar to those in Fig. 1 can be obtained as well. Therefore, the LIF neuron can also be considered a mathematical model of the BN concept.
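
A minimal LIF sketch of this kind of comparison is given below; the parameter values, the constant inhibitory current term, and the way inputs are delivered are illustrative assumptions rather than the setup of a particular paper. The same five impulses drive the model over threshold when they arrive within a narrow window, but not when they are spread out in time.

```python
import numpy as np

def lif_fires(spike_times, tau_m=10.0, v_thresh=1.0, w_syn=0.3,
              i_inhib=0.0, dt=0.01, t_max=50.0):
    """Return True if the LIF membrane potential reaches threshold (times in ms)."""
    v = 0.0
    spike_times = sorted(spike_times)
    idx = 0
    for step in range(int(t_max / dt)):
        t = step * dt
        # leak towards rest plus a constant inhibitory current
        v += dt * (-(v / tau_m) - i_inhib)
        # deliver the input impulses that have arrived by this time step
        while idx < len(spike_times) and spike_times[idx] <= t:
            v += w_syn
            idx += 1
        if v >= v_thresh:
            return True
    return False

coherent = np.linspace(5.0, 5.5, 5)    # five impulses within 0.5 ms
dispersed = np.linspace(5.0, 45.0, 5)  # the same five impulses spread over 40 ms
print(lif_fires(coherent))             # expected: True
print(lif_fires(dispersed))            # expected: False
```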

Binding neuron model

The binding neuron model implements the BN concept in the most refined form. [15] In this model each input impulse is stored in the neuron for a fixed time τ and then disappears. This kind of memory serves as a surrogate for the excitatory postsynaptic potential. The model has a threshold: if the number of impulses stored in the BN exceeds the threshold, the neuron fires a spike and clears its internal memory. The presence of inhibition results in a decreased τ. In the BN model it is necessary to track the time to live of every stored impulse while calculating the neuron's response to input stimulation, which makes the BN model more complicated for numerical simulation than the LIF model. On the other hand, any impulse spends a finite time in the BN model neuron, in contrast to the LIF model, where traces of an impulse can persist infinitely long. This property of the BN model makes it possible to obtain an exact description of the output activity of a BN stimulated with a random stream of input impulses, see [16] [17] [18].
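
An event-driven sketch of this model might look as follows; the parameter values are illustrative assumptions, and the firing condition is taken, as described above, to be that the number of stored impulses exceeds the threshold.

```python
class BindingNeuron:
    def __init__(self, tau=1.0, threshold=2):
        self.tau = tau              # lifetime of a stored impulse
        self.threshold = threshold  # firing occurs when the stored count exceeds this
        self.stored = []            # arrival times of impulses still held in memory

    def receive(self, t):
        """Process an input impulse arriving at time t; return True if the BN fires."""
        # discard impulses whose lifetime tau has expired
        self.stored = [s for s in self.stored if t - s < self.tau]
        self.stored.append(t)
        if len(self.stored) > self.threshold:
            self.stored.clear()     # firing clears the internal memory
            return True
        return False

bn = BindingNeuron(tau=1.0, threshold=2)
for t in [0.0, 0.3, 0.6, 5.0, 7.0, 7.2, 7.4]:
    if bn.receive(t):
        print("output spike at", t)   # expected: spikes at t = 0.6 and t = 7.4
```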

The limiting case of BN with infinite memory, τ→∞, corresponds to the temporal integrator. The limiting case of BN with infinitely short memory, τ→0, corresponds to the coincidence detector.

Integrated circuit implementation

The above-mentioned neuronal models, and others, as well as networks made of them, can be implemented in microchips. Among the different chips, field-programmable gate arrays are worth mentioning. These chips can be used to implement any neuronal model, but the BN model can be programmed most naturally, because it can operate with integers only and does not require solving differential equations. These features are used, e.g., in [19] and [20].
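
The following sketch suggests why the BN model maps so naturally onto digital hardware: with time measured in clock ticks, the entire state reduces to a few integer counters and no differential equations need to be solved. It is an illustrative fixed-point reformulation, not the code used in [19] or [20].

```python
TAU_TICKS = 8      # impulse lifetime, in clock ticks
THRESHOLD = 3      # number of stored impulses needed for an output spike

def bn_tick(counters, impulse_arrived):
    """One clock tick of an integer-only BN; returns (new_counters, spike)."""
    # age the stored impulses: decrement remaining lifetimes, drop expired ones
    counters = [c - 1 for c in counters if c > 1]
    if impulse_arrived:
        counters.append(TAU_TICKS)
    if len(counters) >= THRESHOLD:
        return [], True            # spike and clear the memory
    return counters, False

# three impulses arriving within the lifetime window produce an output spike
state, inputs = [], [1, 0, 1, 0, 1, 0, 0, 0]
for tick, arrived in enumerate(inputs):
    state, spike = bn_tick(state, bool(arrived))
    if spike:
        print("spike at tick", tick)   # expected: tick 4
```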

Limitations

As an abstract concept, the BN concept is subject to inherent limitations. Among these are the neglect of neuronal morphology, the identical magnitude of input impulses, the replacement of the set of transients with different relaxation times, known for a real neuron, with a single time to live, τ, of an impulse in the neuron, and the absence of refractoriness and of fast (chloride) inhibition. The BN model has the same limitations, yet some of them can be removed in a more complicated model; see, e.g., [21] where the BN model is used with refractoriness and fast inhibition.

References

  1. Vidybida, A. K. (1996). "Neuron as time coherence discriminator". Biological Cybernetics. 74 (6): 537–542. doi:10.1007/BF00209424. PMID   8672560. S2CID   19862684.
  2. Vidybida, A.K. (1998). "Inhibition as binding controller at the single neuron level". BioSystems. 48 (1–3): 263–267. doi:10.1016/S0303-2647(98)00073-2. PMID   9886656.
  3. F. Crick. The Astonishing Hypothesis. Touchstone, 1995.
  4. Abeles, M. (1982). "Role of the cortical neuron: integrator or coincidence detector?". Israel Journal of Medical Sciences. 18 (1): 83–92. PMID   6279540.
  5. König, P.; Engel, A. K.; Singer, W. (1996). "Integrator or coincidence detector? the role of the cortical neuron revisited". Trends in Neurosciences. 19 (4): 130–137. doi:10.1016/S0166-2236(96)80019-1. PMID   8658595. S2CID   14664183.
  6. Rudolph, M.; Destexhe, A. (2003). "Tuning neocortical pyramidal neurons between integrators and coincidence detectors". Journal of Computational Neuroscience. 14 (3): 239–251. doi:10.1023/A:1023245625896. PMID   12766426. S2CID   3695640.
  7. J. P. Sougné. Binding problem. In Encyclopedia of Cognitive Science. John Wiley & Sons, Ltd, 2006.
  8. Treisman, A. M.; Gelade, G. (1980). "A feature-integration theory of attention". Cognitive Psychology. 12 (1): 97–136. doi:10.1016/0010-0285(80)90005-5. PMID   7351125. S2CID   353246.
  9. von der Malsburg, C (1999). "The what and why of binding: The modeler's perspective". Neuron. 24 (8): 95–104. doi: 10.1016/S0896-6273(00)80825-9 . PMID   10677030. S2CID   7057525.
  10. Eckhorn, R.; Bauer, R.; Jordan, W.; Brosch, M.; Kruse, W.; Munk, M.; Reitboeck, H. J. (1988). "Coherent oscillations: a mechanism for feature linking in the visual cortex?". Biological Cybernetics. 60 (2): 121–130. doi:10.1007/BF00202899. PMID   3228555. S2CID   206771651.
  11. Damasio, A. R. (1989). "Concepts in the brain". Mind & Language. 4 (1–2): 25–28. doi:10.1111/j.1468-0017.1989.tb00236.x.
  12. A. K. Engel, P. König, A. K. Kreiter, C. M. Gray, and W. Singer. Temporal coding by coherent oscillations as a potential solution to the binding problem: physiological evidence. In H. G. Schuster and W. Singer, editors, Nonlinear Dynamics and Neuronal Networks, pages 325. VCH Weinheim, 1991.
  13. Merzenich, Michael M. (1993). "Neural Mechanisms Underlying Temporal Integration, Segmentation, and Input Sequence Representation: Some Implications for the Origin of Learning Disabilities". Annals of the New York Academy of Sciences. 682: 1–22. doi:10.1111/j.1749-6632.1993.tb22955.x. PMID   8323106. S2CID   44698935.
  14. Vidybida, A. K. (1996). "Neuron as time coherence discriminator". Biological Cybernetics. 74 (6): 537–542. doi:10.1007/BF00209424. PMID   8672560. S2CID   19862684.
  15. "Binding neuron". In Mehdi Khosrow-Pour (ed.). Encyclopedia of Information Science and Technology (Third ed.). Hershey, PA: IGI Global, 2014. pp. 1123–1134. ISBN   978-1-4666-5889-9.
  16. Vidybida, A. K. (2007). "Output stream of a binding neuron". Ukrainian Mathematical Journal. 59 (12): 1819–1839. doi:10.1007/s11253-008-0028-5. S2CID   120989952.
  17. Vidybida, A. K.; Kravchuk, K. G. (2013). "Delayed feedback makes neuronal firing statistics non-markovian". Ukrainian Mathematical Journal. 64 (12): 1793–1815. doi:10.1007/s11253-013-0753-2. S2CID   123193380.
  18. Arunachalam, V.; Akhavan-Tabatabaei, R.; Lopez, C. (2013). "Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network". Computational and Mathematical Methods in Medicine. 2013: 374878. doi: 10.1155/2013/374878 . PMC   3876776 . PMID   24396394.
  19. Rosselló, J. L.; Canals, V.; Morro, A.; Oliver, A. (2012). "Hardware implementation of stochastic spiking neural networks". International Journal of Neural Systems. 22 (4): 1250014. doi:10.1142/S0129065712500141. PMID   22830964.
  20. Wang, R.; Cohen, G.; Stiefel, K. M.; Hamilton, T. J.; Tapson, J.; Schaik, A. van (2013). "An fpga implementation of a polychronous spiking neural network with delay adaptation". Frontiers in Neuroscience. 7: 14. doi: 10.3389/fnins.2013.00014 . PMC   3570898 . PMID   23408739.
  21. Kravchuk, K. G.; Vidybida, A. K. (2014). "Non-markovian spiking statistics of a neuron with delayed feedback in presence of refractoriness". Mathematical Biosciences and Engineering. 11 (1): 81–104. doi: 10.3934/mbe.2014.11.81 . PMID   24245681.