Models of neural computation

Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof. This article aims to provide an overview of the most definitive models of neuro-biological computation as well as the tools commonly used to construct and analyze them.

Introduction

Due to the complexity of nervous system behavior, the associated experimental error bounds are ill-defined, but the relative merit of the different models of a particular subsystem can be compared according to how closely they reproduce real-world behaviors or respond to specific input signals. In the closely related field of computational neuroethology, the practice is to include the environment in the model in such a way that the loop is closed. In the cases where competing models are unavailable, or where only gross responses have been measured or quantified, a clearly formulated model can guide the scientist in designing experiments to probe biochemical mechanisms or network connectivity.

In all but the simplest cases, the mathematical equations that form the basis of a model cannot be solved exactly. Nevertheless, computer technology, sometimes in the form of specialized software or hardware architectures, allows scientists to perform iterative calculations and search for plausible solutions. A computer chip or a robot that can interact with the natural environment in ways akin to the original organism is one embodiment of a useful model. The ultimate measure of success, however, is the ability to make testable predictions.

General criteria for evaluating models

Speed of information processing

The rate of information processing in biological neural systems is constrained by the speed at which an action potential can propagate down a nerve fibre. This conduction velocity ranges from 1 m/s to over 100 m/s, and generally increases with the diameter of the neuronal process. Because these velocities are slow on the timescales of biologically relevant events dictated by the speed of sound or the force of gravity, the nervous system overwhelmingly prefers parallel computations over serial ones in time-critical applications.

Robustness

A model is robust if it continues to produce the same computational results under variations in inputs or operating parameters introduced by noise. For example, the direction of motion as computed by a robust motion detector would not change under small changes of luminance, contrast or velocity jitter. In simple mathematical models of neurons, for example, the dependence of spike patterns on signal delay is much weaker than their dependence on changes in the "weights" of interneuronal connections. [1]

Gain control

This refers to the principle that the response of a nervous system should stay within certain bounds even as the inputs from the environment change drastically. For example, when adjusting between a sunny day and a moonless night, the retina changes the relationship between light level and neuronal output by several orders of magnitude, so that the signals sent to later stages of the visual system always remain within a much narrower range of amplitudes. [2] [3] [4]
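
As a minimal sketch of this principle (a generic divisive-normalization abstraction for illustration, not a specific retinal model):

```python
import numpy as np

# Sketch of divisive gain control: the half-saturation constant adapts to
# the ambient light level, so outputs stay in a narrow range across
# conditions.  All values are illustrative assumptions.
light = np.logspace(-2, 6, 9)            # test intensities spanning 8 decades
for ambient in (1e-2, 1e2, 1e6):         # night, dusk, bright day
    I_half = ambient                     # adaptation tracks the mean level
    out = light / (light + I_half)       # responses confined to [0, 1)
    print(f"ambient={ambient:.0e}", np.round(out, 3))
```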

Linearity versus nonlinearity

A linear system is one whose response in a specified unit of measure, to a set of inputs considered at once, is the sum of its responses due to the inputs considered individually.

Linear systems are easier to analyze mathematically and are a pervasive assumption in many models, including the McCulloch–Pitts neuron, population coding models, and the simple neurons often used in artificial neural networks. Linearity may occur in the basic elements of a neural circuit, such as the response of a postsynaptic neuron, or as an emergent property of a combination of nonlinear subcircuits. [5] Though linearity is often seen as incorrect, there has been recent work suggesting it may, in fact, be biophysically plausible in some cases. [6] [7]
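
As a minimal numerical illustration of the definition above (the response functions and weights below are hypothetical, used only for the test):

```python
import numpy as np

# Sketch: a numerical superposition test.  A linear model's response to two
# inputs presented together equals the sum of its individual responses; a
# saturating nonlinearity breaks this.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))                  # hypothetical synaptic weights

def linear_response(x):
    return W @ x                             # weighted sum: obeys superposition

def saturating_response(x):
    return np.tanh(W @ x)                    # nonlinearity violates superposition

x1, x2 = rng.normal(size=5), rng.normal(size=5)
for f in (linear_response, saturating_response):
    together = f(x1 + x2)                    # inputs considered at once
    separately = f(x1) + f(x2)               # responses considered individually
    print(f.__name__, np.allclose(together, separately))
# prints: linear_response True, saturating_response False
```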

Examples

A computational neural model may be constrained to the level of biochemical signalling in individual neurons or it may describe an entire organism in its environment. The examples here are grouped according to their scope.

Models of information transfer in neurons

The most widely used models of information transfer in biological neurons are based on analogies with electrical circuits. The equations to be solved are time-dependent differential equations in electrical variables such as current, conductance (or resistance), capacitance and voltage.

Hodgkin–Huxley model and its derivatives

The Hodgkin–Huxley model, widely regarded as one of the great achievements of 20th-century biophysics, describes how action potentials in neurons are initiated and propagated in axons via voltage-gated ion channels. It is a set of nonlinear ordinary differential equations that were introduced by Alan Lloyd Hodgkin and Andrew Huxley in 1952 to explain the results of voltage clamp experiments on the squid giant axon. Analytic solutions do not exist, but the Levenberg–Marquardt algorithm, a modified Gauss–Newton algorithm, is often used to fit these equations to voltage-clamp data.
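
The following is a minimal sketch of the forward problem that such fits must solve repeatedly: the four Hodgkin–Huxley equations integrated by forward Euler with the standard squid-axon parameters (the step size and stimulus are illustrative choices, not values from the original paper):

```python
import numpy as np

# Minimal forward simulation of the Hodgkin-Huxley equations.
# Units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2.

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + np.exp(-(V + 35) / 10))

C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

dt, T = 0.01, 50.0
V, m, h, n = -65.0, 0.05, 0.6, 0.32           # approximate resting state
trace = []
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0  # step current injection
    I_Na = g_Na * m**3 * h * (V - E_Na)       # sodium current
    I_K = g_K * n**4 * (V - E_K)              # potassium current
    I_L = g_L * (V - E_L)                     # leak current
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    trace.append(V)                           # repetitive spiking appears in trace
```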

The FitzHugh–Nagumo model is a simplification of the Hodgkin–Huxley model. The Hindmarsh–Rose model is an extension which describes neuronal spike bursts. The Morris–Lecar model is a modification which does not generate spikes, but describes slow-wave propagation, which is implicated in the inhibitory synaptic mechanisms of central pattern generators.
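
For comparison, a sketch of the two-variable FitzHugh–Nagumo reduction, with commonly used illustrative parameter values (assumptions, not fits to data):

```python
import numpy as np

# FitzHugh-Nagumo: two variables instead of the four in Hodgkin-Huxley.
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5
dt, T = 0.05, 200.0
v, w = -1.0, 1.0
vs = []
for _ in range(int(T / dt)):
    dv = v - v**3 / 3 - w + I        # fast "voltage-like" variable
    dw = eps * (v + a - b * w)       # slow recovery variable
    v, w = v + dt * dv, w + dt * dw
    vs.append(v)                     # vs shows relaxation oscillations (spiking)
```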

Solitons

The soliton model is an alternative to the Hodgkin–Huxley model that claims to explain how action potentials are initiated and conducted in the form of certain kinds of solitary sound (or density) pulses that can be modeled as solitons along axons, based on a thermodynamic theory of nerve pulse propagation.

Transfer functions and linear filters

This approach, influenced by control theory and signal processing, treats neurons and synapses as time-invariant entities that produce outputs that are linear combinations of the input signals, often depicted as sine waves with well-defined temporal or spatial frequencies.

The entire behavior of a neuron or synapse is encoded in a transfer function, notwithstanding a lack of knowledge concerning the exact underlying mechanism. This brings a highly developed body of mathematics to bear on the problem of information transfer.

The accompanying taxonomy of linear filters turns out to be useful in characterizing neural circuitry. Both low- and high-pass filters are postulated to exist in some form in sensory systems, as they act to prevent information loss in high and low contrast environments, respectively.

Indeed, measurements of the transfer functions of neurons in the horseshoe crab retina according to linear systems analysis show that they remove short-term fluctuations in input signals, leaving only the long-term trends, in the manner of low-pass filters. These animals are unable to see low-contrast objects without the help of optical distortions caused by underwater currents. [8] [9]
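
A first-order low-pass filter makes this concrete; in the sketch below, the time constant and test signal are illustrative assumptions, not Limulus measurements:

```python
import numpy as np

# Sketch: a neuron abstracted as a first-order low-pass filter with transfer
# function H(f) = 1 / (1 + i*2*pi*f*tau).
tau = 0.05                                        # time constant (s)
f = np.logspace(-1, 3, 200)                       # frequency axis (Hz)
gain = 1 / np.abs(1 + 1j * 2 * np.pi * f * tau)   # falls off above ~1/(2*pi*tau)

# Applied in the time domain, the filter removes short-term fluctuations
# while passing the long-term trend:
dt = 0.001
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * 2 * t) + 0.5 * np.random.randn(t.size)  # trend + noise
y = np.zeros_like(x)
for i in range(1, x.size):
    y[i] = y[i - 1] + dt / tau * (x[i] - y[i - 1])  # leaky integration
```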

Models of computations in sensory systems

Lateral inhibition in the retina: Hartline–Ratliff equations

In the retina, an excited neural receptor can suppress the activity of surrounding neurons within an area called the inhibitory field. This effect, known as lateral inhibition, increases the contrast and sharpness of the visual response, but leads to the epiphenomenon of Mach bands: an optical illusion of light or dark stripes next to a sharp boundary between two regions of different luminance in an image.

The Hartline–Ratliff model describes inhibitory interactions within a group of $n$ photoreceptor cells. [10] Assuming these interactions to be linear, they proposed the following relationship for the steady-state response rate $r_p$ of the $p$-th photoreceptor in terms of the steady-state response rates $r_j$ of the surrounding receptors:

$$ r_p = e_p - \sum_{j \neq p} k_{pj} \max\!\left(0,\; r_j - r_{pj}^{0}\right). $$

Here,

$e_p$ is the excitation of the target $p$-th receptor from sensory transduction,

$r_{pj}^{0}$ is the associated threshold of the firing cell, and

$k_{pj}$ is the coefficient of inhibitory interaction between the $p$-th and the $j$-th receptor. The inhibitory interaction decreases with distance from the target $p$-th receptor.
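
The steady state can be computed numerically by fixed-point iteration. In the sketch below, the receptor geometry, the fall-off of inhibition, and the uniform threshold are illustrative assumptions; the converged rates show the Mach-band overshoot and undershoot at edges:

```python
import numpy as np

# Sketch: solving the Hartline-Ratliff steady state on a 1-D receptor chain.
n = 50
e = np.ones(n)
e[20:30] = 2.0                            # excitation: a bright bar on a dim field
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
k = 0.1 * np.exp(-dist / 3.0)             # inhibition decreases with distance
np.fill_diagonal(k, 0.0)                  # no self-inhibition
r0 = 0.0                                  # inhibition threshold (assumed uniform)

r = e.copy()
for _ in range(200):                      # iterate r_p = e_p - sum_j k_pj * max(0, r_j - r0)
    r = e - k @ np.maximum(0.0, r - r0)
# r now overshoots on the bright side of each edge and undershoots on the
# dim side: the Mach bands described above.
```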

Cross-correlation in sound localization: Jeffress model

According to Jeffress, [11] in order to compute the location of a sound source in space from interaural time differences, an auditory system relies on delay lines: the induced signal from an ipsilateral auditory receptor to a particular neuron is delayed by the same time it takes the original sound to travel in space from that ear to the other. Each postsynaptic cell is differently delayed and thus specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation.

Following Fischer and Anderson, [12] the response of the postsynaptic neuron to the signals from the left and right ears is given by

$$ y(t) = x_{\mathrm{left}}(t - \Delta_{\mathrm{left}}) + x_{\mathrm{right}}(t - \Delta_{\mathrm{right}}), $$

where $\Delta_{\mathrm{left}}$ and $\Delta_{\mathrm{right}}$ represent the delay functions of the two delay lines; the cell responds maximally when the difference between its internal delays matches the interaural time difference.
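
A sketch of this computation (the sampling rate, interaural time difference, and broadband stimulus are illustrative assumptions):

```python
import numpy as np

# Jeffress scheme as cross-correlation: an array of coincidence detectors,
# each with a different internal delay, responds maximally where the
# internal delay cancels the interaural time difference (ITD).
fs = 44100
t = np.arange(0, 0.05, 1 / fs)
itd = 0.0003                                # true ITD: 300 microseconds
sound = np.random.randn(t.size)             # broadband source
left = sound
right = np.roll(sound, int(itd * fs))       # the right ear hears it later

delays = np.arange(-20, 21)                 # candidate internal delays (samples)
response = [np.sum(left * np.roll(right, -d)) for d in delays]
best = delays[int(np.argmax(response))]
print(best / fs)                            # ~0.0003 s: the ITD is recovered
```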

Structures have been located in the barn owl which are consistent with Jeffress-type mechanisms. [13]

Cross-correlation for motion detection: Hassenstein–Reichardt model

A motion detector needs to satisfy three general requirements: paired inputs, asymmetry and nonlinearity. [14] The cross-correlation operation implemented asymmetrically on the responses from a pair of photoreceptors satisfies these minimal criteria and, furthermore, predicts features which have been observed in the responses of neurons of the lobula plate in two-winged (dipteran) insects. [15]

The master equation for the response is

$$ R(t) = s_1(t-\tau)\, s_2(t) - s_2(t-\tau)\, s_1(t), $$

where $s_1$ and $s_2$ are the signals from the two photoreceptors and $\tau$ is the delay introduced in each arm of the correlator.

The HR model predicts a peak in the response at a particular input temporal frequency. The conceptually similar Barlow–Levick model is deficient in the sense that a stimulus presented to only one receptor of the pair is sufficient to generate a response. This is unlike the HR model, which requires two correlated signals delivered in a time-ordered fashion. However, the HR model does not show the saturation of response at high contrasts that is observed in experiment. Extensions of the Barlow–Levick model can account for this discrepancy. [16]
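
A sketch of the Hassenstein–Reichardt correlator on sinusoidal inputs (the stimulus parameters and delays are illustrative assumptions):

```python
import numpy as np

# The time-averaged output of the correlator is positive for the preferred
# direction of motion and changes sign if the two inputs are swapped.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
lag = 0.05                                   # travel time between receptors (s)
s1 = np.sin(2 * np.pi * 4 * t)               # receptor 1
s2 = np.sin(2 * np.pi * 4 * (t - lag))       # receptor 2 sees the pattern later

tau = int(0.025 * fs)                        # internal delay (samples)
R = np.roll(s1, tau) * s2 - np.roll(s2, tau) * s1   # opponent multiplication
print(np.mean(R))                            # > 0: motion from receptor 1 to 2
```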

Watson–Ahumada model for motion estimation in humans

This uses a cross-correlation in both the spatial and temporal directions, and is related to the concept of optical flow. [17]

Anti-Hebbian adaptation: spike-timing dependent plasticity

Models of sensory-motor coupling

Neurophysiological metronomes: neural circuits for pattern generation

Mutually inhibitory processes are a unifying motif of all central pattern generators. This has been demonstrated in the stomatogastric ganglion (STG) of crayfish and lobsters. [18] Two- and three-cell oscillating networks based on the STG have been constructed which are amenable to mathematical analysis, and which depend in a simple way on synaptic strengths and overall activity, the presumed "knobs" by which such networks are tuned. [19] The mathematics involved is the theory of dynamical systems.
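
A firing-rate sketch of such a two-cell oscillator (an abstraction, not a conductance-based STG model; all parameters are illustrative assumptions):

```python
import numpy as np

# Half-center oscillator: mutual inhibition plus slow adaptation yields
# alternating activity in the two cells.
dt, T = 0.1, 500.0
tau_r, tau_a = 1.0, 20.0       # fast rate and slow adaptation time constants
w, g, drive = 2.0, 2.0, 1.0    # inhibition strength, adaptation gain, drive

r = np.array([0.6, 0.1])       # firing rates of the two cells
a = np.zeros(2)                # adaptation variables
rates = []
for _ in range(int(T / dt)):
    inhib = w * r[::-1]                          # each cell inhibits the other
    target = np.maximum(0.0, drive - inhib - g * a)
    r += dt / tau_r * (target - r)
    a += dt / tau_a * (r - a)                    # adaptation slowly tracks activity
    rates.append(r.copy())
# The two rates alternate: the active cell suppresses its partner until its
# own adaptation builds up and the partner escapes.
```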

Feedback and control: models of flight control in the fly

Flight control in the fly is believed to be mediated by inputs from the visual system and also the halteres, a pair of knob-like organs which measure angular velocity. Integrated computer models of Drosophila, short on neuronal circuitry but based on the general guidelines given by control theory and on data from the tethered flights of flies, have been constructed to investigate the details of flight control. [20] [21]

Cerebellum sensory motor control

Tensor network theory is a theory of cerebellar function that provides a mathematical model of the transformation of sensory space-time coordinates into motor coordinates and vice versa by cerebellar neuronal networks. The theory was developed by Andras Pellionisz and Rodolfo Llinás in the 1980s as a geometrization of brain function (especially of the central nervous system) using tensors. [22] [23]

Software modelling approaches and tools

Neural networks

In this approach the strength and type, excitatory or inhibitory, of synaptic connections are represented by the magnitude and sign of weights, that is, numerical coefficients $w_{ij}$ in front of the inputs $x_j$ to a particular neuron. The response of the $i$-th neuron is given by a nonlinear, usually "sigmoidal", function $g$ of the weighted sum of its inputs:

$$ y_i = g\!\left(\sum_j w_{ij} x_j\right). $$

This response is then fed as input into other neurons and so on. The goal is to optimize the weights of the neurons so that the network produces a desired response at the output layer for a given set of inputs at the input layer. This optimization of the neuron weights is often performed using the backpropagation algorithm together with an optimization method such as gradient descent or Newton's method. Backpropagation compares the output of the network with the expected output from the training data, then updates the weights of each neuron to minimize that neuron's contribution to the total error of the network.
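
A self-contained sketch of this training loop (the XOR task, network size, and hyperparameters are illustrative choices):

```python
import numpy as np

# Backpropagation with gradient descent on a one-hidden-layer sigmoid network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)           # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)         # forward pass: output layer
    err = out - y                      # gradient of squared error w.r.t. out
    d2 = err * out * (1 - out)         # backprop through output nonlinearity
    d1 = (d2 @ W2.T) * h * (1 - h)     # backprop through hidden nonlinearity
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)
print(out.ravel().round(2))            # should approach [0, 1, 1, 0]
```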

Genetic algorithms

Genetic algorithms are used to evolve neural (and sometimes body) properties in a model brain-body-environment system so as to exhibit some desired behavioral performance. The evolved agents can then be subjected to a detailed analysis to uncover their principles of operation. Evolutionary approaches are particularly useful for exploring spaces of possible solutions to a given behavioral task because these approaches minimize a priori assumptions about how a given behavior ought to be instantiated. They can also be useful for exploring different ways to complete a computational neuroethology model when only partial neural circuitry is available for a biological system of interest. [24]
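
A sketch of the evolutionary loop (the population size, mutation scale, and stand-in fitness function are illustrative assumptions; in practice fitness would come from simulating the agent in its environment and scoring its behavior):

```python
import numpy as np

# Genetic algorithm over a genome of controller parameters.
rng = np.random.default_rng(1)
pop_size, genome_len, generations = 50, 10, 100

def fitness(genome):
    # Placeholder objective; a real study would run a brain-body-environment
    # simulation here and return a behavioral performance score.
    return -np.sum((genome - 0.5) ** 2)

pop = rng.uniform(-1, 1, (pop_size, genome_len))
for _ in range(generations):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the best half
    children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
    children = children + rng.normal(0, 0.05, children.shape) # mutate copies
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(g) for g in pop])]              # evolved genome
```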

NEURON

The NEURON software, developed at Yale and Duke universities, is a simulation environment for modeling individual neurons and networks of neurons. [25] It is a self-contained environment allowing interaction through its GUI or via scripting in hoc or Python. The NEURON simulation engine is based on a Hodgkin–Huxley type model using a Borg–Graham formulation. Several examples of models written in NEURON are available from the online database ModelDB. [26]
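
A minimal sketch of a NEURON session scripted from Python, using the built-in Hodgkin–Huxley mechanism on a single compartment (the geometry and stimulus values are illustrative):

```python
from neuron import h

h.load_file("stdrun.hoc")                     # load the standard run system

soma = h.Section(name="soma")
soma.L = soma.diam = 20                       # length and diameter (um)
soma.insert("hh")                             # built-in Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))                    # current clamp at the midpoint
stim.delay, stim.dur, stim.amp = 5, 20, 0.5   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)       # record membrane potential
t = h.Vector().record(h._ref_t)               # record time
h.finitialize(-65)                            # initialize to -65 mV
h.continuerun(40)                             # simulate 40 ms; v now holds spikes
```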

Embodiment in electronic hardware

Conductance-based silicon neurons

Nervous systems differ from the majority of silicon-based computing devices in that they resemble analog computers (not digital data processors) and massively parallel processors (not sequential processors). To model nervous systems accurately in real time, alternative hardware is required.

The most realistic circuits to date make use of analog properties of existing digital electronics (operated under non-standard conditions) to realize Hodgkin–Huxley-type models in silico. [27] [28]

Retinomorphic chips

[29]

References

  1. Cejnar, Pavel; Vyšata, Oldřich; Vališ, Martin; Procházka, Aleš (2019). "The Complex Behaviour of a Simple Neural Oscillator Model in the Human Cortex". IEEE Transactions on Neural Systems and Rehabilitation Engineering. 27 (3): 337–347. doi:10.1109/TNSRE.2018.2883618. PMID   30507514. S2CID   54527064.
  2. Nicholas J. Priebe & David Ferster (2002). "A New Mechanism for Neuronal Gain Control". Neuron. 35 (4): 602–604.
  3. Klein, S. A., Carney, T., Barghout-Stein, L., & Tyler, C. W. (1997, June). Seven models of masking. In Electronic Imaging'97 (pp. 13–24). International Society for Optics and Photonics.
  4. Barghout-Stein, Lauren. On differences between peripheral and foveal pattern masking. Diss. University of California, Berkeley, 1999.
  5. Molnar, Alyosha; Hsueh, Hain-Ann; Roska, Botond; Werblin, Frank S. (2009). "Crossover inhibition in the retina: circuitry that compensates for nonlinear rectifying synaptic transmission". Journal of Computational Neuroscience. 27 (3): 569–590. doi:10.1007/s10827-009-0170-6. ISSN   0929-5313. PMC   2766457 . PMID   19636690.
  6. Singh, Chandan; Levy, William B. (13 July 2017). "A consensus layer V pyramidal neuron can sustain interpulse-interval coding". PLOS ONE. 12 (7): e0180839. arXiv: 1609.08213 . Bibcode:2017PLoSO..1280839S. doi: 10.1371/journal.pone.0180839 . ISSN   1932-6203. PMC   5509228 . PMID   28704450.
  7. Cash, Sydney; Yuste, Rafael (1 January 1998). "Input Summation by Cultured Pyramidal Neurons Is Linear and Position-Independent". Journal of Neuroscience. 18 (1): 10–15. doi: 10.1523/JNEUROSCI.18-01-00010.1998 . ISSN   0270-6474. PMC   6793421 . PMID   9412481.
  8. Robert B. Barlow, Jr.; Ramkrishna Prakash; Eduardo Solessio (1993). "The Neural Network of the Limulus Retina: From Computer to Behavior". American Zoologist. 33: 66–78. doi: 10.1093/icb/33.1.66 .
  9. Robert B. Barlow, James M. Hitt and Frederick A. Dodge (2001). "Limulus Vision in the Marine Environment". The Biological Bulletin. 200 (2): 169–176. CiteSeerX   10.1.1.116.5190 . doi:10.2307/1543311. JSTOR   1543311. PMID   11341579. S2CID   18371282.
  10. K. P. Hadeler & D. Kuhn (1987). "Stationary States of the Hartline–Ratliff Model". Biological Cybernetics. 56 (5–6): 411–417. doi:10.1007/BF00319520. S2CID   8710876.
  11. Jeffress, L.A. (1948). "A place theory of sound localization". Journal of Comparative and Physiological Psychology. 41 (1): 35–39. doi:10.1037/h0061495. PMID   18904764.
  12. Fischer, Brian J.; Anderson, Charles H. (2004). "A computational model of sound localization in the barn owl". Neurocomputing. 58–60: 1007–1012. doi:10.1016/j.neucom.2004.01.159.
  13. Catherine E. Carr (1993). "Delay Line Models of Sound Localization in the Barn Owl". American Zoologist. 33 (1): 79–85.
  14. Borst A, Egelhaaf M (1989). "Principles of visual motion detection". Trends in Neurosciences. 12 (8): 297–306.
  15. Joesch, M.; et al. (2008). "Response properties of motion-sensitive visual interneurons in the lobula plate of Drosophila melanogaster". Curr. Biol. 18 (5): 368–374. doi: 10.1016/j.cub.2008.02.022 . PMID   18328703. S2CID   18873331.
  16. Gonzalo G. de Polavieja (2006). "Neuronal Algorithms That Detect the Temporal Order of Events". Neural Computation. 18: 2102–2121.
  17. Andrew B. Watson & Albert J. Ahumada, Jr. (1985). "Model of human visual-motion sensing". J. Opt. Soc. Am. A. 2 (2): 322–341.
  18. Michael P. Nusbaum and Mark P. Beenhakker, A small-systems approach to motor pattern generation, Nature 417, 343–350 (16 May 2002)
  19. Cristina Soto-Treviño, Kurt A. Thoroughman, Eve Marder & L. F. Abbott (2001). "Activity-dependent modification of inhibitory synapses in models of rhythmic neural networks". Nature Neuroscience. 4 (3): 297–303.
  20. "the Grand Unified Fly (GUF) model".
  21. http://www.mendeley.com/download/public/2464051/3652638122/d3bd7957efd2c8a011afb0687dfb6943731cb6d0/dl.pdf
  22. Pellionisz, A.; Llinás, R. (1980). "Tensorial Approach to the Geometry of Brain Function: Cerebellar Coordination Via A Metric Tensor" (PDF). Neuroscience. 5 (7): 1125–1136. doi:10.1016/0306-4522(80)90191-8. PMID 6967569. S2CID 17303132.
  23. Pellionisz, A.; Llinás, R. (1985). "Tensor Network Theory of the Metaorganization of Functional Geometries in the Central Nervous System". Neuroscience. 16 (2): 245–273. doi:10.1016/0306-4522(85)90001-6. PMID 4080158. S2CID 10747593.
  24. Beer, Randall; Chiel, Hillel (4 March 2008). "Computational neuroethology". Scholarpedia. 3 (3): 5307. Bibcode:2008SchpJ...3.5307B. doi: 10.4249/scholarpedia.5307 .
  25. "NEURON - for empirically-based simulations of neurons and networks of neurons".
  26. McDougal RA, Morse TM, Carnevale T, Marenco L, Wang R, Migliore M, Miller PL, Shepherd GM, Hines ML. Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience. J Comput Neurosci. 2017; 42(1):1–10.
  27. L. Alvadoa, J. Tomasa, S. Saghia, S. Renauda, T. Balb, A. Destexheb, G. Le Masson, 2004. Hardware computation of conductance-based neuron models. Neurocomputing 58–60 (2004) 109–115
  28. Indiveri, Giacomo; Douglas, Rodney; Smith, Leslie (29 March 2008). "Silicon neurons". Scholarpedia. 3 (3): 1887. Bibcode:2008SchpJ...3.1887I. doi: 10.4249/scholarpedia.1887 .
  29. Kwabena Boahen, "A Retinomorphic Chip with Parallel Pathways: Encoding INCREASING, ON, DECREASING, and OFF Visual Signals", Analog Integrated Circuits and Signal Processing, 30, 121–135, 2002