Dynamical neuroscience


The dynamical systems approach to neuroscience is a branch of mathematical biology that uses nonlinear dynamics to understand and model the nervous system and its functions. In a dynamical system, all possible states are expressed by a phase space. [1] Such systems can undergo bifurcation (a qualitative change in behavior) as their bifurcation parameters vary, and they often exhibit chaos. [2] Dynamical neuroscience describes nonlinear dynamics at many levels of the brain, from single neural cells [3] to cognitive processes, sleep states, and the behavior of neurons in large-scale neuronal simulation. [4]


Neurons have been modeled as nonlinear systems for decades, but dynamical systems emerge in numerous other ways in the nervous system. From chemistry, chemical species models like the Gray–Scott model exhibit rich, chaotic dynamics. [5] [6] Dynamic interactions between extracellular fluid pathways reshape our view of intraneural communication. [7] Information theory draws on thermodynamics in the development of infodynamics, which can involve nonlinear systems, especially with regard to the brain.
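As a sketch of how such reaction–diffusion dynamics can be explored numerically, the Gray–Scott model can be stepped forward with a simple forward-Euler scheme. The parameter values below are a common illustrative choice from the literature, not a fit to any biological system:

```python
import numpy as np

def laplacian(Z):
    """Five-point discrete Laplacian with periodic boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0)
            + np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One forward-Euler step of the Gray-Scott system (U + 2V -> 3V),
    with feed rate F and kill rate k."""
    UVV = U * V * V
    dU = Du * laplacian(U) - UVV + F * (1.0 - U)
    dV = Dv * laplacian(V) + UVV - (F + k) * V
    return U + dt * dU, V + dt * dV

# Seed a uniform substrate with a small square of activator and iterate.
n = 64
U = np.ones((n, n))
V = np.zeros((n, n))
U[28:36, 28:36] = 0.50
V[28:36, 28:36] = 0.25
for _ in range(1000):
    U, V = gray_scott_step(U, V)
```

Depending on F and k, the long-run behavior ranges from uniform steady states to spots, stripes, and spatiotemporal chaos, which is what makes the model a compact illustration of rich nonlinear dynamics.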

History

One of the first well-known instances in which neurons were modeled on a mathematical and physical basis was the integrate-and-fire model, developed by Louis Lapicque in 1907. Decades later, the discovery of the squid giant axon eventually led Alan Hodgkin and Andrew Huxley (half-brother to Aldous Huxley) to develop the Hodgkin–Huxley model of the neuron in 1952. [8] This model was simplified by the FitzHugh–Nagumo model in 1962. [9] By 1981, the Morris–Lecar model had been developed for the barnacle muscle.

These mathematical models proved useful and are still used in biophysics today, but a late-20th-century development propelled the dynamical study of neurons even further: computer technology. The central difficulty with physiological equations like the ones above is that they are nonlinear, which rules out standard analytical techniques and leaves a nearly endless space of possible behaviors to explore. Computers opened many doors for all of the hard sciences through their ability to approximate solutions to nonlinear equations. This is the aspect of computational neuroscience that dynamical systems encompasses.

In 2007, Eugene Izhikevich wrote a canonical textbook, Dynamical Systems in Neuroscience, helping to transform an obscure research topic into a line of academic study.

Neuron dynamics

Treating a neuron as a dynamical system means describing how its electrical state evolves in a phase space shaped by membrane physiology. The subsections below cover the electrophysiological basis of such models and the key property, excitability, that they are built to capture.

Electrophysiology of the neuron

The motivation for a dynamical approach to neuroscience stems from an interest in the physical complexity of neuron behavior. As an example, consider the coupled interaction between a neuron's membrane potential and the activation of ion channels throughout the neuron. As the membrane potential of a neuron increases sufficiently, channels in the membrane open up to allow more ions in or out. The ion flux further alters the membrane potential, which further affects the activation of the ion channels, which affects the membrane potential, and so on. This is often the nature of coupled nonlinear equations. A fairly straightforward example of this is the Morris–Lecar model:

    C dV/dt = I − g_L (V − V_L) − g_Ca M_∞(V) (V − V_Ca) − g_K N (V − V_K)
    dN/dt = (N_∞(V) − N) / τ_N(V)

where V is the membrane potential, N is the fraction of open potassium channels, and M_∞(V), N_∞(V), and τ_N(V) are sigmoidal functions of the membrane potential.

See the Morris–Lecar paper [10] for an in-depth treatment of the model. A briefer summary of the Morris–Lecar model is given by Scholarpedia. [11]
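As an illustrative sketch (not the authors' original computation), the Morris–Lecar model can be integrated with a simple Euler scheme. The parameter values below are a standard class II ("Hopf") set quoted in the literature; treat them as illustrative:

```python
import math

def morris_lecar(I_ext, t_max=1000.0, dt=0.05):
    """Forward-Euler integration of the Morris-Lecar equations.
    Returns the membrane-potential trace V(t) in mV."""
    # Class II parameter set commonly quoted in the literature.
    C = 20.0                                  # membrane capacitance
    g_L, g_Ca, g_K = 2.0, 4.4, 8.0            # conductances
    V_L, V_Ca, V_K = -60.0, 120.0, -84.0      # reversal potentials
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

    V, N = -60.0, 0.0
    trace = []
    for _ in range(int(t_max / dt)):
        M_inf = 0.5 * (1 + math.tanh((V - V1) / V2))    # fast Ca gate
        N_inf = 0.5 * (1 + math.tanh((V - V3) / V4))    # slow K gate
        tau_N = 1.0 / (phi * math.cosh((V - V3) / (2 * V4)))
        dV = (I_ext - g_L * (V - V_L) - g_Ca * M_inf * (V - V_Ca)
              - g_K * N * (V - V_K)) / C
        V += dt * dV
        N += dt * (N_inf - N) / tau_N
        trace.append(V)
    return trace

quiet = morris_lecar(60.0)    # settles to a stable resting potential
firing = morris_lecar(100.0)  # repetitive spiking (limit cycle)
```

Sweeping the applied current I_ext past the bifurcation point switches the model from a stable rest state to sustained oscillation, which is exactly the qualitative change in behavior that bifurcation analysis studies.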

In this article, the point is to demonstrate the physiological basis of dynamical neuron models, so this discussion will only cover the two variables of the model: the membrane potential V and the recovery variable N.

Most importantly, the first equation states that the change of V with respect to time depends on both V and N, as does the change in N with respect to time. M_∞ and τ_N are both functions of V. So we have two coupled functions, V and N.

Different types of neuron models utilize different channels, depending on the physiology of the organism involved. For instance, the simplified two-dimensional Hodgkin–Huxley model considers sodium channels, while the Morris–Lecar model considers calcium channels. Both models consider potassium and leak currents. Note, however, that the Hodgkin–Huxley model is canonically four-dimensional. [12]

Excitability of neurons

One of the predominant themes in classical neurobiology is the concept of a digital component to neurons. This concept was quickly absorbed by computer scientists, where it evolved into the simple weighting function for coupled artificial neural networks. Neurobiologists call the critical voltage at which neurons fire a threshold. The dynamical criticism of this digital concept is that neurons don't truly exhibit all-or-none firing and should instead be thought of as resonators. [13]

In dynamical systems, this kind of property is known as excitability. An excitable system starts at some stable point. Imagine an empty lake at the top of a mountain with a ball in it. The ball is in a stable point. Gravity is pulling it down, so it's fixed at the lake bottom. If we give it a big enough push, it will pop out of the lake and roll down the side of the mountain, gaining momentum and going faster. Let's say we fashioned a loop-de-loop around the base of the mountain so that the ball will shoot up it and return to the lake (no rolling friction or air resistance). Now we have a system that stays in its rest state (the ball in the lake) until a perturbation knocks it out (rolling down the hill) but eventually returns to its rest state (back in the lake). In this example, gravity is the driving force and the spatial dimensions x (horizontal) and y (vertical) are the variables. In the Morris–Lecar neuron, the fundamental force is electromagnetic and V and N form the new phase space, but the dynamical picture is essentially the same. The electromagnetic force acts along V just as gravity acts along y. The shape of the mountain and the loop-de-loop act to couple the y and x dimensions to each other. In the neuron, nature has already decided how V and N are coupled, but the relationship is much more complicated than the gravitational example.

This property of excitability is what gives neurons the ability to transmit information to each other, so it is important to dynamical neuron networks, but the Morris–Lecar model can also operate in another parameter regime, where it exhibits oscillatory behavior, forever cycling around in phase space. This behavior is comparable to that of pacemaker cells in the heart, which don't rely on excitability but may excite neurons that do.
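The ball-in-the-lake picture can be made concrete in code. Below is a minimal sketch using the FitzHugh–Nagumo model with its standard textbook parameters: a small instantaneous kick to the voltage decays straight back to rest, while a large one triggers a full excursion (a spike). The threshold values chosen for the kicks are illustrative:

```python
def fhn_peak(kick, t_max=100.0, dt=0.01):
    """FitzHugh-Nagumo neuron started at rest, given an instantaneous
    voltage perturbation `kick`. Returns the peak voltage reached."""
    a, b, eps = 0.7, 0.8, 0.08       # standard textbook parameters
    v, w = -1.1994, -0.6243          # approximate resting equilibrium
    v += kick                        # the "push" given to the ball
    peak = v
    for _ in range(int(t_max / dt)):
        dv = v - v**3 / 3 - w        # fast voltage variable
        dw = eps * (v + a - b * w)   # slow recovery variable
        v += dt * dv
        w += dt * dw
        peak = max(peak, v)
    return peak

small = fhn_peak(0.2)   # subthreshold: relaxes back to rest, no spike
large = fhn_peak(1.0)   # suprathreshold: full spike-like excursion
```

The all-or-none character shows up as a sharp separation between the two outcomes, even though the underlying equations are smooth, which is the dynamical point being made above.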

Global neurodynamics

The global dynamics of a network of neurons depend on at least the first three of the following four attributes:

  1. individual neuron dynamics (primarily, their thresholds or excitability)
  2. information transfer between neurons (generally either synapses or gap junctions)
  3. network topology
  4. external forces (such as thermodynamic gradients)

Many combinations of these four attributes can be chosen and modeled, resulting in a versatile array of global dynamics.

Biological neural network modeling

Biological neural networks can be modeled by choosing an appropriate biological neuron model to describe the physiology of the organism and appropriate coupling terms to describe the physical interactions between neurons (forming the network). Other global factors must also be taken into account, such as the initial conditions and the parameters of each neuron.

In terms of nonlinear dynamics, this requires evolving the state of the system through the coupled functions. Following from the Morris–Lecar example, the alterations to the equations would be:

    C dV_i/dt = I − g_L (V_i − V_L) − g_Ca M_∞(V_i) (V_i − V_Ca) − g_K N_i (V_i − V_K) + D
    dN_i/dt = (N_∞(V_i) − N_i) / τ_N(V_i)

where V now has the subscript i, indicating that it is the ith neuron in the network, and a coupling function, D, has been added to the first equation. The coupling function is chosen based on the particular network being modeled. The two major candidates are synaptic junctions and gap junctions.
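For the gap-junction case, a common choice is diffusive coupling, where the coupling term for neuron i is proportional to the sum of voltage differences with its neighbors. The sketch below applies that idea to a small all-to-all network of FitzHugh–Nagumo oscillators (used here instead of Morris–Lecar for brevity; all values are illustrative):

```python
def coupled_fhn(n=5, D=0.5, I=0.5, t_max=200.0, dt=0.01):
    """n FitzHugh-Nagumo oscillators with all-to-all diffusive coupling
    D * sum_j (v_j - v_i), a gap-junction-like interaction.
    Returns the final voltages of all neurons."""
    a, b, eps = 0.7, 0.8, 0.08
    v = [0.1 * i for i in range(n)]   # staggered initial conditions
    w = [0.0] * n
    for _ in range(int(t_max / dt)):
        new_v, new_w = [], []
        for i in range(n):
            coupling = D * sum(v[j] - v[i] for j in range(n))
            dv = v[i] - v[i] ** 3 / 3 - w[i] + I + coupling
            dw = eps * (v[i] + a - b * w[i])
            new_v.append(v[i] + dt * dv)
            new_w.append(w[i] + dt * dw)
        v, w = new_v, new_w
    return v

final = coupled_fhn()   # strong coupling: the oscillators synchronize
```

With strong diffusive coupling the staggered oscillators pull together onto a single synchronized limit cycle; weakening D or changing the network topology (here all-to-all) changes the global dynamics, illustrating attributes 2 and 3 in the list above.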

Attractor network

  • Point attractors – memory, pattern completion, categorizing, noise reduction
  • Line attractors – neural integration: oculomotor control
  • Ring attractors – neural integration: spatial orientation
  • Plane attractors – neural integration: (higher dimension of oculomotor control)
  • Cyclic attractors – central pattern generators
  • Chaotic attractors – possibly involved in the recognition of odors; chaos is often mistaken for random noise

Please see Scholarpedia's page for a formal review of attractor networks. [14]

Beyond neurons

While neurons play a lead role in brain dynamics, it is becoming increasingly clear to neuroscientists that neuron behavior is highly dependent on the environment. That environment is not a simple background: much is happening right outside the neuronal membrane, in the extracellular space. Neurons share this space with glial cells, and the extracellular space itself may contain several agents of interaction with the neurons. [15]

Glia

Glia, once considered a mere support system for neurons, have been found to serve a significant role in the brain. [16] [17] How the interaction between neurons and glia influences neuronal excitability is a question of dynamics. [18]

Neurochemistry

Like any other cell, neurons operate on a deeply complex set of molecular reactions. Each cell is a tiny community of molecular machinery (organelles) working in tandem, encased in a lipid membrane. These organelles communicate largely via chemical signals such as G-proteins and neurotransmitters, consuming ATP for energy. Such chemical complexity is of interest to physiological studies of the neuron.

Neuromodulation

Neurons in the brain live in an extracellular fluid capable of propagating both chemical and physical energy through reaction-diffusion and bond manipulation that leads to thermal gradients. Volume transmission has been associated with thermal gradients caused by biological reactions in the brain. [19] Such complex transmission has also been associated with migraines. [20]

Cognitive neuroscience

Computational approaches to theoretical neuroscience often employ artificial neural networks that simplify the dynamics of single neurons in favor of examining more global dynamics. While neural networks are often associated with artificial intelligence, they have also been productive in the cognitive sciences. [21] Artificial neural networks use simple neuron models, but their global dynamics are capable of exhibiting both Hopfield and attractor-like network dynamics.

Hopfield network

A Lyapunov function is a nonlinear technique used to analyze the stability of the equilibria of a system of differential equations. Hopfield networks were specifically designed so that their underlying dynamics could be described by a Lyapunov function: every update can only decrease an energy function, so the network settles into a stable state. Stability in biological systems is called homeostasis. Of particular interest to the cognitive sciences, Hopfield networks have been implicated in associative memory (memory triggered by cues). [22]
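A minimal sketch of this energy-descent picture, with Hebbian weights storing a single pattern. Everything here is a toy illustration, not a biologically fitted model:

```python
def train_hopfield(patterns):
    """Hebbian learning: W[i][j] = sum over stored patterns of x_i * x_j."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:            # no self-connections
                    W[i][j] += p[i] * p[j]
    return W

def energy(W, s):
    """Lyapunov (energy) function E = -1/2 * sum_ij W_ij s_i s_j."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def recall(W, s, sweeps=5):
    """Asynchronous updates; each flip can only lower the energy,
    so the state descends into a point attractor."""
    s = list(s)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
W = train_hopfield([stored])
cue = [1, 1, 1, -1, -1, -1, -1, -1]   # corrupted copy of the memory
out = recall(W, cue)                   # pattern completion restores it
```

The corrupted cue sits higher on the energy landscape than the stored memory; the asynchronous updates roll the state downhill until it lands in the memory's basin of attraction, which is the dynamical picture of cue-triggered recall.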



References

  1. Gerstner, Wulfram; Kistler, Werner M.; Naud, Richard; Paninski, Liam (2014-07-24). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press. ISBN   978-1-107-06083-8.
  2. Strogatz, Steven H. (2018-05-04). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC Press. ISBN   978-0-429-97219-5.
  3. Izhikevich, E. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Massachusetts: The MIT Press, 2007.
  4. "Agenda of the Dynamical Neuroscience XVIII: The resting brain: not at rest!". Archived from the original on 2011-07-09. Retrieved 2010-08-07.
  5. Wackerbauer, Renate; Showalter, Kenneth (2003-10-22). "Collapse of Spatiotemporal Chaos". Physical Review Letters. 91 (17). American Physical Society (APS): 174103. Bibcode:2003PhRvL..91q4103W. doi:10.1103/physrevlett.91.174103. ISSN   0031-9007. PMID   14611350.
  6. Lefèvre, Julien; Mangin, Jean-François (2010-04-22). Friston, Karl J. (ed.). "A Reaction-Diffusion Model of Human Brain Development". PLOS Computational Biology. 6 (4). Public Library of Science (PLoS): e1000749. Bibcode:2010PLSCB...6E0749L. doi: 10.1371/journal.pcbi.1000749 . ISSN   1553-7358. PMC   2858670 . PMID   20421989.
  7. Agnati, L.F.; Zoli, M.; Strömberg, I.; Fuxe, K. (1995). "Intercellular communication in the brain: Wiring versus volume transmission". Neuroscience. 69 (3). Elsevier BV: 711–726. doi:10.1016/0306-4522(95)00308-6. ISSN   0306-4522. PMID   8596642. S2CID   9752747.
  8. Hodgkin, A. L.; Huxley, A. F. (1952-08-28). "A quantitative description of membrane current and its application to conduction and excitation in nerve". The Journal of Physiology. 117 (4): 500–544. doi:10.1113/jphysiol.1952.sp004764. ISSN   0022-3751. PMC   1392413 . PMID   12991237.
  9. Izhikevich E. and FitzHugh R. (2006), Scholarpedia, 1(9):1349
  10. Morris, C.; Lecar, H. (1981). "Voltage oscillations in the barnacle giant muscle fiber". Biophysical Journal. 35 (1). Elsevier BV: 193–213. Bibcode:1981BpJ....35..193M. doi:10.1016/s0006-3495(81)84782-0. ISSN   0006-3495. PMC   1327511 . PMID   7260316.
  11. Lecar, H. (2007), Scholarpedia, 2(10):1333
  12. Hodgkin, A. and Huxley, A. (1952): A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117:500–544. PMID   12991237
  13. Izhikevich, E. Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. Massachusetts: The MIT Press, 2007.
  14. Eliasmith, C. (2007), Scholarpedia, 2(10):1380
  15. Dahlem, Yuliya A.; Dahlem, Markus A.; Mair, Thomas; Braun, Katharina; Müller, Stefan C. (2003-09-01). "Extracellular potassium alters frequency and profile of retinal spreading depression waves". Experimental Brain Research. 152 (2). Springer Science and Business Media LLC: 221–228. doi:10.1007/s00221-003-1545-y. ISSN   0014-4819. PMID   12879176. S2CID   10752622.
  16. Ullian, Erik M.; Christopherson, Karen S.; Barres, Ben A. (2004). "Role for glia in synaptogenesis" (PDF). Glia. 47 (3). Wiley: 209–216. doi:10.1002/glia.20082. ISSN   0894-1491. PMID   15252809. S2CID   7439962. Archived from the original (PDF) on 2011-03-05. Retrieved 2010-08-07.
  17. Keyser, David O.; Pellmar, Terry C. (1994). "Synaptic transmission in the hippocampus: Critical role for glial cells". Glia. 10 (4). Wiley: 237–243. doi:10.1002/glia.440100402. ISSN   0894-1491. PMID   7914511. S2CID   28877566.
  18. Nadkarni, S. (2005) Dynamics of Dressed Neurons: Modeling the Neural-Glial Circuit and Exploring its Normal and Pathological Implications. Doctoral dissertation. Ohio University, Ohio. Archived 2011-07-16 at the Wayback Machine
  19. Fuxe, K., Rivera, A., Jacobsen, K., Hoistad, M., Leo, G., Horvath, T., Stained, W., De la calle, A. and Agnati, L. (2005) Dynamics of volume transmission in the brain. Focus on catecholamine and opioid peptide communication and the role of uncoupling protein 2. Journal of Neural Transmission, 112:1.
  20. "Dahlem, M. (2009) Migraine and Chaos. SciLogs, 25 November". Archived from the original on 2010-06-13. Retrieved 2010-08-07.
  21. Gluck, M. 2001. Gateway to Memory: An Introduction to Neural Network Modeling of the Hippocampus and Learning. Massachusetts: MIT.
  22. Hopfield, J. (2007), Scholarpedia, 2(5):1977