| Developer(s) | Romain Brette, Dan Goodman, Marcel Stimberg |
| --- | --- |
| Written in | Python |
| Operating system | Cross-platform |
| Type | Neural network software |
| License | CeCILL |
Brian is an open-source Python package for developing simulations of networks of spiking neurons.
Brian is aimed at researchers developing models based on networks of spiking neurons. The general design aims to maximise flexibility and simplicity while minimising users' development time. [2] Users specify neuron models by giving their differential equations in standard mathematical form as strings, create groups of neurons, and connect them via synapses. This contrasts with the approach taken by many neural simulators, in which users select from a predefined set of neuron models.
Brian is written in Python. Computationally, it is built around code generation: the user specifies the model in Python, but behind the scenes Brian generates, compiles and runs code in one of several target languages (including Python, Cython and C++). In addition, there is a "standalone" mode in which Brian generates an entire C++ source tree with no dependency on Brian itself, allowing models to be run on platforms where Python is not available.
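For example (a minimal sketch; the choice of target and the directory name are illustrative), both mechanisms are selected from within the script itself:

```python
from brian2 import *

# Runtime mode: choose the code-generation target for simulation code.
# Valid targets include 'numpy' and 'cython'.
prefs.codegen.target = 'cython'

# Standalone mode (an alternative to the above): generate a complete,
# self-contained C++ project in the given directory.
# set_device('cpp_standalone', directory='cpp_output')

G = NeuronGroup(1, 'dv/dt = -v / (10*ms) : volt')
run(100*ms)
```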
The following code defines, runs and plots a randomly connected network of leaky integrate-and-fire neurons with exponentially decaying excitatory and inhibitory synaptic currents.
```python
from brian2 import *

# Membrane equation plus exponentially decaying excitatory (ge)
# and inhibitory (gi) synaptic currents.
eqs = """
dv/dt  = (ge + gi - (v + 49*mV)) / (20*ms) : volt
dge/dt = -ge / (5*ms)                      : volt
dgi/dt = -gi / (10*ms)                     : volt
"""

P = NeuronGroup(4000, eqs, threshold="v > -50*mV", reset="v = -60*mV")
P.v = -60*mV

Pe = P[:3200]   # excitatory subgroup
Pi = P[3200:]   # inhibitory subgroup

Ce = Synapses(Pe, P, on_pre="ge += 1.62*mV")
Ce.connect(p=0.02)  # 2% random connectivity
Ci = Synapses(Pi, P, on_pre="gi -= 9*mV")
Ci.connect(p=0.02)

M = SpikeMonitor(P)
run(1*second)

# Raster plot: spike time against neuron index.
plot(M.t/ms, M.i, ".")
show()
```
Brian is primarily, though not solely, aimed at single-compartment neuron models. Simulators focused on multi-compartmental models include Neuron and GENESIS, and their derivatives.
Brian focuses on flexibility and ease of use, and it only supports simulations running on a single machine. The NEST simulator, by contrast, includes facilities for distributing simulations across a cluster. [3]
Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
Hebbian theory is a neuropsychological theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior. The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. Hebb states it as follows:
> Let us assume that the persistence or repetition of a reverberatory activity tends to induce lasting cellular changes that add to its stability. ... When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.
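Hebb's postulate is often summarised by a rate-based learning rule of the following form (a standard textbook abstraction rather than Hebb's own notation):

```latex
\Delta w_{ij} = \eta \, x_i \, y_j
```

Here \(w_{ij}\) is the efficacy of the synapse from presynaptic cell \(i\) to postsynaptic cell \(j\), \(x_i\) and \(y_j\) are the two cells' activity levels, and \(\eta\) is a learning rate: the weight grows whenever the two cells are active together.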
Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g. using Python-based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g. using BindsNET.
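As a rough sketch of the software route (assuming the snnTorch package; all values here are arbitrary), a leaky integrate-and-fire unit can be stepped like any other PyTorch module:

```python
import torch
import snntorch as snn

# One leaky integrate-and-fire neuron; beta is the per-step membrane
# decay factor (value chosen arbitrarily for illustration).
lif = snn.Leaky(beta=0.9)

mem = lif.init_leaky()            # initial membrane potential
spikes = []
for step in range(100):
    cur = torch.rand(1)           # stand-in for a real input current
    spk, mem = lif(cur, mem)      # returns output spikes and new membrane state
    spikes.append(spk)
```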
Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts the connection strengths based on the relative timing of a particular neuron's output and input action potentials. The STDP process partially explains the activity-dependent development of nervous systems, especially with regard to long-term potentiation and long-term depression.
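Because Brian lets synapses carry their own state variables, pair-based STDP can be written directly in its equation syntax. The sketch below follows the pattern used in Brian's documentation; the parameter values, the Poisson inputs, and the single target neuron are arbitrary choices for illustration:

```python
from brian2 import *

taupre = taupost = 20*ms   # decay constants of the synaptic traces
Apre = 0.005               # potentiation step (pre-before-post)
Apost = -Apre * 1.05       # depression step (post-before-pre)
wmax = 0.05                # hard upper bound on the weight

inputs = PoissonGroup(100, rates=50*Hz)
target = NeuronGroup(1, "dv/dt = -v/(10*ms) : 1",
                     threshold="v > 1", reset="v = 0")

S = Synapses(inputs, target,
             """w : 1
                dapre/dt  = -apre/taupre   : 1 (event-driven)
                dapost/dt = -apost/taupost : 1 (event-driven)""",
             on_pre="""v_post += w
                       apre += Apre
                       w = clip(w + apost, 0, wmax)""",
             on_post="""apost += Apost
                        w = clip(w + apre, 0, wmax)""")
S.connect()
S.w = "rand() * wmax"
run(1*second)
```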
The Blue Brain Project is a Swiss brain research initiative that aims to create a digital reconstruction of the mouse brain. The project was founded in May 2005 by the Brain Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. Its mission is to use biologically-detailed digital reconstructions and simulations of the mammalian brain to identify the fundamental principles of brain structure and function.
Neural network software is used to simulate, research, develop, and apply artificial neural networks, software concepts adapted from biological neural networks, and in some cases, a wider array of adaptive systems such as artificial intelligence and machine learning.
Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
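The threshold-and-reset mechanics described above can be illustrated with a few lines of plain Python (a deliberately simplified sketch; all constants are arbitrary):

```python
# Euler integration of a single leaky integrate-and-fire neuron.
v_rest, v_thresh, v_reset = -65.0, -50.0, -60.0   # mV, illustrative
tau, dt = 20.0, 0.1                               # ms
v = v_rest
spike_times = []
for step in range(10000):                         # 1 second of simulated time
    i_in = 20.0                                   # constant input, arbitrary units
    v += dt * (-(v - v_rest) + i_in) / tau        # membrane dynamics
    if v >= v_thresh:                             # threshold crossing...
        spike_times.append(step * dt)             # ...emits a spike
        v = v_reset                               # ...and resets the membrane
print(f"{len(spike_times)} spikes in 1 second")
```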
Neuron is a simulation environment for modeling individual neurons and networks of neurons. It was primarily developed by Michael Hines, John W. Moore, and Ted Carnevale at Yale and Duke.
GENESIS is a simulation environment for constructing realistic models of neurobiological systems at many levels of scale including: sub-cellular processes, individual neurons, networks of neurons, and neuronal systems. These simulations are “computer-based implementations of models whose primary objective is to capture what is known of the anatomical structure and physiological characteristics of the neural system of interest”. GENESIS is intended to quantify the physical framework of the nervous system in a way that allows for easy understanding of the physical structure of the nerves in question. “At present only GENESIS allows parallelized modeling of single neurons and networks on multiple-instruction-multiple-data parallel computers.” Development of GENESIS software spread from its home at Caltech to labs at the University of Texas at San Antonio, the University of Antwerp, the National Centre for Biological Sciences in Bangalore, the University of Colorado, the Pittsburgh Supercomputing Center, the San Diego Supercomputer Center, and Emory University.
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the conduction of electrical signals in neurons. Neurons are electrically excitable cells within the nervous system, able to fire electric signals, called action potentials, across a neural network. These mathematical models describe the role of the biophysical and geometrical characteristics of neurons on the conduction of electrical activity.
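One of the simplest such descriptions is the leaky integrate-and-fire model, shown here in a standard textbook form:

```latex
\tau_m \frac{dV}{dt} = -(V - V_{\mathrm{rest}}) + R\,I(t),
\qquad V(t) \ge V_{\mathrm{th}} \;\Rightarrow\; \text{spike and } V \leftarrow V_{\mathrm{reset}}
```

The plain-Python loop shown earlier integrates a discretised form of exactly this equation, with the \(R\,I(t)\) product folded into a single input term.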
In the field of computational neuroscience, brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. Brain simulation projects intend to contribute to a complete understanding of the brain and, eventually, to assist the process of diagnosing and treating brain diseases. Simulations utilize mathematical models of biological neurons, such as the Hodgkin–Huxley model, to simulate the behavior of neurons, or other cells, within the brain.
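For reference, the Hodgkin–Huxley membrane equation has the form below, where the gating variables \(m\), \(h\) and \(n\) each follow first-order voltage-dependent kinetics:

```latex
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}})
                    - \bar{g}_L\,(V - E_L) + I(t)
```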
NOMFET is a nanoparticle organic memory field-effect transistor. The transistor is designed to mimic the feature of the human synapse known as plasticity, or the variation of the speed and strength of the signal going from neuron to neuron. The device uses gold nanoparticles of about 5–20 nm set with pentacene to emulate the change in voltages and speed within the signal. This device uses charge trapping/detrapping in an array of gold nanoparticles (NPs) at the SiO2/pentacene interface to design a SYNAPSTOR (synapse transistor) mimicking the dynamic plasticity of a biological synapse. This device (memristor-like) mimics short-term plasticity (STP) and temporal correlation plasticity (STDP, spike-timing-dependent plasticity), two "functions" at the basis of learning processes. A compact model was developed, and these organic synapstors were used to demonstrate an associative memory, which can be trained to present a Pavlovian response. A recent report showed that these organic synapse-transistors (synapstors) work at 1 volt, with a typical plasticity response time in the range of 100–200 ms. The device also works in contact with an electrolyte (EGOS: electrolyte-gated organic synapstor) and can be interfaced with biological neurons.
NeuroML is an XML-based model description language that aims to provide a common data format for defining and exchanging models in computational neuroscience. The focus of NeuroML is on models which are based on the biophysical and anatomical properties of real neurons.
A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.
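In a commonly cited formulation (the notation here is illustrative), the BCPNN weight between units \(i\) and \(j\) is the log-ratio of their estimated joint and marginal activation probabilities, with a bias term given by the marginal alone:

```latex
w_{ij} = \log \frac{P(x_i, x_j)}{P(x_i)\,P(x_j)}, \qquad \beta_j = \log P(x_j)
```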
Neural decoding is a neuroscience field concerned with the hypothetical reconstruction of sensory and other stimuli from information that has already been encoded and represented in the brain by networks of neurons. Reconstruction refers to the ability of the researcher to predict what sensory stimuli the subject is receiving based purely on neuron action potentials. Therefore, the main goal of neural decoding is to characterize how the electrical activity of neurons elicits activity and responses in the brain.
The network of the human nervous system is composed of nodes that are connected by links. The connectivity may be viewed anatomically, functionally, or electrophysiologically. These are presented in several Wikipedia articles that include Connectionism, Biological neural network, Artificial neural network, Computational neuroscience, as well as in several books by Ascoli, G. A. (2002), Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011), Gerstner, W., & Kistler, W. (2002), and Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986), among others. The focus of this article is a comprehensive view of modeling a neural network. Once an approach based on the perspective and connectivity is chosen, the models are developed at microscopic, mesoscopic, or macroscopic (system) levels. Computational modeling refers to models that are developed using computing tools.
NEST is a simulation software for spiking neural network models, including large-scale neuronal networks. NEST was initially developed by Markus Diesmann and Marc-Oliver Gewaltig and is now developed and maintained by the NEST Initiative.
In biology, exponential integrate-and-fire models are compact and computationally efficient nonlinear spiking neuron models with one or two variables. The exponential integrate-and-fire model was first proposed as a one-dimensional model. The most prominent two-dimensional examples are the adaptive exponential integrate-and-fire model and the generalized exponential integrate-and-fire model. Exponential integrate-and-fire models are widely used in the field of computational neuroscience and spiking neural networks because of (i) a solid grounding of the neuron model in the field of experimental neuroscience, (ii) computational efficiency in simulations and hardware implementations, and (iii) mathematical transparency.
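The one-dimensional model is defined by the membrane equation below; the exponential term captures the sharp, sodium-driven onset of the spike, with slope factor \(\Delta_T\) and effective threshold \(V_T\):

```latex
C \frac{dV}{dt} = -g_L\,(V - E_L)
                  + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) + I(t)
```

The adaptive variant (AdEx) couples this equation to a second variable, an adaptation current \(w\) obeying \(\tau_w\, dw/dt = a(V - E_L) - w\), which is subtracted from the input.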
AnimatLab is an open-source neuromechanical simulation tool that allows authors to easily build and test biomechanical models and the neural networks that control them to produce behaviors. Users can construct neural models of varied level of details, 3D mechanical models of triangle meshes, and use muscles, motors, receptive fields, stretch sensors and other transducers to interface the two systems. Experiments can be run in which various stimuli are applied and data is recorded, making it a useful tool for computational neuroscience. The software can also be used to model biomimetic robotic systems.
A Synthetic Nervous System (SNS) is a computational neuroscience model that may be developed with the Functional Subnetwork Approach (FSA) to create biologically plausible models of circuits in a nervous system. The FSA enables the direct analytical tuning of dynamical networks that perform specific operations within the nervous system, without the need for global optimization methods like genetic algorithms and reinforcement learning. The primary use case for an SNS is system control, where the system is most often a simulated biomechanical model or a physical robotic platform. An SNS is a form of neural network much like artificial neural networks (ANNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). The building blocks of each of these neural networks are series of nodes and connections, denoted neurons and synapses. More conventional artificial neural networks rely on training phases in which they use large data sets to form correlations and thus “learn” to identify a given object or pattern. When done properly, this training results in systems that can produce a desired result, sometimes with impressive accuracy. However, the systems themselves are typically “black boxes”, meaning there is no readily distinguishable mapping between structure and function of the network. This makes it difficult to alter the function without simply starting over, or to extract biological meaning, except in specialized cases. The SNS method differentiates itself by using details of both the structure and the function of biological nervous systems. The neurons and synapse connections are intentionally designed rather than iteratively changed as part of a learning algorithm.
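In published descriptions of the functional subnetwork approach, each non-spiking node is typically a leaky integrator with conductance-based connections, along the lines of the equation below (notation varies between papers; this is an illustrative form, not a definitive specification):

```latex
C_m \frac{dV}{dt} = G_m\,(E_r - V) + \sum_i G_{s,i}\,(E_{s,i} - V) + I_{\mathrm{app}}
```

Because each synaptic conductance \(G_{s,i}\) is a simple, monotonic function of the presynaptic potential, the parameters can be solved for analytically so that a subnetwork performs a chosen operation such as addition, subtraction, or integration, which is what removes the need for global optimization.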