SpiNNaker

SpiNNaker: spiking neural network architecture
The SpiNNaker 1 million core machine assembled at the University of Manchester
Developer: Steve Furber
Product family: Manchester computers
Type: Neuromorphic
Release date: 2019
CPU: ARM968E-S @ 200 MHz
Memory: 7 TB
Successor: SpiNNaker 2 [1]
Website: apt.cs.manchester.ac.uk/projects/SpiNNaker/

SpiNNaker (spiking neural network architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. [2] It is composed of 57,600 processing nodes, each with 18 ARM9 processors (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. [3] The computing platform is based on spiking neural networks, useful in simulating the human brain (see Human Brain Project). [4] [5] [6] [7] [8] [9] [10] [11] [12]

The completed design is housed in ten 19-inch racks, each holding over 100,000 cores. [13] The cards carrying the chips are mounted in five blade enclosures, and each core emulates 1,000 neurons. [13] In total, the goal is to simulate the behaviour of aggregates of up to a billion neurons in real time. [14] The machine draws about 100 kW from a 240 V supply and requires an air-conditioned environment. [15]
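The machine-scale figures quoted above follow directly from the per-chip numbers. A short back-of-the-envelope check (all constants are taken from this article, not from any official specification):

```python
# Aggregate figures for the million-core machine, derived from the
# per-node numbers given in the article.

NODES = 57_600            # processing nodes (chips)
CORES_PER_NODE = 18       # ARM968 cores per chip
SDRAM_PER_NODE_MB = 128   # mobile DDR SDRAM per chip
NEURONS_PER_CORE = 1_000  # neurons emulated per core

total_cores = NODES * CORES_PER_NODE
total_ram_tb = NODES * SDRAM_PER_NODE_MB / 1024 / 1024
total_neurons = total_cores * NEURONS_PER_CORE

print(f"{total_cores:,} cores")          # 1,036,800 cores
print(f"{total_ram_tb:.2f} TB of RAM")   # 7.03 TB
print(f"{total_neurons:,} neurons")      # 1,036,800,000 neurons
```

This reproduces the 1,036,800-core and "over 7 TB" figures, and shows why the stated goal is about a billion neurons in real time.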

SpiNNaker is being used as one component of the neuromorphic computing platform for the Human Brain Project. [16] [17]

On 14 October 2018, the HBP announced that the million-core milestone had been achieved. [18] [19]

On 24 September 2019, the HBP announced that an 8 million euro grant to fund construction of the second-generation machine (called SpiNNcloud) had been awarded to TU Dresden. [20]

Related Research Articles

Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing at the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python-based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.

Steve Furber (British computer scientist)

Stephen Byram Furber is a British computer scientist, mathematician and hardware engineer, and Emeritus ICL Professor of Computer Engineering in the Department of Computer Science at the University of Manchester, UK. After completing his education at the University of Cambridge, he spent the 1980s at Acorn Computers, where he was a principal designer of the BBC Micro and the ARM 32-bit RISC microprocessor. As of 2023, over 250 billion Arm chips have been manufactured, powering much of the world's mobile computing and embedded systems, from sensors to smartphones to servers.

Optical neural network

An optical neural network is a physical implementation of an artificial neural network with optical components. Early optical neural networks used a photorefractive volume hologram to interconnect arrays of input neurons to arrays of output neurons, with synaptic weights in proportion to the multiplexed hologram's strength. Volume holograms were further multiplexed using spectral hole burning, adding a wavelength dimension to the spatial ones to achieve four-dimensional interconnects between two-dimensional arrays of neural inputs and outputs. This work prompted extensive research on alternative methods that use the strength of the optical interconnect to implement neuronal communications.

Spiking neural network (artificial neural network that mimics neurons)

Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
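The threshold-and-fire behaviour described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron, one of the simplest spiking neuron models. This is an illustrative toy, not the model used on SpiNNaker; all parameter values are arbitrary.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a spike when it
# crosses the threshold, after which it resets. Parameters are illustrative.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0):
    """Return the time steps at which the neuron fires."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leak toward rest and integrate the input current.
        v += (dt / tau) * (v_rest - v) + dt * i_in
        if v >= v_thresh:       # threshold crossing -> fire
            spikes.append(t)
            v = v_reset         # reset after the spike
    return spikes

# A constant drive makes the neuron fire at a regular rate.
spike_times = simulate_lif([0.06] * 100)
```

With this constant input the potential climbs toward a fixed point above threshold, so the neuron fires periodically; information is carried by the spike times rather than by a value emitted every cycle.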

Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence. It is the science and technology of embodied autonomous neural systems. Neural systems include brain-inspired algorithms, computational models of biological neural networks and actual biological systems. Such neural systems can be embodied in machines with mechanical or other forms of physical actuation. This includes robots, prosthetic or wearable systems but also, at smaller scales, micro-machines and, at larger scales, furniture and infrastructure.

The Human Brain Project (HBP) was a large ten-year scientific research project, based on exascale supercomputers, that aimed to build a collaborative ICT-based scientific research infrastructure to allow researchers across Europe to advance knowledge in the fields of neuroscience, computing, and brain-related medicine.

Manchester computers (series of stored-program electronic computers)

The Manchester computers were an innovative series of stored-program electronic computers developed during the 30-year period between 1947 and 1977 by a small team at the University of Manchester, under the leadership of Tom Kilburn. They included the world's first stored-program computer, the world's first transistorised computer, and what was the world's fastest computer at the time of its inauguration in 1962.

Brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. Brain simulation projects intend to contribute to a complete understanding of the brain, and eventually also assist the process of treating and diagnosing brain diseases.

A physical neural network is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse or a higher-order (dendritic) neuron model. "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons as opposed to software-based approaches. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.

Dharmendra Modha (American computer scientist)

Dharmendra S. Modha is an Indian American manager and lead researcher of the Cognitive Computing group at IBM Almaden Research Center. He is known for his pioneering work in artificial intelligence and mind simulation. In November 2009, Modha announced at a supercomputing conference that his team had written a program that simulated a cat brain. He is the recipient of multiple honors, including the Gordon Bell Prize, given each year to recognize outstanding achievement in high-performance computing applications. In November 2012, Modha announced on his blog that, using 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory Sequoia supercomputer, a combined IBM and LBNL team achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 530 billion neurons and 137 trillion synapses, running only 1,542× slower than real time. In August 2014 a paper describing the TrueNorth architecture, "the first-ever production-scale 'neuromorphic' computer chip", designed to work more like a mammalian brain than a conventional processor, was published in the journal Science. The TrueNorth project culminated in a 64-million-neuron system for running deep neural network applications.

SyNAPSE (DARPA program)

SyNAPSE is a DARPA program that aims to develop electronic neuromorphic machine technology, an attempt to build a new kind of cognitive computer with form, function, and architecture similar to the mammalian brain. Such artificial brains would be used in robots whose intelligence would scale with the size of the neural system in terms of the total number of neurons and synapses and their connectivity.

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.
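The weight rule behind this idea can be sketched concretely. In the canonical BCPNN formulation, a synaptic weight is the log ratio of the estimated co-activation probability of two units to the product of their individual activation probabilities, and the unit bias is the log prior. The toy estimator below uses plain counts over binary patterns; this is a hedged sketch of the principle, not the Lansner–Ekeberg implementation, which uses decaying activity traces and more careful regularization.

```python
import math

# Sketch of BCPNN-style weights: w[i][j] = log( P(i,j) / (P(i) * P(j)) ),
# bias[j] = log P(j). Probabilities are estimated by simple counts over a
# toy binary dataset; eps regularizes empty counts to avoid log(0).

def bcpnn_weights(patterns):
    """patterns: list of equal-length binary tuples of unit activations."""
    n = len(patterns[0])
    m = len(patterns)
    eps = 1.0 / m
    p = [max(sum(x[i] for x in patterns) / m, eps) for i in range(n)]
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pij = max(sum(x[i] * x[j] for x in patterns) / m, eps * eps)
            w[i][j] = math.log(pij / (p[i] * p[j]))  # log co-activation odds
    bias = [math.log(pi) for pi in p]                # log prior per unit
    return w, bias

# Units 0 and 1 always co-occur; unit 2 is independent of both.
w, b = bcpnn_weights([(1, 1, 0), (1, 1, 1), (0, 0, 1), (0, 0, 0)])
# w[0][1] > 0 (correlated pair), w[0][2] == 0 (independent pair)
```

Summing these weighted inputs (plus the bias) approximates a log posterior, which is what lets activation spread be read as probabilistic inference.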

Kwabena Adu Boahen is a Professor of Bioengineering and Electrical Engineering at Stanford University. He previously taught at the University of Pennsylvania.

A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.

Zeroth is a platform for brain-inspired computing from Qualcomm. It is based around a neural processing unit (NPU) AI accelerator chip and a software API to interact with the platform. It makes a form of machine learning known as deep learning available to mobile devices. It is used for image and sound processing, including speech recognition. The software operates locally rather than as a cloud application.

An AI accelerator or neural processing unit is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors. A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

André van Schaik (professor of electrical engineering)

André van Schaik is a professor of electrical engineering at the Western Sydney University, and director of the International Centre for Neuromorphic Systems, in Penrith, New South Wales, Australia. He was named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2014 "for contributions to neuromorphic circuits and systems".

Electrochemical Random-Access Memory (ECRAM) is a type of non-volatile memory (NVM) with multiple levels per cell (MLC) designed for deep learning analog acceleration. An ECRAM cell is a three-terminal device composed of a conductive channel, an insulating electrolyte, an ionic reservoir, and metal contacts. The resistance of the channel is modulated by ionic exchange at the interface between the channel and the electrolyte upon application of an electric field. The charge-transfer process allows both for state retention in the absence of applied power, and for programming of multiple distinct levels, both differentiating ECRAM operation from that of a field-effect transistor (FET). The write operation is deterministic and can result in symmetrical potentiation and depression, making ECRAM arrays attractive for acting as artificial synaptic weights in physical implementations of artificial neural networks (ANN). The technological challenges include open circuit potential (OCP) and semiconductor foundry compatibility associated with energy materials. Universities, government laboratories, and corporate research teams have contributed to the development of ECRAM for analog computing. Notably, Sandia National Laboratories designed a lithium-based cell inspired by solid-state battery materials, Stanford University built an organic proton-based cell, and International Business Machines (IBM) demonstrated in-memory selector-free parallel programming for a logistic regression task in an array of metal-oxide ECRAM designed for insertion in the back end of line (BEOL). In 2022, researchers at Massachusetts Institute of Technology built an inorganic, CMOS-compatible protonic technology that achieved near-ideal modulation characteristics using nanosecond pulses.

References

  1. Yan, Yexin; Kappel, David; Neumarker, Felix; Partzsch, Johannes; Vogginger, Bernhard; Hoppner, Sebastian; Furber, Steve; Maass, Wolfgang; Legenstein, Robert; Mayr, Christian (2019). "Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype". IEEE Transactions on Biomedical Circuits and Systems. 13 (3): 579–591. arXiv: 1903.08500 . Bibcode:2019arXiv190308500Y. doi:10.1109/TBCAS.2019.2906401. ISSN   1932-4545. PMID   30932847. S2CID   84186422.
  2. Advanced Processor Technologies Research Group
  3. "SpiNNaker Project - The SpiNNaker Chip". apt.cs.manchester.ac.uk. Retrieved 17 November 2018.
  4. SpiNNaker Home Page, University of Manchester, retrieved 11 June 2012
  5. Furber, S. B.; Galluppi, F.; Temple, S.; Plana, L. A. (2014). "The SpiNNaker Project". Proceedings of the IEEE. 102 (5): 652–665. doi: 10.1109/JPROC.2014.2304638 .
  6. Xin Jin; Furber, S. B.; Woods, J. V. (2008). "Efficient modelling of spiking neural networks on a scalable chip multiprocessor". 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). pp. 2812–2819. doi:10.1109/IJCNN.2008.4634194. ISBN   978-1-4244-1820-6. S2CID   2103654.
  7. A million ARM cores to host brain simulator Archived 17 July 2011 at the Wayback Machine News article on the project in the EE Times
  8. Temple, S.; Furber, S. (2007). "Neural systems engineering". Journal of the Royal Society Interface. 4 (13): 193–206. doi:10.1098/rsif.2006.0177. PMC   2359843 . PMID   17251143. A manifesto for the SpiNNaker project, surveying and reviewing the general level of understanding of brain function and approaches to building computer models of the brain.
  9. Plana, L. A.; Furber, S. B.; Temple, S.; Khan, M.; Shi, Y.; Wu, J.; Yang, S. (2007). "A GALS Infrastructure for a Massively Parallel Multiprocessor". IEEE Design & Test of Computers. 24 (5): 454. doi:10.1109/MDT.2007.149. S2CID   16758888. A description of the Globally Asynchronous, Locally Synchronous (GALS) nature of SpiNNaker, with an overview of the asynchronous communications hardware designed to transmit neural 'spikes' between processors.
  10. Navaridas, J.; Luján, M.; Miguel-Alonso, J.; Plana, L. A.; Furber, S. (2009). "Understanding the interconnection network of SpiNNaker". Proceedings of the 23rd international conference on Conference on Supercomputing - ICS '09. p. 286. CiteSeerX   10.1.1.634.9481 . doi:10.1145/1542275.1542317. ISBN   9781605584980. S2CID   3710084. Modelling and analysis of the SpiNNaker interconnect in a million-core machine, showing the suitability of the packet-switched network for large-scale spiking neural network simulation.
  11. Rast, A.; Galluppi, F.; Davies, S.; Plana, L.; Patterson, C.; Sharp, T.; Lester, D.; Furber, S. (2011). "Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware". Neural Networks. 24 (9): 961–978. doi:10.1016/j.neunet.2011.06.014. PMID   21778034. A demonstration of SpiNNaker's ability to simulate different neural models (simultaneously, if necessary) in contrast to other neuromorphic hardware.
  12. Sharp, T.; Galluppi, F.; Rast, A.; Furber, S. (2012). "Power-efficient simulation of detailed cortical microcircuits on SpiNNaker". Journal of Neuroscience Methods. 210 (1): 110–118. doi:10.1016/j.jneumeth.2012.03.001. PMID   22465805. S2CID   19083072. Four-chip, real-time simulation of a four-million-synapse cortical circuit, showing the extreme energy efficiency of the SpiNNaker architecture
  13. Video interview with Steve Furber by Computerphile.
  14. "SpiNNaker Project - Architectural Overview". apt.cs.manchester.ac.uk. Retrieved 17 November 2018.
  15. "SpiNNaker Project - Boards and Machines". apt.cs.manchester.ac.uk. Retrieved 17 November 2018.
  16. Calimera, A; Macii, E; Poncino, M (2013). "The Human Brain Project and neuromorphic computing". Functional Neurology. 28 (3): 191–6. PMC   3812737 . PMID   24139655.
  17. Monroe, D. (2014). "Neuromorphic computing gets ready for the (really) big time". Communications of the ACM . 57 (6): 13–15. doi:10.1145/2601069. S2CID   20051102.
  18. "SpiNNaker brain simulation project hits one million cores on a single machine" . Retrieved 19 October 2018.
  19. Petrut Bogdan (14 October 2018), SpiNNaker: 1 million core neuromorphic platform , retrieved 19 October 2018
  20. "Second Generation SpiNNaker Neuromorphic Supercomputer to be Built at TU Dresden - News". www.humanbrainproject.eu. Retrieved 2 October 2019.