Cognitive computer


A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. [1] It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip. [2] [3]


In 2023, IBM's proof-of-concept NorthPole chip achieved markedly faster and more energy-efficient image recognition than conventional GPUs. [4]

In 2013, IBM developed Watson, a cognitive computer implemented using neural networks and deep learning techniques. [5] The next year it developed the 2014 TrueNorth microchip architecture, [6] which is designed to be closer in structure to the human brain than the von Neumann architecture used in conventional computers. [1] In 2017, Intel announced its own cognitive chip, Loihi, which it intended to make available to university and research labs in 2018. Intel (most notably with its Pohoiki Beach and Pohoiki Springs systems [7] [8] ), Qualcomm, and others are steadily improving neuromorphic processors.

IBM TrueNorth chip

DARPA SyNAPSE board with 16 TrueNorth chips

TrueNorth is a neuromorphic CMOS integrated circuit produced by IBM in 2014. [9] It is a manycore processor with a network-on-chip design: 4,096 cores, each with 256 programmable simulated neurons, for a total of just over a million neurons. Each neuron in turn has 256 programmable "synapses" that convey signals between neurons, so the total number of programmable synapses is just over 268 million (2²⁸). The chip contains 5.4 billion transistors.
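The totals quoted above follow directly from the core layout and can be checked with a few lines of arithmetic:

```python
# TrueNorth's headline figures derived from its core layout.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core        # just over a million
synapses = neurons * synapses_per_neuron  # just over 268 million

assert neurons == 2**20    # 1,048,576
assert synapses == 2**28   # 268,435,456
print(f"{neurons:,} neurons, {synapses:,} synapses")
```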

Details

Memory, computation, and communication are handled locally in each of the 4,096 neurosynaptic cores. By avoiding the shuttling of data between a separate processor and memory, TrueNorth circumvents the von Neumann bottleneck and is very energy-efficient: IBM claims a power consumption of 70 milliwatts and a power density 1/10,000th that of conventional microprocessors. [10] The SyNAPSE chip operates at lower temperatures and power because it draws power only when needed for computation. [11] Skyrmions have been proposed as models of the synapse on a chip. [12] [13]

The neurons are emulated using a Linear-Leak Integrate-and-Fire (LLIF) model, a simplification of the leaky integrate-and-fire model. [14]
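As an illustration of the idea (not IBM's implementation), a discrete-time linear-leak integrate-and-fire neuron can be sketched as follows; the leak, weight, and threshold values are arbitrary:

```python
def llif_step(v, leak, inputs, weights, threshold):
    """One discrete time step of a linear-leak integrate-and-fire neuron.

    v: current membrane potential; inputs: 0/1 spikes from presynaptic
    neurons; weights: the corresponding synaptic weights.
    Returns (new_potential, spiked).
    """
    v = v - leak + sum(w * s for w, s in zip(weights, inputs))
    if v >= threshold:
        return 0.0, True           # fire and reset the potential
    return max(v, 0.0), False      # clamp so the leak cannot drive v negative

# Drive one neuron with a constant input spike until it fires.
v, fired_at = 0.0, None
for t in range(10):
    v, spiked = llif_step(v, leak=1.0, inputs=[1], weights=[4.0], threshold=10.0)
    if spiked:
        fired_at = t
        break
```

With a net gain of 3.0 per step, the potential climbs 3, 6, 9, then crosses the threshold on the fourth step.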

According to IBM, it does not have a clock, [15] operates on unary numbers, and computes by counting to a maximum of 19 bits. [6] [16] The cores are event-driven by using both synchronous and asynchronous logic, and are interconnected through an asynchronous packet-switched mesh network on chip (NOC). [16]

IBM developed a new software ecosystem to program and use TrueNorth, including a simulator, a new programming language, an integrated programming environment, and libraries. [15] This lack of backward compatibility with previous technology (e.g., C++ compilers) poses serious vendor lock-in risks and other adverse consequences that may prevent commercialization. [15]

Research

In 2018, a cluster of TrueNorth chips linked to a master computer was used in stereo vision research that attempted to extract the depth of rapidly moving objects in a scene. [17]

IBM NorthPole chip

In 2023, IBM released NorthPole, a proof-of-concept chip that dramatically improves performance by intertwining compute with memory on-chip, eliminating the von Neumann bottleneck. It blends approaches from IBM's 2014 TrueNorth system with modern hardware designs to achieve speeds about 4,000 times faster than TrueNorth. On ResNet-50 and Yolo-v4 image recognition tasks it runs about 22 times faster, with 25 times less energy and 5 times less space, compared to GPUs fabricated on the same 12 nm node. It includes 224 MB of RAM and 256 processor cores, and can perform 2,048 operations per core per cycle at 8-bit precision and 8,192 operations at 2-bit precision, running at between 25 and 425 MHz. [4] [18] [19] [20] NorthPole is an inference chip, but it cannot yet handle models such as GPT-4.
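A rough peak-throughput estimate follows from the figures above. This is my derived upper bound, assuming every core issues its full 2,048 8-bit operations on every cycle at the top clock rate, not IBM's published throughput figure:

```python
# Back-of-envelope peak 8-bit throughput for NorthPole.
cores = 256
ops_per_core_per_cycle = 2048   # at 8-bit precision
max_clock_hz = 425e6            # top of the 25-425 MHz range

peak_ops_8bit = cores * ops_per_core_per_cycle * max_clock_hz
print(f"~{peak_ops_8bit / 1e12:.0f} trillion 8-bit ops/s peak")
```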

Intel Loihi chip

Intel's self-learning neuromorphic chip Loihi, introduced in 2017 and perhaps named after the Hawaiian seamount Lōʻihi, offers substantial power efficiency. Intel claims Loihi is about 1,000 times more energy-efficient than the general-purpose computing power needed to train neural networks that rival Loihi's performance. In theory, this would support both machine learning training and inference on the same silicon, independently of a cloud connection, and more efficiently than convolutional neural networks (CNNs) or deep learning neural networks. As an example, Intel describes a system for monitoring a person's heartbeat that takes readings after events such as exercise or eating, uses the cognitive computing chip to normalize the data and work out the "normal" heartbeat, and can then spot abnormalities while also adapting to new events or conditions.

The first iteration of the Loihi chip was made using Intel's 14 nm fabrication process and houses 128 clusters of 1,024 artificial neurons each, for a total of 131,072 simulated neurons. [21] This offers around 130 million synapses, still a long way from the human brain's estimated 800 trillion synapses and behind IBM's TrueNorth, which has around 256 million across its 4,096 cores. [22] Loihi is now available for research purposes among more than 40 academic research groups in a USB form factor. [23] [24] Recent developments include a 64-chip system named Pohoiki Beach (after Isaac Hale Beach Park, also known as Pohoiki). [25]
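The neuron count above is simple arithmetic, and dividing the approximate synapse total by it gives the average fan-out per neuron. The fan-out figure is my derived estimate from the two numbers quoted in the text:

```python
# Loihi (first iteration) headline figures.
clusters = 128
neurons_per_cluster = 1024
neurons = clusters * neurons_per_cluster   # 131,072 simulated neurons

synapses = 130e6                           # "around 130 million" (approximate)
avg_fanout = synapses / neurons            # roughly 1,000 synapses per neuron
print(f"{neurons:,} neurons, ~{avg_fanout:.0f} synapses per neuron")
```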

In October 2019, researchers from Rutgers University published a paper demonstrating the energy efficiency of Intel's Loihi in solving simultaneous localization and mapping (SLAM). [26]

In March 2020, Intel and Cornell University published a paper demonstrating the ability of Intel's Loihi to recognize different hazardous materials, which could eventually aid in efforts to "diagnose diseases, detect weapons and explosives, find narcotics, and spot signs of smoke and carbon monoxide". [27]

Intel's Loihi 2, released in September 2021, boasts faster speeds, higher-bandwidth inter-chip communications for enhanced scalability, increased capacity per chip, a more compact size due to process scaling, and significantly improved programmability. [28]

SpiNNaker

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group at the Department of Computer Science, University of Manchester. [29]

Criticism

Critics argue that a room-sized computer – as in the case of IBM's Watson – is not a viable alternative to a three-pound human brain. [30] Some also cite the difficulty for a single system to bring so many elements together, such as the disparate sources of information as well as computing resources. [31]

In 2021, The New York Times published Steve Lohr's article "What Ever Happened to IBM's Watson?". [32] He described several costly failures of IBM Watson, among them a cancer-related project called the Oncology Expert Advisor, [33] which was abandoned in 2016. Watson could not make effective use of patient data and struggled to decipher doctors' notes and patient histories.


Related Research Articles

Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.

An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are the elementary units of artificial neural networks. The artificial neuron is a function that receives one or more inputs, applies weights to these inputs, and sums them to produce an output.
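The weighted-sum description above can be written as a one-line function; the weights, bias, and step activation here are illustrative choices, not part of any particular chip's design:

```python
def artificial_neuron(inputs, weights, bias=0.0,
                      activation=lambda x: 1.0 if x >= 0 else 0.0):
    """Weighted sum of inputs plus bias, passed through an activation function."""
    return activation(sum(w * x for w, x in zip(weights, inputs)) + bias)

# A two-input neuron acting as an AND gate (weights and bias chosen by hand:
# only when both inputs are 1 does the sum reach the step threshold).
def and_gate(a, b):
    return artificial_neuron([a, b], weights=[1.0, 1.0], bias=-1.5)
```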

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.

An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.

Neuroinformatics is the field that combines informatics and neuroscience. It is concerned with neuroscience data and with information processing by artificial neural networks, and is applied in three main directions.


Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.

No instruction set computing (NISC) is a computing architecture and compiler technology for designing highly efficient custom processors and hardware accelerators by allowing a compiler to have low-level control of hardware resources.

Brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. Brain simulation projects intend to contribute to a complete understanding of the brain, and eventually also assist the process of treating and diagnosing brain diseases.

Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores. Manycore processors are used extensively in embedded computers and high-performance computing.

A physical neural network is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse or a higher-order (dendritic) neuron model. "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons as opposed to software-based approaches. More generally the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse.


Dharmendra S. Modha is an Indian American manager and lead researcher of the Cognitive Computing group at IBM Almaden Research Center. He is known for his pioneering work in artificial intelligence and mind simulation. In November 2009, Modha announced at a supercomputing conference that his team had written a program that simulated a cat brain. He is the recipient of multiple honors, including the Gordon Bell Prize, given each year to recognize outstanding achievement in high-performance computing applications. In November 2012, Modha announced on his blog that, using 96 Blue Gene/Q racks of the Lawrence Livermore National Laboratory Sequoia supercomputer, a combined IBM and LBNL team achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 530 billion neurons and 137 trillion synapses, running only 1,542× slower than real time. In August 2014, a paper describing the TrueNorth architecture, "the first-ever production-scale 'neuromorphic' computer chip designed to work more like a mammalian brain than" a processor, was published in the journal Science. The TrueNorth project culminated in a 64-million-neuron system for running deep neural network applications.


SyNAPSE is a DARPA program that aims to develop electronic neuromorphic machine technology, an attempt to build a new kind of cognitive computer with form, function, and architecture similar to the mammalian brain. Such artificial brains would be used in robots whose intelligence would scale with the size of the neural system in terms of the total number of neurons and synapses and their connectivity.

Kwabena Adu Boahen is a Ghanaian-born Professor of Bioengineering and Electrical Engineering at Stanford University. He previously taught at the University of Pennsylvania.


SpiNNaker is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain.

In computational neuroscience, SUPS (formerly CUPS) is a measure of neuronal network performance, useful in the fields of neuroscience, cognitive science, artificial intelligence, and computer science.

A vision processing unit (VPU) is an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.

An AI accelerator, deep learning processor, or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFET transistors.

Electrochemical Random-Access Memory (ECRAM) is a type of non-volatile memory (NVM) with multiple levels per cell (MLC) designed for deep learning analog acceleration. An ECRAM cell is a three-terminal device composed of a conductive channel, an insulating electrolyte, an ionic reservoir, and metal contacts. The resistance of the channel is modulated by ionic exchange at the interface between the channel and the electrolyte upon application of an electric field. The charge-transfer process allows both for state retention in the absence of applied power, and for programming of multiple distinct levels, both differentiating ECRAM operation from that of a field-effect transistor (FET). The write operation is deterministic and can result in symmetrical potentiation and depression, making ECRAM arrays attractive for acting as artificial synaptic weights in physical implementations of artificial neural networks (ANN). The technological challenges include open circuit potential (OCP) and semiconductor foundry compatibility associated with energy materials. Universities, government laboratories, and corporate research teams have contributed to the development of ECRAM for analog computing. Notably, Sandia National Laboratories designed a lithium-based cell inspired by solid-state battery materials, Stanford University built an organic proton-based cell, and International Business Machines (IBM) demonstrated in-memory selector-free parallel programming for a logistic regression task in an array of metal-oxide ECRAM designed for insertion in the back end of line (BEOL). In 2022, researchers at Massachusetts Institute of Technology built an inorganic, CMOS-compatible protonic technology that achieved near-ideal modulation characteristics using nanosecond-scale pulses.


BrainChip is an Australia-based technology company, founded in 2004 by Peter Van Der Made, that specializes in developing advanced artificial intelligence (AI) and machine learning (ML) hardware. The company's primary products are the MetaTF development environment, which allows the training and deployment of spiking neural networks (SNN), and the AKD1000 neuromorphic processor, a hardware implementation of their spiking neural network system. BrainChip's technology is based on a neuromorphic computing architecture, which attempts to mimic the way the human brain works. The company is a part of Intel Foundry Services and Arm AI partnership.

References

  1. 1 2 Witchalls, Clint (November 2014). "A computer that thinks". New Scientist. 224 (2994): 28–29. Bibcode:2014NewSc.224...28W. doi:10.1016/S0262-4079(14)62145-X.
  2. Seo, Jae-sun; Brezzo, Bernard; Liu, Yong; Parker, Benjamin D.; Esser, Steven K.; Montoye, Robert K.; Rajendran, Bipin; Tierno, José A.; Chang, Leland; Modha, Dharmendra S.; Friedman, Daniel J. (September 2011). "A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons". 2011 IEEE Custom Integrated Circuits Conference (CICC). pp. 1–4. doi:10.1109/CICC.2011.6055293. ISBN   978-1-4577-0222-8. S2CID   18690998 . Retrieved 21 December 2021.
  3. "Samsung plugs IBM's brain-imitating chip into an advanced sensor". Engadget. Retrieved 21 December 2021.
  4. 1 2 "IBM Debuts Brain-Inspired Chip For Speedy, Efficient AI - IEEE Spectrum". spectrum.ieee.org. Retrieved 2023-10-30.
  5. KELLY, JOHN E.; HAMM, STEVE (2013). Smart Machines: IBM's Watson and the Era of Cognitive Computing. Columbia University Press. doi:10.7312/kell16856. ISBN   9780231537278. JSTOR   10.7312/kell16856.
  6. 1 2 "The brain's architecture, efficiency… on a chip". IBM Research Blog. 2016-12-19. Retrieved 2021-08-21.
  7. "Intel's Pohoiki Beach, a 64-Chip Neuromorphic System, Delivers Breakthrough Results in Research Tests". Intel Newsroom.
  8. "Korean Researchers Devel". 30 March 2020.
  9. Merolla, P. A.; Arthur, J. V.; Alvarez-Icaza, R.; Cassidy, A. S.; Sawada, J.; Akopyan, F.; Jackson, B. L.; Imam, N.; Guo, C.; Nakamura, Y.; Brezzo, B.; Vo, I.; Esser, S. K.; Appuswamy, R.; Taba, B.; Amir, A.; Flickner, M. D.; Risk, W. P.; Manohar, R.; Modha, D. S. (2014). "A million spiking-neuron integrated circuit with a scalable communication network and interface". Science. 345 (6197): 668–73. Bibcode:2014Sci...345..668M. doi:10.1126/science.1254642. PMID   25104385. S2CID   12706847.
  10. "How IBM Got Brainlike Efficiency From the TrueNorth Chip". IEEE Spectrum. https://spectrum.ieee.org/computing/hardware/how-ibm-got-brainlike-efficiency-from-the-truenorth-chip
  11. "Cognitive computing: Neurosynaptic chips". IBM. 11 December 2015.
  12. Song, Kyung Mee; Jeong, Jae-Seung; Pan, Biao; Zhang, Xichao; Xia, Jing; Cha, Sunkyung; Park, Tae-Eon; Kim, Kwangsu; Finizio, Simone; Raabe, Jörg; Chang, Joonyeon; Zhou, Yan; Zhao, Weisheng; Kang, Wang; Ju, Hyunsu; Woo, Seonghoon (March 2020). "Skyrmion-based artificial synapses for neuromorphic computing". Nature Electronics. 3 (3): 148–155. arXiv: 1907.00957 . doi:10.1038/s41928-020-0385-0. S2CID   195767210.
  13. "Neuromorphic computing: The long path from roots to real life". 15 December 2020.
  14. "The brain's architecture, efficiency… on a chip". IBM Research Blog. 2016-12-19. Retrieved 2022-09-28.
  15. 1 2 3 "IBM Research: Brain-inspired Chip". www.research.ibm.com. 9 February 2021. Retrieved 2021-08-21.
  16. 1 2 Andreou, Andreas G.; Dykman, Andrew A.; Fischl, Kate D.; Garreau, Guillaume; Mendat, Daniel R.; Orchard, Garrick; Cassidy, Andrew S.; Merolla, Paul; Arthur, John; Alvarez-Icaza, Rodrigo; Jackson, Bryan L. (May 2016). "Real-time sensory information processing using the TrueNorth Neurosynaptic System". 2016 IEEE International Symposium on Circuits and Systems (ISCAS). p. 2911. doi:10.1109/ISCAS.2016.7539214. ISBN   978-1-4799-5341-7. S2CID   29335047.
  17. "Stereo Vision Using Computing Architecture Inspired by the Brain". IBM Research Blog. 2018-06-19. Retrieved 2021-08-21.
  18. Afifi-Sabet, Keumars (2023-10-28). "Inspired by the human brain — how IBM's latest AI chip could be 25 times more efficient than GPUs by being more integrated — but neither Nvidia nor AMD have to worry just yet". TechRadar. Retrieved 2023-10-30.
  19. Modha, Dharmendra S.; Akopyan, Filipp; Andreopoulos, Alexander; Appuswamy, Rathinakumar; Arthur, John V.; Cassidy, Andrew S.; Datta, Pallab; DeBole, Michael V.; Esser, Steven K.; Otero, Carlos Ortega; Sawada, Jun; Taba, Brian; Amir, Arnon; Bablani, Deepika; Carlson, Peter J. (2023-10-20). "Neural inference at the frontier of energy, space, and time". Science. 382 (6668): 329–335. Bibcode:2023Sci...382..329M. doi:10.1126/science.adh1174. ISSN   0036-8075. PMID   37856600. S2CID   264306410.
  20. Modha, Dharmendra (2023-10-19). "NorthPole: Neural Inference at the Frontier of Energy, Space, and Time". Dharmendra S. Modha - My Work and Thoughts. Retrieved 2023-10-31.
  21. "Why Intel built a neuromorphic chip". ZDNET.
  22. ""Intel unveils Loihi neuromorphic chip, chases IBM in artificial brains". October 17, 2017. AITrends.com". Archived from the original on August 11, 2021. Retrieved October 17, 2017.
  23. Feldman, M. (7 December 2018). "Intel Ramps Up Neuromorphic Computing Effort with New Research Partners". TOP500. Retrieved 22 December 2023.
  24. Davies, M. (2018). "Loihi - a brief introduction" (PDF). Intel Corporation. Retrieved 22 December 2023.
  25. Hruska, J. (16 July 2019). "Intel's Neuromorphic Loihi Processor Scales to 8M Neurons, 64 Cores". Ziff Davis. Retrieved 22 December 2023.
  26. Tang, Guangzhi; Shah, Arpit; Michmizos, Konstantinos. (2019). "Spiking Neural Network on Neuromorphic Hardware for Energy-Efficient Unidimensional SLAM". 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 4176–4181. arXiv: 1903.02504 . doi:10.1109/IROS40897.2019.8967864. ISBN   978-1-7281-4004-9. S2CID   70349899.
  27. Imam, Nabil; Cleland, Thomas A. (2020). "Rapid online learning and robust recall in a neuromorphic olfactory circuit". Nature Machine Intelligence. 2 (3): 181–191. arXiv: 1906.07067 . doi:10.1038/s42256-020-0159-4. S2CID   189928531.
  28. Peckham, Oliver (2022-09-28). "Intel Labs Launches Neuromorphic 'Kapoho Point' Board". HPCwire. Retrieved 2023-10-26.
  29. "Research Groups: APT - Advanced Processor Technologies (School of Computer Science - the University of Manchester)".
  30. Neumeier, Marty (2012). Metaskills: Five Talents for the Robotic Age. Indianapolis, IN: New Riders. ISBN   9780133359329.
  31. Hurwitz, Judith; Kaufman, Marcia; Bowles, Adrian (2015). Cognitive Computing and Big Data Analytics. Indianapolis, IN: John Wiley & Sons. p. 110. ISBN   9781118896624.
  32. Lohr, Steve (2021-07-16). "What Ever Happened to IBM's Watson?". The New York Times. ISSN   0362-4331 . Retrieved 2022-09-28.
  33. Simon, George; DiNardo, Courtney D.; Takahashi, Koichi; Cascone, Tina; Powers, Cynthia; Stevens, Rick; Allen, Joshua; Antonoff, Mara B.; Gomez, Daniel; Keane, Pat; Suarez Saiz, Fernando; Nguyen, Quynh; Roarty, Emily; Pierce, Sherry; Zhang, Jianjun (June 2019). "Applying Artificial Intelligence to Address the Knowledge Gaps in Cancer Care". The Oncologist. 24 (6): 772–782. doi:10.1634/theoncologist.2018-0257. ISSN   1083-7159. PMC   6656515 . PMID   30446581.
