Deep learning processor

A deep learning processor (DLP), or deep learning accelerator, is an electronic circuit designed for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors range from mobile devices, such as the neural processing units (NPUs) in Apple iPhones [1] or Huawei cellphones, [2] and personal computers such as Apple silicon Macs, to cloud computing servers such as the tensor processing units (TPUs) in the Google Cloud Platform. [3]

The goal of DLPs is to provide higher efficiency and performance for deep learning algorithms than general-purpose central processing units (CPUs) and graphics processing units (GPUs) would. Most DLPs employ a large number of computing components to leverage high data-level parallelism, a relatively large on-chip buffer/memory to leverage data reuse patterns, and limited data-width operators to exploit the error resilience of deep learning. Deep learning processors differ from AI accelerators in that they are specialized for running learning algorithms, while AI accelerators are typically more specialized for inference[ citation needed ]. However, the two terms (DLP vs. AI accelerator) are not used rigorously and there is often overlap between the two.
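
As a minimal illustration of the limited data-width point above, the sketch below (Python with NumPy; the layer sizes and quantization scheme are illustrative assumptions, not taken from any particular DLP) compares an 8-bit quantized matrix-vector product against a float32 reference.

```python
import numpy as np

# Minimal sketch of why limited data-width operators are viable for deep
# learning: an int8-quantized matrix-vector product stays close to the
# float32 reference. All values, sizes, and scales here are illustrative only.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 256)).astype(np.float32)  # layer weights
x = rng.standard_normal(256).astype(np.float32)        # input activations

def quantize(v, bits=8):
    """Symmetric linear quantization to signed integers of the given width."""
    scale = float(np.max(np.abs(v))) / (2 ** (bits - 1) - 1)
    return np.round(v / scale).astype(np.int32), scale

qW, s_w = quantize(W)
qx, s_x = quantize(x)

y_ref = W @ x                                      # float32 reference
y_int8 = (qW @ qx).astype(np.float32) * s_w * s_x  # int8 MACs, then rescale

rel_err = np.max(np.abs(y_ref - y_int8)) / np.max(np.abs(y_ref))
print(f"max relative error of the int8 result: {rel_err:.2%}")  # roughly on the order of 1%
```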

History

The use of CPUs/GPUs

Initially, general-purpose CPUs were adopted to perform deep learning algorithms. Later, GPUs were introduced to the domain of deep learning. For example, in 2012, Alex Krizhevsky adopted two GPUs to train a deep learning network, AlexNet, [4] which won the ILSVRC-2012 competition. As interest in deep learning algorithms and DLPs kept increasing, GPU manufacturers started to add deep learning related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library). For example, Nvidia released the Turing Tensor Core, a DLP, to accelerate deep learning processing.

The first DLP

To provide higher efficiency in performance and energy, domain-specific designs started to draw great attention. In 2014, Chen et al. proposed the first DLP in the world, DianNao (Chinese for "electric brain"), [5] specifically to accelerate deep neural networks. DianNao provides a peak performance of 452 Gop/s (of key operations in deep neural networks) in a footprint of only 3.02 mm² at 485 mW. Later, its successors (DaDianNao, [6] ShiDianNao, [7] PuDianNao [8] ) were proposed by the same group, forming the DianNao family. [9]

The blooming DLPs

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016 alone, three sessions (15% of the accepted papers) were devoted to architecture designs for deep learning. Such efforts include Eyeriss (MIT), [10] EIE (Stanford), [11] Minerva (Harvard), [12] and Stripes (University of Toronto) [13] in academia, and the TPU (Google) [14] and MLU (Cambricon) [15] in industry. Several representative works are listed in Table 1.

Table 1. Typical DLPs
| Year | DLP | Institution | Type | Computation | Memory hierarchy | Control | Peak performance |
|---|---|---|---|---|---|---|---|
| 2014 | DianNao [5] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit) |
| 2014 | DaDianNao [6] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit) |
| 2015 | ShiDianNao [7] | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit) |
| 2015 | PuDianNao [8] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit) |
| 2016 | DnnWeaver | Georgia Tech | digital | vector MACs | scratchpad | - | - |
| 2016 | EIE [11] | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit) |
| 2016 | Eyeriss [10] | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit) |
| 2016 | PRIME [16] | UCSB | hybrid | process-in-memory | ReRAM | - | - |
| 2017 | TPU [14] | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit) |
| 2017 | PipeLayer [17] | U of Pittsburgh | hybrid | process-in-memory | ReRAM | - | - |
| 2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops |
| 2017 | DNPU [18] | KAIST | digital | scalar MACs | scratchpad | - | 300 Gops (16-bit), 1,200 Gops (4-bit) |
| 2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | - |
| 2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit) |
| 2018 | UNPU [19] | KAIST | digital | scalar MACs | scratchpad | - | 345.6 Gops (16-bit), 691.2 Gops (8-bit), 1,382 Gops (4-bit), 7,372 Gops (1-bit) |
| 2019 | FPSA | Tsinghua | hybrid | process-in-memory | ReRAM | - | - |
| 2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit), 956 Tops (F100, 16-bit) |

DLP architecture

With the rapid evolution of deep learning algorithms and DLPs, many architectures have been explored. Roughly, DLPs can be classified into three categories based on their implementation: digital circuits, analog circuits, and hybrid circuits. As pure analog DLPs are rarely seen, we introduce the digital DLPs and hybrid DLPs.

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages the data communication and computation flows.

Regarding the computation component, as most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply-accumulate) organization, either with vector MACs [5] [6] [8] or scalar MACs. [14] [7] [10] Rather than the SIMD or SIMT of general-purpose processors, these MAC-based organizations better exploit the domain-specific parallelism of deep learning.

Regarding the memory hierarchy, as deep learning algorithms require high bandwidth to provide the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) together with dedicated on-chip data reuse and data exchange strategies to alleviate the burden on memory bandwidth. For example, DianNao's 16 16-input vector MACs require 16 × 16 × 2 = 512 16-bit operands per cycle, i.e., almost 1024 GB/s of bandwidth between the computation components and the buffers. With on-chip reuse, such bandwidth requirements are reduced drastically. [5] Instead of the caches widely used in general-purpose processors, DLPs usually use scratchpad memory, as it provides higher data reuse opportunities by leveraging the relatively regular data access patterns of deep learning algorithms.

Regarding the control logic, as deep learning algorithms keep evolving at a dramatic speed, DLPs have started to leverage dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon [20] introduced the first deep learning domain-specific ISA, which could support more than ten different deep learning algorithms. The TPU also reveals five key instructions of its CISC-style ISA.
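
As a back-of-the-envelope illustration of the bandwidth argument above, the following Python sketch reproduces the 16 × 16 × 2 operand count; the 1 GHz clock and the input-broadcast reuse scheme are assumptions made for illustration, not figures taken from the DianNao paper.

```python
# Back-of-the-envelope sketch of the bandwidth argument above. The 1 GHz clock
# and the input-broadcast reuse scheme are illustrative assumptions, not
# figures from the DianNao paper.

LANES = 16             # output lanes (hardware neurons)
INPUTS_PER_LANE = 16   # inputs (synapses) consumed by each lane per cycle
BYTES_PER_OPERAND = 2  # 16-bit fixed-point operands
CLOCK_HZ = 1e9         # assumed 1 GHz clock

# Without reuse, every lane fetches its own inputs and weights each cycle:
# 16 x 16 x 2 = 512 operands, i.e. 1024 bytes per cycle.
operands_naive = LANES * INPUTS_PER_LANE * 2
bw_naive = operands_naive * BYTES_PER_OPERAND * CLOCK_HZ / 1e9   # GB/s
print(f"without on-chip reuse: {bw_naive:.0f} GB/s")             # ~1024 GB/s

# With the input vector broadcast to all lanes, only 16 inputs plus the
# 16 x 16 weights are fetched per cycle (weights can be reused further by
# tiling, which lowers the requirement again).
operands_reuse = INPUTS_PER_LANE + LANES * INPUTS_PER_LANE
bw_reuse = operands_reuse * BYTES_PER_OPERAND * CLOCK_HZ / 1e9
print(f"with input broadcast: {bw_reuse:.0f} GB/s")              # ~544 GB/s
```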

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory, in the following ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue; [17] [21] [22] such architectures significantly shorten data paths and leverage much higher internal bandwidth, resulting in attractive performance improvements; 2) building highly efficient DNN engines by adopting computational memory devices. In 2013, HP Labs demonstrated the astonishing capability of using a ReRAM crossbar structure for computing. [23] Inspired by this work, a tremendous amount of work has been proposed to explore new architectures and system designs based on ReRAM, [16] [24] [25] [17] phase-change memory, [21] [26] [27] etc.
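
To make the crossbar idea concrete, the following idealized sketch (Python with NumPy; all values are illustrative, and device non-idealities such as wire resistance, noise, limited conductance levels, and ADC quantization are ignored) shows how a ReRAM crossbar evaluates a matrix-vector product in a single analog step.

```python
import numpy as np

# Idealized sketch of how a ReRAM crossbar performs a matrix-vector product in
# the analog domain, the principle exploited by PIM designs such as PRIME and
# PipeLayer. Device non-idealities are deliberately ignored; values are
# illustrative only.

rng = np.random.default_rng(0)

# Weights are stored as cell conductances G[i, j] (in siemens); the input is
# applied as word-line voltages V[i]. By Ohm's law and Kirchhoff's current law,
# the current collected on bit line j is sum_i V[i] * G[i, j], i.e. element j
# of the matrix-vector product, obtained in a single analog read step.
G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # 128 x 64 crossbar of conductances
V = rng.uniform(0.0, 0.2, size=128)          # read voltages encoding the input

I_bitlines = V @ G   # one current per bit line = one output element each
print(I_bitlines[:4])
```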

GPUs and FPGAs

In addition to DLPs, GPUs and FPGAs are also used as accelerators to speed up the execution of deep learning algorithms. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory, [28] contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms. Microsoft builds its deep learning platform on FPGAs in Azure to support real-time deep learning services. [29] In Table 2 we compare DLPs against GPUs and FPGAs in terms of target, performance, energy efficiency, and flexibility.

Table 2. DLPs vs. GPUs vs. FPGAs
| | Target | Performance | Energy efficiency | Flexibility |
|---|---|---|---|---|
| DLPs | deep learning | high | high | domain-specific |
| FPGAs | all | low | moderate | general |
| GPUs | matrix computation | moderate | low | matrix applications |

Atomically thin semiconductors for deep learning

Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. reported experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). [30] They used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements. [30]

Integrated photonic tensor core

As early as 1988, Wei Zhang et al. discussed fast optical implementations of convolutional neural networks for alphabet recognition. [31] [32] In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. [33] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. [33] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications. [33]

Benchmarks

Benchmarking has long served as the foundation for designing new hardware architectures, where both architects and practitioners can compare various architectures, identify their bottlenecks, and conduct the corresponding system/architectural optimization. Table 3 lists several typical benchmarks for DLPs, dating from 2012, in chronological order.

Table 3. Benchmarks.
| Year | NN benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks |
|---|---|---|---|---|---|
| 2012 | BenchNN | ICT, CAS | N/A | 12 | N/A |
| 2016 | Fathom | Harvard | N/A | 8 | N/A |
| 2017 | BenchIP | ICT, CAS | 12 | 11 | N/A |
| 2017 | DAWNBench | Stanford | 8 | N/A | N/A |
| 2017 | DeepBench | Baidu | 4 | N/A | N/A |
| 2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A |
| 2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2 |
| 2019 | NNBench-X | UCSB | N/A | 10 | N/A |

See also

Artificial neural network
Deep learning
Hardware acceleration
AI accelerator
Spiking neural network
Approximate computing

References

  1. "Deploying Transformers on the Apple Neural Engine". Apple Machine Learning Research. Retrieved 2023-08-24.
  2. "HUAWEI Reveals the Future of Mobile AI at IFA".
  3. Jouppi, Norman P.; et al. (2017-06-24). "In-Datacenter Performance Analysis of a Tensor Processing Unit". ACM SIGARCH Computer Architecture News. 45 (2): 1–12. arXiv: 1704.04760 . doi: 10.1145/3140659.3080246 .
  4. Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E (2017-05-24). "ImageNet classification with deep convolutional neural networks". Communications of the ACM. 60 (6): 84–90. doi: 10.1145/3065386 .
  5. Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (2014-04-05). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964.
  6. Chen, Yunji; Luo, Tao; Liu, Shaoli; Zhang, Shijin; He, Liqiang; Wang, Jia; Li, Ling; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (December 2014). "DaDianNao: A Machine-Learning Supercomputer". 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE. pp. 609–622. doi:10.1109/micro.2014.58. ISBN 978-1-4799-6998-2. S2CID 6838992.
  7. Du, Zidong; Fasthuber, Robert; Chen, Tianshi; Ienne, Paolo; Li, Ling; Luo, Tao; Feng, Xiaobing; Chen, Yunji; Temam, Olivier (2016-01-04). "ShiDianNao". ACM SIGARCH Computer Architecture News. 43 (3S): 92–104. doi:10.1145/2872887.2750389. ISSN 0163-5964.
  8. Liu, Daofu; Chen, Tianshi; Liu, Shaoli; Zhou, Jinhong; Zhou, Shengyuan; Temam, Olivier; Feng, Xiaobing; Zhou, Xuehai; Chen, Yunji (2015-05-29). "PuDianNao". ACM SIGARCH Computer Architecture News. 43 (1): 369–381. doi:10.1145/2786763.2694358. ISSN 0163-5964.
  9. Chen, Yunji; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (2016-10-28). "DianNao family". Communications of the ACM. 59 (11): 105–112. doi:10.1145/2996864. ISSN   0001-0782. S2CID   207243998.
  10. Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2017). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". IEEE Micro: 1. doi:10.1109/mm.2017.265085944. hdl:1721.1/102369. ISSN 0272-1732.
  11. Han, Song; Liu, Xingyu; Mao, Huizi; Pu, Jing; Pedram, Ardavan; Horowitz, Mark A.; Dally, William J. (2016-02-03). EIE: Efficient Inference Engine on Compressed Deep Neural Network. OCLC 1106232247.
  12. Reagen, Brandon; Whatmough, Paul; Adolf, Robert; Rama, Saketh; Lee, Hyunkwang; Lee, Sae Kyu; Hernandez-Lobato, Jose Miguel; Wei, Gu-Yeon; Brooks, David (June 2016). "Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). Seoul: IEEE. pp. 267–278. doi:10.1109/ISCA.2016.32. ISBN   978-1-4673-8947-1.
  13. Judd, Patrick; Albericio, Jorge; Moshovos, Andreas (2017-01-01). "Stripes: Bit-Serial Deep Neural Network Computing". IEEE Computer Architecture Letters. 16 (1): 80–83. doi:10.1109/lca.2016.2597140. ISSN   1556-6056. S2CID   3784424.
  14. Jouppi, N.; Young, C.; Patil, N.; Patterson, D. (24 June 2017). "In-Datacenter Performance Analysis of a Tensor Processing Unit". Association for Computing Machinery. pp. 1–12. doi:10.1145/3079856.3080246. ISBN 9781450348928. S2CID 4202768. Retrieved 8 January 2024.
  15. "MLU 100 intelligence accelerator card" (in Japanese). Cambricon. 2024. Retrieved 8 January 2024.
  16. Chi, Ping; Li, Shuangchen; Xu, Cong; Zhang, Tao; Zhao, Jishen; Liu, Yongpan; Wang, Yu; Xie, Yuan (June 2016). "PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 27–39. doi:10.1109/isca.2016.13. ISBN 978-1-4673-8947-1.
  17. Song, Linghao; Qian, Xuehai; Li, Hai; Chen, Yiran (February 2017). "PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning". 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE. pp. 541–552. doi:10.1109/hpca.2017.55. ISBN 978-1-5090-4985-1. S2CID 15281419.
  18. Shin, Dongjoo; Lee, Jinmook; Lee, Jinsu; Yoo, Hoi-Jun (2017). "14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks". 2017 IEEE International Solid-State Circuits Conference (ISSCC). pp. 240–241. doi:10.1109/ISSCC.2017.7870350. ISBN   978-1-5090-3758-2. S2CID   206998709 . Retrieved 2023-08-24.
  19. Lee, Jinmook; Kim, Changhyeon; Kang, Sanghoon; Shin, Dongjoo; Kim, Sangyeob; Yoo, Hoi-Jun (2018). "UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision". 2018 IEEE International Solid - State Circuits Conference - (ISSCC). pp. 218–220. doi:10.1109/ISSCC.2018.8310262. ISBN   978-1-5090-4940-0. S2CID   3861747 . Retrieved 2023-11-30.
  20. Liu, Shaoli; Du, Zidong; Tao, Jinhua; Han, Dong; Luo, Tao; Xie, Yuan; Chen, Yunji; Chen, Tianshi (June 2016). "Cambricon: An Instruction Set Architecture for Neural Networks". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 393–405. doi:10.1109/isca.2016.42. ISBN   978-1-4673-8947-1.
  21. Ambrogio, Stefano; Narayanan, Pritish; Tsai, Hsinyu; Shelby, Robert M.; Boybat, Irem; di Nolfo, Carmelo; Sidler, Severin; Giordano, Massimo; Bodini, Martina; Farinha, Nathan C. P.; Killeen, Benjamin (June 2018). "Equivalent-accuracy accelerated neural-network training using analogue memory". Nature. 558 (7708): 60–67. Bibcode:2018Natur.558...60A. doi:10.1038/s41586-018-0180-5. ISSN 0028-0836. PMID 29875487. S2CID 46956938.
  22. Chen, Wei-Hao; Lin, Wen-Jang; Lai, Li-Ya; Li, Shuangchen; Hsu, Chien-Hua; Lin, Huan-Ting; Lee, Heng-Yuan; Su, Jian-Wei; Xie, Yuan; Sheu, Shyh-Shyuan; Chang, Meng-Fan (December 2017). "A 16Mb dual-mode ReRAM macro with sub-14ns computing-in-memory and memory functions enabled by self-write termination scheme". 2017 IEEE International Electron Devices Meeting (IEDM). IEEE. pp. 28.2.1–28.2.4. doi:10.1109/iedm.2017.8268468. ISBN   978-1-5386-3559-9. S2CID   19556846.
  23. Yang, J. Joshua; Strukov, Dmitri B.; Stewart, Duncan R. (January 2013). "Memristive devices for computing". Nature Nanotechnology. 8 (1): 13–24. Bibcode:2013NatNa...8...13Y. doi:10.1038/nnano.2012.240. ISSN   1748-3395. PMID   23269430.
  24. Shafiee, Ali; Nag, Anirban; Muralimanohar, Naveen; Balasubramonian, Rajeev; Strachan, John Paul; Hu, Miao; Williams, R. Stanley; Srikumar, Vivek (2016-10-12). "ISAAC". ACM SIGARCH Computer Architecture News. 44 (3): 14–26. doi:10.1145/3007787.3001139. ISSN   0163-5964. S2CID   6329628.
  25. Ji, Yu; Zhang, Youyang; Xie, Xinfeng; Li, Shuangchen; Wang, Peiqi; Hu, Xing; Zhang, Youhui; Xie, Yuan (2019-01-27). FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture. OCLC 1106329050.
  26. Nandakumar, S. R.; Boybat, Irem; Joshi, Vinay; Piveteau, Christophe; Le Gallo, Manuel; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (November 2019). "Phase-Change Memory Models for Deep Learning Training and Inference". 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). IEEE. pp. 727–730. doi:10.1109/icecs46596.2019.8964852. ISBN   978-1-7281-0996-1. S2CID   210930121.
  27. Joshi, Vinay; Le Gallo, Manuel; Haefeli, Simon; Boybat, Irem; Nandakumar, S. R.; Piveteau, Christophe; Dazzi, Martino; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (2020-05-18). "Accurate deep neural network inference using computational phase-change memory". Nature Communications. 11 (1): 2473. arXiv: 1906.03138 . Bibcode:2020NatCo..11.2473J. doi: 10.1038/s41467-020-16108-9 . ISSN   2041-1723. PMC   7235046 . PMID   32424184.
  28. "Summit: Oak Ridge National Laboratory's 200 petaflop supercomputer". United States Department of Energy. 2024. Retrieved 8 January 2024.
  29. "Microsoft unveils Project Brainwave for real-time AI". Microsoft . 22 August 2017.
  30. Marega, Guilherme Migliato; Zhao, Yanfei; Avsar, Ahmet; Wang, Zhenyu; Tripathi, Mukesh; Radenovic, Aleksandra; Kis, Andras (2020). "Logic-in-memory based on an atomically thin semiconductor". Nature. 587 (2): 72–77. Bibcode:2020Natur.587...72M. doi:10.1038/s41586-020-2861-0. PMC 7116757. PMID 33149289.
  31. Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of Annual Conference of the Japan Society of Applied Physics.
  32. Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID   20577468.
  33. Feldmann, J.; Youngblood, N.; Karpov, M.; et al. (2021). "Parallel convolutional processing using an integrated photonic tensor core". Nature. 589 (2): 52–58. arXiv:2002.00281. doi:10.1038/s41586-020-03070-1. PMID 33408373. S2CID 211010976.