An AI accelerator, deep learning processor or neural processing unit (NPU) is a class of specialized hardware accelerator [1] or computer system [2] [3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and computer vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. [4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs. [5]
AI accelerators are used in mobile devices such as Apple iPhones and Huawei cellphones, [6] and personal computers such as Intel laptops, [7] AMD laptops [8] and Apple silicon Macs. [9] Accelerators are used in cloud computing servers, including tensor processing units (TPU) in Google Cloud Platform [10] and Trainium and Inferentia chips in Amazon Web Services. [11] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.
Graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware, and are commonly used as AI accelerators, both for training and inference. [12]
Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.
Early attempts, such as Intel's ETANN 80170NX, incorporated analog circuits to compute neural functions. [13]
Later all-digital chips like the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators to accelerate optical character recognition software. [14]
By 1988, Wei Zhang et al. had discussed fast optical implementations of convolutional neural networks for alphabet recognition. [15] [16]
In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations. [17] [18]
FPGA-based accelerators were also first explored in the 1990s for both inference and training. [19] [20]
In 2014, Chen et al. proposed DianNao (Chinese for "electric brain"), [21] specifically to accelerate deep neural networks. DianNao provides a peak performance of 452 Gop/s (of key operations in deep neural networks) in a footprint of 3.02 mm² while consuming 485 mW. Its successors (DaDianNao, [22] ShiDianNao, [23] PuDianNao [24] ) were later proposed by the same group, forming the DianNao family. [25]
Smartphones began incorporating AI accelerators starting with the Qualcomm Snapdragon 820 in 2015. [26] [27]
Heterogeneous computing incorporates many specialized processors in a single system, or a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor [28] have features significantly overlapping with AI accelerators including: support for packed low precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor has been applied to a number of tasks [29] [30] [31] including AI. [32] [33] [34]
In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types. [35] Due to their increasing performance, CPUs are also used for running AI workloads. CPUs are superior for DNNs with small or medium-scale parallelism, for sparse DNNs, and in low-batch-size scenarios.
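As an illustration of the packed low-precision pattern that such SIMD extensions accelerate, the following NumPy sketch uses int8 inputs and weights with a wider int32 accumulator, the idiom mirrored in hardware by instructions such as AVX-512 VNNI; the sizes and values here are arbitrary and for illustration only:

```python
import numpy as np

# Packed low-precision dot product: 8-bit inputs and weights,
# accumulated in 32 bits to avoid overflowing the 8-bit range.
x = np.random.randint(-128, 127, size=1024, dtype=np.int8)
w = np.random.randint(-128, 127, size=1024, dtype=np.int8)

acc = np.dot(x.astype(np.int32), w.astype(np.int32))  # int32 accumulator
print(acc)
```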
Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. The mathematical basis of neural networks and image manipulation are similar, embarrassingly parallel tasks involving matrices, leading GPUs to become increasingly used for machine learning tasks. [36] [37]
In 2012, Alex Krizhevsky used two GPUs to train a deep learning network, AlexNet, [38] which won the ILSVRC-2012 competition. During the 2010s, GPU manufacturers such as Nvidia added deep-learning-related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library).
Over the 2010s, GPUs continued to evolve in directions that facilitate deep learning, both for training and for inference in devices such as self-driving cars. [39] [40] GPU developers such as Nvidia have added connective capabilities, such as NVLink, for the kind of dataflow workloads from which AI benefits. As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural network-specific hardware to further accelerate these tasks. [41] [42] Tensor cores are intended to speed up the training of neural networks. [42]
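A minimal sketch of how such hardware is typically reached from software (assuming PyTorch and a CUDA-capable Nvidia GPU with tensor cores; not specific to any particular accelerator): a half-precision matrix multiply like the one below is dispatched by the vendor libraries (cuBLAS) to tensor cores on recent architectures.

```python
import torch

# Half-precision GEMM on the GPU; on tensor-core hardware the multiply
# runs in FP16 with (typically) FP32 accumulation.
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
c = a @ b
print(c.dtype, c.shape)
```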
GPUs continue to be used in large-scale AI applications. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory, [43] contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms.
Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks, and software alongside each other. [44] [19] [20] [45]
Microsoft has used FPGA chips to accelerate inference for real-time deep learning services. [46]
Neural processing units (NPUs) are another, more dedicated approach. Since 2017, several CPUs and SoCs have included on-die NPUs: for example, Intel Meteor Lake and the Apple A11.
While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency [47] [48] may be gained with a more specific design, via an application-specific integrated circuit (ASIC). [49] These accelerators employ strategies such as optimized memory use [citation needed] and the use of lower-precision arithmetic to accelerate calculation and increase computational throughput. [50] [51] Some low-precision floating-point formats used for AI acceleration are half precision and the bfloat16 floating-point format. [52] [53] Cerebras Systems has built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads. [54] [55]
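As a rough illustration of why bfloat16 is attractive for AI hardware, the following NumPy sketch (an assumption-laden simplification: it uses round-toward-zero truncation, whereas real hardware usually rounds to nearest) converts float32 values to bfloat16 precision by keeping only the top 16 bits, preserving float32's 8-bit exponent range while cutting the mantissa to 7 bits:

```python
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to bfloat16 precision (round-toward-zero)."""
    bits = x.astype(np.float32).view(np.uint32)
    truncated = bits & np.uint32(0xFFFF0000)  # keep sign, 8-bit exponent, 7 mantissa bits
    return truncated.view(np.float32)

vals = np.array([3.14159265, 1e-3, 65504.0], dtype=np.float32)
print(to_bfloat16(vals))  # e.g. 3.140625, ~0.000999451, 65280.0
```

The last value (65504, the largest normal float16 number) shows the trade-off: bfloat16 loses mantissa precision but keeps float32's dynamic range, whereas float16 keeps more mantissa bits but overflows far earlier.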
In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, with the intention of generalizing the approach to heterogeneous computing and massively parallel systems. [56] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks. [57] The system is based on phase-change memory arrays. [58]
In 2019, researchers from Politecnico di Milano found a way to solve systems of linear equations in a few tens of nanoseconds via a single operation. Their approach is based on in-memory computing with analog resistive memories, which performs matrix–vector multiplication in one step using Ohm's law and Kirchhoff's law, achieving high time and energy efficiency. The researchers showed that a feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in just one step. Such an approach improves computation times drastically in comparison with digital algorithms. [59]
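A highly idealized sketch of the principle follows (a NumPy simulation that ignores device non-idealities such as wire resistance and conductance drift; the conductance matrix and values are illustrative, not from the paper):

```python
import numpy as np

# Idealized crossbar: a conductance G[i, j] sits at each row/column cross-point.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]]) * 1e-6          # siemens (illustrative values)

V = np.array([0.5, 0.2])                    # voltages applied to the columns
I = G @ V                                   # Kirchhoff's law sums the Ohm's-law currents per row

# With an ideal feedback (op-amp) loop forcing the row currents to equal a
# target vector b, the column voltages settle at the solution of G x = b;
# numerically that is simply a linear solve.
b = np.array([1.0e-6, 2.0e-6])
x = np.linalg.solve(G, b)
print(I, x)
```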
In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). [60] Such atomically thin semiconductors are considered promising for energy-efficient machine learning applications, where the same basic device structure is used for both logic operations and data storage. The authors used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements. [60]
In 1988, Wei Zhang et al. discussed fast optical implementations of convolutional neural networks for alphabet recognition. [15] [16] In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. [61] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. [61] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications. [61] Optical processors that can also perform backpropagation for artificial neural networks have been experimentally developed. [62]
As of 2016, the field is still in flux and vendors are pushing their own marketing terms for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.
In the past, when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term "GPU" [63] as the collective noun for graphics accelerators, which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D. [clarification needed]
All models of Intel Meteor Lake processors have a Versatile Processor Unit (VPU) built-in for accelerating inference for computer vision and deep learning. [64]
Inspired by the pioneering work of the DianNao family, many deep learning processors (DLPs) have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016, three sessions (15% of the accepted papers) focused on architecture designs for deep learning. Such efforts include Eyeriss (MIT), [65] EIE (Stanford), [66] Minerva (Harvard), [67] and Stripes (University of Toronto) [68] in academia, and the TPU (Google) [69] and MLU (Cambricon) [70] in industry. Several representative works are listed in Table 1.
Table 1. Typical DLPs

| Year | DLP | Institution | Type | Computation | Memory hierarchy | Control | Peak performance |
|---|---|---|---|---|---|---|---|
| 2014 | DianNao [21] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit) |
| 2014 | DaDianNao [22] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit) |
| 2015 | ShiDianNao [23] | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit) |
| 2015 | PuDianNao [24] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit) |
| 2016 | DnnWeaver | Georgia Tech | digital | vector MACs | scratchpad | - | - |
| 2016 | EIE [66] | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit) |
| 2016 | Eyeriss [65] | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit) |
| 2016 | Prime [71] | UCSB | hybrid | process-in-memory | ReRAM | - | - |
| 2017 | TPU [69] | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit) |
| 2017 | PipeLayer [72] | U of Pittsburgh | hybrid | process-in-memory | ReRAM | - | - |
| 2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops |
| 2017 | DNPU [73] | KAIST | digital | scalar MACs | scratchpad | - | 300 Gops (16-bit), 1,200 Gops (4-bit) |
| 2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | - |
| 2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit) |
| 2018 | UNPU [74] | KAIST | digital | scalar MACs | scratchpad | - | 345.6 Gops (16-bit), 691.2 Gops (8-bit), 1,382 Gops (4-bit), 7,372 Gops (1-bit) |
| 2019 | FPSA | Tsinghua | hybrid | process-in-memory | ReRAM | - | - |
| 2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit), 956 Tops (F100, 16-bit) |
The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages the data communication and computing flows.
Regarding the computation component: since most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply-accumulate) organization, either with vector MACs [21] [22] [24] or scalar MACs. [69] [23] [65] Compared with the SIMD or SIMT of general-purpose processors, deep-learning domain-specific parallelism is better exploited with these MAC-based organizations.

Regarding the memory hierarchy: because deep learning algorithms require high bandwidth to keep the computation component supplied with data, DLPs usually employ relatively large on-chip buffers (tens of kilobytes to several megabytes) together with dedicated on-chip data reuse and data exchange strategies to alleviate the burden on memory bandwidth. For example, DianNao's array of sixteen 16-input vector MACs consumes 16 × 16 × 2 = 512 16-bit operands per cycle, i.e., almost 1,024 GB/s of bandwidth between the computation components and the buffers; with on-chip reuse, this requirement is reduced drastically. [21] Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, which provides greater data-reuse opportunities by exploiting the relatively regular data-access patterns of deep learning algorithms.

Regarding the control logic: as deep learning algorithms keep evolving at a dramatic pace, DLPs have begun to adopt dedicated instruction set architectures (ISAs) to support the deep learning domain flexibly. DianNao initially used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon [75] introduced the first deep-learning domain-specific ISA, which can support more than ten different deep learning algorithms. The TPU likewise exposes five key instructions in its CISC-style ISA.
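As a back-of-the-envelope check of the DianNao bandwidth figure above (the ~1 GHz clock is an assumed round number used for illustration):

```python
# Bandwidth estimate for a 16x16 vector-MAC array (DianNao-style),
# assuming one full MAC pass per cycle and a ~1 GHz clock.
neurons, synapses = 16, 16
operands_per_cycle = neurons * synapses * 2        # one input + one weight per MAC
bytes_per_cycle = operands_per_cycle * 2           # 16-bit (2-byte) operands
clock_hz = 1e9
bandwidth_gb_s = bytes_per_cycle * clock_hz / 1e9  # ~1024 GB/s without on-chip reuse
print(operands_per_cycle, bandwidth_gb_s)
```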
Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in the following ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue. [72] [76] [77] Such architectures significantly shorten data paths and exploit much higher internal bandwidth, resulting in attractive performance improvements. 2) Building highly efficient DNN engines by adopting computational memory devices. In 2013, HP Labs demonstrated the capability of the ReRAM crossbar structure for computing. [78] Inspired by this work, a great deal of subsequent work has explored new architectures and system designs based on ReRAM, [71] [79] [80] [72] phase-change memory, [76] [81] [82] etc.
Benchmarks such as MLPerf and others may be used to evaluate the performance of AI accelerators. [83] Table 2 lists several typical benchmarks for AI accelerators.
Table 2. Benchmarks for AI accelerators

| Year | NN Benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks |
|---|---|---|---|---|---|
| 2012 | BenchNN | ICT, CAS | N/A | 12 | N/A |
| 2016 | Fathom | Harvard | N/A | 8 | N/A |
| 2017 | BenchIP | ICT, CAS | 12 | 11 | N/A |
| 2017 | DAWNBench | Stanford | 8 | N/A | N/A |
| 2017 | DeepBench | Baidu | 4 | N/A | N/A |
| 2018 | AI Benchmark | ETH Zurich | N/A | 26 | N/A |
| 2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A |
| 2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2 |
| 2019 | NNBench-X | UCSB | N/A | 10 | N/A |
General-purpose computing on graphics processing units is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing.
An optical neural network is a physical implementation of an artificial neural network with optical components. Early optical neural networks used a photorefractive volume hologram to interconnect arrays of input neurons to arrays of output neurons, with synaptic weights in proportion to the strength of the multiplexed hologram. Volume holograms were further multiplexed using spectral hole burning to add a wavelength dimension to the spatial ones, achieving four-dimensional interconnects between two-dimensional arrays of neural inputs and outputs. This work led to extensive research on alternative methods that use the strength of the optical interconnect for implementing neuronal communications.
Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.
In computing, CUDA is a proprietary parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for accelerated general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism and GPU-device-specific operations. CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements for the execution of compute kernels. In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries and developer tools to help programmers accelerate their applications.
William James Dally is an American computer scientist and educator. He is the chief scientist and senior vice president at Nvidia and was previously a professor of Electrical Engineering and Computer Science at Stanford University and MIT. Since 2021, he has been a member of the President's Council of Advisors on Science and Technology (PCAST).
Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores. Manycore processors are used extensively in embedded computers and high-performance computing.
Deep learning is a subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers in the network. Methods used can be either supervised, semi-supervised or unsupervised.
A cognitive computer is a computer that hardwires artificial intelligence and machine learning algorithms into an integrated circuit that closely reproduces the behavior of the human brain. It generally adopts a neuromorphic engineering approach. Synonyms include neuromorphic chip and cognitive chip.
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter optimization. This type of deep learning network has been applied to process and make predictions from many different types of data, including text, images and audio. Convolution-based networks are the de facto standard in deep-learning-based approaches to computer vision and image processing, and have only recently been replaced, in some cases, by newer deep learning architectures such as the transformer. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, for each neuron in a fully-connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascaded convolution kernels, only 25 neurons are required to process 5×5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
A vision processing unit (VPU) is an emerging class of microprocessor; it is a specific type of AI accelerator, designed to accelerate machine vision tasks.
Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third-party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
Coherent Accelerator Processor Interface (CAPI), is a high-speed processor expansion bus standard for use in large data center computers, initially designed to be layered on top of PCI Express, for directly connecting central processing units (CPUs) to external accelerators like graphics processing units (GPUs), ASICs, FPGAs or fast storage. It offers low latency, high speed, direct memory access connectivity between devices of different instruction set architectures.
AlexNet is a convolutional neural network (CNN) architecture, designed by Alex Krizhevsky in collaboration with Ilya Sutskever and Geoffrey Hinton, who was Krizhevsky's Ph.D. advisor at the University of Toronto. It had 60 million parameters and 650,000 neurons.
Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by biological neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling this period an "AI winter".
Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is designed for datacenters and is used alongside the Lovelace microarchitecture. It is the latest generation of the line of products formerly branded as Nvidia Tesla, now Nvidia Data Centre GPUs.
Specialized computer hardware is often used to execute artificial intelligence (AI) programs faster, and with less energy, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks. As of 2023, the market for AI hardware is dominated by GPUs.
Yixin Chen is a computer scientist, academic, and author. He is a professor of computer science and engineering at Washington University in St. Louis.
In machine learning, the term tensor informally refers to two different concepts for organizing and representing data. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an M-way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods.
A domain-specific architecture (DSA) is a programmable computer architecture specifically tailored to operate very efficiently within the confines of a given application domain. The term is often used in contrast to general-purpose architectures, such as CPUs, that are designed to operate on any computer program.
Clément Farabet is a computer scientist and AI expert known for his contributions to the field of deep learning. He served as a research scientist at the New York University. He serves as the Vice President of Research at Google DeepMind and previously served as the VP of AI Infrastructure at NVIDIA.