Approximate computing

Approximate computing is an emerging paradigm for energy-efficient and/or high-performance design. [1] It includes a plethora of computation techniques that return a possibly inaccurate result rather than a guaranteed accurate one, and that can be used for applications where an approximate result is sufficient for the purpose. [2] One example of such a situation is a search engine, where no exact answer may exist for a given query and hence many answers may be acceptable. Similarly, occasional dropping of some frames in a video application can go undetected due to the perceptual limitations of humans. Approximate computing is based on the observation that in many scenarios, although performing exact computation requires a large amount of resources, allowing bounded approximation can provide disproportionate gains in performance and energy while still achieving acceptable result accuracy. For example, in the k-means clustering algorithm, allowing only a 5% loss in classification accuracy can provide a 50-fold energy saving compared to fully accurate classification.
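As a minimal illustration of this trade-off, the following Python sketch approximates k-means by clustering only a random subsample of the data; the function name approximate_kmeans, the subsample fraction, and the iteration count are illustrative assumptions, not parameters from the cited study.

```python
# Illustrative sketch: trade clustering accuracy for compute by running
# k-means (Lloyd's algorithm) on a random subsample of the data instead
# of the full dataset. All parameter values here are hypothetical.
import numpy as np

def approximate_kmeans(points, k, sample_fraction=0.1, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    size = max(k, int(n * sample_fraction))
    sample = points[rng.choice(n, size=size, replace=False)]
    centroids = sample[rng.choice(size, size=k, replace=False)]
    for _ in range(iters):
        # Assign each sampled point to its nearest centroid ...
        dists = np.linalg.norm(sample[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... and move each centroid to the mean of its members.
        for j in range(k):
            members = sample[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

data = np.random.default_rng(42).random((10000, 2))
print(approximate_kmeans(data, k=3))
```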

The key requirement in approximate computing is that approximation be introduced only in non-critical data, since approximating critical data (e.g., control operations) can lead to disastrous consequences such as a program crash or erroneous output.

Strategies

Several strategies can be used for performing approximate computing.

Approximate circuits
Approximate arithmetic circuits [3] such as adders, [4] [5] multipliers, [6] and other logic circuits [7] [8] [9] can reduce hardware overhead. For example, an approximate multi-bit adder can ignore the carry chain and thus allow all of its sub-adders to perform the addition operation in parallel. [10] [11]
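The principle can be mimicked in software. The sketch below models a block-wise approximate adder that adds fixed-width blocks independently and drops any carry between blocks, which is what allows the sub-adders to operate in parallel in hardware; the operand width and block size are assumed parameters.

```python
# Sketch of a block-wise approximate adder: the operands are split into
# fixed-width blocks that are added independently, discarding any carry
# that would propagate between blocks.
def approximate_add(a: int, b: int, width: int = 16, block: int = 4) -> int:
    result = 0
    mask = (1 << block) - 1
    for shift in range(0, width, block):
        # Each sub-adder could work in parallel in hardware; carries out
        # of a block are simply dropped, bounding the error per block.
        partial = ((a >> shift) & mask) + ((b >> shift) & mask)
        result |= (partial & mask) << shift
    return result

print(approximate_add(0b0111, 0b0001, width=8, block=4))  # exact here: 8
print(approximate_add(0b1111, 0b0001, width=8, block=4))  # carry lost: 0
```

Dropping a carry at a block boundary introduces a bounded error (the weight of the lost carry), which is the usual price for breaking the carry chain.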
Approximate storage and memory
Instead of storing data values exactly, they can be stored approximately, e.g., by truncating the lower bits of floating-point data. Another method is to accept less reliable memory. To this end, refresh rates can be lowered or adaptively controlled in DRAM [12] and eDRAM. [13] In SRAM, the supply voltage can be lowered [14] or controlled. [15] Approximate storage can also be applied to reduce MRAM's high write energy consumption. [16] In general, any error detection and correction mechanisms should be disabled.
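As a software illustration of one of these ideas, the sketch below truncates the low-order mantissa bits of an IEEE-754 single-precision value before it is stored; the number of mantissa bits kept is an assumed quality knob.

```python
# Sketch of approximate storage by truncating low-order mantissa bits of
# IEEE-754 single-precision values before storing them.
import struct

def truncate_float32(x: float, mantissa_bits_kept: int = 12) -> float:
    # Reinterpret the float's bits as an integer, zero the low mantissa
    # bits (a float32 has a 23-bit mantissa), and reinterpret back.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    drop = 23 - mantissa_bits_kept
    bits &= ~((1 << drop) - 1)
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(truncate_float32(3.14159265))  # slightly less precise than float32
```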
Software-level approximation
There are several ways to approximate at the software level. Memoization or fuzzy memoization (the use of a vector database for approximate retrieval from a cache, i.e., fuzzy caching) can be applied. Some iterations of loops can be skipped (termed loop perforation) to produce a result faster. Some tasks can also be skipped entirely, for example when a run-time condition suggests that those tasks are not going to be useful (task skipping). Monte Carlo algorithms and randomized algorithms trade correctness for execution-time guarantees. [17] The computation can also be reformulated according to paradigms that lend themselves to acceleration on specialized hardware, e.g., a neural processing unit. [18]
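A minimal sketch of loop perforation, assuming a simple reduction loop and an illustrative perforation stride:

```python
# Sketch of loop perforation: only every `stride`-th iteration of a
# reduction loop is executed, and the partial result is rescaled.
def perforated_mean(values, stride: int = 4) -> float:
    total, count = 0.0, 0
    for i in range(0, len(values), stride):  # skip stride-1 of every stride iterations
        total += values[i]
        count += 1
    return total / count  # unbiased if values arrive in random order

data = list(range(1000))
print(perforated_mean(data))   # approximate mean (498.0 here)
print(sum(data) / len(data))   # exact mean: 499.5
```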
Approximate system
In an approximate system, [19] different subsystems such as the processor, memory, sensors, and communication modules are synergistically approximated to obtain a much better system-level quality-energy (Q-E) trade-off curve than approximating each subsystem individually.

Application areas

Approximate computing has been used in a variety of domains where the applications are error-tolerant, such as multimedia processing, machine learning, signal processing, and scientific computing. It is therefore mostly driven by applications that relate to human perception/cognition and have inherent error resilience. Many of these applications are based on statistical or probabilistic computation, so different approximations can be made to better suit the desired objectives. [20] One notable machine-learning application is Google's use of this approach in its Tensor Processing Unit (TPU), a custom ASIC. [21]

Derived paradigms

The main issue in approximate computing is identifying the sections of an application that can be approximated. In large-scale applications, it is very common for those with expertise in approximate computing techniques to lack sufficient expertise in the application domain (and vice versa). To address this problem, programming paradigms [22] have been proposed. They all share a clear separation of roles between the application programmer and the application domain expert. These approaches allow the most common optimizations and approximate computing techniques to spread.

Related Research Articles

Computer science – Study of computation

Computer science is the study of computation, information, and automation. It spans theoretical disciplines (such as algorithms and the theory of computation) to applied disciplines (such as the design and implementation of hardware and software).

Fast Fourier transform – O(N log N) discrete Fourier transform algorithm

A Fast Fourier Transform (FFT) is an algorithm that computes the Discrete Fourier Transform (DFT) of a sequence, or its inverse (IDFT). Fourier analysis converts a signal from its original domain to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse factors. As a result, it manages to reduce the complexity of computing the DFT from O(n²), which arises if one simply applies the definition of the DFT, to O(n log n), where n is the data size. The difference in speed can be enormous, especially for long data sets where n may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
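A brief sketch contrasting the two approaches, using NumPy's FFT for the fast version (the array length is illustrative):

```python
# Sketch contrasting the O(n^2) DFT definition with an O(n log n) FFT.
import numpy as np

def dft_direct(x):
    n = len(x)
    k = np.arange(n)
    # DFT matrix applied from the definition: X_k = sum_j x_j e^{-2*pi*i*j*k/n}
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.rand(256)
print(np.allclose(dft_direct(x), np.fft.fft(x)))  # True (up to round-off)
```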

Neural network (machine learning) – Computational model used in machine learning, based on connected, hierarchical functions

In machine learning, a neural network is a model inspired by the structure and function of biological neural networks in animal brains.

Quantum computing – Technology that uses quantum mechanics

A quantum computer is a computer that exploits quantum mechanical phenomena. On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is largely experimental and impractical, with several obstacles to useful applications.

Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation, such as the theory of computation, formal language theory, the lambda calculus and type theory.

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python-based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.

In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.

In electronics and computing, a soft error is a type of error where a signal or datum is wrong. Hard errors are caused by a defect, usually understood to be either a mistake in design or construction or a broken component. A soft error, by contrast, is a signal or datum which is wrong but is not assumed to imply such a mistake or breakage; after observing a soft error, there is no implication that the system is any less reliable than before. One cause of soft errors is single-event upsets from cosmic rays.

The expression computational intelligence (CI) usually refers to the ability of a computer to learn a specific task from data or experimental observation. Even though it is commonly considered a synonym of soft computing, there is still no commonly accepted definition of computational intelligence.

Hardware acceleration – Specialized computer hardware

Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both.

Non-negative matrix factorization, also non-negative matrix approximation is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered. Since the problem is not exactly solvable in general, it is commonly approximated numerically.
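One common numerical scheme for this approximation is the Lee–Seung multiplicative-update rule; the sketch below is a minimal version of it, with the rank, iteration count, and smoothing constant chosen arbitrarily for illustration.

```python
# Sketch of Lee–Seung multiplicative updates for non-negative matrix
# factorization V ≈ W H. The updates keep all entries non-negative
# by construction.
import numpy as np

def nmf(V, rank=5, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # eps avoids division by zero
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf(V)
print(np.linalg.norm(V - W @ H))  # residual shrinks with more iterations
```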

Computational electromagnetics – Branch of physics

Computational electromagnetics (CEM), computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment using computers.

Matching pursuit – Multidimensional data algorithm

Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete dictionary D. The basic idea is to approximately represent a signal f from Hilbert space H as a weighted sum of finitely many functions g_{γ_n} (atoms) taken from D. An approximation with N atoms has the form f(t) ≈ ∑_{n=1}^{N} a_n g_{γ_n}(t), where a_n is the scalar weighting factor of the atom g_{γ_n}.
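A minimal sketch of the greedy selection loop, assuming a random unit-norm dictionary and an arbitrary atom budget:

```python
# Sketch of matching pursuit: greedily pick the dictionary atom most
# correlated with the residual, subtract its contribution, and repeat.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    # dictionary: columns are unit-norm atoms g_gamma
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        gamma = np.argmax(np.abs(correlations))  # best-matching atom
        a = correlations[gamma]                  # weighting factor a_n
        coeffs[gamma] += a
        residual -= a * dictionary[:, gamma]     # peel off its contribution
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                   # normalize the atoms
f = rng.normal(size=64)
coeffs, r = matching_pursuit(f, D)
print(np.linalg.norm(r) < np.linalg.norm(f))     # residual shrank: True
```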

Echo state network – Type of reservoir computer

An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer. The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights modified during training are for the synapses that connect the hidden neurons to the output neurons. Thus, the error function is quadratic with respect to the parameter vector and its minimization reduces to solving a linear system.
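A minimal sketch of this training procedure; the reservoir size, spectral-radius scaling, input signal, and ridge parameter are all illustrative choices.

```python
# Sketch of echo state network training: only the readout weights are
# fitted, by solving a ridge-regularized linear least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in, T = 100, 1, 500
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()    # spectral radius below 1
W_in = rng.normal(size=(n_res, n_in))

u = np.sin(np.linspace(0, 20, T)).reshape(T, n_in)  # input signal
y = np.roll(u, -1, axis=0)                           # target: next value
states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):                                   # fixed, random reservoir
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

ridge = 1e-6                                         # regularization strength
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ y)                # the only trained weights
print(np.mean((states @ W_out - y) ** 2))            # small training error
```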

Spiking neural network – Artificial neural network that mimics neurons

Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
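A minimal sketch of one such model, the leaky integrate-and-fire neuron; all constants are illustrative.

```python
# Sketch of a leaky integrate-and-fire neuron: the membrane potential
# integrates input, leaks over time, and emits a spike whenever it
# crosses a threshold.
def lif_neuron(inputs, threshold=1.0, leak=0.9, reset=0.0):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current  # leaky integration
        if potential >= threshold:              # threshold crossing
            spikes.append(1)
            potential = reset                   # fire and reset
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 1, 0, 0, 1]
```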

Stochastic computing is a collection of techniques that represent continuous values by streams of random bits. Complex computations can then be computed by simple bit-wise operations on the streams. Stochastic computing is distinct from the study of randomized algorithms.
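A minimal sketch, assuming unipolar encoding (a value in [0, 1] is represented as the probability of a 1 in the stream), under which multiplication reduces to a bit-wise AND of two independent streams:

```python
# Sketch of stochastic computing: values in [0, 1] are encoded as the
# probability of a 1 in a random bit stream; ANDing two independent
# streams multiplies the encoded values. Stream length sets precision.
import random

def to_stream(value, length=10000, seed=0):
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(length)]

def from_stream(stream):
    return sum(stream) / len(stream)

a = to_stream(0.5, seed=0)
b = to_stream(0.25, seed=1)               # independent stream
product = [x & y for x, y in zip(a, b)]   # a single AND gate per bit pair
print(from_stream(product))               # ≈ 0.5 * 0.25 = 0.125
```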

Deep learning – Branch of machine learning

Deep learning is a subset of machine learning methods based on neural networks with representation learning. The adjective "deep" refers to the use of multiple layers in the network. The methods used can be supervised, semi-supervised, or unsupervised.

Physical unclonable function (PUF), sometimes also called physically unclonable function, is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict.

An AI accelerator, deep learning processor, or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.

Soft computing is an umbrella term for types of algorithms that produce approximate solutions to high-level problems in computer science that cannot be solved exactly. Typically, traditional hard-computing algorithms rely heavily on concrete data and mathematical models to produce solutions. The term soft computing was coined in the late 20th century, a period during which revolutionary research in three fields greatly shaped it. Fuzzy logic is a computational paradigm that accommodates uncertainty in data by using levels of truth rather than the rigid 0s and 1s of binary. Neural networks are computational models influenced by the functioning of the human brain. Finally, evolutionary computation describes groups of algorithms that mimic natural processes such as evolution and natural selection.

References

  1. J. Han and M. Orshansky, "Approximate computing: An emerging paradigm for energy-efficient design", in the 18th IEEE European Test Symposium, pp. 1-6, 2013.
  2. A. Sampson, et al. "EnerJ: Approximate data types for safe and general low-power computation", In ACM SIGPLAN Notices, vol. 46, no. 6, 2011.
  3. Jiang et al., "Approximate Arithmetic Circuits: A Survey, Characterization, and Recent Applications", Proceedings of the IEEE, vol. 108, no. 12, pp. 2108–2135, 2020.
  4. J. Echavarria, et al. "FAU: Fast and Error-Optimized Approximate Adder Units on LUT-Based FPGAs", FPT, 2016.
  5. J. Miao, et al. "Modeling and synthesis of quality-energy optimal approximate adders", ICCAD, 2012
  6. Rehman, Semeen; El-Harouni, Walaa; Shafique, Muhammad; Kumar, Akash; Henkel, Jörg (2016-11-07). Architectural-space exploration of approximate multipliers. ACM. p. 80. doi:10.1145/2966986.2967005. ISBN   9781450344661. S2CID   5326133.
  7. S. Venkataramani, et al. "SALSA: systematic logic synthesis of approximate circuits", DAC, 2012.
  8. J. Miao, et al. "Approximate logic synthesis under general error magnitude and frequency constraints", ICCAD, 2013
  9. R. Hegde et al. "Energy-efficient signal processing via algorithmic noise-tolerance", ISLPED, 1999.
  10. Camus, Vincent; Mei, Linyan; Enz, Christian; Verhelst, Marian (December 2019). "Review and Benchmarking of Precision-Scalable Multiply-Accumulate Unit Architectures for Embedded Neural-Network Processing". IEEE Journal on Emerging and Selected Topics in Circuits and Systems. 9 (4): 697–711. doi:10.1109/JETCAS.2019.2950386. ISSN 2156-3357. The implementation chosen in this study assumes a right-shifting sequential multiplier as it requires a smaller first-stage adder than a left-shifting design, preventing long carry propagation and sign-bit extension.
  11. Nagornov, Nikolay N.; Lyakhov, Pavel A.; Bergerman, Maxim V.; Kalita, Diana I. (2024). "Modern Trends in Improving the Technical Characteristics of Devices and Systems for Digital Image Processing". IEEE Access. 12: 44659–44681. doi:10.1109/ACCESS.2024.3381493. ISSN   2169-3536. Addition and accumulation of high order bits are not performed until the partial product reduction for the next multiplication in the proposed architecture.
  12. Raha, A.; Sutar, S.; Jayakumar, H.; Raghunathan, V. (July 2017). "Quality Configurable Approximate DRAM". IEEE Transactions on Computers. 66 (7): 1172–1187. doi: 10.1109/TC.2016.2640296 . ISSN   0018-9340.
  13. Kim, Yongjune; Choi, Won Ho; Guyot, Cyril; Cassuto, Yuval (December 2019). "On the Optimal Refresh Power Allocation for Energy-Efficient Memories". 2019 IEEE Global Communications Conference (GLOBECOM). Waikoloa, HI, USA: IEEE. pp. 1–6. arXiv: 1907.01112 . doi:10.1109/GLOBECOM38437.2019.9013465. ISBN   978-1-7281-0962-6. S2CID   195776538.
  14. Frustaci, Fabio; Blaauw, David; Sylvester, Dennis; Alioto, Massimo (June 2016). "Approximate SRAMs With Dynamic Energy-Quality Management". IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 24 (6): 2128–2141. doi:10.1109/TVLSI.2015.2503733. ISSN   1063-8210. S2CID   8051173.
  15. Kim, Yongjune; Kang, Mingu; Varshney, Lav R.; Shanbhag, Naresh R. (2018). "Generalized Water-filling for Source-aware Energy-efficient SRAMs". IEEE Transactions on Communications. 66 (10): 4826–4841. arXiv: 1710.07153 . doi:10.1109/TCOMM.2018.2841406. ISSN   0090-6778. S2CID   24512949.
  16. Kim, Yongjune; Jeon, Yoocharn; Choi, Hyeokjin; Guyot, Cyril; Cassuto, Yuval (2022). "Optimizing Write Fidelity of MRAMs by Alternating Water-filling Algorithm". IEEE Transactions on Communications. 70 (9): 5825–5836. doi:10.1109/TCOMM.2022.3190868. ISSN   0090-6778. S2CID   250565077.
  17. C.Alippi, Intelligence for Embedded Systems: a Methodological approach, Springer, 2014, pp. 283
  18. Esmaeilzadeh, Hadi; Sampson, Adrian; Ceze, Luis; Burger, Doug (2012). Neural acceleration for general-purpose approximate programs. 45th Annual IEEE/ACM International Symposium on Microarchitecture. Vancouver, BC: IEEE. pp. 449–460. doi:10.1109/MICRO.2012.48.
  19. Raha, Arnab; Raghunathan, Vijay (2017). "Towards Full-System Energy-Accuracy Tradeoffs". Proceedings of the 54th Annual Design Automation Conference 2017. DAC '17. New York, NY, USA: ACM. pp. 74:1–74:6. doi:10.1145/3061639.3062333. ISBN   9781450349277. S2CID   2503638.
  20. Liu, Weiqiang; Lombardi, Fabrizio; Schulte, Michael (Dec 2020). "Approximate Computing: From Circuits to Applications". Proceedings of the IEEE. 108 (12): 2103. doi: 10.1109/JPROC.2020.3033361 .
  21. Liu, Weiqiang; Lombardi, Fabrizio; Schulte, Michael (Dec 2020). "Approximate Computing: From Circuits to Applications". Proceedings of the IEEE. 108 (12): 2104. doi: 10.1109/JPROC.2020.3033361 .
  22. Nguyen, Donald; Lenharth, Andrew; Pingali, Keshav (2013). "A lightweight infrastructure for graph analytics". Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles. ACM. pp. 456–471. doi: 10.1145/2517349.2522739 . ISBN   9781450323888.