Floating-point error mitigation


Floating-point error mitigation is the minimization of errors caused by the fact that real numbers cannot, in general, be represented exactly in a fixed amount of space. By definition, floating-point error cannot be eliminated; at best it can only be managed.


Huberto M. Sierra noted in his 1956 patent "Floating Decimal Point Arithmetic Control Means for Calculator": [1]

Thus under some conditions, the major portion of the significant data digits may lie beyond the capacity of the registers. Therefore, the result obtained may have little meaning if not totally erroneous.

The Z1, developed by Konrad Zuse in 1936, was the first computer with floating-point arithmetic and was thus susceptible to floating-point error. Early computers, however, with operation times measured in milliseconds, could not solve large, complex problems [2] and thus were seldom plagued with floating-point error. Today, however, with supercomputer system performance measured in petaflops, floating-point error is a major concern for computational problem solvers.

The following sections describe the strengths and weaknesses of various means of mitigating floating-point error.

Numerical error analysis

Though not the primary focus of numerical analysis, [3] [4]:5 numerical error analysis exists for the analysis and minimization of floating-point rounding error.

Monte Carlo arithmetic

Error analysis by Monte Carlo arithmetic is accomplished by repeatedly injecting small errors into an algorithm's data values and determining the relative effect on the results.
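As a rough illustration, the following Python sketch (illustrative only; a full Monte Carlo arithmetic system would also randomize the rounding of every intermediate operation) perturbs the inputs of a naive summation with small random relative errors and reports the spread of the results:

    import random
    import statistics

    def perturb(x, rel_eps=1e-12):
        # Inject a small random relative error, standing in for rounding noise.
        return x * (1.0 + random.uniform(-rel_eps, rel_eps))

    def naive_sum(values):
        total = 0.0
        for v in values:
            total += v
        return total

    def monte_carlo_spread(values, trials=1000):
        # Repeat the computation on randomly perturbed copies of the data and
        # measure how much the results scatter.
        results = [naive_sum([perturb(v) for v in values]) for _ in range(trials)]
        return statistics.mean(results), statistics.stdev(results)

    data = [1e16, 1.0, -1e16] * 100
    mean, spread = monte_carlo_spread(data)
    print("mean:", mean, "spread:", spread)

A spread that is large relative to the mean suggests the computation is sensitive to small input errors, and hence likely also to rounding errors.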

Extension of precision

Extension of precision is the use of larger representations of real values than the one initially considered. The IEEE 754 standard defines precision as the number of digits available to represent real numbers. A programming language can include single precision (32 bits), double precision (64 bits), and quadruple precision (128 bits). While extension of precision makes the effects of error less likely or less important, the true accuracy of the results is still unknown.
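A minimal sketch, assuming NumPy is available, showing the same value stored in single and double precision:

    import numpy as np

    x = 1.0 / 3.0                             # 1/3 has no finite binary representation

    single = np.float32(x)                    # 32 bits: about 7 decimal digits of precision
    double = np.float64(x)                    # 64 bits: about 16 decimal digits of precision

    print(f"single: {float(single):.20f}")    # 0.33333334326744079590
    print(f"double: {float(double):.20f}")    # 0.33333333333333331483

The extra bits shrink the representation error, but they do not by themselves reveal how large the accumulated error of a longer computation actually is.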

Variable-length arithmetic

Variable-length arithmetic represents numbers as a string of digits of variable length, limited only by the memory available. Variable-length arithmetic operations are considerably slower than fixed-length format floating-point instructions. When high performance is not a requirement, but high precision is, variable-length arithmetic can prove useful, though the actual accuracy of the result may not be known.
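A minimal sketch using Python's standard decimal module, one common software implementation of arithmetic whose precision is chosen by the user rather than fixed by a hardware format:

    from decimal import Decimal, getcontext

    getcontext().prec = 50            # work with 50 significant decimal digits
    print(Decimal(1) / Decimal(3))    # 0.33333333333333333333333333333333333333333333333333
    print(1 / 3)                      # 0.3333333333333333 (64-bit hardware float)

Each decimal operation here is carried out in software, which is typically orders of magnitude slower than a hardware floating-point instruction.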

Use of the error term of a floating-point operation

The floating-point algorithm known as TwoSum [5] or 2Sum, due to Knuth and Møller, and its simpler but restricted version FastTwoSum or Fast2Sum (3 operations instead of 6), allow one to get the (exact) error term of a floating-point addition rounded to nearest. One can also obtain the (exact) error term of a floating-point multiplication rounded to nearest in 2 operations with a fused multiply–add (FMA), or 17 operations if the FMA is not available (with an algorithm due to Dekker). These error terms can be used in algorithms in order to improve the accuracy of the final result, e.g. with floating-point expansions or compensated algorithms.
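A short Python sketch of both algorithms (assuming IEEE 754 binary64 arithmetic with rounding to nearest, which is how CPython floats behave on common hardware):

    def two_sum(a, b):
        # Knuth/Møller 2Sum: 6 operations, no condition on the magnitudes of a and b.
        s = a + b
        a_round = s - b
        b_round = s - a_round
        err = (a - a_round) + (b - b_round)
        return s, err                # s + err equals a + b exactly

    def fast_two_sum(a, b):
        # Dekker's Fast2Sum: 3 operations, requires |a| >= |b|.
        s = a + b
        err = b - (s - a)
        return s, err

    s, e = two_sum(1e16, 1.0)
    print(s, e)                      # 1e+16 1.0 -- the addend lost to rounding is recovered

Compensated algorithms such as Kahan summation use this recovered error term to correct the running result.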

Operations giving the result of a floating-point addition or multiplication rounded to nearest with its error term (but slightly differing from algorithms mentioned above) have been standardized and recommended in the IEEE 754-2019 standard.

Choice of a different radix

Changing the radix, in particular from binary to decimal, can help to reduce the error and better control the rounding in some applications, such as financial applications.
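For example, decimal fractions common in financial data, such as 0.10, have no exact binary representation but are exact in radix 10. A short Python illustration using the standard decimal module:

    from decimal import Decimal

    print(0.1 + 0.2)                                               # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                                        # False
    print(Decimal("0.10") + Decimal("0.20"))                       # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))    # True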

Interval arithmetic

Interval arithmetic is a mathematical technique used to put bounds on rounding errors and measurement errors in mathematical computation. Values are intervals, which can be represented in various ways, such as by a lower and an upper bound (inf-sup) or by a midpoint and a radius (mid-rad). [6]

"Instead of using a single floating-point number as approximation for the value of a real variable in the mathematical model under investigation, interval arithmetic acknowledges limited precision by associating with the variable a set of reals as possible values. For ease of storage and computation, these sets are restricted to intervals." [7]

The evaluation of an interval arithmetic expression may provide a large range of values, [7] and may seriously overestimate the true error boundaries. [8]:8
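A minimal sketch of interval addition and multiplication in Python (requires Python 3.9+ for math.nextafter; the one-ulp outward widening used here is a conservative stand-in for the directed rounding modes a real implementation would use):

    import math

    def widen(lo, hi):
        # Push each endpoint outward by one ulp so the result still encloses
        # the mathematically exact interval despite rounding.
        return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

    def interval_add(x, y):
        (a, b), (c, d) = x, y
        return widen(a + c, b + d)

    def interval_mul(x, y):
        (a, b), (c, d) = x, y
        products = [a * c, a * d, b * c, b * d]
        return widen(min(products), max(products))

    x = widen(0.1, 0.1)              # a tiny interval that encloses the real number 0.1
    y = widen(0.2, 0.2)
    print(interval_add(x, y))        # an interval guaranteed to contain 0.3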

Gustafson's unums

Unums ("Universal Numbers") are an extension of variable length arithmetic proposed by John Gustafson. [9] Unums have variable length fields for the exponent and significand lengths and error information is carried in a single bit, the ubit, representing possible error in the least significant bit of the significand (ULP). [9] :4

The efficacy of unums is questioned by William Kahan. [8]

Bounded floating point

Bounded floating point is a method proposed and patented by Alan Jorgensen. [10] The data structure includes the standard IEEE 754 data structure and interpretation, as well as information about the error between the true real value represented and the value stored by the floating point representation. [11]

Bounded floating point has been criticized as being derivative of Gustafson's work on unums and interval arithmetic. [10] [12]


References

  1. "Floating decimal point arithmetic control means for calculator: United States Patent 3037701". FreePatentsOnline.com. 1962-06-05. Retrieved 2022-01-21.
  2. "History of Computer Development & Generation of Computer". WikiEducator. September 2014. Retrieved 2018-02-17.
  3. Trefethen, Lloyd N. (1992). "The Definition of Numerical Analysis" (PDF). SIAM. Retrieved 2018-02-16.
  4. Higham, Nicholas John (2002). Accuracy and Stability of Numerical Algorithms (2 ed.). Society for Industrial and Applied Mathematics (SIAM). ISBN   978-0-89871-521-7.
  5. Richard Shewchuk, Jonathan (October 1997). "Adaptive Precision Floating-Point Arithmetic and Fast Robust Geometric Predicates" (PDF). Discrete & Computational Geometry. 18 (3): 305–363. doi:10.1007/PL00009321. S2CID   189937041 . Retrieved 2022-11-14.
  6. "IEEE Standard for Interval Arithmetic". IEEE STD 1788-2015: 1–97. 2015-06-30. doi:10.1109/IEEESTD.2015.7140721. ISBN   978-0-7381-9720-3.
  7. 1 2 Hickey, T.; Ju, Q.; van Emden, M.H. (September 2001). "Interval Arithmetic: from Principles to Implementation" (PDF). Journal of the ACM. 48 (5): 1038–1068. CiteSeerX   10.1.1.43.8001 . doi:10.1145/502102.502106. S2CID   15105694 . Retrieved 2018-02-16.
  8. 1 2 Kahan, William (July 2016). "A Critique of John L. Gustafson's THE END of ERROR — Unum Computation and his A Radical Approach to Computation with Real Numbers" (PDF). Retrieved 2018-02-17.
  9. 1 2 Gustafson, John Leroy (2016-02-04) [2015-02-05]. The End of Error: Unum Computing. Chapman & Hall / CRC Computational Science. Vol. 24 (2nd corrected printing, 1st ed.). CRC Press. ISBN   978-1-4822-3986-7 . Retrieved 2016-05-30.
  10. 1 2 Trader, Tiffany (2018-01-17). "Inventor Claims to Have Solved Floating Point Error Problem". HPCwire. Retrieved 2022-03-01.
  11. USpatent 11023230B2,Jorgensen, Alan A.,"Apparatus for Calculating and Retaining a Bound on Error during Floating Point Operations and Methods Thereof",issued 2021-06-01
  12. "Has the Decades-Old Floating Point Error Problem been Solved?". insideHPC. 2018-01-17. Retrieved 2022-03-01.