Single-precision floating-point format

Single-precision floating-point format (sometimes called FP32 or float32) is a computer number format, usually occupying 32 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.

A floating-point variable can represent a wider range of numbers than a fixed-point variable of the same bit width at the cost of precision. A signed 32-bit integer variable has a maximum value of 2³¹ − 1 = 2,147,483,647, whereas an IEEE 754 32-bit base-2 floating-point variable has a maximum value of (2 − 2⁻²³) × 2¹²⁷ ≈ 3.4028235 × 10³⁸. All integers with 7 or fewer decimal digits, and any 2ⁿ for a whole number −149 ≤ n ≤ 127, can be converted exactly into an IEEE 754 single-precision floating-point value.

In the IEEE 754-2008 standard, the 32-bit base-2 format is officially referred to as binary32; it was called single in IEEE 754-1985. IEEE 754 specifies additional floating-point types, such as 64-bit base-2 double precision and, more recently, base-10 representations.

One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language designers. For example, GW-BASIC's single-precision data type was the 32-bit MBF floating-point format.

Single precision is termed REAL in Fortran;[1] SINGLE-FLOAT in Common Lisp;[2] float in C, C++, C#, and Java;[3] Float in Haskell[4] and Swift;[5] and Single in Object Pascal (Delphi), Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml and single in versions of Octave before 3.2 refer to double-precision numbers. In most implementations of PostScript, and some embedded systems, the only supported precision is single.

IEEE 754 standard: binary32

The IEEE 754 standard specifies a binary32 as having:

Sign bit: 1 bit
Exponent width: 8 bits
Significand precision: 24 bits (23 explicitly stored)

This gives 6 to 9 significant decimal digits of precision. If a decimal string with at most 6 significant digits is converted to the IEEE 754 single-precision format, giving a normal number, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number. [6]
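As an illustrative sketch (not part of the standard), the 9-digit round-trip property can be checked in Python by emulating binary32 with the standard struct module, since Python's own float is double precision:

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (a double) to the nearest binary32 value."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Nine significant decimal digits always survive the round trip:
original = to_float32(0.1)                  # nearest binary32 to 0.1
text = f"{original:.9g}"                    # '0.100000001'
assert to_float32(float(text)) == original  # recovers the exact same binary32 value
```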

The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent field is an 8-bit unsigned integer from 0 to 255, in biased form: a value of 127 represents the actual exponent zero. Exponents range from −126 to +127 (thus 1 to 254 in the exponent field), because the biased exponent values 0 (all 0s) and 255 (all 1s) are reserved for special numbers (subnormal numbers, signed zeros, infinities, and NaNs).

The true significand of normal numbers includes 23 fraction bits to the right of the binary point and an implicit leading bit (to the left of the binary point) with value 1. Subnormal numbers and zeros (which are the floating-point numbers smaller in magnitude than the least positive normal number) are represented with the biased exponent value 0, giving the implicit leading bit the value 0. Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits (equivalent to log₁₀(2²⁴) ≈ 7.225 decimal digits).

The bits are laid out as follows:

[Figure: layout of the binary32 value 0 01111100 01000000000000000000000₂ = 0.15625, with the sign in bit 31, the exponent in bits 30–23, and the fraction in bits 22–0]

The real value assumed by a given 32-bit binary32 data with a given sign, biased exponent e (the 8-bit unsigned integer), and a 23-bit fraction is

value = (−1)^(b₃₁) × 2^((b₃₀b₂₉...b₂₃)₂ − 127) × (1.b₂₂b₂₁...b₀)₂,

which yields

value = (−1)^sign × 2^(e − 127) × (1 + b₂₂·2⁻¹ + b₂₁·2⁻² + ... + b₀·2⁻²³).

In this example (the bit pattern 0 01111100 01000000000000000000000₂ from the figure above):

sign = b₃₁ = 0
e = (b₃₀b₂₉...b₂₃)₂ = (01111100)₂ = 124
1.b₂₂b₂₁...b₀ = (1.01)₂ = 1 + 2⁻² = 1.25

thus:

value = (+1) × 2^(124 − 127) × 1.25 = 2⁻³ × 1.25 = 0.15625.

Note: the leading 1 of the significand is implicit and not stored; only the 23 fraction bits appear in the memory format.
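A minimal sketch of this decoding in Python (valid for normal numbers only; struct is used merely to cross-check against the machine's own binary32 handling):

```python
import struct

def decode_binary32(bits: int) -> float:
    """Evaluate value = (-1)**sign * 2**(e - 127) * 1.fraction for a normal number."""
    sign     = (bits >> 31) & 0x1
    e        = (bits >> 23) & 0xFF        # 8-bit biased exponent
    fraction =  bits        & 0x7FFFFF    # 23 stored fraction bits
    assert 0 < e < 255, "zeros, subnormals, infinities and NaNs need special handling"
    return (-1) ** sign * 2.0 ** (e - 127) * (1 + fraction / 2 ** 23)

bits = 0b0_01111100_01000000000000000000000    # the example above
print(decode_binary32(bits))                             # 0.15625
print(struct.unpack('<f', struct.pack('<I', bits))[0])   # 0.15625, via the raw bit pattern
```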

Exponent encoding

The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127; this offset is known as the exponent bias in the IEEE 754 standard.

Thus, in order to get the true exponent as defined by the offset-binary representation, the offset of 127 has to be subtracted from the stored exponent.

The stored exponents 00H and FFH are interpreted specially.

Exponent | fraction = 0 | fraction ≠ 0 | Equation
00H = 00000000₂ | ±zero | subnormal number | (−1)^sign × 2⁻¹²⁶ × 0.fraction₂
01H, ..., FEH = 00000001₂, ..., 11111110₂ | normal value | normal value | (−1)^sign × 2^(e − 127) × 1.fraction₂
FFH = 11111111₂ | ±infinity | NaN (quiet, signalling) |

The minimum positive normal value is 2⁻¹²⁶ ≈ 1.1754943508 × 10⁻³⁸ and the minimum positive (subnormal) value is 2⁻¹⁴⁹ ≈ 1.4012984643 × 10⁻⁴⁵.
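These limits can be reproduced in Python (a sketch; the struct module rounds each value through the binary32 format):

```python
import struct

def to_float32(x: float) -> float:
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(2.0 ** -126))   # 1.1754943508222875e-38, smallest positive normal
print(to_float32(2.0 ** -149))   # 1.401298464324817e-45, smallest positive subnormal
print(to_float32(2.0 ** -150))   # 0.0, underflows to zero (round to nearest even)
```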

Converting decimal to binary32

In general, refer to the IEEE 754 standard itself for the strict conversion (including the rounding behaviour) of a real number into its equivalent binary32 format.

Here we can show how to convert a base-10 real number into an IEEE 754 binary32 format using the following outline:

Consider a real number with an integer and a fraction part, such as 12.375.
Convert and normalize the integer part into binary.
Convert the fraction part using the technique shown below.
Add the two results and adjust them to produce a proper final conversion.

Conversion of the integer part: The integer part of 12.375 is 12, which is (1100)₂ in binary.
Conversion of the fractional part: Consider 0.375, the fractional part of 12.375. To convert it into a binary fraction, repeatedly multiply the fraction by 2, take the integer part as the next binary digit, and continue with the remaining fraction until a fraction of zero is found or until the precision limit is reached, which is 23 fraction digits for the IEEE 754 binary32 format.

0.375 × 2 = 0.750 = 0 + 0.750 ⇒ b₋₁ = 0, the integer part represents the binary fraction digit. Re-multiply 0.750 by 2 to proceed:
0.750 × 2 = 1.500 = 1 + 0.500 ⇒ b₋₂ = 1
0.500 × 2 = 1.000 = 1 + 0.000 ⇒ b₋₃ = 1, fraction = 0.011₂, terminate

We see that (0.375)₁₀ can be exactly represented in binary as (0.011)₂. Not all decimal fractions can be represented in a finite-digit binary fraction. For example, decimal 0.1 cannot be represented in binary exactly, only approximated. Therefore:

(12.375)₁₀ = (12)₁₀ + (0.375)₁₀ = (1100)₂ + (0.011)₂ = (1100.011)₂

Since IEEE 754 binary32 format requires real values to be represented in (1.x₁x₂...x₂₃)₂ × 2ⁿ format (see Normalized number, Denormalized number), 1100.011₂ is shifted to the right by 3 digits to become (1.100011)₂ × 2³.

Finally we can see that:

(12.375)₁₀ = (1.100011)₂ × 2³

From which we deduce:

The exponent is 3 (and in the biased form it is therefore 3 + 127 = 130 = (10000010)₂).
The fraction is 100011 (looking to the right of the binary point).

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of 12.375:

(12.375)₁₀ = 0 10000010 10001100000000000000000₂ = 41460000₁₆

Note: consider converting 68.123 into IEEE 754 binary32 format. Using the above procedure you expect to get 0 10000101 00010000011111011111001₂ = 42883EF9₁₆, with the last 4 bits being 1001. However, due to the default rounding behaviour of IEEE 754 format, what you get is 0 10000101 00010000011111011111010₂ = 42883EFA₁₆, whose last 4 bits are 1010.
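The rounding note can be verified in Python, again emulating binary32 through struct (an illustrative sketch, not part of the conversion procedure itself):

```python
import struct

# Pack 68.123 as binary32 (rounds to nearest) and inspect the raw bits:
bits = struct.unpack('<I', struct.pack('<f', 68.123))[0]
print(f"{bits:032b}")   # 01000010100010000011111011111010, last 4 bits are 1010
print(f"{bits:08X}")    # 42883EFA, not the truncated 42883EF9
```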

Example 1: Consider decimal 1. We can see that:

(1)₁₀ = (1.0)₂ × 2⁰

From which we deduce:

The exponent is 0 (and in the biased form it is therefore 0 + 127 = 127 = (01111111)₂).
The fraction is 0 (looking to the right of the binary point in 1.0, it is all zeros).

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of real number 1:

(1)₁₀ = 0 01111111 00000000000000000000000₂ = 3F800000₁₆

Example 2: Consider a value 0.25. We can see that:

(0.25)₁₀ = (1.0)₂ × 2⁻²

From which we deduce:

The exponent is −2 (and in the biased form it is therefore −2 + 127 = 125 = (01111101)₂).
The fraction is 0 (looking to the right of the binary point in 1.0, it is all zeros).

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of real number 0.25:

(0.25)₁₀ = 0 01111101 00000000000000000000000₂ = 3E800000₁₆

Example 3: Consider a value of 0.375. We saw that

(0.375)₁₀ = (0.011)₂

Hence after determining a representation of 0.375 as (1.1)₂ × 2⁻² we can proceed as above:

The exponent is −2 (and in the biased form it is therefore −2 + 127 = 125 = (01111101)₂).
The fraction is 1 (looking to the right of the binary point in 1.1, there is a single 1).

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of real number 0.375:

(0.375)₁₀ = 0 01111101 10000000000000000000000₂ = 3EC00000₁₆
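The three worked examples, and 12.375 from above, can be cross-checked in Python with the same struct-based sketch:

```python
import struct

for value in (12.375, 1.0, 0.25, 0.375):
    bits = struct.unpack('<I', struct.pack('<f', value))[0]
    sign, exp, frac = bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF
    print(f"{value:>8} = {sign} {exp:08b} {frac:023b} = {bits:08X}")
# 12.375 = 0 10000010 10001100000000000000000 = 41460000
#    1.0 = 0 01111111 00000000000000000000000 = 3F800000
#   0.25 = 0 01111101 00000000000000000000000 = 3E800000
#  0.375 = 0 01111101 10000000000000000000000 = 3EC00000
```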

Converting binary32 to decimal

If the binary32 value, 41C80000 in this example, is in hexadecimal we first convert it to binary:

41C8 0000₁₆ = 0100 0001 1100 1000 0000 0000 0000 0000₂

then we break it down into three parts: sign bit, exponent, and significand.

Sign bit: 0
Exponent: 10000011₂ = 83₁₆ = 131
Significand: 10010000000000000000000₂ = 480000₁₆

We then add the implicit 24th bit to the significand:

Significand: 110010000000000000000000₂ = C80000₁₆

and decode the exponent value by subtracting 127:

Raw exponent: 10000011₂ = 131
Decoded exponent: 131 − 127 = 4

Each of the 24 bits of the significand (including the implicit 24th bit), bit 23 to bit 0, represents a value, starting at 1 and halving for each subsequent bit, as follows:

bit 23 = 1
bit 22 = 0.5
bit 21 = 0.25
bit 20 = 0.125
bit 19 = 0.0625
bit 18 = 0.03125
bit 17 = 0.015625
...
bit 6 = 0.00000762939453125
bit 5 = 0.000003814697265625
bit 4 = 0.0000019073486328125
bit 3 = 0.00000095367431640625
bit 2 = 0.000000476837158203125
bit 1 = 0.0000002384185791015625
bit 0 = 0.00000011920928955078125

The significand in this example has three bits set: bit 23, bit 22, and bit 19. We can now decode the significand by adding the values represented by these bits:

1 + 0.5 + 0.0625 = 1.5625

Then we need to multiply with the base, 2, to the power of the exponent, to get the final result:

1.5625 × 2⁴ = 25

Thus

41C8 0000₁₆ = 25

This is equivalent to:

value = (−1)^s × (1 + m × 2⁻²³) × 2^(x − 127)

where s is the sign bit, x is the 8-bit biased exponent, and m is the 23-bit significand field, each read from the bit pattern as an unsigned integer.
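The whole decoding of 41C80000₁₆ condenses into a few lines of Python (a sketch of the formula above, valid for normal numbers):

```python
bits = 0x41C80000
s = bits >> 31                     # 0
x = (bits >> 23) & 0xFF            # 131
m = bits & 0x7FFFFF                # 0x480000 = 4718592
value = (-1) ** s * (1 + m * 2 ** -23) * 2 ** (x - 127)
print(value)                       # 25.0
```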

Precision limitations on decimal values (between 1 and 16777216)

Binary32 values are evenly spaced between successive powers of two: in the interval [2ⁿ, 2ⁿ⁺¹), the spacing is 2ⁿ⁻²³. Decimal values between 1 and 2 are therefore rounded to the nearest multiple of 2⁻²³ (≈ 0.000000119), decimal values between 2 and 4 to the nearest multiple of 2⁻²², and so on, up to values between 2²³ = 8388608 and 2²⁴ = 16777216, which are rounded to the nearest whole number.

Precision limitations on integer values

All integers with magnitude at most 2²⁴ = 16,777,216 can be exactly represented. Integers between 2²⁴ and 2²⁵ round to a multiple of 2, integers between 2²⁵ and 2²⁶ round to a multiple of 4, and in general integers between 2ⁿ and 2ⁿ⁺¹ (for n ≥ 24) round to a multiple of 2ⁿ⁻²³.
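A short Python check of the integer limit (a sketch; struct emulates binary32 as before):

```python
import struct

def to_float32(x: float) -> float:
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(16777216.0) == to_float32(16777217.0))  # True: 2**24 + 1 rounds back to 2**24
print(to_float32(16777218.0))                            # 16777218.0: even integers remain exact
```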

Notable single-precision cases

These examples are given in the bit representation of the floating-point value, in binary and in hexadecimal. They include the sign, (biased) exponent, and significand fields.

0 00000000 00000000000000000000001₂ = 0000 0001₁₆ = 2⁻¹²⁶ × 2⁻²³ = 2⁻¹⁴⁹ ≈ 1.4012984643 × 10⁻⁴⁵ (smallest positive subnormal number)
0 00000000 11111111111111111111111₂ = 007f ffff₁₆ = 2⁻¹²⁶ × (1 − 2⁻²³) ≈ 1.1754942107 × 10⁻³⁸ (largest subnormal number)
0 00000001 00000000000000000000000₂ = 0080 0000₁₆ = 2⁻¹²⁶ ≈ 1.1754943508 × 10⁻³⁸ (smallest positive normal number)
0 11111110 11111111111111111111111₂ = 7f7f ffff₁₆ = 2¹²⁷ × (2 − 2⁻²³) ≈ 3.4028234664 × 10³⁸ (largest normal number)
0 01111110 11111111111111111111111₂ = 3f7f ffff₁₆ = 1 − 2⁻²⁴ ≈ 0.999999940395355225 (largest number less than one)
0 01111111 00000000000000000000000₂ = 3f80 0000₁₆ = 1 (one)
0 01111111 00000000000000000000001₂ = 3f80 0001₁₆ = 1 + 2⁻²³ ≈ 1.00000011920928955 (smallest number larger than one)
1 10000000 00000000000000000000000₂ = c000 0000₁₆ = −2
0 00000000 00000000000000000000000₂ = 0000 0000₁₆ = 0
1 00000000 00000000000000000000000₂ = 8000 0000₁₆ = −0
0 11111111 00000000000000000000000₂ = 7f80 0000₁₆ = infinity
1 11111111 00000000000000000000000₂ = ff80 0000₁₆ = −infinity
0 10000000 10010010000111111011011₂ = 4049 0fdb₁₆ ≈ 3.14159274101257324 ≈ π (pi)
0 01111101 01010101010101010101011₂ = 3eaa aaab₁₆ ≈ 0.333333343267440796 ≈ 1/3
x 11111111 10000000000000000000001₂ = ffc0 0001₁₆ = qNaN (on x86 and ARM processors)
x 11111111 00000000000000000000001₂ = ff80 0001₁₆ = sNaN (on x86 and ARM processors)

By default, 1/3 rounds up, instead of down as in double precision, because of the even number of bits in the significand. The bits of 1/3 beyond the rounding point are 1010..., which is more than 1/2 of a unit in the last place.

The encodings of qNaN and sNaN are not fully specified in IEEE 754 and are implemented differently on different processors. The x86 family and the ARM family processors use the most significant bit of the significand field to indicate a quiet NaN. The PA-RISC processors use that bit to indicate a signalling NaN.
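As a sketch, the x86/ARM NaN bit patterns can be materialized in Python (note that converting a signalling NaN to a Python double may quiet it, but the isnan check still holds):

```python
import math
import struct

for bits in (0x7FC00000,   # a quiet NaN on x86/ARM: quiet bit (fraction MSB) set
             0x7F800001):  # a signalling NaN payload: quiet bit clear, fraction nonzero
    value = struct.unpack('<f', struct.pack('<I', bits))[0]
    print(f"{bits:08X} -> isnan = {math.isnan(value)}")   # both report True
```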

Optimizations

The design of the floating-point format allows various optimisations, resulting from the easy generation of a base-2 logarithm approximation from an integer view of the raw bit pattern. Integer arithmetic and bit-shifting can yield an approximation to the reciprocal square root (fast inverse square root), commonly required in computer graphics.
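A well-known instance is the Quake III fast inverse square root. A Python transcription follows (a sketch: the magic constant 0x5F3759DF and the Newton step follow the classic C version, with struct standing in for C's pointer casts):

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # reinterpret float bits as an integer
    i = 0x5F3759DF - (i >> 1)                         # halve and negate the crude log2 estimate
    y = struct.unpack('<f', struct.pack('<I', i))[0]  # reinterpret back as a float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson refinement step

print(fast_inverse_sqrt(4.0))   # ~0.49915, close to the exact 0.5
```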

See also

IEEE 754
Floating-point arithmetic
Double-precision floating-point format
Half-precision floating-point format
Quadruple-precision floating-point format
Extended precision
bfloat16 floating-point format
Minifloat
Decimal floating point
Significand
Exponent bias
Fast inverse square root

References

  1. "REAL Statement". scc.ustc.edu.cn. Archived from the original on 2021-02-24. Retrieved 2013-02-28.
  2. "CLHS: Type SHORT-FLOAT, SINGLE-FLOAT, DOUBLE-FLOAT..."
  3. "Primitive Data Types". Java Documentation.
  4. "6 Predefined Types and Classes". haskell.org. 20 July 2010.
  5. "Float". Apple Developer Documentation.
  6. William Kahan (1 October 1997). "Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic" (PDF). p. 4.