**Single-precision floating-point format** (sometimes called **FP32** or **float32**) is a computer number format, usually occupying 32 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.


A floating-point variable can represent a wider range of numbers than a fixed-point variable of the same bit width at the cost of precision. A signed 32-bit integer variable has a maximum value of 2^{31} − 1 = 2,147,483,647, whereas an IEEE 754 32-bit base-2 floating-point variable has a maximum value of (2 − 2^{−23}) × 2^{127} ≈ 3.4028235 × 10^{38}. All integers with 7 or fewer decimal digits, and any 2^{n} for a whole number −149 ≤ *n* ≤ 127, can be converted exactly into an IEEE 754 single-precision floating-point value.

In the IEEE 754-2008 standard, the 32-bit base-2 format is officially referred to as **binary32**; it was called **single** in IEEE 754-1985. IEEE 754 specifies additional floating-point types, such as 64-bit base-2 *double precision* and, more recently, base-10 representations.

One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model, and upon decisions made by programming-language designers. E.g., GW-BASIC's single-precision data type was the 32-bit MBF floating-point format.

Single precision is termed *REAL* in Fortran,^{[1]} *SINGLE-FLOAT* in Common Lisp,^{[2]} *float* in C, C++, C#, Java,^{[3]} *Float* in Haskell^{[4]} and Swift,^{[5]} and *Single* in Object Pascal (Delphi), Visual Basic, and MATLAB. However, *float* in Python, Ruby, PHP, and OCaml, and *single* in versions of Octave before 3.2, refer to double-precision numbers. In most implementations of PostScript, and in some embedded systems, the only supported precision is single.


The IEEE 754 standard specifies a *binary32* as having:

- Sign bit: 1 bit
- Exponent width: 8 bits
- Significand precision: 24 bits (23 explicitly stored)

This gives from 6 to 9 significant decimal digits precision. If a decimal string with at most 6 significant digits is converted to IEEE 754 single-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number.^{ [6] }
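This round-trip property is easy to demonstrate. Below is a minimal Python sketch; Python's native float is a binary64, so the standard `struct` module is used to round through binary32, and the helper name `to_f32` is ours:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

# A decimal string with 6 significant digits survives the round trip...
print(f"{to_f32(3.14159):.6g}")  # 3.14159
# ...and printing 9 significant digits recovers any binary32 value exactly.
x = to_f32(1 / 3)
assert to_f32(float(f"{x:.9g}")) == x
```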

The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent is an 8-bit unsigned integer from 0 to 255, in biased form: an exponent value of 127 represents the actual zero. Exponents range from −126 to +127 because exponents of −127 (all 0s) and +128 (all 1s) are reserved for special numbers.

The true significand includes 23 fraction bits to the right of the binary point and an *implicit leading bit* (to the left of the binary point) with value 1, unless the exponent is stored with all zeros. Thus only 23 fraction bits of the significand appear in the memory format, but the total precision is 24 bits (equivalent to log_{10}(2^{24}) ≈ 7.225 decimal digits). The bits are laid out as follows: the sign bit occupies bit 31, the exponent occupies bits 30 to 23, and the fraction occupies bits 22 to 0.

The real value assumed by a given 32-bit *binary32* datum with a given *sign*, biased exponent *E* (the 8-bit unsigned integer), and a *23-bit fraction* is

- value = (−1)^{b_{31}} × 2^{(b_{30}b_{29}...b_{23})_{2} − 127} × (1.b_{22}b_{21}...b_{0})_{2},

which yields

- value = (−1)^{sign} × 2^{E − 127} × (1 + Σ_{i=1}^{23} b_{23−i} 2^{−i}).

In this example, the value 0.15625 is stored as the bit pattern 0 01111100 01000000000000000000000_{2}, so:

- sign = b_{31} = 0,
- (−1)^{sign} = (−1)^{0} = +1,
- E = (b_{30}b_{29}...b_{23})_{2} = 01111100_{2} = 124,
- 2^{E − 127} = 2^{124 − 127} = 2^{−3},
- 1.b_{22}b_{21}...b_{0} = 1.01_{2} = 1 + 2^{−2} = 1.25.

thus:

- value = (+1) × 2^{−3} × 1.25 = +0.15625.

Note:

- the only set fraction bit is b_{21}, contributing 2^{−2} = 0.25,
- the significand is therefore 1 + 0.25 = 1.25,
- the scale factor is 2^{124 − 127} = 2^{−3} = 0.125,
- value = 1.25 × 0.125 = 0.15625.
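The formula above can also be checked mechanically. The following Python sketch (the helper name `decode_binary32` is ours, not part of any standard API) decodes a raw bit pattern by hand and verifies the worked example:

```python
import struct

def decode_binary32(bits: int) -> float:
    """Decode a 32-bit IEEE 754 binary32 bit pattern by hand."""
    sign = (bits >> 31) & 0x1
    E = (bits >> 23) & 0xFF          # biased exponent, 8 bits
    fraction = bits & 0x7FFFFF       # 23 stored fraction bits
    if E == 0xFF:                    # reserved: infinity or NaN
        return float("nan") if fraction else (-1.0) ** sign * float("inf")
    if E == 0:                       # subnormal: no implicit leading 1
        return (-1.0) ** sign * (fraction / 2**23) * 2**-126
    return (-1.0) ** sign * (1 + fraction / 2**23) * 2 ** (E - 127)

# The worked example: 0 01111100 01000000000000000000000
assert decode_binary32(0b0_01111100_01000000000000000000000) == 0.15625
# Cross-check against the platform's native binary32 decoding
assert struct.unpack(">f", (0x3E200000).to_bytes(4, "big"))[0] == 0.15625
```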

The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127; this offset is known as the exponent bias in the IEEE 754 standard.

- E_{min} = 01_{H} − 7F_{H} = −126
- E_{max} = FE_{H} − 7F_{H} = 127
- Exponent bias = 7F_{H} = 127

Thus, in order to get the true exponent as defined by the offset-binary representation, the offset of 127 has to be subtracted from the stored exponent.

The stored exponents 00_{H} and FF_{H} are interpreted specially.

| Exponent | fraction = 0 | fraction ≠ 0 | Equation |
|---|---|---|---|
| 00_{H} = 00000000_{2} | ±zero | subnormal number | value = (−1)^{sign} × 2^{−126} × 0.fraction |
| 01_{H}, ..., FE_{H} = 00000001_{2}, ..., 11111110_{2} | normal value | normal value | value = (−1)^{sign} × 2^{E − 127} × 1.fraction |
| FF_{H} = 11111111_{2} | ±infinity | NaN (quiet, signalling) | |

The minimum positive normal value is 2^{−126} ≈ 1.18 × 10^{−38} and the minimum positive (subnormal) value is 2^{−149} ≈ 1.4 × 10^{−45}.
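Both limits can be confirmed directly from the raw bit patterns; a short Python sketch:

```python
import struct

# 0x00800000: exponent 01_H, fraction 0 -- smallest positive normal value
print(struct.unpack(">f", bytes.fromhex("00800000"))[0])  # 1.1754943508...e-38 = 2**-126
# 0x00000001: exponent 00_H, fraction 1 -- smallest positive subnormal value
print(struct.unpack(">f", bytes.fromhex("00000001"))[0])  # 1.401298464...e-45 = 2**-149
```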


In general, refer to the IEEE 754 standard itself for the strict conversion (including the rounding behaviour) of a real number into its equivalent binary32 format.

Here we can show how to convert a base-10 real number into an IEEE 754 binary32 format using the following outline:

- Consider a real number with an integer and a fraction part such as 12.375
- Convert and normalize the integer part into binary
- Convert the fractional part using the repeated multiplication-by-2 technique shown below
- Add the two results and adjust them to produce a proper final conversion

**Conversion of the fractional part:** Consider 0.375, the fractional part of 12.375. To convert it into a binary fraction, multiply the fraction by 2, take the integer part, and repeat with the new fraction until a fraction of zero is found or until the precision limit is reached, which is 23 fraction digits for the IEEE 754 binary32 format.

- 0.375 × 2 = 0.750 = 0 + 0.750; the integer part (0) is the first binary fraction digit. Re-multiply 0.750 by 2 to proceed.
- 0.750 × 2 = 1.500 = 1 + 0.500; the integer part (1) is the second digit. Re-multiply 0.500 by 2 to proceed.
- 0.500 × 2 = 1.000 = 1 + 0.000; the integer part (1) is the third digit; fraction = 0.011_{2}, terminate.
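The repeated doubling above is mechanical enough to script. A minimal Python sketch (the function name `fraction_to_binary` is ours; Python's float is a binary64, whose 52 fraction bits are ample for the first 23 digits here):

```python
def fraction_to_binary(frac: float, max_digits: int = 23) -> str:
    """Emit the binary fraction digits of frac in [0, 1) by repeated doubling."""
    digits = []
    while frac and len(digits) < max_digits:
        frac *= 2
        digits.append("1" if frac >= 1 else "0")  # integer part is the next digit
        frac -= int(frac)
    return "".join(digits)

print(fraction_to_binary(0.375))  # '011' -- terminates exactly
print(fraction_to_binary(0.1))    # 23 digits; 0.1 never terminates in binary
```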

We see that 0.375 can be exactly represented in binary as 0.011_{2}. Not all decimal fractions can be represented in a finite-digit binary fraction. For example, decimal 0.1 cannot be represented in binary exactly, only approximated. Therefore:

- 12.375 = 12 + 0.375 = 1100_{2} + 0.011_{2} = 1100.011_{2}

Since the IEEE 754 binary32 format requires real values to be represented in 1.x_{1}x_{2}...x_{23} × 2^{e} format (see Normalized number, Denormalized number), 1100.011_{2} is shifted to the right by 3 digits to become 1.100011_{2} × 2^{3}.

Finally we can see that:

- 12.375 = 1.100011_{2} × 2^{3}

From which we deduce:

- The exponent is 3 (and in the biased form it is therefore 127 + 3 = 130 = 10000010_{2})
- The fraction is 100011 (looking to the right of the binary point)

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of 12.375:

- 0 10000010 10001100000000000000000_{2} = 41460000_{16}

Note: consider converting 68.123 into IEEE 754 binary32 format: Using the above procedure you expect to get 42883EF9_{16}, with the last 4 bits being 1001. However, due to the default rounding behaviour of IEEE 754 format, what you get is 42883EFA_{16}, whose last 4 bits are 1010.
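A quick way to verify both bit patterns is to let the hardware perform the IEEE 754 rounding; a small Python sketch (the helper name `to_binary32_bits` is ours):

```python
import struct

def to_binary32_bits(x: float) -> int:
    """Round x to the nearest binary32 value and return its 32-bit pattern."""
    return int.from_bytes(struct.pack(">f", x), "big")

print(f"{to_binary32_bits(12.375):08x}")  # 41460000 -- exactly representable
print(f"{to_binary32_bits(68.123):08x}")  # 42883efa -- rounded up; last 4 bits 1010
```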

**Example 1:** Consider decimal 1. We can see that:

- 1 = 1.0_{2} × 2^{0}

From which we deduce:

- The exponent is 0 (and in the biased form it is therefore 127 + 0 = 127 = 01111111_{2})
- The fraction is 0 (looking to the right of the binary point in 1.0 is all zeroes)

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of real number 1:

- 0 01111111 00000000000000000000000_{2} = 3F800000_{16}

**Example 2:** Consider a value 0.25. We can see that:

- 0.25 = 1.0_{2} × 2^{−2}

From which we deduce:

- The exponent is −2 (and in the biased form it is 127 + (−2) = 125 = 01111101_{2})
- The fraction is 0 (looking to the right of the binary point in 1.0 is all zeroes)

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of real number 0.25:

- 0 01111101 00000000000000000000000_{2} = 3E800000_{16}

**Example 3:** Consider a value of 0.375. We saw that

- 0.375 = 0.011_{2} = 1.1_{2} × 2^{−2}

Hence after determining a representation of 0.375 as 1.1_{2} × 2^{−2} we can proceed as above:

- The exponent is −2 (and in the biased form it is 127 + (−2) = 125 = 01111101_{2})
- The fraction is 1 (looking to the right of the binary point in 1.1 is a single 1)

From these we can form the resulting 32-bit IEEE 754 binary32 format representation of real number 0.375:

- 0 01111101 10000000000000000000000_{2} = 3EC00000_{16}


If the binary32 value, 41C80000_{16} in this example, is given in hexadecimal, we first convert it to binary:

- 0100 0001 1100 1000 0000 0000 0000 0000_{2}

then we break it down into three parts: sign bit, exponent, and significand.

- Sign bit: 0
- Exponent: 10000011_{2}
- Significand: 10010000000000000000000_{2}

We then add the implicit 24th bit to the significand:

- Significand: 1.10010000000000000000000_{2}

and decode the exponent value by subtracting 127:

- Raw exponent: 10000011_{2} = 83_{16} = 131
- Decoded exponent: 131 − 127 = 4

Each of the 24 bits of the significand (including the implicit 24th bit), bit 23 to bit 0, represents a value, starting at 1 and halving for each subsequent bit, as follows:

- bit 23 = 1
- bit 22 = 0.5
- bit 21 = 0.25
- bit 20 = 0.125
- bit 19 = 0.0625
- bit 18 = 0.03125
- ...
- bit 0 = 0.00000011920928955078125

The significand in this example has three bits set: bit 23, bit 22, and bit 19. We can now decode the significand by adding the values represented by these bits.

- Decoded significand: 1 + 0.5 + 0.0625 = 1.5625

Then we need to multiply with the base, 2, to the power of the decoded exponent, to get the final result:

- 1.5625 × 2^{4} = 25

Thus

- 41C80000_{16} = 25.0

This is equivalent to:

- value = (−1)^{s} × m × 2^{x − 127}

where s is the sign bit, x is the raw exponent, and m is the decoded significand (including the implicit leading bit).
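The same decoding can be reproduced in a few lines; a Python sketch that mirrors the steps above:

```python
import struct

bits = 0x41C80000
sign = bits >> 31                            # 0
exponent = ((bits >> 23) & 0xFF) - 127       # 131 - 127 = 4
significand = 1 + (bits & 0x7FFFFF) / 2**23  # 1 + 0.5 + 0.0625 = 1.5625
print((-1) ** sign * significand * 2**exponent)         # 25.0
# Cross-check against the platform's native decoding
print(struct.unpack(">f", bits.to_bytes(4, "big"))[0])  # 25.0
```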

- Decimals between 1 and 2: fixed interval 2^{−23} (1 + 2^{−23} is the next largest float after 1)
- Decimals between 2 and 4: fixed interval 2^{−22}
- Decimals between 4 and 8: fixed interval 2^{−21}
- ...
- Decimals between 2^{n} and 2^{n+1}: fixed interval 2^{n−23}
- ...
- Decimals between 2^{22} = 4194304 and 2^{23} = 8388608: fixed interval 2^{−1} = 0.5
- Decimals between 2^{23} = 8388608 and 2^{24} = 16777216: fixed interval 2^{0} = 1

- Integers between 0 and 16777216 can be exactly represented (also applies for negative integers between −16777216 and 0)
- Integers between 2^{24} = 16777216 and 2^{25} = 33554432 round to a multiple of 2 (even number)
- Integers between 2^{25} and 2^{26} round to a multiple of 4
- ...
- Integers between 2^{n} and 2^{n+1} round to a multiple of 2^{n−23}
- ...
- Integers between 2^{127} and 2^{128} round to a multiple of 2^{104}
- Integers greater than or equal to 2^{128} are rounded to "infinity".
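This integer behaviour is easy to observe by rounding integers through binary32; a brief Python sketch (`round_trip_f32` is our helper, analogous to `to_f32` above):

```python
import struct

def round_trip_f32(x: float) -> float:
    """Round x through binary32 and back to a Python float (binary64)."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

print(round_trip_f32(16777216.0))  # 16777216.0 -- 2**24 is exact
print(round_trip_f32(16777217.0))  # 16777216.0 -- rounds to a multiple of 2
print(round_trip_f32(16777218.0))  # 16777218.0 -- even integers remain exact
```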

These examples are given in bit *representation*, in hexadecimal and binary, of the floating-point value. This includes the sign, (biased) exponent, and significand.

0 00000000 00000000000000000000001_{2} = 0000 0001_{16} = 2^{−126} × 2^{−23} = 2^{−149} ≈ 1.4012984643 × 10^{−45} (smallest positive subnormal number)

0 00000000 11111111111111111111111_{2} = 007f ffff_{16} = 2^{−126} × (1 − 2^{−23}) ≈ 1.1754942107 × 10^{−38} (largest subnormal number)

0 00000001 00000000000000000000000_{2} = 0080 0000_{16} = 2^{−126} ≈ 1.1754943508 × 10^{−38} (smallest positive normal number)

0 11111110 11111111111111111111111_{2} = 7f7f ffff_{16} = 2^{127} × (2 − 2^{−23}) ≈ 3.4028234664 × 10^{38} (largest normal number)

0 01111110 11111111111111111111111_{2} = 3f7f ffff_{16} = 1 − 2^{−24} ≈ 0.999999940395355225 (largest number less than one)

0 01111111 00000000000000000000000_{2} = 3f80 0000_{16} = 1 (one)

0 01111111 00000000000000000000001_{2} = 3f80 0001_{16} = 1 + 2^{−23} ≈ 1.00000011920928955 (smallest number larger than one)

1 10000000 00000000000000000000000_{2} = c000 0000_{16} = −2

0 00000000 00000000000000000000000_{2} = 0000 0000_{16} = 0

1 00000000 00000000000000000000000_{2} = 8000 0000_{16} = −0

0 11111111 00000000000000000000000_{2} = 7f80 0000_{16} = infinity

1 11111111 00000000000000000000000_{2} = ff80 0000_{16} = −infinity

0 10000000 10010010000111111011011_{2} = 4049 0fdb_{16} ≈ 3.14159274101257324 ≈ π (pi)

0 01111101 01010101010101010101011_{2} = 3eaa aaab_{16} ≈ 0.333333343267440796 ≈ 1/3

x 11111111 10000000000000000000001_{2} = ffc0 0001_{16} = qNaN (on x86 and ARM processors)

x 11111111 00000000000000000000001_{2} = ff80 0001_{16} = sNaN (on x86 and ARM processors)

By default, 1/3 rounds up, instead of down like double precision, because of the even number of bits in the significand. The bits of 1/3 beyond the rounding point are `1010...`, which is more than 1/2 of a unit in the last place.

Encodings of qNaN and sNaN are not specified in IEEE 754 and are implemented differently on different processors. The x86 family and the ARM family processors use the most significant bit of the significand field to indicate a quiet NaN. The PA-RISC processors use that bit to indicate a signalling NaN.

The design of the floating-point format allows various optimisations, resulting from the easy generation of a base-2 logarithm approximation from an integer view of the raw bit pattern. Integer arithmetic and bit-shifting can yield an approximation to reciprocal square root (fast inverse square root), commonly required in computer graphics.
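For instance, the well-known fast inverse square root exploits exactly this trick. Below is a Python sketch of the classic bit manipulation (the magic constant 5f3759df_{16} is the widely published one; the helper name is ours):

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) from an integer view of the binary32 bits."""
    i = int.from_bytes(struct.pack(">f", x), "big")   # reinterpret float as int
    i = 0x5F3759DF - (i >> 1)                         # halve and negate the approximate log2
    y = struct.unpack(">f", i.to_bytes(4, "big"))[0]  # reinterpret int as float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson refinement step

print(fast_inverse_sqrt(4.0))  # ~0.499 (exact value is 0.5)
```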

- IEEE Standard for Floating-Point Arithmetic (IEEE 754)
- ISO/IEC 10967, language independent arithmetic
- Primitive data type
- Numerical stability


- ↑ "REAL Statement".
*scc.ustc.edu.cn*. Archived from the original on 2021-02-24. Retrieved 2013-02-28. - ↑ "CLHS: Type SHORT-FLOAT, SINGLE-FLOAT, DOUBLE-FLOAT..."
- ↑ "Primitive Data Types".
*Java Documentation*. - ↑ "6 Predefined Types and Classes".
*haskell.org*. 20 July 2010. - ↑ "Float".
*Apple Developer Documentation*. - ↑ William Kahan (1 October 1997). "Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic" (PDF). p. 4.
