Decimal64 floating-point format

In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes (64 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations.

Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. (Equivalently, ±0000000000000000×10^−398 to ±9999999999999999×10^369.) In contrast, the corresponding binary format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with fewer than 16 significant digits have multiple possible representations: 1 × 10^2 = 0.1 × 10^3 = 0.01 × 10^4, etc. Zero has 768 possible representations (1536 if both signed zeros are included).
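
To make the cohort idea concrete, the following sketch uses Python's decimal module. It implements the General Decimal Arithmetic specification rather than the fixed-width decimal64 encoding, but it models the same unnormalized behaviour: equal values can carry different coefficient/exponent pairs.

    from decimal import Decimal

    # Three members of one cohort: equal in value, distinct in representation.
    a = Decimal("1E+2")    # coefficient 1,   exponent 2
    b = Decimal("1.0E+2")  # coefficient 10,  exponent 1
    c = Decimal("100")     # coefficient 100, exponent 0

    print(a == b == c)   # True -- numerically identical
    print(a, b, c)       # 1E+2 1.0E+2 100 -- the stored exponents differ
    print(c.as_tuple())  # DecimalTuple(sign=0, digits=(1, 0, 0), exponent=0)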

Decimal64 is a relatively new decimal floating-point format, formally introduced in the 2008 revision [1] of IEEE 754 as well as in ISO/IEC/IEEE 60559:2011. [2]


Representation of decimal64 values

Sign    Combination      Significand continuation
1 bit   13 bits          50 bits
s       mmmmmmmmmmmmm    cccccccccccccccccccccccccccccccccccccccccccccccccc

IEEE 754 allows two alternative representation methods for decimal64 values: one stores the significand as a binary integer (the binary integer significand field, described below), the other stores it as densely packed decimal digits (the densely packed decimal significand field, described below). The standard does not specify how to signify which representation is used, for instance in a situation where decimal64 values are communicated between systems.

Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3 × 2^8 = 768 possible decimal exponent values. (All the possible decimal exponent values storable in a binary64 number are representable in decimal64, and the significand of a binary64 carries roughly the same number of decimal digits as that of a decimal64.)

In both cases, the most significant 4 bits of the significand (which actually only have 10 possible values) are combined with the most significant 2 bits of the exponent (3 possible values) to use 30 of the 32 possible values of a 5-bit field. The remaining combinations encode infinities and NaNs.

Combination field   Exponent     Significand MSBs   Other
00mmmmmmmmmmm       00xxxxxxxx   0ccc
01mmmmmmmmmmm       01xxxxxxxx   0ccc
10mmmmmmmmmmm       10xxxxxxxx   0ccc
1100mmmmmmmmm       00xxxxxxxx   100c
1101mmmmmmmmm       01xxxxxxxx   100c
1110mmmmmmmmm       10xxxxxxxx   100c
11110mmmmmmmm       —            —                  ±Infinity
11111mmmmmmmm       —            —                  NaN. Sign bit ignored. The sixth bit of the combination field determines whether the NaN is signaling.

In the cases of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
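
To sketch why a single fill byte works, the hypothetical helper below (assuming the big-endian bit layout of the tables above) classifies a 64-bit word by its combination field: filling every byte with 0x78 yields +Infinity, and 0x7C yields a quiet NaN.

    import struct

    def classify_decimal64(word: int) -> str:
        """Classify a decimal64 bit pattern by its combination field."""
        sign = word >> 63
        comb5 = (word >> 58) & 0b11111  # the 5 bits after the sign bit
        if comb5 == 0b11110:
            return "-Infinity" if sign else "+Infinity"
        if comb5 == 0b11111:
            # the sixth combination-field bit selects signaling vs. quiet
            return "sNaN" if (word >> 57) & 1 else "qNaN"
        return "finite"

    (word,) = struct.unpack(">Q", bytes([0x78]) * 8)  # memset-style fill
    print(classify_decimal64(word))                   # +Infinity
    print(classify_decimal64(0x7C7C7C7C7C7C7C7C))     # qNaN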

Binary integer significand field

This format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂.

The encoding, completely stored on 64 bits, can represent binary significands up to 10 × 2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16 − 1 are illegal (and the standard requires implementations to treat them as 0, if encountered on input).

As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7 (0000₂ to 0111₂), or higher (1000₂ or 1001₂).

If the 2 bits after the sign bit are "00", "01", or "10", then the exponent field consists of the 10 bits following the sign bit, and the significand is the remaining 53 bits, with an implicit leading 0 bit:

s 00eeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 01eeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 10eeeeeeee   (0)ttt tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

This includes subnormal numbers where the leading significand digit is 0.

If the 2 bits after the sign bit are "11", then the 10-bit exponent field is shifted 2 bits to the right (it follows both the sign bit and the "11" bits), and the represented significand is in the remaining 51 bits. In this case there is an implicit (that is, not stored) leading 3-bit sequence "100" forming the most significant bits of the true significand (in the remaining lower bits ttt...ttt of the significand, not all possible values are used).

s 1100eeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1101eeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt
s 1110eeeeeeee (100)t tttttttttt tttttttttt tttttttttt tttttttttt tttttttttt

The 2-bit sequence "11" after the sign bit indicates that there is an implicit 3-bit prefix "100" to the significand. Compare this with the implicit 1-bit prefix "1" in the significand of normal values in the binary formats. The 2-bit sequences "00", "01", or "10" after the sign bit are part of the exponent field.

The leading bits of the significand field do not encode the most significant decimal digit; they are simply part of a larger pure-binary number. For example, a significand of 8000000000000000 is encoded as binary 011100011010111111010100100110001101000000000000000000₂, with the leading 4 bits encoding 7; the first significand that requires a 54th bit is 2^53 = 9007199254740992. The highest valid significand is 9999999999999999, whose binary encoding is (100)011100001101111001001101111110000001111111111111111₂ (with the 3 most significant bits (100) not stored but implicit, as shown above; the next bit is always zero in valid encodings).
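
These claims are easy to verify numerically; a short Python check (plain arithmetic, nothing format-specific assumed):

    # 8000000000000000, padded to the 54-bit significand field, starts 0111:
    print(f"{8_000_000_000_000_000:054b}"[:4])             # 0111
    # 2^53 is the first significand needing a 54th bit:
    print((2**53 - 1).bit_length(), (2**53).bit_length())  # 53 54
    # the largest valid significand also fits in 54 bits:
    print((10**16 - 1).bit_length())                       # 54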

In the above cases, the value represented is

(−1)^sign × 10^(exponent−398) × significand

If the four bits after the sign bit are "1111" then the value is an infinity or a NaN, as described above:

s 11110 xx...x    ±infinity
s 11111 0x...x    a quiet NaN
s 11111 1x...x    a signalling NaN
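
Putting the pieces together, here is a minimal decoding sketch for the binary integer significand form. Field positions follow the layouts above; this is an illustration, not a production implementation.

    def decode_decimal64_bid(word: int):
        """Return (sign, decimal exponent, significand) for a BID decimal64."""
        sign = word >> 63
        if (word >> 59) & 0b1111 == 0b1111:
            raise ValueError("infinity or NaN")
        if (word >> 61) & 0b11 != 0b11:
            # s 00/01/10: 10-bit exponent, 53 stored significand bits
            exponent = (word >> 53) & 0x3FF
            significand = word & ((1 << 53) - 1)
        else:
            # s 11: exponent shifted right 2 bits, 51 stored bits,
            # implicit leading significand bits "100"
            exponent = (word >> 51) & 0x3FF
            significand = (0b100 << 51) | (word & ((1 << 51) - 1))
        if significand > 10**16 - 1:
            significand = 0  # non-canonical significands are treated as zero
        return sign, exponent - 398, significand

    # Biased exponent 398 and significand 1 encode the value 1:
    print(decode_decimal64_bid((398 << 53) | 1))  # (0, 0, 1)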

Densely packed decimal significand field

In this version, the significand is stored as a series of decimal digits. The leading digit is between 0 and 9 (3 or 4 binary bits), and the rest of the significand uses the densely packed decimal (DPD) encoding.

The leading 2 bits of the exponent and the leading digit (3 or 4 bits) of the significand are combined into the five bits that follow the sign bit.

The eight bits after that are the exponent continuation field, providing the less-significant bits of the exponent.

The last 50 bits are the significand continuation field, consisting of five 10-bit declets. [3] Each declet encodes three decimal digits [3] using the DPD encoding.

If the first two bits after the sign bit are "00", "01", or "10", then those are the leading bits of the exponent, and the three bits "TTT" after that are interpreted as the leading decimal digit (0 to 7):

s 00 TTT (00)eeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 01 TTT (01)eeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 10 TTT (10)eeeeeeee (0TTT)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

If the first two bits after the sign bit are "11", then the next two bits are the leading bits of the exponent, and the next bit "T" is prefixed with the implicit bits "100" to form the leading decimal digit (8 or 9):

s 1100 T (00)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1101 T (01)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]
s 1110 T (10)eeeeeeee (100T)[tttttttttt][tttttttttt][tttttttttt][tttttttttt][tttttttttt]

The remaining two combinations (11 110 and 11 111) of the 5-bit field after the sign bit are used to represent ±infinity and NaNs, respectively.

The DPD/3BCD transcoding for the declets is given by the following table. b9...b0 are the bits of the DPD, and d2...d0 are the three BCD digits.

Densely packed decimal encoding rules [4]

DPD encoded value   Decimal digits    Values encoded     Description                          Code space (1024 states)     Occurrences (1000 states)
b9b8b7b6b5b4b3b2b1b0   d2 d1 d0
abcdef0ghi          0abc 0def 0ghi    (0–7)(0–7)(0–7)    3 small digits                       50.0% (512 states)           51.2% (512 states)
abcdef100i          0abc 0def 100i    (0–7)(0–7)(8–9)    2 small digits, 1 large digit        37.5% (384 states)           38.4% (384 states)
abcghf101i          0abc 100f 0ghi    (0–7)(8–9)(0–7)    2 small digits, 1 large digit
ghcdef110i          100c 0def 0ghi    (8–9)(0–7)(0–7)    2 small digits, 1 large digit
ghc00f111i          100c 100f 0ghi    (8–9)(8–9)(0–7)    1 small digit, 2 large digits        9.375% (96 states)           9.6% (96 states)
dec01f111i          100c 0def 100i    (8–9)(0–7)(8–9)    1 small digit, 2 large digits
abc10f111i          0abc 100f 100i    (0–7)(8–9)(8–9)    1 small digit, 2 large digits
xxc11f111i          100c 100f 100i    (8–9)(8–9)(8–9)    3 large digits (b9, b8: don't care)  3.125% (32 states, 8 used)   0.8% (8 states)

The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results. (The 8 × 3 = 24 non-standard encodings fill in the gap between 10^3 = 1000 and 2^10 = 1024.)
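
The table translates directly into code. The following declet decoder is a sketch implementing the rows above (bit names as in the table; the function name is ours):

    def dpd_to_digits(declet: int):
        """Decode one 10-bit DPD declet into decimal digits (d2, d1, d0)."""
        b9, b8, b7, b6, b5, b4, b3, b2, b1, b0 = (
            (declet >> (9 - i)) & 1 for i in range(10))
        if b3 == 0:                        # abcdef0ghi: 3 small digits
            return (b9*4 + b8*2 + b7, b6*4 + b5*2 + b4, b2*4 + b1*2 + b0)
        if (b2, b1) == (0, 0):             # abcdef100i
            return (b9*4 + b8*2 + b7, b6*4 + b5*2 + b4, 8 + b0)
        if (b2, b1) == (0, 1):             # abcghf101i
            return (b9*4 + b8*2 + b7, 8 + b4, b6*4 + b5*2 + b0)
        if (b2, b1) == (1, 0):             # ghcdef110i
            return (8 + b7, b6*4 + b5*2 + b4, b9*4 + b8*2 + b0)
        if (b6, b5) == (0, 0):             # ghc00f111i
            return (8 + b7, 8 + b4, b9*4 + b8*2 + b0)
        if (b6, b5) == (0, 1):             # dec01f111i
            return (8 + b7, b9*4 + b8*2 + b4, 8 + b0)
        if (b6, b5) == (1, 0):             # abc10f111i
            return (b9*4 + b8*2 + b7, 8 + b4, 8 + b0)
        return (8 + b7, 8 + b4, 8 + b0)    # xxc11f111i: b9, b8 ignored

    print(dpd_to_digits(0b0010100011))  # (1, 2, 3)
    print(dpd_to_digits(0b0011111111))  # (9, 9, 9) -- canonical all-large declet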

In the above cases, with the true significand taken as the sequence of decimal digits decoded, the value represented is

(−1)^sign × 10^(exponent−398) × significand
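
As a final sketch, the five declets of the continuation field combine with the leading digit into the 16-digit significand (reusing dpd_to_digits from above; the helper name is ours):

    def dpd_significand(leading_digit: int, continuation: int) -> int:
        """Assemble the 16-digit significand from the 50-bit continuation."""
        digits = [leading_digit]
        for shift in (40, 30, 20, 10, 0):  # five declets, most significant first
            digits.extend(dpd_to_digits((continuation >> shift) & 0x3FF))
        return int("".join(map(str, digits)))

    print(dpd_significand(9, 0))  # 9000000000000000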

References

  1. IEEE Computer Society (2008-08-29). IEEE Standard for Floating-Point Arithmetic. IEEE. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5753-5. IEEE Std 754-2008. Retrieved 2016-02-08.
  2. "ISO/IEC/IEEE 60559:2011". 2011. Retrieved 2016-02-08.
  3. Muller, Jean-Michel; Brisebarre, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Stehlé, Damien; Torres, Serge (2010). Handbook of Floating-Point Arithmetic (1 ed.). Birkhäuser. doi:10.1007/978-0-8176-4705-6. ISBN 978-0-8176-4704-9. LCCN 2009939668.
  4. Cowlishaw, Michael Frederic (2007-02-13) [2000-10-03]. "A Summary of Densely Packed Decimal encoding". IBM. Archived from the original on 2015-09-24. Retrieved 2016-02-07.