IEEE 754-2008 revision

IEEE 754-2008 (previously known as IEEE 754r) is a revision of the IEEE 754 standard for floating-point arithmetic. It was published in August 2008 and is a significant revision to, and replaces, the IEEE 754-1985 standard. The 2008 revision extended the previous standard where necessary, added decimal arithmetic and formats, tightened up certain areas of the original standard that had been left undefined, and merged in IEEE 854 (the radix-independent floating-point standard). In a few cases, where stricter definitions of binary floating-point arithmetic might be performance-incompatible with existing implementations, they were made optional. In 2019, it was updated by the minor revision IEEE 754-2019. [1]

Revision process

The standard had been under revision since 2000, with a target completion date of December 2006. The revision of an IEEE standard broadly follows three phases:

  1. Working group: a committee creates a draft standard
  2. Ballot: interested parties subscribe to the balloting group and vote on the draft (75% of the group must participate, and 75% must approve for the draft to go forward); comments from the votes are resolved by a Ballot Resolution Committee (BRC), and changes made have to be recirculated with a new ballot if they are substantive
  3. When all comments are resolved and there are no further changes, the draft is submitted to the IEEE for review, approval, and publication (this can also result in changes and ballots, although this is rare).

On 11 June 2008, it was approved unanimously by the IEEE Standards Review Committee (RevCom), and it was formally approved by the IEEE-SA Standards Board on 12 June 2008. It was published on 29 August 2008.

754r Working Group phase

Participation in drafting the standard was open to people with a solid knowledge of floating-point arithmetic. More than 90 people attended at least one of the monthly meetings, which were held in Silicon Valley, and many more participated through the mailing list.

Progress at times was slow, leading the chairman to declare at the 15 September 2005 meeting [2] that "no progress is being made, I am suspending these meetings until further notice on those grounds". In December 2005, the committee reorganized under new rules with a target completion date of December 2006.

New policies and procedures were adopted in February 2006. In September 2006, a working draft was approved to be sent to the parent sponsoring committee (the IEEE Microprocessor Standards Committee, or MSC) for editing and to be sent to sponsor ballot.

754r Ballot phase

The last version of the draft submitted to the MSC, version 1.2.5, was dated 4 October 2006. [3] The MSC accepted the draft on 9 October 2006. The draft was changed significantly in detail during the balloting process.

The first sponsor ballot took place from 29 November 2006 through 28 December 2006. Of the 84 members of the voting body, 85.7% responded, and 78.6% voted approval. There were negative votes (and over 400 comments), so there was a recirculation ballot in March 2007; this received 84% approval. There were sufficient comments (over 130) from that ballot that a third draft was prepared for a second, 15-day recirculation ballot, which started in mid-April 2007. For a technical reason, the ballot process was restarted with the 4th ballot in October 2007; there were also substantial changes in the draft resulting from 650 voters' comments and from requests from the sponsor (the IEEE MSC). This ballot narrowly failed to reach the required 75% approval. The 5th ballot had a 98.0% response rate with 91.0% approval, with comments leading to relatively small changes. The 6th, 7th, and 8th ballots sustained approval ratings of over 90%, with progressively fewer comments on each draft; the 8th (which had no in-scope comments: 9 were repeats of previous comments and one referred to material not in the draft) was submitted to the IEEE Standards Review Committee ('RevCom') for approval as an IEEE standard.

754r Review and Approval phase

The IEEE Standards Review Committee (RevCom) considered and unanimously approved the IEEE 754r draft at its June 2008 meeting, and it was approved by the IEEE-SA Standards Board on 12 June 2008. Final editing was completed and the document was forwarded to the IEEE Standards Publications Department for publication.

IEEE Std 754-2008 publication

The new IEEE 754 (formally IEEE Std 754-2008, the IEEE Standard for Floating-Point Arithmetic) was published by the IEEE Computer Society on 29 August 2008, and is available from the IEEE Xplore website. [4]

This standard replaces IEEE 754-1985. IEEE 854, the radix-independent floating-point standard, was withdrawn in December 2008.

Summary of the revisions

The most obvious enhancements to the standard are the addition of a 16-bit and a 128-bit binary type and three decimal types, some new operations, and many recommended functions. However, there have been significant clarifications in terminology throughout. This summary highlights the main differences in each major clause of the standard.

Clause 1: Overview

The scope (determined by the sponsor of the standard) has been widened to include decimal formats and arithmetic, and to add extendable formats.

Clause 2: Definitions

Many of the definitions have been rewritten for clarification and consistency. A few terms have been renamed for clarity (for example, denormalized has been renamed to subnormal).

Clause 3: Formats

The description of formats has been made more regular, with a distinction between arithmetic formats (in which arithmetic may be carried out) and interchange formats (which have a standard encoding). Conformance to the standard is now defined in these terms.

The specification levels of a floating-point format have been enumerated, to clarify the distinction between:

  1. the theoretical real numbers (an extended number line)
  2. the entities which can be represented in the format (a finite set of numbers, together with −0, infinities, and NaN)
  3. the particular representations of the entities: sign-exponent-significand, etc.
  4. the bit-pattern (encoding) used.

The sets of representable entities are then explained in detail, showing that they can be treated with the significand being considered either as a fraction or an integer. The particular sets known as basic formats are defined, and the encodings used for interchange of binary and decimal formats are explained.
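
To make the distinction between a representation (level 3) and its encoding (level 4) concrete, here is a minimal C sketch, not taken from the standard itself: it reinterprets the bit pattern of a binary32 value and extracts the sign, biased exponent, and trailing significand fields. The function name decode_binary32 is purely illustrative.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Decode a binary32 encoding (level 4, the bit pattern) into the
       sign / biased-exponent / trailing-significand fields of its
       representation (level 3).  Illustrative only. */
    static void decode_binary32(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);             /* reinterpret the encoding */
        unsigned sign       = (unsigned)(bits >> 31);          /* 1 bit   */
        unsigned biased_exp = (unsigned)((bits >> 23) & 0xFF); /* 8 bits  */
        unsigned trailing   = (unsigned)(bits & 0x7FFFFF);     /* 23 bits, leading bit implicit */
        printf("%g -> sign=%u exponent=%u significand=0x%06X\n",
               f, sign, biased_exp, trailing);
    }

    int main(void) {
        decode_binary32(1.0f);      /* 0 / 127 / 0               */
        decode_binary32(-0.0f);     /* 1 /   0 / 0  (signed zero) */
        decode_binary32(INFINITY);  /* 0 / 255 / 0  (infinity)    */
        return 0;
    }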

The binary interchange formats have the "half precision" (16-bit storage format) and "quad precision" (128-bit format) added, together with generalized formulae for some wider formats; the basic formats have 32-bit, 64-bit, and 128-bit encodings.

Three new decimal formats are described, matching the lengths of the 32–128-bit binary formats. These give decimal interchange formats with 7, 16, and 34-digit significands, which may be normalized or unnormalized. For maximum range and precision, the formats merge part of the exponent and significand into a combination field, and compress the remainder of the significand using either a densely packed decimal (DPD) encoding, a compressed form of BCD, or a conventional binary integer encoding. The basic formats are the two larger sizes, which have 64-bit and 128-bit encodings. Generalized formulae for some other interchange formats are also specified.
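
The generalized parameters can be evaluated directly. The C sketch below assumes the commonly quoted Clause 3 formulae: for binary widths k >= 128 (k a multiple of 32), precision p = k − round(4 × log2(k)) + 13 bits and emax = 2^(k−p−1) − 1; for decimal widths k >= 32 (k a multiple of 32), precision 9k/32 − 2 digits and emax = 3 × 2^(k/16+3). Consult the standard for the normative statement; this is only a convenience calculator.

    #include <math.h>
    #include <stdio.h>

    /* Parameters of binary{k} interchange formats, k >= 128, k % 32 == 0
       (binary16/32/64 are tabulated individually in the standard). */
    static void binary_format(int k) {
        int p = k - (int)round(4.0 * log2((double)k)) + 13;  /* precision in bits */
        double emax = ldexp(1.0, k - p - 1) - 1.0;           /* 2^(k-p-1) - 1     */
        printf("binary%-4d  p = %3d bits,    emax = %.0f\n", k, p, emax);
    }

    /* Parameters of decimal{k} interchange formats, k >= 32, k % 32 == 0. */
    static void decimal_format(int k) {
        int digits = 9 * k / 32 - 2;                 /* precision in decimal digits */
        double emax = 3.0 * ldexp(1.0, k / 16 + 3);  /* 3 * 2^(k/16 + 3)            */
        printf("decimal%-3d  p = %3d digits,  emax = %.0f\n", k, digits, emax);
    }

    int main(void) {
        binary_format(128);   /* p = 113, emax = 16383 */
        binary_format(256);   /* a wider format from the generalized formula */
        decimal_format(32);   /* p = 7,  emax = 96   */
        decimal_format(64);   /* p = 16, emax = 384  */
        decimal_format(128);  /* p = 34, emax = 6144 */
        return 0;
    }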

Extended and extendable formats allow for arithmetic at other precisions and ranges.

Clause 4: Attributes and rounding

This clause has been changed to encourage the use of static attributes for controlling floating-point operations, and (in addition to required rounding attributes) allow for alternate exception handling, widening of intermediate results, value-changing optimizations, and reproducibility.

The round-to-nearest, ties away from zero rounding attribute has been added (required for decimal operations only).
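
As a rough illustration rather than a binding defined by the standard, the dynamic rounding modes of C99's <fenv.h> correspond to four of the five 754-2008 rounding-direction attributes; roundTiesToAway has no standard C99 macro. The volatile qualifiers below only keep the compiler from folding the division at compile time.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON   /* we change the floating-point environment */

    int main(void) {
        volatile double x = 1.0, y = 3.0;

        fesetround(FE_TONEAREST);   /* roundTiesToEven (the default) */
        printf("to nearest : %.20f\n", x / y);

        fesetround(FE_UPWARD);      /* roundTowardPositive */
        printf("upward     : %.20f\n", x / y);

        fesetround(FE_DOWNWARD);    /* roundTowardNegative */
        printf("downward   : %.20f\n", x / y);

        fesetround(FE_TOWARDZERO);  /* roundTowardZero */
        printf("toward zero: %.20f\n", x / y);

        fesetround(FE_TONEAREST);   /* restore the default */
        return 0;
    }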

Clause 5: Operations

This section has numerous clarifications (notably in the area of comparisons), and several previously recommended operations (such as copy, negate, abs, and class) are now required.

New operations include fused multiply–add (FMA), explicit conversions, classification predicates (isNaN(x), etc.), various min and max functions, a total ordering predicate, and two decimal-specific operations (sameQuantum and quantize).
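
Some of these operations have direct counterparts in C99's <math.h>. The sketch below is illustrative only: it uses fma to recover the rounding error of a product (something a separate multiply and add cannot do, because they round twice) and the isnan and fpclassify classification predicates. The min/max operations are discussed in the next subsection.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Fused multiply-add: a*b + c with a single rounding.
           Here it recovers the rounding error of a product. */
        double a = 1.0 + 0x1p-27;                  /* 1 + 2^-27 */
        double prod = a * a;                       /* product rounded to binary64 */
        printf("error via fma    : %g\n", fma(a, a, -prod));  /* 2^-54 */
        printf("error without fma: %g\n", a * a - prod);      /* 0: the error is lost */

        /* Classification predicates. */
        printf("isnan(sqrt(-1))  = %d\n", isnan(sqrt(-1.0)));
        printf("1e-310 subnormal = %d\n", fpclassify(1e-310) == FP_SUBNORMAL);
        return 0;
    }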

Min and max

The min and max operations are defined but leave some leeway for the case where the inputs are equal in value but differ in representation. In particular:

  • min(+0,−0) or min(−0,+0) must produce something with a value of zero but may always return the first argument.

In order to support operations such as windowing in which a NaN input should be quietly replaced with one of the end points, min and max are defined to select a number, x, in preference to a quiet NaN:

  • min(x,qNaN) = min(qNaN,x) = x
  • max(x,qNaN) = max(qNaN,x) = x

These functions are called minNum and maxNum to indicate their preference for a number over a quiet NaN. However, in the presence of a signaling NaN input, a quiet NaN is returned as with the usual operations. After the publication of the standard, it was noticed that these rules make these operations non-associative; for this reason, they have been replaced by new operations in IEEE 754-2019.
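
A minimal sketch of the quiet-NaN rule above is given below, with an illustrative helper named minNum_sketch (not an API defined by the standard); signaling NaNs are ignored because portable C cannot distinguish them from quiet NaNs. C99's fmin and fmax already show the same preference for a number over a quiet NaN.

    #include <math.h>

    /* Sketch of the quiet-NaN rule for minNum: if exactly one operand is a
       NaN, return the other (numeric) operand.  Signaling NaNs are not
       handled; standard C has no portable way to tell sNaN from qNaN. */
    static double minNum_sketch(double x, double y) {
        if (isnan(x)) return isnan(y) ? x : y;   /* both NaN: return a quiet NaN */
        if (isnan(y)) return x;
        /* For min(+0,-0) the standard allows either zero to be returned;
           this comparison happens to return the second argument. */
        return (x < y) ? x : y;
    }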

Decimal arithmetic

Decimal arithmetic, compatible with that used in Java, C#, PL/I, COBOL, Python, REXX, etc., is also defined in this section. In general, decimal arithmetic follows the same rules as binary arithmetic (results are correctly rounded, and so on), with additional rules that define the exponent of a result (more than one is possible in many cases).
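
Because equal decimal values may carry different exponents (1.23 versus 1.230, for example), each operation has a preferred exponent for its result; for an exact sum this is commonly stated as the smaller of the operands' exponents. The toy sketch below uses a plain coefficient/exponent pair (the Dec struct is purely illustrative, not decimal64) to show that rule.

    #include <stdint.h>
    #include <stdio.h>

    /* A toy unnormalized decimal value: coefficient * 10^exponent.
       This is NOT decimal64; it only illustrates the exponent rule. */
    typedef struct { int64_t coeff; int exp; } Dec;

    /* Exact addition keeping the preferred exponent:
       for an exact sum the result exponent is min(Q(x), Q(y)). */
    static Dec dec_add_exact(Dec x, Dec y) {
        int e = x.exp < y.exp ? x.exp : y.exp;
        int64_t cx = x.coeff, cy = y.coeff;
        for (int i = e; i < x.exp; i++) cx *= 10;   /* rescale to the common exponent */
        for (int i = e; i < y.exp; i++) cy *= 10;
        return (Dec){ cx + cy, e };
    }

    int main(void) {
        Dec a = { 123, -2 };   /* 1.23  */
        Dec b = { 1000, -3 };  /* 1.000 */
        Dec s = dec_add_exact(a, b);
        printf("%lld * 10^%d\n", (long long)s.coeff, s.exp);  /* 2230 * 10^-3 = 2.230 */
        return 0;
    }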

Correctly rounded base conversion

Unlike IEEE 854, 754-2008 requires correctly rounded base conversion between decimal and binary floating point, within a range which depends on the format.
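
One practical consequence, shown in the rough sketch below, is that printing a binary64 value with 17 significant decimal digits and converting the string back recovers the original value exactly, provided the C library's conversions are correctly rounded.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* With correctly rounded conversions, 17 significant decimal digits
           are enough to round-trip any binary64 value exactly. */
        double x = 0.1;                        /* not exactly representable in binary */
        char buf[64];
        snprintf(buf, sizeof buf, "%.17g", x);
        double y = strtod(buf, NULL);
        printf("%s round-trips: %s\n", buf,
               memcmp(&x, &y, sizeof x) == 0 ? "yes" : "no");
        return 0;
    }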

Clause 6: Infinity, NaNs, and sign bit

This clause has been revised and clarified, but with no major additions. In particular, it makes formal recommendations for the encoding of the signaling/quiet NaN state.

Clause 7: Default exception handling

This clause has been revised and considerably clarified, but with no major additions.
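
As an informal illustration (assuming a C99 implementation with <fenv.h>), default exception handling raises a status flag and delivers a default result rather than interrupting the computation: dividing a finite number by zero delivers an infinity and raises the divide-by-zero flag, while an invalid operation delivers a quiet NaN and raises the invalid flag.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void) {
        volatile double zero = 0.0;

        feclearexcept(FE_ALL_EXCEPT);
        double r = 1.0 / zero;                    /* default result: +infinity */
        printf("result = %g, divide-by-zero flag = %d\n",
               r, fetestexcept(FE_DIVBYZERO) != 0);

        feclearexcept(FE_ALL_EXCEPT);
        r = zero / zero;                          /* default result: quiet NaN */
        printf("result = %g, invalid flag = %d\n",
               r, fetestexcept(FE_INVALID) != 0);
        return 0;
    }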

Clause 8: Alternate exception handling

This clause has been extended from the previous Clause 8 ('Traps') to allow optional exception handling in various forms, including traps and other models such as try/catch. Traps and other exception mechanisms remain optional, as they were in IEEE 754-1985.

Clause 9: Recommended operations

This clause is new; it recommends fifty operations, including log, power, and trigonometric functions, that language standards should define. These are all optional (none are required in order to conform to the standard). The operations include some on dynamic modes for attributes, and also a set of reduction operations (sum, scaled product, etc.).

Clause 10: Expression evaluation

This clause is new; it recommends how language standards should specify the semantics of sequences of operations, and points out the subtleties of literal meanings and optimizations that change the value of a result.

Clause 11: Reproducibility

This clause is new; it recommends that language standards should provide a means to write reproducible programs (i.e., programs that will produce the same result in all implementations of a language), and describes what needs to be done to achieve reproducible results.

Annex A: Bibliography

This annex is new; it lists some useful references.

Annex B: Program debugging support

This annex is new; it provides guidance to debugger developers for features that are desired for supporting the debugging of floating-point code.

Index of operations

This is a new index, which lists all the operations described in the standard (required or optional).

Discussed but not included

Due to changes in CPU design and development, the 2008 IEEE floating-point standard could come to be viewed as historical or outdated, much like the 1985 standard it replaced. There were many outside discussions and items not covered in the standardization process; the items below are the ones that became public knowledge:

In 754-1985 the definition of underflow was that the result is tiny and encounters a loss of accuracy. Two definitions were allowed for determining the 'tiny' condition: before or after rounding the infinitely precise result to working precision, with unbounded exponent. Two definitions of loss of accuracy were permitted: an inexact result, or a loss due only to denormalization. No known hardware system implemented the latter, and it has been removed from the revised standard as an option. Annex U of 754r recommended that only tininess after rounding, with an inexact result as the loss-of-accuracy criterion, should cause the underflow signal.


References

  1. "ANSI/IEEE Std 754-2019". 754r.ucbtest.org. Retrieved 2019-08-06.
  2. "15 September 2005 meeting".[ dead link ]
  3. DRAFT Standard for Floating-Point Arithmetic P754, version 1.2.5. Revising ANSI/IEEE Std 754-1985 (Report). 2006-10-04.
  4. 754-2008 - IEEE Standard for Floating-Point Arithmetic. IEEE. 2008-08-29. doi:10.1109/IEEESTD.2008.4610935. ISBN 978-0-7381-5752-8. (NB. Superseded by IEEE Std 754-2019, a revision of IEEE 754-2008.)