Offset binary,[1] also referred to as excess-K,[1] excess-N, excess-e,[2][3] excess code or biased representation, is a method for signed number representation where a signed number n is represented by the bit pattern corresponding to the unsigned number n + K, K being the biasing value or offset. There is no standard for offset binary, but most often the offset for an n-bit word is K = 2^(n−1) (for example, the offset for a four-bit binary number would be 2^3 = 8). This has the consequence that the minimal negative value is represented by all zeros, the "zero" value is represented by a 1 in the most significant bit and zeros in all other bits, and the maximal positive value is represented by all ones (conveniently, this is the same as using two's complement but with the most significant bit inverted). It also has the consequence that a logical (unsigned) comparison operation gives the same result as a true numerical comparison, whereas in two's complement notation a logical comparison agrees with the numerical comparison if and only if the numbers being compared have the same sign; otherwise the sense of the comparison is inverted, with all negative values compared as larger than all positive values.
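As a minimal sketch (the 4-bit width and K = 8 follow the common choice above; the function names are illustrative), the following C fragment encodes and decodes offset-binary values and checks that comparing the stored unsigned patterns orders the signed values correctly:

```c
#include <assert.h>
#include <stdint.h>

#define K 8  /* offset for a 4-bit word: 2^(4-1) */

/* Encode a signed value in -8..7 as a 4-bit offset-binary pattern. */
static uint8_t to_offset(int n)   { return (uint8_t)(n + K); }

/* Decode a 4-bit offset-binary pattern back to its signed value. */
static int from_offset(uint8_t b) { return (int)b - K; }

int main(void) {
    assert(to_offset(-8) == 0x0);  /* minimal value: all zeros           */
    assert(to_offset(0)  == 0x8);  /* zero: MSB set, all other bits zero */
    assert(to_offset(7)  == 0xF);  /* maximal value: all ones            */
    /* Unsigned comparison of the patterns matches signed order: */
    assert((to_offset(-3) < to_offset(2)) == (-3 < 2));
    assert(from_offset(to_offset(-5)) == -5);
    return 0;
}
```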
The 5-bit Baudot code used in early synchronous multiplexing telegraphs can be seen as an offset-1 (excess-1) reflected binary (Gray) code.
One historically prominent example of offset-64 (excess-64) notation was the floating-point (exponential) notation in the IBM System/360 and System/370 generations of computers. The "characteristic" (exponent) took the form of a seven-bit excess-64 number (the high-order bit of the same byte contained the sign of the significand).[4]
The 8-bit exponent in Microsoft Binary Format, a floating point format used in various programming languages (in particular BASIC) in the 1970s and 1980s, was encoded using an offset-129 notation (excess-129).
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) uses offset notation for the exponent part in each of its various formats of precision. Unusually, however, instead of using "excess 2^(n−1)" it uses "excess 2^(n−1) − 1" (i.e. excess-15, excess-127, excess-1023, excess-16383), which means that inverting the leading (high-order) bit of the exponent will not convert the exponent to correct two's complement notation.
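For instance, in the 32-bit single-precision (binary32) format the exponent field is 8 bits wide and biased by 127. A sketch in C of extracting and unbiasing it (the variable names are illustrative):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 6.0f;                  /* 6.0 = 1.5 x 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's bit pattern */

    uint32_t stored = (bits >> 23) & 0xFFu;  /* biased (excess-127) exponent */
    int exponent = (int)stored - 127;        /* remove the bias */

    printf("stored = %u, actual exponent = %d\n", stored, exponent);
    /* prints: stored = 129, actual exponent = 2 */
    return 0;
}
```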
Offset binary is often used in digital signal processing (DSP). Most analog-to-digital (A/D) and digital-to-analog (D/A) chips are unipolar, which means that they cannot handle bipolar signals (signals with both positive and negative values). A simple solution to this is to bias the analog signals with a DC offset equal to half of the A/D or D/A converter's range. The resulting digital data then ends up in offset binary format.[5]
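As an illustrative sketch (the 12-bit resolution and ±VREF input range are assumptions, not properties of any particular converter), biasing a bipolar voltage into a unipolar converter's range produces offset-binary codes with K = 2048:

```c
#include <stdint.h>

#define VREF   2.5    /* assumed full-scale input: -VREF..+VREF volts */
#define LEVELS 4096.0 /* assumed 12-bit converter: 2^12 codes         */

/* Map a bipolar voltage to an offset-binary ADC code. */
static uint16_t adc_code(double volts) {
    double biased = volts + VREF;  /* DC offset of half the range */
    double code = biased * (LEVELS / (2.0 * VREF));
    if (code < 0.0) code = 0.0;                   /* clamp below range */
    if (code > LEVELS - 1.0) code = LEVELS - 1.0; /* clamp above range */
    return (uint16_t)code;
}
```

Here adc_code(0.0) returns 2048, the pattern with only the most significant bit set, matching the offset K = 2^11 for a 12-bit word.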
Most standard computer CPU chips cannot handle the offset binary format directly[citation needed]. CPU chips typically handle only signed and unsigned integers and floating-point formats. Offset binary values can be handled in several ways by these CPU chips. The data may simply be treated as unsigned integers, requiring the programmer to deal with the zero offset in software. The data may also be converted to signed integer format (which the CPU can handle natively) by subtracting the zero offset. Since the most common offset for an n-bit word is 2^(n−1), which implies that the first bit is inverted relative to two's complement, no separate subtraction step is needed: one can simply invert the first bit. This is sometimes a useful simplification in hardware, and can be convenient in software as well.
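A sketch in C of both software approaches, assuming an n-bit word with the common offset K = 2^(n−1) (the function names are illustrative); subtracting the offset and inverting the top bit give the same signed result:

```c
#include <assert.h>
#include <stdint.h>

/* Decode an n-bit offset-binary value by subtracting K = 2^(n-1). */
static int decode_subtract(uint32_t x, int n) {
    return (int)x - (1 << (n - 1));
}

/* Decode by inverting the most significant bit, then sign-extending the
   resulting two's-complement value (relies on the usual arithmetic
   right shift of signed integers). */
static int decode_flip_msb(uint32_t x, int n) {
    uint32_t flipped = x ^ (1u << (n - 1));
    return (int)(flipped << (32 - n)) >> (32 - n);
}

int main(void) {
    for (uint32_t x = 0; x < 16; x++)  /* every 4-bit pattern */
        assert(decode_subtract(x, 4) == decode_flip_msb(x, 4));
    return 0;
}
```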
Table of offset binary for four bits, with two's complement for comparison:[6]

| Decimal | Offset binary, K = 8 | Two's complement |
|---|---|---|
| 7 | 1111 | 0111 |
| 6 | 1110 | 0110 |
| 5 | 1101 | 0101 |
| 4 | 1100 | 0100 |
| 3 | 1011 | 0011 |
| 2 | 1010 | 0010 |
| 1 | 1001 | 0001 |
| 0 | 1000 | 0000 |
| −1 | 0111 | 1111 |
| −2 | 0110 | 1110 |
| −3 | 0101 | 1101 |
| −4 | 0100 | 1100 |
| −5 | 0011 | 1011 |
| −6 | 0010 | 1010 |
| −7 | 0001 | 1001 |
| −8 | 0000 | 1000 |
Offset binary may be converted into two's complement by inverting the most significant bit. For example, with 8-bit values, the offset binary value may be XORed with 0x80 in order to convert to two's complement. In specialised hardware it may be simpler to accept the bit as it stands, but to apply its value in inverted significance.
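For the 8-bit case just described, the conversion is a single XOR followed by reinterpretation as a signed byte (a minimal sketch; the function name is illustrative):

```c
#include <stdint.h>

/* Convert an 8-bit offset-binary value (K = 128) to two's complement. */
static int8_t offset_to_twos(uint8_t x) {
    return (int8_t)(x ^ 0x80);  /* invert the most significant bit */
}
```

For example, offset_to_twos(0x80) yields 0 and offset_to_twos(0x00) yields −128.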
| Code | Type | Offset, k | Width, n | Factor, q | Weights | Distance | Checking | Complement | Groups of 5 | Simple addition |
|---|---|---|---|---|---|---|---|---|---|---|
| 8421 code | n[8] | 0 | 4 | 1 | 8 4 2 1 | 1–4 | No | No | No | No |
| Nuding code[8][9] | 3n + 2[8] | 2 | 5 | 3 | N/A | 2–5 | Yes | 9 | Yes | Yes |
| Stibitz code[10] | n + 3[8] | 3 | 4 | 1 | 8 4 −2 −1 | 1–4 | No | 9 | Yes | Yes |
| Diamond code[8][11] | 27n + 6[8][12][13] | 6 | 8 | 27 | N/A | 3–8 | Yes | 9 | Yes | Yes |
| Diamond code | 25n + 15[12][13] | 15 | 8 | 25 | N/A | 3+ | Yes | Yes | ? | Yes |
| Diamond code | 23n + 24[12][13] | 24 | 8 | 23 | N/A | 3+ | Yes | Yes | ? | Yes |
| Diamond code | 19n + 42[12][13] | 42 | 8 | 19 | N/A | 3–8 | Yes | 9 | Yes | Yes |
[…] [w]e use a[n exponent] value which is shifted by half the binary range of the number. […] This special form is sometimes referred to as a biased exponent, since it is the conventional value plus a constant. Some authors have called it a characteristic, but this term should not be used, since CDC and others use this term for the mantissa. It is also referred to as an 'excess-K' representation, where, for example, K is 64 for a 7-bit exponent (2^(7−1) = 64). […]