Redundant binary representation

A redundant binary representation (RBR) is a numeral system that uses more bits than needed to represent a single binary digit, so that most numbers have several representations. An RBR differs from the usual binary numeral systems, including two's complement, which use a single bit per digit. Many of an RBR's properties differ from those of regular binary representation systems; most importantly, an RBR allows addition without a conventional carry. [1] Compared to a non-redundant representation, an RBR makes bitwise logical operations slower, but can make arithmetic operations faster once the bit width is large enough. [2] Usually, each digit has its own sign, which is not necessarily the same as the sign of the number represented. When digits have signs, the RBR is also a signed-digit representation.

Conversion from RBR

An RBR is a place-value notation system. In an RBR, each digit is a pair of bits: every place holds two bits rather than one. The value represented by a redundant digit can be found using a translation table, which gives the mathematical value of each possible pair of bits.

As in conventional binary representation, the integer value of a given representation is a weighted sum of the values of the digits. The weight starts at 1 for the rightmost position and doubles for each position further left. Usually, an RBR allows negative values. There is no single sign bit that tells whether a redundantly represented number is positive or negative. Most integers have several possible representations in an RBR.

Often one of the several possible representations of an integer is chosen as the "canonical" form, so each integer has only one possible "canonical" representation; non-adjacent form and two's complement are popular choices for that canonical form.

An integer value can be converted back from an RBR using the following formula, where n is the number of digits and d_k is the interpreted value of the k-th digit, with k starting at 0 at the rightmost position:

$$v = \sum_{k=0}^{n-1} d_k 2^k$$

The conversion from an RBR to n-bit two's complement can be done in O(log(n)) time using a prefix adder. [3]
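For illustration, the weighted sum above can be evaluated directly; a minimal Python sketch (the function name is ours) that interprets a list of digit values, least significant first:

```python
def rbr_value(digits):
    # digits[k] is the interpreted value d_k of the k-th digit,
    # least significant first; the result is the sum over k of d_k * 2**k.
    return sum(d * (1 << k) for k, d in enumerate(digits))
```

This is the O(n) software view; the O(log(n)) bound refers to the delay of a hardware prefix network performing the same conversion.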

Example of redundant binary representation

Example of a translation table for a digit:

Digit   Interpreted value
00      −1
01       0
10       0
11       1

Not all redundant representations have the same properties. For example, using the translation table above, the number 1 can be represented in this RBR in many ways: "01·01·01·11" (0+0+0+1), "01·01·10·11" (0+0+0+1), "01·01·11·00" (0+0+2−1), or "11·00·00·00" (8−4−2−1). Also, for this translation table, flipping all bits (NOT gate) corresponds to finding the additive inverse (multiplication by −1) of the integer represented. [4]

In this case: NOT(d) = −d for every digit value d in the table, so complementing all the bits of a representation of v yields a representation of −v.
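A short Python sketch of this particular table (the helper names are ours), checking both the decoding of "11·00·00·00" and the NOT-gate negation property:

```python
# Bit pair -> digit value, per the translation table above.
DIGIT_VALUE = {(0, 0): -1, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def decode(pairs):
    # pairs holds (high bit, low bit) digit pairs, most significant first,
    # e.g. "11·00·00·00" becomes [(1, 1), (0, 0), (0, 0), (0, 0)].
    value = 0
    for hi, lo in pairs:
        value = 2 * value + DIGIT_VALUE[(hi, lo)]
    return value

def negate(pairs):
    # Flipping every bit negates the value under this table, since
    # 00 <-> 11 swaps -1 and 1 while 01 <-> 10 swaps 0 and 0.
    return [(1 - hi, 1 - lo) for hi, lo in pairs]

one = [(1, 1), (0, 0), (0, 0), (0, 0)]   # 8 - 4 - 2 - 1 = 1
assert decode(one) == 1
assert decode(negate(one)) == -1
```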

Arithmetic operations

Redundant representations are commonly used inside high-speed arithmetic logic units.

In particular, a carry-save adder uses a redundant representation.
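A carry-save adder keeps its result in exactly such a redundant form: a (sum, carry) pair of bit vectors whose values must be added to obtain the conventional result. A minimal Python sketch, using integers as bit vectors:

```python
def carry_save_add(a, b, c):
    # Reduce three addends to a redundant (sum, carry) pair using one
    # (virtual) full adder per bit position; no carry propagates sideways.
    s = a ^ b ^ c                       # per-position sum bits
    t = (a & b) | (a & c) | (b & c)     # per-position carry bits
    return s, t << 1                    # invariant: a + b + c == s + (t << 1)

s, t = carry_save_add(5, 6, 7)
assert s + t == 18
```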

Addition

Schematic of an adder unit using full adder blocks (z = x + y)

The addition operation in an RBR is carry-free: a carry does not have to propagate through the full width of the addition unit, so addition is a constant-time operation that takes the same amount of time regardless of the bit width of the operands. This does not mean that addition in an RBR is always faster than its two's complement equivalent, but that it eventually becomes faster as the bit width grows, because the delay of a two's complement addition unit is proportional to log(n) (where n is the bit width). [5] Addition in an RBR takes constant time because each digit of the result can be calculated independently of the others, so all digits of the result can be calculated in parallel. [6]
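As a concrete (illustrative, not canonical) version of such a scheme, the following Python sketch adds two numbers in a binary signed-digit RBR with digit values {−1, 0, 1}, the digit set encoded by the translation table above. Each position first produces an interim digit and a transfer to the next position, chosen by inspecting only the next-lower position, so that combining the interim digit with the incoming transfer can never overflow:

```python
def bsd_add(x, y):
    # x, y: little-endian lists of digits in {-1, 0, 1}.
    n = max(len(x), len(y))
    x = x + [0] * (n - len(x))      # zero-pad to a common width
    y = y + [0] * (n - len(y))
    u = [0] * n                     # interim sum digits
    c = [0] * n                     # c[i] is the transfer into position i + 1
    for i in range(n):
        s = x[i] + y[i]             # position sum, in {-2, ..., 2}
        # The transfer arriving from below is >= 0 exactly when neither digit
        # of the next-lower position is -1 (or there is no lower position).
        no_neg_below = i == 0 or (x[i - 1] >= 0 and y[i - 1] >= 0)
        if s == 2:
            c[i], u[i] = 1, 0
        elif s == 1:
            c[i], u[i] = (1, -1) if no_neg_below else (0, 1)
        elif s == -1:
            c[i], u[i] = (0, -1) if no_neg_below else (-1, 1)
        elif s == -2:
            c[i], u[i] = -1, 0
        # s == 0 leaves c[i] = u[i] = 0
    # Second stage: u[i] + c[i-1] is guaranteed to stay in {-1, 0, 1},
    # so no transfer ever travels more than one position.
    z = [u[i] + (c[i - 1] if i > 0 else 0) for i in range(n)]
    z.append(c[n - 1])              # transfer out of the top position
    return z

# 3 + (-1) = 2: [1, 1] encodes 1 + 2 = 3 and [-1] encodes -1.
assert rbr_value(bsd_add([1, 1], [-1])) == 2
```

Because every result digit depends only on a bounded window of input positions, a hardware implementation can compute all digits in parallel with a depth that is independent of the bit width.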

Subtraction

Subtraction is the same as addition, except that the additive inverse of the second operand must be computed first. For common representations, this can be done on a digit-by-digit basis.
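Continuing the sketch above, in the {−1, 0, 1} digit set the additive inverse is obtained by negating each digit, so subtraction is one digit-wise pass followed by a carry-free addition:

```python
def bsd_sub(x, y):
    # Negate y digit by digit, then reuse the carry-free adder.
    return bsd_add(x, [-d for d in y])
```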

Multiplication

Many hardware multipliers internally use Booth encoding, a redundant binary representation.
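As an illustration, radix-4 Booth recoding rewrites a two's complement multiplier with digits from {−2, −1, 0, 1, 2}, a redundant signed-digit form that roughly halves the number of partial products. A hedged Python sketch (an even bit length is assumed for simplicity):

```python
def booth_radix4(bits):
    # bits: little-endian two's complement bits of the multiplier.
    digits = []
    prev = 0                        # implicit bit b[-1] = 0
    for i in range(0, len(bits), 2):
        b0, b1 = bits[i], bits[i + 1]
        # Each radix-4 digit is -2*b[2i+1] + b[2i] + b[2i-1], in {-2, ..., 2}.
        digits.append(-2 * b1 + b0 + prev)
        prev = b1
    return digits                   # multiplier == sum of d_j * 4**j

assert booth_radix4([1, 1, 1, 0]) == [-1, 2]   # 7 == -1 + 2*4
```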

Logical operations

Bitwise logical operations, such as AND, OR and XOR, are not possible in redundant representations. While it is possible to apply bitwise operations directly to the underlying bits inside an RBR, it is not clear that this is a meaningful operation: a value has many representations in an RBR, and the result would depend on which representations are used.

To get the expected results, it is necessary to first convert the two operands to non-redundant representations. Consequently, logical operations are slower in an RBR. More precisely, they take time proportional to log(n) (where n is the number of digits), compared to constant time in two's complement.

It is, however, possible to convert only the least-significant portion of a redundantly represented number to non-redundant form, so that operations such as masking off the low k bits can be done in O(log(k)) time.
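To see why raw bitwise operations are ill-defined, reuse the decode sketch from the translation-table example: ANDing the underlying bits of different representations of the same two values gives different results:

```python
a = [(0, 1), (0, 1), (0, 1), (1, 1)]   # "01·01·01·11" = 1
b = [(1, 1), (0, 0), (0, 0), (0, 0)]   # "11·00·00·00" = 1
c = [(0, 1), (0, 1), (1, 1), (0, 0)]   # "01·01·11·00" = 1

def bitwise_and(p, q):
    # AND the raw bits position by position.
    return [(ph & qh, pl & ql) for (ph, pl), (qh, ql) in zip(p, q)]

assert decode(bitwise_and(a, b)) == -7   # "1 AND 1" one way...
assert decode(bitwise_and(a, c)) == -1   # ...another way: not even equal
```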

Related articles

Binary-coded decimal
Floating-point arithmetic
Integer (computer science)
IEEE 754-1985
Arithmetic shift
NaN
Ternary numeral system
Bitwise operation
Two's complement
IEEE 754
Adder (electronics)
Fixed-point arithmetic
Signed number representations
Integer overflow
Binary multiplier
Decimal floating point
Offset binary
Truth table
Arithmetic logic unit
Ones' complement

References

  1. Phatak, Dhananjay S.; Koren, Israel (August 1994). "Hybrid Signed-Digit Number Systems: A Unified Framework for Redundant Number Representations with Bounded Carry Propagation Chains" (PDF). IEEE Transactions on Computers. 43 (8): 880–891. CiteSeerX 10.1.1.352.6407. doi:10.1109/12.295850.
  2. Lessard, Louis Philippe (2008). "Fast Arithmetic on FPGA Using Redundant Binary Apparatus". Retrieved 2015-09-12.
  3. Veeramachaneni, Sreehari; Krishna, M. Kirthi; Avinash, Lingamneni; Reddy P., Sreekanth; Srinivas, M.B. (May 2007). Novel High-Speed Redundant Binary to Binary Converter Using Prefix Networks (PDF). IEEE International Symposium on Circuits and Systems (ISCAS 2007). New Orleans. doi:10.1109/ISCAS.2007.378170.
  4. Lapointe, Marcel; Huynh, Huu Tue; Fortier, Paul (April 1993). "Systematic Design of Pipelined Recursive Filters". IEEE Transactions on Computers. 42 (4): 413–426. doi:10.1109/12.214688.
  5. Pai, Yu-Ting; Chen, Yu-Kumg (January 2004). The Fastest Carry Lookahead Adder (PDF). Second IEEE International Workshop on Electronic Design, Test and Applications (DELTA '04). Perth. doi:10.1109/DELTA.2004.10071.
  6. Jose, Bijoy; Radhakrishnan, Damu (December 2006). Delay Optimized Redundant Binary Adders. 13th IEEE International Conference on Electronics, Circuits and Systems (ICECS '06). Nice. doi:10.1109/ICECS.2006.379838.