Language or processor | Left | Right |
---|---|---|
ActionScript 3, Java, JavaScript, Python, PHP, Ruby, C, C++,[1] D, C#, Go, Julia, Rust (signed types only),[2] Swift (signed types only)[note 1] | << | >> |
Ada | Shift_Left[3] | Shift_Right_Arithmetic |
Kotlin | shl | shr |
Fortran | SHIFTL | SHIFTA[note 2] |
Standard ML | << | ~>> |
Verilog | <<< | >>>[note 3] |
OpenVMS macro language | @[note 4] | |
Scheme | arithmetic-shift[note 5] | |
Common Lisp | ash | |
OCaml | lsl | asr |
Haskell | Data.Bits.shift[note 6] | |
VHDL | sla[note 7] | sra |
Assembly: Z80 | SLA[5] | SRA |
Assembly: x86 | SAL | SAR |
Assembly: 68k | ASL | ASR |
Assembly: RISC-V | sll, slli[6] | sra, srai |
In computer programming, an arithmetic shift is a shift operator, sometimes termed a signed shift (though it is not restricted to signed operands). The two basic types are the arithmetic left shift and the arithmetic right shift. For binary numbers it is a bitwise operation that shifts all of the bits of its operand; every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled in. Instead of being filled with all 0s, as in logical shift, when shifting to the right, the leftmost bit (usually the sign bit in signed integer representations) is replicated to fill in all the vacant positions (this is a kind of sign extension).
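For illustration, here is a minimal C sketch (the behaviour of `>>` on a negative signed operand is implementation-defined in C, as discussed below, but typical compilers on two's complement hardware perform an arithmetic shift):

```c
#include <stdio.h>

int main(void) {
    signed char   s = -100;  /* bit pattern 1001 1100 in two's complement */
    unsigned char u = 0x9C;  /* the same bit pattern, read as unsigned (156) */

    /* Arithmetic right shift: the sign bit is replicated into the vacated positions. */
    printf("s >> 2 = %d\n", s >> 2);   /* -25  (1110 0111) on typical compilers */

    /* Logical right shift: the vacated positions are filled with zeros. */
    printf("u >> 2 = %d\n", u >> 2);   /*  39  (0010 0111) */

    return 0;
}
```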
Some authors prefer the terms sticky right-shift and zero-fill right-shift for arithmetic and logical shifts respectively. [7]
Arithmetic shifts can be useful as efficient ways to perform multiplication or division of signed integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n. Shifting right by n bits on a two's complement signed binary number has the effect of dividing it by 2^n, but it always rounds down (towards negative infinity). This is different from the way rounding is usually done in signed integer division (which rounds towards 0). This discrepancy has led to bugs in a number of compilers. [8]
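The discrepancy can be seen in a short C sketch (again assuming the compiler implements `>>` on signed values as an arithmetic shift):

```c
#include <stdio.h>

int main(void) {
    int x = -9;

    /* Arithmetic right shift rounds towards negative infinity: */
    printf("-9 >> 1 = %d\n", x >> 1);   /* -5, i.e. floor(-9 / 2) */

    /* Signed integer division rounds towards zero: */
    printf("-9 / 2  = %d\n", x / 2);    /* -4 */

    return 0;
}
```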
For example, in the x86 instruction set, the SAR instruction (arithmetic right shift) divides a signed number by a power of two, rounding towards negative infinity. [9] However, the IDIV instruction (signed divide) divides a signed number, rounding towards zero. So a SAR instruction cannot be substituted for an IDIV by power of two instruction nor vice versa.
The formal definition of an arithmetic shift, from Federal Standard 1037C, is that it is:

> A shift, applied to the representation of a number in a fixed radix numeration system and in a fixed-point representation system, and in which only the characters representing the fixed-point part of the number are moved. An arithmetic shift is usually equivalent to multiplying the number by a positive or a negative integral power of the radix, except for the effect of any rounding; compare the logical shift with the arithmetic shift, especially in the case of floating-point representation.
An important word in the FS 1037C definition is "usually".
Arithmetic left shifts are equivalent to multiplication by a (positive, integral) power of the radix (e.g., a multiplication by a power of 2 for binary numbers). Logical left shifts are also equivalent, except multiplication and arithmetic shifts may trigger arithmetic overflow whereas logical shifts do not [citation needed].
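For example, in C (a small sketch; the 16-bit type is chosen only to make the overflow visible, and conversion of an out-of-range value to a signed type is itself implementation-defined, though it wraps on typical implementations):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int16_t x = 300;

    /* Shifting left by 3 multiplies by 2^3 = 8. */
    printf("%d\n", x << 3);             /* 2400 == 300 * 8 */

    /* 300 * 2^8 = 76800 does not fit in 16 bits, so storing the shifted
       value back into the narrow type overflows (wraps on most machines). */
    printf("%d\n", (int16_t)(x << 8));  /* 76800 mod 65536 = 11264 */

    return 0;
}
```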
However, arithmetic right shifts are major traps for the unwary, specifically in treating rounding of negative integers. For example, in the usual two's complement representation of negative integers, −1 is represented as all 1's. For an 8-bit signed integer this is 1111 1111. An arithmetic right-shift by 1 (or 2, 3, ..., 7) yields 1111 1111 again, which is still −1. This corresponds to rounding down (towards negative infinity), but is not the usual convention for division.
It is frequently stated that arithmetic right shifts are equivalent to division by a (positive, integral) power of the radix (e.g., a division by a power of 2 for binary numbers), and hence that division by a power of the radix can be optimized by implementing it as an arithmetic right shift. (A shifter is much simpler than a divider. On most processors, shift instructions will execute faster than division instructions.) A large number of 1960s and 1970s programming handbooks, manuals, and other specifications from companies and institutions such as DEC, IBM, Data General, and ANSI make such incorrect statements [10][page needed].
Logical right shifts are equivalent to division by a power of the radix (usually 2) only for positive or unsigned numbers. Arithmetic right shifts are equivalent to logical right shifts for positive signed numbers. Arithmetic right shifts for negative numbers in N's complement (usually two's complement) are roughly equivalent to division by a power of the radix (usually 2), where for odd numbers rounding downwards is applied (not towards 0 as usually expected).
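The even/odd distinction in a short C sketch (assuming `>>` performs an arithmetic shift on signed operands):

```c
#include <stdio.h>

int main(void) {
    /* Even negative operand: shift and division agree. */
    printf("-6 >> 1 = %d   -6 / 2 = %d\n", -6 >> 1, -6 / 2);  /* -3   -3 */

    /* Odd negative operand: the shift rounds downwards, the division towards zero. */
    printf("-7 >> 1 = %d   -7 / 2 = %d\n", -7 >> 1, -7 / 2);  /* -4   -3 */

    return 0;
}
```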
Arithmetic right shifts for negative numbers are equivalent to division using rounding towards 0 in ones' complement representation of signed numbers as was used by some historic computers, but this is no longer in general use.
The (1999) ISO standard for the programming language C defines the right shift operator in terms of divisions by powers of 2. [11] Because of the above-stated non-equivalence, the standard explicitly excludes from that definition the right shifts of signed numbers that have negative values. It does not specify the behaviour of the right shift operator in such circumstances, but instead requires each individual C compiler to define the behaviour of shifting negative values right. [note 8]
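Portable code that needs a guaranteed arithmetic right shift therefore sometimes synthesizes it from fully defined operations. A hedged sketch (the helper name `asr32` and the exact formulation are illustrative, not taken from any standard):

```c
#include <stdint.h>

/* Arithmetic right shift of a 32-bit signed value by n (0 <= n <= 31),
   rounding towards negative infinity, without ever shifting a negative
   signed operand (whose behaviour the C standard leaves to the compiler). */
static int32_t asr32(int32_t x, unsigned n) {
    if (x >= 0) {
        return x >> n;   /* well defined for non-negative values */
    }
    /* For negative x, use the identity floor(x / 2^n) = -1 - floor((-x - 1) / 2^n).
       The intermediate (-x - 1) is computed in 64 bits so that x == INT32_MIN
       does not overflow. */
    return (int32_t)(-1 - ((-(int64_t)x - 1) >> n));
}
```

For example, asr32(-9, 1) yields −5 no matter how the compiler chooses to treat `>>` on negative operands.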
Like C, C++ had an implementation-defined right shift for signed integers until C++20. Starting in the C++20 standard, right shift of a signed integer is defined to be an arithmetic shift. [13]
In applications where consistent rounding down is desired, arithmetic right shifts for signed values are useful. An example is in downscaling raster coordinates by a power of two, which maintains even spacing. For example, right shift by 1 sends 0, 1, 2, 3, 4, 5, ... to 0, 0, 1, 1, 2, 2, ..., and −1, −2, −3, −4, ... to −1, −1, −2, −2, ..., maintaining even spacing as −2, −2, −1, −1, 0, 0, 1, 1, 2, 2, ... In contrast, integer division with rounding towards zero sends −1, 0, and 1 all to 0 (3 points instead of 2), yielding −2, −1, −1, 0, 0, 0, 1, 1, 2, 2, ... instead, which is irregular at 0.
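A small C sketch of this behaviour (the range is arbitrary, and `>>` on negative values is assumed to act as an arithmetic shift):

```c
#include <stdio.h>

int main(void) {
    for (int x = -4; x <= 4; x++) {
        printf("x = %2d   x >> 1 = %2d   x / 2 = %2d\n", x, x >> 1, x / 2);
    }
    /* x >> 1 yields -2 -2 -1 -1  0  0  1  1  2 : every output value appears twice.
       x / 2  yields -2 -1 -1  0  0  0  1  1  2 : 0 appears three times, -2 and 2 once. */
    return 0;
}
```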
- The `>>` operator in C and C++ is not necessarily an arithmetic shift. Usually it is an arithmetic shift only if used with a signed integer type on its left-hand side; if it is used on an unsigned integer type instead, it is a logical shift.
- Scheme's `arithmetic-shift` can be both a left and a right shift, depending on the second operand, very similar to the OpenVMS macro language, although R6RS Scheme adds both `-right` and `-left` variants.
- The `Bits` class from Haskell's `Data.Bits` module defines both `shift`, taking a signed argument, and `shiftL`/`shiftR`, taking unsigned arguments. These are isomorphic; for new definitions the programmer need provide only one of the two forms, and the other form will be automatically defined in terms of the provided one.

A binary number is a number expressed in the base-2 numeral system or binary numeral system, a method of mathematical expression which uses only two symbols: typically "0" (zero) and "1" (one).
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands.
Two's complement is the most common method of representing signed integers on computers, and more generally, fixed point binary values. Two's complement uses the binary digit with the greatest place value as the sign to indicate whether the binary number is positive or negative. When the most significant bit is 1, the number is signed as negative; and when the most significant bit is 0 the number is signed as positive.
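A minimal C illustration of the usual negate-by-complement identity (the values are arbitrary):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int8_t x   = 5;                   /* 0000 0101 */
    int8_t neg = (int8_t)(~x + 1);    /* invert every bit, then add one: 1111 1011 */

    printf("%d\n", neg);                          /* -5 */
    printf("0x%02X\n", (unsigned)(uint8_t)neg);   /* 0xFB: most significant bit set, so negative */
    return 0;
}
```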
In mathematics and computing, the method of complements is a technique to encode a symmetric range of positive and negative integers in a way that they can use the same algorithm for addition throughout the whole range. For a given number of places half of the possible representations of numbers encode the positive numbers, the other half represents their respective additive inverses. The pairs of mutually additive inverse numbers are called complements. Thus subtraction of any number is implemented by adding its complement. Changing the sign of any number is encoded by generating its complement, which can be done by a very simple and efficient algorithm. This method was commonly used in mechanical calculators and is still used in modern computers. The generalized concept of the radix complement is also valuable in number theory, such as in Midy's theorem.
In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents. More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional amount of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted to the more complicated and computationally demanding floating-point representation.
In computing, signed number representations are required to encode negative numbers in binary number systems.
The binary GCD algorithm, also known as Stein's algorithm or the binary Euclidean algorithm, is an algorithm that computes the greatest common divisor (GCD) of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction.
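A compact C sketch of the algorithm (iterative form; the function name and the 64-bit width are illustrative choices):

```c
#include <stdint.h>

/* Greatest common divisor of two nonnegative integers using only
   shifts, comparisons and subtraction (Stein's algorithm). */
static uint64_t binary_gcd(uint64_t a, uint64_t b) {
    if (a == 0) return b;
    if (b == 0) return a;

    /* Factor out the power of two common to both operands. */
    unsigned shift = 0;
    while (((a | b) & 1u) == 0) { a >>= 1; b >>= 1; shift++; }

    while ((a & 1u) == 0) a >>= 1;      /* a is now odd */
    do {
        while ((b & 1u) == 0) b >>= 1;  /* b is now odd */
        if (a > b) { uint64_t t = a; a = b; b = t; }  /* ensure a <= b */
        b -= a;                         /* odd - odd is even, so the loop makes progress */
    } while (b != 0);

    return a << shift;                  /* restore the common factor of two */
}
```

For instance, binary_gcd(12, 18) returns 6.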
In computing, the modulo operation returns the remainder or signed remainder of a division, after one number is divided by another.
Saturation arithmetic is a version of arithmetic in which all operations, such as addition and multiplication, are limited to a fixed range between a minimum and maximum value.
In computer science, a logical shift is a bitwise operation that shifts all the bits of its operand. The two base variants are the logical left shift and the logical right shift. This is further modulated by the number of bit positions a given value shall be shifted, such as shift left by 1 or shift right by n. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its significand (mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled, usually with zeros, and possibly ones.
In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is outside of the range that can be represented with a given number of digits – either higher than the maximum or lower than the minimum representable value.
Bit manipulation is the act of algorithmically manipulating bits or other pieces of data shorter than a word. Computer programming tasks that require bit manipulation include low-level device control, error detection and correction algorithms, data compression, encryption algorithms, and optimization. For most other tasks, modern programming languages allow the programmer to work directly with abstractions instead of bits that represent those abstractions.
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are equivalent. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 and +0, regarded as equal by the numerical comparison operations but with possible different behaviors in particular operations. This occurs in the sign-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can still be represented by +0, −0, or 0.
In computer processors, the overflow flag is usually a single bit in a system status register used to indicate when an arithmetic overflow has occurred in an operation, indicating that the signed two's-complement result would not fit in the number of bits used for the result. Some architectures may be configured to automatically generate an exception on an operation resulting in overflow.
The Q notation is a way to specify the parameters of a binary fixed point number format. For example, in Q notation, the number format denoted by Q8.8 means that the fixed point numbers in this format have 8 bits for the integer part and 8 bits for the fraction part.
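As a hedged C illustration (the `q8_8` typedef and the helper are hypothetical; values are stored in plain 16-bit integers and products are renormalized with a right shift by 8):

```c
#include <stdio.h>
#include <stdint.h>

typedef int16_t q8_8;   /* 8 integer bits, 8 fraction bits */

/* Multiplying two Q8.8 numbers gives a product with 16 fraction bits,
   so an arithmetic right shift by 8 brings it back to Q8.8. */
static q8_8 q8_8_mul(q8_8 a, q8_8 b) {
    return (q8_8)(((int32_t)a * b) >> 8);
}

int main(void) {
    q8_8 one_half = 0x0080;   /* 0.5 in Q8.8 (0.5 * 256) */
    q8_8 three    = 0x0300;   /* 3.0 in Q8.8 (3.0 * 256) */
    printf("%f\n", q8_8_mul(one_half, three) / 256.0);   /* 1.500000 */
    return 0;
}
```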
A binary multiplier is an electronic circuit used in digital electronics, such as a computer, to multiply two binary numbers.
A redundant binary representation (RBR) is a numeral system that uses more bits than needed to represent a single binary digit so that most numbers have several representations. An RBR is unlike usual binary numeral systems, including two's complement, which use a single bit for each digit. Many of an RBR's properties differ from those of regular binary representation systems. Most importantly, an RBR allows addition without using a typical carry. When compared to non-redundant representation, an RBR makes bitwise logical operation slower, but arithmetic operations are faster when a greater bit width is used. Usually, each digit has its own sign that is not necessarily the same as the sign of the number represented. When digits have signs, that RBR is also a signed-digit representation.
In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs).
In computer software and hardware, find first set (ffs) or find first one is a bit operation that, given an unsigned machine word, designates the index or position of the least significant bit set to one in the word counting from the least significant bit position. A nearly equivalent operation is count trailing zeros (ctz) or number of trailing zeros (ntz), which counts the number of zero bits following the least significant one bit. The complementary operation that finds the index or position of the most significant set bit is log base 2, so called because it computes the binary logarithm ⌊log2(x)⌋. This is closely related to count leading zeros (clz) or number of leading zeros (nlz), which counts the number of zero bits preceding the most significant one bit. There are two common variants of find first set, the POSIX definition which starts indexing of bits at 1, herein labelled ffs, and the variant which starts indexing of bits at zero, which is equivalent to ctz and so will be called by that name.
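A small C sketch of ctz for a 32-bit word (a naive loop; production code would usually use a compiler intrinsic or a table-based method):

```c
#include <stdint.h>

/* Count trailing zeros: the number of zero bits below the least significant
   set bit.  Returns 32 for a zero word, one common convention. */
static unsigned ctz32(uint32_t x) {
    if (x == 0) return 32;
    unsigned n = 0;
    while ((x & 1u) == 0) {   /* shift right until the lowest bit is one */
        x >>= 1;
        n++;
    }
    return n;
}
```

Under the POSIX-style convention, ffs(x) for a nonzero x is simply ctz32(x) + 1.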
In the C programming language, operations can be performed on a bit level using bitwise operators.
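A brief self-contained example of those operators (the values are arbitrary):

```c
#include <stdio.h>

int main(void) {
    unsigned a = 0x0C, b = 0x0A;   /* 1100 and 1010 in binary */
    printf("AND %X, OR %X, XOR %X, NOT-lowbyte %X, SHL %X, SHR %X\n",
           a & b,            /* 1000  = 0x8  */
           a | b,            /* 1110  = 0xE  */
           a ^ b,            /* 0110  = 0x6  */
           ~a & 0xFFu,       /* 11110011 = 0xF3 */
           a << 1,           /* 11000 = 0x18 */
           a >> 1);          /* 0110  = 0x6  */
    return 0;
}
```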
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22.