Binary GCD algorithm

Visualisation of using the binary GCD algorithm to find the greatest common divisor (GCD) of 36 and 24. Thus, the GCD is 2² × 3 = 12.

The binary GCD algorithm, also known as Stein's algorithm or the binary Euclidean algorithm, [1] [2] is an algorithm that computes the greatest common divisor (GCD) of two nonnegative integers. Stein's algorithm uses simpler arithmetic operations than the conventional Euclidean algorithm; it replaces division with arithmetic shifts, comparisons, and subtraction.

Although the algorithm in its contemporary form was first published by the Israeli physicist and programmer Josef Stein in 1967, [3] it may have been known by the 2nd century BCE, in ancient China. [4]

Algorithm

The algorithm finds the GCD of two nonnegative numbers u and v by repeatedly applying these identities:

  1. gcd(u, 0) = u: everything divides zero, and u is the largest number that divides u.
  2. gcd(2u, 2v) = 2·gcd(u, v): 2 is a common divisor.
  3. gcd(u, 2v) = gcd(u, v) if u is odd: 2 is then not a common divisor.
  4. gcd(u, v) = gcd(u, v − u) if u and v are both odd and u ≤ v.

As GCD is commutative (gcd(u, v) = gcd(v, u)), those identities still apply if the operands are swapped: gcd(0, v) = v, gcd(2u, v) = gcd(u, v) if v is odd, etc.
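
For illustration, the identities translate directly into a naive recursive function (a minimal sketch, not the optimized form discussed under Implementation; the name gcd_recursive is ours):

fn gcd_recursive(u: u64, v: u64) -> u64 {
    match (u, v) {
        // Identity 1: gcd(u, 0) = u (and, by commutativity, gcd(0, v) = v)
        (u, 0) => u,
        (0, v) => v,
        // Identity 2: both even, so 2 is a common divisor
        (u, v) if u % 2 == 0 && v % 2 == 0 => 2 * gcd_recursive(u / 2, v / 2),
        // Identity 3 (and its swapped form): exactly one operand is even
        (u, v) if u % 2 == 0 => gcd_recursive(u / 2, v),
        (u, v) if v % 2 == 0 => gcd_recursive(u, v / 2),
        // Identity 4: both odd, subtract the smaller from the larger
        (u, v) if u <= v => gcd_recursive(u, v - u),
        (u, v) => gcd_recursive(u - v, v),
    }
}

Evaluating gcd_recursive(36, 24) applies identity 2 twice and then identities 3 and 4 until one operand reaches zero, yielding 2² × 3 = 12 as in the figure above.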

Implementation

While the above description of the algorithm is mathematically correct, performant software implementations typically differ from it in a few notable ways: they are written iteratively rather than recursively, they remove all factors of 2 at once with a bit shift guided by a count-trailing-zeros primitive instead of repeated division by 2, and they arrange the main loop to avoid hard-to-predict branches. [6] [7]

The following is an implementation of the algorithm in Rust exemplifying those differences, adapted from uutils:

use std::cmp::min;
use std::mem::swap;

pub fn gcd(mut u: u64, mut v: u64) -> u64 {
    // Base cases: gcd(n, 0) = gcd(0, n) = n
    if u == 0 {
        return v;
    } else if v == 0 {
        return u;
    }

    // Using identities 2 and 3:
    // gcd(2ⁱ u, 2ʲ v) = 2ᵏ gcd(u, v) with u, v odd and k = min(i, j)
    // 2ᵏ is the greatest power of two that divides both 2ⁱ u and 2ʲ v
    let i = u.trailing_zeros();
    u >>= i;
    let j = v.trailing_zeros();
    v >>= j;
    let k = min(i, j);

    loop {
        // u and v are odd at the start of the loop
        debug_assert!(u % 2 == 1, "u = {} should be odd", u);
        debug_assert!(v % 2 == 1, "v = {} should be odd", v);

        // Swap if necessary so u ≤ v
        if u > v {
            swap(&mut u, &mut v);
        }

        // Identity 4: gcd(u, v) = gcd(u, v-u) as u ≤ v and u, v are both odd
        v -= u;
        // v is now even

        if v == 0 {
            // Identity 1: gcd(u, 0) = u
            // The shift by k is necessary to add back the 2ᵏ factor that was removed before the loop
            return u << k;
        }

        // Identity 3: gcd(u, 2ʲ v) = gcd(u, v) as u is odd
        v >>= v.trailing_zeros();
    }
}
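
A small test harness (a hypothetical main, not part of the uutils code; the expected values follow from ordinary GCD arithmetic) can exercise the function:

fn main() {
    assert_eq!(gcd(36, 24), 12); // the example from the figure above
    assert_eq!(gcd(0, 7), 7);    // identity 1: everything divides zero
    assert_eq!(gcd(48, 18), 6);
    println!("all gcd checks passed");
}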

Note: The implementation above accepts unsigned (non-negative) integers; given that gcd(u, v) = gcd(|u|, |v|), the signed case can be handled as follows:

/// Computes the GCD of two signed 64-bit integers
/// The result is unsigned and not always representable as i64: gcd(i64::MIN, i64::MIN) == 1 << 63
pub fn signed_gcd(u: i64, v: i64) -> u64 {
    gcd(u.unsigned_abs(), v.unsigned_abs())
}
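
For example (a hypothetical check, including the edge case mentioned in the doc comment):

fn main() {
    assert_eq!(signed_gcd(-36, 24), 12);
    // |i64::MIN| = 2⁶³ is not representable as an i64, but the u64 result is exact:
    assert_eq!(signed_gcd(i64::MIN, i64::MIN), 1u64 << 63);
}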

Complexity

Asymptotically, the algorithm requires O(n) steps, where n is the number of bits in the larger of the two numbers, as every two steps reduce at least one of the operands by at least a factor of 2. Each step involves only a few arithmetic operations (O(1) with a small constant); when working with word-sized numbers, each arithmetic operation translates to a single machine operation, so the number of machine operations is on the order of n, i.e. log₂(max(u, v)).
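
To see this bound in practice, the loop of the implementation above can be instrumented to count its iterations (gcd_with_steps is a hypothetical variant written only for this illustration):

fn gcd_with_steps(mut u: u64, mut v: u64) -> (u64, u32) {
    if u == 0 { return (v, 0); }
    if v == 0 { return (u, 0); }
    let i = u.trailing_zeros(); u >>= i;
    let j = v.trailing_zeros(); v >>= j;
    let k = i.min(j);
    let mut steps = 0;
    loop {
        steps += 1;
        if u > v { std::mem::swap(&mut u, &mut v); }
        v -= u;                    // identity 4
        if v == 0 { return (u << k, steps); }
        v >>= v.trailing_zeros();  // identity 3: this halves v at least once
    }
}

fn main() {
    for &(a, b) in &[(36u64, 24u64), (1 << 40, (1 << 40) - 1), (12345678901, 9876543210)] {
        let (g, steps) = gcd_with_steps(a, b);
        // Each iteration halves v at least once, so the step count stays on the
        // order of the number of bits in the larger input.
        println!("gcd({a}, {b}) = {g} after {steps} iterations ({} input bits)", a.max(b).ilog2() + 1);
    }
}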

For arbitrarily large numbers, the asymptotic complexity of this algorithm is O(n²), [8] as each arithmetic operation (subtract and shift) involves a linear number of machine operations (one per word in the numbers' binary representation). If the numbers can be represented in the machine's memory, i.e. each number's size can be represented by a single machine word, this bound is reduced to:

O(n² / log₂ n)
This is the same as for the Euclidean algorithm, though a more precise analysis by Akhavi and Vallée proved that binary GCD uses about 60% fewer bit operations. [9]

Extensions

The binary GCD algorithm can be extended in several ways, either to output additional information, to deal with arbitrarily large integers more efficiently, or to compute GCDs in domains other than the integers.

The extended binary GCD algorithm, analogous to the extended Euclidean algorithm, fits in the first kind of extension, as it provides the Bézout coefficients in addition to the GCD: integers a and b such that a·u + b·v = gcd(u, v). [10] [11] [12]
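
As an illustrative sketch of this first kind of extension (loosely following the binary extended GCD described in the Handbook of Applied Cryptography [11]; the function name, the i64 width, and the tuple return are our own choices, and coefficient overflow for very large inputs is not addressed):

/// Returns (g, a, b) with a*x + b*y == g == gcd(x, y), for positive x and y.
pub fn extended_binary_gcd(mut x: i64, mut y: i64) -> (i64, i64, i64) {
    assert!(x > 0 && y > 0);
    // Identity 2: pull out the power of two common to x and y.
    let mut g = 1;
    while x % 2 == 0 && y % 2 == 0 {
        x /= 2;
        y /= 2;
        g *= 2;
    }
    // Invariants maintained below: u == a1*x + b1*y and v == a2*x + b2*y
    // (x and y now denote the reduced inputs, at least one of which is odd).
    let (mut u, mut v) = (x, y);
    let (mut a1, mut b1, mut a2, mut b2) = (1i64, 0i64, 0i64, 1i64);
    loop {
        while u % 2 == 0 {
            u /= 2;
            if a1 % 2 == 0 && b1 % 2 == 0 {
                a1 /= 2;
                b1 /= 2;
            } else {
                // a1 + y and b1 - x are both even here, so the divisions are exact.
                a1 = (a1 + y) / 2;
                b1 = (b1 - x) / 2;
            }
        }
        while v % 2 == 0 {
            v /= 2;
            if a2 % 2 == 0 && b2 % 2 == 0 {
                a2 /= 2;
                b2 /= 2;
            } else {
                a2 = (a2 + y) / 2;
                b2 = (b2 - x) / 2;
            }
        }
        // Identity 4 on the odd u and v, keeping the coefficient invariants in sync.
        if u >= v {
            u -= v;
            a1 -= a2;
            b1 -= b2;
        } else {
            v -= u;
            a2 -= a1;
            b2 -= b1;
        }
        if u == 0 {
            // a2*x + b2*y == v for the reduced x and y, hence
            // a2*x₀ + b2*y₀ == g*v == gcd(x₀, y₀) for the original inputs x₀, y₀.
            return (g * v, a2, b2);
        }
    }
}

For instance, extended_binary_gcd(693, 609) evaluates to (21, a, b), where the returned coefficients satisfy 693·a + 609·b = 21.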

In the case of large integers, the best asymptotic complexity is O(M(n) log n), with M(n) the cost of n-bit multiplication; this is near-linear and vastly smaller than the binary GCD algorithm's O(n²), though concrete implementations only outperform older algorithms for numbers larger than about 64 kilobits (i.e. greater than 8×10^19265). This is achieved by extending the binary GCD algorithm using ideas from the Schönhage–Strassen algorithm for fast integer multiplication. [13]

The binary GCD algorithm has also been extended to domains other than natural numbers, such as Gaussian integers, [14] Eisenstein integers, [15] quadratic rings, [16] [17] and integer rings of number fields. [18]

Historical description

An algorithm for computing the GCD of two numbers was known in ancient China, under the Han dynasty, as a method to reduce fractions:

If possible halve it; otherwise, take the denominator and the numerator, subtract the lesser from the greater, and do that alternately to make them the same. Reduce by the same number.

Fangtian – Land surveying, The Nine Chapters on the Mathematical Art

The phrase "if possible halve it" is ambiguous, [4]

See also


References

  1. Brent, Richard P. (13–15 September 1999). Twenty years' analysis of the Binary Euclidean Algorithm. 1999 Oxford-Microsoft Symposium in honour of Professor Sir Antony Hoare. Oxford.
  2. Brent, Richard P. (November 1999). Further analysis of the Binary Euclidean algorithm (Technical report). Oxford University Computing Laboratory. arXiv: 1303.2772 . PRG TR-7-99.
  3. Stein, J. (February 1967), "Computational problems associated with Racah algebra", Journal of Computational Physics, 1 (3): 397–405, Bibcode:1967JCoPh...1..397S, doi:10.1016/0021-9991(67)90047-2, ISSN   0021-9991
  4. Knuth, Donald (1998), Seminumerical Algorithms, The Art of Computer Programming, vol. 2 (3rd ed.), Addison-Wesley, ISBN 978-0-201-89684-8
  5. Godbolt, Matt. "Compiler Explorer" . Retrieved 4 February 2024.
  6. Kapoor, Rajiv (21 February 2009). "Avoiding the Cost of Branch Misprediction". Intel Developer Zone.
  7. Lemire, Daniel (15 October 2019). "Mispredicted branches can multiply your running times".
  8. "GNU MP 6.1.2: Binary GCD".
  9. Akhavi, Ali; Vallée, Brigitte (2000), "Average Bit-Complexity of Euclidean Algorithms", Proceedings ICALP'00, Lecture Notes in Computer Science 1853: 373–387, CiteSeerX 10.1.1.42.7616
  10. Knuth 1998 , p. 646, answer to exercise 39 of section 4.5.2
  11. Menezes, Alfred J.; van Oorschot, Paul C.; Vanstone, Scott A. (October 1996). "§14.4 Greatest Common Divisor Algorithms" (PDF). Handbook of Applied Cryptography. CRC Press. pp. 606–610. ISBN   0-8493-8523-7 . Retrieved 9 September 2017.
  12. Cohen, Henri (1993). "Chapter 1 : Fundamental Number-Theoretic Algorithms". A Course In Computational Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 138. Springer-Verlag. pp. 17–18. ISBN   0-387-55640-0.
  13. Stehlé, Damien; Zimmermann, Paul (2004), "A binary recursive gcd algorithm" (PDF), Algorithmic number theory, Lecture Notes in Comput. Sci., vol. 3076, Springer, Berlin, pp. 411–425, CiteSeerX   10.1.1.107.8612 , doi:10.1007/978-3-540-24847-7_31, ISBN   978-3-540-22156-2, MR   2138011, S2CID   3119374, INRIA Research Report RR-5050.
  14. Weilert, André (July 2000). "(1+i)-ary GCD Computation in Z[i] as an Analogue to the Binary GCD Algorithm". Journal of Symbolic Computation. 30 (5): 605–617. doi: 10.1006/jsco.2000.0422 .
  15. Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg (12–15 August 2003). Efficient Algorithms for GCD and Cubic Residuosity in the Ring of Eisenstein Integers. 14th International Symposium on the Fundamentals of Computation Theory. Malmö, Sweden. pp. 109–117. doi:10.1007/978-3-540-45077-1_11.
  16. Agarwal, Saurabh; Frandsen, Gudmund Skovbjerg (13–18 June 2004). Binary GCD Like Algorithms for Some Complex Quadratic Rings. Algorithmic Number Theory Symposium. Burlington, VT, USA. pp. 57–71. doi:10.1007/978-3-540-24847-7_4.
  17. Agarwal, Saurabh; Frandsen, Gudmund Skovbjerg (20–24 March 2006). A New GCD Algorithm for Quadratic Number Rings with Unique Factorization. 7th Latin American Symposium on Theoretical Informatics. Valdivia, Chile. pp. 30–42. doi:10.1007/11682462_8.
  18. Wikström, Douglas (11–15 July 2005). On the l-Ary GCD-Algorithm in Rings of Integers. Automata, Languages and Programming, 32nd International Colloquium. Lisbon, Portugal. pp. 1189–1201. doi:10.1007/11523468_96.

Further reading

Covers the extended binary GCD, and a probabilistic analysis of the algorithm.

Covers a variety of topics, including the extended binary GCD algorithm which outputs Bézout coefficients, efficient handling of multi-precision integers using a variant of Lehmer's GCD algorithm, and the relationship between GCD and continued fraction expansions of real numbers.

An analysis of the algorithm in the average case, through the lens of functional analysis: the algorithms' main parameters are cast as a dynamical system, and their average value is related to the invariant measure of the system's transfer operator.