In number theory, a congruence of squares is a congruence commonly used in integer factorization algorithms.
Given a positive integer n, Fermat's factorization method relies on finding numbers x and y satisfying the equality
x² − y² = n.
We can then factor n = x² − y² = (x + y)(x − y). This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the equation. However, n may also be factored if we can satisfy the weaker congruence of squares conditions:
x² ≡ y² (mod n),
x ≢ ±y (mod n).
From here we easily deduce
x² − y² ≡ 0 (mod n).
This means that n divides the product (x + y)(x − y). The second, non-triviality condition guarantees that n divides neither (x + y) nor (x − y) individually. Thus (x + y) and (x − y) each contain some, but not all, of the factors of n, and the greatest common divisors gcd(x + y, n) and gcd(x − y, n) give us these factors. This can be done quickly using the Euclidean algorithm.
Most algorithms for finding congruences of squares do not actually guarantee non-triviality; they only make it likely. There is a chance that a congruence found will be trivial, in which case we need to continue searching for another x and y.
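As a minimal sketch of this extraction step (the function name and interface are illustrative, not taken from any particular implementation), the following Python code takes x, y, and n with x² ≡ y² (mod n) and returns two non-trivial factors, or None if the congruence turns out to be trivial:

```python
from math import gcd

def factor_from_congruence(x, y, n):
    """Try to split n using a congruence of squares x^2 ≡ y^2 (mod n)."""
    assert (x * x - y * y) % n == 0, "x and y must satisfy x^2 ≡ y^2 (mod n)"
    f1 = gcd(abs(x - y), n)
    f2 = gcd(x + y, n)
    if f1 in (1, n) or f2 in (1, n):
        return None      # trivial congruence (x ≡ ±y mod n); search for another x, y
    return f1, f2        # two proper factors of n
```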
Congruences of squares are extremely useful in integer factorization algorithms. Conversely, because finding square roots modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring that number, any integer factorization algorithm can be used efficiently to identify a congruence of squares.
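One direction of that equivalence, from modular square roots to factoring, can be sketched as follows. The code assumes a hypothetical oracle sqrt_mod(a, n) that returns some square root of a modulo n; no such routine exists in the Python standard library, and it stands in for whatever square-root algorithm is available. If the oracle returns a uniformly random root and n has at least two distinct prime factors, then for a random r the returned root s of r² mod n satisfies s ≢ ±r (mod n) with probability at least 1/2, at which point gcd(r − s, n) is a non-trivial factor.

```python
import random
from math import gcd

def factor_with_sqrt_oracle(n, sqrt_mod, attempts=20):
    """Sketch: factor composite n given an oracle for square roots modulo n."""
    for _ in range(attempts):
        r = random.randrange(2, n)
        g = gcd(r, n)
        if g > 1:
            return g                        # r already shares a factor with n
        s = sqrt_mod(r * r % n, n)          # hypothetical oracle call
        if s is None or (s - r) % n == 0 or (s + r) % n == 0:
            continue                        # trivial root (s ≡ ±r mod n); retry
        return gcd((r - s) % n, n)          # r^2 ≡ s^2 (mod n) with r ≢ ±s
    return None
```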
A technique pioneered by Dixon's factorization method and improved by continued fraction factorization, the quadratic sieve, and the general number field sieve, is to construct a congruence of squares using a factor base.
Instead of looking for one such pair directly, we find many "relations" x² ≡ y (mod n) where the y have only small prime factors (they are smooth numbers), and multiply some of them together to get a square on the right-hand side.
The set of small primes over which all the y factor is called the factor base. Construct a logical matrix where each row describes one y, each column corresponds to one prime in the factor base, and the entry is the parity (even or odd) of the number of times that prime occurs in the factorization of y. Our goal is to select a subset of rows whose sum is the all-zero row. This corresponds to a set of y values whose product is a square number, i.e. one whose factorization has only even exponents. The product of the corresponding x values and the square root of the product of the selected y values then form a congruence of squares.
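For illustration, here is a small Python helper (the name parity_row is just for this sketch) that trial-divides one y over the factor base and produces its row of exponent parities, returning None when y is not smooth over the base:

```python
def parity_row(y, factor_base):
    """Return the exponents-mod-2 row for y over factor_base, or None if y is not smooth."""
    row = []
    for p in factor_base:
        e = 0
        while y % p == 0:
            y //= p
            e += 1
        row.append(e % 2)                # only the parity of the exponent matters
    return row if y == 1 else None       # leftover cofactor means y is not smooth
```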
This is a classic problem in linear algebra over the field with two elements (GF(2)), and it can be solved efficiently with Gaussian elimination as soon as the number of rows exceeds the number of columns. Some additional rows are often included to ensure that several solutions exist in the nullspace of the matrix, in case the first solution produces a trivial congruence.
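The selection of rows can be sketched as Gaussian elimination over GF(2). The routine below is an illustrative, unoptimized version: it processes relation rows one at a time and returns the indices of the first nonempty subset whose entries XOR to the all-zero row; production implementations instead extract many independent dependencies from the nullspace at once.

```python
def find_dependency(rows):
    """Find indices of rows (lists of 0/1) whose XOR is the zero vector, via GF(2) elimination."""
    basis = {}                             # pivot column -> (reduced row, set of original indices)
    for i, row in enumerate(rows):
        cur = row[:]                       # work on a copy
        combo = {i}                        # original row indices XORed into cur
        for col in range(len(cur)):
            if cur[col] and col in basis:
                brow, bcombo = basis[col]
                cur = [a ^ b for a, b in zip(cur, brow)]
                combo ^= bcombo            # symmetric difference of index sets
        if any(cur):
            basis[cur.index(1)] = (cur, combo)
        else:
            return sorted(combo)           # these rows XOR to zero
    return None                            # not enough relations yet
```

Combined with the parity rows from the previous sketch, the returned indices identify a set of relations whose y values multiply to a perfect square.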
A great advantage of this technique is that the search for relations is embarrassingly parallel; a large number of computers can be set to work searching different ranges of x values and trying to factor the resulting y values. Only the relations found need to be reported to a central computer, and there is no particular hurry to do so. The searching computers do not even have to be trusted, since a reported relation can be verified with minimal effort.
There are numerous elaborations on this technique. For example, in addition to relations where y factors completely over the factor base, the "large prime" variant also collects "partial relations" where y factors completely except for one larger prime. A second partial relation with the same large prime can be multiplied by the first to produce a "complete relation".
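A sketch of how two such partial relations might be merged, assuming for this example that a partial relation is represented as a tuple (x, y_smooth, q) meaning x² ≡ y_smooth · q (mod n) with q the single large prime:

```python
def combine_partials(p1, p2, n):
    """Combine two partial relations sharing the same large prime q.

    Each partial is (x, y_smooth, q) meaning x^2 ≡ y_smooth * q (mod n).
    The product is a full relation (x1*x2)^2 ≡ (y1*y2) * q^2 (mod n),
    in which the large prime occurs to an even power.
    """
    (x1, y1, q1), (x2, y2, q2) = p1, p2
    assert q1 == q2, "partials must share the same large prime"
    x = (x1 * x2) % n
    y = y1 * y2 * q1 * q1        # q^2 contributes only even exponents
    return x, y
```

Since the large prime appears squared on the right-hand side, it contributes only even exponents, and the combined relation can be treated like an ordinary complete relation.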
We take n = 35 and find that
6² = 36 ≡ 1 = 1² (mod 35).
We thus factor as
gcd(6 − 1, 35) · gcd(6 + 1, 35) = 5 · 7 = 35.
Using n = 1649, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences
41² ≡ 32 (mod 1649),
42² ≡ 115 (mod 1649),
43² ≡ 200 (mod 1649).
Of these, the first and third have only small primes as factors, and a product of these has an even power of each small prime, and is therefore a square
32 · 200 = 2⁵ · (2³ · 5²) = 2⁸ · 5² = 6400 = 80².
yielding the congruence of squares
(41 · 43)² ≡ 80² (mod 1649), that is, 114² ≡ 80² (mod 1649), since 41 · 43 = 1763 ≡ 114 (mod 1649).
So using the values of 80 and 114 as our x and y gives factors
gcd(114 − 80, 1649) · gcd(114 + 80, 1649) = 17 · 97 = 1649.
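This worked example can be checked with a few lines of standard-library Python; the following verification sketch reproduces the n = 1649 computation:

```python
from math import gcd

n = 1649
# Relations with smooth right-hand sides: 41^2 ≡ 32 and 43^2 ≡ 200 (mod 1649)
assert 41**2 % n == 32 and 43**2 % n == 200
# Their product on the right is a perfect square: 32 * 200 = 6400 = 80^2
x, y = (41 * 43) % n, 80                   # x = 114
assert (x * x - y * y) % n == 0            # the congruence of squares 114^2 ≡ 80^2 (mod 1649)
print(gcd(x - y, n), gcd(x + y, n))        # prints: 17 97
```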
In number theory, Fermat's little theorem states that if p is a prime number, then for any integer a, the number aᵖ − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as aᵖ ≡ a (mod p).
The Jacobi symbol is a generalization of the Legendre symbol. Introduced by Jacobi in 1837, it is of theoretical interest in modular arithmetic and other branches of number theory, but its main use is in computational number theory, especially primality testing and integer factorization; these in turn are important in cryptography.
In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely, for an odd prime p and an integer a coprime to p, a^((p − 1)/2) ≡ 1 (mod p) if a is a quadratic residue modulo p, and a^((p − 1)/2) ≡ −1 (mod p) otherwise.
The Fermat primality test is a probabilistic test to determine whether a number is a probable prime.
In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is, the factorial satisfies (n − 1)! ≡ −1 (mod n).
The quadratic sieve algorithm (QS) is an integer factorization algorithm and, in practice, the second-fastest method known. It is still the fastest for integers under 100 decimal digits or so, and is considerably simpler than the number field sieve. It is a general-purpose factorization algorithm, meaning that its running time depends solely on the size of the integer to be factored, and not on special structure or properties. It was invented by Carl Pomerance in 1981 as an improvement to Schroeppel's linear sieve.
In number theory, a branch of mathematics, the special number field sieve (SNFS) is a special-purpose integer factorization algorithm. The general number field sieve (GNFS) was derived from it.
The Solovay–Strassen primality test, developed by Robert M. Solovay and Volker Strassen in 1977, is a probabilistic test to determine if a number is composite or probably prime. The idea behind the test was discovered by M. M. Artjuhov in 1967 (see Theorem E in the paper). This test has been largely superseded by the Baillie–PSW primality test and the Miller–Rabin primality test, but has great historical importance in showing the practical feasibility of the RSA cryptosystem. The Solovay–Strassen test is essentially an Euler–Jacobi probable prime test.
In number theory, Dixon's factorization method is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike for other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial.
Lucas pseudoprimes and Fibonacci pseudoprimes are composite integers that pass certain tests which all primes and very few composite numbers pass: in this case, criteria relative to some Lucas sequence.
In mathematics, Hensel's lemma, also known as Hensel's lifting lemma, named after Kurt Hensel, is a result in modular arithmetic, stating that if a univariate polynomial has a simple root modulo a prime number p, then this root can be lifted to a unique root modulo any higher power of p. More generally, if a polynomial factors modulo p into two coprime polynomials, this factorization can be lifted to a factorization modulo any higher power of p.
In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as p = x² + y², with x and y integers, if and only if p ≡ 1 (mod 4).
In mathematics, the rational sieve is a general algorithm for factoring integers into prime factors. It is a special case of the general number field sieve. While it is less efficient than the general algorithm, it is conceptually simpler. It serves as a helpful first step in understanding how the general number field sieve works.
The Tonelli–Shanks algorithm is used in modular arithmetic to solve for r in a congruence of the form r² ≡ n (mod p), where p is a prime: that is, to find a square root of n modulo p.
In mathematics, particularly computational algebra, Berlekamp's algorithm is a well-known method for factoring polynomials over finite fields. The algorithm consists mainly of matrix reduction and polynomial GCD computations. It was invented by Elwyn Berlekamp in 1967. It was the dominant algorithm for solving the problem until the Cantor–Zassenhaus algorithm of 1981. It is currently implemented in many well-known computer algebra systems.
In computational number theory, a factor base is a small set of prime numbers commonly used as a mathematical tool in algorithms involving extensive sieving for potential factors of a given integer.
In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m. In the standard notation of modular arithmetic this congruence is written as ax ≡ 1 (mod m).
In mathematics, the Pocklington–Lehmer primality test is a primality test devised by Henry Cabourn Pocklington and Derrick Henry Lehmer. The test uses a partial factorization of N − 1 to prove that an integer N is prime.
In number theory, Berlekamp's root finding algorithm, also called the Berlekamp–Rabin algorithm, is a probabilistic method for finding roots of polynomials over the field with p elements. The method was discovered by Elwyn Berlekamp in 1970 as an auxiliary to the algorithm for polynomial factorization over finite fields. The algorithm was later modified by Rabin for arbitrary finite fields in 1979. The method was also independently discovered before Berlekamp by other researchers.
Kunerth's algorithm is an algorithm for computing the modular square root of a given number. The algorithm does not require the factorization of the modulus, and relies on modular operations that are often easy when the given number is prime.