Primality certificate


In mathematics and computer science, a primality certificate or primality proof is a succinct, formal proof that a number is prime. Primality certificates allow the primality of a number to be rapidly checked without having to run an expensive or unreliable primality test. "Succinct" usually means that the proof should be at most polynomially larger than the number of digits in the number itself (for example, if the number has b bits, the proof might contain roughly b² bits).


Primality certificates lead directly to proofs that problems such as primality testing and the complement of integer factorization lie in NP, the class of problems verifiable in polynomial time given a solution. These problems already trivially lie in co-NP. This was the first strong evidence that these problems are not NP-complete, since if they were, it would imply that NP ⊆ co-NP (and hence NP = co-NP), a result widely believed to be false; in fact, this was the first demonstration of a problem in NP ∩ co-NP not known, at the time, to be in P.

Producing certificates for the complement problem, to establish that a number is composite, is straightforward: it suffices to give a nontrivial divisor. Standard probabilistic primality tests such as the Baillie–PSW primality test, the Fermat primality test, and the Miller–Rabin primality test also produce compositeness certificates when the input is composite, but do not produce certificates for prime inputs.
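A verifier for such a compositeness certificate only has to confirm that the claimed divisor is nontrivial and actually divides the input; the following sketch (an illustration written for this article, not code taken from the tests named above) shows how little work that is.

```python
def verify_compositeness_certificate(n: int, d: int) -> bool:
    """Accept d as a certificate that n is composite if d is a
    nontrivial divisor of n (1 < d < n and d divides n)."""
    return 1 < d < n and n % d == 0

# Example: 91 = 7 * 13, so 7 certifies that 91 is composite.
assert verify_compositeness_certificate(91, 7)
assert not verify_compositeness_certificate(97, 7)   # 97 is prime
```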

Pratt certificates

The concept of primality certificates was historically introduced by the Pratt certificate, conceived in 1975 by Vaughan Pratt, [1] who described its structure and proved it to have polynomial size and to be verifiable in polynomial time. It is based on the Lucas primality test, which is essentially the converse of Fermat's little theorem with an added condition to make it true:

Lucas' theorem: Suppose we have an integer a such that:
  • a^(n − 1) ≡ 1 (mod n),
  • for every prime factor q of n − 1, it is not the case that a^((n − 1)/q) ≡ 1 (mod n).
Then n is prime.

Given such an a (called a witness) and the prime factorization of n − 1, it's simple to verify the above conditions quickly: we only need to do a linear number of modular exponentiations, since every integer has fewer prime factors than bits, and each of these can be done by exponentiation by squaring in O(log n) multiplications (see big-O notation). Even with grade-school integer multiplication, this is only O((log n)⁴) time; using the multiplication algorithm with best-known asymptotic running time, due to David Harvey and Joris van der Hoeven, we can lower this to O((log n)³ (log log n)) time, or using soft-O notation Õ((log n)³).
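As a concrete illustration of this verification (a sketch written for this article rather than code from Pratt's paper), the following function checks the two conditions of Lucas' theorem for a claimed witness a and a claimed list of the prime factors of n − 1, using Python's built-in pow for modular exponentiation by squaring.

```python
def lucas_check(n, a, prime_factors_of_n_minus_1):
    """Verify the conditions of Lucas' theorem for witness a.

    Assumes the caller supplies the *complete* set of prime factors of
    n - 1; if a listed "factor" is actually composite, the test can be
    fooled (see the 85 = 6 x 14 example below).
    """
    if pow(a, n - 1, n) != 1:                  # a^(n-1) must be 1 mod n
        return False
    for q in prime_factors_of_n_minus_1:
        if pow(a, (n - 1) // q, n) == 1:       # a^((n-1)/q) must not be 1 mod n
            return False
    return True

# 229 is prime: a = 6 works with 228 = 2^2 * 3 * 19.
assert lucas_check(229, 6, [2, 3, 19])
# The composite 85 is wrongly accepted if 6 and 14 are passed off as "primes".
assert lucas_check(85, 4, [6, 14])
```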

However, it is possible to trick a verifier into accepting a composite number by giving it a "prime factorization" of n − 1 that includes composite numbers. For example, suppose we claim that n = 85 is prime, supplying a = 4 and n − 1 = 6 × 14 as the "prime factorization". Then (using q = 6 and q = 14):
  • 4^84 ≡ 1 (mod 85),
  • 4^(84/6) = 4^14 ≡ 16 ≢ 1 (mod 85),
  • 4^(84/14) = 4^6 ≡ 16 ≢ 1 (mod 85).

We would falsely conclude that 85 is prime. We don't want to force the verifier to factor the number, so a better way to avoid this issue is to give primality certificates for each of the prime factors of n − 1 as well; these are just smaller instances of the original problem. We continue recursively in this manner until we reach a number known to be prime, such as 2. We end up with a tree of prime numbers, each associated with a witness a. For example, here is a complete Pratt certificate for the number 229, which a verifier can check recursively (as in the sketch below):
  • 229 (a = 6, 229 − 1 = 228 = 2² × 3 × 19):
      • 2 (known prime),
      • 3 (a = 2, 3 − 1 = 2):
          • 2 (known prime),
      • 19 (a = 2, 19 − 1 = 18 = 2 × 3²):
          • 2 (known prime),
          • 3 (a = 2, 3 − 1 = 2):
              • 2 (known prime).
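To make the recursion concrete, here is a small verifier, again only an illustrative sketch written for this article: each node of the tree carries a prime, a witness, and certificates for the prime factors of p − 1, with 2 accepted as the known base case.

```python
# A certificate is (p, a, children), where children are certificates for the
# prime factors of p - 1; the prime 2 is the known base case.
def verify_pratt(cert) -> bool:
    p, a, children = cert
    if p == 2:
        return True
    factors = [child[0] for child in children]
    # The certified factors must account for all of p - 1.
    m = p - 1
    for q in factors:
        while m % q == 0:
            m //= q
    if m != 1:
        return False
    # Lucas conditions for the witness a.
    if pow(a, p - 1, p) != 1:
        return False
    if any(pow(a, (p - 1) // q, p) == 1 for q in factors):
        return False
    # Recurse into the children.
    return all(verify_pratt(child) for child in children)

# The certificate for 229 described above: 228 = 2^2 * 3 * 19.
cert_3   = (3, 2, [(2, None, [])])
cert_19  = (19, 2, [(2, None, []), cert_3])
cert_229 = (229, 6, [(2, None, []), cert_3, cert_19])
assert verify_pratt(cert_229)
```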

This proof tree can be shown to contain at most 4 log₂ n − 4 values other than 2 by a simple inductive proof (based on theorem 2 of Pratt). The result holds for 3 (and trivially for 2, whose subtree contains no values other than 2); in general, take p > 3 and let its children in the tree be p₁, ..., pₖ (the prime factors of p − 1, with multiplicity). By the inductive hypothesis, the tree rooted at pᵢ contains at most 4 log₂ pᵢ − 4 values, so the entire tree contains at most

1 + Σᵢ (4 log₂ pᵢ − 4) = 4 log₂(p₁ ⋯ pₖ) − 4k + 1 ≤ 4 log₂(p − 1) − 7 < 4 log₂ p − 4,

since k ≥ 2 and p₁ ⋯ pₖ = p − 1. Since each value has at most log₂ n bits, this also demonstrates that the certificate has a size of O((log n)²) bits.

Since there are O(log n) values other than 2, and each requires at most one exponentiation to verify (and exponentiations dominate the running time), the total time is O((log n)³ (log log n)(log log log n)), or Õ((log n)³), which is quite feasible for numbers in the range that computational number theorists usually work with.

However, while useful in theory and easy to verify, actually generating a Pratt certificate for n requires factoring n − 1 and other potentially large numbers. This is simple for some special numbers such as Fermat primes, but currently much more difficult than simple primality testing for large primes of general form.
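For small n, where factoring n − 1 by trial division is cheap, generating a certificate is nevertheless easy; the sketch below (an illustration written for this article, not an algorithm from the sources above) builds the tree by factoring n − 1 and searching exhaustively for a witness.

```python
def trial_factor(m: int) -> list:
    """Return the distinct prime factors of m by trial division (small m only)."""
    factors, d = [], 2
    while d * d <= m:
        if m % d == 0:
            factors.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors

def make_pratt_certificate(p: int):
    """Build a (p, a, children) certificate for a prime p, or return None
    if p is not prime. Feasible only for small p: it factors p - 1 by
    trial division and searches exhaustively for a witness a."""
    if p == 2:
        return (2, None, [])
    factors = trial_factor(p - 1)
    children = [make_pratt_certificate(q) for q in factors]
    if any(c is None for c in children):
        return None
    for a in range(2, p):
        if pow(a, p - 1, p) == 1 and all(pow(a, (p - 1) // q, p) != 1 for q in factors):
            return (p, a, children)
    return None

# 229 is prime: a certificate is found (its witnesses may differ from the
# ones shown above) and can be checked with a verifier like the earlier sketch.
assert make_pratt_certificate(229) is not None
assert make_pratt_certificate(85) is None      # 85 = 5 * 17 is composite
```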

Atkin–Goldwasser–Kilian–Morain certificates

To address the problem of efficient certificate generation for larger numbers, in 1986 Shafi Goldwasser and Joe Kilian described a new type of certificate based on the theory of elliptic curves. [2] This was in turn used by A. O. L. Atkin and François Morain as the basis for Atkin–Goldwasser–Kilian–Morain certificates, which are the type of certificates generated and verified by elliptic curve primality proving systems. [3] Just as Pratt certificates are based on Lucas's theorem, Atkin–Goldwasser–Kilian–Morain certificates are based on the following theorem of Goldwasser and Kilian (lemma 2 of "Almost All Primes Can Be Quickly Certified"):

Theorem: Suppose we are given:
  • a positive integer n not divisible by 2 or 3;
  • Mx, My, A, B in ℤ_n (the integers mod n) satisfying My² = Mx³ + A·Mx + B and with 4A³ + 27B² coprime to n;
  • a prime q > (n^(1/4) + 1)².
Then M = (Mx, My) is a non-identity point on the elliptic curve y² = x³ + Ax + B. Let kM be M added to itself k times using standard elliptic-curve addition. Then, if qM is the identity element I, then n is prime.

Technically, an elliptic curve can only be constructed over a field, and ℤ_n is only a field if n is prime, so we seem to be assuming the result we're trying to prove. The difficulty arises in the elliptic-curve addition algorithm, which takes inverses modulo n, and these may not exist in ℤ_n when n is composite. However, it can be shown (lemma 1 of "Almost All Primes Can Be Quickly Certified") that if we merely perform computations as though the curve were well-defined and do not at any point attempt to invert an element with no inverse, the result is still valid; if we do encounter an element with no inverse, this establishes that n is composite.
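The sketch below illustrates this mechanism (a simplified illustration written for this article, not the Goldwasser–Kilian algorithm itself): point addition on y² = x³ + Ax + B is carried out modulo n, and a failed inversion surfaces a divisor of n, ending the computation with evidence of compositeness.

```python
from math import gcd

class FactorFound(Exception):
    """Raised when an element has no inverse modulo n; the gcd it carries
    is a divisor of n greater than 1, which (when proper) proves n composite."""

def inv_mod(d, n):
    g = gcd(d % n, n)
    if g != 1:
        raise FactorFound(g)
    return pow(d, -1, n)

def ec_add(P, Q, A, n):
    """Add P and Q (None = identity) on y^2 = x^3 + Ax + B modulo n;
    the coefficient B is not needed by the addition formulas."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if (x1 - x2) % n == 0:
        if (y1 + y2) % n == 0:
            return None                                        # P + (-P) = identity
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1, n) % n       # doubling
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, n) % n              # generic addition
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, P, A, n):
    """Compute kP by double-and-add, as a verifier would when checking qM."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, A, n)
        if k > 1:
            P = ec_add(P, P, A, n)
        k >>= 1
    return R
```

A verifier for an Atkin–Goldwasser–Kilian–Morain certificate would compute qM with routines like these and accept only if the result is the identity (None here); a FactorFound exception instead ends the verification by exhibiting that n is composite.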

To derive a certificate from this theorem, we first encode Mx, My, A, B, and q, then recursively encode the proof of primality for q < n, continuing until we reach a known prime. This certificate has size O((log n)²) and can be verified in O((log n)⁴) time. Moreover, the algorithm that generates these certificates can be shown to be expected polynomial time for all but a small fraction of primes, and this fraction decreases exponentially with the size of the primes. Consequently, it's well-suited to generating certified large random primes, an application that is important in cryptographic applications such as generating provably valid RSA keys.

Pocklington-based certificates

Provable prime generation based on variants of Pocklington's theorem (see Pocklington primality test) [4] can be an efficient technique for generating primes (the cost is generally less than that of probabilistic generation) with the added benefit of built-in primality certificates. While these may seem to be special primes, note that every prime integer could be generated by a Pocklington-based provable generation algorithm.

Pocklington Primality Tests

Let P = R·F + 1, where F = q₁^(e₁) q₂^(e₂) ⋯ qₜ^(eₜ) with the qⱼ distinct primes and each eⱼ an integer greater than zero, and suppose there is a witness a such that:

1. a^(P − 1) ≡ 1 (mod P),    (1)

2. gcd(a^((P − 1)/qⱼ) − 1, P) = 1 for all qⱼ.    (2)

Then P is prime if one of the following holds:

a) F ≥ √P (see [5]), or equivalently F > R.    (a)

b) F³ ≥ P (see [6] :Corollary 1), together with a condition on the base-F representation of P:
  • write P = c₂·F² + c₁·F + 1 with 0 ≤ c₁, c₂ < F (so that R = c₂·F + c₁);
  • then c₁² − 4c₂ must not be a perfect square.    (b)

Pocklington Primality Certificate

A Pocklington primality certificate consists of the prime P, the set of primes qⱼ dividing F (and hence P − 1), each with its own Pocklington primality certificate or small enough to be a known prime, and a witness a.

The number of bits needed for this certificate (and the order of the computational cost of verifying it) is smaller for version (b) than for version (a), since version (b) only requires the factored part F to satisfy F³ ≥ P rather than F² ≥ P.

A Small Example

Let P = 23. Then P − 1 = 22 = 2 · 11, so we may take F = 11 (a single prime q₁ = 11 with e₁ = 1) and R = 2. Note that F² = 121 ≥ 23 = P, so criterion (a) applies. Taking the witness a = 5: 5^22 ≡ 1 (mod 23) and gcd(5^(22/11) − 1, 23) = gcd(24, 23) = 1, so conditions (1) and (2) hold and P = 23 is prime; the remaining prime 11 is small enough to be accepted as known.
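A sketch of the corresponding check for criterion (a), written for this article with the parameter names of the statement above, is:

```python
from math import gcd

def pocklington_check_a(P, a, F, prime_factors_of_F):
    """Criterion (a): conditions (1) and (2) together with F >= sqrt(P).

    Assumes F divides P - 1 and prime_factors_of_F lists the distinct
    primes q_j dividing F (each of which needs its own certificate or
    must be a known prime)."""
    R = (P - 1) // F
    if R * F != P - 1:
        return False
    if F * F < P:                      # need F >= sqrt(P)
        return False
    if pow(a, P - 1, P) != 1:          # condition (1)
        return False
    for q in prime_factors_of_F:       # condition (2)
        if gcd(pow(a, (P - 1) // q, P) - 1, P) != 1:
            return False
    return True

# The small example above: P = 23, F = 11, R = 2, witness a = 5.
assert pocklington_check_a(23, 5, 11, [11])
```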

Impact of "PRIMES is in P"

"PRIMES is in P" [7] was a breakthrough in theoretical computer science. This article, published by Manindra Agrawal, Nitin Saxena, and Neeraj Kayal in August 2002, proves that the famous problem of checking primality of a number can be solved deterministically in polynomial time. The authors received the 2006 Gödel Prize and 2006 Fulkerson Prize for this work.

Because primality testing can now be done deterministically in polynomial time using the AKS primality test, a prime number could itself be considered a certificate of its own primality. This test runs in Õ((log n)⁶) time. In practice this method of verification is more expensive than the verification of Pratt certificates, but does not require any computation to determine the certificate itself.

Related Research Articles

Chinese remainder theorem: Theorem for solving simultaneous congruences

In mathematics, the Chinese remainder theorem states that if one knows the remainders of the Euclidean division of an integer n by several integers, then one can determine uniquely the remainder of the division of n by the product of these integers, under the condition that the divisors are pairwise coprime.

Prime number: Number divisible only by 1 or itself

A prime number is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order.

Quadratic reciprocity: Gives conditions for the solvability of quadratic equations modulo prime numbers

In number theory, the law of quadratic reciprocity is a theorem about modular arithmetic that gives conditions for the solvability of quadratic equations modulo prime numbers. Due to its subtlety, it has many formulations, but the most standard statement is: for distinct odd primes p and q, the Legendre symbols satisfy (p/q)(q/p) = (−1)^(((p−1)/2)·((q−1)/2)).

The Riemann hypothesis is one of the most important conjectures in mathematics. It is a statement about the zeros of the Riemann zeta function. Various geometrical and arithmetical objects can be described by so-called global L-functions, which are formally similar to the Riemann zeta-function. One can then ask the same question about the zeros of these L-functions, yielding various generalizations of the Riemann hypothesis. Many mathematicians believe these generalizations of the Riemann hypothesis to be true. The only cases of these conjectures which have been proven occur in the algebraic function field case.

The Fermat primality test is a probabilistic test to determine whether a number is a probable prime.

A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. Factorization is thought to be a computationally difficult problem, whereas primality testing is comparatively easy. Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might more accurately be called compositeness tests instead of primality tests.

The Miller–Rabin primality test or Rabin–Miller primality test is a probabilistic primality test: an algorithm which determines whether a given number is likely to be prime, similar to the Fermat primality test and the Solovay–Strassen primality test.

Trapdoor function: One-way cryptographic tool

In theoretical computer science and cryptography, a trapdoor function is a function that is easy to compute in one direction, yet difficult to compute in the opposite direction without special information, called the "trapdoor". Trapdoor functions are a special case of one-way functions and are widely used in public-key cryptography.

In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is, the factorial (n − 1)! satisfies (n − 1)! ≡ −1 (mod n) exactly when n is prime.

In mathematics, the Lucas–Lehmer test (LLT) is a primality test for Mersenne numbers. The test was originally developed by Édouard Lucas in 1878 and subsequently proved by Derrick Henry Lehmer in 1930.

In computational number theory, the Lucas test is a primality test for a natural number n; it requires that the prime factors of n − 1 be already known. It is the basis of the Pratt certificate that gives a concise verification that n is prime.

The AKS primality test is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in an article titled "PRIMES is in P". The algorithm was the first that can provably determine in polynomial time whether a given number is prime or composite, without relying on mathematical conjectures such as the generalized Riemann hypothesis. The proof is also notable for not relying on the field of analysis. In 2006 the authors received both the Gödel Prize and Fulkerson Prize for their work.

The Solovay–Strassen primality test, developed by Robert M. Solovay and Volker Strassen in 1977, is a probabilistic test to determine if a number is composite or probably prime. The idea behind the test was discovered by M. M. Artjuhov in 1967 (see Theorem E in the paper). This test has been largely superseded by the Baillie–PSW primality test and the Miller–Rabin primality test, but has great historical importance in showing the practical feasibility of the RSA cryptosystem. The Solovay–Strassen test is essentially an Euler–Jacobi probable prime test.

Lucas pseudoprimes and Fibonacci pseudoprimes are composite integers that pass certain tests which all primes and very few composite numbers pass: in this case, criteria relative to some Lucas sequence.

In mathematics, the interplay between the Galois group G of a Galois extension L of a number field K, and the way the prime ideals P of the ring of integers OK factorise as products of prime ideals of OL, provides one of the richest parts of algebraic number theory. The splitting of prime ideals in Galois extensions is sometimes attributed to David Hilbert by calling it Hilbert theory. There is a geometric analogue, for ramified coverings of Riemann surfaces, which is simpler in that only one kind of subgroup of G need be considered, rather than two. This was certainly familiar before Hilbert.

In additive number theory, Fermat's theorem on sums of two squares states that an odd prime p can be expressed as p = x² + y² with integers x and y if and only if p ≡ 1 (mod 4).

Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve.

An important aspect in the study of elliptic curves is devising effective ways of counting points on the curve. There have been several approaches to do so, and the algorithms devised have proved to be useful tools in the study of various fields such as number theory, and more recently in cryptography and digital signature authentication. While in number theory they have important consequences in the solving of Diophantine equations, with respect to cryptography they enable us to make effective use of the difficulty of the discrete logarithm problem (DLP) for the group E(𝔽_q) of points on an elliptic curve over a finite field 𝔽_q, where q = p^k and p is a prime. The DLP, as it has come to be known, is a widely used approach to public-key cryptography, and the difficulty in solving this problem determines the level of security of the cryptosystem. This article covers algorithms to count points on elliptic curves over fields of large characteristic, in particular p > 3. For curves over fields of small characteristic, more efficient algorithms based on p-adic methods exist.

In mathematics, the Pocklington–Lehmer primality test is a primality test devised by Henry Cabourn Pocklington and Derrick Henry Lehmer. The test uses a partial factorization of n − 1 to prove that an integer n is prime.

In mathematics, elliptic curve primality testing techniques, or elliptic curve primality proving (ECPP), are among the quickest and most widely used methods in primality proving. It is an idea put forward by Shafi Goldwasser and Joe Kilian in 1986 and turned into an algorithm by A. O. L. Atkin the same year. The algorithm was altered and improved by several collaborators subsequently, and notably by Atkin and François Morain, in 1993. The concept of using elliptic curves in factorization had been developed by H. W. Lenstra in 1985, and the implications for its use in primality testing followed quickly.

References

  1. Pratt, Vaughan (1975). "Every prime has a succinct certificate". SIAM Journal on Computing. 4: 214–220.
  2. Goldwasser, S.; Kilian, J. (1986). "Almost All Primes Can Be Quickly Certified". Proceedings of the 18th Annual ACM Symposium on Theory of Computing (STOC). pp. 316–329.
  3. Atkin, A. O. L.; Morain, F. (1993). "Elliptic curves and primality proving". Mathematics of Computation. 61 (203): 29–68. Bibcode:1993MaCom..61...29A. doi:10.1090/s0025-5718-1993-1199989-x. JSTOR 2152935. MR 1199989.
  4. Pocklington, Henry C. (1914–1916). "The determination of the prime or composite nature of large numbers by Fermat's theorem". Proceedings of the Cambridge Philosophical Society. 18: 29–30.
  5. Crandall, Richard; Pomerance, Carl (2005). Prime Numbers: A Computational Perspective (2nd ed.). New York: Springer-Verlag.
  6. Brillhart, John; Lehmer, D. H.; Selfridge, J. L. (April 1975). "New Primality Criteria and Factorizations of 2^m ± 1". Mathematics of Computation. 29 (130): 620–647. doi:10.1090/S0025-5718-1975-0384673-1. JSTOR 2005583.
  7. Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin (September 2004). "PRIMES is in P". Annals of Mathematics. 160 (2): 781–793. doi:10.4007/annals.2004.160.781. JSTOR 3597229. MR 2123939.