Trapdoor function

[Figure: Trapdoor permutation. The idea of a trapdoor function: a trapdoor function f with its trapdoor t can be generated by an algorithm Gen. f can be efficiently computed, i.e., in probabilistic polynomial time, but computing the inverse of f is generally hard unless the trapdoor t is given.]

In theoretical computer science and cryptography, a trapdoor function is a function that is easy to compute in one direction, yet difficult to compute in the opposite direction (finding its inverse) without special information, called the "trapdoor". Trapdoor functions are a special case of one-way functions and are widely used in public-key cryptography. [2]

In mathematical terms, if f is a trapdoor function, then there exists some secret information t, such that given f(x) and t, it is easy to compute x. Consider a padlock and its key. It is trivial to change the padlock from open to closed without using the key, by pushing the shackle into the lock mechanism. Opening the padlock easily, however, requires the key to be used. Here the key t is the trapdoor and the padlock is the trapdoor function.

An example of a simple mathematical trapdoor is "6895601 is the product of two prime numbers. What are those numbers?" A typical "brute-force" solution would be to try dividing 6895601 by many prime numbers until finding the answer. However, if one is told that 1931 is one of the numbers, one can find the answer by entering "6895601 ÷ 1931" into any calculator. This example is not a sturdy trapdoor function – modern computers can guess all of the possible answers within a second – but this sample problem could be improved by using the product of two much larger primes.
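As a rough sketch of the gap in effort, the snippet below contrasts brute-force trial division with the single division enabled by the hint; it is purely illustrative:

```python
# Toy illustration of the factoring "trapdoor": recovering the factors of
# 6895601 by brute-force trial division versus a single division once the
# hint 1931 is known. Purely illustrative; real moduli are far larger.

n = 6895601

# Without the trapdoor: try candidate divisors until one works.
p = next(d for d in range(2, n) if n % d == 0)
print(p, n // p)        # 1931 3571

# With the trapdoor: one division suffices.
hint = 1931
print(hint, n // hint)  # 1931 3571
```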

Trapdoor functions came to prominence in cryptography in the mid-1970s with the publication of asymmetric (or public-key) encryption techniques by Diffie, Hellman, and Merkle. Indeed, Diffie & Hellman (1976) coined the term. Several function classes were proposed, and it soon became clear that trapdoor functions are harder to find than initially thought. For example, an early suggestion was to use schemes based on the subset sum problem, which quickly turned out to be unsuitable.

As of 2004, the best known trapdoor function (family) candidates are the RSA and Rabin families of functions. Both are written as exponentiation modulo a composite number, and both are related to the problem of prime factorization.

Functions related to the hardness of the discrete logarithm problem (either modulo a prime or in a group defined over an elliptic curve) are not known to be trapdoor functions, because there is no known "trapdoor" information about the group that enables the efficient computation of discrete logarithms.

A trapdoor in cryptography has the very specific aforementioned meaning and is not to be confused with a backdoor (these are frequently used interchangeably, which is incorrect). A backdoor is a deliberate mechanism that is added to a cryptographic algorithm (e.g., a key pair generation algorithm, digital signing algorithm, etc.) or operating system, for example, that permits one or more unauthorized parties to bypass or subvert the security of the system in some fashion.

Definition

A trapdoor function is a collection of one-way functions { f_k : D_k → R_k } (k ∈ K), in which K, D_k, and R_k are all subsets of the binary strings {0, 1}*, satisfying the following conditions: [3] [4] [5]

  1. There exists a probabilistic polynomial-time (PPT) sampling algorithm Gen such that Gen(1^n) = (k, t_k), with k ∈ K ∩ {0, 1}^n and t_k ∈ {0, 1}* satisfying |t_k| < p(n) for some polynomial p. Each t_k is called the trapdoor corresponding to f_k, and each trapdoor can be efficiently sampled.
  2. Given input k, there also exists a PPT algorithm that outputs f_k(x) for any x ∈ D_k, i.e., each f_k can be efficiently computed.
  3. For any k ∈ K, without the trapdoor t_k, no PPT algorithm can invert f_k with more than negligible probability; that is, given only k and f_k(x) for a random x, it is hard to find any x′ with f_k(x′) = f_k(x).
  4. For any k ∈ K, given the trapdoor t_k, there exists a PPT algorithm that, for any y in the image of f_k, outputs an x such that f_k(x) = y.

If each function in the collection above is a one-way permutation, then the collection is also called a trapdoor permutation. [6]
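Read operationally, the definition amounts to three algorithms: a sampler Gen that outputs an index k together with its trapdoor t_k, a public evaluation map, and an inversion procedure that is efficient only given t_k. The sketch below captures this interface; the type aliases and names are illustrative and not taken from the cited definitions:

```python
# A minimal sketch of the interface described by the conditions above.
# The type aliases and names are illustrative, not from the cited sources.
from typing import Callable, Tuple

Index = bytes     # k: selects one function f_k from the collection
Trapdoor = bytes  # t_k: the secret that makes inversion easy

# Gen(1^n) -> (k, t_k): PPT sampling of an index and its trapdoor (condition 1)
Gen = Callable[[int], Tuple[Index, Trapdoor]]

# (k, x) -> f_k(x): the easy forward direction (condition 2)
Evaluate = Callable[[Index, bytes], bytes]

# (t_k, y) -> x with f_k(x) = y: easy with t_k (condition 4),
# believed infeasible without it (condition 3)
Invert = Callable[[Trapdoor, bytes], bytes]
```

The RSA and Rabin examples below are concrete instantiations of this interface.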

Examples

In the following two examples, we assume that it is difficult to factor a large composite number (see Integer factorization).

RSA Assumption

In this example, the inverse d of e modulo φ(n) (Euler's totient function of n) is the trapdoor:

f(x) = x^e mod n

If the factorization of n is known, then φ(n) can be computed. With this, the inverse d of e can be computed as d ≡ e^(−1) (mod φ(n)), and then, given y = f(x), we can find x = y^d mod n. Its hardness follows from the RSA assumption. [7]
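As a toy instance of this trapdoor, the sketch below reuses the small primes from the earlier factoring example; this is a minimal sketch, not a production RSA implementation, and it assumes Python 3.8+ for the modular-inverse form of pow:

```python
# Toy RSA trapdoor function with deliberately small primes (illustrative
# only; real RSA uses primes of 1024 bits or more).

p, q = 1931, 3571            # secret factorization of n (the trapdoor source)
n = p * q                    # public modulus (6895601)
phi = (p - 1) * (q - 1)      # φ(n), computable only if p and q are known
e = 65537                    # public exponent, coprime to φ(n)
d = pow(e, -1, phi)          # the trapdoor: d ≡ e^(−1) (mod φ(n))

def f(x):
    """Easy direction: anyone knowing (n, e) can compute x^e mod n."""
    return pow(x, e, n)

def invert(y):
    """Easy only with the trapdoor d: y^d mod n recovers x."""
    return pow(y, d, n)

x = 42
assert invert(f(x)) == x
```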

Rabin's Quadratic Residue Assumption

Let n be a large composite number such that n = pq, where p and q are large primes with p ≡ 3 (mod 4) and q ≡ 3 (mod 4), kept confidential from the adversary. The problem is to compute z given a, such that a ≡ z^2 (mod n). The trapdoor is the factorization of n. With the trapdoor, the solutions for z can be given as cx + dy, cx − dy, −cx + dy, −cx − dy (mod n), where a ≡ x^2 (mod p), a ≡ y^2 (mod q), c ≡ 1 (mod p), c ≡ 0 (mod q), d ≡ 0 (mod p), and d ≡ 1 (mod q). See Chinese remainder theorem for more details. Note that, given the primes p and q, we can find x ≡ a^((p+1)/4) (mod p) and y ≡ a^((q+1)/4) (mod q). Here the conditions p ≡ 3 (mod 4) and q ≡ 3 (mod 4) guarantee that the solutions x and y are well defined. [8]
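A minimal sketch of this inversion with the trapdoor, assuming small primes of my own choosing (both ≡ 3 mod 4); the helper name square_roots_mod_n is illustrative:

```python
# Toy Rabin inversion: extracting the four square roots of a mod n = p*q
# using the factorization as the trapdoor. Small primes for illustration
# only. Requires Python 3.8+ for pow with a negative exponent.

p, q = 103, 107                  # the secret trapdoor, both ≡ 3 (mod 4)
n = p * q

def square_roots_mod_n(a):
    """Return the four solutions z of z^2 ≡ a (mod n), given p and q."""
    x = pow(a, (p + 1) // 4, p)  # root of a mod p; needs p ≡ 3 (mod 4)
    y = pow(a, (q + 1) // 4, q)  # root of a mod q; needs q ≡ 3 (mod 4)
    # CRT coefficients: c ≡ 1 (mod p), 0 (mod q); d ≡ 0 (mod p), 1 (mod q)
    c = q * pow(q, -1, p)
    d = p * pow(p, -1, q)
    return {(s * c * x + t * d * y) % n for s in (1, -1) for t in (1, -1)}

z = 4321
a = pow(z, 2, n)
assert z in square_roots_mod_n(a)
```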

Notes

  1. Ostrovsky, pp. 6–9.
  2. Bellare, M. (June 1998). "Many-to-one trapdoor functions and their relation to public-key cryptosystems". Advances in Cryptology — CRYPTO '98. Lecture Notes in Computer Science. Vol. 1462. pp. 283–298. doi:10.1007/bfb0055735. ISBN 978-3-540-64892-5. S2CID 215825522.
  3. Pass's notes, def. 56.1.
  4. Goldwasser's lecture notes, def. 2.16.
  5. Ostrovsky, pp. 6–10, def. 11.
  6. Pass's notes, def. 56.1; Dodis's def. 7, lecture 1.
  7. Goldwasser's lecture notes, 2.3.2; Lindell's notes, p. 17, Ex. 1.
  8. Goldwasser's lecture notes, 2.3.4.

Related Research Articles

RSA (Rivest–Shamir–Adleman) is a public-key cryptosystem, one of the oldest, that is widely used for secure data transmission. The acronym "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at Government Communications Headquarters (GCHQ) by the English mathematician Clifford Cocks. That system was declassified in 1997.

Fermat's little theorem states that if p is a prime number, then for any integer a, the number a^p − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as a^p ≡ a (mod p).
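A quick numeric check of the congruence (an illustrative snippet, not from the theorem's article):

```python
# Fermat's little theorem: a^p ≡ a (mod p) for any integer a and prime p.
p = 1931                         # a prime (from the factoring example above)
for a in (2, 10, 123456789):
    assert pow(a, p, p) == a % p
```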

In computational number theory, the Lucas test is a primality test for a natural number n; it requires that the prime factors of n − 1 be already known. It is the basis of the Pratt certificate that gives a concise verification that n is prime.

The Rabin cryptosystem is a family of public-key encryption schemes based on a trapdoor function whose security, like that of RSA, is related to the difficulty of integer factorization.

The Paillier cryptosystem, invented by and named after Pascal Paillier in 1999, is a probabilistic asymmetric algorithm for public key cryptography. The problem of computing n-th residue classes is believed to be computationally difficult. The decisional composite residuosity assumption is the intractability hypothesis upon which this cryptosystem is based.

Pollard's rho algorithm is an algorithm for integer factorization. It was invented by John Pollard in 1975. It uses only a small amount of space, and its expected running time is proportional to the square root of the smallest prime factor of the composite number being factorized.

The quadratic sieve algorithm (QS) is an integer factorization algorithm and, in practice, the second fastest method known. It is still the fastest for integers under 100 decimal digits or so, and is considerably simpler than the number field sieve. It is a general-purpose factorization algorithm, meaning that its running time depends solely on the size of the integer to be factored, and not on special structure or properties. It was invented by Carl Pomerance in 1981 as an improvement to Schroeppel's linear sieve.

In number theory, Dixon's factorization method is a general-purpose integer factorization algorithm; it is the prototypical factor base method. Unlike for other factor base methods, its run-time bound comes with a rigorous proof that does not rely on conjectures about the smoothness properties of the values taken by a polynomial.

The ElGamal signature scheme is a digital signature scheme which is based on the difficulty of computing discrete logarithms. It was described by Taher Elgamal in 1985.

The Goldwasser–Micali (GM) cryptosystem is an asymmetric key encryption algorithm developed by Shafi Goldwasser and Silvio Micali in 1982. GM has the distinction of being the first probabilistic public-key encryption scheme which is provably secure under standard cryptographic assumptions. However, it is not an efficient cryptosystem, as ciphertexts may be several hundred times larger than the initial plaintext. To prove the security properties of the cryptosystem, Goldwasser and Micali proposed the widely used definition of semantic security.

The Blum–Goldwasser (BG) cryptosystem is an asymmetric key encryption algorithm proposed by Manuel Blum and Shafi Goldwasser in 1984. Blum–Goldwasser is a probabilistic, semantically secure cryptosystem with a constant-size ciphertext expansion. The encryption algorithm implements an XOR-based stream cipher using the Blum-Blum-Shub (BBS) pseudo-random number generator to generate the keystream. Decryption is accomplished by manipulating the final state of the BBS generator using the private key, in order to find the initial seed and reconstruct the keystream.

Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve.

In cryptography, the Rabin signature algorithm is a method of digital signature originally proposed by Michael O. Rabin in 1978.

The Benaloh cryptosystem is an extension of the Goldwasser–Micali cryptosystem (GM) created in 1985 by Josh (Cohen) Benaloh. The main improvement of the Benaloh cryptosystem over GM is that longer blocks of data can be encrypted at once, whereas in GM each bit is encrypted individually.

In cryptography, the Feige–Fiat–Shamir identification scheme is a type of parallel zero-knowledge proof developed by Uriel Feige, Amos Fiat, and Adi Shamir in 1988. Like all zero-knowledge proofs, it allows one party, the Prover, to prove to another party, the Verifier, that they possess secret information without revealing to the Verifier what that secret information is. The Feige–Fiat–Shamir identification scheme, however, uses modular arithmetic and a parallel verification process that limits the number of communications between Prover and Verifier.

In computational number theory, a factor base is a small set of prime numbers commonly used as a mathematical tool in algorithms involving extensive sieving for potential factors of a given integer.

In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m. In the standard notation of modular arithmetic this congruence is written as ax ≡ 1 (mod m).
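In Python 3.8+, such an inverse can be computed directly with the three-argument pow; a small illustrative check:

```python
# Modular multiplicative inverse: find x with a*x ≡ 1 (mod m).
a, m = 65537, 6890100            # values from the toy RSA sketch above
x = pow(a, -1, m)                # raises ValueError if gcd(a, m) != 1
assert (a * x) % m == 1
```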

In cryptography, Very Smooth Hash (VSH) is a provably secure cryptographic hash function invented in 2005 by Scott Contini, Arjen Lenstra and Ron Steinfeld. Provably secure means that finding collisions is as difficult as some known hard mathematical problem. Unlike other provably secure collision-resistant hashes, VSH is efficient and usable in practice. Asymptotically, it only requires a single multiplication per log(n) message-bits and uses RSA-type arithmetic. Therefore, VSH can be useful in embedded environments where code space is limited.

In mathematics, elliptic curve primality testing techniques, or elliptic curve primality proving (ECPP), are among the quickest and most widely used methods in primality proving. It is an idea put forward by Shafi Goldwasser and Joe Kilian in 1986 and turned into an algorithm by A. O. L. Atkin the same year. The algorithm was altered and improved by several collaborators subsequently, and notably by Atkin and François Morain, in 1993. The concept of using elliptic curves in factorization had been developed by H. W. Lenstra in 1985, and the implications for its use in primality testing followed quickly.

In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them.
