Generic group model

The generic group model [1] [2] is an idealised cryptographic model in which the adversary is given access only to a randomly chosen encoding of a group, rather than to the efficient encodings used in practice, such as those of finite-field or elliptic-curve groups.

The model includes an oracle that performs the group operation: it takes the encodings of two group elements as input and outputs the encoding of a third element. If the group is equipped with a pairing, the pairing is modelled as an additional oracle.
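
The oracle can be pictured as a table that maps hidden group elements to fresh random labels. The following toy Python sketch (invented for illustration; it is not taken from the cited papers) models the additive group Z_n in this way: the adversary only ever sees opaque labels and must call op() to combine them, while the _enc/_dec tables represent the challenger's private state.

    import secrets

    class GenericGroupOracle:
        """Toy random encoding of the additive group Z_n (illustrative only)."""

        def __init__(self, n):
            self.n = n
            self._enc = {}   # hidden exponent -> random label (challenger's secret state)
            self._dec = {}   # random label -> hidden exponent

        def _label(self, x):
            x %= self.n
            if x not in self._enc:
                label = secrets.token_hex(16)      # fresh random encoding
                self._enc[x] = label
                self._dec[label] = x
            return self._enc[x]

        def generator(self):
            return self._label(1)

        def op(self, a, b):
            # group-operation oracle: two encodings in, the encoding of their "sum" out
            return self._label(self._dec[a] + self._dec[b])

        def inv(self, a):
            # inversion oracle; Shoup's model lets the adversary subtract as well as add
            return self._label(-self._dec[a])

An inversion oracle is included because, in Shoup's formulation, the adversary may request both sums and differences of the hidden exponents.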

One of the main uses of the generic group model is to analyse computational hardness assumptions. An analysis in the generic group model can answer the question: "What is the fastest generic algorithm for breaking a cryptographic hardness assumption?" A generic algorithm is an algorithm that only makes use of the group operation and does not exploit the encoding of the group. This question was answered for the discrete logarithm problem by Victor Shoup using the generic group model. [1] Other results in the generic group model include further lower bounds on generic algorithms in groups. [3] The model can also be extended to other algebraic structures such as rings. [4]
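
To make "generic" concrete, here is a sketch of the baby-step giant-step method written against the hypothetical oracle above: it recovers a discrete logarithm with roughly 2·sqrt(n) oracle calls, the order of magnitude that Shoup's bound shows no generic algorithm can beat in groups of prime order.

    import math

    def generic_bsgs(oracle, g, h, n):
        """Find x with h = g^x using only oracle calls (about 2*sqrt(n) of them)."""
        m = math.isqrt(n - 1) + 1
        # baby steps: remember the labels of g^0, g^1, ..., g^(m-1)
        baby = {}
        cur = oracle.op(g, oracle.inv(g))          # the identity element g^0
        for j in range(m):
            baby.setdefault(cur, j)
            cur = oracle.op(cur, g)                # cur is now g^(j+1)
        # giant steps: walk h, h*g^(-m), h*g^(-2m), ... until the table is hit
        step = oracle.inv(cur)                     # cur equals g^m here, so step is g^(-m)
        cur = h
        for i in range(m + 1):
            if cur in baby:
                return (i * m + baby[cur]) % n
            cur = oracle.op(cur, step)
        return None

    # toy challenge: the challenger peeks at its own table to set up h = g^57
    G = GenericGroupOracle(101)
    g = G.generator()
    h = G._label(57)
    assert generic_bsgs(G, g, h, 101) == 57

Note that the algorithm never inspects the labels themselves; it only compares them for equality and feeds them back into the oracle, which is exactly what "generic" means.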

The generic group model suffers from some of the same problems as the random oracle model. In particular, it has been shown [5], by an argument similar to the one used against the random oracle model [6], that there exist cryptographic schemes which are provably secure in the generic group model but which are trivially insecure once the random group encoding is replaced with an efficiently computable instantiation of the encoding function.

Related Research Articles

In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption.

In mathematics, for given real numbers a and b, the logarithm log_b a is a number x such that b^x = a. Analogously, in any group G, powers b^k can be defined for all integers k, and the discrete logarithm log_b a is an integer k such that b^k = a. In number theory, the more commonly used term is index: we can write x = ind_r a (mod m) for r^x ≡ a (mod m) if r is a primitive root of m and gcd(a, m) = 1.
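
As a small illustration of the index notation (a brute-force toy, nothing like a practical method for large moduli), the following hypothetical Python snippet searches for the index directly:

    def index(r, a, m):
        """Brute-force the index x of a to the base r modulo m, i.e. r^x ≡ a (mod m)."""
        x, power = 0, 1
        while power != a % m:
            power = (power * r) % m
            x += 1
            if x >= m:            # a is not a power of r modulo m
                return None
        return x

    # 3 is a primitive root of 7, and 3^4 = 81 ≡ 4 (mod 7), so ind_3 4 = 4
    assert index(3, 4, 7) == 4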

In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way.

In cryptography, a Schnorr signature is a digital signature produced by the Schnorr signature algorithm, which was described by Claus Schnorr. It is a digital signature scheme known for its simplicity and is among the first whose security is based on the intractability of certain discrete logarithm problems. It is efficient and generates short signatures. It was covered by U.S. Patent 4,995,082, which expired in February 2008.

In cryptography, a random oracle is an oracle that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
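
In proofs and in test harnesses a random oracle is commonly simulated by lazy sampling; the minimal Python sketch below (a hypothetical helper, not a library API) captures the two defining properties: a fresh uniformly random answer for each new query, and consistency on repeats.

    import secrets

    class RandomOracle:
        def __init__(self, out_bytes=32):
            self.out_bytes = out_bytes
            self.table = {}                        # query -> answer fixed on first use

        def query(self, x: bytes) -> bytes:
            if x not in self.table:
                self.table[x] = secrets.token_bytes(self.out_bytes)   # fresh uniform answer
            return self.table[x]

    ro = RandomOracle()
    assert ro.query(b"hello") == ro.query(b"hello")   # repeated queries are consistent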

Provable security refers to any type or level of computer security that can be proved. It is used in different ways by different fields.

The decisional Diffie–Hellman (DDH) assumption is a computational hardness assumption about a certain problem involving discrete logarithms in cyclic groups. It is used as the basis to prove the security of many cryptographic protocols, most notably the ElGamal and Cramer–Shoup cryptosystems.

In cryptography, the RSA problem summarizes the task of performing an RSA private-key operation given only the public key. The RSA algorithm raises a message to an exponent, modulo a composite number N whose factors are not known. Thus, the task can be neatly described as finding the eth roots of an arbitrary number, modulo N. For large RSA key sizes, no efficient method for solving this problem is known; if an efficient method is ever developed, it would threaten the current or eventual security of RSA-based cryptosystems—both for public-key encryption and digital signatures.
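
A toy numerical sketch of this asymmetry (illustrative parameters only; real moduli are thousands of bits long): the public operation is a single modular exponentiation, while taking e-th roots is easy only for someone who knows the factors of N.

    import math

    p, q, e = 61, 53, 17                       # toy factors and public exponent
    N = p * q                                  # public modulus; its factors stay secret
    m = 1234                                   # the "message"
    c = pow(m, e, N)                           # the public RSA operation: c = m^e mod N

    # with the factorisation, taking e-th roots is easy:
    d = pow(e, -1, math.lcm(p - 1, q - 1))     # private exponent
    assert pow(c, d, N) == m                   # recovers m, i.e. an e-th root of c mod N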

The Cramer–Shoup system is an asymmetric key encryption algorithm, and was the first efficient scheme proven to be secure against adaptive chosen ciphertext attack using standard cryptographic assumptions. Its security is based on the computational intractability of the decisional Diffie–Hellman assumption. Developed by Ronald Cramer and Victor Shoup in 1998, it is an extension of the ElGamal cryptosystem. In contrast to ElGamal, which is extremely malleable, Cramer–Shoup adds other elements to ensure non-malleability even against a resourceful attacker. This non-malleability is achieved through the use of a universal one-way hash function and additional computations, resulting in a ciphertext which is twice as large as in ElGamal.

In cryptography, Optimal Asymmetric Encryption Padding (OAEP) is a padding scheme often used together with RSA encryption. OAEP was introduced by Bellare and Rogaway, and subsequently standardized in PKCS#1 v2 and RFC 2437.

The computational Diffie–Hellman (CDH) assumption is a computational hardness assumption about the Diffie–Hellman problem. The CDH assumption involves the problem of computing the discrete logarithm in cyclic groups. The CDH problem illustrates the attack of an eavesdropper in the Diffie–Hellman key exchange protocol to obtain the exchanged secret key.
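
The eavesdropper's view can be made concrete with a toy Diffie–Hellman exchange (illustrative parameters only): the attacker observes p, g, g^a and g^b, and the CDH problem is to compute the shared value g^(ab) from those values alone.

    import secrets

    p, g = 2039, 7                             # toy prime and base; real parameters are far larger
    a = secrets.randbelow(p - 2) + 1           # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1           # Bob's secret exponent

    A = pow(g, a, p)                           # sent in the clear, seen by the eavesdropper
    B = pow(g, b, p)                           # sent in the clear, seen by the eavesdropper
    shared = pow(B, a, p)                      # Alice's copy of the shared secret g^(ab) mod p

    assert shared == pow(A, b, p)              # Bob computes the same; CDH asks for it given only (p, g, A, B)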

The external Diffie–Hellman (XDH) assumption is a computational hardness assumption used in elliptic curve cryptography. The XDH assumption holds that there exist certain subgroups of elliptic curves which have useful properties for cryptography. Specifically, XDH implies the existence of two distinct groups G1 and G2 with the following properties:

  1. The discrete logarithm problem (DLP), the computational Diffie–Hellman problem (CDH), and the computational co-Diffie–Hellman problem are all intractable in G1 and G2.
  2. There exists an efficiently computable bilinear map (pairing) e : G1 × G2 → GT.
  3. The decisional Diffie–Hellman problem (DDH) is intractable in G1.

The Diffie–Hellman problem (DHP) is a mathematical problem first proposed by Whitfield Diffie and Martin Hellman in the context of cryptography. The motivation for this problem is that many security systems use one-way functions: mathematical operations that are fast to compute, but hard to reverse. For example, they enable encrypting a message, but reversing the encryption is difficult. If solving the DHP were easy, these systems would be easily broken.

Plaintext-awareness is a notion of security for public-key encryption. A cryptosystem is plaintext-aware if it is difficult for any efficient algorithm to come up with a valid ciphertext without being aware of the corresponding plaintext.

Victor Shoup is a computer scientist and mathematician. He obtained a PhD in computer science from the University of Wisconsin–Madison in 1989, and he did his undergraduate work at the University of Wisconsin-Eau Claire. He is a professor at the Courant Institute of Mathematical Sciences at New York University, focusing on algorithm and cryptography courses. He has held positions at AT&T Bell Labs, the University of Toronto, Saarland University, and the IBM Zurich Research Laboratory.

In cryptography, the standard model is the model of computation in which the adversary is only limited by the amount of time and computational power available. Other names used are the bare model and the plain model.

In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently. It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.

Lattice-based cryptography is the generic term for constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Lattice-based constructions are currently important candidates for post-quantum cryptography. Unlike more widely used and known public-key schemes such as the RSA, Diffie-Hellman or elliptic-curve cryptosystems—which could, theoretically, be defeated using Shor's algorithm on a quantum computer—some lattice-based constructions appear to be resistant to attack by both classical and quantum computers. Furthermore, many lattice-based constructions are considered to be secure under the assumption that certain well-studied computational lattice problems cannot be solved efficiently.

In cryptography, the Fiat–Shamir heuristic is a technique for taking an interactive proof of knowledge and creating a digital signature based on it. This way, some fact can be publicly proven without revealing underlying information. The technique is due to Amos Fiat and Adi Shamir (1986). For the method to work, the original interactive proof must have the property of being public-coin, i.e. the verifier's random coins are made public throughout the proof protocol.
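
A compact sketch of the heuristic applied to the Schnorr identification protocol (toy, invented parameters; not production code): the verifier's random challenge is replaced by a hash of the prover's commitment and the message, turning the interactive proof into a signature.

    import hashlib, secrets

    p, q, g = 2267, 103, 354                   # toy parameters: q divides p-1 and g has order q mod p
    x = secrets.randbelow(q - 1) + 1           # prover's secret key
    y = pow(g, x, p)                           # public key

    def H(*parts):
        data = b"|".join(str(v).encode() for v in parts)
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def sign(msg):
        k = secrets.randbelow(q - 1) + 1
        r = pow(g, k, p)                       # the prover's commitment
        e = H(r, msg)                          # Fiat–Shamir: the hash replaces the verifier's challenge
        s = (k + x * e) % q                    # the prover's response
        return e, s

    def verify(msg, e, s):
        r = (pow(g, s, p) * pow(y, -e, p)) % p # recompute the commitment from the signature
        return H(r, msg) == e

    e_sig, s_sig = sign("hello")
    assert verify("hello", e_sig, s_sig)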

In cryptography, Very Smooth Hash (VSH) is a provably secure cryptographic hash function invented in 2005 by Scott Contini, Arjen Lenstra and Ron Steinfeld. Provably secure means that finding collisions is as difficult as some known hard mathematical problem. Unlike other provably secure collision-resistant hashes, VSH is efficient and usable in practice. Asymptotically, it only requires a single multiplication per log(n) message-bits and uses RSA-type arithmetic. Therefore, VSH can be useful in embedded environments where code space is limited.

References

  1. Victor Shoup (1997). "Lower bounds for discrete logarithms and related problems" (PDF). Advances in Cryptology – Eurocrypt ’97. Lecture Notes in Computer Science. Vol. 1233. Springer-Verlag. pp. 256–266. Retrieved 2010-04-09.
  2. Ueli Maurer (2005). "Abstract models of computation in cryptography" (PDF). 10th IMA Conference on Cryptography and Coding. Lecture Notes in Computer Science. Vol. 2796. Springer-Verlag. pp. 1–12. Archived from the original (PDF) on 2017-07-06. Retrieved 2007-11-01.
  3. Ueli M. Maurer, Stefan Wolf (1998). "Lower Bounds on Generic Algorithms in Groups". EUROCRYPT 1998. pp. 72–84.
  4. Divesh Aggarwal, Ueli Maurer (2009). "Breaking RSA Generically Is Equivalent to Factoring". EUROCRYPT 2009. pp. 36–53.
  5. Alexander W. Dent (2002). "Adapting the Weaknesses of the Random Oracle Model to the Generic Group Model". ASIACRYPT 2002. pp. 100–109.
  6. Ran Canetti, Oded Goldreich, Shai Halevi (1998). "The Random Oracle Methodology Revisited". STOC 1998. pp. 209–218.