The forking lemma is any of a number of related lemmas in cryptography research. The lemma states that if an adversary (typically a probabilistic Turing machine), on inputs drawn from some distribution, produces an output that has some property with non-negligible probability, then with non-negligible probability, if the adversary is re-run on new inputs but with the same random tape, its second output will also have the property.
This concept was first used by David Pointcheval and Jacques Stern in "Security proofs for signature schemes," published in the proceedings of Eurocrypt 1996. [1] [2] In their paper, the forking lemma is specified in terms of an adversary that attacks a digital signature scheme instantiated in the random oracle model. They show that if an adversary can forge a signature with non-negligible probability, then there is a non-negligible probability that the same adversary with the same random tape can create a second forgery in an attack with a different random oracle. [3] The forking lemma was later generalized by Mihir Bellare and Gregory Neven. [4] It has since been used and further generalized to prove the security of a variety of digital signature schemes and other random-oracle-based cryptographic constructions. [2] [5] [6]
The generalized version of the lemma is stated as follows. [4] Let A be a probabilistic algorithm, with inputs (x, h1, ..., hq; r), that outputs a pair (J, y), where J is an integer with 0 ≤ J ≤ q and r refers to the random tape of A (that is, the random choices A will make). Suppose further that IG is a probability distribution from which x is drawn, and that H is a set of size h from which each of the hi values is drawn according to the uniform distribution. Let acc be the probability that, on inputs distributed as described, the J output by A is greater than or equal to 1.
We can then define a "forking algorithm" FA that proceeds as follows, on input x:

1. Pick a random tape r for A.
2. Pick h1, ..., hq uniformly at random from H.
3. Run A on input (x, h1, ..., hq; r) to produce (J, y).
4. If J = 0, then return (0, 0, 0).
5. Pick h'J, ..., h'q uniformly at random from H.
6. Run A on input (x, h1, ..., hJ−1, h'J, ..., h'q; r) to produce (J', y').
7. If J' = J and hJ ≠ h'J, then return (1, y, y'); otherwise, return (0, 0, 0).
Let frk be the probability that FA outputs a triple starting with 1, given an input x chosen randomly from IG. Then

frk ≥ acc · (acc/q − 1/h).
The idea here is to think of A as running twice in related executions, where the process "forks" at a certain point, after some but not all of the input has been examined. In the second execution, the inputs from the fork point onward are re-generated, but in the normal way. The point at which the process forks may be something we only want to decide later, possibly based on the behavior of A the first time around: this is why the lemma statement chooses the branching point (J) based on the output of A. The requirement that hJ ≠ h'J is a technical one required by many uses of the lemma. (Since both hJ and h'J are chosen uniformly at random from H, the probability that they coincide is 1/h, which is extremely small when h is large, as is usually the case.)
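To make the forking algorithm concrete, the following is a minimal sketch in Python, under the assumption that A is modeled as a deterministic function A(x, hs, r) of its input x, the oracle answers hs = (h1, ..., hq), and an explicit random tape r; the names fork, H (given as a sequence), q, and random_tape are placeholders for this sketch rather than notation from the sources.

    import random

    def fork(A, x, H, q, random_tape):
        r = random_tape()                              # one tape, reused by both runs
        hs = [random.choice(H) for _ in range(q)]      # h_1, ..., h_q drawn uniformly
        J, y = A(x, hs, r)                             # first execution
        if J == 0:
            return (0, None, None)                     # first run failed
        # Fork: keep r and h_1, ..., h_{J-1}; re-draw h_J, ..., h_q afresh.
        hs2 = hs[:J - 1] + [random.choice(H) for _ in range(q - J + 1)]
        J2, y2 = A(x, hs2, r)                          # second execution
        if J2 == J and hs[J - 1] != hs2[J - 1]:
            return (1, y, y2)                          # a successful fork
        return (0, None, None)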
For example, let A be an algorithm for breaking a digital signature scheme in the random oracle model. Then x would be the public parameters (including the public key) A is attacking, and hi would be the output of the random oracle on its ith distinct input. The forking lemma is of use when it would be possible, given two different random signatures of the same message, to solve some underlying hard problem. An adversary that forges once, however, gives rise to one that forges twice on the same message with non-negligible probability through the forking lemma. When A attempts to forge on a message m, we consider the output of A to be (J, y) where y is the forgery, and J is such that m was the Jth unique query to the random oracle (it may be assumed that A will query m at some point, if A is to be successful with non-negligible probability). (If A outputs an incorrect forgery, we consider the output to be (0, y).)
By the forking lemma, the probability (frk) of obtaining two good forgeries y and y' on the same message but with different random oracle outputs (that is, with hJ ≠ h'J) is non-negligible when acc is also non-negligible. This allows us to prove that if the underlying hard problem is indeed hard, then no adversary can forge signatures.
This is the essence of the proof given by Pointcheval and Stern for a modified ElGamal signature scheme against an adaptive adversary.
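To illustrate the extraction step concretely, consider Schnorr signatures (used here only because the algebra is short; the paper itself treats a modified ElGamal scheme). The two forgeries produced by the forking lemma share the random tape and all oracle answers before the fork, so they share the commitment R = g^k; they are (R, s) and (R, s'), valid under challenges e = hJ and e' = h'J with e ≠ e'. Then s = k + e·sk and s' = k + e'·sk modulo the group order, and subtracting eliminates k. A minimal sketch:

    def extract_secret_key(s, e, s2, e2, order):
        # Two valid Schnorr forgeries sharing a commitment R = g^k:
        #   s  = k + e  * sk  (mod order)
        #   s2 = k + e2 * sk  (mod order)
        # Subtracting eliminates k; dividing by (e - e2) recovers sk.
        return ((s - s2) * pow(e - e2, -1, order)) % order

(pow(a, -1, n) computes a modular inverse in Python 3.8 and later.)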
The reduction provided by the forking lemma is not tight. Pointcheval and Stern proposed security arguments for digital signatures and blind signatures using the forking lemma. [7] Claus P. Schnorr provided an attack on blind Schnorr signature schemes [8] requiring more than polylogarithmically many concurrent executions (the polylogarithmic case being the one studied and proven secure by Pointcheval and Stern). A polynomial-time attack for that many concurrent executions was shown in 2020 by Benhamouda, Lepoint, Loss, Orrù, and Raykova. Schnorr also suggested enhancements for securing blind signature schemes based on the discrete logarithm problem. [9]
In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature, where the prerequisites are satisfied, gives a recipient very high confidence that the message was created by a known sender (authenticity), and that the message was not altered in transit (integrity).
Articles related to cryptography include:
In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way.
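As an illustration (a conjectured candidate, not part of the definition above), modular exponentiation with a fixed base is easy to compute, while inverting it is the discrete logarithm problem; the parameters below are toy values chosen only for this sketch:

    # Candidate one-way function f(x) = g^x mod p: easy to evaluate via
    # square-and-multiply, conjectured hard to invert (discrete logarithm).
    # Toy parameters for illustration only -- not a vetted instantiation.
    p = 2**255 - 19          # a well-known prime, used here only as an example modulus
    g = 2                    # example base

    def f(x: int) -> int:
        return pow(g, x, p)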
A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value while keeping it hidden from others, with the ability to reveal the committed value later. Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes are binding. Commitment schemes have important applications in a number of cryptographic protocols including secure coin flipping, zero-knowledge proofs, and secure computation.
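As a sketch of the commit/reveal interface, the following is a minimal hash-based commitment, hiding and binding under the idealized assumption that SHA-256 behaves like a random oracle; the function names are illustrative, not a scheme from the literature cited here:

    import hashlib, secrets

    def commit(value: bytes):
        nonce = secrets.token_bytes(32)             # fresh randomness hides the value
        c = hashlib.sha256(nonce + value).digest()
        return c, nonce                             # publish c; keep (value, nonce) secret

    def open_commitment(c: bytes, value: bytes, nonce: bytes) -> bool:
        # The committer later reveals (value, nonce); anyone can re-check, and
        # changing the value afterwards would require finding a hash collision.
        return hashlib.sha256(nonce + value).digest() == c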
In cryptography, a Schnorr signature is a digital signature produced by the Schnorr signature algorithm that was described by Claus Schnorr. It is a digital signature scheme known for its simplicity, among the first whose security is based on the intractability of certain discrete logarithm problems. It is efficient and generates short signatures. It was covered by U.S. Patent 4,995,082 which expired in February 2008.
In cryptography, a random oracle is an oracle that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
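Security proofs in the random oracle model, including forking-lemma arguments, typically simulate such an oracle by lazy sampling: each new query is answered with a fresh uniform value, which is cached so that repeated queries are answered consistently. A minimal sketch (class and method names are illustrative):

    import secrets

    class RandomOracle:
        def __init__(self, out_bytes: int = 32):
            self.table = {}                  # cache of past queries
            self.out_bytes = out_bytes

        def query(self, x: bytes) -> bytes:
            if x not in self.table:          # first occurrence: sample uniformly
                self.table[x] = secrets.token_bytes(self.out_bytes)
            return self.table[x]             # repeats get the cached answer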
In cryptography, a semantically secure cryptosystem is one where only negligible information about the plaintext can be feasibly extracted from the ciphertext. Specifically, any probabilistic, polynomial-time algorithm (PPTA) that is given the ciphertext of a certain message, and the message's length, cannot determine any partial information on the message with probability non-negligibly higher than all other PPTAs that only have access to the message length. This concept is the computational complexity analogue to Shannon's concept of perfect secrecy. Perfect secrecy means that the ciphertext reveals no information at all about the plaintext, whereas semantic security implies that any information revealed cannot be feasibly extracted.
The ElGamal signature scheme is a digital signature scheme which is based on the difficulty of computing discrete logarithms. It was described by Taher Elgamal in 1985.
Ciphertext indistinguishability is a property of many encryption schemes. Intuitively, if a cryptosystem possesses the property of indistinguishability, then an adversary will be unable to distinguish pairs of ciphertexts based on the message they encrypt. The property of indistinguishability under chosen plaintext attack is considered a basic requirement for most provably secure public key cryptosystems, though some schemes also provide indistinguishability under chosen ciphertext attack and adaptive chosen ciphertext attack. Indistinguishability under chosen plaintext attack is equivalent to the property of semantic security, and many cryptographic proofs use these definitions interchangeably.
In cryptography, a pseudorandom function family, abbreviated PRF, is a collection of efficiently-computable functions which emulate a random oracle in the following way: no efficient algorithm can distinguish between a function chosen randomly from the PRF family and a random oracle. Pseudorandom functions are vital tools in the construction of cryptographic primitives, especially secure encryption schemes.
In cryptography, a universal one-way hash function (UOWHF) is a type of universal hash function of particular cryptographic importance. UOWHFs are proposed as an alternative to collision-resistant hash functions (CRHFs). CRHFs have a strong collision-resistance property: it is hard, given randomly chosen hash function parameters, to find any collision of the hash function. In contrast, UOWHFs require that it be hard to find a collision where one preimage is chosen independently of the hash function parameters. The primitive was suggested by Moni Naor and Moti Yung and is also known as "target collision resistant" hash functions; it was employed to construct general digital signature schemes without trapdoor functions, and also within chosen-ciphertext secure public key encryption schemes.
In cryptography, the standard model is the model of computation in which the adversary is limited only by the amount of time and computational power available. Other names used are the bare model and the plain model.
In cryptography, an adversary's advantage is a measure of how successfully it can attack a cryptographic algorithm, by distinguishing it from an idealized version of that type of algorithm. Note that in this context, the "adversary" is itself an algorithm and not a person. A cryptographic algorithm is considered secure if no adversary has a non-negligible advantage, subject to specified bounds on the adversary's computational resources. "Negligible" usually means "within O(2^−p)" where p is a security parameter associated with the algorithm. For example, p might be the number of bits in a block cipher's key.
In cryptography, a pseudorandom permutation (PRP) is a function that cannot be distinguished from a random permutation with practical effort.
In cryptography, the Pointcheval–Stern signature algorithm is a digital signature scheme based on the closely related ElGamal signature scheme. It changes the ElGamal scheme slightly to produce an algorithm which has been proven secure in a strong sense against adaptive chosen-message attacks, assuming the discrete logarithm problem is intractable in a strong sense.
In cryptography, a proof of knowledge is an interactive proof in which the prover succeeds in 'convincing' a verifier that the prover knows something. What it means for a machine to 'know something' is defined in terms of computation: a machine 'knows something' if this something can be computed, given the machine as an input. Since the prover's program does not necessarily output the knowledge itself, a machine with a different program, called the knowledge extractor, is introduced to capture this idea. We are mostly interested in what can be proven by polynomial-time bounded machines; in this case, the set of knowledge elements is limited to a set of witnesses of some language in NP.
A randomness extractor, often simply called an "extractor", is a function which, when applied to output from a weakly random entropy source together with a short, uniformly random seed, generates a highly random output that appears independent of the source and uniformly distributed. Examples of weakly random sources include radioactive decay or thermal noise; the only restriction on possible sources is that there is no way they can be fully controlled, calculated or predicted, and that a lower bound on their entropy rate can be established. For a given source, a randomness extractor can even be considered to be a true random number generator (TRNG); but there is no single extractor that has been proven to produce truly random output from any type of weakly random source.
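A classical seedless special case, due to von Neumann, illustrates the idea for a source of independent but identically biased coin flips: read the bits in non-overlapping pairs, output the first bit of each mixed pair, and discard the pairs 00 and 11. Because P(01) = P(10) regardless of the bias, the output is exactly uniform. A minimal sketch:

    def von_neumann_extract(bits):
        # bits: iterable of 0/1 values from independent, identically biased flips.
        out = []
        it = iter(bits)
        for a, b in zip(it, it):    # consume the stream in non-overlapping pairs
            if a != b:              # 01 -> 0, 10 -> 1; 00 and 11 are discarded
                out.append(a)
        return out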
In cryptography, the Fiat–Shamir heuristic is a technique for taking an interactive proof of knowledge and creating a digital signature based on it. This way, some fact can be publicly proven without revealing underlying information. The technique is due to Amos Fiat and Adi Shamir (1986). For the method to work, the original interactive proof must have the property of being public-coin, i.e. the verifier's random coins are made public throughout the proof protocol.
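As a sketch of the transform applied to Schnorr's public-coin identification protocol, the verifier's random challenge is replaced by a hash of the prover's commitment and the message, turning the interaction into a signature; all parameters below are toy values for illustration only:

    import hashlib, secrets

    p = 2**255 - 19                    # toy prime modulus (illustrative, not a vetted group)
    g = 2                              # toy base
    n = p - 1                          # exponents are reduced modulo the group order

    def H(*parts) -> int:
        h = hashlib.sha256()
        for part in parts:
            h.update(str(part).encode())
        return int.from_bytes(h.digest(), "big") % n

    def sign(sk: int, m: str):
        k = secrets.randbelow(n)       # prover's ephemeral secret
        R = pow(g, k, p)               # commitment: the first prover message
        e = H(R, m)                    # the hash stands in for the verifier's coins
        s = (k + e * sk) % n           # response, exactly as in the interactive protocol
        return R, s

    def verify(pk: int, m: str, sig) -> bool:
        R, s = sig
        e = H(R, m)
        return pow(g, s, p) == (R * pow(pk, e, p)) % p

With sk = secrets.randbelow(n) and pk = pow(g, sk, p), verify(pk, m, sign(sk, m)) returns True.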
The YAK is a public-key authenticated key-agreement protocol, proposed by Feng Hao in 2010. It is claimed to be the simplest authenticated key exchange protocol among the related schemes, including MQV, HMQV, Station-to-Station protocol, SSL/TLS etc. The authentication is based on public key pairs. As with other protocols, YAK normally requires a Public Key Infrastructure to distribute authentic public keys to the communicating parties. The security of YAK is disputed.