In cryptography, the standard model is the model of computation in which the adversary is only limited by the amount of time and computational power available. Other names used are bare model and plain model.
Cryptographic schemes are usually based on complexity assumptions, which state that some problems, such as factorization, cannot be solved in polynomial time. Schemes that can be proven secure using only complexity assumptions are said to be secure in the standard model. Security proofs are notoriously difficult to achieve in the standard model, so in many proofs, cryptographic primitives are replaced by idealized versions. The most common example of this technique, known as the random oracle model,[1][2] involves replacing a cryptographic hash function with a genuinely random function. Another example is the generic group model,[3][4] where the adversary is given access to a randomly chosen encoding of a group, instead of the finite field or elliptic curve groups used in practice.
Other models invoke trusted third parties to perform some task without cheating; for example, the public key infrastructure (PKI) model requires a certificate authority, which, if it were dishonest, could produce fake certificates and use them to forge signatures, or mount a man-in-the-middle attack to read encrypted messages. Other examples of this type are the common random string model, where it is assumed that all parties have access to some string chosen uniformly at random, and its generalization, the common reference string model, where a string is chosen according to some other probability distribution.[5] These models are often used for non-interactive zero-knowledge proofs (NIZK). In some applications, such as the Dolev–Dwork–Naor encryption scheme,[6] it makes sense for a particular party to generate the common reference string, while in other applications, the common reference string must be generated by a trusted third party. Collectively, these models are referred to as models with special setup assumptions.
A chosen-ciphertext attack (CCA) is an attack model for cryptanalysis where the cryptanalyst can gather information by obtaining the decryptions of chosen ciphertexts. From these pieces of information the adversary can attempt to recover the hidden secret key used for decryption.
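As an illustration of the attack model, the following is a minimal Python sketch against a deliberately malleable toy cipher: a single decryption query on a modified ciphertext is enough to recover the original plaintext. The XOR-based cipher and an oracle that refuses only the exact challenge ciphertext are illustrative assumptions, not any particular real scheme.

```python
import os

key = os.urandom(16)   # secret key, unknown to the attacker

def encrypt(m: bytes) -> bytes:
    # Toy one-time-pad-style cipher: trivially malleable.
    return bytes(a ^ b for a, b in zip(m, key))

def decryption_oracle(ct: bytes, challenge: bytes) -> bytes:
    # The oracle decrypts anything except the challenge ciphertext itself.
    if ct == challenge:
        raise ValueError("oracle refuses the challenge ciphertext")
    return bytes(a ^ b for a, b in zip(ct, key))

secret_message = b"attack at noon!!"        # 16 bytes
challenge = encrypt(secret_message)

# Attacker: flip one bit of the challenge, query the oracle, flip it back.
delta = b"\x01" + b"\x00" * 15
mauled = bytes(a ^ b for a, b in zip(challenge, delta))
leaked = decryption_oracle(mauled, challenge)
recovered = bytes(a ^ b for a, b in zip(leaked, delta))
print(recovered == secret_message)          # expected: True
```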
In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover possesses unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.
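As an illustration of such a message exchange, the following Python sketch implements the classic interactive proof for graph non-isomorphism on toy graphs: the computationally unbounded prover is modelled by a brute-force isomorphism test, while the efficient verifier drives the protocol with private random coins. Graph sizes and the number of rounds are illustrative assumptions.

```python
from itertools import permutations
import random

def permute(graph, perm):
    """Apply a vertex permutation to a graph given as a frozenset of 2-element edge sets."""
    return frozenset(frozenset({perm[u], perm[v]}) for u, v in (tuple(e) for e in graph))

def isomorphic(g1, g2, n):
    """Brute-force isomorphism test -- standing in for the unbounded prover's power."""
    return any(permute(g1, p) == g2 for p in permutations(range(n)))

def interactive_proof(g0, g1, n, rounds=20):
    """Verifier secretly picks b, sends a random permutation of g_b, and asks the prover for b.
    If g0 and g1 are non-isomorphic the honest prover always answers correctly;
    if they are isomorphic, any prover is right with probability at most 1/2 per round."""
    for _ in range(rounds):
        b = random.randrange(2)
        perm = list(range(n)); random.shuffle(perm)
        challenge = permute(g0 if b == 0 else g1, perm)
        guess = 0 if isomorphic(g0, challenge, n) else 1   # prover's answer
        if guess != b:
            return False    # verifier rejects
    return True             # verifier accepts: convinced the graphs are non-isomorphic

# Toy usage: a triangle vs. a path on 3 vertices (non-isomorphic).
triangle = frozenset({frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
path = frozenset({frozenset({0, 1}), frozenset({1, 2})})
print(interactive_proof(triangle, path, 3))   # expected: True
```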
A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value while keeping it hidden from others, with the ability to reveal the committed value later. Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes are binding. Commitment schemes have important applications in a number of cryptographic protocols including secure coin flipping, zero-knowledge proofs, and secure computation.
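As an illustration, the following is a minimal Python sketch of a simple hash-based commitment of the kind used for coin flipping: hiding is argued heuristically via the hash (for example, in the random oracle model) and binding via collision resistance. The nonce length and the choice of SHA-256 are illustrative assumptions.

```python
import hashlib, hmac, os

def commit(value: bytes):
    r = os.urandom(32)                      # random opening nonce
    c = hashlib.sha256(r + value).digest()  # the commitment
    return c, r                             # publish c, keep (value, r) secret

def verify(c: bytes, value: bytes, r: bytes) -> bool:
    return hmac.compare_digest(c, hashlib.sha256(r + value).digest())

c, r = commit(b"heads")        # commit phase: send c to the other party
print(verify(c, b"heads", r))  # reveal phase: True
print(verify(c, b"tails", r))  # binding: False, the committer cannot switch values
```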
In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party can prove to another party that a given statement is true while the prover avoids conveying any additional information apart from the fact that the statement is indeed true. The essence of zero-knowledge proofs is that it is trivial to prove that one possesses knowledge of certain information by simply revealing it; the challenge is to prove such possession without revealing the information itself or any additional information.
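As an illustration, the following Python sketch shows one round of a Schnorr-style interactive (honest-verifier) zero-knowledge proof of knowledge of a discrete logarithm: the prover convinces the verifier that it knows x with y = g^x mod p without revealing x. The modulus, generator, and single-round structure are illustrative assumptions, not parameters suitable for practice.

```python
import random

p = 2**127 - 1        # toy prime modulus (illustrative, not a secure choice)
q = p - 1             # exponents are taken modulo the group order
g = 3                 # illustrative base

x = random.randrange(2, q)   # prover's secret
y = pow(g, x, p)             # public value: prover claims to know x with y = g^x mod p

# One round of the protocol:
r = random.randrange(1, q)   # prover: random nonce
t = pow(g, r, p)             # prover -> verifier: commitment t = g^r
c = random.randrange(1, q)   # verifier -> prover: random challenge
s = (r + c * x) % q          # prover -> verifier: response

# Verifier checks g^s == t * y^c (mod p); the transcript reveals nothing about x itself.
print(pow(g, s, p) == (t * pow(y, c, p)) % p)   # expected: True
```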
In cryptography, a random oracle is an oracle that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
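One way to see the definition is the standard "lazy sampling" view, sketched below in Python: the oracle draws a fresh uniform answer for every new query and stores it so that repeated queries are answered consistently. The output length is an illustrative assumption.

```python
import os

class RandomOracle:
    """Lazily sampled random function: fresh uniform output per new query, consistent on repeats."""
    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.out_len)   # fresh uniform answer
        return self.table[x]                           # same answer every time afterwards

H = RandomOracle()
print(H.query(b"hello") == H.query(b"hello"))   # True: repeated query, same answer
print(H.query(b"hello") == H.query(b"world"))   # almost surely False: independent answers
```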
Secure multi-party computation is a subfield of cryptography with the goal of creating methods for parties to jointly compute a function over their inputs while keeping those inputs private. Unlike traditional cryptographic tasks, where cryptography assures security and integrity of communication or storage and the adversary is outside the system of participants, the cryptography in this model protects participants' privacy from each other.
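As a small illustration of the idea, the following Python sketch computes the sum of private inputs using additive secret sharing, so that only the aggregate is ever reconstructed. The modulus and the in-process simulation of the message exchange are illustrative assumptions; a real protocol would also need secure channels and an explicit adversary model.

```python
import random

Q = 2**61 - 1   # modulus for the shares (illustrative)

def share(x, n):
    """Split x into n random shares that sum to x modulo Q."""
    shares = [random.randrange(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def secure_sum(inputs):
    n = len(inputs)
    # Party i sends its j-th share to party j; party j adds up what it receives.
    all_shares = [share(x, n) for x in inputs]
    partial = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partial) % Q   # only the aggregate is reconstructed

print(secure_sum([10, 20, 30]))   # expected: 60, without any party revealing its input
```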
Provable security refers to any type or level of computer security that can be proved. It is used in different ways by different fields.
The Cramer–Shoup system is an asymmetric key encryption algorithm, and was the first efficient scheme proven to be secure against adaptive chosen ciphertext attack using standard cryptographic assumptions. Its security is based on the computational intractability of the decisional Diffie–Hellman assumption. Developed by Ronald Cramer and Victor Shoup in 1998, it is an extension of the ElGamal cryptosystem. In contrast to ElGamal, which is extremely malleable, Cramer–Shoup adds other elements to ensure non-malleability even against a resourceful attacker. This non-malleability is achieved through the use of a universal one-way hash function and additional computations, resulting in a ciphertext which is twice as large as in ElGamal.
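The malleability that Cramer–Shoup is designed to rule out can be seen in the following Python sketch of plain ElGamal: multiplying the second ciphertext component by a constant multiplies the decrypted plaintext by the same constant, without any knowledge of the secret key. The modulus and generator are toy, illustrative assumptions.

```python
import random

p = 2**127 - 1   # toy prime modulus (illustrative only)
g = 3

x = random.randrange(2, p - 1)   # secret key
h = pow(g, x, p)                 # public key

def encrypt(m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c1, c2):
    return (c2 * pow(c1, p - 1 - x, p)) % p   # c2 / c1^x mod p

m = 42
c1, c2 = encrypt(m)
c2_mauled = (c2 * 2) % p        # attacker mauls the ciphertext without the key
print(decrypt(c1, c2))          # 42
print(decrypt(c1, c2_mauled))   # 84: the plaintext changed in a predictable way
```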
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
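The interface, a short uniform seed in and a longer pseudorandom string out, can be illustrated with the following Python sketch that expands a seed with SHA-256 in counter mode. This is only an illustration of the interface, not a construction with a proven security reduction.

```python
import hashlib, os

def prg(seed: bytes, out_len: int) -> bytes:
    """Deterministically expand a short seed into out_len pseudorandom bytes."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

seed = os.urandom(16)                   # short uniform seed
print(prg(seed, 64).hex())              # 64 pseudorandom bytes
print(prg(seed, 64) == prg(seed, 64))   # True: deterministic given the seed
```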
In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently. It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.
Secure two-party computation (2PC), also known as secure function evaluation, is a sub-problem of secure multi-party computation (MPC) that has received special attention from researchers because of its close relation to many cryptographic tasks. The goal of 2PC is to create a generic protocol that allows two parties to jointly compute an arbitrary function on their inputs without sharing the value of their inputs with the opposing party. One of the most well known examples of 2PC is Yao's Millionaires' problem, in which two parties, Alice and Bob, are millionaires who wish to determine who is wealthier without revealing their wealth. Formally, Alice has wealth w_A, Bob has wealth w_B, and they wish to compute whether w_A ≥ w_B without revealing the values w_A or w_B.
Non-interactive zero-knowledge proofs are zero-knowledge proofs in which the prover can convince the verifier of a statement without revealing any specific information beyond the validity of the statement itself. Because no back-and-forth exchange is required, direct communication between the prover and verifier becomes unnecessary, effectively removing any intermediaries. In the most common constructions, the verifier's random challenge is replaced by a value generated with a hash function, constrained within mathematical parameters determined by the prover and verifier.
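As a concrete illustration of how a hash function can stand in for the verifier's random challenge (the Fiat–Shamir heuristic), the following Python sketch turns a Schnorr-style proof of knowledge of a discrete logarithm into a single message. The modulus, generator, and hash encoding are illustrative assumptions, not secure parameters.

```python
import hashlib, random

p = 2**127 - 1   # toy prime modulus (illustrative only)
q = p - 1
g = 3

x = random.randrange(2, q)
y = pow(g, x, p)   # public statement: "I know x with y = g^x mod p"

def challenge(t):
    # The hash of the public data and the commitment replaces the verifier's coin flips.
    return int.from_bytes(hashlib.sha256(f"{g}|{y}|{t}".encode()).digest(), "big") % q

def prove():
    r = random.randrange(1, q)
    t = pow(g, r, p)
    s = (r + challenge(t) * x) % q
    return t, s            # the entire proof; no interaction is needed

def verify(t, s):
    return pow(g, s, p) == (t * pow(y, challenge(t), p)) % p

print(verify(*prove()))    # expected: True
```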
The framework of universal composability (UC) is a general-purpose model for the analysis of cryptographic protocols. It guarantees very strong security properties: protocols remain secure even if arbitrarily composed with other instances of the same or other protocols. Security is defined in the sense of protocol emulation. Intuitively, a protocol is said to emulate another one if no environment (observer) can distinguish their executions; concretely, the protocol may simulate the other protocol. The notion of security is derived by implication: assume a protocol P1 is secure by definition; if another protocol P2 emulates P1 such that no environment can tell the emulation apart from an execution of P1, then P2 is as secure as P1.
Cynthia Dwork is an American computer scientist at Harvard University, where she is Gordon McKay Professor of Computer Science, Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study, and Affiliated Professor, Harvard Law School and Harvard's Department of Statistics.
The generic group model is an idealised cryptographic model in which the adversary is only given access to a randomly chosen encoding of a group, instead of the efficient encodings of the finite field or elliptic curve groups used in practice.
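The following Python sketch models such an oracle: group elements are handed to the adversary only as random opaque labels, and they can be combined only through an operation query. Using Z_n under addition as the hidden group and the particular label length are illustrative assumptions.

```python
import os

class GenericGroup:
    """Generic-group oracle: the adversary sees random labels, never the elements themselves."""
    def __init__(self, n):
        self.n = n
        self._label_of = {}   # element -> label
        self._elem_of = {}    # label -> element (hidden from the adversary)

    def _encode(self, a):
        a %= self.n
        if a not in self._label_of:
            lbl = os.urandom(8).hex()          # randomly chosen encoding
            self._label_of[a] = lbl
            self._elem_of[lbl] = a
        return self._label_of[a]

    def generator(self):
        return self._encode(1)

    def op(self, lbl1, lbl2):
        """Group-operation oracle on two labels."""
        return self._encode(self._elem_of[lbl1] + self._elem_of[lbl2])

G = GenericGroup(101)
g = G.generator()
g2 = G.op(g, g)                 # the adversary only ever manipulates opaque labels
print(g, g2, G.op(g2, g))
```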
Verifiable computing enables a computer to offload the computation of some function to other, perhaps untrusted, clients while maintaining verifiable results. The other clients evaluate the function and return the result with a proof that the computation of the function was carried out correctly. The introduction of this notion came as a result of the increasingly common phenomenon of "outsourcing" computation to untrusted users in projects such as SETI@home and also to the growing desire of weak clients to outsource computational tasks to a more powerful computation service, as in cloud computing. The concept dates back to work by Babai et al., and has been studied under various terms, including "checking computations", "delegating computations", "certified computation", and verifiable computing. The term verifiable computing itself was formalized by Rosario Gennaro, Craig Gentry, and Bryan Parno, and echoes Micali's "certified computation".
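As a task-specific illustration of the asymmetry the notion exploits (verifying can be much cheaper than computing), the following Python sketch has a client outsource sorting and then verify the returned result in linear time. General-purpose verifiable computation, as formalized by Gennaro, Gentry, and Parno, relies on cryptographic proofs rather than such ad hoc, per-task checks; this sketch is only meant to convey the idea of checking an untrusted worker's output.

```python
from collections import Counter

def untrusted_server_sort(xs):
    # Stands in for an untrusted worker; in general it could be faulty or malicious.
    return sorted(xs)

def client_verify(xs, result):
    in_order = all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    same_multiset = Counter(result) == Counter(xs)   # nothing added, dropped, or altered
    return in_order and same_multiset

data = [5, 3, 8, 1, 9, 2]
result = untrusted_server_sort(data)
print(client_verify(data, result))   # expected: True
```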
Mordechai M. "Moti" Yung is a cryptographer and computer scientist known for his work on cryptovirology and kleptography.
Amit Sahai is an American computer scientist. He is a professor of computer science at UCLA and the director of the Center for Encrypted Functionalities.
A quantum cryptographic protocol is device-independent if its security does not rely on trusting that the quantum devices used are truthful. Thus the security analysis of such a protocol needs to consider scenarios of imperfect or even malicious devices. Several important problems have been shown to admit unconditionally secure, device-independent protocols. A closely related topic is measurement-device-independent quantum key distribution.
Garbled circuit is a cryptographic protocol that enables two-party secure computation in which two mistrusting parties can jointly evaluate a function over their private inputs without the presence of a trusted third party. In the garbled circuit protocol, the function has to be described as a Boolean circuit.
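The following Python sketch garbles a single AND gate: the garbler assigns two random labels to each wire and encrypts each output label under the corresponding pair of input labels, so the evaluator, holding exactly one label per input wire, can open exactly one row of the table. The XOR-with-hash "encryption", the zero-padding validity check, and the omission of oblivious transfer for delivering the evaluator's input label are illustrative simplifications of a real garbled-circuit protocol.

```python
import hashlib, os, random

def xor_crypt(k1: bytes, k2: bytes, msg: bytes) -> bytes:
    """Toy row 'encryption': XOR with a hash-derived pad (XOR is its own inverse)."""
    pad = hashlib.sha256(k1 + k2).digest()
    return bytes(a ^ b for a, b in zip(pad, msg))

# Garbler: two random 16-byte labels per wire, one for each possible bit value.
A = {0: os.urandom(16), 1: os.urandom(16)}   # input wire a
B = {0: os.urandom(16), 1: os.urandom(16)}   # input wire b
C = {0: os.urandom(16), 1: os.urandom(16)}   # output wire

# Garbled truth table for AND: each row holds the output label plus zero padding,
# so the evaluator can recognise the single row it is able to decrypt.
table = [xor_crypt(A[x], B[y], C[x & y] + b"\x00" * 8) for x in (0, 1) for y in (0, 1)]
random.shuffle(table)   # hide which row corresponds to which input bits

# Evaluator: holds one label per input wire (here for a=1, b=1) without knowing the bits.
ka, kb = A[1], B[1]
for row in table:
    candidate = xor_crypt(ka, kb, row)
    if candidate.endswith(b"\x00" * 8):               # only the matching row decrypts cleanly
        out_label = candidate[:16]
        print("recovered output label:", out_label.hex())
        print("equals the label for bit 1:", out_label == C[1])   # garbler can map it back
```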