Hybrid argument (cryptography)

In cryptography, the hybrid argument is a proof technique used to show that two distributions are computationally indistinguishable.

History

Hybrid arguments had their origin in papers by Andrew Yao in 1982 and by Shafi Goldwasser and Silvio Micali in 1983. [1]

Formal description

Formally, to show that two distributions D1 and D2 are computationally indistinguishable, we can define a sequence of hybrid distributions D1 =: H0, H1, ..., Ht := D2, where t is polynomial in the security parameter n. Define the advantage of any efficient (probabilistic polynomial-time) algorithm A on a pair of distributions X and Y as

\mathrm{Adv}_A(X, Y) := \left| \Pr[x \xleftarrow{\$} X : A(x) = 1] - \Pr[x \xleftarrow{\$} Y : A(x) = 1] \right|,

where the dollar symbol ($) denotes that we sample an element from the distribution at random.

By the triangle inequality, it is clear that for any probabilistic polynomial-time algorithm A,

\mathrm{Adv}_A(D_1, D_2) \le \sum_{i=0}^{t(n)-1} \mathrm{Adv}_A(H_i, H_{i+1}).

Thus there must exist some k such that 0 ≤ k < t(n) and

\mathrm{Adv}_A(H_k, H_{k+1}) \ge \frac{\mathrm{Adv}_A(D_1, D_2)}{t(n)}.

Since t is polynomially bounded, for any such algorithm A, if we can show that it has negligible advantage between the distributions Hi and Hi+1 for every i, that is,

\mathrm{Adv}_A(H_i, H_{i+1}) \le \mathrm{negl}(n),

then its advantage in distinguishing the distributions D1 = H0 and D2 = Ht is at most t(n) · negl(n), which is still negligible because a polynomial times a negligible function is negligible. This fact gives rise to the hybrid argument: to prove that D1 and D2 are computationally indistinguishable, it suffices to find such a sequence of hybrid distributions and show that each adjacent pair of them is computationally indistinguishable. [2]
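
As a purely numerical illustration of this telescoping bound, the sketch below compares two toy distributions: D1 consists of N fair coin flips and D2 of N biased coin flips, with the hybrid Hi drawing its first i coordinates from the biased distribution and the rest from the fair one, so that H0 = D1 and HN = D2. Everything here — the distributions, the majority-vote distinguisher, and names such as sample_hybrid and advantage — is an illustrative assumption rather than part of the argument itself.

    import random

    N = 8            # number of coordinates, i.e. the number of hybrid steps t
    BIAS = 0.7       # probability of a 1 under the "biased" distribution D2
    TRIALS = 50_000  # samples used for each advantage estimate

    def sample_hybrid(i: int) -> list[int]:
        """H_i: the first i coordinates are biased, the remaining N - i are fair."""
        biased = [1 if random.random() < BIAS else 0 for _ in range(i)]
        fair = [random.getrandbits(1) for _ in range(N - i)]
        return biased + fair

    def distinguisher(x: list[int]) -> int:
        """A fixed test A: output 1 when more than half of the bits are 1."""
        return 1 if sum(x) > N / 2 else 0

    def advantage(i: int, j: int) -> float:
        """Estimate Adv_A(H_i, H_j) = |Pr[A(H_i) = 1] - Pr[A(H_j) = 1]| by sampling."""
        p_i = sum(distinguisher(sample_hybrid(i)) for _ in range(TRIALS)) / TRIALS
        p_j = sum(distinguisher(sample_hybrid(j)) for _ in range(TRIALS)) / TRIALS
        return abs(p_i - p_j)

    # The triangle inequality gives Adv_A(H_0, H_N) <= sum of adjacent advantages,
    # so at least one adjacent pair contributes at least a 1/N fraction of the total.
    end_to_end = advantage(0, N)
    per_step = [advantage(i, i + 1) for i in range(N)]
    print(f"Adv_A(D1, D2)         ~ {end_to_end:.4f}")
    print(f"sum over hybrid steps ~ {sum(per_step):.4f}")

Up to sampling error, the printed end-to-end advantage is bounded by the sum of the adjacent-pair advantages, which is exactly the inequality the hybrid argument exploits.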

Applications

The hybrid argument is extensively used in cryptography. Some simple proofs using hybrid arguments are:

  - that a pseudorandom generator which stretches its input by a single bit can be composed with itself to obtain a generator with polynomially long stretch (as sketched below); [3]
  - that an encryption scheme which is secure for a single message remains secure when polynomially many messages are encrypted. [4]
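
For the first of these, a minimal sketch of the hybrid sequence is given below, assuming a one-bit-stretch generator g. Here toy_g is a stand-in built from SHA-256 purely so that the code runs; it is an assumption for illustration, not a proven pseudorandom generator. The hybrid Hi outputs i truly random bits followed by bits generated from a fresh random state, so H0 is the real generator's output, Ht is uniform, and adjacent hybrids differ in a single application of g, which is exactly where the one-bit generator's security would be invoked.

    import hashlib
    import random

    N = 16  # state length in bits (purely illustrative)

    def toy_g(state: tuple) -> tuple:
        """Stand-in for the one-bit-stretch generator g: from an N-bit state,
        produce one output bit and the next N-bit state (N + 1 bits total)."""
        digest = hashlib.sha256(bytes(state)).digest()
        out_bit = digest[0] & 1
        next_state = tuple(b & 1 for b in digest[1 : N + 1])
        return out_bit, next_state

    def extended_prg(state: tuple, t: int) -> list:
        """G: iterate g to stretch an N-bit seed into t output bits."""
        out = []
        for _ in range(t):
            bit, state = toy_g(state)
            out.append(bit)
        return out

    def hybrid(i: int, t: int) -> list:
        """Sample H_i: i truly random bits, then t - i bits generated from a
        fresh uniformly random state. H_0 is the real output of G and H_t is
        uniform; H_i and H_{i+1} differ in one application of g, so any
        distinguisher for them would break the one-bit generator."""
        prefix = [random.getrandbits(1) for _ in range(i)]
        state = tuple(random.getrandbits(1) for _ in range(N))
        return prefix + extended_prg(state, t - i)

    t = 8
    for i in range(t + 1):
        print(f"H_{i}:", hybrid(i, t))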

Notes

  1. Bellare, Mihir, and Phillip Rogaway. "Code-based game-playing proofs and the security of triple encryption." Cryptology ePrint Archive (2004).
  2. Lemma 3 in Dodis's notes.
  3. Theorem 1 in Dodis's notes.
  4. Lemma 80.5, Corollary 81.7 in Pass's notes.

Related Research Articles

<span class="mw-page-title-main">BQP</span> Computational complexity class of problems

In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.

<span class="mw-page-title-main">Markov chain</span> Random process independent of past history

A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov.

In machine learning, support vector machines are supervised max-margin models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik with colleagues SVMs are one of the most studied models, being based on statistical learning frameworks of VC theory proposed by Vapnik and Chervonenkis (1974).

A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic pseudorandom number generator (CPRNG) is a pseudorandom number generator (PRNG) with properties that make it suitable for use in cryptography. It is also referred to as a cryptographic random number generator (CRNG).

In mathematics, a Hermitian matrix is a complex square matrix that is equal to its own conjugate transpose—that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:

<span class="mw-page-title-main">Covariance matrix</span> Measure of covariance of components of a random vector

In probability theory and statistics, a covariance matrix is a square matrix giving the covariance between each pair of elements of a given random vector.

<span class="mw-page-title-main">Complexity class</span> Set of problems in computational complexity theory

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

<span class="mw-page-title-main">Projection (linear algebra)</span> Idempotent linear transformation from a vector space to itself

In linear algebra and functional analysis, a projection is a linear transformation from a vector space to itself such that . That is, whenever is applied twice to any vector, it gives the same result as if it were applied once. It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.

In computational complexity theory, an Arthur–Merlin protocol, introduced by Babai (1985), is an interactive proof system in which the verifier's coin tosses are constrained to be public. Goldwasser & Sipser (1986) proved that all (formal) languages with interactive proofs of arbitrary length with private coins also have interactive proofs with public coins.

In computational complexity and cryptography, two families of distributions are computationally indistinguishable if no efficient algorithm can tell the difference between them except with negligible probability.

Inversive congruential generators are a type of nonlinear congruential pseudorandom number generator, which use the modular multiplicative inverse to generate the next number in a sequence. The standard formula for an inversive congruential generator, modulo some prime q is:

<span class="mw-page-title-main">IP (complexity)</span>

In computational complexity theory, the class IP is the class of problems solvable by an interactive proof system. It is equal to the class PSPACE. The result was established in a series of papers: the first by Lund, Karloff, Fortnow, and Nisan showed that co-NP had multiple prover interactive proofs; and the second, by Shamir, employed their technique to establish that IP=PSPACE. The result is a famous example where the proof does not relativize.

In complexity theory, the Karp–Lipton theorem states that if the Boolean satisfiability problem (SAT) can be solved by Boolean circuits with a polynomial number of logic gates, then

<span class="mw-page-title-main">Z-channel (information theory)</span>

In coding theory and information theory, a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems.

In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity which considers the maximal complexity of the algorithm over all possible inputs.

A randomness extractor, often simply called an "extractor", is a function, which being applied to output from a weak entropy source, together with a short, uniformly random seed, generates a highly random output that appears independent from the source and uniformly distributed. Examples of weakly random sources include radioactive decay or thermal noise; the only restriction on possible sources is that there is no way they can be fully controlled, calculated or predicted, and that a lower bound on their entropy rate can be established. For a given source, a randomness extractor can even be considered to be a true random number generator (TRNG); but there is no single extractor that has been proven to produce truly random output from any type of weakly random source.

In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear -ary function over a finite ring from given samples some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.

In computational complexity theory and cryptography, averaging argument is a standard argument for proving theorems. It usually allows us to convert probabilistic polynomial-time algorithms into non-uniform polynomial-size circuits.

In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.

For certain applications in linear algebra, it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices. Suppose is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter t:
