Random self-reducibility

Random self-reducibility (RSR) is the property that a good algorithm for the average case implies a good algorithm for the worst case: it is the ability to solve all instances of a problem by being able to solve a large fraction of its instances.

Definition

A function f is randomly self-reducible if evaluating it on any instance x can be reduced in polynomial time to evaluating f on one or more random instances yi (this is also known as a non-adaptive uniform self-reduction). In a random self-reduction, an arbitrary worst-case instance x in the domain of f is mapped to a random set of instances y1, ..., yk in such a way that f(x) can be computed in polynomial time given the coin-toss sequence of the mapping, x, and f(y1), ..., f(yk). Therefore, taking the average with respect to the induced distribution on the yi, the average-case complexity of f is the same (within polynomial factors) as the worst-case randomized complexity of f.

One special case of note is when each random instance yi is distributed uniformly over the set of all elements in the domain of f that have length |x|. In this case f is as hard on average as it is in the worst case. This definition carries two key restrictions. First, the generation of y1, ..., yk is performed non-adaptively: y2 is picked before f(y1) is known. Second, it is not necessary that the points y1, ..., yk be uniformly distributed.
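The structure of such a reduction can be made concrete with a short sketch. The following Python fragment is only illustrative: generate_instances, recombine, and avg_case_solver are hypothetical placeholders that a concrete random self-reduction for a particular f would have to supply.

    import random

    def random_self_reduce(x, avg_case_solver, generate_instances, recombine, k):
        # Non-adaptive reduction: all random instances y1, ..., yk are produced
        # from x and the coin tosses before any value f(yi) is known.
        coins = random.getrandbits(128)              # the coin-toss sequence
        ys = generate_instances(x, coins, k)         # random instances y1, ..., yk
        answers = [avg_case_solver(y) for y in ys]   # f(y1), ..., f(yk)
        # f(x) is recovered in polynomial time from x, the coins, and the answers.
        return recombine(x, coins, answers)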

Application in cryptographic protocols

Problems that require some privacy in the data (typically cryptographic problems) can use randomization to ensure that privacy. In fact, the only provably secure cryptographic system, the one-time pad, relies entirely for its security on the randomness of the key material supplied to the system.

The field of cryptography utilizes the fact that certain number-theoretic functions are randomly self-reducible. This includes probabilistic encryption and cryptographically strong pseudorandom number generation. Also, instance-hiding schemes (where a weak private device uses a strong public device without revealing its data) are easily exemplified by random self-reductions.

Examples

The discrete logarithm problem, the quadratic residuosity problem, the RSA inversion problem, and the problem of computing the permanent of a matrix are all randomly self-reducible.

Discrete logarithm

Theorem: Let G be a cyclic group of size |G|. If a deterministic polynomial-time algorithm A computes the discrete logarithm for a 1/poly(n) fraction of all inputs (where n = log |G| is the input size), then there is a randomized polynomial-time algorithm that computes the discrete logarithm for all inputs.

Given a generator g of a cyclic group G = { g^i | 0 ≤ i < |G| } and an x ∈ G, the discrete log of x to the base g is the integer k (0 ≤ k < |G|) with x = g^k. Take B to be distributed uniformly on {0, ..., |G| − 1}; then x·g^B = g^(k+B) is also distributed uniformly on G. Therefore x·g^B is independent of x, and the algorithm A computes its logarithm correctly with probability 1/poly(n) in polynomial time; repeating with fresh random choices of B makes failure unlikely. Since log_g x ≡ log_g(x·g^B) − B (mod |G|), the discrete logarithm is randomly self-reducible.
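A minimal sketch of this reduction, in Python and for the multiplicative group modulo a small prime, is given below. The function avg_case_dlog is a hypothetical average-case solver that may answer incorrectly (or not at all) on most inputs; the wrapper blinds the instance with a uniform exponent B and verifies each candidate answer before accepting it.

    import random

    def worst_case_dlog(x, g, p, order, avg_case_dlog, tries=1000):
        # Compute log_g(x) in the subgroup of Z_p^* generated by g (of size `order`),
        # using a solver that is only assumed to work on random instances.
        for _ in range(tries):
            B = random.randrange(order)            # uniform blinding exponent
            y = (x * pow(g, B, p)) % p             # y = x*g^B is uniform in the group
            k = avg_case_dlog(y, g, p)             # may be wrong or None
            if k is not None and pow(g, (k - B) % order, p) == x:
                return (k - B) % order             # log_g(x) = log_g(x*g^B) - B  (mod order)
        return None                                # every attempt failed

    # Toy stand-in for the average-case algorithm (brute force on a tiny group):
    def toy_avg_case_dlog(y, g, p):
        for k in range(p - 1):
            if pow(g, k, p) == y:
                return k
        return None

    # g = 5 generates Z_23^* (order 22); log_5(8) = 6.
    print(worst_case_dlog(x=8, g=5, p=23, order=22, avg_case_dlog=toy_avg_case_dlog))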

Permanent of a matrix

Given the definition of the permanent of a matrix, it is clear that PERM(M) for any n-by-n matrix M is a multivariate polynomial of degree n in the entries of M. Calculating the permanent of a matrix is a difficult computational task: PERM has been shown to be #P-complete. Moreover, the ability to compute PERM(M) for most matrices implies the existence of a random program that computes PERM(M) for all matrices, which demonstrates that PERM is randomly self-reducible. The discussion below considers the case where the matrix entries are drawn from a finite field Fp for some prime p, and where all arithmetic is performed in that field.

Let X be a random n-by-n matrix with entries from Fp. Since every entry of the matrix M + kX is a linear function of k, composing those linear functions with the degree-n multivariate polynomial that computes the permanent gives another degree-n polynomial in k, which we will call p(k). Clearly, p(0) is equal to the permanent of M.

Suppose we know a program that computes the correct value of PERM(A) for most n-by-n matrices A with entries from Fp, specifically for a 1 − 1/(3n) fraction of them. Then, with probability approximately two-thirds, we can calculate PERM(M + kX) for all of k = 1, 2, ..., n + 1. Once we have those n + 1 values, we can solve for the coefficients of p(k) using interpolation (recall that p(k) has degree n). Once we know p(k) exactly, we evaluate p(0), which equals PERM(M).

If we do so, we run the risk of being wrong about 1/3 of the time, but by picking multiple random Xs, repeating the above procedure many times, and reporting only the majority answer, we can drive the error rate down very low.
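A sketch of the whole procedure is given below, with all arithmetic done modulo a prime p (assumed larger than n + 1, so that the evaluation points 1, ..., n + 1 are distinct field elements). The oracle approx_perm is a hypothetical program assumed to be correct on most matrices over Fp; the wrapper interpolates p(k) through the points k = 1, ..., n + 1, reads off p(0) = PERM(M), and takes a majority vote over independent random choices of X.

    import random

    def lagrange_at_zero(points, p):
        # Value at 0 of the unique degree <= n polynomial through the given
        # (k, value) pairs, computed over the field F_p.
        total = 0
        for i, (ki, vi) in enumerate(points):
            num, den = 1, 1
            for j, (kj, _) in enumerate(points):
                if i != j:
                    num = (num * (0 - kj)) % p
                    den = (den * (ki - kj)) % p
            total = (total + vi * num * pow(den, p - 2, p)) % p   # den^(p-2) = den^(-1) mod p
        return total

    def self_reduced_perm(M, p, approx_perm, trials=15):
        # Compute PERM(M) mod p from an oracle that is only assumed to be
        # correct on most matrices; majority voting drives the error rate down.
        n = len(M)
        votes = {}
        for _ in range(trials):
            X = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
            points = []
            for k in range(1, n + 2):                             # k = 1, ..., n+1
                Mk = [[(M[i][j] + k * X[i][j]) % p for j in range(n)] for i in range(n)]
                points.append((k, approx_perm(Mk) % p))           # PERM(M + kX), possibly wrong
            guess = lagrange_at_zero(points, p)                   # p(0) = PERM(M) mod p
            votes[guess] = votes.get(guess, 0) + 1
        return max(votes, key=votes.get)                          # majority answer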

Consequences

If an NP-complete problem had a non-adaptive random self-reduction, the polynomial hierarchy would collapse to the third level, which is considered strong evidence that NP-complete problems are not random self-reducible in this sense.
