Pseudorandom generators for polynomials

In theoretical computer science, a pseudorandom generator for low-degree polynomials is an efficient procedure that maps a short truly random seed to a longer pseudorandom string in such a way that low-degree polynomials cannot distinguish the output distribution of the generator from the truly random distribution. That is, evaluating any low-degree polynomial at a point determined by the pseudorandom string is statistically close to evaluating the same polynomial at a point that is chosen uniformly at random.

Pseudorandom generators for low-degree polynomials are a particular instance of pseudorandom generators for statistical tests, where the statistical tests considered are evaluations of low-degree polynomials.

Definition

A pseudorandom generator $G \colon \mathbb{F}^\ell \to \mathbb{F}^n$ for polynomials of degree $d$ over a finite field $\mathbb{F}$ is an efficient procedure that maps a sequence of $\ell$ field elements to a sequence of $n$ field elements such that any $n$-variate polynomial over $\mathbb{F}$ of degree $d$ is fooled by the output distribution of $G$. In other words, for every such polynomial $p$, the statistical distance between the distributions $p(G(U_\ell))$ and $p(U_n)$ is at most a small $\epsilon$, where $U_k$ is the uniform distribution over $\mathbb{F}^k$.
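
Written out with the statistical distance $\Delta$ made explicit, the requirement is that for every $n$-variate polynomial $p$ over $\mathbb{F}$ of degree at most $d$ (this merely expands the definition above):

$$\Delta\big(p(G(U_\ell)),\, p(U_n)\big) \;=\; \frac{1}{2} \sum_{a \in \mathbb{F}} \left| \Pr_{s \sim U_\ell}\big[p(G(s)) = a\big] \;-\; \Pr_{u \sim U_n}\big[p(u) = a\big] \right| \;\le\; \epsilon.$$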

Construction

[Image: Lovett (3rd from left) in 2009]

The case $d = 1$ corresponds to pseudorandom generators for linear functions and is solved by small-bias generators. For example, the construction of Naor & Naor (1990) achieves a seed length of $O(\log n + \log(1/\epsilon))$, which is optimal up to constant factors.
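
As a concrete illustration of the $d = 1$ case, the following is a minimal Python sketch of one standard small-bias construction, the "powering" generator of Alon, Goldreich, Håstad & Peralta (1992); it is a different construction from the Naor & Naor one cited above, but with a comparable seed length. The field size, variable names, and helper functions are illustrative choices, not part of the cited constructions.

```python
import random

# Sketch of the "powering" small-bias generator (Alon-Goldreich-Hastad-Peralta).
M = 8                  # work in GF(2^8); the bias is at most (n - 1) / 2^M
IRRED = 0x11B          # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^M), represented as integers."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> M:                 # reduce modulo the irreducible polynomial
            a ^= IRRED
    return result

def parity(x: int) -> int:
    """Parity (XOR of the bits) of a nonnegative integer."""
    return bin(x).count("1") & 1

def small_bias_sample(n: int, seed: tuple) -> list:
    """Expand a seed (x, y) in GF(2^M)^2 into n bits r_i = <x^i, y>."""
    x, y = seed
    power, bits = 1, []            # power runs through x^0, x^1, ...
    for _ in range(n):
        bits.append(parity(power & y))   # inner product of bit vectors mod 2
        power = gf_mul(power, x)
    return bits

# The seed is only 2 * M truly random bits.
seed = (random.randrange(1 << M), random.randrange(1 << M))
print(small_bias_sample(32, seed))
```

Under this construction, any fixed parity of the output bits has bias at most $(n-1)/2^M$, so choosing $M \approx \log_2(n/\epsilon)$ yields an $\epsilon$-biased distribution from a seed of $2M = O(\log(n/\epsilon))$ bits.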

Bogdanov & Viola (2007) conjectured that the sum of small-bias generators fools low-degree polynomials and were able to prove this under the Gowers inverse conjecture. Lovett (2009) proved unconditionally that the sum of $2^d$ small-bias spaces fools polynomials of degree $d$. Viola (2008) proved that, in fact, taking the sum of only $d$ small-bias generators is sufficient to fool polynomials of degree $d$. The analysis of Viola (2008) gives a seed length of $d \cdot \log n + O(d \cdot 2^d \cdot \log(1/\epsilon))$.
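
Viola's generator is simply the bitwise XOR of $d$ independent small-bias samples. A minimal sketch, reusing the illustrative small_bias_sample (and M) from the sketch above:

```python
def viola_generator(n: int, seeds: list) -> list:
    """XOR of len(seeds) independent small-bias samples; by Viola (2008),
    d seeds of suitably small bias fool degree-d polynomials over GF(2).
    Reuses small_bias_sample and M from the previous sketch."""
    output = [0] * n
    for seed in seeds:                         # one small-bias sample per seed
        sample = small_bias_sample(n, seed)
        output = [o ^ s for o, s in zip(output, sample)]
    return output

# To fool degree-3 polynomials: d = 3 independent seeds, i.e. a total
# seed of 3 * 2 * M truly random bits (ignoring the bias dependence).
d = 3
seeds = [(random.randrange(1 << M), random.randrange(1 << M)) for _ in range(d)]
print(viola_generator(32, seeds))
```

In Viola's analysis, each component generator must have bias roughly $\epsilon^{O(2^d)}$ for the sum to fool degree-$d$ polynomials to within $\epsilon$; the fixed field size in this sketch ignores that dependence, which is what contributes the $O(d \cdot 2^d \cdot \log(1/\epsilon))$ term to the seed length.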

References