A randomness extractor, often simply called an "extractor", is a function which, when applied to output from a weak entropy source together with a short, uniformly random seed, generates a highly random output that appears independent of the source and uniformly distributed. [1] Examples of weakly random sources include radioactive decay or thermal noise; the only restriction on possible sources is that there is no way they can be fully controlled, calculated or predicted, and that a lower bound on their entropy rate can be established. For a given source, a randomness extractor can even be considered to be a true random number generator (TRNG); but there is no single extractor that has been proven to produce truly random output from any type of weakly random source.
Sometimes the term "bias" is used to denote a weakly random source's departure from uniformity, and in older literature, some extractors are called unbiasing algorithms, [2] as they take the randomness from a so-called "biased" source and output a distribution that appears unbiased. The weakly random source will always be longer than the extractor's output, but an efficient extractor is one that lowers this ratio of lengths as much as possible, while simultaneously keeping the seed length low. Intuitively, this means that as much randomness as possible has been "extracted" from the source.
An extractor has some conceptual similarities with a pseudorandom generator (PRG), but the two concepts are not identical. Both are functions that take as input a small, uniformly random seed and produce a longer output that "looks" uniformly random. Some pseudorandom generators are, in fact, also extractors. (When a PRG is based on the existence of hard-core predicates, one can think of the weakly random source as a set of truth tables of such predicates and prove that the output is statistically close to uniform. [3] ) However, the general PRG definition does not specify that a weakly random source must be used, and while in the case of an extractor, the output should be statistically close to uniform, in a PRG it is only required to be computationally indistinguishable from uniform, a somewhat weaker concept.
The min-entropy of a distribution $X$ (denoted $H_\infty(X)$) is the largest real number $k$ such that $\Pr[X = x] \le 2^{-k}$ for every $x$ in the range of $X$. In essence, this measures how likely $X$ is to take its most likely value, giving a worst-case bound on how random $X$ appears. Letting $U_\ell$ denote the uniform distribution over $\{0,1\}^\ell$, clearly $H_\infty(U_\ell) = \ell$.
For an $n$-bit distribution $X$ with min-entropy $k$, we say that $X$ is an $(n, k)$ distribution.
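As a worked illustration (the numbers below are purely illustrative and not from the cited sources), the min-entropy can be computed directly from the probability of the most likely outcome; a minimal Python sketch:

```python
import math

def min_entropy(probabilities):
    """H_inf(X) = -log2 of the probability of X's most likely value."""
    return -math.log2(max(probabilities))

# A biased 2-bit source: outcome 00 has probability 1/2, the rest 1/6 each.
print(min_entropy([0.5, 1/6, 1/6, 1/6]))  # 1.0, versus 2.0 for the uniform 2-bit source
```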
Definition ((k, ε)-extractor): Let $\text{Ext}: \{0,1\}^n \times \{0,1\}^d \rightarrow \{0,1\}^m$ be a function that takes as input a sample from an $(n, k)$ distribution $X$ and a $d$-bit seed from $U_d$, and outputs an $m$-bit string. $\text{Ext}$ is a $(k, \varepsilon)$-extractor if, for all $(n, k)$ distributions $X$, the output distribution of $\text{Ext}$ is $\varepsilon$-close to $U_m$.
In the above definition, ε-close refers to statistical distance.
Intuitively, an extractor takes a weakly random $n$-bit input and a short, uniformly random seed and produces an $m$-bit output that looks uniformly random. The aim is to have a low $d$ (i.e. to use as little uniform randomness as possible) and as high an $m$ as possible (i.e. to get out as many close-to-random bits of output as we can).
An extractor is strong if concatenating the seed with the extractor's output yields a distribution that is still close to uniform.
Definition (Strong Extractor): A $(k, \varepsilon)$-strong extractor is a function
$$\text{Ext}: \{0,1\}^n \times \{0,1\}^d \rightarrow \{0,1\}^m$$
such that for every $(n, k)$ distribution $X$ the distribution $U_d \circ \text{Ext}(X, U_d)$ (the two copies of $U_d$ denote the same random variable) is $\varepsilon$-close to the uniform distribution on $\{0,1\}^{m+d}$.
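As an illustration only (this construction is not discussed above; it is the standard pairwise-independent-hashing approach, justified by the leftover hash lemma, and the seed here is a full random matrix, far longer than the optimal bounds given below), a seeded extractor can be realized as multiplication of the weak source by a seed-derived random matrix over GF(2):

```python
import secrets

def matrix_extractor(source_bits, seed_bits, m):
    """Ext(x, seed): interpret the seed as an m x n 0/1 matrix A and output A*x over GF(2).

    n = len(source_bits); the seed must contain m * n bits.
    The valid (k, epsilon) parameters for a given output length m are governed
    by the leftover hash lemma and are not checked here.
    """
    n = len(source_bits)
    assert len(seed_bits) == m * n
    out = []
    for row in range(m):
        row_bits = seed_bits[row * n:(row + 1) * n]
        out.append(sum(a & x for a, x in zip(row_bits, source_bits)) % 2)  # parity of AND
    return out

# Usage sketch: 16 weakly random bits, 4 output bits, a 64-bit uniform seed.
x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]    # from a weak source
seed = [secrets.randbits(1) for _ in range(4 * len(x))]  # uniform seed
print(matrix_extractor(x, seed, m=4))
```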
Using the probabilistic method, it can be shown that there exists a (k, ε)-extractor, i.e. that the construction is possible. However, it is usually not enough merely to show that an extractor exists. An explicit construction is needed, which is given as follows:
Definition (Explicit Extractor): For functions $k(n)$, $\varepsilon(n)$, $d(n)$, $m(n)$, a family $\text{Ext} = \{\text{Ext}_n\}$ of functions
$$\text{Ext}_n : \{0,1\}^n \times \{0,1\}^{d(n)} \rightarrow \{0,1\}^{m(n)}$$
is an explicit $(k, \varepsilon)$-extractor if $\text{Ext}(x, y)$ can be computed in polynomial time (in its input length) and for every $n$, $\text{Ext}_n$ is a $(k(n), \varepsilon(n))$-extractor.
By the probabilistic method, it can be shown that there exists a $(k, \varepsilon)$-extractor with seed length
$$d = \log_2(n - k) + 2\log_2\!\left(\frac{1}{\varepsilon}\right) + O(1)$$
and output length
$$m = k + d - 2\log_2\!\left(\frac{1}{\varepsilon}\right) - O(1).$$
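For a rough sense of scale (an illustrative calculation only, ignoring the $O(1)$ terms): with $n = 1000$, $k = 500$ and $\varepsilon = 2^{-20}$, the seed length is about $d \approx \log_2(500) + 40 \approx 49$ bits, while the output length is about $m \approx k + d - 40 \approx 509$ bits, i.e. nearly all of the source's min-entropy plus the seed is recovered.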
A variant of the randomness extractor with weaker properties is the disperser.
One of the most important aspects of cryptography is random key generation. [5] It is often necessary to generate secret and random keys from sources that are semi-secret or that may be compromised to some degree. By taking a single, short (and secret) random key as a source, an extractor can be used to generate a longer pseudo-random key, which then can be used for public key encryption. More specifically, when a strong extractor is used, its output will appear to be uniformly random even to someone who sees part (but not all) of the source, for example someone who knows the source but not the seed (or vice versa). This property of extractors is particularly useful in what is commonly called Exposure-Resilient cryptography, in which the desired extractor is used as an Exposure-Resilient Function (ERF). Exposure-Resilient cryptography takes into account the fact that it is difficult to keep secret the initial exchange of data that often takes place during the initialization of an encryption application; e.g., the sender of encrypted information has to provide the receivers with information that is required for decryption.
The following paragraphs define two kinds of ERF, the k-ERF and the k-APRF, and establish an important relationship between them; both are useful in Exposure-Resilient cryptography.
Definition (k-ERF): An adaptive k-ERF is a function $f: \{0,1\}^n \rightarrow \{0,1\}^m$ where, for a random input $r$, when a computationally unbounded adversary $A$ can adaptively read all of $r$ except for $k$ bits, $\left|\Pr\{A(f(r)) = 1\} - \Pr\{A(R) = 1\}\right| \le \varepsilon(n)$ for some negligible function $\varepsilon(n)$ (defined below), where $R$ denotes a uniformly random $m$-bit string.
The goal is to construct an adaptive ERF whose output is highly random and uniformly distributed. But a stronger condition is often needed in which every output occurs with almost uniform probability. For this purpose Almost-Perfect Resilient Functions (APRF) are used. The definition of an APRF is as follows:
Definition (k-APRF): A k-APRF is a function $f: \{0,1\}^n \rightarrow \{0,1\}^m$ where, for any setting of $n - k$ bits of the input $r$ to any fixed values, the probability vector $p$ of the output $f(r)$ over the random choices for the $k$ remaining bits satisfies $|p_i - 2^{-m}| < 2^{-m}\varepsilon(n)$ for all $i$ and for some negligible function $\varepsilon(n)$.
Kamp and Zuckerman [6] have proved a theorem stating that if a function $f$ is a k-APRF, then $f$ is also a k-ERF. More specifically, any extractor having sufficiently small error and taking as input an oblivious, bit-fixing source is also an APRF and therefore also a k-ERF. A more specific extractor is expressed in this lemma:
Lemma: Any $2^{-m}\varepsilon(n)$-extractor $f: \{0,1\}^n \rightarrow \{0,1\}^m$ for the set of $(n, k)$ oblivious bit-fixing sources, where $\varepsilon(n)$ is negligible, is also a k-APRF.
The lemma is proved by Kamp and Zuckerman [6] by examining the distance from uniform of the output, which in a $2^{-m}\varepsilon(n)$-extractor is at most $2^{-m}\varepsilon(n)$, satisfying the condition of the APRF.
The lemma leads to the following theorem, stating that there in fact exists a k-APRF function as described:
Theorem (existence): For any positive constant $\gamma \le \frac{1}{2}$, there exists an explicit k-APRF $f: \{0,1\}^n \rightarrow \{0,1\}^m$, computable in a linear number of arithmetic operations on $m$-bit strings, with $m = \Omega(n^{2\gamma})$ and $k = n^{\frac{1}{2}+\gamma}$.
Definition (negligible function): In the proof of this theorem, we need a definition of a negligible function. A function $\varepsilon(n)$ is defined as being negligible if $\varepsilon(n) = O\!\left(\frac{1}{n^c}\right)$ for all constants $c$.
Proof: Consider the following $\varepsilon$-extractor: the function $f$ is an extractor for the set of $(n, \delta n)$ oblivious bit-fixing sources, $f: \{0,1\}^n \rightarrow \{0,1\}^m$. $f$ has $\varepsilon < 2^{-cm}$, $m = \Omega(\delta^2 n)$ and $c > 1$.
The proof of this extractor's existence with $\delta \le 1$, $c > 1$, as well as the fact that it is computable in linear time in the length of its input, can be found in the paper by Jesse Kamp and David Zuckerman (p. 1240).
That this extractor fulfills the criteria of the lemma is trivially true, as $\varepsilon < 2^{-cm}$ is a negligible function.
The size of $m$ is
$$m = \Omega(\delta^2 n).$$
Since we know $\delta \le 1$, the lower bound on $m$ is dominated by the term $\delta^2 n \le n$; in the last step of the calculation below we use the fact that $\gamma \le \frac{1}{2}$, which means that the power of $n$ is at most $1$, and since $n$ is a positive integer we know that $n^{2\gamma}$ is at most $n$.
The value of $\delta$ is calculated by using the definition of the extractor, where we know
$$(n, k) = (n, \delta n) \Rightarrow k = \delta n,$$
and by using the value of $k$ from the theorem statement we have
$$k = n^{\frac{1}{2}+\gamma}.$$
Using this value of $k$ we account for the worst case, where $k$ is on its lower bound. Now by algebraic calculation we get
$$\delta n = n^{\frac{1}{2}+\gamma} \Rightarrow \delta = n^{\gamma-\frac{1}{2}},$$
which inserted in the value of $m$ gives
$$m = \Omega(\delta^2 n) = \Omega\!\left(\left(n^{\gamma-\frac{1}{2}}\right)^{2} n\right) = \Omega\!\left(n^{2\gamma}\right),$$
which proves that there exists an explicit k-APRF extractor with the given properties.
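As an illustrative instantiation of the theorem (the numbers are not from the cited paper): taking $\gamma = \frac{1}{4}$ and $n = 2^{20}$ input bits gives $k = n^{3/4} = 2^{15} = 32768$, so all but $32768$ of the input bits may be fixed by the adversary, and an output length of $m = \Omega(n^{1/2}) = \Omega(2^{10})$, i.e. on the order of a thousand almost-perfectly random bits.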
Perhaps the earliest example is due to John von Neumann. From the input stream, his extractor took bits, two at a time (first and second, then third and fourth, and so on). If the two bits matched, no output was generated. If the bits differed, the value of the first bit was output. The Von Neumann extractor can be shown to produce a uniform output even if the distribution of input bits is not uniform so long as each bit has the same probability of being one and there is no correlation between successive bits. [7]
Thus, it takes as input a Bernoulli sequence with $p$ not necessarily equal to $1/2$, and outputs a Bernoulli sequence with $p = 1/2$. More generally, it applies to any exchangeable sequence; it relies only on the fact that for any pair, 01 and 10 are equally likely: for independent trials, these have probabilities $p(1-p)$ and $(1-p)p$, while for an exchangeable sequence the probability may be more complicated, but both are equally likely. To put it simply, because the bits are statistically independent and due to the commutative property of multiplication, it follows that $\Pr[01] = p(1-p) = (1-p)p = \Pr[10]$. Hence, if pairs 01 and 10 are mapped onto bits 0 and 1 and pairs 00 and 11 are discarded, then the output will be a uniform distribution.
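A minimal sketch of the procedure in Python (the function name and the toy input are illustrative, not from the cited sources):

```python
def von_neumann_extractor(bits):
    """Von Neumann extractor: read input bits two at a time, emit the first
    bit of each discordant pair (01 -> 0, 10 -> 1), discard 00 and 11."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # 01 maps to 0, 10 maps to 1
    return out

# Example: a biased but independent bit stream still yields unbiased output bits.
# >>> von_neumann_extractor([1, 1, 0, 1, 1, 0, 0, 0])
# [0, 1]
```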
Refinements of the von Neumann extractor include the Elias and Peres extractors, the latter of which reuses bits in order to produce larger output streams than the von Neumann extractor given the same size input stream. [8]
Another approach is to use the output of a chaos machine applied to the input stream. This approach generally relies on properties of chaotic systems. Input bits are pushed to the machine, evolving orbits and trajectories in multiple dynamical systems. Thus, small differences in the input produce very different outputs. Such a machine has a uniform output even if the distribution of input bits is not uniform or has serious flaws, and can therefore use weak entropy sources. Additionally, this scheme allows for increased complexity, quality, and security of the output stream, controlled by specifying three parameters: time cost, memory required, and secret key.
Note that while true chaotic systems are mathematically sound for 'amplifying' entropy, this is predicated on the availability of real numbers with an infinite precision. When implemented in digital computers with finite precision number representation, as in chaos machines using IEEE 754 Floating-Point, the periodicity has been shown to fall far short of the full space for a given bit length. [9]
It is also possible to use a cryptographic hash function as a randomness extractor; however, not every hashing algorithm is suitable for this purpose.
Randomness extractors are used widely in cryptographic applications, whereby a cryptographic hash function is applied to a high-entropy, but non-uniform source, such as disk drive timing information or keyboard delays, to yield a uniformly random result.
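As an illustration only (assuming SHA-256 as the hash and synthetic timing samples; the text does not prescribe a particular hash), pooled timing jitter can be condensed into a short uniform-looking key:

```python
import hashlib
import struct
import time

def timing_samples(n=1024):
    """Collect crude timing jitter samples (a stand-in for a high-entropy
    but non-uniform source such as disk or keyboard timings)."""
    return [time.perf_counter_ns() & 0xFFFF for _ in range(n)]  # keep the noisy low bits

def hash_extract(samples, out_bytes=32):
    """Condense the pooled samples with SHA-256; the output length must not
    exceed the entropy actually present in the pool."""
    pool = b"".join(struct.pack("<H", s) for s in samples)
    return hashlib.sha256(pool).digest()[:out_bytes]

key = hash_extract(timing_samples())
```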
Randomness extractors have played a major role in recent quantum cryptography developments, for example, distilling the raw output from a quantum random number generator into a shorter, secure and uniformly random output. [10]
Randomness extraction is also used in some branches of computational complexity theory and in the construction of list-decodable error correcting codes.
In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time $O((\log n)^c)$ using $O(n^k)$ parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size.
In the theory of computation, a branch of theoretical computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack.
A binary symmetric channel is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit, and the receiver will receive a bit. The bit will be "flipped" with a "crossover probability" of p, and otherwise is received correctly. This model can be applied to varied communication channels such as telephone lines or disk drive storage.
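A minimal simulation sketch of this channel model (the function and parameter names are illustrative):

```python
import random

def binary_symmetric_channel(bits, p):
    """Flip each transmitted bit independently with crossover probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

received = binary_symmetric_channel([0, 1, 1, 0, 1], p=0.1)
```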
In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can—unlike a deterministic Turing Machine—have stochastic results; that is, on a given input and instruction state machine, it may have different run times, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution.
In quantum information theory, a quantum circuit is a model for quantum computation, similar to classical circuits, in which a computation is a sequence of quantum gates, measurements, initializations of qubits to known values, and possibly other actions. The minimum set of actions that a circuit needs to be able to perform on the qubits to enable quantum computation is known as DiVincenzo's criteria.
A finite-state transducer (FST) is a finite-state machine with two memory tapes, following the terminology for Turing machines: an input tape and an output tape. This contrasts with an ordinary finite-state automaton, which has a single tape. An FST is a type of finite-state automaton (FSA) that maps between two sets of symbols. An FST is more general than an FSA. An FSA defines a formal language by defining a set of accepted strings, while an FST defines a relation between sets of strings.
In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it can be used to describe certain time-varying processes in nature, economics, behavior, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation, which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
In computational complexity theory and cryptography, the existence of pseudorandom generators is related to the existence of one-way functions through a number of theorems, collectively referred to as the pseudorandom generator theorem.
In cryptography, a pseudorandom permutation (PRP) is a function that cannot be distinguished from a random permutation (that is, a permutation selected at random with uniform probability, from the family of all permutations on the function's domain) with practical effort.
In theoretical computer science, a small-bias sample space is a probability distribution that fools parity functions. In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with high probability, and hence, small-bias sample spaces naturally give rise to pseudorandom generators for parity functions.
A locally testable code is a type of error-correcting code for which it can be determined if a string is a word in that code by looking at a small number of bits of the string. In some situations, it is useful to know if the data is corrupted without decoding all of it so that appropriate action can be taken in response. For example, in communication, if the receiver encounters a corrupted code, it can request the data be re-sent, which could increase the accuracy of said data. Similarly, in data storage, these codes can allow for damaged data to be recovered and rewritten properly.
In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(x_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.
A locally decodable code (LDC) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by only examining a small number of bits of a possibly corrupted codeword. This property could be useful, say, in a context where information is being transmitted over a noisy channel, and only a small subset of the data is required at a particular time and there is no need to decode the entire message at once. Locally decodable codes are not a subset of locally testable codes, though there is some overlap between the two.
In 1997, Moni Naor and Omer Reingold described efficient constructions for various cryptographic primitives in private key as well as public-key cryptography. Their result is the construction of an efficient pseudorandom function. Let $p$ and $l$ be prime numbers with $l \mid p-1$. Select an element $g \in \mathbb{F}_p^{*}$ of multiplicative order $l$. Then for each $(n+1)$-dimensional vector $a = (a_0, a_1, \ldots, a_n) \in (\mathbb{Z}_l)^{n+1}$ they define the function
$$f_a(x) = g^{a_0 \prod_{i=1}^{n} a_i^{x_i}},$$
where $x = x_1 \cdots x_n$ is the bit representation of the input.
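A toy instantiation in Python (the tiny primes and the helper names are illustrative only; real use requires cryptographically large parameters):

```python
from math import prod
import random

# Toy parameters: l divides p - 1, and g has multiplicative order l modulo p.
p, l = 23, 11          # 11 divides 22
g = 4                  # 4 has order 11 modulo 23
n = 4
a = [random.randrange(1, l) for _ in range(n + 1)]  # a_0, ..., a_n

def naor_reingold(a, x_bits):
    """f_a(x) = g^(a_0 * prod_{x_i = 1} a_i) mod p, with the exponent reduced mod l."""
    exponent = (a[0] * prod(a[i + 1] for i, b in enumerate(x_bits) if b)) % l
    return pow(g, exponent, p)

value = naor_reingold(a, [1, 0, 1, 1])
```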
Fuzzy extractors are a method that allows biometric data to be used as inputs to standard cryptographic techniques, to enhance computer security. "Fuzzy", in this context, refers to the fact that the fixed values required for cryptography will be extracted from values close to but not identical to the original key, without compromising the security required. One application is to encrypt and authenticate users' records, using the biometric inputs of the user as a key.
In extractor theory, a randomness merger is a function which extracts randomness out of a set of random variables, provided that at least one of them is uniformly random. Its name stems from the fact that it can be seen as a procedure which "merges" all the variables into one, preserving at least some of the entropy contained in the uniformly random variable. Mergers are currently used in order to explicitly construct randomness extractors.
Adding controlled noise from predetermined distributions is a way of designing differentially private mechanisms. This technique is useful for designing private mechanisms for real-valued functions on sensitive data. Some commonly used distributions for adding noise include Laplace and Gaussian distributions.
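A minimal sketch of Laplace noise addition (the function and parameter names are illustrative):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus Laplace(0, sensitivity/epsilon) noise.

    Uses the fact that the difference of two i.i.d. exponential variables
    with mean b is Laplace-distributed with scale b."""
    b = sensitivity / epsilon
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_value + noise

# Example: privately release a count query with sensitivity 1 at epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=42, sensitivity=1, epsilon=0.5)
```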
The counting lemmas this article discusses are statements in combinatorics and graph theory. The first one extracts information from $\varepsilon$-regular pairs of subsets of vertices in a graph $G$, in order to guarantee patterns in the entire graph; more explicitly, these patterns correspond to the count of copies of a certain graph $H$ in $G$. The second counting lemma provides a similar yet more general notion on the space of graphons, in which a scalar of the cut distance between two graphs is correlated to the homomorphism density between them and a fixed graphon.
In cryptography, full entropy is a property of an output of a random number generator. The output has full entropy if it cannot practically be distinguished from an output of a theoretical perfect random number source.