Reservoir sampling is a family of randomized algorithms for choosing a simple random sample, without replacement, of k items from a population of unknown size n in a single pass over the items. The size of the population n is not known to the algorithm and is typically too large for all n items to fit into main memory. The population is revealed to the algorithm over time, and the algorithm cannot look back at previous items. At any point, the current state of the algorithm must permit extraction of a simple random sample without replacement of size k over the part of the population seen so far.
Suppose we see a sequence of items, one at a time. We want to keep 10 items in memory, and we want them to be selected at random from the sequence. If we know the total number of items n and can access the items arbitrarily, then the solution is easy: select 10 distinct indices i between 1 and n with equal probability, and keep the i-th elements. The problem is that we do not always know the exact n in advance.
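When n is known and the items are directly accessible, the selection can be done in one step. For instance, a minimal Python sketch (the population contents here are purely illustrative):

import random

population = list(range(1000))          # all n items, assumed to fit in memory
sample = random.sample(population, 10)  # 10 distinct items; every 10-subset is equally likely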
A simple and popular but slow algorithm, Algorithm R, was created by Jeffrey Vitter. [1]
Initialize an array R indexed from 1 to k, containing the first k items of the input x_1, ..., x_k. This is the reservoir.
For each new input x_i with i > k, generate a random integer j uniformly in {1, ..., i}. If j ≤ k, then set R[j] := x_i. Otherwise, discard x_i.
Return R after all inputs are processed.
This algorithm works by induction on i ≥ k.
When i = k, Algorithm R returns all k inputs, thus providing the basis for a proof by mathematical induction.
Here, the induction hypothesis is that the probability that a particular input is included in the reservoir just before the i-th input is processed is k/(i−1), and we must show that the probability that a particular input is included in the reservoir is k/i just after the i-th input is processed.
Apply Algorithm R to the i-th input. Input x_i is included with probability k/i by definition of the algorithm. For any other input x_j (with j < i), by the induction hypothesis, the probability that it is included in the reservoir just before the i-th input is processed is k/(i−1). The probability that it is still included in the reservoir after x_i is processed (in other words, that x_j is not replaced by x_i) is (i−1)/i. The latter follows from the assumption that the integer generated when processing x_i is uniformly random: a replacement occurs with probability k/i, and once it becomes clear that a replacement will in fact occur, the probability that x_j in particular is replaced by x_i is 1/k, so x_j is replaced with probability (k/i)(1/k) = 1/i. Combining the two factors, the probability that x_j remains in the reservoir after the i-th input is processed is k/(i−1) · (i−1)/i = k/i.
We have shown that the probability that a new input enters the reservoir is equal to the probability that an existing input in the reservoir is retained. Therefore, we conclude by the principle of mathematical induction that Algorithm R does indeed produce a uniform random sample of the inputs.
While conceptually simple and easy to understand, this algorithm needs to generate a random number for each item of the input, including the items that are discarded. The algorithm's asymptotic running time is thus O(n). Generating this amount of randomness and the linear run time cause the algorithm to be unnecessarily slow if the input population is large.
This is Algorithm R, implemented as follows:
(* S has items to sample, R will contain the result *)
ReservoirSample(S[1..n], R[1..k])
  // fill the reservoir array
  for i := 1 to k
      R[i] := S[i]
  end
  // replace elements with gradually decreasing probability
  for i := k+1 to n
      (* randomInteger(a, b) generates a uniform integer from the inclusive range {a, ..., b} *)
      j := randomInteger(1, i)
      if j <= k
          R[j] := S[i]
      end
  end
end
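For concreteness, a minimal Python sketch of Algorithm R over an arbitrary iterable follows (the function name and the use of the standard random module are illustrative choices, not part of the original formulation):

import random

def reservoir_sample_r(stream, k):
    """Algorithm R: uniform sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(item)       # fill the reservoir with the first k items
        else:
            j = random.randint(1, i)     # uniform integer in {1, ..., i}
            if j <= k:
                reservoir[j - 1] = item  # item i enters with probability k/i
    return reservoir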
If we generate n random numbers u_1, ..., u_n uniformly and independently on (0, 1), then the indices of the k smallest of them form a uniform sample of the k-subsets of {1, ..., n}.
The process can be done without knowing n:
Keep the k smallest of u_1, ..., u_i that have been seen so far, as well as w_i, the index of the largest among them. For each new u_{i+1}, compare it with u_{w_i}. If u_{i+1} < u_{w_i}, then discard u_{w_i}, store u_{i+1}, and set w_{i+1} to be the index of the largest of the stored values. Otherwise, discard u_{i+1} and set w_{i+1} := w_i.
Now couple this with the stream of inputs x_1, ..., x_n. Every time some u_i is accepted, store the corresponding x_i. Every time some u_i is discarded, discard the corresponding x_i.
This algorithm still needs O(n) random numbers, thus taking O(n) time. But it can be simplified.
First simplification: it is unnecessary to test the new values u_{i+1}, u_{i+2}, ... one by one, since the probability that the next acceptance happens after exactly m rejections is (1 − u_{w_i})^m · u_{w_i}; that is, the gap until the next acceptance follows a geometric distribution with success probability u_{w_i} and can be generated directly.
Second simplification: it is unnecessary to remember the entire array of the k smallest of u_1, ..., u_i that have been seen so far, but merely w, the largest among them. This is based on three observations:
Every time a new input is accepted into the reservoir, the item it replaces is a uniformly random one of the k items currently stored, since by symmetry each stored item is equally likely to be the one holding the largest value w.
The value associated with the newly accepted input is uniformly distributed on (0, w).
After the replacement, the new largest stored value is distributed as the maximum of k independent uniform random variables on (0, w), that is, as w · random()^(1/k).
This is Algorithm L, [2] which is implemented as follows:
(* S has items to sample, R will contain the result *)
ReservoirSample(S[1..n], R[1..k])
  // fill the reservoir array
  for i = 1 to k
      R[i] := S[i]
  end
  (* random() generates a uniform (0,1) random number *)
  W := exp(log(random())/k)
  while i <= n
      i := i + floor(log(random())/log(1-W)) + 1
      if i <= n
          (* replace a random item of the reservoir with item i *)
          R[randomInteger(1,k)] := S[i]  // random index between 1 and k, inclusive
          W := W * exp(log(random())/k)
      end
  end
end
This algorithm computes three random numbers for each item that becomes part of the reservoir, and does not spend any time on items that do not. Its expected running time is thus O(k(1 + log(n/k))), [2] which is optimal. [1] At the same time, it is simple to implement efficiently and does not depend on random deviates from exotic or hard-to-compute distributions.
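As a concrete illustration, the pseudocode above can be transcribed into Python roughly as follows (a sketch assuming the input is an indexable sequence; names are illustrative):

import math
import random

def reservoir_sample_l(seq, k):
    """Algorithm L: uniform sample of k items from seq, using geometric skips."""
    n = len(seq)
    reservoir = list(seq[:k])
    w = math.exp(math.log(random.random()) / k)  # current threshold
    i = k                                        # 1-based index of the last item examined
    while i <= n:
        i += math.floor(math.log(random.random()) / math.log(1.0 - w)) + 1  # geometric skip
        if i <= n:
            reservoir[random.randrange(k)] = seq[i - 1]   # replace a uniformly random slot
            w *= math.exp(math.log(random.random()) / k)  # shrink the threshold
    return reservoir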
If we associate with each item of the input a uniformly generated random number, the k items with the largest (or, equivalently, smallest) associated values form a simple random sample. [3] A simple reservoir-sampling algorithm thus maintains the k items with the currently largest associated values in a priority queue.
(* S is a stream of items to sample
   S.Current returns current item in stream
   S.Next advances stream to next position
   min-priority-queue supports:
     Count -> number of items in priority queue
     Minimum -> returns minimum key value of all items
     Extract-Min() -> Remove the item with minimum key
     Insert(key, Item) -> Adds item with specified key *)
ReservoirSample(S[1..?])
  H := new min-priority-queue
  while S has data
      r := random()  // uniformly random between 0 and 1, exclusive
      if H.Count < k
          H.Insert(r, S.Current)
      else
          // keep k items with largest associated keys
          if r > H.Minimum
              H.Extract-Min()
              H.Insert(r, S.Current)
          end
      end
      S.Next
  end
  return items in H
end
The expected running time of this algorithm is O(n + k log(n/k) log k), and it is relevant mainly because it can easily be extended to items with weights.
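A Python sketch of this approach, using the standard heapq module as the min-priority queue (an illustrative transcription, not taken from the cited sources):

import heapq
import random

def reservoir_sample_by_keys(stream, k):
    """Keep the k stream items with the largest uniformly random keys."""
    heap = []                            # min-heap of (key, item) pairs
    for item in stream:
        r = random.random()              # uniform key in [0, 1)
        if len(heap) < k:
            heapq.heappush(heap, (r, item))
        elif r > heap[0][0]:             # new key beats the smallest key kept so far
            heapq.heapreplace(heap, (r, item))
    return [item for _, item in heap]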
The methods presented in the previous sections do not make it possible to obtain a priori fixed inclusion probabilities. Some applications require items' sampling probabilities to be according to weights associated with each item. For example, it might be required to sample queries in a search engine with weight equal to the number of times they were performed, so that the sample can be analyzed for overall impact on user experience. Let the weight of item i be w_i, and the sum of all weights be W. There are two ways to interpret weights assigned to each item in the set: [4]
Interpretation 1: in each round, the probability of every not-yet-selected item being selected is proportional to its weight relative to the weights of all not-yet-selected items; that is, the sample is drawn by successively selecting items without replacement with probability proportional to weight.
Interpretation 2: the probability of each item being included in the final random sample is proportional to its relative weight, i.e., the inclusion probability of item i is k · w_i / W.
The following algorithm, which uses interpretation 1, was given by Efraimidis and Spirakis (algorithm A-Res): [5]
(* S is a stream of items to sample
   S.Current returns current item in stream
   S.Weight returns weight of current item in stream
   S.Next advances stream to next position
   The power operator is represented by ^
   min-priority-queue supports:
     Count -> number of items in priority queue
     Minimum() -> returns minimum key value of all items
     Extract-Min() -> Remove the item with minimum key
     Insert(key, Item) -> Adds item with specified key *)
ReservoirSample(S[1..?])
  H := new min-priority-queue
  while S has data
      r := random()^(1/S.Weight)  // random() produces a uniformly random number in (0,1)
      if H.Count < k
          H.Insert(r, S.Current)
      else
          // keep k items with largest associated keys
          if r > H.Minimum
              H.Extract-Min()
              H.Insert(r, S.Current)
          end
      end
      S.Next
  end
  return items in H
end
This algorithm is identical to the algorithm given in Reservoir Sampling with Random Sort except for the generation of the items' keys. The algorithm is equivalent to assigning each item i a key u_i^(1/w_i), where u_i is a uniformly distributed random number in (0,1), and then selecting the k items with the largest keys. Equivalently, a more numerically stable formulation of this algorithm computes the keys as −ln(u_i)/w_i and selects the k items with the smallest keys. [6]
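Under the same assumptions, a Python sketch of A-Res with heapq, where the stream yields (weight, item) pairs (this pairing convention is an assumption made for illustration):

import heapq
import random

def a_res(weighted_stream, k):
    """A-Res: weighted reservoir sampling; weighted_stream yields (weight, item) pairs."""
    heap = []                                  # min-heap of (key, item) pairs
    for weight, item in weighted_stream:
        r = random.random() ** (1.0 / weight)  # key u^(1/w); larger weights give larger keys
        if len(heap) < k:
            heapq.heappush(heap, (r, item))
        elif r > heap[0][0]:
            heapq.heapreplace(heap, (r, item))
    return [item for _, item in heap]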
The following algorithm is a more efficient version of A-Res, also given by Efraimidis and Spirakis: [5]
(* S is a stream of items to sample
   S.Current returns current item in stream
   S.Weight returns weight of current item in stream
   S.Next advances stream to next position
   The power operator is represented by ^
   min-priority-queue supports:
     Count -> number of items in the priority queue
     Minimum -> minimum key of any item in the priority queue
     Extract-Min() -> Remove the item with minimum key
     Insert(Key, Item) -> Adds item with specified key *)
ReservoirSampleWithJumps(S[1..?])
  H := new min-priority-queue
  while S has data and H.Count < k
      r := random()^(1/S.Weight)  // random() produces a uniformly random number in (0,1)
      H.Insert(r, S.Current)
      S.Next
  end
  X := log(random()) / log(H.Minimum)  // this is the amount of weight that needs to be jumped over
  while S has data
      X := X - S.Weight
      if X <= 0
          t := H.Minimum ^ S.Weight
          r := random(t, 1)^(1/S.Weight)  // random(x, y) produces a uniformly random number in (x, y)
          H.Extract-Min()
          H.Insert(r, S.Current)
          X := log(random()) / log(H.Minimum)
      end
      S.Next
  end
  return items in H
end
This algorithm follows the same mathematical properties that are used in A-Res, but instead of calculating the key for each item and checking whether that item should be inserted or not, it calculates an exponential jump to the next item which will be inserted. This avoids having to create random variates for each item, which may be expensive. The number of random variates required is reduced from O(n) to O(k log(n/k)) in expectation, where k is the reservoir size and n is the number of items in the stream. [5]
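A Python sketch of this exponential-jump version, mirroring the pseudocode above (again assuming a stream of (weight, item) pairs):

import heapq
import math
import random

def a_expj(weighted_stream, k):
    """Weighted reservoir sampling with exponential jumps (more efficient variant of A-Res)."""
    heap = []                                  # min-heap of (key, item) pairs
    stream = iter(weighted_stream)
    for weight, item in stream:
        r = random.random() ** (1.0 / weight)
        heapq.heappush(heap, (r, item))
        if len(heap) == k:
            break
    if len(heap) < k:
        return [item for _, item in heap]      # the stream held fewer than k items
    x = math.log(random.random()) / math.log(heap[0][0])  # amount of weight to jump over
    for weight, item in stream:
        x -= weight
        if x <= 0:
            t = heap[0][0] ** weight
            r = random.uniform(t, 1.0) ** (1.0 / weight)  # key conditioned on acceptance
            heapq.heapreplace(heap, (r, item))            # evict the minimum, insert the new item
            x = math.log(random.random()) / math.log(heap[0][0])
    return [item for _, item in heap]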
Warning: the following description differs from Chao's original paper; see the original paper [7] for the exact formulation.
The following algorithm, which uses interpretation 2, was given by M. T. Chao [7] and is also described by Tillé (2006): [8]
(* S has items to sample, R will contain the result
   S[i].Weight contains weight for each item *)
WeightedReservoir-Chao(S[1..n], R[1..k])
  WSum := 0
  // fill the reservoir array
  for i := 1 to k
      R[i] := S[i]
      WSum := WSum + S[i].Weight
  end
  for i := k+1 to n
      WSum := WSum + S[i].Weight
      p := S[i].Weight / WSum  // probability for this item
      j := random()            // uniformly random between 0 and 1
      if j <= p                // select item according to probability
          R[randomInteger(1,k)] := S[i]  // uniform selection in reservoir for replacement
      end
  end
end
For each item, its relative weight is calculated and used to randomly decide if the item will be added into the reservoir. If the item is selected, then one of the existing items of the reservoir is uniformly selected and replaced with the new item. The trick here is that, if the probabilities of all items in the reservoir are already proportional to their weights, then by selecting uniformly which item to replace, the probabilities of all items remain proportional to their weight after the replacement.
Note that Chao doesn't specify how to sample the first k elements. He simply assumes we have some other way of picking them in proportion to their weight. Chao: "Assume that we have a sampling plan of fixed size with respect to S_k at time A; such that its first-order inclusion probability of X_t is π(k; i)".
Similar to the other algorithms, it is possible to compute a random weight j and subtract items' probability mass values, skipping them while j > 0, reducing the number of random numbers that have to be generated. [4]
(* S has items to sample, R will contain the result
   S[i].Weight contains weight for each item *)
WeightedReservoir-Chao(S[1..n], R[1..k])
  WSum := 0
  // fill the reservoir array
  for i := 1 to k
      R[i] := S[i]
      WSum := WSum + S[i].Weight
  end
  j := random()  // uniformly random between 0 and 1
  pNone := 1     // probability that no item has been selected so far (in this jump)
  for i := k+1 to n
      WSum := WSum + S[i].Weight
      p := S[i].Weight / WSum  // probability for this item
      j := j - p * pNone
      pNone := pNone * (1 - p)
      if j <= 0
          R[randomInteger(1,k)] := S[i]  // uniform selection in reservoir for replacement
          j := random()
          pNone := 1
      end
  end
end
Suppose one wanted to draw k random cards from a deck of cards. A natural approach would be to shuffle the deck and then take the top k cards. In the general case, the shuffle also needs to work even if the number of cards in the deck is not known in advance, a condition which is satisfied by the inside-out version of the Fisher–Yates shuffle: [9]
(* S has the input, R will contain the output permutation *)
Shuffle(S[1..n], R[1..n])
  R[1] := S[1]
  for i from 2 to n do
      j := randomInteger(1, i)  // inclusive range
      R[i] := R[j]
      R[j] := S[i]
  end
end
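A Python sketch of the inside-out shuffle (illustrative; it consumes the input once and never needs to know its length in advance):

import random

def inside_out_shuffle(items):
    """Return a uniformly random permutation of items, built incrementally."""
    result = []
    for i, item in enumerate(items):
        j = random.randint(0, i)      # inclusive range, as in the pseudocode above
        if j == i:
            result.append(item)
        else:
            result.append(result[j])  # move the displaced element to the new position i
            result[j] = item
    return result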
Note that although the rest of the cards are shuffled, only the first k are important in the present context. Therefore, the array R need only track the cards in the first k positions while performing the shuffle, reducing the amount of memory needed. Truncating R to length k, the algorithm is modified accordingly:
(* S has items to sample, R will contain the result *)
ReservoirSample(S[1..n], R[1..k])
  R[1] := S[1]
  for i from 2 to k do
      j := randomInteger(1, i)  // inclusive range
      R[i] := R[j]
      R[j] := S[i]
  end
  for i from k+1 to n do
      j := randomInteger(1, i)  // inclusive range
      if (j <= k)
          R[j] := S[i]
      end
  end
end
Since the order of the first k cards is immaterial, the first loop can be removed and R can be initialized to be the first k items of the input. This yields Algorithm R.
Reservoir sampling makes the assumption that the desired sample fits into main memory, often implying that k is a constant independent of n. In applications where we would like to select a large subset of the input list (say a third, i.e. k = n/3), other methods need to be adopted. Distributed implementations for this problem have been proposed. [10]