In computer programming, primary clustering is a phenomenon that causes performance degradation in linear-probing hash tables. The phenomenon states that, as elements are added to a linear probing hash table, they have a tendency to cluster together into long runs (i.e., long contiguous regions of the hash table that contain no free slots). If the hash table is at a load factor of 1 − 1/x for some parameter x ≥ 2, then the expected length of the run containing a given element is Θ(x²). This causes insertions and negative queries to take expected time Θ(x²) in a linear-probing hash table.
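For concreteness, here is a small worked instance of these bounds (the 90% load factor is an illustrative choice, not a figure from the cited sources):

```latex
% A table that is 90% full corresponds to x = 10:
\alpha \;=\; 1 - \tfrac{1}{x} \;=\; 0.9
\quad\Longrightarrow\quad x = 10 .
% Expected length of the run containing a given element, and hence the
% expected cost of an insertion or negative query:
\Theta(x^{2}) \;=\; \Theta(100).
```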
Primary clustering has two causes. The first is a winner-keeps-winning effect: the longer a run becomes, the more likely it is to accrue additional elements, since every element that hashes anywhere into the run is placed at the run's end. The second is the joining of runs: an insertion that fills the last remaining gap between two runs merges them into a single, longer run.
Another way to understand primary clustering is by examining the standard deviation on the number of items that hash to a given region within the hash table. [2] Consider a sub-region of the hash table of size x². The expected number of items that hash into the region is (1 − 1/x)x² = x² − x. On the other hand, the standard deviation on the number of such items is Θ(x). It follows that, with probability Θ(1), the number of items that hash into the region will exceed the size of the region. Intuitively, this means that regions of size Θ(x²) will often overflow, while larger regions typically will not. This intuition is often used as the starting point for formal analyses of primary clustering. [2] [3] [4]
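The x² threshold in this argument can be recovered with a short calculation (a sketch under the standard modeling assumption that hash locations are uniform and independent, so the number of items landing in a region of R slots is approximately binomial):

```latex
\mathbb{E}[\#\,\text{items in region}]
  \;=\; \Bigl(1 - \tfrac{1}{x}\Bigr) R \;=\; R - \tfrac{R}{x},
\qquad
\sigma \;=\; \Theta\!\bigl(\sqrt{R}\bigr).
% The region overflows when the Theta(sqrt(R)) fluctuation exceeds the
% slack of R/x free slots:
\sqrt{R} \;\gtrsim\; \frac{R}{x}
\quad\Longleftrightarrow\quad
R \;\lesssim\; x^{2}.
```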
Primary clustering causes performance degradation for both insertions and queries in a linear probing hash table. Insertions must travel to the end of a run, and therefore take expected time Θ(x²). [1] Negative queries (i.e., queries that are searching for an element that turns out not to be present) must also travel to the end of a run, and thus also take expected time Θ(x²). [1] Positive queries can terminate as soon as they find the element that they are searching for. As a result, the expected time to query a random element in the hash table is Θ(x). [1] However, positive queries to recently-inserted elements (e.g., an element that was just inserted) take expected time Θ(x²). [1]
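These bounds can be observed empirically. The following sketch (illustrative code; the table size, seed, and trial counts are arbitrary choices, and keys are just random integers) fills a linear-probing table to load factor 1 − 1/x and reports the average number of probes for insertions, random positive queries, and negative queries:

```python
import random

def build_table(m, x, rng):
    """Fill a linear-probing table with m slots to load factor 1 - 1/x.

    Returns the table and the average number of probes per insertion.
    """
    table = [None] * m
    n = int(m * (1 - 1 / x))
    insert_probes = 0
    for _ in range(n):
        key = rng.getrandbits(64)
        i = key % m                      # home slot of the key
        probes = 1
        while table[i] is not None:      # walk to the end of the run
            i = (i + 1) % m
            probes += 1
        table[i] = key
        insert_probes += probes
    return table, insert_probes / n

def lookup_probes(table, key):
    """Count slots inspected while searching for key."""
    m = len(table)
    i = key % m
    probes = 0
    while table[i] is not None:
        probes += 1
        if table[i] == key:
            return probes                # positive query: found the key
        i = (i + 1) % m
    return probes + 1                    # negative query: hit an empty slot

if __name__ == "__main__":
    rng = random.Random(42)
    m, x = 1 << 18, 10                   # table ~90% full, so x = 10
    table, avg_insert = build_table(m, x, rng)
    present = [k for k in table if k is not None]
    pos = sum(lookup_probes(table, rng.choice(present)) for _ in range(10_000)) / 10_000
    neg = sum(lookup_probes(table, rng.getrandbits(64)) for _ in range(10_000)) / 10_000
    print(f"insertion (expected Theta(x^2)):      {avg_insert:.1f} probes")
    print(f"random positive query (Theta(x)):     {pos:.1f} probes")
    print(f"negative query (expected Theta(x^2)): {neg:.1f} probes")
```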
These bounds also hold for linear probing with lazy deletions (i.e., using tombstones for deletions), as long as the hash table is rebuilt (and the tombstones are cleared out) semi-frequently. It suffices to perform such a rebuild at least once every Θ(n/x) insertions, where n is the number of elements in the table. [2]
Many textbooks describe the winner-keeps-winning effect (in which the longer a run becomes, the more likely it is to accrue additional elements) as the sole cause of primary clustering. [5] [6] [7] [8] [9] [10] [11] However, as noted by Knuth, [1] this is not the main cause of primary clustering.
Some textbooks state that the expected time for a positive query is Θ(x), [11] [12] typically citing Knuth. [1] This is true for a query to a random element. Some positive queries may have much larger expected running times, however. For example, if one inserts an element and then immediately queries that element, the query will take the same amount of time as did the insertion, which is Θ(x²) in expectation.
Ordered linear probing [13] (often referred to as Robin Hood hashing [14] ) is a technique for reducing the effects of primary clustering on queries. Ordered linear probing sorts the elements within each run by their hash. Thus, a query can terminate as soon as it encounters any element whose hash is larger than that of the element being queried. This results in both positive and negative queries taking expected time O(x).
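Under this ordering, a lookup might look like the following sketch (the function and its arguments are hypothetical; it assumes each run has been kept in sorted-by-hash order by the insertion procedure, which is not shown, and it ignores wrap-around of runs past the end of the table for brevity):

```python
def ordered_lookup(table, key, hash_fn=hash):
    """Search a linear-probing table whose runs are kept sorted by hash.

    The scan stops early at the first stored element whose hash exceeds
    the query's hash, so a negative query does not have to reach the end
    of the run.
    """
    m = len(table)
    h = hash_fn(key) % m
    i = h
    while table[i] is not None:
        if table[i] == key:
            return True                   # positive query: found it
        if hash_fn(table[i]) % m > h:
            return False                  # passed key's sorted position
        i = (i + 1) % m
    return False                          # empty slot: key is absent
```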
Graveyard hashing is a variant of ordered linear probing that eliminates the asymptotic effects of primary clustering for all operations. [2] Graveyard hashing strategically leaves gaps within runs that future insertions can make use of. These gaps, which can be thought of as tombstones (like those created by lazy deletions), are inserted into the table during semi-regular rebuilds. The gaps then speed up the insertions that take place until the next semi-regular rebuild occurs. Every operation in a graveyard hash table takes expected time O(x).
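The gap idea can be sketched roughly as follows. This is an illustrative sketch only, not the authors' algorithm (which additionally keeps runs in hash order, as in ordered linear probing); the sentinel value and the spacing of one tombstone per 2x slots are assumptions made for the example.

```python
TOMBSTONE = object()   # assumed sentinel value marking a reusable gap

def graveyard_rebuild(table, x):
    """Rebuild the table: discard old tombstones, re-place the elements,
    and scatter fresh tombstones at evenly spaced home positions so that
    long runs contain gaps for future insertions to reuse.
    """
    m = len(table)
    live = [e for e in table if e is not None and e is not TOMBSTONE]
    new_table = [None] * m

    def place(home, item):
        i = home
        while new_table[i] is not None:       # ordinary linear probing
            i = (i + 1) % m
        new_table[i] = item

    for e in live:
        place(hash(e) % m, e)
    for home in range(0, m, 2 * x):           # evenly spaced gaps
        place(home, TOMBSTONE)
    return new_table

def graveyard_insert(table, key):
    """Insert key, reusing the first tombstone or empty slot in its run."""
    m = len(table)
    i = hash(key) % m
    while table[i] is not None and table[i] is not TOMBSTONE:
        i = (i + 1) % m
    table[i] = key
```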
Many sources recommend the use of quadratic probing as an alternative to linear probing that empirically avoids the effects of primary clustering.
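For comparison, the probe sequence examined by quadratic probing can be sketched as follows (a minimal illustration; the function name and the specific polynomial are choices made for this example):

```python
def quadratic_probe_sequence(key, m, hash_fn=hash):
    """Yield the slots examined by quadratic probing: h, h+1, h+4, h+9, ...

    The polynomial i*i is an arbitrary illustrative choice; practical
    implementations pair the polynomial with the table size (for example,
    triangular-number offsets with a power-of-two table) so the sequence
    is guaranteed to reach every slot.
    """
    h = hash_fn(key) % m
    for i in range(m):
        yield (h + i * i) % m
```

Because consecutive probes move by growing strides rather than by single steps, keys with different home slots that collide partway through their probe sequences do not accumulate into one contiguous run the way they do under linear probing.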