In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
Assume we want to map keys from some universe $U$ into $m$ bins (labelled $[m] = \{0, \dots, m-1\}$). The algorithm will have to handle some data set $S \subseteq U$ of $|S| = n$ keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from $S$ that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial setting if $|U| > m \cdot n$, since the adversary may choose $S$ to be precisely the preimage of a bin. This means that all data keys land in the same bin, making hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function.
The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions $H = \{h : U \to [m]\}$ is called a universal family if

$$\forall x, y \in U, \ x \neq y: \quad \Pr_{h \in H}[h(x) = h(y)] \leq \frac{1}{m}.$$
In other words, any two different keys of the universe collide with probability at most $1/m$ when the hash function $h$ is drawn uniformly at random from $H$. This is exactly the probability of collision we would expect if the hash function assigned truly random hash codes to every key.
Sometimes, the definition is relaxed by a constant factor, only requiring collision probability $O(1/m)$ rather than $\leq 1/m$. This concept was introduced by Carter and Wegman [1] in 1977, and has found numerous applications in computer science (see, for example, [2]).
If we have an upper bound of $\epsilon < 1$ on the collision probability, we say that we have $\epsilon$-almost universality. So for example, a universal family has $\frac{1}{m}$-almost universality.
Many, but not all, universal families have the following stronger uniform difference property: $\forall x, y \in U, \ x \neq y$, when $h$ is drawn randomly from the family $H$, the difference $h(x) - h(y) \bmod m$ is uniformly distributed in $[m]$.
Note that the definition of universality is only concerned with whether $h(x) - h(y) = 0$, which counts collisions. The uniform difference property is stronger.
(Similarly, a universal family can be XOR universal if $\forall x, y \in U, \ x \neq y$, the value $h(x) \oplus h(y)$ is uniformly distributed in $[m]$, where $\oplus$ is the bitwise exclusive or operation. This is only possible if $m$ is a power of two.)
An even stronger condition is pairwise independence: we have this property when, for any two distinct keys $x, y$, the probability that they hash to any given pair of hash values $z_1, z_2$ is as if they were perfectly random: $\Pr_{h \in H}[h(x) = z_1 \wedge h(y) = z_2] = \frac{1}{m^2}$. Pairwise independence is sometimes called strong universality.
Another property is uniformity. We say that a family is uniform if all hash values are equally likely: $\Pr_{h \in H}[h(x) = z] = \frac{1}{m}$ for any $x \in U$ and any hash value $z \in [m]$. Universality does not imply uniformity. However, strong universality does imply uniformity.
Given a family with the uniform difference property, one can produce a pairwise independent or strongly universal hash family by adding a uniformly distributed random constant with values in $[m]$ to the hash functions. (Similarly, if $m$ is a power of two, we can achieve pairwise independence from an XOR universal hash family by doing an exclusive or with a uniformly distributed random constant.) Since a shift by a constant is sometimes irrelevant in applications (e.g. hash tables), a careful distinction between the uniform difference property and pairwise independence is sometimes not made. [3]
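As a minimal C sketch of this construction (the name shifted_hash is ours, not from the literature): given an evaluator for a function $h$ drawn from a family with the uniform difference property, add a random constant $c \in [m]$.

```c
#include <stdint.h>

/* Given h from a family with the uniform difference property,
   h'(x) = (h(x) + c) mod m is strongly universal when the
   constant c is drawn uniformly at random from [m]. */
uint64_t shifted_hash(uint64_t (*h)(uint64_t), uint64_t c, uint64_t m,
                      uint64_t x) {
    return (h(x) + c) % m;
}
```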
For some applications (such as hash tables), it is important for the least significant bits of the hash values to be universal as well. When a family is strongly universal, this is guaranteed: if $H$ is a strongly universal family with $m = 2^L$, then the family made of the functions $h \bmod 2^{L'}$ for all $h \in H$ is also strongly universal for $L' \leq L$. Unfortunately, the same is not true of (merely) universal families. For example, the family made of the identity function $h(x) = x$ is clearly universal, but the family made of the function $h(x) = x \bmod 2^{L'}$ fails to be universal.
UMAC, Poly1305-AES, and several other message authentication code algorithms are based on universal hashing. [4] [5] In such applications, the software chooses a new hash function for every message, based on a unique nonce for that message.
Several hash table implementations are based on universal hashing. In such applications, the software typically chooses a new hash function only after it notices that "too many" keys have collided; until then, the same hash function continues to be used. (Some collision resolution schemes, such as dynamic perfect hashing, pick a new hash function every time there is a collision. Other collision resolution schemes, such as cuckoo hashing and 2-choice hashing, allow a number of collisions before picking a new hash function.) A survey of the fastest known universal and strongly universal hash functions for integers, vectors, and strings can be found in [6].
For any fixed set $S$ of $n$ keys, using a universal family guarantees the following properties.

1. For any fixed $x$ in $S$, the expected number of keys in the bin $h(x)$ is $n/m$. When implementing hash tables by chaining, this number is proportional to the expected running time of an operation involving the key $x$ (for example a query, insertion or deletion).
2. The expected number of pairs of keys $x, y$ in $S$ with $x \neq y$ that collide ($h(x) = h(y)$) is bounded above by $n(n-1)/(2m)$, which is of order $O(n^2/m)$. When the number of bins $m$ is chosen linear in $n$, the expected number of collisions is $O(n)$. When hashing into $n^2$ bins, there are no collisions at all with probability at least one half.
3. The expected number of keys in bins with at least $t$ keys in them is bounded above by $2n/(t - 2(n/m) + 1)$. [7] Thus, if the capacity of each bin is capped to three times the average size ($t = 3n/m$), the total number of keys in overflowing bins is at most $O(m)$.
As the above guarantees hold for any fixed set $S$, they hold if the data set is chosen by an adversary. However, the adversary has to make this choice before (or independently of) the algorithm's random choice of a hash function. If the adversary can observe the random choice of the algorithm, randomness serves no purpose, and the situation is the same as deterministic hashing.
The second and third guarantees are typically used in conjunction with rehashing. For instance, a randomized algorithm may be prepared to handle some number of collisions. If it observes too many collisions, it chooses another random $h$ from the family and repeats. Universality guarantees that the number of repetitions is a geometric random variable.
Since any computer data can be represented as one or more machine words, one generally needs hash functions for three types of domains: machine words ("integers"); fixed-length vectors of machine words; and variable-length vectors ("strings").
This section refers to the case of hashing integers that fit in machine words; thus, operations like multiplication, addition, division, etc. are cheap machine-level instructions. Let the universe to be hashed be $\{0, \dots, |U| - 1\}$.
The original proposal of Carter and Wegman [1] was to pick a prime $p \geq |U|$ and define

$$h_{a,b}(x) = ((ax + b) \bmod p) \bmod m,$$

where $a, b$ are randomly chosen integers modulo $p$ with $a \neq 0$. (This is a single iteration of a linear congruential generator.)
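As a concrete illustration, here is a minimal C sketch of this family (the names cw_init and cw_eval are ours, and rand() merely stands in for a proper random number generator):

```c
#include <stdint.h>
#include <stdlib.h>

/* Carter-Wegman hash h_{a,b}(x) = ((a*x + b) mod p) mod m,
   for a prime p >= |U|, 1 <= a < p and 0 <= b < p. Using 64-bit
   intermediates keeps a*x + b exact whenever p < 2^32. */
typedef struct { uint64_t a, b, p, m; } cw_hash;

/* Draw a and b at random; rand() is for illustration only. */
cw_hash cw_init(uint64_t p, uint64_t m) {
    cw_hash h = { .a = 1 + rand() % (p - 1), .b = rand() % p,
                  .p = p, .m = m };
    return h;
}

uint64_t cw_eval(const cw_hash *h, uint64_t x) {
    return ((h->a * x + h->b) % h->p) % h->m;
}
```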
To see that $H = \{h_{a,b}\}$ is a universal family, note that $h_{a,b}(x) = h_{a,b}(y)$ only holds when

$$ax + b \equiv ay + b + i \cdot m \pmod{p}$$

for some integer $i$ between $0$ and $(p-1)/m$. Since $x \neq y$, their difference $x - y$ is nonzero and has an inverse modulo $p$. Solving for $a$ yields

$$a \equiv i \cdot m \cdot (x - y)^{-1} \pmod{p}.$$

There are $p - 1$ possible choices for $a$ (since $a = 0$ is excluded) and, varying $i$ in the allowed range, $\lfloor p/m \rfloor$ possible non-zero values for the right hand side. Thus the collision probability is

$$\frac{\lfloor p/m \rfloor}{p-1} \leq \frac{(p-1)/m}{p-1} = \frac{1}{m}.$$
Another way to see that $H$ is a universal family is via the notion of statistical distance. Write the difference as

$$h_{a,b}(x) - h_{a,b}(y) \equiv (a(x - y) \bmod p) \pmod{m}.$$

Since $x - y$ is nonzero and $a$ is uniformly distributed in $\{1, \dots, p-1\}$, it follows that $a(x - y)$ modulo $p$ is also uniformly distributed in $\{1, \dots, p-1\}$. The distribution of $(h_{a,b}(x) - h_{a,b}(y)) \bmod m$ is thus almost uniform, up to a difference in probability of $\pm 1/p$ between the samples. As a result, the statistical distance to a uniform family is $O(m/p)$, which becomes negligible when $p \gg m$.
The family of simpler hash functions

$$h_a(x) = (ax \bmod p) \bmod m$$

is only approximately universal: $\Pr[h_a(x) = h_a(y)] \leq 2/m$ for all $x \neq y$. [1] Moreover, this analysis is nearly tight; Carter and Wegman [1] show that $\Pr[h_a(1) = h_a(m+1)] \geq 2/(m+1)$ whenever $(p - 1) \bmod m = m - 1$.
The state of the art for hashing integers is the multiply-shift scheme described by Dietzfelbinger et al. in 1997. [8] By avoiding modular arithmetic, this method is much easier to implement and also runs significantly faster in practice (usually by at least a factor of four [9]). The scheme assumes the number of bins is a power of two, $m = 2^M$. Let $w$ be the number of bits in a machine word. Then the hash functions are parametrised over odd positive integers $a < 2^w$ (that fit in a word of $w$ bits). To evaluate $h_a(x)$, multiply $x$ by $a$ modulo $2^w$ and then keep the high order $M$ bits as the hash code. In mathematical notation, this is

$$h_a(x) = (a \cdot x \ \bmod \ 2^w) \ \mathrm{div} \ 2^{w - M}.$$
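In a C-like language this is a single multiply and shift. A minimal sketch for $w = 64$ (the function name is ours; $a$ must be a random odd word):

```c
#include <stdint.h>

/* Multiply-shift for w = 64: hash a 64-bit key x into m = 2^M bins
   (1 <= M <= 63). a must be a random odd 64-bit integer; the
   multiplication wraps modulo 2^64 and the shift keeps the top M bits. */
static inline uint64_t multiply_shift(uint64_t a, uint64_t x, int M) {
    return (a * x) >> (64 - M);
}
```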
This scheme does not satisfy the uniform difference property and is only $\frac{2}{m}$-almost-universal; for any $x \neq y$, $\Pr[h_a(x) = h_a(y)] \leq 2/m$.
To understand the behavior of the hash function, notice that, if $ax \bmod 2^w$ and $ay \bmod 2^w$ have the same highest-order $M$ bits, then $a(x - y) \bmod 2^w$ has either all 1's or all 0's as its highest order $M$ bits (depending on whether $ax \bmod 2^w$ or $ay \bmod 2^w$ is larger). Assume that the least significant set bit of $x - y$ appears on position $w - c$. Since $a$ is a random odd integer and odd integers have inverses in the ring $\mathbb{Z}_{2^w}$, it follows that $a(x - y) \bmod 2^w$ will be uniformly distributed among $w$-bit integers with the least significant set bit on position $w - c$. The probability that these bits are all 0's or all 1's is therefore at most $2/2^M = 2/m$. On the other hand, if $c < M$, then the higher-order $M$ bits of $a(x - y) \bmod 2^w$ contain both 0's and 1's, so it is certain that $h_a(x) \neq h_a(y)$. Finally, if $c = M$, then bit $w - M$ of $a(x - y)$ is 1 and $h_a(x) = h_a(y)$ if and only if bits $w - 1, \dots, w - M + 1$ are also 1, which happens with probability $1/2^{M-1} = 2/m$.
This analysis is tight, as can be shown with the example $x = 2^{w-M-2}$ and $y = 3x$. To obtain a truly 'universal' hash function, one can use the multiply-add-shift scheme that picks higher-order bits:

$$h_{a,b}(x) = ((ax + b) \bmod 2^{2w}) \ \mathrm{div} \ 2^{2w - M},$$

where $a$ is a random positive integer with $a < 2^{2w}$ and $b$ is a random non-negative integer with $b < 2^{2w}$. This requires doing arithmetic on $2w$-bit unsigned integers. This version of multiply-shift is due to Dietzfelbinger, and was later analyzed more precisely by Woelfel. [10]
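For $w = 32$, the $2w$-bit arithmetic fits in a standard 64-bit unsigned integer. A minimal C sketch (the function name is ours):

```c
#include <stdint.h>

/* Multiply-add-shift for w = 32: hash a 32-bit key x into m = 2^M
   bins (1 <= M <= 32). a and b are random 64-bit (2w-bit) integers,
   a > 0; arithmetic wraps modulo 2^64 = 2^{2w}. */
static inline uint32_t multiply_add_shift(uint64_t a, uint64_t b,
                                          uint32_t x, int M) {
    return (uint32_t)((a * x + b) >> (64 - M));
}
```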
This section is concerned with hashing a fixed-length vector of machine words. Interpret the input as a vector $\bar{x} = (x_0, \dots, x_{k-1})$ of $k$ machine words (integers of $w$ bits each). If $H$ is a universal family with the uniform difference property, the following family (dating back to Carter and Wegman [1]) also has the uniform difference property (and hence is universal):

$$h(\bar{x}) = \left( \sum_{i=0}^{k-1} h_i(x_i) \right) \bmod m, \quad \text{where each } h_i \in H \text{ is chosen independently at random.}$$
If $m$ is a power of two, one may replace summation by exclusive or. [11]
In practice, if double-precision arithmetic is available, this is instantiated with the multiply-shift family of hash functions. [12] Initialize the hash function with a vector $\bar{a} = (a_0, \dots, a_{k-1})$ of random odd integers on $2w$ bits each. Then if the number of bins is $m = 2^M$ for $M \leq w$:

$$h_{\bar{a}}(\bar{x}) = \left( \left( \sum_{i=0}^{k-1} x_i \cdot a_i \right) \bmod 2^{2w} \right) \ \mathrm{div} \ 2^{2w - M}.$$
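A minimal C sketch of this vector scheme for $w = 32$ (the function name is ours; the a[i] are assumed to be random odd 64-bit integers drawn once, up front):

```c
#include <stddef.h>
#include <stdint.h>

/* Vector multiply-shift for w = 32: hash k 32-bit words into
   m = 2^M bins (1 <= M <= 32). The a[i] are random odd 64-bit
   (2w-bit) integers; the sum wraps modulo 2^64 = 2^{2w}. */
uint32_t vector_multiply_shift(const uint64_t *a, const uint32_t *x,
                               size_t k, int M) {
    uint64_t sum = 0;
    for (size_t i = 0; i < k; i++)
        sum += a[i] * x[i];          /* wraps mod 2^64 */
    return (uint32_t)(sum >> (64 - M));
}
```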
It is possible to halve the number of multiplications, which roughly translates to a two-fold speed-up in practice. [11] Initialize the hash function with a vector $\bar{a} = (a_0, \dots, a_{k-1})$ of random odd integers on $2w$ bits each, and assume $k$ is even. The following hash family is universal: [13]

$$h_{\bar{a}}(\bar{x}) = \left( \left( \sum_{i=0}^{k/2 - 1} (x_{2i} + a_{2i}) \cdot (x_{2i+1} + a_{2i+1}) \right) \bmod 2^{2w} \right) \ \mathrm{div} \ 2^{2w - M}.$$
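A hedged C sketch of this pair-multiply variant for $w = 32$, assuming $k$ is even (the function name is ours):

```c
#include <stddef.h>
#include <stdint.h>

/* Pair-multiply variant for w = 32: one multiplication per two input
   words. The a[i] are random odd 64-bit integers and k is even. */
uint32_t pair_multiply_shift(const uint64_t *a, const uint32_t *x,
                             size_t k, int M) {
    uint64_t sum = 0;
    for (size_t i = 0; i + 1 < k; i += 2)
        sum += (x[i] + a[i]) * (x[i + 1] + a[i + 1]); /* mod 2^64 */
    return (uint32_t)(sum >> (64 - M));
}
```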
If double-precision operations are not available, one can interpret the input as a vector of half-words ($w/2$-bit integers). The algorithm will then use $\lceil k/2 \rceil$ multiplications, where $k$ is the number of half-words in the vector. Thus, the algorithm runs at a "rate" of one multiplication per word of input.
The same scheme can also be used for hashing integers, by interpreting their bits as vectors of bytes. In this variant, the vector technique is known as tabulation hashing and it provides a practical alternative to multiplication-based universal hashing schemes. [14]
Strong universality at high speed is also possible. [15] Initialize the hash function with a vector $\bar{a} = (a_0, \dots, a_k)$ of random integers on $2w$ bits. Compute

$$h_{\bar{a}}(\bar{x})^{\mathrm{strong}} = \left( \left( a_0 + \sum_{i=0}^{k-1} a_{i+1} x_i \right) \bmod 2^{2w} \right) \ \mathrm{div} \ 2^w.$$

The result is strongly universal on $w$ bits. Experimentally, it was found to run at 0.2 CPU cycle per byte on recent Intel processors for $w = 32$.
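A minimal C sketch of this strongly universal scheme for $w = 32$ (the function name is ours; a holds $k + 1$ random 64-bit entries):

```c
#include <stddef.h>
#include <stdint.h>

/* Strongly universal vector hashing for w = 32: a[0..k] are random
   64-bit (2w-bit) integers; the result is w = 32 output bits. */
uint32_t strong_vector_hash(const uint64_t *a, const uint32_t *x, size_t k) {
    uint64_t sum = a[0];
    for (size_t i = 0; i < k; i++)
        sum += a[i + 1] * x[i];       /* wraps mod 2^64 */
    return (uint32_t)(sum >> 32);     /* keep the high w bits */
}
```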
This refers to hashing a variable-sized vector of machine words. If the length of the string can be bounded by a small number, it is best to use the vector solution from above (conceptually padding the vector with zeros up to the upper bound). The space required is the maximal length of the string, but the time to evaluate $h(\bar{x})$ is just the length of $\bar{x}$. As long as zeroes are forbidden in the string, the zero-padding can be ignored when evaluating the hash function without affecting universality. [11] Note that if zeroes are allowed in the string, then it might be best to append a fictitious non-zero (e.g., 1) character to all strings prior to padding: this will ensure that universality is not affected. [15]
Now assume we want to hash $\bar{x} = (x_0, \dots, x_\ell)$, where a good bound on $\ell$ is not known a priori. A universal family proposed by [12] treats the string as the coefficients of a polynomial modulo a large prime. If $x_i \in [u]$, let $p \geq u$ be a prime and define:

$$h_a(\bar{x}) = h_{\mathrm{int}}\left( \left( \sum_{i=0}^{\ell} x_i a^i \right) \bmod p \right),$$

where $a \in [p]$ is chosen uniformly at random and $h_{\mathrm{int}}$ is chosen at random from a universal family mapping the integer domain $[p] \to [m]$.
Using properties of modular arithmetic, the above can be computed without producing large numbers for large strings as follows: [16]
uint hash(String x, int a, int p)
    uint h = INITIAL_VALUE
    for (uint i = 0; i < x.length; ++i)
        h = ((h * a) + x[i]) mod p
    return h
This Rabin-Karp rolling hash is based on a linear congruential generator. [17] The above algorithm is also known as a multiplicative hash function. [18] In practice, the mod operator and the parameter p can be avoided altogether by simply allowing the integer to overflow, because overflow is equivalent to mod (Max-Int-Value + 1) in many programming languages. The table below shows the values chosen to initialize h and a for some popular implementations; a C sketch of this overflow-based variant follows the table.
Implementation | INITIAL_VALUE | a
---|---|---
Bernstein's hash function djb2 [19] | 5381 | 33
STLPort 4.6.2 | 0 | 5
Kernighan and Ritchie's hash function [20] | 0 | 31
java.lang.String.hashCode() [21] | 0 | 31
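As an illustration, here is a C sketch of the overflow-based multiplicative hash using the djb2 constants from the table above (the function name is ours):

```c
#include <stdint.h>

/* Multiplicative string hash relying on unsigned wraparound, i.e.
   arithmetic mod 2^32. With INITIAL_VALUE = 5381 and a = 33 these
   are the djb2 constants from the table above. */
uint32_t string_hash(const char *s) {
    uint32_t h = 5381;                /* INITIAL_VALUE */
    for (; *s != '\0'; ++s)
        h = h * 33 + (uint8_t)*s;     /* a = 33; wraps mod 2^32 */
    return h;
}
```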
Consider two strings $\bar{x}, \bar{y}$ and let $\ell$ be the length of the longer one; for the analysis, the shorter string is conceptually padded with zeros up to length $\ell$. A collision before applying $h_{\mathrm{int}}$ implies that $a$ is a root of the polynomial with coefficients $\bar{x} - \bar{y}$. This polynomial has at most $\ell$ roots modulo $p$, so the collision probability is at most $\ell/p$. The probability of collision through the random $h_{\mathrm{int}}$ brings the total collision probability to $\frac{1}{m} + \frac{\ell}{p}$. Thus, if the prime $p$ is sufficiently large compared to the length of strings hashed, the family is very close to universal (in statistical distance).
Other universal families of hash functions used to hash unknown-length strings to fixed-length hash values include the Rabin fingerprint and the Buzhash.
To mitigate the computational penalty of modular arithmetic, three tricks are used in practice: [11]

1. One chooses the prime $p$ to be close to a power of two, such as a Mersenne prime. This allows arithmetic modulo $p$ to be implemented without division, using faster operations like addition and shifts. For instance, on modern architectures one can work with $p = 2^{61} - 1$ while the $x_i$'s are 32-bit values. (A sketch of this trick appears after this list.)
2. One can apply vector hashing to blocks. For instance, one applies vector hashing to each 16-word block of the string, and applies string hashing to the $\lceil k/16 \rceil$ results. Since the slower string hashing is applied on a substantially smaller vector, this will essentially be as fast as vector hashing.
3. One chooses a power of two as the divisor, allowing arithmetic modulo $2^w$ to be implemented without division, using faster bit-masking operations. The NH hash family takes this approach.
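As an illustration of the first trick, here is a C sketch of reduction modulo the Mersenne prime $p = 2^{61} - 1$ using only shifts, masks, and adds (the function name is ours):

```c
#include <stdint.h>

#define MERSENNE_P (((uint64_t)1 << 61) - 1)   /* p = 2^61 - 1 */

/* Reduce x modulo the Mersenne prime p = 2^61 - 1 without division:
   since 2^61 = 1 (mod p), fold the high bits onto the low bits. */
static inline uint64_t mod_mersenne61(uint64_t x) {
    x = (x & MERSENNE_P) + (x >> 61);  /* now x < 2^61 + 8 */
    if (x >= MERSENNE_P)
        x -= MERSENNE_P;
    return x;
}
```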
A hash function is any function that can be used to map data of arbitrary size to fixed-size values, though there are some hash functions that support variable-length output. The values returned by a hash function are called hash values, hash codes, hash digests, digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter-storage addressing.
In computing, a hash table is a data structure that implements an associative array, also called a dictionary or simply map; an associative array is an abstract data type that maps keys to values. A hash table uses a hash function to compute an index, also called a hash code, into an array of buckets or slots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. A map implemented by a hash table is called a hash map.
The Digital Signature Algorithm (DSA) is a public-key cryptosystem and Federal Information Processing Standard for digital signatures, based on the mathematical concept of modular exponentiation and the discrete logarithm problem. In a public-key cryptosystem, two keys are generated: data encrypted with the public key can only be decrypted with the private key. DSA is a variant of the Schnorr and ElGamal signature schemes.
In computer science, a perfect hash function h for a set S is a hash function that maps distinct elements in S to a set of m integers, with no collisions. In mathematical terms, it is an injective function.
In cryptography, the Elliptic Curve Digital Signature Algorithm (ECDSA) offers a variant of the Digital Signature Algorithm (DSA) which uses elliptic-curve cryptography.
The Rabin cryptosystem is a family of public-key encryption schemes based on a trapdoor function whose security, like that of RSA, is related to the difficulty of integer factorization.
KCDSA is a digital signature algorithm created by a team led by the Korea Internet & Security Agency (KISA). It is an ElGamal variant, similar to the Digital Signature Algorithm and GOST R 34.10-94. The standard algorithm is implemented over $GF(p)$, but an elliptic curve variant (EC-KCDSA) is also specified.
Double hashing is a computer programming technique used in conjunction with open addressing in hash tables to resolve hash collisions, by using a secondary hash of the key as an offset when a collision occurs. Double hashing with open addressing is a classical data structure on a table $T$.
In cryptography, a universal hashing message authentication code, or UMAC, is a message authentication code (MAC) calculated using universal hashing, which involves choosing a hash function from a class of hash functions according to some secret (random) process and applying it to the message. The resulting digest or fingerprint is then encrypted to hide the identity of the hash function that was used. A variation of the scheme was first published in 1999. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. In contrast to traditional MACs, which are serializable, a UMAC can be executed in parallel. Thus, as machines continue to offer more parallel-processing capabilities, the speed of implementing UMAC can increase.
Poly1305 is a universal hash family designed by Daniel J. Bernstein in 2002 for use in cryptography.
The ElGamal signature scheme is a digital signature scheme which is based on the difficulty of computing discrete logarithms. It was described by Taher Elgamal in 1985.
The Blum–Goldwasser (BG) cryptosystem is an asymmetric key encryption algorithm proposed by Manuel Blum and Shafi Goldwasser in 1984. Blum–Goldwasser is a probabilistic, semantically secure cryptosystem with a constant-size ciphertext expansion. The encryption algorithm implements an XOR-based stream cipher using the Blum-Blum-Shub (BBS) pseudo-random number generator to generate the keystream. Decryption is accomplished by manipulating the final state of the BBS generator using the private key, in order to find the initial seed and reconstruct the keystream.
In cryptography, the Rabin signature algorithm is a method of digital signature originally proposed by Michael O. Rabin in 1978.
In cryptography, a key encapsulation mechanism, or KEM, is a public-key cryptosystem that allows a sender to generate a short secret key and transmit it to a receiver securely, in spite of eavesdropping and intercepting adversaries.
In cryptography, Very Smooth Hash (VSH) is a provably secure cryptographic hash function invented in 2005 by Scott Contini, Arjen Lenstra, and Ron Steinfeld. Provably secure means that finding collisions is as difficult as some known hard mathematical problem. Unlike other provably secure collision-resistant hashes, VSH is efficient and usable in practice. Asymptotically, it only requires a single multiplication per log(n) message-bits and uses RSA-type arithmetic. Therefore, VSH can be useful in embedded environments where code space is limited.
In cryptography, SWIFFT is a collection of provably secure hash functions. It is based on the concept of the fast Fourier transform (FFT). SWIFFT is not the first hash function based on the FFT, but it sets itself apart by providing a mathematical proof of its security. It also uses the LLL basis reduction algorithm. It can be shown that finding collisions in SWIFFT is at least as difficult as finding short vectors in cyclic/ideal lattices in the worst case. By giving a security reduction to the worst-case scenario of a difficult mathematical problem, SWIFFT gives a much stronger security guarantee than most other cryptographic hash functions.
In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign.
In computer science and data mining, MinHash is a technique for quickly estimating how similar two sets are. The scheme was invented by Andrei Broder, and initially used in the AltaVista search engine to detect duplicate web pages and eliminate them from search results. It has also been applied in large-scale clustering problems, such as clustering documents by the similarity of their sets of words.
Badger is a message authentication code (MAC) based on the idea of universal hashing and was developed by Boesgaard, Scavenius, Pedersen, Christensen, and Zenner. It is constructed by strengthening the ∆-universal hash family MMH using an ϵ-almost strongly universal (ASU) hash function family after the application of ENH, where the value of ϵ is . Since Badger is a MAC function based on the universal hash function approach, the conditions needed for the security of Badger are the same as those for other universal hash functions such as UMAC.
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network. A node injecting garbage can quickly affect many receivers. The pollution of network packets spreads quickly, since the output of an honest node is corrupted if at least one of the incoming packets is corrupted.