Hamming distance


4-bit binary tesseract for finding Hamming distance. Two example distances: 0100→1001 has distance 3; 0110→1110 has distance 1.
3-bit binary cube for finding Hamming distance. Two example distances: 100→011 has distance 3; 010→111 has distance 2.
The minimum distance between any two vertices is the Hamming distance between the two binary strings.

In information theory, the Hamming distance between two strings or vectors of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of substitutions required to change one string into the other, or equivalently, the minimum number of errors that could have transformed one string into the other. In a more general context, the Hamming distance is one of several string metrics for measuring the edit distance between two sequences. It is named after the American mathematician Richard Hamming.


A major application is in coding theory, more specifically to block codes, in which the equal-length strings are vectors over a finite field.

Definition

The Hamming distance between two equal-length strings of symbols is the number of positions at which the corresponding symbols are different. [1]

Examples

The symbols may be letters, bits, or decimal digits, among other possibilities. For example, the Hamming distance between:

"karolin" and "kathrin" is 3.
"karolin" and "kerstin" is 3.
"kathrin" and "kerstin" is 4.
0000 and 1111 is 4.
2173896 and 2233796 is 3.

Properties

For a fixed length n, the Hamming distance is a metric on the set of the words of length n (also known as a Hamming space): it is non-negative, it is symmetric, the Hamming distance of two words is 0 if and only if the two words are identical, and it satisfies the triangle inequality. [2] Indeed, if we fix three words a, b and c, then whenever there is a difference between the ith letter of a and the ith letter of c, there must be a difference either between the ith letter of a and the ith letter of b, or between the ith letter of b and the ith letter of c. Hence the Hamming distance between a and c is not larger than the sum of the Hamming distances between a and b and between b and c. The Hamming distance between two words a and b can also be seen as the Hamming weight of a − b for an appropriate choice of the − operator, much as the difference between two integers can be seen as a distance from zero on the number line.
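
Written out symbolically, with d(a, b) denoting the Hamming distance between words a and b, these conditions are:

d(a,b) \ge 0, \qquad d(a,b) = d(b,a), \qquad d(a,b) = 0 \Leftrightarrow a = b, \qquad d(a,c) \le d(a,b) + d(b,c).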

For binary strings a and b the Hamming distance is equal to the number of ones (population count) in a XOR b. [3] The metric space of length-n binary strings, with the Hamming distance, is known as the Hamming cube; it is equivalent as a metric space to the set of distances between vertices in a hypercube graph. One can also view a binary string of length n as a vector in ℝ^n by treating each symbol in the string as a real coordinate; with this embedding, the strings form the vertices of an n-dimensional hypercube, and the Hamming distance of the strings is equivalent to the Manhattan distance between the vertices.
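
As a brief illustration (a minimal Python sketch, not part of the original text), the population count of the XOR of two integers gives the Hamming distance between their bit patterns:

def hamming_distance_int(a: int, b: int) -> int:
    """Hamming distance between the binary representations of two non-negative integers."""
    # a ^ b has a 1 exactly in the positions where a and b differ;
    # counting those 1s gives the Hamming distance.
    return bin(a ^ b).count("1")

# 0100 and 1001 differ in three positions, as in the 4-bit figure above.
assert hamming_distance_int(0b0100, 0b1001) == 3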

Error detection and error correction

The minimum Hamming distance or minimum distance (usually denoted by dmin) is used to define some essential notions in coding theory, such as error detecting and error correcting codes. In particular, a code C is said to be k error detecting if, and only if, the minimum Hamming distance between any two of its codewords is at least k+1. [2]

For example, consider a code consisting of two codewords "000" and "111". The Hamming distance between these two words is 3, and therefore it is k=2 error detecting. This means that if one bit is flipped or two bits are flipped, the error can be detected. If three bits are flipped, then "000" becomes "111" and the error cannot be detected.

A code C is said to be k-error correcting if, for every word w in the underlying Hamming space H, there exists at most one codeword c (from C) such that the Hamming distance between w and c is at most k. In other words, a code is k-error correcting if, and only if, the minimum Hamming distance between any two of its codewords is at least 2k+1. This can also be understood geometrically as the closed balls of radius k centered on distinct codewords being disjoint. [2] These balls are also called Hamming spheres in this context. [4]

For example, consider the same 3-bit code consisting of the two codewords "000" and "111". The Hamming space consists of 8 words: 000, 001, 010, 011, 100, 101, 110 and 111. The codeword "000" and the single-bit-error words "001", "010" and "100" are all within Hamming distance 1 of "000". Likewise, the codeword "111" and its single-bit-error words "110", "101" and "011" are all within Hamming distance 1 of the original "111". In this code, a single-bit error always lies within Hamming distance 1 of exactly one original codeword, so the code is 1-error correcting, that is, k=1. Since the Hamming distance between "000" and "111" is 3, and those two words comprise the entire set of codewords, the minimum Hamming distance is 3, which satisfies 2k+1 = 3.

Thus a code with minimum Hamming distance d between its codewords can detect at most d-1 errors and can correct ⌊(d-1)/2⌋ errors. [2] The latter number is also called the packing radius or the error-correcting capability of the code. [4]
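
To make the relationship concrete, the following short Python sketch (illustrative only; the helper names are not from the article) computes the minimum distance of a code given as a list of codewords and derives its error-detecting and error-correcting capability:

from itertools import combinations

def hamming_distance(s1: str, s2: str) -> int:
    """Hamming distance between two equal-length strings."""
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

def minimum_distance(code: list[str]) -> int:
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

d = minimum_distance(["000", "111"])
print(d)             # 3
print(d - 1)         # detects up to 2 errors
print((d - 1) // 2)  # corrects up to 1 error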

History and applications

The Hamming distance is named after Richard Hamming, who introduced the concept in his fundamental paper on Hamming codes, Error detecting and error correcting codes, in 1950. [5] Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography. [6]

It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance. [7] For q-ary strings over an alphabet of size q ≥ 2, the Hamming distance is applied in the case of the q-ary symmetric channel, while the Lee distance is used for phase-shift keying or, more generally, channels susceptible to synchronization errors, because the Lee distance accounts for errors of ±1. [8] If q = 2 or q = 3, the two distances coincide because any pair of distinct elements of Z/2Z or Z/3Z differ by 1, but the distances differ for larger q.
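
For reference (a standard definition stated here only for comparison), the Lee distance between two length-n strings x and y over Z/qZ is

d_{\text{Lee}}(x, y) = \sum_{i=1}^{n} \min\bigl( |x_i - y_i|,\; q - |x_i - y_i| \bigr),

so each coordinate contributes at most ⌊q/2⌋, which equals 1 (the same as the Hamming contribution of a differing symbol) exactly when q = 2 or q = 3.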

The Hamming distance is also used in systematics as a measure of genetic distance. [9]

However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.

Algorithm example

The following function, written in Python 3, returns the Hamming distance between two strings:

def hamming_distance(string1: str, string2: str) -> int:
    """Return the Hamming distance between two strings."""
    if len(string1) != len(string2):
        raise ValueError("Strings must be of equal length.")
    dist_counter = 0
    for n in range(len(string1)):
        if string1[n] != string2[n]:
            dist_counter += 1
    return dist_counter

Or, in a shorter expression:

sum(char1 != char2 for char1, char2 in zip(string1, string2))

The function hamming_distance(), implemented in Python 3, computes the Hamming distance between two strings (or other iterable objects) of equal length by creating a sequence of Boolean values indicating mismatches and matches between corresponding positions in the two inputs, then summing the sequence with True and False values, interpreted as one and zero, respectively.

def hamming_distance(s1: str, s2: str) -> int:
    """Return the Hamming distance between equal-length sequences."""
    if len(s1) != len(s2):
        raise ValueError("Undefined for sequences of unequal length.")
    return sum(char1 != char2 for char1, char2 in zip(s1, s2))

where the zip() function merges two equal-length collections in pairs.
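
For example (a usage sketch, using the strings from the Examples section above):

>>> hamming_distance("karolin", "kathrin")
3
>>> hamming_distance("0000", "1111")
4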

The following C function will compute the Hamming distance of two integers (considered as binary values, that is, as sequences of bits). The running time of this procedure is proportional to the Hamming distance rather than to the number of bits in the inputs. It computes the bitwise exclusive or of the two inputs, and then finds the Hamming weight of the result (the number of nonzero bits) using an algorithm of Wegner (1960) that repeatedly finds and clears the lowest-order nonzero bit. Some compilers support the __builtin_popcount function which can calculate this using specialized processor hardware where available.

int hamming_distance(unsigned x, unsigned y)
{
    int dist = 0;

    // The ^ operator sets to 1 only the bits that are different
    for (unsigned val = x ^ y; val > 0; ++dist) {
        // We then count the bits set to 1 using the Peter Wegner way
        val = val & (val - 1); // Set to zero val's lowest-order 1
    }

    // Return the number of differing bits
    return dist;
}

A faster alternative is to use the population count (popcount) assembly instruction. Certain compilers such as GCC and Clang make it available via an intrinsic function:

// Hamming distance for 32-bit integers
int hamming_distance32(unsigned int x, unsigned int y)
{
    return __builtin_popcount(x ^ y);
}

// Hamming distance for 64-bit integers
int hamming_distance64(unsigned long long x, unsigned long long y)
{
    return __builtin_popcountll(x ^ y);
}

See also

Related Research Articles

Hamming code

In computer science and telecommunication, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. Richard W. Hamming invented Hamming codes in 1950 as a way of automatically correcting errors introduced by punched card readers. In his original paper, Hamming elaborated his general idea, but specifically focused on the Hamming(7,4) code which adds three parity bits to four bits of data.

Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.

Coding theory

Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.

In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way. Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.

In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes.

In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code in terms of its block length and minimum distance. It is also known as the Joshi bound, proved by Joshi (1958) and even earlier by Komamiya (1953).

In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code.

In coding theory, decoding is the process of translating received messages into codewords of a given code. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.

Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science.

In coding theory, the repetition code is one of the most basic linear error-correcting codes. In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times. The hope is that the channel corrupts only a minority of these repetitions. This way the receiver will notice that a transmission error occurred since the received data stream is not the repetition of a single message, and moreover, the receiver can recover the original message by looking at the received message in the data stream that occurs most often.

Hamming space

In statistics and coding theory, a Hamming space is usually the set of all binary strings of length N. It is used in the theory of coding signals and transmission.

Cyclic code

In coding theory, a cyclic code is a block code in which the circular shifts of each codeword give another word that belongs to the code. Cyclic codes are error-correcting codes that have algebraic properties that are convenient for efficient error detection and correction.

In coding theory, a constant-weight code, also called an m-of-n code, is an error detection and correction code where all codewords share the same Hamming weight. The one-hot code and the balanced code are two widely used kinds of constant-weight code.

Hamming(7,4)

In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term Hamming code often refers to this specific code that Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes.

Lexicographic codes or lexicodes are greedily generated error-correcting codes with remarkably good properties. They were produced independently by Vladimir Levenshtein and by John Horton Conway and Neil Sloane. The binary lexicographic codes are linear codes, and include the Hamming codes and the binary Golay codes.

The cyclic redundancy check (CRC) is based on division in the ring of polynomials over the finite field GF(2), that is, the set of polynomials where each coefficient is either zero or one, and arithmetic operations wrap around.

In coding theory, a polynomial code is a type of linear code whose set of valid code words consists of those polynomials that are divisible by a given fixed polynomial.

A locally decodable code (LDC) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by only examining a small number of bits of a possibly corrupted codeword. This property could be useful, say, in a context where information is being transmitted over a noisy channel, and only a small subset of the data is required at a particular time and there is no need to decode the entire message at once. Note that locally decodable codes are not a subset of locally testable codes, though there is some overlap between the two.

AN codes are error-correcting codes that are used in arithmetic applications. Arithmetic codes were commonly used in computer processors to ensure the accuracy of their arithmetic operations when electronics were more unreliable. Arithmetic codes help the processor to detect when an error is made and correct it. Without these codes, processors would be unreliable since any errors would go undetected. AN codes are arithmetic codes that are named for the integers A and N used to encode and decode the codewords.

In coding theory, burst error-correcting codes employ methods of correcting burst errors, which are errors that occur in many consecutive bits rather than occurring in bits independently of each other.

References

  1. Waggener, Bill (1995). Pulse Code Modulation Techniques. Springer. p. 206. ISBN 978-0-442-01436-0. Retrieved 13 June 2020.
  2. Robinson, Derek J. S. (2003). An Introduction to Abstract Algebra. Walter de Gruyter. pp. 255–257. ISBN 978-3-11-019816-4.
  3. Warren Jr., Henry S. (2013) [2002]. Hacker's Delight (2nd ed.). Addison Wesley - Pearson Education, Inc. pp. 81–96. ISBN 978-0-321-84268-8.
  4. Cohen, G.; Honkala, I.; Litsyn, S.; Lobstein, A. (1997). Covering Codes. North-Holland Mathematical Library. Vol. 54. Elsevier. pp. 16–17. ISBN 978-0-08-053007-9.
  5. Hamming, R. W. (April 1950). "Error detecting and error correcting codes" (PDF). The Bell System Technical Journal. 29 (2): 147–160. doi:10.1002/j.1538-7305.1950.tb00463.x. hdl:10945/46756. ISSN 0005-8580. S2CID 61141773. Archived (PDF) from the original on 2022-10-09.
  6. Jarrous, Ayman; Pinkas, Benny (2009). "Secure Hamming Distance Based Computation and Its Applications". In Abdalla, Michel; Pointcheval, David; Fouque, Pierre-Alain; Vergnaud, Damien (eds.). Applied Cryptography and Network Security. Lecture Notes in Computer Science. Vol. 5536. Berlin, Heidelberg: Springer. pp. 107–124. doi:10.1007/978-3-642-01957-9_7. ISBN 978-3-642-01957-9.
  7. Ayala, Jose (2012). Integrated Circuit and System Design. Springer. p. 62. ISBN 978-3-642-36156-2.
  8. Roth, Ron (2006). Introduction to Coding Theory. Cambridge University Press. p. 298. ISBN 978-0-521-84504-5.
  9. Pilcher, Christopher D.; Wong, Joseph K.; Pillai, Satish K. (2008-03-18). "Inferring HIV Transmission Dynamics from Phylogenetic Sequence Relationships". PLOS Medicine. 5 (3): e69. doi:10.1371/journal.pmed.0050069. ISSN 1549-1676. PMC 2267810. PMID 18351799.

Further reading