Hamming bound

In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code.

Background on error-correcting codes

An original message and an encoded version are both composed in an alphabet of $q$ letters. Each codeword contains $n$ letters. The original message (of length $m$) is shorter than $n$ letters. The message is converted into an $n$-letter codeword by an encoding algorithm, transmitted over a noisy channel, and finally decoded by the receiver. The decoding process interprets a garbled codeword, referred to simply as a word, as the valid codeword "nearest" the $n$-letter received string.

Mathematically, there are exactly $q^m$ possible messages of length $m$, and each message can be regarded as a vector of length $m$. The encoding scheme converts an $m$-dimensional vector into an $n$-dimensional vector. Exactly $q^m$ valid codewords are possible, but any one of $q^n$ words can be received because the noisy channel might distort one or more of the $n$ letters when a codeword is transmitted.
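
To make this setup concrete, here is a minimal sketch in Python (the toy codebook and helper names are our own invention, not any standard code): it lists the $q^m$ messages of a small binary code, counts the $q^n$ possible received words, and decodes by choosing the nearest codeword.

```python
from itertools import product

q, m, n = 2, 2, 5  # binary alphabet, 2-letter messages, 5-letter codewords

# A toy encoding (hypothetical): each of the q**m = 4 messages is assigned
# one of the q**n = 32 possible 5-letter words as its codeword.
codebook = {
    (0, 0): (0, 0, 0, 0, 0),
    (0, 1): (0, 1, 1, 0, 1),
    (1, 0): (1, 0, 1, 1, 0),
    (1, 1): (1, 1, 0, 1, 1),
}

def hamming_distance(u, v):
    """Number of positions at which two equal-length words differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(word):
    """Minimum distance decoding: pick the valid codeword nearest the word."""
    return min(codebook.values(), key=lambda c: hamming_distance(c, word))

assert len(codebook) == q**m                                # 4 valid codewords
assert sum(1 for _ in product(range(q), repeat=n)) == q**n  # 32 receivable words
print(decode((0, 1, 1, 1, 1)))  # one garbled letter: decodes to (0, 1, 1, 0, 1)
```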

Statement of the bound

Preliminary definitions

An alphabet set $\mathcal{A}_q$ is a set of symbols with $q$ elements. The set of strings of length $n$ on the alphabet set $\mathcal{A}_q$ is denoted $\mathcal{A}_q^n$. (There are $q^n$ distinct strings in this set.) A $q$-ary block code of length $n$ is a subset of the strings of $\mathcal{A}_q^n$, where the alphabet set $\mathcal{A}_q$ is any alphabet set having $q$ elements. (The choice of alphabet set makes no difference to the result, provided the alphabet is of size $q$.)

Defining the bound

Let $A_q(n,d)$ denote the maximum possible size of a $q$-ary block code $C$ of length $n$ and minimum Hamming distance $d$ between elements of the block code (necessarily positive for $q^n > 1$).

Then, the Hamming bound is:

$$A_q(n,d) \leq \frac{q^n}{\sum_{k=0}^{t} \binom{n}{k} (q-1)^k}$$

where

$$t = \left\lfloor \frac{d-1}{2} \right\rfloor.$$

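The right-hand side is straightforward to evaluate. The following sketch (function names are our own) computes the volume of a Hamming ball of radius $t$ and the resulting bound; the integer division gives the floor of the quotient, which is still a valid upper bound since $A_q(n,d)$ is an integer.

```python
from math import comb

def ball_volume(q: int, n: int, t: int) -> int:
    """Volume of a Hamming ball of radius t in the space of q-ary words of length n."""
    return sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))

def hamming_bound(q: int, n: int, d: int) -> int:
    """Floor of q**n divided by the volume of a ball of radius t = (d-1)//2."""
    t = (d - 1) // 2
    return q ** n // ball_volume(q, n, t)

print(hamming_bound(2, 7, 3))  # 128 // 8 = 16
```
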
Proof

It follows from the definition of $d$ that if at most

$$t = \left\lfloor \frac{d-1}{2} \right\rfloor$$

errors are made during transmission of a codeword, then minimum distance decoding will decode it correctly (i.e., it decodes the received word as the codeword that was sent). Thus the code is said to be capable of correcting $t$ errors.

For each codeword $c \in C$, consider a ball of fixed radius $t$ around $c$. Every pair of these balls (Hamming spheres) is non-intersecting by the $t$-error-correcting property. Let $m$ be the number of words in each ball (in other words, the volume of the ball). A word that is in such a ball can deviate in at most $t$ components from those of the ball's centre, which is a codeword. The number of such words is then obtained by choosing up to $t$ of the $n$ components of a codeword to deviate to one of $(q-1)$ possible other values (recall, the code is $q$-ary: it takes values in $\mathcal{A}_q^n$). Thus,

$$m = \sum_{k=0}^{t} \binom{n}{k} (q-1)^k.$$

$A_q(n,d)$ is the (maximum) total number of codewords in $C$, and so, by the definition of $t$, the greatest number of balls with no two balls having a word in common. Taking the union of the words in these balls centered at codewords results in a set of words, each counted precisely once, that is a subset of $\mathcal{A}_q^n$ (which contains $q^n$ words), and so:

$$A_q(n,d) \cdot m = A_q(n,d) \cdot \sum_{k=0}^{t} \binom{n}{k} (q-1)^k \leq q^n.$$

Whence:

$$A_q(n,d) \leq \frac{q^n}{\sum_{k=0}^{t} \binom{n}{k} (q-1)^k}.$$

Covering radius and packing radius

For an $A_q(n,d)$ code $C$ (a subset of $\mathcal{A}_q^n$), the covering radius of $C$ is the smallest value of $r$ such that every element of $\mathcal{A}_q^n$ is contained in at least one ball of radius $r$ centered at some codeword of $C$. The packing radius of $C$ is the largest value of $s$ such that the balls of radius $s$ centered at the codewords of $C$ are mutually disjoint.
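
For small parameters, both radii can be computed by brute force directly from these definitions. A sketch, assuming the code is given as a list of tuples over the alphabet $\{0, \dots, q-1\}$ (helper names are our own):

```python
from itertools import product

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def covering_radius(code, q, n):
    """Smallest r such that every word is within distance r of some codeword."""
    return max(min(hamming_distance(w, c) for c in code)
               for w in product(range(q), repeat=n))

def packing_radius(code):
    """Largest s for which the radius-s balls around codewords are pairwise
    disjoint: two balls of radius s intersect exactly when 2s >= d, so
    s = (d - 1) // 2, where d is the minimum distance of the code."""
    d = min(hamming_distance(u, v)
            for u in code for v in code if u != v)
    return (d - 1) // 2

rep3 = [(0, 0, 0), (1, 1, 1)]
print(covering_radius(rep3, 2, 3), packing_radius(rep3))  # 1 1: here s = r = t
```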

From the proof of the Hamming bound, it can be seen that for $t = \left\lfloor \frac{d-1}{2} \right\rfloor$, we have:

$$s \leq t \quad\text{and}\quad t \leq r.$$

Therefore, $s \leq r$, and if equality holds then $s = r = t$. The case of equality means that the Hamming bound is attained.

Perfect codes

Codes that attain the Hamming bound are called perfect codes. Examples include codes that have only one codeword, and codes that are the whole of $\mathcal{A}_q^n$. Another example is given by the binary repeat codes ($q = 2$), where each symbol of the message is repeated an odd fixed number of times to obtain a codeword. All of these examples are often called the trivial perfect codes. In 1973, Tietäväinen proved [1] that any non-trivial perfect code over a prime-power alphabet has the parameters of a Hamming code or a Golay code.
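
These parameter sets can be checked against the bound numerically. The sketch below (helper name is our own) confirms that the length-3 binary repeat code, the [7,4] Hamming code, and the [23,12] binary Golay code all meet the bound with equality:

```python
from math import comb

def ball_volume(q, n, t):
    return sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))

# (q, n, d, number of codewords) for three perfect binary codes
for q, n, d, size in [(2, 3, 3, 2),        # length-3 repeat code
                      (2, 7, 3, 2**4),     # [7,4] Hamming code
                      (2, 23, 7, 2**12)]:  # [23,12] binary Golay code
    t = (d - 1) // 2
    assert size * ball_volume(q, n, t) == q ** n  # radius-t balls fill the space
print("all three attain the Hamming bound")
```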

A perfect code may be interpreted as one in which the balls of Hamming radius $t$ centered on codewords exactly fill out the space ($t$ being both the covering radius and the packing radius). A quasi-perfect code is one in which the balls of Hamming radius $t$ centered on codewords are disjoint and the balls of radius $t+1$ cover the space, possibly with some overlaps. [2] Another way to say this is that a code is quasi-perfect if its covering radius is one greater than its packing radius. [3]
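
For instance, the length-4 binary repeat code $\{0000, 1111\}$ has minimum distance 4 and hence packing radius 1, while its covering radius is 2 (the word 0011 is at distance 2 from both codewords), so it is quasi-perfect. A brute-force check of both radii, along the lines of the sketch above:

```python
from itertools import product

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

rep4 = [(0, 0, 0, 0), (1, 1, 1, 1)]  # minimum distance d = 4
r = max(min(hamming_distance(w, c) for c in rep4)
        for w in product(range(2), repeat=4))  # covering radius
s = (4 - 1) // 2                               # packing radius from d = 4
assert (r, s) == (2, 1)  # covering radius is one more than packing radius
```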

Notes

  1. Tietäväinen 1973.
  2. MacWilliams & Sloane, p. 19.
  3. Roman 1992, p. 140.

References

MacWilliams, F. J.; Sloane, N. J. A. (1977). The Theory of Error-Correcting Codes. North-Holland.

Roman, Steven (1992). Coding and Information Theory. Graduate Texts in Mathematics. Vol. 134. Springer-Verlag.

Tietäväinen, Aimo (1973). "On the nonexistence of perfect codes over finite fields". SIAM Journal on Applied Mathematics. 24 (1): 88–96.