Singleton bound

In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code $C$ with block length $n$, size $M$ and minimum distance $d$. It is also known as the Joshi bound, [1] proved by Joshi (1958) and even earlier by Komamiya (1953).

Statement of the bound

The minimum distance of a set $C$ of codewords of length $n$ is defined as $$d = \min_{x, y \in C,\, x \neq y} d(x, y),$$ where $d(x, y)$ is the Hamming distance between $x$ and $y$. The expression $A_q(n, d)$ represents the maximum number of possible codewords in a $q$-ary block code of length $n$ and minimum distance $d$.

Then the Singleton bound states that $$A_q(n, d) \leq q^{n - d + 1}.$$

Proof

First observe that the number of $q$-ary words of length $n$ is $q^n$, since each letter in such a word may take one of $q$ different values, independently of the remaining letters.

Now let $C$ be an arbitrary $q$-ary block code of minimum distance $d$. Clearly, all codewords are distinct. If we puncture the code by deleting the first $d - 1$ letters of each codeword, then all resulting codewords must still be pairwise different: any two of the original codewords in $C$ differ in at least $d$ positions, and deleting $d - 1$ letters can remove at most $d - 1$ of those positions. Thus the size of the altered code is the same as that of the original code.

The newly obtained codewords each have length $n - (d - 1) = n - d + 1$, and thus there can be at most $q^{n - d + 1}$ of them. Since $C$ was arbitrary, this bound must hold for the largest possible code with these parameters, thus: [2] $$A_q(n, d) \leq q^{n - d + 1}.$$
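
To see the puncturing argument in action, here is a minimal Python sketch (an illustration added for this article, not part of the original proof) applied to the binary single parity-check code of length 4, which has $M = 8$ codewords and minimum distance $d = 2$. The punctured words stay pairwise distinct, and this particular code happens to meet the bound with equality:

```python
from itertools import product

def min_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for i, x in enumerate(code) for y in code[i + 1:])

q, n = 2, 4
# Single parity-check code: all binary words of length 4 with even weight.
code = [w for w in product(range(q), repeat=n) if sum(w) % q == 0]
d = min_distance(code)                   # d = 2 for this code

# Puncture: delete the first d - 1 letters of every codeword.
punctured = {w[d - 1:] for w in code}

assert len(punctured) == len(code)       # still pairwise distinct
assert len(code) <= q ** (n - d + 1)     # Singleton: 8 <= 2^3, equality here
```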

Linear codes

If $C$ is a linear code with block length $n$, dimension $k$ and minimum distance $d$ over the finite field with $q$ elements, then the maximum number of codewords is $q^k$ and the Singleton bound implies $$q^k \leq q^{n - d + 1},$$ so that $$k \leq n - d + 1,$$ which is usually written as [3] $$d \leq n - k + 1.$$
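
As a quick numerical check (an added illustration; the particular generator matrix below is one common choice for this code), the binary $[7, 4]$ Hamming code has minimum distance $d = 3$, which respects $d \leq n - k + 1 = 4$ without attaining it:

```python
from itertools import product

# A generator matrix for the binary [7,4] Hamming code in standard
# form (I_4 | A); the specific choice of A is illustrative.
G = [(1,0,0,0,0,1,1),
     (0,1,0,0,1,0,1),
     (0,0,1,0,1,1,0),
     (0,0,0,1,1,1,1)]
n, k = 7, 4

# All 2^k codewords: GF(2) linear combinations of the rows of G.
codewords = [tuple(sum(m[i] * G[i][j] for i in range(k)) % 2
                   for j in range(n))
             for m in product(range(2), repeat=k)]

# For a linear code, minimum distance = minimum nonzero weight.
d = min(sum(c) for c in codewords if any(c))
assert d == 3 and d <= n - k + 1    # 3 <= 4: bound holds, code is not MDS
```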

In the linear code case a different proof of the Singleton bound can be obtained by observing that the rank of the parity check matrix is $n - k$: since any $d - 1$ columns of the parity check matrix are linearly independent, $d - 1$ cannot exceed this rank, so $d \leq n - k + 1$. [4] Another simple proof follows from observing that the rows of any generator matrix in standard form $G = (I_k \mid A)$ have weight at most $n - k + 1$: each row has a single 1 in the identity block and at most $n - k$ further nonzero entries, and the minimum distance of a linear code equals the minimum weight of a nonzero codeword.

History

The usual citation given for this result is Singleton (1964), but it was proven earlier by Joshi (1958). Joshi notes that the result was obtained earlier by Komamiya (1953) using a more complex proof. Welsh (1988, p. 72) also notes the same regarding Komamiya (1953).

MDS codes

Linear block codes that achieve equality in the Singleton bound are called MDS (maximum distance separable) codes. Examples of such codes include codes that have only $q$ codewords (the all-$x$ word for $x \in \mathbb{F}_q$, having thus minimum distance $n$), codes that use the whole of $(\mathbb{F}_q)^n$ (minimum distance 1), codes with a single parity symbol (minimum distance 2) and their dual codes. These are often called trivial MDS codes.

In the case of binary alphabets, only trivial MDS codes exist. [5] [6]

Examples of non-trivial MDS codes include Reed–Solomon codes and their extended versions. [7] [8]

MDS codes are an important class of block codes since, for a fixed $n$ and $k$, they have the greatest error correcting and detecting capabilities. There are several ways to characterize MDS codes: [9]

Theorem  Let $C$ be a linear $[n, k, d]$ code over $\mathbb{F}_q$. The following are equivalent:

  1. $C$ is an MDS code.
  2. Any $k$ columns of a generator matrix for $C$ are linearly independent.
  3. Any $n - k$ columns of a parity check matrix for $C$ are linearly independent.
  4. $C^\perp$ (the dual code of $C$) is an MDS code.
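
For instance (a small sanity check added here, reusing the trivial MDS codes from above), the binary single parity-check code of length 4 is a $[4, 3, 2]$ MDS code, and its dual, the $[4, 1, 4]$ repetition code, is MDS as well:

```python
from itertools import product

def min_distance(code):
    """Minimum weight of a nonzero word; equals d for a linear code."""
    return min(sum(s != 0 for s in w) for w in code if any(w))

n = 4
# C: the [4,3,2] binary single parity-check code (an MDS code).
C = [w for w in product(range(2), repeat=n) if sum(w) % 2 == 0]
# Its dual: all words orthogonal to every codeword of C over GF(2).
C_dual = [w for w in product(range(2), repeat=n)
          if all(sum(a * b for a, b in zip(w, c)) % 2 == 0 for c in C)]

assert min_distance(C) == n - 3 + 1       # d = 2 = n - k + 1: MDS
assert min_distance(C_dual) == n - 1 + 1  # dual is the [4,1,4] code: MDS
```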

The last of these characterizations permits, by using the MacWilliams identities, an explicit formula for the complete weight distribution of an MDS code. [10]

Theorem  Let $C$ be a linear $[n, k, d]$ MDS code over $\mathbb{F}_q$. If $A_w$ denotes the number of codewords in $C$ of weight $w$, then $$A_w = \binom{n}{w} \sum_{j=0}^{w-d} (-1)^j \binom{w}{j} \left( q^{w-d+1-j} - 1 \right) \qquad \text{for } w = d, \dots, n.$$
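
The formula can be checked directly on a small MDS code. The sketch below (an added illustration; the $[4, 2, 3]$ Reed–Solomon code over $\mathbb{F}_5$ and its evaluation points are chosen just for this purpose) compares the formula against a brute-force count of the weight distribution:

```python
from itertools import product
from math import comb

q, n, k = 5, 4, 2                # [4,2,3] Reed–Solomon code over GF(5)
d = n - k + 1                    # MDS, so d = 3
points = [0, 1, 2, 3]            # distinct evaluation points in GF(5)

# Codewords: evaluations of every polynomial of degree < k.
code = [tuple(sum(c * x**i for i, c in enumerate(p)) % q for x in points)
        for p in product(range(q), repeat=k)]

# Brute-force weight distribution.
counted = [0] * (n + 1)
for w in code:
    counted[sum(s != 0 for s in w)] += 1

# A_w = C(n,w) * sum_{j=0}^{w-d} (-1)^j C(w,j) (q^(w-d+1-j) - 1), w >= d.
predicted = [1] + [0] * n
for w in range(d, n + 1):
    predicted[w] = comb(n, w) * sum(
        (-1)**j * comb(w, j) * (q**(w - d + 1 - j) - 1)
        for j in range(w - d + 1))

assert counted == predicted      # both give [1, 0, 0, 16, 8]
```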

Arcs in projective geometry

The linear independence of the columns of a generator matrix of an MDS code permits a construction of MDS codes from objects in finite projective geometry. Let $PG(N, q)$ be the finite projective space of (geometric) dimension $N$ over the finite field $\mathbb{F}_q$. Let $K = \{P_1, P_2, \dots, P_m\}$ be a set of points in this projective space represented with homogeneous coordinates. Form the $(N + 1) \times m$ matrix $G$ whose columns are the homogeneous coordinates of these points. Then, [11]

Theorem  $K$ is a (spatial) $m$-arc if and only if $G$ is the generator matrix of an $[m, N+1, m-N]$ MDS code over $\mathbb{F}_q$.
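
As a concrete instance (an illustrative computation added here, not part of the original article), the conic $\{(1, t, t^2) : t \in \mathbb{F}_5\}$ together with the point $(0, 0, 1)$ is a classical $6$-arc in $PG(2, 5)$: no three of its points are collinear, so the matrix of their homogeneous coordinates generates a $[6, 3, 4]$ MDS code:

```python
from itertools import combinations

q = 5
# A classical (q+1)-arc in PG(2, q): the conic plus one extra point.
arc = [(1, t, t * t % q) for t in range(q)] + [(0, 0, 1)]

def det3(p0, p1, p2):
    """Determinant mod q of the 3x3 matrix with columns p0, p1, p2."""
    (a, d, g), (b, e, h), (c, f, i) = p0, p1, p2
    return (a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)) % q

# Arc condition (no three points collinear) = any 3 columns of the
# generator matrix are linearly independent = the MDS condition with
# k = N + 1 = 3: here G generates a [6, 3, 4] MDS code over GF(5).
assert all(det3(*triple) != 0 for triple in combinations(arc, 3))
```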

Notes

  1. Keedwell, A. Donald; Dénes, József (24 January 1991). Latin Squares: New Developments in the Theory and Applications. Amsterdam: Elsevier. p. 270. ISBN 0-444-88899-3.
  2. Ling & Xing 2004, p. 93
  3. Roman 1992, p. 175
  4. Pless 1998, p. 26
  5. Vermani 1996, Proposition 9.2
  6. Ling & Xing 2004, p. 94, Remark 5.4.7
  7. MacWilliams & Sloane 1977, Ch. 11
  8. Ling & Xing 2004, p. 94
  9. Roman 1992, p. 237, Theorem 5.3.7
  10. Roman 1992, p. 240
  11. Bruen, A.A.; Thas, J.A.; Blokhuis, A. (1988), "On M.D.S. codes, arcs in PG(n,q), with q even, and a solution of three fundamental problems of B. Segre", Invent. Math., 92 (3): 441–459, Bibcode:1988InMat..92..441B, doi:10.1007/bf01393742, S2CID 120077696

References

  - Joshi, D. D. (1958), "A note on upper bounds for minimum distance codes", Information and Control, 1 (3): 289–295.
  - Komamiya, Y. (1953), "Application of logical mathematics to information theory", Proc. 3rd Japan. Nat. Cong. Appl. Math.: 437.
  - Ling, San; Xing, Chaoping (2004), Coding Theory: A First Course, Cambridge University Press.
  - MacWilliams, F. J.; Sloane, N. J. A. (1977), The Theory of Error-Correcting Codes, North-Holland.
  - Pless, Vera (1998), Introduction to the Theory of Error-Correcting Codes (3rd ed.), Wiley.
  - Roman, Steven (1992), Coding and Information Theory, Graduate Texts in Mathematics 134, Springer-Verlag.
  - Singleton, R. C. (1964), "Maximum distance q-nary codes", IEEE Transactions on Information Theory, 10 (2): 116–118.
  - Vermani, L. R. (1996), Elements of Algebraic Coding Theory, Chapman & Hall.
  - Welsh, Dominic (1988), Codes and Cryptography, Oxford University Press.
