Parity-check matrix

In coding theory, a parity-check matrix of a linear block code C is a matrix which describes the linear relations that the components of a codeword must satisfy. It can be used to decide whether a particular vector is a codeword and is also used in decoding algorithms.

Definition

Formally, a parity check matrix $H$ of a linear code $C$ is a generator matrix of the dual code, $C^\perp$. This means that a codeword $\mathbf{c}$ is in $C$ if and only if the matrix-vector product $H\mathbf{c}^\top = \mathbf{0}$ (some authors [1] would write this in an equivalent form, $\mathbf{c}H^\top = \mathbf{0}$).

The rows of a parity check matrix are the coefficients of the parity check equations. [2] That is, they show how linear combinations of certain digits (components) of each codeword equal zero. For example, the parity check matrix

$$H = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 \end{bmatrix},$$

compactly represents the parity check equations

$$c_3 + c_4 = 0, \qquad c_1 + c_2 = 0,$$

that must be satisfied for the vector $\mathbf{c} = (c_1, c_2, c_3, c_4)$ to be a codeword of $C$.
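As a concrete illustration, the following sketch (assuming Python with NumPy; `is_codeword` is just an illustrative name) tests vectors against the parity check equations above by reducing the matrix-vector product mod 2:

```python
import numpy as np

# Parity-check matrix from the example above.
H = np.array([[0, 0, 1, 1],
              [1, 1, 0, 0]])

def is_codeword(H: np.ndarray, c: np.ndarray) -> bool:
    """Return True iff H c^T = 0 over GF(2), i.e. c satisfies every parity check."""
    return not np.any((H @ c) % 2)

print(is_codeword(H, np.array([1, 1, 1, 1])))  # True:  c1+c2 = 0 and c3+c4 = 0
print(is_codeword(H, np.array([1, 0, 1, 1])))  # False: c1+c2 = 1
```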

From the definition of the parity-check matrix it follows directly that the minimum distance of the code is the minimum number $d$ such that every set of $d - 1$ columns of a parity-check matrix $H$ is linearly independent, while there exists a set of $d$ columns of $H$ that is linearly dependent.
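This criterion can be checked directly on small examples. The sketch below (a brute-force search over column subsets, assuming a binary code; it is exponential in the block length and meant only for illustration) finds the smallest number of linearly dependent columns of $H$ over GF(2):

```python
from itertools import combinations
import numpy as np

def min_distance(H: np.ndarray) -> int:
    """Smallest d for which some d columns of H sum to zero mod 2,
    i.e. are linearly dependent over GF(2)."""
    n = H.shape[1]
    for d in range(1, n + 1):
        for cols in combinations(range(n), d):
            if not np.any(H[:, list(cols)].sum(axis=1) % 2):
                return d
    raise ValueError("no dependent columns found")

H = np.array([[0, 0, 1, 1],
              [1, 1, 0, 0]])
print(min_distance(H))  # 2: the first two columns (and the last two) coincide
```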

Creating a parity check matrix

The parity check matrix for a given code can be derived from its generator matrix (and vice versa). [3] If the generator matrix for an [n,k]-code is in standard form

$$G = \begin{bmatrix} I_k \mid P \end{bmatrix},$$

then the parity check matrix is given by

$$H = \begin{bmatrix} -P^\top \mid I_{n-k} \end{bmatrix},$$

because

$$G H^\top = -P + P = 0.$$

Negation is performed in the finite field $\mathbb{F}_q$. Note that if the characteristic of the underlying field is 2 (i.e., $1 + 1 = 0$ in that field), as in binary codes, then $-P = P$, so the negation is unnecessary.

For example, if a binary code has the generator matrix

$$G = \begin{bmatrix} 1 & 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 & 0 \end{bmatrix},$$

then its parity check matrix is

$$H = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \end{bmatrix}.$$

It can be verified that $G$ is a $2 \times 5$ matrix, while $H$ is a $3 \times 5$ matrix.
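The construction is easy to mechanize in the binary case, where the negation is a no-op. The following sketch (the function name is illustrative) reproduces the example matrices:

```python
import numpy as np

def parity_check_from_generator(G: np.ndarray) -> np.ndarray:
    """Given a binary generator matrix in standard form G = [I_k | P],
    return H = [P^T | I_{n-k}] (negation is a no-op in characteristic 2)."""
    k, n = G.shape
    P = G[:, k:]                          # extract P from the standard form
    return np.hstack([P.T, np.eye(n - k, dtype=int)])

G = np.array([[1, 0, 1, 0, 1],
              [0, 1, 1, 1, 0]])
H = parity_check_from_generator(G)
print(H)
# [[1 1 1 0 0]
#  [0 1 0 1 0]
#  [1 0 0 0 1]]
print((G @ H.T) % 2)                      # all zeros: every row of G passes every check
```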

Syndromes

For any (row) vector $\mathbf{x}$ of the ambient vector space, $\mathbf{s} = H\mathbf{x}^\top$ is called the syndrome of $\mathbf{x}$. The vector $\mathbf{x}$ is a codeword if and only if $\mathbf{s} = \mathbf{0}$. The calculation of syndromes is the basis for the syndrome decoding algorithm. [4]
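For example (a sketch over GF(2), reusing the $[5, 2]$ code from the previous section): if a single bit of a codeword is flipped, the syndrome of the corrupted vector equals the column of $H$ at the error position, which is exactly what syndrome decoding exploits.

```python
import numpy as np

H = np.array([[1, 1, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 0, 0, 0, 1]])

def syndrome(H: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Syndrome s = H x^T over GF(2); the zero vector iff x is a codeword."""
    return (H @ x) % 2

c = np.array([1, 0, 1, 0, 1])        # a codeword (a row of G above)
e = np.array([0, 0, 0, 1, 0])        # a single-bit error in position 4
print(syndrome(H, c))                 # [0 0 0]
print(syndrome(H, (c + e) % 2))       # [0 1 0]: matches the 4th column of H
```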


Notes

  1. For instance, Roman 1992, p. 200.
  2. Roman 1992, p. 201.
  3. Pless 1998, p. 9.
  4. Pless 1998, p. 20.


References

Pless, Vera (1998), Introduction to the Theory of Error-Correcting Codes (3rd ed.), New York: John Wiley & Sons.

Roman, Steven (1992), Coding and Information Theory, Graduate Texts in Mathematics 134, New York: Springer-Verlag.