Generator matrix

In coding theory, a generator matrix is a matrix whose rows form a basis for a linear code. The codewords are all of the linear combinations of the rows of this matrix, that is, the linear code is the row space of its generator matrix.

Terminology

If $G$ is a $k \times n$ matrix, it generates the codewords of a linear code $C$ by

$w = sG,$

where $w$ is a codeword of the linear code $C$, and $s$ is any input vector. Both $w$ and $s$ are assumed to be row vectors. [1] A generator matrix for a linear $[n, k, d]_q$-code has format $k \times n$, where $n$ is the length of a codeword, $k$ is the number of information bits (the dimension of $C$ as a vector subspace), $d$ is the minimum distance of the code, and $q$ is the size of the finite field, that is, the number of symbols in the alphabet (thus, $q = 2$ indicates a binary code, etc.). The number of redundant bits is $r = n - k$.
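
As a concrete illustration (not drawn from the cited sources), the following Python sketch enumerates the codewords generated by a small binary generator matrix. The matrix used is one common generator for the Hamming(7,4) code, chosen here only as an example; the helper name encode is hypothetical.

    import itertools

    import numpy as np

    # One common generator matrix, in standard form [I_4 | P], for the
    # binary Hamming(7,4) code (any basis of the code would do).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=int)

    k, n = G.shape  # k = 4 information bits, n = 7 codeword bits

    def encode(s):
        """Encode the message row vector s as the codeword w = sG over GF(2)."""
        return np.mod(np.array(s) @ G, 2)

    # The code C is the row space of G: all 2^k linear combinations of its rows.
    codewords = {tuple(encode(s)) for s in itertools.product([0, 1], repeat=k)}
    print(len(codewords))  # 16, i.e. 2^k distinct codewords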

The standard form for a generator matrix is [2]

$G = [\, I_k \mid P \,],$

where $I_k$ is the $k \times k$ identity matrix and $P$ is a $k \times (n-k)$ matrix. When the generator matrix is in standard form, the code $C$ is systematic in its first $k$ coordinate positions. [3]
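
Continuing the illustrative sketch above (same example matrix), the systematic property is directly visible: because $G$ is in standard form, the first $k$ coordinates of every codeword reproduce the message.

    import numpy as np

    # Same example generator matrix as above, in standard form [I_4 | P].
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=int)

    s = np.array([1, 0, 1, 1])       # example message
    w = np.mod(s @ G, 2)             # codeword w = sG
    assert np.array_equal(w[:4], s)  # systematic: first k coordinates echo s
    print(w)                         # [1 0 1 1 0 1 0]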

A generator matrix can be used to construct the parity check matrix for a code (and vice versa). If the generator matrix $G$ is in standard form, $G = [\, I_k \mid P \,]$, then the parity check matrix for $C$ is [4]

$H = [\, -P^\top \mid I_{n-k} \,],$

where $P^\top$ is the transpose of the matrix $P$. This is a consequence of the fact that a parity check matrix of $C$ is a generator matrix of the dual code $C^\perp$.

$G$ is a $k \times n$ matrix, while $H$ is an $(n-k) \times n$ matrix.
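
The relation between $G$ and $H$ can likewise be checked numerically. The following continuation of the sketch (again illustrative, reusing the same example matrix) builds $H$ from a standard-form $G$ and verifies $GH^\top = 0$ over GF(2), where $-P^\top = P^\top$.

    import numpy as np

    # Same example standard-form generator matrix as above.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=int)
    k, n = G.shape

    # Split G = [I_k | P] into blocks and build H = [-P^T | I_{n-k}];
    # over GF(2), -P^T equals P^T.
    P = G[:, k:]                                    # the k x (n-k) block
    H = np.hstack([P.T, np.eye(n - k, dtype=int)])  # (n-k) x n parity check matrix

    # Every codeword w = sG satisfies H w^T = 0, equivalently G H^T = 0 (mod 2).
    assert np.all(np.mod(G @ H.T, 2) == 0)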

Equivalent codes

Codes C1 and C2 are equivalent (denoted C1 ~ C2) if one code can be obtained from the other via the following two transformations: [5]

  1. arbitrarily permute the components, and
  2. independently scale any component by a non-zero element.

Equivalent codes have the same minimum distance.
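
In the binary case the scaling transformation is trivial (the only non-zero element is 1), but the effect of a coordinate permutation on the minimum distance can be checked directly. The sketch below (illustrative; the helper name min_distance is hypothetical, and the example generator matrix from above is reused) permutes the columns of $G$ and confirms that the minimum distance is unchanged.

    import itertools

    import numpy as np

    # Same example generator matrix as above.
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=int)

    def min_distance(G):
        """Minimum distance of the binary code generated by G (brute force).

        For a linear code this equals the minimum Hamming weight over all
        non-zero codewords.
        """
        k = G.shape[0]
        return min(int(np.mod(np.array(s) @ G, 2).sum())
                   for s in itertools.product([0, 1], repeat=k) if any(s))

    rng = np.random.default_rng(0)
    G_perm = G[:, rng.permutation(G.shape[1])]  # transformation 1: permute columns
    assert min_distance(G) == min_distance(G_perm) == 3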

The generator matrices of equivalent codes can be obtained from one another via the following elementary operations: [6]

  1. permute rows
  2. scale rows by a nonzero scalar
  3. add rows to other rows
  4. permute columns, and
  5. scale columns by a nonzero scalar.

Thus, we can perform Gaussian elimination on $G$. Indeed, this allows us to assume that the generator matrix is in the standard form. More precisely, for any $k \times n$ matrix $G$ with linearly independent rows we can find an invertible $k \times k$ matrix $U$ such that $UG$, possibly after a permutation of its columns, equals $[\, I_k \mid P \,]$, where $G$ and $[\, I_k \mid P \,]$ generate equivalent codes.
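
A minimal sketch of this reduction over GF(2) follows (illustrative; the helper name to_standard_form and the example matrix are hypothetical, and the rows of $G$ are assumed linearly independent). It uses exactly the elementary operations listed above: row swaps, row additions, and, when a pivot column is unavailable, column swaps.

    import numpy as np

    def to_standard_form(G):
        """Reduce a binary generator matrix to [I_k | P] over GF(2).

        Assumes the rows of G are linearly independent. Uses only the
        elementary operations listed above: row swaps, row additions and,
        when no pivot is available in a column, column swaps (which change
        the code to an equivalent one).
        """
        G = np.array(G, dtype=int) % 2
        k, n = G.shape
        for i in range(k):
            pivot = next((r for r in range(i, k) if G[r, i] == 1), None)
            if pivot is None:                        # no pivot in column i:
                j = next(c for c in range(i + 1, n) if G[i:, c].any())
                G[:, [i, j]] = G[:, [j, i]]          # swap in a usable column
                pivot = next(r for r in range(i, k) if G[r, i] == 1)
            G[[i, pivot]] = G[[pivot, i]]            # move the pivot row up
            for r in range(k):                       # clear column i elsewhere
                if r != i and G[r, i] == 1:
                    G[r] = (G[r] + G[i]) % 2         # row addition over GF(2)
        return G

    # A hypothetical full-rank generator matrix not in standard form:
    G = [[1, 1, 0, 1, 0],
         [0, 1, 1, 1, 1],
         [1, 1, 1, 0, 1]]
    print(to_standard_form(G))  # left 3x3 block is the identity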

Notes

  1. MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms (PDF). Cambridge University Press. p. 9. ISBN 9780521642989. Because the Hamming code is a linear code, it can be written compactly in terms of matrices as follows. The transmitted codeword $\mathbf{t}$ is obtained from the source sequence $\mathbf{s}$ by a linear operation, $\mathbf{t} = G^\top \mathbf{s}$, where $G$ is the generator matrix of the code... I have assumed that $\mathbf{t}$ and $\mathbf{s}$ are column vectors. If instead they are row vectors, then this equation is replaced by $\mathbf{t} = \mathbf{s} G$... I find it easier to relate to the right-multiplication ($\mathbf{t} = G^\top \mathbf{s}$) than the left-multiplication ($\mathbf{t} = \mathbf{s} G$). Many coding theory texts use the left-multiplying conventions ($\mathbf{t} = \mathbf{s} G$), however. ...The rows of the generator matrix can be viewed as defining the basis vectors.
  2. Ling & Xing 2004, p. 52
  3. Roman 1992, p. 198
  4. Roman 1992, p. 200
  5. Pless 1998, p. 8
  6. Welsh 1988, pp. 54–55
