Cyclic code

In coding theory, a cyclic code is a block code in which the circular shifts of each codeword give another word that belongs to the code. Cyclic codes are error-correcting codes with algebraic properties that are convenient for efficient error detection and correction.

If 00010111 is a valid codeword, applying a right circular shift gives the string 10001011. If the code is cyclic, then 10001011 is again a valid codeword. In general, applying a right circular shift moves the least significant bit (LSB) to the leftmost position, so that it becomes the most significant bit (MSB); the other positions are shifted by 1 to the right.

Definition

Let C be a linear code over a finite field (also called Galois field) GF(q) of block length n. C is called a cyclic code if, for every codeword c = (c1, ..., cn) from C, the word (cn, c1, ..., cn−1) in GF(q)^n obtained by a cyclic right shift of components is again a codeword. Because one cyclic right shift is equal to n − 1 cyclic left shifts, a cyclic code may also be defined via cyclic left shifts. Therefore, the linear code C is cyclic precisely when it is invariant under all cyclic shifts.
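
The following minimal sketch (an illustrative addition, not part of the standard treatment) checks whether a small binary block code is closed under cyclic right shifts; the particular code used is the one from the Examples section below.

```python
# Check whether a binary block code is cyclic, i.e. closed under cyclic right shifts.

def right_shift(word):
    """One cyclic right shift: the last component becomes the first."""
    return word[-1:] + word[:-1]

def is_cyclic(code):
    """True if the code is closed under one (hence every) cyclic right shift."""
    codewords = set(code)
    return all(right_shift(c) in codewords for c in codewords)

# The binary code {000, 110, 011, 101} from the Examples section is cyclic.
code = [(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)]
print(is_cyclic(code))                      # True
print(is_cyclic(code + [(1, 0, 0)]))        # False: the shifts of 100 are missing
```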

Cyclic codes impose an additional structural constraint beyond linearity. They are based on Galois fields, and because of their structural properties they are very useful for error control. Their structure is strongly related to Galois fields, which is why the encoding and decoding algorithms for cyclic codes are computationally efficient.

Algebraic structure

Cyclic codes can be linked to ideals in certain rings. Let R = GF(q)[x] / (x^n − 1), the ring of polynomials over the finite field GF(q) taken modulo x^n − 1. Identify the elements of the cyclic code C with polynomials in R such that (c0, ..., cn−1) maps to the polynomial c0 + c1 x + ... + cn−1 x^(n−1): thus multiplication by x corresponds to a cyclic shift. Then C is an ideal in R, and hence principal, since R is a principal ideal ring. The ideal is generated by the unique monic element in C of minimum degree, the generator polynomial g(x). [1] This must be a divisor of x^n − 1. It follows that every cyclic code is a polynomial code. If the generator polynomial has degree d then the rank of the code C is n − d.
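
As a sketch of this correspondence, the following example (with the illustrative choice g(x) = 1 + x + x^3, a divisor of x^7 − 1 over GF(2)) enumerates the multiples of a generator polynomial modulo x^n − 1 and confirms that the resulting code has 2^(n−d) codewords and is closed under cyclic shifts.

```python
from itertools import product

n = 7
g = [1, 1, 0, 1]             # g(x) = 1 + x + x^3, a divisor of x^7 - 1 over GF(2)

def polymul_mod(a, b, n):
    """Multiply two GF(2) polynomials and reduce modulo x^n - 1 (i.e. set x^n = 1)."""
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[(i + j) % n] ^= bj
    return tuple(out)

d = len(g) - 1               # degree of the generator polynomial
k = n - d                    # rank of the code
codewords = {polymul_mod(a, g, n) for a in product([0, 1], repeat=k)}

print(len(codewords) == 2 ** k)                                # True: 2^4 = 16 codewords
print(all(c[-1:] + c[:-1] in codewords for c in codewords))    # True: the code is cyclic
```

Multiplying a codeword polynomial by x corresponds exactly to the cyclic shift checked in the last line.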

The idempotent of C is a codeword e such that e^2 = e (that is, e is an idempotent element of C) and e is an identity for the code, that is e c = c for every codeword c. If n and q are coprime such a word always exists and is unique; [2] it is a generator of the code.

An irreducible code is a cyclic code in which the code, as an ideal, is irreducible, i.e. is minimal in R, so that its check polynomial is an irreducible polynomial.

Examples

For example, if q = 2 and n = 3, the set of codewords contained in the cyclic code generated by (1, 1, 0) is precisely

{(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)}.

It corresponds to the ideal in GF(2)[x] / (x^3 − 1) generated by (1 + x).

The polynomial 1 + x is irreducible in the polynomial ring, and hence the code is an irreducible code.

The idempotent of this code is the polynomial x + x^2, corresponding to the codeword (0, 1, 1).
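
The idempotent property in this example can be verified directly; the sketch below (illustration only) multiplies polynomials in GF(2)[x]/(x^3 − 1) represented as coefficient tuples.

```python
# Verify that e(x) = x + x^2 is the idempotent of the code generated by 1 + x
# in GF(2)[x]/(x^3 - 1).  Polynomials are coefficient tuples (c0, c1, c2).

def mul(a, b, n=3):
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[(i + j) % n] ^= bj
    return tuple(out)

e = (0, 1, 1)                                  # e(x) = x + x^2
code = {(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)}

print(mul(e, e) == e)                          # True: e is idempotent
print(all(mul(e, c) == c for c in code))       # True: e acts as an identity on the code
```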

Trivial examples

Trivial examples of cyclic codes are GF(q)^n itself and the code containing only the zero codeword. These correspond to generators 1 and x^n − 1 respectively: these two polynomials must always be factors of x^n − 1.

Over GF(2) the parity bit code, consisting of all words of even weight, corresponds to the generator x + 1. Again over GF(2) this must always be a factor of x^n − 1.

Quasi-cyclic codes and shortened codes

Before delving into the details of cyclic codes, we first discuss quasi-cyclic and shortened codes, which are closely related to cyclic codes and can be converted into one another.

Definition

Quasi-cyclic codes:[citation needed]

An (n, k) quasi-cyclic code is a linear block code such that, for some b which is coprime to n, the polynomial x^b c(x) (mod x^n − 1) is a codeword polynomial whenever c(x) is a codeword polynomial.

Here, a codeword polynomial is an element of a linear code whose codewords are polynomials that are divisible by a polynomial of shorter length called the generator polynomial. Every codeword polynomial can be expressed in the form c(x) = a(x) g(x), where g(x) is the generator polynomial. Any codeword (c0, c1, ..., cn−1) of a cyclic code can be associated with a codeword polynomial, namely, c(x) = c0 + c1 x + ... + cn−1 x^(n−1). A quasi-cyclic code with b equal to 1 is a cyclic code.

Definition

Shortened codes:

An (n, k) linear code is called a proper shortened cyclic code if it can be obtained by deleting b positions from an (n + b, k + b) cyclic code.

In shortened codes, information symbols are deleted to obtain a desired blocklength smaller than the design blocklength. The missing information symbols are usually imagined to be at the beginning of the codeword and are considered to be 0. Therefore, n − k is fixed, and then k is decreased, which eventually decreases n. It is not necessary to delete the starting symbols; depending on the application, sometimes consecutive positions are considered as 0 and are deleted.

All the symbols which are dropped need not be transmitted, and at the receiving end they can be reinserted. To convert a cyclic code to a shortened code, set b information symbols to zero and drop them from each codeword. Any cyclic code can be converted to a quasi-cyclic code by dropping every bth symbol, where b is a factor of n. If the dropped symbols are not check symbols then this cyclic code is also a shortened code.
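
The following sketch illustrates shortening, starting (as an assumption for illustration) from the (7,4) cyclic code generated by 1 + x + x^3: codewords whose highest-order position is zero are kept and that position is dropped, giving a (6,3) linear code that is no longer cyclic.

```python
from itertools import product

n, g = 7, [1, 1, 0, 1]                    # (7,4) cyclic code generated by 1 + x + x^3

def mul_mod(a, b, n):
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[(i + j) % n] ^= bj
    return tuple(out)

k = n - (len(g) - 1)
full = {mul_mod(a, g, n) for a in product([0, 1], repeat=k)}

# Shorten by b = 1: fix the last position to 0, then delete it.
shortened = {c[:-1] for c in full if c[-1] == 0}

print(len(full), len(shortened))          # 16 8  -> a (6,3) shortened code
print(all(tuple(x ^ y for x, y in zip(u, v)) in shortened
          for u in shortened for v in shortened))                  # True: still linear
print(all(c[-1:] + c[:-1] in shortened for c in shortened))        # False: no longer cyclic
```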

For correcting errors

Cyclic codes can be used to correct errors; for example, Hamming codes realized as cyclic codes can be used for correcting single errors. Likewise, they are also used to correct double errors and burst errors. All types of error correction are covered briefly in the further subsections.

The (7,4) Hamming code has a generator polynomial g(x) = x^3 + x + 1. This polynomial has a zero in the Galois extension field GF(8) at the primitive element α, and all codewords satisfy c(α) = 0. Cyclic codes can also be used to correct double errors over the field GF(2). The blocklength will be equal to n = 2^m − 1, and the code has α and α^3 as zeros in GF(2^m), because we are considering the case of two errors here, so each zero will represent one error.

The received word is a polynomial of degree n − 1 given as

v(x) = a(x) g(x) + e(x)

where e(x) can have at most two nonzero coefficients corresponding to 2 errors.

We define the syndrome polynomial S(x) as the remainder of the polynomial v(x) when divided by the generator polynomial g(x), i.e.

S(x) = v(x) mod g(x) = (a(x) g(x) + e(x)) mod g(x) = e(x) mod g(x).
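
A sketch of the syndrome computation for the (7,4) code with g(x) = x^3 + x + 1; the codeword and error pattern below are made-up values chosen for illustration.

```python
# Syndrome of a received word: S(x) = v(x) mod g(x) over GF(2).
# A polynomial is a list of bits, with index i holding the coefficient of x^i.

def poly_mod(v, g):
    v = list(v)
    dg = len(g) - 1
    for i in range(len(v) - 1, dg - 1, -1):
        if v[i]:                          # cancel the coefficient of x^i with x^(i-dg)*g(x)
            for j in range(dg + 1):
                v[i - dg + j] ^= g[j]
    return v[:dg]

g = [1, 1, 0, 1]                          # g(x) = 1 + x + x^3
c = [1, 0, 1, 1, 1, 0, 0]                 # codeword c(x) = (1 + x) g(x)
e = [0, 0, 0, 0, 1, 0, 0]                 # single error at position 4
v = [ci ^ ei for ci, ei in zip(c, e)]     # received word

print(poly_mod(c, g))                     # [0, 0, 0]: a codeword has zero syndrome
print(poly_mod(v, g) == poly_mod(e, g))   # True: the syndrome depends only on e(x)
```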

For correcting two errors

Let the field elements X1 and X2 be the two error location numbers. If only one error occurs then X2 is equal to zero, and if none occurs both are zero.

Let S1 = v(α) and S3 = v(α^3).

These field elements are called "syndromes". Now, because g(x) is zero at α and α^3, we can write S1 = e(α) and S3 = e(α^3). If, say, two errors occur, then

S1 = α^(i1) + α^(i2) and S3 = α^(3 i1) + α^(3 i2).

And these two can be considered as a pair of equations in GF(2^m) with two unknowns, and hence we can write

S1 = X1 + X2 and S3 = X1^3 + X2^3.

Hence, if this pair of nonlinear equations can be solved, cyclic codes can be used to correct two errors.
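
To illustrate how this pair of equations can be solved, the sketch below works in GF(16) built from the primitive polynomial x^4 + x + 1 (an assumed choice; any degree-4 primitive polynomial would do), computes S1 and S3 from two assumed error positions, and recovers the error locators by finding the roots of the resulting quadratic by trial.

```python
# Solve S1 = X1 + X2, S3 = X1^3 + X2^3 over GF(16) to locate two errors.
# GF(16) is represented by exp/log tables for the primitive polynomial x^4 + x + 1.

EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13                          # reduce modulo x^4 + x + 1

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def locate_two_errors(i1, i2):
    X1, X2 = EXP[i1], EXP[i2]              # true error locators alpha^i1 and alpha^i2
    S1 = X1 ^ X2                           # the syndromes a decoder would observe
    S3 = gmul(gmul(X1, X1), X1) ^ gmul(gmul(X2, X2), X2)
    # In characteristic 2, S3 = S1^3 + S1*(X1*X2), so the product of the locators is:
    prod = gdiv(S3 ^ gmul(gmul(S1, S1), S1), S1)
    # X1 and X2 are the roots of z^2 + S1*z + prod; find them by exhaustive trial.
    roots = [z for z in range(1, 16) if gmul(z, z) ^ gmul(S1, z) ^ prod == 0]
    return sorted(LOG[r] for r in roots)

print(locate_two_errors(2, 9))             # [2, 9]
print(locate_two_errors(0, 5))             # [0, 5]
```

Here the true error positions are used only to synthesize the syndromes; an actual decoder would obtain S1 and S3 by evaluating the received polynomial at α and α^3.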

Hamming code

The Hamming(7,4) code may be written as a cyclic code over GF(2) with generator 1 + x + x^3. In fact, any binary Hamming code of the form Ham(r, 2) is equivalent to a cyclic code, [3] and any Hamming code of the form Ham(r, q) with r and q − 1 relatively prime is also equivalent to a cyclic code. [4] Given a Hamming code of the form Ham(r, 2) with r ≥ 3, the set of even codewords forms a cyclic [2^r − 1, 2^r − r − 2, 4]-code. [5]

Hamming code for correcting single errors

A code whose minimum distance is at least 3 has a check matrix all of whose columns are distinct and nonzero. If a check matrix for a binary code has m rows, then each column is an m-bit binary number. There are 2^m − 1 possible nonzero columns. Therefore, if a check matrix of a binary code with minimum distance at least 3 has m rows, then it can have at most 2^m − 1 columns. This defines a (2^m − 1, 2^m − 1 − m) code, called a Hamming code.

It is easy to define Hamming codes for larger alphabets of size q. We need to define one check matrix with pairwise linearly independent columns. Over an alphabet of size q, distinct nonzero columns may be scalar multiples of each other, so to guarantee pairwise linear independence, all nonzero m-tuples with a one as the topmost nonzero element are chosen as columns. Then no two columns are linearly dependent, although three columns can be, so the minimum distance of the code is 3.

So, there are (q^m − 1)/(q − 1) nonzero m-tuples with a one as the topmost nonzero element. Therefore, a Hamming code is a ((q^m − 1)/(q − 1), (q^m − 1)/(q − 1) − m) code.

Now, for cyclic codes, let α be a primitive element in GF(q^m), and let β = α^(q−1). Then β^((q^m − 1)/(q − 1)) = 1, so β is a zero of the polynomial x^((q^m − 1)/(q − 1)) − 1, and its minimal polynomial is a generator polynomial for a cyclic code of block length (q^m − 1)/(q − 1).

But for q = 2, β = α. The received word is a polynomial of degree n − 1 given as

v(x) = a(x) g(x) + e(x)

where e(x) = 0 or e(x) = x^i, where i represents the error location.

But we can also use α^i as an element of GF(2^m) to index the error location. Because g(α) = 0, we have v(α) = e(α) = α^i, and all powers of α from 0 to 2^m − 2 are distinct. Therefore, we can easily determine the error location i from α^i, unless v(α) = 0, which represents no error. So, a Hamming code is a single-error-correcting code over GF(2) with n = 2^m − 1 and k = n − m.
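
A sketch of this rule for m = 3, i.e. the (7,4) code, with GF(8) constructed from x^3 + x + 1 (the same polynomial that serves as the generator); the flipped bit position is an arbitrary choice for illustration.

```python
# Single-error correction for the (7,4) cyclic Hamming code: the error position
# is the discrete logarithm of v(alpha) in GF(8) built from x^3 + x + 1.

EXP, LOG = [0] * 14, [0] * 8
x = 1
for i in range(7):
    EXP[i] = EXP[i + 7] = x
    LOG[x] = i
    x <<= 1
    if x & 0x8:
        x ^= 0xB                          # reduce modulo x^3 + x + 1

def v_at_alpha(v):
    """Evaluate the binary polynomial v(x) at alpha in GF(8)."""
    s = 0
    for i, vi in enumerate(v):
        if vi:
            s ^= EXP[i % 7]
    return s

codeword = [1, 0, 1, 1, 1, 0, 0]          # (1 + x) * g(x), a valid codeword
received = list(codeword)
received[5] ^= 1                          # channel flips bit 5

s = v_at_alpha(received)                  # v(alpha); zero would mean "no error"
if s:
    received[LOG[s]] ^= 1                 # the error position i satisfies s = alpha^i
print(received == codeword)               # True
```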

For correcting burst errors

From the Hamming distance concept, a code with minimum distance 2t + 1 can correct any t errors. But in many channels the error pattern is not arbitrary; errors occur within a very short segment of the message. Such errors are called burst errors. For correcting such errors we get a more efficient code of higher rate because of the weaker constraints. Cyclic codes are used for correcting burst errors. In fact, cyclic codes can also correct cyclic burst errors along with ordinary burst errors. A cyclic burst error is defined as follows:

A cyclic burst of length t is a vector whose nonzero components are confined to t (cyclically) consecutive components, the first and the last of which are nonzero.

In polynomial form, a cyclic burst of length t can be described as e(x) = x^i b(x) mod (x^n − 1), with b(x) a polynomial of degree at most t − 1 with nonzero coefficient b0. Here b(x) defines the pattern and x^i defines the starting point of the error. The length of the pattern is given by deg b(x) + 1. The syndrome polynomial is unique for each pattern and is given by

s(x) = e(x) mod g(x).

A linear block code that corrects all burst errors of length t or less must have at least 2t check symbols. Proof: Any linear code that can correct all burst patterns of length t or less cannot have a burst of length 2t or less as a codeword, because if it did, then a burst of length t could change that codeword into another burst pattern of length t, which could also be obtained by making a burst error of length t in the all-zero codeword. Now, any two vectors that are nonzero only in their first 2t components must be in different cosets of the standard array, to avoid their difference being a codeword that is a burst of length 2t or less. Therefore, the number of cosets is at least the number of such vectors, which is q^(2t). Hence there are at least q^(2t) cosets and hence at least 2t check symbols.

This property is also known as the Rieger bound, and it is similar to the Singleton bound for random error correction.

Fire codes

In 1959, Philip Fire [6] presented a construction of cyclic codes generated by a product of a binomial and a primitive polynomial. The binomial has the form x^c + 1 for some positive odd integer c. [7] A Fire code is a cyclic burst-error-correcting code over GF(q) with the generator polynomial

g(x) = (x^(2t−1) − 1) p(x),

where p(x) is a prime polynomial with degree m not smaller than t, and p(x) does not divide x^(2t−1) − 1. The block length of the Fire code is the smallest integer n such that g(x) divides x^n − 1.
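
A sketch of the construction over GF(2) for burst-correcting capability t = 3, with the illustrative choice p(x) = x^3 + x + 1 (degree 3 ≥ t, and it does not divide x^5 + 1); the block length is found as the smallest n with g(x) dividing x^n − 1.

```python
# Fire code over GF(2) for t = 3: g(x) = (x^(2t-1) + 1) * p(x), p(x) = x^3 + x + 1.
# The block length is the smallest n such that g(x) divides x^n - 1 (= x^n + 1 over GF(2)).

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def x_power_mod(n, g):
    """Compute x^n mod g(x) over GF(2) by repeated multiplication by x."""
    r = [1]
    for _ in range(n):
        r = [0] + r                                    # multiply by x
        if len(r) == len(g):                           # degree reached deg g: reduce once
            if r[-1]:
                r = [ri ^ gi for ri, gi in zip(r, g)]
            r = r[:-1]
    return r

t = 3
binomial = [1, 0, 0, 0, 0, 1]             # x^(2t-1) + 1 = x^5 + 1
p = [1, 1, 0, 1]                          # p(x) = 1 + x + x^3
g = polymul(binomial, p)

one = [1] + [0] * (len(g) - 2)            # the residue "1", padded to length deg g
# search upward; g(x) cannot divide x^n - 1 for n below deg g
n = next(n for n in range(len(g), 1000) if x_power_mod(n, g) == one)
print(n, len(g) - 1)                      # 35 8: a (35, 27) Fire code correcting bursts up to 3
```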

A Fire code can correct all burst errors of length t or less if no two such bursts appear in the same coset. This can be proved by contradiction. Suppose there are two distinct nonzero bursts b1(x) and x^j b2(x) (with 0 ≤ j < n) of length t or less that are in the same coset of the code. Then their difference is a codeword. As the difference is a multiple of g(x), it is also a multiple of x^(2t−1) − 1. Therefore,

b1(x) = x^j b2(x) mod (x^(2t−1) − 1).

Because a burst pattern of length at most t cannot equal a nontrivial cyclic shift of another such pattern within a cycle of length 2t − 1, this forces b1(x) = b2(x) = b(x) and j to be a multiple of 2t − 1, say j = k(2t − 1) for some k. The difference of the two bursts is then the codeword

x^j b(x) − b(x) = b(x)(x^j − 1),

which must be a multiple of g(x) and hence of p(x). Since the degree of b(x) is less than the degree of p(x), p(x) cannot divide b(x), so p(x) must divide x^j − 1. But if j is not zero, then j is a nonzero multiple of 2t − 1 that is also divisible by the period of p(x), hence a multiple of the block length n, contradicting j < n. Therefore j equals zero, which means the two bursts are the same, contrary to the assumption.

Fire codes are the best single-burst-correcting codes with high rate, and they are constructed analytically. They are of very high rate, and when m and t are equal, the redundancy is least and is equal to 3t − 1. By using multiple Fire codes, longer burst errors can also be corrected.

For error detection, cyclic codes are widely used and are called cyclic redundancy codes.

On Fourier transform

Applications of the Fourier transform are widespread in signal processing. But their applications are not limited to the complex field; Fourier transforms also exist in Galois fields GF(q). Using the Fourier transform, cyclic codes can be described in a setting closer to signal processing.

Fourier transform over finite fields

The discrete Fourier transform of a vector v = (v0, v1, ..., vn−1) over the complex numbers is the vector V = (V0, V1, ..., Vn−1) with components

Vj = Σ (i = 0 to n − 1) ω^(ij) vi,   j = 0, ..., n − 1,

where ω = exp(−2π√(−1)/n) is an nth root of unity in the complex field. Similarly, in a finite field an nth root of unity is an element ω of order n. Therefore,

if v = (v0, v1, ..., vn−1) is a vector over GF(q), and ω is an element of GF(q) of order n, then the Fourier transform of the vector v is the vector V = (V0, V1, ..., Vn−1) whose components are given by

Vj = Σ (i = 0 to n − 1) ω^(ij) vi,   j = 0, ..., n − 1.

Here i is the time index, j is the frequency and V is the spectrum. One important difference between the Fourier transform in the complex field and in a Galois field is that in the complex field an nth root of unity exists for every value of n, while in GF(q) an element ω of order n exists only if n divides q − 1. In the case of extension fields, there will be a Fourier transform in the extension field GF(q^m) if n divides q^m − 1 for some m. In the Galois-field case the time-domain vector v is over the field GF(q), but the spectrum V may be over the extension field GF(q^m).
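
As a sketch in the simplest prime-field setting, take q = 5 and n = 4 (so that n divides q − 1) with ω = 2, an element of order 4; these particular values are assumptions chosen for illustration.

```python
# Fourier transform over GF(5): n = 4 divides q - 1 = 4, and omega = 2 has order 4.

q, n, omega = 5, 4, 2

def gf_dft(v):
    """V_j = sum_i omega^(i*j) * v_i over GF(q)."""
    return [sum(pow(omega, i * j, q) * vi for i, vi in enumerate(v)) % q
            for j in range(n)]

def gf_idft(V):
    """Inverse transform: v_i = n^(-1) * sum_j omega^(-i*j) * V_j over GF(q)."""
    n_inv = pow(n, q - 2, q)
    w_inv = pow(omega, q - 2, q)
    return [(n_inv * sum(pow(w_inv, i * j, q) * Vj for j, Vj in enumerate(V))) % q
            for i in range(n)]

v = [1, 3, 0, 2]                   # a time-domain vector over GF(5)
V = gf_dft(v)
print(V)                           # its spectrum
print(gf_idft(V) == v)             # True: the transform is invertible
```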

Spectral description

Any codeword of a cyclic code of blocklength n can be represented by a polynomial c(x) of degree at most n − 1. Its encoder can be written as c(x) = a(x) g(x). Therefore, in the frequency domain, the encoder can be written as Cj = Aj Gj. Here the codeword spectrum Cj has values in GF(q^m), but all the components in the time domain are from GF(q). As the data spectrum Aj is arbitrary, the role of Gj is to specify those j where Cj will be zero.

Thus, cyclic codes can also be defined as

Given a set of spectral indices whose elements are called check frequencies, the cyclic code C is the set of words over GF(q) whose spectrum is zero in the components indexed by this set. Any such spectrum will have components of the form Cj = Aj Gj.

So, cyclic codes are vectors in the field GF(q), and the spectrum given by the Fourier transform is over the field GF(q^m) and is constrained to be zero at certain components. But not every spectrum over the field GF(q^m) that is zero at certain components has an inverse transform with components in the field GF(q). Such spectra cannot be used as cyclic codes.

The following are a few bounds on the spectrum of cyclic codes.

BCH bound

If n is a factor of q^m − 1 for some m, then the only vector in GF(q)^n of weight d − 1 or less that has d − 1 consecutive components of its spectrum equal to zero is the all-zero vector.

Hartmann-Tzeng bound

If n is a factor of q^m − 1 for some m, and b is an integer that is coprime with n, then the only vector v in GF(q)^n of weight d + s − 2 or less whose spectral components Vj equal zero for j = (ℓ + i1 + i2 b) mod n, where i1 = 0, ..., d − 2 and i2 = 0, ..., s − 1, is the all-zero vector.

Roos bound

If n is a factor of q^m − 1 for some m, and b is an integer that is coprime with n, then the only vector v in GF(q)^n of weight d + s − 2 or less whose spectral components Vj equal zero for j = (ℓ + i1 + i2 b) mod n, where i1 = 0, ..., d − 2 and i2 takes at least s values in a range of at most d + s − 2 consecutive values, is the all-zero vector.

Quadratic residue codes

When the prime ℓ is a quadratic residue modulo the prime p, there is a quadratic residue code, which is a cyclic code of length p, dimension (p + 1)/2 and minimum weight at least √p over GF(ℓ).

Generalizations

A constacyclic code is a linear code with the property that, for some constant λ, if (c1, c2, ..., cn) is a codeword then so is (λcn, c1, ..., cn−1). A negacyclic code is a constacyclic code with λ = −1. [8] A quasi-cyclic code has the property that for some s, any cyclic shift of a codeword by s places is again a codeword. [9] A double circulant code is a quasi-cyclic code of even length with s = 2. [9] Quasi-twisted codes and multi-twisted codes are further generalizations of constacyclic codes. [10] [11]
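
A sketch of the constacyclic shift for λ = −1 (a negacyclic shift) over GF(3), using as an illustration the length-4 negacyclic code generated by 2 + x + x^2, a factor of x^4 + 1; the same check with λ = 1 shows that this code is not cyclic.

```python
# Negacyclic (constacyclic with lambda = -1) codes over GF(3).
from itertools import product

q, n = 3, 4
lam = (-1) % q                                     # lambda = -1, i.e. 2 in GF(3)

def mul_mod(a, b):
    """Multiply polynomials over GF(3) modulo x^4 + 1 (so x^4 = -1)."""
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign = -1 if (i + j) >= n else 1
            out[(i + j) % n] = (out[(i + j) % n] + sign * ai * bj) % q
    return tuple(out)

def constacyclic_shift(c, lam):
    """(c1, ..., cn) -> (lam*cn, c1, ..., c(n-1))."""
    return ((lam * c[-1]) % q,) + c[:-1]

g = (2, 1, 1, 0)                                   # g(x) = 2 + x + x^2 divides x^4 + 1
code = {mul_mod(a, g) for a in product(range(q), repeat=2)}

print(len(code))                                                   # 9 codewords
print(all(constacyclic_shift(c, lam) in code for c in code))       # True: negacyclic
print(all(constacyclic_shift(c, 1) in code for c in code))         # False: not cyclic
```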

Notes

  1. Van Lint 1998, p. 76
  2. Van Lint 1998, p. 80
  3. Hill 1988, pp. 159–160
  4. Blahut 2003, Theorem 5.5.1
  5. Hill 1988, pp. 162–163
  6. Fire, P. (1959). A class of multiple-error-correcting binary codes for non-independent errors. Sylvania Reconnaissance Systems Laboratory, Mountain View, CA, Rept. RSL-E-2.
  7. Wei Zhou, Shu Lin, Khaled Abdel-Ghaffar. Burst or random error correction based on Fire and BCH codes. ITA 2014: 1–5.
  8. Van Lint 1998, p. 75
  9. MacWilliams & Sloane 1977, p. 506
  10. Aydin, Nuh; Siap, Irfan; K. Ray-Chaudhuri, Dijen (2001). "The Structure of 1-Generator Quasi-Twisted Codes and New Linear Codes". Designs, Codes and Cryptography. 24 (3): 313–326. doi:10.1023/A:1011283523000. S2CID 17376783.
  11. Aydin, Nuh; Halilović, Ajdin (2017). "A generalization of quasi-twisted codes: multi-twisted codes". Finite Fields and Their Applications. 45: 96–106. arXiv:1701.01044. doi:10.1016/j.ffa.2016.12.002. S2CID 7694655.

References

Van Lint, J. H. (1998). Introduction to Coding Theory. Springer-Verlag.
Hill, Raymond (1988). A First Course in Coding Theory. Oxford University Press.
Blahut, Richard E. (2003). Algebraic Codes for Data Transmission. Cambridge University Press.
MacWilliams, F. J.; Sloane, N. J. A. (1977). The Theory of Error-Correcting Codes. North-Holland.

This article incorporates material from cyclic code on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.