Zemor's decoding algorithm


In coding theory, Zemor's algorithm, designed and developed by Gilles Zémor,[1] is a recursive low-complexity approach to code construction. It is an improvement over the algorithm of Sipser and Spielman.


Zemor considered a typical class of Sipser–Spielman constructions of expander codes, where the underlying graph is a bipartite graph. Sipser and Spielman introduced a constructive family of asymptotically good linear error-correcting codes together with a simple parallel algorithm that always removes a constant fraction of errors. This article is based on Venkatesan Guruswami's course notes.[2]

Code construction

Zemor's algorithm is based on a type of expander graph called a Tanner graph. The construction of the code was first proposed by Tanner.[3] The codes are based on the double cover of a d-regular expander, which is a bipartite graph G = (V, E), where V is the set of vertices and E is the set of edges, with V = A ∪ B and A ∩ B = ∅, where A and B denote the two sides of the bipartition. Let n be the number of vertices in each side, i.e., |A| = |B| = n. The edge set E has size N = nd, and every edge in E has one endpoint in A and one endpoint in B. E(v) denotes the set of edges containing the vertex v.

Assume an ordering on V; this induces an ordering on the edges of E(v) for every v ∈ V. Let F = GF(2) be the binary field, and for a word x = (x_e), e ∈ E, in F^N, let (x)_v denote the subword of x indexed by E(v). The subsets of vertices A and B thus induce, for every word x ∈ F^N, a partition into n non-overlapping subwords (x)_v ∈ F^d, where v ranges over the elements of A (and likewise over the elements of B). To construct the code C, consider a linear subcode C₀, which is a [d, k] code of rate r₀ = k/d, where q, the size of the alphabet, is 2. For any vertex v ∈ V, let v(1), …, v(d) be some ordering of the vertices adjacent to v. In this code, each bit x_e is linked with an edge e of E.

We can define the code C to be the set of binary vectors x = (x_1, …, x_N) of F^N such that, for every vertex v of V, (x)_v is a codeword of C₀. Here we consider the special case in which every edge of E is incident to exactly 2 vertices of V. It means that V and E make up, respectively, the vertex set and edge set of a d-regular graph G.

Let us call the code C constructed in this way a (G, C₀) code. For a given graph G and a given code C₀ there are several (G, C₀) codes, since there are different ways of ordering the edges incident to a given vertex v, i.e., different choices of v(1), …, v(d). In fact, our code C consists of all codewords x such that (x)_v ∈ C₀ for all v ∈ V. The code C is a linear [N, K] code over F, as it is generated from a linear subcode C₀. The code C is defined as C = {x ∈ F^N : (x)_v ∈ C₀ for every v ∈ V}.
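
To make the definition concrete, the following is a minimal Python sketch (an illustration, not part of the construction in the sources) of the membership test for a (G, C₀) code. Purely as a stand-in for an actual expander, it takes G to be the complete bipartite graph K_{d,d}, which is d-regular and bipartite, and takes C₀ to be the [d, d − 1] single parity-check code.

from itertools import product

d = 4                                # degree of the bipartite graph; also the length of C0 (illustrative choice)
A = range(d)                         # left vertices
B = range(d)                         # right vertices
edges = list(product(A, B))          # K_{d,d}: every (a, b) pair is an edge, so N = d*d
N = len(edges)

def in_C0(word):
    # Membership test for the illustrative subcode C0:
    # the [d, d-1] single parity-check code (even overall parity).
    return sum(word) % 2 == 0

def subword(x, side, v):
    # (x)_v: the subword of x indexed by the edges E(v) incident to vertex v.
    if side == "A":
        return [x[i] for i, (a, b) in enumerate(edges) if a == v]
    return [x[i] for i, (a, b) in enumerate(edges) if b == v]

def in_code(x):
    # x is in C = (G, C0) iff (x)_v is a codeword of C0 for every vertex v.
    return all(in_C0(subword(x, "A", v)) for v in A) and \
           all(in_C0(subword(x, "B", v)) for v in B)

# The all-zeros vector is always a codeword; flipping a single bit never is,
# because both endpoints of that edge then see odd parity.
zero = [0] * N
one_error = zero.copy()
one_error[0] = 1
print(in_code(zero), in_code(one_error))   # True False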

[Figure: Graph G and code C]

The figure shows the graph G and the code C.

Let λ be the second largest eigenvalue of the adjacency matrix of G. Here the largest eigenvalue is d. Two important claims are made:

Claim 1


R ≥ 2r₀ − 1. Let R be the rate of a linear code constructed from a bipartite graph whose digit nodes have degree 2 and whose subcode nodes have degree d. If a single linear code with parameters [d, k] and rate r₀ = k/d is associated with each of the subcode nodes, then R ≥ 2r₀ − 1.

Proof

Let R = K/N be the rate of the linear code C. Suppose there are m subcode nodes in the graph. If the degree of each subcode node is d, then the code must have N = md/2 digits, as each digit node is connected to 2 of the md edges in the graph. Each subcode node contributes d − k equations to the parity check matrix, for a total of m(d − k). These equations may not be linearly independent. Therefore,

K ≥ N − m(d − k).

Since the value of N, i.e., the number of digit nodes of this bipartite graph, is md/2, and here r₀ = k/d, we can write R as:

R = K/N ≥ 1 − m(d − k)/(md/2) = 1 − 2(1 − r₀) = 2r₀ − 1.
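
As a sanity check of Claim 1, the sketch below (illustrative only) builds the parity-check matrix of a small (G, C₀) code, with K_{d,d} as a stand-in graph and the [d, d − 1] single parity-check code as C₀, so that r₀ = (d − 1)/d. It computes the exact rate from the GF(2) rank of the matrix and compares it with the bound 2r₀ − 1; the gap comes from the linearly dependent check equations mentioned in the proof.

from itertools import product

d = 4
edges = list(product(range(d), range(d)))    # K_{d,d}; N = d*d digit nodes (edges)
N = len(edges)

# Each of the 2d subcode nodes contributes the single (d - k = 1) parity check of the
# [d, d-1] single parity-check code: the bits on its incident edges sum to 0 mod 2.
rows = []
for v in range(d):                                       # checks for the A side
    rows.append([1 if a == v else 0 for (a, b) in edges])
for v in range(d):                                       # checks for the B side
    rows.append([1 if b == v else 0 for (a, b) in edges])

def gf2_rank(matrix):
    # Rank over GF(2) by Gaussian elimination on a copy of the rows.
    m = [row[:] for row in matrix]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                m[r] = [x ^ y for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

K = N - gf2_rank(rows)       # dimension of C: some of the 2d checks are dependent
r0 = (d - 1) / d
print("exact rate", K / N, ">= bound", 2 * r0 - 1)
# With d = 4: exact rate 9/16 = 0.5625, bound 2*(3/4) - 1 = 0.5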

Claim 2

If C₀ is a linear code of rate r₀, block length d, and minimum relative distance δ₀, and if G is the edge-vertex incidence graph of a d-regular graph with second largest eigenvalue λ, then the code C = (G, C₀) has rate at least 2r₀ − 1 and minimum relative distance at least δ₀(δ₀ − λ/d).

Proof

Let C be derived from a d-regular graph G₀ on m vertices. So, the number of variables of C is N = dm/2 and the number of constraints is m. According to Alon and Chung,[4] if X is a subset of vertices of G₀ of size γm, then the number of edges contained in the subgraph induced by X in G₀ is at most (γ² + (λ/d)γ(1 − γ))·dm/2.

As a result, any set of αN variables has at least γm constraints as neighbours, where γ is the smallest number satisfying α ≤ γ² + (λ/d)γ(1 − γ). Since each variable appears in exactly two constraints, the average number of nonzero variables per neighbouring constraint is at most

2αN/(γm) = αd/γ.

So if αd/γ < δ₀d, then a word of relative weight α cannot be a codeword of C, because every constraint that sees a nonzero variable must see at least δ₀d of them. Using α ≤ γ² + (λ/d)γ(1 − γ) ≤ γ² + (λ/d)γ, the inequality is satisfied for every α < δ₀(δ₀ − λ/d). Therefore, C cannot have a nonzero codeword of relative weight less than δ₀(δ₀ − λ/d).

We can assume that λ/d is bounded away from 1. For those values of d for which d − 1 is an odd prime, there are explicit constructions of sequences of d-regular bipartite graphs with an arbitrarily large number of vertices such that each graph in the sequence is a Ramanujan graph. A graph is called a Ramanujan graph if it satisfies the inequality λ ≤ 2√(d − 1). Certain expansion properties of the graph G are visible in the separation between the eigenvalues d and λ. If the graph is a Ramanujan graph, then λ/d eventually becomes arbitrarily small as d becomes large.
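
The following short calculation (illustrative only; δ₀ = 0.1 is an arbitrary choice of subcode relative distance, not a value from the sources) shows how the Ramanujan bound λ ≤ 2√(d − 1) makes λ/d small and pushes the distance bound δ₀(δ₀ − λ/d) of Claim 2 towards δ₀² as the degree d grows.

from math import sqrt

delta0 = 0.1                          # hypothetical relative distance of the subcode C0
for d in (100, 1000, 10000, 100000):
    lam = 2 * sqrt(d - 1)             # largest second eigenvalue a Ramanujan graph allows
    bound = delta0 * (delta0 - lam / d)
    print(f"d={d:>6}  lambda/d={lam/d:.4f}  distance bound={bound:.6f}")
# The bound is vacuous (negative) until lambda/d < delta0; as d grows,
# lambda/d -> 0 and the bound approaches delta0**2 = 0.01.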

Zemor's algorithm

The iterative decoding algorithm written below alternates between the two vertex sets A and B of G: it first corrects the subwords associated with the vertices in A and then switches to correct the subwords associated with the vertices in B. Edges associated with a vertex on one side of the graph are not incident to any other vertex on that side, so it does not matter in which order the nodes of A (or of B) are processed, and the vertex processing within one side can be done in parallel.

The decoder D stands for a decoder for C₀ that maps a word of F^d to a nearest codeword of C₀; in particular, it correctly recovers any codeword corrupted by fewer than δ₀d/2 errors.

Decoder algorithm

Received word: w = (w_e), e ∈ E

z ← w
For t = 1 to m do                                // m is the number of iterations
{ if (t is odd)                                  // Here the algorithm will alternate between its two vertex sets.
      X ← A
  else
      X ← B
  Iteration t: For every v ∈ X, let (z)_v ← D((z)_v)   // Decoding (z)_v to its nearest codeword in C₀.
}
Output: z

Explanation of the algorithm

Since G is bipartite, the set of vertices A induces a partition of the edge set, E = ⋃_{v ∈ A} E(v). The set B induces another partition, E = ⋃_{v ∈ B} E(v).

Let w be the received vector, and recall that N = nd. The first iteration of the algorithm consists of applying the complete decoding of C₀ to the subword (w)_v for every v ∈ A. This means replacing, for every v ∈ A, the vector (w)_v by one of the closest codewords of C₀. Since the subsets of edges E(v) are disjoint for v ∈ A, the decoding of these subvectors of w may be done in parallel.

The iteration will yield a new vector z. The next iteration consists of applying the preceding procedure to z, but with A replaced by B. In other words, it consists of decoding all the subvectors (z)_v induced by the vertices of B. The following iterations repeat those two steps alternately, applying parallel decoding to the subvectors induced by the vertices of A and to the subvectors induced by the vertices of B.
Note: [If n = d and G is the complete bipartite graph, then C is a product code of C₀ with itself and the above algorithm reduces to the natural hard iterative decoding of product codes.]
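
As a concrete illustration of the decoder, here is a minimal Python sketch of the pseudocode above for that special case: G is the complete bipartite graph K_{7,7} and C₀ is the [7, 4, 3] Hamming code, so C is the product code of C₀ with itself and the word z can be stored as a 7 × 7 array whose rows correspond to the vertices of A and whose columns correspond to the vertices of B. The complete decoder D is implemented by brute force over the 16 codewords of C₀; the choice of graph, subcode and error pattern is purely illustrative.

from itertools import product

# C0: the [7, 4, 3] Hamming code, given by its standard parity-check matrix H0.
H0 = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
d = 7
C0 = [w for w in product((0, 1), repeat=d)
      if all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H0)]

def D(word):
    # Complete decoder for C0: return a nearest codeword of C0 (brute force).
    return min(C0, key=lambda c: sum(a != b for a, b in zip(c, word)))

def decode(z, rounds=4):
    # Zemor's iterative decoder on G = K_{7,7}: z is a 7 x 7 list of lists,
    # z[a][b] being the bit on the edge between vertex a of A and vertex b of B.
    z = [row[:] for row in z]
    for t in range(1, rounds + 1):
        if t % 2 == 1:                          # odd round: decode at the vertices of A (rows)
            for a in range(d):
                z[a] = list(D(z[a]))
        else:                                   # even round: decode at the vertices of B (columns)
            for b in range(d):
                col = D([z[a][b] for a in range(d)])
                for a in range(d):
                    z[a][b] = col[a]
    return z

# Transmitted codeword of the product code: the rank-one word with entries u[a]*v[b].
u, v = C0[5], C0[9]
sent = [[u[a] & v[b] for b in range(d)] for a in range(d)]

# Corrupt three bits of one row and decode.
received = [row[:] for row in sent]
for b in (1, 3, 5):
    received[0][b] ^= 1
print(decode(received) == sent)                 # True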

Here, the number of iterations, m, is O(log N). In general, the above algorithm can correct any pattern of errors whose Hamming weight is no more than roughly (δ₀²/4)·N, for values of λ/d sufficiently small compared with δ₀. The decoding algorithm can be implemented as a circuit of size O(N log N) and depth O(log N) that returns the codeword given that the error vector has weight within this bound.

Theorem

If G is a Ramanujan graph of sufficiently high degree then, for any ε > 0, the decoding algorithm can correct ((δ₀²/4)(1 − ε))·N errors in O(log N) rounds (where the big-O notation hides a dependence on ε). This can be implemented in linear time on a single processor; on n processors, each round can be implemented in constant time.

Proof

Since the decoding algorithm is insensitive to the value of the edges, by linearity we can assume that the transmitted codeword is the all-zeros vector. Consider the set of edges that have an incorrect value at some point during decoding; here, by an incorrect value we mean a 1 in any of the bits. Let w be the received word, let S₀ be the set of vertices of A whose subwords (w)_v are corrupted initially, and let S₁, S₂, … be the corresponding sets after the first, second, … stages of decoding; S_i corresponds to the set of vertices that were not able to successfully decode their subword in the i-th round. From the above algorithm, a constant fraction of the unsuccessful vertices is corrected in every iteration, and we can prove that |S₀|, |S₁|, |S₂|, … is a decreasing sequence. In fact, |S_{i+1}| ≤ c·|S_i| for some constant c < 1, which follows from the expansion of G. As we are assuming that the initial number of errors is below the bound of the theorem, the sequence decreases geometrically. So, no more than O(log n) rounds are necessary before every vertex decodes successfully. Furthermore, Σ_i |S_i| = O(n), and if we implement the i-th round so that it reprocesses only the vertices adjacent to edges left corrupted by round i − 1, then the total sequential running time will be linear.
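
To make the last two statements explicit, the following short calculation (a restatement of the argument above, writing c < 1 for the contraction factor in |S_{i+1}| ≤ c·|S_i|) gives the round count and the linear running time:

|S_t| ≤ c^t·|S₀| ≤ c^t·n, so |S_t| < 1, i.e., decoding has finished, as soon as t > (log n)/(log 1/c) = O(log n). Moreover,

Σ_{i ≥ 0} |S_i| ≤ |S₀|·(1 + c + c² + …) = |S₀|/(1 − c) = O(n),

and since reprocessing a single vertex costs O(d) = O(1) operations for constant degree d, the total sequential work over all rounds is O(n), i.e., linear in the block length N = nd.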

Drawbacks of Zemor's algorithm

  1. It is a lengthy process, as the number of iterations the decoder algorithm takes is O(log N).
  2. Zemor's decoding algorithm finds it difficult to decode erasures. A detailed way of improving the algorithm so that it can handle erasures is given in [5].

References

  1. "Gilles Zémor". www.math.u-bordeaux.fr. Retrieved 9 April 2023.
  2. http://www.cs.washington.edu/education/courses/cse590vg/03wi/scribes/1-27.ps (dead link)
  3. "Lecture notes" (PDF). washington.edu. Retrieved 9 April 2023.
  4. Alon, N.; Chung, F. R. K. (December 1988). "Explicit construction of linear sized tolerant networks". Discrete Mathematics. 72 (1–3). doi:10.1016/0012-365X(88)90189-6.
  5. "Archived copy". Archived from the original on September 14, 2004. Retrieved May 1, 2012.{{cite web}}: CS1 maint: archived copy as title (link)