Elias–Bassalygo bound

The Elias–Bassalygo bound, named after Peter Elias and Leonid A. Bassalygo, is an upper bound in coding theory on the rate of an error-correcting code as a function of its relative distance.

Definition

Let $C \subseteq [q]^n$ be a $q$-ary code of length $n$, i.e. a subset of $[q]^n$. [1] Let $R = \frac{\log_q |C|}{n}$ be the rate of $C$, let $\delta = \frac{d}{n}$ be its relative distance (where $d$ is the minimum Hamming distance of $C$), and let

$B_q(y, r) = \{ x \in [q]^n \mid \Delta(x, y) \le r \}$

be the Hamming ball of radius $r$ centered at $y$, where $\Delta$ denotes the Hamming distance. Let $\mathrm{Vol}_q(r, n) = |B_q(y, r)|$ be the volume of the Hamming ball of radius $r$. The volume of a Hamming ball is translation-invariant, i.e. it does not depend on the center $y$. In particular, $|B_q(y, r)| = |B_q(0, r)|$ for all $y \in [q]^n$.

For large enough $n$, the rate $R$ and the relative distance $\delta$ satisfy the Elias–Bassalygo bound:

$R \le 1 - H_q\big(J_q(\delta)\big) + o(1),$

where

$H_q(x) = x \log_q(q - 1) - x \log_q x - (1 - x) \log_q(1 - x)$

is the $q$-ary entropy function and

$J_q(\delta) = \Big(1 - \frac{1}{q}\Big)\Big(1 - \sqrt{1 - \frac{q\delta}{q - 1}}\Big)$

is a function related to the Johnson bound.
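Both functions in the bound are elementary and can be evaluated directly. The sketch below computes the bound numerically; the function names (`entropy_q`, `johnson_radius`, `elias_bassalygo_rate`) are illustrative, not part of any standard library.

```python
import math

def entropy_q(x, q):
    """q-ary entropy function H_q(x); H_q(0) = 0 by convention."""
    if x == 0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def johnson_radius(delta, q):
    """J_q(delta) = (1 - 1/q)(1 - sqrt(1 - q*delta/(q-1)))."""
    return (1 - 1 / q) * (1 - math.sqrt(1 - q * delta / (q - 1)))

def elias_bassalygo_rate(delta, q):
    """Asymptotic rate upper bound 1 - H_q(J_q(delta))."""
    return 1 - entropy_q(johnson_radius(delta, q), q)
```

For binary codes ($q = 2$) at $\delta = \tfrac{1}{2}$ one gets $J_2(\tfrac{1}{2}) = \tfrac{1}{2}$ and $H_2(\tfrac{1}{2}) = 1$, so the bound forces the rate to zero, as expected. Since $J_q(\delta) \ge \delta/2$, the bound is never weaker than the Hamming bound $1 - H_q(\delta/2)$.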

Proof

To prove the Elias–Bassalygo bound, we start with the following lemma.

Lemma. For any code $C \subseteq [q]^n$ and any $0 \le e \le n$, there exists a Hamming ball of radius $e$ with at least

$\frac{|C| \cdot \mathrm{Vol}_q(e, n)}{q^n}$

codewords in it.

Proof of Lemma. Pick a received word $y \in [q]^n$ uniformly at random and let $B_q(y, e)$ be the Hamming ball centered at $y$ with radius $e$. Since $y$ is selected uniformly at random, the expected size of the overlapped region $B_q(y, e) \cap C$ is

$\mathbb{E}_y\big[|B_q(y, e) \cap C|\big] = \sum_{c \in C} \Pr_y\big[c \in B_q(y, e)\big] = \frac{|C| \cdot \mathrm{Vol}_q(e, n)}{q^n}.$

Since this is the expected value of the size, there must exist at least one $y$ such that

$|B_q(y, e) \cap C| \ge \frac{|C| \cdot \mathrm{Vol}_q(e, n)}{q^n},$

otherwise the expectation would be smaller than this value.

Now we prove the Elias–Bassalygo bound. Define $e = nJ_q(\delta) - 1$. By the lemma, there exists a Hamming ball $B_q(y, e)$ with

$|B_q(y, e) \cap C| \ge \frac{|C| \cdot \mathrm{Vol}_q(e, n)}{q^n}$

codewords in it. By the Johnson bound, we have $|B_q(y, e) \cap C| \le qdn$. Thus,

$|C| \le qdn \cdot \frac{q^n}{\mathrm{Vol}_q(e, n)} \le qdn \cdot q^{n\left(1 - H_q(J_q(\delta)) + o(1)\right)}.$

The second inequality follows from the lower bound on the volume of a Hamming ball:

$\mathrm{Vol}_q(\lfloor \rho n \rfloor, n) \ge q^{n H_q(\rho) - o(n)}.$

Putting in $e = nJ_q(\delta) - 1$ and $\rho = J_q(\delta)$ gives the second inequality.

Therefore we have

$R = \frac{\log_q |C|}{n} \le 1 - H_q(J_q(\delta)) + o(1).$
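The volume lower bound used in the last step can be checked numerically: the normalized log-volume $\frac{1}{n}\log_q \mathrm{Vol}_q(\lfloor \rho n \rfloor, n)$ approaches $H_q(\rho)$ from below as $n$ grows. A minimal sketch (parameter choices $q = 3$, $\rho = 0.3$ are arbitrary illustrations):

```python
import math
from math import comb

def ball_volume(q, n, e):
    """Vol_q(e, n) = sum_{i=0}^{e} C(n, i)(q-1)^i."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(e + 1))

def entropy_q(x, q):
    """q-ary entropy function H_q(x) for 0 < x < 1."""
    return (x * math.log(q - 1, q) - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

# Normalized exponent (1/n) log_q Vol_q(floor(rho*n), n) for growing n;
# by Vol_q(floor(rho*n), n) >= q^(n H_q(rho) - o(n)) it tends to H_q(rho).
q, rho = 3, 0.3
exponents = {n: math.log(ball_volume(q, n, math.floor(rho * n)), q) / n
             for n in (50, 200, 800)}
```

The gap $H_q(\rho) - \frac{1}{n}\log_q \mathrm{Vol}_q(\lfloor \rho n \rfloor, n)$ stays positive (the matching upper bound $\mathrm{Vol}_q(\lfloor \rho n \rfloor, n) \le q^{n H_q(\rho)}$ holds for $\rho \le 1 - 1/q$) and shrinks like the $o(n)/n$ term as $n$ grows.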


References

  1. Each $q$-ary block code of length $n$ is a subset of the strings of $[q]^n$, where the alphabet set $[q]$ has $q$ elements.

Bassalygo, L. A. (1965), "New upper bounds for error-correcting codes", Problems of Information Transmission, 1 (1): 32–35

Shannon, Claude E.; Gallager, Robert G.; Berlekamp, Elwyn R. (1967), "Lower bounds to error probability for coding on discrete memoryless channels. Part I.", Information and Control, 10: 65–103, doi:10.1016/s0019-9958(67)90052-6

Shannon, Claude E.; Gallager, Robert G.; Berlekamp, Elwyn R. (1967), "Lower bounds to error probability for coding on discrete memoryless channels. Part II.", Information and Control, 10: 522–552, doi:10.1016/s0019-9958(67)91200-4