AN codes

AN codes [1] are error-correcting codes used in arithmetic applications. Arithmetic codes were commonly used in computer processors to ensure the accuracy of their arithmetic operations when electronics were less reliable. Arithmetic codes help the processor detect when an error is made and correct it; without these codes, processors would be unreliable, since any error would go undetected. AN codes are arithmetic codes named for the integers $A$ and $N$ that are used to encode and decode the codewords.

These codes differ from most other codes in that they use the arithmetic weight to maximize the arithmetic distance between codewords, as opposed to the Hamming weight and Hamming distance. The arithmetic distance between two words is a measure of the number of errors made while computing an arithmetic operation. Using the arithmetic distance is necessary, since a single error in an arithmetic operation can cause a large Hamming distance between the received answer and the correct answer.

Arithmetic Weight and Distance

The arithmetic weight of an integer $N$ in base $r$ is defined by

$$ w(N) = \min\Big\{\, t : N = \sum_{i=1}^{t} c_i r^{n_i} \,\Big\} $$

where $|c_i| < r$, $n_i \ge 0$, and $c_i, n_i \in \mathbb{Z}$. The arithmetic weight of a word is upper bounded by its Hamming weight, since any integer can be written in its standard polynomial form $N = \sum_{i=0}^{k} b_i r^i$, where the $b_i$ are the digits of the integer; removing the terms with $b_i = 0$ gives a $t$ equal to the Hamming weight. The arithmetic weight will usually be less than the Hamming weight, since the $c_i$ are allowed to be negative. For example, the integer $N = 14$, which is $1110$ in binary, has a Hamming weight of $3$. This is a quick upper bound on the arithmetic weight, since $14 = 2^3 + 2^2 + 2^1$. However, since the $c_i$ can be negative, we can write $14 = 2^4 - 2^1$, which makes the arithmetic weight equal to $2$.

The arithmetic distance between two integers $X$ and $Y$ is defined by

$$ d(X, Y) = w(X - Y). $$

This is one of the primary metrics used when analyzing arithmetic codes.
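To make these definitions concrete, here is a minimal Python sketch for the binary case ($r = 2$), where the arithmetic weight can be computed by counting the nonzero digits of the non-adjacent form (NAF); the function names are illustrative, not taken from the references.

```python
def arithmetic_weight(n: int) -> int:
    """Binary arithmetic weight: the minimum number of terms c * 2^k
    (with c = +1 or -1) summing to n. For base 2 the non-adjacent
    form (NAF) attains this minimum, so we count its nonzero digits."""
    n = abs(n)
    weight = 0
    while n:
        if n & 1:              # odd: emit a nonzero NAF digit
            c = 2 - (n & 3)    # n % 4 == 1 -> +1, n % 4 == 3 -> -1
            n -= c             # now n is divisible by 4
            weight += 1
        n >>= 1
    return weight

def arithmetic_distance(x: int, y: int) -> int:
    """Arithmetic distance between two integers: d(x, y) = w(x - y)."""
    return arithmetic_weight(x - y)

assert arithmetic_weight(14) == 2        # 14 = 2^4 - 2^1, as in the text
assert arithmetic_distance(16, 2) == 2   # 16 - 2 = 14
```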

AN Codes

AN codes are defined by integers $A$ and $M$ and are used to encode the integers $N$ from $0$ to $M - 1$ as the multiples

$$ C = \{\, A N : 0 \le N < M \,\}. $$

Each choice of $A$ will result in a different code, while $M$ serves as a limiting factor that ensures useful properties in the distance of the code. If $M$ is too large, it could admit a codeword with a very small arithmetic weight, which would degrade the distance of the entire code. To use these codes, before an arithmetic operation is performed on two integers, each integer is multiplied by $A$. Let the result of the operation on the codewords $A N_1$ and $A N_2$ be $R$. Note that $R$ must also lie between $0$ and $A M$ for proper decoding. To decode, simply divide $R$ by $A$. If $A$ is not a factor of $R$, then at least one error has occurred, and the most likely solution is the codeword with the least arithmetic distance from $R$. As with codes using Hamming distance, AN codes can correct up to $\lfloor (d-1)/2 \rfloor$ errors, where $d$ is the distance of the code.
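As a sketch of this encode/decode scheme (with assumed toy parameters $A = 3$ and $M = 16$, and reusing arithmetic_distance from the sketch above), the whole pipeline fits in a few lines of Python:

```python
A, M = 3, 16   # assumed example parameters; codewords are A*N for 0 <= N < M

def encode(n: int) -> int:
    """Encode the message integer n as the codeword A*n."""
    assert 0 <= n < M
    return A * n

def decode(received: int) -> int:
    """Divide by A; a nonzero remainder means at least one error occurred."""
    message, remainder = divmod(received, A)
    if remainder != 0:
        raise ValueError(f"{received} is not a multiple of {A}: error detected")
    return message

def correct(received: int) -> int:
    """Return the codeword nearest to the received word in arithmetic distance."""
    return min((A * n for n in range(M)),
               key=lambda codeword: arithmetic_distance(received, codeword))
```

The exhaustive search in correct() is only sensible for toy-sized $M$.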

For example, for an AN code with $A = 3$, the operation of adding $4$ and $5$ starts by encoding both operands. This results in the operation $12 + 15 = 27$. Then, to find the solution, we divide: $27 / 3 = 9$. As long as $M > 9$, this is a valid operation under the code. Suppose an error occurs in the binary representation of each operand, so that $12 = 01100_2$ becomes $01101_2 = 13$ and $15 = 01111_2$ becomes $01011_2 = 11$; then the received result is $13 + 11 = 24$. Notice that since $27 = 11011_2$ and $24 = 11000_2$, the Hamming distance between the received word and the correct solution is $2$ after just $2$ errors. To compute the arithmetic distance, we take $27 - 24 = 3$, which can be represented as $2^1 + 2^0$ or as $2^2 - 2^0$. In either case, the arithmetic distance is $2$, as expected, since this is the number of errors that were made. To correct this error, an algorithm would be used to compute the nearest codeword to the received word in terms of arithmetic distance. We will not describe the algorithms in detail.
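Replaying this example with the sketch above also shows its limits: this particular pair of errors happens to produce a received word that is still a multiple of $A$, so the divisibility check alone cannot flag it.

```python
c1, c2 = encode(4), encode(5)        # 12 and 15
assert decode(c1 + c2) == 9          # error-free case: 27 / 3 = 9

received = 13 + 11                   # one bit flipped in each operand: 24
assert bin(27 ^ received).count("1") == 2        # Hamming distance 2
assert arithmetic_distance(27, received) == 2    # matches the 2 errors

# 24 is divisible by A = 3, so decode() returns 8 with no complaint:
# two errors exceed what this distance-2 code can detect reliably.
assert decode(received) == 8
```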

To ensure that the distance of the code is not too small, we define modular AN codes. A modular AN code is a subgroup of $\mathbb{Z}_m$, the integers modulo $m$. These codes are measured in terms of the modular distance, which is defined via a graph whose vertices are the elements of $\mathbb{Z}_m$. Two vertices $x$ and $y$ are connected iff

$$ x \equiv y + c\, r^i \pmod{m} $$

where $c, i \in \mathbb{Z}$ with $0 < |c| < r$ and $i \ge 0$. The modular distance between two words is then the length of the shortest path between their nodes in the graph. The modular weight of a word is its distance from $0$, which is equal to

$$ w_m(N) = \min\{\, w(N') : N' \equiv N \pmod{m} \,\}. $$
In practice, the value of $m$ is typically chosen such that $m = 2^n - 1$, since most computer arithmetic is computed modulo $2^n$, so there is no additional loss of data due to the code going out of bounds, since the computer will also be out of bounds. Choosing $m = 2^n - 1$ also tends to result in codes with larger distances than other choices of $m$.
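A brute-force modular weight, adequate for small demonstrations, might look as follows; the small window of representatives searched is an assumption of the sketch (enough at this scale), not part of the definition.

```python
def modular_weight(x: int, m: int) -> int:
    """Modular weight w_m(x): minimum arithmetic weight over integers
    congruent to x mod m. A small search window suffices for demos."""
    x %= m
    return min(arithmetic_weight(x + k * m) for k in range(-2, 3))

# With m = 2^4 - 1 = 15: the residue 11 has plain arithmetic weight 3
# (11 = 2^3 + 2^1 + 2^0), but 11 - 15 = -4 = -2^2 has weight 1.
assert arithmetic_weight(11) == 3
assert modular_weight(11, 15) == 1
```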

By using the modular weight with $m = r^n - 1$, the AN codes become cyclic codes.

definition: A cyclic AN code is a code $C$ that is a subgroup of $\mathbb{Z}_{r^n - 1}$, where $n$ is the length of the code.

A cyclic AN code is a principal ideal of the ring $\mathbb{Z}_{r^n - 1}$. There are integers $A$ and $B$ with $A B = r^n - 1$, where $A$ and $B$ (playing the role of $M$) satisfy the definition of an AN code. Cyclic AN codes are a subset of cyclic codes and have the same properties.
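The cyclic structure is easy to verify numerically, since multiplying by $r$ modulo $r^n - 1$ cyclically shifts a word's base-$r$ digits. A small sketch with the assumed values $r = 2$, $n = 4$, $A = 3$:

```python
r, n = 2, 4
m = r**n - 1                 # 15
A = 3                        # generator; the code has m // A = 5 codewords
code = {A * N % m for N in range(m // A)}    # {0, 3, 6, 9, 12}

# closure under cyclic shift: doubling mod 15 permutes the codewords
assert {(r * c) % m for c in code} == code
```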

Mandelbaum-Barrows Codes

The Mandelbaum-Barrows codes are a type of cyclic AN code introduced by D. Mandelbaum and J. T. Barrows. [2] [3] These codes are created by choosing $B$ to be a prime number that does not divide $r$, such that the multiplicative group $\mathbb{Z}_B^{*}$ is generated by $r$ and $-1$. Let $n$ be a positive integer such that $B$ divides $r^n - 1$, and set $A = (r^n - 1)/B$. For example, choosing $r = 2$, $B = 5$ and $n = 4$ gives $A = 3$, and the result is a Mandelbaum-Barrows code encoding the integers $N$ with $0 \le N < 5$ as codewords of length $4$ in base $2$.
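This construction can be checked with the modular_weight sketch above: for the parameters $r = 2$, $B = 5$, $n = 4$ chosen here, every nonzero codeword has the same modular weight.

```python
B, r = 5, 2            # B prime; the group Z_5* is generated by 2 (and -1)
n = 4                  # B divides r**n - 1 = 15
m = r**n - 1
A = (r**n - 1) // B    # A = 3
code = [A * N % m for N in range(B)]         # [0, 3, 6, 9, 12]

weights = [modular_weight(c, m) for c in code if c != 0]
assert weights == [2, 2, 2, 2]   # every nonzero codeword has weight 2
```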

To analyze the distance of the Mandelbaum-Barrows codes, we will need the following theorem.

theorem: Let $C$ be a cyclic AN code with generator $A$, and let $B = (r^n - 1)/A$ be its number of codewords. Then

$$ \sum_{N \in C} w_m(N) = n\,(B - Z), $$

where $Z$ is the number of codewords whose cyclic NAF representation has last digit $c_0 = 0$.

proof: Each $N \in \mathbb{Z}_{r^n - 1}$ has a unique cyclic NAF [4] representation, which is

$$ N \equiv \sum_{i=0}^{n-1} c_i r^i \pmod{r^n - 1}. $$

We define an $n \times B$ matrix with elements $c_{i,j}$, where $0 \le i \le n - 1$ and $0 \le j \le B - 1$. This matrix is essentially a list of all the codewords in $C$, where each column is a codeword written in cyclic NAF. Since $C$ is cyclic, multiplying a codeword by $r$ modulo $r^n - 1$ cyclically shifts its digits and yields another codeword, so every row of the matrix has the same number of zeros. The sum we must calculate is the total number of nonzero entries of the matrix, which is therefore $n$ times the number of nonzero entries in a single row, i.e., $n$ times the number of codewords that do not end with a $0$. With $Z$ codewords ending in $0$, this gives a sum of the weights of the codewords of $C$ equal to $n(B - Z)$, as desired.

We will now use the previous theorem to show that the Mandelbaum-Barrows codes are equidistant (which means that every pair of distinct codewords is at the same distance), with a distance of

$$ d = \frac{n\,(B - Z)}{B - 1}. $$

proof: Let $N \in \{1, \ldots, B - 1\}$; then $N$ is not divisible by the prime $B$. Since $\mathbb{Z}_B^{*}$ is generated by $r$ and $-1$, this implies there exist integers $i$ and $j$ with $N \equiv (-1)^j r^i \pmod{B}$. Then $w_m(AN) = w_m((-1)^j r^i A) = w_m(A)$, since negating a word or multiplying it by $r$ (a cyclic shift) does not change its modular weight. This proves that the code is equidistant, since all nonzero codewords have the same weight as $A$. Since all codewords have the same weight, and by the previous theorem we know the total weight of all codewords, the distance of the code is found by dividing the total weight by the number of codewords (excluding $0$).
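For the $B = 5$, $n = 4$ code constructed above, both the equidistance and the divide-by-$(B - 1)$ computation can be confirmed numerically:

```python
from itertools import combinations

# every pair of distinct codewords lies at the same modular distance,
# and that distance equals the total weight divided by B - 1
pair_distances = {modular_weight(c1 - c2, m) for c1, c2 in combinations(code, 2)}
total_weight = sum(modular_weight(c, m) for c in code)
assert pair_distances == {total_weight // (B - 1)} == {2}
```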

References

  1. Peterson, W. W. and Weldon, E. J.: Error-Correcting Codes. Cambridge, Mass.: MIT Press, 1972.
  2. Massey, J. L. and Garcia, O. N.: Error-correcting codes in computer arithmetic. In: Advances in Information Systems Science, Vol. 4, Ch. 5 (edited by J. T. Tou). New York: Plenum Press, 1972.
  3. van Lint, J. H. (1982). Introduction to Coding Theory. GTM 86. New York: Springer-Verlag.
  4. Clark, W. E. and Liang, J. J.: On modular weight and cyclic nonadjacent forms for arithmetic codes. IEEE Trans. Inform. Theory, 20, pp. 767-770 (1974).