AN codes are error-correcting codes that are used in arithmetic applications. [1] Arithmetic codes were commonly used in computer processors to ensure the accuracy of their arithmetic operations when electronics were more unreliable. Arithmetic codes help the processor to detect when an error is made and correct it. Without these codes, processors would be unreliable, since any errors would go undetected. AN codes are arithmetic codes that are named for the integers A and N that are used to encode and decode the codewords.
These codes differ from most other codes in that they use arithmetic weight to maximize the arithmetic distance between codewords, as opposed to the Hamming weight and Hamming distance. The arithmetic distance between two words is a measure of the number of errors made while computing an arithmetic operation. Using the arithmetic distance is necessary since a single error in an arithmetic operation can cause a large Hamming distance between the received answer and the correct answer.
The arithmetic weight of an integer N in base r is defined by

w(N) = min { t : N = a_1 r^{k_1} + a_2 r^{k_2} + … + a_t r^{k_t} }
where |a_i| < r, k_i ≥ 0, and a_i, k_i ∈ Z. [2] The arithmetic weight of an integer is upper bounded by its Hamming weight, since any integer can be represented by its standard polynomial form N = Σ_{i=0}^{n} a_i r^i, where the a_i ∈ {0, 1, …, r − 1} are the digits of the integer. Removing all the terms where a_i = 0 gives a representation with t equal to its Hamming weight. The arithmetic weight will usually be less than the Hamming weight, since the a_i are allowed to be negative. For example, the integer N = 14, which is 1110 in binary, has a Hamming weight of 3. This is a quick upper bound on the arithmetic weight, since 14 = 2^3 + 2^2 + 2^1. However, since the a_i can be negative, we can write 14 = 2^4 − 2^1, which makes the arithmetic weight equal to 2.
The arithmetic distance between two integers N and M is defined by

d(N, M) = w(N − M)
This is one of the primary metrics used when analyzing arithmetic codes. [3] [4]
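These definitions can be made concrete in the binary case (r = 2) with a short sketch. It relies on the known fact that the non-adjacent form (NAF) of an integer achieves the minimal number of nonzero signed digits, so counting them gives the arithmetic weight; the function names are illustrative, not from any standard library.

```python
def naf(n):
    """Signed digits of n in non-adjacent form (base 2), least significant first.

    The NAF uses digits {-1, 0, 1} with no two adjacent nonzero digits; it is
    known to minimize the number of nonzero digits, so counting them gives
    the arithmetic weight.
    """
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)  # 1 if n ≡ 1 (mod 4), else -1
            digits.append(d)
            n -= d
        else:
            digits.append(0)
        n //= 2
    return digits

def arithmetic_weight(n):
    return sum(d != 0 for d in naf(abs(n)))

def arithmetic_distance(n, m):
    return arithmetic_weight(n - m)

# 14 = 2^4 - 2^1 has arithmetic weight 2, while its
# Hamming weight (1110 in binary) is 3.
print(arithmetic_weight(14))   # 2
print(bin(14).count("1"))      # 3
```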
AN codes are defined by integers A and B and are used to encode integers N from 0 to B − 1, such that

C = { AN : 0 ≤ N < B }
Each choice of A will result in a different code, while B serves as a limiting factor to ensure useful properties in the distance of the code. If B is too large, it could let a codeword with a very small arithmetic weight into the code, which would degrade the distance of the entire code. To utilize these codes, before an arithmetic operation is performed on two integers, each integer is multiplied by A. Let the result of the operation on the codewords AX and AY be R. Note that R must also be between 0 and AB for proper decoding. To decode, simply divide R by A. If A is not a factor of R, then at least one error has occurred, and the most likely solution will be the codeword with the least arithmetic distance from R. As with codes using Hamming distance, AN codes can correct up to ⌊(d − 1)/2⌋ errors, where d is the distance of the code.
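A minimal sketch of this encode/check/decode cycle follows; the parameters A = 3 and B = 16 are illustrative choices, not fixed by the construction.

```python
A, B = 3, 16  # illustrative parameters: codewords are 3N for 0 <= N < 16

def encode(n):
    """Map an integer to its codeword by multiplying by A."""
    assert 0 <= n < B
    return A * n

def decode(r):
    """Divide out A; a nonzero remainder means at least one error occurred."""
    q, rem = divmod(r, A)
    if rem:
        raise ValueError(f"error detected: {r} is not a multiple of {A}")
    return q

# error-free arithmetic on codewords
result = encode(4) + encode(5)     # 12 + 15 = 27
print(decode(result))              # 9

# a single bit flip in the result is caught, since 3 never divides 2^k
try:
    decode(result ^ 1)             # 26 is not a multiple of 3
except ValueError as e:
    print(e)
```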
For example, for an AN code with A = 3, the operation of adding 4 and 5 will start by encoding both operands. This results in the operation 12 + 15 = 27. Then, to find the solution, we divide 27/3 = 9. As long as 9 < B, this will be a valid operation under the code. Suppose an error occurs in the binary representation of each operand, such that 12 = 1100₂ → 1110₂ = 14 and 15 = 1111₂ → 0111₂ = 7; then the received result is 14 + 7 = 21. Notice that since 27 = 11011₂ and 21 = 10101₂, the Hamming distance between the received word and the correct solution is 3 after just 2 errors. To compute the arithmetic distance, we take 27 − 21 = 6, which can be represented as 2^2 + 2^1 or as 2^3 − 2^1. In either case, the arithmetic distance is 2, as expected, since this is the number of errors that were made. To correct this error, an algorithm would be used to compute the nearest codeword to the received word in terms of arithmetic distance. We will not describe the algorithms in detail.
To ensure that the distance of the code will not be too small, we define modular AN codes. A modular AN code C is a subgroup of Z_m, where m is a positive integer. The codes are measured in terms of the modular distance, which is defined in terms of a graph whose vertices are the elements of Z_m. Two vertices x and y are connected iff
x ≡ y ± a r^k (mod m),

where a, k ∈ Z, 0 < a < r, and k ≥ 0. Then the modular distance between two words is the length of the shortest path between their nodes in the graph. The modular weight of a word x is its distance from 0, which is equal to

w_m(x) = min { w(N) : N ∈ Z, N ≡ x (mod m) }
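For small word sizes the modular weight can be computed by brute force. The sketch below (binary case, m = 2^n − 1, with an illustrative function name) searches for the fewest signed powers of two whose sum is congruent to x:

```python
from itertools import product

def modular_weight(x, n):
    """Fewest signed powers of two summing to x modulo m = 2^n - 1 (brute force)."""
    m = 2 ** n - 1
    x %= m
    if x == 0:
        return 0
    for t in range(1, n + 1):  # try candidate weights in increasing order
        for ks in product(range(n), repeat=t):
            for signs in product((1, -1), repeat=t):
                if sum(s * (1 << k) for s, k in zip(signs, ks)) % m == x:
                    return t
    return n  # unreachable: every residue is a sum of at most n signed powers

# In Z_15 (n = 4), 9 = 2^3 + 2^0 has modular weight 2
print(modular_weight(9, 4))   # 2
```

This exhaustive search is exponential in the weight, so it is only suitable for illustrating the definition on small parameters.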
In practice, the value of m is typically chosen such that m = 2^n ± 1, since most computer arithmetic is performed modulo 2^n; there is then no additional loss of data due to the code going out of bounds, since the computer's arithmetic wraps at the same scale. Choosing m = 2^n − 1 also tends to result in codes with larger distances than other choices of m.
By using the modular weight with m = r^n − 1, the AN codes become cyclic codes.
definition: A cyclic AN code is a code C that is a subgroup of Z_{r^n − 1}, where C = { AN : 0 ≤ N < B } and AB = r^n − 1.
A cyclic AN code is a principal ideal of the ring Z_{r^n − 1}. There are integers A and B with AB = r^n − 1, where A and B satisfy the definition of an AN code. Cyclic AN codes are a subset of cyclic codes and have the same properties.
The Mandelbaum–Barrows codes are a type of cyclic AN code introduced by D. Mandelbaum and J. T. Barrows. [5] [6] These codes are created by choosing B to be a prime number that does not divide r, such that Z_B is generated by r and −1. Let n be a positive integer such that B divides r^n − 1, and let A = (r^n − 1)/B. For example, choosing r = 2, B = 5, and n = 4 results in a Mandelbaum–Barrows code with A = 3, whose codewords are { 3N : 0 ≤ N < 5 } ⊂ Z_15 in base 2.
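A small construction can be checked numerically. The sketch below takes r = 2, B = 5, n = 4 (so A = 3; these concrete values are one small admissible choice, since 5 is prime and Z_5 is generated by 2 and −1), builds the code in Z_15, and confirms by brute force that every nonzero codeword has the same modular weight:

```python
from itertools import product

def modular_weight(x, n):
    """Fewest signed powers of two summing to x modulo 2^n - 1 (brute force)."""
    m = 2 ** n - 1
    x %= m
    if x == 0:
        return 0
    for t in range(1, n + 1):
        for ks in product(range(n), repeat=t):
            for signs in product((1, -1), repeat=t):
                if sum(s * (1 << k) for s, k in zip(signs, ks)) % m == x:
                    return t

r, B, n = 2, 5, 4            # B = 5 is prime, Z_5 is generated by 2 and -1
m = r ** n - 1               # 15
A = m // B                   # 3
code = [A * N % m for N in range(B)]            # [0, 3, 6, 9, 12]
weights = [modular_weight(c, n) for c in code if c]
print(weights)               # [2, 2, 2, 2] -- every nonzero codeword has weight 2
```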
To analyze the distance of the Mandelbaum–Barrows codes, we will need the following theorem.
theorem: Let C be a binary cyclic AN code with generator A, where AB = 2^n − 1. Then

Σ_{N=0}^{B−1} w_m(AN) = n ⌊(B + 1)/3⌋
proof: Assume that each AN ∈ C has a unique cyclic NAF [7] representation, which is

AN = Σ_{i=0}^{n−1} b_i 2^i, with b_i ∈ {−1, 0, 1} and no two cyclically adjacent digits both nonzero.
We define an n × B matrix with elements b_{i,j}, where 0 ≤ i ≤ n − 1 and 0 ≤ j ≤ B − 1. This matrix is essentially a list of all the codewords in C, where column j holds the digits of the codeword Aj. Since C is cyclic, multiplying a codeword by 2 modulo 2^n − 1 cyclically shifts its digits, so each row of the matrix has the same number of zeros. The sum of the weights of all codewords is therefore n times the number of codewords that don't end with a 0. As a property of being in cyclic NAF, a nonzero digit b_0 forces b_1 = 0 (and, cyclically, b_{n−1} = 0), and counting the digit strings that satisfy this constraint shows that the number of codewords that have a zero as their last digit is B − ⌊(B + 1)/3⌋. Multiplying the remaining ⌊(B + 1)/3⌋ codewords by the n digit positions gives us the sum of the weights of the codewords of C, as desired.
We will now use the previous theorem to show that the Mandelbaum–Barrows codes are equidistant (which means that every pair of codewords has the same distance), with a distance of

d = n ⌊(B + 1)/3⌋ / (B − 1)
proof: Let N ∈ Z_B with N ≠ 0; then N is not divisible by B. Since Z_B is generated by r and −1, this implies there exist integers k and s with N ≡ (−1)^s r^k (mod B). Then AN ≡ (−1)^s r^k A (mod r^n − 1). Multiplying a codeword by r cyclically shifts its digits, and negating it flips the signs of its digits; neither operation changes the modular weight, so w_m(AN) = w_m(A). This proves that C is equidistant, since all codewords have the same weight as A. Since all codewords have the same weight, and by the previous theorem we know the total weight of all codewords, the distance of the code is found by dividing the total weight by the number of codewords (excluding 0).
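Both results can be verified by brute force on a small instance, taking r = 2, B = 5, n = 4, A = 3 as an illustrative choice: the total modular weight of all codewords should be n⌊(B + 1)/3⌋ = 8, and dividing by the B − 1 = 4 nonzero codewords should give the common distance 2.

```python
from itertools import product

def modular_weight(x, n):
    """Fewest signed powers of two summing to x modulo 2^n - 1 (brute force)."""
    m = 2 ** n - 1
    x %= m
    if x == 0:
        return 0
    for t in range(1, n + 1):
        for ks in product(range(n), repeat=t):
            for signs in product((1, -1), repeat=t):
                if sum(s * (1 << k) for s, k in zip(signs, ks)) % m == x:
                    return t

B, n = 5, 4
m = 2 ** n - 1
A = m // B
total = sum(modular_weight(A * N % m, n) for N in range(B))
print(total)                        # 8, equal to n * floor((B + 1) / 3)
print(total // (B - 1))             # 2, the distance of the code
```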