In modular arithmetic, Barrett reduction is a reduction algorithm introduced in 1986 by P.D. Barrett. [1]
A naive way of computing

$c = a \bmod n$

would be to use a fast division algorithm. Barrett reduction is an algorithm designed to optimize this operation assuming $n$ is constant and $a < n^2$, replacing divisions by multiplications.
Historically, for values $a, b < n$, one computed $ab \bmod n$ by applying Barrett reduction to the full product $ab$. Recently, it was shown that the full product is unnecessary if we can perform precomputation on one of the operands. [2] [3]
We call a function $[\,] : \mathbb{R} \to \mathbb{Z}$ an integer approximation if $|[z] - z| \le 1$ for all $z \in \mathbb{R}$. For a modulus $n$ and an integer approximation $[\,]$, we define $a \bmod^{[\,]} n$ as

$a \bmod^{[\,]} n = a - \left[ \frac{a}{n} \right] n.$
Common choices of $[\,]$ are the floor, ceiling, and rounding functions.
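As a small worked illustration (our example, not from the source), take $n = 7$ and $a = 100$. Different integer approximations give different, but congruent, representatives:

$100 \bmod^{\lfloor\,\rfloor} 7 = 100 - \left\lfloor \tfrac{100}{7} \right\rfloor \cdot 7 = 100 - 14 \cdot 7 = 2, \qquad 100 \bmod^{\lceil\,\rceil} 7 = 100 - \left\lceil \tfrac{100}{7} \right\rceil \cdot 7 = 100 - 15 \cdot 7 = -5 \equiv 2 \pmod 7.$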
Generally, Barrett multiplication starts by specifying two integer approximations $[\,]_0, [\,]_1$ and computes a reasonably close approximation of $ab \bmod n$ as

$ab - \left[ \frac{a \left[ \frac{bR}{n} \right]_0}{R} \right]_1 n,$

where $R$ is a fixed constant, typically a power of 2, chosen so that multiplication and division by $R$ can be performed efficiently.
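For instance (a worked example of ours, taking both approximations to be the floor function), let $n = 101$, $R = 2^{16}$, $b = 77$, and $a = 13$. Then

$\left\lfloor \frac{bR}{n} \right\rfloor = \left\lfloor \frac{77 \cdot 65536}{101} \right\rfloor = 49963, \qquad ab - \left\lfloor \frac{13 \cdot 49963}{65536} \right\rfloor n = 1001 - 9 \cdot 101 = 92,$

which is exactly $ab \bmod n$.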
The case $b = 1$ was introduced by P.D. Barrett [1] for the floor-function case $[\,]_0 = [\,]_1 = \lfloor\,\rfloor$. The general case for $b$ can be found in NTL. [2] The integer approximation view and the correspondence between Montgomery multiplication and Barrett multiplication were discovered by Hanno Becker, Vincent Hwang, Matthias J. Kannwischer, Bo-Yin Yang, and Shang-Yi Yang. [3]
Barrett initially considered an integer version of the above algorithm when the values fit into machine words. We illustrate the idea for the floor-function case with $b = 1$ and $R = 2^k$.
When calculating $a \bmod n$ for unsigned integers, the obvious analog would be to use division by $n$:
func reduce(a uint) uint {
	q := a / n // Division implicitly returns the floor of the result.
	return a - q*n
}
However, division can be expensive and, in cryptographic settings, might not be a constant-time instruction on some CPUs, subjecting the operation to a timing attack. Thus Barrett reduction approximates $1/n$ with a value $m/2^k$, because division by $2^k$ is just a right-shift and so is cheap.
In order to calculate the best value for $m$ given $2^k$, consider:

$\frac{m}{2^k} = \frac{1}{n} \quad \Longleftrightarrow \quad m = \frac{2^k}{n}.$
For $m$ to be an integer, we need to round $2^k/n$ somehow. Rounding to the nearest integer will give the best approximation but can result in $m/2^k$ being larger than $1/n$, which can cause underflows. Thus $m = \left\lfloor \frac{2^k}{n} \right\rfloor$ is used for unsigned arithmetic.
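As a minimal sketch of this precomputation (ours, not from the source; the helper name precompute is hypothetical), assuming $n$ and the shift amount $k$ both fit the machine word:

func precompute(n, k uint) uint {
	// m = floor(2^k / n); k must be small enough that 1<<k does not
	// overflow, and large enough that a*m retains sufficient precision.
	return (uint(1) << k) / n
}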
Thus we can approximate the function above with the following:
func reduce(a uint) uint {
	q := (a * m) >> k // ">> k" denotes bitshift by k.
	return a - q*n
}
However, since $m/2^k \le 1/n$, the value of q in that function can end up being one too small, and thus a is only guaranteed to be within $[0, 2n)$ rather than $[0, n)$, as is generally required. A conditional subtraction will correct this:
func reduce(a uint) uint {
	q := (a * m) >> k
	a -= q * n
	if a >= n {
		a -= n
	}
	return a
}
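Putting the pieces together, here is a self-contained sketch (our example with concrete values, not from the source) using $n = 101$ and $k = 32$; it assumes a 64-bit uint, so any a below $2^{32}$ keeps a*m from overflowing:

package main

import "fmt"

const (
	n = 101
	k = 32
	m = (1 << k) / n // floor(2^k / n), precomputed
)

// reduce computes a mod n by Barrett reduction; valid for a < 2^k.
func reduce(a uint) uint {
	q := (a * m) >> k
	a -= q * n
	if a >= n {
		a -= n
	}
	return a
}

func main() {
	for _, a := range []uint{0, 100, 101, 1001, 123456789} {
		fmt.Println(a, reduce(a), a%n) // reduce(a) agrees with a % n
	}
}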
Suppose $b$ is known. This allows us to precompute $\left\lfloor \frac{bR}{n} \right\rfloor$ before we receive $a$. Barrett multiplication computes $ab$, approximates the high part of $ab$ with $\left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rfloor}{R} \right\rfloor n$, and subtracts the approximation. Since $\left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rfloor}{R} \right\rfloor n$ is a multiple of $n$, the resulting value is a representative of $ab \bmod n$.
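Extending the sketch above (again ours; the function name barrettMul and the parameter t are hypothetical), a fixed operand b can be folded into the precomputed value t = ⌊bR/n⌋ with R = 2^k, reusing n and k from the previous example:

// t = (b << k) / n is precomputed once for the fixed operand b.
// Expects a, b < n (so nothing overflows a 64-bit uint); the raw
// result lands in [0, 2n), and one conditional subtraction fixes it.
func barrettMul(a, b, t uint) uint {
	q := (a * t) >> k // approximates floor(a*b/n)
	c := a*b - q*n
	if c >= n {
		c -= n
	}
	return c
}

When b is reused across many multiplications, as in number-theoretic transforms, the one-time cost of computing t is amortized away.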
Recall that unsigned Montgomery multiplication of $a$ by $b' = bR \bmod n$ computes a representative of $ab \bmod n$ as

$\frac{ab' + n \left( \left( -a b' n^{-1} \right) \bmod R \right)}{R}.$

In fact, this value is equal to $ab - \left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rfloor}{R} \right\rfloor n$.
We prove the claim as follows. Writing $b' = bR - n \left\lfloor \frac{bR}{n} \right\rfloor$, we have $ab' = abR - an \left\lfloor \frac{bR}{n} \right\rfloor$, and since $abR n^{-1} \equiv 0 \pmod R$,

$\left( -ab'n^{-1} \right) \bmod R = \left( a \left\lfloor \frac{bR}{n} \right\rfloor \right) \bmod R = a \left\lfloor \frac{bR}{n} \right\rfloor - R \left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rfloor}{R} \right\rfloor.$

Substituting this into the Montgomery expression gives

$\frac{ab' + n \left( \left( -ab'n^{-1} \right) \bmod R \right)}{R} = \frac{abR - nR \left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rfloor}{R} \right\rfloor}{R} = ab - \left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rfloor}{R} \right\rfloor n.$
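The identity can also be checked numerically; below is a small sketch of ours (parameters chosen arbitrarily, with the inverse of $n$ modulo $R$ found by brute force for clarity):

package main

import "fmt"

func main() {
	const (
		n = 101
		R = 1 << 16
		a = 13
		b = 77
	)
	// Find nInv with n*nInv ≡ 1 (mod R); it exists because n is odd.
	nInv := 0
	for i := 1; i < R; i += 2 {
		if n*i%R == 1 {
			nInv = i
			break
		}
	}
	bp := b * R % n              // b' = bR mod n
	t := (R - (a*bp*nInv)%R) % R // (-a b' n^{-1}) mod R
	montgomery := (a*bp + n*t) / R
	barrett := a*b - (a*(b*R/n)/R)*n
	fmt.Println(montgomery, barrett, a*b%n) // montgomery == barrett == 92
}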
Generally, for integer approximations $[\,]_0, [\,]_1$, we have

$ab - \left[ \frac{a \left[ \frac{bR}{n} \right]_0}{R} \right]_1 n = -\frac{a e_0 n}{R} - e_1 n, \qquad e_0 = \left[ \frac{bR}{n} \right]_0 - \frac{bR}{n}, \quad e_1 = \left[ \frac{a \left[ \frac{bR}{n} \right]_0}{R} \right]_1 - \frac{a \left[ \frac{bR}{n} \right]_0}{R}.$

Since $|e_0|, |e_1| \le 1$, we bound the output with

$\left| ab - \left[ \frac{a \left[ \frac{bR}{n} \right]_0}{R} \right]_1 n \right| \le \left( \frac{|a|}{R} + 1 \right) n.$

Similar bounds hold for other kinds of integer approximation functions. For example, if we choose $[\,]_0 = [\,]_1 = \lfloor\,\rceil$, the rounding-half-up function, then $|e_0|, |e_1| \le \tfrac{1}{2}$ and we have

$\left| ab - \left\lfloor \frac{a \left\lfloor \frac{bR}{n} \right\rceil}{R} \right\rceil n \right| \le \frac{n}{2} \left( \frac{|a|}{R} + 1 \right).$
Barrett's primary motivation for considering reduction was the implementation of RSA, where the values in question will almost certainly exceed the size of a machine word. In this situation, Barrett provided an algorithm that approximates the single-word version above but for multi-word values. For details see section 14.3.3 of the Handbook of Applied Cryptography. [4]
It is also possible to use the Barrett algorithm for polynomial division, by reversing polynomials and using X-adic arithmetic. [5]
Related articles: exponentiation by squaring; quadratic reciprocity; floor and ceiling functions; bitwise operation; AKS primality test; formula for primes; Doomsday rule; Montgomery modular multiplication; integer square root; Zeller's congruence; modulo operation; Schönhage–Strassen algorithm; proofs of quadratic reciprocity; division algorithm; universal hashing; Beatty sequence; multiply-with-carry; Fermat quotient; elliptic curve primality proving; quadratic Frobenius test.