An addition-subtraction chain, a generalization of addition chains to include subtraction, is a sequence a_0, a_1, a_2, a_3, ... that satisfies a_0 = 1 and, for each k > 0, a_k = a_i ± a_j for some 0 ≤ i, j < k.
An addition-subtraction chain for n, of length L, is an addition-subtraction chain such that a_L = n. That is, one can thereby compute n by L additions and/or subtractions. (Note that n need not be positive. In this case, one may also include a_{-1} = 0 in the sequence, so that n = −1 can be obtained by a chain of length 1.)
By definition, every addition chain is also an addition-subtraction chain, but not vice versa. Therefore, the length of the shortest addition-subtraction chain for n is bounded above by the length of the shortest addition chain for n. In general, however, the determination of a minimal addition-subtraction chain (like the problem of determining a minimum addition chain) is a difficult problem for which no efficient algorithms are currently known. The related problem of finding an optimal addition sequence is NP-complete (Downey et al., 1981), but it is not known for certain whether finding optimal addition or addition-subtraction chains is NP-hard.
For example, one addition-subtraction chain is: a_0 = 1, a_1 = 2 = a_0 + a_0, a_2 = 4 = a_1 + a_1, a_3 = 3 = a_2 − a_0. This is not a minimal addition-subtraction chain for n = 3, however, because we could instead have chosen a_2 = 3 = a_1 + a_0. The smallest n for which an addition-subtraction chain is shorter than the minimal addition chain is n = 31, which can be computed in only 6 additions (rather than 7 for the minimal addition chain): a_0 = 1, a_1 = 2, a_2 = 4, a_3 = 8, a_4 = 16, a_5 = 32, a_6 = 31 = a_5 − a_0.
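To make the definition concrete, here is a small Python sketch (illustrative only; the function name is ours, not from the original text) that checks whether a sequence is a valid addition-subtraction chain, applied to the length-6 chain for 31 and to a length-7 addition chain for comparison:

    def is_addition_subtraction_chain(chain):
        # A valid chain starts at 1, and every later element is the sum or
        # difference of two (not necessarily distinct) earlier elements.
        if not chain or chain[0] != 1:
            return False
        for k in range(1, len(chain)):
            earlier = chain[:k]
            if not any(a + b == chain[k] or a - b == chain[k]
                       for a in earlier for b in earlier):
                return False
        return True

    # Length-6 addition-subtraction chain for 31 (uses the step 32 - 1):
    print(is_addition_subtraction_chain([1, 2, 4, 8, 16, 32, 31]))      # True
    # A length-7 addition chain for 31 is, of course, also valid:
    print(is_addition_subtraction_chain([1, 2, 3, 5, 10, 20, 30, 31]))  # True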
Like an addition chain, an addition-subtraction chain can be used for addition-chain exponentiation: given the addition-subtraction chain of length L for n, the power x^n can be computed by multiplying or dividing by x L times, where the subtractions correspond to divisions. This is potentially efficient in problems where division is an inexpensive operation, most notably for exponentiation on elliptic curves where division corresponds to a mere sign change (as proposed by Morain and Olivos, 1990).
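For instance (an illustrative Python sketch, not part of the original text), x^31 can be evaluated along the chain 1, 2, 4, 8, 16, 32, 31 with five squarings and one division, the division standing in for the subtraction step:

    from fractions import Fraction

    def power_31(x):
        # Follow the addition-subtraction chain 1, 2, 4, 8, 16, 32, 31.
        y = x                # x^1
        for _ in range(5):
            y = y * y        # squarings give x^2, x^4, x^8, x^16, x^32
        return y / x         # the step 31 = 32 - 1 becomes a division

    x = Fraction(3, 2)
    assert power_31(x) == x ** 31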
Some hardware multipliers multiply by n using an addition chain described by n in binary:
n = 31 = 0 0 0 1 1 1 1 1 (binary).
Other hardware multipliers multiply by n using an addition-subtraction chain described by n in Booth encoding:
n = 31 = 0 0 1 0 0 0 0 −1 (Booth encoding).
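The difference between the two encodings can be seen in a short Python sketch (added here for illustration; the helper names are ours): the binary form of 31 needs four additions after the shifts, while the Booth form needs a single subtraction.

    def times_31_binary(x):
        # 31 = 0b11111: five shifted copies of x, four additions
        return (x << 4) + (x << 3) + (x << 2) + (x << 1) + x

    def times_31_booth(x):
        # 31 = 32 - 1: one shift and one subtraction
        return (x << 5) - x

    for x in (1, 7, 123):
        assert times_31_binary(x) == times_31_booth(x) == 31 * x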
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add.
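A minimal square-and-multiply sketch in Python (illustrative, scanning the exponent's bits from least significant to most significant) looks like this:

    def power(base, exponent):
        # Right-to-left binary exponentiation: O(log exponent) multiplications.
        result = 1
        while exponent > 0:
            if exponent & 1:      # current bit set: fold the base into the result
                result *= base
            base *= base          # square the base for the next bit
            exponent >>= 1
        return result

    assert power(3, 13) == 3 ** 13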
A hash function is any function that can be used to map data of arbitrary size to fixed-size values. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. The values are usually used to index a fixed-size table called a hash table. Use of a hash function to index a hash table is called hashing or scatter storage addressing.
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.
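A familiar example is the 12-hour clock: 8 hours after 7:00 is 3:00, because 7 + 8 = 15 and 15 ≡ 3 (mod 12).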
Multiplication is one of the four elementary mathematical operations of arithmetic, with the other ones being addition, subtraction, and division. The result of a multiplication operation is called a product.
In computational complexity theory, the class NC is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem is in NC if there exist constants c and k such that it can be solved in time O(log^c n) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size.
A linear congruential generator (LCG) is an algorithm that yields a sequence of pseudo-randomized numbers calculated with a discontinuous piecewise linear equation. The method represents one of the oldest and best-known pseudorandom number generator algorithms. The theory behind them is relatively easy to understand, and they are easily implemented and fast, especially on computer hardware which can provide modular arithmetic by storage-bit truncation.
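The update rule is X_{n+1} = (a·X_n + c) mod m. A minimal Python sketch (the default parameters below are the well-known MINSTD choices, used here purely as an example):

    def lcg(seed, a=16807, c=0, m=2**31 - 1):
        # Yield pseudo-random integers via X_{n+1} = (a*X_n + c) mod m.
        state = seed
        while True:
            state = (a * state + c) % m
            yield state

    gen = lcg(seed=42)
    print([next(gen) for _ in range(5)])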
A multiplication algorithm is an algorithm to multiply two numbers. Depending on the size of the numbers, different algorithms are used. Efficient multiplication algorithms have existed since the advent of the decimal system.
Addition is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or sum of those values combined. For example, combining three apples and two apples gives a total of five apples; this observation is equivalent to the mathematical expression "3 + 2 = 5".
Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent or power n, and pronounced as "b raised to the power of n". When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, b^n is the product of multiplying n bases: b^n = b × b × ⋯ × b (n factors of b).
Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
In mathematics, an addition chain for computing a positive integer n can be given by a sequence of natural numbers starting with 1 and ending with n, such that each number in the sequence is the sum of two previous numbers. The length of an addition chain is the number of sums needed to express all its numbers, which is one less than the cardinality of the sequence of numbers.
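For example, (1, 2, 3, 6, 12, 15) is an addition chain for 15 of length 5: 2 = 1 + 1, 3 = 2 + 1, 6 = 3 + 3, 12 = 6 + 6, and 15 = 12 + 3.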
In mathematics and computer science, optimal addition-chain exponentiation is a method of exponentiation by positive integer powers that requires a minimal number of multiplications. This corresponds to the sequence A003313 on the Online Encyclopedia of Integer Sequences. It works by creating the shortest addition chain that generates the desired exponent. Each exponentiation in the chain can be evaluated by multiplying two of the earlier exponentiation results. More generally, addition-chain exponentiation may also refer to exponentiation by non-minimal addition chains constructed by a variety of algorithms.
Modular exponentiation is exponentiation performed over a modulus. It is useful in computer science, especially in the field of public-key cryptography, where it is used in both the Diffie–Hellman key exchange and RSA public/private keys.
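A short Python sketch (illustrative; Python's built-in pow(base, exp, mod) provides the same functionality) computes b^e mod m by square-and-multiply, reducing modulo m after every step so that intermediate values stay small:

    def mod_pow(base, exponent, modulus):
        # Square-and-multiply with reduction modulo `modulus` at each step.
        result = 1
        base %= modulus
        while exponent > 0:
            if exponent & 1:
                result = (result * base) % modulus
            base = (base * base) % modulus
            exponent >>= 1
        return result

    assert mod_pow(5, 117, 19) == pow(5, 117, 19)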
In modular arithmetic computation, Montgomery modular multiplication, more commonly referred to as Montgomery multiplication, is a method for performing fast modular multiplication. It was introduced in 1985 by the American mathematician Peter L. Montgomery.
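As a rough sketch of the idea (simplified, and relying on Python's arbitrary-precision integers rather than the word-level arithmetic used in practice), one Montgomery multiplication replaces division by the modulus N with an exact division by a power of two R:

    def montgomery_multiply(a, b, N, R):
        # a and b are assumed to be in Montgomery form (x * R mod N);
        # the result is (a * b * R^-1) mod N, the Montgomery form of the product.
        N_prime = (-pow(N, -1, R)) % R      # -N^{-1} mod R (Python 3.8+), precomputable
        T = a * b
        m = ((T % R) * N_prime) % R
        t = (T + m * N) // R                # exact: T + m*N is divisible by R
        return t - N if t >= N else t

    # Example: multiply 7 and 15 modulo 97 with R = 128.
    N, R = 97, 128
    to_mont = lambda x: (x * R) % N
    from_mont = lambda x: montgomery_multiply(x, 1, N, R)  # strips one factor of R
    product = montgomery_multiply(to_mont(7), to_mont(15), N, R)
    assert from_mont(product) == (7 * 15) % N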
Matrix chain multiplication is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.
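A memoized-recursion sketch in Python (one of several equivalent dynamic-programming formulations, given here only as an illustration) splits the chain at the position of the last multiplication:

    from functools import lru_cache

    def matrix_chain_cost(dims):
        # dims[i], dims[i+1] are the dimensions of the i-th matrix,
        # so a chain of n matrices is described by n + 1 numbers.
        n = len(dims) - 1

        @lru_cache(maxsize=None)
        def best(i, j):          # cheapest way to multiply matrices i..j inclusive
            if i == j:
                return 0
            return min(best(i, k) + best(k + 1, j)
                       + dims[i] * dims[k + 1] * dims[j + 1]
                       for k in range(i, j))

        return best(0, n - 1)

    # Matrices of shapes 10x30, 30x5 and 5x60: the optimal order ((AB)C)
    # costs 10*30*5 + 10*5*60 = 4500 scalar multiplications.
    assert matrix_chain_cost([10, 30, 5, 60]) == 4500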
In the mathematical field of set theory, ordinal arithmetic describes the three usual operations on ordinal numbers: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the result of the operation or by using transfinite recursion. Cantor normal form provides a standardized way of writing ordinals. In addition to these usual ordinal operations, there are also the "natural" arithmetic of ordinals and the nimber operations.
A division algorithm is an algorithm which, given two integers N and D, computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software.
In mathematics and computer science, a primality certificate or primality proof is a succinct, formal proof that a number is prime. Primality certificates allow the primality of a number to be rapidly checked without having to run an expensive or unreliable primality test. "Succinct" usually means that the proof should be at most polynomially larger than the number of digits in the number itself.
The split-radix FFT is a fast Fourier transform (FFT) algorithm for computing the discrete Fourier transform (DFT), and was first described in an initially little-appreciated paper by R. Yavne (1968) and subsequently rediscovered simultaneously by various authors in 1984. In particular, split radix is a variant of the Cooley–Tukey FFT algorithm that uses a blend of radices 2 and 4: it recursively expresses a DFT of length N in terms of one smaller DFT of length N/2 and two smaller DFTs of length N/4.
In mathematics, for positive integers k and s, a vectorial addition chain is a sequence V of k-dimensional vectors of nonnegative integers v_i for −k + 1 ≤ i ≤ s, together with a sequence w, such that the first k vectors v_{−k+1}, ..., v_0 are the standard unit vectors [1, 0, ..., 0], ..., [0, ..., 0, 1], each later vector is the sum of two earlier ones, v_i = v_j + v_r with −k + 1 ≤ j, r ≤ i − 1 for 1 ≤ i ≤ s, and w = (w_1, ..., w_s) records the chosen pairs, w_i = (j, r).