In mathematics and computer science, optimal addition-chain exponentiation is a method of exponentiation by a positive integer power that requires a minimal number of multiplications. Following the shortest addition chain for the exponent, with multiplication in place of addition, computes the desired power (rather than multiple) of the base. (This corresponds to OEIS sequence A003313, the length of the shortest addition chain for n.) Each power in the chain can be evaluated by multiplying two of the earlier results. More generally, addition-chain exponentiation may also refer to exponentiation by non-minimal addition chains constructed by a variety of algorithms (since a shortest addition chain is very difficult to find).
The shortest addition-chain algorithm requires no more multiplications than binary exponentiation and usually fewer. The first exponent for which it does better is a^15, where the binary method needs six multiplications but a shortest addition chain requires only five:
| Number of multiplications | Exponentiation | Addition-chain implementation (→ names an intermediate result) |
|---|---|---|
| 0 | a^1 | a |
| 1 | a^2 | a × a |
| 2 | a^3 | a × a × a |
| 2 | a^4 | (a × a→b) × b |
| 3 | a^5 | (a × a→b) × b × a |
| 3 | a^6 | (a × a→b) × b × b |
| 4 | a^7 | (a × a→b) × b × b × a |
| 3 | a^8 | ((a × a→b) × b→d) × d |
| 4 | a^9 | (a × a × a→c) × c × c |
| 4 | a^10 | ((a × a→b) × b→d) × d × b |
| 5 | a^11 | ((a × a→b) × b→d) × d × b × a |
| 4 | a^12 | ((a × a→b) × b→d) × d × d |
| 5 | a^13 | ((a × a→b) × b→d) × d × d × a |
| 5 | a^14 | ((a × a→b) × b→d) × d × d × b |
| 5 | a^15 | ((a × a→b) × b × a→e) × e × e |
| 4 | a^16 | (((a × a→b) × b→d) × d→h) × h |
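For illustration, the table's chain for a^15 (exponents 1, 2, 4, 5, 10, 15) can be evaluated programmatically. The following is a minimal sketch, not part of the original article; the function name `pow_by_chain` is illustrative only.

```python
def pow_by_chain(a, chain):
    """Evaluate a**chain[-1] using an explicit addition chain.

    `chain` must start with 1, and every later entry must be the sum of
    two earlier entries; each such step costs exactly one multiplication.
    """
    powers = {1: a}          # exponent -> computed power of a
    mults = 0
    for n in chain[1:]:
        # Find two earlier exponents i and n - i already in the table.
        i = next(i for i in powers if n - i in powers)
        powers[n] = powers[i] * powers[n - i]
        mults += 1
    return powers[chain[-1]], mults

# The a^15 chain from the table: 1, 2, 4, 5, 10, 15 (five multiplications).
result, mults = pow_by_chain(3, [1, 2, 4, 5, 10, 15])
assert result == 3**15 and mults == 5
```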
On the other hand, the determination of a shortest addition chain is hard: no efficient optimal methods are currently known for arbitrary exponents, and the related problem of finding a shortest addition chain for a given set of exponents has been proven NP-complete. [1] Even given a shortest chain, addition-chain exponentiation requires more memory than the binary method, because it must potentially store many previous exponents from the chain. So in practice, shortest addition-chain exponentiation is primarily used for small fixed exponents for which a shortest chain can be pre-computed and is not too large.
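Because such chains are typically pre-computed for small fixed exponents, a brute-force search is often sufficient in practice. The sketch below uses iterative deepening over ascending chains; it is workable only for small n, and the function name is illustrative, not standard.

```python
def shortest_addition_chain(n):
    """Find one shortest addition chain for n by iterative-deepening search.

    Practical only for small n, matching the pre-computed fixed-exponent
    use case described above.
    """
    if n == 1:
        return [1]

    def extend(chain, depth):
        if chain[-1] == n:
            return chain
        if len(chain) > depth:
            return None
        # Try every sum of two chain elements that keeps the chain increasing.
        candidates = {x + y for x in chain for y in chain if x + y > chain[-1]}
        for value in sorted(candidates, reverse=True):
            if value <= n:
                found = extend(chain + [value], depth)
                if found:
                    return found
        return None

    depth = 1
    while True:
        chain = extend([1], depth)
        if chain:
            return chain
        depth += 1

# len(chain) - 1 multiplications; for n = 15 this is 5, versus 6 for binary.
print(shortest_addition_chain(15))
```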
There are also several heuristic methods that construct a (not necessarily shortest) addition chain and often require fewer multiplications than binary exponentiation; binary exponentiation itself corresponds to a particular, generally suboptimal, addition chain. The optimal algorithm choice depends on the context (such as the relative cost of a multiplication and the number of times a given exponent is re-used). [2]
The problem of finding a shortest addition chain cannot be solved by dynamic programming, because it does not satisfy the assumption of optimal substructure. That is, it is not sufficient to decompose the power into smaller powers, each of which is computed minimally, since the addition chains for the smaller powers may be related (to share computations). For example, in the shortest addition chain a^15 = a^3 × ((a^3)^2)^2, which passes through a^2, a^3, a^6, and a^12, the subproblem for a^6 must be computed as (a^3)^2 since a^3 is re-used (as opposed to, say, a^6 = a^2 × (a^2)^2, which also requires three multiplications).
If both multiplication and division are allowed, then an addition-subtraction chain may be used to obtain even fewer total multiplications+divisions (where subtraction corresponds to division). However, the slowness of division compared to multiplication makes this technique unattractive in general. For exponentiation to negative integer powers, on the other hand, since one division is required anyway, an addition-subtraction chain is often beneficial. One such example is a^−31, where computing 1/a^31 by a shortest addition chain for a^31 requires 7 multiplications and one division, whereas the shortest addition-subtraction chain requires 5 multiplications and one division: a^−31 = a / ((((a^2)^2)^2)^2)^2.
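A small sketch of the a^−31 case, assuming exact rational arithmetic so the result can be checked; the function name is illustrative only:

```python
from fractions import Fraction

def inverse_power_31(a):
    """Compute a**-31 as a / a**32: five squarings plus one division.

    This follows the addition-subtraction chain 1, 2, 4, 8, 16, 32, 32 - 1,
    whereas inverting a shortest addition chain for 31 would need seven
    multiplications before the single division.
    """
    x = a
    for _ in range(5):      # a^2, a^4, a^8, a^16, a^32
        x = x * x
    return a / x            # a / a^32 = a^-31

a = Fraction(3)
assert inverse_power_31(a) == Fraction(1, 3**31)
```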
For exponentiation on elliptic curves, the inverse of a point (x, y) is available at no cost, since it is simply (x, −y), and therefore addition-subtraction chains are optimal in this context even for positive integer exponents. [3]
Arithmetic is an elementary part of mathematics that consists of the study of the properties of the traditional operations on numbers—addition, subtraction, multiplication, division, exponentiation, and extraction of roots. In the 19th century, Italian mathematician Giuseppe Peano formalized arithmetic with his Peano axioms, which are highly important to the field of mathematical logic today.
In mathematics and computer programming, exponentiating by squaring is a general method for fast computation of large positive integer powers of a number, or more generally of an element of a semigroup, like a polynomial or a square matrix. Some variants are commonly referred to as square-and-multiply algorithms or binary exponentiation. These can be of quite general use, for example in modular arithmetic or powering of matrices. For semigroups for which additive notation is commonly used, like elliptic curves used in cryptography, this method is also referred to as double-and-add.
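A compact sketch of the square-and-multiply idea, processing the exponent's bits from the least significant end (the function name is illustrative):

```python
def power(base, exponent):
    """Right-to-left binary exponentiation (square-and-multiply).

    Uses O(log exponent) multiplications: one squaring per bit plus one
    extra multiplication per set bit.
    """
    result = 1
    while exponent > 0:
        if exponent & 1:          # current bit is set: multiply into result
            result = result * base
        base = base * base        # square for the next bit
        exponent >>= 1
    return result

assert power(7, 15) == 7**15
```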
In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.
Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent or power n, and pronounced as "b (raised) to the n". When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, b^n is the product of multiplying n bases: b^n = b × b × ⋯ × b (n factors).
In mathematics, for given real numbers a and b, the logarithm log_b a is a number x such that b^x = a. Analogously, in any group G, powers b^k can be defined for all integers k, and the discrete logarithm log_b a is an integer k such that b^k = a. In number theory, the more commonly used term is index: we can write x = ind_r a (mod m) for r^x ≡ a (mod m) if r is a primitive root of m and gcd(a, m) = 1.
In mathematics, an addition chain for computing a positive integer n can be given by a sequence of natural numbers starting with 1 and ending with n, such that each number in the sequence is the sum of two previous numbers. The length of an addition chain is the number of sums needed to express all its numbers, which is one less than the cardinality of the sequence of numbers.
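To make the definition concrete, here is a small checker (a sketch; the function name is not standard) verifying the two defining properties: the sequence starts at 1 and every later term is a sum of two earlier terms.

```python
def is_addition_chain(seq):
    """Return True if seq is an addition chain: it starts with 1 and each
    later entry equals the sum of two (not necessarily distinct) earlier
    entries."""
    if not seq or seq[0] != 1:
        return False
    for k, value in enumerate(seq[1:], start=1):
        earlier = seq[:k]
        if not any(value - x in earlier for x in earlier):
            return False
    return True

# The chain for 15 used earlier has length 5 (number of sums = len - 1).
assert is_addition_chain([1, 2, 4, 5, 10, 15])
assert not is_addition_chain([1, 2, 4, 9])   # 9 is not a sum of two earlier terms
```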
Pollard's p − 1 algorithm is a number theoretic integer factorization algorithm, invented by John Pollard in 1974. It is a special-purpose algorithm, meaning that it is only suitable for integers with specific types of factors; it is the simplest example of an algebraic-group factorisation algorithm.
Modular exponentiation is exponentiation performed over a modulus. It is useful in computer science, especially in the field of public-key cryptography, where it is used in both Diffie-Hellman Key Exchange and RSA public/private keys.
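A minimal sketch of modular exponentiation: reduce after every multiplication so intermediate values never grow beyond the modulus. Python's built-in three-argument pow(base, exp, mod) provides the same result.

```python
def mod_pow(base, exponent, modulus):
    """Square-and-multiply exponentiation with reduction at every step."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:
            result = (result * base) % modulus
        base = (base * base) % modulus
        exponent >>= 1
    return result

assert mod_pow(5, 117, 19) == pow(5, 117, 19)
```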
In modular arithmetic computation, Montgomery modular multiplication, more commonly referred to as Montgomery multiplication, is a method for performing fast modular multiplication. It was introduced in 1985 by the American mathematician Peter L. Montgomery.
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers. It was developed by Arnold Schönhage and Volker Strassen in 1971. The run-time bit complexity is, in big O notation, O(n log n log log n) for two n-digit numbers. The algorithm uses recursive fast Fourier transforms in rings with 2^n + 1 elements, a specific type of number theoretic transform.
A division algorithm is an algorithm which, given two integers N and D, computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software.
The Karatsuba algorithm is a fast multiplication algorithm. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It is a divide-and-conquer algorithm that reduces the multiplication of two n-digit numbers to three multiplications of n/2-digit numbers and, by repeating this reduction, to at most n^(log_2 3) ≈ n^1.585 single-digit multiplications. It is therefore asymptotically faster than the traditional algorithm, which performs n^2 single-digit products. For example, the traditional algorithm requires (2^10)^2 = 1,048,576 single-digit multiplications to multiply two 1024-digit numbers (n = 1024 = 2^10), whereas the Karatsuba algorithm requires 3^10 = 59,049, thus being about 17.758 times faster.
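An illustrative sketch of the three-product recursion on decimal digits (real libraries work on machine words and fall back to schoolbook multiplication below a size threshold):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using three recursive half-size
    products instead of four."""
    if x < 10 or y < 10:                   # base case: single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(1234, 5678) == 1234 * 5678
```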
In mathematics, particularly in the area of arithmetic, a modular multiplicative inverse of an integer a is an integer x such that the product ax is congruent to 1 with respect to the modulus m. In the standard notation of modular arithmetic this congruence is written as ax ≡ 1 (mod m).
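One standard way to compute such an inverse is the extended Euclidean algorithm; the following is a sketch (function name illustrative):

```python
def mod_inverse(a, m):
    """Modular multiplicative inverse via the extended Euclidean algorithm.

    Returns x with (a * x) % m == 1, or raises if gcd(a, m) != 1."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

assert (3 * mod_inverse(3, 11)) % 11 == 1   # inverse of 3 mod 11 is 4
```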
An addition-subtraction chain, a generalization of addition chains to include subtraction, is a sequence a0, a1, a2, a3, ... that satisfies a0 = 1 and, for each k > 0, ak = ai ± aj for some i, j < k.
Algebraic-group factorisation algorithms are algorithms for factoring an integer N by working in an algebraic group defined modulo N whose group structure is the direct sum of the 'reduced groups' obtained by performing the equations defining the group arithmetic modulo the unknown prime factors p1, p2, ... By the Chinese remainder theorem, arithmetic modulo N corresponds to arithmetic in all the reduced groups simultaneously.
Kochanski multiplication is an algorithm that allows modular arithmetic to be performed efficiently when the modulus is large. This has particular application in number theory and in cryptography: for example, in the RSA cryptosystem and Diffie–Hellman key exchange.
In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so determining how quickly matrix multiplication can be carried out is of major practical relevance.
Elliptic curve scalar multiplication is the operation of successively adding a point along an elliptic curve to itself repeatedly. It is used in elliptic curve cryptography (ECC) as a means of producing a one-way function. The literature presents this operation as scalar multiplication, for example as written for the Hessian form of an elliptic curve. A widespread name for this operation is also elliptic curve point multiplication, but this can convey the wrong impression of being a multiplication between two points.
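The standard loop is double-and-add, the additive analogue of square-and-multiply. The sketch below takes the group operation as a parameter; real implementations substitute genuine curve point-addition formulas and the point at infinity as the identity, while integers are used here only to exercise the control flow.

```python
def scalar_multiply(k, point, add, identity):
    """Double-and-add: compute k*point using only the group's add operation."""
    result = identity
    addend = point
    while k > 0:
        if k & 1:
            result = add(result, addend)   # "add" step
        addend = add(addend, addend)       # "double" step (point doubling)
        k >>= 1
    return result

# Sanity check with the additive group of integers: 15 * 7 = 105.
assert scalar_multiply(15, 7, lambda p, q: p + q, 0) == 105
```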
In mathematics and computer science, polynomial evaluation refers to computing the value of a polynomial when its indeterminates are substituted with specific values. In other words, evaluating the polynomial P(x1, ..., xm) at the point (c1, ..., cm) consists of computing P(c1, ..., cm). (See also Polynomial ring § Polynomial evaluation.)
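For a single-variable polynomial, the classic evaluation scheme is Horner's rule (not described in the excerpt above); a minimal sketch:

```python
def horner(coefficients, x):
    """Evaluate a0 + a1*x + ... + an*x**n by Horner's rule.

    `coefficients` are given from the constant term upward; only n
    multiplications and n additions are used.
    """
    result = 0
    for c in reversed(coefficients):
        result = result * x + c
    return result

# P(x) = 2 + 3x + x^3 evaluated at x = 4: 2 + 12 + 64 = 78.
assert horner([2, 3, 0, 1], 4) == 78
```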