The following tables list the computational complexity of various algorithms for common mathematical operations.
Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. [1] See big O notation for an explanation of the notation used.
Note: Due to the variety of multiplication algorithms, $M(n)$ below stands in for the complexity of the chosen multiplication algorithm.
This table lists the complexity of mathematical operations on integers.
Operation | Input | Output | Algorithm | Complexity
---|---|---|---|---
Addition | Two $n$-digit numbers | One $(n+1)$-digit number | Schoolbook addition with carry | $\Theta(n)$
Subtraction | Two $n$-digit numbers | One $(n+1)$-digit number | Schoolbook subtraction with borrow | $\Theta(n)$
Multiplication | Two $n$-digit numbers | One $2n$-digit number | Schoolbook long multiplication | $O(n^2)$
 | | | Karatsuba algorithm | $O(n^{1.585})$
 | | | 3-way Toom–Cook multiplication | $O(n^{1.465})$
 | | | $k$-way Toom–Cook multiplication | $O(n^{\log(2k-1)/\log k})$
 | | | Mixed-level Toom–Cook (Knuth 4.3.3-T) [2] | $O(n\,2^{\sqrt{2\log n}}\log n)$
 | | | Schönhage–Strassen algorithm | $O(n\log n\log\log n)$
 | | | Harvey–Hoeven algorithm [3] [4] | $O(n\log n)$
Division | Two $n$-digit numbers | One $n$-digit number | Schoolbook long division | $O(n^2)$
 | | | Burnikel–Ziegler divide-and-conquer division [5] | $O(M(n)\log n)$
 | | | Newton–Raphson division | $O(M(n))$
Square root | One $n$-digit number | One $n$-digit number | Newton's method | $O(M(n))$
Modular exponentiation | Two $n$-digit integers and a $k$-bit exponent | One $n$-digit integer | Repeated multiplication and reduction | $O(M(n)\,2^k)$
 | | | Exponentiation by squaring | $O(M(n)\,k)$
 | | | Exponentiation with Montgomery reduction | $O(M(n)\,k)$
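To make the subquadratic bounds concrete, here is a minimal sketch of Karatsuba multiplication on nonnegative Python integers, reducing one product to three recursive products on half-size operands; the single-digit base case is an illustrative simplification, and production implementations switch algorithms at tuned thresholds.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers with 3 recursive products instead of 4."""
    if x < 10 or y < 10:                            # base case: tiny operand
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2    # split position (in bits)
    high_x, low_x = x >> m, x & ((1 << m) - 1)      # x = high_x*2^m + low_x
    high_y, low_y = y >> m, y & ((1 << m) - 1)
    z0 = karatsuba(low_x, low_y)                    # low * low
    z2 = karatsuba(high_x, high_y)                  # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2  # cross terms
    return (z2 << (2 * m)) + (z1 << m) + z0

assert karatsuba(12345678901234567890, 98765432109876543210) == \
       12345678901234567890 * 98765432109876543210
```

Three recursive calls on half-size inputs give the recurrence $T(n) = 3T(n/2) + O(n)$, hence the $O(n^{\log_2 3}) \approx O(n^{1.585})$ entry in the table.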
On stronger computational models, specifically a pointer machine, and consequently also a unit-cost random-access machine, it is possible to multiply two n-bit numbers in time O(n). [6]
Here we consider operations over polynomials, where n denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a coefficient. In practice this means that we assume them to be machine integers.
Operation | Input | Output | Algorithm | Complexity
---|---|---|---|---
Polynomial evaluation | One polynomial of degree $n$ with integer coefficients | One number | Direct evaluation | $\Theta(n^2)$
 | | | Horner's method | $\Theta(n)$
Polynomial gcd (over $\mathbb{Z}[x]$ or $F[x]$) | Two polynomials of degree $n$ with integer coefficients | One polynomial of degree at most $n$ | Euclidean algorithm | $O(n^2)$
 | | | Fast Euclidean algorithm (Lehmer)[ citation needed ] | $O(M(n)\log n)$
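As an illustration of the $\Theta(n)$ row above, Horner's method rewrites the polynomial as nested multiply-adds, one per coefficient; a minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n with n multiply-adds.

    `coeffs` lists the coefficients from lowest to highest degree."""
    result = 0
    for a in reversed(coeffs):   # ((a_n*x + a_{n-1})*x + ...)*x + a_0
        result = result * x + a
    return result

# 3 + 2x + x^2 at x = 5: 3 + 10 + 25 = 38
assert horner([3, 2, 1], 5) == 38
```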
Many of the methods in this section are given in Borwein & Borwein. [7]
The elementary functions are constructed by composing arithmetic operations, the exponential function ($\exp$), the natural logarithm ($\log$), trigonometric functions ($\sin$, $\cos$), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either $\exp$ or $\log$ in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions.
Below, the size $n$ refers to the number of digits of precision at which the function is to be evaluated.
Algorithm | Applicability | Complexity
---|---|---
Taylor series; repeated argument reduction (e.g. $\exp(2x) = (\exp x)^2$) and direct summation | $\exp$, $\log$, $\sin$, $\cos$, $\arctan$ | $O(M(n)\,n^{1/2})$
Taylor series; FFT-based acceleration | $\exp$, $\log$, $\sin$, $\cos$, $\arctan$ | $O(M(n)\,n^{1/3}(\log n)^2)$
Taylor series; binary splitting + bit-burst algorithm [8] | $\exp$, $\log$, $\sin$, $\cos$, $\arctan$ | $O(M(n)(\log n)^2)$
Arithmetic–geometric mean iteration [9] | $\exp$, $\log$, $\sin$, $\cos$, $\arctan$ | $O(M(n)\log n)$
It is not known whether $O(M(n)\log n)$ is the optimal complexity for elementary functions. The best known lower bound is the trivial bound $\Omega(M(n))$.
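To illustrate the inversion argument above (that computing $\exp$ yields $\log$ at the same asymptotic cost), here is a sketch of Newton's method applied to $f(y) = \exp(y) - x$ in ordinary floating point; an arbitrary-precision version would instead double the working precision at each step, so the inversion costs only a constant factor more than computing $\exp$ itself.

```python
import math

def log_via_newton(x: float, iterations: int = 6) -> float:
    """Invert exp with Newton's method: solve exp(y) = x for y.

    Iteration: y <- y - (exp(y) - x)/exp(y) = y - 1 + x*exp(-y).
    Convergence is quadratic, so a handful of steps suffice here."""
    y = math.frexp(x)[1] * math.log(2.0)   # crude initial guess ~ log2(x)*ln 2
    for _ in range(iterations):
        y = y - 1.0 + x * math.exp(-y)
    return y

assert abs(log_via_newton(10.0) - math.log(10.0)) < 1e-12
```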
Function | Input | Algorithm | Complexity
---|---|---|---
Gamma function | $n$-digit number | Series approximation of the incomplete gamma function | $O(M(n)\,n^{1/2}(\log n)^2)$
 | Fixed rational number | Hypergeometric series | $O(M(n)(\log n)^2)$
 | $m/24$, for $m$ an integer | Arithmetic–geometric mean iteration | $O(M(n)\log n)$
Hypergeometric function | $n$-digit number | (As described in Borwein & Borwein) | $O(M(n)\,n^{1/2}(\log n)^2)$
 | Fixed rational number | Hypergeometric series | $O(M(n)(\log n)^2)$
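The "Hypergeometric series" rows exploit the fact that, at a fixed rational argument, consecutive terms of the series differ by a simple rational factor, so partial sums can be accumulated in exact rational arithmetic (and, in the fast algorithms, combined by binary splitting). A small sketch using the exponential series at the rational point 1/3:

```python
import math
from fractions import Fraction

def exp_rational(p: int, q: int, terms: int) -> Fraction:
    """Partial sum of exp(p/q) = sum x^k / k!: each term is the
    previous one times x/k, a single rational multiplication."""
    x = Fraction(p, q)
    term, total = Fraction(1), Fraction(1)
    for k in range(1, terms):
        term *= x / k
        total += term
    return total

approx = exp_rational(1, 3, 25)   # exp(1/3), exact up to the truncated tail
assert abs(float(approx) - math.exp(1 / 3)) < 1e-15
```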
This table gives the complexity of computing approximations to the given constants to $n$ correct digits.
Constant | Algorithm | Complexity
---|---|---
Golden ratio, $\varphi$ | Newton's method | $O(M(n))$
Square root of 2, $\sqrt{2}$ | Newton's method | $O(M(n))$
Euler's number, $e$ | Binary splitting of the Taylor series for the exponential function | $O(M(n)\log n)$
 | Newton inversion of the natural logarithm | $O(M(n)\log n)$
Pi, $\pi$ | Binary splitting of the arctan series in Machin's formula | $O(M(n)(\log n)^2)$ [10]
 | Gauss–Legendre algorithm | $O(M(n)\log n)$ [10]
Euler's constant, $\gamma$ | Sweeney's method (approximation in terms of the exponential integral) | $O(M(n)(\log n)^2)$
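A sketch of the binary-splitting row for $e$: the partial sum $\sum_{k<N} 1/k!$ is carried as a pair of big integers $(P, Q)$, and subranges are merged by divide and conquer so the work concentrates in a few large multiplications. The choice of the term count $N$ from the digit count below is a rough assumption, not a tuned bound.

```python
def e_binary_splitting(digits: int) -> str:
    """Approximate e = sum 1/k! by binary splitting of the partial sum.

    split(a, b) returns integers (P, Q) with P/Q = sum_{k=a}^{b-1} a!/k!
    and Q = (b-1)!/a!, so e ~ P(0, N)/Q(0, N)."""
    # Number of terms: smallest N with N! > 10^(digits + margin).
    N, f = 1, 1
    while f <= 10 ** (digits + 5):
        N += 1
        f *= N

    def split(a: int, b: int) -> tuple[int, int]:
        if b - a == 1:
            return 1, 1
        m = (a + b) // 2
        p1, q1 = split(a, m)
        p2, q2 = split(m, b)
        return p1 * m * q2 + p2, q1 * m * q2   # merge two half-ranges

    p, q = split(0, N)
    return str(p * 10 ** digits // q)   # digits of e, scaled by 10^digits

print(e_binary_splitting(30))   # 2718281828459045235360287471352
```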
Algorithms for number-theoretic calculations are studied in computational number theory.
Operation | Input | Output | Algorithm | Complexity
---|---|---|---|---
Greatest common divisor | Two $n$-digit integers | One integer with at most $n$ digits | Euclidean algorithm | $O(n^2)$
 | | | Binary GCD algorithm | $O(n^2)$
 | | | Left/right $k$-ary binary GCD algorithm [11] | $O(n^2/\log n)$
 | | | Stehlé–Zimmermann algorithm [12] | $O(M(n)\log n)$
 | | | Schönhage controlled Euclidean descent algorithm [13] | $O(M(n)\log n)$
Jacobi symbol | Two $n$-digit integers | $-1$, $0$, or $1$ | Schönhage controlled Euclidean descent algorithm [14] | $O(M(n)\log n)$
 | | | Stehlé–Zimmermann algorithm [15] | $O(M(n)\log n)$
Factorial | A positive integer less than $m$ | One $O(m\log m)$-digit integer | Bottom-up multiplication | $O(M(m^2)\log m)$
 | | | Binary splitting | $O(M(m\log m)\log m)$
 | | | Exponentiation of the prime factors of $m$ | $O(M(m\log m)\log\log m)$ [16], $O(M(m\log m))$ [1]
Primality test | An $n$-digit integer | True or false | AKS primality test | $O(n^{6+o(1)})$ [17] [18]; $O(n^{3+o(1)})$, assuming Agrawal's conjecture
 | | | Elliptic curve primality proving | $O(n^{4+\varepsilon})$ heuristically [19]
 | | | Baillie–PSW primality test | $O(n^{2+o(1)})$ [20] [21]
 | | | Miller–Rabin primality test | $O(k\,n^{2+o(1)})$ [22]
 | | | Solovay–Strassen primality test | $O(k\,n^{2+o(1)})$ [22]
Integer factorization | A $b$-bit input integer | A set of factors | General number field sieve | $O\!\left(\exp\left(\sqrt[3]{\tfrac{64}{9}b}\,(\log b)^{2/3}\right)\right)$ [nb 1]
 | | | Shor's algorithm | $O(M(b)\,b)$, on a quantum computer
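As a sketch of the binary GCD row above, Stein's algorithm replaces the divisions of the Euclidean algorithm with shifts, parity tests, and subtractions, which is how it attains the same $O(n^2)$ bound using only operations that are cheap on binary representations:

```python
import math

def binary_gcd(a: int, b: int) -> int:
    """Stein's binary GCD: only shifts, subtractions, and parity tests."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the power of two common to both operands.
    shift = ((a | b) & -(a | b)).bit_length() - 1   # common trailing zeros
    a >>= (a & -a).bit_length() - 1                 # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1             # make b odd
        if a > b:
            a, b = b, a                             # keep a <= b
        b -= a                                      # b becomes even (or zero)
    return a << shift

assert binary_gcd(48, 180) == math.gcd(48, 180) == 12
```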
The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field.
Operation | Input | Output | Algorithm | Complexity
---|---|---|---|---
Matrix multiplication | Two $n\times n$ matrices | One $n\times n$ matrix | Schoolbook matrix multiplication | $O(n^3)$
 | | | Strassen algorithm | $O(n^{2.807})$
 | | | Coppersmith–Winograd algorithm (galactic algorithm) | $O(n^{2.376})$
 | | | Optimized CW-like algorithms [23] [24] [25] [26] (galactic algorithms) | $O(n^{2.373})$
Matrix multiplication | One $n\times m$ matrix, and one $m\times p$ matrix | One $n\times p$ matrix | Schoolbook matrix multiplication | $O(nmp)$
Matrix multiplication | One $n\times\lceil n^k\rceil$ matrix, and one $\lceil n^k\rceil\times n$ matrix, for some $k\geq 0$ | One $n\times n$ matrix | Algorithms given in [27] | $O(n^{\omega(k)+\epsilon})$, where upper bounds on $\omega(k)$ are given in [27]
Matrix inversion | One $n\times n$ matrix | One $n\times n$ matrix | Gauss–Jordan elimination | $O(n^3)$
 | | | Strassen algorithm | $O(n^{2.807})$
 | | | Coppersmith–Winograd algorithm | $O(n^{2.376})$
 | | | Optimized CW-like algorithms | $O(n^{2.373})$
Singular value decomposition | One $m\times n$ matrix | One $m\times m$ matrix, one $m\times n$ matrix, & one $n\times n$ matrix | Bidiagonalization and QR algorithm | $O(m^2 n)$ ($m\geq n$)
 | | One $m\times n$ matrix, one $n\times n$ matrix, & one $n\times n$ matrix | Bidiagonalization and QR algorithm | $O(mn^2)$ ($m\geq n$)
QR decomposition | One $m\times n$ matrix | One $m\times m$ matrix, & one $m\times n$ matrix | Algorithms in [28] | $O(mn^2)$ ($m\geq n$)
Determinant | One $n\times n$ matrix | One number | Laplace expansion | $O(n!)$
 | | | Division-free algorithm [29] | $O(n^4)$
 | | | LU decomposition | $O(n^3)$
 | | | Bareiss algorithm | $O(n^3)$
 | | | Fast matrix multiplication [30] | $O(n^{2.373})$
Back substitution | Triangular matrix | $n$ solutions | Back substitution [31] | $O(n^2)$
Characteristic polynomial | One $n\times n$ matrix | One degree-$n$ polynomial | Faddeev–LeVerrier algorithm | $O(n^4)$
 | | | Samuelson–Berkowitz algorithm | $O(n^4)$ (smaller constant factor)
 | | | Preparata–Sarwate algorithm [32] [33] | $O(n^{3.5})$
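A sketch of Strassen's recursion behind the $O(n^{2.807})$ rows, for square matrices whose size is a power of two; NumPy is used only for block arithmetic and the small-case fallback, and the crossover threshold is an illustrative assumption:

```python
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Multiply square matrices of power-of-two size with 7 recursive
    products per level instead of 8, giving O(n^log2(7)) ~ O(n^2.807)."""
    n = A.shape[0]
    if n <= 64:                       # cut over to schoolbook on small blocks
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

rng = np.random.default_rng(0)
A, B = rng.random((128, 128)), rng.random((128, 128))
assert np.allclose(strassen(A, B), A @ B)
```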
In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2. [34]
Algorithms for computing transforms of functions (particularly integral transforms) are widely used in all areas of mathematics, particularly analysis and signal processing.
Operation | Input | Output | Algorithm | Complexity
---|---|---|---|---
Discrete Fourier transform | Finite data sequence of size $n$ | Set of complex numbers | Schoolbook | $O(n^2)$
 | | | Fast Fourier transform | $O(n\log n)$
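A minimal radix-2 Cooley–Tukey sketch of the $O(n\log n)$ row, assuming the input length is a power of two:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])               # DFT of even-indexed samples
    odd = fft(x[1::2])                # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# A unit impulse transforms to the all-ones sequence.
assert all(abs(v - 1) < 1e-12 for v in fft([1, 0, 0, 0]))
```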