Computational complexity of mathematical operations

[Figure: Comparison computational complexity.svg — graphs of functions commonly used in the analysis of algorithms, showing the number of operations N versus input size n for each function]

The following tables list the computational complexity of various algorithms for common mathematical operations.

Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. [1] See big O notation for an explanation of the notation used.

Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm.

Arithmetic functions

This table lists the complexity of mathematical operations on integers.

Operation | Input | Output | Algorithm | Complexity
Addition | Two n-digit numbers | One (n+1)-digit number | Schoolbook addition with carry | O(n)
Subtraction | Two n-digit numbers | One (n+1)-digit number | Schoolbook subtraction with borrow | O(n)
Multiplication | Two n-digit numbers | One 2n-digit number | Schoolbook long multiplication | O(n^2)
 | | | Karatsuba algorithm | O(n^1.585)
 | | | 3-way Toom–Cook multiplication | O(n^1.465)
 | | | k-way Toom–Cook multiplication | O(n^(log(2k−1)/log k))
 | | | Mixed-level Toom–Cook (Knuth 4.3.3-T) [2] | O(n 2^(√(2 log n)) log n)
 | | | Schönhage–Strassen algorithm | O(n log n log log n)
 | | | Harvey–Hoeven algorithm [3] [4] | O(n log n)
Division | Two n-digit numbers | One n-digit number | Schoolbook long division | O(n^2)
 | | | Burnikel–Ziegler divide-and-conquer division [5] | O(M(n) log n)
 | | | Newton–Raphson division | O(M(n))
Square root | One n-digit number | One n-digit number | Newton's method | O(M(n))
Modular exponentiation | Two n-digit integers and a k-bit exponent | One n-digit integer | Repeated multiplication and reduction | O(M(n) 2^k)
 | | | Exponentiation by squaring | O(M(n) k)
 | | | Exponentiation with Montgomery reduction | O(M(n) k)
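
To make the gap between the schoolbook O(n^2) row and the subquadratic rows concrete, here is a minimal Python sketch of the Karatsuba algorithm; splitting at a power of ten and the function name are illustrative choices, not taken from the cited sources.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply non-negative integers using three recursive half-size
    products instead of four, giving O(n^1.585) digit operations."""
    if x < 10 or y < 10:                      # base case: a single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2    # split position, in digits
    xh, xl = divmod(x, 10 ** m)
    yh, yl = divmod(y, 10 ** m)
    low = karatsuba(xl, yl)
    high = karatsuba(xh, yh)
    mid = karatsuba(xl + xh, yl + yh) - low - high   # cross terms
    return high * 10 ** (2 * m) + mid * 10 ** m + low

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```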

On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine, it is possible to multiply two n-bit numbers in time O(n). [6]
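
The three modular-exponentiation rows in the table above differ only in how the k exponent bits are consumed. Here is a sketch of exponentiation by squaring in Python; it mirrors the behaviour of the built-in pow(base, exp, mod), and the name mod_pow is an illustrative choice.

```python
def mod_pow(base: int, exp: int, mod: int) -> int:
    """base**exp % mod using O(k) multiplications for a k-bit exponent,
    rather than the O(2^k) of repeated multiplication."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # low bit set: fold base into result
            result = result * base % mod
        base = base * base % mod      # square for the next exponent bit
        exp >>= 1
    return result

assert mod_pow(7, 128, 13) == pow(7, 128, 13)
```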

Algebraic functions

Here we consider operations over polynomials, where n denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume the coefficients to be machine integers.

Operation | Input | Output | Algorithm | Complexity
Polynomial evaluation | One polynomial of degree n with integer coefficients | One number | Direct evaluation (computing each power separately) | O(n^2)
 | | | Horner's method | O(n)
Polynomial gcd (over Z[x] or F[x]) | Two polynomials of degree n with integer coefficients | One polynomial of degree at most n | Euclidean algorithm | O(n^2)
 | | | Fast Euclidean algorithm (Lehmer) [citation needed] | O(M(n) log n)
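
Horner's method reaches the O(n) row by folding one multiplication and one addition per coefficient. A minimal sketch; the coefficient ordering is a convention chosen here:

```python
def horner(coeffs, x):
    """Evaluate a degree-n polynomial with n multiplications and
    n additions; coeffs runs from the leading coefficient down."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3: 54 - 54 + 6 - 1 = 5
assert horner([2, -6, 2, -1], 3) == 5
```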

Special functions

Many of the methods in this section are given in Borwein & Borwein. [7]

Elementary functions

The elementary functions are constructed by composing arithmetic operations, the exponential function (exp), the natural logarithm (log), trigonometric functions (sin, cos), and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either exp or log in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions.

Below, the size n refers to the number of digits of precision at which the function is to be evaluated.

Algorithm | Applicability | Complexity
Taylor series; repeated argument reduction (e.g. exp(2x) = [exp(x)]^2) and direct summation | exp, log, sin, cos, arctan | O(M(n) n^(1/2))
Taylor series; FFT-based acceleration | exp, log, sin, cos, arctan | O(M(n) n^(1/3) (log n)^2)
Taylor series; binary splitting + bit-burst algorithm [8] | exp, log, sin, cos, arctan | O(M(n) (log n)^2)
Arithmetic–geometric mean iteration [9] | log | O(M(n) log n)

It is not known whether O(M(n) log n) is the optimal complexity for elementary functions. The best known lower bound is the trivial bound Ω(M(n)).
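
As a rough illustration of the first row of the table (argument reduction plus direct Taylor summation), here is a sketch using Python's decimal module; the halving threshold and guard-digit counts are illustrative choices, not prescribed by the cited sources.

```python
from decimal import Decimal, getcontext

def exp_taylor(x: Decimal, digits: int) -> Decimal:
    """exp(x) to roughly `digits` digits: halve the argument until it
    is small, sum the Taylor series, then square back up."""
    getcontext().prec = digits + 10          # guard digits for rounding
    k = 0
    while abs(x) > Decimal("0.5"):           # argument reduction
        x /= 2
        k += 1
    term = total = Decimal(1)
    n = 1
    while abs(term) > Decimal(10) ** -(digits + 5):
        term *= x / n                        # next Taylor term x^n / n!
        total += term
        n += 1
    for _ in range(k):                       # exp(x) = exp(x/2^k) ** (2^k)
        total *= total
    return +total                            # round to context precision

print(exp_taylor(Decimal(1), 30))            # 2.7182818284590452353602874713...
```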

Non-elementary functions

Function | Input | Algorithm | Complexity
Gamma function | n-digit number | Series approximation of the incomplete gamma function | O(M(n) n^(1/2) (log n)^2)
 | Fixed rational number | Hypergeometric series | O(M(n) (log n)^2)
 | m/24, for m an integer | Arithmetic–geometric mean iteration | O(M(n) log n)
Hypergeometric function pFq | n-digit number | (As described in Borwein & Borwein) | O(M(n) n^(1/2) (log n)^2)
 | Fixed rational number | Hypergeometric series | O(M(n) (log n)^2)

Mathematical constants

This table gives the complexity of computing approximations to the given constants to n correct digits.

Constant | Algorithm | Complexity
Golden ratio, φ | Newton's method | O(M(n))
Square root of 2, √2 | Newton's method | O(M(n))
Euler's number, e | Binary splitting of the Taylor series for the exponential function | O(M(n) log n)
 | Newton inversion of the natural logarithm | O(M(n) log n)
Pi, π | Binary splitting of the arctan series in Machin's formula [10] | O(M(n) (log n)^2)
 | Gauss–Legendre algorithm [10] | O(M(n) log n)
Euler's constant, γ | Sweeney's method (approximation in terms of the exponential integral) | O(M(n) (log n)^2)
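
For the square-root rows, the Newton iteration x ← (x + 2/x)/2 doubles the number of correct digits per step, so O(log n) iterations over O(n)-digit arithmetic give the O(M(n)) bound. A sketch with Python's decimal module; the iteration count and guard digits are illustrative choices:

```python
from decimal import Decimal, getcontext

def sqrt2(digits: int) -> Decimal:
    """Approximate sqrt(2) by the Newton iteration x <- (x + 2/x) / 2;
    quadratic convergence doubles the correct digits each step."""
    getcontext().prec = digits + 10
    x = Decimal(1)
    for _ in range(digits.bit_length() + 3):  # ~log2(digits) steps suffice
        x = (x + Decimal(2) / x) / 2
    return +x

print(sqrt2(50))  # 1.4142135623730950488016887242096980785696718753769...
```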

Number theory

Algorithms for number theoretical calculations are studied in computational number theory.

Operation | Input | Output | Algorithm | Complexity
Greatest common divisor | Two n-digit integers | One integer with at most n digits | Euclidean algorithm | O(n^2)
 | | | Binary GCD algorithm | O(n^2)
 | | | Left/right k-ary binary GCD algorithm [11] | O(n^2 / log n)
 | | | Stehlé–Zimmermann algorithm [12] | O(M(n) log n)
 | | | Schönhage controlled Euclidean descent algorithm [13] | O(M(n) log n)
Jacobi symbol | Two n-digit integers | 0, −1, or 1 | Schönhage controlled Euclidean descent algorithm [14] | O(M(n) log n)
 | | | Stehlé–Zimmermann algorithm [15] | O(M(n) log n)
Factorial | A positive integer less than n | One O(n log n)-digit integer | Bottom-up multiplication | O(n M(n log n))
 | | | Binary splitting | O(M(n log n) log n)
 | | | Exponentiation of the prime factors of n | O(M(n log n) log log n) [16], O(M(n log n)) [1]
Primality test | An n-digit integer | True or false | AKS primality test [17] [18] | O(n^(6+o(1))), or O(n^(3+o(1))) assuming Agrawal's conjecture
 | | | Elliptic curve primality proving [19] | O(n^(4+o(1))) heuristically
 | | | Baillie–PSW primality test [20] [21] | O(n^(2+o(1)))
 | | | Miller–Rabin primality test [22] | O(k n^(2+o(1))) for k rounds
 | | | Solovay–Strassen primality test [22] | O(k n^(2+o(1))) for k rounds
Integer factorization | A b-bit input integer | A set of factors | General number field sieve | O((1+ε)^b), for all ε > 0 [nb 1]
 | | | Shor's algorithm | O(M(b) b), on a quantum computer
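
The binary GCD row replaces division with shifts and subtraction, which is attractive where division is slow. A minimal sketch of Stein's algorithm; the helper name is an illustrative choice:

```python
import math

def binary_gcd(a: int, b: int) -> int:
    """GCD of non-negative integers using only shifts and subtraction."""
    if a == 0 or b == 0:
        return a | b
    shift = ((a | b) & -(a | b)).bit_length() - 1   # shared factors of two
    a >>= (a & -a).bit_length() - 1                 # make a odd
    while b:
        b >>= (b & -b).bit_length() - 1             # make b odd
        if a > b:
            a, b = b, a                             # keep a <= b
        b -= a                                      # gcd(a,b) = gcd(a, b-a)
    return a << shift

assert binary_gcd(48, 180) == math.gcd(48, 180) == 12
```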

Matrix algebra

The following complexity figures assume that arithmetic with individual elements has complexity O(1), as is the case with fixed-precision floating-point arithmetic or operations on a finite field.

Operation | Input | Output | Algorithm | Complexity
Matrix multiplication | Two n×n matrices | One n×n matrix | Schoolbook matrix multiplication | O(n^3)
 | | | Strassen algorithm | O(n^2.807)
 | | | Coppersmith–Winograd algorithm (galactic algorithm) | O(n^2.376)
 | | | Optimized CW-like algorithms [23] [24] [25] [26] (galactic algorithms) | O(n^2.373)
Matrix multiplication | One n×m matrix, and one m×p matrix | One n×p matrix | Schoolbook matrix multiplication | O(nmp)
Matrix multiplication | One n×⌈n^k⌉ matrix, and one ⌈n^k⌉×n matrix, for some k ≥ 0 | One n×n matrix | Algorithms given in [27] | O(n^(ω(k)+ε)), where upper bounds on ω(k) are given in [27]
Matrix inversion | One n×n matrix | One n×n matrix | Gauss–Jordan elimination | O(n^3)
 | | | Strassen algorithm | O(n^2.807)
 | | | Coppersmith–Winograd algorithm | O(n^2.376)
 | | | Optimized CW-like algorithms | O(n^2.373)
Singular value decomposition | One m×n matrix | One m×n matrix, one n×n matrix, & one n×n matrix | Bidiagonalization and QR algorithm | O(mn^2) (m ≥ n)
 | | One m×m matrix, one m×n matrix, & one n×n matrix | Bidiagonalization and QR algorithm | O(m^2 n) (m ≥ n)
QR decomposition | One m×n matrix | One m×n matrix, & one n×n matrix | Algorithms in [28] | O(mn^2) (m ≥ n)
Determinant | One n×n matrix | One number | Laplace expansion | O(n!)
 | | | Division-free algorithm [29] | O(n^4)
 | | | LU decomposition | O(n^3)
 | | | Bareiss algorithm | O(n^3)
 | | | Fast matrix multiplication [30] | O(n^2.373)
Back substitution | Triangular matrix | n solutions | Back substitution [31] | O(n^2)
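
A hedged sketch of Strassen's scheme behind the O(n^2.807) row, written for n a power of two with plain Python lists; the block names and the single-element recursion cutoff are simplifications made here, not part of the cited formulations.

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) with 7 recursive
    products instead of 8, giving O(n^2.807) element operations."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):   # split M into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])

assert strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```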

In 2005, Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans showed that either of two different conjectures would imply that the exponent of matrix multiplication is 2. [32]


Transforms

Algorithms for computing transforms of functions (particularly integral transforms) are widely used in all areas of mathematics, particularly analysis and signal processing.

Operation | Input | Output | Algorithm | Complexity
Discrete Fourier transform | Finite data sequence of size n | Set of complex numbers | Schoolbook | O(n^2)
 | | | Fast Fourier transform | O(n log n)
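
The fast Fourier transform removes a factor of roughly n/log n relative to the schoolbook DFT. A minimal radix-2 Cooley–Tukey sketch, with the input length restricted to a power of two for simplicity:

```python
import cmath

def fft(x):
    """Radix-2 Cooley–Tukey FFT; len(x) must be a power of two.
    O(n log n) versus O(n^2) for the schoolbook DFT."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# A spike at index 0 transforms to a flat spectrum of ones.
print([round(abs(v), 6) for v in fft([1, 0, 0, 0])])  # [1.0, 1.0, 1.0, 1.0]
```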

Notes

  1. This form of sub-exponential time is valid for all ε > 0. A more precise form of the complexity can be given as exp(((64/9)^(1/3) + o(1)) (b ln 2)^(1/3) (ln(b ln 2))^(2/3)).

References

  1. Schönhage, A.; Grotefeld, A.F.W.; Vetter, E. (1994). Fast Algorithms—A Multitape Turing Machine Implementation. BI Wissenschafts-Verlag. ISBN 978-3-411-16891-0. OCLC 897602049.
  2. Knuth 1997
  3. Harvey, D.; van der Hoeven, J. (2021). "Integer multiplication in time O(n log n)" (PDF). Annals of Mathematics. 193 (2): 563–617. doi:10.4007/annals.2021.193.2.4. S2CID 109934776.
  4. Klarreich, Erica (December 2019). "Multiplication hits the speed limit". Commun. ACM. 63 (1): 11–13. doi:10.1145/3371387. S2CID   209450552.
  5. Burnikel, Christoph; Ziegler, Joachim (1998). Fast Recursive Division. Forschungsberichte des Max-Planck-Instituts für Informatik. Saarbrücken: MPI Informatik Bibliothek & Dokumentation. OCLC   246319574. MPII-98-1-022.
  6. Schönhage, Arnold (1980). "Storage Modification Machines". SIAM Journal on Computing. 9 (3): 490–508. doi:10.1137/0209036.
  7. Borwein, J.; Borwein, P. (1987). Pi and the AGM: A Study in Analytic Number Theory and Computational Complexity. Wiley. ISBN   978-0-471-83138-9. OCLC   755165897.
  8. Chudnovsky, David; Chudnovsky, Gregory (1988). "Approximations and complex multiplication according to Ramanujan". Ramanujan revisited: Proceedings of the Centenary Conference. Academic Press. pp. 375–472. ISBN   978-0-01-205856-5.
  9. Brent, Richard P. (2014) [1975]. "Multiple-precision zero-finding methods and the complexity of elementary function evaluation". In Traub, J.F. (ed.). Analytic Computational Complexity. Elsevier. pp. 151–176. arXiv: 1004.3412 . ISBN   978-1-4832-5789-1.
  10. Richard P. Brent (2020), The Borwein Brothers, Pi and the AGM, Springer Proceedings in Mathematics & Statistics, vol. 313, arXiv: 1802.07558, doi:10.1007/978-3-030-36568-4, ISBN 978-3-030-36567-7, S2CID 214742997
  11. Sorenson, J. (1994). "Two Fast GCD Algorithms". Journal of Algorithms. 16 (1): 110–144. doi:10.1006/jagm.1994.1006.
  12. Crandall, R.; Pomerance, C. (2005). "Algorithm 9.4.7 (Stehlé-Zimmerman binary-recursive-gcd)". Prime Numbers – A Computational Perspective (2nd ed.). Springer. pp. 471–3. ISBN   978-0-387-28979-3.
  13. Möller N (2008). "On Schönhage's algorithm and subquadratic integer gcd computation" (PDF). Mathematics of Computation. 77 (261): 589–607. Bibcode:2008MaCom..77..589M. doi: 10.1090/S0025-5718-07-02017-0 .
  14. Bernstein, D.J. "Faster Algorithms to Find Non-squares Modulo Worst-case Integers".
  15. Brent, Richard P.; Zimmermann, Paul (2010). "An algorithm for the Jacobi symbol". International Algorithmic Number Theory Symposium. Springer. pp. 83–95. arXiv: 1004.2091 . doi:10.1007/978-3-642-14518-6_10. ISBN   978-3-642-14518-6. S2CID   7632655.
  16. Borwein, P. (1985). "On the complexity of calculating factorials". Journal of Algorithms. 6 (3): 376–380. doi:10.1016/0196-6774(85)90006-9.
  17. Lenstra jr., H.W.; Pomerance, Carl (2019). "Primality testing with Gaussian periods" (PDF). Journal of the European Mathematical Society . 21 (4): 1229–69. doi:10.4171/JEMS/861. hdl:21.11116/0000-0005-717D-0.
  18. Tao, Terence (2010). "1.11 The AKS primality test". An epsilon of room, II: Pages from year three of a mathematical blog. Graduate Studies in Mathematics. Vol. 117. American Mathematical Society. pp. 82–86. doi:10.1090/gsm/117. ISBN   978-0-8218-5280-4. MR   2780010.
  19. Morain, F. (2007). "Implementing the asymptotically fast version of the elliptic curve primality proving algorithm". Mathematics of Computation . 76 (257): 493–505. arXiv: math/0502097 . Bibcode:2007MaCom..76..493M. doi:10.1090/S0025-5718-06-01890-4. MR   2261033. S2CID   133193.
  20. Pomerance, Carl; Selfridge, John L.; Wagstaff, Jr., Samuel S. (July 1980). "The pseudoprimes to 25·10⁹" (PDF). Mathematics of Computation. 35 (151): 1003–26. doi: 10.1090/S0025-5718-1980-0572872-7. JSTOR 2006210.
  21. Baillie, Robert; Wagstaff, Jr., Samuel S. (October 1980). "Lucas Pseudoprimes" (PDF). Mathematics of Computation. 35 (152): 1391–1417. doi: 10.1090/S0025-5718-1980-0583518-6 . JSTOR   2006406. MR   0583518.
  22. Monier, Louis (1980). "Evaluation and comparison of two efficient probabilistic primality testing algorithms". Theoretical Computer Science. 12 (1): 97–108. doi: 10.1016/0304-3975(80)90007-9. MR 0582244.
  23. Alman, Josh; Williams, Virginia Vassilevska (2020), "A Refined Laser Method and Faster Matrix Multiplication", 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2021), arXiv: 2010.05846 , doi:10.1137/1.9781611976465.32, S2CID   222290442
  24. Davie, A.M.; Stothers, A.J. (2013), "Improved bound for complexity of matrix multiplication", Proceedings of the Royal Society of Edinburgh, 143A (2): 351–370, doi:10.1017/S0308210511001648, S2CID   113401430
  25. Vassilevska Williams, Virginia (2014), Breaking the Coppersmith-Winograd barrier: Multiplying matrices in O(n^2.373) time
  26. Le Gall, François (2014), "Powers of tensors and fast matrix multiplication", Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation — ISSAC '14, p. 23, arXiv: 1401.7714 , Bibcode:2014arXiv1401.7714L, doi:10.1145/2608628.2627493, ISBN   9781450325011, S2CID   353236
  27. Le Gall, François; Urrutia, Floren (2018). "Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor". In Czumaj, Artur (ed.). Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611975031.67. ISBN 978-1-61197-503-1. S2CID 33396059.
  28. Knight, Philip A. (May 1995). "Fast rectangular matrix multiplication and QR decomposition". Linear Algebra and its Applications. 221: 69–81. doi:10.1016/0024-3795(93)00230-w. ISSN   0024-3795.
  29. Rote, G. (2001). "Division-free algorithms for the determinant and the pfaffian: algebraic and combinatorial approaches" (PDF). Computational discrete mathematics. Springer. pp. 119–135. ISBN   3-540-45506-X.
  30. Aho, Alfred V.; Hopcroft, John E.; Ullman, Jeffrey D. (1974). "Theorem 6.6". The Design and Analysis of Computer Algorithms. Addison-Wesley. p. 241. ISBN   978-0-201-00029-0.
  31. Fraleigh, J.B.; Beauregard, R.A. (1987). Linear Algebra (3rd ed.). Addison-Wesley. p. 95. ISBN   978-0-201-15459-7.
  32. Cohn, Henry; Kleinberg, Robert; Szegedy, Balazs; Umans, Chris (2005). "Group-theoretic Algorithms for Matrix Multiplication". Proceedings of the 46th Annual Symposium on Foundations of Computer Science. IEEE. pp. 379–388. arXiv: math.GR/0511460 . doi:10.1109/SFCS.2005.39. ISBN   0-7695-2468-0. S2CID   6429088.
