In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so finding the fastest algorithm for matrix multiplication is of major practical relevance.
Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires $n^3$ field operations to multiply two n × n matrices over that field ($\Theta(n^3)$ in big O notation). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication". [1] The optimal number of field operations needed to multiply two square n × n matrices up to constant factors is still unknown. This is a major open question in theoretical computer science.
As of January 2024, the best bound on the asymptotic complexity of a matrix multiplication algorithm is $O(n^{2.371552})$. [2] [3] However, this and similar improvements to Strassen are not used in practice, because they are galactic algorithms: the constant coefficient hidden by the big O notation is so large that they are only worthwhile for matrices that are too large to handle on present-day computers. [4] [5]
If A, B are n × n matrices over a field, then their product AB is also an n × n matrix over that field, defined entrywise as

$(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}, \qquad i, j = 1, \ldots, n.$
The simplest approach to computing the product of two n × n matrices A and B is to compute the arithmetic expressions coming from the definition of matrix multiplication. In pseudocode:
input: A and B, both n by n matrices
initialize: C, an n by n matrix of all zeros
for i from 1 to n:
    for j from 1 to n:
        for k from 1 to n:
            C[i][j] = C[i][j] + A[i][k] * B[k][j]
output: C (as A*B)
This algorithm requires, in the worst case, $n^3$ multiplications of scalars and $(n-1)n^2$ additions for computing the product of two square n×n matrices. Its computational complexity is therefore $O(n^3)$, in a model of computation where field operations (addition and multiplication) take constant time (in practice, this is the case for floating point numbers, but not necessarily for integers).
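For illustration, the following is a minimal Python sketch of the schoolbook algorithm, a direct transcription of the pseudocode above (the function name matmul_naive is chosen here purely for illustration):

def matmul_naive(A, B):
    """Schoolbook product of two n-by-n matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]  # n-by-n matrix of zeros
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # n**3 scalar multiplications in total
    return C

For example, matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]) returns [[19, 22], [43, 50]].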
Strassen's algorithm improves on naive matrix multiplication through a divide-and-conquer approach. The key observation is that multiplying two 2 × 2 matrices can be done with only 7 multiplications, instead of the usual 8 (at the expense of 11 additional addition and subtraction operations). This means that, treating the input n×n matrices as block 2 × 2 matrices, the task of multiplying n×n matrices can be reduced to 7 subproblems of multiplying n/2×n/2 matrices. Applying this recursively gives an algorithm needing $O(n^{\log_2 7}) \approx O(n^{2.807})$ field operations.
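A minimal Python sketch of this recursion is given below, assuming for simplicity that n is a power of two (the general case can be handled by padding); the seven products M1, ..., M7 follow Strassen's identities, and the helper names are illustrative rather than taken from any library:

def strassen(A, B):
    """Multiply two n-by-n matrices (n a power of two) with 7 recursive products per level."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    def block(M, r, c):  # extract the m-by-m block with top-left corner (r, c)
        return [row[c:c + m] for row in M[r:r + m]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = block(A, 0, 0), block(A, 0, m), block(A, m, 0), block(A, m, m)
    B11, B12, B21, B22 = block(B, 0, 0), block(B, 0, m), block(B, m, 0), block(B, m, m)
    M1 = strassen(add(A11, A22), add(B11, B22))   # the seven recursive products
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)           # recombine into the four result blocks
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

Since each call spawns 7 half-size calls plus $O(n^2)$ additions, the running time satisfies $T(n) = 7T(n/2) + O(n^2)$, which resolves to $O(n^{\log_2 7})$.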
Unlike algorithms with faster asymptotic complexity, Strassen's algorithm is used in practice. Its numerical stability is reduced compared to the naive algorithm, [6] but it is faster in cases where n > 100 or so [7] and appears in several libraries, such as BLAS. [8] Fast matrix multiplication algorithms cannot achieve component-wise stability, but some can be shown to exhibit norm-wise stability. [9] Strassen's algorithm is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.
Timeline of improvements to the upper bound on the matrix multiplication exponent ω:

Year | Bound on ω | Authors |
---|---|---|
1969 | 2.8074 | Strassen [1] |
1978 | 2.796 | Pan [10] |
1979 | 2.780 | Bini, Capovani, Romani [11] |
1981 | 2.522 | Schönhage [12] |
1981 | 2.517 | Romani [13] |
1981 | 2.496 | Coppersmith, Winograd [14] |
1986 | 2.479 | Strassen [15] |
1990 | 2.3755 | Coppersmith, Winograd [16] |
2010 | 2.3737 | Stothers [17] |
2012 | 2.3729 | Williams [18] [19] |
2014 | 2.3728639 | Le Gall [20] |
2020 | 2.3728596 | Alman, Williams [21] [22] |
2022 | 2.371866 | Duan, Wu, Zhou [23] |
2024 | 2.371552 | Williams, Xu, Xu, and Zhou [2] |
2024 | 2.371339 | Alman, Duan, Williams, Xu, Xu, and Zhou [24] |
The matrix multiplication exponent, usually denoted ω, is the smallest real number for which any two n × n matrices over a field can be multiplied together using $n^{\omega + o(1)}$ field operations. This notation is commonly used in algorithms research, so that algorithms using matrix multiplication as a subroutine have bounds on running time that can update as bounds on ω improve.
Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that 2 ≤ ω ≤ 3. Whether ω = 2 is a major open question in theoretical computer science, and there is a line of research developing matrix multiplication algorithms to get improved bounds on ω.
All recent algorithms in this line of research use the laser method, a generalization of the Coppersmith–Winograd algorithm, which was given by Don Coppersmith and Shmuel Winograd in 1990 and was the best matrix multiplication algorithm until 2010. [25] The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two k × k matrices with fewer than $k^3$ multiplications, and this technique is applied recursively. However, the laser method has limits to its power: Ambainis, Filmus and Le Gall proved that it cannot be used to show that ω < 2.3725 by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd, nor that ω < 2.3078 for a wide class of variants of this approach. [26] In 2022, Duan, Wu and Zhou devised a variant that breaks the first of these two barriers with ω < 2.37188. [23] They do so by identifying a source of potential optimization in the laser method termed combination loss, which they compensate for using an asymmetric version of the hashing method in the Coppersmith–Winograd algorithm.
Henry Cohn, Robert Kleinberg, Balázs Szegedy and Chris Umans put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the triple product property (TPP). They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity. This implies that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case. [5] One such conjecture is that families of wreath products of Abelian groups with symmetric groups realise families of subset triples with a simultaneous version of the TPP. [27] [28] Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the slice rank method. [29] Further, Alon, Shpilka and Umans have shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the sunflower conjecture, [30] which in turn is related to the cap set problem. [29]
There is a trivial lower bound of ω ≥ 2: since any algorithm for multiplying two n × n matrices has to process all $2n^2$ entries, there is a trivial asymptotic lower bound of $\Omega(n^2)$ operations for any matrix multiplication algorithm. It is unknown whether ω = 2. The best known lower bound for matrix-multiplication complexity is $\Omega(n^2 \log n)$, for bounded coefficient arithmetic circuits over the real or complex numbers, and is due to Ran Raz. [31]
The exponent ω is defined to be a limit point, in that it is the infimum of the exponent over all matrix multiplication algorithms. It is known that this limit point is not achieved. In other words, under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely $O(n^\omega)$ operations; there must be an additional factor of $n^{o(1)}$. [14]
Similar techniques also apply to rectangular matrix multiplication. The central object of study is ω(k), which is the smallest exponent such that one can multiply a matrix of size $n \times \lceil n^k \rceil$ with a matrix of size $\lceil n^k \rceil \times n$ with $n^{\omega(k) + o(1)}$ arithmetic operations. A result in algebraic complexity states that multiplying matrices of size $n \times \lceil n^k \rceil$ and $\lceil n^k \rceil \times n$ requires the same number of arithmetic operations as multiplying matrices of size $n \times n$ and $n \times \lceil n^k \rceil$, and of size $\lceil n^k \rceil \times n$ and $n \times n$, so this encompasses the complexity of rectangular matrix multiplication. [32] This generalizes the square matrix multiplication exponent, since ω(1) = ω.
Since the output of the matrix multiplication problem is of size $n^2$, we have ω(k) ≥ 2 for all values of k. If one can prove, for some values of k between 0 and 1, that ω(k) ≤ 2, then such a result shows that ω(k) = 2 for those k. The largest k such that ω(k) = 2 is known as the dual matrix multiplication exponent, usually denoted α. α is referred to as the "dual" because showing that α = 1 is equivalent to showing that ω = 2. Like the matrix multiplication exponent, the dual matrix multiplication exponent sometimes appears in the complexity of algorithms in numerical linear algebra and optimization. [33]
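In display form, the definitions above can be summarized as follows (a compact restatement, not a new result):

\[
\omega(k) \;=\; \inf\bigl\{\, c \;:\; \text{an } n \times \lceil n^k \rceil \text{ matrix and an } \lceil n^k \rceil \times n \text{ matrix can be multiplied in } O(n^{c + o(1)}) \text{ operations} \,\bigr\},
\]
\[
\omega = \omega(1), \qquad \alpha \;=\; \sup\{\, k \;:\; \omega(k) = 2 \,\}, \qquad \alpha = 1 \iff \omega = 2 .
\]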
The first bound on α was given by Coppersmith in 1982, who showed that α > 0.172. [34] The current best peer-reviewed bound is α > 0.321334, given by Williams, Xu, Xu, and Zhou. [2]
Problems that have the same asymptotic complexity as matrix multiplication include determinant, matrix inversion, and Gaussian elimination (see next section). Problems with complexity expressible in terms of ω include characteristic polynomial, eigenvalues (but not eigenvectors), Hermite normal form, and Smith normal form.[citation needed]
In his 1969 paper, where he proved the complexity $O(n^{\log_2 7}) \approx O(n^{2.807})$ for matrix multiplication, Strassen also proved that matrix inversion, determinant and Gaussian elimination have, up to a multiplicative constant, the same computational complexity as matrix multiplication. The proof does not make any assumptions on the matrix multiplication algorithm that is used, except that its complexity is $O(n^\omega)$ for some ω ≥ 2.
The starting point of Strassen's proof is using block matrix multiplication. Specifically, a matrix of even dimension 2n×2n may be partitioned in four n×n blocks

$\begin{pmatrix} A & B \\ C & D \end{pmatrix}.$
Under this form, its inverse is

$\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\ -(D - CA^{-1}B)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1} \end{pmatrix},$
provided that A and $D - CA^{-1}B$ are invertible.
Thus, the inverse of a 2n×2n matrix may be computed with two inversions, six multiplications and four additions or additive inverses of n×n matrices. It follows that, denoting respectively by I(n), M(n) and A(n) = $n^2$ the number of operations needed for inverting, multiplying and adding n×n matrices, one has

$I(2n) \le 2I(n) + 6M(n) + 4A(n).$
If $n = 2^k$, one may apply this formula recursively:

$I(2^k) \le 2I(2^{k-1}) + 6M(2^{k-1}) + 4A(2^{k-1}) \le \cdots \le 2^k I(1) + 6\sum_{i=1}^{k} 2^{i-1} M(2^{k-i}) + 4\sum_{i=1}^{k} 2^{i-1} A(2^{k-i}).$
If $M(n) \le cn^\omega$ for some constant c and ω ≥ 2, one gets eventually

$I(n) \le dn^\omega$

for some constant d.
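Filling in the last step (a routine geometric-series estimate, added here for completeness): if $M(n) \le cn^\omega$ with ω ≥ 2, then

\[
6\sum_{i=1}^{k} 2^{i-1} M(2^{k-i}) \;\le\; 6c\sum_{i=1}^{k} 2^{i-1}\, 2^{(k-i)\omega} \;=\; 3c\, 2^{k\omega} \sum_{i=1}^{k} 2^{i(1-\omega)} \;\le\; \frac{3c\, 2^{k\omega}}{2^{\omega - 1} - 1},
\]

and the terms $4\sum_{i=1}^{k} 2^{i-1} A(2^{k-i})$ (using $A(n) = n^2 \le n^\omega$) and $2^k I(1)$ are bounded the same way, so every term is $O(2^{k\omega}) = O(n^\omega)$.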
For matrices whose dimension is not a power of two, the same complexity is reached by increasing the dimension of the matrix to a power of two, by padding the matrix with rows and columns whose entries are 1 on the diagonal and 0 elsewhere.
This proves the asserted complexity for matrices such that all submatrices that have to be inverted are indeed invertible. This complexity is thus proved for almost all matrices, as a matrix with randomly chosen entries is invertible with probability one.
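The reduction can be illustrated with a short Python sketch (a simplified illustration, not Strassen's original presentation): it inverts a matrix of power-of-two size using two half-size inversions, six calls to a supplied multiplication routine matmul (for instance the strassen or matmul_naive sketches above), and a few additions and negations, and it assumes that every block that has to be inverted is indeed invertible, as discussed above:

def block_inverse(M, matmul):
    """Invert an n-by-n matrix (n a power of two, lists of lists) via the block formula."""
    n = len(M)
    if n == 1:
        return [[1 / M[0][0]]]
    m = n // 2
    def block(X, r, c):
        return [row[c:c + m] for row in X[r:r + m]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def neg(X):
        return [[-x for x in row] for row in X]
    A, B = block(M, 0, 0), block(M, 0, m)
    C, D = block(M, m, 0), block(M, m, m)
    Ainv = block_inverse(A, matmul)      # first half-size inversion
    X1 = matmul(C, Ainv)                 # C A^-1              (multiplication 1)
    X2 = matmul(X1, B)                   # C A^-1 B            (multiplication 2)
    S = sub(D, X2)                       # Schur complement D - C A^-1 B
    Sinv = block_inverse(S, matmul)      # second half-size inversion
    X3 = matmul(Ainv, B)                 # A^-1 B              (multiplication 3)
    X4 = matmul(X3, Sinv)                # A^-1 B S^-1         (multiplication 4)
    X5 = matmul(Sinv, X1)                # S^-1 C A^-1         (multiplication 5)
    X6 = matmul(X4, X1)                  # A^-1 B S^-1 C A^-1  (multiplication 6)
    TL, TR, BL, BR = add(Ainv, X6), neg(X4), neg(X5), Sinv  # the four blocks of the inverse
    top = [r1 + r2 for r1, r2 in zip(TL, TR)]
    bottom = [r1 + r2 for r1, r2 in zip(BL, BR)]
    return top + bottom

For example, block_inverse([[4.0, 7.0], [2.0, 6.0]], matmul_naive) returns approximately [[0.6, -0.7], [-0.2, 0.4]], up to floating-point rounding.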
The same argument applies to LU decomposition, as, if the matrix A is invertible, the equality

$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} I & 0 \\ CA^{-1} & I \end{pmatrix} \begin{pmatrix} A & B \\ 0 & D - CA^{-1}B \end{pmatrix}$
defines a block LU decomposition that may be applied recursively to $A$ and $D - CA^{-1}B$ to eventually obtain a true LU decomposition of the original matrix.
The argument also applies to the determinant, since the left factor of the block LU decomposition is block unit lower triangular with determinant 1, while the right factor is block upper triangular; it results that

$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\,\det(D - CA^{-1}B).$
Related to the problem of minimizing the number of arithmetic operations is minimizing the number of multiplications, which is typically a more costly operation than addition. An $O(n^\omega)$ algorithm for matrix multiplication must necessarily use only $O(n^\omega)$ multiplication operations, but these algorithms are impractical. Improving on the naive $n^3$ multiplications of schoolbook multiplication, 4×4 matrices over $\mathbb{Z}/2\mathbb{Z}$ can be multiplied with 47 multiplications, [35] and 3×3 matrix multiplication over a commutative ring can be done in 21 multiplications [36] [37] (23 if non-commutative [38]). The lower bound on the number of multiplications needed is 2mn + 2n − m − 2 (multiplication of n×m matrices with m×n matrices using the substitution method, m ≥ n ≥ 3), which means that the n = 3 case requires at least 19 multiplications and n = 4 at least 34. [39] For n = 2, an optimal algorithm uses 7 multiplications and 15 additions, which are minimal, compared to only 4 additions for 8 multiplications in the naive algorithm. [40] [41] A quick check of this arithmetic follows the sketch below.
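The lower-bound formula can be evaluated for the two quoted cases with a trivial Python sketch (the function name is chosen here for illustration):

def substitution_lower_bound(m, n):
    """Lower bound 2mn + 2n - m - 2 on the number of multiplications needed to
    multiply an n-by-m matrix by an m-by-n matrix (substitution method, m >= n >= 3)."""
    return 2 * m * n + 2 * n - m - 2

print(substitution_lower_bound(3, 3))  # 19: the 3x3 case needs at least 19 multiplications
print(substitution_lower_bound(4, 4))  # 34: the 4x4 case needs at least 34 multiplications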