Matrix chain multiplication


Matrix chain multiplication (or the matrix chain ordering problem [1] ) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.


There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices A, B, C, and D, there are five possible options:

((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD)).

Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is X × Y by a matrix that is Y × Z requires XYZ ordinary multiplications and X(Y − 1)Z ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity.

If A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix, then

computing (AB)C needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while
computing A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.

Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of n matrices?" The number of possible parenthesizations is given by the (n−1)-th Catalan number, which is O(4^n / n^(3/2)), so checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large n. A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems.
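To get a feel for this growth, the count can be computed directly from the closed form of the Catalan numbers. The following is a minimal Python sketch (the function name num_parenthesizations is illustrative):

from math import comb

def num_parenthesizations(n):
    # Number of ways to fully parenthesize a chain of n matrices:
    # the (n-1)-th Catalan number, binom(2(n-1), n-1) / n
    return comb(2 * (n - 1), n - 1) // n

num_parenthesizations(4)   # 5, matching the five options listed above
num_parenthesizations(20)  # 1767263190, far too many to try one by one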

A dynamic programming algorithm

To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm: split the sequence of matrices into two subsequences at each possible position, recursively find the minimum cost of computing each subsequence, add the cost of multiplying the two resulting matrices together, and take the minimum over all split positions.

For example, if we have four matrices ABCD, we compute the cost required to find each of (A)(BCD), (AB)(CD), and (ABC)(D), making recursive calls to find the minimum cost to compute ABC, AB, CD, and BCD. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
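Written out directly, the recursion above looks like the following minimal Python sketch (the function name naive_cost and the convention that matrix A[i] has dimensions dims[i-1] × dims[i] are illustrative choices consistent with the pseudocode further below):

def naive_cost(dims, i, j):
    # Minimum cost of computing the product A[i] ... A[j],
    # where matrix A[k] has dimensions dims[k-1] x dims[k]
    if i == j:
        return 0  # a single matrix requires no multiplication
    return min(
        naive_cost(dims, i, k)             # best cost of the left factor A[i..k]
        + naive_cost(dims, k + 1, j)       # best cost of the right factor A[k+1..j]
        + dims[i - 1] * dims[k] * dims[j]  # cost of multiplying the two results
        for k in range(i, j)
    )

For example, naive_cost([10, 30, 5, 60], 1, 3) returns 4500, but the function recomputes the same subproducts over and over, which is exactly the redundancy discussed next.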

However, this algorithm has exponential runtime complexity, making it as inefficient as the brute-force approach of trying every parenthesization. The reason is that the algorithm does a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ABC and AB. But finding the best cost for computing ABC also requires finding the best cost for AB. As the recursion grows deeper, more and more of this type of unnecessary repetition occurs.

One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer, and do not recompute it. Since there are about n^2/2 different subsequences, where n is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(n^3) from O(2^n), which is more than efficient enough for real applications. This is top-down dynamic programming.

The following bottom-up approach [2] computes, for each 2 ≤ k ≤ n, the minimum costs of all subsequences of length k using the costs of smaller subsequences already computed. It has the same asymptotic runtime and requires no recursion.

Pseudocode:

// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
MatrixChainOrder(int dims[])
{
    // length[dims] = n + 1
    n = dims.length - 1;
    // m[i,j] = Minimum number of scalar multiplications (i.e., cost)
    // needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
    // The cost is zero when multiplying one matrix
    for (i = 1; i <= n; i++)
        m[i, i] = 0;

    for (len = 2; len <= n; len++) {        // Subsequence lengths
        for (i = 1; i <= n - len + 1; i++) {
            j = i + len - 1;
            m[i, j] = MAXINT;
            for (k = i; k <= j - 1; k++) {
                cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
                if (cost < m[i, j]) {
                    m[i, j] = cost;
                    s[i, j] = k;            // Index of the subsequence split that achieved minimal cost
                }
            }
        }
    }
}
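The same bottom-up computation can be written as a short, runnable Python sketch. The function name matrix_chain_order and the reconstruction helper are illustrative; they follow the pseudocode above and additionally rebuild an optimal grouping from the split table s:

def matrix_chain_order(dims):
    # dims[i-1] x dims[i] is the shape of matrix A[i], for i = 1..n
    n = len(dims) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]  # m[i][j]: minimal cost of A[i..j]
    s = [[0] * (n + 1) for _ in range(n + 1)]  # s[i][j]: split index achieving that cost
    for length in range(2, n + 1):             # subsequence lengths
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k

    def parenthesize(i, j):
        # Rebuild an optimal grouping from the split table
        if i == j:
            return "A" + str(i)
        return "(" + parenthesize(i, s[i][j]) + parenthesize(s[i][j] + 1, j) + ")"

    return m[1][n], parenthesize(1, n)

For the running example, matrix_chain_order([10, 30, 5, 60]) returns (4500, '((A1A2)A3)'), i.e. the grouping (AB)C.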

A Python implementation using the memoization decorator from the standard library:

from functools import cache

def matrixChainOrder(dims: list[int]) -> int:
    @cache
    def a(i, j):
        return min((a(i, k) + dims[i] * dims[k] * dims[j] + a(k, j)
                    for k in range(i + 1, j)), default=0)
    return a(0, len(dims) - 1)
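For example, matrixChainOrder([10, 30, 5, 60]) returns 4500, the cost of the optimal grouping (AB)C from the earlier example.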

More efficient algorithms

There are algorithms that are more efficient than the O(n^3) dynamic programming algorithm, though they are more complex.

Hu & Shing

An algorithm published by T. C. Hu and M.-T. Shing achieves O(n log n) computational complexity. [3] [4] [5] They showed how the matrix chain multiplication problem can be transformed (or reduced) into the problem of triangulation of a regular polygon. The polygon is oriented such that there is a horizontal bottom side, called the base, which represents the final result. The other n sides of the polygon, in the clockwise direction, represent the matrices. The vertices on each end of a side are the dimensions of the matrix represented by that side. With n matrices in the multiplication chain there are n−1 binary operations and C_{n−1} ways of placing parentheses, where C_{n−1} is the (n−1)-th Catalan number. The algorithm exploits that there are also C_{n−1} possible triangulations of a polygon with n+1 sides.

This image illustrates possible triangulations of a regular hexagon. These correspond to the different ways that parentheses can be placed to order the multiplications for a product of 5 matrices.

[Figure: Catalan-Hexagons-example.svg]

For the example below, there are four sides: A, B, C and the final result ABC. A is a 10×30 matrix, B is a 30×5 matrix, C is a 5×60 matrix, and the final result is a 10×60 matrix. The regular polygon for this example is a 4-gon, i.e. a square:

[Figure: Matrix chain multiplication polygon example.svg]

The matrix product AB is a 10×5 matrix and BC is a 30×60 matrix. The two possible triangulations in this example correspond to the two parenthesizations (AB)C and A(BC).

The cost of a single triangle, in terms of the number of multiplications needed, is the product of the weights at its vertices. The total cost of a particular triangulation of the polygon is the sum of the costs of all its triangles:

(AB)C: (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 multiplications
A(BC): (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 multiplications
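As a sanity check, the triangle costs for this square can be evaluated directly from its vertex weights 10, 30, 5 and 60; the following minimal sketch (names are illustrative) reproduces the two totals above:

def triangle_cost(a, b, c):
    # Cost of one triangle = product of the weights at its three vertices
    return a * b * c

# Diagonal between the 10- and 5-vertices: triangles (10,30,5) and (10,5,60), i.e. (AB)C
cost_ab_c = triangle_cost(10, 30, 5) + triangle_cost(10, 5, 60)   # 1500 + 3000 = 4500

# Diagonal between the 30- and 60-vertices: triangles (30,5,60) and (10,30,60), i.e. A(BC)
cost_a_bc = triangle_cost(30, 5, 60) + triangle_cost(10, 30, 60)  # 9000 + 18000 = 27000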

Hu & Shing developed an algorithm that finds an optimum solution for the minimum cost partition problem in O(n log n) time. Their proof of correctness of the algorithm relies on "Lemma 1" proved in a 1981 technical report and omitted from the published paper. [6] [4] The technical report's proof of the lemma is incorrect, but Shing has presented a corrected proof. [1]

Other O(n log n) algorithms

Wang, Zhu and Tian have published a simplified O(n log m) algorithm, where n is the number of matrices in the chain and m is the number of local minima in the dimension sequence of the given matrix chain. [7]

Nimbark, Gohel, and Doshi have published a greedy O(n log n) algorithm, [8] but their proof of optimality is incorrect and their algorithm fails to produce the most efficient parentheses assignment for some matrix chains. [1]

Chin-Hu-Shing approximate solution

An algorithm created independently by Chin [9] and Hu & Shing [10] runs in O(n) and produces a parenthesization which is at most 15.47% worse than the optimal choice. In most cases the algorithm yields the optimal solution or a solution which is only 1-2 percent worse than the optimal one. [5]

The algorithm starts by translating the problem to the polygon partitioning problem. To each vertex V of the polygon is associated a weight w. Suppose we have three consecutive vertices V_{i−1}, V_i, V_{i+1}, and that V_min is the vertex with the minimum weight w_min in the whole polygon. We look at the quadrilateral with vertices V_min, V_{i−1}, V_i, V_{i+1} (in clockwise order). We can triangulate it in two ways:

(V_min, V_{i−1}, V_i) and (V_min, V_i, V_{i+1}), with cost w_min·w_{i−1}·w_i + w_min·w_i·w_{i+1}, or
(V_min, V_{i−1}, V_{i+1}) and (V_{i−1}, V_i, V_{i+1}), with cost w_min·w_{i−1}·w_{i+1} + w_{i−1}·w_i·w_{i+1}.

Therefore, if

w_min·w_{i−1}·w_{i+1} + w_{i−1}·w_i·w_{i+1} < w_min·w_{i−1}·w_i + w_min·w_i·w_{i+1},

or equivalently

1/w_min + 1/w_i < 1/w_{i−1} + 1/w_{i+1},

we remove the vertex V_i from the polygon and add the side V_{i−1}V_{i+1} to the triangulation. We repeat this process until no V_i satisfies the condition above. For all the remaining vertices V_i, we add the side V_min V_i to the triangulation. This gives us a nearly optimal triangulation.
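A minimal Python sketch of this greedy removal process, under the simplifying assumption that the polygon is scanned with a plain list (the published O(n) algorithms use more careful data structures; the name approx_triangulation_cost is illustrative):

def approx_triangulation_cost(weights):
    w = list(weights)              # weight of each polygon vertex (the matrix dimensions)
    poly = list(range(len(w)))     # vertices still in the polygon, in cyclic order
    w_min = min(w)                 # weight of the globally cheapest vertex
    cost = 0
    changed = True
    while changed and len(poly) > 3:
        changed = False
        for idx in range(len(poly)):
            prev_v, v, next_v = poly[idx - 1], poly[idx], poly[(idx + 1) % len(poly)]
            if w[v] == w_min:
                continue           # never remove a minimum-weight vertex
            # Remove V_i when cutting it off is locally cheaper than fanning through it
            if 1 / w_min + 1 / w[v] < 1 / w[prev_v] + 1 / w[next_v]:
                cost += w[prev_v] * w[v] * w[next_v]   # triangle V_{i-1} V_i V_{i+1}
                poly.pop(idx)
                changed = True
                break
    # Fan the remaining polygon out from the minimum-weight vertex
    m_idx = min(range(len(poly)), key=lambda t: w[poly[t]])
    rest = poly[m_idx + 1:] + poly[:m_idx]             # the other vertices, in order
    for a, b in zip(rest, rest[1:]):
        cost += w[poly[m_idx]] * w[a] * w[b]
    return cost

On the running example, approx_triangulation_cost([10, 30, 5, 60]) returns 4500, which here happens to be the optimal cost; in general the result is only guaranteed to be within the 15.47% bound quoted above.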

Generalizations

The matrix chain multiplication problem generalizes to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as all partial results), compute the minimum cost way to group the objects to apply the operation over the sequence. [11] A practical instance of this comes from the ordering of join operations in databases; see Query optimization § Join ordering.
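A minimal sketch of this abstract version, assuming the caller supplies a combine(x, y) function that returns the combined object together with the cost of combining it (the names min_chain_cost and multiply_shapes are illustrative, not a standard API):

from functools import cache

def min_chain_cost(items, combine):
    @cache
    def best(i, j):
        # Returns (description of the combined object items[i..j], minimal total cost)
        if i == j:
            return items[i], 0
        best_obj, best_cost = None, float("inf")
        for k in range(i, j):
            left, left_cost = best(i, k)
            right, right_cost = best(k + 1, j)
            obj, step_cost = combine(left, right)
            total = left_cost + right_cost + step_cost
            if total < best_cost:
                best_obj, best_cost = obj, total
        return best_obj, best_cost
    return best(0, len(items) - 1)[1]

def multiply_shapes(a, b):
    # Matrix chains are the special case where an object is a (rows, cols) pair
    (r1, c1), (_, c2) = a, b
    return (r1, c2), r1 * c1 * c2   # resulting shape and its scalar-multiplication cost

# min_chain_cost([(10, 30), (30, 5), (5, 60)], multiply_shapes) == 4500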

Another somewhat contrived special case of this is string concatenation of a list of strings. In C, for example, the cost of concatenating two strings of length m and n using strcat is O(m + n), since we need O(m) time to find the end of the first string and O(n) time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings. However, this optimization is rather useless because we can straightforwardly concatenate the strings in time proportional to the sum of their lengths. A similar problem exists for singly linked lists.
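With the min_chain_cost sketch above, this string-concatenation variant amounts to nothing more than a different combine function; modelling each string by its length is an illustrative simplification:

def concat_lengths(m, n):
    # Concatenating strings of lengths m and n with strcat costs O(m + n)
    return m + n, m + n   # resulting length and its cost

# min_chain_cost([3, 5, 2, 8], concat_lengths) gives the cost of the cheapest concatenation order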

Another generalization is to solve the problem when parallel processors are available. In this case, instead of adding the costs of computing each factor of a matrix product, we take the maximum because we can do them simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. There are even more sophisticated approaches. [12]
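Under the simplest model of this parallel setting, unlimited processors and no communication cost, only one line of the recurrence changes: the costs of the two factors are combined with max instead of being added. A minimal sketch (parallel_chain_cost is an illustrative name; the cited work models processor allocation much more carefully):

from functools import cache

def parallel_chain_cost(dims):
    # dims uses the same convention as matrixChainOrder above; assumes the two
    # factors of every product can always be computed at the same time
    @cache
    def best(i, j):
        if j - i <= 1:
            return 0
        return min(
            max(best(i, k), best(k, j))      # factors computed in parallel
            + dims[i] * dims[k] * dims[j]    # followed by one final multiplication
            for k in range(i + 1, j)
        )
    return best(0, len(dims) - 1)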



References

  1. Schwartz, Oded; Weiss, Elad (January 2019). "Revisiting Computation of Matrix Chain Products". SIAM Journal on Computing. 48 (5): 1481–1486. doi:10.1137/18m1195401. S2CID 203009883.
  2. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "15.2: Matrix-chain multiplication". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 331–338. ISBN 978-0-262-03293-3.
  3. Hu, T. C.; Shing, M.-T. (1982). "Computation of Matrix Chain Products, Part I" (PDF). SIAM Journal on Computing. 11 (2): 362–373. CiteSeerX 10.1.1.695.2923. doi:10.1137/0211028. ISSN 0097-5397.
  4. Hu, T. C.; Shing, M.-T. (1984). "Computation of Matrix Chain Products, Part II" (PDF). SIAM Journal on Computing. 13 (2): 228–251. CiteSeerX 10.1.1.695.4875. doi:10.1137/0213017. ISSN 0097-5397.
  5. Czumaj, Artur (1996). "Very Fast Approximation of the Matrix Chain Product Problem" (PDF). Journal of Algorithms. 21: 71–79. CiteSeerX 10.1.1.218.8168. doi:10.1006/jagm.1996.0037. S2CID 2818053. Archived from the original (PDF) on 2018-07-27.
  6. Hu, T. C.; Shing, M.-T. (1981). Computation of Matrix Chain Products, Part I, Part II (PDF) (Technical report). Stanford University, Department of Computer Science. Part II, page 3. STAN-CS-TR-81-875.
  7. Wang, Xiaodong; Zhu, Daxin; Tian, Jun (April 2013). "Efficient computation of matrix chain". 2013 8th International Conference on Computer Science & Education. pp. 703–707. doi:10.1109/ICCSE.2013.6553999. ISBN 978-1-4673-4463-0. S2CID 17303326.
  8. Nimbark, Hitesh; Gohel, Shobhen; Doshi, Nishant (2011). "A Novel Approach for Matrix Chain Multiplication Using Greedy Technique for Packet Processing". Computer Networks and Information Technologies. Communications in Computer and Information Science. Vol. 142. pp. 318–321. doi:10.1007/978-3-642-19542-6_58. ISBN 978-3-642-19541-9.
  9. Chin, Francis Y. (July 1978). "An O(n) algorithm for determining a near-optimal computation order of matrix chain products". Communications of the ACM. 21 (7): 544–549. doi:10.1145/359545.359556.
  10. Hu, T. C.; Shing, M.-T. (June 1981). "An O(n) algorithm to find a near-optimum partition of a convex polygon". Journal of Algorithms. 2 (2): 122–138. doi:10.1016/0196-6774(81)90014-6.
  11. Baumgartner, G.; Bernholdt, D.; Cociorva, D.; Harrison, R.; Nooijen, M.; Ramanujam, J.; Sadayappan, P. (2002). "A Performance Optimization Framework for Compilation of Tensor Contraction Expressions into Parallel Programs". 7th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS '02). Fort Lauderdale, Florida. Available at http://citeseer.ist.psu.edu/610463.html and at http://www.csc.lsu.edu/~gb/TCE/Publications/OptFramework-HIPS02.pdf
  12. Lee, Heejo; Kim, Jong; Hong, Sungje; Lee, Sunggu (April 2003). "Processor Allocation and Task Scheduling of Matrix Chain Products on Parallel Systems". IEEE Transactions on Parallel and Distributed Systems. 14 (4): 394–407. Archived 2011-07-22 at the Wayback Machine.