Sparse matrix

Example of a sparse matrix. The matrix pictured contains only 9 non-zero elements, with 26 zero elements; its sparsity is 74% and its density is 26%.

A sparse matrix obtained when solving a finite element problem in two dimensions. The non-zero elements are shown in black.

In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. [1] There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse, but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. [1] The number of zero-valued elements divided by the total number of elements (e.g., m × n for an m × n matrix) is sometimes referred to as the sparsity of the matrix.
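As a small illustration, the following sketch (assuming NumPy) builds a matrix matching the counts quoted in the caption above (9 non-zero and 26 zero entries) and computes its sparsity and density:

```python
import numpy as np

# A small matrix in which most entries are zero (9 non-zero out of 35).
A = np.array([
    [11, 22,  0,  0,  0,  0,  0],
    [ 0, 33, 44,  0,  0,  0,  0],
    [ 0,  0, 55, 66, 77,  0,  0],
    [ 0,  0,  0,  0,  0, 88,  0],
    [ 0,  0,  0,  0,  0,  0, 99],
])

total = A.size                          # m * n = 35 entries in total
nonzero = np.count_nonzero(A)           # 9 non-zero entries
sparsity = (total - nonzero) / total    # fraction of zero-valued entries
density = nonzero / total               # fraction of non-zero entries
print(f"sparsity = {sparsity:.0%}, density = {density:.0%}")  # 74% and 26%
```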

Conceptually, sparsity corresponds to systems with few pairwise interactions. For example, consider a line of balls connected by springs from one to the next: this is a sparse system as only adjacent balls are coupled. By contrast, if the same line of balls were to have springs connecting each ball to all other balls, the system would correspond to a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory and numerical analysis, which typically have a low density of significant data or connections. Large sparse matrices often appear in scientific or engineering applications when solving partial differential equations.

When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Specialized computers have been made for sparse matrices, [2] as they are common in the machine learning field. [3] Operations using standard dense-matrix structures and algorithms are slow and inefficient when applied to large sparse matrices as processing and memory are wasted on the zeros. Sparse data is by nature more easily compressed and thus requires significantly less storage. Some very large sparse matrices are infeasible to manipulate using standard dense-matrix algorithms.

Special cases

Banded

An important special type of sparse matrix is the band matrix, defined as follows. The lower bandwidth of a matrix A is the smallest number p such that the entry ai,j vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest number p such that ai,j = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1. As another example, a sparse matrix whose outermost non-zero entries lie on the third sub- and super-diagonals has lower and upper bandwidth both equal to 3.

Matrices with reasonably small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; or one can sometimes apply dense matrix algorithms and gain efficiency simply by looping over a reduced number of indices.
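As a rough sketch of these definitions, the lower and upper bandwidth can be read off from the positions of the non-zero entries (assuming NumPy; the helper function below is illustrative):

```python
import numpy as np

def bandwidths(A):
    """Return the (lower, upper) bandwidth of a 2-D array A.

    The lower bandwidth is the smallest p with A[i, j] == 0 whenever
    i > j + p; the upper bandwidth is defined symmetrically.
    """
    rows, cols = np.nonzero(A)
    if rows.size == 0:
        return 0, 0
    lower = max(int((rows - cols).max()), 0)
    upper = max(int((cols - rows).max()), 0)
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1.
T = np.array([[2, 1, 0, 0],
              [1, 2, 1, 0],
              [0, 1, 2, 1],
              [0, 0, 1, 2]])
print(bandwidths(T))  # (1, 1)
```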

By rearranging the rows and columns of a matrix A, it may be possible to obtain a matrix A′ with a smaller bandwidth. A number of algorithms are designed for bandwidth minimization.

Diagonal

A very efficient structure for an extreme case of band matrices, the diagonal matrix, is to store just the entries in the main diagonal as a one-dimensional array, so a diagonal n × n matrix requires only n entries.
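A minimal sketch of this idea, assuming NumPy: store only the main diagonal, so the matrix-vector product reduces to an element-wise multiplication.

```python
import numpy as np

n = 5
diag = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # n stored entries instead of n * n
x = np.ones(n)

# For a diagonal matrix D, the product D @ x is just an element-wise product.
y = diag * x
print(y)  # [1. 2. 3. 4. 5.]
```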

Symmetric

A symmetric sparse matrix arises as the adjacency matrix of an undirected graph; it can be stored efficiently as an adjacency list.

Block diagonal

A block-diagonal matrix consists of square sub-matrices (blocks) placed along its main diagonal, with all entries outside those blocks equal to zero. A block-diagonal matrix A has the form

A = diag(A1, A2, ..., An),

where Ak is a square matrix for all k = 1, ..., n; in other words, A is the direct sum of A1, ..., An.
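For instance, SciPy's scipy.sparse.block_diag can assemble such a matrix from its diagonal blocks (a sketch with made-up block values):

```python
import numpy as np
from scipy.sparse import block_diag

A1 = np.array([[1, 2],
               [3, 4]])
A2 = np.array([[5]])
A3 = np.array([[6, 7],
               [8, 9]])

# Place A1, A2, A3 along the diagonal; every entry outside the blocks is zero.
A = block_diag((A1, A2, A3))
print(A.toarray())
```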

Use

Reducing fill-in

The fill-in of a matrix consists of those entries that change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.
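The effect of reordering can be illustrated on a small "arrowhead" matrix (a sketch, assuming NumPy): factoring it as given fills in the whole lower triangle, while reversing the order of the rows and columns produces no fill-in at all.

```python
import numpy as np

# Symmetric positive-definite "arrowhead" matrix: dense first row and column,
# otherwise diagonal.
n = 6
A = np.eye(n) * 4.0
A[0, :] = 1.0
A[:, 0] = 1.0
A[0, 0] = n

L_bad = np.linalg.cholesky(A)                       # factor without reordering
perm = np.arange(n)[::-1]                           # move the dense row/column last
L_good = np.linalg.cholesky(A[np.ix_(perm, perm)])  # factor after reordering

nnz = lambda L: np.count_nonzero(np.abs(L) > 1e-12)
print(nnz(L_bad), nnz(L_good))  # 21 vs. 11: full fill-in vs. no fill-in
```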

Methods other than the Cholesky decomposition are also in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can differ between methods, and symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky decomposition to compute worst-case fill-in.

Solving sparse matrix equations

Both iterative and direct methods exist for solving sparse matrix equations.

Iterative methods, such as the conjugate gradient method and GMRES, rely on fast computation of matrix-vector products Ax, where the matrix A is sparse. The use of preconditioners can significantly accelerate convergence of such iterative methods.
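A brief sketch of the iterative route, assuming SciPy and a made-up tridiagonal test system: the conjugate gradient method only touches the matrix through sparse matrix-vector products.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse symmetric positive-definite system: a 1-D Poisson-like tridiagonal matrix.
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                       # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))   # residual norm of the computed solution
```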

Storage

A matrix is typically stored as a two-dimensional array. Each entry in the array represents an element ai,j of the matrix and is accessed by the two indices i and j. Conventionally, i is the row index, numbered from top to bottom, and j is the column index, numbered from left to right. For an m × n matrix, the amount of memory required to store the matrix in this format is proportional to m × n (disregarding the fact that the dimensions of the matrix also need to be stored).

In the case of a sparse matrix, substantial memory requirement reductions can be realized by storing only the non-zero entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to the basic approach. The trade-off is that accessing the individual elements becomes more complex and additional structures are needed to be able to recover the original matrix unambiguously.

Formats can be divided into two groups: those that support efficient modification, such as DOK, LIL, or COO, which are typically used to construct the matrices; and those that support efficient access and matrix operations, such as CSR or CSC.

Dictionary of keys (DOK)

DOK consists of a dictionary that maps (row, column)-pairs to the value of the elements. Elements that are missing from the dictionary are taken to be zero. The format is good for incrementally constructing a sparse matrix in random order, but poor for iterating over non-zero values in lexicographical order. One typically constructs a matrix in this format and then converts to another more efficient format for processing. [4]
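A minimal sketch of the idea in plain Python (scipy.sparse.dok_matrix offers the same concept behind a matrix interface):

```python
# Dictionary of keys: map (row, column) -> value; absent keys are implicitly zero.
dok = {}

def set_entry(d, i, j, v):
    if v != 0:
        d[(i, j)] = v
    else:
        d.pop((i, j), None)    # do not store explicit zeros

def get_entry(d, i, j):
    return d.get((i, j), 0)    # missing entries read back as zero

set_entry(dok, 0, 0, 5)
set_entry(dok, 3, 1, 6)
print(get_entry(dok, 0, 0), get_entry(dok, 2, 2))  # 5 0
```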

List of lists (LIL)

LIL stores one list per row, with each entry containing the column index and the value. Typically, these entries are kept sorted by column index for faster lookup. This is another format good for incremental matrix construction. [5]
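One way to picture this layout in plain Python (illustrative only; scipy.sparse.lil_matrix is the library counterpart): each row holds a list of (column, value) pairs kept sorted by column index.

```python
import bisect

rows = [[] for _ in range(4)]          # one list of (column, value) pairs per row

def lil_set(rows, i, j, v):
    cols = [c for c, _ in rows[i]]
    k = bisect.bisect_left(cols, j)    # keep each row sorted by column index
    if k < len(cols) and cols[k] == j:
        rows[i][k] = (j, v)            # overwrite an existing entry
    else:
        rows[i].insert(k, (j, v))

lil_set(rows, 1, 3, 40)
lil_set(rows, 1, 1, 30)
print(rows[1])  # [(1, 30), (3, 40)]
```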

Coordinate list (COO)

COO stores a list of (row, column, value) tuples. Ideally, the entries are sorted first by row index and then by column index, to improve random access times. This is another format that is good for incremental matrix construction. [6]
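With SciPy, for example, a matrix can be assembled from three parallel coordinate lists and then converted to CSR once construction is finished (a sketch with arbitrary values):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Parallel lists of row indices, column indices and values.
row = np.array([0, 1, 1, 3])
col = np.array([0, 1, 3, 2])
val = np.array([5.0, 8.0, 3.0, 6.0])

A = coo_matrix((val, (row, col)), shape=(4, 4))
print(A.toarray())

A_csr = A.tocsr()   # convert before doing repeated matrix-vector products
```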

Compressed sparse row (CSR, CRS or Yale format)

The compressed sparse row (CSR) or compressed row storage (CRS) or Yale format represents a matrix M by three (one-dimensional) arrays, that respectively contain nonzero values, the extents of rows, and column indices. It is similar to COO, but compresses the row indices, hence the name. This format allows fast row access and matrix-vector multiplications (Mx). The CSR format has been in use since at least the mid-1960s, with the first complete description appearing in 1967. [7]

The CSR format stores a sparse m × n matrix M in row form using three (one-dimensional) arrays (V, COL_INDEX, ROW_INDEX). Let NNZ denote the number of nonzero entries in M. (Note that zero-based indices shall be used here.)

For example, the matrix

( 5 0 0 0 )
( 0 8 0 0 )
( 0 0 3 0 )
( 0 6 0 0 )

is a 4 × 4 matrix with 4 nonzero elements, hence

V         = [ 5 8 3 6 ]
COL_INDEX = [ 0 1 2 1 ]
ROW_INDEX = [ 0 1 2 3 4 ]

assuming a zero-indexed language.

To extract a row, we first define:

row_start = ROW_INDEX[row]
row_end   = ROW_INDEX[row + 1]

Then we take slices from V and COL_INDEX starting at row_start and ending at row_end.

To extract row 1 (the second row) of this matrix, we read row_start = ROW_INDEX[1] = 1 and row_end = ROW_INDEX[2] = 2. Then we take the slices V[1:2] = [8] and COL_INDEX[1:2] = [1]. We now know that row 1 contains one element, at column 1, with value 8.

In this case the CSR representation contains 13 entries, compared to 16 in the original matrix. The CSR format saves on memory only when NNZ < (m (n − 1) − 1) / 2.

As another example, the matrix

( 10 20  0  0  0  0 )
(  0 30  0 40  0  0 )
(  0  0 50 60 70  0 )
(  0  0  0  0  0 80 )

is a 4 × 6 matrix (24 entries) with 8 nonzero elements, so

V         = [ 10 20 30 40 50 60 70 80 ]
COL_INDEX = [  0  1  1  3  2  3  4  5 ]
ROW_INDEX = [  0  2  4  7  8 ]

The whole is stored as 21 entries: 8 in V, 8 in COL_INDEX, and 5 in ROW_INDEX.

Note that in this format, the first value of ROW_INDEX is always zero and the last is always NNZ, so they are in some sense redundant (although in programming languages where the array length needs to be explicitly stored, NNZ would not be redundant). Nonetheless, this does avoid the need to handle an exceptional case when computing the length of each row, as it guarantees the formula ROW_INDEX[i + 1] − ROW_INDEX[i] works for any row i. Moreover, the memory cost of this redundant storage is likely insignificant for a sufficiently large matrix.
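The same three arrays also support the matrix-vector product Mx mentioned above: looping over each row's slice accumulates the product without touching any zero entries. A short sketch using the 4 × 6 example:

```python
# CSR arrays of the 4 x 6 example above.
V         = [10, 20, 30, 40, 50, 60, 70, 80]
COL_INDEX = [ 0,  1,  1,  3,  2,  3,  4,  5]
ROW_INDEX = [ 0,  2,  4,  7,  8]

def csr_matvec(V, COL_INDEX, ROW_INDEX, x):
    """Compute y = M x for a matrix M stored in CSR form."""
    m = len(ROW_INDEX) - 1
    y = [0.0] * m
    for i in range(m):
        # The non-zeros of row i occupy positions ROW_INDEX[i] .. ROW_INDEX[i+1] - 1.
        for k in range(ROW_INDEX[i], ROW_INDEX[i + 1]):
            y[i] += V[k] * x[COL_INDEX[k]]
    return y

print(csr_matvec(V, COL_INDEX, ROW_INDEX, [1, 1, 1, 1, 1, 1]))  # [30.0, 70.0, 180.0, 80.0]
```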

The (old and new) Yale sparse matrix formats are instances of the CSR scheme. The old Yale format works exactly as described above, with three arrays; the new format combines ROW_INDEX and COL_INDEX into a single array and handles the diagonal of the matrix separately. [9]

For logical adjacency matrices, the data array can be omitted, as the existence of an entry in the row array is sufficient to model a binary adjacency relation.

It is likely known as the Yale format because it was proposed in the 1977 Yale Sparse Matrix Package report from the Department of Computer Science at Yale University. [10]

Compressed sparse column (CSC or CCS)

CSC is similar to CSR except that values are read first by column, a row index is stored for each value, and column pointers are stored. For example, CSC is (val, row_ind, col_ptr), where val is an array of the (top-to-bottom, then left-to-right) non-zero values of the matrix; row_ind is the row indices corresponding to the values; and, col_ptr is the list of val indexes where each column starts. The name is based on the fact that column index information is compressed relative to the COO format. One typically uses another format (LIL, DOK, COO) for construction. This format is efficient for arithmetic operations, column slicing, and matrix-vector products. This is the traditional format for specifying a sparse matrix in MATLAB (via the sparse function).
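As a sketch of the column-wise analogue of the CSR row extraction, the CSC arrays of the 4 × 4 example from the previous section make column slicing a simple pointer lookup:

```python
# CSC arrays of the 4 x 4 example (5 at (0,0), 8 at (1,1), 6 at (3,1), 3 at (2,2)).
val     = [5, 8, 6, 3]
row_ind = [0, 1, 3, 2]
col_ptr = [0, 1, 3, 4, 4]

def csc_column(val, row_ind, col_ptr, j):
    """Return the non-zeros of column j as (row, value) pairs."""
    start, end = col_ptr[j], col_ptr[j + 1]
    return list(zip(row_ind[start:end], val[start:end]))

print(csc_column(val, row_ind, col_ptr, 1))  # [(1, 8), (3, 6)]
print(csc_column(val, row_ind, col_ptr, 3))  # []  (column 3 is entirely zero)
```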

Software

Many software libraries support sparse matrices and provide solvers for sparse matrix equations; many of these libraries are open-source.

History

The term sparse matrix was possibly coined by Harry Markowitz, who initiated some pioneering work but then left the field. [11]

Notes

  1. Yan, Di; Wu, Tao; Liu, Ying; Gao, Yang (2017). "An efficient sparse-dense matrix multiplication on a multicore system". 2017 IEEE 17th International Conference on Communication Technology (ICCT). IEEE. pp. 1880–1883. doi:10.1109/icct.2017.8359956. ISBN 978-1-5090-3944-9. The computation kernel of DNN is large sparse-dense matrix multiplication. In the field of numerical analysis, a sparse matrix is a matrix populated primarily with zeros as elements of the table. By contrast, if the number of non-zero elements in a matrix is relatively large, then it is commonly considered a dense matrix. The fraction of zero elements (non-zero elements) in a matrix is called the sparsity (density). Operations using standard dense-matrix structures and algorithms are relatively slow and consume large amounts of memory when applied to large sparse matrices.
  2. "Cerebras Systems Unveils the Industry's First Trillion Transistor Chip". www.businesswire.com. 2019-08-19. Retrieved 2019-12-02. The WSE contains 400,000 AI-optimized compute cores. Called SLAC™ for Sparse Linear Algebra Cores, the compute cores are flexible, programmable, and optimized for the sparse linear algebra that underpins all neural network computation
  3. "Argonne National Laboratory Deploys Cerebras CS-1, the World's Fastest Artificial Intelligence Computer | Argonne National Laboratory". www.anl.gov (Press release). Retrieved 2019-12-02. The WSE is the largest chip ever made at 46,225 square millimeters in area, it is 56.7 times larger than the largest graphics processing unit. It contains 78 times more AI optimized compute cores, 3,000 times more high speed, on-chip memory, 10,000 times more memory bandwidth, and 33,000 times more communication bandwidth.
  4. See scipy.sparse.dok_matrix
  5. See scipy.sparse.lil_matrix
  6. See scipy.sparse.coo_matrix
  7. Buluç, Aydın; Fineman, Jeremy T.; Frigo, Matteo; Gilbert, John R.; Leiserson, Charles E. (2009). Parallel sparse matrix-vector and matrix-transpose-vector multiplication using compressed sparse blocks (PDF). ACM Symp. on Parallelism in Algorithms and Architectures. CiteSeerX   10.1.1.211.5256 .
  8. Saad, Yousef (2003). Iterative methods for sparse linear systems. SIAM.
  9. Bank, Randolph E.; Douglas, Craig C. (1993), "Sparse Matrix Multiplication Package (SMMP)" (PDF), Advances in Computational Mathematics, 1: 127–137, doi:10.1007/BF02070824, S2CID   6412241
  10. Eisenstat, S. C.; Gursky, M. C.; Schultz, M. H.; Sherman, A. H. (April 1977). "Yale Sparse Matrix Package" (PDF). Archived (PDF) from the original on April 6, 2019. Retrieved 6 April 2019.
  11. Oral history interview with Harry M. Markowitz, pp. 9, 10.


References

Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins University Press.

Further reading

  1. Saad, Yousef (2003). Iterative methods for sparse linear systems. SIAM.