Skyline matrix

In scientific computing, skyline matrix storage, also known as SKS, variable-band storage, or envelope storage, [1] is a sparse-matrix storage format that reduces the storage requirements of a matrix more than banded storage does. In banded storage, all entries within a fixed distance from the diagonal (the half-bandwidth) are stored. In column-oriented skyline storage, only the entries from the first nonzero entry to the last nonzero entry in each column are stored. There is also row-oriented skyline storage, and, for symmetric matrices, only one triangle is usually stored. [2]

A column-oriented skyline matrix (top) and the corresponding storage structure (bottom). The name comes from the resemblance of the profile of the topmost nonzero values to a skyline of skyscrapers.
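To make the layout concrete, the following sketch (in Python with NumPy; the names to_skyline_upper, values, and colptr are chosen here for illustration and are not from any standard library) packs the upper triangle of a small symmetric matrix into column-oriented skyline form. Each column is stored contiguously from its first nonzero row down to the diagonal, so any zeros lying inside that span are stored explicitly:

    import numpy as np

    def to_skyline_upper(A):
        """Pack the upper triangle of a symmetric matrix into
        column-oriented skyline (SKS) form: values holds the stacked
        column segments, and colptr[j] is the index in values where
        column j starts (colptr[n] is the total length)."""
        n = A.shape[0]
        colptr = np.zeros(n + 1, dtype=int)
        segments = []
        for j in range(n):
            rows = np.nonzero(A[: j + 1, j])[0]
            first = rows[0] if rows.size else j  # empty column: keep the diagonal
            segments.append(A[first : j + 1, j])
            colptr[j + 1] = colptr[j] + (j - first + 1)
        return np.concatenate(segments), colptr

    A = np.array([[9., 0., 1., 0.],
                  [0., 4., 2., 0.],
                  [1., 2., 8., 3.],
                  [0., 0., 3., 7.]])
    values, colptr = to_skyline_upper(A)
    print(values)  # [9. 4. 1. 2. 8. 3. 7.] -- 7 entries instead of 10
    print(colptr)  # [0 1 2 5 7]

Storing the full upper triangle of this matrix would take ten entries; its skyline needs only seven.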

Skyline storage has become very popular in finite element codes for structural mechanics, because the skyline is preserved by Cholesky decomposition (a method of solving systems of linear equations with a symmetric, positive-definite matrix): all fill-in falls within the skyline, and systems of equations arising from finite elements have a relatively small skyline. In addition, the effort of coding skyline Cholesky [3] is about the same as for Cholesky on banded matrices (available, e.g., in LAPACK; for a prototype skyline code, see [3]).
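The fill-in property can be seen directly in code. The following is a minimal sketch (not the prototype code of [3]; the dense array and the profile argument first are illustrative choices made here for clarity) of a column-oriented Cholesky factorization A = UᵀU in which every read and write in column j stays between first[j], the row of that column's first nonzero, and the diagonal, so the factor U has exactly the skyline of A:

    import numpy as np

    def skyline_cholesky(A, first):
        """Factor a symmetric positive-definite A as U.T @ U, where
        first[j] is the row index of the first nonzero in column j of
        the upper triangle.  All work in column j stays within rows
        first[j]..j, i.e. inside the skyline."""
        n = A.shape[0]
        U = np.zeros_like(A, dtype=float)
        for j in range(n):
            for i in range(first[j], j):
                k0 = max(first[i], first[j])  # overlap of the two column profiles
                s = U[k0:i, i] @ U[k0:i, j]
                U[i, j] = (A[i, j] - s) / U[i, i]
            d = A[j, j] - U[first[j]:j, j] @ U[first[j]:j, j]
            U[j, j] = np.sqrt(d)
        return U

    A = np.array([[9., 0., 1., 0.],
                  [0., 4., 2., 0.],
                  [1., 2., 8., 3.],
                  [0., 0., 3., 7.]])
    U = skyline_cholesky(A, first=[0, 1, 0, 2])  # column profile of A
    assert np.allclose(U.T @ U, A)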

Before storing a matrix in skyline format, the rows and columns are typically renumbered to reduce the size of the skyline (the number of nonzero entries stored) and to decrease the number of operations in the skyline Cholesky algorithm. The same heuristic renumbering algorithms that reduce the bandwidth are also used to reduce the skyline. One of the most basic and earliest such algorithms is the reverse Cuthill–McKee algorithm.
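As an illustration (this sketch assumes SciPy, whose scipy.sparse.csgraph module provides a reverse Cuthill–McKee implementation; the helper profile is defined here only for the example), reordering a matrix with an entry far from the diagonal shrinks its skyline:

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # A symmetric pattern with one entry far from the diagonal.
    A = np.array([[4., 0., 0., 1.],
                  [0., 5., 2., 0.],
                  [0., 2., 6., 0.],
                  [1., 0., 0., 3.]])

    perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
    B = A[np.ix_(perm, perm)]  # apply the symmetric permutation

    def profile(M):
        """Number of entries a column-oriented skyline would store."""
        n = M.shape[0]
        return sum(j - np.nonzero(M[: j + 1, j])[0][0] + 1 for j in range(n))

    print(profile(A), profile(B))  # e.g. 8 6: the reordered matrix stores fewer entries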

However, skyline storage is not as popular for very large systems (many millions of equations), because skyline Cholesky is not so easily adapted for massively parallel computing, and general sparse methods, [4] which store only the nonzero entries of the matrix, become more efficient for very large problems because they produce much less fill-in.

See also

  Banded matrix
  Cholesky decomposition
  Frontal solver
  Sparse matrix


References

  1. Watkins, David S. (2002), Fundamentals of Matrix Computations (2nd ed.), New York: John Wiley & Sons, Inc., p. 60, ISBN 0-471-21394-2.
  2. Barrett, Richard; Berry; Chan; Demmel; Donato; Dongarra; Eijkhout; Pozo; Romine; Van der Vorst (1994), "Skyline Storage (SKS)", Templates for the Solution of Linear Systems, SIAM, ISBN 0-89871-328-5.
  3. George, Alan; Liu, Joseph W. H. (1981), Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, ISBN 0-13-165274-5. The book also contains the description and source code of simple sparse matrix routines, still useful even if long superseded.
  4. Duff, Iain S.; Erisman, Albert M.; Reid, John K. (1986), Direct Methods for Sparse Matrices, Oxford University Press, ISBN 0-19-853408-6.