Lis (linear algebra library)

Stable release: 2.1.6 / July 3, 2024
Operating system: Cross-platform
Available in: C, Fortran
Type: Software library
License: New BSD License
Website: www.ssisc.org/lis/

Lis (Library of Iterative Solvers for linear systems, pronounced [lis]) is a scalable parallel software library for solving discretized linear equations and eigenvalue problems that mainly arise in the numerical solution of partial differential equations by iterative methods.[1][2][3] Although it is designed for parallel computers, the library can also be used on serial machines without any awareness of parallel processing.

Features

Lis provides facilities for:

  - solving systems of linear equations and eigenvalue problems with parallel iterative methods
  - parallel preconditioners, including an algebraic multigrid preconditioner
  - serial, multithreaded (OpenMP), and distributed-memory (MPI) execution
  - matrix and vector input and output in the Matrix Market and Harwell-Boeing formats
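As a sketch of how a solver from this collection is selected in practice, the following program chooses the iterative method and preconditioner at run time through an option string. The option names (-i for the iterative method, -p for the preconditioner, -tol for the convergence tolerance) follow the Lis user manual; the particular choices and the all-ones right-hand side are illustrative only.

#include <stdio.h>
#include "lis_config.h"
#include "lis.h"

LIS_INT main(LIS_INT argc, char* argv[])
{
    LIS_MATRIX A;
    LIS_VECTOR b, x;
    LIS_SOLVER solver;

    lis_initialize(&argc, &argv);

    /* Read a matrix from the file named on the command line */
    lis_matrix_create(LIS_COMM_WORLD, &A);
    lis_input_matrix(A, argv[1]);

    /* Illustrative right-hand side of all ones */
    lis_vector_duplicate(A, &b);
    lis_vector_duplicate(A, &x);
    lis_vector_set_all(1.0, b);

    /* Select BiCGSTAB with an ILU preconditioner and a tolerance of 1e-12;
       the option names follow the Lis manual, the values are illustrative */
    lis_solver_create(&solver);
    lis_solver_set_option("-i bicgstab -p ilu -tol 1.0e-12", solver);
    lis_solve(A, b, x, solver);

    lis_solver_destroy(solver);
    lis_matrix_destroy(A);
    lis_vector_destroy(b);
    lis_vector_destroy(x);
    lis_finalize();
    return 0;
}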

Example

A C program to solve the linear system Ax = b is written as follows:

#include <stdio.h>
#include "lis_config.h"
#include "lis.h"

LIS_INT main(LIS_INT argc, char* argv[])
{
    LIS_MATRIX A;
    LIS_VECTOR b, x;
    LIS_SOLVER solver;
    LIS_INT iter;
    double time;

    lis_initialize(&argc, &argv);

    /* Read the coefficient matrix A and the right-hand side b from files */
    lis_matrix_create(LIS_COMM_WORLD, &A);
    lis_vector_create(LIS_COMM_WORLD, &b);
    lis_vector_create(LIS_COMM_WORLD, &x);
    lis_input_matrix(A, argv[1]);
    lis_input_vector(b, argv[2]);
    lis_vector_duplicate(A, &x);

    /* Create the solver, read its options from the command line, and solve Ax = b */
    lis_solver_create(&solver);
    lis_solver_set_optionC(solver);
    lis_solve(A, b, x, solver);

    /* Report the iteration count and elapsed time */
    lis_solver_get_iter(solver, &iter);
    lis_solver_get_time(solver, &time);
    printf("number of iterations = %d\n", iter);
    printf("elapsed time = %e\n", time);

    /* Write the solution x in Matrix Market format */
    lis_output_vector(x, LIS_FMT_MM, argv[3]);

    lis_solver_destroy(solver);
    lis_matrix_destroy(A);
    lis_vector_destroy(b);
    lis_vector_destroy(x);
    lis_finalize();
    return 0;
}
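Assuming Lis has been installed where the compiler and linker can find it (the library is linked as -llis), the program above might be compiled and run as follows; the file names are illustrative, and since lis_solver_set_optionC passes the command line on to the solver, options such as the choice of iterative method can be appended:

cc -o test test.c -llis
./test A.mtx b.mtx x.mtx -i bicgstab

The matrix is read from A.mtx, the right-hand side vector from b.mtx, and the computed solution is written to x.mtx in Matrix Market format.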

System requirements

Installing Lis requires a C compiler. The Fortran interface requires a Fortran compiler, and the algebraic multigrid preconditioner requires a Fortran 90 compiler.[4] For parallel computing environments, an OpenMP or MPI library is required. Lis supports both the Matrix Market and Harwell-Boeing formats for importing and exporting user data.
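For illustration, a 3 × 3 sparse matrix with four nonzero entries could be stored in the Matrix Market coordinate format as the following plain-text file (the values are invented for this example); the header line declares the format, the next line gives the numbers of rows, columns, and nonzero entries, and each remaining line stores one entry as a 1-based row index, column index, and value:

%%MatrixMarket matrix coordinate real general
3 3 4
1 1 2.0
2 2 4.0
3 1 1.0
3 3 8.0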

Packages that use Lis

See also

References

  1. Akira Nishida (2010). "Experience in Developing an Open Source Scalable Software Infrastructure in Japan". Computational Science and Its Applications – ICCSA 2010. Lecture Notes in Computer Science. Vol. 6017. Springer. pp. 87–98. doi:10.1007/978-3-642-12165-4_36. ISBN 978-3-642-12164-7.
  2. Hisashi Kotakemori; Hidehiko Hasegawa; Tamito Kajiyama; Akira Nukada; Reiji Suda; Akira Nishida (2008). "Performance Evaluation of Parallel Sparse Matrix-Vector Products on SGI Altix 3700". OpenMP Shared Memory Parallel Programming. Lecture Notes in Computer Science. Vol. 4315. Springer. pp. 153–163. doi:10.1007/978-3-540-68555-5_13. ISBN 978-3-540-68554-8.
  3. Hisashi Kotakemori; Hidehiko Hasegawa; Akira Nishida (2005). "Performance Evaluation of a Parallel Iterative Method Library using OpenMP". Proceedings of the 8th International Conference on High Performance Computing in Asia Pacific Region (HPC Asia 2005). IEEE. pp. 432–436. doi:10.1109/HPCASIA.2005.74. ISBN 0-7695-2486-9. S2CID 6402585.
  4. Akihiro Fujii; Akira Nishida; Yoshio Oyanagi (2005). "Evaluation of Parallel Aggregate Creation Orders: Smoothed Aggregation Algebraic Multigrid Method". High Performance Computational Science and Engineering. Springer. pp. 99–122. doi:10.1007/0-387-24049-7_6. ISBN 1-4419-3684-X. S2CID 118053459.