The following tables compare linear algebra software libraries, covering both specialized libraries and general-purpose libraries with significant linear algebra coverage.
Library | Creator | Language | First public release | Latest stable version | Source code availability | License | Notes
---|---|---|---|---|---|---|---
ALGLIB [1] | ALGLIB Project | C++, C#, Python, FreePascal | 2006 | 3.19.0 / 06.2022 | Free | GPL/commercial | General purpose numerical analysis library with C++, C#, Python, FreePascal interfaces. |
Armadillo [2] [3] | NICTA | C++ | 2009 | 9.200 / 10.2018 | Free | Apache License 2.0 | C++ template library for linear algebra; includes various decompositions and factorisations; syntax (API) is similar to MATLAB. |
ATLAS | R. Clint Whaley et al. | C | 2001 | 3.10.3 / 07.2016 | Free | BSD | Automatically tuned implementation of BLAS. Also includes LU and Cholesky decompositions. |
Blaze [4] | K. Iglberger et al. | C++ | 2012 | 3.8 / 08.2020 | Free | BSD | Blaze is an open-source, high-performance C++ math library for dense and sparse arithmetic. |
Blitz++ | Todd Veldhuizen | C++ | ? | 1.0.2 / 10.2019 | Free | GPL | Blitz++ is a C++ template class library that provides high-performance multidimensional array containers for scientific computing. |
Boost uBLAS | J. Walter, M. Koch | C++ | 2000 | 1.70.0 / 04.2019 | Free | Boost Software License | uBLAS is a C++ template class library that provides BLAS level 1, 2, 3 functionality for dense, packed and sparse matrices. |
Dlib | Davis E. King | C++ | 2006 | 19.7 / 09.2017 | Free | Boost Software License | C++ template library; binds to optimized BLAS such as the Intel MKL; includes matrix decompositions, non-linear solvers, and machine learning tooling.
Eigen | Benoît Jacob | C++ | 2008 | 3.4.0 / 08.2021 | Free | MPL2 | Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. |
Fastor [5] | R. Poya, A. J. Gil and R. Ortigosa | C++ | 2016 | 0.6.4 / 06.2023 | Free | MIT License | Fastor is a high performance tensor (fixed multi-dimensional array) library for modern C++. |
GNU Scientific Library [6] | GNU Project | C, C++ | 1996 | 2.5 / 06.2018 | Free | GPL | General purpose numerical analysis library. Includes some support for linear algebra. |
IMSL Numerical Libraries | Rogue Wave Software | C, Java, C#, Fortran, Python | 1970 | many components | Non-free | Proprietary | General purpose numerical analysis library. |
LAPACK [7] [8] | — | Fortran | 1992 | 3.9.0 / 11.2019 | Free | 3-clause BSD | Numerical linear algebra library with a long history.
librsb | Michele Martone | C, Fortran, M4 | 2011 | 1.2 / 09.2016 | Free | GPL | High-performance multi-threaded primitives for large sparse matrices. Supports operations for iterative solvers: multiplication, triangular solve, scaling, matrix I/O, and matrix rendering. Many matrix variants, e.g. symmetric, Hermitian, complex, and quadruple precision.
oneMKL | Intel | C, C++, Fortran | 2003 | 2023.1 / 03.2023 | Non-free | Intel Simplified Software License | Numerical analysis library optimized for Intel CPUs and GPUs. A C++ SYCL-based reference API implementation is available in source form for free.
Math.NET Numerics | C. Rüegg, M. Cuda, et al. | C# | 2009 | 3.20 / 07.2017 | Free | MIT License | C# numerical analysis library with linear algebra support |
Matrix Template Library | Jeremy Siek, Peter Gottschling, Andrew Lumsdaine, et al. | C++ | 1998 | 4.0 / 2018 | Free | Boost Software License | High-performance C++ linear algebra library based on Generic programming |
NAG Numerical Library | The Numerical Algorithms Group | C, Fortran | 1971 | many components | Non-free | Proprietary | General purpose numerical analysis library. |
NMath | CenterSpace Software | C# | 2003 | 7.1 / 12.2019 | Non-free | Proprietary | Math and statistical libraries for the .NET Framework
SciPy [9] [10] [11] | Enthought | Python | 2001 | 1.0.0 / 10.2017 | Free | BSD | Python library for scientific and technical computing with substantial linear algebra coverage.
Xtensor [12] | S. Corlay, W. Vollprecht, J. Mabille et al. | C++ | 2016 | 0.21.10 / 11.2020 | Free | 3-clause BSD | Xtensor is a C++ library meant for numerical analysis with multi-dimensional array expressions, broadcasting and lazy computing. |
Matrix types (special types like bidiagonal/tridiagonal are not listed): Real, Complex, SPD (symmetric positive definite), HPD (Hermitian positive definite), SY (symmetric), HE (Hermitian), BND (banded).
Operations: TF (triangular factorizations, e.g. LU, Cholesky), OF (orthogonal factorizations, e.g. QR), EVP (eigenvalue problems), SVD (singular value decomposition), GEVP (generalized eigenvalue problems), GSVD (generalized singular value decomposition).
Library | Real | Complex | SPD | HPD | SY | HE | BND | TF | OF | EVP | SVD | GEVP | GSVD
---|---|---|---|---|---|---|---|---|---|---|---|---|---
ALGLIB | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | Yes | No |
ATLAS | Yes | Yes | Yes | Yes | No | No | No | Yes | No | No | No | No | No |
Dlib | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | No | No |
GNU Scientific Library | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
ILNumerics.Net | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | No | No |
IMSL Numerical Libraries | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes | Yes | No |
LAPACK | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
oneMKL | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NAG Numerical Library | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
NMath | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
SciPy (Python packages) | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | No | No |
Eigen | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No |
Armadillo | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | Yes | No |
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
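The factor-once, solve-by-substitution pattern can be sketched with NumPy (a minimal example; `numpy.linalg.cholesky` returns the lower-triangular factor):

```python
import numpy as np

# A symmetric positive-definite matrix (illustrative values).
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Cholesky factor: A = L @ L.T with L lower triangular.
L = np.linalg.cholesky(A)

# Solve A x = b via two triangular solves, reusing the factor.
b = np.array([1.0, 2.0])
y = np.linalg.solve(L, b)    # forward substitution: L y = b
x = np.linalg.solve(L.T, y)  # back substitution:    L.T x = y
```

In practice the triangular solves would use a dedicated triangular routine (e.g. `scipy.linalg.solve_triangular`) rather than a general solver, which is where the factor-of-two savings over LU comes from.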
In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.
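As a concrete instance, the LU decomposition with partial pivoting can be sketched with SciPy (`scipy.linalg.lu` returns the permutation matrix explicitly):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

# LU decomposition with partial pivoting: A = P @ L @ U,
# with P a permutation, L unit lower triangular, U upper triangular.
P, L, U = lu(A)
```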
In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.
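A common use of the pseudoinverse is computing least-squares solutions of overdetermined systems; a minimal NumPy sketch (`numpy.linalg.pinv` computes it via the SVD):

```python
import numpy as np

# Overdetermined system: three equations, two unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

# Moore-Penrose pseudoinverse via SVD.
A_pinv = np.linalg.pinv(A)

# x = A^+ b is the least-squares solution of A x ~ b.
x = A_pinv @ b
```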
In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily equivalent to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.
Jack Joseph Dongarra is an American computer scientist and mathematician. He is University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He is also a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, holds a Turing Fellowship in the School of Mathematics at the University of Manchester, and is an adjunct professor in the Computer Science Department at Rice University. He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. He received the Turing Award in 2021.
LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.
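SciPy exposes low-level LAPACK wrappers directly; as a sketch, the `dgesv` driver (double-precision general solve, LU with partial pivoting) can be called through `scipy.linalg.lapack`:

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# dgesv: LAPACK driver for A x = b via LU with partial pivoting.
# Returns the LU factors, the pivot indices, the solution, and an
# info flag (0 means success).
lu, piv, x, info = lapack.dgesv(A, b)
```

Higher-level calls such as `numpy.linalg.solve` or `scipy.linalg.solve` dispatch to the same LAPACK routine under the hood; the raw wrapper is shown here only to make the LAPACK layer visible.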
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
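As an illustration of the routine naming scheme, the Level-3 routine `dgemm` (double-precision general matrix-matrix multiply) is accessible from Python via SciPy's BLAS wrappers:

```python
import numpy as np
from scipy.linalg import blas

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Level-3 BLAS: dgemm computes alpha * A @ B (double precision).
C = blas.dgemm(alpha=1.0, a=A, b=B)
```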
Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems.
EISPACK is a software library for numerical computation of eigenvalues and eigenvectors of matrices, written in FORTRAN. It contains subroutines for calculating the eigenvalues of nine classes of matrices: complex general, complex Hermitian, real general, real symmetric, real symmetric banded, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices. In addition it includes subroutines to perform a singular value decomposition.
Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
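The error amplification described above can be seen with an ill-conditioned system; a sketch using the Hilbert matrix (available as `scipy.linalg.hilbert`), whose condition number grows rapidly with dimension:

```python
import numpy as np
from scipy.linalg import hilbert

# Hilbert matrices are notoriously ill-conditioned: rounding errors
# of order machine epsilon are amplified by the condition number
# when solving H x = b.
H = hilbert(10)
x_true = np.ones(10)
b = H @ x_true

x = np.linalg.solve(H, b)
error = np.max(np.abs(x - x_true))  # forward error, far above eps
cond = np.linalg.cond(H)            # roughly 1e13 for n = 10
```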
Hendrik "Henk" Albertus van der Vorst is a Dutch mathematician and Emeritus Professor of Numerical Analysis at Utrecht University. According to the Institute for Scientific Information (ISI), his paper on the BiCGSTAB method was the most cited paper in the field of mathematics in the 1990s. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 2002 and the Netherlands Academy of Technology and Innovation. In 2006 he was awarded a knighthood of the Order of the Netherlands Lion. Henk van der Vorst is a Fellow of Society for Industrial and Applied Mathematics (SIAM).
Armadillo is a linear algebra software library for the C++ programming language. It aims to provide efficient and streamlined base calculations, while at the same time having a straightforward and easy-to-use interface. Its intended target users are scientists and engineers.
ARPACK, the ARnoldi PACKage, is a numerical software library written in FORTRAN 77 for solving large-scale eigenvalue problems in a matrix-free fashion.
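SciPy's `scipy.sparse.linalg.eigs` wraps ARPACK's implicitly restarted Arnoldi iteration; a sketch on a sparse tridiagonal matrix, where only matrix-vector products are needed:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# A large sparse tridiagonal matrix (1-D discrete Laplacian).
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))

# Six eigenvalues of largest magnitude, via ARPACK.
vals, vecs = eigs(A, k=6, which='LM')
```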
Intel oneAPI Math Kernel Library is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.
Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem
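The method is available as `scipy.sparse.linalg.lobpcg`; a minimal sketch on a diagonal matrix with known eigenvalues (no preconditioner, random starting block):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Diagonal SPD matrix with known eigenvalues 1, 2, ..., n.
n = 100
A = diags(np.arange(1.0, n + 1.0))

# Random block of starting vectors; LOBPCG only needs
# matrix-vector products (matrix-free).
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 4))

# Four largest eigenvalues of the symmetric problem A x = lambda x.
vals, vecs = lobpcg(A, X, largest=True, tol=1e-8, maxiter=200)
```

In realistic use a preconditioner `M` (an approximate inverse of `A`) is supplied to accelerate convergence; it is omitted here for brevity.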
ALGLIB is a cross-platform open source numerical analysis and data processing library. It can be used from several programming languages.
Math.NET Numerics is an open-source numerical library for .NET and Mono, written in C# and F#. It features functionality similar to BLAS and LAPACK.
Validated numerics, also called rigorous computation, verified computation, reliable computation, or numerical verification, is a field of numerical analysis in which computations carry mathematically rigorous error bounds. Interval arithmetic is used for computation, and all results are represented by intervals. Validated numerics was used by Warwick Tucker to solve the 14th of Smale's problems, and today it is recognized as a powerful tool for the study of dynamical systems.
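The interval idea can be sketched with a hypothetical toy class (not a production validated-numerics package; a real implementation would also round lower bounds down and upper bounds up to account for floating-point rounding):

```python
# Toy interval arithmetic: every value is kept as a [lo, hi]
# enclosure, and each operation returns an enclosing interval.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The extrema of a product lie among the endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def contains(self, value):
        return self.lo <= value <= self.hi

# Enclose x = 2 +/- 0.1 and y = -3 +/- 0.1, then bound x*y + x.
x = Interval(1.9, 2.1)
y = Interval(-3.1, -2.9)
z = x * y + x
```

The true value for the midpoints, 2 * (-3) + 2 = -4, is guaranteed to lie inside the resulting interval `z`.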