Comparison of linear algebra libraries

The following tables provide a comparison of linear algebra software libraries: either specialized libraries, or general-purpose libraries with significant linear algebra coverage.

Dense linear algebra

General information

| Library | Creator | Language | First public release | Latest stable version | Source code availability | License | Notes |
|---|---|---|---|---|---|---|---|
| ALGLIB [1] | ALGLIB Project | C++, C#, Python, FreePascal | 2006 | 3.19.0 / 06.2022 | Free | GPL/commercial | General purpose numerical analysis library with C++, C#, Python, and FreePascal interfaces. |
| Armadillo [2] [3] | NICTA | C++ | 2009 | 9.200 / 10.2018 | Free | Apache License 2.0 | C++ template library for linear algebra; includes various decompositions and factorisations; syntax (API) is similar to MATLAB. |
| ATLAS | R. Clint Whaley et al. | C | 2001 | 3.10.3 / 07.2016 | Free | BSD | Automatically tuned implementation of BLAS. Also includes LU and Cholesky decompositions. |
| Blaze [4] | K. Iglberger et al. | C++ | 2012 | 3.8 / 08.2020 | Free | BSD | Open-source, high-performance C++ math library for dense and sparse arithmetic. |
| Blitz++ | Todd Veldhuizen | C++ | ? | 1.0.2 / 10.2019 | Free | GPL | C++ template class library providing high-performance multidimensional array containers for scientific computing. |
| Boost uBLAS | J. Walter, M. Koch | C++ | 2000 | 1.70.0 / 04.2019 | Free | Boost Software License | C++ template class library providing BLAS level 1, 2, and 3 functionality for dense, packed, and sparse matrices. |
| Dlib | Davis E. King | C++ | 2006 | 19.7 / 09.2017 | Free | Boost | C++ template library; binds to optimized BLAS such as Intel MKL; includes matrix decompositions, non-linear solvers, and machine learning tooling. |
| Eigen | Benoît Jacob | C++ | 2008 | 3.4.0 / 08.2021 | Free | MPL2 | C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. |
| Fastor [5] | R. Poya, A. J. Gil and R. Ortigosa | C++ | 2016 | 0.6.4 / 06.2023 | Free | MIT License | High-performance tensor (fixed multi-dimensional array) library for modern C++. |
| GNU Scientific Library [6] | GNU Project | C, C++ | 1996 | 2.5 / 06.2018 | Free | GPL | General purpose numerical analysis library. Includes some support for linear algebra. |
| IMSL Numerical Libraries | Rogue Wave Software | C, Java, C#, Fortran, Python | 1970 | many components | Non-free | Proprietary | General purpose numerical analysis library. |
| LAPACK [7] [8] | | Fortran | 1992 | 3.9.0 / 11.2019 | Free | 3-clause BSD | Numerical linear algebra library with a long history. |
| librsb | Michele Martone | C, Fortran, M4 | 2011 | 1.2 / 09.2016 | Free | GPL | High-performance multi-threaded primitives for large sparse matrices. Supports operations for iterative solvers: multiplication, triangular solve, scaling, matrix I/O, matrix rendering. Many variants, e.g. symmetric, Hermitian, complex, quadruple precision. |
| oneMKL | Intel | C, C++, Fortran | 2003 | 2023.1 / 03.2023 | Non-free | Intel Simplified Software License | Numerical analysis library optimized for Intel CPUs and GPUs. A C++ SYCL-based reference API implementation is available in source for free. |
| Math.NET Numerics | C. Rüegg, M. Cuda, et al. | C# | 2009 | 3.20 / 07.2017 | Free | MIT License | C# numerical analysis library with linear algebra support. |
| Matrix Template Library | Jeremy Siek, Peter Gottschling, Andrew Lumsdaine, et al. | C++ | 1998 | 4.0 / 2018 | Free | Boost Software License | High-performance C++ linear algebra library based on generic programming. |
| NAG Numerical Library | The Numerical Algorithms Group | C, Fortran | 1971 | many components | Non-free | Proprietary | General purpose numerical analysis library. |
| NMath | CenterSpace Software | C# | 2003 | 7.1 / 12.2019 | Non-free | Proprietary | Math and statistical libraries for the .NET Framework. |
| SciPy [9] [10] [11] | Enthought | Python | 2001 | 1.0.0 / 10.2017 | Free | BSD | Based on Python. |
| Xtensor [12] | S. Corlay, W. Vollprecht, J. Mabille et al. | C++ | 2016 | 0.21.10 / 11.2020 | Free | 3-clause BSD | C++ library for numerical analysis with multi-dimensional array expressions, broadcasting, and lazy computing. |

Matrix types and operations

Matrix types (special types like bidiagonal/tridiagonal are not listed): Real, Complex, SPD (symmetric positive definite), HPD (Hermitian positive definite), SY (symmetric), HE (Hermitian), BND (banded).

Operations: TF (triangular factorizations: LU, Cholesky), OF (orthogonal factorizations: QR and related), EVP (eigenvalue problems), SVD (singular value decomposition), GEVP (generalized eigenvalue problems), GSVD (generalized SVD).

| Library | Real | Complex | SPD | HPD | SY | HE | BND | TF | OF | EVP | SVD | GEVP | GSVD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ALGLIB | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | Yes | No |
| ATLAS | Yes | Yes | Yes | Yes | No | No | No | Yes | No | No | No | No | No |
| Dlib | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | No | No |
| GNU Scientific Library | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| ILNumerics.Net | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | No | No |
| IMSL Numerical Libraries | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes | Yes | No |
| LAPACK | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| oneMKL | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| NAG Numerical Library | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| NMath | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| SciPy (Python packages) | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | Yes | No | No |
| Eigen | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No |
| Armadillo | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes | Yes | No |

Related Research Articles

In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
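The algorithm behind the factorization can be sketched in a few lines. The following is a minimal pure-Python illustration of the textbook Cholesky recurrence for a real SPD matrix, not how the optimized libraries above implement it:

```python
import math

def cholesky(a):
    """Return the lower-triangular L with A = L * L^T for a real SPD matrix A."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)
# L == [[2.0, 0.0], [1.0, sqrt(2)]], and L * L^T reconstructs A
```

The square root on the diagonal is what restricts the method to positive-definite inputs; library implementations fail (or switch to LDL^T) when the argument goes non-positive.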

In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.

In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.
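For a matrix with full column rank, the pseudoinverse reduces to the closed form A+ = (A^T A)^-1 A^T. A minimal pure-Python sketch for a 2-column matrix (hard-coding the 2x2 inverse to keep it short; libraries instead use the SVD, which also handles rank deficiency):

```python
def pinv_full_column_rank(a):
    """Moore-Penrose pseudoinverse A+ = (A^T A)^-1 A^T for a full-column-rank, 2-column A."""
    m = len(a)
    # Form the 2x2 Gram matrix A^T A
    ata = [[sum(a[k][i] * a[k][j] for k in range(m)) for j in range(2)] for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    inv = [[ ata[1][1] / det, -ata[0][1] / det],
           [-ata[1][0] / det,  ata[0][0] / det]]
    # Multiply (A^T A)^-1 by A^T, giving a 2 x m result
    return [[sum(inv[i][j] * a[k][j] for j in range(2)) for k in range(m)] for i in range(2)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
P = pinv_full_column_rank(A)
# P @ A is the 2x2 identity, as the pseudoinverse requires
```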

In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily similar to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.
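In symbols, for any complex square matrix A:

```latex
A = Q U Q^{*}, \qquad Q^{*} Q = I, \qquad
U = \begin{pmatrix}
\lambda_1 & * & \cdots \\
0 & \lambda_2 & \cdots \\
\vdots & & \ddots
\end{pmatrix},
```

where Q is unitary, U is upper triangular, and the diagonal entries of U are the eigenvalues of A.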

Jack Dongarra (American computer scientist, born 1950)

Jack Joseph Dongarra is an American computer scientist and mathematician. He is University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He holds the position of Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, holds a Turing Fellowship in the School of Mathematics at the University of Manchester, and is an adjunct professor in the Computer Science Department at Rice University. He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. He received the Turing Award in 2021.

LAPACK (software library for numerical linear algebra)

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
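What a level-3 BLAS routine such as GEMM computes can be shown with a naive reference loop. This is the mathematical operation C <- alpha*A*B + beta*C only; real BLAS implementations restructure it with blocking, vectorization, and multithreading:

```python
def gemm(alpha, a, b, beta, c):
    """Naive reference for the BLAS level-3 GEMM operation: C <- alpha*A*B + beta*C."""
    m, k, n = len(a), len(b), len(b[0])
    for i in range(m):
        for j in range(n):
            acc = sum(a[i][p] * b[p][j] for p in range(k))
            c[i][j] = alpha * acc + beta * c[i][j]
    return c

C = gemm(1.0, [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]], 0.0,
         [[0.0, 0.0], [0.0, 0.0]])
# C == [[19.0, 22.0], [43.0, 50.0]]
```

The alpha/beta scaling is part of the BLAS interface itself, which is why the same routine serves both plain multiplication (beta = 0) and accumulation into an existing matrix (beta = 1).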

Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems.

EISPACK is a software library for numerical computation of eigenvalues and eigenvectors of matrices, written in FORTRAN. It contains subroutines for calculating the eigenvalues of nine classes of matrices: complex general, complex Hermitian, real general, real symmetric, real symmetric banded, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices. In addition it includes subroutines to perform a singular value decomposition.

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
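The representation error mentioned above is easy to observe directly. The decimal value 0.1 has no exact binary floating-point representation, so repeated addition drifts, while a compensated summation (here Python's `math.fsum`, shown purely as an illustration of error-aware algorithm design) recovers the correctly rounded result:

```python
import math

# 0.1 is stored inexactly, so ten naive additions accumulate rounding error:
naive = sum(0.1 for _ in range(10))
print(naive == 1.0)      # False
print(abs(naive - 1.0))  # on the order of 1e-16

# Compensated (Kahan-style) summation tracks the lost low-order bits:
print(math.fsum(0.1 for _ in range(10)) == 1.0)  # True
```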

Hendrik "Henk" Albertus van der Vorst is a Dutch mathematician and Emeritus Professor of Numerical Analysis at Utrecht University. According to the Institute for Scientific Information (ISI), his paper on the BiCGSTAB method was the most cited paper in the field of mathematics in the 1990s. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW) since 2002 and the Netherlands Academy of Technology and Innovation. In 2006 he was awarded a knighthood of the Order of the Netherlands Lion. Henk van der Vorst is a Fellow of Society for Industrial and Applied Mathematics (SIAM).

Armadillo is a linear algebra software library for the C++ programming language. It aims to provide efficient and streamlined base calculations, while at the same time having a straightforward and easy-to-use interface. Its intended target users are scientists and engineers.

ARPACK, the ARnoldi PACKage, is a numerical software library written in FORTRAN 77 for solving large-scale eigenvalue problems in a matrix-free fashion.
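"Matrix-free" means the solver never needs the matrix itself, only a callback that applies it to a vector. The simplest method with this interface is power iteration, sketched below as an illustration of the idea (ARPACK's actual algorithm, the implicitly restarted Arnoldi method, is far more sophisticated):

```python
def power_iteration(matvec, n, iters=200):
    """Estimate the largest-magnitude eigenvalue and its eigenvector
    using only a matrix-vector product callback."""
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]  # renormalize to avoid overflow
        lam = norm                 # dominant eigenvalue estimate (magnitude)
    return lam, v

# The matrix [[2, 1], [1, 2]] is never materialised; only its action is supplied.
lam, v = power_iteration(lambda v: [2 * v[0] + v[1], v[0] + 2 * v[1]], 2)
# lam -> 3.0, v -> [1.0, 1.0]: the dominant eigenpair
```

This interface is what lets such solvers scale to matrices too large to store densely, e.g. sparse matrices or operators defined implicitly by a computation.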

Intel oneAPI Math Kernel Library is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.

Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem.

ALGLIB (open source numerical analysis library)

ALGLIB is a cross-platform open source numerical analysis and data processing library. It can be used from several programming languages.

Math.NET Numerics is an open-source numerical library for .NET and Mono, written in C# and F#. It features functionality similar to BLAS and LAPACK.

Validated numerics, also known as rigorous, verified, or reliable computation, is the field of numerical analysis concerned with computations that carry mathematically strict error bounds. Interval arithmetic is used, and all results are represented by intervals. Validated numerics was used by Warwick Tucker to solve the 14th of Smale's problems, and today it is recognized as a powerful tool for the study of dynamical systems.

References

  1. Bochkanov, S., & Bystritsky, V. (2011). ALGLIB-a cross-platform numerical analysis and data processing library. ALGLIB Project.
  2. Sanderson, C., & Curtin, R. (2016). Armadillo: a template-based C++ library for linear algebra. Journal of Open Source Software, 1(2), 26.
  3. Sanderson, C. (2010). Armadillo: An open source C++ linear algebra library for fast prototyping and computationally intensive experiments (p. 84). Technical report, NICTA.
  4. "Bitbucket".
  5. Poya, Roman and Gil, Antonio J. and Ortigosa, Rogelio (2017). "A high performance data parallel tensor contraction framework: Application to coupled electro-mechanics". Computer Physics Communications. 216: 35–52. Bibcode:2017CoPhC.216...35P. doi:10.1016/j.cpc.2017.02.016.{{cite journal}}: CS1 maint: multiple names: authors list (link)
  6. Gough, B. (2009). GNU scientific library reference manual. Network Theory Ltd.
  7. Anderson, E., Bai, Z., Bischof, C., Blackford, S., Dongarra, J., Du Croz, J., ... & Sorensen, D. (1999). LAPACK Users' guide. SIAM.
  8. Anderson, E., Bai, Z., Dongarra, J., Greenbaum, A., McKenney, A., Du Croz, J., ... & Sorensen, D. (1990, November). LAPACK: A portable linear algebra library for high-performance computers. In Proceedings of the 1990 ACM/IEEE conference on Supercomputing (pp. 2–11). IEEE Computer Society Press.
  9. Jones, E., Oliphant, T., & Peterson, P. (2001). SciPy: Open source scientific tools for Python.
  10. Bressert, E. (2012). SciPy and NumPy: an overview for developers. O'Reilly Media, Inc.
  11. Blanco-Silva, F. J. (2013). Learning SciPy for numerical and scientific computing. Packt Publishing Ltd.
  12. "Xtensor-stack/Xtensor". GitHub, 13 February 2022.