LAPACK

LAPACK (Netlib reference implementation)
Initial release: 1992
Stable release: 3.12.0 / 24 November 2023 [1]
Written in: Fortran 90
Type: Software library
License: BSD-new
Website: netlib.org/lapack/

LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. [2] LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). [3] The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines. [2] : "The BLAS as the Key to Portability"

LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors, [2] : "Factors that Affect Performance" and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation. [2] : "The BLAS as the Key to Portability" LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK. [4]

Netlib LAPACK is licensed under a three-clause BSD style license, a permissive free software license with few restrictions. [5]

Naming scheme

Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit. [2] : "Naming Scheme"

A LAPACK subroutine name is in the form pmmaaa, where:

    p is a one-letter code denoting the numerical data type and precision: S for single-precision real, D for double-precision real, C for single-precision complex, or Z for double-precision complex;
    mm is a two-letter code denoting the kind of matrix the routine operates on (see the table below);
    aaa is a one- to three-letter code describing the computation the routine performs, for example SV for solving a system of linear equations or TRF for computing a triangular factorization.

For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV. [2] : "Linear Equations"
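
For illustration, a minimal C program might call DGESV through LAPACKE, the C interface distributed with Netlib LAPACK (discussed further below). This is only a sketch with arbitrary example values, and it assumes the LAPACKE header and link libraries (e.g. -llapacke -llapack) are available:

    #include <stdio.h>
    #include <lapacke.h>   /* LAPACKE: the C interface shipped with Netlib LAPACK */

    int main(void) {
        /* Solve the 2x2 system A x = b; the values are arbitrary.
           DGESV = D (double precision) + GE (general matrix) + SV (solve). */
        double a[4] = { 3.0, 1.0,     /* row 1 of A (row-major layout) */
                        1.0, 2.0 };   /* row 2 of A */
        double b[2] = { 9.0, 8.0 };   /* right-hand side, overwritten with x */
        lapack_int ipiv[2];           /* pivot indices from the LU factorization */

        lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1, a, 2, ipiv, b, 1);
        if (info != 0) {
            fprintf(stderr, "DGESV failed: info = %d\n", (int)info);
            return 1;
        }
        printf("x = (%g, %g)\n", b[0], b[1]);   /* expected: (2, 3) */
        return 0;
    }

Note that LAPACKE accepts row-major input and performs the transposition to LAPACK's column-major convention internally.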

Matrix types in the LAPACK naming scheme

Name  Description
BD    bidiagonal matrix
DI    diagonal matrix
GB    general band matrix
GE    general matrix (i.e., unsymmetric, in some cases rectangular)
GG    general matrices, generalized problem (i.e., a pair of general matrices)
GT    general tridiagonal matrix
HB    (complex) Hermitian band matrix
HE    (complex) Hermitian matrix
HG    upper Hessenberg matrix, generalized problem (i.e., a Hessenberg and a triangular matrix)
HP    (complex) Hermitian, packed storage matrix
HS    upper Hessenberg matrix
OP    (real) orthogonal, packed storage matrix
OR    (real) orthogonal matrix
PB    symmetric or Hermitian positive definite band matrix
PO    symmetric or Hermitian positive definite matrix
PP    symmetric or Hermitian positive definite, packed storage matrix
PT    symmetric or Hermitian positive definite tridiagonal matrix
SB    (real) symmetric band matrix
SP    symmetric, packed storage matrix
ST    (real) symmetric tridiagonal matrix
SY    symmetric matrix
TB    triangular band matrix
TG    triangular matrices, generalized problem (i.e., a pair of triangular matrices)
TP    triangular, packed storage matrix
TR    triangular matrix (or in some cases quasi-triangular)
TZ    trapezoidal matrix
UN    (complex) unitary matrix
UP    (complex) unitary, packed storage matrix
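
Putting the pieces together, a name such as DSYEV decodes as D (double precision) + SY (symmetric matrix) + EV (eigenvalue computation). A minimal sketch in C, again through the LAPACKE interface and with arbitrary example values:

    #include <stdio.h>
    #include <lapacke.h>

    int main(void) {
        /* DSYEV = D (double) + SY (symmetric) + EV (eigenvalue problem):
           the name decodes directly from the table above. Values are arbitrary. */
        double a[4] = { 2.0, 1.0,
                        1.0, 2.0 };   /* symmetric matrix, row-major */
        double w[2];                  /* eigenvalues, returned in ascending order */

        /* 'N' = eigenvalues only; 'U' = the upper triangle of a[] is referenced */
        lapack_int info = LAPACKE_dsyev(LAPACK_ROW_MAJOR, 'N', 'U', 2, a, 2, w);
        if (info != 0) {
            fprintf(stderr, "DSYEV failed: info = %d\n", (int)info);
            return 1;
        }
        printf("eigenvalues: %g, %g\n", w[0], w[1]);   /* expected: 1, 3 */
        return 0;
    }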

Use with other programming languages and libraries

Many programming environments today support the use of libraries with C bindings, allowing LAPACK routines to be used directly as long as a few restrictions are observed. Additionally, many other software libraries and tools for scientific and numerical computing are built on top of LAPACK, such as R, [6] MATLAB, [7] and SciPy. [8]
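
Chief among those restrictions are that Fortran stores arrays in column-major order, passes every argument by reference, and decorates routine names in a compiler-specific way. The following is a minimal sketch of calling the Fortran routine DGETRF directly from C, assuming the trailing-underscore name mangling used by gfortran and most Unix Fortran compilers; routines taking CHARACTER arguments additionally involve hidden length parameters and are more safely reached through a binding such as LAPACKE:

    #include <stdio.h>

    /* DGETRF (LU factorization of a general double-precision matrix) as seen
       from C. The trailing underscore matches the name mangling of gfortran
       and most Unix Fortran compilers (a compiler-specific detail), and every
       argument, including scalars, is passed by reference. */
    extern void dgetrf_(const int *m, const int *n, double *a, const int *lda,
                        int *ipiv, int *info);

    int main(void) {
        /* Fortran expects column-major storage: a[] lists A column by column. */
        double a[4] = { 4.0, 2.0,     /* first column of A */
                        6.0, 5.0 };   /* second column of A */
        int m = 2, n = 2, lda = 2, ipiv[2], info;

        dgetrf_(&m, &n, a, &lda, ipiv, &info);   /* A is overwritten by L and U */
        printf("info = %d, U = [%g %g; 0 %g]\n", info, a[0], a[2], a[3]);
        return 0;
    }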

Several alternative language bindings are also available:

Implementations

As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems. Some of the implementations are:

Accelerate
Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK. [9] [10]
Netlib LAPACK
The official LAPACK.
Netlib ScaLAPACK
Scalable LAPACK for distributed-memory parallel machines, built on top of PBLAS.
Intel MKL
Intel's math library for its x86 CPUs, including tuned BLAS and LAPACK implementations.
OpenBLAS
Open-source reimplementation of BLAS and LAPACK.
Gonum LAPACK
A partial native Go implementation.

Since LAPACK typically calls underlying BLAS routines to perform the bulk of its computations, simply linking to a better-tuned BLAS implementation can be enough to significantly improve performance. As a result, LAPACK is not reimplemented as often as BLAS is.

Similar projects

These projects provide functionality similar to LAPACK's, but with a main interface that differs from LAPACK's:

Libflame
A dense linear algebra library. It has a LAPACK-compatible wrapper and can be used with any BLAS, although BLIS is the preferred implementation. [11]
Eigen
A header-only C++ template library for linear algebra. It provides a BLAS implementation and a partial LAPACK implementation for compatibility.
MAGMA
The Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK, but targeting heterogeneous and hybrid architectures, including multicore systems accelerated with GPGPUs.
PLASMA
The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement for LAPACK on multi-core architectures. PLASMA is a software framework for developing asynchronous operations and features out-of-order scheduling with a runtime scheduler called QUARK, which can be used for any code that expresses its dependencies as a directed acyclic graph. [12]

See also

References

  1. "Release 3.12.0". 24 November 2023. Retrieved 19 December 2023.
  2. Anderson, E.; Bai, Z.; Bischof, C.; Blackford, S.; Demmel, J.; Dongarra, J.; Du Croz, J.; Greenbaum, A.; Hammarling, S.; McKenney, A.; Sorensen, D. (1999). LAPACK Users' Guide (Third ed.). Philadelphia, PA: Society for Industrial and Applied Mathematics. ISBN 0-89871-447-8. Retrieved 28 May 2022.
  3. "LAPACK 3.2 Release Notes". 16 November 2008.
  4. "PLAPACK: Parallel Linear Algebra Package". www.cs.utexas.edu. University of Texas at Austin. 12 June 2007. Retrieved 20 April 2017.
  5. "LICENSE.txt". Netlib. Retrieved 28 May 2022.
  6. "R: LAPACK Library". stat.ethz.ch. Retrieved 2022-03-19.
  7. "LAPACK in MATLAB". Mathworks Help Center. Retrieved 28 May 2022.
  8. "Low-level LAPACK functions". SciPy v1.8.1 Manual. Retrieved 28 May 2022.
  9. "Guides and Sample Code". developer.apple.com. Retrieved 2017-07-07.
  10. "Guides and Sample Code". developer.apple.com. Retrieved 2017-07-07.
  11. "amd/libflame: High-performance object-based library for DLA computations". GitHub. AMD. 25 August 2020.
  12. "ICL". icl.eecs.utk.edu. Retrieved 2017-07-07.