Matrix Template Library

Operating system: Linux, Unix, Mac OS X, Windows
Available in: C++
Type: Scientific software library
License: Boost Software License
Website: http://simunova.com/en/mtl4/

The Matrix Template Library (MTL) is a linear algebra library for C++ programs.


The MTL uses template programming, which considerably reduces code length. All matrices and vectors are available in the classical numerical types: float, double, complex<float> and complex<double>.
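As a minimal sketch of how the same templated containers are instantiated for different value types, consider the following program. The header path and container names (boost/numeric/mtl/mtl.hpp, mtl::dense2D, mtl::dense_vector) follow the MTL4 documentation as recalled here, and the constructor and element-access syntax should likewise be treated as assumptions rather than verified API.

    // Sketch only: container names, constructors and A[i][j] element access are
    // assumptions based on the MTL4 documentation.
    #include <complex>
    #include <iostream>
    #include <boost/numeric/mtl/mtl.hpp>

    int main()
    {
        mtl::dense2D<double>               A(3, 3);    // double-precision dense matrix
        mtl::dense2D<std::complex<float> > C(3, 3);    // same template, complex<float> entries
        mtl::dense_vector<double>          x(3, 1.0);  // vector of three ones

        for (unsigned i = 0; i < 3; ++i)
            for (unsigned j = 0; j < 3; ++j)
                A[i][j] = (i == j ? 2.0 : 0.0);        // fill A with 2 * identity

        mtl::dense_vector<double> y(3);
        y = A * x;                                     // matrix-vector product
        std::cout << y[0] << "\n";                     // prints 2
        return 0;
    }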

Furthermore, generic programming allows the use of arbitrary types as long as they provide the necessary operations. For instance, one can use arbitrary integer formats (e.g. unsigned short), types for interval arithmetic (e.g. boost::interval from the Boost C++ Libraries), quaternions (e.g. boost::quaternion), types of higher precision (e.g. the GNU Multi-Precision Library), and appropriate user-defined types.
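The generic-programming point can be made concrete without relying on any MTL-specific interface: the sketch below (plain standard C++, not MTL code) writes one algorithm against the required operations and reuses it for double and std::complex<double>; any user-defined type providing construction from 0, += and * would work the same way.

    // Illustration of genericity: a plain sum of products (no conjugation),
    // written once, usable with any value type that supports the operations.
    #include <complex>
    #include <iostream>
    #include <vector>

    template <typename Value>
    Value sum_of_products(const std::vector<Value>& v, const std::vector<Value>& w)
    {
        Value sum = Value(0);
        for (std::size_t i = 0; i < v.size(); ++i)
            sum += v[i] * w[i];          // requires only construction from 0, += and *
        return sum;
    }

    int main()
    {
        std::vector<double> a{1.0, 2.0}, b{3.0, 4.0};
        std::cout << sum_of_products(a, b) << "\n";    // 11

        using C = std::complex<double>;
        std::vector<C> c{{1, 1}, {0, 2}}, d{{2, 0}, {1, -1}};
        std::cout << sum_of_products(c, d) << "\n";    // works unchanged: (4,4)
        return 0;
    }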

The MTL supports several implementations of dense and sparse matrices. MTL2 was developed by Jeremy Siek and Andrew Lumsdaine. [1]

The latest version, MTL4, is developed by Peter Gottschling and Andrew Lumsdaine. It contains most of MTL2's functionality and adds new optimization techniques such as meta-tuning; for example, loop unrolling of operations on dynamically sized containers can be specified in the function call (see the sketch below). Platform-independent performance scalability is achieved through recursive data structures and algorithms. [2]
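The meta-tuning idea can be sketched as follows; the unroll<Size> wrapper and its namespace are recalled from the MTL4 performance-tuning documentation and should be treated as assumptions.

    // Sketch of statement-level loop unrolling; the unroll<> wrapper is an
    // assumption based on the MTL4 tuning guide.
    #include <boost/numeric/mtl/mtl.hpp>

    int main()
    {
        using namespace mtl;
        const unsigned n = 1000;
        dense_vector<double> u(n), v(n, 1.0), w(n, 2.0);

        // Default evaluation strategy chosen by the library.
        u = v + 3.0 * w;

        // Meta-tuning: the caller requests an unrolling factor of 4 for this
        // particular statement, although the vectors are dynamically sized.
        unroll<4>(u) = v + 3.0 * w;
        return 0;
    }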

Generic applications can be written in a natural notation, e.g. v += A*q - w;, while the library dispatches to the appropriate algorithms: matrix-vector products, matrix products, vector-scalar products, and so on. The goal is to encapsulate performance issues inside the library and to provide scientists with an intuitive interface. MTL4 is used in different finite element and finite volume packages, e.g. the FEniCS Project. [3]
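Expanding the quoted expression into a complete program gives a feel for the notation; as above, the container names and element-access syntax are assumptions based on MTL4's documented interface.

    // The expression v += A*q - w; in context. The library analyzes the whole
    // expression and dispatches to a matrix-vector product plus a vector update.
    #include <iostream>
    #include <boost/numeric/mtl/mtl.hpp>

    int main()
    {
        const unsigned n = 4;
        mtl::dense2D<double>      A(n, n);
        mtl::dense_vector<double> v(n, 0.0), q(n, 1.0), w(n, 2.0);

        for (unsigned i = 0; i < n; ++i)
            for (unsigned j = 0; j < n; ++j)
                A[i][j] = (i == j ? 3.0 : 0.0);   // A = 3 * identity

        v += A * q - w;                           // each entry: 3*1 - 2 = 1
        std::cout << v[0] << "\n";                // prints 1
        return 0;
    }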


Related Research Articles

In algebra, a division ring, also called a skew field, is a ring in which division is possible. Specifically, it is a nonzero ring in which every nonzero element a has a multiplicative inverse, that is, an element generally denoted a⁻¹, such that aa⁻¹ = a⁻¹a = 1. So, division may be defined as a / b = ab⁻¹, but this notation is generally avoided, as one may have ab⁻¹ ≠ b⁻¹a.

Linear algebra: Branch of mathematics

Linear algebra is the branch of mathematics concerning linear equations such as a₁x₁ + ⋯ + aₙxₙ = b and their representations through matrices and vector spaces.

Numerical analysis: Field of mathematics

Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Numerical analysis naturally finds application in all fields of engineering and the physical sciences, but in the 21st century also the life sciences, social sciences, medicine, business and even the arts have adopted elements of scientific computations. The growth in computing power has revolutionized the use of realistic mathematical models in science and engineering, and subtle numerical analysis is required to implement these detailed models of the world. For example, ordinary differential equations appear in celestial mechanics; numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology.

Quaternion: Noncommutative extension of the real numbers

In mathematics, the quaternion number system extends the complex numbers. Quaternions were first described by Irish mathematician William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space, or, equivalently, as the quotient of two vectors. Multiplication of quaternions is noncommutative.
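Noncommutativity is easy to see on the basis elements: i·j = k while j·i = −k. The boost quaternion type mentioned earlier in this article is assumed here to be boost::math::quaternion from <boost/math/quaternion.hpp>.

    // i*j = k but j*i = -k; components are printed as (real, i, j, k).
    #include <boost/math/quaternion.hpp>
    #include <iostream>

    int main()
    {
        using Q = boost::math::quaternion<double>;
        Q i(0, 1, 0, 0), j(0, 0, 1, 0);   // basis elements i and j

        std::cout << i * j << "\n";       // expected k:  (0,0,0,1)
        std::cout << j * i << "\n";       // expected -k: (0,0,0,-1)
        return 0;
    }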

A computer algebra system (CAS) or symbolic algebra system (SAS) is any mathematical software with the ability to manipulate mathematical expressions in a way similar to the traditional manual computations of mathematicians and scientists. The development of computer algebra systems in the second half of the 20th century is part of the discipline of "computer algebra" or "symbolic computation", which has spurred work in algorithms over mathematical objects such as polynomials.

Matrix multiplication: Mathematical operation in linear algebra

In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
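A worked example of the dimension rule, in plain C++: a 2 × 3 matrix times a 3 × 2 matrix gives a 2 × 2 product, each entry being a sum over the shared index.

    // C = A * B with A 2x3 and B 3x2; the inner dimensions (3) must match.
    #include <array>
    #include <iostream>

    int main()
    {
        const std::array<std::array<double, 3>, 2> A{{{1, 2, 3}, {4, 5, 6}}};
        const std::array<std::array<double, 2>, 3> B{{{7, 8}, {9, 10}, {11, 12}}};

        std::array<std::array<double, 2>, 2> C{};       // 2x2 result, zero-initialized
        for (std::size_t i = 0; i < 2; ++i)
            for (std::size_t j = 0; j < 2; ++j)
                for (std::size_t k = 0; k < 3; ++k)
                    C[i][j] += A[i][k] * B[k][j];       // sum over the shared index k

        std::cout << C[0][0] << " " << C[0][1] << "\n"  // 58 64
                  << C[1][0] << " " << C[1][1] << "\n"; // 139 154
        return 0;
    }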

Rotation (mathematics)

Rotation in mathematics is a concept originating in geometry. Any rotation is a motion of a certain space that preserves at least one point. It can describe, for example, the motion of a rigid body around a fixed point. A rotation is different from other types of motions: translations, which have no fixed points, and (hyperplane) reflections, each of which has an entire (n − 1)-dimensional flat of fixed points in an n-dimensional space. A clockwise rotation has a negative magnitude, so a counterclockwise turn has a positive magnitude.

LAPACK: Software library for numerical linear algebra

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision.

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
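For comparison with the hand-written loop in the matrix-multiplication example above, the same product can be computed through the C binding of BLAS. The header is <cblas.h> in the reference implementation and OpenBLAS; header name and link flags vary by vendor, so treat them as assumptions.

    // C = 1.0 * A * B + 0.0 * C via the double-precision GEMM routine.
    #include <cblas.h>
    #include <iostream>

    int main()
    {
        const int m = 2, k = 3, n = 2;
        const double A[m * k] = {1, 2, 3, 4, 5, 6};     // 2x3, row-major
        const double B[k * n] = {7, 8, 9, 10, 11, 12};  // 3x2, row-major
        double       C[m * n] = {0, 0, 0, 0};           // 2x2 result

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A, k, B, n, 0.0, C, n);

        std::cout << C[0] << " " << C[1] << "\n"        // 58 64
                  << C[2] << " " << C[3] << "\n";       // 139 154
        return 0;
    }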

Template Numerical Toolkit

The Template Numerical Toolkit is a software library for manipulating vectors and matrices in C++ created by the U.S. National Institute of Standards and Technology. TNT provides the fundamental linear algebra operations. TNT is analogous to the BLAS library used by LAPACK. Higher level algorithms, such as LU decomposition and singular value decomposition, are provided by JAMA, also developed at NIST, which uses TNT.

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
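A one-line illustration of the finite-precision issue described above: the decimal fraction 0.1 has no exact binary representation, so repeated arithmetic drifts away from the mathematically exact value (plain standard C++, independent of any library).

    // Summing 0.1 ten times does not give exactly 1.0 in IEEE 754 double precision.
    #include <cstdio>

    int main()
    {
        double sum = 0.0;
        for (int i = 0; i < 10; ++i)
            sum += 0.1;

        std::printf("%.17g\n", sum);       // typically 0.99999999999999989
        std::printf("%d\n", sum == 1.0);   // 0, i.e. not equal
        return 0;
    }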

Matrix (mathematics): Two-dimensional array of numbers with specific operations

In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns. For example, a matrix with two rows and three columns has dimension 2 × 3.

In mathematics, the joint spectral radius is a generalization of the classical notion of the spectral radius of a matrix to sets of matrices. In recent years this notion has found applications in a large number of engineering fields and is still a topic of active research.

Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest eigenvalues and the corresponding eigenvectors of a symmetric generalized eigenvalue problem Ax = λBx.

In computer science, a 4D vector is a 4-component vector data type. Uses include homogeneous coordinates for 3-dimensional space in computer graphics, and red green blue alpha (RGBA) values for bitmap images with a color and alpha channel. They may also represent quaternions, although the algebra they define is different.

GraphBLAS: API for graph data and graph operations

GraphBLAS is an API specification that defines standard building blocks for graph algorithms in the language of linear algebra. GraphBLAS is built upon the notion that a sparse matrix can be used to represent graphs as either an adjacency matrix or an incidence matrix. The GraphBLAS specification describes how graph operations can be efficiently implemented via linear algebraic methods over different semirings.

References