Matrix representation

[Figure: Illustration of row- and column-major order]

Matrix representation is a method used by a computer language to store matrices of more than one dimension in memory. Fortran and C use different schemes for their native arrays: Fortran uses column-major order, in which all the elements of a given column are stored contiguously in memory, while C uses row-major order, in which all the elements of a given row are stored contiguously. LAPACK defines several matrix representations in memory, and there are also sparse and Morton-order matrix representations. According to its documentation, LAPACK uses an optimized representation for unitary matrices. [1] [2] Some languages, such as Java, store matrices using Iliffe vectors, which are particularly useful for storing irregular (jagged) matrices. Matrices are of primary importance in linear algebra.
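As a concrete sketch of the two conventions, the following C program (all names here are illustrative, not from any library) computes the flat-buffer offset of element (i, j) of an m × n matrix under each layout:

    #include <stdio.h>
    #include <stddef.h>

    /* Row-major (C convention): elements of each row are contiguous. */
    static size_t row_major(size_t i, size_t j, size_t n) { return i * n + j; }

    /* Column-major (Fortran convention): elements of each column are contiguous. */
    static size_t col_major(size_t i, size_t j, size_t m) { return j * m + i; }

    int main(void) {
        size_t m = 2, n = 3; /* a 2 x 3 matrix stored in a 6-element buffer */
        for (size_t i = 0; i < m; i++)
            for (size_t j = 0; j < n; j++)
                printf("a[%zu][%zu]: row-major offset %zu, column-major offset %zu\n",
                       i, j, row_major(i, j, n), col_major(i, j, m));
        return 0;
    }

Walking the row-major offsets in memory order enumerates the matrix row by row; walking the column-major offsets enumerates it column by column.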

Basic mathematical operations

An m × n (read as "m by n") matrix is a set of numbers arranged in m rows and n columns. Matrices of the same order can be added by adding the corresponding elements. Two matrices can be multiplied only when the number of columns of the first matrix equals the number of rows of the second; hence, if an m × n matrix is multiplied with an n × r matrix, the resulting matrix is of order m × r. [3]
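As a minimal illustration of this rule in C (the function name matmul is illustrative), the following routine multiplies an m × n matrix by an n × r matrix, both stored row-major in flat arrays, producing an m × r result:

    #include <stddef.h>

    /* C = A * B, where A is m x n and B is n x r, all row-major.
     * The shared inner dimension n is what makes the product defined. */
    void matmul(const double *A, const double *B, double *C,
                size_t m, size_t n, size_t r)
    {
        for (size_t i = 0; i < m; i++)
            for (size_t j = 0; j < r; j++) {
                double sum = 0.0;
                for (size_t k = 0; k < n; k++)
                    sum += A[i * n + k] * B[k * r + j];
                C[i * r + j] = sum;
            }
    }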

Elementary row or column operations can be performed on a matrix and can be used to obtain its inverse. The inverse may also be obtained by computing the adjugate (classical adjoint). [3]
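For a 2 × 2 matrix, the adjugate route to the inverse is short enough to write out in full. A hedged C sketch (invert2x2 is an illustrative name, not a library function):

    #include <stdbool.h>

    /* inv(A) = adj(A) / det(A) for a row-major 2 x 2 matrix
     * a = {a00, a01, a10, a11}; returns false if A is singular. */
    bool invert2x2(const double a[4], double out[4])
    {
        double det = a[0] * a[3] - a[1] * a[2];      /* determinant */
        if (det == 0.0)
            return false;                            /* no inverse exists */
        out[0] =  a[3] / det;  out[1] = -a[1] / det; /* adjugate entries, */
        out[2] = -a[2] / det;  out[3] =  a[0] / det; /* scaled by 1/det   */
        return true;
    }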

In 3D graphics

The choice of representation for 4×4 matrices commonly used in 3D graphics affects the implementation of matrix/vector operations in systems with packed SIMD instructions:

Row major

With row-major matrix order, it is easy to transform vectors using dot-product operations, since the coefficients of each component are sequential in memory. Consequently, this layout may be desirable if a processor supports dot-product operations natively. It is also possible to efficiently use a 3×4 affine transformation matrix without padding or awkward permutes.
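A scalar C sketch of this access pattern (transform_row_major is an illustrative name); in a SIMD implementation, each loop iteration would map naturally onto a native dot-product instruction over the contiguous row:

    /* out = M * v for a row-major 4 x 4 matrix M: each output component
     * is the dot product of one contiguous matrix row with the vector.
     * out must not alias v. */
    void transform_row_major(const float m[16], const float v[4], float out[4])
    {
        for (int i = 0; i < 4; i++) {
            const float *row = &m[4 * i]; /* row coefficients are sequential */
            out[i] = row[0] * v[0] + row[1] * v[1]
                   + row[2] * v[2] + row[3] * v[3];
        }
    }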

Column major

With column-major order, a "matrix × vector" multiply can be implemented with vectorized multiply-add operations, if the vector's components are broadcast to each SIMD lane. It is also easy to access the basis vectors represented by a transformation matrix as individual column vectors, as these are contiguous in memory.
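The corresponding scalar C sketch for column-major order (transform_col_major is an illustrative name); with SIMD, v[j] would be broadcast to every lane and each contiguous column folded in with a single multiply-add:

    /* out = M * v for a column-major 4 x 4 matrix M: accumulate each
     * column, scaled by the matching (broadcast) vector component.
     * out must not alias v. */
    void transform_col_major(const float m[16], const float v[4], float out[4])
    {
        for (int i = 0; i < 4; i++)
            out[i] = 0.0f;
        for (int j = 0; j < 4; j++) {
            const float *col = &m[4 * j]; /* contiguous column = basis vector */
            for (int i = 0; i < 4; i++)
                out[i] += col[i] * v[j];  /* v[j] broadcast across lanes */
        }
    }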

Related Research Articles

In computer science, an array is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called one-dimensional array.

In linear algebra, the identity matrix of size n is the n × n square matrix with ones on the main diagonal and zeros elsewhere.

In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.

Matrix multiplication

In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.

Square matrix

In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.

Transpose

In linear algebra, the transpose of a matrix is an operation that flips a matrix over its diagonal; that is, it switches the row and column indices of a matrix A, producing another matrix, often denoted Aᵀ.

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can be either zero or nonzero. An example of a 2×2 diagonal matrix is [3 0; 0 2], while an example of a 3×3 diagonal matrix is [6 0 0; 0 5 0; 0 0 4]. An identity matrix of any size, or any multiple of it, is a diagonal matrix.

In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column and 0s elsewhere. Each such matrix, say P, represents a permutation of m elements and, when used to multiply another matrix, say A, results in permuting the rows or columns of the matrix A.
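In code, a permutation matrix is rarely stored densely; an index array that records where each row's single 1 sits is enough. An illustrative C sketch (permute_rows is not a library function):

    #include <stddef.h>

    /* Compute out = P * A for an n x n row-major matrix A, where the
     * permutation matrix P is represented by the array p: row i of P
     * has its 1 in column p[i], so row i of the product is row p[i] of A. */
    void permute_rows(const double *A, double *out, const size_t *p, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = 0; j < n; j++)
                out[i * n + j] = A[p[i] * n + j];
    }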

Sparse matrix

In numerical analysis and scientific computing, a sparse matrix or sparse array is a matrix in which most of the elements are zero. There is no strict definition regarding the proportion of zero-value elements for a matrix to qualify as sparse but a common criterion is that the number of non-zero elements is roughly equal to the number of rows or columns. By contrast, if most of the elements are non-zero, the matrix is considered dense. The number of zero-valued elements divided by the total number of elements is sometimes referred to as the sparsity of the matrix.
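The sparsity measure described above is simple to compute for a densely stored matrix; a hedged C sketch (sparsity is an illustrative name), assuming exact zero entries:

    #include <stddef.h>

    /* Fraction of zero-valued entries in an m x n row-major matrix. */
    double sparsity(const double *A, size_t m, size_t n)
    {
        size_t zeros = 0;
        for (size_t k = 0; k < m * n; k++)
            if (A[k] == 0.0)
                zeros++;
        return (double)zeros / (double)(m * n);
    }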

In computer science, array programming refers to solutions which allow the application of operations to an entire set of values at once. Such solutions are commonly used in scientific and engineering settings.

LAPACK

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
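A minimal C example of calling BLAS through the CBLAS interface, assuming an implementation such as OpenBLAS is installed and linked; note that the first argument to cblas_dgemm selects the row- or column-major convention discussed in this article:

    #include <cblas.h>

    int main(void) {
        /* C = A * B, where A is 2 x 3 and B is 3 x 2, both row-major. */
        double A[6] = {1, 2, 3, 4, 5, 6};
        double B[6] = {7, 8, 9, 10, 11, 12};
        double C[4] = {0};
        /* CblasColMajor here would instead match Fortran's native layout. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 3,    /* M, N, K */
                    1.0, A, 3,  /* alpha, A, leading dimension of A */
                    B, 2,       /* B, leading dimension of B */
                    0.0, C, 2); /* beta, C, leading dimension of C */
        return 0;
    }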

Row- and column-major order

In computing, row-major order and column-major order are methods for storing multidimensional arrays in linear storage such as random access memory.

Template Numerical Toolkit

The Template Numerical Toolkit is a software library for manipulating vectors and matrices in C++ created by the U.S. National Institute of Standards and Technology. TNT provides the fundamental linear algebra operations. TNT is analogous to the BLAS library used by LAPACK. Higher level algorithms, such as LU decomposition and singular value decomposition, are provided by JAMA, also developed at NIST, which uses TNT.

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.

In-place matrix transposition, also called in-situ matrix transposition, is the problem of transposing an N×M matrix in-place in computer memory, ideally with O(1) (bounded) additional storage, or at most with additional storage much less than NM. Typically, the matrix is assumed to be stored in row-major or column-major order.
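For a square matrix the in-place case is straightforward, since transposition reduces to swapping mirrored entries across the diagonal; the rectangular case is the hard one, because the storage layout itself changes. An illustrative C sketch of the square case (transpose_square is not a library function):

    #include <stddef.h>

    /* Transpose an n x n row-major matrix in place with O(1) extra
     * storage by swapping each above-diagonal entry with its mirror. */
    void transpose_square(double *A, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            for (size_t j = i + 1; j < n; j++) {
                double t = A[i * n + j];
                A[i * n + j] = A[j * n + i];
                A[j * n + i] = t;
            }
    }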

Matrix (mathematics)

In mathematics, a matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

Matrix Toolkit Java (MTJ) is an open-source Java software library for performing numerical linear algebra. The library contains a full set of standard linear algebra operations for dense matrices based on BLAS and LAPACK code. A partial set of sparse operations is provided through the Templates project. The library can be configured to run as a pure Java library or to use machine-optimized BLAS code through the Java Native Interface.

References

  1. "Representation of Orthogonal or Unitary Matrices". University of Texas at Austin. Retrieved 14 September 2011.
  2. Lehoucq, R. (1996). "The Computation of Elementary Unitary Matrices". ACM Transactions on Mathematical Software. 22 (4): 393–400. doi:10.1145/235815.235817. hdl:1911/101830.
  3. Ramana, B. V. (2008). Higher Engineering Mathematics. New Delhi: Tata McGraw-Hill. ISBN 978-0-07-063419-0.