Efficient Java Matrix Library

Original author(s): Peter Abeles
Stable release: 0.41.1 / December 4, 2022
Operating system: Cross-platform
Type: Library
License: Apache License 2.0
Website: ejml.org

Efficient Java Matrix Library (EJML) is a linear algebra library for manipulating real, complex, dense, and sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithm to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java, and released under the Apache v2.0 license.


EJML has three distinct ways to interact with it: 1) Procedural, 2) SimpleMatrix, and 3) Equations. The procedural style exposes all of EJML's capabilities and gives nearly complete control over matrix creation, speed, and the specific algorithms used. The SimpleMatrix style provides a simplified subset of the core capabilities in an easy-to-use, flow-styled, object-oriented API inspired by JAMA. The Equations style offers a symbolic interface, similar in spirit to MATLAB and other computer algebra systems, for writing equations compactly. [1]

Capabilities

EJML provides a broad set of capabilities for dense matrices, including basic operators such as addition, multiplication, and transpose, linear solvers, and matrix decompositions such as LU, QR, Cholesky, SVD, and eigenvalue decomposition.
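As a brief illustration of the linear-solver capability, the sketch below solves a small dense system A*x = b through the procedural interface; the matrices and values are hypothetical, chosen only for the example.

import org.ejml.data.DMatrixRMaj;
import org.ejml.dense.row.CommonOps_DDRM;

public class SolveDemo {
    public static void main(String[] args) {
        // Hypothetical 2x2 system A*x = b with solution x = (2, 3).
        DMatrixRMaj A = new DMatrixRMaj(new double[][]{{3, 1}, {1, 2}});
        DMatrixRMaj b = new DMatrixRMaj(new double[][]{{9}, {8}});
        DMatrixRMaj x = new DMatrixRMaj(2, 1);

        // solve() selects an appropriate solver and returns false on failure.
        if (!CommonOps_DDRM.solve(A, b, x))
            throw new RuntimeException("Solve failed");
        x.print();
    }
}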

Usage examples

Equation style

Computing the Kalman gain:

eq.process("K = P*H'*inv( H*P*H' + R )");
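The one-line call above assumes an Equation instance whose variables have already been bound. A minimal, self-contained sketch of that setup, using hypothetical 2x2 matrices and assuming a recent EJML release (which provides lookupDDRM), might look like:

import org.ejml.data.DMatrixRMaj;
import org.ejml.equation.Equation;

public class KalmanGainEquation {
    public static void main(String[] args) {
        // Hypothetical example matrices: state covariance P, measurement
        // matrix H, and measurement noise covariance R.
        DMatrixRMaj P = new DMatrixRMaj(new double[][]{{2, 0}, {0, 2}});
        DMatrixRMaj H = new DMatrixRMaj(new double[][]{{1, 0}, {0, 1}});
        DMatrixRMaj R = new DMatrixRMaj(new double[][]{{1, 0}, {0, 1}});

        Equation eq = new Equation();
        eq.alias(P, "P", H, "H", R, "R");  // bind Java matrices to symbol names

        // K is created by the equation itself; the apostrophe is transpose.
        eq.process("K = P*H'*inv( H*P*H' + R )");

        DMatrixRMaj K = eq.lookupDDRM("K");  // retrieve the result
        K.print();
    }
}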

Procedural style

Computing the Kalman gain:

mult(H, P, c);
multTransB(c, H, S);
addEquals(S, R);
if (!invert(S, S_inv))
    throw new RuntimeException("Invert failed");
multTransA(H, S_inv, d);
mult(P, d, K);
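As written, the snippet assumes the temporaries c, S, S_inv, d and the output K were allocated elsewhere. A self-contained sketch, with hypothetical dimensions and static imports from CommonOps_DDRM, might look like:

import org.ejml.data.DMatrixRMaj;
import static org.ejml.dense.row.CommonOps_DDRM.*;

public class KalmanGainProcedural {
    /** Computes K = P*H'*inv(H*P*H' + R) for an n-state, m-measurement filter. */
    public static void kalmanGain(DMatrixRMaj P, DMatrixRMaj H, DMatrixRMaj R,
                                  DMatrixRMaj K) {
        int n = P.numCols, m = H.numRows;
        DMatrixRMaj c = new DMatrixRMaj(m, n);
        DMatrixRMaj S = new DMatrixRMaj(m, m);
        DMatrixRMaj S_inv = new DMatrixRMaj(m, m);
        DMatrixRMaj d = new DMatrixRMaj(n, m);

        mult(H, P, c);           // c = H*P
        multTransB(c, H, S);     // S = c*H' = H*P*H'
        addEquals(S, R);         // S = H*P*H' + R
        if (!invert(S, S_inv))   // S_inv = inv(S)
            throw new RuntimeException("Invert failed");
        multTransA(H, S_inv, d); // d = H'*S_inv
        mult(P, d, K);           // K = P*d
    }
}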

SimpleMatrix style

Example of singular value decomposition (SVD):

SimpleSVD s = matA.svd();
SimpleMatrix U = s.getU();
SimpleMatrix W = s.getW();
SimpleMatrix V = s.getV();

Example of matrix multiplication:

SimpleMatrix result = matA.mult(matB);
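Because each SimpleMatrix operation returns a new SimpleMatrix, calls can be chained fluently. A minimal runnable sketch, with hypothetical 2x2 inputs:

import org.ejml.simple.SimpleMatrix;

public class SimpleMatrixDemo {
    public static void main(String[] args) {
        SimpleMatrix matA = new SimpleMatrix(new double[][]{{4, 1}, {2, 3}});
        SimpleMatrix matB = new SimpleMatrix(new double[][]{{1, 0}, {2, 1}});

        // Each call returns a new matrix, so operations compose in one line.
        SimpleMatrix result = matA.mult(matB).plus(matB).transpose();
        result.print();
    }
}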

DecompositionFactory

Use of DecompositionFactory_DDRM to compute a singular value decomposition of a dense double row-major (DDRM) matrix: [2]

SingularValueDecomposition_F64<DMatrixRMaj> svd = DecompositionFactory_DDRM.svd(true, true, true);
if (!DecompositionFactory_DDRM.decomposeSafe(svd, matA))
    throw new RuntimeException("Decomposition failed.");
DMatrixRMaj U = svd.getU(null, false);
DMatrixRMaj W = svd.getW(null);
DMatrixRMaj V = svd.getV(null, false);

Example of matrix multiplication:

CommonOps_DDRM.mult(matA, matB, result);
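Putting these pieces together, a complete, runnable sketch (with a hypothetical random 4x3 input) decomposes a matrix and then uses procedural multiplication to reconstruct it as U*W*V':

import java.util.Random;

import org.ejml.data.DMatrixRMaj;
import org.ejml.dense.row.CommonOps_DDRM;
import org.ejml.dense.row.RandomMatrices_DDRM;
import org.ejml.dense.row.factory.DecompositionFactory_DDRM;
import org.ejml.interfaces.decomposition.SingularValueDecomposition_F64;

public class SvdDemo {
    public static void main(String[] args) {
        // Hypothetical 4x3 random input.
        DMatrixRMaj matA = RandomMatrices_DDRM.rectangle(4, 3, new Random(234));

        SingularValueDecomposition_F64<DMatrixRMaj> svd =
                DecompositionFactory_DDRM.svd(true, true, true);
        // decompose() may modify its input; pass a copy to preserve matA.
        if (!svd.decompose(matA.copy()))
            throw new RuntimeException("Decomposition failed.");

        DMatrixRMaj U = svd.getU(null, false);
        DMatrixRMaj W = svd.getW(null);
        DMatrixRMaj V = svd.getV(null, false);

        // Reconstruct A = U*W*V' as a sanity check.
        DMatrixRMaj UW = new DMatrixRMaj(4, 3);
        CommonOps_DDRM.mult(U, W, UW);
        DMatrixRMaj reconstructed = new DMatrixRMaj(4, 3);
        CommonOps_DDRM.multTransB(UW, V, reconstructed);
        reconstructed.print();
    }
}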

See also

  - JAMA: a Java numerical linear algebra library created at the National Institute of Standards and Technology in 1998
  - Colt: open-source scientific and technical computing libraries developed at CERN
  - Parallel Colt: a multithreaded successor to Colt
  - Matrix Toolkit Java (MTJ): a numerical linear algebra library built on BLAS and LAPACK
  - oj! Algorithms (ojAlgo): a pure Java library for mathematics, linear algebra, and optimisation
  - jblas: a linear algebra library that calls native BLAS and LAPACK through JNI


References

  1. "EJML Project Page". EJML. Peter Abeles. Retrieved 2019-01-21.
  2. "Matrix Decompositions - Efficient Java Matrix Library". ejml.org. Retrieved 2021-04-24.