OjAlgo

OjAlgo
Original author(s): Anders Peterson
Stable release: v44.0 / September 27, 2017
Operating system: Cross-platform
Type: Library
License: MIT License
Website: ojalgo.org

oj! Algorithms, or ojAlgo, is an open source Java library for mathematics, [1] [2] linear algebra and optimisation. It was first released in 2003 [3] and is 100% pure Java source code, free from external dependencies. Its feature set makes it particularly suitable for use within the financial domain.

Capabilities

Since version v38, ojAlgo requires Java 8. As of version 44.0, the finance-specific code has been moved to its own project/module, named ojAlgo-finance. [3]

Usage Examples

Example of Singular Value Decomposition (SVD):

SingularValue<Double> svd = SingularValueDecomposition.make(matA);
svd.compute(matA);                   // factor matA
MatrixStore<Double> U = svd.getQ1(); // left singular vectors
MatrixStore<Double> S = svd.getD();  // diagonal matrix of singular values
MatrixStore<Double> V = svd.getQ2(); // right singular vectors
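
The fragment above assumes that an input matrix matA already exists. A minimal, hypothetical setup is sketched below; the FACTORY-style builder call mirrors the multiplication example that follows, but the exact method names are assumptions rather than something this article confirms.

PrimitiveDenseStore matA = PrimitiveDenseStore.FACTORY.rows(new double[][] {
    { 1.0, 2.0 },
    { 3.0, 4.0 },
    { 5.0, 6.0 }
}); // hypothetical 3x2 input; FACTORY.rows(...) is an assumed factory method

// After compute(matA), U holds the left singular vectors, S carries the singular
// values on its diagonal, and V holds the right singular vectors, so that
// matA = U * S * V^T (V transposed, since getQ2() returns V itself).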

Example of matrix multiplication:

PrimitiveDenseStore result = FACTORY.makeZero(matA.getRowDim(), matB.getColDim());
result.fillByMultiplying(matA, matB);
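
Here the result store is preallocated with matA's row count and matB's column count, and fillByMultiplying then writes the product directly into that existing structure instead of allocating a new matrix for it.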

Related Research Articles

In linear algebra, the rank of a matrix A is the dimension of the vector space generated by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
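
As a small worked example (an illustrative matrix, not one from this article), the second column below is twice the first, so only one column is linearly independent and the rank is 1:

\operatorname{rank} \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} = 1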

Principal component analysis: conversion of a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components

Given a collection of points in two, three, or higher dimensional space, a "best fitting" line can be defined as one that minimizes the average squared distance from a point to the line. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components, and several related procedures are collectively referred to as principal component analysis (PCA).
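
In matrix terms (illustrative notation, not taken from this article), if the rows of a data matrix X are mean-centred observations, PCA can be obtained from the singular value decomposition

X = U \Sigma V^{\mathsf{T}}

where the columns of V are the principal component directions and the columns of U \Sigma are the corresponding component scores.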

Singular value decomposition: matrix decomposition

In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix that generalizes the eigendecomposition of a square normal matrix to any matrix via an extension of the polar decomposition.
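
Concretely, the SVD of an m-by-n matrix M is written

M = U \Sigma V^{*}

where U is an m-by-m unitary matrix, \Sigma is an m-by-n rectangular diagonal matrix with non-negative real entries (the singular values), and V is an n-by-n unitary matrix; for real matrices U and V are orthogonal and V^{*} = V^{\mathsf{T}}.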

In the mathematical discipline of linear algebra, a matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; each finds use among a particular class of problems.

In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.
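
Written out, the Moore–Penrose inverse A^{+} of a matrix A is the unique matrix satisfying the four Penrose conditions

A A^{+} A = A, \quad A^{+} A A^{+} = A^{+}, \quad (A A^{+})^{*} = A A^{+}, \quad (A^{+} A)^{*} = A^{+} A

where {}^{*} denotes the conjugate transpose.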

In numerical linear algebra, the QR algorithm is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.
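
In formulas, one step of the basic QR iteration reads

A_k = Q_k R_k, \qquad A_{k+1} = R_k Q_k = Q_k^{-1} A_k Q_k

so each iterate is similar to the previous one and has the same eigenvalues; under suitable conditions the iterates converge to an upper (quasi-)triangular matrix with the eigenvalues on the diagonal.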

LAPACK: software library for numerical linear algebra

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision.

In numerical analysis and linear algebra, lower–upper (LU) decomposition or factorization factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The product sometimes includes a permutation matrix as well. LU decomposition can be viewed as the matrix form of Gaussian elimination. Computers usually solve square systems of linear equations using LU decomposition, and it is also a key step when inverting a matrix or computing the determinant of a matrix. LU decomposition was introduced by Polish mathematician Tadeusz Banachiewicz in 1938.
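
Written out, the factorization as commonly computed with partial pivoting is

P A = L U

where P is a permutation matrix, L is lower triangular and U is upper triangular; without pivoting it reduces to A = L U.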

JAMA is a software library for performing numerical linear algebra tasks, created at the National Institute of Standards and Technology in 1998 and similar in functionality to LAPACK.

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.

Armadillo is a linear algebra software library for the C++ programming language. It aims to provide efficient and streamlined base calculations, while at the same time having a straightforward and easy-to-use interface. Its intended target users are scientists and engineers.

Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem.
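
The problem LOBPCG targets can be written as

A x = \lambda B x

with A symmetric and B symmetric positive definite; being matrix-free, the method only needs matrix–vector products with A and B rather than the matrices themselves.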

Colt is a set of open-source libraries for High Performance Scientific and Technical Computing written in Java and developed at CERN. Colt was developed with a focus on High Energy Physics, but is applicable to many other problems. Colt was last updated in 2004 and its code base has been incorporated into the Parallel Colt code base, which has received more recent development.

Matrix Toolkit Java (MTJ) is an open-source Java software library for performing numerical linear algebra. The library contains a full set of standard linear algebra operations for dense matrices based on BLAS and LAPACK code. A partial set of sparse operations is provided through the Templates project. The library can be configured to run as a pure Java library or to use BLAS machine-optimized code through the Java Native Interface.

Parallel Colt is a multithreaded version of Colt. It is a collection of open-source libraries for High Performance Scientific and Technical Computing written in Java. It contains all the original capabilities of Colt and adds several new ones, with a focus on multi-threaded algorithms.

K-SVD: dictionary learning algorithm

In applied mathematics, K-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations, via a singular value decomposition approach. K-SVD is a generalization of the k-means clustering method, and it works by iteratively alternating between sparse coding the input data based on the current dictionary, and updating the atoms in the dictionary to better fit the data. K-SVD is widely used in applications such as image processing, audio processing, biology, and document analysis.
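
In its usual formulation (notation illustrative, not from this article), K-SVD seeks a dictionary D and sparse codes X for data Y by approximately solving

\min_{D, X} \lVert Y - D X \rVert_F^2 \quad \text{subject to} \quad \lVert x_i \rVert_0 \le T_0 \text{ for every column } x_i \text{ of } X

alternating a sparse-coding step (update X with D fixed) with a dictionary-update step in which each atom of D is refreshed, typically via a rank-1 SVD of the corresponding residual.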

Efficient Java Matrix Library (EJML) is a linear algebra library for manipulating real/complex/dense/sparse matrices. Its design goals are: 1) to be as computationally and memory efficient as possible for both small and large matrices, and 2) to be accessible to both novices and experts. These goals are accomplished by dynamically selecting the best algorithms to use at runtime, a clean API, and multiple interfaces. EJML is free, written in 100% Java and has been released under an Apache v2.0 license.

jblas is a linear algebra library, created by Mikio Braun, for the Java programming language built upon BLAS and LAPACK. Unlike most other Java linear algebra libraries, jblas is designed to be used with native code through the Java Native Interface (JNI) and comes with precompiled binaries. When used on one of the targeted architectures, it will automatically select the correct binary to use and load it. This allows it to be used out of the box and avoids a potentially tedious compilation process. jblas provides an easier-to-use, high-level API on top of the archaic API provided by BLAS and LAPACK, removing much of the tediousness.

References

  1. Takaki, M.; Cavalcanti, D.; Gheyi, R.; Iyoda, J.; d’Amorim, M.; Prudêncio, R. B. (2010). "Randomized constraint solvers: a comparative study". Innovations in Systems and Software Engineering. 6 (3): 243–253. doi:10.1007/s11334-010-0124-1.
  2. Vanek, O.; Bosansky, B.; Jakob, M.; Pechoucek, M. (2010). "Transiting areas patrolled by a mobile adversary". Symposium on Computational Intelligence and Games. pp. 9–16.
  3. "oj! Algorithms Project Page". oj! Algorithms. Retrieved July 2, 2013.