SuanShu numerical library

SuanShu
Stable release: 20120606 / 2012-06-06
Written in: Java
Type: Math
License: Apache License 2.0
Website: github.com/nmltd/SuanShu

SuanShu is an open-source Java math library, released under the Apache License 2.0 and available on GitHub. It is a large collection of Java classes for basic numerical analysis, statistics, and optimization. [1] It implements a parallel version of the adaptive Strassen algorithm for fast matrix multiplication. [2] SuanShu has been cited and used in a number of academic works. [3][4][5][6]


License terms

SuanShu is released under the terms of the Apache License 2.0.

Examples of usage

The following example of a simple minimization shows the object-oriented design of the library, in contrast to the procedural design of many traditional FORTRAN and C numerical libraries.

    LogGamma logGamma = new LogGamma(); // the log-gamma function
    BracketSearchMinimizer solver = new BrentMinimizer(1e-8, 10); // precision, max number of iterations
    UnivariateMinimizer.Solution soln = solver.solve(logGamma); // optimization
    double x_min = soln.search(0, 5); // bracket = [0, 5]
    System.out.println(String.format("f(%f) = %f", x_min, logGamma.evaluate(x_min)));


Related Research Articles

In linear algebra, the Strassen algorithm, named after Volker Strassen, is an algorithm for matrix multiplication. It is faster than the standard matrix multiplication algorithm for large matrices, with a better asymptotic complexity, although the naive algorithm is often better for smaller matrices. The Strassen algorithm is slower than the fastest known algorithms for extremely large matrices, but such galactic algorithms are not useful in practice, as they are much slower for matrices of practical size. For small matrices even faster algorithms exist.
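
As a sketch of the idea (not SuanShu's implementation), one level of Strassen's scheme multiplies two 2×2 matrices with seven multiplications instead of eight; in the full algorithm the scalar entries below are themselves matrix blocks and the scheme is applied recursively.

    // One level of Strassen's scheme on 2x2 matrices: 7 multiplications
    // instead of 8. In the real algorithm a11..b22 are matrix blocks and
    // the additions/subtractions are block operations.
    static double[][] strassen2x2(double[][] A, double[][] B) {
        double a11 = A[0][0], a12 = A[0][1], a21 = A[1][0], a22 = A[1][1];
        double b11 = B[0][0], b12 = B[0][1], b21 = B[1][0], b22 = B[1][1];
        double m1 = (a11 + a22) * (b11 + b22);
        double m2 = (a21 + a22) * b11;
        double m3 = a11 * (b12 - b22);
        double m4 = a22 * (b21 - b11);
        double m5 = (a11 + a12) * b22;
        double m6 = (a21 - a11) * (b11 + b12);
        double m7 = (a12 - a22) * (b21 + b22);
        return new double[][] {
            { m1 + m4 - m5 + m7, m3 + m5 },
            { m2 + m4, m1 - m2 + m3 + m6 }
        };
    }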

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
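
For instance, the level-1 BLAS routine daxpy computes y := a·x + y. A minimal plain-Java sketch of its semantics (not an actual BLAS binding, which would be tuned for the hardware):

    // Semantics of the level-1 BLAS routine daxpy: y := a*x + y.
    // Real implementations unroll and vectorize this loop.
    static void daxpy(int n, double a, double[] x, double[] y) {
        for (int i = 0; i < n; i++) {
            y[i] += a * x[i];
        }
    }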

Newton's method in optimization

In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function F, which are solutions to the equation F(x) = 0. As such, Newton's method can be applied to the derivative f′ of a twice-differentiable function f to find the roots of the derivative (solutions to f′(x) = 0), also known as the critical points of f. These solutions may be minima, maxima, or saddle points; see the section "Several variables" in Critical point (mathematics) and the section "Geometric interpretation" in this article. This is relevant in optimization, which aims to find (global) minima of the function f.
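
A minimal one-dimensional sketch, applying the iteration x_{k+1} = x_k − f′(x_k)/f″(x_k) to the hypothetical example f(x) = x^4 − 3x^3 + 2:

    // Newton's method on the derivative: finds a stationary point of
    // f(x) = x^4 - 3x^3 + 2 (an illustrative example, not from SuanShu).
    static double newtonOptimize(double x, int maxIter, double tol) {
        for (int k = 0; k < maxIter; k++) {
            double fp = 4 * x * x * x - 9 * x * x; // f'(x)
            double fpp = 12 * x * x - 18 * x;      // f''(x)
            double step = fp / fpp;
            x -= step;                             // x_{k+1} = x_k - f'/f''
            if (Math.abs(step) < tol) break;
        }
        return x; // started near 3, this converges to the minimum at x = 2.25
    }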

In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable methods. The algorithm tries to use the potentially fast-converging secant method or inverse quadratic interpolation if possible, but it falls back to the more robust bisection method if necessary. Brent's method is due to Richard Brent and builds on an earlier algorithm by Theodorus Dekker. Consequently, the method is also known as the Brent–Dekker method.
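
A simplified sketch of the hybrid idea, closer to Dekker's method than to full Brent's method (which adds inverse quadratic interpolation and tighter safeguards):

    import java.util.function.DoubleUnaryOperator;

    // Hybrid root finder: prefer a secant step, fall back to bisection
    // when the step is unusable, leaves the bracket, or makes too
    // little progress.
    static double hybridRoot(DoubleUnaryOperator f, double a, double b, double tol) {
        double fa = f.applyAsDouble(a), fb = f.applyAsDouble(b);
        if (fa * fb > 0) throw new IllegalArgumentException("root not bracketed");
        while (Math.abs(b - a) > tol) {
            double width = Math.abs(b - a);
            double s = b - fb * (b - a) / (fb - fa);   // secant step
            if (!Double.isFinite(s) || s <= Math.min(a, b) || s >= Math.max(a, b)) {
                s = 0.5 * (a + b);                     // bisection fallback
            }
            double fs = f.applyAsDouble(s);
            if (fa * fs <= 0) { b = s; fb = fs; } else { a = s; fa = fs; }
            if (Math.abs(b - a) > 0.5 * width) {       // too slow: force a bisection
                double m = 0.5 * (a + b);
                double fm = f.applyAsDouble(m);
                if (fa * fm <= 0) { b = m; fb = fm; } else { a = m; fa = fm; }
            }
        }
        return 0.5 * (a + b);
    }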

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations via a generalized secant method.
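
Concretely, with step s_k = x_{k+1} − x_k and gradient change y_k = ∇f(x_{k+1}) − ∇f(x_k), the standard BFGS update of the Hessian approximation B_k is

    B_{k+1} = B_k + \frac{y_k y_k^\top}{y_k^\top s_k} - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k}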

In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.
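
For a linear system Ax = b, for example, left preconditioning with an easily invertible M ≈ A replaces the system by the equivalent

    M^{-1} A x = M^{-1} b

whose condition number is, ideally, much smaller than that of A alone.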

Limited-memory BFGS (L-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize f(x) over unconstrained values of the real vector x, where f is a differentiable scalar function.
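
The heart of L-BFGS is the two-loop recursion, which applies the inverse-Hessian approximation to the current gradient using only the m most recent correction pairs, never forming a matrix. A self-contained Java sketch (line search and history maintenance omitted):

    // Two-loop recursion: returns the search direction -H*g built from
    // the stored pairs s[i] = x_{i+1} - x_i and y[i] = g_{i+1} - g_i,
    // ordered oldest (index 0) to newest (index m-1).
    static double[] lbfgsDirection(double[] g, double[][] s, double[][] y) {
        int m = s.length, n = g.length;
        double[] q = g.clone();
        double[] alpha = new double[m];
        for (int i = m - 1; i >= 0; i--) {          // first loop: newest to oldest
            double rho = 1.0 / dot(y[i], s[i]);
            alpha[i] = rho * dot(s[i], q);
            axpy(-alpha[i], y[i], q);               // q -= alpha_i * y_i
        }
        double gamma = dot(s[m - 1], y[m - 1]) / dot(y[m - 1], y[m - 1]);
        for (int j = 0; j < n; j++) q[j] *= gamma;  // initial scaling H_0 = gamma*I
        for (int i = 0; i < m; i++) {               // second loop: oldest to newest
            double rho = 1.0 / dot(y[i], s[i]);
            double beta = rho * dot(y[i], q);
            axpy(alpha[i] - beta, s[i], q);         // q += (alpha_i - beta) * s_i
        }
        for (int j = 0; j < n; j++) q[j] = -q[j];   // negate to get a descent direction
        return q;
    }

    static double dot(double[] a, double[] b) {
        double r = 0;
        for (int i = 0; i < a.length; i++) r += a[i] * b[i];
        return r;
    }

    static void axpy(double c, double[] x, double[] y) {
        for (int i = 0; i < x.length; i++) y[i] += c * x[i];
    }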

Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable.
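
At each iterate x_k, SQP minimizes a quadratic model of the objective subject to linearized constraints; for the equality-constrained problem min f(x) subject to c(x) = 0, the subproblem for the step d reads

    \min_{d} \; \nabla f(x_k)^\top d + \tfrac{1}{2}\, d^\top H_k\, d
    \quad \text{subject to} \quad \nabla c(x_k)^\top d + c(x_k) = 0,

where H_k is (an approximation to) the Hessian of the Lagrangian.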

Quasi-Newton methods are used either to find zeroes or to find local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or is too expensive to compute at every iteration. The "full" Newton's method requires the Jacobian in order to search for zeros, or the Hessian for finding extrema. Some iterative methods that reduce to Newton's method, such as SLSQP, may be considered quasi-Newton.
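
What these methods share is the secant condition: each new approximation B_{k+1} is required to satisfy

    B_{k+1}\,(x_{k+1} - x_k) = \nabla f(x_{k+1}) - \nabla f(x_k),

which in one dimension reduces to the classical secant approximation of the second derivative.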

COIN-OR

Computational Infrastructure for Operations Research (COIN-OR) is a project that aims to "create for mathematical software what the open literature is for mathematical theory." The open literature provides the operations research (OR) community with a peer-review process and an archive. Papers in operations research journals on mathematical theory often contain supporting numerical results from computational studies, but the software implementations, models, and data used to produce those results are typically not published. This status quo impeded researchers needing to reproduce computational results, make fair comparisons, and extend the state of the art.

Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors.
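
As a small illustration of the path-counting connection: if A is the adjacency matrix of a graph, entry (i, j) of A^k counts the walks of length k from vertex i to vertex j.

    // Schoolbook matrix product; applied to an adjacency matrix it
    // counts walks: (A*A)[i][j] = number of length-2 walks from i to j.
    static int[][] multiply(int[][] A, int[][] B) {
        int n = A.length;
        int[][] C = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    // For the triangle graph, A^2 has 2 on the diagonal (out and back)
    // and 1 off the diagonal (through the third vertex):
    int[][] triangle = { {0, 1, 1}, {1, 0, 1}, {1, 1, 0} };
    int[][] walksOfLength2 = multiply(triangle, triangle);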

In theoretical computer science, the computational complexity of matrix multiplication dictates how quickly the operation of matrix multiplication can be performed. Matrix multiplication algorithms are a central subroutine in theoretical and numerical algorithms for numerical linear algebra and optimization, so determining the true asymptotic cost of the operation is of major practical relevance.
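
For reference, the schoolbook algorithm and Strassen's algorithm run in

    O(n^3) \quad \text{and} \quad O(n^{\log_2 7}) \approx O(n^{2.807})

time respectively, and the best known theoretical exponent has been pushed below 2.38, though only by galactic algorithms of no practical use.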

ALGLIB

ALGLIB is a cross-platform open source numerical analysis and data processing library. It can be used from several programming languages.

Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems and add a penalty term to the objective; the difference is that the augmented Lagrangian method adds yet another term, designed to mimic a Lagrange multiplier. The augmented Lagrangian is related to, but not identical with the method of Lagrange multipliers.
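
For the equality-constrained problem min f(x) subject to c(x) = 0, one common form of the augmented Lagrangian (sign conventions vary) is

    \mathcal{L}_{\mu}(x, \lambda) = f(x) + \lambda^\top c(x) + \frac{\mu}{2}\,\| c(x) \|^2,

where λ estimates the Lagrange multipliers and the penalty weight μ need not tend to infinity for convergence.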

In numerical mathematics, hierarchical matrices (H-matrices) are used as data-sparse approximations of non-sparse matrices. While a sparse matrix of dimension n can be represented efficiently in O(n) units of storage by storing only its non-zero entries, a non-sparse matrix would require O(n^2) units of storage, and using this type of matrix for large problems would therefore be prohibitively expensive in terms of storage and computing time. Hierarchical matrices provide an approximation requiring only O(nk log n) units of storage, where k is a parameter controlling the accuracy of the approximation. In typical applications, e.g., when discretizing integral equations, preconditioning the resulting systems of linear equations, or solving elliptic partial differential equations, a rank proportional to log(1/ε) with a small constant is sufficient to ensure an accuracy of ε. Compared to many other data-sparse representations of non-sparse matrices, hierarchical matrices offer a major advantage: the results of matrix arithmetic operations like matrix multiplication, factorization or inversion can be approximated in O(n k^α log^β n) operations, where α and β are small constants.

OR-Tools

Google OR-Tools is a free and open-source software suite developed by Google for solving linear programming (LP), mixed integer programming (MIP), constraint programming (CP), vehicle routing (VRP), and related optimization problems.

References

  1. "Java Numerics: Main". math.nist.gov. Retrieved 2021-03-23.
  2. "Fastest Java Matrix Multiplication | NM DEV". NM DEV | Mathematics at Your Fingertips. 2015-08-07. Retrieved 2021-08-02.
  3. Möhlmann, Eike (2018). Automatic stability verification via Lyapunov functions: representations, transformations, and practical issues (phd thesis). Universität Oldenburg.
  4. Christou, Ioannis T.; Vassilaras, Spyridon (2013-10-01). "A parallel hybrid greedy branch and bound scheme for the maximum distance-2 matching problem". Computers & Operations Research. 40 (10): 2387–2397. doi:10.1016/j.cor.2013.04.009. ISSN   0305-0548.
  5. Łukawska, Barbara; Łukawski, Grzegorz; Sapiecha, Krzysztof (2016-10-04). "An implementation of articial advisor for dynamic classication of objects". Annales Universitatis Mariae Curie-Sklodowska, sectio AI – Informatica. 16 (1): 40. doi: 10.17951/ai.2016.16.1.40 . ISSN   2083-3628.
  6. Ansari, Mohd Samar (2013-09-03). Non-Linear Feedback Neural Networks: VLSI Implementations and Applications. Springer. ISBN   978-81-322-1563-9.