List of numerical libraries

This is a list of numerical libraries: libraries used in software development for performing numerical calculations. It is not a complete listing but rather a selection of numerical libraries that have articles on Wikipedia, with few exceptions.

The choice of a typical library depends on a range of requirements, such as desired features (e.g. large-dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of the API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease of use, continued support from developers, standards compliance, specialized optimization in code for specific application scenarios, or even the size of the code base to be installed.

Multi-language

C

C++

Delphi

.NET Framework languages C#, F#, VB.NET and PowerShell

Fortran

Java

Perl

Python

Others

See also

Related Research Articles

Numerical analysis – Study of algorithms using numerical approximation

Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics, numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.
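
To make the idea of numerical approximation concrete, the following sketch applies the forward Euler method to the ordinary differential equation y' = -y; the equation, step size and interval are chosen purely for illustration and are not drawn from any particular library.

```python
import math

def euler(f, y0, t0, t_end, h):
    """Approximate y(t_end) for y' = f(t, y), y(t0) = y0, using fixed step h."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        y += h * f(t, y)   # one explicit (forward) Euler step
        t += h
    return y

# y' = -y with y(0) = 1 has the exact solution exp(-t).
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, h=0.001)
exact = math.exp(-1.0)
print(f"Euler: {approx:.6f}  exact: {exact:.6f}  error: {abs(approx - exact):.2e}")
```

Halving the step size roughly halves the error, the first-order convergence expected of this method.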

NumPy – Python library for numerical programming

NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. The predecessor of NumPy, Numeric, was originally created by Jim Hugunin with contributions from several other developers. In 2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications. NumPy is open-source software and has many contributors. NumPy is a NumFOCUS fiscally sponsored project.
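
As a brief, illustrative sketch of this array-oriented style (the specific arrays and operations below are made up for demonstration), NumPy code typically builds multi-dimensional arrays and applies vectorised functions and linear-algebra routines to them:

```python
import numpy as np

a = np.arange(12, dtype=float).reshape(3, 4)   # a 3x4 two-dimensional array
row_means = a.mean(axis=1)                     # reduction along each row
scaled = np.sin(a) * 2.0                       # ufunc applied element-wise

# Linear algebra on arrays via the numpy.linalg submodule.
m = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(m, b)                      # solve m @ x = b
print(row_means, x)
```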

Jack Dongarra – American computer scientist (born 1950)

Jack Joseph Dongarra is an American computer scientist and mathematician. He is a University Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He also holds positions as a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory, a Turing Fellow in the School of Mathematics at the University of Manchester, and an adjunct professor and teacher in the Computer Science Department at Rice University. He served as a faculty fellow at the Texas A&M University Institute for Advanced Study (2014–2018). Dongarra is the founding director of the Innovative Computing Laboratory at the University of Tennessee. He received the Turing Award in 2021.

LAPACK – Software library for numerical linear algebra

LAPACK is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.
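
LAPACK is usually reached through language bindings rather than called directly; as one hedged example, SciPy documents scipy.linalg.lu_factor and scipy.linalg.lu_solve as wrappers around the LAPACK getrf and getrs routines, and the small system below is made up for illustration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve   # thin wrappers over LAPACK getrf/getrs

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

lu, piv = lu_factor(A)        # LU factorisation with partial pivoting
x = lu_solve((lu, piv), b)    # forward/back substitution using the factors
print("solution:", x)
print("residual norm:", np.linalg.norm(A @ x - b))
```

Factorising once and reusing the factors for several right-hand sides mirrors how the underlying LAPACK driver routines are intended to be used.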

Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the de facto standard low-level routines for linear algebra libraries; the routines have bindings for both C and Fortran. Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations typically take advantage of special floating-point hardware such as vector registers or SIMD instructions.
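
SciPy also exposes the raw BLAS routines through scipy.linalg.blas, which gives a concrete feel for the level-1 and level-3 interfaces; the arrays below are random and purely illustrative:

```python
import numpy as np
from scipy.linalg import blas

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print("ddot:", blas.ddot(x, y))          # level-1 BLAS: dot product

A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
C = blas.dgemm(alpha=1.0, a=A, b=B)      # level-3 BLAS: C = alpha * A @ B
print("matches A @ B:", np.allclose(C, A @ B))
```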

Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems.

The NAG Numerical Library is a software product developed and sold by The Numerical Algorithms Group Ltd. It is a software library of numerical analysis routines, containing more than 1,900 mathematical and statistical algorithms. Areas covered by the library include linear algebra, optimization, quadrature, the solution of ordinary and partial differential equations, regression analysis, and time series analysis.

LAPACK++, the Linear Algebra PACKage in C++, is a computer software library of algorithms for numerical linear algebra that solves systems of linear equations and eigenvalue problems.

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of. Numerical linear algebra uses properties of vectors and matrices to develop computer algorithms that minimize the error introduced by the computer, and is also concerned with ensuring that the algorithm is as efficient as possible.
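
A small, self-contained illustration of this rounding-error concern (the Hilbert matrix is a standard ill-conditioned test case, not something discussed above): solving a linear system whose matrix has a large condition number amplifies floating-point error in the computed solution even though the residual stays tiny.

```python
import numpy as np

n = 10
# Hilbert matrix: H[i, j] = 1 / (i + j + 1), a classic ill-conditioned matrix.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x_true = np.ones(n)
b = H @ x_true
x_computed = np.linalg.solve(H, b)

print("condition number:", np.linalg.cond(H))             # roughly 1e13 for n = 10
print("forward error:   ", np.linalg.norm(x_computed - x_true))
print("residual norm:   ", np.linalg.norm(H @ x_computed - b))
```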

Trilinos is a collection of open-source software libraries, called packages, intended to be used as building blocks for the development of scientific applications. The word "Trilinos" is Greek and conveys the idea of "a string of pearls", suggesting a number of software packages linked together by a common infrastructure. Trilinos was developed at Sandia National Laboratories from a core group of existing algorithms and utilizes the functionality of software interfaces such as BLAS, LAPACK, and MPI. In 2004, Trilinos received an R&D100 Award.

Armadillo is a linear algebra software library for the C++ programming language. It aims to provide efficient and streamlined base calculations, while at the same time having a straightforward and easy-to-use interface. Its intended target users are scientists and engineers.

ARPACK, the ARnoldi PACKage, is a numerical software library written in FORTRAN 77 for solving large-scale eigenvalue problems in a matrix-free fashion.
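
The "matrix-free" point can be illustrated through SciPy, whose scipy.sparse.linalg.eigsh routine is documented as an ARPACK wrapper: only a matrix-vector product is supplied, never the matrix itself. The diagonal operator below is an arbitrary example chosen for demonstration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh   # eigsh wraps ARPACK

n = 1000
d = np.arange(1, n + 1, dtype=float)     # spectrum of a diagonal operator

def matvec(v):
    """Apply the operator to a vector without ever forming the matrix."""
    return d * v

op = LinearOperator((n, n), matvec=matvec)

# Ask for the three largest-magnitude eigenvalues.
vals = eigsh(op, k=3, which='LM', return_eigenvectors=False)
print(np.sort(vals))                     # approximately [998., 999., 1000.]
```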

Intel oneAPI Math Kernel Library is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.

Comparison tables of linear algebra software libraries cover either specialized libraries or general-purpose libraries with significant linear algebra coverage.

IT++ is a C++ library of classes and functions for linear algebra, numerical optimization, signal processing, communications, and statistics. It is being developed by researchers in these areas and is widely used by researchers, both in the communications industry and universities. The IT++ library originates from the former Department of Information Theory at the Chalmers University of Technology, Gothenburg, Sweden.

jblas is a linear algebra library, created by Mikio Braun, for the Java programming language built upon BLAS and LAPACK. Unlike most other Java linear algebra libraries, jblas is designed to be used with native code through the Java Native Interface (JNI) and comes with precompiled binaries. When used on one of the targeted architectures, it will automatically select the correct binary to use and load it. This allows it to be used out of the box and avoids a potentially tedious compilation process. jblas provides an easier-to-use, high-level API on top of the archaic API provided by BLAS and LAPACK, removing much of the tediousness.

ILNumerics is a mathematical class library for Common Language Infrastructure (CLI) developers and a domain specific language (DSL) for the implementation of numerical algorithms on the .NET platform. While algebra systems with graphical user interfaces focus on prototyping of algorithms, implementation of such algorithms into distribution-ready applications is done using development environments and general purpose programming languages (GPL). ILNumerics is an extension to Visual Studio and aims at supporting the creation of technical applications based on .NET.
