Comparison of optimization software


Given a system transforming a set of inputs to output values, described by a mathematical function f, optimization refers to the generation and selection of the best solution from some set of available alternatives, [1] by systematically choosing input values from within an allowed set, computing the value of the function, and recording the best value found during the process. Many real-world and theoretical problems may be modeled in this general framework. For example, the inputs can be design parameters of a motor while the output is the power consumption; the inputs can be business choices with the output being the obtained profit; or the inputs can describe the configuration of a physical system with the output being its energy.


An optimization problem can be represented in the following way:

Given: a function f : A → R from some set A to the real numbers
Search for: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization").

Typically, A is some subset of the Euclidean space R^n, often specified by a set of constraints, equalities or inequalities, that the members of A have to satisfy. Maximization can be reduced to minimization by multiplying the function by minus one.

The use of optimization software requires that the function f is defined in a suitable programming language and linked to the optimization software. The optimization software delivers input values x in A, and the software module realizing f delivers the computed value f(x). In this manner, a clear separation of concerns is obtained: different optimization software modules can be easily tested on the same function f, and a given optimization software can be used for different functions f.
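As a concrete illustration of this separation, the minimal sketch below uses SciPy (one of the libraries listed in the table that follows); the objective function is an arbitrary example, and any other function with the same calling convention could be swapped in:

```python
import numpy as np
from scipy.optimize import minimize

# The software module realizing f, defined independently of any optimizer.
def f(x):
    # An arbitrary smooth example function of two variables.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.5) ** 2

# The optimization software sees f only through function evaluations,
# so the same routine works unchanged for any other objective.
result = minimize(f, x0=np.array([0.0, 0.0]))
print(result.x)    # approximately [1.0, -2.5]
print(result.fun)  # approximately 0.0
```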

The following table provides a comparison of notable optimization software libraries, either specialized libraries or general-purpose libraries with significant optimization coverage.

Name | Programming language | Latest stable version | Free for academic/noncommercial use | Usable in proprietary apps | License | Notes
---- | -------------------- | --------------------- | ----------------------------------- | -------------------------- | ------- | -----
ALGLIB | C++, C#, Python, FreePascal | 3.19.0 / June 2022 | Yes | Yes | Dual (Commercial, GPL) | General-purpose library; includes an optimization package covering linear, quadratic, and nonlinear programming.
AMPL | C, C++, C#, Python, Java, Matlab, R | October 2018 | Yes | Yes | Dual (Commercial, academic) | A popular algebraic modeling language for linear, mixed-integer, and nonlinear optimization. Free student and course versions are available.
APMonitor | Fortran, C++, Python, Matlab, Julia | 0.6.2 / March 2016 | Yes | Yes | Dual (Commercial, academic) | A differential and algebraic modeling language for mixed-integer and nonlinear optimization. Freely available interfaces for Matlab, Python, and Julia.
Artelys Knitro | C, C++, C#, Python, Java, Julia, Matlab, R | 11.1 / November 2018 | No | Yes | Commercial, academic, trial | General-purpose library specialized in nonlinear optimization. Handles mixed-integer problems (MINLP) and mathematical programs with equilibrium constraints (MPEC); specialized algorithms for nonlinear least-squares problems.
CPLEX | C, C++, Java, C#, Python, R | 20.1 / December 2020 | Yes | Yes | Commercial, academic, trial | IBM CPLEX Optimization Studio is a suite of optimization engines (CPLEX for mathematical programming, CP Optimizer for constraint programming), a modeling language (OPL), and an integrated development environment.
FICO Xpress | Mosel, BCL, C, C++, Java, R, Python, Matlab, .NET, VB6 | 8.13 / November 2021 | Yes | Yes | Commercial, academic, community, trial | Suite of optimization technologies and solutions. Includes solver technologies (LP with simplex and barrier methods, MIP, MIQP, MIQCQP, MISOCP, MINLP, QP, QCQP, SOCP, NLP with SLP and interior-point methods), an algebraic modeling and procedural programming language, an integrated development environment, support for a range of execution services, and support for packaging optimization models and services as software solutions.
GEKKO | Python | 0.2.8 / August 2020 | Yes | Yes | Dual (Commercial, academic) | Python package for machine learning and optimization of mixed-integer and differential-algebraic equations, coupled with large-scale solvers for linear, quadratic, nonlinear, and mixed-integer programming (LP, QP, NLP, MILP, MINLP). Modes of operation include parameter regression, data reconciliation, real-time optimization, dynamic simulation, and nonlinear predictive control.
GNU Linear Programming Kit | C | 4.52 / July 2013 | Yes | No | GPL | Free library for linear programming (LP) and mixed-integer programming (MIP).
GNU Scientific Library | C | 1.16 / July 2013 | Yes | No | GPL | Free library provided by the GNU Project.
IMSL Numerical Libraries | C, Java, C#, Fortran, Python | many components | No | Yes | Proprietary |
LIONsolver | C++, Java | 2.0.198 / October 2011 | Yes | Yes | Proprietary | Support for interactive and learning optimization, according to RSO principles. [2]
Math Kernel Library (MKL) | C++, Fortran | 11.1 / October 2013 | No | Yes | Proprietary | Numerical library from Intel. MKL specializes in linear algebra but contains some optimization-related functionality.
Wolfram Mathematica | C++, Wolfram Language | 13.3.1 / August 2023 [3] | No | Yes | Proprietary | Constrained nonlinear optimization, interior-point methods, convex optimization, and integer programming, as well as original symbolic methods integrated with general computational capabilities.
MIDACO | C++, C#, Python, Matlab, Octave, Fortran, R, Java, Excel, VBA, Julia | 6.0 / March 2018 | Yes | Yes | Dual (Commercial, academic) | Lightweight software tool for single- and multi-objective optimization; supports MINLP and parallelization.
NAG Numerical Libraries | C, Fortran | Mark 26 / October 2017 | No | Yes | Proprietary |
NMath | C# | 5.3 / May 2013 | No | Yes | Proprietary | C# numerical library built on top of MKL.
Octeract Engine | C++, Python | 0.11.29 / November 2019 | No | Yes | Commercial | Deterministic global optimization solver for general MINLP problems, designed for supercomputing; uses MPI for distributed calculations.
OptaPlanner | Java | 8.0.0.Final / November 2020 | Yes | Yes | ASL (open source) | Lightweight optimization solver in Java, with optional integration modules for JPA-Hibernate, Quarkus, Spring, Jackson, JAXB, etc. Also works with Kotlin and Scala.
SciPy | Python | 0.13.1 / November 2013 | Yes | Yes | BSD | General-purpose numerical and scientific computing library for Python.
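Many of the libraries above handle linear programming. As a small illustration of that problem class (the numbers are invented for the example), SciPy's linprog solves a two-variable LP:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal point, here (4, 0)
print(-res.fun)  # optimal objective value, here 12
```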

Related Research Articles

<span class="mw-page-title-main">Decision problem</span> Yes/no problem in computer science

In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question of the input values. An example of a decision problem is deciding by means of an algorithm whether a given natural number is prime. Another is the problem "given two numbers x and y, does x evenly divide y?". The answer is either 'yes' or 'no' depending upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called decidable.
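Such a decision procedure is straightforward to express in code; this minimal sketch uses the remainder operation rather than spelling out long division:

```python
def divides(x: int, y: int) -> bool:
    """Decision procedure for "does x evenly divide y?": the answer is
    'yes' exactly when the remainder of y divided by x is zero."""
    return y % x == 0

print(divides(3, 12))  # True: 3 evenly divides 12
print(divides(5, 12))  # False: the remainder is 2
```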

<span class="mw-page-title-main">Supervised learning</span> A paradigm in machine learning

Supervised learning (SL) is a paradigm in machine learning where input objects and a desired output value train a model. The training data is processed, building a function that maps new data to expected output values. An optimal scenario will allow the algorithm to correctly determine output values for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way. This statistical quality of an algorithm is measured through the so-called generalization error.

<span class="mw-page-title-main">Numerical analysis</span> Study of algorithms using numerical approximation

Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. It is the study of numerical methods that attempt to find approximate solutions to problems rather than exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences, medicine, business and even the arts. Growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics, numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology.

Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.
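As a minimal sketch of what a QP looks like in code (the problem data here are invented), SciPy's SLSQP method minimizes a quadratic objective under a linear inequality constraint:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize (1/2) x^T Q x + c^T x subject to x1 + x2 >= 1.
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-1.0, -1.0])

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]
res = minimize(objective, x0=np.zeros(2), method="SLSQP",
               constraints=constraints)
print(res.x)  # approximately [0.5, 0.5]
```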

<span class="mw-page-title-main">Mathematical optimization</span> Study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

<span class="mw-page-title-main">Combinatorial optimization</span> Subfield of mathematical optimization

Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, including those just mentioned, exhaustive search is not tractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
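For instance, the knapsack problem admits a classic dynamic-programming algorithm when the weights are integers; the sketch below (with invented item data) avoids exhaustive search over all 2^n subsets:

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming over integer capacities.
    dp[w] holds the best total value achievable with weight budget w."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

# Invented example: item values, item weights, knapsack capacity.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```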

Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem, because the maximization of a real-valued function g(x) is equivalent to the minimization of the function f(x) := −g(x).
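As a hedged sketch of the problem class, SciPy's general-purpose global optimizer dual_annealing can be applied to a standard multimodal test function (Rastrigin), whose many local minima defeat purely local methods:

```python
import numpy as np
from scipy.optimize import dual_annealing

# Rastrigin function: global minimum 0 at the origin, many local minima.
def rastrigin(x):
    return 10 * len(x) + sum(xi ** 2 - 10 * np.cos(2 * np.pi * xi) for xi in x)

bounds = [(-5.12, 5.12)] * 2
res = dual_annealing(rastrigin, bounds=bounds, seed=0)
print(res.x)    # close to [0, 0]
print(res.fun)  # close to 0
```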

In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is the calculation of the extrema of an objective function over a set of unknown real variables, subject to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.
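A tiny sketch of a nonlinear program, with both a nonlinear objective and a nonlinear constraint (the specific functions are chosen only for illustration), using SciPy's trust-constr method:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Minimize the Rosenbrock function subject to staying on the unit circle.
def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

circle = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 1.0, 1.0)

res = minimize(rosenbrock, x0=np.array([0.5, 0.5]),
               method="trust-constr", constraints=[circle])
print(res.x)  # a point on the unit circle with minimal Rosenbrock value
```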

Computable functions are the basic objects of study in computability theory. They are the formalized analogue of the intuitive notion of an algorithm, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input from the function's domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation, but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions.

<span class="mw-page-title-main">No free lunch in search and optimization</span> Average solution cost is the same with any method

In computational complexity and optimization, the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch": no method offers a "short cut". This is under the assumption that the search space is a probability density function. It does not apply to the case where the search space has underlying structure that can be exploited more efficiently than random search, or even has closed-form solutions that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning. Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the state of machine learning research on the problem of induction.

<span class="mw-page-title-main">Functional (mathematics)</span> Types of mappings in mathematics

In mathematics, a functional is a certain type of function. The exact definition of the term varies depending on the subfield.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

<span class="mw-page-title-main">Differential evolution</span>

In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the optimized problem and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.
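SciPy exposes this method as scipy.optimize.differential_evolution; a minimal sketch on the Ackley test function, a standard multimodal benchmark:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Ackley function: highly multimodal, global minimum 0 at the origin.
def ackley(x):
    a = -20 * np.exp(-0.2 * np.sqrt(0.5 * (x[0] ** 2 + x[1] ** 2)))
    b = -np.exp(0.5 * (np.cos(2 * np.pi * x[0]) + np.cos(2 * np.pi * x[1])))
    return a + b + np.e + 20

res = differential_evolution(ackley, bounds=[(-5, 5), (-5, 5)], seed=1)
print(res.x, res.fun)  # near [0, 0] and 0
```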

In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
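A bare-bones sketch of the central-cut ellipsoid iteration for a differentiable convex function in n >= 2 dimensions, written from the textbook update formulas (the starting radius, iteration count, and stopping rule are simplified for illustration):

```python
import numpy as np

def ellipsoid_minimize(grad, x, radius=10.0, iters=200):
    """Central-cut ellipsoid method driven by a (sub)gradient oracle.
    The ellipsoid {z : (z - x)^T P^{-1} (z - x) <= 1} always contains
    the minimizer; each cut through the center shrinks its volume."""
    n = len(x)  # requires n >= 2 (the update below degenerates for n = 1)
    P = (radius ** 2) * np.eye(n)
    for _ in range(iters):
        g = grad(x)
        denom = np.sqrt(g @ P @ g)
        if denom < 1e-12:
            break
        gn = g / denom                        # normalized cut direction
        x = x - (1.0 / (n + 1)) * (P @ gn)    # move center into the kept half
        P = (n ** 2 / (n ** 2 - 1.0)) * (
            P - (2.0 / (n + 1)) * np.outer(P @ gn, P @ gn))
    return x

# Example: minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2 via its gradient.
sol = ellipsoid_minimize(lambda x: np.array([2 * (x[0] - 3), 2 * (x[1] + 1)]),
                         x=np.zeros(2))
print(sol)  # approaches [3, -1]
```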

The difference-map algorithm is a search algorithm for general constraint satisfaction problems. It is a meta-algorithm in the sense that it is built from more basic algorithms that perform projections onto constraint sets. From a mathematical perspective, the difference-map algorithm is a dynamical system based on a mapping of Euclidean space. Solutions are encoded as fixed points of the mapping.

In computability theory, the halting problem is the problem of determining, from a description of an arbitrary computer program and an input, whether the program will finish running, or continue to run forever. The halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible program–input pairs.

A system of polynomial equations is a set of simultaneous equations f1 = 0, ..., fh = 0 where the fi are polynomials in several variables, say x1, ..., xn, over some field k.
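Small systems can be solved symbolically; a minimal sketch with SymPy (the system here, a circle intersected with a line, is chosen only for illustration):

```python
import sympy as sp

x, y = sp.symbols("x y")

# The system f1 = 0, f2 = 0 with f1 = x^2 + y^2 - 1 and f2 = x - y:
# the points where the unit circle meets the line x = y.
solutions = sp.solve([x**2 + y**2 - 1, x - y], [x, y])
print(solutions)  # [(-sqrt(2)/2, -sqrt(2)/2), (sqrt(2)/2, sqrt(2)/2)]
```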

<span class="mw-page-title-main">Simulation-based optimization</span>

Simulation-based optimization integrates optimization techniques into simulation modeling and analysis. Because of the complexity of the simulation, the objective function may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques.
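A minimal sketch of the idea, with a toy stand-in for the simulator: estimate the expected objective by averaging independent replications, then compare candidate inputs on the averaged estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    """Toy stochastic simulator: a noisy observation of (x - 2)^2 + 1."""
    return (x - 2.0) ** 2 + 1.0 + rng.normal(scale=0.5)

def estimate_objective(x, replications=200):
    # Statistical estimation: average many noisy runs to tame the variance.
    return np.mean([simulate(x) for _ in range(replications)])

candidates = np.linspace(0.0, 4.0, 21)
best = min(candidates, key=estimate_objective)
print(best)  # close to 2.0, the true minimizer
```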

Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Usually, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or may offer a considerable speed-up over the best known classical algorithm.

References

  1. "The Nature of Mathematical Programming," Mathematical Programming Glossary, INFORMS Computing Society.
  2. Battiti, Roberto; Mauro Brunato; Franco Mascia (2008). Reactive Search and Intelligent Optimization. Springer Verlag. ISBN   978-0-387-09623-0.
  3. "Mathematica Quick Revision History" . Retrieved 2023-08-16.