In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of a non-linear multimodal function. It was first proposed in 1974 by Rastrigin [1] as a 2-dimensional function and was later generalized by Rudolph.[2] The generalized version was popularized by Hoffmeister & Bäck [3] and Mühlenbein et al.[4] Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima.
On an n-dimensional domain it is defined by:

f(\mathbf{x}) = A n + \sum_{i=1}^{n} \left[ x_i^2 - A \cos(2 \pi x_i) \right]

where A = 10 and x_i ∈ [−5.12, 5.12]. The global minimum is at x = (0, …, 0), where the function value is 0. There are many extrema:
Number of dimensions | Maximum value (attained at x_i ≈ ±4.52299366)
---|---
1 | 40.35329019
2 | 80.70658039
3 | 121.0598706
4 | 161.4131608
5 | 201.7664509
6 | 242.1197412
7 | 282.4730314
8 | 322.8263216
9 | 363.1796117
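As an illustrative sketch (Python with NumPy; the function name and the brute-force scan are choices of this example, not part of the original text), the definition above can be evaluated directly, reproducing the one-dimensional maximum in the table:

```python
import numpy as np

def rastrigin(x, A=10.0):
    """Rastrigin function for a point x in R^n (A = 10 by convention)."""
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

# Global minimum: f(0, ..., 0) = 0.
print(rastrigin([0.0, 0.0]))                 # 0.0

# Per-dimension maximum on [-5.12, 5.12], located by a dense scan:
grid = np.linspace(-5.12, 5.12, 2_000_001)
values = 10.0 + grid**2 - 10.0 * np.cos(2 * np.pi * grid)
i = np.argmax(values)
print(grid[i], values[i])                    # ~ +/-4.52299, ~ 40.35329
print(2 * values[i])                         # ~ 80.70658 (the 2-D maximum)
```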
Here are the values of the 2-D Rastrigin function (A = 10) at 0.5 intervals for 0 ≤ x1, x2 ≤ 5, together with the domain boundary 5.12. The function is symmetric in the sign of each coordinate, so only non-negative coordinates are listed:
x2 \ x1 | 0 | 0.5 | 1 | 1.5 | 2 | 2.5 | 3 | 3.5 | 4 | 4.5 | 5 | 5.12
---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 0 | 20.25 | 1 | 22.25 | 4 | 26.25 | 9 | 32.25 | 16 | 40.25 | 25 | 28.92
0.5 | 20.25 | 40.5 | 21.25 | 42.5 | 24.25 | 46.5 | 29.25 | 52.5 | 36.25 | 60.5 | 45.25 | 49.17
1 | 1 | 21.25 | 2 | 23.25 | 5 | 27.25 | 10 | 33.25 | 17 | 41.25 | 26 | 29.92
1.5 | 22.25 | 42.5 | 23.25 | 44.5 | 26.25 | 48.5 | 31.25 | 54.5 | 38.25 | 62.5 | 47.25 | 51.17
2 | 4 | 24.25 | 5 | 26.25 | 8 | 30.25 | 13 | 36.25 | 20 | 44.25 | 29 | 32.92
2.5 | 26.25 | 46.5 | 27.25 | 48.5 | 30.25 | 52.5 | 35.25 | 58.5 | 42.25 | 66.5 | 51.25 | 55.17
3 | 9 | 29.25 | 10 | 31.25 | 13 | 35.25 | 18 | 41.25 | 25 | 49.25 | 34 | 37.92
3.5 | 32.25 | 52.5 | 33.25 | 54.5 | 36.25 | 58.5 | 41.25 | 64.5 | 48.25 | 72.5 | 57.25 | 61.17
4 | 16 | 36.25 | 17 | 38.25 | 20 | 42.25 | 25 | 48.25 | 32 | 56.25 | 41 | 44.92
4.5 | 40.25 | 60.5 | 41.25 | 62.5 | 44.25 | 66.5 | 49.25 | 72.5 | 56.25 | 80.5 | 65.25 | 69.17
5 | 25 | 45.25 | 26 | 47.25 | 29 | 51.25 | 34 | 57.25 | 41 | 65.25 | 50 | 53.92
5.12 | 28.92 | 49.17 | 29.92 | 51.17 | 32.92 | 55.17 | 37.92 | 61.17 | 44.92 | 69.17 | 53.92 | 57.85
The abundance of local minima underlines the need for a global optimization algorithm when the global minimum is sought: local optimization algorithms are likely to get stuck in a local minimum.
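A minimal sketch of this behaviour (assuming SciPy is available; the starting point and solver choice are arbitrary): a purely local method started away from the origin typically terminates in a nearby local minimum rather than at the global minimum (0, 0).

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(x, A=10.0):
    # Same helper as in the sketch above.
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

# A purely local method started away from the origin...
result = minimize(rastrigin, x0=[3.2, 4.1], method="Nelder-Mead")

# ...usually ends near the closest local minimum (about (3, 4), f ~ 25),
# not at the global minimum (0, 0) with f = 0.
print(result.x, result.fun)
```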
Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.
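As a small illustrative sketch (NumPy only; the matrices Q, c, A, b are invented for this example), an equality-constrained quadratic program, minimize ½xᵀQx + cᵀx subject to Ax = b, can be solved directly from its KKT linear system:

```python
import numpy as np

# minimize 0.5 * x^T Q x + c^T x   subject to   A x = b
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])          # single constraint: x1 + x2 = 1
b = np.array([1.0])

# KKT conditions:  [Q  A^T] [x ]   [-c]
#                  [A   0 ] [nu] = [ b]
kkt = np.block([[Q, A.T],
                [A, np.zeros((A.shape[0], A.shape[0]))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(kkt, rhs)

x, nu = sol[:Q.shape[0]], sol[Q.shape[0]:]
print("x* =", x, "multiplier =", nu)
```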
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints. It is named after the mathematician Joseph-Louis Lagrange.
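A classic worked example, sketched here with SymPy (the library choice and variable names are assumptions of this example): maximize f(x, y) = xy subject to x + y = 1 by solving the stationarity conditions of the Lagrangian L = f − λg:

```python
from sympy import symbols, solve, diff

x, y, lam = symbols("x y lambda", real=True)

f = x * y                    # objective
g = x + y - 1                # constraint g(x, y) = 0
L = f - lam * g              # Lagrangian

# Stationarity: dL/dx = 0, dL/dy = 0, plus the constraint itself.
equations = [diff(L, x), diff(L, y), g]
print(solve(equations, [x, y, lam]))   # {x: 1/2, y: 1/2, lambda: 1/2}
```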
Gradient descent is a method for unconstrained mathematical optimization. It is a first-order iterative algorithm for finding a local minimum of a differentiable multivariate function.
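A minimal sketch of the update x ← x − γ∇f(x) on a simple convex quadratic (the test function, step size, and iteration count are illustrative choices):

```python
import numpy as np

def f(x):
    return (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)])

x = np.array([10.0, 10.0])   # arbitrary starting point
gamma = 0.1                  # fixed step size (learning rate)

for _ in range(200):
    x = x - gamma * grad_f(x)

print(x, f(x))               # converges toward (3, -1), where f = 0
```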
In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found.
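A sketch of steepest-ascent hill climbing on a toy discrete problem (the objective and the single-bit-flip neighbourhood are invented for illustration):

```python
import random

def score(bits):
    """Toy objective: count of 1-bits (maximized by the all-ones string)."""
    return sum(bits)

def neighbors(bits):
    """All strings reachable by flipping exactly one bit."""
    for i in range(len(bits)):
        flipped = list(bits)
        flipped[i] ^= 1
        yield flipped

current = [random.randint(0, 1) for _ in range(12)]
while True:
    best = max(neighbors(current), key=score)
    if score(best) <= score(current):
        break                 # no improving neighbor: a local optimum
    current = best            # accept the incremental improvement

print(current, score(current))
```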
Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and multidisciplinary design analysis and optimization (MDAO).
Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function g(x) is equivalent to the minimization of the function f(x) := −g(x).
In mathematics and computing, the Levenberg–Marquardt algorithm, also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent. The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach.
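A compact hand-rolled sketch of an LMA-style iteration for a small curve-fitting problem (the model y = a·e^(b·t), the synthetic data, and all numerical settings are assumptions of this example); the damping parameter λ moves the step between Gauss–Newton-like behaviour (small λ) and gradient-descent-like behaviour (large λ):

```python
import numpy as np

# Synthetic curve-fitting data: y = 2.5 * exp(-1.3 * t) plus a little noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 30)
y = 2.5 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)

def residuals(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

p = np.array([1.0, -0.5])     # initial guess
lam = 1e-2                    # damping parameter

for _ in range(100):
    r, J = residuals(p), jacobian(p)
    # Damped normal equations: (J^T J + lam * diag(J^T J)) step = -J^T r
    JTJ = J.T @ J
    step = np.linalg.solve(JTJ + lam * np.diag(np.diag(JTJ)), -J.T @ r)
    if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.3    # accept: move toward Gauss-Newton
    else:
        lam *= 2.0                      # reject: lean toward gradient descent

print(p)   # close to (2.5, -1.3)
```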
In statistics, nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. The data are fitted by a method of successive approximations (iterations).
Feature selection is the process of selecting a subset of relevant features for use in model construction. Stylometry and DNA microarray analysis are two cases where feature selection is used. It should be distinguished from feature extraction.
In optimization, line search is a basic iterative approach to find a local minimum x* of an objective function f : ℝⁿ → ℝ. It first finds a descent direction along which the objective function f will be reduced, and then computes a step size that determines how far x should move along that direction. The descent direction can be computed by various methods, such as gradient descent or a quasi-Newton method. The step size can be determined either exactly or inexactly.
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.
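A sketch of a backtracking line search enforcing the Armijo (sufficient decrease) condition inside a steepest-descent iteration (the test function and the constants α₀, ρ, c are illustrative):

```python
import numpy as np

def f(x):
    return x[0] ** 2 + 5.0 * x[1] ** 2

def grad_f(x):
    return np.array([2.0 * x[0], 10.0 * x[1]])

def backtracking_step(x, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step size until the Armijo sufficient-decrease condition holds."""
    g = grad_f(x)
    d = -g                            # steepest-descent search direction
    alpha = alpha0
    while f(x + alpha * d) > f(x) + c * alpha * g @ d:
        alpha *= rho                  # backtrack: try a smaller step
    return x + alpha * d

x = np.array([4.0, -2.0])
for _ in range(25):
    x = backtracking_step(x)
print(x, f(x))                        # approaches the minimizer (0, 0)
```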
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances, but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.
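A minimal NumPy sketch of the standard alternating (Lloyd-style) iteration, assigning points to the nearest centroid and moving each centroid to the mean of its points (the synthetic data and the value of k are invented for this example):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D data drawn around three centers.
data = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
                  for c in ([0, 0], [3, 3], [0, 4])])

k = 3
centroids = data[rng.choice(len(data), size=k, replace=False)]

for _ in range(100):
    # Assignment step: nearest centroid by squared Euclidean distance.
    d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    # Update step: each centroid becomes the mean of its assigned points.
    new_centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print(centroids)
```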
In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function.
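For reference, the standard two-dimensional form is f(x, y) = (a − x)² + b(y − x²)², usually with a = 1 and b = 100; its global minimum f = 0 lies at (1, 1) inside a long, curved, flat valley. A short sketch:

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    """Standard 2-D Rosenbrock ("banana") function."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

print(rosenbrock(1.0, 1.0))   # 0.0, the global minimum
print(rosenbrock(0.0, 0.0))   # 1.0: on the valley floor y = x^2, yet far from the minimum
```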
In image processing, ridge detection is the attempt, via software, to locate ridges in an image, defined as curves whose points are local maxima of the function, akin to geographical ridges.
Non-linear least squares is the form of least squares analysis used to fit a set of m observations with a model that is non-linear in n unknown parameters (m ≥ n). It is used in some forms of nonlinear regression. The basis of the method is to approximate the model by a linear one and to refine the parameters by successive iterations. There are many similarities to linear least squares, but also some significant differences. In economic theory, the non-linear least squares method is applied in (i) the probit regression, (ii) threshold regression, (iii) smooth regression, (iv) logistic link regression, (v) Box–Cox transformed regressors.
Coordinate descent is an optimization algorithm that successively minimizes along coordinate directions to find the minimum of a function. At each iteration, the algorithm determines a coordinate or coordinate block via a coordinate selection rule, then exactly or inexactly minimizes over the corresponding coordinate hyperplane while fixing all other coordinates or coordinate blocks. A line search along the coordinate direction can be performed at the current iterate to determine the appropriate step size. Coordinate descent is applicable in both differentiable and derivative-free contexts.
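A sketch of cyclic coordinate descent with exact coordinate-wise minimization on a smooth quadratic (the function and its closed-form coordinate updates are specific to this illustrative example):

```python
# f(x, y) = x^2 + y^2 + x*y - 3*x  (minimum at (2, -1), where f = -3)
def f(x, y):
    return x ** 2 + y ** 2 + x * y - 3.0 * x

x, y = 0.0, 0.0
for _ in range(50):
    # Exact minimization over x with y fixed: df/dx = 2x + y - 3 = 0
    x = (3.0 - y) / 2.0
    # Exact minimization over y with x fixed: df/dy = 2y + x = 0
    y = -x / 2.0

print(x, y, f(x, y))   # approaches (2, -1) and f = -3
```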
Graph cut optimization is a combinatorial optimization method applicable to a family of functions of discrete variables, named after the concept of cut in the theory of flow networks. Thanks to the max-flow min-cut theorem, determining the minimum cut over a graph representing a flow network is equivalent to computing the maximum flow over the network. Given a pseudo-Boolean function f, if it is possible to construct a flow network with positive weights such that each cut of the network corresponds to an assignment of the variables, with the cost of the cut equal to the value of f under that assignment (up to an additive constant), then the global minimum of f can be found in polynomial time by computing a minimum cut of the graph.
In mathematics, mirror descent is an iterative optimization algorithm for finding a local minimum of a differentiable function.
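A sketch of mirror descent with the negative-entropy mirror map over the probability simplex (the exponentiated-gradient update); the linear objective and step size are made up for illustration:

```python
import numpy as np

c = np.array([0.7, 0.1, 0.4])          # minimize f(p) = c . p over the simplex

def grad_f(p):
    return c                            # constant gradient of a linear objective

p = np.ones(3) / 3                      # start at the uniform distribution
eta = 0.5                               # step size

for _ in range(200):
    # Mirror (exponentiated-gradient) update, then renormalize onto the simplex.
    p = p * np.exp(-eta * grad_f(p))
    p /= p.sum()

print(p)   # mass concentrates on the smallest-cost coordinate (index 1)
```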