Slater's condition

In mathematics, Slater's condition (or Slater condition) is a sufficient condition for strong duality to hold for a convex optimization problem, named after Morton L. Slater. [1] Informally, Slater's condition states that the feasible region must have an interior point (see technical details below).

Slater's condition is a specific example of a constraint qualification. [2] In particular, if Slater's condition holds for the primal problem, then the duality gap is 0, and if the dual value is finite then it is attained.

Formulation

Let $f_1, \ldots, f_m$ be real-valued functions on some subset $D$ of $\mathbb{R}^n$. We say that the functions satisfy the Slater condition if there exists some $x$ in the relative interior of $D$ for which $f_i(x) < 0$ for all $i$ in $1, \ldots, m$. We say that the functions satisfy the relaxed Slater condition if some of the functions (say $f_1, \ldots, f_k$) are affine, and there exists $x \in \operatorname{relint}(D)$ such that $f_i(x) \le 0$ for all $i = 1, \ldots, k$ and $f_i(x) < 0$ for all $i = k+1, \ldots, m$. [3]
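As a concrete illustration (the constraint functions and the candidate point below are invented for this example, not taken from the article), take $f_1(x) = x_1^2 + x_2^2 - 1$ and the affine $f_2(x) = x_1 + x_2 - 1$ on $\mathbb{R}^2$: the point $x = (0, 0)$ satisfies both constraints with strict inequality, so the Slater condition holds. A minimal Python sketch of this check:

```python
# Hypothetical illustration: check Slater's condition at a candidate point.
# The constraint functions and the candidate point are made up for this example.

def f1(x):   # convex (nonlinear) constraint: unit disk, f1(x) <= 0
    return x[0]**2 + x[1]**2 - 1

def f2(x):   # affine constraint: half-plane, f2(x) <= 0
    return x[0] + x[1] - 1

candidate = (0.0, 0.0)

# Slater condition: every constraint holds with strict inequality.
slater = all(f(candidate) < 0 for f in (f1, f2))

# Relaxed Slater condition: affine constraints may hold with equality;
# only the nonlinear constraints must be strict.
relaxed = f1(candidate) < 0 and f2(candidate) <= 0

print(slater, relaxed)   # True True
```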

Application to convex optimization

Consider the optimization problem

$$
\begin{aligned}
\text{minimize}\quad & f_0(x) \\
\text{subject to}\quad & f_i(x) \le 0, \quad i = 1, \ldots, m \\
& Ax = b,
\end{aligned}
$$

where $f_0, \ldots, f_m$ are convex functions. This is an instance of convex programming. Slater's condition for convex programming states that there exists an $x^*$ that is strictly feasible, that is, all $m$ constraints are satisfied and the nonlinear constraints are satisfied with strict inequalities.

If a convex program satisfies Slater's condition (or the relaxed condition), and it is bounded from below, then strong duality holds. Mathematically, strong duality holds if there exists an $x^* \in \operatorname{relint}(D)$ (where relint denotes the relative interior of the convex set $D := \bigcap_{i=0}^{m} \operatorname{dom}(f_i)$) such that

$f_i(x^*) < 0$ for $i = 1, \ldots, m$ (the convex, nonlinear constraints) and

$Ax^* = b$. [4]
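The following is a minimal numerical sketch, assuming the third-party cvxpy and numpy libraries and problem data invented purely for illustration. The program below has a point that meets the quadratic constraint strictly and the affine constraint exactly, so the relaxed Slater condition holds, and the solver's dual variables certify a zero duality gap (up to numerical tolerance):

```python
import cvxpy as cp
import numpy as np

# Hypothetical problem data, for illustration only.
x = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(x - np.array([2.0, 1.0])))
constraints = [
    cp.sum_squares(x) <= 1,   # convex inequality constraint
    x[0] + x[1] == 0.5,       # affine equality constraint
]
# Relaxed Slater point: x = (0.25, 0.25) meets the equality exactly
# and the inequality strictly (0.125 < 1), so strong duality holds
# and the optimal dual multipliers are attained.
prob = cp.Problem(objective, constraints)
prob.solve()

print("primal optimal value:", prob.value)
print("multiplier of the inequality:", constraints[0].dual_value)
print("multiplier of the equality:", constraints[1].dual_value)
```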

Generalized Inequalities

Given the problem

$$
\begin{aligned}
\text{minimize}\quad & f_0(x) \\
\text{subject to}\quad & f_i(x) \preceq_{K_i} 0, \quad i = 1, \ldots, m \\
& Ax = b,
\end{aligned}
$$

where $f_0$ is convex and $f_i$ is $K_i$-convex for each $i$, Slater's condition says that if there exists an $x^* \in \operatorname{relint}(D)$ such that

$f_i(x^*) \prec_{K_i} 0$ for $i = 1, \ldots, m$ and

$Ax^* = b$,

then strong duality holds. [4]
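As a sketch under the same assumptions (cvxpy with its bundled conic solver, invented data), here is a small semidefinite program whose generalized inequality is taken with respect to the positive semidefinite cone $K$. At $x = (0, 0)$ the matrix $F(x)$ is positive definite, so the problem is strictly feasible in the sense above and strong duality holds; the dual variable returned by the solver is the corresponding PSD matrix:

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: minimize a linear objective subject to the LMI
# F(x) = A0 + x1*A1 + x2*A2 >= 0 (PSD cone), i.e. -F(x) <=_K 0.
A0 = np.array([[2.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
c = np.array([1.0, 1.0])

x = cp.Variable(2)
F = A0 + x[0] * A1 + x[1] * A2
constraints = [F >> 0]          # generalized inequality w.r.t. the PSD cone
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()

# x = (0, 0) gives F = A0, which is positive definite, so the problem is
# strictly feasible and strong duality holds for this SDP.
Z = constraints[0].dual_value   # dual PSD matrix variable
print("primal optimal value:", prob.value)
print("dual matrix:\n", Z)
```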

Related Research Articles

The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a vector subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry.

In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints. It is named after the mathematician Joseph-Louis Lagrange.

In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph is a convex set. In simple terms, a convex function's graph is shaped like a cup ∪, while a concave function's graph is shaped like a cap ∩.

In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate. It allows in particular for a far reaching generalization of Lagrangian duality.

Interior-point methods (IPMs) are algorithms for solving linear and non-linear convex optimization problems. IPMs combine two advantages of previously known algorithms: polynomial worst-case run-time, like the ellipsoid method, and good practical performance, like the simplex method.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving systems of linear equations. It was first discovered by the Polish mathematician Stefan Kaczmarz, and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART). ART includes the positivity constraint, making it nonlinear.

Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem. Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal. This fact is called weak duality.

In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions over convex sets. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.

Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.

Conic optimization is a subfield of convex optimization that studies problems consisting of minimizing a convex function over the intersection of an affine subspace and a convex cone.

Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.

A self-concordant function is a function satisfying a certain differential inequality, which makes it particularly amenable to optimization using Newton's method. A self-concordant barrier is a particular self-concordant function that is also a barrier function for a particular convex set. Self-concordant barriers are important ingredients in interior-point methods for optimization.

A sum-of-squares optimization program is an optimization problem with a linear cost function and a particular type of constraint on the decision variables. These constraints are of the form that when the decision variables are used as coefficients in certain polynomials, those polynomials should have the polynomial SOS property. When fixing the maximum degree of the polynomials involved, sum-of-squares optimization is also known as the Lasserre hierarchy of relaxations in semidefinite programming.

In mathematical optimization, fractional programming is a generalization of linear-fractional programming. The objective function in a fractional program is a ratio of two functions that are in general nonlinear. The ratio to be optimized often describes some kind of efficiency of a system.

In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix and an approximating matrix, subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, often there are other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure.

In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound for a minimization problem can be found because of the weak duality principle.

In the mathematical theory of probability, the drift-plus-penalty method is used for optimization of queueing networks and other stochastic systems.

References

  1. Slater, Morton (1950). Lagrange Multipliers Revisited (PDF). Cowles Commission Discussion Paper No. 403 (Report). Reprinted in Giorgi, Giorgio; Kjeldsen, Tinne Hoff, eds. (2014). Traces and Emergence of Nonlinear Programming. Basel: Birkhäuser. pp. 293–306. ISBN 978-3-0348-0438-7.
  2. Takayama, Akira (1985). Mathematical Economics. New York: Cambridge University Press. pp. 66–76. ISBN 0-521-25707-7.
  3. Nemirovsky and Ben-Tal (2023). "Optimization III: Convex Optimization" (PDF).
  4. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 3, 2011.