S-procedure

The S-procedure or S-lemma is a mathematical result that gives conditions under which one quadratic inequality is a consequence of another. The S-procedure was developed independently in a number of different contexts [1][2] and has applications in control theory, linear algebra, and mathematical optimization.

Statement of the S-procedure

Let $F_1, F_2$ be symmetric $n \times n$ matrices, $g_1, g_2 \in \mathbb{R}^n$ be vectors, and $h_1, h_2$ be real numbers. Assume that there is some $x_0 \in \mathbb{R}^n$ such that the strict inequality $x_0^\top F_1 x_0 + 2 g_1^\top x_0 + h_1 < 0$ holds. Then the implication

$$x^\top F_1 x + 2 g_1^\top x + h_1 \le 0 \;\Longrightarrow\; x^\top F_2 x + 2 g_2^\top x + h_2 \le 0$$

holds for all $x \in \mathbb{R}^n$ if and only if there exists some nonnegative number $\lambda$ such that

$$\lambda \begin{bmatrix} F_1 & g_1 \\ g_1^\top & h_1 \end{bmatrix} - \begin{bmatrix} F_2 & g_2 \\ g_2^\top & h_2 \end{bmatrix}$$

is positive semidefinite. [3]
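
As a concrete check of the matrix condition, the sketch below (a minimal illustration using NumPy; the data, which encodes the implication $\|x\|^2 \le 1 \Rightarrow \|x\|^2 \le 4$, and the candidate multiplier $\lambda = 2$ are assumptions chosen for this example, not part of the lemma) assembles the block matrices and verifies positive semidefiniteness via the smallest eigenvalue:

```python
import numpy as np

# Illustrative data (an assumption for this sketch, not from the article):
#   x^T F1 x + 2 g1^T x + h1 <= 0   encodes   ||x||^2 <= 1
#   x^T F2 x + 2 g2^T x + h2 <= 0   encodes   ||x||^2 <= 4
# Slater point: x0 = 0 gives 0 + 0 + h1 = -1 < 0, so the lemma applies.
F1, g1, h1 = np.eye(2), np.zeros((2, 1)), -1.0
F2, g2, h2 = np.eye(2), np.zeros((2, 1)), -4.0

def block(F, g, h):
    """Assemble the symmetric block matrix [[F, g], [g^T, h]]."""
    return np.block([[F, g], [g.T, np.array([[h]])]])

M1, M2 = block(F1, g1, h1), block(F2, g2, h2)

# Candidate multiplier; for this data any lambda in [1, 4] yields a certificate.
lam = 2.0
certificate = lam * M1 - M2

# The matrix is positive semidefinite iff its smallest eigenvalue is >= 0
# (checked up to a small numerical tolerance).
min_eig = np.linalg.eigvalsh(certificate).min()
print(f"lambda = {lam}, smallest eigenvalue = {min_eig:.6f}")
assert min_eig >= -1e-9
```

In general the multiplier is not guessed but found (or shown not to exist) by solving a one-variable semidefinite feasibility problem in $\lambda$; the lemma guarantees that this convex search is exact whenever the strict-feasibility assumption on $x_0$ holds, even though the original implication concerns possibly nonconvex quadratics.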

References

  1. Frank Uhlig, "A recurring theorem about pairs of quadratic forms and extensions: a survey", Linear Algebra and its Applications, Volume 25, 1979, pages 219–237.
  2. Imre Pólik and Tamás Terlaky, "A Survey of the S-Lemma", SIAM Review, Volume 49, 2007, pages 371–418.
  3. Stephen Boyd and Lieven Vandenberghe, Convex Optimization, Cambridge University Press, 2004, page 655.

See also

  - Finsler's lemma, an equivalent result characterizing the positive definiteness of a quadratic form constrained by a linear form.