A second-order cone program (SOCP) is a convex optimization problem of the form

$$\begin{aligned}
\text{minimize} \quad & f^T x \\
\text{subject to} \quad & \lVert A_i x + b_i \rVert_2 \leq c_i^T x + d_i, \quad i = 1, \dots, m \\
& F x = g
\end{aligned}$$

where the problem parameters are $f \in \mathbb{R}^n$, $A_i \in \mathbb{R}^{n_i \times n}$, $b_i \in \mathbb{R}^{n_i}$, $c_i \in \mathbb{R}^n$, $d_i \in \mathbb{R}$, $F \in \mathbb{R}^{p \times n}$, and $g \in \mathbb{R}^p$. Here $x \in \mathbb{R}^n$ is the optimization variable, $\lVert \cdot \rVert_2$ is the Euclidean norm, and $^T$ indicates transpose. [1] The "second-order cone" in SOCP arises from the constraints, which are equivalent to requiring the affine function $(A_i x + b_i, \; c_i^T x + d_i)$ to lie in the second-order cone in $\mathbb{R}^{n_i + 1}$. [1]
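As an illustration, a small problem in this standard form can be posed with CVXPY, one of the modeling tools listed in the solver table below. The data values here are made up solely to produce a feasible instance; only the structure of the constraints is essential.

```python
# Minimal SOCP sketch in CVXPY (made-up data, for illustration only):
#   minimize    f^T x
#   subject to  ||A x + b||_2 <= c^T x + d,   F x = g
import cvxpy as cp
import numpy as np

n = 3
f = np.array([1.0, 2.0, 0.5])
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])
b = np.array([0.1, 0.2])
c = np.array([1.0, 1.0, 1.0])
d = 5.0
F = np.array([[1.0, 1.0, 1.0]])
g = np.array([1.0])

x = cp.Variable(n)
constraints = [cp.SOC(c @ x + d, A @ x + b),  # ||A x + b||_2 <= c^T x + d
               F @ x == g]
prob = cp.Problem(cp.Minimize(f @ x), constraints)
prob.solve()
print(prob.status, x.value)
```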
SOCPs can be solved by interior point methods [2] and, in general, can be solved more efficiently than semidefinite programming (SDP) problems. [3] Some engineering applications of SOCP include filter design, antenna array weight design, truss design, and grasping force optimization in robotics. [4] Applications in quantitative finance include portfolio optimization; some market impact constraints are not linear and cannot be handled by quadratic programming, but they can be formulated as SOCP problems. [5] [6] [7]
The standard or unit second-order cone of dimension $n+1$ is defined as

$$\mathcal{C}_{n+1} = \left\{ \begin{bmatrix} x \\ t \end{bmatrix} \;\middle|\; x \in \mathbb{R}^n, \; t \in \mathbb{R}, \; \lVert x \rVert_2 \leq t \right\}.$$
The second-order cone is also known as the quadratic cone, the ice-cream cone, or the Lorentz cone. The standard second-order cone in $\mathbb{R}^3$ is $\left\{ (x, y, t) \mid \sqrt{x^2 + y^2} \leq t \right\}$.
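Checking whether a point lies in the second-order cone is just a norm comparison; the following small sketch (with made-up test points) makes the definition concrete.

```python
import numpy as np

def in_second_order_cone(x, t, tol=1e-9):
    """Return True if (x, t) lies in the second-order cone ||x||_2 <= t."""
    return np.linalg.norm(x) <= t + tol

print(in_second_order_cone(np.array([3.0, 4.0]), 5.0))  # True:  ||(3,4)|| = 5 <= 5
print(in_second_order_cone(np.array([3.0, 4.0]), 4.0))  # False: 5 > 4
```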
The set of points satisfying a second-order cone constraint is the inverse image of the unit second-order cone under an affine mapping:

$$\lVert A_i x + b_i \rVert_2 \leq c_i^T x + d_i \quad \Longleftrightarrow \quad \begin{bmatrix} A_i \\ c_i^T \end{bmatrix} x + \begin{bmatrix} b_i \\ d_i \end{bmatrix} \in \mathcal{C}_{n_i + 1},$$

and hence is convex.
The second-order cone can be embedded in the cone of positive semidefinite matrices since

$$\lVert x \rVert \leq t \quad \Longleftrightarrow \quad \begin{bmatrix} t I & x \\ x^T & t \end{bmatrix} \succeq 0,$$

i.e., a second-order cone constraint is equivalent to a linear matrix inequality (here $M \succeq 0$ means $M$ is a positive semidefinite matrix). Similarly, we also have

$$\lVert A_i x + b_i \rVert_2 \leq c_i^T x + d_i \quad \Longleftrightarrow \quad \begin{bmatrix} (c_i^T x + d_i) I & A_i x + b_i \\ (A_i x + b_i)^T & c_i^T x + d_i \end{bmatrix} \succeq 0.$$
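This equivalence can be spot-checked numerically by inspecting the eigenvalues of the "arrow" matrix at an arbitrary made-up point; the sketch below assumes nothing beyond the identity just stated.

```python
import numpy as np

def arrow_matrix(x, t):
    """Build [[t*I, x], [x^T, t]] for a vector x and scalar t."""
    n = len(x)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = t * np.eye(n)
    M[:n, n] = x
    M[n, :n] = x
    M[n, n] = t
    return M

x = np.array([3.0, 4.0])
for t in (5.0, 4.0):  # ||x||_2 = 5, so t = 5 is in the cone and t = 4 is not
    psd = np.all(np.linalg.eigvalsh(arrow_matrix(x, t)) >= -1e-9)
    print(t, np.linalg.norm(x) <= t, psd)  # the two membership tests agree
```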
When $A_i = 0$ for $i = 1, \dots, m$, the SOCP reduces to a linear program. When $c_i = 0$ for $i = 1, \dots, m$, the SOCP is equivalent to a convex quadratically constrained linear program.
Convex quadratically constrained quadratic programs can also be formulated as SOCPs by reformulating the objective function as a constraint. [4] Semidefinite programming subsumes SOCPs, as the SOCP constraints can be written as linear matrix inequalities (LMIs) and can be reformulated as an instance of a semidefinite program. [4] The converse, however, is not valid: there are positive semidefinite cones that do not admit any second-order cone representation. [3] In fact, while any closed convex semialgebraic set in the plane can be written as a feasible region of an SOCP, [8] it is known that there exist convex semialgebraic sets that are not representable by SDPs, that is, there exist convex semialgebraic sets that cannot be written as a feasible region of an SDP. [9]
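As a sketch of that epigraph reformulation: a convex quadratic objective $x^T P x + q^T x$ with $P \succeq 0$ can be moved into a constraint by introducing a scalar variable $t$, and the resulting quadratic constraint becomes a second-order cone constraint via the standard rotated-cone identity (writing $P = F^T F$; the variable $t$ and the factor $F$ are introduced here only for the illustration):

$$\begin{aligned}
\min_{x} \; x^T P x + q^T x \quad &\Longleftrightarrow \quad \min_{x, t} \; t \;\; \text{subject to} \;\; x^T F^T F x \leq t - q^T x, \\
x^T F^T F x \leq t - q^T x \quad &\Longleftrightarrow \quad \left\lVert \begin{bmatrix} 2 F x \\ t - q^T x - 1 \end{bmatrix} \right\rVert_2 \leq t - q^T x + 1.
\end{aligned}$$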
Consider a convex quadratic constraint of the form

$$x^T A x + b^T x + c \leq 0,$$

where $A$ is symmetric positive definite. This is equivalent to the SOCP constraint

$$\left\lVert A^{1/2} x + \tfrac{1}{2} A^{-1/2} b \right\rVert \leq \left( \tfrac{1}{4} b^T A^{-1} b - c \right)^{1/2}.$$
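The equivalence can be spot-checked numerically with made-up data (a random positive definite $A$, a random $b$, and random test points); this is only a sanity-check sketch, not part of any solver.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.normal(size=n)
c = -1.0

A_half = np.real(sqrtm(A))           # A^{1/2}
rhs = np.sqrt(0.25 * b @ np.linalg.inv(A) @ b - c)

agree = 0
for _ in range(1000):
    x = rng.normal(size=n)
    quad = x @ A @ x + b @ x + c <= 0
    soc = np.linalg.norm(A_half @ x + 0.5 * np.linalg.solve(A_half, b)) <= rhs
    agree += (quad == soc)           # the two constraints should agree
print(agree, "of 1000 sampled points agree")
```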
Consider a stochastic linear program in inequality form

$$\begin{aligned}
\text{minimize} \quad & c^T x \\
\text{subject to} \quad & \mathbb{P}\left(a_i^T x \leq b_i\right) \geq p, \quad i = 1, \dots, m
\end{aligned}$$

where the parameters $a_i$ are independent Gaussian random vectors with mean $\bar{a}_i$ and covariance $\Sigma_i$, and $p \geq 0.5$. This problem can be expressed as the SOCP

$$\begin{aligned}
\text{minimize} \quad & c^T x \\
\text{subject to} \quad & \bar{a}_i^T x + \Phi^{-1}(p) \left\lVert \Sigma_i^{1/2} x \right\rVert_2 \leq b_i, \quad i = 1, \dots, m
\end{aligned}$$

where $\Phi^{-1}(\cdot)$ is the inverse normal cumulative distribution function. [1]
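A minimal sketch of this chance-constrained formulation in CVXPY follows, assuming made-up problem data ($m = 2$ constraints, $p = 0.95$) and made-up box bounds on $x$ simply to keep the toy problem bounded.

```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n, m, p = 3, 2, 0.95
c = np.array([1.0, 1.0, 1.0])
a_bar = [rng.normal(size=n) for _ in range(m)]
Sigma_half = []
for _ in range(m):
    M = rng.normal(size=(n, n))
    Sigma_half.append(np.real(sqrtm(M @ M.T + np.eye(n))))  # Sigma_i^{1/2}
b = np.array([10.0, 10.0])

x = cp.Variable(n)
kappa = norm.ppf(p)  # Phi^{-1}(p)
constraints = [a_bar[i] @ x + kappa * cp.norm(Sigma_half[i] @ x, 2) <= b[i]
               for i in range(m)]
constraints += [x >= -5, x <= 5]   # made-up bounds so the toy problem is bounded
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(prob.status, x.value)
```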
Second-order cone programs of the form above are referred to as deterministic second-order cone programs, since the data defining them are deterministic. Stochastic second-order cone programs are a class of optimization problems defined to handle uncertainty in the data defining deterministic second-order cone programs. [10]
Other modeling examples are available at the MOSEK modeling cookbook. [11]
| Name | License | Brief info |
|---|---|---|
| AMPL | commercial | An algebraic modeling language with SOCP support |
| Artelys Knitro | commercial | |
| CPLEX | commercial | |
| CVXPY | open source | Python modeling language with support for SOCP; supports open source and commercial solvers |
| CVXOPT | open source | Convex solver with support for SOCP |
| ECOS | open source | SOCP solver optimized for embedded applications |
| FICO Xpress | commercial | |
| Gurobi Optimizer | commercial | |
| MATLAB | commercial | The coneprog function solves SOCP problems [12] using an interior-point algorithm [13] |
| MOSEK | commercial | Parallel interior-point algorithm |
| NAG Numerical Library | commercial | General-purpose numerical library with an SOCP solver |
| SCS | open source | SCS (Splitting Conic Solver) is a numerical optimization package for solving large-scale convex quadratic cone problems |