Constraint (mathematics)


In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints, primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set.[1]


Example

The following is a simple optimization problem:

$$\min f(\mathbf{x}) = x_1^2 + x_2^4,$$

subject to

$$x_1 \ge 1$$

and

$$x_2 = 1,$$

where $\mathbf{x}$ denotes the vector $(x_1, x_2)$.

In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions.

Without the constraints, the solution would be $(0, 0)$, where $f(\mathbf{x})$ has the lowest value. But this solution does not satisfy the constraints. The solution of the constrained optimization problem stated above is $\mathbf{x} = (1, 1)$, which is the point with the smallest value of $f(\mathbf{x})$ that satisfies the two constraints.
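
A minimal numerical sketch of this example, using SciPy's general-purpose `minimize` routine with the objective and constraints as reconstructed above; the solver choice (SLSQP) and starting point are illustrative assumptions, not part of the original problem statement.

```python
from scipy.optimize import minimize

# Objective function f(x) = x1^2 + x2^4 from the example above.
def f(x):
    return x[0] ** 2 + x[1] ** 4

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] - 1},  # x1 >= 1 (inequality)
    {"type": "eq",   "fun": lambda x: x[1] - 1},  # x2 = 1  (equality)
]

# Start from the unconstrained minimizer (0, 0); SLSQP handles both
# equality and inequality constraints.
result = minimize(f, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
print(result.x)  # approximately [1. 1.]
```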

Terminology

Hard and soft constraints

If the problem mandates that the constraints be satisfied, as in the above discussion, the constraints are sometimes referred to as hard constraints. However, in some problems, called flexible constraint satisfaction problems, it is preferred but not required that certain constraints be satisfied; such non-mandatory constraints are known as soft constraints. Soft constraints arise in, for example, preference-based planning. In a MAX-CSP problem, a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints.
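
To make the MAX-CSP notion concrete, here is a small brute-force sketch (constraints and domain invented for illustration) that ranks assignments by the number of soft constraints they satisfy:

```python
from itertools import product

# Each soft constraint is a predicate over an assignment (x, y, z).
constraints = [
    lambda x, y, z: x != y,
    lambda x, y, z: y < z,
    lambda x, y, z: x + z == 3,
]

def score(assignment):
    # MAX-CSP quality measure: the number of satisfied constraints.
    return sum(c(*assignment) for c in constraints)

# Exhaustive search over the small domain {0, 1, 2} for each variable.
best = max(product(range(3), repeat=3), key=score)
print(best, score(best))  # an assignment satisfying all three constraints
```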

Global constraints

Global constraints [2] are constraints representing a specific relation on a number of variables, taken altogether. Some of them, such as the alldifferent constraint, can be rewritten as a conjunction of atomic constraints in a simpler language: the alldifferent constraint holds on $n$ variables $x_1, \ldots, x_n$, and is satisfied if and only if the variables take values which are pairwise different. It is semantically equivalent to the conjunction of inequalities $x_i \ne x_j$ for $1 \le i < j \le n$. Other global constraints extend the expressivity of the constraint framework. In this case, they usually capture a typical structure of combinatorial problems. For instance, the regular constraint expresses that a sequence of variables is accepted by a deterministic finite automaton.
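
The equivalence between the global alldifferent constraint and its decomposition into pairwise disequalities can be checked directly; a minimal sketch:

```python
from itertools import combinations

def alldifferent(values):
    # Global form: satisfied iff all values are pairwise different.
    return len(set(values)) == len(values)

def alldifferent_decomposed(values):
    # Conjunction of atomic constraints x_i != x_j for all i < j.
    return all(a != b for a, b in combinations(values, 2))

assert alldifferent([3, 1, 4]) and alldifferent_decomposed([3, 1, 4])
assert not alldifferent([3, 1, 3]) and not alldifferent_decomposed([3, 1, 3])
```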

Global constraints are used [3] to simplify the modeling of constraint satisfaction problems, to extend the expressivity of constraint languages, and to improve constraint resolution: by considering the variables altogether, infeasible situations can be detected earlier in the solving process. Many global constraints are referenced in an online catalog.


Related Research Articles

Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.
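
A small QP sketch, solved here with SciPy's general-purpose SLSQP method rather than a dedicated QP solver; the problem data are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize 1/2 x^T Q x + c^T x subject to x1 + x2 <= 2 and x >= 0.
Q = np.array([[2.0, 1.0], [1.0, 2.0]])  # positive definite
c = np.array([-3.0, 0.0])

def objective(x):
    return 0.5 * x @ Q @ x + c @ x

cons = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]  # x1 + x2 <= 2
bounds = [(0, None), (0, None)]                                # x >= 0

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x, res.fun)  # about [1.5, 0.] and -2.25
```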


Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming.
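
As a concrete sketch (problem data invented), a linear program can be solved with SciPy's `linprog`, which minimizes by convention, so a maximization objective is negated:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
res = linprog(c=[-3, -2],                 # negated for maximization
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point [4. 0.] and maximum value 12.0
```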


Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. Optimization problems of sorts arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

In mathematics, an inequation is a statement that an inequality or a non-equality holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Examples of inequations include $a \ne b$ and $x < y$.

In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints. It is named after the mathematician Joseph-Louis Lagrange. The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function.
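
A minimal symbolic sketch of the method (problem invented for illustration), finding the stationary points of the Lagrangian with SymPy:

```python
import sympy as sp

# Minimize f(x, y) = x^2 + y^2 subject to the equality constraint x + y = 1.
x, y, lam = sp.symbols("x y lam", real=True)
f = x**2 + y**2
g = x + y - 1                  # constraint in the form g(x, y) = 0
L = f - lam * g                # the Lagrangian function

# Stationarity of L in x and y, together with the constraint itself.
solution = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam])
print(solution)  # {x: 1/2, y: 1/2, lam: 1}
```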


Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent.
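
A bare-bones sketch of the update rule (function, starting point, and step size invented for illustration):

```python
# Gradient of f(x, y) = (x - 3)^2 + (y + 1)^2, whose minimum is at (3, -1).
def grad_f(x, y):
    return 2 * (x - 3), 2 * (y + 1)

x, y = 0.0, 0.0      # starting point
step = 0.1           # fixed learning rate
for _ in range(200):
    gx, gy = grad_f(x, y)
    x, y = x - step * gx, y - step * gy  # step opposite the gradient
print(x, y)  # converges toward (3, -1)
```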

In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming.

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints are linear.
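
A sketch of a tiny ILP using SciPy's `milp` (available in SciPy 1.9 and later; problem data invented). The integrality requirement cuts the relaxed optimum of 7 down to 6:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize x + 2y subject to x + y <= 3.5 with x, y nonnegative integers.
# milp minimizes, so the objective is negated.
c = np.array([-1.0, -2.0])
res = milp(c,
           constraints=LinearConstraint(np.array([[1.0, 1.0]]), ub=3.5),
           integrality=np.ones(2),       # both variables integer
           bounds=Bounds(lb=0))
print(res.x, -res.fun)  # [0. 3.] and 6.0 (the LP relaxation would give 7.0)
```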

Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO).

In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is one of calculation of the extrema of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
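
A small symbolic sketch (problem invented for illustration) checking the KKT conditions for minimizing x^2 subject to x >= 1, written with g(x) = 1 - x <= 0:

```python
import sympy as sp

x, mu = sp.symbols("x mu", real=True)
f = x**2
g = 1 - x  # inequality constraint g(x) <= 0

stationarity = sp.Eq(sp.diff(f, x) + mu * sp.diff(g, x), 0)  # 2x - mu = 0
comp_slackness = sp.Eq(mu * g, 0)                            # mu * g(x) = 0

for sol in sp.solve([stationarity, comp_slackness], [x, mu], dict=True):
    # Keep candidates meeting primal (g <= 0) and dual (mu >= 0) feasibility.
    if sol[x] >= 1 and sol[mu] >= 0:
        print(sol)  # {x: 1, mu: 2}: the constrained minimizer
```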

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem. However in general the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.
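
A numerical sketch of zero duality gap for a linear program (data invented); the primal and its dual are both solved with `linprog` and reach the same optimal value:

```python
from scipy.optimize import linprog

# Primal: minimize 3x1 + 2x2 subject to x1 + x2 >= 2, x1 + 3x2 >= 3, x >= 0.
# linprog expects "<=" rows, so the ">=" constraints are negated.
primal = linprog(c=[3, 2], A_ub=[[-1, -1], [-1, -3]], b_ub=[-2, -3])

# Dual: maximize 2y1 + 3y2 subject to A^T y <= c, y >= 0 (negated to minimize).
dual = linprog(c=[-2, -3], A_ub=[[1, 1], [1, 3]], b_ub=[3, 2])

print(primal.fun, -dual.fun)  # both 4.0: the duality gap is zero
```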

In mathematical optimization, constrained optimization is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The objective function is either a cost function or energy function, which is to be minimized, or a reward function or utility function, which is to be maximized. Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which have some variable values that are penalized in the objective function if, and based on the extent that, the conditions on the variables are not satisfied.

In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality. Introducing a slack variable replaces an inequality constraint with an equality constraint and a non-negativity constraint on the slack variable.
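
A small sketch (data invented) showing that adding a slack variable leaves the optimum unchanged while converting the inequality into an equality:

```python
from scipy.optimize import linprog

# Original form: minimize -x1 - x2 subject to x1 + 2*x2 <= 4, x >= 0.
ineq = linprog(c=[-1, -1], A_ub=[[1, 2]], b_ub=[4])

# Slack form: x1 + 2*x2 + s = 4 with s >= 0; s has zero objective cost.
eq = linprog(c=[-1, -1, 0], A_eq=[[1, 2, 1]], b_eq=[4])

print(ineq.fun, eq.fun)  # identical optimal values (-4.0)
print(eq.x[2])           # the slack is 0 here: the constraint is tight
```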


In mathematical optimization, a feasible region, feasible set, search space, or solution space is the set of all possible points of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down.

Benders decomposition is a technique in mathematical programming that allows the solution of very large linear programming problems that have a special block structure. This block structure often occurs in applications such as stochastic programming as the uncertainty is usually represented with scenarios. The technique is named after Jacques F. Benders.

In operations research, the Big M method is a method of solving linear programming problems using the simplex algorithm. The Big M method extends the simplex algorithm to problems that contain "greater-than" constraints. It does so by associating the constraints with large negative constants which would not be part of any optimal solution, if it exists.
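
A minimal sketch of the Big M idea in the minimization form (data invented), solved with SciPy's `linprog` rather than a hand-written simplex; the artificial variable carries a large penalty M so it is driven to zero in any optimal solution (in a maximization it would instead receive a large negative coefficient, as described above):

```python
from scipy.optimize import linprog

# Minimize x1 + 2*x2 subject to x1 + x2 >= 3, x >= 0. The ">=" row gets
# a surplus variable s and an artificial variable a:
#   x1 + x2 - s + a = 3,  with a penalized by M in the objective.
M = 1e6
res = linprog(c=[1, 2, 0, M],       # variables: x1, x2, s, a
              A_eq=[[1, 1, -1, 1]],
              b_eq=[3])
print(res.x, res.fun)  # [3. 0. 0. 0.] and 3.0; the artificial a is 0
```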

Design optimization is an engineering design methodology that uses a mathematical formulation of a design problem to support selection of the optimal design among many alternatives. Design optimization involves the following stages:

  1. Variables: describe the design alternatives.
  2. Objective: a selected function of the variables that ranks the design alternatives.
  3. Constraints: combinations of variables expressed as equalities or inequalities that must be satisfied by any acceptable design alternative.
  4. Feasibility: values for the set of variables that satisfy all constraints and minimize or maximize the objective.

References

  1. Takayama, Akira (1985). Mathematical Economics (2nd ed.). New York: Cambridge University Press. p. 61. ISBN 0-521-31498-4.
  2. Rossi, Francesca; van Beek, Peter; Walsh, Toby (2006). "Chapter 7". Handbook of Constraint Programming (1st ed.). Amsterdam: Elsevier. ISBN 9780080463643. OCLC 162587579.
  3. Rossi, Francesca, ed. (2003). Principles and Practice of Constraint Programming – CP 2003: 9th International Conference, CP 2003, Kinsale, Ireland, September 29 – October 3, 2003, Proceedings. Berlin: Springer. ISBN 9783540451938. OCLC 771185146.
