Lexicographic optimization

Lexicographic optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Often, the different objectives can be ranked in order of importance to the decision-maker, so that objective f_1 is the most important, objective f_2 is the next most important, and so on. Lexicographic optimization presumes that the decision-maker prefers even a very small increase in f_1 to even a very large increase in f_2, f_3, etc. Similarly, the decision-maker prefers even a very small increase in f_2 to even a very large increase in f_3, f_4, etc. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values. Lexicographic optimization is sometimes called preemptive optimization, [1] since a small increase in one objective value preempts a much larger increase in less important objective values.

As an example, consider a firm which puts safety above all. It wants to maximize the safety of its workers and customers; subject to attaining the maximum possible safety, it wants to maximize profits. This firm performs lexicographic optimization, where f_1 denotes safety and f_2 denotes profits.

As another example, [2] in project management, when analyzing PERT networks, one often wants to minimize the mean completion time, and subject to this, minimize the variance of the completion time.

Notation

A lexicographic maximization problem is often written as:

    lex max  f_1(x), f_2(x), ..., f_n(x)
    subject to  x ∈ X

where f_1, ..., f_n are the functions to maximize, ordered from the most to the least important; x is the vector of decision variables; and X is the feasible set - the set of possible values of x. A lexicographic minimization problem can be defined analogously.
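
For small finite feasible sets, the definition can be applied directly: Python tuples compare lexicographically, which matches the order used above. A minimal sketch (the feasible set and both objectives are invented for illustration):

```python
# Lexicographic maximization over a small finite feasible set.
# Python compares tuples lexicographically, which is exactly the
# order used in the definition above.

def lex_argmax(objectives, feasible_set):
    """Return the point whose objective vector (f_1(x), ..., f_n(x))
    is lexicographically largest."""
    return max(feasible_set, key=lambda x: tuple(f(x) for f in objectives))

# Hypothetical data: f1 = safety, f2 = profit (both made up).
feasible = [(1, 9), (2, 1), (2, 5), (0, 100)]
f1 = lambda x: x[0]   # most important objective
f2 = lambda x: x[1]   # less important objective

best = lex_argmax([f1, f2], feasible)
# best is (2, 5): f1 is maximized first, then f2 among the f1-maximizers,
# even though (0, 100) has a far larger f2 value.
```

Note that the point with the largest f2 value is rejected outright, reflecting the preemptive nature of the order.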

Algorithms

There are several algorithms for solving lexicographic optimization problems. [3]

Sequential algorithm for general objectives

A lexicographic optimization problem with n objectives can be solved using a sequence of n single-objective optimization problems, as follows: [1] [3] :Alg.1

    for t = 1, ..., n:
        compute z_t := max f_t(x)   subject to   x ∈ X and f_k(x) = z_k for all k < t

So, in the first iteration, we find the maximum feasible value of the most important objective f_1, and put this maximum value in z_1. In the second iteration, we find the maximum feasible value of the second-most important objective f_2, with the additional constraint that the most important objective f_1 must keep its maximum value of z_1; and so on.

The sequential algorithm is general - it can be applied whenever we have a solver for the single-objective functions.
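
A minimal sketch of the sequential algorithm, assuming a finite feasible set so that the single-objective step is a plain maximum; the data are invented, and a real application would call an actual solver in the inner step:

```python
def sequential_lex_max(objectives, feasible_set):
    """Sequential algorithm: maximize f_1; then maximize f_2 subject to
    f_1 keeping its optimal value z_1; and so on."""
    candidates = list(feasible_set)
    for f in objectives:
        z = max(f(x) for x in candidates)                  # single-objective solve
        candidates = [x for x in candidates if f(x) == z]  # add constraint f(x) = z
    return candidates  # all lexicographically optimal solutions

# Hypothetical data, as in the example above.
feasible = [(1, 9), (2, 1), (2, 5), (0, 100)]
f1 = lambda x: x[0]
f2 = lambda x: x[1]

result = sequential_lex_max([f1, f2], feasible)
# result is [(2, 5)]: the f1-maximizers are (2, 1) and (2, 5),
# and among them (2, 5) maximizes f2.
```

Each pass shrinks the candidate set to the optimizers of the current objective, which is exactly the added equality constraint of the sequential algorithm.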

Lexicographic simplex algorithm for linear objectives

Linear lexicographic optimization [2] is a special case of lexicographic optimization in which the objectives are linear, and the feasible set is described by linear inequalities. It can be written as:

    lex max  c_1 · x, c_2 · x, ..., c_n · x
    subject to  A x ≤ b

where c_1, ..., c_n are vectors representing the linear objectives to maximize, ordered from the most to the least important; x is the vector of decision variables; and the feasible set is determined by the matrix A and the vector b.

Isermann [2] extended the theory of linear programming duality to lexicographic linear programs, and developed a lexicographic simplex algorithm. In contrast to the sequential algorithm, this simplex algorithm considers all objective functions simultaneously.

Weighted average for linear objectives

Sherali and Soyster [1] prove that, for any linear lexicographic optimization problem, there exists a set of weights w_1 > w_2 > ... > w_n > 0 such that the set of lexicographically-optimal solutions is identical to the set of solutions to the following single-objective problem:

    max  w_1 f_1(x) + w_2 f_2(x) + ... + w_n f_n(x)
    subject to  x ∈ X

One way to compute the weights is given by Yager. [4] He assumes that all objective values are real numbers between 0 and 1, and that the smallest difference between any two possible values is some constant d < 1 (so that values whose difference is smaller than d are considered equal). Then, the weight of f_t is set to approximately d^t, so that the weights decrease geometrically with the objective's rank. This guarantees that maximizing the weighted sum is equivalent to lexicographic maximization.
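
The construction can be checked numerically. The sketch below uses invented objective vectors whose values lie on a grid of step d, and weights w_t = (d/2)^t, a slightly more conservative variant of the d-power weights described above (the extra factor of 1/2 makes order preservation easy to guarantee for any d < 1):

```python
# Check that geometrically decreasing weights reproduce the
# lexicographic order on a small invented instance.

d = 0.1
weights = [(d / 2) ** t for t in range(1, 3)]  # w_1, w_2

# Invented objective vectors (f_1(x), f_2(x)), all multiples of d in [0, 1].
vectors = [(0.5, 1.0), (0.6, 0.0), (0.6, 0.3), (0.4, 0.9)]

def weighted(v):
    return sum(w * f for w, f in zip(weights, v))

lex_best = max(vectors)                # lexicographic maximum of the tuples
sum_best = max(vectors, key=weighted)  # maximum of the weighted sum
# both are (0.6, 0.3): the tiny lead of 0.1 in f_1 over (0.5, 1.0)
# outweighs the much larger f_2 value of that vector.
```

With these weights, a gain of at least d in f_t always outweighs the largest possible combined change in all less important objectives, which is the property the equivalence rests on.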

Cococcioni, Pappalardo and Sergeyev [5] show that, given a computer that can make numeric computations with infinitesimals, it is possible to choose weights that are infinitesimals (specifically: w_1 = 1; w_2 is infinitesimal; w_3 is infinitesimal-squared; etc.), and thus reduce linear lexicographic optimization to single-objective linear programming with infinitesimals. They present an adaptation of the simplex algorithm to infinitesimals, along with some running examples.
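
The idea can be sketched with a toy model of infinitesimal arithmetic; this is only an illustration of why infinitesimal weights recover the lexicographic order, not the grossone arithmetic used in the cited work:

```python
# Toy model: the number a + b*h + c*h**2, where h is a positive
# infinitesimal, is stored as the coefficient tuple (a, b, c).
# Addition is componentwise, and comparison is lexicographic,
# because h is smaller than any positive real number.

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(s, u):
    return tuple(s * a for a in u)

# Infinitesimal weights: w_1 = 1, w_2 = h, w_3 = h**2.
w = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def weighted_sum(objective_values):
    total = (0, 0, 0)
    for wi, fi in zip(w, objective_values):
        total = add(total, scale(fi, wi))
    return total  # comparing these tuples compares (f_1, f_2, f_3) lexicographically

a = weighted_sum((4.0, 1.0, 9.0))
b = weighted_sum((4.0, 2.0, 0.0))
# b > a: a small gain in f_2 outweighs any loss in f_3,
# because h**2 is infinitely smaller than h.
```

Under this encoding, the single weighted objective carries exactly the same information as the vector of objectives, which is what makes the reduction to a single-objective program possible.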

Properties

(1) Uniqueness. In general, a lexicographic optimization problem may have more than one optimal solution. However, if x and y are two optimal solutions, then their objective values must be the same, that is, f_t(x) = f_t(y) for all t in 1, ..., n. [3] :Thm.2 Moreover, if the feasible domain is a convex set, and the objective functions are strictly concave, then the problem has at most one optimal solution: if there were two different optimal solutions, their mean would be another feasible solution at which the objective functions attain a higher value - contradicting the optimality of the original solutions.

(2) Partial sums. Given a vector of functions f_1, ..., f_n to optimize, for all t in 1, ..., n, define g_t = f_1 + f_2 + ... + f_t, the sum of all functions from the most important to the t-th most important one. Then, the original lexicographic optimization problem is equivalent to the following: [3] :Thm.4

    lex max  g_1(x), g_2(x), ..., g_n(x)
    subject to  x ∈ X

In some cases, the second problem may be easier to solve.
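
The equivalence is easy to verify on a small instance: among the f_1-maximizers, the partial sum g_2 = f_1 + f_2 differs from f_2 only by a constant, so it induces the same ranking. A sketch with invented data:

```python
# Compare lexicographic maximization of (f_1, f_2) with lexicographic
# maximization of the partial sums (g_1, g_2) = (f_1, f_1 + f_2).

def lex_argmax(objectives, feasible_set):
    return max(feasible_set, key=lambda x: tuple(f(x) for f in objectives))

feasible = [(3, 1), (5, 2), (5, 7), (2, 9)]  # invented points
f1 = lambda x: x[0]
f2 = lambda x: x[1]
g1 = f1                       # partial sum g_1 = f_1
g2 = lambda x: f1(x) + f2(x)  # partial sum g_2 = f_1 + f_2

original = lex_argmax([f1, f2], feasible)
partial  = lex_argmax([g1, g2], feasible)
# both return (5, 7): the two formulations pick the same solution.
```

Here the partial-sum formulation selects the same point, as the theorem asserts.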

See also

Lexicographic max-min optimization

References

  1. Sherali, H. D.; Soyster, A. L. (1983-02-01). "Preemptive and nonpreemptive multi-objective programming: Relationship and counterexamples". Journal of Optimization Theory and Applications. 39 (2): 173–186. doi:10.1007/BF00934527. ISSN 1573-2878.
  2. Isermann, H. (1982-12-01). "Linear lexicographic optimization". Operations-Research-Spektrum. 4 (4): 223–228. doi:10.1007/BF01782758. ISSN 1436-6304.
  3. Ogryczak, W.; Pióro, M.; Tomaszewski, A. (2005). "Telecommunications network design and max-min optimization problem". Journal of Telecommunications and Information Technology. nr 3: 43–56. ISSN 1509-4553.
  4. Yager, Ronald R. (1997-10-01). "On the analytic representation of the Leximin ordering and its application to flexible constraint propagation". European Journal of Operational Research. 102 (1): 176–192. doi:10.1016/S0377-2217(96)00217-2. ISSN 0377-2217.
  5. Cococcioni, Marco; Pappalardo, Massimo; Sergeyev, Yaroslav D. (2018-02-01). "Lexicographic multi-objective linear programming using grossone methodology: Theory and algorithm". Applied Mathematics and Computation. 318: 298–311. doi:10.1016/j.amc.2017.05.058. hdl:11568/877746. ISSN 0096-3003.
  6. Kohlberg, Elon (1972-07-01). "The Nucleolus as a Solution of a Minimization Problem". SIAM Journal on Applied Mathematics. 23 (1): 34–39. doi:10.1137/0123004. ISSN 0036-1399.