Lexicographic max-min optimization

Lexicographic max-min optimization (also called lexmaxmin, leximin, leximax, or lexicographic max-ordering optimization) is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Lexmaxmin optimization presumes that the decision-maker would like the smallest objective value to be as high as possible; subject to this, the second-smallest objective should be as high as possible; and so on. In other words, the decision-maker ranks the possible solutions according to a leximin order of their objective function values.
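
As a concrete illustration of this ranking, the following sketch in Python (the helper name leximin_key is ours, for illustration only) compares candidate solutions by sorting their objective-value vectors in ascending order and comparing the sorted tuples lexicographically:

    # Compare solutions by the leximin order: sort each value vector in
    # ascending order, then compare the sorted tuples lexicographically.
    def leximin_key(values):
        return tuple(sorted(values))

    # (1,2,3) and (2,1,3) are leximin-equivalent; both beat (1,1,4).
    assert leximin_key([1, 2, 3]) == leximin_key([2, 1, 3])
    assert leximin_key([1, 1, 4]) < leximin_key([1, 2, 3])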

As an example, consider an egalitarian social planner, who wants to decide on a policy such that the utility of the poorest person will be as high as possible; subject to this, the planner wants to maximize the utility of the second-poorest person; and so on. This planner solves a lexmaxmin problem, where the objective function number i is the utility of agent number i.

Algorithms for lexmaxmin optimization (though not under this name) were developed for computing the nucleolus of a cooperative game. [1] [2] An early application of lexmaxmin was presented by Melvin Dresher [3] in his book on game theory, in the context of taking maximum advantage of the opponent's mistakes in a zero-sum game. Behringer [4] cites many other examples in game theory as well as decision theory.

Notation

A lexmaxmin problem may be written as:

$$\operatorname{lex}\max\min\ \left(f_1(x), f_2(x), \ldots, f_n(x)\right) \quad \text{subject to} \quad x \in X$$

where $f_1, \ldots, f_n$ are the functions to maximize; $x$ is the vector of decision variables; and $X$ is the feasible set - the set of possible values of $x$.

Comparison with lexicographic optimization

Lexmaxmin optimization is closely related to lexicographic optimization . However, in lexicographic optimization, there is a fixed order on the functions, such that $f_1$ is the most important, $f_2$ is the next-most important, and so on. In contrast, in lexmaxmin, all the objectives are equally important. To present lexmaxmin as a special case of lexicographic optimization, denote by $f_{(1)}(x) := \min_j f_j(x)$ the smallest objective value in $x$. Similarly, denote by $f_{(2)}(x)$ the second-smallest objective value in $x$, and so on, so that $f_{(1)}(x) \le f_{(2)}(x) \le \cdots \le f_{(n)}(x)$. Then, the lexmaxmin optimization problem can be written as the following lexicographic maximization problem:

$$\operatorname{lex}\max\ \left(f_{(1)}(x), f_{(2)}(x), \ldots, f_{(n)}(x)\right) \quad \text{subject to} \quad x \in X$$

Uniqueness

In general, a lexmaxmin optimization problem may have more than one optimal solution. If $x^1$ and $x^2$ are two optimal solutions, then their ordered value vectors must be the same, that is, $f_{(i)}(x^1) = f_{(i)}(x^2)$ for all $i$: [5] :Thm.2 the smallest value is the same, the second-smallest value is the same, and so on. However, the unsorted value vectors may be different. For example, (1,2,3) and (2,1,3) may both be optimal solutions to the same problem.

However, if the feasible domain is a convex set, and the objectives are concave functions, then the value vectors in all optimal solutions must be the same, since if there were two different optimal solutions, their mean would be another feasible solution in which the objective functions attain a higher value - contradicting the optimality of the original solutions. [5] :Thm.6

Algorithms for continuous variables

Saturation Algorithm for convex problems

The Saturation Algorithm works when the feasible set is a convex set, and the objectives are concave functions. Variants of this algorithm appear in many papers. The earliest appearance is attributed to Alexander Kopelowitz [1] by Elkind and Pasechnik. [6] Other variants appear in [7] :20–27 [8] [9] [5] :Alg.2 [10] [11] [12] [13] [14] [15]

The algorithm keeps a set of objectives that are considered saturated (also called: blocking). This means that their value cannot be improved without harming lower-valued objectives. The other objectives are called free. Initially, all objectives are free. In general, the algorithm works as follows (with iteration counter $t$ starting at 1):

  1. Solve the following problem (P1): maximize $z_t$, subject to $x \in X$, $f_k(x) \ge z_t$ for every free objective $k$, and $f_k(x) \ge z_k$ for every saturated objective $k$ (where $z_k$ is the value at which objective $k$ became saturated). Let $z_t$ denote the optimal value.
  2. Find one or more free objectives whose value cannot exceed $z_t$ in any optimal solution of (P1); mark each of them as saturated with value $z_k := z_t$.
  3. If some objective is still free, increment $t$ and return to step 1; otherwise stop and return the last solution found.

It remains to explain how we can find new saturated objectives in each iteration.

Method 1: interior optimizers. [1] [6] An interior optimizer of a linear program is an optimal solution in which the smallest possible number of constraints are tight. In other words, it is a solution in the interior of the optimal face. An interior optimizer of (P1) can be found by solving (P1) using the ellipsoid method or interior point methods.

The set of tight constraints in an interior optimizer is unique. Proof: Suppose by contradiction that there are two interior optimizers, x1 and x2, with different sets of tight constraints. Since the feasible set is convex, the average solution x3 = (x1+x2)/2 is also an optimizer. Every constraint that is not tight in at least one of x1 and x2 is not tight in x3. Therefore, the number of tight constraints in x3 is strictly smaller than in x1 and in x2, contradicting the definition of an interior optimizer.

Therefore, the set of tight constraints in the interior optimizer corresponds to the set of free objectives that become saturated. Using this method, the leximin solution can be computed in at most n iterations.

Method 2: iterating over all objectives. [7] It is possible to find at least one saturated objective using the following algorithm.

  1. For every free objective $k$, solve the following problem (P2): maximize $f_k(x)$, subject to $x \in X$, $f_j(x) \ge z_t$ for every free objective $j$, and $f_j(x) \ge z_j$ for every saturated objective $j$.
  2. If the optimal value of (P2) equals $z_t$, then objective $k$ is declared saturated with value $z_t$; otherwise its value can be increased beyond $z_t$, and it remains free.

At each step, at least one free objective must become saturated. This is because, if no objective were saturated, then the mean of the optimal solutions to (P2) would be a feasible solution in which all objective values are larger than $z_t$ - contradicting the optimality of the solution to (P1). For example, suppose $z_t = 1$, objective 1 is not saturated because there is a solution with value-vector (3,1), and objective 2 is not saturated because there exists a solution with value-vector (1,3). Then the mean of these two solutions has value-vector at least (2,2), so $z_t$ should have been at least 2.

Therefore, after at most n iterations, all objectives are saturated and a leximin-optimal solution is found. In each iteration t, the algorithm solves at most n-t+1 linear programs; therefore, the run-time of the algorithm is at most $n(n+1)/2$ times the run-time of the LP solver.
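
To make the loop concrete, here is a minimal sketch in Python of Method 2 for the special case of linear objectives $f_j(x) = F_j \cdot x$ over the polyhedron $\{x \ge 0 : Ax \le b\}$, using SciPy's LP solver. The names (leximin_saturation, F, A, b) are illustrative, not from the cited papers; solver status checks are omitted, and a production version would also fix saturated values with a small tolerance.

    # Sketch of the Saturation Algorithm (Method 2) for linear objectives
    # f_j(x) = F[j] @ x over the polyhedron {x >= 0 : A @ x <= b}.
    import numpy as np
    from scipy.optimize import linprog

    def leximin_saturation(F, A, b, tol=1e-7):
        n_obj, n_var = F.shape
        free = set(range(n_obj))
        value = {}  # saturated objective index -> its fixed value

        while free:
            # (P1): maximize z s.t. f_j(x) >= z for free j, f_j(x) >= value[j] otherwise.
            # LP variables are (x, z); linprog minimizes, so the objective is -z.
            c = np.r_[np.zeros(n_var), -1.0]
            A1 = [np.hstack([A, np.zeros((A.shape[0], 1))])]
            b1 = [b]
            for j in range(n_obj):
                zcoef = 1.0 if j in free else 0.0    # z - f_j(x) <= 0 for free j
                A1.append(np.r_[-F[j], zcoef][None, :])
                b1.append([0.0 if j in free else -value[j]])
            res = linprog(c, A_ub=np.vstack(A1), b_ub=np.concatenate(b1),
                          bounds=[(0, None)] * n_var + [(None, None)])
            z = -res.fun

            # (P2) for each free objective k: can f_k(x) exceed z while every
            # free objective stays >= z and saturated ones keep their values?
            A2 = np.vstack([A] + [-F[j][None, :] for j in range(n_obj)])
            b2 = np.r_[b, [-(z if j in free else value[j]) for j in range(n_obj)]]
            for k in list(free):
                res2 = linprog(-F[k], A_ub=A2, b_ub=b2, bounds=[(0, None)] * n_var)
                if -res2.fun <= z + tol:             # k cannot exceed z: saturated
                    free.remove(k)
                    value[k] = z
        return value

For example, with objectives $f_1(x) = x_1$ and $f_2(x) = x_2$ under the constraints $x_1 + x_2 \le 5$ and $x_2 \le 2$, the sketch first saturates objective 2 at value 2, and then objective 1 at value 3:

    F = np.array([[1.0, 0.0], [0.0, 1.0]])
    A = np.array([[1.0, 1.0], [0.0, 1.0]])
    b = np.array([5.0, 2.0])
    print(leximin_saturation(F, A, b))  # {1: 2.0, 0: 3.0}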

In some cases, the run-time of the saturation algorithm can be improved. Instead of finding all saturated objectives, we can break out of the inner loop after finding one saturated objective; the algorithm still stops after at most n iterations, and may reduce the number of linear programs (P2) we need to solve. [5] :Alg.3

Furthermore, instead of looping over all objectives to find a saturated one, the algorithm can find a saturated objective using the dual problem of (P1). In some cases, the dual variables are given as a byproduct of solving (P1), for example, when the objectives and constraints are linear and the solver is the simplex algorithm. In this case, (P2) is not needed at all, and the run-time of the algorithm is at most $n$ times the run-time of the solver of (P1). [5] :Alg.4

All these variants work only for convex problems. For non-convex problems, there might be no saturated objective, so the algorithm might not stop.

Ordered Outcomes Algorithm for general problems

The Ordered Outcomes Algorithm works in arbitrary domains (not necessarily convex). It was developed by Ogryczak and Śliwiński [16] and also presented in the context of telecommunication networks by Ogryczak, Pióro and Tomaszewski, [5] and in the context of location problems by Ogryczak. [17] The algorithm reduces lexmaxmin optimization to the easier problem of lexicographic optimization. Lexicographic optimization can be done with a simple sequential algorithm, which solves at most n linear programs. The reduction starts with the following presentation of lexmaxmin:

$$\text{(L1)} \qquad \operatorname{lex}\max\ \left(f_{(1)}(x), f_{(2)}(x), \ldots, f_{(n)}(x)\right) \quad \text{subject to} \quad x \in X$$

This problem cannot be solved as-is, because $f_{(t)}(x)$ (the $t$-th smallest value in $f_1(x), \ldots, f_n(x)$) is not a simple function of $x$. The problem (L1) is equivalent to the following problem, where $S_t(x)$ denotes the sum of the $t$ smallest values in $f_1(x), \ldots, f_n(x)$:

$$\text{(L2)} \qquad \operatorname{lex}\max\ \left(S_1(x), S_2(x), \ldots, S_n(x)\right) \quad \text{subject to} \quad x \in X$$

This problem can be solved iteratively using lexicographic optimization, but the number of constraints in each iteration t is $\binom{n}{t}$ - the number of subsets of size t. This grows exponentially with n. It is possible to reduce the problem to a different problem, in which the number of constraints is polynomial in n.
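
The equivalence of (L1) and (L2) holds because comparing two sorted vectors lexicographically yields the same order as comparing their vectors of prefix sums. A small randomized check (illustrative only):

    # Check: leximin order on sorted vectors == lexicographic order on prefix sums.
    import itertools, random

    def sorted_vec(v):
        return tuple(sorted(v))

    def prefix_sums(v):
        return tuple(itertools.accumulate(sorted(v)))

    for _ in range(10000):
        a = [random.randint(0, 5) for _ in range(4)]
        b = [random.randint(0, 5) for _ in range(4)]
        assert (sorted_vec(a) < sorted_vec(b)) == (prefix_sums(a) < prefix_sums(b))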

For every $t$, the sum $S_t(x)$ can be computed as the optimal value of the following problem, with $n+1$ auxiliary variables (an unbounded variable $r_t$, and non-negative variables $d_{t,j}$ for all $j$ in $1, \ldots, n$), and $n$ additional constraints: [5] :Thm.8 [18]

$$\text{(L3)} \qquad S_t(x) = \max_{r_t,\, d_{t,1}, \ldots, d_{t,n}} \left( t\cdot r_t - \sum_{j=1}^{n} d_{t,j} \right) \quad \text{subject to} \quad d_{t,j} \ge r_t - f_j(x) \ \text{and} \ d_{t,j} \ge 0 \ \text{for all } j$$

Proof. Let us compute the values of the auxiliary variables in the optimal solution. For any fixed $r_t$, it is optimal to set $d_{t,j} = \max(0, r_t - f_j(x))$, so the objective equals $t\cdot r_t - \sum_{j=1}^n \max(0, r_t - f_j(x))$. This is a piecewise-linear concave function of $r_t$, whose slope is non-negative as long as fewer than $t$ of the values $f_j(x)$ lie below $r_t$, and non-positive afterwards; hence it is maximized at $r_t = f_{(t)}(x)$. At this point the objective equals $t\cdot f_{(t)}(x) - \sum_{j: f_j(x) < f_{(t)}(x)} \left(f_{(t)}(x) - f_j(x)\right)$, which is exactly the sum of the $t$ smallest values, $S_t(x)$.
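
Since the inner maximum is piecewise-linear in $r_t$ with breakpoints at the values $f_j(x)$, it suffices to try those breakpoints, which makes (L3) easy to check numerically (an illustrative test, not from the cited papers):

    # Verify (L3): max over r of t*r - sum_j max(0, r - f_j) equals the
    # sum of the t smallest values; breakpoints at the f_j values suffice.
    f = [5.0, 1.0, 3.0, 2.0]
    for t in range(1, len(f) + 1):
        best = max(t * r - sum(max(0.0, r - fj) for fj in f) for r in f)
        assert best == sum(sorted(f)[:t])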

Therefore, the problem (L2) is equivalent to the following lexicographic maximization problem: [5] :(32)

$$\text{(L4)} \qquad \operatorname{lex}\max\ \left( t\cdot r_t - \sum_{j=1}^{n} d_{t,j} \right)_{t=1}^{n} \quad \text{subject to} \quad x \in X;\ d_{t,j} \ge r_t - f_j(x)\ \text{and}\ d_{t,j} \ge 0\ \text{for all } t, j$$

This problem (L4) has $n^2 + n$ additional variables ($r_t$ for every $t$, and $d_{t,j}$ for every pair $t, j$), and $n^2$ additional constraints. It can be solved by any algorithm for solving lexicographic maximization, for example: the sequential algorithm using n linear programs, or the lexicographic simplex algorithm (if the objectives and constraints are linear).
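
Here is a minimal sketch of the resulting sequential procedure for linear objectives $f_j(x) = F_j \cdot x$ over $\{x \ge 0 : Ax \le b\}$, again with SciPy and with illustrative names; it maximizes $t\cdot r_t - \sum_j d_{t,j}$ for $t = 1, \ldots, n$ in turn, freezing each optimal value as a constraint before moving on (status checks omitted; a production version would freeze with a small tolerance):

    # Sketch of the Ordered Outcomes Algorithm via n sequential LPs.
    import numpy as np
    from scipy.optimize import linprog

    def ordered_outcomes(F, A, b):
        n, n_var = F.shape                    # n objectives
        N = n_var + n + n * n                 # LP variables: x, r_1..r_n, d_{t,j}
        ri = lambda t: n_var + t
        di = lambda t, j: n_var + n + t * n + j

        rows = [np.hstack([A, np.zeros((A.shape[0], n + n * n))])]
        rhs = [b]
        for t in range(n):
            for j in range(n):
                row = np.zeros(N)             # r_t - d_{t,j} - F[j] @ x <= 0
                row[ri(t)] = 1.0
                row[di(t, j)] = -1.0
                row[:n_var] = -F[j]
                rows.append(row[None, :])
                rhs.append([0.0])

        bounds = ([(0, None)] * n_var + [(None, None)] * n
                  + [(0, None)] * (n * n))
        sol = None
        for t in range(n):
            c = np.zeros(N)                   # minimize -T_t, T_t = (t+1)*r_t - sum_j d_{t,j}
            c[ri(t)] = -(t + 1)
            for j in range(n):
                c[di(t, j)] = 1.0
            res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                          bounds=bounds)
            rows.append(c[None, :])           # freeze: T_t >= its optimum
            rhs.append([res.fun])
            sol = res.x[:n_var]
        return sol

On the same example as above ($f_1(x) = x_1$, $f_2(x) = x_2$, $x_1 + x_2 \le 5$, $x_2 \le 2$), the returned solution has sorted value vector approximately (2, 3), matching the Saturation Algorithm.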

Approximate leximin solutions

One advantage of the Ordered Outcomes Algorithm is that it can be used even when the single-problem solver is inaccurate, and returns only approximate solutions. Specifically, if the single-problem solver approximates the optimal single-problem solution with multiplicative factor α ∈ (0,1] and additive factor ϵ ≥ 0, then the algorithm returns a solution that approximates the leximin-optimal solution with multiplicative factor α²/(1 − α + α²) and additive factor ϵ/(1 − α + α²). [19]
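
For instance, with α = 0.9 and ϵ = 0.01, evaluating the stated guarantee directly gives:

    # Evaluate the approximation guarantee for alpha = 0.9, eps = 0.01.
    alpha, eps = 0.9, 0.01
    denom = 1 - alpha + alpha ** 2            # = 0.91
    print(alpha ** 2 / denom, eps / denom)    # ~0.8901 and ~0.0110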

Ordered Values Algorithm for general problems

The Ordered Values Algorithm works in any domain in which the set of possible values of the objective functions is finite. It was developed by Ogryczak and Śliwiński. [16] Let $V = \{v_1, \ldots, v_r\}$ be the set of all values that can be returned by the functions $f_1, \ldots, f_n$, such that $v_1 < v_2 < \cdots < v_r$. Given a solution $x$, and an integer $k$ in $\{1, \ldots, r\}$, define $h_k(x)$ as the number of occurrences of the value $v_k$ in the vector $(f_1(x), \ldots, f_n(x))$. Then, the lexmaxmin problem can be stated as the following lexicographic minimization problem:

$$\operatorname{lex}\min\ \left(h_1(x), h_2(x), \ldots, h_{r-1}(x)\right) \quad \text{subject to} \quad x \in X$$

since we want as few functions as possible to attain the smallest value; subject to this, as few functions as possible to attain the next-smallest value; and so on (the count $h_r(x)$ of the largest value is determined by the other counts, since all counts sum to $n$). Ogryczak and Śliwiński [16] show how to transform this non-linear program into a linear program with auxiliary variables. In their computational experiments, the Ordered Values algorithm runs much faster than the Saturation algorithm and the Ordered Outcomes algorithm.
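
On small discrete examples one can check directly that lexicographically minimizing the count vector $(h_1, \ldots, h_{r-1})$ ranks solutions exactly like leximin maximization (an illustrative test):

    # Check: lexicographically minimizing value counts == leximin maximization.
    import random

    values = [1, 2, 3]                        # v_1 < v_2 < v_3

    def counts_key(vec):                      # (h_1, ..., h_{r-1}), to be minimized
        return tuple(vec.count(v) for v in values[:-1])

    def leximin_key(vec):                     # sorted vector, to be maximized
        return tuple(sorted(vec))

    for _ in range(10000):
        a = [random.choice(values) for _ in range(4)]
        b = [random.choice(values) for _ in range(4)]
        assert (counts_key(a) < counts_key(b)) == (leximin_key(a) > leximin_key(b))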

Behringer's algorithm for quasiconcave functions

Behringer [4] presented a sequential algorithm for lexmaxmin optimization when the objectives are quasiconcave functions, and the feasible set X is a convex set.

Weighted average

Yager [20] presented a way to represent the leximin ordering analytically using the Ordered weighted averaging aggregation operator. He assumes that all objective values are real numbers between 0 and 1, and the smallest difference between any two possible values is some constant d < 1 (so that values with difference smaller than d are considered equal). The weight given to the $i$-th smallest objective value decays geometrically with $i$, fast enough that a difference of $d$ in one value outweighs the combined weight of all larger values. This guarantees that maximizing the weighted sum is equivalent to lexmaxmin.
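
As one concrete choice (not necessarily Yager's exact construction): taking $w_i = (d/2)^{i-1}$ gives $w_i \cdot d > \sum_{j>i} w_j$ for every $d < 1$, which suffices when all values are multiples of $d$ in [0,1]. A small randomized check:

    # One concrete weight choice: w_i = (d/2)**(i-1) for the i-th smallest value.
    import random

    d, n = 0.25, 4
    grid = [k * d for k in range(5)]          # possible values: 0, 0.25, ..., 1
    w = [(d / 2) ** i for i in range(n)]

    def owa(vec):                             # weighted sum over ascending values
        return sum(wi * v for wi, v in zip(w, sorted(vec)))

    def leximin_key(vec):
        return tuple(sorted(vec))

    for _ in range(10000):
        a = [random.choice(grid) for _ in range(n)]
        b = [random.choice(grid) for _ in range(n)]
        assert (owa(a) < owa(b)) == (leximin_key(a) < leximin_key(b))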

Algorithms for discrete variables

If the set of vectors is discrete, and the domain is sufficiently small, then it is possible to use one of the functions representing the leximin order, and maximize it subject to the constraints, using a solver for constraint-satisfaction problems.

But if the domain is large, the above approach becomes infeasible due to the large number of possible values that this function can have, which grows combinatorially with m, the number of different values in the domain, and n, the number of variables. [21]
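
For a domain small enough to enumerate, the approach of the previous paragraph amounts to maximizing the sorted-vector key directly. The toy sketch below (with made-up objectives, for illustration only) is exactly what becomes infeasible when the domain grows:

    # Brute-force leximin over a tiny discrete domain: pick the assignment
    # whose ascending value vector is lexicographically largest.
    from itertools import product

    domain = range(4)                         # 3 variables, each in {0, 1, 2, 3}
    def objectives(x):                        # toy objective functions
        return (x[0] + x[1], x[1] + x[2], 3 - x[2])

    best = max(product(domain, repeat=3),
               key=lambda x: tuple(sorted(objectives(x))))
    print(best, sorted(objectives(best)))     # (3, 3, 0) [3, 3, 6]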

Bouveret and Lemaître present five different algorithms for finding leximin-optimal solutions to discrete constraint-satisfaction problems: [21]

  1. Branch and bound based on the LEXIMIN constraint - a constraint on two vectors x and y, saying that y is leximin-larger than x.
  2. Branching on saturated subsets - finding subsets of variables that must be fixed at the minimum value, and finding the maximum-minimum value for the other variables.
  3. Using the SORT constraint - a constraint on two vectors x and y, saying that y contains the same elements as x sorted in ascending order. This constraint can be computed efficiently by several algorithms. [22] [23]
  4. Using the ATLEAST constraint.
  5. Using max-min transformations.

In their experiments, the best-performing approach was 4 (ATLEAST), followed by 3 (SORT), followed by 1 (LEXIMIN).

Dall'Aglio [24] presents an algorithm for computing a leximin-optimal resource allocation.


References

  1. Kopelowitz, Alexander. Computation of the Kernels of Simple Games and the Nucleolus of N-Person Games (Report).
  2. Kohlberg, Elon (1972-07-01). "The Nucleolus as a Solution of a Minimization Problem". SIAM Journal on Applied Mathematics. 23 (1): 34–39. doi:10.1137/0123004. ISSN 0036-1399.
  3. Dresher, Melvin (1961). Games of Strategy: Theory and Applications.
  4. Behringer, F. A. (1977-06-01). "Lexicographic quasiconcave multiobjective programming". Zeitschrift für Operations Research. 21 (3): 103–116. doi:10.1007/BF01919766. ISSN 1432-5217. S2CID 27807594.
  5. Ogryczak, W.; Pióro, M.; Tomaszewski, A. (2005). "Telecommunications network design and max-min optimization problem". Journal of Telecommunications and Information Technology. 3: 43–56. ISSN 1509-4553.
  6. Elkind, Edith; Pasechnik, Dmitrii (2009-01-04). Computing the nucleolus of weighted voting games. Society for Industrial and Applied Mathematics. pp. 327–335. doi:10.1137/1.9781611973068.37. hdl:10356/93815. ISBN 978-0-89871-680-1.
  7. Willson, Stephen J. (1995). "Fair Division using Linear Programming" (PDF). Iowa State University (unpublished manuscript).
  8. Potters, Jos A. M.; Tijs, Stef H. (1992-02-01). "The Nucleolus of a Matrix Game and Other Nucleoli". Mathematics of Operations Research. 17 (1): 164–174. doi:10.1287/moor.17.1.164. hdl:2066/223732. ISSN 0364-765X. S2CID 40275405.
  9. Luss, Hanan (1999-06-01). "On Equitable Resource Allocation Problems: A Lexicographic Minimax Approach". Operations Research. 47 (3): 361–378. doi:10.1287/opre.47.3.361. ISSN 0030-364X.
  10. Nace, Dritan; Pioro, Michal (2008). "Max-min fairness and its applications to routing and load-balancing in communication networks: a tutorial". IEEE Communications Surveys & Tutorials. 10 (4): 5–17. doi:10.1109/SURV.2008.080403. ISSN 1553-877X. S2CID 6595144.
  11. Airiau, Stéphane; Aziz, Haris; Caragiannis, Ioannis; Kruger, Justin; Lang, Jérôme; Peters, Dominik (2019-08-10). "Portioning using ordinal preferences: fairness and efficiency". Proceedings of the 28th International Joint Conference on Artificial Intelligence. IJCAI'19. Macao, China: AAAI Press: 11–17. ISBN 978-0-9992411-4-1.
  12. Bei, Xiaohui; Lu, Xinhang; Suksompong, Warut (2022-06-28). "Truthful Cake Sharing". Proceedings of the AAAI Conference on Artificial Intelligence. 36 (5): 4809–4817. doi:10.1609/aaai.v36i5.20408. ISSN 2374-3468. S2CID 245117491.
  13. Ogryczak, Włodzimierz (1997-08-01). "On the lexicographic minimax approach to location problems". European Journal of Operational Research. 100 (3): 566–585. doi:10.1016/S0377-2217(96)00154-3. ISSN 0377-2217.
  14. Dubois, Didier; Fortemps, Philippe (1999-10-01). "Computing improved optimal solutions to max–min flexible constraint satisfaction problems". European Journal of Operational Research. 118 (1): 95–126. doi:10.1016/S0377-2217(98)00307-5. ISSN 0377-2217.
  15. Ehrgott, Matthias (2005-05-18). Multicriteria Optimization. Springer Science & Business Media. ISBN 978-3-540-21398-7.
  16. Ogryczak, Włodzimierz; Śliwiński, Tomasz (2006). "On Direct Methods for Lexicographic Min-Max Optimization". In Gavrilova, Marina; Gervasi, Osvaldo; Kumar, Vipin; Tan, C. J. Kenneth; Taniar, David; Laganá, Antonio; Mun, Youngsong; Choo, Hyunseung (eds.). Computational Science and Its Applications - ICCSA 2006. Lecture Notes in Computer Science. Vol. 3982. Berlin, Heidelberg: Springer. pp. 802–811. doi:10.1007/11751595_85. ISBN 978-3-540-34076-8.
  17. Ogryczak, Włodzimierz (1997-08-01). "On the lexicographic minimax approach to location problems". European Journal of Operational Research. 100 (3): 566–585. doi:10.1016/S0377-2217(96)00154-3. ISSN 0377-2217.
  18. Ogryczak, Wlodzimierz; Tamir, Arie (2003-02-14). "Minimizing the sum of the k largest functions in linear time". Information Processing Letters. 85 (3): 117–122. doi:10.1016/S0020-0190(02)00370-8. ISSN 0020-0190.
  19. Hartman, Eden; Hassidim, Avinatan; Aumann, Yonatan; Segal-Halevi, Erel (2023). "Leximin Approximation: From Single-Objective to Multi-Objective". ECAI 2023. Frontiers in Artificial Intelligence and Applications. IOS Press. pp. 996–1003. arXiv:2303.12506. doi:10.3233/FAIA230371. ISBN 9781643684369.
  20. Yager, Ronald R. (1997-10-01). "On the analytic representation of the Leximin ordering and its application to flexible constraint propagation". European Journal of Operational Research. 102 (1): 176–192. doi:10.1016/S0377-2217(96)00217-2. ISSN 0377-2217.
  21. Bouveret, Sylvain; Lemaître, Michel (2009-02-01). "Computing leximin-optimal solutions in constraint networks". Artificial Intelligence. 173 (2): 343–364. doi:10.1016/j.artint.2008.10.010. ISSN 0004-3702.
  22. Guernalec, Noëlle Bleuzen; Colmerauer, Alain (1997). "Narrowing a 2n-block of sortings in O(n log n)". In Smolka, Gert (ed.). Principles and Practice of Constraint Programming - CP97. Lecture Notes in Computer Science. Vol. 1330. Berlin, Heidelberg: Springer. pp. 2–16. doi:10.1007/BFb0017426. ISBN 978-3-540-69642-1.
  23. Mehlhorn, Kurt; Thiel, Sven (2000). "Faster Algorithms for Bound-Consistency of the Sortedness and the Alldifferent Constraint". In Dechter, Rina (ed.). Principles and Practice of Constraint Programming - CP 2000. Lecture Notes in Computer Science. Vol. 1894. Berlin, Heidelberg: Springer. pp. 306–319. doi:10.1007/3-540-45349-0_23. ISBN 978-3-540-45349-9.
  24. Dall'Aglio, Marco (2001-05-01). "The Dubins–Spanier optimization problem in fair division theory". Journal of Computational and Applied Mathematics. 130 (1–2): 17–40. Bibcode:2001JCoAM.130...17D. doi:10.1016/S0377-0427(99)00393-3. ISSN 0377-0427.