Multi-objective linear programming

Multi-objective linear programming is a subarea of mathematical optimization. A multiple objective linear program (MOLP) is a linear program with more than one objective function. An MOLP is a special case of a vector linear program. Multi-objective linear programming is also a subarea of multi-objective optimization.

Problem formulation

In mathematical terms, a MOLP can be written as:

maximize Px
subject to a ≤ Bx ≤ b, l ≤ x ≤ u,

where P is a q × n matrix, B is an m × n matrix, a is an m-dimensional vector with components in ℝ ∪ {−∞}, b is an m-dimensional vector with components in ℝ ∪ {+∞}, l is an n-dimensional vector with components in ℝ ∪ {−∞}, and u is an n-dimensional vector with components in ℝ ∪ {+∞}.
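A minimal sketch of such an instance, assuming NumPy and SciPy are available: a bi-objective problem with P the 2 × 2 identity and a single constraint x1 + x2 ≤ 1, x ≥ 0. Solving each objective row as its own LP shows the objectives conflict (SciPy's linprog minimizes, so each row is negated).

```python
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 0.0],   # objective 1: maximize x1
              [0.0, 1.0]])  # objective 2: maximize x2
B = np.array([[1.0, 1.0]])  # single constraint x1 + x2 <= 1
b = np.array([1.0])

# linprog minimizes, so negate each objective row to maximize it.
optima = [linprog(-P[i], A_ub=B, b_ub=b, bounds=[(0, None)] * 2).x
          for i in range(P.shape[0])]
print(optima)  # maximizing x1 gives (1, 0); maximizing x2 gives (0, 1)
```

No single feasible point attains both optima, which is what makes the problem genuinely multi-objective.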

Solution concepts

A feasible point x is called efficient if there is no feasible point y with Px ≤ Py and Px ≠ Py, where ≤ denotes the component-wise ordering.
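Efficiency of a given feasible point can be tested with one auxiliary LP (a standard dominance test; the helper name and the SciPy dependency are assumptions, not from this article): maximize the total slack s in Py − s = Px over feasible y with s ≥ 0. The point x is efficient exactly when the optimal total slack is zero, i.e. no feasible y dominates it.

```python
import numpy as np
from scipy.optimize import linprog

def is_efficient(P, B, b, x_star, tol=1e-9):
    """Dominance-test LP: max 1^T s  s.t.  Py - s = P x_star, By <= b, y, s >= 0."""
    q, n = P.shape
    # decision vector z = (y, s); linprog minimizes, so minimize -sum(s)
    c = np.concatenate([np.zeros(n), -np.ones(q)])
    A_eq = np.hstack([P, -np.eye(q)])                 # Py - s = P x_star
    b_eq = P @ x_star
    A_ub = np.hstack([B, np.zeros((B.shape[0], q))])  # By <= b (s unconstrained above 0)
    res = linprog(c, A_ub=A_ub, b_ub=b, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + q))
    return res.status == 0 and -res.fun <= tol        # efficient iff max slack is 0

P = np.eye(2)               # maximize (x1, x2)
B = np.array([[1.0, 1.0]])  # feasible set: x1 + x2 <= 1, x >= 0
b = np.array([1.0])
print(is_efficient(P, B, b, np.array([0.5, 0.5])))  # True: on the frontier
print(is_efficient(P, B, b, np.array([0.2, 0.2])))  # False: dominated
```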

Often in the literature, the aim in multiple objective linear programming is to compute the set of all efficient extreme points. [1] There are also algorithms to determine the set of all maximal efficient faces. [2] Based on these goals, the set of all efficient (extreme) points can be seen as the solution of MOLP. This type of solution concept is called decision set based. [3] It does not correspond to an optimal solution of a linear program but rather parallels the set of all optimal solutions of a linear program (which is more difficult to determine).

Efficient points are frequently called efficient solutions. This term is misleading because a single efficient point can already be obtained by solving one linear program, such as the linear program with the same feasible set and the objective function being the sum of the objectives of MOLP. [4]
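The remark above can be sketched directly (SciPy assumed): summing the rows of P gives a single LP objective with strictly positive weights, and any optimizer of that LP is an efficient point of the MOLP.

```python
import numpy as np
from scipy.optimize import linprog

P = np.array([[1.0, 0.0],
              [0.0, 1.0]])
B = np.array([[1.0, 1.0]])   # feasible set: x1 + x2 <= 1, x >= 0
b = np.array([1.0])

c = -P.sum(axis=0)           # maximize the sum of objectives (negated for linprog)
res = linprog(c, A_ub=B, b_ub=b, bounds=[(0, None)] * 2)
x_eff = res.x                # any optimizer here is an efficient point
print(x_eff, P @ x_eff)
```

Here every point with x1 + x2 = 1 is optimal for the summed objective, so the solver returns one efficient point on that segment; which one depends on the solver.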

More recent references consider outcome set based solution concepts [5] and corresponding algorithms. [6] [3] Assume that MOLP is bounded, i.e., there is some y ∈ ℝ^q such that Px ≤ y for all feasible x. A solution of MOLP is defined to be a finite subset S̄ of efficient points that carries a sufficient amount of information in order to describe the upper image of MOLP. Denoting by S the feasible set of MOLP, the upper image of MOLP is the set P[S] − ℝ^q_+. A formal definition of a solution [5] [7] is as follows:

A finite set S̄ of efficient points is called a solution to MOLP if conv(P[S̄]) − ℝ^q_+ equals the upper image of MOLP ("conv" denotes the convex hull).

If MOLP is not bounded, a solution consists not only of points but of points and directions. [7] [8]

Solution methods

Multiobjective variants of the simplex algorithm are used to compute decision set based solutions [1] [2] [9] and objective set based solutions. [10]

Objective set based solutions can be obtained by Benson's algorithm. [3] [8]

Multiobjective linear programming is equivalent to polyhedral projection. [11]


References

  1. Ecker, J. G.; Kouada, I. A. (1978). "Finding all efficient extreme points for multiple objective linear programs". Mathematical Programming. 14 (1): 249–261. doi:10.1007/BF01588968. ISSN 0025-5610. S2CID 42726689.
  2. Ecker, J. G.; Hegner, N. S.; Kouada, I. A. (1980). "Generating all maximal efficient faces for multiple objective linear programs". Journal of Optimization Theory and Applications. 30 (3): 353–381. doi:10.1007/BF00935493. ISSN 0022-3239. S2CID 120455645.
  3. Benson, Harold P. (1998). "An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem". Journal of Global Optimization. 13 (1): 1–24. doi:10.1023/A:1008215702611. ISSN 0925-5001. S2CID 45440728.
  4. Ehrgott, M. (2005). Multicriteria Optimization. Springer. CiteSeerX 10.1.1.360.5223. doi:10.1007/3-540-27659-9. ISBN 978-3-540-21398-7.
  5. Heyde, Frank; Löhne, Andreas (2011). "Solution concepts in vector optimization: a fresh look at an old story". Optimization. 60 (12): 1421–1440. doi:10.1080/02331931003665108. ISSN 0233-1934. S2CID 54519405.
  6. Dauer, J. P.; Saleh, O. A. (1990). "Constructing the set of efficient objective values in multiple objective linear programs". European Journal of Operational Research. 46 (3): 358–365. doi:10.1016/0377-2217(90)90011-Y. ISSN 0377-2217.
  7. Löhne, Andreas (2011). Vector Optimization with Infimum and Supremum. Vector Optimization. Springer. doi:10.1007/978-3-642-18351-5. ISBN 978-3-642-18350-8. ISSN 1867-8971.
  8. Löhne, Andreas; Weißing, Benjamin (2017). "The vector linear program solver Bensolve – notes on theoretical background". European Journal of Operational Research. 260 (3): 807–813. arXiv:1510.04823. doi:10.1016/j.ejor.2016.02.039. ISSN 0377-2217. S2CID 17267946.
  9. Armand, P.; Malivert, C. (1991). "Determination of the efficient set in multiobjective linear programming". Journal of Optimization Theory and Applications. 70 (3): 467–489. CiteSeerX 10.1.1.161.9730. doi:10.1007/BF00941298. ISSN 0022-3239. S2CID 18407847.
  10. Rudloff, Birgit; Ulus, Firdevs; Vanderbei, Robert (2016). "A parametric simplex algorithm for linear vector optimization problems". Mathematical Programming. 163 (1–2): 213–242. arXiv:1507.01895. doi:10.1007/s10107-016-1061-z. ISSN 0025-5610. S2CID 13844342.
  11. Löhne, Andreas; Weißing, Benjamin (2016). "Equivalence between polyhedral projection, multiple objective linear programming and vector linear programming". Mathematical Methods of Operations Research. 84 (2): 411–426. arXiv:1507.00228. doi:10.1007/s00186-016-0554-0. ISSN 1432-2994. S2CID 26137201.