Benson's algorithm

Benson's algorithm, named after Harold Benson, is a method for solving multi-objective linear programming problems and vector linear programs. The algorithm works by finding the "efficient extreme points in the outcome set". [1] Its primary concept is to approximate the upper image of the vector optimization problem by cutting planes. [2]

Multi-objective linear programming is a subarea of mathematical optimization. A multiple objective linear program (MOLP) is a linear program with more than one objective function. An MOLP is a special case of a vector linear program. Multi-objective linear programming is also a subarea of multi-objective optimization.

Vector optimization is a subarea of mathematical optimization in which problems with vector-valued objective functions are optimized with respect to a given partial ordering and subject to certain constraints. A multi-objective optimization problem is a special case of a vector optimization problem: the objective space is a finite-dimensional Euclidean space partially ordered by the component-wise "less than or equal to" ordering.
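In symbols (a standard textbook formulation, with the objective f, feasible set S, and ordering cone C as generic placeholders rather than notation from this article's sources), such a problem and its ordering read:

    \min_C f(x) \quad \text{s.t.} \quad x \in S,
    \qquad \text{where} \quad y \leq_C y' \iff y' - y \in C .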

Cutting-plane method

In mathematical optimization, the cutting-plane method is any of a variety of optimization methods that iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are commonly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable convex optimization problems. The use of cutting planes to solve MILP was introduced by Ralph E. Gomory.
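To make the idea concrete, here is a minimal Python sketch of Kelley's cutting-plane method for a one-dimensional convex function; the objective, bounds, and tolerance are hypothetical, and this illustrates cutting planes in general rather than Benson's algorithm in particular:

    # Kelley's cutting-plane method: minimize a convex f over a box by
    # iteratively tightening a piecewise-linear lower model with cuts
    # t >= f(x_k) + f'(x_k) * (x - x_k).
    from scipy.optimize import linprog

    f = lambda x: (x - 1.0) ** 2      # hypothetical convex objective
    df = lambda x: 2.0 * (x - 1.0)    # its derivative (cut slope)

    lo, up = -5.0, 5.0                # hypothetical box constraints
    cuts = []                         # each cut stored as (slope a, intercept b)
    x_k = lo                          # starting point

    for _ in range(20):
        cuts.append((df(x_k), f(x_k) - df(x_k) * x_k))
        # LP in variables (x, t): minimize t subject to a*x - t <= -b
        A_ub = [[a, -1.0] for a, _ in cuts]
        b_ub = [-b for _, b in cuts]
        res = linprog([0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(lo, up), (None, None)])
        x_new, t = res.x
        if f(x_new) - t < 1e-8:       # lower model matches f: done
            break
        x_k = x_new

    print(x_k)                        # approaches the minimizer x = 1

Each iteration solves a small LP whose feasible region shrinks as cuts accumulate, the same refine-by-linear-inequalities pattern used at larger scale in MILP solvers and in Benson's algorithm.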

Idea of algorithm

Consider a vector linear program

    \min_C P x \quad \text{s.t.} \quad A x \geq b

for P \in \mathbb{R}^{q \times n}, A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, and a polyhedral convex ordering cone C \subseteq \mathbb{R}^q having nonempty interior and containing no lines. The feasible set is S = \{ x \in \mathbb{R}^n : A x \geq b \}. In particular, Benson's algorithm finds the extreme points of the set P[S] + C, which is called the upper image. [2]
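The outcome set that Benson's algorithm works with can be illustrated with a small computation. The following Python sketch uses weighted-sum scalarization, a different (and simpler) technique than Benson's cutting-plane scheme, to locate efficient extreme points of the outcome set P[S] of a hypothetical biobjective LP with C = \mathbb{R}^2_+:

    # Weighted-sum scalarization for a toy biobjective LP. This is NOT
    # Benson's algorithm; it merely exhibits the efficient extreme points
    # of the outcome set P[S] that Benson's algorithm computes by cuts.
    import numpy as np
    from scipy.optimize import linprog

    P = np.array([[1.0, 0.0],     # objective 1: minimize x1
                  [0.0, 1.0]])    # objective 2: minimize x2
    A = np.array([[-1.0, -1.0]])  # feasible set S: x1 + x2 >= 1, x >= 0
    b = np.array([-1.0])

    points = set()
    for w1 in np.linspace(0.01, 0.99, 25):
        w = np.array([w1, 1.0 - w1])        # strictly positive weights
        res = linprog(w @ P, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
        points.add(tuple(np.round(P @ res.x, 6)))  # point of P[S]

    print(sorted(points))  # typically (0.0, 1.0) and (1.0, 0.0)

For MOLPs, minimizing w^T P x over S for strictly positive weights w yields efficient points, so sweeping w traces extreme points of the upper image P[S] + C.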

Extreme point

In mathematics, an extreme point of a convex set S in a real vector space is a point in S which does not lie in any open line segment joining two points of S. For example, the extreme points of a square are its four corners. In linear programming problems, an extreme point is also called a vertex or corner point of S.

In the case of C = \mathbb{R}^q_+, one obtains the special case of a multi-objective linear program (multiobjective optimization).
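Written out with this cone, the MOLP special case of the problem above is:

    \min_C P x \quad \text{s.t.} \quad A x \geq b, \qquad C = \mathbb{R}^q_+,

so that y \leq_C y' holds exactly when y_i \leq y'_i for every component i = 1, \dots, q.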

Dual algorithm

There is a dual variant of Benson's algorithm, [3] which is based on geometric duality [4] for multi-objective linear programs.

Implementations

Bensolve - a free VLP solver

Inner

Related Research Articles

Quadratic programming (QP) is the process of solving a special type of mathematical optimization problem—specifically, a quadratic optimization problem, that is, the problem of optimizing a quadratic function of several variables subject to linear constraints on these variables. Quadratic programming is a particular type of nonlinear programming.
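A quadratic program is commonly written in the standard textbook form (not notation specific to this article):

    \min_x \ \tfrac{1}{2} x^{\mathsf{T}} Q x + c^{\mathsf{T}} x
    \quad \text{s.t.} \quad A x \leq b .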

Linear programming

Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming.
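A linear program in standard textbook form:

    \min_x \ c^{\mathsf{T}} x \quad \text{s.t.} \quad A x \leq b, \quad x \geq 0 .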

Support-vector machine

In machine learning, support-vector machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
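A minimal scikit-learn sketch of such a linear SVM classifier; the toy data are hypothetical:

    # Train a maximum-margin linear classifier on toy 2-D data.
    from sklearn import svm

    X = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.5], [0.5, 0.2]]  # training points
    y = [0, 1, 1, 0]                                      # category labels

    clf = svm.SVC(kernel="linear")    # linear maximum-margin separator
    clf.fit(X, y)
    print(clf.predict([[1.5, 1.5]]))  # assigns the new example a category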

In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real-world problems almost invariably include some unknown parameters. When the parameters are known only within certain bounds, one approach to tackling such problems is called robust optimization. Here the goal is to find a solution which is feasible for all such data and optimal in some sense. Stochastic programming models are similar in style but take advantage of the fact that probability distributions governing the data are known or can be estimated. The goal here is to find some policy that is feasible for all the possible data instances and maximizes the expectation of some function of the decisions and the random variables. More generally, such models are formulated, solved analytically or numerically, and analyzed in order to provide useful information to a decision-maker.

Stochastic gradient descent, also known as incremental gradient descent, is an iterative method for optimizing a differentiable objective function; it is a stochastic approximation of gradient descent optimization. A 2018 article implicitly credits Herbert Robbins and Sutton Monro for developing SGD in their 1951 article titled "A Stochastic Approximation Method"; see Stochastic approximation for more information. It is called stochastic because samples are selected randomly instead of as a single group or in the order they appear in the training set.
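Its update rule, in standard notation (Q_i the loss on a randomly drawn sample i, \eta the learning rate):

    w_{t+1} = w_t - \eta \, \nabla Q_i(w_t) .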

Farkas' lemma is a solvability theorem for a finite system of linear inequalities in mathematics. It was originally proven by the Hungarian mathematician Gyula Farkas. Farkas' lemma is the key result underpinning the linear programming duality and has played a central role in the development of mathematical optimization. It is used amongst other things in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming.
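One common statement of the lemma: for A \in \mathbb{R}^{m \times n} and b \in \mathbb{R}^m, exactly one of the following holds:

    \exists x \geq 0 : \ A x = b
    \qquad \text{or} \qquad
    \exists y : \ A^{\mathsf{T}} y \geq 0, \ b^{\mathsf{T}} y < 0 .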

The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function.
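The iteration, in standard notation (feasible set D, step size \gamma_k):

    s_k \in \arg\min_{s \in D} \langle s, \nabla f(x_k) \rangle,
    \qquad x_{k+1} = x_k + \gamma_k (s_k - x_k),
    \qquad \gamma_k = \tfrac{2}{k+2} .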

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem. However, in general, the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.

Convex cone

In linear algebra, a convex cone is a subset of a vector space over an ordered field that is closed under linear combinations with positive coefficients.
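In symbols: a subset C of a vector space is a convex cone precisely when

    \alpha x + \beta y \in C \quad \text{for all } x, y \in C
    \text{ and all scalars } \alpha, \beta > 0 .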

Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
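A standard-form SDP, with \langle C, X \rangle = \operatorname{tr}(C^{\mathsf{T}} X):

    \min_X \ \langle C, X \rangle
    \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i \ (i = 1, \dots, m),
    \quad X \succeq 0 .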

A second-order cone program (SOCP) is a convex optimization problem of the form

    \min_x \ f^{\mathsf{T}} x
    \quad \text{s.t.} \quad \lVert A_i x + b_i \rVert_2 \leq c_i^{\mathsf{T}} x + d_i
    \ (i = 1, \dots, m), \quad F x = g .

Conic optimization is a subfield of convex optimization that studies problems consisting of minimizing a convex function over the intersection of an affine subspace and a convex cone.

Multi-objective optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and emission of pollutants of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

The dual of a given linear program (LP) is another LP that is derived from the original (primal) LP in the following schematic way: each variable in the primal LP becomes a constraint in the dual LP; each constraint in the primal LP becomes a variable in the dual LP; and the objective direction is reversed, so a primal maximum becomes a dual minimum.
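For the symmetric form, the primal-dual pair reads:

    \text{(P)} \quad \max_x \ c^{\mathsf{T}} x \ \text{ s.t. } \ A x \leq b, \ x \geq 0
    \qquad
    \text{(D)} \quad \min_y \ b^{\mathsf{T}} y \ \text{ s.t. } \ A^{\mathsf{T}} y \geq c, \ y \geq 0 .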

In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function one.
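A linear-fractional program in standard notation:

    \max_x \ \frac{c^{\mathsf{T}} x + \alpha}{d^{\mathsf{T}} x + \beta}
    \quad \text{s.t.} \quad A x \leq b,

which reduces to a linear program when d = 0 and \beta = 1.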

In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. That means the solution to the primal (minimization) problem is always greater than or equal to the solution to an associated dual problem. This is opposed to strong duality which only holds in certain cases.
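With primal (minimization) optimum p^\ast and dual optimum d^\ast, weak duality states:

    d^\ast \leq p^\ast, \qquad \text{so the duality gap } p^\ast - d^\ast \geq 0 .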

In applied mathematics, Graver bases enable iterative solutions of linear and various nonlinear integer programming problems in polynomial time. They were introduced by Jack E. Graver. Their connection to the theory of Gröbner bases was discussed by Bernd Sturmfels. The algorithmic theory of Graver bases and its application to integer programming is described by Shmuel Onn.

In mathematical optimization, total dual integrality is a sufficient condition for the integrality of a polyhedron. Thus, the optimization of a linear objective over the integral points of such a polyhedron can be done using techniques from linear programming.

References

  1. Harold P. Benson (1998). "An Outer Approximation Algorithm for Generating All Efficient Extreme Points in the Outcome Set of a Multiple Objective Linear Programming Problem". Journal of Global Optimization. 13 (1): 1–24. doi:10.1023/A:1008215702611.
  2. Andreas Löhne (2011). Vector Optimization with Infimum and Supremum. Springer. pp. 162–169. ISBN 9783642183508.
  3. Ehrgott, Matthias; Löhne, Andreas; Shao, Lizhen (2011). "A dual variant of Benson's "outer approximation algorithm" for multiple objective linear programming". Journal of Global Optimization. 52 (4): 757–778. doi:10.1007/s10898-011-9709-y. ISSN 0925-5001.
  4. Heyde, Frank; Löhne, Andreas (2008). "Geometric Duality in Multiple Objective Linear Programming" (PDF). SIAM Journal on Optimization. 19 (2): 836–845. doi:10.1137/060674831. ISSN 1052-6234.