Multi-objective linear programming is a subarea of mathematical optimization. A multiple objective linear program (MOLP) is a linear program with more than one objective function. An MOLP is a special case of a vector linear program. Multi-objective linear programming is also a subarea of multi-objective optimization.
In mathematical terms, a MOLP can be written as:

$$\max \; Px \quad \text{subject to} \quad a \le Bx \le b,\ \ell \le x \le u,$$

where $P$ is a $q \times n$ matrix, $B$ is an $m \times n$ matrix, $a$ is an $m$-dimensional vector with components in $\mathbb{R} \cup \{-\infty\}$, $b$ is an $m$-dimensional vector with components in $\mathbb{R} \cup \{+\infty\}$, $\ell$ is an $n$-dimensional vector with components in $\mathbb{R} \cup \{-\infty\}$, and $u$ is an $n$-dimensional vector with components in $\mathbb{R} \cup \{+\infty\}$.
A feasible point $x$ is called efficient if there is no feasible point $y$ with $Px \le Py$, $Px \ne Py$, where $\le$ denotes the component-wise ordering.
Often in the literature, the aim in multiple objective linear programming is to compute the set of all efficient extreme points. [1] There are also algorithms to determine the set of all maximal efficient faces. [2] Based on these goals, the set of all efficient (extreme) points can be seen as the solution of MOLP. This type of solution concept is called decision set based. [3] It is not compatible with an optimal solution of a linear program but rather parallels the set of all optimal solutions of a linear program (which is more difficult to determine).
Efficient points are frequently called efficient solutions. This term is misleading because a single efficient point can already be obtained by solving one linear program, such as the linear program with the same feasible set whose objective function is the sum of the objectives of MOLP. [4]
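To make this concrete, here is a minimal sketch of that single-LP approach in Python with scipy.optimize.linprog. The small two-objective problem and all of its data are hypothetical, invented only to illustrate how summing the objective rows yields one efficient point.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical bi-objective LP: maximize (x1, x2) subject to
# x1 + 2*x2 <= 4, 2*x1 + x2 <= 4, x >= 0.  The rows of P are the objectives.
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])
A_ub = np.array([[1.0, 2.0],
                 [2.0, 1.0]])
b_ub = np.array([4.0, 4.0])

# Sum the objective rows; linprog minimizes, so negate for maximization.
c = -P.sum(axis=0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x = res.x  # one efficient point of the MOLP
print("efficient point:", x, "objective values:", P @ x)
```

Because the summed objective weights every objective strictly positively, any optimal point of this single LP is efficient for the original MOLP.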
More recent references consider outcome set based solution concepts [5] and corresponding algorithms. [6] [3] Assume MOLP is bounded, i.e., there is some $y \in \mathbb{R}^q$ such that $Px \le y$ for all feasible $x$. A solution of MOLP is defined to be a finite subset $\bar{X}$ of efficient points that carries a sufficient amount of information in order to describe the upper image of MOLP. Denoting by $X$ the feasible set of MOLP, the upper image of MOLP is the set $\mathcal{P} := P[X] - \mathbb{R}^q_+$, where $\mathbb{R}^q_+$ denotes the nonnegative orthant of $\mathbb{R}^q$. A formal definition of a solution [5] [7] is as follows:
A finite set $\bar{X}$ of efficient points is called a solution to MOLP if $\mathcal{P} = \operatorname{conv} P[\bar{X}] - \mathbb{R}^q_+$ ("conv" denotes the convex hull).
If MOLP is not bounded, a solution consists not only of points but of points and directions. [7] [8]
Multiobjective variants of the simplex algorithm are used to compute decision set based solutions [1] [2] [9] and objective set based solutions. [10]
Objective set based solutions can be obtained by Benson's algorithm. [3] [8]
Multiobjective linear programming is equivalent to polyhedral projection. [11]
Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.
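As a small, hedged illustration, the sketch below solves an equality-constrained QP by assembling its KKT system and solving it as one linear system; the matrices Q, A and vectors c, b are invented example data, and this is an elementary special-case approach rather than a general QP solver.

```python
import numpy as np

# Hypothetical equality-constrained QP:
#   minimize    0.5 * x^T Q x + c^T x
#   subject to  A x = b
# For positive definite Q, the unique optimum solves the KKT system
#   [Q  A^T] [x  ]   [-c]
#   [A   0 ] [lam] = [ b]
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])

sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]
print("minimizer:", x, "multiplier:", lam)
```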
Linear programming is a method to achieve the best outcome in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming.
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In machine learning, support-vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik and colleagues, SVMs are one of the most robust prediction methods, being based on the statistical learning framework of VC theory proposed by Vapnik and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. An SVM maps training examples to points in space so as to maximize the width of the gap between the two categories. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall.
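As a brief illustration, the sketch below trains a linear SVM on a tiny synthetic two-class dataset with scikit-learn's svm.SVC; the data points and parameter settings are made up for the example.

```python
import numpy as np
from sklearn import svm

# Tiny synthetic two-class dataset (hypothetical example data).
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.8],
              [3.0, 3.0], [4.0, 4.2], [3.5, 2.9]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM seeks the separating hyperplane with maximum margin.
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# New examples are classified by which side of the gap they fall on.
print(clf.predict([[0.5, 0.5], [3.2, 3.4]]))  # expected: [0 1]
```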
In mathematical optimization, Dantzig's simplex algorithm is a popular algorithm for linear programming.
An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints are linear.
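To make the definition concrete, here is a minimal sketch of a tiny ILP solved through SciPy's milp interface (available in SciPy 1.9+); the objective, constraints, and bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical ILP: maximize x1 + 2*x2 subject to
# x1 + x2 <= 4 and x1 - x2 >= -2, with x1, x2 nonnegative integers.
c = np.array([-1.0, -2.0])  # milp minimizes, so negate to maximize

constraints = LinearConstraint(
    np.array([[1.0, 1.0],
              [1.0, -1.0]]),
    lb=np.array([-np.inf, -2.0]),
    ub=np.array([4.0, np.inf]),
)
integrality = np.ones(2, dtype=int)  # 1 marks a variable as integer
bounds = Bounds(lb=0.0, ub=np.inf)

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x)  # optimal integer solution, here [1, 3]
```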
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making. Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
In statistics, isotonic regression or monotonic regression is the technique of fitting a free-form line to a sequence of observations such that the fitted line is non-decreasing everywhere, and lies as close to the observations as possible.
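As a short illustration, the following sketch fits an isotonic regression with scikit-learn's IsotonicRegression; the noisy example data are invented.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Noisy, roughly increasing observations (hypothetical example data).
x = np.arange(10)
y = np.array([1.0, 0.5, 2.0, 1.8, 3.1, 2.9, 4.0, 4.5, 4.4, 5.0])

# Fit the closest non-decreasing step function in the least-squares sense.
ir = IsotonicRegression(increasing=True)
y_fit = ir.fit_transform(x, y)
print(y_fit)  # a non-decreasing sequence close to y
```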
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function.
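The iteration is short enough to sketch directly. In the toy problem below, minimizing a quadratic over the probability simplex, the linear minimization step just picks the vertex with the smallest gradient coordinate; the problem data and the step size γ = 2/(k+2) are standard illustrative choices, not taken from the original paper.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Frank-Wolfe: at each step, minimize the linearized objective over
    the feasible set (via the linear minimization oracle lmo) and move
    toward that minimizer."""
    x = x0.copy()
    for k in range(iters):
        s = lmo(grad(x))         # argmin of <grad(x), s> over the feasible set
        gamma = 2.0 / (k + 2.0)  # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy problem: minimize ||x - p||^2 over the probability simplex.
p = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - p)

def simplex_lmo(g):
    # Linear minimization over the simplex is attained at a vertex.
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x0 = np.array([1.0, 0.0, 0.0])
print(frank_wolfe(grad, simplex_lmo, x0))  # approximately p
```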
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. The solution to the dual problem provides a lower bound to the solution of the primal (minimization) problem. However in general the optimal values of the primal and dual problems need not be equal. Their difference is called the duality gap. For convex optimization problems, the duality gap is zero under a constraint qualification condition.
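For a standard-form linear program the primal–dual pair can be written out explicitly; the following textbook formulation is included only for concreteness:

$$\begin{aligned} \text{(primal)}\quad & \min_{x} \; c^{\mathsf T} x \quad \text{s.t. } Ax \ge b,\ x \ge 0,\\ \text{(dual)}\quad & \max_{y} \; b^{\mathsf T} y \quad \text{s.t. } A^{\mathsf T} y \le c,\ y \ge 0. \end{aligned}$$

Weak duality gives $b^{\mathsf T} y \le c^{\mathsf T} x$ for every feasible pair $(x, y)$, and for linear programs strong duality closes the gap whenever either problem has an optimal solution.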
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
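Concretely, each iteration cuts the current ellipsoid $E_k = \{x : (x - x_k)^{\mathsf T} P_k^{-1} (x - x_k) \le 1\}$ with the halfspace defined by a subgradient $g_k$ at its center and replaces it by the minimum-volume ellipsoid containing the remaining half. The standard central-cut update, as found in common textbook treatments, is:

$$\tilde g = \frac{g_k}{\sqrt{g_k^{\mathsf T} P_k g_k}}, \qquad x_{k+1} = x_k - \frac{1}{n+1} P_k \tilde g, \qquad P_{k+1} = \frac{n^2}{n^2 - 1}\left(P_k - \frac{2}{n+1} P_k \tilde g \tilde g^{\mathsf T} P_k\right).$$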
Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
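In its standard primal form, an SDP optimizes a linear functional of a symmetric matrix variable subject to affine constraints and a semidefiniteness condition:

$$\min_{X \in \mathbb{S}^n} \; \langle C, X \rangle \quad \text{s.t. } \langle A_i, X \rangle = b_i,\ i = 1, \dots, m, \quad X \succeq 0,$$

where $\langle C, X \rangle = \operatorname{tr}(C^{\mathsf T} X)$ and $X \succeq 0$ means $X$ is positive semidefinite.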
A second-order cone program (SOCP) is a convex optimization problem of the form

$$\min_{x} \; f^{\mathsf T} x \quad \text{s.t. } \lVert A_i x + b_i \rVert_2 \le c_i^{\mathsf T} x + d_i,\ i = 1, \dots, m, \quad Fx = g,$$

where $x \in \mathbb{R}^n$ is the optimization variable and $f \in \mathbb{R}^n$, $A_i \in \mathbb{R}^{n_i \times n}$, $b_i \in \mathbb{R}^{n_i}$, $c_i \in \mathbb{R}^n$, $d_i \in \mathbb{R}$, $F \in \mathbb{R}^{p \times n}$, and $g \in \mathbb{R}^p$ are problem parameters.
Multi-objective optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance while minimizing fuel consumption and emission of pollutants of a vehicle, are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lower-level optimization task. These problems involve two kinds of variables, referred to as the upper-level variables and the lower-level variables.
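A generic bilevel program is commonly written in the following nested form (textbook notation; $F$, $f$, $G$, and $g$ stand in for whatever the application specifies):

$$\min_{x,\, y} \; F(x, y) \quad \text{s.t. } G(x, y) \le 0, \quad y \in \arg\min_{y'} \{\, f(x, y') : g(x, y') \le 0 \,\},$$

where $x$ collects the upper-level variables and $y$ the lower-level variables.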
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function one.
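When the denominator is positive on the feasible region, a linear-fractional program reduces to an ordinary LP via the Charnes–Cooper transformation; in the standard textbook notation below, the original problem is on the left and its linear reformulation on the right:

$$\max_{x} \; \frac{c^{\mathsf T} x + \alpha}{d^{\mathsf T} x + \beta} \ \text{ s.t. } Ax \le b \qquad \Longrightarrow \qquad \max_{y,\, t} \; c^{\mathsf T} y + \alpha t \ \text{ s.t. } Ay \le b t,\ d^{\mathsf T} y + \beta t = 1,\ t \ge 0,$$

using the substitution $y = x / (d^{\mathsf T} x + \beta)$, $t = 1 / (d^{\mathsf T} x + \beta)$; taking $d = 0$, $\beta = 1$ recovers an ordinary linear program.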
Vector optimization is a subarea of mathematical optimization where optimization problems with a vector-valued objective function are optimized with respect to a given partial ordering and subject to certain constraints. A multi-objective optimization problem is a special case of a vector optimization problem: The objective space is the finite dimensional Euclidean space partially ordered by the component-wise "less than or equal to" ordering.
Benson's algorithm, named after Harold Benson, is a method for solving multi-objective linear programming problems and vector linear programs. This works by finding the "efficient extreme points in the outcome set". The primary concept in Benson's algorithm is to evaluate the upper image of the vector optimization problem by cutting planes.
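Schematically, in the maximization setting used above, one iteration of the cutting-plane idea can be summarized as follows (a simplified description, not the full algorithm): given a vertex $v$ of the current outer approximation of the upper image $\mathcal{P}$, solve

$$\min_{x,\, z} \; z \quad \text{s.t. } x \in X,\ Px \ge v - z\mathbf{1}.$$

If the optimal value $z^*$ is positive, then $v \notin \mathcal{P}$ and an optimal dual solution provides a supporting hyperplane of $\mathcal{P}$ that cuts $v$ off; otherwise $v \in \mathcal{P}$ and it is recorded as a vertex of the upper image.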
Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. The power of quantum computing may allow problems which are not practically feasible on classical computers to be solved, or suggest a considerable speedup with respect to the best known classical algorithm.