Multi-objective optimization

Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort when buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

For a multi-objective optimization problem, it is not guaranteed that a single solution simultaneously optimizes each objective. The objective functions are said to be conflicting. A solution is called nondominated, Pareto optimal, Pareto efficient or noninferior if none of the objective functions can be improved in value without degrading some of the other objective values. Without additional subjective preference information, there may exist a (possibly infinite) number of Pareto optimal solutions, all of which are considered equally good. Researchers study multi-objective optimization problems from different viewpoints and, thus, there exist different solution philosophies and goals when setting and solving them. The goal may be to find a representative set of Pareto optimal solutions, to quantify the trade-offs in satisfying the different objectives, and/or to find a single solution that satisfies the subjective preferences of a human decision maker (DM).

Bicriteria optimization denotes the special case in which there are two objective functions.

There is a direct relationship between multitask optimization and multi-objective optimization. [1]

Introduction

A multi-objective optimization problem is an optimization problem that involves multiple objective functions. [2] [3] [4] In mathematical terms, a multi-objective optimization problem can be formulated as

$$\min_{x \in X} \left( f_1(x), f_2(x), \ldots, f_k(x) \right),$$

where the integer $k \geq 2$ is the number of objectives and the set $X$ is the feasible set of decision vectors, which is typically $X \subseteq \mathbb{R}^n$ but depends on the $n$-dimensional application domain. The feasible set is typically defined by some constraint functions. In addition, the vector-valued objective function is often defined as

$$f : X \to \mathbb{R}^k, \qquad f(x) = \left( f_1(x), \ldots, f_k(x) \right)^{\mathsf T}.$$

Example of a Pareto frontier (in red), the set of Pareto optimal solutions (those that are not dominated by any other feasible solutions). The boxed points represent feasible choices, and smaller values are preferred to larger ones. Point C is not on the Pareto frontier because it is dominated by both point A and point B. Points A and B are not strictly dominated by any other, and hence do lie on the frontier.

If some objective function is to be maximized, it is equivalent to minimize its negative or its inverse. We denote by $Y \subseteq \mathbb{R}^k$ the image of $X$; by $x^* \in X$ a feasible solution or feasible decision; and by $z^* = f(x^*) \in \mathbb{R}^k$ an objective vector or an outcome.

In multi-objective optimization, there does not typically exist a feasible solution that minimizes all objective functions simultaneously. Therefore, attention is paid to Pareto optimal solutions; that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives. In mathematical terms, a feasible solution $x^1 \in X$ is said to (Pareto) dominate another solution $x^2 \in X$, if

  1. $f_i(x^1) \leq f_i(x^2)$ for all indices $i \in \{1, 2, \ldots, k\}$, and
  2. $f_j(x^1) < f_j(x^2)$ for at least one index $j \in \{1, 2, \ldots, k\}$.

A solution $x^* \in X$ (and the corresponding outcome $f(x^*)$) is called Pareto optimal if there does not exist another solution that dominates it. The set of Pareto optimal outcomes, denoted $X^*$, is often called the Pareto front, Pareto frontier, or Pareto boundary.
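To make the dominance relation concrete, the following minimal Python sketch (illustrative code, not from the source; all objectives assumed to be minimized) checks Pareto dominance between two objective vectors and filters a finite set of outcomes down to its nondominated subset.

```python
from typing import List, Sequence

def dominates(z1: Sequence[float], z2: Sequence[float]) -> bool:
    """Return True if objective vector z1 Pareto-dominates z2 (minimization)."""
    return all(a <= b for a, b in zip(z1, z2)) and any(a < b for a, b in zip(z1, z2))

def nondominated(outcomes: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only outcomes that are not dominated by any other outcome."""
    return [z for z in outcomes
            if not any(dominates(other, z) for other in outcomes if other is not z)]

# Example: (3, 3) is dominated by (2, 3) and by (3, 1), so it is filtered out.
print(nondominated([(1, 5), (2, 3), (3, 1), (3, 3)]))  # [(1, 5), (2, 3), (3, 1)]
```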

The Pareto front of a multi-objective optimization problem is bounded by a so-called nadir objective vector $z^{\mathrm{nad}}$ and an ideal objective vector $z^{\mathrm{ideal}}$, if these are finite. The nadir objective vector is defined as

$$z^{\mathrm{nad}}_i = \sup_{x \in X^*} f_i(x) \quad \text{for all } i = 1, \ldots, k,$$

and the ideal objective vector as

$$z^{\mathrm{ideal}}_i = \inf_{x \in X^*} f_i(x) \quad \text{for all } i = 1, \ldots, k.$$

In other words, the components of the nadir and the ideal objective vectors define upper and lower bounds, respectively, for the objective function values of Pareto optimal solutions. In practice, the nadir objective vector can only be approximated because, typically, the whole Pareto optimal set is unknown. In addition, a utopian objective vector $z^{\mathrm{utop}}$ with

$$z^{\mathrm{utop}}_i = z^{\mathrm{ideal}}_i - \epsilon \quad \text{for all } i = 1, \ldots, k,$$

where $\epsilon > 0$ is a small constant, is often defined for numerical reasons.
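Assuming a finite approximation of the Pareto optimal outcomes is already available (a hypothetical array below), the ideal, nadir, and utopian vectors can be estimated componentwise, as in this short sketch.

```python
import numpy as np

# Hypothetical approximation of the Pareto optimal outcomes
# (rows = outcomes, columns = objectives, all to be minimized).
pareto_outcomes = np.array([[1.0, 5.0],
                            [2.0, 3.0],
                            [3.0, 1.0]])

z_ideal = pareto_outcomes.min(axis=0)   # componentwise best value (lower bound)
z_nadir = pareto_outcomes.max(axis=0)   # componentwise worst value over the Pareto set (upper bound)
epsilon = 1e-6
z_utopian = z_ideal - epsilon           # strictly better than any attainable value

print(z_ideal, z_nadir, z_utopian)
```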

Examples of applications

Economics

In economics, many problems involve multiple objectives along with constraints on what combinations of those objectives are attainable. For example, a consumer's demand for various goods is determined by maximizing the utility derived from those goods, subject to a constraint based on how much income is available to spend on those goods and on the prices of those goods. This constraint allows more of one good to be purchased only at the sacrifice of consuming less of another good; therefore, the various objectives (more consumption of each good is preferred) are in conflict with each other. A common method for analyzing such a problem is to use a graph of indifference curves, representing preferences, and a budget constraint, representing the trade-offs that the consumer is faced with.

Another example involves the production possibilities frontier, which specifies what combinations of various types of goods can be produced by a society with certain amounts of various resources. The frontier specifies the trade-offs that the society is faced with — if the society is fully utilizing its resources, more of one good can be produced only at the expense of producing less of another good. A society must then use some process to choose among the possibilities on the frontier.

Macroeconomic policy-making is a context requiring multi-objective optimization. Typically a central bank must choose a stance for monetary policy that balances competing objectives — low inflation, low unemployment, low balance of trade deficit, etc. To do this, the central bank uses a model of the economy that quantitatively describes the various causal linkages in the economy; it simulates the model repeatedly under various possible stances of monetary policy, in order to obtain a menu of possible predicted outcomes for the various variables of interest. Then in principle it can use an aggregate objective function to rate the alternative sets of predicted outcomes, although in practice central banks use a non-quantitative, judgement-based process for ranking the alternatives and making the policy choice.

Finance

In finance, a common problem is to choose a portfolio when there are two conflicting objectives — the desire to have the expected value of portfolio returns be as high as possible, and the desire to have risk, often measured by the standard deviation of portfolio returns, be as low as possible. This problem is often represented by a graph in which the efficient frontier shows the best combinations of risk and expected return that are available, and in which indifference curves show the investor's preferences for various risk-expected return combinations. The problem of optimizing a function of the expected value (first moment) and the standard deviation (square root of the second central moment) of portfolio return is called a two-moment decision model.

Optimal control

In engineering and economics, many problems involve multiple objectives which are not describable as the-more-the-better or the-less-the-better; instead, there is an ideal target value for each objective, and the desire is to get as close as possible to the desired value of each objective. For example, energy systems typically have a trade-off between performance and cost [5] [6] or one might want to adjust a rocket's fuel usage and orientation so that it arrives both at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values.

Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed. [7]
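As a hedged illustration of the quadratic-objective idea (toy model and names, not from the source): with target values for three objectives but only two controllable variables, one can minimize a weighted sum of squared deviations of the objectives from their targets, accepting that the targets cannot all be met exactly.

```python
import numpy as np
from scipy.optimize import minimize

targets = np.array([1.0, 0.0, 2.0])   # ideal values for the three objectives
weights = np.array([1.0, 2.0, 0.5])   # relative cost of missing each target

def objectives(u):
    """Toy linear model mapping two control variables to three objectives."""
    A = np.array([[1.0, 0.5],
                  [0.2, -1.0],
                  [0.7, 0.3]])
    return A @ u

def quadratic_loss(u):
    """Cost rises quadratically with the distance of each objective from its target."""
    return np.sum(weights * (objectives(u) - targets) ** 2)

# Fewer controls than objectives: the targets cannot all be met simultaneously.
result = minimize(quadratic_loss, x0=np.zeros(2))
print(result.x, quadratic_loss(result.x))
```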

Optimal design

Product and process design can be greatly improved using modern modeling, simulation, and optimization techniques.[ citation needed ] The key question in optimal design is measuring what is good or desirable about a design. Before looking for optimal designs, it is important to identify characteristics that contribute the most to the overall value of the design. A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or product recovery, efficiency, process safety, operation time, etc. Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives. These objectives are typically conflicting, i.e., achieving the optimal value for one objective requires some compromise on one or more of the other objectives.

For example, when designing a paper mill, one can seek to decrease the amount of capital invested in a paper mill and enhance the quality of paper simultaneously. If the design of a paper mill is defined by large storage volumes and paper quality is defined by quality parameters, then the problem of optimal design of a paper mill can include objectives such as i) minimization of expected variation of those quality parameters from their nominal values, ii) minimization of the expected time of breaks and iii) minimization of the investment cost of storage volumes. Here, the maximum volume of towers is a design variable. This example of optimal design of a paper mill is a simplification of the model used in [8]. Multi-objective design optimization has also been implemented in engineering systems in circumstances such as control cabinet layout optimization, [9] airfoil shape optimization using scientific workflows, [10] design of nano-CMOS, [11] system on chip design, design of solar-powered irrigation systems, [12] optimization of sand mould systems, [13] [14] engine design, [15] [16] optimal sensor deployment [17] and optimal controller design. [18] [19]

Process optimization

Multi-objective optimization has been increasingly employed in chemical engineering and manufacturing. In 2009, Fiandaca and Fraga used the multi-objective genetic algorithm (MOGA) to optimize the pressure swing adsorption process (cyclic separation process). The design problem involved the dual maximization of nitrogen recovery and nitrogen purity. The results approximated the Pareto frontier well with acceptable trade-offs between the objectives. [20]

In 2010, Sendín et al. solved a multi-objective problem for the thermal processing of food. They tackled two case studies (bi-objective and triple-objective problems) with nonlinear dynamic models. They used a hybrid approach consisting of the weighted Tchebycheff and the Normal Boundary Intersection approach. The novel hybrid approach was able to construct a Pareto optimal set for the thermal processing of foods. [21]

In 2013, Ganesan et al. carried out the multi-objective optimization of the combined carbon dioxide reforming and partial oxidation of methane. The objective functions were methane conversion, carbon monoxide selectivity, and hydrogen to carbon monoxide ratio. Ganesan used the Normal Boundary Intersection (NBI) method in conjunction with two swarm-based techniques (Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) to tackle the problem. [22] Applications involving chemical extraction [23] and bioethanol production processes [24] have posed similar multi-objective problems.

In 2013, Abakarov et al. proposed an alternative technique to solve multi-objective optimization problems arising in food engineering. [25] The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of the non-dominated or Pareto-optimal solutions. The Analytic Hierarchy Process and Tabular Method were used simultaneously for choosing the best alternative among the computed subset of non-dominated solutions for osmotic dehydration processes. [26]

In 2018, Pearce et al. formulated task allocation to human and robotic workers as a multi-objective optimization problem, considering production time and the ergonomic impact on the human worker as the two objectives considered in the formulation. Their approach used a Mixed-Integer Linear Program to solve the optimization problem for a weighted sum of the two objectives to calculate a set of Pareto optimal solutions. Applying the approach to several manufacturing tasks showed improvements in at least one objective in most tasks and in both objectives in some of the processes. [27]

Radio resource management

The purpose of radio resource management is to satisfy the data rates that are requested by the users of a cellular network. [28] The main resources are time intervals, frequency blocks, and transmit powers. Each user has its own objective function that, for example, can represent some combination of the data rate, latency, and energy efficiency. These objectives are conflicting since the frequency resources are very scarce; thus, tight spatial frequency reuse is needed, which causes immense inter-user interference if not properly controlled. Multi-user MIMO techniques are nowadays used to reduce the interference by adaptive precoding. The network operator would like to provide both great coverage and high data rates; thus, the operator would like to find a Pareto optimal solution that balances the total network data throughput and the user fairness in an appropriate subjective manner.

Radio resource management is often solved by scalarization; that is, selection of a network utility function that tries to balance throughput and user fairness. The choice of utility function has a large impact on the computational complexity of the resulting single-objective optimization problem. [28] For example, the common utility of weighted sum rate gives an NP-hard problem with a complexity that scales exponentially with the number of users, while the weighted max-min fairness utility results in a quasi-convex optimization problem with only a polynomial scaling with the number of users. [29]

Electric power systems

Reconfiguration, by exchanging the functional links between the elements of the system, represents one of the most important measures that can improve the operational performance of a distribution system. The problem of optimization through the reconfiguration of a power distribution system has historically been defined as a single-objective problem with constraints. Since 1975, when Merlin and Back [30] introduced the idea of distribution system reconfiguration for active power loss reduction, many researchers have proposed diverse methods and algorithms to solve the reconfiguration problem as a single-objective problem. Some authors have proposed Pareto optimality based approaches (including active power losses and reliability indices as objectives). For this purpose, different artificial-intelligence-based methods have been used: microgenetic, [31] branch exchange, [32] particle swarm optimization [33] and non-dominated sorting genetic algorithm. [34]

Inspection of infrastructure

Autonomous inspection of infrastructure has the potential to reduce costs, risks and environmental impacts, as well as to ensure better periodic maintenance of inspected assets. Typically, planning such missions has been viewed as a single-objective optimization problem, where one aims to minimize the energy or time spent in inspecting an entire target structure. [35] For complex, real-world structures, however, covering 100% of an inspection target is not feasible, and generating an inspection plan may be better viewed as a multiobjective optimization problem, where one aims to both maximize inspection coverage and minimize time and costs. A recent study has indicated that multiobjective inspection planning indeed has the potential to outperform traditional methods on complex structures. [36]

Solution

As multiple Pareto optimal solutions for multi-objective optimization problems usually exist, what it means to solve such a problem is not as straightforward as it is for a conventional single-objective optimization problem. Therefore, different researchers have defined the term "solving a multi-objective optimization problem" in various ways. This section summarizes some of them and the contexts in which they are used. Many methods convert the original problem with multiple objectives into a single-objective optimization problem; this is called a scalarized problem. If the Pareto optimality of the single-objective solutions obtained can be guaranteed, the scalarization is said to be done neatly.

Solving a multi-objective optimization problem is sometimes understood as approximating or computing all or a representative set of Pareto optimal solutions. [37] [38]

When decision making is emphasized, the objective of solving a multi-objective optimization problem is referred to as supporting a decision maker in finding the most preferred Pareto optimal solution according to their subjective preferences. [2] [39] The underlying assumption is that one solution to the problem must be identified to be implemented in practice. Here, a human decision maker (DM) plays an important role. The DM is expected to be an expert in the problem domain.

The most preferred results can be found using different philosophies. Multi-objective optimization methods can be divided into four classes. [3]

  1. In so-called no-preference methods, no DM is expected to be available, but a neutral compromise solution is identified without preference information. [2] The other classes are so-called a priori, a posteriori, and interactive methods, and they all involve preference information from the DM in different ways.
  2. In a priori methods, preference information is first asked from the DM, and then a solution best satisfying these preferences is found.
  3. In a posteriori methods, a representative set of Pareto optimal solutions is first found, and then the DM must choose one of them.
  4. In interactive methods, the decision maker is allowed to search for the most preferred solution iteratively. In each iteration of the interactive method, the DM is shown Pareto optimal solution(s) and describes how the solution(s) could be improved. The information given by the DM is then taken into account while generating new Pareto optimal solution(s) for the DM to study in the next iteration. In this way, the DM learns about the feasibility of their wishes and can concentrate on solutions that are interesting to them. The DM may stop the search whenever they want to.

More information and examples of different methods in the four classes are given in the following sections.

No-preference methods

When a decision maker does not explicitly articulate any preference information, the multi-objective optimization method can be classified as a no-preference method. [3] A well-known example is the method of global criterion, [40] in which a scalarized problem of the form

$$\min_{x \in X} \left\| f(x) - z^{\mathrm{ideal}} \right\|$$

is solved. In the above problem, $\|\cdot\|$ can be any $L_p$ norm, with common choices including $L_1$, $L_2$, and $L_\infty$. [2] The method of global criterion is sensitive to the scaling of the objective functions. Thus, it is recommended that the objectives be normalized into a uniform, dimensionless scale. [2] [39]
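A minimal sketch of the method of global criterion on a toy bi-objective problem (illustrative names and problem, assuming SciPy is available): the objectives are normalized using the ideal and nadir vectors, and the $L_p$ distance to the ideal point is minimized.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Two conflicting objectives of a single decision variable on [0, 1]."""
    return np.array([x[0] ** 2, (x[0] - 1.0) ** 2])

# Known here for the toy problem; in general these must be approximated.
z_ideal, z_nadir = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def global_criterion(x, p=2):
    z = (f(x) - z_ideal) / (z_nadir - z_ideal)   # normalize to a dimensionless scale
    return np.linalg.norm(z, ord=p)

result = minimize(global_criterion, x0=[0.3], bounds=[(0.0, 1.0)])
print(result.x)   # a neutral compromise solution, here near x = 0.5
```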

A priori methods

A priori methods require that sufficient preference information is expressed before the solution process. [3] Well-known examples of a priori methods include the utility function method, lexicographic method, and goal programming.

Utility function method

The utility function method assumes that the decision maker's utility function is available. A mapping $u : Y \to \mathbb{R}$ is a utility function if for all $y^1, y^2 \in Y$ it holds that $u(y^1) > u(y^2)$ if the decision maker prefers $y^1$ to $y^2$, and $u(y^1) = u(y^2)$ if the decision maker is indifferent between $y^1$ and $y^2$. The utility function specifies an ordering of the decision vectors (recall that vectors can be ordered in many different ways). Once $u$ is obtained, it suffices to solve

$$\max_{x \in X} u(f(x)),$$

but in practice it is very difficult to construct a utility function that would accurately represent the decision maker's preferences, [2] particularly since the Pareto front is unknown before the optimization begins.

Lexicographic method

The lexicographic method assumes that the objectives can be ranked in the order of importance. We assume that the objective functions are in the order of importance so that $f_1$ is the most important and $f_k$ the least important to the decision maker. Subject to this assumption, various methods can be used to attain the lexicographically optimal solution. Note that a goal or target value is not specified for any objective here, which makes it different from the Lexicographic Goal Programming method.
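One common way to attain a lexicographically optimal solution, sketched below under illustrative assumptions (SciPy, two objectives), is to optimize the objectives one at a time in order of importance, each time constraining the more important objectives to remain at their previously attained optimal values.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2            # most important objective
f2 = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2    # less important objective
bounds = [(-5, 5), (-5, 5)]

# Step 1: optimize the most important objective alone.
res1 = minimize(f1, x0=[0.0, 0.0], bounds=bounds)
f1_star = res1.fun

# Step 2: optimize the next objective while keeping f1 at its optimum (small tolerance).
keep_f1 = NonlinearConstraint(f1, -np.inf, f1_star + 1e-6)
res2 = minimize(f2, x0=res1.x, bounds=bounds, constraints=[keep_f1])
print(res2.x)   # lexicographically optimal point, here close to (1, 0)
```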

Scalarizing

The linear scalarization approach is a simple method used to solve multi-objective optimization problems. It consists of aggregating the different objective functions into a single function. However, this method only allows finding the supported solutions of the problem (i.e., points on the convex hull of the objective set). The animation shows that when the outcome set is not convex, not all efficient solutions can be found.

Scalarizing a multi-objective optimization problem is an a priori method, which means formulating a single-objective optimization problem such that optimal solutions to the single-objective optimization problem are Pareto optimal solutions to the multi-objective optimization problem. [3] In addition, it is often required that every Pareto optimal solution can be reached with some parameters of the scalarization. [3] With different parameters for the scalarization, different Pareto optimal solutions are produced. A general formulation for a scalarization of a multi-objective optimization problem is

$$\min_{x \in X_\theta} g\left(f_1(x), \ldots, f_k(x); \theta\right),$$

where $\theta$ is a vector parameter, the set $X_\theta \subseteq X$ is a set depending on the parameter $\theta$, and $g : \mathbb{R}^{k+1} \to \mathbb{R}$ is a function.

Very well-known examples are:

linear scalarization
$$\min_{x \in X} \sum_{i=1}^{k} w_i f_i(x),$$
where the weights $w_i > 0$ of the objectives are the parameters of the scalarization; and

the $\epsilon$-constraint method
$$\min_{x \in X} f_j(x) \quad \text{subject to } f_i(x) \leq \epsilon_i \text{ for all } i \in \{1, \ldots, k\} \setminus \{j\},$$
where the upper bounds $\epsilon_i$ are parameters as above and $f_j$ is the objective to be minimized.

Somewhat more advanced examples are the following:

One example of the achievement scalarizing problems can be formulated as
$$\min_{x \in X} \; \max_{i=1,\ldots,k} \left[ \frac{f_i(x) - \bar{z}_i}{z^{\mathrm{nad}}_i - z^{\mathrm{utop}}_i} \right] + \rho \sum_{i=1}^{k} \frac{f_i(x)}{z^{\mathrm{nad}}_i - z^{\mathrm{utop}}_i},$$
where the term $\rho \sum_{i=1}^{k} \frac{f_i(x)}{z^{\mathrm{nad}}_i - z^{\mathrm{utop}}_i}$ is called the augmentation term, $\rho > 0$ is a small constant, and $z^{\mathrm{nad}}$ and $z^{\mathrm{utop}}$ are the nadir and utopian vectors, respectively. In the above problem, the parameter $\bar{z}$ is the so-called reference point representing objective function values preferred by the decision maker.

Sen's multi-objective programming can be formulated as
$$\max \; \sum_{j=1}^{r} \frac{Z_j}{Z^*_j} - \sum_{j=r+1}^{s} \frac{Z_j}{Z^*_j}, \qquad \text{subject to } x \in X,$$
where $Z^*_j$ is the individual optimum (absolute) for the objectives of maximization $Z_1, \ldots, Z_r$ and of minimization $Z_{r+1}, \ldots, Z_s$.

The hypervolume (or Chebyshev) scalarization can be formulated as
$$\min_{x \in X} \max_{i} \frac{f_i(x)}{w_i},$$
where the weights $w_i > 0$ of the objectives are the parameters of the scalarization. If the parameters/weights are drawn uniformly in the positive orthant, it is shown that this scalarization provably converges to the Pareto front, [43] even when the front is non-convex.
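To illustrate how such scalarizations are used in practice, the following sketch (toy bi-objective problem, illustrative names) traces an approximation of the Pareto front by solving a weighted-sum scalarization for a sweep of weight vectors; the Chebyshev form defined alongside could be substituted to reach non-convex parts of the front.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Toy bi-objective problem with a single decision variable in [0, 1]."""
    return np.array([x[0] ** 2, (x[0] - 1.0) ** 2])

def weighted_sum(x, w):
    return w @ f(x)                # linear scalarization

def chebyshev(x, w):
    return np.max(w * f(x))        # weighted Chebyshev scalarization (handles non-convex fronts)

front = []
for w1 in np.linspace(0.01, 0.99, 25):     # sweep the scalarization parameters
    w = np.array([w1, 1.0 - w1])
    res = minimize(weighted_sum, x0=[0.5], args=(w,), bounds=[(0.0, 1.0)])
    front.append(f(res.x))

front = np.array(front)            # each row approximates a Pareto optimal outcome
print(front[:5])
```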

For example, portfolio optimization is often conducted in terms of mean-variance analysis. In this context, the efficient set is a subset of the portfolios parametrized by the portfolio mean return $\mu_P$ in the problem of choosing portfolio shares to minimize the portfolio's variance of return subject to a given value of $\mu_P$; see Mutual fund separation theorem for details. Alternatively, the efficient set can be specified by choosing the portfolio shares to maximize the function $\mu_P - b\sigma_P$, where $\sigma_P$ is the standard deviation of portfolio return; the set of efficient portfolios consists of the solutions as the risk-aversion parameter $b$ ranges from zero to infinity.
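A hedged numerical sketch of the $\mu_P - b\sigma_P$ formulation with toy data for two risky assets (not from the source): sweeping the risk-aversion parameter $b$ traces out efficient portfolios.

```python
import numpy as np

mu = np.array([0.08, 0.12])            # expected returns of two assets
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])         # covariance matrix of returns

def best_portfolio(b, grid=np.linspace(0.0, 1.0, 1001)):
    """Maximize mu_P - b * sigma_P over long-only weights (w, 1 - w) by grid search."""
    best_w, best_val = None, -np.inf
    for w1 in grid:
        w = np.array([w1, 1.0 - w1])
        mu_p, sigma_p = w @ mu, np.sqrt(w @ cov @ w)
        val = mu_p - b * sigma_p
        if val > best_val:
            best_w, best_val = w, val
    return best_w

for b in [0.0, 0.5, 2.0, 10.0]:        # increasing risk aversion
    w = best_portfolio(b)
    print(b, w, w @ mu, np.sqrt(w @ cov @ w))
```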

Some of the above scalarizations involve invoking the minimax principle, where always the worst of the different objectives is optimized. [44]

A posteriori methods

A posteriori methods aim at producing all the Pareto optimal solutions or a representative subset of the Pareto optimal solutions. Most a posteriori methods fall into one of the following three classes: mathematical programming-based methods, evolutionary algorithms, and deep learning methods, which are discussed in turn below.

Mathematical programming

Well-known examples of mathematical programming-based a posteriori methods are the Normal Boundary Intersection (NBI), [45] Modified Normal Boundary Intersection (NBIm), [46] Normal Constraint (NC), [47] [48] Successive Pareto Optimization (SPO), [49] and Directed Search Domain (DSD) [50] methods, which solve the multi-objective optimization problem by constructing several scalarizations. The solution to each scalarization yields a Pareto optimal solution, whether locally or globally. The scalarizations of the NBI, NBIm, NC, and DSD methods are constructed to obtain evenly distributed Pareto points that give a good approximation of the real set of Pareto points.

Evolutionary algorithms

Evolutionary algorithms are popular approaches to generating Pareto optimal solutions to a multi-objective optimization problem. Most evolutionary multi-objective optimization (EMO) algorithms apply Pareto-based ranking schemes. Evolutionary algorithms such as the Non-dominated Sorting Genetic Algorithm-II (NSGA-II), [51] its extended version NSGA-III, [52] [53] the Strength Pareto Evolutionary Algorithm 2 (SPEA-2) [54] and multiobjective differential evolution variants have become standard approaches, although some schemes based on particle swarm optimization and simulated annealing [55] are significant. The main advantage of evolutionary algorithms, when applied to solve multi-objective optimization problems, is the fact that they typically generate sets of solutions, allowing computation of an approximation of the entire Pareto front. Their main disadvantages are their lower speed and the fact that the Pareto optimality of the solutions cannot be guaranteed; it is only known that none of the generated solutions is dominated by another.
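The following minimal sketch (not any specific published algorithm such as NSGA-II; parameters are illustrative) shows the Pareto-based selection idea common to EMO methods: a population is perturbed, and only nondominated individuals are kept as the archive that approximates the Pareto front.

```python
import random

def f(x):
    """Toy bi-objective problem on [0, 1]."""
    return (x ** 2, (x - 1.0) ** 2)

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def pareto_archive(points):
    """Keep only individuals whose objective vectors are nondominated."""
    objs = {x: f(x) for x in points}
    return [x for x in points
            if not any(dominates(objs[y], objs[x]) for y in points if y != x)]

random.seed(0)
population = [random.random() for _ in range(20)]
for _ in range(50):                                   # simple evolutionary loop
    offspring = [min(max(x + random.gauss(0, 0.05), 0.0), 1.0) for x in population]
    population = pareto_archive(population + offspring)[:40]

print(sorted(f(x) for x in population)[:5])           # approximation of the Pareto front
```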

Another paradigm for multi-objective optimization based on novelty using evolutionary algorithms was recently improved upon. [56] This paradigm searches for novel solutions in objective space (i.e., novelty search [57] on objective space) in addition to the search for non-dominated solutions. Novelty search is like stepping stones guiding the search to previously unexplored places. It is especially useful in overcoming bias and plateaus as well as guiding the search in many-objective optimization problems.

Deep learning methods

Deep learning conditional methods are new approaches to generating several Pareto optimal solutions. The idea is to use the generalization capacity of deep neural networks to learn a model of the entire Pareto front from a limited number of example trade-offs along that front, a task called Pareto Front Learning. [58] Several approaches address this setup, including using hypernetworks [58] and using Stein variational gradient descent. [59]

List of methods

Commonly known a posteriori methods are listed below:

Interactive methods

In interactive methods of optimizing multiple objective problems, the solution process is iterative and the decision maker continuously interacts with the method when searching for the most preferred solution (see e.g., Miettinen 1999, [2] Miettinen 2008 [70] ). In other words, the decision maker is expected to express preferences at each iteration to get Pareto optimal solutions that are of interest to the decision maker and learn what kind of solutions are attainable.

The following steps are commonly present in interactive methods of optimization: [70]

  1. initialize (e.g., calculate ideal and approximated nadir objective vectors and show them to the decision maker)
  2. generate a Pareto optimal starting point (by using e.g., some no-preference method or solution given by the decision maker)
  3. ask for preference information from the decision maker (e.g., aspiration levels or number of new solutions to be generated)
  4. generate new Pareto optimal solution(s) according to the preferences and show it/them and possibly some other information about the problem to the decision maker
  5. if several solutions were generated, ask the decision maker to select the best solution so far
  6. stop (if the decision maker wants to; otherwise, go to step 3).

The above aspiration levels refer to desirable objective function values forming a reference point. Instead of mathematical convergence, often used as a stopping criterion in mathematical optimization methods, psychological convergence is often emphasized in interactive methods. Generally speaking, a method is terminated when the decision maker is confident that they have found the most preferred solution available.
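A schematic sketch of such an interactive loop (illustrative only: preference elicitation is reduced to a fixed reference point, and the Pareto optimal solution is obtained with an achievement-type scalarization as described earlier in this article):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Toy bi-objective problem on [0, 1]."""
    return np.array([x[0] ** 2, (x[0] - 1.0) ** 2])

z_ideal, z_nadir = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def solve_for_reference_point(z_ref, rho=1e-4):
    """Project a decision maker's reference point onto the Pareto front."""
    scale = z_nadir - z_ideal
    def asf(x):
        d = (f(x) - z_ref) / scale
        return np.max(d) + rho * np.sum(f(x) / scale)   # augmented achievement scalarizing function
    return minimize(asf, x0=[0.5], bounds=[(0.0, 1.0)]).x

# One "iteration" of the interactive loop with a hypothetical reference point from the DM.
z_ref = np.array([0.2, 0.3])
x = solve_for_reference_point(z_ref)
print(x, f(x))   # shown to the DM, who may then revise the reference point and iterate
```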

Types of preference information

There are different interactive methods involving different types of preference information. Three types can be identified based on

  1. trade-off information,
  2. reference points, and
  3. classification of objective functions. [70]

On the other hand, a fourth type, generating a small sample of solutions, is considered in [71] [72]. An example of an interactive method utilizing trade-off information is the Zionts-Wallenius method, [73] in which the decision maker is shown several objective trade-offs at each iteration and is expected to say whether they like, dislike, or are indifferent to each trade-off. In reference point-based methods (see e.g., [74] [75] ), the decision maker is expected at each iteration to specify a reference point consisting of desired values for each objective, and a corresponding Pareto optimal solution (or solutions) is then computed and shown to them for analysis. In classification-based interactive methods, the decision maker is assumed to give preferences in the form of classifying the objectives at the current Pareto optimal solution into different classes, indicating how the values of the objectives should be changed to get a more preferred solution. Then, the classification information is taken into account when new (more preferred) Pareto optimal solutions are computed. In the satisficing trade-off method (STOM), [76] three classes are used: objectives whose values 1) should be improved, 2) can be relaxed, and 3) are acceptable as such. In the NIMBUS method, [77] [78] two additional classes are also used: objectives whose values 4) should be improved until a given bound and 5) can be relaxed until a given bound.

Hybrid methods

Different hybrid methods exist, but here we consider hybridizing MCDM (multi-criteria decision-making) and EMO (evolutionary multi-objective optimization). A hybrid algorithm in multi-objective optimization combines algorithms/approaches from these two fields (see e.g., [70] ). Hybrid algorithms of EMO and MCDM are mainly used to overcome the shortcomings of one field by utilizing the strengths of the other. Several types of hybrid algorithms have been proposed in the literature, e.g., incorporating MCDM approaches into EMO algorithms as a local search operator, leading a DM to the most preferred solution(s), etc. A local search operator is mainly used to enhance the rate of convergence of EMO algorithms.

The roots for hybrid multi-objective optimization can be traced to the first Dagstuhl seminar organized in November 2004. Here, some of the best minds[ citation needed ] in EMO (Professor Kalyanmoy Deb, Professor Jürgen Branke, etc.) and MCDM (Professor Kaisa Miettinen, Professor Ralph E. Steuer, etc.) realized the potential in combining ideas and approaches of MCDM and EMO fields to prepare hybrids of them. Subsequently, many more Dagstuhl seminars have been arranged to foster collaboration. Recently, hybrid multi-objective optimization has become an important theme in several international conferences in the area of EMO and MCDM (see e.g., [79] [80] ).

Visualization of the Pareto front

Visualization of the Pareto front is one of the a posteriori preference techniques of multi-objective optimization. The a posteriori preference techniques provide an important class of multi-objective optimization techniques. [2] Usually, the a posteriori preference techniques include four steps: (1) the computer approximates the Pareto front, i.e., the Pareto optimal set in the objective space; (2) the decision maker studies the Pareto front approximation; (3) the decision maker identifies the preferred point on the Pareto front; (4) the computer provides the Pareto optimal decision, whose output coincides with the objective point identified by the decision maker. From the point of view of the decision maker, the second step is the most complicated. There are two main approaches to informing the decision maker. First, a number of points of the Pareto front can be provided in the form of a list (an interesting discussion and references are given in [81] ) or using heatmaps. [82] Second, the Pareto front can be visualized, as discussed in the following subsections.

Visualization in bi-objective problems: tradeoff curve

In the case of bi-objective problems, informing the decision maker concerning the Pareto front is usually carried out by its visualization: the Pareto front, often named the tradeoff curve in this case, can be drawn at the objective plane. The tradeoff curve gives full information on objective values and on objective tradeoffs, which inform how improving one objective is related to deteriorating the second one while moving along the tradeoff curve. The decision maker takes this information into account while specifying the preferred Pareto optimal objective point. The idea to approximate and visualize the Pareto front was introduced for linear bi-objective decision problems by S. Gass and T. Saaty. [83] This idea was developed and applied in environmental problems by J.L. Cohon. [84] A review of methods for approximating the Pareto front for various decision problems with a small number of objectives (mainly two) is provided in [85].
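A minimal plotting sketch of a tradeoff curve for a toy bi-objective problem (illustrative data; matplotlib assumed available):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.0, 200)       # decision variable; every x in [0, 1] is Pareto optimal here
f1, f2 = x ** 2, (x - 1.0) ** 2      # two conflicting objectives

plt.plot(f1, f2, label="tradeoff curve (Pareto front)")
plt.xlabel("objective 1")
plt.ylabel("objective 2")
plt.legend()
plt.show()
```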

Visualization in high-order multi-objective optimization problems

There are two generic ideas for visualizing the Pareto front in high-order multi-objective decision problems (problems with more than two objectives). One of them, which is applicable in the case of a relatively small number of objective points that represent the Pareto front, is based on using the visualization techniques developed in statistics (various diagrams, etc.; see the corresponding subsection below). The second idea proposes the display of bi-objective cross-sections (slices) of the Pareto front. It was introduced by W.S. Meisel in 1973 [86] who argued that such slices inform the decision maker on objective tradeoffs. The figures that display a series of bi-objective slices of the Pareto front for three-objective problems are known as the decision maps. They give a clear picture of tradeoffs between the three criteria. The disadvantages of such an approach are related to the following two facts. First, the computational procedures for constructing the Pareto front's bi-objective slices are unstable since the Pareto front is usually not stable. Secondly, it is applicable in the case of only three objectives. In the 1980s, the idea of W.S. Meisel was implemented in a different form—in the form of the Interactive Decision Maps (IDM) technique. [87] More recently, N. Wesner [88] proposed using a combination of a Venn diagram and multiple scatterplots of the objective space to explore the Pareto frontier and select optimal solutions.

See also

Related Research Articles

Pareto efficiency – Weakly optimal allocation of resources

In welfare economics, a Pareto improvement formalizes the idea of an outcome being "better in every possible way". A change is called a Pareto improvement if it leaves everyone in a society better-off. A situation is called Pareto efficient or Pareto optimal if all possible Pareto improvements have already been made; in other words, there are no longer any ways left to make one person better-off, without making some other person worse-off.

Mathematical optimization – Study of mathematical algorithms for optimization problems

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

Particle swarm optimization – Iterative simulation method

In computational science, particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.

A fitness function is a particular type of objective function that is used to summarise, as a single figure of merit, how close a given design solution is to achieving the set aims. Fitness functions are used in software architecture and evolutionary algorithms (EA), such as genetic programming and genetic algorithms to guide simulations towards optimal design solutions.

Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and multidisciplinary design analysis and optimization (MDAO).

Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function $g(x)$ is equivalent to the minimization of the function $f(x) := -g(x)$.

Ant colony optimization algorithms – Optimization algorithm

In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants. The pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a preferred method for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing.

Multiple-criteria decision analysis – Operations research that evaluates multiple conflicting criteria in decision making

Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making. It is also known as multiple attribute utility theory, multiple attribute value theory, multiple attribute preference theory, and multi-objective decision analysis.

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

Pareto front – Set of all Pareto efficient situations

In multi-objective optimization, the Pareto front is the set of all Pareto efficient solutions. The concept is widely used in engineering. It allows the designer to restrict attention to the set of efficient choices, and to make tradeoffs within this set, rather than considering the full range of every parameter.

Differential evolution – Method of mathematical optimization

In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the optimized problem and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.

Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lower-level optimization task. These problems involve two kinds of variables, referred to as the upper-level variables and the lower-level variables.

In applied mathematics, multimodal optimization deals with optimization tasks that involve finding all or most of the multiple solutions of a problem, as opposed to a single best solution. Evolutionary multimodal optimization is a branch of evolutionary computation, which is closely related to machine learning. Wong provides a short survey, wherein the chapter of Shir and the book of Preuss cover the topic in more detail.

In applied mathematics, test functions, known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as convergence rate, precision, robustness and general performance.

Biogeography-based optimization (BBO) is an evolutionary algorithm (EA) that optimizes a function by stochastically and iteratively improving candidate solutions with regard to a given measure of quality, or fitness function. BBO belongs to the class of metaheuristics since it includes many variations, and since it does not make any assumptions about the problem and can therefore be applied to a wide class of problems.

Derivative-free optimization is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions: Sometimes information about the derivative of the objective function f is unavailable, unreliable or impractical to obtain. For example, f might be non-smooth, or time-consuming to evaluate, or in some way noisy, so that methods that rely on derivatives or approximate them via finite differences are of little use. The problem to find optimal points in such situations is referred to as derivative-free optimization, algorithms that do not use derivatives or finite differences are called derivative-free algorithms.

Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems. Mathematical optimization deals with finding the best solution to a problem from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or suggest a considerable speed up with respect to the best known classical algorithm.

Multi-objective linear programming is a subarea of mathematical optimization. A multiple objective linear program (MOLP) is a linear program with more than one objective function. An MOLP is a special case of a vector linear program. Multi-objective linear programming is also a subarea of Multi-objective optimization.

Lexicographic optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Often, the different objectives can be ranked in order of importance to the decision-maker, so that objective $f_1$ is the most important, objective $f_2$ is the next most important, and so on. Lexicographic optimization presumes that the decision-maker prefers even a very small increase in $f_1$ to even a very large increase in $f_2, f_3, \ldots$. Similarly, the decision-maker prefers even a very small increase in $f_2$ to even a very large increase in $f_3, f_4, \ldots$. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values. Lexicographic optimization is sometimes called preemptive optimization, since a small increase in one objective value preempts a much larger increase in less important objective values.

Lexicographic max-min optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Lexmaxmin optimization presumes that the decision-maker would like the smallest objective value to be as high as possible; subject to this, the second-smallest objective should be as high as possible; and so on. In other words, the decision-maker ranks the possible solutions according to a leximin order of their objective function values.

References

  1. J. -Y. Li, Z. -H. Zhan, Y. Li and J. Zhang, "Multiple Tasks for Multiple Objectives: A New Multiobjective Optimization Method via Multitask Optimization," in IEEE Transactions on Evolutionary Computation, doi : 10.1109/TEVC.2023.3294307
  2. Kaisa Miettinen (1999). Nonlinear Multiobjective Optimization. Springer. ISBN 978-0-7923-8278-2. Retrieved 29 May 2012.
  3. Ching-Lai Hwang; Abu Syed Md Masud (1979). Multiple objective decision making, methods and applications: a state-of-the-art survey. Springer-Verlag. ISBN 978-0-387-09111-2. Retrieved 29 May 2012.
  4. Hassanzadeh, Hamidreza; Rouhani, Modjtaba (2010). "A multi-objective gravitational search algorithm". In Computational Intelligence, Communication Systems and Networks (CICSyN): 7–12.
  5. Shirazi, Ali; Najafi, Behzad; Aminyavari, Mehdi; Rinaldi, Fabio; Taylor, Robert A. (2014-05-01). "Thermal–economic–environmental analysis and multi-objective optimization of an ice thermal energy storage system for gas turbine cycle inlet air cooling". Energy. 69: 212–226. doi: 10.1016/j.energy.2014.02.071 . hdl: 11311/845828 .
  6. Najafi, Behzad; Shirazi, Ali; Aminyavari, Mehdi; Rinaldi, Fabio; Taylor, Robert A. (2014-02-03). "Exergetic, economic and environmental analyses and multi-objective optimization of an SOFC-gas turbine hybrid cycle coupled with an MSF desalination system". Desalination. 334 (1): 46–59. doi: 10.1016/j.desal.2013.11.039 . hdl: 11311/764704 .
  7. Rafiei, S. M. R.; Amirahmadi, A.; Griva, G. (2009). "Chaos rejection and optimal dynamic response for boost converter using SPEA multi-objective optimization approach". 2009 35th Annual Conference of IEEE Industrial Electronics. pp. 3315–3322. doi:10.1109/IECON.2009.5415056. ISBN   978-1-4244-4648-3. S2CID   2539380.
  8. Ropponen, A.; Ritala, R.; Pistikopoulos, E. N. (2011). "Optimization issues of the broke management system in papermaking". Computers & Chemical Engineering. 35 (11): 2510. doi:10.1016/j.compchemeng.2010.12.012.
  9. Pllana, Sabri; Memeti, Suejb; Kolodziej, Joanna (2019). "Customizing Pareto Simulated Annealing for Multi-objective Optimization of Control Cabinet Layout". arXiv: 1906.04825 [cs.OH].
  10. Nguyen, Hoang Anh; van Iperen, Zane; Raghunath, Sreekanth; Abramson, David; Kipouros, Timoleon; Somasekharan, Sandeep (2017). "Multi-objective optimisation in scientific workflow". Procedia Computer Science. 108: 1443–1452. doi: 10.1016/j.procs.2017.05.213 . hdl: 1826/12173 .
  11. Ganesan, T.; Elamvazuthi, I.; Vasant, P. (2015-07-01). "Multiobjective design optimization of a nano-CMOS voltage-controlled oscillator using game theoretic-differential evolution". Applied Soft Computing. 32: 293–299. doi:10.1016/j.asoc.2015.03.016.
  12. Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P. (2013-01-01). "Hypervolume-Driven Analytical Programming for Solar-Powered Irrigation System Optimization". In Zelinka, Ivan; Chen, Guanrong; Rössler, Otto E.; Snasel, Vaclav; Abraham, Ajith (eds.). Nostradamus 2013: Prediction, Modeling and Analysis of Complex Systems. Advances in Intelligent Systems and Computing. Vol. 210. Springer International Publishing. pp. 147–154. doi:10.1007/978-3-319-00542-3_15. ISBN   978-3-319-00541-6.
  13. Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P. (2013-01-01). "Multiobjective Optimization of Green Sand Mould System Using Chaotic Differential Evolution". In Gavrilova, Marina L.; Tan, C. J. Kenneth; Abraham, Ajith (eds.). Transactions on Computational Science XXI. Lecture Notes in Computer Science. Vol. 8160. Springer Berlin Heidelberg. pp. 145–163. doi:10.1007/978-3-642-45318-2_6. ISBN   978-3-642-45317-5.
  14. Surekha, B.; Kaushik, Lalith K.; Panduy, Abhishek K.; Vundavilli, Pandu R.; Parappagoudar, Mahesh B. (2011-05-07). "Multi-objective optimization of green sand mould system using evolutionary algorithms". The International Journal of Advanced Manufacturing Technology. 58 (1–4): 9–17. doi:10.1007/s00170-011-3365-8. ISSN   0268-3768. S2CID   110315544.
  15. "MultiObjective Optimization in Engine Design Using Genetic Algorithms to Improve Engine Performance | ESTECO". www.esteco.com. Retrieved 2015-12-01.
  16. Courteille, E.; Mortier, F.; Leotoing, L.; Ragneau, E. (2005-05-16). "Multi-Objective Robust Design Optimization of an Engine Mounting System". SAE Technical Paper Series (PDF). Vol. 1. Warrendale, PA. doi:10.4271/2005-01-2412. S2CID 20170456.
  17. Domingo-Perez, Francisco; Lazaro-Galilea, Jose Luis; Wieser, Andreas; Martin-Gorostiza, Ernesto; Salido-Monzu, David; Llana, Alvaro de la (April 2016). "Sensor placement determination for range-difference positioning using evolutionary multi-objective optimization". Expert Systems with Applications. 47: 95–105. doi:10.1016/j.eswa.2015.11.008.
  18. Bemporad, Alberto; Muñoz de la Peña, David (2009-12-01). "Multiobjective model predictive control". Automatica. 45 (12): 2823–2830. doi:10.1016/j.automatica.2009.09.032.
  19. Panda, Sidhartha (2009-06-01). "Multi-objective evolutionary algorithm for SSSC-based controller design". Electric Power Systems Research. 79 (6): 937–944. doi:10.1016/j.epsr.2008.12.004.
  20. Fiandaca, Giovanna; Fraga, Eric S.; Brandani, Stefano (2009). "A multi-objective genetic algorithm for the design of pressure swing adsorption". Engineering Optimization. 41 (9): 833–854. doi:10.1080/03052150903074189. S2CID   120201436 . Retrieved 2015-12-01.
  21. Sendín, José Oscar H.; Alonso, Antonio A.; Banga, Julio R. (2010-06-01). "Efficient and robust multi-objective optimization of food processing: A novel approach with application to thermal sterilization". Journal of Food Engineering. 98 (3): 317–324. doi:10.1016/j.jfoodeng.2010.01.007. hdl: 10261/48082 .
  22. Ganesan, T.; Elamvazuthi, I.; Ku Shaari, Ku Zilati; Vasant, P. (2013-03-01). "Swarm intelligence and gravitational search algorithm for multi-objective optimization of synthesis gas production". Applied Energy. 103: 368–374. Bibcode:2013ApEn..103..368G. doi:10.1016/j.apenergy.2012.09.059.
  23. Ganesan, Timothy; Elamvazuthi, Irraivan; Vasant, Pandian; Shaari, Ku Zilati Ku (2015-03-23). "Multiobjective Optimization of Bioactive Compound Extraction Process via Evolutionary Strategies". In Nguyen, Ngoc Thanh; Trawiński, Bogdan; Kosala, Raymond (eds.). Intelligent Information and Database Systems. Lecture Notes in Computer Science. Vol. 9012. Springer International Publishing. pp. 13–21. doi:10.1007/978-3-319-15705-4_2. ISBN   978-3-319-15704-7.
  24. Mehdi, Khosrow-Pour (2014-06-30). Contemporary Advancements in Information Technology Development in Dynamic Environments. IGI Global. ISBN   9781466662537.
  25. Abakarov, A.; Sushkov, Yu.; Mascheroni, R.H. (2012). "Multi-criteria optimization and decision-making approach for improving of food engineering processes" (PDF). International Journal of Food Studies. 2: 1–21. doi:10.7455/ijfs/2.1.2013.a1. S2CID 3708256.
  26. Abakarov, A.; Sushkov, Y.; Almonacid, S.; Simpson, R. (2009). "Multiobjective Optimisation Approach: Thermal Food Processing". Journal of Food Science. 74 (9): E471–E487. doi:10.1111/j.1750-3841.2009.01348.x. hdl: 10533/134983. PMID 20492109.
  27. Pearce, Margaret; Mutlu, Bilge; Shah, Julie; Radwin, Robert (2018). "Optimizing Makespan and Ergonomics in Integrating Collaborative Robots Into Manufacturing Processes". IEEE Transactions on Automation Science and Engineering. 15 (4): 1772–1784. doi: 10.1109/tase.2018.2789820 . ISSN   1545-5955. S2CID   52927442.
  28. E. Björnson and E. Jorswieck, Optimal Resource Allocation in Coordinated Multi-Cell Systems, Foundations and Trends in Communications and Information Theory, vol. 9, no. 2-3, pp. 113-381, 2013.
  29. Z.-Q. Luo and S. Zhang, Dynamic spectrum management: Complexity and duality, IEEE Journal of Selected Topics in Signal Processing, vol. 2, no. 1, pp. 57–73, 2008.
  30. Merlin, A.; Back, H. Search for a Minimal-Loss Operating Spanning Tree Configuration in an Urban Power Distribution System. In Proceedings of the 1975 Fifth Power Systems Computer Conference (PSCC), Cambridge, UK, 1–5 September 1975; pp. 1–18.
  31. Mendoza, J.E.; Lopez, M.E.; Coello, C.A.; Lopez, E.A. Microgenetic multiobjective reconfiguration algorithm considering power losses and reliability indices for medium voltage distribution network. IET Gener. Transm. Distrib. 2009, 3, 825–840.
  32. Bernardon, D.P.; Garcia, V.J.; Ferreira, A.S.Q.; Canha, L.N. Multicriteria distribution network reconfiguration considering subtransmission analysis. IEEE Trans. Power Deliv. 2010, 25, 2684–2691.
  33. Amanulla, B.; Chakrabarti, S.; Singh, S.N. Reconfiguration of power distribution systems considering reliability and power loss. IEEE Trans. Power Deliv. 2012, 27, 918–926.
  34. Tomoiagă, B.; Chindriş, M.; Sumper, A.; Sudria-Andreu, A.; Villafafila-Robles, R. Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. Energies 2013, 6, 1439-1455.
  35. Galceran, Enric; Carreras, Marc (2013). "A survey on coverage path planning for robotics". Robotics and Autonomous Systems. 61 (12): 1258–1276. CiteSeerX   10.1.1.716.2556 . doi:10.1016/j.robot.2013.09.004. ISSN   0921-8890. S2CID   1177069.
  36. Ellefsen, K.O.; Lepikson, H.A.; Albiez, J.C. (2019). "Multiobjective coverage path planning: Enabling automated inspection of complex, real-world structures". Applied Soft Computing. 61: 264–282. arXiv: 1901.07272 . Bibcode:2019arXiv190107272O. doi:10.1016/j.asoc.2017.07.051. hdl:10852/58883. ISSN   1568-4946. S2CID   6183350.
  37. Matthias Ehrgott (1 June 2005). Multicriteria Optimization. Birkhäuser. ISBN 978-3-540-21398-7. Retrieved 29 May 2012.
  38. Carlos A. Coello Coello; Gary B. Lamont; David A. Van Veldhuisen (2007). Evolutionary Algorithms for Solving Multi-Objective Problems. Springer. ISBN 978-0-387-36797-2. Retrieved 1 November 2012.
  39. Jürgen Branke; Kalyanmoy Deb; Kaisa Miettinen; Roman Slowinski (21 November 2008). Multiobjective Optimization: Interactive and Evolutionary Approaches. Springer. ISBN 978-3-540-88907-6. Retrieved 1 November 2012.
  40. Zeleny, M. (1973), "Compromise Programming", in Cochrane, J.L.; Zeleny, M. (eds.), Multiple Criteria Decision Making, University of South Carolina Press, Columbia, pp. 262–301
  41. Wierzbicki, A. P. (1982). "A mathematical basis for satisficing decision making". Mathematical Modelling. 3 (5): 391–405. doi:10.1016/0270-0255(82)90038-0.
  42. Sen, Chandra (1983). "A new approach for multi-objective rural development planning". The Indian Economic Journal. 30 (4): 91–96.
  43. Daniel Golovin and Qiuyi Zhang. Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization. ICML 2021. https://arxiv.org/abs/2006.04655
  44. Xu, J.; Tao, Z. (2011). Rough Multiple Objective Decision Making. United Kingdom: CRC Press. p. 67. https://books.google.com/books?id=zwDSBQAAQBAJ&dq=the%20minimax%20multi%20objective%20-game&pg=PA67
  45. Das, I.; Dennis, J. E. (1998). "Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems". SIAM Journal on Optimization. 8 (3): 631. doi:10.1137/S1052623496307510. hdl:1911/101880. S2CID 207081991.
  46. S. Motta, Renato; Afonso, Silvana M. B.; Lyra, Paulo R. M. (8 January 2012). "A modified NBI and NC method for the solution of N-multiobjective optimization problems". Structural and Multidisciplinary Optimization. 46 (2): 239–259. doi:10.1007/s00158-011-0729-5. S2CID 121122414.
  47. Messac, A.; Ismail-Yahaya, A.; Mattson, C.A. (2003). "The normalized normal constraint method for generating the Pareto frontier". Structural and Multidisciplinary Optimization. 25 (2): 86–98. doi:10.1007/s00158-002-0276-1. S2CID 58945431.
  48. Messac, A.; Mattson, C. A. (2004). "Normal constraint method with guarantee of even representation of complete Pareto frontier". AIAA Journal. 42 (10): 2101–2111. Bibcode:2004AIAAJ..42.2101M. doi:10.2514/1.8977.
  49. Mueller-Gritschneder, Daniel; Graeb, Helmut; Schlichtmann, Ulf (2009). "A Successive Approach to Compute the Bounded Pareto Front of Practical Multiobjective Optimization Problems". SIAM Journal on Optimization. 20 (2): 915–934. doi:10.1137/080729013.
  50. Erfani, Tohid; Utyuzhnikov, Sergei V. (2010). "Directed search domain: a method for even generation of the Pareto frontier in multiobjective optimization". Engineering Optimization. 43 (5): 467–484. doi:10.1080/0305215X.2010.497185. ISSN   0305-215X.
  51. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. (2002). "A fast and elitist multiobjective genetic algorithm: NSGA-II". IEEE Transactions on Evolutionary Computation. 6 (2): 182. CiteSeerX 10.1.1.17.7771. doi:10.1109/4235.996017. S2CID 9914171.
  52. Deb, Kalyanmoy; Jain, Himanshu (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints". IEEE Transactions on Evolutionary Computation. 18 (4): 577–601. doi:10.1109/TEVC.2013.2281535. ISSN   1089-778X. S2CID   206682597.
  53. Jain, Himanshu; Deb, Kalyanmoy (2014). "An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach". IEEE Transactions on Evolutionary Computation. 18 (4): 602–622. doi:10.1109/TEVC.2013.2281534. ISSN   1089-778X. S2CID   16426862.
  54. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the Performance of the Strength Pareto Evolutionary Algorithm, Technical Report 103, Computer Engineering and Communication Networks Lab (TIK), Swiss Federal Institute of Technology (ETH) Zurich (2001)
  55. Suman, B.; Kumar, P. (2006). "A survey of simulated annealing as a tool for single and multiobjective optimization". Journal of the Operational Research Society. 57 (10): 1143–1160. doi:10.1057/palgrave.jors.2602068. S2CID   18916703.
  56. Danilo Vasconcellos Vargas, Junichi Murata, Hirotaka Takano, Alexandre Claudio Botazzo Delbem (2015), "General Subpopulation Framework and Taming the Conflict Inside Populations", Evolutionary Computation 23 (1), 1–36.
  57. Lehman, Joel, and Kenneth O. Stanley. "Abandoning objectives: Evolution through the search for novelty alone." Evolutionary computation 19.2 (2011): 189-223.
  58. Navon, Aviv; Shamsian, Aviv; Chechik, Gal; Fetaya, Ethan (2021-04-26). "Learning the Pareto Front with Hypernetworks". Proceedings of International Conference on Learning Representations (ICLR). arXiv:2010.04104.
  59. Liu, Xingchao; Tong, Xin; Liu, Qiang (2021-12-06). "Profiling Pareto Front With Multi-Objective Stein Variational Gradient Descent". Advances in Neural Information Processing Systems. 34.
  60. Mavrotas, George (2009). "Effective implementation of the ε-constraint method in Multi-Objective Mathematical Programming problems". Applied Mathematics and Computation. 213 (2): 455–465. doi:10.1016/j.amc.2009.03.037. ISSN   0096-3003.
  61. Carvalho, Iago A.; Ribeiro, Marco A. (2020). "An exact approach for the Minimum-Cost Bounded-Error Calibration Tree problem". Annals of Operations Research. 287 (1): 109–126. doi:10.1007/s10479-019-03443-4. ISSN   0254-5330. S2CID   209959109.
  62. Mavrotas, G.; Diakoulaki, D. (2005). "Multi-criteria branch and bound: A vector maximization algorithm for Mixed 0-1 Multiple Objective Linear Programming". Applied Mathematics and Computation. 171 (1): 53–71. doi:10.1016/j.amc.2005.01.038. ISSN   0096-3003.
  63. Vincent, Thomas; Seipp, Florian; Ruzika, Stefan; Przybylski, Anthony; Gandibleux, Xavier (2013). "Multiple objective branch and bound for mixed 0-1 linear programming: Corrections and improvements for the biobjective case". Computers & Operations Research. 40 (1): 498–509. doi:10.1016/j.cor.2012.08.003. ISSN   0305-0548.
  64. Przybylski, Anthony; Gandibleux, Xavier (2017). "Multi-objective branch and bound". European Journal of Operational Research. 260 (3): 856–872. doi:10.1016/j.ejor.2017.01.032. ISSN   0377-2217.
  65. Craft, D.; Halabi, T.; Shih, H.; Bortfeld, T. (2006). "Approximating convex Pareto surfaces in multiobjective radiotherapy planning". Medical Physics. 33 (9): 3399–3407. Bibcode:2006MedPh..33.3399C. doi:10.1118/1.2335486. PMID   17022236.
  66. Beume, N.; Naujoks, B.; Emmerich, M. (2007). "SMS-EMOA: Multiobjective selection based on dominated hypervolume". European Journal of Operational Research. 181 (3): 1653. doi:10.1016/j.ejor.2006.08.008.
  67. Bringmann, Karl; Friedrich, Tobias; Neumann, Frank; Wagner, Markus (2011). "Approximation-Guided Evolutionary Multi-Objective Optimization". IJCAI. doi:10.5591/978-1-57735-516-8/IJCAI11-204.
  68. Battiti, Roberto; Mauro Brunato; Franco Mascia (2008). Reactive Search and Intelligent Optimization. Springer Verlag. ISBN 978-0-387-09623-0.
  69. Battiti, Roberto; Mauro Brunato (2011). Reactive Business Intelligence. From Data to Models to Insight. Trento, Italy: Reactive Search Srl. ISBN 978-88-905795-0-9.
  70. Miettinen, K.; Ruiz, F.; Wierzbicki, A. P. (2008). "Introduction to Multiobjective Optimization: Interactive Approaches". Multiobjective Optimization. Lecture Notes in Computer Science. Vol. 5252. pp. 27–57. CiteSeerX 10.1.1.475.465. doi:10.1007/978-3-540-88908-3_2. ISBN 978-3-540-88907-6.
  71. Luque, M.; Ruiz, F.; Miettinen, K. (2008). "Global formulation for interactive multiobjective optimization". OR Spectrum. 33: 27–48. doi:10.1007/s00291-008-0154-3. S2CID   15050545.
  72. Ruiz, F.; Luque, M.; Miettinen, K. (2011). "Improving the computational efficiency in a global formulation (GLIDE) for interactive multiobjective optimization". Annals of Operations Research. 197: 47–70. doi:10.1007/s10479-010-0831-x. S2CID   14947919.
  73. Zionts, S.; Wallenius, J. (1976). "An Interactive Programming Method for Solving the Multiple Criteria Problem". Management Science. 22 (6): 652. doi:10.1287/mnsc.22.6.652.
  74. Wierzbicki, A. P. (1986). "On the completeness and constructiveness of parametric characterizations to vector optimization problems". OR Spektrum. 8 (2): 73–78. doi:10.1007/BF01719738. S2CID   121771992.
  75. Andrzej P. Wierzbicki; Marek Makowski; Jaap Wessels (31 May 2000). Model-Based Decision Support Methodology with Environmental Applications. Springer. ISBN 978-0-7923-6327-9. Retrieved 17 September 2012.
  76. Nakayama, H.; Sawaragi, Y. (1984), "Satisficing Trade-Off Method for Multiobjective Programming", in Grauer, M.; Wierzbicki, A. P. (eds.), Interactive Decision Analysis, Springer-Verlag Berlin, Heidelberg, pp. 113–122
  77. Miettinen, K.; Mäkelä, M. M. (1995). "Interactive bundle-based method for nondifferentiable multiobjective optimization: NIMBUS". Optimization. 34 (3): 231. doi:10.1080/02331939508844109.
  78. Miettinen, K.; Mäkelä, M. M. (2006). "Synchronous approach in interactive multiobjective optimization". European Journal of Operational Research. 170 (3): 909. doi:10.1016/j.ejor.2004.07.052.
  79. Sindhya, K.; Ruiz, A. B.; Miettinen, K. (2011). "A Preference Based Interactive Evolutionary Algorithm for Multi-objective Optimization: PIE". Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science. Vol. 6576. pp. 212–225. doi:10.1007/978-3-642-19893-9_15. ISBN   978-3-642-19892-2.
  80. Sindhya, K.; Deb, K.; Miettinen, K. (2008). "A Local Search Based Evolutionary Multi-objective Optimization Approach for Fast and Accurate Convergence". Parallel Problem Solving from Nature – PPSN X. Lecture Notes in Computer Science. Vol. 5199. pp. 815–824. doi:10.1007/978-3-540-87700-4_81. ISBN   978-3-540-87699-1.
  81. Benson, Harold P.; Sayin, Serpil (1997). "Towards finding global representations of the efficient set in multiple objective mathematical programming" (PDF). Naval Research Logistics. 44 (1): 47–67. doi:10.1002/(SICI)1520-6750(199702)44:1<47::AID-NAV3>3.0.CO;2-M. hdl:11693/25666. ISSN 0894-069X.
  82. Pryke, Andy; Sanaz Mostaghim; Alireza Nazemi (2007). "Heatmap Visualization of Population Based Multi Objective Algorithms". Evolutionary Multi-Criterion Optimization. Lecture Notes in Computer Science. Vol. 4403. pp. 361–375. doi:10.1007/978-3-540-70928-2_29. ISBN   978-3-540-70927-5. S2CID   2502459.
  83. Gass, Saul; Saaty, Thomas (1955). "The computational algorithm for the parametric objective function". Naval Research Logistics Quarterly. 2 (1–2): 39–45. doi:10.1002/nav.3800020106. ISSN 0028-1441.
  84. Jared L. Cohon (13 January 2004). Multiobjective Programming and Planning. Courier Dover Publications. ISBN 978-0-486-43263-2. Retrieved 29 May 2012.
  85. Ruzika, S.; Wiecek, M. M. (2005). "Approximation Methods in Multiobjective Programming". Journal of Optimization Theory and Applications. 126 (3): 473–501. doi:10.1007/s10957-005-5494-4. ISSN 0022-3239. S2CID 122221156.
  86. Meisel, W. L. (1973), "Tradeoff decision in multiple criteria decision making", in Cochrane, J. L.; Zeleny, M. (eds.), Multiple Criteria Decision Making, pp. 461–476
  87. A. V. Lotov; V. A. Bushenkov; G. K. Kamenev (29 February 2004). Interactive Decision Maps: Approximation and Visualization of Pareto Frontier. Springer. ISBN 978-1-4020-7631-2. Retrieved 29 May 2012.
  88. Wesner, N. (2017), "Multiobjective Optimization via Visualization", Economics Bulletin, 37 (2): 1226–1233