Goal programming

Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations from each goal are measured both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function, which can be a vector or a weighted sum, depending on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis:

  1. Determine the required resources to achieve a desired set of objectives.
  2. Determine the degree of attainment of the goals with the available resources.
  3. Provide the best satisfying solution under varying amounts of resources and priorities of the goals.
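In each case the model is built from the same ingredients. A typical algebraic statement of a linear goal programme is sketched below (the notation is chosen here for illustration and is not fixed by the article): each objective is turned into a goal constraint by adding a pair of non-negative deviational variables, and only the unwanted deviations enter the achievement function.

```latex
\begin{align*}
\min_{x,\,n,\,p}\quad & a(n, p) && \text{(achievement function)}\\
\text{subject to}\quad & f_i(x) + n_i - p_i = b_i, && i = 1,\dots,k \quad \text{(goal constraints)}\\
& x \in F, && \text{(hard constraints)}\\
& n_i \ge 0,\ p_i \ge 0, && i = 1,\dots,k
\end{align*}
```

Here b_i is the target for the i-th objective f_i, n_i and p_i measure under- and over-achievement of that target, and a(n, p) penalises only those deviations the decision maker regards as unwanted; its form (lexicographic vector, weighted sum or maximum) distinguishes the variants described below.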

History

Goal programming was first used by Charnes, Cooper and Ferguson in 1955, [1] although the actual name first appeared in a 1961 text by Charnes and Cooper. [2] Seminal works by Lee, [3] Ignizio, [4] Ignizio and Cavalier, [5] and Romero [6] followed. Schniederjans provides a bibliography of a large number of pre-1995 articles relating to goal programming, [7] and Jones and Tamiz give an annotated bibliography of the period 1990-2000. [8] A recent textbook by Jones and Tamiz [9] gives a comprehensive overview of the state of the art in goal programming.

The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon.

Variants

The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a higher priority level being infinitely more important than any deviations in lower priority levels. This is known as lexicographic or pre-emptive goal programming. Ignizio [4] gives an algorithm showing how a lexicographic goal programme can be solved as a series of linear programmes. Lexicographic goal programming is used when there exists a clear priority ordering amongst the goals to be achieved.
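Ignizio's series-of-linear-programmes idea can be illustrated with a short script. The sketch below uses SciPy's linprog on a small two-goal example; the goals, targets and priority order are assumptions made purely for illustration and are not taken from the cited works. Each priority level is minimised in turn, and the value it achieves is then frozen as a constraint before the next level is solved.

```python
# A minimal sketch of lexicographic (pre-emptive) goal programming solved as a
# series of linear programmes with SciPy.  The goals and targets below are
# illustrative assumptions only.
import numpy as np
from scipy.optimize import linprog

# Decision vector z = [x1, x2, n1, p1, n2, p2]; n = under-, p = over-achievement.
# Goal 1 (priority 1):   x1 +   x2 + n1 - p1 = 100   (unwanted deviation: n1)
# Goal 2 (priority 2): 4*x1 + 5*x2 + n2 - p2 = 300   (unwanted deviation: p2)
A_eq = np.array([[1.0, 1.0, 1.0, -1.0, 0.0,  0.0],
                 [4.0, 5.0, 0.0,  0.0, 1.0, -1.0]])
b_eq = np.array([100.0, 300.0])
bounds = [(0, None)] * 6                       # all variables non-negative

# Unwanted deviations grouped by priority level, highest priority first.
priority_costs = [np.array([0, 0, 1, 0, 0, 0], dtype=float),   # level 1: minimise n1
                  np.array([0, 0, 0, 0, 0, 1], dtype=float)]   # level 2: minimise p2

A_ub, b_ub = [], []
for cost in priority_costs:
    res = linprog(cost,
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    # Freeze the achievement of this level before moving on to the next one.
    A_ub.append(cost)
    b_ub.append(res.fun)

print("decision variables x1, x2:", res.x[:2])
print("deviations n1, p1, n2, p2:", res.x[2:])
```

On this toy problem the first pass drives the unwanted deviation of the top-priority goal to its minimum, and the second pass then minimises the lower-priority deviation without being allowed to degrade what the first level achieved.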

If the decision maker is more interested in direct comparisons of the objectives then weighted or non-pre-emptive goal programming should be used. In this case, all the unwanted deviations are multiplied by weights, reflecting their relative importance, and added together as a single sum to form the achievement function. Deviations measured in different units cannot be summed directly due to the phenomenon of incommensurability.

Hence each unwanted deviation is multiplied by a normalisation constant to allow direct comparison. Popular choices for normalisation constants are the goal target value of the corresponding objective (hence turning all deviations into percentages) or the range of the corresponding objective (between the best and the worst possible values, hence mapping all deviations onto a zero-one range). [6] For decision makers more interested in obtaining a balance between the competing objectives, Chebyshev goal programming is used. Introduced by Flavell in 1976, [10] this variant seeks to minimise the maximum unwanted deviation, rather than the sum of deviations. This utilises the Chebyshev distance metric.
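Using the same deviational variables as in the formulation above, with u_i and v_i the weights attached to the unwanted negative and positive deviations and k_i a normalisation constant (for example the target value or the objective's range), the weighted and Chebyshev achievement functions can be written roughly as:

```latex
% Weighted (non-pre-emptive) goal programming: minimise a normalised weighted sum
a(n, p) \;=\; \sum_{i=1}^{k} \frac{u_i\, n_i + v_i\, p_i}{k_i}

% Chebyshev (minmax) goal programming: minimise the largest normalised deviation
a(n, p) \;=\; \max_{i = 1,\dots,k} \; \frac{u_i\, n_i + v_i\, p_i}{k_i}
```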

Strengths and weaknesses

A major strength of goal programming is its simplicity and ease of use. This accounts for the large number of goal programming applications in many and diverse fields. Linear goal programmes can be solved using linear programming software as either a single linear programme, or in the case of the lexicographic variant, a series of connected linear programmes.

Goal programming can hence handle relatively large numbers of variables, constraints and objectives. A debated weakness is the possibility of goal programming producing solutions that are not Pareto efficient. This violates a fundamental concept of decision theory, namely that no rational decision maker will knowingly choose a solution that is not Pareto efficient. However, techniques are available [6] [11] [12] to detect when this occurs and to project the solution onto the Pareto efficient frontier in an appropriate manner.

The setting of appropriate weights in the goal programming model is another area that has caused debate, with some authors [13] suggesting the use of the analytic hierarchy process or interactive methods [14] for this purpose. Also, the weights of the objective functions can be calculated based on their preference using the ordinal priority approach. [15]

Related Research Articles

Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other. In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero.

Operations research, often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve decision-making. Although the term management science is sometimes used similarly, the two fields differ in their scope and emphasis.

In welfare economics, a Pareto improvement formalizes the idea of an outcome being "better in every possible way". A change is called a Pareto improvement if it leaves at least one person in society better-off without leaving anyone else worse off than they were before. A situation is called Pareto efficient or Pareto optimal if all possible Pareto improvements have already been made; in other words, there are no longer any ways left to make one person better-off, without making some other person worse-off.

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

In regression analysis, least squares is a parameter estimation method based on minimizing the sum of the squares of the residuals made in the results of each individual equation.

Program management deals with overseeing a group or several projects that align with a company's organizational strategy, goals, and mission. These projects are intended to improve an organization's performance. Program management is distinct from project management.

A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal. Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization.

In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite, in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.

Data envelopment analysis (DEA) is a nonparametric method in operations research and economics for the estimation of production frontiers. DEA has been applied in a large range of fields including international banking, economic sustainability, police department operations, and logistical applications. Additionally, DEA has been used to assess the performance of natural language processing models, and it has found other applications within machine learning.

Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making. It is also known as multiple attribute utility theory, multiple attribute value theory, multiple attribute preference theory, and multi-objective decision analysis.

Multi-objective optimization or Pareto optimization is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.

Least absolute deviations (LAD), also known as least absolute errors (LAE), least absolute residuals (LAR), or least absolute values (LAV), is a statistical optimality criterion and a statistical optimization technique based on minimizing the sum of absolute deviations or the L1 norm of such values. It is analogous to the least squares technique, except that it is based on absolute values instead of squared values. It attempts to find a function which closely approximates a set of data by minimizing residuals between points generated by the function and corresponding data points. The LAD estimate also arises as the maximum likelihood estimate if the errors have a Laplace distribution. It was introduced in 1757 by Roger Joseph Boscovich.

In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function 1.

William Wager Cooper was an American operations researcher, known as a father of management science and as "Mr. Linear Programming". He was the founding president of The Institute of Management Sciences, founding editor-in-chief of Auditing: A Journal of Practice and Theory, a founding faculty member of the Graduate School of Industrial Administration at the Carnegie Institute of Technology, founding dean of the School of Urban and Public Affairs at CMU, the former Arthur Lowes Dickinson Professor of Accounting at Harvard University, and the Foster Parker Professor Emeritus of Management, Finance and Accounting at the University of Texas at Austin.

The Interactive Decision Maps technique of multi-objective optimization is based on approximating the Edgeworth-Pareto Hull (EPH) of the feasible objective set, that is, the feasible objective set broadened by the objective points dominated by it. Alternatively, this set is known as the Free Disposal Hull. Importantly, the EPH has the same Pareto front as the feasible objective set, but the bi-objective slices of the EPH look much simpler. The frontiers of bi-objective slices of the EPH contain the slices of the Pareto front. In contrast to the Pareto front itself, the EPH is usually stable with respect to disturbances of the data. The IDM technique relies on fast on-line display of bi-objective slices of the EPH, which is approximated in advance.

Chance-constrained portfolio selection is an approach to portfolio selection under loss aversion. The formulation assumes that (i) the investor's preferences are representable by the expected utility of final wealth, and (ii) the probability of final wealth falling below a survival or safety level must be acceptably low. The chance-constrained portfolio problem is then to find a portfolio that maximises the expected utility of final wealth subject to this chance constraint.

Ordinal priority approach (OPA) is a multiple-criteria decision analysis method that aids in solving the group decision-making problems based on preference relations.

Lexicographic optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Often, the different objectives can be ranked in order of importance to the decision-maker, so that objective f1 is the most important, objective f2 is the next most important, and so on. Lexicographic optimization presumes that the decision-maker prefers even a very small increase in f1 to even a very large increase in f2, f3, and so on; similarly, the decision-maker prefers even a very small increase in f2 to even a very large increase in f3, f4, and so on. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values. Lexicographic optimization is sometimes called preemptive optimization, since a small increase in one objective value preempts a much larger increase in less important objective values.

Lexicographic max-min optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Lexmaxmin optimization presumes that the decision-maker would like the smallest objective value to be as high as possible; subject to this, the second-smallest objective should be as high as possible; and so on. In other words, the decision-maker ranks the possible solutions according to a leximin order of their objective function values.

Chance Constrained Programming (CCP) is a mathematical optimization approach used to handle problems under uncertainty. It was first introduced by Charnes and Cooper in 1959 and further developed by Miller and Wagner in 1965. CCP is widely used in various fields, including finance, engineering, and operations research, to optimize decision-making processes where certain constraints need to be satisfied with a specified probability.

References

  1. A Charnes, WW Cooper, R Ferguson (1955) Optimal estimation of executive compensation by linear programming, Management Science, 1, 138-151.
  2. A Charnes, WW Cooper (1961) Management models and industrial applications of linear programming, Wiley, New York
  3. SM Lee (1972) Goal programming for decision analysis, Auerback, Philadelphia
  4. JP Ignizio (1976) Goal programming and extensions, Lexington Books, Lexington, MA.
  5. JP Ignizio, TM Cavalier (1994) Linear programming, Prentice Hall.
  6. C Romero (1991) Handbook of critical issues in goal programming, Pergamon Press, Oxford.
  7. MJ Schniederjans (1995) Goal programming methodology and applications, Kluwer Publishers, Boston.
  8. DF Jones, M Tamiz (2002) Goal programming in the period 1990-2000, in Multiple Criteria Optimization: State of the art annotated bibliographic surveys, M. Ehrgott and X. Gandibleux (Eds.), 129-170, Kluwer.
  9. Jones DF, Tamiz M (2010) Practical Goal Programming, Springer Books.
  10. RB Flavell (1976) A new goal programming formulation, Omega, 4, 731-732.
  11. EL Hannan (1980) Non-dominance in goal programming, INFOR, 18, 300-309
  12. M Tamiz, SK Mirrazavi, DF Jones (1999) Extensions of Pareto efficiency analysis to integer goal programming, Omega, 27, 179-188.
  13. SI Gass (1987) A process for determining priorities and weights for large scale linear goal programmes, Journal of the Operational Research Society, 37, 779-785.
  14. BJ White (1996) Developing Products and Their Rhetoric from a Single Hierarchical Model, 1996 Proceedings of the Annual Conference of the Society for Technical Communication, 43, 223-224.
  15. Tafakkori, Keivan; Tavakkoli-Moghaddam, Reza; Siadat, Ali (2022). "Sustainable negotiation-based nesting and scheduling in additive manufacturing systems: A case study and multi-objective meta-heuristic algorithms". Engineering Applications of Artificial Intelligence. 112: 104836. doi: 10.1016/j.engappai.2022.104836 . ISSN   0952-1976.