Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations are measured from these goals both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function, which can be a vector or a weighted sum depending on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis: (1) determining the resources required to achieve a desired set of objectives; (2) determining the degree of attainment of the goals with the available resources; and (3) providing the best satisficing solution under a varying amount of resources and priorities of the goals.
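As a brief algebraic sketch (the notation here is generic and is not drawn from the cited texts), a weighted goal programme over decision variables x with targets g_i can be written as

\[
\begin{aligned}
\min_{\mathbf{x},\,\mathbf{n},\,\mathbf{p}} \quad & \sum_{i=1}^{k} \left( u_i n_i + v_i p_i \right) \\
\text{subject to} \quad & f_i(\mathbf{x}) + n_i - p_i = g_i, \qquad i = 1,\dots,k, \\
& \mathbf{x} \in F, \quad n_i \geq 0, \quad p_i \geq 0,
\end{aligned}
\]

where n_i and p_i are the negative (under-achievement) and positive (over-achievement) deviations from target g_i, F is the set of hard constraints, and the weights u_i and v_i are set to zero for deviations that are not unwanted.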
Goal programming was first used by Charnes, Cooper and Ferguson in 1955, [1] although the actual name first appeared in a 1961 text by Charnes and Cooper. [2] Seminal works by Lee, [3] Ignizio, [4] Ignizio and Cavalier, [5] and Romero [6] followed. Schniederjans gives a bibliography of a large number of pre-1995 articles relating to goal programming, [7] and Jones and Tamiz give an annotated bibliography of the period 1990-2000. [8] A recent textbook by Jones and Tamiz [9] gives a comprehensive overview of the state of the art in goal programming.
The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon.
The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a higher priority level being infinitely more important than any deviations in lower priority levels. This is known as lexicographic or pre-emptive goal programming. Ignizio [4] gives an algorithm showing how a lexicographic goal programme can be solved as a series of linear programmes. Lexicographic goal programming is used when there exists a clear priority ordering amongst the goals to be achieved.
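The sequential approach can be illustrated with a small sketch. The goals, targets and solver choice below are invented for illustration and are not taken from Ignizio's algorithm description; the sketch simply solves one linear programme per priority level, freezing the achievement of each level before moving on to the next.

```python
# A small illustrative sketch (goals, targets and priorities invented here, not
# taken from the cited references): a lexicographic goal programme solved as a
# series of linear programmes with SciPy, one per priority level.
import numpy as np
from scipy.optimize import linprog

# Decision vector z = [x1, x2, n1, p1, n2, p2], all non-negative.
# Goal 1 (priority 1): x1 + x2 should reach 10  ->  x1 + x2 + n1 - p1 = 10
# Goal 2 (priority 2): x1 should not exceed 6   ->  x1      + n2 - p2 = 6
A_eq = np.array([[1.0, 1.0, 1.0, -1.0, 0.0,  0.0],
                 [1.0, 0.0, 0.0,  0.0, 1.0, -1.0]])
b_eq = np.array([10.0, 6.0])
bounds = [(0, None)] * 6

# One objective per priority level: first minimise n1, then p2.
priorities = [np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
              np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])]

frozen_A, frozen_b = [], []          # constraints freezing earlier levels
for c in priorities:
    res = linprog(c,
                  A_ub=np.array(frozen_A) if frozen_A else None,
                  b_ub=np.array(frozen_b) if frozen_b else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    # Do not allow later levels to worsen the deviation achieved at this level.
    frozen_A.append(c)
    frozen_b.append(res.fun)

# In this example both unwanted deviations (n1, then p2) are driven to zero.
print("x1, x2 =", res.x[:2])
```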
If the decision maker is more interested in direct comparisons of the objectives then weighted or non-pre-emptive goal programming should be used. In this case all the unwanted deviations are multiplied by weights, reflecting their relative importance, and added together as a single sum to form the achievement function. Deviations measured in different units cannot be summed directly due to the phenomenon of incommensurability.
Hence each unwanted deviation is multiplied by a normalisation constant to allow direct comparison. Popular choices for normalisation constants are the goal target value of the corresponding objective (hence turning all deviations into percentages) or the range of the corresponding objective (between the best and the worst possible values, hence mapping all deviations onto a zero-one range). [6] For decision makers more interested in obtaining a balance between the competing objectives, Chebyshev goal programming is used. Introduced by Flavell in 1976, [10] this variant seeks to minimise the maximum unwanted deviation, rather than the sum of deviations. This utilises the Chebyshev distance metric.
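In the same generic notation as above (again a sketch, not taken from Flavell's paper), Chebyshev goal programming introduces a single variable D that bounds every weighted, normalised deviation:

\[
\begin{aligned}
\min_{\mathbf{x},\,\mathbf{n},\,\mathbf{p},\,D} \quad & D \\
\text{subject to} \quad & \frac{u_i n_i + v_i p_i}{k_i} \leq D, \qquad i = 1,\dots,k, \\
& f_i(\mathbf{x}) + n_i - p_i = g_i, \quad \mathbf{x} \in F, \quad n_i,\, p_i \geq 0,
\end{aligned}
\]

where k_i is the normalisation constant chosen for the i-th goal, for example its target value or its range.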
A major strength of goal programming is its simplicity and ease of use. This accounts for the large number of goal programming applications in many and diverse fields. Linear goal programmes can be solved using linear programming software as either a single linear programme, or in the case of the lexicographic variant, a series of connected linear programmes.
Goal programming can hence handle relatively large numbers of variables, constraints and objectives. A debated weakness is that goal programming can produce solutions that are not Pareto efficient. This violates a fundamental concept of decision theory, namely that no rational decision maker will knowingly choose a solution that is not Pareto efficient. However, techniques are available [6] [11] [12] to detect when this occurs and to project the solution onto a Pareto efficient one in an appropriate manner.
The setting of appropriate weights in the goal programming model is another area that has caused debate, with some authors [13] suggesting the use of the analytic hierarchy process or interactive methods [14] for this purpose.
Time management is the process of planning and exercising conscious control of time spent on specific activities, especially to increase effectiveness, efficiency, and productivity. It involves a juggling act of various demands upon a person relating to work, social life, family, hobbies, personal interests and commitments with the finiteness of time. Using time effectively gives the person "choice" on spending/managing activities at their own time and expediency. Time management may be aided by a range of skills, tools, and techniques used to manage time when accomplishing specific tasks, projects, and goals complying with a due date. Initially, time management referred to just business or work activities, but eventually the term broadened to include personal activities as well. A time management system is a designed combination of processes, tools, techniques, and methods. Time management is usually a necessity in any project development as it determines the project completion time and scope. It is also important to understand that both technical and structural differences in time management exist due to variations in cultural concepts of time.
Pareto efficiency or Pareto optimality is a situation where no individual or preference criterion can be made better off without making at least one individual or preference criterion worse off. The concept is named after Vilfredo Pareto (1848–1923), an Italian engineer and economist who used the concept in his studies of economic efficiency and income distribution. The following three concepts are closely related: a Pareto improvement, a change that makes at least one participant better off without making any other worse off; Pareto dominance, where one situation is a Pareto improvement over another; and Pareto optimality, where no further Pareto improvement is possible.
Mathematical optimization or mathematical programming is the selection of a best element from some set of available alternatives. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems by minimizing the sum of the squares of the residuals made in the results of every single equation.
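A small illustrative sketch (the data points are invented here): fitting a trend line y ≈ a·x + b to an overdetermined system with NumPy, which minimises the sum of squared residuals.

```python
# Least-squares fit of a trend line to five (x, y) points using NumPy.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

A = np.column_stack([x, np.ones_like(x)])            # design matrix [x, 1]
(a, b), residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print(f"slope={a:.3f}, intercept={b:.3f}")           # minimises sum of squared residuals
```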
An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints are linear.
In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its negative, in which case it is to be maximized.
In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. Whereas deterministic optimization problems are formulated with known parameters, real world problems almost invariably include some unknown parameters. When the parameters are known only within certain bounds, one approach to tackling such problems is called robust optimization. Here the goal is to find a solution which is feasible for all such data and optimal in some sense. Stochastic programming models are similar in style but take advantage of the fact that probability distributions governing the data are known or can be estimated. The goal here is to find some policy that is feasible for all the possible data instances and maximizes the expectation of some function of the decisions and the random variables. More generally, such models are formulated, solved analytically or numerically, and analyzed in order to provide useful information to a decision-maker.
Data envelopment analysis (DEA) is a nonparametric method in operations research and economics for the estimation of production frontiers. It is used to empirically measure productive efficiency of decision making units (DMUs). Although DEA has a strong link to production theory in economics, the tool is also used for benchmarking in operations management, where a set of measures is selected to benchmark the performance of manufacturing and service operations. In benchmarking, the efficient DMUs, as defined by DEA, may not necessarily form a “production frontier”, but rather lead to a “best-practice frontier”.
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making. Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, we are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
Participatory development communication is the use of mass media and traditional, inter-personal means of communication that empowers communities to visualise aspirations and discover solutions to their development problems and issues.
Linear Programming Boosting (LPBoost) is a supervised classifier from the boosting family of classifiers. LPBoost maximizes a margin between training samples of different classes and hence also belongs to the class of margin-maximizing supervised classification algorithms. Consider a classification function f : X → {−1, +1}, which classifies samples from a space X into one of two classes.
Multi-objective optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. Minimizing cost while maximizing comfort while buying a car, and maximizing performance whilst minimizing fuel consumption and emission of pollutants of a vehicle are examples of multi-objective optimization problems involving two and three objectives, respectively. In practical problems, there can be more than three objectives.
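In generic notation (a sketch, not drawn from any one source), a multi-objective optimization problem can be written as

\[
\min_{\mathbf{x} \in X} \; \bigl( f_1(\mathbf{x}),\, f_2(\mathbf{x}),\, \dots,\, f_k(\mathbf{x}) \bigr), \qquad k \geq 2,
\]

where X is the feasible set and the k objective functions are typically in conflict, so that "optimality" is usually understood in the Pareto sense.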
Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, somehow defined, despite the presence of this noise. The context may be either discrete time or continuous time.
Least absolute deviations (LAD), also known as least absolute errors (LAE), least absolute value (LAV), least absolute residual (LAR), sum of absolute deviations, or the L1 norm condition, is a statistical optimality criterion and the statistical optimization technique that relies on it. Similar to the least squares technique, it attempts to find a function which closely approximates a set of data. In the simple case of a set of (x,y) data, the approximation function is a simple "trend line" in two-dimensional Cartesian coordinates. The method minimizes the sum of absolute errors (SAE). The least absolute deviations estimate also arises as the maximum likelihood estimate if the errors have a Laplace distribution. It was introduced in 1757 by Roger Joseph Boscovich.
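A least absolute deviations fit can be expressed as a linear programme by bounding each absolute residual with an auxiliary variable. The sketch below (data invented here) fits a trend line y ≈ a·x + b this way using SciPy.

```python
# Illustrative LAD fit: minimise the sum of absolute residuals via linear programming.
import numpy as np
from scipy.optimize import linprog

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.1, 2.9, 4.2, 4.8])
n = len(x)

# Variables: z = [a, b, t_1, ..., t_n], where t_i >= |y_i - (a*x_i + b)|.
c = np.concatenate([[0.0, 0.0], np.ones(n)])                      # minimise sum of t_i
A_ub = np.block([[ np.column_stack([ x,  np.ones(n)]), -np.eye(n)],   #  (a*x_i + b) - y_i <= t_i
                 [ np.column_stack([-x, -np.ones(n)]), -np.eye(n)]])  #  y_i - (a*x_i + b) <= t_i
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None)] + [(0, None)] * n,
              method="highs")
a, b = res.x[:2]
print(f"LAD fit: slope={a:.3f}, intercept={b:.3f}")
```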
Problem management is the process responsible for managing the lifecycle of all problems that happen or could happen in an IT service. The primary objectives of problem management are to prevent problems and resulting incidents from happening, to eliminate recurring incidents, and to minimize the impact of incidents that cannot be prevented. The Information Technology Infrastructure Library defines a problem as the cause of one or more incidents.
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function one.
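As a brief sketch in generic notation (the symbols are assumed here), a linear-fractional program has the form

\[
\max_{\mathbf{x}} \; \frac{\mathbf{c}^{\mathsf T}\mathbf{x} + \alpha}{\mathbf{d}^{\mathsf T}\mathbf{x} + \beta}
\quad \text{subject to} \quad A\mathbf{x} \leq \mathbf{b},
\]

with the denominator assumed positive on the feasible region; setting d = 0 and β = 1 recovers an ordinary linear program.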
William Wager Cooper was an American operations researcher, known as a father of management science and as "Mr. Linear Programming". He was the founding president of The Institute of Management Sciences, founding editor-in-chief of Auditing: A Journal of Practice and Theory, a founding faculty member of the Graduate School of Industrial Administration at the Carnegie Institute of Technology, founding dean of the School of Urban and Public Affairs at CMU, the former Arthur Lowes Dickinson Professor of Accounting at Harvard University, and the Foster Parker Professor Emeritus of Management, Finance and Accounting at the University of Texas at Austin.
Reverse logistics encompasses all operations related to the reuse of products and materials. It is "the process of moving goods from their typical final destination for the purpose of capturing value, or proper disposal. Remanufacturing and refurbishing activities also may be included in the definition of reverse logistics."
The CIPP evaluation model is a program evaluation model developed by Daniel Stufflebeam and colleagues in the 1960s. CIPP is an acronym for Context, Input, Process and Product; the model requires the evaluation of each of these four elements in judging a programme's value. CIPP is a decision-focused approach to evaluation and emphasises the systematic provision of information for programme management and operation.
Simulation-based optimization integrates optimization techniques into simulation analysis. Because of the complexity of the simulation, the objective function may become difficult and expensive to evaluate.