Robust decision-making (RDM) is an iterative decision-analytic framework that aims to help identify potentially robust strategies, characterize the vulnerabilities of such strategies, and evaluate the tradeoffs among them. [1] [2] [3] RDM focuses on informing decisions under conditions of what is called "deep uncertainty": conditions where the parties to a decision do not know, or do not agree on, the system models relating actions to consequences or the prior probability distributions for the key input parameters to those models. [2]: 1011
A wide variety of concepts, methods, and tools have been developed to address decision challenges marked by a large degree of uncertainty. One source of the name "robust decision" was the field of robust design popularized primarily by Genichi Taguchi in the 1980s and early 1990s. [4] [5] Jonathan Rosenhead and colleagues were among the first to lay out a systematic decision framework for robust decisions, in their 1989 book Rational Analysis for a Problematic World. [6] Similar themes have emerged from the literatures on scenario planning, robust control, imprecise probability, and info-gap decision theory and methods. An early review of many of these approaches is contained in the Third Assessment Report of the Intergovernmental Panel on Climate Change, published in 2001.
Robust decision-making (RDM) is a particular set of methods and tools developed over the last decade, primarily by researchers associated with the RAND Corporation, designed to support decision-making and policy analysis under conditions of deep uncertainty.
While often used by researchers to evaluate alternative options, RDM is designed, and is often employed, as a method for decision support, with a particular focus on helping decision-makers identify and design new decision options that may be more robust than those they had originally considered. Often, these more robust options are adaptive decision strategies designed to evolve over time in response to new information. In addition, RDM can be used to facilitate group decision-making in contentious situations where parties to the decision hold strong disagreements about assumptions and values. [7]
RDM approaches have been applied to a wide range of different types of decision challenges. A study in 1996 addressed adaptive strategies for reducing greenhouse gas emissions. [8] More recent studies include a variety of applications to water management issues, [9] [10] [11] evaluation of the impacts of proposed U.S. renewable energy requirements, a comparison of long-term energy strategies for the government of Israel, an assessment of science and technology policies the government of South Korea might pursue in response to increasing economic competition from China, and an analysis of Congress' options for reauthorizing the Terrorism Risk Insurance Act (TRIA).
RDM rests on three key concepts that differentiate it from the traditional subjective expected utility decision framework: multiple views of the future, a robustness criterion, and a reversal of the traditional order of decision analysis, in which an iterative vulnerability-and-response-option process replaces the predict-then-act framework.
First, RDM characterizes uncertainty with multiple views of the future. In some cases, these multiple views will be represented by multiple future states of the world. RDM can also incorporate probabilistic information, but rejects the view that a single joint probability distribution represents the best description of a deeply uncertain future. Rather, RDM uses ranges or, more formally, sets of plausible probability distributions to describe deep uncertainty.
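A minimal sketch of this idea follows, using an invented capacity-planning cost model and an invented set of plausible demand distributions: the strategy is scored against each member of the set rather than against a single privileged prior.

```python
# A minimal sketch (hypothetical toy model): instead of committing to one
# prior over future demand, evaluate a policy's expected cost under a set
# of plausible distributions and report the resulting range.
import numpy as np

rng = np.random.default_rng(0)

def policy_cost(demand, capacity=100.0):
    # Toy cost model: pay for installed capacity plus a penalty for unmet demand.
    return capacity + 5.0 * np.maximum(demand - capacity, 0.0)

# A set of plausible demand distributions; under deep uncertainty no single
# prior is privileged.
plausible_priors = {
    "low-growth":  lambda n: rng.normal(80, 10, n),
    "high-growth": lambda n: rng.normal(120, 15, n),
    "volatile":    lambda n: rng.lognormal(4.5, 0.4, n),
}

for name, sample in plausible_priors.items():
    costs = policy_cost(sample(10_000))
    print(f"{name:12s} expected cost ~ {costs.mean():8.1f}")
# The output is a range of expected costs across priors, not one point estimate.
```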
Second, RDM uses robustness, rather than optimality, as a criterion for assessing alternative policies. The traditional subjective expected utility framework ranks alternative decision options contingent on best-estimate probability distributions, so that in general there is a single best (i.e., highest-ranked) option. RDM analyses have employed several different definitions of robustness. These include: trading a small amount of optimal performance for less sensitivity to broken assumptions, good performance compared to the alternatives across a wide range of plausible scenarios, and keeping options open. [2] All incorporate some type of satisficing criterion and, in contrast to expected utility approaches, all generally describe tradeoffs rather than provide a strict ranking of alternative options.
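The sketch below makes two of these robustness definitions concrete, regret-based and satisficing-based, using an invented payoff table of strategies versus scenarios:

```python
# A minimal sketch (hypothetical values): two robustness criteria applied to
# a table of strategy performance across plausible scenarios.
import numpy as np

# Rows: strategies; columns: scenarios. Higher is better (hypothetical payoffs).
payoff = np.array([
    [10.0,  9.0,  2.0],   # strategy A: optimal in scenario 0, fragile in 2
    [ 8.0,  8.0,  7.0],   # strategy B: never best, never bad
])

# Regret-based robustness: worst-case shortfall from the best achievable
# payoff in each scenario (minimax regret prefers the smaller value).
regret = payoff.max(axis=0) - payoff
print("max regret:", regret.max(axis=1))        # A: 5.0, B: 2.0 -> B more robust

# Satisficing-based robustness: fraction of scenarios meeting a threshold.
threshold = 6.0
print("satisficing:", (payoff >= threshold).mean(axis=1))  # A: 2/3, B: 1.0
```

Note that the two criteria need not agree in general; RDM analyses typically present such measures as tradeoffs rather than collapsing them into a single ranking.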
Third, RDM employs a vulnerability-and-response-option analysis framework to characterize uncertainty and to help identify and evaluate robust strategies. This structuring of the decision problem is a key feature of RDM. The traditional decision analytic approach follows what has been called a predict-then-act approach [12] that first characterizes uncertainty about the future, and then uses this characterization to rank the desirability of alternative decision options. Importantly, this approach characterizes uncertainty without reference to the alternative options. In contrast, RDM characterizes uncertainty in the context of a particular decision. That is, the method identifies those combinations of uncertainties most important to the choice among alternative options and describes the set of beliefs about the uncertain state of the world that are consistent with choosing one option over another. This ordering provides cognitive benefits in decision support applications, allowing stakeholders to understand the key assumptions underlying alternative options before committing themselves to believing those assumptions. [13]
Robust decision methods seem most appropriate under three conditions: when the uncertainty is deep rather than well characterized, when there is a rich set of decision options, and when the decision challenge is sufficiently complex that decision-makers need simulation models to trace the potential consequences of their actions across many plausible scenarios.
When the uncertainty is well characterized, then traditional expected utility (predict-then-act) analyses are often most appropriate. In addition, if decision-makers lack a rich set of decision options they may have little opportunity to develop a robust strategy and can do no better than a predict-then-act analysis. [2]
If the uncertainty is deep and a rich set of options is available, traditional qualitative scenario methods may prove most effective if the system is sufficiently simple or well understood that decision-makers can accurately connect potential actions to their consequences without the aid of simulation models.
RDM is not a recipe of analytic steps, but rather a set of methods that can be combined in varying ways for specific decisions to implement the concept. Two key items in this toolkit are described below: exploratory modeling and scenario discovery.
Many RDM analyses use an exploratory modeling approach, [14] with computer simulations used not as devices for prediction, but rather as means for relating a set of assumptions to their implied consequences. The analyst draws useful information from such simulations by running them many times using an appropriate experimental design over the uncertain input parameters to the model(s), collecting the runs in a large database of cases, and analyzing this database to determine what policy-relevant statements can be supported. RDM represents a particular implementation of this concept: an RDM analysis typically creates a large database of simulation model results, and then uses this database to identify vulnerabilities of proposed strategies and the tradeoffs among potential responses. This analytic process provides several practical advantages over analyses that rest on a single best-estimate prediction.
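A minimal sketch of this workflow, using an invented two-parameter model as a stand-in for a real simulation:

```python
# A minimal sketch of exploratory modeling (hypothetical two-parameter model):
# run a simulation over a designed sample of its uncertain inputs and collect
# every case in a database for later analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n_cases = 5_000

# Experimental design over the uncertain inputs (plain uniform sampling here;
# a Latin hypercube or factorial design is also common).
cases = pd.DataFrame({
    "growth_rate":   rng.uniform(0.00, 0.06, n_cases),
    "damage_factor": rng.uniform(0.5, 2.0, n_cases),
})

def simulate(growth_rate, damage_factor, horizon=50):
    # Hypothetical stand-in for a real simulation model.
    return damage_factor * (1 + growth_rate) ** horizon

cases["outcome"] = simulate(cases["growth_rate"], cases["damage_factor"])

# The case database, not any single run, is the object of analysis.
print(cases.describe())
```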
RDM analyses often employ a process called scenario discovery to facilitate the identification of vulnerabilities of proposed strategies. [13] [15] The process begins by specifying a performance metric, such as the total cost of a policy or its deviation from optimality (regret), which is used to distinguish the cases in the results database where the strategy is judged successful from those where it is judged unsuccessful. Statistical or data-mining algorithms are then applied to the database to generate simple descriptions of the regions in the space of uncertain model inputs that best capture the unsuccessful cases; the algorithm describing these cases is tuned to balance predictive power against interpretability for decision-makers. The resulting clusters have many characteristics of scenarios and can be used to help decision-makers understand the vulnerabilities of the proposed policies and potential response options. A review conducted by the European Environment Agency of the rather sparse literature evaluating how scenarios actually perform in practice when used by organizations to inform decisions identified several key weaknesses of traditional scenario approaches; scenario-discovery methods are designed to address these weaknesses. [13] In addition, scenario discovery supports analysis of multiple stressors because it characterizes vulnerabilities as combinations of very different types of uncertain parameters (e.g., climate, economic, organizational capabilities).
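The sketch below illustrates the idea with a deliberately simplified, PRIM-style "peeling" procedure on synthetic data (the published algorithms are considerably more sophisticated): it shrinks a box over the uncertain inputs until the box concentrates the failing cases.

```python
# A simplified scenario-discovery sketch: greedy box peeling over synthetic
# cases, inspired by PRIM but not a full implementation of it.
import numpy as np

def prim_peel(X, failed, peel_frac=0.05, min_support=0.05):
    """Greedily shrink a box over the inputs X to concentrate failure cases."""
    box = np.column_stack([X.min(axis=0), X.max(axis=0)])  # per-dim [lower, upper]
    inside = np.ones(len(X), dtype=bool)
    while inside.mean() > min_support:
        best = None
        for j in range(X.shape[1]):
            for side, q in ((0, peel_frac), (1, 1.0 - peel_frac)):
                cut = np.quantile(X[inside, j], q)
                trial = inside & (X[:, j] >= cut if side == 0 else X[:, j] <= cut)
                if not trial.any():
                    continue
                density = failed[trial].mean()  # failure fraction in peeled box
                if best is None or density > best[0]:
                    best = (density, j, side, cut, trial)
        density, j, side, cut, trial = best
        if density <= failed[inside].mean():
            break  # no peel increases failure density; stop
        box[j, side] = cut
        inside = trial
    return box, failed[inside].mean(), inside.mean()

# Synthetic stand-in for a case database: failures cluster where both
# uncertain inputs are high.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(5_000, 2))
failed = (X[:, 0] > 0.7) & (X[:, 1] > 0.6)

box, density, support = prim_peel(X, failed)
print("box bounds per input:\n", box)   # approaches [[0.7, 1.0], [0.6, 1.0]]
print(f"failure density {density:.2f} over {support:.0%} of cases")
```

The recovered box is exactly the kind of compact, interpretable description of a vulnerable region ("the strategy fails when both inputs are high") that decision-makers can evaluate directly.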
Several software tools are available for performing RDM analysis. RAND Corporation has developed CARS for exploratory modeling and the sdtoolkit R package for scenario discovery. The EMA Workbench, developed at Delft University of Technology, provides extensive exploratory modeling and scenario discovery capabilities in Python. [16] OpenMORDM is an open-source R package for RDM that includes support for defining more than one performance objective. [17] OpenMORDM facilitates exploring the impact of different robustness criteria, including both regret-based (e.g., minimizing deviation in performance) and satisficing-based (e.g., satisfying performance constraints) criteria. Rhodium is an open-source Python package that supports functionality similar to the EMA Workbench and OpenMORDM, but also allows these methods to be applied to models written in C, C++, Fortran, R, and Excel, as well as the use of several multi-objective evolutionary algorithms. [18]
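As an illustration, a minimal exploratory run with the EMA Workbench might look like the sketch below. This follows the quickstart pattern in the package's documentation; the toy model, parameter names, and bounds are invented for illustration, and exact API details may vary across versions.

```python
# A minimal EMA Workbench sketch (hypothetical toy model; API per the
# package's documented quickstart, which may differ between versions).
from ema_workbench import Model, RealParameter, ScalarOutcome, perform_experiments

def toy_model(x1=0.0, x2=0.0):
    # A Python function standing in for a simulation model; outcomes are
    # returned as a dict keyed by outcome name.
    return {"y": x1 ** 2 + x2}

model = Model("toy", function=toy_model)
model.uncertainties = [RealParameter("x1", 0.0, 1.0),
                       RealParameter("x2", 0.0, 2.0)]
model.outcomes = [ScalarOutcome("y")]

# Run the model over 1,000 sampled scenarios and collect the case database.
experiments, outcomes = perform_experiments(model, scenarios=1000)
```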
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, physicist Stanislaw Ulam, was inspired by his uncle's gambling habits.
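A classic minimal example of the idea, estimating the deterministic quantity pi by random sampling:

```python
# Estimate pi: the quarter unit circle has area pi/4, so the fraction of
# random points in the unit square landing inside it approaches pi/4.
import random

n = 1_000_000
hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(n))
print(4 * hits / n)  # ~ 3.1416
```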
Real options valuation, also often termed real options analysis, applies option valuation techniques to capital budgeting decisions. A real option itself is the right, but not the obligation, to undertake certain business initiatives, such as deferring, abandoning, expanding, staging, or contracting a capital investment project. For example, real options valuation could examine the opportunity to invest in the expansion of a firm's factory and the alternative option to sell the factory.
Decision theory is a branch of applied probability theory and analytic philosophy concerned with the theory of making decisions based on assigning probabilities to various factors and assigning numerical consequences to the outcome.
Scenario planning, scenario thinking, scenario analysis, scenario prediction and the scenario method all describe a strategic planning method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by military intelligence.
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system can be divided and allocated to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
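As a minimal sketch (hypothetical linear model), a sampling-based analysis can apportion output variance to inputs via squared correlation coefficients, a common first-pass screening measure for near-linear models:

```python
# A minimal sampling-based sensitivity sketch (hypothetical linear model):
# squared correlation with the output approximates each input's share of
# the output variance.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 3, n)   # wider input uncertainty than x1

y = 2.0 * x1 + 1.0 * x2    # toy model

for name, x in (("x1", x1), ("x2", x2)):
    share = np.corrcoef(x, y)[0, 1] ** 2   # share of Var(y) explained
    print(f"{name}: {share:.2f}")
# x2 dominates (~9/13 of the variance) despite its smaller coefficient,
# because its input uncertainty is larger.
```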
Design for Six Sigma (DFSS) is a collection of best-practices for the development of new products and processes. It is sometimes deployed as an engineering design process or business process management method. DFSS originated at General Electric to build on the success they had with traditional Six Sigma; but instead of process improvement, DFSS was made to target new product development. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. It is used for product or process design in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.
A computer experiment or simulation experiment is an experiment used to study a computer simulation, also referred to as an in silico system. This area includes computational physics, computational chemistry, computational biology and other similar disciplines.
Decision analysis (DA) is the discipline comprising the philosophy, methodology, and professional practice necessary to address important decisions in a formal manner. Decision analysis includes many procedures, methods, and tools for identifying, clearly representing, and formally assessing important aspects of a decision; for prescribing a recommended course of action by applying the maximum expected-utility axiom to a well-formed representation of the decision; and for translating the formal representation of a decision and its corresponding recommendation into insight for the decision maker, and other corporate and non-corporate stakeholders.
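A minimal sketch of the maximum expected-utility rule mentioned above, with hypothetical actions, states, probabilities, and utilities:

```python
# Hypothetical payoffs and probabilities; the recommended action is the one
# maximizing probability-weighted utility.
state_probs = {"good": 0.6, "bad": 0.4}
utility = {("act_A", "good"): 10.0, ("act_A", "bad"): -5.0,
           ("act_B", "good"):  4.0, ("act_B", "bad"):  3.0}

def expected_utility(action):
    return sum(p * utility[(action, s)] for s, p in state_probs.items())

best = max(("act_A", "act_B"), key=expected_utility)
print(best, expected_utility(best))  # act_A: 0.6*10 + 0.4*(-5) = 4.0 > 3.6
```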
Monte Carlo methods are used in corporate finance and mathematical finance to value and analyze (complex) instruments, portfolios, and investments by simulating the various sources of uncertainty affecting their value, and then determining the distribution of their value over the range of resultant outcomes. This is usually done with the help of stochastic asset models. The advantage of Monte Carlo methods over other techniques increases as the dimensions of the problem increase.
In mathematical finance, a Monte Carlo option model uses Monte Carlo methods to calculate the value of an option with multiple sources of uncertainty or with complicated features. The first application to option pricing was by Phelim Boyle in 1977. In 1996, M. Broadie and P. Glasserman showed how to price Asian options by Monte Carlo. An important development was the introduction in 1996 by Carriere of Monte Carlo methods for options with early exercise features.
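The basic approach can be sketched for a European call under geometric Brownian motion (all parameter values hypothetical; the Asian-option and early-exercise methods cited above are substantially more involved):

```python
# A minimal Monte Carlo option-pricing sketch: European call under geometric
# Brownian motion, simulated under the risk-neutral measure.
import numpy as np

rng = np.random.default_rng(7)
S0, K, r, sigma, T, n = 100.0, 105.0, 0.05, 0.2, 1.0, 1_000_000

# Simulate terminal prices.
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Price = discounted expected payoff.
price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
print(f"MC price ~ {price:.2f}")   # Black-Scholes value for these inputs ~ 8.0
```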
Futures techniques used in the multi-disciplinary field of futures studies by futurists in the Americas and Australasia, and of futurology by futurologists in the EU, include a diverse range of forecasting methods, including anticipatory thinking, backcasting, simulation, and visioning. Some of the anticipatory methods include the Delphi method, causal layered analysis, environmental scanning, morphological analysis, and scenario planning.
Info-gap decision theory seeks to optimize robustness to failure under severe uncertainty, in particular applying sensitivity analysis of the stability radius type to perturbations in the value of a given estimate of the parameter of interest. It has some connections with Wald's maximin model; some authors distinguish them, others consider them instances of the same principle.
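A minimal numeric sketch of an info-gap robustness calculation, using a hypothetical capacity decision and demand estimate: the robustness is the largest horizon of uncertainty around the estimate for which the decision still meets its performance requirement.

```python
# An info-gap style sketch (hypothetical model): how large a fractional
# deviation h from the demand estimate can the capacity decision tolerate
# before its performance requirement is violated?
import numpy as np

demand_est, capacity, required_margin = 90.0, 100.0, 0.0

def worst_shortfall(h):
    # Worst case over the uncertainty set |demand - demand_est| <= h * demand_est.
    worst_demand = demand_est * (1 + h)
    return capacity - worst_demand

hs = np.linspace(0, 1, 1001)
robustness = hs[[worst_shortfall(h) >= required_margin for h in hs]].max()
print(robustness)  # ~ 0.111: tolerates ~11% demand deviation before failing
```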
In finance, model risk is the risk of loss resulting from using insufficiently accurate models to make decisions, originally and frequently in the context of valuing financial securities.
The fuzzy pay-off method for real option valuation is a method for valuing real options, developed by Mikael Collan, Robert Fullér, and József Mezei; and published in 2009. It is based on the use of fuzzy logic and fuzzy numbers for the creation of the possible pay-off distribution of a project. The structure of the method is similar to the probability theory based Datar–Mathews method for real option valuation, but the method is not based on probability theory and uses fuzzy numbers and possibility theory in framing the real option valuation problem.
The scenario approach or scenario optimization approach is a technique for obtaining solutions to robust optimization and chance-constrained optimization problems based on a sample of the constraints. It also relates to inductive reasoning in modeling and decision-making. The technique has existed for decades as a heuristic approach and has more recently been given a systematic theoretical foundation.
OptiY is design-environment software that provides modern optimization strategies and state-of-the-art probabilistic algorithms for uncertainty, reliability, robustness, and sensitivity analysis, as well as data mining and meta-modeling.
Analytica is visual software developed by Lumina Decision Systems for creating, analyzing, and communicating quantitative decision models. It combines hierarchical influence diagrams for visual creation and viewing of models, intelligent arrays for working with multidimensional data, Monte Carlo simulation for analyzing risk and uncertainty, and optimization, including linear and nonlinear programming. Its design is based on ideas from the field of decision analysis. As a computer language, it combines a declarative (non-procedural) structure for referential transparency, array abstraction, and automatic dependency maintenance for efficient sequencing of computation.
Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions.
In marketing, Bayesian inference allows for decision making and market research evaluation under uncertainty and with limited data. The communication between marketer and market can be seen as a form of Bayesian persuasion.
Strategic planning and uncertainty are intertwined: companies and organizations must develop and compete in a world dominated by complexity, ambiguity, and uncertainty, in which unpredictable and unstoppable circumstances can directly affect expected outcomes. In this setting, formal planning systems have been criticized by a number of academics, who argue that conventional methods based on classic analytical tools fail to shape a strategy that can adjust to a changing market and enhance the competitiveness of each business unit, which is the basic principle of a competitive business strategy. Strategic planning systems are supposed to produce the best approaches for realizing long-term objectives. However, since strategy deals with the future, an organization's strategic context will always be uncertain, so the first choice an organization must make is when to act: now, or once the uncertainty has been resolved.
Robust decision making describes a variety of approaches that differ from traditional optimum expected utility analysis in that they characterize uncertainty with multiple representations of the future rather than a single set of probability distributions, and use robustness, rather than optimality, as a decision criterion. (1011–1012)
Robust decision making is more analytical than intuitive. It adopts a systematic approach, within the resources available, to reducing uncertainty and making safe and effective decisions. (1023)