In decision theory and quantitative policy analysis, the expected value of including uncertainty (EVIU) is the expected difference in the value of a decision based on a probabilistic analysis versus a decision based on an analysis that ignores uncertainty. [1] [2] [3]
Decisions must be made every day in the ubiquitous presence of uncertainty. For most day-to-day decisions, various heuristics serve to act reasonably under that uncertainty, often with little thought about its presence. For larger, high-stakes decisions or decisions in highly public situations, however, decision makers may benefit from a more systematic treatment of the decision problem, such as quantitative analysis or decision analysis.
When building a quantitative decision model, a model builder identifies the relevant factors and encodes them as input variables. From these inputs, other quantities, called result variables, can be computed; these provide information for the decision maker. In the example detailed below, the decision maker must decide how soon before a flight's scheduled departure to leave for the airport (the decision). One input variable is how long it takes to drive to the airport parking garage. From this and other inputs, the model can compute how likely the decision maker is to miss the flight and what the net cost (in minutes) will be for each candidate decision.
A very common practice when reaching a decision is to ignore uncertainty: the analysis simply uses a best guess (a single value) for each input variable, and decisions are made on the computed point estimates. In many cases, however, ignoring uncertainty leads to very poor decisions, with the estimates of the result variables often misleading the decision maker. [4]
An alternative to ignoring uncertainty in quantitative decision models is to explicitly encode uncertainty as part of the model. With this approach, a probability distribution is provided for each input variable, rather than a single best guess. The variance in that distribution reflects the degree of subjective uncertainty (or lack of knowledge) in the input quantity. The software tools then use methods such as Monte Carlo analysis to propagate the uncertainty to result variables, so that a decision maker obtains an explicit picture of the impact that uncertainty has on his decisions, and in many cases can make a much better decision as a result.
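As an illustrative sketch of this approach (the inputs, names, and numbers here are hypothetical, and Python/NumPy stands in for a dedicated decision-analysis tool), Monte Carlo propagation of input uncertainty to a result variable looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical uncertain inputs, each encoded as a full probability
# distribution rather than a single best guess (numbers are illustrative).
unit_cost = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=n)   # cost per unit
demand = rng.normal(loc=1000.0, scale=150.0, size=n)             # units demanded

# Propagate uncertainty to the result variable by evaluating the model
# on every sample -- the essence of Monte Carlo propagation.
total_cost = unit_cost * demand

# The decision maker sees a distribution, not a point estimate.
print(np.percentile(total_cost, [5, 50, 95]))
```

The percentiles give the decision maker an explicit picture of the spread in the result, rather than a single number.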
When comparing the two approaches—ignoring uncertainty versus modeling uncertainty explicitly—the natural question to ask is how much difference it really makes to the quality of the decisions reached. In the 1960s, Ronald A. Howard proposed [5] one such measure, the expected value of perfect information (EVPI), a measure of how much it would be worth to learn the "true" values for all uncertain input variables. While providing a highly useful measure of sensitivity to uncertainty, the EVPI does not directly capture the actual improvement in decisions obtained from explicitly representing and reasoning about uncertainty. For this, Max Henrion, in his Ph.D. thesis, introduced the expected value of including uncertainty (EVIU), the topic of this article.
Let

  x = an uncertain quantity, with probability density f(x), whose true value is unknown to the decision maker,
  d = a decision, to be chosen from a set D of possible decisions, and
  U(d, x) = the utility to the decision maker of choosing decision d when the uncertain quantity takes the value x.

When not including uncertainty, the optimal decision is found using only E[x], the expected value of the uncertain quantity. Hence, the decision ignoring uncertainty is given by:

  d_iu = argmax_{d in D} U(d, E[x])

The optimal decision taking uncertainty into account is the standard Bayes decision that maximizes expected utility:

  d* = argmax_{d in D} E_x[U(d, x)]

The EVIU is the difference in expected utility between these two decisions:

  EVIU = E_x[U(d*, x)] − E_x[U(d_iu, x)]
The uncertain quantity x and decision variable d may each be composed of many scalar variables, in which case the spaces X and D are each vector spaces.
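These quantities can be estimated by simulation. The sketch below is illustrative rather than drawn from the sources: it assumes a normally distributed uncertain quantity and a hypothetical asymmetric loss function (so decisions minimize expected loss rather than maximize utility), and estimates both decisions and the EVIU by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(100.0, 20.0, 100_000)   # samples of the uncertain quantity x

# Hypothetical asymmetric loss: undershooting d is four times as costly
# as overshooting it (all numbers here are illustrative).
def loss(d, x):
    return np.maximum(d - x, 0.0) + 4.0 * np.maximum(x - d, 0.0)

candidates = np.linspace(50.0, 200.0, 601)   # decision grid

# Decision ignoring uncertainty: optimize against E[x] alone.
d_iu = min(candidates, key=lambda d: float(loss(d, x.mean())))

# Bayes decision: optimize the expected loss over the full distribution.
d_star = min(candidates, key=lambda d: float(loss(d, x).mean()))

# EVIU: the expected loss the better decision saves.
eviu = loss(d_iu, x).mean() - loss(d_star, x).mean()
```

Because the loss is asymmetric, the Bayes decision hedges upward of the mean, and the EVIU comes out strictly positive; with a symmetric quadratic loss the two decisions would coincide.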
The diagram at right is an influence diagram for deciding how early the decision maker should leave home in order to catch a flight at the airport. The single decision, in the green rectangle, is the number of minutes that one will decide to leave prior to the plane's departure time. Four uncertain variables appear on the diagram in cyan ovals: The time required to drive from home to the airport's parking garage (in minutes), time to get from the parking garage to the gate (in minutes), the time before departure that one must be at the gate, and the loss (in minutes) incurred if the flight is missed. Each of these nodes contains a probability distribution, viz:
Time_to_drive_to_airport := LogNormal(median: 60, gsdev: 1.3)
Time_from_parking_to_gate := LogNormal(median: 10, gsdev: 1.3)
Gate_time_before_departure := Triangular(min: 20, mode: 30, max: 40)
Loss_if_miss_the_plane := LogNormal(median: 400, stddev: 100)
These four quantities are taken to be statistically independent. The probability distribution of the first, Time_to_drive_to_airport, with median 60 and geometric standard deviation 1.3, is depicted in this graph:
The model calculates the cost (the red hexagonal variable) as the number of minutes (or minute equivalents) consumed to successfully board the plane. If one arrives too late, one misses the plane and incurs the large loss (negative utility) of having to wait for the next flight. If one arrives too early, one incurs the cost of a needlessly long wait for the flight.
Models that utilize EVIU may use a utility function, or equivalently a loss function, in which case the utility function is just the negative of the loss function. In either case the EVIU is non-negative; the main difference is that with a loss function the decision is made by minimizing loss rather than by maximizing utility. The example here uses a loss function, Cost.
The definitions of the computed variables are thus:

Time_from_home_to_gate := Time_to_drive_to_airport + Time_from_parking_to_gate + Gate_time_before_departure
Value_per_minute_at_home := 1
Cost := Value_per_minute_at_home * Time_I_leave_home + (If Time_I_leave_home < Time_from_home_to_gate Then Loss_if_miss_the_plane Else 0)
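The model lends itself to a Monte Carlo sketch. The re-implementation below, in Python/NumPy, is approximate rather than a reproduction of the original tool's model: the decision grid and the conversion of the loss distribution's arithmetic standard deviation into a lognormal shape parameter are choices made for this sketch, so its numbers will only roughly match the figures quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Uncertain inputs, following the distributions given above.
drive = rng.lognormal(np.log(60.0), np.log(1.3), n)     # home -> parking garage
parking = rng.lognormal(np.log(10.0), np.log(1.3), n)   # parking garage -> gate
gate = rng.triangular(20.0, 30.0, 40.0, n)              # minutes needed at gate

# Loss if the plane is missed: lognormal with median 400 and arithmetic
# standard deviation 100. Converting that standard deviation to a lognormal
# shape parameter is a choice made here: solve u*(u - 1) = (sd/median)^2
# with u = exp(sigma^2).
med, sd = 400.0, 100.0
u = (1.0 + np.sqrt(1.0 + 4.0 * (sd / med) ** 2)) / 2.0
loss = rng.lognormal(np.log(med), np.sqrt(np.log(u)), n)

time_needed = drive + parking + gate   # minutes before departure one must leave

def expected_cost(d):
    """Expected cost (minutes) of leaving d minutes before departure."""
    return d + float(np.mean(np.where(d < time_needed, loss, 0.0)))

candidates = np.arange(60, 240)

# Decision ignoring uncertainty: treat every input as equal to its mean.
d_iu = min(candidates,
           key=lambda d: d + (loss.mean() if d < time_needed.mean() else 0.0))

# Bayes decision: minimize the full expected cost.
d_star = min(candidates, key=expected_cost)

eviu = expected_cost(d_iu) - expected_cost(d_star)
print(f"leave (ignoring uncertainty): {d_iu} min, "
      f"leave (Bayes): {d_star} min, EVIU ~ {eviu:.0f} min")
```

As in the original example, the decision ignoring uncertainty cuts it close to the total of the input means, while the Bayes decision leaves substantially earlier to hedge against the chance of missing the plane.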
The following graph compares the expected value when uncertainty is taken into account (the smooth blue curve) with the value computed when uncertainty is ignored, each graphed as a function of the decision variable.
When uncertainty is ignored, one acts as though the flight will be made with certainty as long as one leaves at least 100 minutes before the flight, and will be missed with certainty if one leaves any later than that. Because one acts as if everything were certain, the apparent optimal action is to leave exactly 100 minutes (or, to be safe, 100 minutes and 1 second) before the flight.
When uncertainty is taken into account, the expected value smooths out (the blue curve), and the optimal action is to leave 140 minutes before the flight. The expected value curve shows that the expected cost of leaving 100 minutes before the flight (the decision reached by ignoring uncertainty) is 313.7 minutes, while the expected cost of leaving 140 minutes before the flight is 151 minutes. The difference between these two is the EVIU:

EVIU = 313.7 − 151 = 162.7 minutes
In other words, if uncertainty is explicitly taken into account when the decision is made, an average savings of 162.7 minutes will be realized.
In the context of centralized linear-quadratic control, with additive uncertainty in the equation of evolution but no uncertainty about coefficient values in that equation, the optimal solution for the control variables taking into account the uncertainty is the same as the solution ignoring uncertainty. This property, which gives a zero expected value of including uncertainty, is called certainty equivalence.
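Certainty equivalence can be checked numerically in a stripped-down setting. The sketch below (illustrative, with an arbitrary normal noise distribution, and a one-shot quadratic loss standing in for the full linear-quadratic control problem) shows the two decisions coinciding and the EVIU coming out as zero:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10.0, 3.0, 100_000)         # additive noise, known distribution

candidates = np.linspace(0.0, 20.0, 2001)  # decision grid (illustrative)

def expected_loss(d):
    """Expected quadratic loss, as in the linear-quadratic setting."""
    return float(np.mean((d - x) ** 2))

# Bayes decision: minimize expected quadratic loss over the grid.
d_star = min(candidates, key=expected_loss)

# Decision ignoring uncertainty: minimize (d - E[x])^2, i.e. pick d = E[x].
d_iu = min(candidates, key=lambda d: float((d - x.mean()) ** 2))

# Certainty equivalence: the two decisions coincide (up to grid resolution),
# so the EVIU is numerically zero.
eviu = expected_loss(d_iu) - expected_loss(d_star)
```

With quadratic loss, the expected loss differs from the certainty-based loss only by a constant (the variance of x), so both minimizations select the same decision.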
Both the EVIU and the EVPI compare the expected value of the Bayes decision with the value of another decision made without uncertainty. For the EVIU, this other decision is made while ignoring the uncertainty, even though it is present; for the EVPI, it is made after the uncertainty has been removed by obtaining perfect information about x.
The EVPI is the expected cost of being uncertain about x, while the EVIU is the additional expected cost of assuming that one is certain.
The EVIU, like the EVPI, gives expected value in terms of the units of the utility function.
A mathematical model is an abstract description of a concrete system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in applied mathematics and in the natural sciences and engineering disciplines, as well as in non-physical systems such as the social sciences. It can also be taught as a subject in its own right.
In economics, utility is a measure of the satisfaction that a certain person has from a certain state of the world. Over time, the term has been used in at least two different meanings.
In economics and finance, risk aversion is the tendency of people to prefer outcomes with low uncertainty to those outcomes with high uncertainty, even if the average outcome of the latter is equal to or higher in monetary value than the more certain outcome.
In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite, in which case it is to be maximized.
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system can be divided and allocated to different sources of uncertainty in its inputs. This involves estimating sensitivity indices that quantify the influence of an input or group of inputs on the output. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
The expected utility hypothesis is a foundational assumption in mathematical economics concerning decision making under uncertainty. It postulates that rational agents maximize utility, meaning the subjective desirability of their actions. Rational choice theory, a cornerstone of microeconomics, builds on this postulate to model aggregate social behaviour.
In decision analysis, the clarity test is a test of how well a model element is defined. Although nothing can be completely defined, the clarity test allows the decision participants to determine whether such elements as variables, events, outcomes, and alternatives are sufficiently well defined to make the decision at hand. In general, a model element is well defined if a knowledgeable individual can answer questions about the model element without asking further clarifying questions.
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a quantity measured on an interval or ratio scale.
The newsvendor model is a mathematical model in operations management and applied economics used to determine optimal inventory levels. It is (typically) characterized by fixed prices and uncertain demand for a perishable product. If the inventory level is q, each unit of demand above q is lost in potential sales. This model is also known as the newsvendor problem or newsboy problem, by analogy with the situation faced by a newspaper vendor who must decide how many copies of the day's paper to stock in the face of uncertain demand, knowing that unsold copies will be worthless at the end of the day.
Info-gap decision theory seeks to optimize robustness to failure under severe uncertainty, in particular applying sensitivity analysis of the stability radius type to perturbations in the value of a given estimate of the parameter of interest. It has some connections with Wald's maximin model; some authors distinguish them, others consider them instances of the same principle.
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
In decision theory, the expected value of sample information (EVSI) is the expected increase in utility that a decision-maker could obtain from gaining access to a sample of additional observations before making a decision. The additional information obtained from the sample may allow them to make a more informed, and thus better, decision, thus resulting in an increase in expected utility. EVSI attempts to estimate what this improvement would be before seeing actual sample data; hence, EVSI is a form of what is known as preposterior analysis. The use of EVSI in decision theory was popularized by Robert Schlaifer and Howard Raiffa in the 1960s.
An optimal decision is a decision that leads to at least as good a known or expected outcome as all other available decision options. It is an important concept in decision theory. In order to compare the different decision outcomes, one commonly assigns a utility value to each of them.
Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, somehow defined, despite the presence of this noise. The context may be either discrete time or continuous time.
In decision theory, economics, and finance, a two-moment decision model is a model that describes or prescribes the process of making decisions in a context in which the decision-maker is faced with random variables whose realizations cannot be known in advance, and in which choices are made based on knowledge of two moments of those random variables. The two moments are almost always the mean—that is, the expected value, which is the first moment about zero—and the variance, which is the second moment about the mean.
In macroeconomics, multiplier uncertainty is lack of perfect knowledge of the multiplier effect of a particular policy action, such as a monetary or fiscal policy change, upon the intended target of the policy. For example, a fiscal policy maker may have a prediction as to the value of the fiscal multiplier—the ratio of the effect of a government spending change on GDP to the size of the government spending change—but is not likely to know the exact value of this ratio. Similar uncertainty may surround the magnitude of effect of a change in the monetary base or its growth rate upon some target variable, which could be the money supply, the exchange rate, the inflation rate, or GDP.
A probability box is a characterization of uncertain numbers consisting of both aleatoric and epistemic uncertainties that is often used in risk analysis or quantitative uncertainty modeling where numerical calculations must be performed. Probability bounds analysis is used to make arithmetic and logical calculations with p-boxes.
Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions.