Value of information

Value of information (VOI or VoI) is the amount a decision maker would be willing to pay for information prior to making a decision.

Similar terms

VoI is sometimes divided into the value of perfect information, also called the value of clairvoyance (VoC), and the value of imperfect information. These are closely related to the widely known expected value of perfect information (EVPI) and expected value of sample information (EVSI). Note that VoI is not necessarily equal to "value of the decision situation with perfect information" minus "value of the current decision situation", as is commonly assumed.

Definitions

Simple

A simple example best illustrates the concept. Consider a decision situation with one decision, for example choosing a 'Vacation Activity', and one uncertainty, for example the 'Weather Condition', which will only be known after the 'Vacation Activity' has been decided and begun. A small numeric sketch follows the list below.

  • The Value of perfect information on Weather Condition captures the value of being able to know Weather Condition even before making the Vacation Activity decision. It is quantified as the highest price the decision-maker is willing to pay for being able to know Weather Condition before making the Vacation Activity decision.
  • The Value of imperfect information on Weather Condition, however, captures the value of being able to know the outcome of another related uncertainty, e.g., the Weather Forecast, instead of Weather Condition itself before making the Vacation Activity decision. It is quantified as the highest price the decision-maker is willing to pay for being able to know Weather Forecast before making the Vacation Activity decision. Note that it is essentially the value of perfect information on Weather Forecast.
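
As a minimal sketch of the first bullet, assuming hypothetical payoffs and a risk-neutral decision-maker (so that the highest acceptable price equals the expected gain), the value of perfect information on Weather Condition could be computed as follows:

# Minimal sketch with hypothetical payoffs; "beach" and "museum" are illustrative activities.
P_SUNNY = 0.6                           # assumed probability of sunny weather
PAYOFF = {                              # payoff of each (activity, weather) pair
    ("beach", "sunny"): 100, ("beach", "rainy"): 10,
    ("museum", "sunny"): 40, ("museum", "rainy"): 70,
}
ACTIVITIES = ("beach", "museum")

# Without information: commit to one activity before the weather is known.
value_without = max(
    P_SUNNY * PAYOFF[a, "sunny"] + (1 - P_SUNNY) * PAYOFF[a, "rainy"]
    for a in ACTIVITIES)

# With perfect information: learn the weather first, then pick the best activity.
value_with = (P_SUNNY * max(PAYOFF[a, "sunny"] for a in ACTIVITIES)
              + (1 - P_SUNNY) * max(PAYOFF[a, "rainy"] for a in ACTIVITIES))

print(value_with - value_without)       # value of perfect information on Weather Condition

With these numbers the best advance commitment is the beach (expected payoff 64), whereas deciding after seeing the weather is worth 88 in expectation, so the decision-maker would pay up to 24 for clairvoyance on Weather Condition.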

Formal

The above definition shows that the value of imperfect information about any uncertainty can always be framed as the value of perfect information, i.e., VoC, about another uncertainty; hence only the term VoC is used from here on.

Standard

Consider a general decision situation [1] having n decisions (d1, d2, d3, ..., dn) and m uncertainties (u1, u2, u3, ..., um). The rationality assumption in standard individual decision-making states that decisions already made and information already known are never forgotten, i.e., the decision-maker has perfect recall. This assumption translates into the existence of a linear ordering of these decisions and uncertainties such that:

  • di is made prior to making dj if and only if di comes before dj in the ordering
  • di is made prior to knowing uj if and only if di comes before uj in the ordering
  • di is made after knowing uj if and only if di comes after uj in the ordering

Now consider cases where the decision-maker is able to learn the outcomes of some additional uncertainties earlier in the decision situation, i.e., some ui are moved to an earlier position in the ordering. In such cases, VoC is quantified as the highest price the decision-maker is willing to pay for all of those moves.
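
For example, in a situation with the ordering d1, u1, d2, u2 (so that u2 is only known after both decisions have been made), the VoC of u2 is the highest price the decision-maker would pay to face the ordering d1, u1, u2, d2 instead, i.e., to know u2 before making d2.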

Generalized

The standard definition is further generalized in the team decision analysis framework, where there is typically incomplete sharing of information among members of the same decision situation. In such cases, what has been decided or learned by one team member may not be known when later decisions are made by other team members, i.e., a linear ordering of decisions and uncertainties satisfying the perfect recall assumption might not exist. VoC then captures the value of being able to know not only additional uncertainties but also decisions already made by other team members before making some other decisions in the team decision situation. [2]

Characteristics

There are four characteristics of VoI that always hold for any decision situation:

  • The value of information can never be less than zero since the decision-maker can always ignore the additional information and make a decision as if such information is not available.
  • No other information-gathering or information-sharing activity can be more valuable than that quantified by the value of clairvoyance.
  • Observing multiple new pieces of evidence yields the same gain in maximum expected utility regardless of the order of observation.
  • The VoI of observing two new evidence variables is not additive. Instead, it is equivalent to observing one, incorporating it into the current evidence, and then observing the other. (Both of the last two properties are illustrated in the sketch following this list.)
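
A minimal sketch of the last two properties, assuming a risk-neutral decision-maker, a binary uncertain state and two hypothetical, equally noisy, conditionally independent observations e1 and e2:

from itertools import product

P_S = {1: 0.4, 0: 0.6}            # assumed prior over the uncertain state S
ACC = {"e1": 0.8, "e2": 0.8}      # assumed P(observation equals S) for each sensor
PAYOFF = {("act", 1): 100, ("act", 0): -60, ("pass", 1): 0, ("pass", 0): 0}

def best_expected_payoff(post):
    # Maximum expected payoff over the two actions, given a posterior over S.
    return max(sum(post[s] * PAYOFF[a, s] for s in (0, 1)) for a in ("act", "pass"))

def value_with(names):
    # Expected value of deciding after observing the sensors listed in `names`.
    total = 0.0
    for values in product((0, 1), repeat=len(names)):
        # Joint probability of each state together with this observation pattern.
        joint = {s: P_S[s] for s in (0, 1)}
        for name, v in zip(names, values):
            for s in (0, 1):
                joint[s] *= ACC[name] if v == s else 1 - ACC[name]
        p_obs = sum(joint.values())
        post = {s: joint[s] / p_obs for s in (0, 1)}
        total += p_obs * best_expected_payoff(post)
    return total

v0 = value_with([])                               # decide on prior information only
voi_1 = value_with(["e1"]) - v0                   # VoI of observing e1 alone
voi_2 = value_with(["e2"]) - v0                   # VoI of observing e2 alone
voi_both = value_with(["e1", "e2"]) - v0          # VoI of observing both
voi_both_rev = value_with(["e2", "e1"]) - v0      # both, in the reverse order

print(abs(voi_both - voi_both_rev) < 1e-12)       # order of observation does not matter
print(voi_1 + voi_2, voi_both)                    # generally not equal: VoI is not additive

With these numbers each observation alone is worth 20.8, while observing both is worth only about 21.4, because the second observation adds little once the first has been incorporated into the evidence.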

Computation

VoC is derived strictly from its definition as the monetary amount that is just large enough to offset the additional benefit of getting more information. In other words, VoC is calculated iteratively until

"value of decision situation with perfect information while paying VoC" = "value of current decision situation".

A special case is when the decision-maker is risk neutral where VoC can be simply computed as

VoC = "value of decision situation with perfect information" - "value of current decision situation".

This special case is how the expected value of perfect information and the expected value of sample information are calculated, with risk neutrality implicitly assumed. For a risk-averse or risk-seeking decision-maker, this simple calculation does not necessarily yield the correct result, and iterative calculation is the only way to ensure correctness.
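
A minimal sketch of the iterative calculation, assuming a hypothetical two-action, two-outcome situation and a risk-averse decision-maker with logarithmic utility over total wealth; the price found by bisection generally differs from the simple subtraction above:

import math

WEALTH = 100.0                     # assumed initial wealth
P_GOOD = 0.5                       # assumed probability of the good outcome
PAYOFF = {"risky": {"good": 120.0, "bad": -80.0},
          "safe":  {"good": 10.0,  "bad": 10.0}}

def utility(x):
    return math.log(WEALTH + x)    # risk-averse (logarithmic) utility

def certainty_equivalent(eu):
    return math.exp(eu) - WEALTH

def value_without_info():
    # Commit to one action before the uncertainty resolves.
    eu = max(P_GOOD * utility(PAYOFF[a]["good"]) +
             (1 - P_GOOD) * utility(PAYOFF[a]["bad"]) for a in PAYOFF)
    return certainty_equivalent(eu)

def value_with_info(price):
    # Pay `price`, learn the outcome, then pick the best action for that outcome
    # (utility is increasing, so the best action maximizes the monetary payoff).
    eu = sum(p * utility(max(PAYOFF[a][o] for a in PAYOFF) - price)
             for o, p in (("good", P_GOOD), ("bad", 1 - P_GOOD)))
    return certainty_equivalent(eu)

# Iterate (bisection) until the informed situation, net of the payment,
# is worth exactly as much as the current situation.
lo, hi = 0.0, 100.0
for _ in range(100):
    mid = (lo + hi) / 2
    if value_with_info(mid) > value_without_info():
        lo = mid
    else:
        hi = mid
voc_iterative = (lo + hi) / 2

voc_simple = value_with_info(0.0) - value_without_info()
print(voc_iterative, voc_simple)   # the two differ once utility is non-linear

With these numbers the iterative calculation gives a VoC of about 42, whereas the simple subtraction gives about 46.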

Decision trees and influence diagrams are the most common tools for representing and solving decision situations, as well as the associated VoC calculations. The influence diagram, in particular, is structured to accommodate team decision situations in which incomplete sharing of information among team members can be represented and solved very efficiently. While decision trees are not designed to accommodate team decision situations, they can do so when augmented with the information sets widely used in game trees.

Examples

VoC is often illustrated with the example of paying for a consultant in a business transaction, whose information may be either perfect (expected value of perfect information) or imperfect (expected value of imperfect information). [3]

In a typical consulting situation, the consultant would be paid up to a cost c for their information, based on the expected cost E without the consultant and the revised expected cost F with the consultant's information. In the perfect-information scenario, E is the probability of a good outcome, g, times its cost k, plus the probability of a bad outcome, (1-g), times its cost k'>k:

E = gk + (1-g)k',

which is revised to give the expected cost F with perfect information, including the consulting cost c. The perfect-information case assumes that the bad outcome does not occur, because the perfect-information consultant steers the decision-maker away from it:

F = g(k+c)

We then solve for values of c for which F<E to determine when to pay the consultant.
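
Substituting the expressions above, F < E becomes g(k+c) < gk + (1-g)k', which simplifies to c < (1-g)k'/g; under this model, hiring the consultant is worthwhile whenever the fee is below (1-g)k'/g.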

In the case of a recursive decision tree, we often have an additional cost m that results from correcting the error, and the process restarts such that the expected cost will appear on both the left and right sides of our equations. [4] This is typical of hiring-rehiring decisions or value chain decisions for which assembly line components must be replaced if erroneously ordered or installed:

E = gk + (1-g)(k'+m+E)

F = g(k+c)

If the consultant is imperfect, erring with frequency f, then the expected cost F is solved with the probability of error included:

F = g(k+c)(1-f) + g(k+c+F)f + (1-g)(1-f)(k+c+F) + (1-g)f(k'+c+m+F)
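
A minimal sketch, with hypothetical cost figures, that solves the recursive equations above in closed form (by collecting the E and F terms on one side) and scans consulting fees c to find when hiring pays off (F < E); `k2` stands in for k':

def expected_cost_without(g, k, k2, m):
    # E = g*k + (1-g)*(k2 + m + E)  =>  E = (g*k + (1-g)*(k2 + m)) / g
    return (g * k + (1 - g) * (k2 + m)) / g

def expected_cost_with(g, k, k2, m, c, f):
    # F = g*(k+c)*(1-f) + g*f*(k+c+F) + (1-g)*(1-f)*(k+c+F) + (1-g)*f*(k2+c+m+F)
    # Collecting the F terms leaves F*g*(1-f) on the left-hand side, so:
    numerator = (g * (k + c)
                 + (1 - g) * (1 - f) * (k + c)
                 + (1 - g) * f * (k2 + c + m))
    return numerator / (g * (1 - f))

g, k, k2, m, f = 0.9, 100.0, 500.0, 50.0, 0.1      # hypothetical inputs
E = expected_cost_without(g, k, k2, m)
for c in range(0, 101, 10):
    F = expected_cost_with(g, k, k2, m, float(c), f)
    print(c, round(E, 2), round(F, 2), "hire" if F < E else "do not hire")

With these numbers hiring is worthwhile for small fees but not for large ones, and the break-even fee lies between the two regions printed by the scan.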

VoI is also used in inspection and maintenance planning for engineered structures. In the context of integrity management, it has been applied to analyze to what extent the value of the information collected during the service life of a structure, for example through inspections, is affected not only by random measurement errors but also by biases (systematic errors), taking the dependency between successive data collections into account. [5]

Bibliography

  1. Howard, Ronald (1966). "Information Value Theory". IEEE Transactions on Systems Science and Cybernetics. 2 (1): 22–26. doi:10.1109/tssc.1966.300074. ISSN 0536-1567.
  2. Kuhn, H. W. (1953). "11. Extensive Games and the Problem of Information". In Kuhn, Harold William; Tucker, Albert William (eds.). Contributions to the Theory of Games (AM-28), Volume II. Princeton University Press. pp. 193–216. doi:10.1515/9781400881970-012. ISBN 9781400881970.
  3. Parlikad, Ajith Kumar (2013-10-02). Total Information Risk Management: Maximizing the Value of Data and Information Assets (1st ed.). New York, NY: Morgan Kaufmann. ISBN 9780824788896.
  4. Laxminarayan, Ramanan; Macauley, Molly K., eds. (2014-09-21). The Value of Information: Methodological Frontiers and New Applications in Environment and Health (2012 ed.). Springer. ISBN 9789400798083.
  5. Ali, Kashif; Qin, Jianjun; Faber, Michael Havbro (2020-11-12). "On information modeling in structural integrity management". Structural Health Monitoring. doi:10.1177/1475921720968292. ISSN 1475-9217.
