In decision theory, the expected value of perfect information (EVPI) is the price that one would be willing to pay in order to gain access to perfect information.[1] A common discipline that uses the EVPI concept is health economics. In that context, when deciding whether to adopt a new treatment technology, there is always some degree of uncertainty surrounding the decision, because there is always a chance that the decision will turn out to be wrong. Expected value of perfect information analysis tries to measure the expected cost of that uncertainty, which “can be interpreted as the expected value of perfect information (EVPI), since perfect information can eliminate the possibility of making the wrong decision”, at least from a theoretical perspective.[2]
The problem is modeled with a payoff matrix $R_{ij}$ in which the row index $i$ describes a choice that must be made by the player, while the column index $j$ describes a random variable that the player does not yet have knowledge of, that has probability $p_j$ of being in state $j$. If the player is to choose $i$ without knowing the value of $j$, the best choice is the one that maximizes the expected monetary value:

$$\mathrm{EMV} = \max_i \sum_j p_j R_{ij}$$
where

$$\sum_j p_j R_{ij}$$

is the expected payoff for action $i$, i.e. the expectation value, and

$$\max_i$$

is choosing the maximum of these expectations for all available actions. On the other hand, with perfect knowledge of $j$, the player may choose a value of $i$ that optimizes the expectation for that specific $j$. Therefore, the expected value given perfect information is

$$\mathrm{EV|PI} = \sum_j p_j \left( \max_i R_{ij} \right)$$
where $p_j$ is the probability that the system is in state $j$, and $R_{ij}$ is the pay-off if one follows action $i$ while the system is in state $j$. Here, $\max_i R_{ij}$ indicates the best choice of action $i$ for each state $j$.
The expected value of perfect information is the difference between these two quantities,

$$\mathrm{EVPI} = \mathrm{EV|PI} - \mathrm{EMV}$$
This difference describes, in expectation, how much larger a value the player can hope to obtain by knowing j and picking the best i for that j, as compared to picking a value of i before j is known. Since EV|PI is necessarily greater than or equal to EMV, EVPI is always non-negative.
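The two quantities above translate directly into code. The following is a minimal Python sketch, assuming NumPy; the function name evpi and the row/column convention (rows are actions $i$, columns are states $j$) are our choices for illustration, not part of any standard library.

```python
# Minimal sketch: EMV, EV|PI and EVPI for a payoff matrix R (rows = actions i,
# columns = states j) and a probability vector p over the states.
import numpy as np

def evpi(R: np.ndarray, p: np.ndarray) -> float:
    """Expected value of perfect information for payoffs R[i, j] and probabilities p[j]."""
    emv = np.max(R @ p)            # max_i sum_j p_j R_ij: best action chosen before j is known
    ev_pi = p @ np.max(R, axis=0)  # sum_j p_j max_i R_ij: best action chosen for each known j
    return float(ev_pi - emv)      # always >= 0, since knowing j can only help
```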
EVPI provides a criterion by which to judge ordinary imperfectly informed forecasters. EVPI can be used to reject costly proposals: if one is offered knowledge for a price larger than EVPI, it would be better to refuse the offer. However, it is less helpful when deciding whether to accept a forecasting offer, because one needs to know the quality of the information one is acquiring.
Setup:
Suppose you were going to make an investment in only one of three investment vehicles: stock, mutual fund, or certificate of deposit (CD). Further suppose that the market has a 50% chance of increasing, a 30% chance of staying even, and a 20% chance of decreasing. If the market increases, the stock investment will earn $1500 and the mutual fund will earn $900. If the market stays even, the stock investment will earn $300 and the mutual fund will earn $600. If the market decreases, the stock investment will lose $800 and the mutual fund will lose $200. The certificate of deposit will earn $500 regardless of the market's movement.
Question:
What is the expected value of perfect information?
Solution:
Here the payoff matrix, with rows $i$ = (stock, mutual fund, CD) and columns $j$ = (increase, even, decrease), is:

$$R = \begin{pmatrix} 1500 & 300 & -800 \\ 900 & 600 & -200 \\ 500 & 500 & 500 \end{pmatrix}$$
The probability vector is:

$$p = (0.5,\; 0.3,\; 0.2)$$
Expectation for each vehicle ($\sum_j p_j R_{ij}$):

$$E[\text{stock}] = 0.5(1500) + 0.3(300) + 0.2(-800) = 680$$

$$E[\text{mutual fund}] = 0.5(900) + 0.3(600) + 0.2(-200) = 590$$

$$E[\text{CD}] = 0.5(500) + 0.3(500) + 0.2(500) = 500$$
The largest of these expectations, $680, comes from the stock vehicle. Not knowing which direction the market will go (only knowing the probabilities of the directions), we expect to make the most money with the stock vehicle.
Thus,

$$\mathrm{EMV} = \max_i \sum_j p_j R_{ij} = 680.$$
On the other hand, suppose we did know ahead of time which way the market would turn. Given knowledge of the market's direction, we would (potentially) choose a different investment vehicle.
Expectation for maximizing profit given the state of the market:

$$\mathrm{EV|PI} = \sum_j p_j \max_i R_{ij} = 0.5(1500) + 0.3(600) + 0.2(500) = 1030$$
That is, given each market direction, we choose the investment vehicle that maximizes the profit.
Hence,

$$\mathrm{EVPI} = \mathrm{EV|PI} - \mathrm{EMV} = 1030 - 680 = 350.$$
Conclusion:
Knowing the direction the market will go (i.e. having perfect information) is worth $350.
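The numbers above can be checked against the general sketch given earlier. The following example simply plugs this payoff matrix and probability vector into NumPy (the computation mirrors the illustrative evpi function defined above, not a library routine):

```python
import numpy as np

# Rows: stock, mutual fund, CD; columns: market up, even, down.
R = np.array([[1500,  300, -800],
              [ 900,  600, -200],
              [ 500,  500,  500]])
p = np.array([0.5, 0.3, 0.2])

emv = np.max(R @ p)            # 680.0, achieved by the stock vehicle
ev_pi = p @ np.max(R, axis=0)  # 0.5*1500 + 0.3*600 + 0.2*500 = 1030.0
print(ev_pi - emv)             # 350.0, the EVPI
```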
Discussion:
If someone were selling information that guaranteed an accurate prediction of the future market direction, we would want to purchase it only if its price were less than $350. If the price were greater than $350 we would not purchase the information; if it were less than $350 we would; and if it were exactly $350, we would be indifferent, since buying the information would leave our expected payoff unchanged.
Suppose the price of the information was $349.99 and we purchased it. Then we would expect to make $1030 − $349.99 = $680.01 > $680. Therefore, by purchasing the information we were able to make $0.01 more than if we hadn't purchased it.

Suppose instead the price was $350.01. Then we would expect to make $1030 − $350.01 = $679.99 < $680. Therefore, by purchasing the information we lost $0.01 compared to not purchasing it.

Finally, suppose the price was exactly $350.00. Then we would expect to make $1030 − $350.00 = $680.00 = $680. Therefore, by purchasing the information we neither gained nor lost money compared to not purchasing it.
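This break-even comparison amounts to a one-line decision rule: buy the forecast exactly when its price is below the EVPI, i.e. when $\mathrm{EV|PI} - \text{price} > \mathrm{EMV}$. A small sketch of that rule with this example's numbers (the threshold check is ours, purely for illustration):

```python
# Compare the expected payoff with purchased information (1030 - price)
# against the 680 expected from acting on probabilities alone.
for price in (349.99, 350.00, 350.01):
    net = 1030 - price
    # In production code, compare floats with a tolerance rather than ==.
    verdict = "buy" if net > 680 else ("indifferent" if net == 680 else "don't buy")
    print(f"price ${price:.2f}: expected net ${net:.2f} -> {verdict}")
```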
Note: in practice there is also a cost to tying up the money used to purchase the information (the time value of money), which must be considered as well.