Rank-dependent expected utility

The rank-dependent expected utility model (originally called anticipated utility) is a generalized expected utility model of choice under uncertainty, designed to explain the behaviour observed in the Allais paradox, as well as the observation that many people both purchase lottery tickets (implying risk-loving preferences) and insure against losses (implying risk aversion).

Generalized expected utility is a decision-making metric based on any of a variety of theories that attempt to resolve discrepancies between expected utility theory and empirical observations concerning choice under risky (probabilistic) circumstances.

Uncertainty

Uncertainty refers to epistemic situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown. Uncertainty arises in partially observable and/or stochastic environments, as well as due to ignorance, indolence, or both. It arises in any number of fields, including insurance, philosophy, physics, statistics, economics, finance, psychology, sociology, engineering, metrology, meteorology, ecology and information science.

The Allais paradox is a choice problem designed by Maurice Allais (1953) to show an inconsistency of actual observed choices with the predictions of expected utility theory.

A natural explanation of these observations is that individuals overweight low-probability events such as winning the lottery, or suffering a disastrous insurable loss. In the Allais paradox, individuals appear to forgo the chance of a very large gain to avoid a one per cent chance of missing out on an otherwise certain large gain, but are less risk averse when offered the chance of reducing an 11 per cent chance of loss to 10 per cent.
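
In a common presentation of the paradox (the payoffs here are the standard illustrative figures), subjects first choose between gamble A, paying $1 million with certainty, and gamble B, paying $5 million with probability 0.10, $1 million with probability 0.89 and nothing with probability 0.01; most choose A. They then choose between gamble C, paying $1 million with probability 0.11, and gamble D, paying $5 million with probability 0.10; most choose D. Expected utility theory cannot rationalise this pattern, since

$$ A \succ B \iff u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.89\,u(1\mathrm{M}) + 0.01\,u(0) \iff 0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0) \iff C \succ D, $$

so any utility function that makes A preferable to B also makes C preferable to D.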

A number of attempts were made to model preferences incorporating probability weighting, most notably the original version of prospect theory, presented by Daniel Kahneman and Amos Tversky (1979). However, all such models involved violations of first-order stochastic dominance. In prospect theory, violations of dominance were avoided by the introduction of an 'editing' operation, but this gave rise to violations of transitivity.

Prospect theory

Prospect theory is a theory in cognitive psychology that describes the way people choose between probabilistic alternatives that involve risk, where the probabilities of outcomes are known. The theory states that people make decisions based on the potential value of losses and gains rather than the final outcome, and that people evaluate these losses and gains using certain heuristics. The model is descriptive: it tries to model real-life choices, rather than optimal decisions, as normative models do.

Daniel Kahneman

Daniel Kahneman is an Israeli-American psychologist and economist notable for his work on the psychology of judgment and decision-making, as well as behavioral economics, for which he was awarded the 2002 Nobel Memorial Prize in Economic Sciences. His empirical findings challenge the assumption of human rationality prevailing in modern economic theory.

Amos Tversky

Amos Nathan Tversky was a cognitive and mathematical psychologist, a student of cognitive science, a longtime collaborator of Daniel Kahneman, and a key figure in the discovery of systematic human cognitive bias and handling of risk.

The crucial idea of rank-dependent expected utility was to overweight only unlikely extreme outcomes, rather than all unlikely events. Formalising this insight required transformations to be applied to the cumulative probability distribution function, rather than to individual probabilities (Quiggin, 1982, 1993).
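
As a small worked illustration (the transformation function here is an assumption chosen only to exhibit the mechanism), take $q(p) = p^2$ and a lottery whose outcomes $y_{[1]} < y_{[2]} < y_{[3]}$ carry probabilities 0.1, 0.8 and 0.1. Each outcome's decision weight, written $h_{[s]}$ in the formal representation below, is the increment of $q$ over the cumulative probability of the outcomes ranked at or below it:

$$ h_{[1]} = q(0.1) = 0.01, \qquad h_{[2]} = q(0.9) - q(0.1) = 0.80, \qquad h_{[3]} = q(1) - q(0.9) = 0.19. $$

Although both tails have the same probability 0.1, only their ranks differ: the best outcome is overweighted (0.19) while the worst is underweighted (0.01), and the weights still sum to one. Transforming the individual probabilities directly would treat the two tails identically, and it is exactly such transformations that produce the dominance violations noted above.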

John Quiggin

John Quiggin is an Australian economist and Professor at the University of Queensland. He was formerly an Australian Research Council Laureate Fellow and Federation Fellow, and a Member of the Board of the Climate Change Authority of the Australian Government.

The central idea of rank-dependent weightings was then incorporated by Daniel Kahneman and Amos Tversky into prospect theory, and the resulting model was referred to as cumulative prospect theory (Tversky & Kahneman, 1992).

Cumulative prospect theory

Cumulative prospect theory (CPT) is a model for descriptive decisions under risk and uncertainty which was introduced by Amos Tversky and Daniel Kahneman in 1992. It is a further development and variant of prospect theory. The difference between this version and the original version of prospect theory is that weighting is applied to the cumulative probability distribution function, as in rank-dependent expected utility theory, rather than to the probabilities of individual outcomes. In 2002, Daniel Kahneman received the Nobel Memorial Prize in Economic Sciences for his contributions to behavioral economics, in particular the development of cumulative prospect theory.
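
The following Python sketch illustrates this cumulative weighting for a mixed lottery of gains and losses. It assumes the functional forms and the median parameter estimates reported by Tversky and Kahneman (1992); for brevity it uses a single weighting curve for both gains and losses, where the original paper fits separate curves, and all function and variable names here are illustrative rather than any standard API:

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Inverse-S probability weighting curve of Tversky & Kahneman (1992)."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def cpt_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
    """Cumulative prospect theory value of a lottery, with outcomes measured
    relative to a reference point of zero. Parameter defaults are the 1992
    median estimates (alpha also stands in for the loss exponent beta,
    which was estimated at the same value)."""
    x, p = np.asarray(outcomes, float), np.asarray(probs, float)
    order = np.argsort(x)                      # sort outcomes from worst to best
    x, p = x[order], p[order]
    v = np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)  # value function
    gains = x >= 0
    total = 0.0
    if gains.any():
        # gains are weighted through the decumulative distribution, best outcome first
        pg, vg = p[gains][::-1], v[gains][::-1]
        w = np.diff(tk_weight(np.cumsum(pg), gamma), prepend=0.0)
        total += np.sum(w * vg)
    if (~gains).any():
        # losses are weighted through the cumulative distribution, worst outcome first
        pl, vl = p[~gains], v[~gains]
        w = np.diff(tk_weight(np.cumsum(pl), gamma), prepend=0.0)
        total += np.sum(w * vl)
    return float(total)

# A mixed prospect: lose 100 with probability 0.3, gain 150 with probability 0.3
print(cpt_value([-100.0, 0.0, 150.0], [0.3, 0.4, 0.3]))
```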

Formal representation

As the name implies, the rank-dependent model is applied to the increasing rearrangement $y_{[1]} \leq y_{[2]} \leq \cdots \leq y_{[S]}$ of the outcome vector $y$. The rank-dependent expected utility of $y$ is

$$ W(y) = \sum_{s=1}^{S} h_{[s]}(\pi)\, u(y_{[s]}) $$

where $u$ is a utility function, $\pi = (\pi_1, \ldots, \pi_S)$ is the vector of probabilities attached to the outcomes, and $h_{[s]}(\pi)$ is a probability weight such that

$$ h_{[s]}(\pi) = q\left(\sum_{t=1}^{s} \pi_{[t]}\right) - q\left(\sum_{t=1}^{s-1} \pi_{[t]}\right) $$

for a transformation function $q: [0,1] \to [0,1]$ with $q(0) = 0$ and $q(1) = 1$.

Note that

$$ \sum_{s=1}^{S} h_{[s]}(\pi) = q\left(\sum_{t=1}^{S} \pi_{[t]}\right) = q(1) = 1, $$

so that the decision weights sum to 1.
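
To make the construction concrete, here is a minimal sketch in Python. The functional forms $u(y) = \sqrt{y}$ and $q(p) = p^{0.7}$ are assumptions chosen purely for illustration; the model itself only requires $q(0) = 0$ and $q(1) = 1$:

```python
import numpy as np

def rdeu(outcomes, probs, u, q):
    """Rank-dependent expected utility W(y) = sum_s h_[s](pi) u(y_[s])."""
    y, p = np.asarray(outcomes, float), np.asarray(probs, float)
    order = np.argsort(y)                 # increasing rearrangement y_[1] <= ... <= y_[S]
    y, p = y[order], p[order]
    cum = np.cumsum(p)                    # cumulative probabilities sum_{t<=s} pi_[t]
    h = np.diff(q(cum), prepend=q(0.0))   # h_[s] = q(cum_s) - q(cum_{s-1})
    return float(np.sum(h * u(y)))        # the h_[s] sum to q(1) = 1 by construction

# Illustrative (assumed) functional forms:
u = np.sqrt                               # a concave utility function
q = lambda p: p**0.7                      # a transformation with q(0) = 0, q(1) = 1

# An illustrative lottery: a 1% chance of nothing, an 89% chance of 100,
# and a 10% chance of 500
print(rdeu([0.0, 100.0, 500.0], [0.01, 0.89, 0.10], u, q))
```

With this concave $q$ the increments of $q$ are largest near zero, so the worst-ranked outcomes receive the most weight (a pessimistic weighting); an inverse-S-shaped $q$ would instead overweight both extreme tails.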

References
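
Allais, M. (1953). "Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école américaine". Econometrica 21 (4): 503–546.

Kahneman, D.; Tversky, A. (1979). "Prospect theory: An analysis of decision under risk". Econometrica 47 (2): 263–291.

Quiggin, J. (1982). "A theory of anticipated utility". Journal of Economic Behavior & Organization 3 (4): 323–343.

Quiggin, J. (1993). Generalized Expected Utility Theory: The Rank-Dependent Model. Boston: Kluwer Academic Publishers.

Tversky, A.; Kahneman, D. (1992). "Advances in prospect theory: Cumulative representation of uncertainty". Journal of Risk and Uncertainty 5 (4): 297–323.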
