Expected utility hypothesis

The expected utility hypothesis is a popular concept in economics that serves as a reference guide for decisions when the payoff is uncertain. The theory recommends which option rational individuals should choose in a complex situation, based on their risk appetite and preferences.


The expected utility hypothesis states that an agent chooses between risky prospects by comparing expected utility values, i.e. the weighted sums obtained by multiplying the utility value of each payoff by its probability and adding the results. The summarised formula for expected utility is

E[U] = Σ_i p_i · u(x_i),

where p_i is the probability that the outcome indexed by i with payoff x_i is realized, and the function u expresses the utility of each respective payoff. [1] On a graph, the curvature of u will reflect the agent's risk attitude.

For example, if an agent derives 0 utils from 0 apples, 2 utils from one apple, and 3 utils from two apples, their expected utility for a 50–50 gamble between zero apples and two is 0.5u(0 apples) + 0.5u(2 apples) = 0.5(0 utils) + 0.5(3 utils) = 1.5 utils. Under the expected utility hypothesis, the consumer would prefer 1 apple (giving him 2 utils) to the gamble between zero and two.

Standard utility functions represent ordinal preferences. The expected utility hypothesis imposes limitations on the utility function and makes utility cardinal (though still not comparable across individuals). In the example above, any function such that u(0) < u(1) < u(2) would represent the same preferences; we could specify u(0) = 0, u(1) = 2, and u(2) = 40, for example. Under the expected utility hypothesis, setting u(2) = 3 and assuming the agent is indifferent between one apple with certainty and a gamble with a 1/3 probability of no apple and a 2/3 probability of two apples requires that the utility of one apple be set to u(1) = 2. This is because it requires that (1/3)u(0) + (2/3)u(2) = u(1), and (1/3)(0) + (2/3)(3) = 2.
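The two numerical examples above can be checked with a short script. This is an illustrative sketch only; the function name is made up.

```python
# Sketch of the calculations above: u(0) = 0 utils, u(2) = 3 utils, and the
# expected-utility rule E[U] = sum_i p_i * u_i.

def expected_utility(probs, utils):
    """Weighted sum of utilities: sum_i p_i * u_i."""
    return sum(p * u for p, u in zip(probs, utils))

u0, u2 = 0.0, 3.0

# 50-50 gamble between zero apples and two apples
gamble = expected_utility([0.5, 0.5], [u0, u2])

# Calibration: indifference between one sure apple and a gamble with a 1/3
# chance of no apple and a 2/3 chance of two apples pins down u(1)
u1 = expected_utility([1 / 3, 2 / 3], [u0, u2])

print(gamble, round(u1, 6))   # 1.5 2.0
```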

Although the expected utility hypothesis is standard in economic modelling, it has been found to be violated in psychology experiments. For many years, psychologists and economic theorists have been developing new theories to explain these deficiencies. [2] These include prospect theory, rank-dependent expected utility and cumulative prospect theory, and bounded rationality.


Limits of the expected value theory

In the early days of the calculus of probability, classic utilitarians believed that the option which has the greatest utility will produce more pleasure or happiness for the agent and therefore must be chosen. [3] The main problem with the expected value theory is that there might not be a unique correct way to quantify utility or to identify the best trade-offs. For example, some of the trade-offs may be intangible or qualitative. Rather than monetary incentives, other desirable ends can also be included in utility, such as pleasure, knowledge, friendship, etc. Originally the total utility of the consumer was the sum of independent utilities of the goods. However, the expected value theory was dropped as it was considered too static and deterministic. [4] The classical counterexample to the expected value theory (where everyone makes the same "correct" choice) is the St. Petersburg paradox. This paradox questioned whether marginal utilities should be ranked differently, as it showed that a "correct decision" for one person is not necessarily right for another person. [4]

Risk aversion

The expected utility theory takes into account that individuals may be risk-averse, meaning that the individual would refuse a fair gamble (a fair gamble has an expected value of zero). Risk aversion implies that their utility functions are concave and show diminishing marginal utility of wealth. The risk attitude is directly related to the curvature of the utility function: risk neutral individuals have linear utility functions, risk seeking individuals have convex utility functions, and risk averse individuals have concave utility functions. The degree of risk aversion can be measured by the curvature of the utility function.

Since risk attitudes are unchanged under affine transformations of u, the second derivative u'' is not an adequate measure of the risk aversion of a utility function. Instead, it needs to be normalized. This leads to the definition of the Arrow–Pratt [5] [6] measure of absolute risk aversion:

ARA(w) = -u''(w) / u'(w),

where w is wealth.

The Arrow–Pratt measure of relative risk aversion is:

RRA(w) = -w · u''(w) / u'(w).

Special classes of utility functions are the CRRA (constant relative risk aversion) functions, where RRA(w) is constant, and the CARA (constant absolute risk aversion) functions, where ARA(w) is constant. They are often used in economics for simplification.
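A minimal numerical check of these definitions, under the standard assumptions that CARA utility has the form u(w) = -exp(-a·w) (so ARA(w) = a) and that log utility is CRRA with RRA(w) = 1; the derivatives are approximated by central differences.

```python
import math

def ara(u, w, h=1e-4):
    """Arrow-Pratt absolute risk aversion -u''(w)/u'(w), via central differences."""
    u1 = (u(w + h) - u(w - h)) / (2 * h)
    u2 = (u(w + h) - 2 * u(w) + u(w - h)) / h ** 2
    return -u2 / u1

def rra(u, w, h=1e-4):
    """Relative risk aversion w * ARA(w)."""
    return w * ara(u, w, h)

a = 0.5
cara = lambda w: -math.exp(-a * w)   # CARA: ARA should be constant at a
crra = lambda w: math.log(w)         # log utility: RRA should be constant at 1

for w in (1.0, 2.0, 5.0):
    print(round(ara(cara, w), 4), round(rra(crra, w), 4))   # ~0.5 and ~1.0 each time
```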

A decision that maximizes expected utility also maximizes the probability of the decision's consequences being preferable to some uncertain threshold. [7] In the absence of uncertainty about the threshold, expected utility maximization simplifies to maximizing the probability of achieving some fixed target. If the uncertainty is uniformly distributed, then expected utility maximization becomes expected value maximization. Intermediate cases lead to increasing risk aversion above some fixed threshold and increasing risk seeking below a fixed threshold.
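The threshold interpretation above can be sketched numerically: if the threshold T is random with CDF F, then P(outcome > T) = E[F(outcome)], i.e. expected utility with u = F. Here T is taken as Uniform(0, 100), so u(x) = x/100 and expected utility maximization reduces to expected value maximization, as the paragraph notes. The numbers are illustrative.

```python
import random

random.seed(1)

def F(x):
    """CDF of a Uniform(0, 100) threshold, used as the utility function."""
    return min(max(x / 100.0, 0.0), 1.0)

lottery = [(0.5, 20.0), (0.5, 80.0)]

# Expected utility with u = F equals E[x] / 100 here
eu = sum(p * F(x) for p, x in lottery)

# Monte Carlo estimate of P(outcome > T) with T ~ Uniform(0, 100)
n = 100_000
hits = sum(
    1 for _ in range(n)
    if random.choice([20.0, 80.0]) > random.uniform(0.0, 100.0)
)
print(eu, round(hits / n, 2))   # both close to 0.5
```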

The St. Petersburg paradox

The St. Petersburg paradox, analyzed by Daniel Bernoulli, empirically established that the decisions of rational individuals sometimes violate the axioms of preferences. [8] When a probability distribution function has an infinite expected value, a rational person would be expected to pay an arbitrarily large finite amount to take the gamble. The paradox shows that there is no upper bound on the potential rewards from very-low-probability events. In the game, a person flips a coin repeatedly until it comes up tails. The participant's prize is determined by the number of consecutive heads: for every head (1/2 probability on each flip), the prize is doubled. The game ends when the participant flips the coin and it comes up tails. According to the axioms of preferences, a player should be willing to pay a high price to play, because the entry cost will always be less than the expected value of the game, since he could potentially win an infinite payout. In reality, however, people do not do this: only a few of the participants were willing to pay a maximum of $25 to enter the game, because many of them were risk averse and unwilling to bet on a very small possibility at a very high price. [9]
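The resolution sketched in the next section can be illustrated numerically: with payoffs 2^k occurring with probability 2^-k, the truncated expected value grows without bound as more rounds are allowed, while the expected log utility converges (to 2·ln 2 ≈ 1.386).

```python
import math

def truncated_sums(n_rounds):
    """Expected value and expected log utility of the game truncated at n_rounds."""
    ev = sum(2 ** -k * 2 ** k for k in range(1, n_rounds + 1))            # each term is 1
    eu = sum(2 ** -k * math.log(2 ** k) for k in range(1, n_rounds + 1))  # converges
    return ev, eu

for n in (10, 20, 40):
    ev, eu = truncated_sums(n)
    print(n, ev, round(eu, 4))   # ev grows linearly in n; eu approaches 2*ln(2)
```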

Bernoulli's formulation

Nicolaus Bernoulli described the St. Petersburg paradox (involving infinite expected values) in 1713, prompting two Swiss mathematicians to develop expected utility theory as a solution. Daniel Bernoulli's paper was the first formalization of marginal utility, which has broad application in economics in addition to expected utility theory. He used this concept to formalize the idea that the same amount of additional money was less useful to an already-wealthy person than it would be to a poor person. The theory can also more accurately describe more realistic scenarios (where expected values are finite) than expected value alone. He proposed that a nonlinear function of utility of an outcome should be used instead of the expected value of an outcome, accounting for risk aversion, where the risk premium is higher for low-probability events than the difference between the payout level of a particular outcome and its expected value. Bernoulli further proposed that it was not the goal of the gambler to maximize his expected gain but to instead maximize the logarithm of his gain.

Daniel Bernoulli drew attention to the psychological and behavioral factors behind the individual's decision-making process and found that wealth has diminishing marginal utility. For example, as someone gets wealthier, an extra dollar or an additional good is perceived as less valuable. In other words, he found that the desirability associated with a financial gain depends not only on the gain itself but also on the wealth of the person. He suggested that people maximize "moral expectation" rather than expected monetary value. Bernoulli made a clear distinction between expected value and expected utility: instead of weighting monetary outcomes by their probabilities, he weighted the utilities of outcomes by their probabilities. He showed that, with such a utility function, the expected utility of the game is finite, even though its expected value is infinite. [4]

Other experiments suggested that very-low-probability events are neglected, given the finite resources of the participants. For example, it makes rational sense for a rich person, but not for a poor person, to pay US$10,000 for a lottery ticket that yields a 50% chance of winning and a 50% chance of nothing. Even though both individuals face the same chance at each monetary prize, they will assign different values to the potential outcomes, according to their income levels.

Ramsey-theoretic approach to subjective probability

In 1926, Frank Ramsey introduced Ramsey's Representation Theorem. This representation theorem for expected utility assumed that preferences are defined over a set of bets where each option has a different yield. Ramsey believed that we always choose decisions to receive the best expected outcome according to our personal preferences. This implies that if we are able to understand the priorities and personal preferences of an individual, we can anticipate what choices they are going to make. [10] In this model he defined numerical utilities for each option to exploit the richness of the space of prices. The outcomes of the preferences are mutually exclusive. For example, if you study, you cannot see your friends, but you will get a good grade in your course. In this scenario, analyzing a person's preferences and beliefs allows us to predict which option they might choose (e.g. if someone prioritizes their social life over academic results, they will go out with their friends). Assuming that the decisions of a person are rational, according to this theorem we should be able to infer the beliefs and utilities of a person just by looking at the choices they make (which is wrong). Ramsey defines a proposition as "ethically neutral" when its two possible outcomes have equal value. In other words, if probability is defined in terms of preference, each such proposition should have probability ½ in order for the agent to be indifferent between both options. [11] Ramsey then shows how subjective probabilities can be derived from preferences over such ethically neutral propositions.


Savage's subjective expected utility representation

In the 1950s, Leonard Jimmie Savage, an American statistician, derived a framework for comprehending expected utility. At that point, it was considered the first and most thorough foundation for understanding the concept. Savage's framework involved proving that expected utility could be used to make an optimal choice among several acts through seven axioms. [13] In his book, The Foundations of Statistics, Savage integrated a normative account of decision making under risk (when probabilities are known) and under uncertainty (when probabilities are not objectively known). Savage concluded that people have neutral attitudes towards uncertainty and that observation is enough to predict the probabilities of uncertain events. [14] A crucial methodological aspect of Savage's framework is its focus on observable choices. Cognitive processes and other psychological aspects of decision making matter only to the extent that they have directly measurable implications on choice.

The theory of subjective expected utility combines two concepts: first, a personal utility function, and second, a personal probability distribution (usually based on Bayesian probability theory). This theoretical model has been known for its clear and elegant structure and is considered by some researchers one of "the most brilliant axiomatic theories of utility ever developed". [15] Instead of assuming the probability of an event, Savage defines it in terms of preferences over acts. Savage used states (something that is not in the agent's control) to calculate the probability of an event, and used utility and intrinsic preferences to predict the outcome of the event. Savage assumed that each act and state together are enough to uniquely determine an outcome. However, this assumption breaks down in cases where the individual does not have enough information about the event.

Additionally, he believed that outcomes must have the same utility regardless of the state. For that reason, it is essential to correctly identify which statement is considered an outcome. For example, if someone says "I got the job", this affirmation is not considered an outcome, since the utility of the statement will differ across people depending on intrinsic factors such as financial necessity or judgments about the company. For that reason, no state can rule out the performance of any act; only when the state and the act are evaluated simultaneously is it possible to determine an outcome with certainty. [16]

Savage's representation theorem

The Savage representation theorem (Savage, 1954): A preference ≼ satisfies P1–P7 if and only if there is a finitely additive probability measure P and a function u : C → R such that, for every pair of acts f and g, [16]

f ≼ g ⟺ ∫_Ω u(f(ω)) dP ≤ ∫_Ω u(g(ω)) dP. [16]

If and only if all the axioms are satisfied, this information can be used to reduce the uncertainty about the events that are out of one's control. Additionally, the theorem ranks the outcomes according to a utility function that reflects personal preferences.

The key ingredients in Savage's theory are:

  1. States: the features of the world outside the decision maker's control that determine what each act yields.
  2. Consequences (outcomes): what the decision maker ultimately cares about.
  3. Acts: functions assigning a consequence to every state; preferences are defined over acts.

Von Neumann–Morgenstern utility theorem

The von Neumann–Morgenstern axioms

There are four axioms of the expected utility theory that define a rational decision maker: completeness; transitivity; independence of irrelevant alternatives; and continuity. [17]

Completeness assumes that an individual has well defined preferences and can always decide between any two alternatives.

This means that, for any two alternatives A and B, the individual prefers A to B, prefers B to A, or is indifferent between A and B.

Transitivity assumes that, as an individual decides according to the completeness axiom, the individual also decides consistently: if A is preferred to B and B is preferred to C, then A is preferred to C.

Independence of irrelevant alternatives pertains to well-defined preferences as well. It assumes that two gambles mixed with an irrelevant third one will maintain the same order of preference as when the two are presented independently of the third one. The independence axiom is the most controversial axiom.[ citation needed ].

Continuity assumes that when there are three lotteries (A, B and C) and the individual prefers A to B and B to C, then there should be a possible combination of A and C in which the individual is then indifferent between this mix and the lottery B.
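The continuity axiom can be made concrete with assumed utility numbers: whenever u(A) > u(B) > u(C), the mixing weight that makes the A/C mixture exactly as good as B always lies strictly between 0 and 1. The figures below are illustrative.

```python
# Assumed utility values for three lotteries with A preferred to B and B to C
uA, uB, uC = 10.0, 6.0, 1.0

# Solve alpha * uA + (1 - alpha) * uC = uB for the mixing weight alpha
alpha = (uB - uC) / (uA - uC)
print(round(alpha, 4))   # 0.5556, strictly between 0 and 1
```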

If all these axioms are satisfied, then the individual is said to be rational and the preferences can be represented by a utility function, i.e. one can assign numbers (utilities) to each outcome of the lottery such that choosing the best lottery according to the preference amounts to choosing the lottery with the highest expected utility. This result is called the von Neumann–Morgenstern utility representation theorem.

In other words, if an individual's behavior always satisfies the above axioms, then there is a utility function such that the individual will choose one gamble over another if and only if the expected utility of one exceeds that of the other. The expected utility of any gamble may be expressed as a linear combination of the utilities of the outcomes, with the weights being the respective probabilities. Utility functions are also normally continuous functions. Such utility functions are also referred to as von Neumann–Morgenstern (vNM) utility functions. This is a central theme of the expected utility hypothesis in which an individual chooses not the highest expected value, but rather the highest expected utility. The expected utility maximizing individual makes decisions rationally based on the axioms of the theory.
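As a minimal sketch of this point, consider an agent with the concave utility u(w) = √w (an assumed example, not the only possibility): the agent ranks gambles by expected utility, not expected value, and so can prefer a sure amount over a gamble with a higher expected value.

```python
import math

def expected_utility(lottery, u):
    """Lottery format: [(probability, payoff), ...]."""
    return sum(p * u(x) for p, x in lottery)

u = math.sqrt                           # concave, hence risk-averse
safe = [(1.0, 49.0)]                    # 49 for sure
risky = [(0.5, 0.0), (0.5, 100.0)]      # expected value 50 > 49

print(sum(p * x for p, x in risky))     # 50.0
print(expected_utility(safe, u))        # 7.0
print(expected_utility(risky, u))       # 5.0
# The risk-averse agent prefers the sure 49 despite its lower expected value.
```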

The von Neumann–Morgenstern formulation is important in the application of set theory to economics because it was developed shortly after the Hicks–Allen "ordinal revolution" of the 1930s, and it revived the idea of cardinal utility in economic theory.[ citation needed ] However, while in this context the utility function is cardinal, in that implied behavior would be altered by a non-linear monotonic transformation of utility, the expected utility function is ordinal because any monotonic increasing transformation of expected utility gives the same behavior.

Examples of von Neumann–Morgenstern utility functions

The utility function u(w) = log(w) was originally suggested by Bernoulli (see above). It has relative risk aversion constant and equal to one, and is still sometimes assumed in economic analyses. The utility function

u(w) = -e^(-aw)

exhibits constant absolute risk aversion, and for this reason is often avoided, although it has the advantage of offering substantial mathematical tractability when asset returns are normally distributed. Note that, as per the affine transformation property alluded to above, the utility function K - e^(-aw) gives exactly the same preference orderings as does -e^(-aw); thus it is irrelevant that the values of -e^(-aw) and its expected value are always negative: what matters for preference ordering is which of two gambles gives the higher expected utility, not the numerical values of those expected utilities.

The class of constant relative risk aversion utility functions contains three categories. Bernoulli's utility function

u(w) = log(w)

has relative risk aversion equal to 1. The functions

u(w) = w^α

for α ∈ (0, 1) have relative risk aversion equal to 1 − α. And the functions

u(w) = -w^(-α)

for α > 0 have relative risk aversion equal to 1 + α.

See also the discussion of utility functions having hyperbolic absolute risk aversion (HARA).

Formula for expected utility

When the entity x whose value affects a person's utility takes on one of a set of discrete values, the formula for expected utility, which is assumed to be maximized, is

E[u(x)] = Σ_i p_i · u(x_i),

where the left side is the subjective valuation of the gamble as a whole, x_i is the ith possible outcome, u(x_i) is its valuation, and p_i is its probability. There could be either a finite set of possible values, in which case the right side of this equation has a finite number of terms, or an infinite set of discrete values, in which case the right side has an infinite number of terms.

When x can take on any of a continuous range of values, the expected utility is given by

E[u(x)] = ∫ u(x) f(x) dx,

where f(x) is the probability density function of x.
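The continuous formula can be sketched numerically by a midpoint Riemann sum, here for exponential utility u(x) = -exp(-a·x) and a normal density, a pairing chosen because it has a known closed form to compare against (E[-e^(-aX)] = -exp(-aμ + a²σ²/2) for X ~ N(μ, σ²)).

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def expected_utility(u, mu, sigma, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint-rule approximation of the integral of u(x) * f(x) dx."""
    dx = (hi - lo) / n
    return sum(
        u(lo + (i + 0.5) * dx) * normal_pdf(lo + (i + 0.5) * dx, mu, sigma) * dx
        for i in range(n)
    )

a, mu, sigma = 0.5, 1.0, 1.0
numeric = expected_utility(lambda x: -math.exp(-a * x), mu, sigma)
closed = -math.exp(-a * mu + a ** 2 * sigma ** 2 / 2)
print(round(numeric, 6), round(closed, 6))   # the two values agree
```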

Measuring risk in the expected utility context

Often people refer to "risk" in the sense of a potentially quantifiable entity. In the context of mean-variance analysis, variance is used as a risk measure for portfolio return; however, this is only valid if returns are normally distributed or otherwise jointly elliptically distributed, [18] [19] [20] or in the unlikely case in which the utility function has a quadratic form. However, David E. Bell proposed a measure of risk which follows naturally from a certain class of von Neumann–Morgenstern utility functions. [21] Let utility of wealth be given by

u(w) = w - b·e^(-aw)

for individual-specific positive parameters a and b. Then expected utility is given by

E[u(w)] = E[w] - b·E[e^(-aw)].

Thus the risk measure is E[e^(-aw)], which differs between two individuals if they have different values of the parameter a, allowing different people to disagree about the degree of risk associated with any given portfolio. Individuals sharing a given risk measure (based on a given value of a) may choose different portfolios because they may have different values of b. See also Entropic risk measure.
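A sketch of this separation, under the linear-plus-exponential form above and with made-up numbers: for two portfolios with the same expected wealth, the one with more spread has the larger risk measure E[e^(-aw)] and hence the lower expected utility.

```python
import math

def eu_bell(lottery, a, b):
    """Expected utility E[w] - b * E[exp(-a*w)] and the risk measure E[exp(-a*w)]."""
    ew = sum(p * w for p, w in lottery)
    risk = sum(p * math.exp(-a * w) for p, w in lottery)
    return ew - b * risk, risk

a, b = 0.1, 1.0
steady = [(1.0, 20.0)]              # 20 for sure
swingy = [(0.5, 0.0), (0.5, 40.0)]  # same expected wealth, more spread

eu1, r1 = eu_bell(steady, a, b)
eu2, r2 = eu_bell(swingy, a, b)
print(r1 < r2, eu1 > eu2)   # True True
```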

For general utility functions, however, expected utility analysis does not permit the expression of preferences to be separated into two parameters with one representing the expected value of the variable in question and the other representing its risk.


Criticism

Expected utility theory is a theory about how to make optimal decisions under risk. It has a normative interpretation, which economists used to think applies in all situations to rational agents but now tend to regard as a useful and insightful first-order approximation. In empirical applications, a number of violations have been shown to be systematic, and these falsifications have deepened understanding of how people actually decide. In 1979, Daniel Kahneman and Amos Tversky presented their prospect theory, which showed empirically how preferences of individuals are inconsistent among the same choices, depending on how those choices are presented. [22] This is mainly because people differ in their preferences and parameters. Additionally, behaviors may differ between individuals even when they are facing the same choice problem.

Like any mathematical model, expected utility theory is a simplification of reality. The mathematical correctness of expected utility theory and the salience of its primitive concepts do not guarantee that expected utility theory is a reliable guide to human behavior or optimal practice. The mathematical clarity of expected utility theory has helped scientists design experiments to test its adequacy, and to distinguish systematic departures from its predictions. This has led to the field of behavioral finance, which has produced deviations from expected utility theory to account for the empirical facts.

Other critics argue that applying expected utility to economic and policy decisions has engendered inappropriate valuations, particularly in scenarios in which monetary units are used to scale the utility of nonmonetary outcomes, such as deaths. [23]

Conservatism in updating beliefs

Psychologists have discovered systematic violations of probability calculations and behavior by humans. This has been evidenced with examples such as the Monty Hall problem, where it was demonstrated that people do not revise their degrees of belief in line with experimental probabilities, and also that probabilities cannot be applied to single cases. In contrast, when updating probability distributions using evidence, a standard method uses conditional probability, namely Bayes' rule. An experiment on belief revision has suggested that humans change their beliefs faster when using Bayesian methods than when using informal judgment. [24]
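A minimal illustration of the Bayesian updating the paragraph refers to, with made-up numbers: the posterior probability of a hypothesis H after seeing evidence E follows from the prior and the two likelihoods.

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), illustrative numbers only
prior = 0.01            # P(H)
p_e_given_h = 0.9       # P(E | H)
p_e_given_not_h = 0.05  # P(E | not H)

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)   # total probability
posterior = p_e_given_h * prior / p_e
print(round(posterior, 4))   # 0.1538
```

Note that even strong evidence leaves the posterior well below 1 when the prior is small, which is exactly the kind of calculation people tend to get wrong informally.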

According to the empirical results, decision theory has given almost no recognition to the distinction between the problem of justifying its theoretical claims regarding the properties of rational belief and desire, and the problem of describing how people actually decide. One of the main reasons is that people's basic tastes and preferences for losses cannot be represented with utility, as they change under different scenarios. [25]

Irrational deviations

Behavioral finance has produced several generalized expected utility theories to account for instances where people's choices deviate from those predicted by expected utility theory. These deviations are described as "irrational" because they can depend on the way the problem is presented, not on the actual costs, rewards, or probabilities involved. Particular theories, including prospect theory, rank-dependent expected utility, and cumulative prospect theory, are nevertheless considered insufficient to predict preferences and expected utility. [26] Additionally, experiments have shown systematic violations of, and motivated generalizations of, the results of Savage and von Neumann–Morgenstern. This is because preferences and utility functions constructed under different contexts differ significantly. The contrast between individual preferences in the insurance and lottery contexts demonstrates the degree of indeterminacy of expected utility theory.

In practice there will be many situations where the probabilities are unknown, and one is operating under uncertainty. In economics, Knightian uncertainty or ambiguity may occur. Thus one must make assumptions about the probabilities, but then the expected values of various decisions can be very sensitive to the assumptions. This is particularly a problem when the expectation is dominated by rare extreme events, as in a long-tailed distribution. Alternative decision techniques are robust to uncertainty of probability of outcomes, either not depending on probabilities of outcomes and only requiring scenario analysis (as in minimax or minimax regret), or being less sensitive to assumptions.

Bayesian approaches to probability treat it as a degree of belief and thus they do not draw a distinction between risk and a wider concept of uncertainty: they deny the existence of Knightian uncertainty. They would model uncertain probabilities with hierarchical models, i.e. where the uncertain probabilities are modelled as distributions whose parameters are themselves drawn from a higher-level distribution (hyperpriors).
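A sketch of the hierarchical idea: instead of a fixed success probability p, p itself is drawn from a Beta distribution (its parameters playing the role of the higher-level distribution), and outcomes are averaged over that uncertainty. The parameters below are assumptions for illustration.

```python
import random

random.seed(0)

def predictive_prob(alpha, beta_param, n_draws=100_000):
    """Mean success probability when p ~ Beta(alpha, beta).

    Monte Carlo estimate; analytically it equals alpha / (alpha + beta).
    """
    return sum(random.betavariate(alpha, beta_param) for _ in range(n_draws)) / n_draws

est = predictive_prob(2.0, 6.0)
print(round(est, 2))   # close to 2 / (2 + 6) = 0.25
```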

Preference reversals over uncertain outcomes

Starting with studies such as Lichtenstein & Slovic (1971), it was discovered that subjects sometimes exhibit signs of preference reversals with regard to their certainty equivalents of different lotteries. Specifically, when eliciting certainty equivalents, subjects tend to value "p bets" (lotteries with a high chance of winning a low prize) lower than "$ bets" (lotteries with a small chance of winning a large prize). When subjects are asked which lotteries they prefer in direct comparison, however, they frequently prefer the "p bets" over "$ bets". [27] Many studies have examined this "preference reversal", from both an experimental (e.g., Plott & Grether, 1979) [28] and theoretical (e.g., Holt, 1986) [29] standpoint, indicating that this behavior can be brought into accordance with neoclassical economic theory under specific assumptions.

The problem of interpersonal utility comparisons

Understanding utilities in terms of personal preferences is challenging, as it faces a difficulty known as the problem of interpersonal utility comparisons (or the social welfare function). It is frequently pointed out that ordinary people usually make such comparisons; however, such comparisons are not empirically meaningful, because interpersonal comparisons do not reveal strength of desire, which is extremely relevant to measuring the expected utility of a decision. In other words, even if we know that X and Y have similar or identical preferences (e.g. both love cars), we cannot determine which loves them more or is willing to sacrifice more to get one. [30] [31]


In conclusion, expected utility theories such as Savage's and von Neumann–Morgenstern's have to be improved or replaced by more general representation theorems.

There are three components in the psychology field that are seen as crucial to the development of a more accurate descriptive theory of decision under risks. [25] [32]

  1. Theory of decision framing effect (psychology)
  2. Better understanding of the psychologically relevant outcome space
  3. A psychologically richer theory of the determinants

Mixture models of choice under risk

In this model, Conte (2011) found that behaviour differs between individuals and, for the same individual, at different times. Applying a mixture model fits the data significantly better than either of the two preference functionals individually. [33] Additionally, it helps to estimate preferences much more accurately than older economic models because it takes heterogeneity into account. In other words, the model assumes that different agents in the population have different functionals. The model estimates the proportion of each group in order to account for all forms of heterogeneity.

Psychological expected utility model [34]

In this model, Caplin (2001) expanded the standard prize space to include anticipatory emotions, such as suspense and anxiety, and their influence on preferences and decisions. The authors replaced the standard prize space with a space of "psychological states". In this research, they open up a variety of psychologically interesting phenomena to rational analysis. This model explains how time inconsistency arises naturally in the presence of anticipation, and also how these anticipated emotions may change the result of choices. For example, the model finds that anxiety is anticipatory and that the desire to reduce anxiety motivates many decisions. A better understanding of the psychologically relevant outcome space will help theorists develop a richer theory of determinants.

See also

Related Research Articles

As a topic of economics, utility is used to model worth or value. Its usage has evolved significantly over time. The term was introduced initially as a measure of pleasure or happiness as part of the theory of utilitarianism by moral philosophers such as Jeremy Bentham and John Stuart Mill. The term has been adapted and reapplied within neoclassical economics, which dominates modern economic theory, as a utility function that represents a single consumer's preference ordering over a choice set but is not comparable across consumers. This concept of utility is personal and based on choice rather than on pleasure received, and so is specified more rigorously than the original concept but makes it less useful for ethical decisions.

Risk aversion Economics theory

In economics and finance, risk aversion is the tendency of people to prefer outcomes with low uncertainty to those outcomes with high uncertainty, even if the average outcome of the latter is equal to or higher in monetary value than the more certain outcome. Risk aversion explains the inclination to agree to a situation with a more predictable, but possibly lower payoff, rather than another situation with a highly unpredictable, but possibly higher payoff. For example, a risk-averse investor might choose to put their money into a bank account with a low but guaranteed interest rate, rather than into a stock that may have high expected returns, but also involves a chance of losing value.

Prospect theory Theory of behavioral economics and behavioral finance

Prospect theory is a theory of behavioral economics and behavioral finance that was developed by Daniel Kahneman and Amos Tversky in 1979. The theory was cited in the decision to award Kahneman the 2002 Nobel Memorial Prize in Economics.

In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite, in which case it is to be maximized.

Decision theory is the study of an agent's choices. Decision theory can be broken into two branches: normative decision theory, which analyzes the outcomes of decisions or determines the optimal decisions given constraints and assumptions, and descriptive decision theory, which analyzes how agents actually make the decisions they do.

In decision theory, subjective expected utility is the attractiveness of an economic opportunity as perceived by a decision-maker in the presence of risk. Characterizing the behavior of decision-makers as using subjective expected utility was promoted and axiomatized by L. J. Savage in 1954 following previous work by Ramsey and von Neumann. The theory of subjective expected utility combines two subjective concepts: first, a personal utility function, and second a personal probability distribution.

Ellsberg paradox Paradox in decision theory

The Ellsberg paradox is a paradox of choice in which people's decisions produce inconsistencies with subjective expected utility theory. The paradox was popularized by Daniel Ellsberg in his 1961 paper “Risk, Ambiguity, and the Savage Axioms”, although a version of it was noted considerably earlier by John Maynard Keynes. It is generally taken to be evidence for ambiguity aversion, in which a person tends to prefer choices with quantifiable risks over those with unknown risks.

Revealed preference theory, pioneered by economist Paul Anthony Samuelson in 1938, is a method of analyzing choices made by individuals, mostly used for comparing the influence of policies on consumer behavior. Revealed preference models assume that the preferences of consumers can be revealed by their purchasing habits.

The Allais paradox is a choice problem designed by Maurice Allais (1953) to show an inconsistency of actual observed choices with the predictions of expected utility theory.

Stochastic dominance is a partial order between random variables. It is a form of stochastic ordering. The concept arises in decision theory and decision analysis in situations where one gamble can be ranked as superior to another for a broad class of decision-makers. It is based on shared preferences regarding sets of possible outcomes and their associated probabilities. Only limited knowledge of preferences is required to determine dominance. Risk aversion is a factor only in second-order stochastic dominance.
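For discrete lotteries, the first-order version of this ranking can be sketched by comparing cumulative distribution functions pointwise; the helper name and the numeric tolerance below are assumptions of this sketch:

```python
def first_order_dominates(a, b):
    """True if lottery a first-order stochastically dominates lottery b.

    a, b: dicts mapping outcome -> probability. a dominates b when a's
    CDF lies at or below b's CDF at every outcome, i.e. a shifts
    probability mass toward higher payoffs.
    """
    support = sorted(set(a) | set(b))
    cdf_a = cdf_b = 0.0
    for x in support:
        cdf_a += a.get(x, 0.0)
        cdf_b += b.get(x, 0.0)
        if cdf_a > cdf_b + 1e-12:  # a puts more mass at or below x
            return False
    return True

# a moves probability mass from the low outcome to the high one.
a = {10: 0.2, 20: 0.8}
b = {10: 0.5, 20: 0.5}
print(first_order_dominates(a, b))  # True
print(first_order_dominates(b, a))  # False
```

Any agent with an increasing utility function weakly prefers `a` to `b` here, which is what makes dominance useful when preferences are only partially known.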

In decision theory and economics, ambiguity aversion is a preference for known risks over unknown risks. An ambiguity-averse individual would rather choose an alternative where the probability distribution of the outcomes is known over one where the probabilities are unknown. This behavior was first introduced through the Ellsberg paradox.

Cumulative prospect theory

Cumulative prospect theory (CPT) is a descriptive model of decisions under risk and uncertainty introduced by Amos Tversky and Daniel Kahneman in 1992. It is a further development and variant of prospect theory. It differs from the original version of prospect theory in that weighting is applied to the cumulative probability distribution function, as in rank-dependent expected utility theory, rather than to the probabilities of individual outcomes. In 2002, Daniel Kahneman received the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel for his contributions to behavioral economics, in particular the development of cumulative prospect theory.
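The cumulative-weighting idea can be sketched for a gamble over gains. The weighting and value functions below follow the parametric forms Tversky and Kahneman estimated in their 1992 paper, but the exact parameter values (0.61 and 0.88) should be read as illustrative rather than definitive:

```python
def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function for gains.

    Inverse-S shaped: overweights small probabilities, underweights
    moderate and large ones. gamma = 0.61 is their estimate for gains.
    """
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value_gains(outcomes, probs, v=lambda x: x ** 0.88):
    """Rank-dependent value of a gamble over non-negative outcomes.

    The decision weight of an outcome is w(prob of that outcome or
    better) minus w(prob of strictly better) -- weighting the cumulative
    distribution, not each outcome's own probability.
    """
    ranked = sorted(zip(outcomes, probs), reverse=True)  # best outcome first
    total = 0.0
    cum = 0.0
    for x, p in ranked:
        weight = tk_weight(cum + p) - tk_weight(cum)
        total += weight * v(x)
        cum += p
    return total
```

For a sure outcome the decision weight is exactly one, so CPT and plain expected utility agree; the two come apart on gambles with small-probability extreme outcomes, which the weighting function overweights.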

In decision theory, the von Neumann–Morgenstern (VNM) utility theorem shows that, under certain axioms of rational behavior, a decision-maker faced with risky (probabilistic) outcomes of different choices will behave as if he or she is maximizing the expected value of some function defined over the potential outcomes at some specified point in the future. This function is known as the von Neumann–Morgenstern utility function. The theorem is the basis for expected utility theory.

In decision theory, economics, and finance, a two-moment decision model is a model that describes or prescribes the process of making decisions in a context in which the decision-maker is faced with random variables whose realizations cannot be known in advance, and in which choices are made based on knowledge of two moments of those random variables. The two moments are almost always the mean—that is, the expected value, which is the first moment about zero—and the variance, which is the second moment about the mean.
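A two-moment objective can be sketched for a discrete random payoff. The mean-minus-variance-penalty form and the risk-aversion parameter below are assumptions chosen for illustration, not the only functional form such models use:

```python
def moments(outcomes, probs):
    """First moment (mean) and second central moment (variance)."""
    mean = sum(p * x for x, p in zip(outcomes, probs))
    var = sum(p * (x - mean) ** 2 for x, p in zip(outcomes, probs))
    return mean, var

def mean_variance_score(outcomes, probs, risk_aversion=1.0):
    """Illustrative two-moment objective: mean - (lambda / 2) * variance."""
    mean, var = moments(outcomes, probs)
    return mean - 0.5 * risk_aversion * var

safe = mean_variance_score([100], [1.0])            # mean 100, variance 0
risky = mean_variance_score([0, 200], [0.5, 0.5])   # mean 100, variance 10000
print(safe > risky)  # True: equal means, but the variance is penalized
```

Only the two moments enter the ranking; any two gambles with the same mean and variance score identically under this objective.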

In finance, economics, and decision theory, hyperbolic absolute risk aversion (HARA) refers to a type of risk aversion that is particularly convenient to model mathematically and to obtain empirical predictions from. It refers specifically to a property of von Neumann–Morgenstern utility functions, which are typically functions of final wealth, and which describe a decision-maker's degree of satisfaction with the outcome for wealth. The final outcome for wealth is affected both by random variables and by decisions. Decision-makers are assumed to make their decisions so as to maximize the expected value of the utility function.
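The HARA family can be sketched directly from its defining formula; the parameter values below (a = 1, b = 1, gamma = 0.5) are arbitrary illustrations rather than empirical estimates:

```python
def hara_utility(w, a=1.0, b=1.0, gamma=0.5):
    """HARA utility: u(W) = ((1 - gamma) / gamma) * (a*W/(1 - gamma) + b)**gamma.

    Valid where a > 0 and a*W/(1 - gamma) + b > 0; different parameter
    choices recover familiar special cases such as power utility.
    """
    return ((1 - gamma) / gamma) * (a * w / (1 - gamma) + b) ** gamma

# Marginal utility falls as wealth rises (concavity), the usual
# precondition for risk aversion under expected utility.
du_low = hara_utility(10.001) - hara_utility(10.0)
du_high = hara_utility(100.001) - hara_utility(100.0)
print(du_low > du_high > 0)  # True
```

A decision-maker with this utility would then be modeled as choosing so as to maximize the expected value of `hara_utility` over random final wealth.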

In economics and other social sciences, preference is the order that a person gives to alternatives based on their relative utility, a process which results in an optimal "choice". Preferences are evaluations: they concern matters of value, typically in relation to practical reasoning. The character of preferences is determined purely by a person's tastes, rather than by the prices of goods, personal income, or the availability of goods. Persons are nonetheless expected to act in their best interest. Rationality, in this context, means that when individuals are faced with a choice, they select the option that maximizes their self-interest, and that a preference ordering exists over every set of alternatives.

In expected utility theory, a lottery is a discrete probability distribution over a set of states of nature. The elements of a lottery are the probabilities with which each of the states of nature occurs. Much of the theoretical analysis of choice under uncertainty involves characterizing the available choices in terms of lotteries.
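A lottery in this sense can be sketched as a mapping from states of nature to probabilities; the state names and utility numbers below are purely illustrative:

```python
# A lottery assigns a probability to each state of nature.
lottery = {"rain": 0.3, "sun": 0.7}
assert abs(sum(lottery.values()) - 1.0) < 1e-12  # probabilities sum to one

# Given a utility for each state, choice under uncertainty reduces to
# comparing probability-weighted sums across the candidate lotteries.
utilities = {"rain": 1.0, "sun": 4.0}
expected = sum(p * utilities[s] for s, p in lottery.items())
print(expected)  # approximately 3.1
```

Comparing two lotteries then amounts to comparing their probability-weighted utility sums, as in the expected utility formula above.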

Risk aversion is a preference for a sure outcome over a gamble with higher or equal expected value. Conversely, the rejection of a sure thing in favor of a gamble of lower or equal expected value is known as risk-seeking behavior.

In economics, the Debreu theorems are several statements about the representation of a preference ordering by a real-valued function. The theorems were proved by Gerard Debreu during the 1950s.

In decision theory, a multi-attribute utility function is used to represent the preferences of an agent over bundles of goods either under conditions of certainty about the results of any potential choice, or under conditions of uncertainty.


References

  1. "Expected Utility Theory | Encyclopedia.com". www.encyclopedia.com. Retrieved 2021-04-28.
  2. Conte, Anna; Hey, John D.; Moffatt, Peter G. (2011-05-01). "Mixture models of choice under risk". Journal of Econometrics. 162 (1): 79–88. doi:10.1016/j.jeconom.2009.10.011. ISSN 0304-4076.
  3. Oberhelman DD (June 2001). Zalta EN (ed.). "Stanford Encyclopedia of Philosophy". Reference Reviews. 15 (6): 9. doi:10.1108/rr.2001.
  4. Allais M, Hagen O, eds. (1979). Expected Utility Hypotheses and the Allais Paradox. Dordrecht: Springer Netherlands. doi:10.1007/978-94-015-7629-1. ISBN 978-90-481-8354-8.
  5. Arrow KJ (1965). "The theory of risk aversion". In Saatio YJ (ed.). Aspects of the Theory of Risk Bearing. Reprinted in Essays in the Theory of Risk Bearing. Chicago: Markham Publ. Co., 1971. pp. 90–109.
  6. Pratt JW (January–April 1964). "Risk aversion in the small and in the large". Econometrica. 32 (1/2): 122–136. doi:10.2307/1913738. JSTOR 1913738.
  7. Castagnoli and LiCalzi, 1996; Bordley and LiCalzi, 2000; Bordley and Kirkwood.
  8. Aase KK (January 2001). "On the St. Petersburg Paradox". Scandinavian Actuarial Journal. 2001 (1): 69–78. doi:10.1080/034612301750077356. ISSN 0346-1238. S2CID 14750913.
  9. Martin, Robert (16 June 2008). "The St. Petersburg Paradox". Stanford Encyclopedia of Philosophy.
  10. Bradley R (2004). "Ramsey's Representation Theorem" (PDF). Dialectica. 58 (4): 483–498. doi:10.1111/j.1746-8361.2004.tb00320.x.
  11. Elliott E. "Ramsey and the Ethically Neutral Proposition" (PDF). Australian National University.
  12. Briggs RA (2014-08-08). "Normative Theories of Rational Choice: Expected Utility".
  13. Savage LJ (March 1951). "The Theory of Statistical Decision". Journal of the American Statistical Association. 46 (253): 55–67. doi:10.1080/01621459.1951.10500768. ISSN 0162-1459.
  14. Lindley DV (September 1973). "The foundations of statistics (second edition), by Leonard J. Savage. Pp xv, 310. £1·75. 1972 (Dover/Constable)". The Mathematical Gazette. 57 (401): 220–221. doi:10.1017/s0025557200132589. ISSN 0025-5572.
  15. "1. Foundations of probability theory". Interpretations of Probability. Berlin, New York: Walter de Gruyter. 2009-01-21. doi:10.1515/9783110213195.1. ISBN 978-3-11-021319-5.
  16. Li Z, Loomes G, Pogrebna G (2017-05-01). "Attitudes to Uncertainty in a Strategic Setting". The Economic Journal. 127 (601): 809–826. doi:10.1111/ecoj.12486. ISSN 0013-0133.
  17. von Neumann J, Morgenstern O (1953) [1944]. Theory of Games and Economic Behavior (Third ed.). Princeton, NJ: Princeton University Press.
  18. Borch K (January 1969). "A note on uncertainty and indifference curves". Review of Economic Studies. 36 (1): 1–4. doi:10.2307/2296336. JSTOR 2296336.
  19. Chamberlain G (1983). "A characterization of the distributions that imply mean-variance utility functions". Journal of Economic Theory. 29 (1): 185–201. doi:10.1016/0022-0531(83)90129-1.
  20. Owen J, Rabinovitch R (1983). "On the class of elliptical distributions and their applications to the theory of portfolio choice". Journal of Finance. 38 (3): 745–752. doi:10.2307/2328079. JSTOR 2328079.
  21. Bell DE (December 1988). "One-switch utility functions and a measure of risk". Management Science. 34 (12): 1416–24. doi:10.1287/mnsc.34.12.1416.
  22. Kahneman D, Tversky A (1979). "Prospect Theory: An Analysis of Decision under Risk" (PDF). Econometrica. 47 (2): 263–292. doi:10.2307/1914185. JSTOR 1914185.
  23. "Expected utility | decision theory". Encyclopedia Britannica. Retrieved 2021-04-28.
  24. Subjects changed their beliefs faster by conditioning on evidence (Bayes's theorem) than by using informal reasoning, according to a classic study by the psychologist Ward Edwards:
    • Edwards W (1968). "Conservatism in Human Information Processing". In Kleinmuntz B (ed.). Formal Representation of Human Judgment. Wiley.
    • Edwards W (1982). "Conservatism in Human Information Processing (excerpted)". In Kahneman D, Slovic P, Tversky A (eds.). Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press.
    • Phillips LD, Edwards W (October 2008). "Chapter 6: Conservatism in a simple probability inference task (Journal of Experimental Psychology (1966) 72: 346–354)". In Weiss JW, Weiss DJ (eds.). A Science of Decision Making: The Legacy of Ward Edwards. Oxford University Press. p. 536. ISBN 978-0-19-532298-9.
  25. Vind K (February 2000). "von Neumann Morgenstern preferences". Journal of Mathematical Economics. 33 (1): 109–122. doi:10.1016/s0304-4068(99)00004-x. ISSN 0304-4068.
  26. Baratgin J (2015-08-11). "Rationality, the Bayesian standpoint, and the Monty-Hall problem". Frontiers in Psychology. 6: 1168. doi:10.3389/fpsyg.2015.01168. PMC 4531217. PMID 26321986.
  27. Lichtenstein S, Slovic P (1971). "Reversals of preference between bids and choices in gambling decisions". Journal of Experimental Psychology. 89 (1): 46–55. doi:10.1037/h0031207. hdl:1794/22312.
  28. Grether DM, Plott CR (1979). "Economic Theory of Choice and the Preference Reversal Phenomenon". American Economic Review. 69 (4): 623–638. JSTOR 1808708.
  29. Holt C (1986). "Preference Reversals and the Independence Axiom". American Economic Review. 76 (3): 508–515. JSTOR 1813367.
  30. List C (2003). "Are interpersonal comparisons of utility indeterminate?" (PDF). Erkenntnis. 58 (2): 229–260. doi:10.1023/a:1022094826922. ISSN 0165-0106. S2CID 14130575.
  31. Rossi M (April 2014). "Simulation theory and interpersonal utility comparisons reconsidered". Synthese. 191 (6): 1185–1210. doi:10.1007/s11229-013-0318-9. ISSN 0039-7857. S2CID 19813153.
  32. Schoemaker PJ (1980). Experiments on Decisions under Risk: The Expected Utility Hypothesis. doi:10.1007/978-94-017-5040-0. ISBN 978-94-017-5042-4.
  33. Conte A, Hey JD, Moffatt PG (May 2011). "Mixture models of choice under risk" (PDF). Journal of Econometrics. 162 (1): 79–88. doi:10.1016/j.jeconom.2009.10.011.
  34. Caplin A, Leahy J (2001-02-01). "Psychological Expected Utility Theory and Anticipatory Feelings". The Quarterly Journal of Economics. 116 (1): 55–79. doi:10.1162/003355301556347. ISSN 0033-5533.

Further reading

de Finetti B (1964). "Foresight: its Logical Laws, Its Subjective Sources (translation of the 1937 article in French)". In Kyburg HE, Smokler HE (eds.). Studies in Subjective Probability. Vol. 7. New York: Wiley. pp. 1–68.