Ellsberg paradox

In decision theory, the Ellsberg paradox (or Ellsberg's paradox) is a paradox in which people's decisions are inconsistent with subjective expected utility theory. John Maynard Keynes published a version of the paradox in 1921. [1] Daniel Ellsberg popularized the paradox in his 1961 paper, "Risk, Ambiguity, and the Savage Axioms". [2] It is generally taken to be evidence of ambiguity aversion, in which a person tends to prefer choices with quantifiable risks over those with unknown, incalculable risks.

Ellsberg's findings indicate that decision-makers overwhelmingly favor options whose risks have clear, quantifiable probabilities over options whose probabilities are unknown, even when the unknown alternative is likely to produce greater utility. [3]

Experimental research

Ellsberg's experimental research involved two separate thought experiments: the 2-urn 2-color scenario and the 1-urn 3-color scenario.

Two-urn paradox

There are two urns, each containing 100 balls. Urn A is known to contain 50 red and 50 black balls, while urn B contains red and black balls in an unknown proportion.

The following bets are offered to a participant:

Bet 1A: get $1 if red is drawn from urn A, $0 otherwise

Bet 2A: get $1 if black is drawn from urn A, $0 otherwise

Bet 1B: get $1 if red is drawn from urn B, $0 otherwise

Bet 2B: get $1 if black is drawn from urn B, $0 otherwise

Typically, participants were indifferent between Bet 1A and Bet 2A (consistent with expected utility theory) but strictly preferred Bet 1A to Bet 1B and Bet 2A to Bet 2B. This result is generally interpreted as a consequence of ambiguity aversion (also known as uncertainty aversion): people intrinsically dislike situations where they cannot attach probabilities to outcomes, in this case favoring the bets whose winning probability (0.5) and prize ($1) are both known.
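This inconsistency can be checked directly: under expected utility, no single subjective probability for urn B rationalizes both strict preferences at once. A minimal sketch (Python; the function names are illustrative, not from Ellsberg):

```python
# Expected value of each $1 bet, given a subjective probability p_red
# that a ball drawn from urn B is red.

def ev_bet_1a():           # red from urn A (known 50/50 mix)
    return 0.5 * 1.0

def ev_bet_2a():           # black from urn A
    return 0.5 * 1.0

def ev_bet_1b(p_red):      # red from urn B (unknown mix)
    return p_red * 1.0

def ev_bet_2b(p_red):      # black from urn B
    return (1.0 - p_red) * 1.0

# Strictly preferring 1A to 1B requires p_red < 0.5, while strictly
# preferring 2A to 2B requires p_red > 0.5 -- no prior satisfies both.
for p in [i / 100 for i in range(101)]:
    assert not (ev_bet_1a() > ev_bet_1b(p) and ev_bet_2a() > ev_bet_2b(p))
```

Whatever prior the participant holds, at most one of the two strict preferences can be justified, which is why the observed pattern points beyond subjective expected utility.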

One-urn paradox

There is one urn containing 90 balls: 30 balls are red, while the remaining 60 balls are either black or yellow in unknown proportions. The balls are well mixed so that each ball is as likely to be drawn as any other. The participants then choose a gambling scenario:

Gamble A: You receive $100 if you draw a red ball

Gamble B: You receive $100 if you draw a black ball

The participant may also choose between a second pair of gambles, on the same urn:

Gamble C: You receive $100 if you draw a red or yellow ball

Gamble D: You receive $100 if you draw a black or yellow ball

Ellsberg's experimental design rests on two economic concepts: Knightian uncertainty (the unquantifiable mix of black and yellow balls within the single urn) and probability (a red ball is drawn with chance 1/3, a black-or-yellow ball with chance 2/3).
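The four gambles' expected payoffs, as a function of the unknown number of black balls, can be tabulated in a short sketch (Python; the helper name is illustrative):

```python
# Expected payoff of each gamble when the urn holds 30 red balls plus
# n_black black and 60 - n_black yellow balls (0 <= n_black <= 60).

def expected_payoffs(n_black):
    n_red, n_total = 30, 90
    n_yellow = 60 - n_black
    p = lambda n_win: n_win / n_total
    return {
        "A": 100 * p(n_red),               # red wins: always 1/3
        "B": 100 * p(n_black),             # black wins: unknown odds
        "C": 100 * p(n_red + n_yellow),    # red or yellow: unknown odds
        "D": 100 * p(n_black + n_yellow),  # black or yellow: always 2/3
    }

# Gambles A and D have fixed odds regardless of the mix; B and C do not.
for n in range(61):
    ev = expected_payoffs(n)
    assert abs(ev["A"] - 100 / 3) < 1e-9
    assert abs(ev["D"] - 200 / 3) < 1e-9
```

Only Gambles A and D have payoff distributions that are independent of the urn's composition, which is exactly the Knightian distinction the design exploits.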

Utility theory interpretation

Utility theory models the choice by assuming that, in choosing between these gambles, people assign a subjective probability to the non-red balls being yellow versus black, and then compute the expected utility of each gamble.

Since the prizes are the same, it follows that the participant will strictly prefer Gamble A to Gamble B if and only if they believe that drawing a red ball is more likely than drawing a black ball (according to expected utility theory), and will be indifferent between the two if they consider the colors equally likely. Similarly, the participant will strictly prefer Gamble C to Gamble D if and only if they believe that drawing a red or yellow ball is more likely than drawing a black or yellow ball. It might seem intuitive that if drawing a red ball is more likely than drawing a black ball, then drawing a red or yellow ball is also more likely than drawing a black or yellow ball. So, supposing the participant strictly prefers Gamble A to Gamble B, it follows that they will also strictly prefer Gamble C to Gamble D, and conversely.

However, ambiguity aversion would predict that people would strictly prefer Gamble A to Gamble B, and Gamble D to Gamble C.

Ellsberg's findings violate this implication of expected utility theory: participants strictly preferred Gamble A to Gamble B, and Gamble D to Gamble C.

Numerical demonstration

Mathematically, let R, Y, and B denote the participant's subjective probabilities of drawing a red, yellow, or black ball, with R = 1/3 and Y + B = 2/3. If the participant strictly prefers Gamble A to Gamble B, expected utility theory requires R·U($100) > B·U($100), that is, R > B. A strict preference for Gamble D over Gamble C likewise requires (B + Y)·U($100) > (R + Y)·U($100), that is, B > R. These two conditions cannot hold simultaneously, so the participant's preferences are inconsistent with expected utility theory.
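The contradiction can also be verified exhaustively over candidate priors; a small sketch (Python; assuming the participant holds some fixed subjective probability of black between 0 and 2/3):

```python
# Search for a subjective probability of black, p_black, that makes both
# observed strict preferences (A over B, and D over C) consistent with
# expected utility at the same time.

p_red = 1 / 3
consistent = []
for i in range(6001):
    p_black = i / 9000                  # sweep 0 .. 2/3 in fine steps
    p_yellow = 2 / 3 - p_black
    prefers_a_over_b = p_red > p_black
    prefers_d_over_c = p_black + p_yellow > p_red + p_yellow
    if prefers_a_over_b and prefers_d_over_c:
        consistent.append(p_black)

assert consistent == []  # no single prior supports both preferences
```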

The generality of the paradox

The result holds regardless of the utility function, and the size of the payoff is likewise irrelevant. Whichever gamble is selected, the prize for winning is the same and the cost of losing is the same (no cost), so ultimately there are only two outcomes: receiving a specific amount of money, or receiving nothing. It is therefore sufficient to assume that receiving some money is preferred to receiving nothing (and even this assumption is not necessary: the mathematical treatment above assumed U($100) > U($0), but a contradiction can still be obtained for U($100) < U($0) and for U($100) = U($0)).

In addition, the result holds regardless of risk aversion, since all of the gambles involve risk. By choosing Gamble D, the participant has a 1 in 3 chance of receiving nothing, and by choosing Gamble A, a 2 in 3 chance. If Gamble A were less risky than Gamble B, it would follow [4] that Gamble C was less risky than Gamble D (and vice versa), so risk aversion alone cannot explain the preferences.

However, because the exact chances of winning are known for Gambles A and D and not known for Gambles B and C, this can be taken as evidence for some sort of ambiguity aversion, which cannot be accounted for in expected utility theory. It has been demonstrated that this phenomenon occurs only when the choice set permits the comparison of the ambiguous proposition with a less vague proposition (but not when ambiguous propositions are evaluated in isolation). [5]

Possible explanations

There have been various attempts to provide decision-theoretic explanations of Ellsberg's observation. Since the probabilistic information available to the decision-maker is incomplete, these attempts sometimes focus on quantifying the non-probabilistic ambiguity that the decision-maker faces – see Knightian uncertainty. That is, these alternative approaches sometimes suppose that the agent formulates a subjective (though not necessarily Bayesian) probability for possible outcomes.

One such attempt is based on info-gap decision theory. The agent is told precise probabilities of some outcomes, though the practical meaning of the probability numbers is not entirely clear. For instance, in the gambles discussed above, the probability of a red ball is 30/90, which is a precise number. Nonetheless, the participant may not distinguish intuitively between this and e.g. 30/91. No probability information whatsoever is provided regarding other outcomes, so the participant has very unclear subjective impressions of these probabilities.

In light of the ambiguity in the probabilities of the outcomes, the agent is unable to evaluate a precise expected utility. Consequently, a choice based on maximizing the expected utility is also impossible. The info-gap approach supposes that the agent implicitly formulates info-gap models for the subjectively uncertain probabilities. The agent then tries to satisfice the expected utility and maximize the robustness against uncertainty in the imprecise probabilities. This robust-satisficing approach can be developed explicitly to show that the choices of decision-makers should display precisely the preference reversal that Ellsberg observed. [6]
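To make the robust-satisficing idea concrete, here is a deliberately simplified numerical sketch (an illustration in the spirit of the approach, not Ben-Haim's formal model; the function names and the choice of aspiration levels are assumptions):

```python
# Robustness of a gamble: the largest deviation h in the unknown
# proportion p of black balls (nominal estimate 0.5) for which the
# expected payoff stays at or above a critical aspiration level c.
# The expected values below are monotone in p, so checking the
# endpoints of the uncertainty interval suffices.

def robustness(ev, c, p_nominal=0.5, steps=500):
    best = 0.0
    for i in range(steps + 1):
        h = 0.5 * i / steps
        lo = max(0.0, p_nominal - h)
        hi = min(1.0, p_nominal + h)
        if min(ev(lo), ev(hi)) < c - 1e-9:
            return best
        best = h
    return best

ev_a = lambda p: 100 / 3                            # known odds
ev_b = lambda p: 100 * (2 / 3) * p                  # win on black
ev_c = lambda p: 100 / 3 + 100 * (2 / 3) * (1 - p)  # red or yellow
ev_d = lambda p: 200 / 3                            # known odds

# At an aspiration equal to the unambiguous gamble's sure expected
# payoff, A and D tolerate any mix while B and C tolerate almost none,
# reproducing the observed reversal (A over B, D over C).
assert robustness(ev_a, 100 / 3) == 0.5 and robustness(ev_b, 100 / 3) == 0.0
assert robustness(ev_d, 200 / 3) == 0.5 and robustness(ev_c, 200 / 3) == 0.0
```

A decision-maker who satisfices on payoff and maximizes robustness would thus pick the unambiguous gamble in each pair, matching Ellsberg's observation.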

Another possible explanation is that this type of game triggers a deceit-aversion mechanism. In real-world situations, many people naturally assume that if they are not told the probability of an event, the omission is meant to deceive them. Participants then make the same decisions in the experiment as they would in related (but not identical) real-life problems, where the experimenter would likely be a deceiver acting against the subject's interests. When faced with the choice between a red ball and a black ball, the known probability of 30/90 is compared against the lower end of the 0/90 to 60/90 range of possible probabilities of drawing a black ball. The average person expects there to be fewer black balls than yellow balls because, in most real-world situations, it would be to the experimenter's advantage to put fewer black balls in the urn when offering such a gamble. On the other hand, when offered a choice between red-or-yellow and black-or-yellow, people suspect that there must be fewer than 30 yellow balls, as would be necessary to deceive them. When making the decision, it is quite possible that people simply neglect to consider that the experimenter has no chance to modify the contents of the urn between the draws. In real-life situations, even if the urn were not modified, people would fear being deceived on that front as well. [7]

Decisions under uncertainty aversion

To describe how an individual would make decisions in a world where uncertainty aversion exists, modifications of the expected utility framework have been proposed. These include maxmin expected utility, in which the agent entertains a set of priors rather than a single one and evaluates each act by its worst-case expected utility over that set. [8]
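One proposed modification, the maxmin expected utility model of Gilboa and Schmeidler, [8] has the agent entertain a set of priors and evaluate each gamble by its worst-case expected utility over that set. A minimal sketch (Python; assuming the prior set contains every admissible urn composition):

```python
# Maxmin expected utility: score each gamble by its worst-case expected
# payoff over all admissible urn compositions (0 to 60 black balls).

def maxmin_ev(wins):
    # wins(n_black) -> number of winning balls out of 90
    return min(100 * wins(n) / 90 for n in range(61))

mev = {
    "A": maxmin_ev(lambda n: 30),           # red
    "B": maxmin_ev(lambda n: n),            # black
    "C": maxmin_ev(lambda n: 30 + 60 - n),  # red or yellow
    "D": maxmin_ev(lambda n: 60),           # black or yellow
}

# Worst-case evaluation reproduces Ellsberg's pattern: A over B, D over C.
assert mev["A"] > mev["B"] and mev["D"] > mev["C"]
```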

Alternative explanations

Other alternative explanations include the competence hypothesis [9] and the comparative ignorance hypothesis. [5] Both theories attribute the source of the ambiguity aversion to the participant's pre-existing knowledge.

Daniel Ellsberg's 1962 paper, "Risk, Ambiguity, and Decision"

After graduating in economics from Harvard in 1952, Ellsberg served in the US Marine Corps before returning to Harvard in 1957 to pursue post-graduate work on decision-making under uncertainty. [10] He left his graduate studies to join the RAND Corporation as a strategic analyst but continued academic work on the side, presenting his breakthrough paper at the December 1960 meeting of the Econometric Society. Ellsberg's work built on earlier work by J.M. Keynes and F.H. Knight, challenging the dominant rational choice theory. His book-length treatment, "Risk, Ambiguity and Decision", was not made public until 2001, some 40 years after it was written, in part because of the Pentagon Papers scandal then encircling Ellsberg's life. The work remains influential within economics scholarship on risk, ambiguity, and uncertainty.


References

  1. Keynes 1921, pp. 75–76, paragraph 315, footnote 2.
  2. Ellsberg, Daniel (1961). "Risk, Ambiguity, and the Savage Axioms". Quarterly Journal of Economics. 75 (4): 643–669. doi:10.2307/1884324. JSTOR 1884324.
  3. "Experimental Discussion of the Ellsberg Paradox". EconPort. Experimental Economics Center, Georgia State University. 2006. Retrieved May 28, 2022.
  4. Segal, Uzi (1987). "The Ellsberg Paradox and Risk Aversion: An Anticipated Utility Approach". International Economic Review. 28 (1): 175–202. doi:10.2307/2526866. JSTOR 2526866.
  5. Fox, Craig R.; Tversky, Amos (1995). "Ambiguity Aversion and Comparative Ignorance". Quarterly Journal of Economics. 110 (3): 585–603. doi:10.2307/2946693. JSTOR 2946693.
  6. Ben-Haim, Yakov (2006). Info-gap Decision Theory: Decisions Under Severe Uncertainty (2nd ed.). Academic Press. Section 11.1. ISBN 978-0-12-373552-2.
  7. Lima Filho, Roberto I. R. L. (July 2, 2009). "Rationality Intertwined: Classical vs Institutional View". pp. 5–6. doi:10.2139/ssrn.2389751. SSRN 2389751.
  8. Gilboa, Itzhak; Schmeidler, David (1989). "Maxmin expected utility with non-unique prior". Journal of Mathematical Economics. 18 (2): 141–153.
  9. Heath, Chip; Tversky, Amos (1991). "Preference and Belief: Ambiguity and Competence in Choice under Uncertainty". Journal of Risk and Uncertainty. 4: 5–28. doi:10.1007/bf00057884.
  10. Sakai, Yasuhiro (2018). "Daniel Ellsberg on J.M. Keynes and F.H. Knight: Risk, Ambiguity and Uncertainty". Evolutionary and Institutional Economics Review. 16: 1–18.

Further reading