The Sleeping Beauty problem, also known as the Sleeping Beauty paradox, [1] is a puzzle in decision theory in which an ideally rational epistemic agent is told she will be awoken from sleep either once or twice according to the toss of a coin. Each time she will have no memory of whether she has been awoken before, and is asked what her degree of belief that the outcome of the coin toss is Heads ought to be when she is first awakened.
The problem was originally formulated in unpublished work in the mid-1980s by Arnold Zuboff (the work was later published as "One Self: The Logic of Experience") [2] followed by a paper by Adam Elga. [3] A formal analysis of the problem of belief formation in decision problems with imperfect recall was first provided by Michele Piccione and Ariel Rubinstein in their paper "On the Interpretation of Decision Problems with Imperfect Recall", where the "paradox of the absent-minded driver" was first introduced and the Sleeping Beauty problem discussed as Example 5. [4] [5] The name "Sleeping Beauty" was given to the problem by Robert Stalnaker and was first used in extensive discussion in the Usenet newsgroup rec.puzzles in 1999. [6]
Elga's published version differs from Zuboff's unpublished versions only in the number of potential wakings; Zuboff used a large number. Elga created a schedule within which to implement his solution, and this has become the canonical form of the problem:
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be awakened and interviewed on Monday only; if the coin comes up tails, she will be awakened and interviewed on both Monday and Tuesday.
In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"
The problem continues to generate debate.
The thirder position argues that the probability of heads is 1/3. Adam Elga argued for this position originally [3] as follows: Suppose Sleeping Beauty is told, and comes to fully believe, that the coin landed tails. By even a highly restricted principle of indifference, given that the coin landed tails, her credence that it is Monday should equal her credence that it is Tuesday, since the two situations are subjectively indistinguishable. In other words, P(Monday | Tails) = P(Tuesday | Tails), and thus P(Tails and Tuesday) = P(Tails and Monday).
Suppose now that Sleeping Beauty is told upon awakening, and comes to fully believe, that it is Monday. Guided by the objective chance of heads being equal to the chance of tails, it should hold that P(Tails | Monday) = P(Heads | Monday), and thus P(Tails and Monday) = P(Heads and Monday).
Since these three outcomes (Heads and Monday, Tails and Monday, Tails and Tuesday) are exhaustive and exclusive for one trial, their probabilities must add to 1; by the previous two steps they are all equal, so the probability of each is 1/3.
An alternative argument is as follows: credence can be viewed as the amount a rational risk-neutral bettor would wager if the payoff for being correct is 1 unit (the wager itself being lost either way). In the heads scenario, Sleeping Beauty would spend her wager once and receive 1 unit for being correct. In the tails scenario, she would spend her wager twice and receive nothing. Her expected gain is therefore 0.5 units and her expected loss is 1.5 times her wager, so she breaks even exactly when her wager is 1/3.
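The betting argument can be checked numerically. The following Python sketch is illustrative only; it uses the payoff convention described above (1 unit per correct guess, the wager lost at every awakening regardless):

```python
import random

def expected_profit(wager, trials=100_000, seed=0):
    """Average profit per run of the experiment when Beauty bets `wager`
    on heads at every awakening; a correct guess pays 1 unit, and the
    wager is lost either way."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if rng.random() < 0.5:
            total += 1 - wager      # heads: one awakening, the bet wins
        else:
            total -= 2 * wager      # tails: two awakenings, both bets lose
    return total / trials
```

At a wager of 1/3 the simulated profit per run comes out close to zero, matching the break-even analysis, while a wager of 1/2 loses money on average.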
David Lewis responded to Elga's paper with the position that Sleeping Beauty's credence that the coin landed heads should be 1/2. [7] On this view, Sleeping Beauty receives no new non-self-locating information throughout the experiment, because she was told all of its details in advance. Since her credence before the experiment is P(Heads) = 1/2, and she gains no new relevant evidence when she wakes up, she ought to continue to have a credence of P(Heads) = 1/2. This directly contradicts one of the thirder's premises, since it entails P(Tails | Monday) = 1/3 and P(Heads | Monday) = 2/3.
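The conditional credences that follow from the halfer position can be derived with Bayes' theorem. The short Python sketch below is illustrative; it uses the halfer priors stated above (P(Heads) = 1/2, with a tails awakening equally likely to be Monday or Tuesday):

```python
# Bayes-rule check of Lewis's halfer position.
p_heads = 0.5
p_monday_given_heads = 1.0   # heads: the only awakening is Monday
p_monday_given_tails = 0.5   # tails: Monday or Tuesday, by indifference

# Law of total probability, then Bayes' rule.
p_monday = (p_heads * p_monday_given_heads
            + (1 - p_heads) * p_monday_given_tails)          # = 3/4
p_heads_given_monday = p_heads * p_monday_given_heads / p_monday  # = 2/3
p_tails_given_monday = 1 - p_heads_given_monday                   # = 1/3
```

This reproduces the values P(Heads | Monday) = 2/3 and P(Tails | Monday) = 1/3 that conflict with the thirder premise.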
The double halfer position [8] argues that both P(Heads) and P(Heads|Monday) equal 1/2. Mikaël Cozic, [9] in particular, argues that context-sensitive propositions like "it is Monday" are in general problematic for conditionalization and proposes the use of an imaging rule instead, which supports the double halfer position.
Another approach to the Sleeping Beauty problem is to assert that the problem, as stated, is ambiguous. This view asserts that the thirder and halfer positions are both correct answers, but to different questions. [10] [11] [12] The key idea is that the question asked of Sleeping Beauty, "what is your credence that the coin came up heads", is ambiguous. The question must be disambiguated based on the particular event whose probability we wish to measure. The two disambiguations are: "what is your credence that the coin landed heads in the act of tossing" and "what is your credence that the coin landed heads in the toss to set up this awakening"; to which, the correct answers are 1/2 and 1/3 respectively.
Another way to see the two different questions is to simplify the Sleeping Beauty problem as follows. [10] Imagine tossing a coin, if the coin comes up heads, a green ball is placed into a box; if, instead, the coin comes up tails, two red balls are placed into a box. We repeat this procedure a large number of times until the box is full of balls of both colours. A single ball is then drawn from the box. In this setting, the question from the original problem resolves to one of two different questions: "what is the probability that a green ball was placed in the box" and "what is the probability a green ball was drawn from the box". These questions ask for the probability of two different events, and thus can have different answers, even though both events are causally dependent on the coin landing heads. (This fact is even more obvious when one considers the complementary questions: "what is the probability that two red balls were placed in the box" and "what is the probability that a red ball was drawn from the box".)
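The two questions can be separated numerically. The following Python sketch (illustrative only) simulates the ball-and-box procedure described above and estimates both probabilities:

```python
import random

def simulate(n_tosses=100_000, seed=1):
    """Toss a coin n_tosses times; heads places one green ball in the box,
    tails places two red balls.  Return the two probabilities the text
    distinguishes: per toss, and per ball drawn."""
    rng = random.Random(seed)
    green = red = heads = 0
    for _ in range(n_tosses):
        if rng.random() < 0.5:
            heads += 1
            green += 1       # heads: one green ball placed
        else:
            red += 2         # tails: two red balls placed
    p_green_placed = heads / n_tosses       # "a green ball was placed": ~1/2
    p_green_drawn = green / (green + red)   # "a green ball is drawn":  ~1/3
    return p_green_placed, p_green_drawn
```

The two estimates come out near 1/2 and 1/3 respectively, mirroring the halfer and thirder answers.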
This view apparently violates the principle that, if event A happens if and only if event B happens, then we should have equal credence for event A and event B. [13] Its proponents respond that the principle does not apply here, because the two questions are posed over different sample spaces.
Credence about what precedes awakenings is a core question in connection with the anthropic principle.
An extreme variant of the problem, formulated by Nick Bostrom, differs from the original in that there are one million and one wakings if tails comes up. [13] [14]
The Sailor's Child problem, introduced by Radford M. Neal, is somewhat similar. It involves a sailor who regularly sails between ports. In one port there is a woman who wants to have a child with him, and across the sea there is another woman who also wants to have a child with him. The sailor cannot decide whether to have one child or two, so he leaves it up to a coin toss: if heads, he will have one child, and if tails, two children (one with each woman; presumably the children will never meet). But if the coin lands on heads, which woman would have his child? He decides this by consulting The Sailor's Guide to Ports: the woman in the port that appears first in the guide is the one he has a child with. You are his child, and you do not have a copy of The Sailor's Guide to Ports. What is the probability that you are his only child, i.e., that the coin landed on heads (assume a fair coin)? [15]
The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that, if an event has occurred less frequently than expected, it is more likely to happen again in the future. The fallacy is commonly associated with gambling, where it may be believed, for example, that the next dice roll is more than usually likely to be six because there have recently been fewer than the expected number of sixes.
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical, tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.
The inverse gambler's fallacy, named by philosopher Ian Hacking, is a formal fallacy of Bayesian inference which is an inverse of the better known gambler's fallacy. It is the fallacy of concluding, on the basis of an unlikely outcome of a random process, that the process is likely to have occurred many times before. For example, if one observes a pair of fair dice being rolled and turning up double sixes, it is wrong to suppose that this lends any support to the hypothesis that the dice have been rolled many times before. We can see this from the Bayesian update rule: letting U denote the unlikely outcome of the random process and M the proposition that the process has occurred many times before, we have P(M | U) = P(M) P(U | M) / P(U) = P(M), since the probability of double sixes on any given roll is independent of how often the dice have been rolled before, so P(U | M) = P(U). The observation therefore provides no support for M.
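The update can be checked with a small numerical example; the prior P(M) = 0.3 below is an arbitrary illustrative assumption, not a value from the sources:

```python
# Bayesian update for the inverse gambler's fallacy example.
p_m = 0.3                 # prior: the dice were rolled many times before
p_sixes = 1 / 36          # chance of double sixes on any single fair roll
p_sixes_given_m = 1 / 36  # ...which is unchanged by the dice's history

# Bayes' rule: since P(U|M) = P(U), the posterior equals the prior.
p_m_given_sixes = p_m * p_sixes_given_m / p_sixes
```

Whatever prior is chosen, the posterior is identical to it: observing double sixes carries no information about the number of earlier rolls.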
The martingale is a class of betting strategies that originated in, and was popular in, 18th-century France. The simplest of these strategies was designed for a game in which the gambler wins the stake if a coin comes up heads and loses it if the coin comes up tails. The strategy has the gambler double the bet after every loss, so that the first win recovers all previous losses plus a profit equal to the original stake. Thus the strategy is an instantiation of the St. Petersburg paradox.
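The arithmetic behind the strategy, and its hidden cost, can be sketched as follows (illustrative Python): after k consecutive losses the gambler has staked 1 + 2 + ... + 2^(k-1) = 2^k - 1 units, and the next winning bet of 2^k units recovers all of that plus one unit of profit, but the required stake grows exponentially with the losing streak.

```python
def net_after_streak(k, base_stake=1):
    """Net result of k consecutive losses followed by one win under the
    doubling strategy: losses of b, 2b, ..., b*2**(k-1), then a win of
    b*2**k.  The net is always exactly base_stake."""
    losses = sum(base_stake * 2**i for i in range(k))   # b * (2**k - 1)
    win = base_stake * 2**k
    return win - losses
```

Because the stake needed to continue doubles with every loss, a gambler with finite wealth is eventually ruined by a long enough losing streak, which is where the connection to the St. Petersburg paradox arises.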
In probability theory, an event is said to happen almost surely if it happens with probability 1. In other words, the set of outcomes on which the event does not occur has probability 0, even though the set might not be empty. The concept is analogous to the concept of "almost everywhere" in measure theory. In probability experiments on a finite sample space with a non-zero probability for each outcome, there is no difference between almost surely and surely ; however, this distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability 0.
The St. Petersburg paradox or St. Petersburg lottery is a paradox involving the game of flipping a coin where the expected payoff of the lottery game is infinite but nevertheless seems to be worth only a very small amount to the participants. The St. Petersburg paradox is a situation where a naïve decision criterion that takes only the expected value into account predicts a course of action that presumably no actual person would be willing to take. Several resolutions to the paradox have been proposed, including the impossible amount of money a casino would need to continue the game indefinitely.
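The divergence is easy to exhibit. In the standard version of the game, the first head on flip k pays 2^k units and occurs with probability 2^-k, so every term of the expectation contributes exactly 1 and the expected payoff truncated at n flips is n (a Python sketch, illustrative only):

```python
def st_petersburg_ev(max_flips):
    """Expected payoff of the St. Petersburg game truncated at max_flips:
    the first head on flip k pays 2**k with probability 2**-k, so each
    term contributes (0.5**k) * (2**k) = 1."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))
```

The truncated expectation grows without bound as the number of allowed flips grows, which is the sense in which the game's expected payoff is infinite.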
Coin flipping, coin tossing, or heads or tails is the practice of throwing a coin in the air and checking which side is showing when it lands, in order to randomly choose between two alternatives. It is a form of sortition which inherently has two possible outcomes. The party who calls the side that is facing up when the coin lands wins.
Data dredging is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thereby dramatically increasing the risk of false positives while understating it. This is done by performing many statistical tests on the data and reporting only those that come back with significant results.
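The effect can be demonstrated on pure noise. The following Python sketch (illustrative; the rough two-sided 5% test is a normal approximation) runs many "studies" on fair coin flips and counts how many appear significant by chance alone:

```python
import random

def dredge(n_tests=1000, n=100, seed=3):
    """Run n_tests 'studies' on pure noise: each flips a fair coin n times
    and declares significance if the head count deviates from n/2 by more
    than 1.96 standard deviations (a rough two-sided 5% test)."""
    rng = random.Random(seed)
    cutoff = 1.96 * (0.25 * n) ** 0.5   # sd of head count = sqrt(n*p*(1-p))
    hits = 0
    for _ in range(n_tests):
        heads = sum(rng.random() < 0.5 for _ in range(n))
        if abs(heads - n / 2) > cutoff:
            hits += 1
    return hits
```

Even though every "study" tests a coin that is genuinely fair, roughly 5% of them come back "significant"; reporting only those hits manufactures apparent findings from noise.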
In statistics, the question of checking whether a coin is fair is one whose importance lies, firstly, in providing a simple problem on which to illustrate basic ideas of statistical inference and, secondly, in providing a simple problem that can be used to compare various competing methods of statistical inference, including decision theory. The practical problem of checking whether a coin is fair might be considered as easily solved by performing a sufficiently large number of trials, but statistics and probability theory can provide guidance on two types of question; specifically those of how many trials to undertake and of the accuracy of an estimate of the probability of turning up heads, derived from a given sample of trials.
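The "how many trials" question has a standard back-of-the-envelope answer under a normal approximation: the worst-case standard error of the estimated heads probability is sqrt(0.25/n), so a confidence interval of half-width m at z standard errors requires roughly (z / 2m)^2 flips (a Python sketch, illustrative only):

```python
import math

def trials_needed(margin, confidence_z=1.96):
    """Flips needed so the estimate of P(heads) has a confidence interval
    of half-width `margin`, using the worst-case standard error
    sqrt(0.25 / n) and a normal approximation (z = 1.96 for ~95%)."""
    return math.ceil((confidence_z / (2 * margin)) ** 2)
```

For example, pinning down the heads probability to within ±0.01 at roughly 95% confidence takes on the order of ten thousand flips, while ±0.05 takes only a few hundred.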
Cromwell's rule, named by statistician Dennis Lindley, states that the use of prior probabilities of 1 or 0 should be avoided, except when applied to statements that are logically true or false, such as 2 + 2 equaling 4.
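The rule's motivation is visible directly in Bayes' theorem: a prior of exactly 0 or 1 cannot be moved by any evidence, however strong. A minimal Python sketch (illustrative):

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) from the prior P(H) and the likelihoods
    P(E|H) and P(E|not H)."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence
```

A prior of 0 yields a posterior of 0 and a prior of 1 yields a posterior of 1 regardless of the likelihoods, whereas any prior strictly between 0 and 1 responds to evidence; this immovability is exactly what Cromwell's rule warns against.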
The two envelopes problem, also known as the exchange paradox, is a paradox in probability theory. It is of special interest in decision theory and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge like the following example:
Imagine you are given two identical envelopes, each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?
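The switching argument that generates the paradox can be sketched as a naive expected-value computation. If the chosen envelope contains x, and the other envelope is assumed to contain 2x or x/2 with equal probability, then

```latex
E[\text{other}] = \tfrac{1}{2}(2x) + \tfrac{1}{2}\!\left(\tfrac{x}{2}\right) = \tfrac{5}{4}x > x,
```

which seems to recommend switching; but the same reasoning applies equally well after switching, which is the paradox. A common diagnosis is that the computation wrongly treats x as a fixed amount across the two cases.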
Credibility theory is a branch of actuarial mathematics concerned with determining risk premiums. To achieve this, it uses mathematical models in an effort to forecast the (expected) number of insurance claims based on past observations. Technically speaking, the problem is to find the best linear approximation to the mean of the Bayesian predictive density, which is why credibility theory has many results in common with linear filtering as well as Bayesian statistics more broadly.
The propensity theory of probability is a probability interpretation in which the probability is thought of as a physical propensity, disposition, or tendency of a given type of situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome.
In probability theory, Robbins' problem of optimal stopping, named after Herbert Robbins, is sometimes referred to as the fourth secretary problem or the problem of minimizing the expected rank with full information.
Let X1, ..., Xn be independent, identically distributed random variables, uniform on [0, 1]. We observe the Xk's sequentially and must stop on exactly one of them. No recall of preceding observations is permitted. What stopping rule minimizes the expected rank of the selected observation, and what is its corresponding value?
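The difficulty can be made concrete by estimating the expected rank achieved by a simple, deliberately suboptimal rule. In the Python sketch below, the threshold rule "stop at the first X_k below (k+1)/n" is an illustrative assumption, not the optimal rule (which remains unknown):

```python
import random

def expected_rank(n=10, trials=20_000, seed=4):
    """Monte Carlo estimate of the expected rank (1 = smallest) achieved
    by the illustrative rule: stop at the first X_k below (k+1)/n, or at
    X_n if no observation qualifies."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        chosen = n - 1                       # default: forced to take X_n
        for k, x in enumerate(xs):
            if x < (k + 1) / n:
                chosen = k
                break
        # Rank of the chosen value among all n observations.
        total += 1 + sum(x < xs[chosen] for x in xs)
    return total / trials
```

Even this crude rule achieves a small expected rank, and the open problem is to characterize the rule that minimizes it exactly.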
The "hot hand" is a phenomenon, previously considered a cognitive social bias, that a person who experiences a successful outcome has a greater chance of success in further attempts. The concept is often applied to sports and skill-based tasks in general and originates from basketball, where a shooter is thought to be more likely to score if their previous attempts were successful, i.e., while having the "hot hand". While previous success at a task can indeed change the psychological attitude and subsequent success rate of a player, researchers for many years did not find evidence for a "hot hand" in practice, dismissing it as fallacious. However, later research questioned whether the belief is indeed a fallacy. Some recent studies using modern statistical analysis have observed evidence for the "hot hand" in some sporting activities, while other recent studies have not. Moreover, evidence suggests that only a small subset of players may show a "hot hand" and, among those who do, its magnitude tends to be small.
In philosophy, Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighted by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.
Credence or degree of belief is a statistical term that expresses how much a person believes that a proposition is true. As an example, a reasonable person will believe with close to 50% credence that a fair coin will land on heads the next time it is flipped. If the prize for correctly predicting the coin flip is $100, then a reasonable risk-neutral person will wager $49 on heads, but will not wager $51 on heads.
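The wager example reduces to a one-line expected-value computation (Python sketch, illustrative only):

```python
def wager_ev(wager, prize=100, p_win=0.5):
    """Expected value of betting `wager` on a fair coin flip when a
    correct call pays `prize` and an incorrect call loses the wager."""
    return p_win * (prize - wager) - (1 - p_win) * wager
```

With a $100 prize and a fair coin, the expected value is 50 minus the wager: positive at $49, zero at $50, and negative at $51, which is why a risk-neutral person with 50% credence accepts the first bet and declines the last.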
Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) is a book by philosopher Nick Bostrom. Bostrom investigates how to reason when one suspects that evidence is biased by "observation selection effects", in other words, when the evidence presented has been pre-filtered by the condition that there was some appropriately positioned observer to "receive" the evidence. This conundrum is sometimes called the "anthropic principle", "self-locating belief", or "indexical information".
Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved.
Arnold Stuart Zuboff is an American philosopher who is the original formulator of the Sleeping Beauty problem. He has worked on topics such as personal identity, the philosophy of mind, ethics, metaphysics, epistemology, and the philosophy of probability, and he defends a view analogous to open individualism (the position that there is one subject of experience, who is everyone), which he calls "universalism".