The two envelopes problem, also known as the exchange paradox, is a paradox in probability theory. It is of special interest in decision theory and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge like the following example:
Imagine you are given two identical envelopes, each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?
Since the situation is symmetric, it seems obvious that there is no point in switching envelopes. On the other hand, a simple calculation using expected values suggests the opposite conclusion, that it is always beneficial to swap envelopes, since the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have. [1]
A person is given two indistinguishable envelopes, each of which contains a sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead. [1]
Now suppose the person reasons as follows:
1. Denote by A the amount in the player's selected envelope.
2. The probability that A is the smaller amount is 1/2, and that it is the larger amount is also 1/2.
3. The other envelope may contain either 2A or A/2.
4. If A is the smaller amount, then the other envelope contains 2A.
5. If A is the larger amount, then the other envelope contains A/2.
6. Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2.
7. So the expected value of the money in the other envelope is 1/2(2A) + 1/2(A/2) = 5A/4.
8. This is greater than A, so, on average, the person stands to gain by swapping.
9. After the switch, denote the content of the now-held envelope by B and reason in exactly the same manner as above.
10. The person concludes that the most rational thing to do is to swap back again.
11. The person will thus end up swapping envelopes indefinitely.
12. As it seems more rational to open just any envelope than to swap indefinitely, the player arrives at a contradiction.
The puzzle is to find the flaw in this line of reasoning. This includes determining exactly why and under what conditions the offending step is not correct, so as to be sure not to make the same mistake in a situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. The puzzle is not solved by finding another way to calculate the probabilities that does not lead to a contradiction.
There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in voluminous literature on the subject. [2]
No proposed solution is widely accepted as definitive. [3] Despite this, it is common for authors to claim that the solution to the problem is easy, even elementary. [4] On closer inspection, however, these elementary solutions often differ from one author to the next.
Suppose that the total amount in both envelopes is a constant c = 3x, with x in one envelope and 2x in the other. If you select the envelope with x first, you gain the amount x by swapping. If you select the envelope with 2x first, you lose the amount x by swapping. So you gain on average (1/2)(x) + (1/2)(-x) = 0 by swapping.
So on this supposition that the total amount is fixed, swapping is not better than keeping. The expected value is the same for both the envelopes. Thus no contradiction exists. [5]
The paradox arises from confusing the situation in which the total amount in the two envelopes is fixed with the situation in which the amount in one envelope is fixed and the other can be either double or half that amount. The problem as posed presents two already appointed and already locked envelopes, where one envelope is already locked with twice the amount of the other already locked envelope. Whereas step 6 boldly claims "Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2", in the given situation that claim can never apply to any A nor to any average A.
This claim is never correct for the situation presented; it applies to the Nalebuff asymmetric variant only (see below). In the situation presented, the other envelope cannot generally contain 2A, but can contain 2A only in the very specific instance where envelope A, by chance, contains the smaller of the two amounts, (A + B)/3, and nowhere else. The other envelope cannot generally contain A/2, but can contain A/2 only in the very specific instance where envelope A, by chance, actually contains the larger amount, 2(A + B)/3, and nowhere else. The difference between the two already appointed and locked envelopes is always (A + B)/3. No "average amount A" can ever form any initial basis for any expected value, as this does not get to the heart of the problem. [6]
A widely discussed way to resolve the paradox, both in the popular literature and in part of the academic literature, especially in philosophy, is to assume that the 'A' in step 7 is intended to be the expected value in envelope A and that we intended to write down a formula for the expected value in envelope B.
Step 7 states that the expected value in B = 1/2(2A + A/2).
It is pointed out that the 'A' in the first part of the formula is the expected value of the amount in envelope A, given that envelope A contains less than envelope B, while the 'A' in the second part of the formula is the expected value of the amount in envelope A, given that envelope A contains more than envelope B. The flaw in the argument is that the same symbol is used with two different meanings in the two parts of the same calculation but is assumed to have the same value in both cases. This line of argument is due to McGrew, Shier and Silverstein (1997). [7]
A correct calculation would be:
E(B) = E(B | A is larger) P(A is larger) + E(B | A is smaller) P(A is smaller).
If we then take the sum in one envelope to be x and the sum in the other to be 2x, the expected value calculation becomes:
E(B) = (1/2)(x) + (1/2)(2x) = 3x/2,
which is equal to the expected sum in A.
In non-technical language, what goes wrong (see Necktie paradox) is that, in the scenario provided, the mathematics uses relative values of A and B (that is, it assumes that one would gain more money if A is less than B than one would lose if the opposite were true). However, the two values of money are fixed (one envelope contains, say, $20 and the other $40). If the values of the envelopes are restated as x and 2x, it is much easier to see that, if A were greater, one would lose x by switching and, if B were greater, one would gain x by switching. One does not gain a greater amount of money by switching because the total T of A and B (3x) remains the same, and the difference x is fixed at T/3.
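The point can be checked numerically. The following simulation is an illustrative sketch, not taken from the cited sources; the smaller amount x = 20 and the trial count are arbitrary choices. It deals the fixed amounts x and 2x into the two envelopes at random and estimates the average contents of each envelope and the average effect of switching.

```python
import random

# Illustrative sketch (not from the cited sources): fix the two amounts as x and 2x,
# deal them into envelopes A and B at random, and estimate the average contents of
# each envelope and the average change produced by switching.
x = 20.0                      # hypothetical smaller amount
trials = 100_000
total_a = total_b = gain = 0.0

for _ in range(trials):
    amounts = [x, 2 * x]
    random.shuffle(amounts)   # which amount ends up in envelope A is random
    a, b = amounts
    total_a += a
    total_b += b
    gain += b - a             # what switching from A to B would change

print(total_a / trials)       # ~1.5 * x, the expected amount in A
print(total_b / trials)       # ~1.5 * x, the expected amount in B
print(gain / trials)          # ~0: no average gain from switching
```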
Line 7 should have been worked out more carefully as follows:
E(B) = E(B | A < B) P(A < B) + E(B | A > B) P(A > B)
  = E(2A | A < B)(1/2) + E(A/2 | A > B)(1/2)
  = E(A | A < B) + (1/4) E(A | A > B).
A will be larger when A is larger than B than when it is smaller than B. So its average values (expectation values) in those two cases are different. And the average value of A is not the same as A itself, anyway. Two mistakes are being made: the writer forgot he was taking expectation values, and he forgot he was taking expectation values under two different conditions.
It would have been easier to compute E(B) directly. Denoting the lower of the two amounts by x, and taking it to be fixed (even if unknown), we find that
E(B) = (1/2)(2x) + (1/2)(x) = 3x/2.
We learn that 1.5x is the expected value of the amount in Envelope B. By the same calculation it is also the expected value of the amount in Envelope A. They are the same hence there is no reason to prefer one envelope to the other. This conclusion was, of course, obvious in advance; the point is that we identified the false step in the argument for switching by explaining exactly where the calculation being made there went off the rails.
We could also continue from the correct but difficult to interpret result of the development in line 7:
E(B) = E(A | A < B) + (1/4) E(A | A > B) = x + (1/4)(2x) = 3x/2,
so (of course) different routes to calculate the same thing all give the same answer.
Tsikogiannopoulos presented a different way to do these calculations. [9] It is by definition correct to assign equal probabilities to the events that the other envelope contains double or half the amount A in the player's envelope, so the "switching argument" is correct up to step 6. Given that the player's envelope contains the amount A, he differentiates the actual situation into two different games: the first game would be played with the amounts (A, 2A) and the second game with the amounts (A/2, A). Only one of them is actually played, but we do not know which one. These two games need to be treated differently. If the player wants to compute his/her expected return (profit or loss) in case of exchange, he/she should weigh the return derived from each game by the average amount in the two envelopes in that particular game. In the first case the profit would be A with an average amount of 3A/2, whereas in the second case the loss would be A/2 with an average amount of 3A/4. So the formula for the expected return in case of exchange, seen as a proportion of the total amount in the two envelopes, is:
E = (1/2)(+A)/(3A/2) + (1/2)(-A/2)/(3A/4) = 0.
This result means yet again that the player has to expect neither profit nor loss by exchanging his/her envelope.
We could actually open our envelope before deciding whether or not to switch, and the above formula would still give us the correct expected return. For example, if we opened our envelope and saw that it contained 100 euros, then we would set A = 100 in the above formula and the expected return in case of switching would be:
E = (1/2)(+100)/150 + (1/2)(-50)/75 = 0.
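The same calculation can be written out in a few lines of code. This is an illustrative sketch; A = 100 follows the worked example above, and the variable names are ours rather than Tsikogiannopoulos's notation.

```python
# Illustrative sketch of the proportional-return calculation above (A = 100 euros,
# as in the worked example; variable names are ours).
A = 100.0

# Game 1: amounts (A, 2A); switching gains A; the average amount at stake is 3A/2.
return_game1 = A / (3 * A / 2)
# Game 2: amounts (A/2, A); switching loses A/2; the average amount at stake is 3A/4.
return_game2 = (-A / 2) / (3 * A / 4)

expected_return = 0.5 * return_game1 + 0.5 * return_game2
print(expected_return)  # 0.0: neither profit nor loss, as a proportion of the stake
```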
The mechanism by which the amounts of the two envelopes are determined is crucial for the decision of the player to switch her envelope. [9] [10] Suppose that the amounts in the two envelopes A and B were not determined by first fixing the contents of two envelopes E1 and E2, and then naming them A and B at random (for instance, by the toss of a fair coin [11] ). Instead, we start right at the beginning by putting some amount in envelope A and then fill B in a way which depends both on chance (the toss of a coin) and on what we put in A. Suppose that first of all the amount a in envelope A is fixed in some way or other, and then the amount in Envelope B is fixed, dependent on what is already in A, according to the outcome of a fair coin. If the coin fell heads then 2a is put in Envelope B; if the coin fell tails then a/2 is put in Envelope B. If the player is aware of this mechanism, knows that she holds Envelope A, but does not know the outcome of the coin toss and does not know a, then the switching argument is correct and she is recommended to switch envelopes. This version of the problem was introduced by Nalebuff (1988) and is often called the Ali Baba problem. Notice that there is no need to look in envelope A in order to decide whether or not to switch.
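A short simulation illustrates why switching genuinely pays in this asymmetric mechanism. This is an illustrative sketch; the fixed amount a = 40 and the trial count are arbitrary choices.

```python
import random

# Illustrative sketch of the Nalebuff (Ali Baba) mechanism described above: envelope A
# is filled first with a fixed amount a, then a fair coin decides whether envelope B
# receives 2a or a/2.  (The value a = 40 is an arbitrary choice.)
a = 40.0
trials = 100_000
total_b = 0.0

for _ in range(trials):
    heads = random.random() < 0.5
    total_b += 2 * a if heads else a / 2

print(total_b / trials)   # ~1.25 * a: here switching really is worth 25% on average
```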
Many more variants of the problem have been introduced. Nickerson and Falk systematically survey a total of 8. [11]
The simple resolution above assumed that the person who invented the argument for switching was trying to calculate the expectation value of the amount in Envelope A, thinking of the two amounts in the envelopes as fixed (x and 2x). The only uncertainty is which envelope has the smaller amount x. However, many mathematicians and statisticians interpret the argument as an attempt to calculate the expected amount in Envelope B, given a real or hypothetical amount "A" in Envelope A. One does not need to look in the envelope to see how much is in there in order to do the calculation. If the result of the calculation is advice to switch envelopes, whatever amount might be in there, then it would appear that one should switch anyway, without looking. In this case, at Steps 6, 7 and 8 of the reasoning, "A" is any fixed possible value of the amount of money in the first envelope.
This interpretation of the two envelopes problem appears in the first publications in which the paradox was introduced in its present-day form, Gardner (1989) and Nalebuff (1988). [12] It is common in the more mathematical literature on the problem. It also applies to the modification of the problem (which seems to have started with Nalebuff) in which the owner of envelope A does actually look in his envelope before deciding whether or not to switch; though Nalebuff does also emphasize that there is no need to have the owner of envelope A look in his envelope. If he imagines looking in it, and if for any amount which he can imagine being in there, he has an argument to switch, then he will decide to switch anyway. Finally, this interpretation was also the core of earlier versions of the two envelopes problem (Littlewood's, Schrödinger's, and Kraitchik's switching paradoxes); see the concluding section, on the history of TEP.
This kind of interpretation is often called "Bayesian" because it assumes the writer is also incorporating a prior probability distribution of possible amounts of money in the two envelopes in the switching argument.
The simple resolution depended on a particular interpretation of what the writer of the argument is trying to calculate: namely, it assumed he was after the (unconditional) expectation value of what's in Envelope B. In the mathematical literature on the two envelopes problem, a different interpretation is more common, involving the conditional expectation value (conditional on what might be in Envelope A). To solve this and related interpretations or versions of the problem, most authors use the Bayesian interpretation of probability, which means that probability reasoning is not only applied to truly random events like the random pick of an envelope, but also to our knowledge (or lack of knowledge) about things which are fixed but unknown, like the two amounts originally placed in the two envelopes, before one is picked at random and called "Envelope A". Moreover, according to a long tradition going back at least to Laplace and his principle of insufficient reason, one is supposed to assign equal probabilities when one has no knowledge at all concerning the possible values of some quantity. Thus the fact that we are not told anything about how the envelopes are filled can already be converted into probability statements about these amounts: no information means that probabilities are equal.
In steps 6 and 7 of the switching argument, the writer imagines that envelope A contains a certain amount a, and then seems to believe that, given that information, the other envelope would be equally likely to contain twice or half that amount. That assumption can only be correct if, prior to knowing what was in Envelope A, the writer would have considered the following two pairs of values for both envelopes equally likely: the amounts a/2 and a, and the amounts a and 2a. (This follows from Bayes' rule in odds form: posterior odds equal prior odds times likelihood ratio.) But now we can apply the same reasoning, imagining not a but a/2 in Envelope A. And similarly for 2a. And similarly, ad infinitum, repeatedly halving or repeatedly doubling as many times as you like. [13]
Suppose, for the sake of argument, we start by imagining an amount of 32 in Envelope A. In order that the reasoning in steps 6 and 7 is correct whatever amount happened to be in Envelope A, we apparently believe in advance that the following ten amounts are all equally likely to be the smaller of the two amounts in the two envelopes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 (equally likely powers of 2 [13] ). But going to even larger or even smaller amounts, the "equally likely" assumption starts to appear a bit unreasonable. Suppose we stop, just with these ten equally likely possibilities for the smaller amount in the two envelopes. In that case, the reasoning in steps 6 and 7 was entirely correct if envelope A happened to contain any of the amounts 2, 4, ..., 512: switching envelopes would give an expected (average) gain of 25%. If envelope A happened to contain the amount 1, then the expected gain is actually 100%. But if it happened to contain the amount 1024, a massive loss of 50% (of a rather large amount) would have been incurred. That only happens once in twenty times, but it is exactly enough to balance the expected gains in the other 19 out of 20 times.
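This balancing can be verified exactly. The following is an illustrative sketch that simply encodes the ten-value prior described above and computes the overall expected gain from switching together with the conditional gain at an interior value such as A = 2.

```python
from fractions import Fraction

# Exact check of the worked example above: the smaller amount is one of the ten
# powers of two 1, 2, ..., 512, each with probability 1/10, and envelope A is
# equally likely to hold the smaller or the larger amount of its pair.
smaller_amounts = [2 ** n for n in range(10)]            # 1 .. 512
half = Fraction(1, 2)
tenth = Fraction(1, 10)

# Joint distribution over (probability, amount in A, amount in B).
joint = []
for x in smaller_amounts:
    joint.append((tenth * half, Fraction(x), Fraction(2 * x)))   # A holds the smaller
    joint.append((tenth * half, Fraction(2 * x), Fraction(x)))   # A holds the larger

expected_gain = sum(p * (b - a) for p, a, b in joint)
print(expected_gain)      # 0: the rare 50% loss at A = 1024 cancels all the gains

# Conditional expected gain given A = 2 (any amount 2, 4, ..., 512 behaves alike):
num = sum(p * (b - a) for p, a, b in joint if a == 2)
den = sum(p for p, a, b in joint if a == 2)
print(num / den)          # 1/2, i.e. a 25% expected gain on the amount A = 2
```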
Alternatively, we do go on ad infinitum but now we are working with a quite ludicrous assumption, implying for instance, that it is infinitely more likely for the amount in envelope A to be smaller than 1, and infinitely more likely to be larger than 1024, than between those two values. This is a so-called improper prior distribution: probability calculus breaks down; expectation values are not even defined. [13]
Many authors have also pointed out that if a maximum sum that can be put in the envelope with the smaller amount exists, then it is very easy to see that Step 6 breaks down, since if the player holds more than the maximum sum that can be put into the "smaller" envelope they must hold the envelope containing the larger sum, and are thus certain to lose by switching. This may not occur often, but when it does, the heavy loss the player incurs means that, on average, there is no advantage in switching. Some writers consider that this resolves all practical cases of the problem. [14]
But the problem can also be resolved mathematically without assuming a maximum amount. Nalebuff, [14] Christensen and Utts, [15] Falk and Konold, [13] Blachman, Christensen and Utts, [16] Nickerson and Falk, [11] pointed out that if the amounts of money in the two envelopes have any proper probability distribution representing the player's prior beliefs about the amounts of money in the two envelopes, then it is impossible that whatever the amount A=a in the first envelope might be, it would be equally likely, according to these prior beliefs, that the second contains a/2 or 2a. Thus step 6 of the argument, which leads to always switching, is a non-sequitur, also when there is no maximum to the amounts in the envelopes.
The first two resolutions discussed above (the "simple resolution" and the "Bayesian resolution") correspond to two possible interpretations of what is going on in step 6 of the argument. They both assume that step 6 is indeed "the bad step". But the description in step 6 is ambiguous. Is the author after the unconditional (overall) expectation value of what is in envelope B (perhaps conditional on the smaller amount x), or is he after the conditional expectation of what is in envelope B, given any possible amount a which might be in envelope A? Thus, there are two main interpretations of the intention of the composer of the paradoxical argument for switching, and two main resolutions.
A large literature has developed concerning variants of the problem. [17] [18] The standard assumption about the way the envelopes are set up is that a sum of money is in one envelope, and twice that sum is in another envelope. One of the two envelopes is randomly given to the player (envelope A). The originally proposed problem does not make clear exactly how the smaller of the two sums is determined, what values it could possibly take and, in particular, whether there is a minimum or a maximum sum it might contain. [19] [20] However, if we are using the Bayesian interpretation of probability, then we start by expressing our prior beliefs as to the smaller amount in the two envelopes through a probability distribution. Lack of knowledge can also be expressed in terms of probability.
A first variant within the Bayesian version is to come up with a proper prior probability distribution of the smaller amount of money in the two envelopes, such that when Step 6 is performed properly, the advice is still to prefer Envelope B, whatever might be in Envelope A. So though the specific calculation performed in step 6 was incorrect (there is no proper prior distribution such that, given what is in the first envelope A, the other envelope is always equally likely to be larger or smaller), a correct calculation, depending on what prior we are using, does lead to the result E(B | A = a) > a for all possible values of a. [21]
In these cases, it can be shown that the expected sum in both envelopes is infinite. There is no gain, on average, in swapping.
Though Bayesian probability theory can resolve the first mathematical interpretation of the paradox above, it turns out that examples can be found of proper probability distributions, such that the expected value of the amount in the second envelope, conditioned on the amount in the first, does exceed the amount in the first, whatever it might be. The first such example was already given by Nalebuff. [14] See also Christensen and Utts (1992). [15] [22] [23] [24]
Denote again the amount of money in the first envelope by A and that in the second by B. We think of these as random. Let X be the smaller of the two amounts and Y=2X be the larger. Notice that once we have fixed a probability distribution for X then the joint probability distribution of A, B is fixed, since A, B = X, Y or Y, X each with probability 1/2, independently of X, Y.
The bad step 6 in the "always switching" argument led us to the finding E(B | A = a) > a for all a, and hence to the recommendation to switch, whether or not we know a. Now, it turns out that one can quite easily invent proper probability distributions for X, the smaller of the two amounts of money, such that this bad conclusion is still true. One example is analyzed in more detail in a moment.
As mentioned before, it cannot be true that whatever a, given A=a, B is equally likely to be a/2 or 2a, but it can be true that whatever a, given A=a, B is larger in expected value than a.
Suppose for example that the envelope with the smaller amount actually contains 2^n dollars with probability 2^n/3^(n+1), where n = 0, 1, 2, ... These probabilities sum to 1, hence the distribution is a proper prior (for subjectivists) and a completely decent probability law also for frequentists. [25]
Imagine what might be in the first envelope. A sensible strategy would certainly be to swap when the first envelope contains 1, as the other must then contain 2. Suppose on the other hand the first envelope contains 2. In that case there are two possibilities: the envelope pair in front of us is either {1, 2} or {2, 4}. All other pairs are impossible. The conditional probability that we are dealing with the {1, 2} pair, given that the first envelope contains 2, is
P({1, 2} | A = 2) = (1/2 × 1/3) / (1/2 × 1/3 + 1/2 × 2/9) = 3/5,
and consequently the probability it's the {2, 4} pair is 2/5, since these are the only two possibilities. In this derivation, 1/2 × 1/3 is the probability that the envelope pair is the pair 1 and 2, and envelope A happens to contain 2; 1/2 × 2/9 is the probability that the envelope pair is the pair 2 and 4, and (again) envelope A happens to contain 2. Those are the only two ways that envelope A can end up containing the amount 2.
It turns out that these proportions hold in general unless the first envelope contains 1. Denote by a the amount we imagine finding in Envelope A, if we were to open that envelope, and suppose that a = 2n for some n ≥ 1. In that case the other envelope contains a/2 with probability 3/5 and 2a with probability 2/5.
So either the first envelope contains 1, in which case the conditional expected amount in the other envelope is 2, or the first envelope contains a > 1, and though the second envelope is more likely to be smaller than larger, its conditionally expected amount is larger: the conditionally expected amount in Envelope B is
E(B | A = a) = (3/5)(a/2) + (2/5)(2a) = 11a/10,
which is more than a. This means that the player who looks in envelope A would decide to switch whatever he saw there. Hence there is no need to look in envelope A to make that decision.
This conclusion is just as clearly wrong as it was in the preceding interpretations of the Two Envelopes Problem. But now the flaws noted above do not apply; the a in the expected value calculation is a constant and the conditional probabilities in the formula are obtained from a specified and proper prior distribution.
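The 3/5 and 2/5 split and the conditional expectation 11a/10 can be checked exactly with a few lines of arithmetic. This is an illustrative sketch; the choice A = 32 stands in for any a = 2^n with n ≥ 1.

```python
from fractions import Fraction

# Exact check of the example above: the smaller amount is 2**n dollars with
# probability 2**n / 3**(n + 1), and either member of the pair is equally likely
# to be the one called "A".  We verify the 3/5 vs 2/5 split and E(B | A = a) = 11a/10.
def prior(n):                       # probability that the smaller amount is 2**n
    return Fraction(2 ** n, 3 ** (n + 1))

half = Fraction(1, 2)
n = 5                               # examine A = 2**5 = 32 (any n >= 1 works)
a = Fraction(2 ** n)

# Envelope A contains a either as the larger of {a/2, a} or the smaller of {a, 2a}.
p_a_is_larger = prior(n - 1) * half
p_a_is_smaller = prior(n) * half

p_other_is_half = p_a_is_larger / (p_a_is_larger + p_a_is_smaller)
print(p_other_is_half, 1 - p_other_is_half)          # 3/5 and 2/5

e_b = p_other_is_half * (a / 2) + (1 - p_other_is_half) * (2 * a)
print(e_b, e_b / a)                                  # 176/5, ratio 11/10, i.e. 11a/10
```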
Most writers think that the new paradox can be defused, although the resolution requires concepts from mathematical economics. [26] Suppose E(B | A = a) > a for all a. It can be shown that this is possible for some probability distributions of X (the smaller amount of money in the two envelopes) only if E(X) = ∞, that is, only if the mean of all possible values of money in the envelopes is infinite. To see why, compare the series described above, in which the probability of each value of X is 2/3 as likely as the previous one, with one in which the probability of each value of X is only 1/3 as likely as the previous one. When the probability of each subsequent term is greater than one-half of the probability of the term before it (and each value of X is twice the one before it) the mean is infinite, but when the probability factor is less than one-half, the mean converges. In the cases where the probability factor is less than one-half, E(B | A = a) < a for all a other than the first, smallest a, and the total expected value of switching converges to 0. In addition, if an ongoing distribution with a probability factor greater than one-half is made finite by, after any number of terms, establishing a final term with "all the remaining probability", that is, 1 minus the probability of all previous terms, the expected value of switching with respect to the probability that A is equal to the last, largest a will exactly negate the sum of the positive expected values that came before, and again the total expected value of switching drops to 0 (this is the general case of setting out an equal probability of a finite set of values in the envelopes described above). Thus, the only distributions that seem to point to a positive expected value for switching are those in which E(X) = ∞. Averaging over a, it follows that E(B) = E(A) = ∞ (because A and B have identical probability distributions, by symmetry, and both A and B are greater than or equal to X).
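The contrast between the two geometric priors can be made concrete. The sketch below is illustrative only (the truncation points are arbitrary): it shows the truncated mean of X diverging when each term is 2/3 as likely as the previous one and converging when the factor is 1/3, and it shows that with the factor-1/3 prior the conditional expectation of the other envelope falls below a.

```python
from fractions import Fraction

# Illustrative sketch comparing the two geometric priors mentioned above.  X = 2**n is
# the smaller amount; in the first prior each term is 2/3 as likely as the one before
# (the 2**n / 3**(n+1) family), in the second only 1/3 as likely.
def truncated_mean(ratio, terms):
    # E(X) truncated after `terms` values, with P(X = 2**n) proportional to ratio**n.
    norm = sum(ratio ** n for n in range(terms))
    return sum((2 ** n) * (ratio ** n) for n in range(terms)) / norm

for terms in (10, 20, 40):
    print(terms,
          float(truncated_mean(Fraction(2, 3), terms)),   # keeps growing: E(X) is infinite
          float(truncated_mean(Fraction(1, 3), terms)))   # settles near 2: E(X) is finite

# With the factor-1/3 prior, switching is unfavourable for every A = 2**k with k >= 1:
ratio = Fraction(1, 3)
k = 4
p_below = ratio ** (k - 1) / (ratio ** (k - 1) + ratio ** k)   # probability the pair is {a/2, a}
a = Fraction(2 ** k)
print(p_below * (a / 2) + (1 - p_below) * (2 * a))             # 14 = 7a/8 < a = 16
```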
If we do not look into the first envelope, then clearly there is no reason to switch, since we would be exchanging one unknown amount of money (A), whose expected value is infinite, for another unknown amount of money (B), with the same probability distribution and infinite expected value. However, if we do look into the first envelope, then for all values observed (A = a) we would want to switch because E(B | A = a) > a for all a. As noted by David Chalmers, this problem can be described as a failure of dominance reasoning. [27]
Under dominance reasoning, the fact that we strictly prefer B to A for all possible observed values a should imply that we strictly prefer B to A without observing a; however, as already shown, that is not true because E(B) = E(A) = ∞. To salvage dominance reasoning while allowing E(A) = ∞, one would have to replace expected value as the decision criterion, thereby employing a more sophisticated argument from mathematical economics.
For example, we could assume the decision maker is an expected utility maximizer with initial wealth W whose utility function, U(w), is chosen to satisfy E(U(W + B) | A = a) < U(W + a) for at least some values of a (that is, holding on to A is strictly preferred to switching to B for some a). Although this is not true for all utility functions, it would be true if U(w) had an upper bound u < ∞ as w increased toward infinity (a common assumption in mathematical economics and decision theory). [28] Michael R. Powers provides necessary and sufficient conditions for the utility function to resolve the paradox, and notes that neither U(w) ≤ u < ∞ nor E(X) < ∞ is required. [29]
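As a purely illustrative sketch of how a bounded utility can break the preference for switching (the exponential utility, the scale parameter, and the initial wealth are our assumptions, not Powers's conditions), take the 3/5 versus 2/5 conditional probabilities from the example distribution above and compare switching with holding for several observed amounts a:

```python
import math

# Illustrative sketch only (the exponential utility, scale, and wealth are assumptions,
# not Powers's conditions): with a bounded utility U(w) = 1 - exp(-w/scale) and the
# 3/5 vs 2/5 conditional probabilities of the example distribution above, switching
# still looks attractive for small observed amounts a but not for large ones.
def utility(w, scale=100.0):
    return 1.0 - math.exp(-w / scale)

W = 0.0                         # hypothetical initial wealth
for a in (1, 10, 100, 1000, 10000):
    switch = 0.6 * utility(W + a / 2) + 0.4 * utility(W + 2 * a)
    hold = utility(W + a)
    print(a, "switch" if switch > hold else "hold")
```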
Some writers would prefer to argue that in a real-life situation, U(w) and w are bounded simply because the amount of money in an envelope is bounded by the total amount of money in the world (M), implying U(w) ≤ U(M) and w ≤ M. From this perspective, the second paradox is resolved because the postulated probability distribution for X (with E(X) = ∞) cannot arise in a real-life situation. Similar arguments are often used to resolve the St. Petersburg paradox.
As mentioned above, any distribution producing this variant of the paradox must have an infinite mean. So before the player opens an envelope the expected gain from switching is "∞ − ∞", which is not defined. In the words of David Chalmers, this is "just another example of a familiar phenomenon, the strange behavior of infinity". [27] Chalmers suggests that decision theory generally breaks down when confronted with games having a diverging expectation, and compares it with the situation generated by the classical St. Petersburg paradox.
However, Clark and Shackel argue that blaming it all on "the strange behavior of infinity" does not resolve the paradox at all, neither in the single case nor in the averaged case. They provide a simple example of a pair of random variables both having infinite mean but where it is clearly sensible to prefer one to the other, both conditionally and on average. [30] They argue that decision theory should be extended so as to allow infinite expectation values in some situations.
The logician Raymond Smullyan questioned if the paradox has anything to do with probabilities at all. [31] He did this by expressing the problem in a way that does not involve probabilities. The following plainly logical arguments lead to conflicting conclusions:
A number of solutions have been put forward. Careful analyses have been made by some logicians. Though solutions differ, they all pinpoint semantic issues concerned with counterfactual reasoning. We want to compare the amount that we would gain by switching if we would gain by switching, with the amount we would lose by switching if we would indeed lose by switching. However, we cannot both gain and lose by switching at the same time. We are asked to compare two incompatible situations. Only one of them can factually occur, the other is a counterfactual situation—somehow imaginary. To compare them at all, we must somehow "align" the two situations, providing some definite points in common.
James Chase argues that the second argument is correct because it corresponds to the way of aligning the two situations (one in which we gain, the other in which we lose) that is indicated by the problem description. [32] Bernard Katz and Doris Olin also argue for this point of view. [33] In the second argument, we consider the amounts of money in the two envelopes as being fixed; what varies is which one is first given to the player. Because that was an arbitrary and physical choice, the counterfactual world in which the player, counterfactually, received the other envelope from the one he was actually (factually) given is a highly meaningful counterfactual world, and hence the comparison between gains and losses in the two worlds is meaningful. This comparison is uniquely indicated by the problem description, in which two amounts of money are put in the two envelopes first, and only after that is one chosen arbitrarily and given to the player. In the first argument, however, we consider the amount of money in the envelope first given to the player as fixed and consider the situations where the second envelope contains either half or twice that amount. This would only be a reasonable counterfactual world if in reality the envelopes had been filled as follows: first, some amount of money is placed in the specific envelope that will be given to the player; and secondly, by some arbitrary process, the other envelope is filled (arbitrarily or randomly) either with double or with half of that amount of money.
Byeong-Uk Yi, on the other hand, argues that comparing the amount you would gain if you would gain by switching with the amount you would lose if you would lose by switching is a meaningless exercise from the outset. [34] According to his analysis, all three implications (switch, indifferent, do not switch) are incorrect. He analyses Smullyan's arguments in detail, showing that intermediate steps are being taken, and pinpointing exactly where an incorrect inference is made according to his formalization of counterfactual inference. An important difference with Chase's analysis is that he does not take account of the part of the story where we are told that the envelope called envelope A is decided completely at random. Thus, Chase puts probability back into the problem description in order to conclude that arguments 1 and 3 are incorrect, argument 2 is correct, while Yi keeps "two envelope problem without probability" completely free of probability and comes to the conclusion that there are no reasons to prefer any action. This corresponds to the view of Albers et al., that without a probability ingredient, there is no way to argue that one action is better than another, anyway.
Bliss argues that the source of the paradox is that when one mistakenly believes in the possibility of a larger payoff that does not, in actuality, exist, one is mistaken by a larger margin than when one believes in the possibility of a smaller payoff that does not actually exist. [35] If, for example, the envelopes contained $5.00 and $10.00 respectively, a player who opened the $10.00 envelope would expect the possibility of a $20.00 payout that simply does not exist. Were that player to open the $5.00 envelope instead, he would believe in the possibility of a $2.50 payout, which constitutes a smaller deviation from the true value; this results in the paradoxical discrepancy.
Albers, Kooi, and Schaafsma consider that without adding probability (or other) ingredients to the problem, [18] Smullyan's arguments do not give any reason to swap or not to swap, in any case. Thus, there is no paradox. This dismissive attitude is common among writers from probability and economics: Smullyan's paradox arises precisely because he takes no account whatever of probability or utility.
As an extension to the problem, consider the case where the player is allowed to look in envelope A before deciding whether to switch. In this "conditional switching" problem, it is often possible to generate a gain over the "never switching" strategy, depending on the probability distribution of the envelopes. [36]
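One well-known strategy of this kind uses a randomized threshold: open envelope A and switch only if its amount falls below a randomly drawn threshold. The simulation below is an illustrative sketch (the exponential threshold with mean 30, the amounts 20 and 40, and the trial count are arbitrary assumptions); it shows the threshold rule doing strictly better on average than never switching.

```python
import random

# Illustrative sketch of a "conditional switching" rule: look in envelope A and switch
# only if its amount falls below a randomly drawn threshold.  The exponential threshold
# (mean 30), the amounts 20 and 40, and the trial count are arbitrary assumptions.
def play(trials=200_000, x=20.0):
    never, threshold_rule = 0.0, 0.0
    for _ in range(trials):
        amounts = [x, 2 * x]
        random.shuffle(amounts)
        a, b = amounts
        never += a                              # never switch: keep envelope A
        z = random.expovariate(1.0 / 30.0)      # random threshold with mean 30
        threshold_rule += b if a < z else a     # switch only when A looks "small"
    return never / trials, threshold_rule / trials

print(play())   # first value ~1.5 * x; second value is strictly larger on average
```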
The envelope paradox dates back at least to 1953, when the Belgian mathematician Maurice Kraitchik proposed a puzzle in his book Recreational Mathematics concerning two equally rich men who meet and compare their beautiful neckties, presents from their wives, wondering which tie actually cost more money. He also introduces a variant in which the two men compare the contents of their purses. He assumes that each purse is equally likely to contain 1 up to some large number x of pennies, the total number of pennies minted to date. The men do not look in their purses, but each reasons that they should switch. He does not explain what the error in their reasoning is. It is not clear whether the puzzle already appeared in an earlier 1942 edition of his book. It is also mentioned in a 1953 book on elementary mathematics and mathematical puzzles by the mathematician John Edensor Littlewood, who credited it to the physicist Erwin Schrödinger. In that version it concerns a pack of cards: each card has two numbers written on it, the player gets to see a random side of a random card, and the question is whether one should turn over the card. Littlewood's pack of cards is infinitely large and his paradox is a paradox of improper prior distributions.
Martin Gardner popularized Kraitchik's puzzle in his 1982 book Aha! Gotcha, in the form of a wallet game:
Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2A. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?
— Martin Gardner, Aha! Gotcha
Gardner confessed that though, like Kraitchik, he could give a sound analysis leading to the right answer (there is no point in switching), he could not clearly put his finger on what was wrong with the reasoning for switching, and Kraitchik did not give any help in this direction, either.
In 1988 and 1989, Barry Nalebuff presented two different two-envelope problems, each with one envelope containing twice what is in the other, and each with computation of the expectation value 5A/4. The first paper just presents the two problems. The second discusses many solutions to both of them. The second of his two problems is nowadays the more common, and is presented in this article. According to this version, the two envelopes are filled first, then one is chosen at random and called Envelope A. Martin Gardner independently mentioned this same version in his 1989 book Penrose Tiles to Trapdoor Ciphers and the Return of Dr Matrix. Barry Nalebuff's asymmetric variant, often known as the Ali Baba problem, has one envelope filled first, called Envelope A, and given to Ali. Then a fair coin is tossed to decide whether Envelope B should contain half or twice that amount, and only then given to Baba.
Broome in 1995 called a probability distribution 'paradoxical' if for any given first-envelope amount x, the expectation of the other envelope conditional on x is greater than x. The literature contains dozens of commentaries on the problem, much of which observes that a distribution of finite values can have an infinite expected value. [37]