
The **two envelopes problem**, also known as the **exchange paradox**, is a brain teaser, puzzle, or paradox in logic, probability, and recreational mathematics. It is of special interest in decision theory, and for the Bayesian interpretation of probability theory. It is a variant of an older problem known as the necktie paradox. The problem is typically introduced by formulating a hypothetical challenge like the following example:


Imagine you are given two identical envelopes, each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch?

It may seem obvious that there is no point in switching envelopes as the situation is symmetric. However, because the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have, a case can be made for switching the envelope.^{ [1] }

**Basic setup**: A person is given two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead.^{ [2] }

**The switching argument**: Now suppose the person reasons as follows:

1. Denote by *A* the amount in the player's selected envelope.
2. The probability that *A* is the smaller amount is 1/2, and that it is the larger amount is also 1/2.
3. The other envelope may contain either 2*A* or *A*/2.
4. If *A* is the smaller amount, then the other envelope contains 2*A*.
5. If *A* is the larger amount, then the other envelope contains *A*/2.
6. Thus the other envelope contains 2*A* with probability 1/2 and *A*/2 with probability 1/2.
7. So the expected value of the money in the other envelope is: (1/2)(2*A*) + (1/2)(*A*/2) = 5*A*/4.
8. This is greater than *A* so, on average, the person reasons that they stand to gain by swapping.
9. After the switch, denote that content by *B* and reason in exactly the same manner as above.
10. The person concludes that the most rational thing to do is to swap back again.
11. The person will thus end up swapping envelopes indefinitely.
12. As it seems more rational to open just any envelope than to swap indefinitely, the player arrives at a contradiction.
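The symmetry of the basic setup can be checked with a short simulation. This is an illustrative sketch only; the function name and the choice of *x* = 10 are ours, not part of the problem statement:

```python
import random

def play(trials=100_000, x=10):
    """Simulate the two-envelope game: the amounts x and 2x are fixed in
    advance, and the player's envelope is chosen at random each trial."""
    keep_total = 0.0
    switch_total = 0.0
    for _ in range(trials):
        envelopes = [x, 2 * x]
        random.shuffle(envelopes)
        a, b = envelopes          # a = chosen envelope, b = the other one
        keep_total += a
        switch_total += b
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = play()
# Both averages approach 1.5 * x (here 15): switching confers no advantage.
```

Because the two amounts are fixed before the choice is made, keeping and switching have the same expected value 3*x*/2, in line with the simple resolutions discussed below.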

**The puzzle**: *The puzzle is to find the flaw in the very compelling line of reasoning above.* This includes determining exactly *why* and under *what conditions* the flawed step is not correct, in order to be sure not to make this mistake in a more complicated situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. Thus, in particular, the puzzle is *not* solved by the very simple task of finding another way to calculate the probabilities that does not lead to a contradiction.

There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in a voluminous literature on the subject.^{ [3] }

No proposed solution is widely accepted as definitive.^{ [4] } Despite this, it is common for authors to claim that the solution to the problem is easy, even elementary.^{ [5] } On investigation, however, these elementary solutions often differ from one author to the next.

The total amount in both envelopes is a constant *c* = 3*x*, with *x* in one envelope and 2*x* in the other.

If you select the envelope with *x* first, you gain the amount *x* by swapping. If you select the envelope with 2*x* first, you lose the amount *x* by swapping. So you gain on average *G* = (1/2)(*x*) + (1/2)(−*x*) = 0 by swapping.

Swapping is not better than keeping. The expected value is the same for both the envelopes. Thus no contradiction exists.^{ [6] }

The famous mystification arises from conflating two different situations, which yields wrong results. The so-called *"paradox"* presents two already filled and sealed envelopes, where one envelope contains twice the amount of the other. Whereas step 6 boldly claims "Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2", in the given situation that claim can never apply to *any A* nor to *any average A*.

This claim is never correct for the situation presented; it applies only to the *Nalebuff asymmetric variant* (see below). In the situation presented, the other envelope cannot *generally* contain 2A; it can contain 2A only in the specific instance where envelope A, by chance, actually contains the smaller amount *x*, but nowhere else. Likewise, the other envelope cannot *generally* contain A/2; it can contain A/2 only in the specific instance where envelope A, by chance, actually contains the larger amount 2*x*, but nowhere else. The difference between the two already filled and sealed envelopes is always *x*. No *"average amount A"* can ever form an initial basis for any *expected value*, as this does not get to the heart of the problem.^{ [7] }

A common way to resolve the paradox, both in popular literature and part of the academic literature, especially in philosophy, is to assume that the 'A' in step 7 is intended to be the expected value in envelope A and that we intended to write down a formula for the expected value in envelope B.

Step 7 states that the expected value in B = 1/2( 2A + A/2 )

It is pointed out that the 'A' in the first part of the formula is the expected value in A, given that envelope A contains less than envelope B, while the 'A' in the second part of the formula is the expected value in A, given that envelope A contains more than envelope B. The flaw in the argument is that the same symbol is used with two different meanings in the two parts of the same calculation, yet is assumed to have the same value in both cases.

A correct calculation would be:

- Expected value in B = 1/2 ((Expected value in B, given A is larger than B) + (Expected value in B, given A is smaller than B))^{ [8] }

If we then take the sum in one envelope to be *x* and the sum in the other to be 2*x*, the expected value calculation becomes:

- Expected value in B = 1/2 (*x* + 2*x*) = 3*x*/2

which is equal to the expected sum in A.

In non-technical language, what goes wrong (see Necktie paradox) is that, in the scenario provided, the mathematics uses relative values of A and B (that is, it assumes that one would gain more money if A is less than B than one would lose if the opposite were true). However, the two values of money are fixed (one envelope contains, say, $20 and the other $40). If the values of the envelopes are restated as *x* and 2*x*, it is much easier to see that, if A were greater, one would lose *x* by switching and, if B were greater, one would gain *x* by switching. One does not actually gain a greater amount of money by switching because the total *T* of A and B (3*x*) remains the same, and the difference *x* is fixed to *T*/3.

Line 7 should have been worked out more carefully as follows:

E(B) = E(B | A < B) P(A < B) + E(B | A > B) P(A > B) = 2 E(A | A < B) · 1/2 + (1/2) E(A | A > B) · 1/2

A will be larger when A is larger than B, than when it is smaller than B. So its average values (expectation values) in those two cases are different. And the average value of A is not the same as A itself, anyway. Two mistakes are being made: the writer forgot he was taking expectation values, and he forgot he was taking expectation values under two different conditions.

It would have been easier to compute E(B) directly. Denoting the lower of the two amounts by *x*, and taking it to be fixed (even if unknown) we find that

E(B) = (1/2)(2*x*) + (1/2)(*x*) = 3*x*/2 = 1.5*x*

We learn that 1.5*x* is the expected value of the amount in Envelope B. By the same calculation it is also the expected value of the amount in Envelope A. They are the same hence there is no reason to prefer one envelope to the other. This conclusion was, of course, obvious in advance; the point is that we identified the false step in the argument for switching by explaining exactly where the calculation being made there went off the rails.

We could also continue from the correct but difficult to interpret result of the development in line 7:

E(B) = 2 E(A | A < B) · 1/2 + (1/2) E(A | A > B) · 1/2 = (1/2)(2*x*) + (1/2)((1/2)(2*x*)) = 3*x*/2

so (of course) different routes to calculate the same thing all give the same answer.

Tsikogiannopoulos presented a different way to do these calculations.^{ [9] } It is by definition correct to assign equal probabilities to the events that the other envelope contains double or half the amount in envelope A. So the "switching argument" is correct up to step 6. Given that the player's envelope contains the amount A, he differentiates the actual situation into two different games: the first game would be played with the amounts (A, 2A) and the second game with the amounts (A/2, A). Only one of them is actually played, but we don't know which one. These two games need to be treated differently. If the player wants to compute their expected return (profit or loss) in case of exchange, they should weigh the return derived from each game by the average amount in the two envelopes in that particular game. In the first case the profit would be A with an average amount of 3A/2, whereas in the second case the loss would be A/2 with an average amount of 3A/4. So the formula of the expected return in case of exchange, seen as a proportion of the total amount in the two envelopes, is:

E = (1/2) · (A / (3A/2)) + (1/2) · (−(A/2) / (3A/4)) = (1/2)(2/3) − (1/2)(2/3) = 0

This result means yet again that the player has to expect neither profit nor loss by exchanging his/her envelope.

We could actually open our envelope before deciding on switching or not, and the above formula would still give us the correct expected return. For example, if we opened our envelope and saw that it contained 100 euros, then we would set A = 100 in the above formula and the expected return in case of switching would be:

E = (1/2) · (100/150) + (1/2) · (−50/75) = 1/3 − 1/3 = 0
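This proportional-return weighting can be reproduced in a few lines of code. This is an illustrative sketch of the calculation described above; the function name is ours:

```python
def expected_return_proportion(a):
    """Expected return from switching, expressed as a proportion of the
    total amount in the two envelopes: each game's return is divided by
    the average amount in that particular game."""
    gain_game = a / (3 * a / 2)         # win a in the (a, 2a) game; average 3a/2
    loss_game = (-a / 2) / (3 * a / 4)  # lose a/2 in the (a/2, a) game; average 3a/4
    return 0.5 * gain_game + 0.5 * loss_game

print(expected_return_proportion(100))  # 0.0 -- neither profit nor loss
```

The result is zero for every value of *a*, matching the conclusion that the player should expect neither profit nor loss from exchanging.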

The mechanism by which the amounts of the two envelopes are determined is crucial for the decision of the player to switch their envelope.^{ [9] }^{ [10] } Suppose that the amounts in the two envelopes A and B were not determined by first fixing the contents of two envelopes E1 and E2, and then naming them A and B at random (for instance, by the toss of a fair coin^{ [11] }). Instead, we start right at the beginning by putting some amount in Envelope A, and then fill B in a way which depends both on chance (the toss of a coin) and on what we put in A. Suppose that first of all the amount *a* in Envelope A is fixed in some way or other, and then the amount in Envelope B is fixed, dependent on what is already in A, according to the outcome of a fair coin. If the coin fell Heads then 2*a* is put in Envelope B; if the coin fell Tails then *a*/2 is put in Envelope B. If the player is aware of this mechanism, and knows that they hold Envelope A, but does not know the outcome of the coin toss, and does not know *a*, then the switching argument is correct and they are recommended to switch envelopes. This version of the problem was introduced by Nalebuff (1988) and is often called the Ali-Baba problem. Notice that there is no need to look in Envelope A in order to decide whether or not to switch.
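The asymmetric mechanism can be simulated to confirm that switching really does pay under it. This is an illustrative sketch with an assumed amount *a* = 100:

```python
import random

def average_other_envelope(a=100, trials=100_000):
    """Nalebuff's Ali-Baba variant: envelope A holds a fixed amount a,
    and a fair coin decides whether envelope B holds 2a or a/2."""
    total_b = 0.0
    for _ in range(trials):
        total_b += 2 * a if random.random() < 0.5 else a / 2
    return total_b / trials

avg_b = average_other_envelope()
# avg_b approaches 5a/4 (here 125), so the switching argument is valid
# for this mechanism -- unlike in the original two-envelope setup.
```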

Many more variants of the problem have been introduced. Nickerson and Falk systematically survey a total of 8.^{ [11] }

The simple resolution above assumed that the person who invented the argument for switching was trying to calculate the expectation value of the amount in Envelope A, thinking of the two amounts in the envelopes as fixed (*x* and 2*x*). The only uncertainty is which envelope has the smaller amount *x*. However, many mathematicians and statisticians interpret the argument as an attempt to calculate the expected amount in Envelope B, given a real or hypothetical amount "A" in Envelope A. One does not need to look in the envelope to see how much is in there, in order to do the calculation. If the result of the calculation is an advice to switch envelopes, whatever amount might be in there, then it would appear that one should switch anyway, without looking. In this case, at Steps 6, 7 and 8 of the reasoning, "A" is any fixed possible value of the amount of money in the first envelope.

This interpretation of the two envelopes problem appears in the first publications in which the paradox was introduced in its present-day form, Gardner (1989) and Nalebuff (1989). It is common in the more mathematical literature on the problem. It also applies to the modification of the problem (which seems to have started with Nalebuff) in which the owner of Envelope A does actually look in his envelope before deciding whether or not to switch; though Nalebuff does also emphasise that there is no need to have the owner of Envelope A look in his envelope. If he imagines looking in it, and if for any amount which he can imagine being in there, he has an argument to switch, then he will decide to switch anyway. Finally, this interpretation was also the core of earlier versions of the two envelopes problem (Littlewood's, Schrödinger's, and Kraitchik's switching paradoxes); see the concluding section, on history of TEP.

This kind of interpretation is often called "Bayesian" because it assumes the writer is also incorporating a prior probability distribution of possible amounts of money in the two envelopes in the switching argument.

The simple resolution depended on a particular interpretation of what the writer of the argument is trying to calculate: namely, it assumed he was after the (unconditional) expectation value of what's in Envelope B. In the mathematical literature on Two Envelopes Problem a different interpretation is more common, involving the conditional expectation value (conditional on what might be in Envelope A). To solve this and related interpretations or versions of the problem, most authors use the Bayesian interpretation of probability, which means that probability reasoning is not only applied to truly random events like the random pick of an envelope, but also to our knowledge (or lack of knowledge) about things which are fixed but unknown, like the two amounts originally placed in the two envelopes, before one is picked at random and called "Envelope A". Moreover, according to a long tradition going back at least to Laplace and his principle of insufficient reason one is supposed to assign equal probabilities when one has no knowledge at all concerning the possible values of some quantity. Thus the fact that we are not told anything about how the envelopes are filled can already be converted into probability statements about these amounts. No information means that probabilities are equal.

In steps 6 and 7 of the switching argument, the writer imagines that Envelope A contains a certain amount *a*, and then seems to believe that, given that information, the other envelope would be equally likely to contain twice or half that amount. That assumption can only be correct if, prior to knowing what was in Envelope A, the writer would have considered the following two pairs of values for both envelopes equally likely: the amounts *a*/2 and *a*; and the amounts *a* and 2*a*. (This follows from Bayes' rule in odds form: posterior odds equal prior odds times likelihood ratio.) But now we can apply the same reasoning, imagining not *a* but *a*/2 in Envelope A. And similarly for 2*a*. And similarly, ad infinitum, repeatedly halving or repeatedly doubling as many times as you like.^{ [12] }

Suppose for the sake of argument, we start by imagining an amount 32 in Envelope A. In order that the reasoning in steps 6 and 7 is correct *whatever* amount happened to be in Envelope A, we apparently believe in advance that all the following ten amounts are all equally likely to be the smaller of the two amounts in the two envelopes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 (equally likely powers of 2^{ [12] }). But going to even larger or even smaller amounts, the "equally likely" assumption starts to appear a bit unreasonable. Suppose we stop, just with these ten equally likely possibilities for the smaller amount in the two envelopes. In that case, the reasoning in steps 6 and 7 was entirely correct if envelope A happened to contain any of the amounts 2, 4, ... 512: switching envelopes would give an expected (average) gain of 25%. If envelope A happened to contain the amount 1, then the expected gain is actually 100%. But if it happened to contain the amount 1024, a massive loss of 50% (of a rather large amount) would have been incurred. That only happens once in twenty times, but it is exactly enough to balance the expected gains in the other 19 out of 20 times.
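The ten-powers-of-two prior described above can be checked exactly with rational arithmetic. A sketch assuming this uniform prior on the smaller amount:

```python
from fractions import Fraction as F

# The smaller amount s is uniform on 2^0 .. 2^9, each with probability 1/10;
# the pair (s, 2s) is assigned to envelopes (A, B) or (B, A) with probability 1/2.
joint = {}
for n in range(10):
    s = 2 ** n
    for a, b in [(s, 2 * s), (2 * s, s)]:
        joint[(a, b)] = joint.get((a, b), F(0)) + F(1, 10) * F(1, 2)

# Overall expected gain from always switching: exactly zero.
overall_gain = sum(p * (b - a) for (a, b), p in joint.items())

def cond_gain(a):
    """Expected gain from switching, conditional on envelope A containing a."""
    pa = sum(p for (x, _), p in joint.items() if x == a)
    return sum(p * (b - x) for (x, b), p in joint.items() if x == a) / pa

# cond_gain(1) == 1 (a 100% gain), cond_gain(32) == 8 (a 25% gain of 32),
# cond_gain(1024) == -512 (a 50% loss): the single large loss exactly
# balances the expected gains in the other nineteen cases.
```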

Alternatively we do go on ad infinitum but now we are working with a quite ludicrous assumption, implying for instance, that it is infinitely more likely for the amount in envelope A to be smaller than 1, *and* infinitely more likely to be larger than 1024, than between those two values. This is a so-called improper prior distribution: probability calculus breaks down; expectation values are not even defined.^{ [12] }

Many authors have also pointed out that if a maximum sum that can be put in the envelope with the smaller amount exists, then it is very easy to see that Step 6 breaks down, since if the player holds more than the maximum sum that can be put into the "smaller" envelope they must hold the envelope containing the larger sum, and are thus certain to lose by switching. This may not occur often, but when it does, the heavy loss the player incurs means that, on average, there is no advantage in switching. Some writers consider that this resolves all practical cases of the problem.^{ [13] }

But the problem can also be resolved mathematically without assuming a maximum amount. Nalebuff,^{ [13] } Christensen and Utts,^{ [14] } Falk and Konold,^{ [12] } Blachman, Christensen and Utts,^{ [15] } Nickerson and Falk,^{ [11] } pointed out that if the amounts of money in the two envelopes have any proper probability distribution representing the player's prior beliefs about the amounts of money in the two envelopes, then it is impossible that whatever the amount *A=a* in the first envelope might be, it would be equally likely, according to these prior beliefs, that the second contains *a*/2 or 2*a*. Thus step 6 of the argument, which leads to *always switching*, is a non-sequitur, also when there is no maximum to the amounts in the envelopes.

The first two resolutions discussed above (the "simple resolution" and the "Bayesian resolution") correspond to two possible interpretations of what is going on in step 6 of the argument. They both assume that step 6 indeed is "the bad step". But the description in step 6 is ambiguous. Is the author after the unconditional (overall) expectation value of what is in envelope B (perhaps - conditional on the smaller amount, *x*), or is he after the conditional expectation of what is in envelope B, given any possible amount *a* which might be in envelope A? Thus, there are two main interpretations of the intention of the composer of the paradoxical argument for switching, and two main resolutions.

A large literature has developed concerning variants of the problem.^{ [16] }^{ [17] } The standard assumption about the way the envelopes are set up is that a sum of money is in one envelope, and twice that sum is in another envelope. One of the two envelopes is randomly given to the player (*envelope A*). The originally proposed problem does not make clear exactly how the smaller of the two sums is determined, what values it could possibly take and, in particular, whether there is a minimum or a maximum sum it might contain.^{ [18] }^{ [19] } However, if we are using the Bayesian interpretation of probability, then we start by expressing our prior beliefs as to the smaller amount in the two envelopes through a probability distribution. Lack of knowledge can also be expressed in terms of probability.

A first variant within the Bayesian version is to come up with a proper prior probability distribution of the smaller amount of money in the two envelopes, such that when Step 6 is performed properly, the advice is still to prefer Envelope B, whatever might be in Envelope A. So though the specific calculation performed in step 6 was incorrect (there is no proper prior distribution such that, given what is in the first envelope A, the other envelope is always equally likely to be larger or smaller), a correct calculation, depending on what prior we are using, does lead to the result E(B | A = a) > a for all possible values of *a*.^{ [20] }

In these cases it can be shown that the expected sum in both envelopes is infinite. There is no gain, on average, in swapping.

Though Bayesian probability theory can resolve the first mathematical interpretation of the paradox above, it turns out that examples can be found of proper probability distributions, such that the expected value of the amount in the second envelope, conditioned on the amount in the first, does exceed the amount in the first, whatever it might be. The first such example was already given by Nalebuff.^{ [13] } See also Christensen and Utts (1992).^{ [14] }^{ [21] }^{ [22] }^{ [23] }

Denote again the amount of money in the first envelope by *A* and that in the second by *B*. We think of these as random. Let *X* be the smaller of the two amounts and *Y=2X* be the larger. Notice that once we have fixed a probability distribution for *X* then the joint probability distribution of *A,B* is fixed, since *A,B* = *X,Y* or *Y,X* each with probability 1/2, independently of *X,Y*.

The *bad step* 6 in the "always switching" argument led us to the finding *E(B|A=a)>a* for all *a*, and hence to the recommendation to switch, whether or not we know *a*. Now, it turns out that one can quite easily invent proper probability distributions for *X*, the smaller of the two amounts of money, such that this bad conclusion is still true. One example is analysed in more detail, in a moment.

As mentioned before, it cannot be true that whatever *a*, given *A=a*, *B* is equally likely to be *a*/2 or 2*a*, but it can be true that whatever *a*, given *A=a*, *B* is larger in expected value than *a*.

Suppose for example that the envelope with the smaller amount actually contains 2^{n} dollars with probability 2^{n}/3^{n+1} where *n* = 0, 1, 2,… These probabilities sum to 1, hence the distribution is a proper prior (for subjectivists) and a completely decent probability law also for frequentists.^{ [24] }
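That these probabilities sum to 1 is a geometric series; a quick exact check (illustrative only, using a 200-term partial sum):

```python
from fractions import Fraction as F

# Partial sum of P(smaller amount = 2^n) = 2^n / 3^(n+1), n = 0, 1, 2, ...
partial = sum(F(2 ** n, 3 ** (n + 1)) for n in range(200))
# The remaining tail 1 - partial equals (2/3)^200, so the full series sums
# to exactly 1, confirming that the prior is a proper probability distribution.
```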

Imagine what might be in the first envelope. A sensible strategy would certainly be to swap when the first envelope contains 1, as the other must then contain 2. Suppose on the other hand the first envelope contains 2. In that case there are two possibilities: the envelope pair in front of us is either {1, 2} or {2, 4}. All other pairs are impossible. The conditional probability that we are dealing with the {1, 2} pair, given that the first envelope contains 2, is

(1/2 · 1/3) / (1/2 · 1/3 + 1/2 · 2/9) = (1/6) / (1/6 + 1/9) = 3/5

and consequently the probability it's the {2, 4} pair is 2/5, since these are the only two possibilities. In this derivation, 1/2 · 1/3 = 1/6 is the probability that the envelope pair is the pair 1 and 2, *and* Envelope A happens to contain 2; 1/2 · 2/9 = 1/9 is the probability that the envelope pair is the pair 2 and 4, *and* (again) Envelope A happens to contain 2. Those are the only two ways that Envelope A can end up containing the amount 2.

It turns out that these proportions hold in general unless the first envelope contains 1. Denote by *a* the amount we imagine finding in Envelope A, if we were to open that envelope, and suppose that *a* = 2^{n} for some *n* ≥ 1. In that case the other envelope contains *a*/2 with probability 3/5 and 2*a* with probability 2/5.

So either the first envelope contains 1, in which case the conditional expected amount in the other envelope is 2, or the first envelope contains *a* > 1, and though the second envelope is more likely to be smaller than larger, its conditionally expected amount is larger: the conditionally expected amount in Envelope B is

E(B | A = a) = (3/5)(a/2) + (2/5)(2a) = 11a/10

which is more than *a*. This means that the player who looks in Envelope A would decide to switch whatever he saw there. Hence there is no need to look in Envelope A to make that decision.
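These conditional probabilities, and the resulting expectation of 11*a*/10, can be verified exactly for any *n* ≥ 1. A sketch using exact rational arithmetic; the function names are ours:

```python
from fractions import Fraction as F

def prior_smaller(n):
    """P(smaller amount = 2^n) = 2^n / 3^(n+1)."""
    return F(2 ** n, 3 ** (n + 1))

def cond_probs(n):
    """Given envelope A contains a = 2^n with n >= 1, return
    (P(other envelope is a/2), P(other envelope is 2a))."""
    p_half = F(1, 2) * prior_smaller(n - 1)  # pair {a/2, a}; A got the larger
    p_double = F(1, 2) * prior_smaller(n)    # pair {a, 2a}; A got the smaller
    total = p_half + p_double
    return p_half / total, p_double / total

p_half, p_double = cond_probs(5)   # the same 3/5 and 2/5 hold for every n >= 1
a = F(2 ** 5)
exp_b = p_half * (a / 2) + p_double * (2 * a)
# p_half == 3/5, p_double == 2/5, and exp_b == 11a/10 > a.
```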

This conclusion is just as clearly wrong as it was in the preceding interpretations of the Two Envelopes Problem. But now the flaws noted above do not apply; the *a* in the expected value calculation is a constant and the conditional probabilities in the formula are obtained from a specified and proper prior distribution.

Most writers think that the new paradox can be defused, although the resolution requires concepts from mathematical economics.^{ [25] } Suppose E(B | A = a) > a for all *a*. It can be shown that this is possible for some probability distributions of *X* (the smaller amount of money in the two envelopes) only if E(X) = ∞, that is, only if the mean of all possible values of money in the envelopes is infinite. To see why, compare the series described above, in which the probability of each *X* is 2/3 as likely as the previous *X*, with one in which the probability of each *X* is only 1/3 as likely as the previous *X*. When the probability of each subsequent term is greater than one-half of the probability of the term before it (and each *X* is twice the *X* before it) the mean is infinite, but when the probability factor is less than one-half, the mean converges. In the cases where the probability factor is less than one-half, E(B | A = a) < a for all *a* other than the first, smallest *a*, and the total expected value of switching converges to 0. In addition, if an ongoing distribution with a probability factor greater than one-half is made finite by, after any number of terms, establishing a final term with "all the remaining probability", that is, 1 minus the probability of all previous terms, the expected value of switching with respect to the probability that *A* is equal to the last, largest *a* will exactly negate the sum of the positive expected values that came before, and again the total expected value of switching drops to 0 (this is the general case of setting out an equal probability of a finite set of values in the envelopes described above). Thus, the only distributions that seem to point to a positive expected value for switching are those in which E(X) = ∞. Averaging over *a*, it follows that E(B) = E(A) = ∞ (because *A* and *B* have identical probability distributions, by symmetry, and both *A* and *B* are greater than or equal to *X*).
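The dependence of the mean on the probability ratio can be illustrated with partial sums. A sketch comparing the two geometric priors discussed above; the 60-term cutoff is an arbitrary choice of ours:

```python
from fractions import Fraction as F

def partial_mean(ratio, terms=60):
    """Partial mean of X where P(X = 2^n) is proportional to ratio^n."""
    weights = [ratio ** n for n in range(terms)]
    total = sum(weights)
    return sum(w * 2 ** n for n, w in enumerate(weights)) / total

grow = partial_mean(F(2, 3))    # each term's contribution grows by 4/3: diverges
shrink = partial_mean(F(1, 3))  # each term's contribution shrinks by 2/3: mean -> 2
# grow increases without bound as terms increases, while shrink settles near 2.
```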

If we don't look into the first envelope, then clearly there is no reason to switch, since we would be exchanging one unknown amount of money (*A*), whose expected value is infinite, for another unknown amount of money (*B*), with the same probability distribution and infinite expected value. However, if we do look into the first envelope, then for all values observed (*A* = *a*) we would want to switch, because E(B | A = a) > a for all *a*. As noted by David Chalmers, this problem can be described as a failure of dominance reasoning.^{ [26] }

Under dominance reasoning, the fact that we strictly prefer *B* to *A* for all possible observed values *a* should imply that we strictly prefer *B* to *A* without observing *a*; however, as already shown, that is not true, because E(A) = E(B) = ∞. To salvage dominance reasoning while allowing E(A) = E(B) = ∞, one would have to replace expected value as the decision criterion, thereby employing a more sophisticated argument from mathematical economics.

For example, we could assume the decision maker is an expected utility maximiser with initial wealth *W* whose utility function, U(w), is chosen to satisfy E(U(W + B) | A = a) < U(W + a) for at least some values of *a* (that is, holding onto *a* is strictly preferred to switching to *B* for some *a*). Although this is not true for all utility functions, it would be true if U(w) had an upper bound as *w* increased toward infinity (a common assumption in mathematical economics and decision theory).^{ [27] } Michael R. Powers provides necessary and sufficient conditions for the utility function to resolve the paradox, and notes that neither a bounded utility function nor a finite expected utility is required.^{ [28] }

Some writers would prefer to argue that in a real-life situation, U(w) and w are bounded simply because the amount of money in an envelope is bounded by the total amount of money in the world (*M*), implying U(w) ≤ U(M) and w ≤ M. From this perspective, the second paradox is resolved because the postulated probability distribution for *X* (with E(X) = ∞) cannot arise in a real-life situation. Similar arguments are often used to resolve the St. Petersburg paradox.

As mentioned above, *any distribution* producing this variant of the paradox must have an infinite mean. So before the player opens an envelope the expected gain from switching is "∞ − ∞", which is not defined. In the words of David Chalmers, this is "just another example of a familiar phenomenon, the strange behaviour of infinity".^{ [26] } Chalmers suggests that decision theory generally breaks down when confronted with games having a diverging expectation, and compares it with the situation generated by the classical St. Petersburg paradox.

However, Clark and Shackel argue that blaming it all on "the strange behaviour of infinity" does not resolve the paradox at all, either in the single case or the averaged case. They provide a simple example of a pair of random variables both having infinite mean but where it is clearly sensible to prefer one to the other, both conditionally and on average.^{ [29] } They argue that decision theory should be extended so as to allow infinite expectation values in some situations.

The logician Raymond Smullyan questioned if the paradox has anything to do with probabilities at all.^{ [30] } He did this by expressing the problem in a way that does not involve probabilities. The following plainly logical arguments lead to conflicting conclusions:

- Let the amount in the envelope chosen by the player be *A*. By swapping, the player may gain *A* or lose *A*/2. So the potential gain is strictly greater than the potential loss.
- Let the amounts in the envelopes be *X* and 2*X*. Now by swapping, the player may gain *X* or lose *X*. So the potential gain is equal to the potential loss.

A number of solutions have been put forward. Careful analyses have been made by some logicians. Though solutions differ, they all pinpoint semantic issues concerned with counterfactual reasoning. We want to compare the amount that we would gain by switching if we would gain by switching, with the amount we would lose by switching if we would indeed lose by switching. However, we cannot both gain and lose by switching at the same time. We are asked to compare two incompatible situations. Only one of them can factually occur, the other is a counterfactual situation—somehow imaginary. To compare them at all, we must somehow "align" the two situations, providing some definite points in common.

James Chase argues that the second argument is correct because it corresponds to the alignment of the two situations (one in which we gain, the other in which we lose) that is most naturally indicated by the problem description.^{ [31] } Bernard Katz and Doris Olin also argue this point of view.^{ [32] } In the second argument, we consider the amounts of money in the two envelopes as being fixed; what varies is which one is first given to the player. Because that was an arbitrary and physical choice, the *counterfactual world* in which the player, counterfactually, got the other envelope instead of the one he was actually (factually) given is a highly meaningful counterfactual world, and hence the comparison between gains and losses in the two worlds is meaningful. This comparison is uniquely indicated by the problem description, in which two amounts of money are put in the two envelopes first, and only after that is one chosen arbitrarily and given to the player. In the first argument, however, we consider the amount of money in the envelope first given to the player as fixed and consider the situations where the second envelope contains either half or twice that amount. This would only be a reasonable counterfactual world if in reality the envelopes had been filled as follows: first, some amount of money is placed in the specific envelope that will be given to the player; and secondly, by some arbitrary process, the other envelope is filled (arbitrarily or randomly) either with double or with half of that amount of money.

Byeong-Uk Yi, on the other hand, argues that comparing the amount you would gain if you would gain by switching with the amount you would lose if you would lose by switching is a meaningless exercise from the outset.^{ [33] } According to his analysis, all three implications (switch, be indifferent, do not switch) are incorrect. He analyses Smullyan's arguments in detail, showing that intermediate steps are being taken, and pinpointing exactly where an incorrect inference is made according to his formalization of counterfactual inference. An important difference from Chase's analysis is that Yi does not take account of the part of the story in which we are told that the envelope called Envelope A was chosen completely at random. Thus, Chase puts probability back into the problem description in order to conclude that arguments 1 and 3 are incorrect and argument 2 is correct, while Yi keeps the "two envelope problem without probability" completely free of probability and concludes that there is no reason to prefer any action. This corresponds to the view of Albers et al. that, without a probability ingredient, there is no way to argue that one action is better than another anyway.

Bliss argues that the source of the paradox is that when one mistakenly believes in the possibility of a larger payoff that does not, in actuality, exist, one is mistaken by a larger margin than when one believes in the possibility of a smaller payoff that does not actually exist.^{ [34] } If, for example, the envelopes contained $5.00 and $10.00 respectively, a player who opened the $10.00 envelope would expect the possibility of a $20.00 payout that simply does not exist. Were that player to open the $5.00 envelope instead, he would believe in the possibility of a $2.50 payout, which constitutes a smaller deviation from the true value; this results in the paradoxical discrepancy.

Albers, Kooi, and Schaafsma consider that without adding probability (or other) ingredients to the problem,^{ [17] } Smullyan's arguments do not give any reason to swap or not to swap, in any case. Thus, there is no paradox. This dismissive attitude is common among writers from probability and economics: Smullyan's paradox arises precisely because he takes no account whatever of probability or utility.

As an extension to the problem, consider the case where the player is allowed to look in Envelope A before deciding whether to switch. In this "conditional switching" problem, it is often possible to generate a gain over the "never switching" strategy, depending on the probability distribution of the envelopes.^{ [35] }
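The gain from conditional switching is easy to illustrate with a small simulation. The sketch below uses a simplified, hypothetical setup in which the two amounts are known in advance (it is not the randomized strategy of the cited paper): a player who switches whenever the observed amount falls below a threshold between the two amounts always ends up with the larger envelope, while a player who never switches averages the midpoint.

```python
import random

def average_payoff(strategy, a=10, trials=100_000, seed=0):
    """Average payoff over many games with envelopes holding a and 2a."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pair = [a, 2 * a]
        rng.shuffle(pair)                 # which envelope we are handed is random
        chosen, other = pair
        total += other if strategy(chosen) else chosen
    return total / trials

never_switch = average_payoff(lambda x: False)   # keep whatever was chosen
threshold = average_payoff(lambda x: x < 15)     # switch only on a "small" observation
```

Any threshold strictly between the two amounts guarantees the larger envelope here; when the amounts are unknown, a randomized threshold still yields a strict expected gain over never switching, which is the content of the cited result.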

The envelope paradox dates back at least to 1953, when Belgian mathematician Maurice Kraitchik proposed a puzzle in his book *Recreational Mathematics* concerning two equally rich men who meet and compare their beautiful neckties, presents from their wives, wondering which tie actually cost more money. He also introduced a variant in which the two men compare the contents of their purses. He assumes that each purse is equally likely to contain 1 up to some large number *x* of pennies, the total number of pennies minted to date. The men do not look in their purses, but each reasons that they should switch. He does not explain what the error in their reasoning is. It is not clear whether the puzzle already appeared in an earlier 1942 edition of his book. It is also mentioned in a 1953 book on elementary mathematics and mathematical puzzles by the mathematician John Edensor Littlewood, who credited it to the physicist Erwin Schrödinger. In Littlewood's version it concerns a pack of cards, each card with two numbers written on it; the player sees a random side of a random card, and the question is whether one should turn over the card. Littlewood's pack of cards is infinitely large, and his paradox is a paradox of improper prior distributions.

Martin Gardner popularised Kraitchik's puzzle in his 1982 book *Aha! Gotcha*, in the form of a wallet game:

Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "I have the amount *A* in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2*A*. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?— Martin Gardner, "Martin Gardner: Aha! Gotcha"
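The symmetry Gardner appeals to can be checked numerically. The sketch below assumes a uniform distribution of wallet contents, which the puzzle itself leaves unspecified, and estimates the first player's expected net gain:

```python
import random

def wallet_game_net_gain(trials=200_000, seed=1):
    """Estimate player 1's expected net gain in Gardner's wallet game."""
    rng = random.Random(seed)
    net = 0
    for _ in range(trials):
        a = rng.randint(1, 100)   # player 1's wallet
        b = rng.randint(1, 100)   # player 2's wallet
        if a < b:
            net += b              # player 1 has less, wins player 2's wallet
        elif b < a:
            net -= a              # player 1 has more, loses their own wallet
        # equal amounts: nothing happens
    return net / trials
```

Both players apply the same "I can win more than I can lose" reasoning, yet the estimated net gain hovers around zero, exactly as the symmetry argument demands.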

Gardner confessed that though, like Kraitchik, he could give a sound analysis leading to the right answer (there is no point in switching), he could not clearly put his finger on what was wrong with the reasoning for switching, and Kraitchik did not give any help in this direction, either.

In 1988 and 1989, Barry Nalebuff presented two different two-envelope problems, each with one envelope containing twice what is in the other, and each with computation of the expectation value 5*A*/4. The first paper just presents the two problems. The second discusses many solutions to both of them. The second of his two problems is nowadays the more common, and is presented in this article. According to this version, the two envelopes are filled first, then one is chosen at random and called Envelope A. Martin Gardner independently mentioned this same version in his 1989 book *Penrose Tiles to Trapdoor Ciphers and the Return of Dr Matrix*. Barry Nalebuff's asymmetric variant, often known as the Ali Baba problem, has one envelope filled first, called Envelope A, and given to Ali. Then a fair coin is tossed to decide whether Envelope B should contain half or twice that amount, and only then given to Baba.
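The 5*A*/4 figure is the standard computation obtained by treating the amount *A* in the player's envelope as fixed and the other envelope as equally likely to hold double or half:

```latex
E[\text{other}] \;=\; \tfrac{1}{2}\cdot 2A \;+\; \tfrac{1}{2}\cdot \tfrac{A}{2} \;=\; \tfrac{5A}{4} \;>\; A
```

It is precisely the legitimacy of holding *A* fixed across the two cases that the various resolutions dispute.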

Broome in 1995 called a probability distribution 'paradoxical' if, for any given first-envelope amount *x*, the expectation of the other envelope conditional on *x* is greater than *x*. The literature contains dozens of commentaries on the problem, many of which observe that a distribution of finite values can have an infinite expected value.^{ [36] }
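Broome's own example of such a paradoxical distribution puts the pair (2^*n*, 2^(*n*+1)) in the envelopes with probability (1/3)(2/3)^*n*. The sketch below (a rendering of the standard calculation, not code from the cited sources) verifies exactly that, conditional on holding any amount *x* = 2^*n* with *n* ≥ 1, the other envelope's expectation is 11*x*/10 > *x*; for *n* = 0 the other envelope is certainly larger, so switching looks favourable at every observed amount even though the unconditional expectation is infinite.

```python
from fractions import Fraction

def q(n):
    """P(the envelopes hold 2**n and 2**(n+1)) under Broome's distribution."""
    return Fraction(1, 3) * Fraction(2, 3) ** n

def expected_other(n):
    """E[other envelope | you hold x = 2**n], for n >= 1."""
    p_smaller = q(n) / (q(n) + q(n - 1))   # chance you hold the smaller of pair n
    x = 2 ** n
    return p_smaller * (2 * x) + (1 - p_smaller) * Fraction(x, 2)

# Exact rational arithmetic: E[other | x] = 11x/10 for every n >= 1.
checks = [expected_other(n) == Fraction(11 * 2 ** n, 10) for n in range(1, 12)]
```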

- ↑ See the problem statement for a more precise statement of this argument.
- ↑ Falk, Ruma (2008). "The Unrelenting Exchange Paradox".
*Teaching Statistics*.**30**(3): 86–88. doi:10.1111/j.1467-9639.2008.00318.x. - ↑ A complete list of published and unpublished sources in chronological order can be found in the talk page.
- ↑ Markosian, Ned (2011). "A Simple Solution to the Two Envelope Problem".
*Logos & Episteme*.**II**(3): 347–57. doi: 10.5840/logos-episteme20112318 . - ↑ McDonnell, Mark D; Grant, Alex J; Land, Ingmar; Vellambi, Badri N; Abbott, Derek; Lever, Ken (2011). "Gain from the two-envelope problem via information asymmetry: on the suboptimality of randomized switching".
*Proceedings of the Royal Society A*.**467**(2134): 2825–2851. Bibcode:2011RSPSA.467.2825M. doi: 10.1098/rspa.2010.0541 . - ↑ Priest, Graham; Restall, Greg (2007), "Envelopes and Indifference" (PDF),
*Dialogues, Logics and Other Strange Things*, College Publications: 135–140 - ↑ Priest, Graham; Restall, Greg (2007), "Envelopes and Indifference" (PDF),
*Dialogues, Logics and Other Strange Things*, College Publications: 135–140 - ↑ Schwitzgebel, Eric; Dever, Josh (2008), "The Two Envelope Paradox and Using Variables Within the Expectation Formula" (PDF),
*Sorites*: 135–140 - 1 2 Tsikogiannopoulos, Panagiotis (2012). "Παραλλαγές του προβλήματος της ανταλλαγής φακέλων" [Variations on the Two Envelopes Problem].
*Mathematical Reviews*(in Greek). arXiv: 1411.2823 . Bibcode:2014arXiv1411.2823T. - ↑ Priest, Graham; Restall, Greg (2007), "Envelopes and Indifference" (PDF),
*Dialogues, Logics and Other Strange Things*, College Publications: 135–140 - 1 2 3 Nickerson, Raymond S.; Falk, Ruma (2006-05-01). "The exchange paradox: Probabilistic and cognitive analysis of a psychological conundrum".
*Thinking & Reasoning*.**12**(2): 181–213. doi:10.1080/13576500500200049. ISSN 1354-6783. S2CID 143472998. - 1 2 3 4 Falk, Ruma; Konold, Clifford (1992). "The Psychology of Learning Probability" (PDF).
*Statistics for the Twenty-first Century*– via Mathematical Association of America. - 1 2 3 Nalebuff, Barry (1989), "Puzzles: The Other Person's Envelope is Always Greener",
*Journal of Economic Perspectives*,**3**(1): 171–81, doi:10.1257/jep.3.1.171 . - 1 2 Christensen, R; Utts, J (1992), "Bayesian Resolution of the "Exchange Paradox"",
*The American Statistician*,**46**(4): 274–76, doi:10.1080/00031305.1992.10475902 . - ↑ Blachman, NM; Christensen, R; Utts, J (1996). "Letters to the Editor".
*The American Statistician*.**50**(1): 98–99. doi:10.1080/00031305.1996.10473551. - ↑ Albers, Casper (March 2003), "2. Trying to resolve the two-envelope problem",
*Distributional Inference: The Limits of Reason*(thesis). - 1 2 Albers, Casper J; Kooi, Barteld P; Schaafsma, Willem (2005), "Trying to resolve the two-envelope problem",
*Synthese*, vol. 145, no. 1, p. 91. - ↑ Falk, Ruma; Nickerson, Raymond (2009), "An inside look at the two envelopes paradox",
*Teaching Statistics*,**31**(2): 39–41, doi:10.1111/j.1467-9639.2009.00346.x . - ↑ Chen, Jeff,
*The Puzzle of the Two-Envelope Puzzle—a Logical Approach*(online ed.), p. 274. - ↑ Broome, John (1995), "The Two-envelope Paradox",
*Analysis*,**55**(1): 6–11, doi:10.1093/analys/55.1.6 . - ↑ Binder, DA (1993), "Letter to editor and response",
*The American Statistician*,**47**(2): 160, doi:10.1080/00031305.1991.10475791 . - ↑ Ross (1994), "Letter to editor and response",
*The American Statistician*,**48**(3): 267–269, doi:10.1080/00031305.1994.10476075 . - ↑ Blachman, NM; Christensen, R; Utts, JM (1996), "Letter with corrections to the original article",
*The American Statistician*,**50**(1): 98–99, doi:10.1080/00031305.1996.10473551 . - ↑ Broome, John (1995). "The Two-envelope Paradox".
*Analysis*.**55**(1): 6–11. doi:10.1093/analys/55.1.6. A famous example of a proper probability distribution of the amounts of money in the two envelopes, for which E(B | A = a) > a for all *a*. - ↑ Binder, D. A. (1993). "Letters to the Editor".
*The American Statistician*.**47**(2): 157–163. doi:10.1080/00031305.1993.10475966. Comment on Christensen and Utts (1992) - 1 2 Chalmers, David J. (2002). "The St. Petersburg Two-Envelope Paradox".
*Analysis*.**62**(2): 155–157. doi:10.1093/analys/62.2.155. - ↑ DeGroot, Morris H. (1970).
*Optimal Statistical Decisions*. McGraw-Hill. p. 109. - ↑ Powers, Michael R. (2015). "Paradox-Proof Utility Functions for Heavy-Tailed Payoffs: Two Instructive Two-Envelope Problems" (PDF).
*Risks*.**3**(1): 26–34. doi: 10.3390/risks3010026 . - ↑ Clark, M.; Shackel, N. (2000). "The Two-Envelope Paradox" (PDF).
*Mind*.**109**(435): 415–442. doi:10.1093/mind/109.435.415. - ↑ Smullyan, Raymond (1992).
*Satan, Cantor, and infinity and other mind-boggling puzzles*. Alfred A. Knopf. pp. 189–192. ISBN 978-0-679-40688-4. - ↑ Chase, James (2002). "The Non-Probabilistic Two Envelope Paradox" (PDF).
*Analysis*.**62**(2): 157–160. doi:10.1093/analys/62.2.157. - ↑ Katz, Bernard; Olin, Doris (2007). "A tale of two envelopes".
*Mind*.**116**(464): 903–926. doi:10.1093/mind/fzm903. - ↑ Byeong-Uk Yi (2009). "The Two-envelope Paradox With No Probability" (PDF). Archived from the original (PDF) on 2011-09-29.
`{{cite journal}}`

: Cite journal requires`|journal=`

(help) - ↑ Bliss (2012). "A Concise Resolution to the Two Envelope Paradox". arXiv: 1202.4669 . Bibcode:2012arXiv1202.4669B.
`{{cite journal}}`

: Cite journal requires`|journal=`

(help) - ↑ McDonnell, M. D.; Abott, D. (2009). "Randomized switching in the two-envelope problem".
*Proceedings of the Royal Society A*.**465**(2111): 3309–3322. Bibcode:2009RSPSA.465.3309M. doi: 10.1098/rspa.2009.0312 . - ↑ Syverson, Paul (1 April 2010). "Opening Two Envelopes".
*Acta Analytica*.**25**(4): 479–498. doi:10.1007/s12136-010-0096-7. S2CID 12344371.


This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
