Newcomb's paradox

Payoff table (the player's payout for each combination of predicted and actual choice):

                           Predicted: A + B       Predicted: B
                           (B has $0)             (B has $1,000,000)
  Actual choice: A + B     $1,000                 $1,001,000
  Actual choice: B         $0                     $1,000,000

In philosophy and mathematics, Newcomb's paradox, also known as Newcomb's problem, is a thought experiment involving a game between two players, one of whom is able to predict the future.

Newcomb's paradox was created by William Newcomb of the University of California's Lawrence Livermore Laboratory. However, it was first analyzed in a philosophy paper by Robert Nozick in 1969 [1] and appeared in the March 1973 issue of Scientific American, in Martin Gardner's "Mathematical Games" column. [2] Today it is a much-debated problem in the philosophical branch of decision theory. [3]

The problem

There is a reliable predictor, another player, and two boxes designated A and B. The player is given a choice between taking only box B or taking both boxes A and B. The player knows the following: [4]

  - Box A is transparent and always contains a visible $1,000.
  - Box B is opaque, and its content has already been set by the predictor: if the predictor has predicted that the player will take both boxes A and B, then box B contains nothing, but if the predictor has predicted that the player will take only box B, then box B contains $1,000,000.

The player does not know what the predictor predicted or what box B contains while making the choice.

Game-theory strategies

In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly." [4] The problem continues to divide philosophers today. [5] [6] In a 2020 survey, a modest plurality of professional philosophers chose to take both boxes (39.0% versus 31.2%). [7]

Game theory offers two strategies for this game that rely on different principles: the expected utility principle and the strategic dominance principle. The problem is called a paradox because two analyses that both sound intuitively logical give conflicting answers to the question of what choice maximizes the player's payout.
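
As an illustration only (a minimal Python sketch, not part of the original problem statement), the two principles can be made concrete using the payoffs from the table above and the assumption that the prediction matches the actual choice with some probability p:

    # Minimal sketch. Assumption: the prediction matches the actual choice with
    # probability p, independently of how the choice is made. Payoffs are taken
    # from the payoff table above.

    def expected_value(choice, p):
        """Expected payout for "one-box" or "two-box" under predictor accuracy p."""
        if choice == "one-box":
            return p * 1_000_000 + (1 - p) * 0
        return p * 1_000 + (1 - p) * 1_001_000  # "two-box"

    # Expected-utility principle: with a highly reliable predictor, one-boxing wins.
    p = 0.99
    print(expected_value("one-box", p))   # 990000.0
    print(expected_value("two-box", p))   # 11000.0

    # Strategic-dominance principle: hold box B's (already fixed) content constant;
    # taking both boxes then pays exactly $1,000 more in either case.
    for b_content in (0, 1_000_000):
        assert b_content + 1_000 > b_content

On this particular filling-in of the problem, one-boxing has the higher expected value whenever p exceeds roughly 0.5005, while the dominance argument favours two-boxing regardless of what is already in box B.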

David Wolpert and Gregory Benford point out that paradoxes arise when not all relevant details of a problem are specified, and there is more than one "intuitively obvious" way to fill in those missing details. They suggest that in the case of Newcomb's paradox, the conflict over which of the two strategies is "obviously correct" reflects the fact that filling in the details in Newcomb's problem can result in two different noncooperative games, and each of the strategies is reasonable for one game but not the other. They then derive the optimal strategies for both of the games, which turn out to be independent of the predictor's infallibility, questions of causality, determinism, and free will. [4]
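
A rough Python sketch of that point (illustrative only; it is not Wolpert and Benford's own formalization): in one way of filling in the details, box B's content effectively tracks the choice itself, while in another it is fixed independently of the choice.

    # Illustrative only: two ways of "filling in" the underspecified problem,
    # each making a different strategy optimal.

    def game_prediction_tracks_choice(choice):
        # Filling-in 1: the prediction is determined by the choice itself
        # (the limiting case of a perfect predictor).
        b = 1_000_000 if choice == "one-box" else 0
        return b if choice == "one-box" else b + 1_000

    def game_prediction_fixed(choice, b_already_in_box):
        # Filling-in 2: box B's content is fixed before, and independently of,
        # the choice, so adding box A's $1,000 can only help (dominance).
        return b_already_in_box if choice == "one-box" else b_already_in_box + 1_000

    print(game_prediction_tracks_choice("one-box"))       # 1000000
    print(game_prediction_tracks_choice("two-box"))       # 1000
    print(game_prediction_fixed("two-box", 0))            # 1000    vs one-box's 0
    print(game_prediction_fixed("two-box", 1_000_000))    # 1001000 vs one-box's 1000000

In the first filled-in game the expected-utility recommendation (take only box B) is optimal; in the second, the dominance recommendation (take both boxes) is.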

Causality and free will

Payoff table with an infallible predictor (cells where the prediction and the actual choice differ are impossible):

                           Predicted: A + B       Predicted: B
  Actual choice: A + B     $1,000                 Impossible
  Actual choice: B         Impossible             $1,000,000

Causality issues arise when the predictor is posited as infallible and incapable of error; Nozick avoids this issue by positing that the predictor's predictions are "almost certainly" correct, thus sidestepping any questions of infallibility and causality. Nozick also stipulates that if the predictor predicts that the player will choose randomly, then box B will contain nothing. This assumes that inherently random or unpredictable events, such as free will or quantum mind processes, would not come into play during the process of making the choice. [8] However, these issues can still be explored in the case of an infallible predictor. Under this condition, it seems that taking only B is the correct option. This analysis argues that we can ignore the possibilities that return $0 and $1,001,000, as they both require that the predictor has made an incorrect prediction, and the problem states that the predictor is never wrong. Thus, the choice becomes whether to take both boxes with $1,000 or to take only box B with $1,000,000, so taking only box B is always better.
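
For illustration (a Python sketch assuming a literally infallible predictor, following the argument above), one can enumerate the four cells of the table and discard those that would require a wrong prediction:

    # Sketch: with an infallible predictor, only the cells where the prediction
    # matches the actual choice remain possible.
    payoff = {
        ("two-box", "two-box"): 1_000,        # (actual choice, predicted choice)
        ("two-box", "one-box"): 1_001_000,
        ("one-box", "two-box"): 0,
        ("one-box", "one-box"): 1_000_000,
    }

    possible = {actual: value
                for (actual, predicted), value in payoff.items()
                if actual == predicted}
    print(possible)  # {'two-box': 1000, 'one-box': 1000000} -> only box B pays more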

William Lane Craig has suggested that, in a world with perfect predictors (or time machines, because a time machine could be used as a mechanism for making a prediction), retrocausality can occur. [9] The chooser's choice can be said to have caused the predictor's prediction. Some have concluded that if time machines or perfect predictors can exist, then there can be no free will and choosers will do whatever they are fated to do. Taken together, the paradox is a restatement of the old contention that free will and determinism are incompatible, since determinism enables the existence of perfect predictors. Put another way, this paradox can be equivalent to the grandfather paradox; the paradox presupposes a perfect predictor, implying the "chooser" is not free to choose, yet simultaneously presumes a choice can be debated and decided. This suggests to some that the paradox is an artifact of these contradictory assumptions. [10]

Gary Drescher argues in his book Good and Real that the correct decision is to take only box B, by appealing to a situation he argues is analogous: a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street. [11]

Andrew Irvine argues that the problem is structurally isomorphic to Braess's paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds. [12]

Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based and the stage after it. While the player is still in the first stage, they are presumably able to influence the predictor's prediction, for example, by committing to taking only one box. So players who are still in the first stage should simply commit themselves to one-boxing.

Burgess readily acknowledges that those who are in the second stage should take both boxes. As he emphasises, however, for all practical purposes that is beside the point; the decisions "that determine what happens to the vast bulk of the money on offer all occur in the first [stage]". [13] So players who find themselves in the second stage without having already committed to one-boxing will invariably end up without the riches and without anyone else to blame. In Burgess's words: "you've been a bad boy scout"; "the riches are reserved for those who are prepared". [14]

Burgess has stressed that, pace certain critics (e.g., Peter Slezak), he does not recommend that players try to trick the predictor. Nor does he assume that the predictor is unable to predict the player's thought process in the second stage. [15] Quite to the contrary, Burgess analyses Newcomb's paradox as a common cause problem, and he pays special attention to the importance of adopting a set of unconditional probability values, whether implicitly or explicitly, that are entirely consistent at all times. To treat the paradox as a common cause problem is simply to assume that the player's decision and the predictor's prediction have a common cause. (That common cause may be, for example, the player's brain state at some particular time before the second stage begins.)

Burgess also highlights a similarity between Newcomb's paradox and Kavka's toxin puzzle: in both problems one can have a reason to intend to do something without having a reason to actually do it. Burgess credits recognition of that similarity to Andy Egan. [16]

Consciousness and simulation

Newcomb's paradox can also be related to the question of machine consciousness, specifically whether a perfect simulation of a person's brain will generate the consciousness of that person. [17] Suppose we take the predictor to be a machine that arrives at its prediction by simulating the brain of the chooser when confronted with the problem of which box to choose. If that simulation generates the consciousness of the chooser, then the chooser cannot tell whether they are standing in front of the boxes in the real world or in the virtual world generated by the simulation in the past. The "virtual" chooser would thus tell the predictor which choice the "real" chooser is going to make, and the chooser, not knowing whether they are the real chooser or the simulation, should take only box B.

Fatalism

Newcomb's paradox is related to logical fatalism in that they both suppose absolute certainty of the future. In logical fatalism, this assumption of certainty creates circular reasoning ("a future event is certain to happen, therefore it is certain to happen"), while Newcomb's paradox considers whether the participants of its game are able to affect a predestined outcome. [18]

Extensions to Newcomb's problem

Many thought experiments similar to or based on Newcomb's problem have been discussed in the literature. [1] For example, a quantum-theoretical version of Newcomb's problem in which box B is entangled with box A has been proposed. [19]

The meta-Newcomb problem

Another related problem is the meta-Newcomb problem. [20] The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor: a "meta-predictor" who has reliably predicted both the players and the predictor in the past, and who predicts the following: "Either you will choose both boxes, and the predictor will make its decision after you, or you will choose only box B, and the predictor will already have made its decision."

In this situation, a proponent of choosing both boxes is faced with the following dilemma: if the player chooses both boxes, the predictor will not yet have made its decision, and therefore a more rational choice would be for the player to choose box B only. But if the player so chooses, the predictor will already have made its decision, making it impossible for the player's decision to affect the predictor's decision.

Notes

  1. Nozick, Robert (1969). "Newcomb's Problem and Two Principles of Choice" (PDF). In Rescher, Nicholas (ed.). Essays in Honor of Carl G. Hempel. Springer. Archived from the original (PDF) on 2019-03-31.
  2. Gardner, Martin (March 1974). "Mathematical Games". Scientific American. 231 (3): 102. Bibcode:1974SciAm.231c.187G. doi:10.1038/scientificamerican0974-187. Reprinted with an addendum and annotated bibliography in his book The Colossal Book of Mathematics (ISBN 0-393-02023-1).
  3. "Causal Decision Theory". Stanford Encyclopedia of Philosophy. The Metaphysics Research Lab, Stanford University. Retrieved 3 February 2016.
  4. Wolpert, D. H.; Benford, G. (June 2013). "The lesson of Newcomb's paradox". Synthese. 190 (9): 1637–1646. doi:10.1007/s11229-011-9899-3. JSTOR 41931515. S2CID 113227.
  5. Bellos, Alex (28 November 2016). "Newcomb's problem divides philosophers. Which side are you on?". The Guardian. Retrieved 13 April 2018.
  6. Bourget, D.; Chalmers, D. J. (2014). "What do philosophers believe?". Philosophical Studies. 170 (3): 465–500.
  7. "PhilPapers Survey 2020".
  8. Langan, Christopher. "The Resolution of Newcomb's Paradox". Noesis (44).
  9. Craig, William Lane (1987). "Divine Foreknowledge and Newcomb's Paradox". Philosophia. 17 (3): 331–350. doi:10.1007/BF02455055. S2CID 143485859.
  10. Craig, William Lane (1988). "Tachyons, Time Travel, and Divine Omniscience". The Journal of Philosophy. 85 (3): 135–150. doi:10.2307/2027068. JSTOR 2027068.
  11. Drescher, Gary (2006). Good and Real: Demystifying Paradoxes from Physics to Ethics. ISBN 978-0262042338.
  12. Irvine, Andrew (1993). "How Braess' paradox solves Newcomb's problem". International Studies in the Philosophy of Science. 7 (2): 141–160. doi:10.1080/02698599308573460.
  13. Burgess, Simon (February 2012). "Newcomb's problem and its conditional evidence: a common cause of confusion". Synthese. 184 (3): 336. doi:10.1007/s11229-010-9816-1. JSTOR 41411196. S2CID 28725419.
  14. Burgess, Simon (January 2004). "Newcomb's problem: an unqualified resolution". Synthese. 138 (2): 282. doi:10.1023/b:synt.0000013243.57433.e7. JSTOR 20118389. S2CID 33405473.
  15. Burgess, Simon (February 2012). "Newcomb's problem and its conditional evidence: a common cause of confusion". Synthese. 184 (3): 329–330. doi:10.1007/s11229-010-9816-1. JSTOR 41411196. S2CID 28725419.
  16. Burgess, Simon (February 2012). "Newcomb's problem and its conditional evidence: a common cause of confusion". Synthese. 184 (3): 338. doi:10.1007/s11229-010-9816-1. JSTOR 41411196. S2CID 28725419.
  17. Neal, R. M. (2006). "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning". arXiv:math.ST/0608592.
  18. Dummett, Michael (1996). The Seas of Language. Oxford: Clarendon Press. pp. 352–358.
  19. Piotrowski, Edward; Sładkowski, Jan (2003). "Quantum solution to the Newcomb's paradox". International Journal of Quantum Information. 1 (3): 395–402. arXiv:quant-ph/0202074. doi:10.1142/S0219749903000279. S2CID 20417502.
  20. Bostrom, Nick (2001). "The Meta-Newcomb Problem". Analysis. 61 (4): 309–310. doi:10.1093/analys/61.4.309.
