In game theory, the **traveler's dilemma** (sometimes abbreviated **TD**) is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma.

The original game scenario was formulated in 1994 by Kaushik Basu and goes as follows:^{ [1] }^{ [2] }

"An airline loses two suitcases belonging to two different travelers. Both suitcases happen to be identical and contain identical antiques. An airline manager tasked to settle the claims of both travelers explains that the airline is liable for a maximum of $100 per suitcase—he is unable to find out directly the price of the antiques."

"To determine an honest appraised value of the antiques, the manager separates both travelers so they can't confer, and asks them to write down the amount of their value at no less than $2 and no larger than $100. He also tells them that if both write down the same number, he will treat that number as the true dollar value of both suitcases and reimburse both travelers that amount. However, if one writes down a smaller number than the other, this smaller number will be taken as the true dollar value, and both travelers will receive that amount along with a bonus/malus: $2 extra will be paid to the traveler who wrote down the lower value and a $2 deduction will be taken from the person who wrote down the higher amount. The challenge is: what strategy should both travelers follow to decide the value they should write down?"

The two players attempt to maximize their own payoff, without any concern for the other player's payoff.

One might expect a traveler's optimum choice to be $100; that is, the traveler values the antiques at the airline manager's maximum allowed price. Remarkably, and, to many, counter-intuitively, the Nash equilibrium solution is in fact just $2; that is, the traveler values the antiques at the airline manager's *minimum* allowed price.

To see why $2 is the Nash equilibrium, consider the following proof:

- Alice, having lost her antiques, is asked their value. Alice's first thought is to quote $100, the maximum permissible value.
- On reflection, though, she realizes that her fellow traveler, Bob, might also quote $100. And so Alice changes her mind, and decides to quote $99, which, if Bob quotes $100, will pay $101.
- But Bob, being in an identical position to Alice, might also think of quoting $99. And so Alice changes her mind, and decides to quote $98, which, if Bob quotes $99, will pay $100. This is greater than the $99 Alice would receive if both she and Bob quoted $99.
- This cycle of thought continues, until Alice finally decides to quote just $2—the minimum permissible price.
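The unraveling in the steps above can be sketched in Python: starting from $100, we repeatedly replace the current quote with the best response to it until it stops changing. The helper names (`payoff`, `best_response`) are illustrative, not from the original text.

```python
def payoff(mine, theirs, bonus=2):
    """Traveler's dilemma payoff for the player quoting `mine`."""
    if mine < theirs:
        return mine + bonus   # lower quote wins the bonus
    if mine > theirs:
        return theirs - bonus  # higher quote pays the malus
    return mine                # equal quotes are paid at face value

def best_response(theirs):
    """Best whole-dollar reply in [2, 100] to an opponent quoting `theirs`."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

quote = 100
while best_response(quote) != quote:
    quote = best_response(quote)  # undercut by one dollar each round
print(quote)  # -> 2
```

Each pass undercuts the previous quote by exactly one dollar, reproducing Alice's chain of reasoning down to the $2 floor.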

Another proof goes as follows:

- If Alice only wants to maximize her own payoff, choosing $99 trumps choosing $100. If Bob chooses any dollar value 2–98 inclusive, $99 and $100 give equal payoffs; if Bob chooses $99 or $100, choosing $99 nets Alice an extra dollar.
- A similar line of reasoning shows that choosing $98 is always better for Alice than choosing $99. The only situation where choosing $99 would give a higher payoff than choosing $98 is if Bob chooses $100—but if Bob is only seeking to maximize his own profit, he will always choose $99 instead of $100.
- This line of reasoning can be applied to *all* of Alice's whole-dollar options until she finally reaches $2, the lowest price.
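The dominance argument above can also be sketched as iterated elimination of weakly dominated strategies: only $2 survives. This is an illustrative sketch, not code from the original text.

```python
def payoff(mine, theirs, bonus=2):
    """Traveler's dilemma payoff for the player quoting `mine`."""
    if mine < theirs:
        return mine + bonus
    if mine > theirs:
        return theirs - bonus
    return mine

strategies = list(range(2, 101))
changed = True
while changed:
    changed = False
    for s in strategies[:]:
        # s is weakly dominated by t if, against every surviving opponent
        # strategy, t never does worse and sometimes does strictly better.
        for t in strategies:
            if t == s:
                continue
            if (all(payoff(t, o) >= payoff(s, o) for o in strategies) and
                    any(payoff(t, o) > payoff(s, o) for o in strategies)):
                strategies.remove(s)
                changed = True
                break
print(strategies)  # -> [2]
```

Note that $98 does not dominate $99 until $100 has been removed, which is why the elimination must be iterated, mirroring the step-by-step reasoning in the text.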

The ($2, $2) outcome in this instance is the Nash equilibrium of the game. By definition, this means that if your opponent chooses this Nash-equilibrium value, then your best choice is also $2. It will not be the optimal choice if there is a chance of your opponent choosing a higher value than $2.^{ [3] } When the game is played experimentally, most participants select a value higher than the Nash equilibrium and closer to $100 (corresponding to the Pareto-optimal solution). More precisely, the Nash equilibrium strategy proved to be a bad predictor of people's behavior in a traveler's dilemma with a small bonus/malus, and a rather good predictor if the bonus/malus parameter was big.^{ [4] }

Furthermore, the travelers are rewarded for deviating strongly from the Nash equilibrium in the game and obtain much higher payoffs than would be realized with the purely rational strategy. These experiments (and others, such as focal points) show that the majority of people do not use purely rational strategies, but the strategies they do use are demonstrably optimal. This paradox could reduce the value of pure game-theoretic analysis, but could also point to the benefit of an expanded reasoning that understands how it can be quite rational to make non-rational choices, at least in the context of games whose players can be counted on not to play "rationally." For instance, Capraro has proposed a model in which humans do not act a priori as single agents but forecast how the game would be played if they formed coalitions, and then act so as to maximize that forecast. His model fits the experimental data on the traveler's dilemma and similar games quite well.^{ [5] } Recently, the traveler's dilemma was tested with decisions undertaken in groups rather than individually, in order to test the assumption that group decisions are more rational, in line with the message that, usually, two heads are better than one.^{ [6] } Experimental findings show that groups are always more rational, i.e. their claims are closer to the Nash equilibrium, and more sensitive to the size of the bonus/malus.^{ [7] }

Some players appear to pursue a Bayesian Nash equilibrium.^{ [8] }^{ [9] }

The traveler's dilemma can be framed as a finitely repeated prisoner's dilemma.^{ [8] }^{ [9] } Similar paradoxes are attributed to the centipede game and to the p-beauty contest game^{ [7] } (or, more specifically, "Guess 2/3 of the average"). One variation of the original traveler's dilemma, in which both travelers are offered only two integer choices, $2 or $3, is mathematically identical to the standard non-iterated prisoner's dilemma; the traveler's dilemma can thus be viewed as an extension of the prisoner's dilemma. These games tend to involve deep iterative deletion of dominated strategies in order to demonstrate the Nash equilibrium, and tend to lead to experimental results that deviate markedly from classical game-theoretical predictions.
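The two-choice variant can be laid out explicitly. The following illustrative sketch (the `payoff` helper is an assumed name) enumerates the 2×2 payoffs, showing the prisoner's-dilemma structure in which quoting $2 plays the role of "defect" and quoting $3 of "cooperate":

```python
def payoff(mine, theirs, bonus=2):
    """Traveler's dilemma payoff for the player quoting `mine`."""
    if mine < theirs:
        return mine + bonus
    if mine > theirs:
        return theirs - bonus
    return mine

# Enumerate the 2x2 game: temptation 4 > reward 3 > punishment 2 > sucker 0,
# exactly the ordering that defines a prisoner's dilemma.
for mine in (2, 3):
    for theirs in (2, 3):
        print(f"({mine}, {theirs}) -> {payoff(mine, theirs)}, {payoff(theirs, mine)}")
```

Quoting $2 yields 2 against $2 and 4 against $3, versus 0 and 3 for quoting $3, so $2 strictly dominates, just as defection does in the prisoner's dilemma.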

The canonical payoff matrix is shown below (if only integer inputs are taken into account):

|  | 100 | 99 | 98 | 97 | ⋯ | 3 | 2 |
|---|---|---|---|---|---|---|---|
| 100 | 100, 100 | 97, 101 | 96, 100 | 95, 99 | ⋯ | 1, 5 | 0, 4 |
| 99 | 101, 97 | 99, 99 | 96, 100 | 95, 99 | ⋯ | 1, 5 | 0, 4 |
| 98 | 100, 96 | 100, 96 | 98, 98 | 95, 99 | ⋯ | 1, 5 | 0, 4 |
| 97 | 99, 95 | 99, 95 | 99, 95 | 97, 97 | ⋯ | 1, 5 | 0, 4 |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋱ | ⋮ | ⋮ |
| 3 | 5, 1 | 5, 1 | 5, 1 | 5, 1 | ⋯ | 3, 3 | 0, 4 |
| 2 | 4, 0 | 4, 0 | 4, 0 | 4, 0 | ⋯ | 4, 0 | 2, 2 |
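As a consistency check, the table rows shown above can be regenerated from the payoff rule (a minimal sketch; the `payoff` helper name is an assumption for illustration):

```python
def payoff(mine, theirs, bonus=2):
    """Traveler's dilemma payoff for the player quoting `mine`."""
    if mine < theirs:
        return mine + bonus
    if mine > theirs:
        return theirs - bonus
    return mine

# Print each displayed row of the payoff matrix as "row payoff, column payoff".
for row in (100, 99, 98, 97, 3, 2):
    cells = [f"{payoff(row, col)}, {payoff(col, row)}"
             for col in (100, 99, 98, 97, 3, 2)]
    print(row, "|", " | ".join(cells))
```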

Denoting by $S = \{2, 3, \ldots, 100\}$ the set of strategies available to both players and by $\pi(s_1, s_2)$ the payoff to a player who quotes $s_1$ against an opponent who quotes $s_2$, we can write

$$\pi(s_1, s_2) = \begin{cases} s_1 + 2 & \text{if } s_1 < s_2 \\ s_1 & \text{if } s_1 = s_2 \\ s_2 - 2 & \text{if } s_1 > s_2 \end{cases}$$

(Note that the other player receives $\pi(s_2, s_1)$, since the game is quantitatively symmetric.)
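A brute-force search over all whole-dollar strategy pairs confirms that ($2, $2) is the unique pure-strategy Nash equilibrium. This is an illustrative sketch of the check, not code from the original text:

```python
def payoff(s1, s2, bonus=2):
    """Payoff to the player quoting s1 against an opponent quoting s2."""
    if s1 < s2:
        return s1 + bonus
    if s1 > s2:
        return s2 - bonus
    return s1

S = range(2, 101)
# A pair is a Nash equilibrium if neither player gains by deviating unilaterally.
equilibria = [(s1, s2) for s1 in S for s2 in S
              if all(payoff(s1, s2) >= payoff(d, s2) for d in S)
              and all(payoff(s2, s1) >= payoff(d, s1) for d in S)]
print(equilibria)  # -> [(2, 2)]
```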

In game theory and economic theory, a **zero-sum game** is a mathematical representation of a situation in which each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a larger piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally.

The **prisoner's dilemma** is a standard example of a game analyzed in game theory that shows why two completely rational individuals might not cooperate, even if it appears that it is in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher while working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and named it "prisoner's dilemma", presenting it as follows:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they have enough to convict both on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The possible outcomes are:

- If A and B each betray the other, each of them serves two years in prison.
- If A betrays B but B remains silent, A will be set free and B will serve three years in prison (and vice versa).
- If A and B both remain silent, both of them will serve only one year in prison (on the lesser charge).

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

In economics and game theory, a participant is considered to have **superrationality** if they have perfect rationality but assume that all other players are superrational too and that a superrational individual will always come up with the same strategy as any other superrational thinker when facing the same problem. Applying this definition, a superrational player playing against a superrational opponent in a prisoner's dilemma will cooperate while a rationally self-interested player would defect.

The **game of chicken**, also known as the **hawk–dove game** or **snowdrift game**, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, the individuals try to avoid yielding out of pride, not wanting to look like a 'chicken'. So each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game is for the most part over.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, **coordination games** are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

In game theory, the **centipede game**, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match Odd keeps both pennies, so receives one from Even.

In game theory, the **stag hunt** is a game that describes a conflict between safety and social cooperation. Other names for it or its variants include "assurance game", "coordination game", and "trust dilemma". Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag. This has been taken to be a useful analogy for social cooperation, such as international agreements on climate change.

In game theory, a **solution concept** is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a **Bayesian game** is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

**Backward induction** is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913, to prove that chess has pure optimal strategies.
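The process described above can be sketched on a small centipede-style game. The pot sizes and split rule below are assumptions chosen purely for the example, not values from the text; the sketch solves from the final round backwards and finds that the mover takes at every decision node, so the game ends immediately:

```python
def solve(rounds, pot0=4, growth=2):
    """Backward-induction payoffs (mover, other) at round 0 of a toy
    centipede game: taking splits the pot as (pot/2 + 1, pot/2 - 1);
    passing swaps roles and grows the pot by `growth`."""
    pot = pot0 + growth * (rounds - 1)
    # Last round: the mover takes the larger share.
    mover, other = pot // 2 + 1, pot // 2 - 1
    for _ in range(rounds - 1):
        pot -= growth
        take = (pot // 2 + 1, pot // 2 - 1)
        passing = (other, mover)  # roles swap if the current mover passes
        # The mover compares taking now with the (already solved) continuation.
        mover, other = take if take[0] >= passing[0] else passing
    return mover, other

print(solve(100))  # -> (3, 1): the first mover takes the initial pot at once
```

Because passing always hands the mover slightly less than taking now, the take-everywhere outcome propagates all the way back to the first round, which is exactly the paradoxical prediction discussed for the centipede game.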

In game theory, **strategic dominance** occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play.

In game theory, **folk theorems** are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept subgame-perfect Nash equilibria rather than Nash equilibrium.

In game theory, a **subgame perfect equilibrium** is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

**Quantal response equilibrium** (**QRE**) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

**Risk dominance** and **payoff dominance** are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered **payoff dominant** if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered **risk dominant** if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, an **epsilon-equilibrium**, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
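As an illustration (my connection, not a claim from the text): in the traveler's dilemma with a $2 bonus/malus, both players quoting $100 is an epsilon-equilibrium, since the best unilateral deviation (quoting $99) gains only $1. A quick sketch of the check:

```python
def payoff(mine, theirs, bonus=2):
    """Traveler's dilemma payoff for the player quoting `mine`."""
    if mine < theirs:
        return mine + bonus
    if mine > theirs:
        return theirs - bonus
    return mine

current = payoff(100, 100)                             # 100 at the (100, 100) profile
best_dev = max(payoff(d, 100) for d in range(2, 101))  # quoting 99 yields 101
print(best_dev - current)  # -> 1, so (100, 100) is a 1-equilibrium
```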

- ↑ Kaushik Basu, "The Traveler's Dilemma: Paradoxes of Rationality in Game Theory". *American Economic Review*. **84** (2): 391–395. May 1994.
- ↑ Kaushik Basu, "The Traveler's Dilemma". *Scientific American*, June 2007.
- ↑ Wolpert, D (2009). "Schelling Formalized: Strategic Choices of Non-Rational Personas". SSRN 1172602.
- ↑ Capra, C. Monica; Goeree, Jacob K.; Gomez, Rosario; Holt, Charles A. (1999). "Anomalous Behavior in a Traveler's Dilemma?". *The American Economic Review*. **89** (3): 678–690. doi:10.1257/aer.89.3.678. JSTOR 117040.
- ↑ Capraro, V (2013). "A Model of Human Cooperation in Social Dilemmas". *PLoS ONE*. **8** (8): e72427. arXiv:1307.4228. doi:10.1371/journal.pone.0072427. PMC 3756993. PMID 24009679.
- ↑ Cooper, David J; Kagel, John H (2005). "Are Two Heads Better Than One? Team versus Individual Play in Signaling Games" (PDF). *American Economic Review*. **95** (3): 477–509. doi:10.1257/0002828054201431. ISSN 0002-8282.
- 1 2 Morone, A.; Morone, P.; Germani, A. R. (2014). "Individual and group behaviour in the traveler's dilemma: An experimental study". *Journal of Behavioral and Experimental Economics*. **49**: 1–7. doi:10.1016/j.socec.2014.02.001.
- 1 2 Becker, T.; Carter, M.; Naeve, J. (2005). "Experts Playing the Traveler's Dilemma" (No. 252/2005). Department of Economics, University of Hohenheim, Germany.
- 1 2 Baader, Malte; Vostroknutov, Alexander (October 2017). "Interaction of reasoning ability and distributional preferences in a social dilemma". *Journal of Economic Behavior & Organization*. **142**: 79–91. doi:10.1016/j.jebo.2017.07.025.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.