In game theory, the **centipede game**, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds (hence the name), any game with this structure but a different number of rounds is called a centipede game.

- Play
- Formal Definition
- Equilibrium analysis and backward induction
- Empirical results
- Explanations
- Significance
- See also
- References
- External links

The unique subgame perfect equilibrium (and every Nash equilibrium) of these games calls for the first player to take the pot on the first round; in empirical tests, however, relatively few players do so, and as a result they achieve higher payoffs than the equilibrium analysis predicts. These results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. The centipede game is commonly used in introductory game theory courses and texts to highlight the concept of backward induction and the iterated elimination of dominated strategies, which provide a standard way of solving the game.

## Play

One possible version of a centipede game could be played as follows:

Consider two players: Alice and Bob. Alice moves first. At the start of the game, Alice has two piles of coins in front of her: one pile contains 4 coins and the other pile contains 1 coin. Each player has two moves available: either "take" the larger pile of coins and give the smaller pile to the other player, or "push" both piles across the table to the other player. Each time the piles of coins pass across the table, the quantity of coins in each pile doubles. For example, assume that Alice chooses to "push" the piles on her first move, handing the piles of 1 and 4 coins over to Bob, doubling them to 2 and 8. Bob could now use his first move either to "take" the pile of 8 coins and give 2 coins to Alice, or to "push" the two piles back across the table to Alice, again increasing the size of the piles to 4 and 16 coins. The game continues for a fixed number of rounds or until a player decides to end the game by pocketing a pile of coins.

The addition of coins is taken to be an externality, as it is not contributed by either player.
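The coin-pile mechanics above are simple enough to simulate directly. The sketch below is illustrative (the function and strategy names are hypothetical); the "take"/"push" moves and the doubling rule follow the description above, and after a final push the receiving player is left holding the larger pile:

```python
def play_centipede(strategies, rounds=4, small=1, large=4):
    """Simulate the two-pile centipede game described above.

    strategies: two functions; strategies[i](round_number) returns
    "take" or "push" for player i (0 = Alice, 1 = Bob).
    Returns the final (alice_coins, bob_coins) pair.
    """
    for r in range(1, rounds + 1):
        mover = (r - 1) % 2              # Alice moves on odd rounds, Bob on even
        if strategies[mover](r) == "take":
            payoffs = [0, 0]
            payoffs[mover] = large       # the mover pockets the larger pile
            payoffs[1 - mover] = small   # the other player gets the smaller pile
            return tuple(payoffs)
        small, large = 2 * small, 2 * large   # both piles double when pushed
    # The final push crosses the table: the pusher keeps only the
    # smaller pile, and the opponent ends up with the larger one.
    pusher = (rounds - 1) % 2
    payoffs = [0, 0]
    payoffs[pusher] = small
    payoffs[1 - pusher] = large
    return tuple(payoffs)

always_take = lambda r: "take"
always_push = lambda r: "push"

print(play_centipede([always_take, always_push]))  # (4, 1): Alice takes at once
print(play_centipede([always_push, always_push]))  # (64, 16): four pushes
```

Note that mutual pushing leaves both players far better off than an immediate take, which is exactly the tension the equilibrium analysis turns on.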

## Formal Definition

The centipede game may be written as *G*(*n*), where *n* ∈ ℕ, *n* ≥ 2, is the number of rounds. Players *1* and *2* alternate, starting with player *1*, and may on each turn play a move from {take, push}, with a maximum of *n* rounds. The game terminates when take is played for the first time, or otherwise upon the *n*-th move, if take is never played.

Suppose the game ends on round *t* with player *p* making the final move. Then the outcome of the game is defined as follows:

- If *p* played take, then *p* gains 2^(*t*+1) coins and *p̄* gains 2^(*t*−1) coins.
- If *p* played push, then *p* gains 2^(*t*) coins and *p̄* gains 2^(*t*+2) coins.

Here, *p̄* denotes the other player. These payoffs match the example above, in which the piles start at 1 and 4 coins and double on every push.
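Assuming the piles start at 1 and 4 coins and double on every push, as in the example above, the end-of-game payoff rule can be written out directly (a sketch; the function name is hypothetical): a take on round t pockets 2^(t+1) coins and leaves 2^(t−1) for the opponent, while a final push on round t leaves the pusher 2^t and the opponent 2^(t+2).

```python
def payoffs(t, last_move):
    """Return (final_mover, other_player) payoffs when the game
    ends on round t with the given last move ("take" or "push")."""
    if last_move == "take":
        # The taker pockets the larger pile; the smaller pile goes across.
        return 2 ** (t + 1), 2 ** (t - 1)
    # A final push doubles the piles once more and hands both over,
    # so the pusher is left with the smaller pile.
    return 2 ** t, 2 ** (t + 2)

print(payoffs(1, "take"))   # (4, 1): a first-round take, as in the example
print(payoffs(2, "take"))   # (8, 2): Bob takes the pile of 8, Alice gets 2
```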

## Equilibrium analysis and backward induction

Standard game-theoretic tools predict that the first player will defect on the first round, taking the pile of coins for himself. In the centipede game, a pure strategy consists of a set of actions (one for each choice point in the game, even though some of these choice points may never be reached) and a mixed strategy is a probability distribution over the possible pure strategies. There are several pure-strategy Nash equilibria of the centipede game and infinitely many mixed-strategy Nash equilibria. However, there is only one subgame perfect equilibrium (a popular refinement of the Nash equilibrium concept).

In the unique subgame perfect equilibrium, each player chooses to defect at every opportunity. This, of course, means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial choice opportunities (even though they are never reached since the first player defects immediately) may be cooperative.

Defection by the first player is the unique subgame perfect equilibrium and is required by any Nash equilibrium; this can be established by backward induction. Suppose two players reach the final round of the game; the second player will do better by defecting and taking a slightly larger share of the pot. Since we suppose the second player will defect, the first player does better by defecting in the second-to-last round, taking a slightly higher payoff than she would have received by allowing the second player to defect in the last round. But knowing this, the second player ought to defect in the third-to-last round, taking a slightly higher payoff than he would have received by allowing the first player to defect in the second-to-last round. This reasoning proceeds backwards through the game tree until one concludes that the best action is for the first player to defect in the first round. The same reasoning applies at any node in the game tree.

For a game that ends after four rounds, this reasoning proceeds as follows. If we were to reach the last round of the game, Player *2* would do better by choosing *d* instead of *r*, receiving 4 coins instead of 3. However, given that *2* will choose *d*, *1* should choose *D* in the second to last round, receiving 3 instead of 2. Given that *1* would choose *D* in the second to last round, *2* should choose *d* in the third to last round, receiving 2 instead of 1. But given this, Player *1* should choose *D* in the first round, receiving 1 instead of 0.
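The four-round argument above can be checked mechanically with a small backward-induction solver. The "take" payoffs below follow the numbers in the text; the pair (4, 3) for the outcome in which both players always pass is an assumption consistent with that sequence, not a figure stated in the text:

```python
# Terminal payoffs (player 1, player 2) for taking at nodes 1..4,
# plus the assumed payoff when both players pass at every node.
take_payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]
pass_through_payoff = (4, 3)   # assumption: payoff if nobody ever takes

def solve(node=0):
    """Return (payoffs, moves) under optimal play from this node onward."""
    if node == len(take_payoffs):            # everyone passed
        return pass_through_payoff, []
    mover = node % 2                         # player 1 at the 1st and 3rd nodes
    continuation, later = solve(node + 1)    # value of passing here
    take = take_payoffs[node]
    if take[mover] > continuation[mover]:    # taking beats the continuation value
        return take, ["take"]
    return continuation, ["pass"] + later

print(solve())  # ((1, 0), ['take']): player 1 defects at the first node
```

The recursion mirrors the prose exactly: each node compares the mover's take payoff against what backward induction says the continuation would yield.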

There are a large number of Nash equilibria in a centipede game, but in each, the first player defects on the first round and the second player defects in the next round frequently enough to dissuade the first player from passing. Being in a Nash equilibrium does not require that strategies be rational at **every point** in the game as in the subgame perfect equilibrium. This means that strategies that are cooperative in the never-reached later rounds of the game could still be in a Nash equilibrium. In the example above, one Nash equilibrium is for both players to defect on each round (even in the later rounds that are never reached). Another Nash equilibrium is for player 1 to defect on the first round, but pass on the third round and for player 2 to defect at any opportunity.
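This claim can be verified by brute force for the same four-round example (again assuming (4, 3) for the all-pass outcome, which is not stated in the text): enumerating every pure-strategy profile and checking best responses shows that in each pure-strategy Nash equilibrium both players take at their first node, while their never-reached later actions are unconstrained.

```python
from itertools import product

take_payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]   # taking at nodes 1..4
pass_through = (4, 3)                             # assumed all-pass payoff

def outcome(s1, s2):
    """Payoffs when player 1 acts at nodes 1, 3 and player 2 at nodes 2, 4."""
    plan = [s1[0], s2[0], s1[1], s2[1]]           # action at each node in turn
    for node, action in enumerate(plan):
        if action == "take":
            return take_payoffs[node]
    return pass_through

pure = list(product(["take", "pass"], repeat=2))  # plans for a player's two nodes
equilibria = [
    (s1, s2)
    for s1, s2 in product(pure, pure)
    # Nash: neither player gains by switching to any other pure strategy.
    if all(outcome(a, s2)[0] <= outcome(s1, s2)[0] for a in pure)
    and all(outcome(s1, b)[1] <= outcome(s1, s2)[1] for b in pure)
]

print(len(equilibria))                               # 4 pure-strategy equilibria
print(all(s1[0] == "take" for s1, _ in equilibria))  # True: player 1 defects first
```

The four equilibria differ only in the (never-reached) second actions, which includes the "defect first, pass on the third round" profile mentioned above.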

## Empirical results

Several studies have demonstrated that play consistent with the Nash equilibrium (and likewise with the subgame perfect equilibrium) is rarely observed. Instead, subjects regularly show partial cooperation, playing "R" (or "r") for several moves before eventually choosing "D" (or "d"). It is also rare for subjects to cooperate through the whole game. For examples, see McKelvey and Palfrey (1992) and Nagel and Tang (1998). As in many other game-theoretic experiments, scholars have investigated the effect of increasing the stakes. As with other games, for instance the ultimatum game, as the stakes increase, play approaches (but does not reach) Nash equilibrium play.

## Explanations

Since the empirical studies have produced results that are inconsistent with the traditional equilibrium analysis, several explanations of the observed behavior have been offered. Rosenthal (1981) suggested that if one has reason to believe one's opponent will deviate from Nash behavior, then it may be advantageous not to defect on the first round.

One reason to suppose that people may deviate from the equilibrium behavior is if some are altruistic. The basic idea is that if you are playing against an altruist, that person will always cooperate, and hence, to maximize your payoff you should defect on the last round rather than the first. If enough people are altruists, sacrificing the payoff of first-round defection is worth the price in order to determine whether or not your opponent is an altruist. Nagel and Tang (1998) suggest this explanation.

Another possibility involves error. If there is a significant possibility of error in action, perhaps because your opponent has not reasoned completely through the backward induction, it may be advantageous (and rational) to cooperate in the initial rounds.

However, Parco, Rapoport and Stein (2002) illustrated that the level of financial incentives can have a profound effect on the outcome in a three-player game: the larger the incentives for deviation, the greater the propensity of players, in a repeated single-play experimental design, to learn their way toward the Nash equilibrium.

Palacios-Huerta and Volij (2009) find that expert chess players play differently from college students: as Elo rating rises, the probability of continuing the game declines, and all Grandmasters in their experiment stopped at their first chance. They conclude that chess players are familiar with backward induction reasoning and hence need less learning to reach the equilibrium. However, in an attempt to replicate these findings, Levitt, List, and Sadoff (2010) find strongly contradictory results, with zero of sixteen Grandmasters stopping the game at the first node.

## Significance

Like the Prisoner's Dilemma, this game presents a conflict between self-interest and mutual benefit. If it could be enforced, both players would prefer that they both cooperate throughout the entire game. However, a player's self-interest or players' distrust can interfere and create a situation where both do worse than if they had blindly cooperated. Although the Prisoner's Dilemma has received substantial attention for this fact, the centipede game has received relatively less.

Additionally, Binmore (2005) has argued that some real-world situations can be described by the centipede game. One example he presents is the exchange of goods between parties that distrust each other. Another of his examples is the mating behavior of hermaphroditic sea bass, which take turns exchanging eggs for the other to fertilize. In these cases, cooperation is abundant.

Since the payoffs for some amount of cooperation in the centipede game are so much larger than those for immediate defection, the "rational" solutions given by backward induction can seem paradoxical. This, coupled with the fact that experimental subjects regularly cooperate in the centipede game, has prompted debate over the usefulness of the idealizations involved in the backward induction solutions; see Aumann (1995, 1996) and Binmore (1996).

## See also

An **evolutionarily stable strategy** (**ESS**) is a strategy which, if adopted by a population in a given environment, is impenetrable, meaning that it cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. An ESS is an equilibrium refinement of the Nash equilibrium. It is a Nash equilibrium that is "evolutionarily" stable: once it is fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from invading successfully. The theory is not intended to deal with the possibility of gross external changes to the environment that bring new selective forces to bear.

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, Even keeps both pennies, winning one from Odd. If the pennies do not match, Odd keeps both pennies, winning one from Even.

In game theory, **cheap talk** is communication between players that does not directly affect the payoffs of the game. Providing and receiving information is free. This is in contrast to signaling in which sending certain messages may be costly for the sender depending on the state of the world.

In game theory, the **stag hunt**, sometimes referred to as the **assurance game** or **trust dilemma**, describes a conflict between safety and social cooperation. The game is based on a story told by the philosopher Jean-Jacques Rousseau in his Discourse on Inequality. Rousseau describes a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag. This has been taken to be a useful analogy for social cooperation, such as international agreements on climate change. The stag hunt differs from the Prisoner's Dilemma in that it has two pure-strategy Nash equilibria: one in which both players cooperate and one in which both defect. In the Prisoner's Dilemma, by contrast, despite the fact that both players cooperating is Pareto efficient, the only pure-strategy Nash equilibrium is for both players to defect.

In game theory, **battle of the sexes** (**BoS**) is a two-player coordination game. Some authors refer to the game as **Bach or Stravinsky** and designate the players simply as Player 1 and Player 2, rather than assigning sex.

In game theory, a **solution concept** is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a **Perfect Bayesian Equilibrium** (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components: *strategies* and *beliefs*.

**Backward induction** is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913, to prove that chess has pure optimal strategies.

In game theory, **folk theorems** are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a **repeated game** is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. *Single stage game* or *single shot game* are names for non-repeated games.

In game theory, a **Manipulated Nash equilibrium** or **MAPNASH** is a refinement of subgame perfect equilibrium used in dynamic games of imperfect information. Informally, a strategy set is a MAPNASH of a game if it would be a subgame perfect equilibrium of the game were the game one of perfect information. MAPNASH was first suggested by Amershi, Sadanand, and Sadanand (1988) and has been discussed in several papers since. It is a solution concept based on how players think about other players' thought processes.

In game theory, a **subgame perfect equilibrium** is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

**Quantal response equilibrium** (**QRE**) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

In game theory, an **epsilon-equilibrium**, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

The two-person **bargaining problem** studies how two agents share a surplus that they can jointly generate. It is in essence a payoff selection problem. In many cases, the surplus created by the two players can be shared in many ways, forcing the players to negotiate which division of payoffs to choose. There are two typical approaches to the bargaining problem. The normative approach studies how the surplus should be shared. It formulates appealing axioms that the solution to a bargaining problem should satisfy. The positive approach answers the question how the surplus will be shared. Under the positive approach, the bargaining procedure is modeled in detail as a non-cooperative game.

**Jean-François Mertens** was a Belgian game theorist and mathematical economist.

**Mertens stability** is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

The **Berge equilibrium** is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

## References

- Aumann, R. (1995). "Backward Induction and Common Knowledge of Rationality". *Games and Economic Behavior*. **8** (1): 6–19. doi:10.1016/S0899-8256(05)80015-6.
- Aumann, R. (1996). "A Reply to Binmore". *Games and Economic Behavior*. **17** (1): 138–146. doi:10.1006/game.1996.0099.
- Binmore, K. (1996). "A Note on Backward Induction". *Games and Economic Behavior*. **17** (1): 135–137. doi:10.1006/game.1996.0098.
- Binmore, K. (2005). *Natural Justice*. New York: Oxford University Press. ISBN 978-0-19-517811-1.
- Levitt, S. D.; List, J. A.; Sadoff, S. E. (2010). "Checkmate: Exploring Backward Induction Among Chess Players". *American Economic Review*. **101** (2): 975–990. doi:10.1257/aer.101.2.975.
- McKelvey, R.; Palfrey, T. (1992). "An Experimental Study of the Centipede Game". *Econometrica*. **60** (4): 803–836. doi:10.2307/2951567. JSTOR 2951567.
- Nagel, R.; Tang, F. F. (1998). "An Experimental Study on the Centipede Game in Normal Form: An Investigation on Learning". *Journal of Mathematical Psychology*. **42** (2–3): 356–384. doi:10.1006/jmps.1998.1225.
- Palacios-Huerta, I.; Volij, O. (2009). "Field Centipedes". *American Economic Review*. **99** (4): 1619–1635. doi:10.1257/aer.99.4.1619.
- Parco, J. E.; Rapoport, A.; Stein, W. E. (2002). "Effects of financial incentives on the breakdown of mutual trust". *Psychological Science*. **13** (3): 292–297. doi:10.1111/1467-9280.00454. PMID 12009054.
- Rapoport, A.; Stein, W. E.; Parco, J. E.; Nicholas, T. E. (2003). "Equilibrium play and adaptive learning in a three-person centipede game". *Games and Economic Behavior*. **43** (2): 239–265. doi:10.1016/S0899-8256(03)00009-5.
- Rosenthal, R. (1981). "Games of Perfect Information, Predatory Pricing, and the Chain-Store Paradox". *Journal of Economic Theory*. **25** (1): 92–100. doi:10.1016/0022-0531(81)90018-1.

## External links

- EconPort article on the Centipede Game
- Rationality and Game Theory - AMS column about the centipede game
- Online experiment in VeconLab
- Play the Centipede game in your browser on gametheorygame.nl

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
