The **game of chicken**, also known as the **hawk–dove game** or **snowdrift game**,^{ [1] } is a model of conflict for two players in game theory. The principle of the game is that while it is ideal for one player to yield (to avoid the worst outcome if neither yields), individuals try to avoid yielding out of pride, not wanting to look like a 'chicken'. Each player therefore taunts the other to increase the risk of shame in yielding. Once one player yields, however, the conflict is avoided and the game is essentially over.

- Popular versions
- Game theoretic applications
- Chicken
- Hawk–dove
- Best response mapping and Nash equilibria
- Strategy polymorphism vs strategy mixing
- Symmetry breaking
- Correlated equilibrium and the game of chicken
- Uncorrelated asymmetries and solutions to the hawk–dove game
- Replicator dynamics
- Related strategies and games
- Brinkmanship
- War of attrition
- Hawk–dove and war of attrition
- Chicken and prisoner's dilemma
- Schedule chicken and project management
- See also
- Notes
- References
- External links

The name "chicken" has its origins in a game in which two drivers drive towards each other on a collision course: one must swerve, or both may die in the crash, but if one driver swerves and the other does not, the one who swerved will be called a "chicken", meaning a coward; this terminology is most prevalent in political science and economics. The name "hawk–dove" refers to a situation in which there is a competition for a shared resource and the contestants can choose either conciliation or conflict; this terminology is most commonly used in biology and evolutionary game theory. From a game-theoretic point of view, "chicken" and "hawk–dove" are identical; the different names stem from parallel development of the basic principles in different research areas.^{ [2] } The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis.^{ [3] }

The game of chicken models two drivers, both headed for a single-lane bridge from opposite directions. The first to swerve away yields the bridge to the other. If neither player swerves, the result is a costly deadlock in the middle of the bridge, or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves (since the other is the "chicken" while a crash is avoided). Additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure their best outcome, risks the worst.

The phrase *game of chicken* is also used as a metaphor for a situation where two parties engage in a showdown where they have nothing to gain, and only pride stops them from backing down. Bertrand Russell famously compared the game of Chicken to nuclear brinkmanship:

Since the nuclear stalemate became apparent, the Governments of East and West have adopted the policy which Mr. Dulles calls 'brinkmanship'. This is a policy adapted from a sport which, I am told, is practiced by some youthful degenerates. This sport is called 'Chicken!'. It is played by choosing a long straight road with a white line down the middle and starting two very fast cars towards each other from opposite ends. Each car is expected to keep the wheels on one side of the white line. As they approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the white line before the other, the other, as they pass, shouts 'Chicken!', and the one who has swerved becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked. But when the game is played by eminent statesmen, who risk not only their own lives but those of many hundreds of millions of human beings, it is thought on both sides that the statesmen on one side are displaying a high degree of wisdom and courage, and only the statesmen on the other side are reprehensible. This, of course, is absurd. Both are to blame for playing such an incredibly dangerous game. The game may be played without misfortune a few times, but sooner or later it will come to be felt that loss of face is more dreadful than nuclear annihilation. The moment will come when neither side can face the derisive cry of 'Chicken!' from the other side. When that moment is come, the statesmen of both sides will plunge the world into destruction.^{ [3] }

Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome.^{ [4] } In the "chickie run" scene from the film *Rebel Without a Cause*, this happens when Buzz cannot escape from the car and dies in the crash. The opposite scenario occurs in *Footloose*, where Ren McCormack is stuck in his tractor and hence wins the game, as they cannot play "chicken". A similar event happens in two different games in the film *The Heavenly Kid*, when first Bobby, then later Lenny, become stuck in their cars and drive off a cliff. The basic game-theoretic formulation of Chicken has no element of variable, potentially catastrophic risk, and also contracts a dynamic situation into a one-shot interaction.

The hawk–dove version of the game imagines two players (animals) contesting an indivisible resource who can choose between two strategies, one more escalated than the other.^{ [5] } They can use threat displays (play Dove), or physically attack each other (play Hawk). If both players choose the Hawk strategy, then they fight until one is injured and the other wins. If only one player chooses Hawk, then this player defeats the Dove player. If both players play Dove, there is a tie, and each player receives a payoff lower than the profit of a hawk defeating a dove.

|  | Swerve | Straight |
| --- | --- | --- |
| **Swerve** | Tie, Tie | Lose, Win |
| **Straight** | Win, Lose | Crash, Crash |

Fig. 1: A payoff matrix of Chicken

|  | Swerve | Straight |
| --- | --- | --- |
| **Swerve** | 0, 0 | -1, +1 |
| **Straight** | +1, -1 | -1000, -1000 |

Fig. 2: Chicken with numerical payoffs

A formal version of the game of Chicken has been the subject of serious research in game theory.^{ [6] } Two versions of the payoff matrix for this game are presented here (Figures 1 and 2). In Figure 1, the outcomes are represented in words, where each player would prefer to win over tying, prefer to tie over losing, and prefer to lose over crashing. Figure 2 presents arbitrarily set numerical payoffs which theoretically conform to this situation. Here, the benefit of winning is 1, the cost of losing is -1, and the cost of crashing is -1000.

Both Chicken and Hawk–Dove are *anti-coordination games*, in which it is mutually beneficial for the players to play different strategies. In this way, it can be thought of as the opposite of a coordination game, where playing the same strategy Pareto dominates playing different strategies. The underlying concept is that players use a shared resource. In coordination games, sharing the resource creates a benefit for all: the resource is non-rivalrous, and the shared usage creates positive externalities. In anti-coordination games the resource is rivalrous but non-excludable and sharing comes at a cost (or negative externality).

Because the loss of swerving is so trivial compared to the crash that occurs if nobody swerves, the reasonable strategy would seem to be to swerve before a crash is likely. Yet, knowing this, if one believes one's opponent to be reasonable, one may well decide not to swerve at all, in the belief that they will be reasonable and decide to swerve, leaving the other player the winner. This unstable situation can be formalized by saying there is more than one Nash equilibrium, which is a pair of strategies for which neither player gains by changing their own strategy while the other stays the same. (In this case, the pure strategy equilibria are the two situations wherein one player swerves while the other does not.)
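The deviation check described above can be sketched mechanically. A minimal Python example (hypothetical, not from the source) that finds the pure-strategy Nash equilibria of the Fig. 2 payoffs:

```python
# Pure-strategy Nash equilibria of the Chicken payoffs in Fig. 2.
# Payoffs are (row, column) for each strategy pair; 0 = Swerve, 1 = Straight.
payoffs = {
    (0, 0): (0, 0),         # both swerve: tie
    (0, 1): (-1, 1),        # row swerves, column stays: row loses
    (1, 0): (1, -1),        # row stays, column swerves: row wins
    (1, 1): (-1000, -1000), # both stay: crash
}

def is_nash(r, c):
    """A pure profile is Nash if neither player gains by deviating alone."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (0, 1))
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (0, 1))
    return row_ok and col_ok

equilibria = [(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c)]
print(equilibria)  # [(0, 1), (1, 0)]: one player swerves while the other does not
```

As the text notes, only the two asymmetric profiles survive: both-swerve fails because either player gains by staying straight, and both-straight fails because either player gains by swerving.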

|  | Hawk | Dove |
| --- | --- | --- |
| **Hawk** | (V−C)/2, (V−C)/2 | V, 0 |
| **Dove** | 0, V | V/2, V/2 |

Fig. 3: Hawk–Dove game

|  | Hawk | Dove |
| --- | --- | --- |
| **Hawk** | X, X | W, L |
| **Dove** | L, W | T, T |

Fig. 4: General Hawk–Dove game

In the biological literature, this game is known as Hawk–Dove. The earliest presentation of a form of the Hawk–Dove game was by John Maynard Smith and George Price in their paper, "The logic of animal conflict".^{ [7] } The traditional ^{ [5] }^{ [8] } payoff matrix for the Hawk–Dove game is given in Figure 3, where V is the value of the contested resource, and C is the cost of an escalated fight. It is (almost always) assumed that the value of the resource is less than the cost of a fight, i.e., C > V > 0. If C ≤ V, the resulting game is not a game of Chicken but is instead a Prisoner's Dilemma.

The exact value of the Dove vs. Dove payoff varies between model formulations. Sometimes the players are assumed to split the payoff equally (V/2 each); other times the payoff is assumed to be zero, since zero is the expected payoff in the war of attrition game, the presumed model for a contest decided by display duration.

While the Hawk–Dove game is typically taught and discussed with the payoffs in terms of V and C, the solutions hold true for any matrix with the payoffs in Figure 4, where W > T > L > X.^{ [8] }
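Under the Fig. 3 payoffs, the mixed ESS plays Hawk with probability V/C. A minimal check, using illustrative values V = 2 and C = 10 (assumed here only for the example, satisfying C > V > 0), that each pure strategy earns the same expected payoff against this mixture:

```python
# Mixed ESS of the Hawk-Dove game in Fig. 3: play Hawk with probability V/C.
# Illustrative values (an assumption for this sketch): V = 2, C = 10.
V, C = 2.0, 10.0
p = V / C  # ESS probability of playing Hawk

# At the ESS, each pure strategy earns the same expected payoff
# against an opponent who plays Hawk with probability p.
hawk_payoff = p * (V - C) / 2 + (1 - p) * V  # Hawk vs Hawk, Hawk vs Dove
dove_payoff = p * 0 + (1 - p) * V / 2        # Dove vs Hawk, Dove vs Dove

print(p, hawk_payoff, dove_payoff)  # 0.2 0.8 0.8
```

The indifference of Hawk and Dove against the V/C mixture is exactly what makes the mixture an equilibrium; the same algebra goes through for any payoffs ranked as in Figure 4.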

Biologists have explored modified versions of the classic Hawk–Dove game to investigate a number of biologically relevant factors. These include adding variation in resource-holding potential and differences in the value of winning to the different players,^{ [9] } allowing the players to threaten each other before choosing moves in the game,^{ [10] } and extending the interaction to two plays of the game.^{ [11] }

One tactic in the game is for one party to signal their intentions convincingly before the game begins. For example, if one party were to ostentatiously disable their steering wheel just before the match, the other party would be compelled to swerve.^{ [12] } This shows that, in some circumstances, reducing one's own options can be a good strategy. One real-world example is a protester who handcuffs themselves to an object, so that no threat can be made which would compel them to move (since they cannot move). Another example, taken from fiction, is found in Stanley Kubrick's *Dr. Strangelove*. In that film, the Russians sought to deter American attack by building a "doomsday machine", a device that would trigger world annihilation if Russia was hit by nuclear weapons or if any attempt were made to disarm it. However, the Russians had planned to signal the deployment of the machine a few days after having set it up, which, because of an unfortunate course of events, turned out to be too late.

Players may also make non-binding threats to not swerve. This has been modeled explicitly in the Hawk–Dove game. Such threats work, but must be wastefully costly if the threat is one of two possible signals ("I will not swerve"/"I will swerve"), or they will be costless if there are three or more signals (in which case the signals will function as a game of "Rock, Paper, Scissors").^{ [10] }

All anti-coordination games have three Nash equilibria. Two of these are pure contingent-strategy profiles, in which each player plays one strategy of the pair and the other player chooses the opposite strategy. The third is a mixed equilibrium, in which each player probabilistically chooses between the two pure strategies. Either the pure or the mixed Nash equilibria will be evolutionarily stable strategies, depending on whether an uncorrelated asymmetry exists.

The best response mapping for all 2x2 anti-coordination games is shown in Figure 5. The variables *x* and *y* in Figure 5 are the probabilities of playing the escalated strategy ("Hawk" or "Don't swerve") for players X and Y respectively. The line in the left-hand graph shows the optimum probability of playing the escalated strategy for player Y as a function of *x*. The line in the second graph shows the optimum probability of playing the escalated strategy for player X as a function of *y* (the axes have not been rotated, so the dependent variable is plotted on the abscissa and the independent variable on the ordinate). The Nash equilibria are where the players' correspondences agree, i.e., cross; these are shown as points in the right-hand graph. The best response mappings cross at three points. The first two Nash equilibria are in the top left and bottom right corners, where one player chooses one strategy and the other player chooses the opposite. The third Nash equilibrium is a mixed strategy lying along the diagonal from the bottom left to the top right corner. If the players do not know which of them is which, then the mixed Nash equilibrium is an evolutionarily stable strategy (ESS), as play is confined to the bottom-left-to-top-right diagonal line. Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes.
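The crossing points can be located numerically. A minimal sketch of one player's best-response correspondence using the Fig. 2 payoffs (the numeric tolerance is an implementation detail, not part of the model):

```python
# Best-response mapping for the Fig. 2 Chicken payoffs: the optimal
# probability of playing Straight, given that the opponent plays
# Straight with probability x.
def best_response(x, win=1.0, lose=-1.0, tie=0.0, crash=-1000.0):
    straight = x * crash + (1 - x) * win  # expected payoff of Straight
    swerve = x * lose + (1 - x) * tie     # expected payoff of Swerve
    if abs(straight - swerve) < 1e-9:
        return None                       # indifferent: any mixture is optimal
    return 1.0 if straight > swerve else 0.0

# With these payoffs the correspondences cross at x = 1/1000,
# the (very timid) mixed Nash equilibrium.
print(best_response(0.0005), best_response(0.5), best_response(1 / 1000))
```

Against a nearly-always-swerving opponent the best response is Straight, against any substantially escalated opponent it is Swerve, and at exactly x = 1/1000 every mixture is a best response, which is where the interior equilibrium sits.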

The ESS for the Hawk–Dove game is a mixed strategy. Formal game theory is indifferent to whether this mixture is due to all players in a population choosing randomly between the two pure strategies (a range of possible instinctive reactions for a single situation) or whether the population is a polymorphic mixture of players each dedicated to a particular pure strategy (a single reaction differing from individual to individual). Biologically, these two options are strikingly different ideas. The Hawk–Dove game has been used as a basis for evolutionary simulations to explore which of these two modes of mixing ought to predominate in reality.^{ [13] }

In both "Chicken" and "Hawk–Dove", the only symmetric Nash equilibrium is the mixed strategy Nash equilibrium, where both individuals randomly chose between playing Hawk/Straight or Dove/Swerve. This mixed strategy equilibrium is often sub-optimal—both players would do better if they could coordinate their actions in some way. This observation has been made independently in two different contexts, with almost identical results.^{ [14] }

|  | Dare | Chicken |
| --- | --- | --- |
| **Dare** | 0, 0 | 7, 2 |
| **Chicken** | 2, 7 | 6, 6 |

Fig. 6: A version of Chicken

Consider the version of "Chicken" pictured in Figure 6. Like all forms of the game, there are three Nash equilibria. The two pure strategy Nash equilibria are (*D*, *C*) and (*C*, *D*). There is also a mixed strategy equilibrium where each player Dares with probability 1/3. It results in an expected payoff of 14/3 ≈ 4.67 for each player.
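The mixed-equilibrium payoff can be verified directly: at the equilibrium each player must be indifferent between Dare and Chicken. A short check using exact fractions:

```python
# Expected payoffs at the mixed Nash equilibrium of Fig. 6, where each
# player Dares with probability 1/3.
from fractions import Fraction

p = Fraction(1, 3)  # probability that the opponent Dares
# Row player's payoffs from Fig. 6: (Dare, Dare)=0, (Dare, Chicken)=7,
# (Chicken, Dare)=2, (Chicken, Chicken)=6.
dare = p * 0 + (1 - p) * 7     # expected payoff of Dare
chicken = p * 2 + (1 - p) * 6  # expected payoff of Chicken

print(dare, chicken)  # 14/3 14/3: indifference, as an equilibrium requires
```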

Now consider a third party (or some natural event) that draws one of three cards labeled (*C*, *C*), (*D*, *C*), and (*C*, *D*), with each card equally likely. After drawing a card, the third party informs the players of the strategy assigned to them on the card (but **not** the strategy assigned to their opponent). Suppose a player is assigned *D*: they would not want to deviate, supposing the other player plays their assigned strategy, since they will get 7 (the highest payoff possible). Suppose instead a player is assigned *C*. Then the other player has been assigned *C* with probability 1/2 and *D* with probability 1/2 (due to the nature of the exogenous draw). The expected utility of Daring is 0(1/2) + 7(1/2) = 3.5 and the expected utility of chickening out is 2(1/2) + 6(1/2) = 4, so the player would prefer to chicken out.

Since neither player has an incentive to deviate from the drawn assignments, this probability distribution over the strategies is known as a correlated equilibrium of the game. Notably, the expected payoff for this equilibrium is 7(1/3) + 2(1/3) + 6(1/3) = 5 which is higher than the expected payoff of the mixed strategy Nash equilibrium.
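A short sketch verifying both the incentive constraint for a player told to play *C* and the overall expected payoff of this correlated equilibrium:

```python
# Correlated equilibrium of Fig. 6 built from three equally likely
# cards: (C, C), (D, C), (C, D).
from fractions import Fraction

third = Fraction(1, 3)
# Row player's payoff at each recommended profile:
# (C, C) -> 6, (D, C) -> 7, (C, D) -> 2.
expected = third * 6 + third * 7 + third * 2
print(expected)  # 5, beating the mixed Nash payoff of 14/3

# Incentive check for a player told to play C: the opponent is then
# C or D with probability 1/2 each.
half = Fraction(1, 2)
dare = half * 0 + half * 7     # deviate to D: expected 7/2
chicken = half * 2 + half * 6  # obey and play C: expected 4
print(dare, chicken)           # obeying is strictly better
```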

Although there are three Nash equilibria in the Hawk–Dove game, the one which emerges as the evolutionarily stable strategy (ESS) depends upon the existence of any uncorrelated asymmetry in the game (in the sense of anti-coordination games). In order for row players to choose one strategy and column players the other, the players must be able to distinguish which role (column or row player) they have. If no such uncorrelated asymmetry exists then both players must choose the same strategy, and the ESS will be the mixing Nash equilibrium. If there is an uncorrelated asymmetry, then the mixing Nash is not an ESS, but the two pure, role contingent, Nash equilibria are.

The standard biological interpretation of this uncorrelated asymmetry is that one player is the territory owner, while the other is an intruder on the territory. In most cases, the territory owner plays Hawk while the intruder plays Dove. In this sense, the evolution of strategies in Hawk–Dove can be seen as the evolution of a sort of prototypical version of ownership. Game-theoretically, however, there is nothing special about this solution. The opposite solution—where the owner plays dove and the intruder plays Hawk—is equally stable. In fact, this solution is present in a certain species of spider; when an invader appears the occupying spider leaves. In order to explain the prevalence of property rights over "anti-property rights" one must discover a way to break this additional symmetry.^{ [14] }

Replicator dynamics is a simple model of strategy change commonly used in evolutionary game theory. In this model, a strategy which does better than average increases in frequency at the expense of strategies that do worse than average. There are two versions of the replicator dynamics. In one version, there is a single population which plays against itself. In the other, there are two populations, and each population plays only against the other population (and not against itself).

In the one-population model, the only stable state is the mixed strategy Nash equilibrium. Every initial population proportion (except all *Hawk* and all *Dove*) converges to the mixed strategy Nash equilibrium, where part of the population plays *Hawk* and part of the population plays *Dove*. (This occurs because the only ESS is the mixed strategy equilibrium.) In the two-population model, this mixed point becomes unstable. In fact, the only stable states in the two-population model correspond to the pure strategy equilibria, where one population is composed of all *Hawk*s and the other of all *Dove*s. In this model one population becomes the aggressive population while the other becomes passive. This model is illustrated by the vector field pictured in Figure 7a. The one-dimensional vector field of the single-population model (Figure 7b) corresponds to the bottom left to top right diagonal of the two-population model.
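The single-population convergence can be illustrated with a minimal simulation, using the Fig. 3 payoffs with illustrative values V = 2 and C = 10 (assumed here) and a simple Euler discretization (the step size and iteration count are arbitrary choices for the sketch):

```python
# Single-population replicator dynamics for Hawk-Dove (Fig. 3 payoffs,
# illustrative values V = 2, C = 10). The Hawk share should converge to
# the mixed equilibrium V/C = 0.2 from any interior starting point.
V, C = 2.0, 10.0

def step(p, dt=0.01):
    """One Euler step of dp/dt = p * (fitness(Hawk) - mean fitness)."""
    f_hawk = p * (V - C) / 2 + (1 - p) * V
    f_dove = (1 - p) * V / 2
    f_mean = p * f_hawk + (1 - p) * f_dove
    return p + dt * p * (f_hawk - f_mean)

p = 0.9  # start with 90% Hawks
for _ in range(20000):
    p = step(p)
print(round(p, 4))  # 0.2
```

Starting instead from a mostly-Dove population gives the same limit, matching the claim that all interior starting proportions converge to the mixed equilibrium.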

The single-population model presents a situation where no uncorrelated asymmetries exist, so the best players can do is randomize their strategies. The two-population model provides such an asymmetry, and the members of each population use it to correlate their strategies: one population gains at the expense of the other. Hawk–Dove and Chicken thus illustrate an interesting case where the qualitative results for the two versions of the replicator dynamics differ wildly.^{ [15] }

"Chicken" and "Brinkmanship" are often used synonymously in the context of conflict, but in the strict game-theoretic sense, "brinkmanship" refers to a strategic move designed to avert the possibility of the opponent switching to aggressive behavior. The move involves a credible threat of the risk of irrational behavior in the face of aggression. If player 1 unilaterally moves to A, a rational player 2 cannot retaliate since (A, C) is preferable to (A, A). Only if player 1 has grounds to believe that there is sufficient risk that player 2 responds irrationally (usually by giving up control over the response, so that there is sufficient risk that player 2 responds with A) player 1 will retract and agree on the compromise.

Like "Chicken", the "War of attrition" game models escalation of conflict, but they differ in the form in which the conflict can escalate. Chicken models a situation in which the catastrophic outcome differs in kind from the agreeable outcome, e.g., if the conflict is over life and death. War of attrition models a situation in which the outcomes differ only in degrees, such as a boxing match in which the contestants have to decide whether the ultimate prize of victory is worth the ongoing cost of deteriorating health and stamina.

The Hawk–Dove game is the most commonly used game-theoretic model of aggressive interactions in biology.^{ [16] } The war of attrition is another very influential model of aggression in biology. The two models investigate slightly different questions: the Hawk–Dove game is a model of escalation, addressing when an individual ought to escalate to dangerously costly physical combat, while the war of attrition asks how contests may be resolved when there is no possibility of physical combat. The war of attrition is an auction in which both players pay the lower bid (an all-pay second-price auction). The bids are the durations for which each player is willing to persist in making a costly threat display; both players accrue costs while displaying at each other, and the contest ends when the individual making the lower bid quits.
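The "both pay the lower bid" payoff structure can be made concrete in a few lines. A minimal sketch, treating each bid as a display duration (the tie-splitting convention here is one assumption among several possible):

```python
# War of attrition as an all-pay auction in which both players pay the
# lower bid: each bid is a display duration, and both contestants incur
# the cost of displaying until the lower bidder quits.
def war_of_attrition(bid_a, bid_b, value):
    """Return (payoff_a, payoff_b) given persistence times and prize value."""
    cost = min(bid_a, bid_b)           # both pay the loser's bid
    if bid_a > bid_b:
        return value - cost, -cost     # A outlasts B and takes the prize
    if bid_b > bid_a:
        return -cost, value - cost
    return value / 2 - cost, value / 2 - cost  # tie: split the prize

print(war_of_attrition(5, 3, 10))  # (7, -3)
```

Note that the winner's own bid never matters, only the loser's: this is the second-price feature that drives the game's distinctive mixed-strategy solutions.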

Chicken is a symmetrical 2x2 game with conflicting interests: the preferred outcome is to play *Straight* while the opponent plays *Swerve*. Similarly, the prisoner's dilemma is a symmetrical 2x2 game with conflicting interests: the preferred outcome is to *Defect* while the opponent plays *Cooperate*. The prisoner's dilemma is about the impossibility of cooperation, while Chicken is about the inevitability of conflict; iterated play can solve the prisoner's dilemma but not Chicken (https://journals.sagepub.com/doi/abs/10.1177/1043463190002004004).

|  | Defect | Cooperate |
| --- | --- | --- |
| **Defect** | N | T |
| **Cooperate** | P | C |

Prisoner's dilemma. Payoff ranks (to Row player) are: Temptation > Coordination > Neutral > Punishment.

Both games have a desirable cooperative outcome in which both players choose the less escalated strategy, *Swerve–Swerve* in the Chicken game and *Cooperate–Cooperate* in the prisoner's dilemma, such that players receive the *Coordination* payoff C (see tables). The temptation away from this sensible outcome is towards a *Straight* move in Chicken and a *Defect* move in the prisoner's dilemma (generating the **T**emptation payoff, should the other player use the less escalated move). The essential difference between the two games is that in the prisoner's dilemma the *Cooperate* strategy is dominated, whereas in Chicken the equivalent move (*Swerve*) is not dominated, since the outcome payoffs when the opponent plays the more escalated move (*Straight* in place of *Defect*) are reversed.

|  | Straight | Swerve |
| --- | --- | --- |
| **Straight** | P | T |
| **Swerve** | N | C |

Chicken/Hawk–Dove. Payoff ranks (to Row player) are: Temptation > Coordination > Neutral > Punishment.
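The dominance difference between the two games can be checked mechanically. A sketch using illustrative numbers 4 > 3 > 2 > 1 for the ranks T > C > N > P (the numbers are an assumption; only the ordering matters):

```python
# Dominance check with ordinal payoffs T > C > N > P, using the row
# player's payoffs from the two tables above.
T, C, N, P = 4, 3, 2, 1

# payoffs[my_move][opponent_move]; move 0 = escalated, 1 = conciliatory
pd_payoffs = [[N, T],        # Defect vs (Defect, Cooperate)
              [P, C]]        # Cooperate vs (Defect, Cooperate)
chicken_payoffs = [[P, T],   # Straight vs (Straight, Swerve)
                   [N, C]]   # Swerve vs (Straight, Swerve)

def dominated(payoffs, move, by):
    """True if `by` yields strictly more than `move` against every reply."""
    return all(payoffs[by][opp] > payoffs[move][opp] for opp in (0, 1))

print(dominated(pd_payoffs, 1, 0))       # True: Cooperate is dominated by Defect
print(dominated(chicken_payoffs, 1, 0))  # False: Swerve is not dominated
```

In Chicken, Swerve escapes dominance precisely because N > P: when the opponent plays Straight, yielding beats crashing.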

The term "schedule chicken"^{ [17] } is used in project management and software development circles. The condition occurs when two or more areas of a product team claim they can deliver features at an unrealistically early date because each assumes the other teams are stretching the predictions even more than they are. This pretense continually moves forward past one project checkpoint to the next until feature integration begins or just before the functionality is actually due.

The practice of "schedule chicken"^{ [18] } often results in contagious schedule slips due to the inter-team dependencies, and is difficult to identify and resolve, as it is in the best interest of each team not to be the first bearer of bad news. The psychological drivers underlying the "schedule chicken" behavior in many ways mimic the hawk–dove or snowdrift model of conflict.^{ [19] }

- Brinkmanship
- Coordination game
- Fireship, a naval tactic of intentional suicidal ramming into an enemy ship
- Matching pennies
- Volunteer's dilemma
- War of attrition
- Prisoner's dilemma

1. Sugden, R. (2005). *The Economics of Rights, Cooperation and Welfare*, 2nd edition, p. 132. Palgrave Macmillan.
2. Osborne and Rubinstein (1994), p. 30.
3. Russell (1959), p. 30.
4. Dixit and Nalebuff (1991), pp. 205–222.
5. Maynard Smith, J.; Parker, G. A. (1976). "The logic of asymmetric contests". *Animal Behaviour* **24**: 159–175. doi:10.1016/S0003-3472(76)80110-8.
6. Rapoport and Chammah (1966), pp. 10–14 and 23–28.
7. Maynard Smith, J.; Price, G. R. (1973). "The Logic of Animal Conflict". *Nature* **246** (5427): 15–18. Bibcode:1973Natur.246...15S. doi:10.1038/246015a0.
8. Maynard Smith, John (1982). *Evolution and the Theory of Games*. Cambridge: Cambridge University Press. ISBN 978-0-521-28884-2.
9. Hammerstein (1981).
10. Kim (1995).
11. Cressman (1995).
12. Kahn (1965), cited in Rapoport and Chammah (1966).
13. Bergstrom and Godfrey-Smith (1998).
14. Skyrms (1996), pp. 76–79.
15. Weibull (1995), pp. 183–184.
16. Maynard Smith, J. (1998). *Evolutionary Genetics*. Oxford University Press. ISBN 978-0-19-850231-9.
17. Rising, L. (1998). *The Patterns Handbook: Techniques, Strategies, and Applications*, p. 169. Cambridge University Press.
18. Beck, K.; Fowler, M. (2000). *Planning Extreme Programming*, p. 33. Safari Tech Books.
19. Martin, T. "Macronomics: February 2012". Macronomy.blogspot.in. Retrieved 2012-08-13.

An **evolutionarily stable strategy** (**ESS**) is a strategy which, if adopted by a population in a given environment, cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. An ESS is an equilibrium refinement of the Nash equilibrium: it is a Nash equilibrium that is "evolutionarily" stable, meaning that once it is fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from invading successfully. The theory is not intended to deal with the possibility of gross external changes to the environment that bring new selective forces to bear.

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

**Evolutionary game theory** (**EGT**) is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies.

In game theory, **coordination games** are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

In game theory, the **centipede game**, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

In game theory, a player's **strategy** is any of the options which he or she chooses in a setting where the outcome depends *not only* on their own actions *but* on the actions of others. A player's strategy will determine the action which the player will take at any stage of the game.

In game theory, **battle of the sexes** (**BoS**) is a two-player coordination game. Some authors refer to the game as **Bach or Stravinsky** and designate the players simply as Player 1 and Player 2, rather than assigning sex.

In game theory, **strategic dominance** occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play.

In game theory, a **symmetric game** is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If one can change the identities of the players without changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties. **Ordinally symmetric games** are games that are symmetric with respect to the ordinal structure of the payoffs. A game is **quantitatively symmetric** if and only if it is symmetric with respect to the exact payoffs. A **partnership game** is a symmetric game where both players receive identical payoffs for any strategy set. That is, playing strategy *a* against strategy *b* yields the same payoff as playing strategy *b* against strategy *a*.

In game theory an **uncorrelated asymmetry** is an arbitrary asymmetry in a game which is otherwise symmetrical. The name 'uncorrelated asymmetry' is due to John Maynard Smith who called payoff relevant asymmetries in games with similar roles for each player 'correlated asymmetries'.

In game theory, **folk theorems** are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibrium rather than Nash equilibrium.

In game theory, a **repeated game** is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. *Single stage game* or *single shot game* are names for non-repeated games.

In game theory, the **war of attrition** is a dynamic timing game in which players choose a time to stop, and fundamentally trade off the strategic gains from outlasting other players against the real costs expended with the passage of time. Its precise opposite is the pre-emption game, in which players trade off the strategic gains from acting before others against the costs of moving early.

In game theory, a **correlated equilibrium** is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy, the distribution is called a correlated equilibrium.

In game theory, the **purification theorem** was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed-strategy Nash equilibria: each player is wholly indifferent among the actions he puts non-zero weight on, yet he mixes them precisely so as to make every other player indifferent as well.
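
The indifference at issue is easy to exhibit in hawk–dove. With hypothetical values V = 2, C = 4 (resource worth less than the cost of a fight), the mixed equilibrium has each player escalating with probability V/C, and at exactly that mixture the opponent earns the same payoff from either action:

```python
# Mixed equilibrium of Hawk-Dove; V and C are hypothetical numbers with V < C.
V, C = 2.0, 4.0
p = V / C  # equilibrium probability that the opponent plays Hawk

# Expected payoff of each pure action against that mixture:
hawk_payoff = p * (V - C) / 2 + (1 - p) * V    # vs Hawk, then vs Dove
dove_payoff = p * 0 + (1 - p) * V / 2

print(hawk_payoff, dove_payoff)  # both 0.5: exact indifference
```

The purification theorem interprets this knife-edge mixing as the limit of pure strategies in a slightly perturbed game where each player has small private payoff shocks.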

In game theory, a **subgame perfect equilibrium** is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.
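
Subgame perfection is usually found by backward induction. The toy entry game below (hypothetical payoffs) shows why the refinement matters: the incumbent's threat "Fight if entered" supports a Nash equilibrium (Out, Fight) of the whole game, but it is not subgame perfect, because Fight is not a best response in the subgame reached after entry:

```python
# Backward induction on a toy entry game; payoffs (entrant, incumbent)
# are hypothetical illustration numbers.
OUT = (0, 2)                                        # entrant stays out
after_entry = {"Fight": (-1, -1), "Accommodate": (1, 1)}

# Solve the post-entry subgame: the incumbent picks its best payoff there.
incumbent_choice = max(after_entry, key=lambda a: after_entry[a][1])
entry_value = after_entry[incumbent_choice]

# The entrant anticipates that choice and compares entering with staying out.
entrant_choice = "Enter" if entry_value[0] > OUT[0] else "Out"

print(incumbent_choice, entrant_choice)  # Accommodate Enter
```

Solving each subgame first and folding the results backward yields (Enter, Accommodate), the game's unique subgame perfect equilibrium.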

**Risk dominance** and **payoff dominance** are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered **payoff dominant** if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium, since it offers each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered **risk dominant** if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they are to choose the strategy corresponding to the risk dominant equilibrium.
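
For symmetric 2×2 games, Harsanyi and Selten's criterion compares the Nash products of unilateral deviation losses at each pure equilibrium. A stag-hunt sketch with hypothetical payoffs shows how the two refinements can disagree:

```python
# Stag Hunt with hypothetical payoffs; u[(row, col)] is the row player's
# payoff, and the game is symmetric. S = Stag, H = Hare.
u = {("S", "S"): 5, ("S", "H"): 0, ("H", "S"): 4, ("H", "H"): 2}

# Nash product of deviation losses at each pure symmetric equilibrium:
# the loss from a unilateral deviation, multiplied across the two players.
loss_SS = (u[("S", "S")] - u[("H", "S")]) ** 2   # (5 - 4)^2 = 1
loss_HH = (u[("H", "H")] - u[("S", "H")]) ** 2   # (2 - 0)^2 = 4

# (S,S) is payoff dominant (5 > 2), but (H,H) has the larger Nash product,
# hence the larger basin of attraction: it is risk dominant.
print("risk dominant:", "HH" if loss_HH > loss_SS else "SS")
```

Here the socially better equilibrium (S,S) is payoff dominant, yet (H,H) is risk dominant: a player unsure of the other's action loses less by playing the safe strategy.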

A **Markov perfect equilibrium** is an equilibrium concept in game theory. It is the refinement of the concept of subgame perfect equilibrium to extensive form games for which a payoff-relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin. It has since been used, among other things, in the analysis of industrial organization, macroeconomics, and political economy.

- Bergstrom, C. T. and Godfrey-Smith, P. (1998). "On the evolution of behavioral heterogeneity in individuals and populations". *Biology and Philosophy*. **13** (2): 205–231. doi:10.1023/A:1006588918909.
- Cressman, R. (1995). "Evolutionary Stability for Two-stage Hawk-Dove Games". *Rocky Mountain Journal of Mathematics*. **25**: 145–155. doi:10.1216/rmjm/1181072273.
- Deutsch, M. (1974). *The Resolution of Conflict: Constructive and Destructive Processes*. Yale University Press, New Haven. ISBN 978-0-300-01683-3.
- Dixit, A.K. and Nalebuff, B.J. (1991). *Thinking Strategically*. W.W. Norton. ISBN 0-393-31035-3.
- Fink, E.C.; Gates, S.; Humes, B.D. (1998). *Game Theory Topics: Incomplete Information, Repeated Games, and N-Player Games*. Sage. ISBN 0-7619-1016-6.
- Hammerstein, P. (1981). "The Role of Asymmetries in Animal Contests". *Animal Behaviour*. **29**: 193–205. doi:10.1016/S0003-3472(81)80166-2.
- Kahn, H. (1965). *On escalation: metaphors and scenarios*. Praeger Publ. Co., New York. ISBN 978-0-313-25163-4.
- Kim, Y-G. (1995). "Status signaling games in animal contests". *Journal of Theoretical Biology*. **176** (2): 221–231. doi:10.1006/jtbi.1995.0193. PMID 7475112.
- Osborne, M.J. and Rubinstein, A. (1994). *A course in game theory*. MIT Press. ISBN 0-262-65040-1.
- Maynard Smith, J. (1982). *Evolution and the Theory of Games*. Cambridge University Press. ISBN 978-0-521-28884-2.
- Maynard Smith, J. and Parker, G.A. (1976). "The logic of asymmetric contests". *Animal Behaviour*. **24**: 159–175. doi:10.1016/S0003-3472(76)80110-8.
- Maynard Smith, J. and Price, G.R. (1973). "The logic of animal conflict". *Nature*. **246** (5427): 15–18. Bibcode:1973Natur.246...15S. doi:10.1038/246015a0.
- Moore, C.W. (1986). *The Mediation Process: Practical Strategies for Resolving Conflict*. Jossey-Bass, San Francisco. ISBN 978-0-87589-673-1.
- Rapoport, A. and Chammah, A.M. (1966). "The Game of Chicken". *American Behavioral Scientist*. **10** (3): 10–28. doi:10.1177/000276426601000303.
- Russell, B.W. (1959). *Common Sense and Nuclear Warfare*. George Allen and Unwin, London. ISBN 0-04-172003-2.
- Skyrms, Brian (1996). *Evolution of the Social Contract*. New York: Cambridge University Press. ISBN 0-521-55583-3.
- Weibull, Jörgen W. (1995). *Evolutionary Game Theory*. Cambridge, MA: MIT Press. ISBN 0-262-23181-6.

- The game of Chicken as a metaphor for human conflict
- Game-theoretic analysis of Chicken
- Game of Chicken – Rebel Without a Cause by Elmer G. Wiens.
- Online model: Expected Dynamics of an Imitation Model in the Hawk-Dove Game
- Online model: Expected Dynamics of an Intra-Population Imitation Model in the Two-Population Hawk-Dove Game

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.