Stag hunt

In game theory, the stag hunt, sometimes referred to as the assurance game or trust dilemma, describes a conflict between safety and social cooperation. The stag hunt originates in a story told by the philosopher Jean-Jacques Rousseau in his Discourse on Inequality. Rousseau describes a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag. This has been taken to be a useful analogy for social cooperation, such as international agreements on climate change.[1] The stag hunt differs from the Prisoner's Dilemma in that there are two pure-strategy Nash equilibria:[2] one in which both players cooperate and one in which both players defect. In the Prisoner's Dilemma, in contrast, even though mutual cooperation is Pareto efficient, the only pure-strategy Nash equilibrium is for both players to defect.

An example of the payoff matrix for the stag hunt is pictured in Figure 2.

          Stag      Hare
Stag      a, a      c, b
Hare      b, c      d, d
Fig. 1: Generic symmetric stag hunt

          Stag      Hare
Stag      4, 4      1, 3
Hare      3, 1      2, 2
Fig. 2: Stag hunt example

Formal definition

Formally, a stag hunt is a game with two pure-strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt, where a > b ≥ d > c. Games with a similar structure but without a risk dominant Nash equilibrium are often called assurance games: for instance, if a = 2, b = 1, c = 0, and d = 1, then (Hare, Hare) remains a Nash equilibrium but is no longer risk dominant. Nonetheless, many would call this game a stag hunt.
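As a rough illustration (not part of the original article), these conditions can be checked mechanically. The short Python sketch below, with payoffs labelled as in Figure 1 and helper names of our own choosing, tests the ordering a > b ≥ d > c, lists the pure-strategy Nash equilibria, and applies the Harsanyi–Selten product-of-deviation-losses test for risk dominance.

    # Illustrative sketch: classify a symmetric 2x2 game with payoffs labelled
    # as in Figure 1 (Stag/Stag = a, Stag vs Hare = c for the stag hunter and
    # b for the hare hunter, Hare/Hare = d).

    def classify(a, b, c, d):
        stag_hunt_ordering = a > b >= d > c
        pure_equilibria = []
        if a >= b:                      # no gain from switching to Hare against Stag
            pure_equilibria.append(("Stag", "Stag"))
        if d >= c:                      # no gain from switching to Stag against Hare
            pure_equilibria.append(("Hare", "Hare"))
        # (Hare, Hare) risk dominates (Stag, Stag) when its product of
        # deviation losses is larger: (d - c)^2 > (a - b)^2.
        hare_hare_risk_dominant = (d - c) > (a - b)
        return stag_hunt_ordering, pure_equilibria, hare_hare_risk_dominant

    print(classify(4, 3, 1, 2))   # Figure 2 payoffs
    print(classify(2, 1, 0, 1))   # assurance variant mentioned above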

[Figure: best response (reaction) correspondences for the stag hunt]

In addition to the pure-strategy Nash equilibria there is one mixed-strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on it: no payoffs (that satisfy the above conditions, including risk dominance of (Hare, Hare)) can generate a mixed-strategy equilibrium in which Stag is played with a probability lower than one half. The best response correspondences are pictured in the figure above.
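As an illustration (not from the source text), the mixed equilibrium can be written down directly: each player chooses Stag with the probability p that leaves the opponent indifferent between Stag and Hare, pa + (1 - p)c = pb + (1 - p)d, giving p = (d - c)/((a - b) + (d - c)). If (Hare, Hare) is risk dominant then d - c ≥ a - b, so p ≥ 1/2, which is the bound stated above. A minimal Python sketch of this calculation, with a function name and the second payoff set chosen purely for illustration:

    # Illustrative sketch: probability of playing Stag in the symmetric
    # mixed-strategy equilibrium of the generic stag hunt of Figure 1.

    def mixed_stag_probability(a, b, c, d):
        # Indifference: p*a + (1-p)*c == p*b + (1-p)*d
        return (d - c) / ((a - b) + (d - c))

    print(mixed_stag_probability(4, 3, 1, 2))   # Figure 2 payoffs: 0.5
    print(mixed_stag_probability(10, 9, 0, 8))  # hypothetical payoffs with (Hare, Hare)
                                                # strongly risk dominant: ~0.89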

The stag hunt and social cooperation

"Nature and Appearance of Deer" taken from "Livre du Roy Modus", created in the 14th century Nature and Appearance of Deer and how they can be hunted with Dogs Fac simile of a Miniature in the Livre du Roy Modus Manuscript of the Fourteenth Century National Library of Paris.png
"Nature and Appearance of Deer" taken from "Livre du Roy Modus", created in the 14th century

Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see Skyrms 2004).

There is a substantial relationship between the stag hunt and the prisoner's dilemma. In biology, many circumstances that have been described as prisoner's dilemmas might also be interpreted as stag hunts, depending on how fitness is calculated.

              Cooperate     Defect
Cooperate     2, 2          0, 3
Defect        3, 0          1, 1
Fig. 3: Prisoner's dilemma example

It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3, but that players who defect against cooperators can expect to be punished for their defection. If the expected punishment is 2, then imposing it turns this prisoner's dilemma into the assurance-style stag hunt with a = 2, b = 1, c = 0, and d = 1 described in the formal definition above.
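A minimal sketch of this transformation (an illustration, not from the source; the dictionary layout and helper name are ours): subtract the expected punishment from the payoff of any player who defects while the other cooperates, and the Figure 3 matrix becomes the assurance game above.

    # Illustrative sketch: an expected punishment of 2 for defecting against a
    # cooperator turns the Figure 3 prisoner's dilemma into the assurance game
    # with a = 2, b = 1, c = 0, d = 1.

    # Payoffs are (row player, column player); strategies are "C" and "D".
    prisoners_dilemma = {
        ("C", "C"): (2, 2), ("C", "D"): (0, 3),
        ("D", "C"): (3, 0), ("D", "D"): (1, 1),
    }

    def punish_defection(game, penalty=2):
        adjusted = {}
        for (row, col), (u_row, u_col) in game.items():
            if row == "D" and col == "C":
                u_row -= penalty        # row defected against a cooperator
            if col == "D" and row == "C":
                u_col -= penalty        # column defected against a cooperator
            adjusted[(row, col)] = (u_row, u_col)
        return adjusted

    print(punish_defection(prisoners_dilemma))
    # Mutual cooperation and mutual defection are both Nash equilibria
    # of the adjusted game.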

Examples of the stag hunt

The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag, and found it to follow a certain path. If all the hunters work together, they can kill the stag and all eat. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry.

The hunters hide and wait along a path. An hour goes by, with no sign of the stag. Two, three, four hours pass, with no trace. A day passes. The stag may not pass every day, but the hunters are reasonably certain that it will come. However, all of the hunters then see a hare moving along the path.

If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve. There is no certainty that the stag will arrive; the hare is present. The dilemma is that if one hunter waits, he risks one of his fellows killing the hare for himself, sacrificing everyone else. This makes the risk twofold: the risk that the stag does not appear, and the risk that another hunter takes the kill.

In addition to the example suggested by Rousseau, David Hume provides a series of examples that are stag hunts. One example addresses two individuals who must row a boat. If both choose to row they can successfully move the boat. However, if one doesn't, the other wastes his effort. Hume's second example involves two neighbors wishing to drain a meadow. If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained.

Several animal behaviors have been described as stag hunts. One is the coordination of slime molds. In times of stress, individual unicellular protists will aggregate to form one large body. Here, if they all act together, they can successfully reproduce, but success depends on the cooperation of many individual protozoa. Another example is the hunting practices of orcas (known as carousel feeding). Orcas cooperatively corral large schools of fish to the surface and stun them by hitting them with their tails. Since this requires that the fish have no way to escape, it requires the cooperation of many orcas.

Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea.

Related Research Articles

An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population in a given environment, is impenetrable, meaning that it cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. An ESS is an equilibrium refinement of the Nash equilibrium. It is a Nash equilibrium that is "evolutionarily" stable: once it is fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from invading successfully. The theory is not intended to deal with the possibility of gross external changes to the environment that bring new selective forces to bear.

The prisoner's dilemma is a standard example of a game analyzed in game theory that shows why two completely rational individuals might not cooperate, even if it appears that it is in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher while working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and named it "prisoner's dilemma", presenting it as follows:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they have enough to convict both on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The possible outcomes are:

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

The game of chicken, also known as the hawk–dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, the individuals try to avoid it out of pride, not wanting to look like a 'chicken'. So each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game is for the most part over.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match, Odd keeps both pennies, so receives one from Even.

In game theory, a player's strategy is any of the options which he or she chooses in a setting where the outcome depends not only on their own actions but on the actions of others. A player's strategy will determine the action which the player will take at any stage of the game.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

In game theory, strategic dominance occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play.

In game theory, a symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If one can change the identities of the players without changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties. Ordinally symmetric games are games that are symmetric with respect to the ordinal structure of the payoffs. A game is quantitatively symmetric if and only if it is symmetric with respect to the exact payoffs. A partnership game is a symmetric game where both players receive identical payoffs for any strategy set. That is, playing strategy a against strategy b yields the same payoff as playing strategy b against strategy a.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

Subjective expected relative similarity (SERS) is a normative and descriptive theory that predicts and explains cooperation levels in a family of games termed Similarity Sensitive Games (SSG), among them the well-known Prisoner's Dilemma game (PD). SERS was originally developed in order to (i) provide a new rational solution to the PD game and (ii) to predict human behavior in single-step PD games. It was further developed to account for: (i) repeated PD games, (ii) evolutionary perspectives and, as mentioned above, (iii) the SSG subgroup of 2x2 games. SERS predicts that individuals cooperate whenever their subjectively perceived similarity with their opponent exceeds a situational index derived from the game’s payoffs, termed the similarity threshold of the game. SERS proposes a solution to the rational paradox associated with the single step PD and provides accurate behavioral predictions. The theory was developed by Prof. Ilan Fischer at the University of Haifa.

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

References

Notes
  1. david1gibbon (February 11, 2013). "Uses of Game Theory in International Relations". Economic Theory of Networks at Temple University.
  2. Fang, C.; Kimbrough, S. O.; Pace, S.; Valluri, A.; Zheng, Z. (2002). "On Adaptive Emergence of Trust Behavior in the Game of Stag Hunt".
Bibliography
  Skyrms, Brian (2004). The Stag Hunt and the Evolution of Social Structure. Cambridge: Cambridge University Press.