Grim trigger

In game theory, grim trigger (also called the grim strategy, just grim, or grudger) is a trigger strategy for a repeated game.

Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defect by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game.

In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman", after a 1971 paper by James Friedman which uses the concept. [1]

The infinitely repeated prisoners' dilemma

The infinitely repeated prisoners’ dilemma is a well-known setting for the grim trigger strategy. The stage game for the two prisoners, in normal form, is as follows:

Prisoner A \ Prisoner B      Stays Silent (Cooperate)    Betray (Defect)
Stays Silent (Cooperate)     1, 1                        -1, 2
Betray (Defect)              2, -1                       0, 0

(Payoffs are listed as Prisoner A's payoff, Prisoner B's payoff.)

In the prisoners' dilemma, each player has two choices in each stage:

  1. Cooperate
  2. Defect for an immediate gain

If a player defects, he will be punished for the remainder of the game. Both players are better off staying silent (cooperating) than betraying each other, so playing (C, C) is the cooperative profile, while playing (D, D), which is also the unique Nash equilibrium of the stage game, is the punishment profile.

Under the grim trigger strategy, a player cooperates in the first round and in all subsequent rounds as long as his opponent does not defect from the agreement. Once the player finds that the opponent has betrayed in a previous round, he defects forever.
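As a minimal sketch (not from the cited sources; the function name and the "C"/"D" encoding are illustrative), the decision rule can be written in Python as:

    # Minimal sketch of a grim trigger player; "C" = cooperate, "D" = defect (illustrative encoding).
    def grim_trigger(opponent_history):
        """Return this round's move, given the list of the opponent's past moves."""
        # Cooperate until the opponent has defected at least once, then defect forever.
        return "D" if "D" in opponent_history else "C"

    # Example: grim_trigger([]) returns "C"; grim_trigger(["C", "C", "D", "C"]) returns "D".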

In order to evaluate whether the grim trigger strategy is a subgame perfect equilibrium (SPE) of the game, consider the strategy S* for each player i (and symmetrically for player j):

  1. Play C in period 1, and play C in period t as long as both players have played C in every previous period;
  2. Play D in period t if either player has played D in any previous period.
Then, the strategy is an SPE if and only if the discount factor δ is at least 1/2. In other words, neither Player 1 nor Player 2 is incentivized to defect from the cooperation profile if the discount factor is at least one half. [3]

To prove that the strategy is an SPE, cooperation should be shown to be the best response to the other player's cooperation, and defection the best response to the other player's defection. [2]

Step 1: Suppose that D has never been played so far.

Then, cooperating forever yields 1/(1 − δ), while deviating to D yields 2 today and 0 in every later period. C is therefore better than D if 1/(1 − δ) ≥ 2, that is, if δ ≥ 1/2. This shows that if δ ≥ 1/2, playing C is a best response to C.
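Written out with the payoffs from the table above, the comparison behind Step 1 is the following standard discounted-sum calculation (δ is the common discount factor):

    \begin{aligned}
    V_{\text{cooperate}} &= 1 + \delta + \delta^{2} + \cdots = \frac{1}{1-\delta},\\
    V_{\text{deviate}}   &= 2 + 0\cdot\delta + 0\cdot\delta^{2} + \cdots = 2,\\
    V_{\text{cooperate}} \ge V_{\text{deviate}} &\iff \frac{1}{1-\delta} \ge 2 \iff \delta \ge \tfrac{1}{2}.
    \end{aligned}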

Step 2: Suppose that someone has played D previously; then Player j will play D no matter what.

Since D yields 0 in each period while C would yield −1 against an opponent who plays D, playing D is optimal.

The preceding argument shows that there is no incentive to deviate (no profitable deviation) from the cooperation profile if δ ≥ 1/2, and that this holds in every subgame. Therefore, the grim trigger strategy for the infinitely repeated prisoners’ dilemma game is a subgame perfect Nash equilibrium.
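A quick numerical sketch of this threshold, using the closed-form discounted sums above (illustrative code, not from the cited sources):

    # Sketch: compare the discounted value of cooperating forever with a one-shot deviation,
    # using the stage-game payoffs above ((C, C) pays 1 per period; a deviation pays 2 now, then 0 forever).
    def value_of_cooperating(delta):
        return 1.0 / (1.0 - delta)   # 1 + delta + delta**2 + ...

    def value_of_deviating(delta):
        return 2.0                   # 2 today, 0 in every later period

    for delta in (0.3, 0.5, 0.7):
        best = "cooperate" if value_of_cooperating(delta) >= value_of_deviating(delta) else "deviate"
        print(delta, best)           # prints "deviate" for 0.3 and "cooperate" for 0.5 and 0.7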

In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding signal errors makes it even worse. Its threat of permanent defection is, in theory, an effective way to sustain trust, but its unforgiving nature and its inability to communicate this threat in advance undermine it in practice. [4]

Grim trigger in international relations

Viewed through an international relations lens, a nation playing grim trigger cooperates only if it has never been exploited by its partner in the past. Because a nation will refuse to cooperate in all future periods once its partner defects even once, the indefinite withdrawal of cooperation becomes the threat, which makes grim trigger a limiting case. [5] While grim trigger is a limiting case, the folk theorem states that a subgame perfect equilibrium sustaining cooperation can be constructed if both nations are sufficiently patient. [6]

Grim trigger in the user-network interaction game

Game theory has recently been used in developing future communication systems, and the user in the user-network interaction game employing the grim trigger strategy is one such example. [7] If grim trigger is used in the user-network interaction game, the user stays in the network (cooperates) as long as the network maintains a certain quality, but punishes the network by stopping the interaction and leaving the network as soon as the user finds that the network has defected. [8] Antoniou et al. explain that “given such a strategy, the network has a stronger incentive to keep the promise given for a certain quality, since it faces the threat of losing its customer forever.” [7]
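As a purely illustrative sketch, assuming a hypothetical quality threshold and observation interface that the cited papers do not specify, such a user strategy might look like:

    # Illustrative sketch only; `observed_quality`, `quality_threshold`, and the returned
    # action strings are hypothetical names, not part of the model in Antoniou et al.
    def user_move(observed_quality, quality_threshold, network_has_defected_before):
        """Grim-trigger user: stay while the promised quality holds, leave forever after one violation."""
        defected_now = observed_quality < quality_threshold
        if network_has_defected_before or defected_now:
            return "leave_network", True    # punish by leaving, and remember the defection forever
        return "stay_in_network", False

    # The second return value is the updated "has the network ever defected" flag that the
    # user carries into the next period.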

Comparison with other strategies

Tit for tat and grim trigger are similar in nature in that both are trigger strategies: a player does not defect first, but retains the ability to punish the opponent for defecting. The difference, however, is that grim trigger imposes maximal punishment for a single defection, while tit for tat is more forgiving, offering one punishment for each defection. [9]
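To make the contrast concrete, here is a minimal tit for tat sketch in the same illustrative encoding used earlier:

    def tit_for_tat(opponent_history):
        # Cooperate in the first round, then simply mirror the opponent's previous move.
        return "C" if not opponent_history else opponent_history[-1]

    # Contrast with grim_trigger sketched earlier: against the history ["C", "D", "C"],
    # tit_for_tat returns "C" (the lone defection has already been punished once),
    # while grim_trigger keeps returning "D" in every remaining round.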

See also

References

  1. Friedman, James W. (1971). "A Non-cooperative Equilibrium for Supergames". Review of Economic Studies. 38 (1): 1–12. doi:10.2307/2296617.
  2. Acemoglu, Daron (November 2, 2009). "Repeated Games and Cooperation".
  3. Levin, Jonathan (May 2006). "Repeated Games I: Perfect Monitoring" (PDF).
  4. Axelrod, Robert (2000). "On Six Advances in Cooperation Theory" (PDF). Retrieved 2007-11-02. (page 13)
  5. McGillivray, Fiona; Smith, Alastair (2000). "Trust and Cooperation Through Agent-specific Punishments". International Organization. 54 (4): 809–824. doi:10.1162/002081800551370.
  6. Fudenberg, Drew; Maskin, Eric (May 1986). "The Folk Theorem in Repeated Games with Discounting or with Incomplete Information". Econometrica. 54 (3): 533–554. CiteSeerX 10.1.1.308.5775. doi:10.2307/1911307.
  7. Antoniou, Josephina; Papadopoulou, Vicky (November 2009). "Cooperative user–network interactions in next generation communication networks". Computer Networks. 54 (13): 2239–2255. doi:10.1016/j.comnet.2010.03.013.
  8. Antoniou, Josephina; Ioannou, Petros A. (2016). Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios. CRC Press. ISBN 9781138199385.
  9. Baurmann, Michael; Leist, Anton (May 2016). "On Six Advances in Cooperation Theory". Journal of Philosophy and Social Theory. 22 (1): 130–151.