Grim trigger

In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game.

Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defection by the opponent triggers defection forever, grim trigger is the most unforgiving of strategies in an iterated game.
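As an illustrative sketch (in Python; not drawn from the cited sources), the rule reduces to a single condition: cooperate until the opponent's history contains a defection.

```python
# Illustrative sketch of the grim trigger rule; names are our own.
def grim_trigger(opponent_history):
    """Cooperate ('C') until the opponent has ever defected ('D')."""
    return 'D' if 'D' in opponent_history else 'C'

# Example: the opponent defects once in round 3; grim trigger
# cooperates up to that point and defects in every round after.
opponent_moves = ['C', 'C', 'D', 'C', 'C']
history = []
for round_number, move in enumerate(opponent_moves, start=1):
    my_move = grim_trigger(history)
    history.append(move)
    print(round_number, my_move)   # rounds 1-3: C; rounds 4-5: D
```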

In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman", [1] after a 1971 paper by James W. Friedman which uses the concept. [2] [3]

The infinitely repeated prisoners' dilemma

The infinitely repeated prisoners' dilemma is a well-known example of the grim trigger strategy. The normal form of the stage game for the two prisoners is as follows:

                              Prisoner B
                              Stays Silent (Cooperate)    Betray (Defect)
Prisoner A
Stays Silent (Cooperate)      1, 1                        -1, 2
Betray (Defect)               2, -1                        0, 0

Each cell lists Prisoner A's payoff first and Prisoner B's payoff second.

In the prisoners' dilemma, each player has two choices in each stage:

  1. Cooperate
  2. Defect for an immediate gain

If a player defects, he will be punished for the remainder of the game. In fact, both players are better off if both stay silent (cooperate) than if both betray each other, so playing (C, C) is the cooperative profile, while playing (D, D), also the unique Nash equilibrium in this game, is the punishment profile.
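These stage-game claims can be checked mechanically. The following illustrative Python sketch (the payoff dictionary mirrors the table above) enumerates all four profiles and confirms that (D, D) is the unique Nash equilibrium.

```python
# Illustrative check that (D, D) is the stage game's unique Nash equilibrium.
from itertools import product

payoffs = {  # (A's move, B's move) -> (A's payoff, B's payoff)
    ('C', 'C'): (1, 1),   ('C', 'D'): (-1, 2),
    ('D', 'C'): (2, -1),  ('D', 'D'): (0, 0),
}

def is_nash(a, b):
    # Neither player can gain by unilaterally switching moves.
    best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in 'CD')
    best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in 'CD')
    return best_a and best_b

print([profile for profile in product('CD', repeat=2) if is_nash(*profile)])
# -> [('D', 'D')]
```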

In the grim trigger strategy, a player cooperates in the first round and in all subsequent rounds as long as his opponent does not defect from the agreement. Once the player finds that the opponent has betrayed in an earlier round, he defects forever.

In order to evaluate the subgame perfect equilibrium (SPE) for the grim trigger strategy in this game, the strategy S* for players i and j is as follows:

  1. Play C in every period unless someone has ever played D in the past
  2. Play D forever if someone has played D in the past

Then, the strategy is an SPE only if the discount factor satisfies δ ≥ 1/2. In other words, neither Player 1 nor Player 2 is incentivized to defect from the cooperation profile if the discount factor is at least one half. [5]
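The threshold can be verified with a standard one-shot deviation computation using the payoffs from the table above:

```latex
\begin{aligned}
V_{\text{cooperate}} &= 1 + \delta + \delta^2 + \cdots = \frac{1}{1-\delta}, \\
V_{\text{deviate}}   &= 2 + 0\cdot\delta + 0\cdot\delta^2 + \cdots = 2, \\
V_{\text{cooperate}} \ge V_{\text{deviate}}
  &\iff \frac{1}{1-\delta} \ge 2 \iff \delta \ge \tfrac{1}{2}.
\end{aligned}
```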

To prove that the strategy is an SPE, cooperation must be the best response to the other player's cooperation, and defection must be the best response to the other player's defection. [4]

Step 1: Suppose that D has never been played so far.

Then, C is better than D if 1 + δ + δ² + ⋯ = 1/(1 − δ) ≥ 2, that is, if δ ≥ 1/2: cooperating forever yields 1 in every period, while deviating yields 2 once and the punishment payoff 0 thereafter.

Step 2: Suppose that someone has played D previously. Then Player j will play D no matter what.

Since Player j will play D forever, and D yields 0 per period against D while C yields −1, playing D is optimal.

The preceding argument shows that there is no incentive to deviate (no profitable deviation) from the cooperation profile if δ ≥ 1/2, and this holds in every subgame. Therefore, the grim trigger strategy is a subgame perfect Nash equilibrium of the infinitely repeated prisoners' dilemma.
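A quick numeric check of this threshold (an illustrative sketch; the values follow the payoffs above):

```python
# Discounted value of permanent cooperation vs. a one-shot deviation.
for delta in (0.3, 0.5, 0.7):
    cooperate = 1 / (1 - delta)   # 1 + delta + delta^2 + ...
    deviate = 2                   # 2 now, then 0 forever under punishment
    print(delta, cooperate >= deviate)   # False, True, True
```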

In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding signal errors makes its results still worse. Its threat of permanent defection gives it a theoretically effective way to sustain trust, but its unforgiving nature, combined with its inability to communicate this threat in advance, undermines it in practice. [6]

Grim trigger in international relations

From the grim trigger perspective in international relations, a nation cooperates only if its partner has never exploited it in the past. Because the nation refuses to cooperate in every future period once its partner defects even once, the indefinite withdrawal of cooperation serves as the threat, making grim trigger a limiting case among punishment strategies. [7]

Grim trigger in user-network interactions

Game theory has recently been used in developing future communications systems, and the user in the user-network interaction game employing the grim trigger strategy is one such example. [8] If the user adopts grim trigger in the user-network interaction game, the user stays in the network (cooperates) as long as the network maintains a certain quality, but punishes the network by stopping the interaction and leaving the network as soon as the user finds that the network has defected. [9] Antoniou et al. explain that “given such a strategy, the network has a stronger incentive to keep the promise given for a certain quality, since it faces the threat of losing its customer forever.” [8]
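A toy model makes this concrete. The sketch below is an illustrative construction, not the formal game of Antoniou et al.; the parameter promised_quality is hypothetical.

```python
# Illustrative user-network game: the network delivers a quality level each
# period, and a grim trigger user leaves for good the first time quality
# falls below the promised level (promised_quality is a hypothetical value).
def user_stays(quality_history, promised_quality=0.8):
    return all(q >= promised_quality for q in quality_history)

qualities = [0.9, 0.85, 0.6, 0.9]   # network degrades service in period 3
for t in range(1, len(qualities) + 1):
    print(t, "stays" if user_stays(qualities[:t]) else "has left")
# periods 1-2: stays; from period 3 on: has left
```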

Comparison with other strategies

Tit for tat and grim trigger are similar in nature in that both are trigger strategies in which a player refuses to defect first while retaining the ability to punish the opponent for defecting. The difference, however, is that grim trigger exacts maximal punishment for a single defection, whereas tit for tat is more forgiving, offering one punishment for each defection. [10]
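The contrast is easy to see in simulation. In the following illustrative sketch, the opponent defects once, perhaps by accident, in round 2: grim trigger punishes forever, while tit for tat punishes once and then returns to cooperation.

```python
# Illustrative comparison after a single, possibly accidental, defection.
def grim(opp_history):
    return 'D' if 'D' in opp_history else 'C'

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else 'C'

opponent = ['C', 'D', 'C', 'C', 'C']   # defects once, then cooperates
for strategy in (grim, tit_for_tat):
    moves = [strategy(opponent[:t]) for t in range(len(opponent))]
    print(strategy.__name__, moves)
# grim:        ['C', 'C', 'D', 'D', 'D']  -- punishes forever
# tit_for_tat: ['C', 'C', 'D', 'C', 'C']  -- punishes once, then forgives
```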

See also

  - Prisoner's dilemma
  - Nash equilibrium
  - Tit for tat
  - Superrationality
  - Centipede game
  - Non-cooperative game
  - Stag hunt
  - Solution concept
  - Perfect Bayesian equilibrium
  - Trigger strategy
  - Folk theorem
  - Repeated game
  - Subgame perfect equilibrium
  - Equilibrium selection
  - Epsilon-equilibrium
  - Cooperative bargaining
  - Mertens stability
  - Program equilibrium
  - Subjective expected relative similarity
  - Berge equilibrium

References

  1. Axelrod, Robert (2006). The Evolution of Cooperation (Revised ed.). Basic Books. p. 36. ISBN 0-465-00564-0.
  2. Friedman, James W. (1971). "A Non-cooperative Equilibrium for Supergames". Review of Economic Studies. 38 (1): 1–12. doi:10.2307/2296617. JSTOR 2296617.
  3. The article on JSTOR
  4. Acemoglu, Daron (November 2, 2009). "Repeated Games and Cooperation".
  5. Levin, Jonathan (May 2006). "Repeated Games I: Perfect Monitoring" (PDF).
  6. Axelrod, Robert (2000). "On Six Advances in Cooperation Theory" (PDF). Archived from the original (PDF) on 2007-06-22. Retrieved 2007-11-02. (page 13)
  7. McGillivray, Fiona; Smith, Alastair (2000). "Trust and Cooperation Through Agent-specific Punishments". International Organization. 54 (4): 809–824. doi:10.1162/002081800551370. S2CID 22744046.
  8. Antoniou, Josephina; Papadopoulou, Vicky (November 2009). "Cooperative user–network interactions in next generation communication networks". Computer Networks. 54 (13): 2239–2255. doi:10.1016/j.comnet.2010.03.013.
  9. Antoniou, Josephina; Ioannou, Petros A. (2016). Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios. CRC Press. ISBN 9781138199385.
  10. Axelrod, Robert (2000). "On Six Advances in Cooperation Theory". Analyse & Kritik. 22 (1): 130–151.