Purification theorem

In game theory, the purification theorem was contributed by Nobel laureate John Harsanyi in 1973. [1] The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent among the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.

The mixed strategy equilibria are explained as being the limit of pure strategy equilibria for a disturbed game of incomplete information in which each player's payoffs are known to himself but not to his opponents. The idea is that the predicted mixed strategy of the original game emerges as an ever-improving approximation of a game that is not observed by the theorist who designed the original, idealized game.

The apparently mixed nature of the strategy is actually just the result of each player playing a pure strategy with threshold values that depend on the ex-ante distribution over the continuum of payoffs that a player can have. As that continuum shrinks to zero, the players' strategies converge to the predicted Nash equilibria of the original, unperturbed, complete information game.
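Schematically, the construction can be sketched as follows; the notation here is illustrative rather than Harsanyi's own, and, as in the example below, the disturbance is shown as depending only on a player's own action.

```latex
% Illustrative sketch of the perturbed game (not Harsanyi's original notation).
% Player i privately observes a disturbance \theta_i and receives the perturbed payoff
\[
  \tilde{u}_i(s_i, s_{-i}) \;=\; u_i(s_i, s_{-i}) \;+\; \theta_i(s_i),
\]
% with the \theta_i drawn independently across players from continuous densities on a
% small support. A pure-strategy Bayesian equilibrium has player i choose s_i' exactly
% when \theta_i falls in a region \Theta_i(s_i'); as the support of the disturbances
% shrinks to zero,
\[
  \Pr\bigl[\theta_i \in \Theta_i(s_i')\bigr] \;\longrightarrow\; \sigma_i^{*}(s_i'),
\]
% the weight that a mixed equilibrium \sigma^{*} of the unperturbed game places on s_i'.
```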

The result is also an important aspect of modern-day inquiries in evolutionary game theory where the perturbed values are interpreted as distributions over types of players randomly paired in a population to play games.

Example

        C       D
C     3, 3    2, 4
D     4, 2    0, 0

Fig. 1: a Hawk–Dove game

Consider the Hawk–Dove game shown here. The game has two pure strategy equilibria (Defect, Cooperate) and (Cooperate, Defect). It also has a mixed equilibrium in which each player plays Cooperate with probability 2/3.
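These claims can be checked directly. The following Python sketch (the helper names are ours, the payoffs are those of Fig. 1) confirms that (Defect, Cooperate) and (Cooperate, Defect) are pure strategy equilibria and that playing Cooperate with probability 2/3 leaves the opponent indifferent:

```python
# Row player's payoffs in the Hawk-Dove game of Fig. 1 (the game is symmetric),
# with strategies indexed 0 = Cooperate, 1 = Defect.
PAYOFF = [[3, 2],   # Cooperate: 3 against C, 2 against D
          [4, 0]]   # Defect:    4 against C, 0 against D

def is_pure_equilibrium(row, col):
    """True if neither player gains by deviating unilaterally (symmetric game)."""
    row_ok = all(PAYOFF[row][col] >= PAYOFF[r][col] for r in (0, 1))
    col_ok = all(PAYOFF[col][row] >= PAYOFF[c][row] for c in (0, 1))
    return row_ok and col_ok

assert is_pure_equilibrium(1, 0)      # (Defect, Cooperate)
assert is_pure_equilibrium(0, 1)      # (Cooperate, Defect)
assert not is_pure_equilibrium(0, 0)  # (Cooperate, Cooperate) is not an equilibrium

# Mixed equilibrium: if the opponent Cooperates with probability p = 2/3,
# Cooperate and Defect yield the same expected payoff (both 8/3).
p = 2 / 3
expected_cooperate = p * PAYOFF[0][0] + (1 - p) * PAYOFF[0][1]  # 3p + 2(1 - p)
expected_defect = p * PAYOFF[1][0] + (1 - p) * PAYOFF[1][1]     # 4p + 0(1 - p)
assert abs(expected_cooperate - expected_defect) < 1e-12
```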

Suppose that each player i bears an extra cost a_i from playing Cooperate, which is uniformly distributed on [−A, A]. Players only know their own value of this cost, so this is a game of incomplete information which we can solve using Bayesian Nash equilibrium. The probability that a_i ≤ a* is (a* + A)/2A. If player 2 Cooperates when a_2 ≤ a*, then player 1's expected utility from Cooperating is −a_1 + 3(a* + A)/2A + 2(1 − (a* + A)/2A); his expected utility from Defecting is 4(a* + A)/2A. He should therefore himself Cooperate when a_1 ≤ 2 − 3(a* + A)/2A. Seeking a symmetric equilibrium in which both players Cooperate if a_i ≤ a*, we solve this for a* = 1/(2 + 3/A). Having worked out a*, we can calculate the probability of each player playing Cooperate as

Pr(a_i ≤ a*) = (a* + A)/2A = (2 + A)/(3 + 2A).

As A → 0, this approaches 2/3 – the same probability as in the mixed strategy in the complete information game.
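The convergence is easy to see numerically. A minimal Python sketch, using the closed-form expressions derived above (the function names are ours):

```python
# Closed-form expressions from the example above: cutoff a* = 1/(2 + 3/A) and
# Pr(Cooperate) = (a* + A)/(2A) = (2 + A)/(3 + 2A) for a uniform cost on [-A, A].
def cutoff(A):
    return 1 / (2 + 3 / A)

def prob_cooperate(A):
    return (cutoff(A) + A) / (2 * A)

for A in (1.0, 0.1, 0.01, 0.001):
    print(f"A = {A:<6} Pr(Cooperate) = {prob_cooperate(A):.6f}")
# The printed probabilities approach 2/3 = 0.666667 as A shrinks, e.g.
# A = 1.0    Pr(Cooperate) = 0.600000
# A = 0.001  Pr(Cooperate) = 0.666556
```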

Thus, we can think of the mixed strategy equilibrium as the outcome of pure strategies followed by players who have a small amount of private information about their payoffs.

Technical details

Harsanyi's proof involves the strong assumption that the perturbations for each player are independent of the other players. However, further refinements to make the theorem more general have been attempted. [2] [3]

The main result of the theorem is that all the mixed strategy equilibria of a given game can be purified using the same sequence of perturbed games. However, in addition to independence of the perturbations, it relies on the set of payoffs for this sequence of games being of full measure. There are games, of a pathological nature, for which this condition fails to hold.

The problems with these games fall into two categories: (1) various mixed strategies of the game are purified by different sequences of perturbed games, and (2) some mixed strategies of the game involve weakly dominated strategies. No mixed strategy involving a weakly dominated strategy can be purified using this method, because if there is ever any positive probability that the opponent will play a strategy for which the weakly dominated strategy is not a best response, then one will never wish to play the weakly dominated strategy. Hence, the limit fails to hold because it involves a discontinuity. [4]
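To illustrate the weak dominance point, consider a hypothetical 2×2 game of our own construction (not taken from the sources) in which one of the row player's strategies is weakly dominated; the sketch below checks that the dominated strategy ceases to be a best response as soon as the perturbed opponent puts any positive probability on the column against which it does strictly worse:

```python
# Hypothetical 2x2 game (our own illustration): for the row player, D is weakly
# dominated by U -- equal payoff against column L, strictly worse against column R.
ROW_PAYOFF = {"U": {"L": 1, "R": 1},
              "D": {"L": 1, "R": 0}}

def expected(row, q):
    """Row player's expected payoff when the column player plays R with probability q."""
    return (1 - q) * ROW_PAYOFF[row]["L"] + q * ROW_PAYOFF[row]["R"]

# Against the unperturbed strategy q = 0, U and D are both best responses...
assert expected("U", 0.0) == expected("D", 0.0)

# ...but any perturbation that puts positive weight on R makes D strictly worse,
# so perturbed-game equilibria place no weight on D and cannot approach a mixed
# strategy that uses it: the limit is discontinuous at q = 0.
for q in (1e-1, 1e-3, 1e-6):
    assert expected("U", q) > expected("D", q)
```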

Related Research Articles

An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population in a given environment, is impenetrable, meaning that it cannot be invaded by any alternative strategy that is initially rare. It is relevant in game theory, behavioural ecology, and evolutionary psychology. An ESS is an equilibrium refinement of the Nash equilibrium. It is a Nash equilibrium that is "evolutionarily" stable: once it is fixed in a population, natural selection alone is sufficient to prevent alternative (mutant) strategies from invading successfully. The theory is not intended to deal with the possibility of gross external changes to the environment that bring new selective forces to bear.

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

The game of chicken, also known as the hawk–dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, each individual tries to avoid yielding out of pride, not wanting to look like a 'chicken'. So each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game is for the most part over.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

In game theory, a player's strategy is any of the options which he or she chooses in a setting where the outcome depends not only on their own actions but on the actions of others. A player's strategy will determine the action which the player will take at any stage of the game.

In game theory, the stag hunt, sometimes referred to as the assurance game or trust dilemma, describes a conflict between safety and social cooperation. The stag hunt derives from a story told by philosopher Jean-Jacques Rousseau in his Discourse on Inequality. Rousseau describes a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag. This has been taken to be a useful analogy for social cooperation, such as international agreements on climate change. The stag hunt differs from the Prisoner's Dilemma in that there are two pure-strategy Nash equilibria: one in which both players cooperate and one in which both players defect. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.

In game theory, battle of the sexes (BoS) is a two-player coordination game. Some authors refer to the game as Bach or Stravinsky and designate the players simply as Player 1 and Player 2, rather than assigning sex.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they are to choose the strategy corresponding to it.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory a Poisson game is a game with a random number of players, where the distribution of the number of players follows a Poisson random process. An extension of games of imperfect information, Poisson games have mostly seen application to models of voting.

In game theory a max-dominated strategy is a strategy which is not a best response to any strategy profile of the other players. This is an extension to the notion of strictly dominated strategies, which are max-dominated as well.

Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

References

  1. Harsanyi, J. C. (1973). "Games with randomly disturbed payoffs: a new rationale for mixed-strategy equilibrium points." International Journal of Game Theory 2, pp. 1–23. doi:10.1007/BF01737554
  2. Aumann, R., et al. (1983). "Approximate Purification of Mixed Strategies." Mathematics of Operations Research 8, pp. 327–341.
  3. Govindan, S., Reny, P. J., and Robson, A. J. (2003). "A Short Proof of Harsanyi's Purification Theorem." Games and Economic Behavior 45(2), pp. 369–374. doi:10.1016/S0899-8256(03)00149-0
  4. Fudenberg, Drew and Jean Tirole (1991). Game Theory. MIT Press, pp. 233–234.