Symmetric game

In game theory, a symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If one can change the identities of the players without changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties. Ordinally symmetric games are games that are symmetric with respect to the ordinal structure of the payoffs. A game is quantitatively symmetric if and only if it is symmetric with respect to the exact payoffs. A partnership game is a symmetric game where both players receive identical payoffs for any strategy profile. That is, a player receives the same payoff for playing strategy a against strategy b as for playing strategy b against strategy a.

Symmetry in 2x2 games

        E       F
E     a, a    b, c
F     c, b    d, d

Only 12 out of the 144 ordinally distinct 2x2 games are symmetric. However, many of the commonly studied 2x2 games are at least ordinally symmetric. The standard representations of chicken, the Prisoner's Dilemma, and the Stag hunt are all symmetric games. Formally, in order for a 2x2 game to be symmetric, its payoff matrix must conform to the schema shown above.

The requirements for a game to be ordinally symmetric are weaker: it need only be the case that the ordinal ranking of the payoffs conforms to the schema above.
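
These conditions are mechanical enough to check by machine. The following Python sketch (an illustration, not code from the cited sources) tests a two-player bimatrix game for quantitative symmetry, for ordinal symmetry via payoff ranks, and for the partnership property; the nested-list representation and the function names are choices made for this example.

    # Payoffs as matrices: A[i][j] is the row player's payoff and B[i][j] the
    # column player's payoff when row strategy i meets column strategy j.
    # Both players share the same strategy set, so the matrices are square.

    def is_quantitatively_symmetric(A, B):
        """Exact symmetry: B must be the transpose of A."""
        n = len(A)
        return all(B[i][j] == A[j][i] for i in range(n) for j in range(n))

    def ranks(M):
        """Replace each payoff by its ordinal rank (ties share a rank)."""
        ordered = sorted(set(v for row in M for v in row))
        return [[ordered.index(v) for v in row] for row in M]

    def is_ordinally_symmetric(A, B):
        """Only each player's ordinal ranking of payoffs must fit the schema."""
        return is_quantitatively_symmetric(ranks(A), ranks(B))

    def is_partnership(A, B):
        """Partnership game: a symmetric game in which both players receive
        the same payoff at every strategy profile."""
        n = len(A)
        return is_quantitatively_symmetric(A, B) and all(
            A[i][j] == B[i][j] for i in range(n) for j in range(n))

    # The standard Prisoner's Dilemma is symmetric but not a partnership game.
    A = [[3, 0], [5, 1]]   # row player
    B = [[3, 5], [0, 1]]   # column player (the transpose of A)
    print(is_quantitatively_symmetric(A, B))  # True
    print(is_ordinally_symmetric(A, B))       # True
    print(is_partnership(A, B))               # False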

Symmetry and equilibria

Nash (1951) shows that every finite symmetric game has a symmetric mixed-strategy Nash equilibrium. Cheng et al. (2004) show that every two-strategy symmetric game has a (not necessarily symmetric) pure-strategy Nash equilibrium. Emmons et al. (2022) show that in every common-payoff game (also known as a team game), that is, every game in which all players receive the same payoff, every optimal strategy profile is also a Nash equilibrium.
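
For the two-strategy case these results can be made concrete. The sketch below works with the schema of the previous section (row-player payoffs a, b, c, d) and returns a symmetric equilibrium of a symmetric 2x2 game: a pure one whenever some strategy is a best response to itself, and otherwise the mixture that makes the opponent indifferent. The function name and the closed form are illustrative; they are not taken from the papers cited above.

    def symmetric_equilibrium(a, b, c, d):
        """Probability with which each player uses E in a symmetric Nash
        equilibrium of the symmetric 2x2 game with payoffs a, b, c, d."""
        if a >= c:          # E is a best response to E, so (E, E) is an equilibrium
            return 1.0
        if d >= b:          # F is a best response to F, so (F, F) is an equilibrium
            return 0.0
        # Otherwise mix so that the opponent is indifferent between E and F:
        #   p*a + (1 - p)*b = p*c + (1 - p)*d
        return (d - b) / ((a - c) + (d - b))

    # A common parameterization of chicken (a=0, b=-1, c=1, d=-10) has no
    # symmetric pure equilibrium; each player swerves (plays E) with probability 0.9.
    print(symmetric_equilibrium(0, -1, 1, -10))   # 0.9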

Uncorrelated asymmetries: payoff-neutral asymmetries

Symmetries here refer to symmetries in payoffs. Biologists often refer to asymmetries in payoffs between players in a game as correlated asymmetries. These are in contrast to uncorrelated asymmetries, which are purely informational and have no effect on payoffs (see, for example, the Hawk-dove game).

The general case

A game with a payoff of $u_i(s_1, s_2, \ldots, s_N)$ for player $i$, where $S_i$ is player $i$'s strategy set and $S_1 = S_2 = \cdots = S_N$, is considered symmetric if for any permutation $\pi$ of the players,

$u_{\pi(i)}(s_1, s_2, \ldots, s_N) = u_i(s_{\pi(1)}, s_{\pi(2)}, \ldots, s_{\pi(N)}).$ [1]
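
As a brute-force illustration of this condition, the sketch below simply enumerates all strategy profiles and all permutations of the players; the dictionary payoff representation u[i][s] and the function name are assumptions made for this example.

    from itertools import permutations, product

    def is_symmetric(u, num_players, strategies):
        """Check u_{pi(i)}(s_1, ..., s_N) == u_i(s_{pi(1)}, ..., s_{pi(N)})
        for every player i, profile s and permutation pi.  Payoffs are given
        as u[i][s], where s is a tuple with one strategy per player."""
        for s in product(strategies, repeat=num_players):
            for pi in permutations(range(num_players)):
                permuted = tuple(s[pi[j]] for j in range(num_players))
                for i in range(num_players):
                    if u[pi[i]][s] != u[i][permuted]:
                        return False
        return True

    # The 2x2 schema with a=1, b=2, c=3, d=4 written out for both players:
    E, F = 0, 1
    u = {
        0: {(E, E): 1, (E, F): 2, (F, E): 3, (F, F): 4},   # row player
        1: {(E, E): 1, (E, F): 3, (F, E): 2, (F, F): 4},   # column player
    }
    print(is_symmetric(u, num_players=2, strategies=[E, F]))   # True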

Partha Dasgupta and Eric Maskin give the following definition, which has been repeated since in the economics literature:

$u_i(s_1, s_2, \ldots, s_N) = u_{\pi(i)}(s_{\pi(1)}, s_{\pi(2)}, \ldots, s_{\pi(N)}).$

However, this is a stronger condition that implies the game is not only symmetric in the sense above, but is a common-interest game, in the sense that all players' payoffs are identical. [1]
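
The contrast is easy to see computationally. Continuing the sketch above (again purely illustrative), the Dasgupta–Maskin-style check permutes the player index and the strategy profile on the same side of the equation. A three-player "odd one out" game is symmetric in the ordinary sense but fails the stronger test, because its players do not all receive the same payoff.

    from itertools import permutations, product

    def is_dm_symmetric(u, num_players, strategies):
        """Check u_i(s_1, ..., s_N) == u_{pi(i)}(s_{pi(1)}, ..., s_{pi(N)})
        for every player i, profile s and permutation pi."""
        for s in product(strategies, repeat=num_players):
            for pi in permutations(range(num_players)):
                permuted = tuple(s[pi[j]] for j in range(num_players))
                for i in range(num_players):
                    if u[i][s] != u[pi[i]][permuted]:
                        return False
        return True

    # "Odd one out": a player scores 1 only by choosing differently from both
    # opponents.  The game is symmetric, but payoffs are not shared by all
    # players, so the Dasgupta-Maskin condition rejects it.
    def odd_one_out(i, s):
        return 1 if all(s[i] != s[j] for j in range(3) if j != i) else 0

    u3 = {i: {s: odd_one_out(i, s) for s in product([0, 1], repeat=3)}
          for i in range(3)}
    print(is_symmetric(u3, num_players=3, strategies=[0, 1]))      # True
    print(is_dm_symmetric(u3, num_players=3, strategies=[0, 1]))   # False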

Related Research Articles

In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players. In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. The principle of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to competing firms choosing outputs.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.
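
In symbols (notation introduced here for illustration), player $i$'s best response to an opponents' profile $s_{-i}$ is $BR_i(s_{-i}) = \arg\max_{s_i \in S_i} u_i(s_i, s_{-i})$, and a Nash equilibrium is a profile in which every player's strategy lies in their best-response set.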

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

In game theory, trembling hand perfect equilibrium is a type of refinement of a Nash equilibrium that was first proposed by Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

Equilibrium selection is a concept from game theory which seeks to address reasons for players of a game to select a certain equilibrium over another. The concept is especially relevant in evolutionary game theory, where the different methods of equilibrium selection respond to different ideas of what equilibria will be stable and persistent for one player to play even in the face of deviations of the other players. This is important because there are various equilibrium concepts, and for many particular concepts, such as the Nash equilibrium, many games have multiple equilibria.

In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.
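
In the exact-potential case, for instance, this means (in notation introduced here for illustration) that there is a single function $\Phi$ with

$u_i(s_i', s_{-i}) - u_i(s_i, s_{-i}) = \Phi(s_i', s_{-i}) - \Phi(s_i, s_{-i})$

for every player $i$, every profile $s$ and every deviation $s_i'$, so all unilateral payoff changes can be read off one function.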

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
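
Formally (in notation introduced here for illustration), a profile $s$ is an $\varepsilon$-equilibrium if $u_i(s) \ge u_i(s_i', s_{-i}) - \varepsilon$ for every player $i$ and every alternative strategy $s_i'$; taking $\varepsilon = 0$ recovers the exact Nash condition.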

In economics and game theory, the decisions of two or more players are called strategic complements if they mutually reinforce one another, and they are called strategic substitutes if they mutually offset one another. These terms were originally coined by Bulow, Geanakoplos, and Klemperer (1985).

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In mathematics, and in particular the study of game theory, a function is graph continuous if it satisfies a particular continuity condition. The concept was originally defined by Partha Dasgupta and Eric Maskin in 1986 and is a version of continuity that finds application in the study of continuous games.

In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal-form representation. Without placing constraints on player utilities, describing a game of $n$ players, each facing $s$ strategies, requires listing $n s^{n}$ utility values. Even trivial algorithms are capable of finding a Nash equilibrium in a time polynomial in the length of such a large input. A succinct game is of polynomial type if in a game represented by a string of length n the number of players, as well as the number of strategies of each player, is bounded by a polynomial in n.

A Markov perfect equilibrium is an equilibrium concept in game theory. It has been used in analyses of industrial organization, macroeconomics, and political economy. It is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin.

M equilibrium is a set valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

In game theory, a satisfaction equilibrium is a solution concept for a class of non-cooperative games, namely games in satisfaction form. Games in satisfaction form model situations in which players aim at satisfying a given individual constraint, e.g., a performance metric must be smaller or bigger than a given threshold. When a player satisfies its own constraint, the player is said to be satisfied. A satisfaction equilibrium, if it exists, arises when all players in the game are satisfied.

References

  1. Ham, Nicholas (18 November 2013). "Notions of Anonymity, Fairness and Symmetry for Finite Strategic-Form Games". arXiv:1311.4766 [math.CO].

Further reading