List of games in game theory

Game theory studies strategic interaction between individuals in situations called games. Classes of these games have been given names. This is a list of the most commonly studied games.

Explanation of features

Games can have several features; a few of the most common are listed here and appear as columns in the table below.

Number of players: each person or institution that makes decisions in the game is a player.
Strategies per player: the number of pure strategies available to each player.
Number of pure-strategy Nash equilibria: the number of strategy profiles from which no player can gain by changing only their own strategy.
Sequential: whether players move in some order rather than simultaneously.
Perfect information: whether every player observes all moves made earlier in the game.
Zero sum: whether one player's gains are exactly balanced by the other players' losses.
Move by nature: whether the game includes a chance move made by a non-strategic player ("nature").

List of games

Game | Players | Strategies per player | No. of pure-strategy Nash equilibria | Sequential | Perfect information | Zero sum | Move by nature
Battle of the sexes | 2 | 2 | 2 | No | No | No | No
Blotto games | 2 | variable | variable | No | No | Yes | No
Cake cutting | N, usually 2 | infinite | variable [1] | Yes | Yes | Yes | No
Centipede game | 2 | variable | 1 | Yes | Yes | No | No
Chicken (aka hawk-dove) | 2 | 2 | 2 | No | No | No | No
Gift-exchange game | N, usually 2 | variable | 1 | Yes | Yes | No | No
Commune game | 3 | | | | | |
Coordination game | N | variable | >2 | No | No | No | No
Cournot game | 2 | infinite [2] | 1 | No | No | No | No
Deadlock | 2 | 2 | 1 | No | No | No | No
Dictator game | 2 | infinite [2] | 1 | N/A [3] | N/A [3] | Yes | No
Diner's dilemma | N | 2 | 1 | No | No | No | No
Dollar auction | 2 | 2 | 0 | Yes | Yes | No | No
El Farol bar | N | 2 | variable | No | No | No | No
Game without a value | 2 | infinite | 0 | No | No | Yes | No
Guess 2/3 of the average | N | infinite | 1 | No | No | Maybe [4] | No
Kuhn poker | 2 | 27 & 64 | 0 | Yes | No | Yes | Yes
Matching pennies | 2 | 2 | 0 | No | No | Yes | No
Muddy Children Puzzle | N | 2 | 1 | Yes | No | No | Yes
Nash bargaining game | 2 | infinite [2] | infinite [2] | No | No | No | No
Optional prisoner's dilemma | 2 | 3 | 1 | No | No | No | No
Peace war game | N | variable | >2 | Yes | No | No | No
Pirate game | N | infinite [2] | infinite [2] | Yes | Yes | No | No
Platonia dilemma | N | 2 | | No | Yes | No | No
Princess and monster game | 2 | infinite | 0 | No | No | Yes | No
Prisoner's dilemma | 2 | 2 | 1 | No | No | No | No
Public goods | N | infinite | 1 | No | No | No | No
Rock, paper, scissors | 2 | 3 | 0 | No | No | Yes | No
Screening game | 2 | variable | variable | Yes | No | No | Yes
Signaling game | N | variable | variable | Yes | No | No | Yes
Stag hunt | 2 | 2 | 2 | No | No | No | No
Traveler's dilemma | 2 | N >> 1 | 1 | No | No | No | No
Truel | 3 | 1-3 | infinite | Yes | Yes | No | No
Trust game | 2 | infinite | 1 | Yes | Yes | No | No
Ultimatum game | 2 | infinite [2] | infinite [2] | Yes | Yes | No | No
Vickrey auction | N | infinite | 1 | No | No | No | Yes [5]
Volunteer's dilemma | N | 2 | 2 | No | No | No | No
War of attrition | 2 | 2 | 0 | No | No | No | No

Notes

  1. For the cake cutting problem, there is a simple solution if the object to be divided is homogeneous: one person cuts and the other chooses who gets which piece (the procedure is repeated in turn for each additional player). With a non-homogeneous object, such as a half chocolate/half vanilla cake or a patch of land with a single source of water, the solutions are far more complex.
  2. There may be finitely many strategies, depending on how the goods are divisible.
  3. Since the dictator game only involves one player actually choosing a strategy (the other does nothing), it cannot really be classified as sequential or perfect information.
  4. Potentially zero-sum, provided that the prize is split among all players who make an optimal guess. Otherwise non-zero sum.
  5. The true value of the auctioned item is random, as is each bidder's perceived value.

Related Research Articles

Game theory is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

In game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a larger piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally.
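
In symbols (standard notation, not taken from this article), if u_i(s) denotes player i's payoff at outcome s, the zero-sum condition is:

```latex
\sum_{i=1}^{n} u_i(s) = 0 \qquad \text{for every strategy profile } s .
```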

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.
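
As a concrete illustration of the definition, the following minimal Python sketch (not part of the original article) enumerates the pure-strategy Nash equilibria of a two-player game from its payoff matrices; the prisoner's dilemma payoffs used below are the usual textbook values, assumed here only for illustration. It finds the single equilibrium of mutual defection, matching the table entry above.

```python
from itertools import product

def pure_nash_equilibria(row_payoff, col_payoff):
    """Enumerate pure-strategy Nash equilibria of a two-player game.

    row_payoff[r][c] and col_payoff[r][c] are the players' payoffs when the
    row player picks strategy r and the column player picks strategy c.
    """
    n_rows, n_cols = len(row_payoff), len(row_payoff[0])
    equilibria = []
    for r, c in product(range(n_rows), range(n_cols)):
        row_best = all(row_payoff[r][c] >= row_payoff[r2][c] for r2 in range(n_rows))
        col_best = all(col_payoff[r][c] >= col_payoff[r][c2] for c2 in range(n_cols))
        if row_best and col_best:            # neither player gains by deviating alone
            equilibria.append((r, c))
    return equilibria

# Prisoner's dilemma with common textbook payoffs (assumed for illustration);
# strategy 0 = cooperate, 1 = defect.
row = [[3, 0], [5, 1]]
col = [[3, 5], [0, 1]]
print(pure_nash_equilibria(row, col))        # [(1, 1)] -- mutual defection
```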

Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match, then Odd keeps both pennies, so receives one from Even.
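
Running the hypothetical pure_nash_equilibria sketch from above on matching pennies (with a win scored as +1 and a loss as -1, an assumed normalization) returns an empty list, matching the table entry of zero pure-strategy Nash equilibria; the game is only solvable in mixed strategies, with each player randomizing 50/50.

```python
# Matching pennies, 0 = heads, 1 = tails; payoffs assumed as +1 for a win, -1 for a loss.
even = [[ 1, -1], [-1,  1]]   # Even wins when the pennies match
odd  = [[-1,  1], [ 1, -1]]   # Odd wins when they differ
print(pure_nash_equilibria(even, odd))       # [] -- no pure-strategy equilibrium
```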

In game theory, a player's strategy is any of the options the player can choose in a setting where the outcome depends not only on their own actions but also on the actions of others. A player's strategy determines the action the player will take at any stage of the game.

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, battle of the sexes (BoS) is a two-player coordination game. Some authors refer to the game as Bach or Stravinsky and designate the players simply as Player 1 and Player 2, rather than assigning sex.

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913, to prove that chess has pure optimal strategies.
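
Because the reasoning is purely mechanical, it is easy to sketch in code for small perfect-information games. The Python sketch below is a generic illustration rather than a method described in this article, and the two-stage entry game used as input is a hypothetical example.

```python
def backward_induction(node):
    """Solve a finite perfect-information game tree by reasoning from the leaves up.

    A leaf is a tuple of payoffs; an internal node is (player, {action: subtree}).
    Returns (payoffs, path): the payoffs reached and the actions on the chosen path.
    """
    if not isinstance(node[1], dict):        # leaf: payoffs are final
        return node, []
    player, moves = node
    best = None
    for action, subtree in moves.items():
        payoffs, path = backward_induction(subtree)
        # the player moving at this node keeps whichever action pays them most
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + path)
    return best

# Hypothetical two-stage entry game, assumed purely for illustration:
# player 0 enters or stays out; if she enters, player 1 accommodates or fights.
game = (0, {
    "stay out": (1, 5),
    "enter": (1, {"accommodate": (3, 3), "fight": (0, 0)}),
})

print(backward_induction(game))              # ((3, 3), ['enter', 'accommodate'])
```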

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy, the distribution is called a correlated equilibrium.
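
A classic concrete case is Aumann's version of chicken: a public device that recommends (dare, chicken), (chicken, dare) and (chicken, chicken) with probability 1/3 each, and (dare, dare) never, is a correlated equilibrium. The check below is a minimal Python sketch of the deviation test; the payoff numbers are one common textbook assignment, assumed here for illustration.

```python
# Aumann's chicken example; actions: 0 = dare, 1 = chicken.
# payoff[(a0, a1)] = (player 0's payoff, player 1's payoff) -- assumed textbook values.
payoff = {
    (0, 0): (0, 0), (0, 1): (7, 2),
    (1, 0): (2, 7), (1, 1): (6, 6),
}
# Joint distribution of the public device's recommendations; (dare, dare) is never drawn.
signal = {(0, 1): 1/3, (1, 0): 1/3, (1, 1): 1/3}

def is_correlated_equilibrium(payoff, signal, actions=(0, 1)):
    """Check that obeying the device's recommendation is optimal for both players."""
    for player in (0, 1):
        for rec in actions:          # the action recommended to this player
            for dev in actions:      # a candidate deviation played instead
                obey = deviate = 0.0
                for profile, prob in signal.items():
                    if profile[player] != rec:
                        continue     # only outcomes where this recommendation is received
                    other = profile[1 - player]
                    swapped = (dev, other) if player == 0 else (other, dev)
                    obey += prob * payoff[profile][player]
                    deviate += prob * payoff[swapped][player]
                if deviate > obey + 1e-12:
                    return False     # the player would rather ignore the recommendation
    return True

print(is_correlated_equilibrium(payoff, signal))   # True
```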

Sequential equilibrium is a refinement of Nash Equilibrium for extensive form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each of the players but also a belief for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an assessment for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs and its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of beliefs and associated strategies with the same property.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff-dominant equilibrium, since it offers each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they are to choose the strategy corresponding to it.
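
For a two-player, two-strategy game with strict equilibria (A, A) and (B, B), the Harsanyi-Selten criterion can be stated compactly in standard notation (not taken from this article): (A, A) risk-dominates (B, B) when the product of the players' losses from unilateral deviation is at least as large at (A, A), that is,

```latex
\bigl(u_1(A,A)-u_1(B,A)\bigr)\,\bigl(u_2(A,A)-u_2(A,B)\bigr)
\;\ge\;
\bigl(u_1(B,B)-u_1(A,B)\bigr)\,\bigl(u_2(B,B)-u_2(B,A)\bigr).
```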

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
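
Formally, in standard notation (not taken from this article), a strategy profile s is an epsilon-equilibrium if, for every player i and every alternative strategy s_i',

```latex
u_i(s_i', s_{-i}) \;\le\; u_i(s) + \varepsilon ,
```

so that taking epsilon = 0 recovers the exact Nash equilibrium condition.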

In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of N players, each facing M strategies, requires listing N·M^N utility values; for instance, 10 players with 2 strategies each already require 10·2^10 = 10,240 numbers. Even trivial algorithms are capable of finding a Nash equilibrium in time polynomial in the length of such a large input. A succinct game is of polynomial type if, in a game represented by a string of length n, the number of players, as well as the number of strategies of each player, is bounded by a polynomial in n.

A Markov perfect equilibrium is an equilibrium concept in game theory. It has been used in analyses of industrial organization, macroeconomics, and political economy. It is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin.

Jean-François Mertens was a Belgian game theorist and mathematical economist.

Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.
