In game theory, a move, action, or play is any one of the options which a player can choose in a setting where the optimal outcome depends not only on their own actions but also on the actions of others. [1] The discipline mainly concerns how a player's actions in a game affect the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy, and Battleship. [2]
The term strategy is typically used to mean a complete algorithm for playing a game, telling a player what to do for every possible situation. A player's strategy determines the action the player will take at any stage of the game. However, the idea of a strategy is often confused or conflated with that of a move or action, because of the correspondence between moves and pure strategies in most games: for any move X, "always play move X" is a valid strategy, and as a result every move can also be considered a strategy. Other authors, however, treat strategies as a different kind of object from actions, and therefore as distinct.
It is helpful to think of a "strategy" as a list of directions and of a "move" as a single turn on that list. Each strategy is evaluated through the payoff or outcome of each action: the goal of each agent is to weigh their payoffs against a competitor's possible actions. For example, Competitor A can assume Competitor B enters the market and compare the payoffs it receives from entering versus not entering. The next step is to assume Competitor B does not enter and again compare the payoffs from entering versus not entering. This technique can identify dominant strategies, in which a single action maximizes a player's payoff no matter what the competitor does, as in the sketch below.
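The comparison just described is mechanical enough to write down. The following Python sketch, using hypothetical payoff numbers for the entry example above, checks whether Competitor A has an action that is at least as good regardless of what Competitor B does:

```python
# Competitor A's payoff for each combination of actions; the numbers
# are hypothetical, chosen only to illustrate the comparison.
payoffs = {
    "enter":    {"enter": 2, "stay_out": 5},
    "stay_out": {"enter": 0, "stay_out": 0},
}

def is_dominant(action, payoffs):
    """True if `action` is at least as good for Competitor A as every
    alternative, whichever action Competitor B chooses."""
    others = [a for a in payoffs if a != action]
    return all(
        payoffs[action][b] >= payoffs[other][b]
        for other in others
        for b in payoffs[action]
    )

for a in payoffs:
    print(a, is_dominant(a, payoffs))
# enter True      -> with these numbers, "enter" is dominant for A
# stay_out False
```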
A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include one and only one strategy for every player.
A player's strategy set defines what strategies are available for them to play.
A player has a finite strategy set if they have a number of discrete strategies available to them. For instance, a game of rock paper scissors comprises a single move by each player, and each player's move is made without knowledge of the other's, not as a response, so each player has the finite strategy set {rock, paper, scissors}.
A strategy set is infinite otherwise. For instance, the cake cutting game has a bounded continuum of strategies in the strategy set {cut anywhere between zero percent and 100 percent of the cake}.
In a dynamic game, a game played over a series of stages in time, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. For instance, in the ultimatum game, the strategy set for the second player would consist of every possible rule for which offers to accept and which to reject.
In a Bayesian game, or games in which players have incomplete information about one another, the strategy set is similar to that in a dynamic game. It consists of rules for what action to take for any possible private information.
In applied game theory, the definition of the strategy sets is an important part of the art of making a game simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem, that is, the friction between two or more players, to limit the strategy spaces and ease the solution.
For instance, strictly speaking, in the Ultimatum game a player can have strategies such as: reject offers of ($1, $3, $5, ..., $19) and accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.
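To see how drastic this restriction is, one can simply count strategies. A small Python sketch, assuming offers in whole dollars as above:

```python
offers = range(21)                 # 21 possible offers: $0, $1, ..., $20

# Unrestricted: a responder strategy is an arbitrary accept/reject rule,
# i.e. any subset of the 21 offers may be the set of accepted offers.
unrestricted = 2 ** len(offers)    # 2,097,152 strategies

# Threshold form: "reject any offer <= x, accept any offer > x",
# one strategy per cutoff x in $0, ..., $20.
threshold = len(offers)            # 21 strategies

print(unrestricted, threshold)     # 2097152 21
```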
A pure strategy provides a complete definition of how a player will play a game. A pure strategy can be thought of as a single concrete plan that is contingent on the observations the player makes during the course of the game. In particular, it determines the move a player will make in any situation they could face. A player's strategy set is the set of pure strategies available to that player.
A mixed strategy is an assignment of a probability to each pure strategy. Mixed strategies are typically used when the game does not admit a rational prescription of a single pure strategy; they allow a player to randomly select a pure strategy. (See the following section for an illustration.) Since probabilities are continuous, there are infinitely many mixed strategies available to a player. Because probabilities are assigned to pure strategies, the payoff a player receives in a given scenario must be referred to as an "expected payoff".
Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0.
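As a small illustration of expected payoffs and of this degenerate case, the following Python sketch computes the expected payoff in Matching pennies (defined later in this article) for a mixed strategy and for a pure strategy written as a degenerate mix:

```python
# Expected payoff in Matching pennies, from Even's point of view.
import itertools

# Even wins (+1) when the pennies match, loses (-1) otherwise.
even_payoff = {("H", "H"): 1, ("H", "T"): -1,
               ("T", "H"): -1, ("T", "T"): 1}

def expected_payoff(p_even, p_odd):
    """Expected payoff to Even when each player mixes over H and T.
    p_even and p_odd map each side of the penny to its probability."""
    return sum(
        p_even[a] * p_odd[b] * even_payoff[(a, b)]
        for a, b in itertools.product("HT", repeat=2)
    )

# The equilibrium mix (1/2, 1/2) gives an expected payoff of zero.
print(expected_payoff({"H": 0.5, "T": 0.5}, {"H": 0.5, "T": 0.5}))  # 0.0
# A pure strategy is the degenerate mix assigning probability 1 to H.
print(expected_payoff({"H": 1.0, "T": 0.0}, {"H": 0.5, "T": 0.5}))  # 0.0
```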
A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. (Totally mixed strategies are important for equilibrium refinements such as trembling hand perfect equilibrium.)
In a soccer penalty kick, the kicker must choose whether to kick to the right or left side of the goal, and simultaneously the goalie must decide which way to block it. Also, the kicker has a direction they are best at shooting, which is left if they are right-footed. The matrix for the soccer game illustrates this situation, a simplified form of the game studied by Chiappori, Levitt, and Groseclose (2002). [3] It assumes that if the goalie guesses correctly, the kick is blocked, which is set to the base payoff of 0 for both players. If the goalie guesses wrong, the kick is more likely to go in if it is to the left (payoffs of +2 for the kicker and -2 for the goalie) than if it is to the right (the lower payoff of +1 to kicker and -1 to goalie).
|                    | Goalie: Lean Left | Goalie: Lean Right |
|--------------------|-------------------|--------------------|
| Kicker: Kick Left  | 0, 0              | +2, -2             |
| Kicker: Kick Right | +1, -1            | 0, 0               |

Payoff for the Soccer Game (Kicker, Goalie)
This game has no pure-strategy equilibrium, because one player or the other would deviate from any profile of strategies. For example, (Left, Left) is not an equilibrium because the Kicker would deviate to Right and increase their payoff from 0 to +1.
The kicker's mixed-strategy equilibrium is found from the fact that they will deviate from randomizing unless their payoffs from Kick Left and Kick Right are exactly equal. If the goalie leans left with probability g, the kicker's expected payoff from Kick Left is g(0) + (1-g)(2), and from Kick Right is g(1) + (1-g)(0). Equating these yields g = 2/3. Similarly, the goalie is willing to randomize only if the kicker chooses mixed strategy probability k such that Lean Left's payoff of k(0) + (1-k)(-1) equals Lean Right's payoff of k(-2) + (1-k)(0), so k = 1/3. Thus, the mixed-strategy equilibrium is (Prob(Kick Left) = 1/3, Prob(Lean Left) = 2/3).
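The same arithmetic can be checked mechanically. A short Python sketch of the two indifference conditions, using exact fractions:

```python
# Verifying the mixed equilibrium of the soccer game above by checking
# the two indifference conditions with exact rational arithmetic.
from fractions import Fraction

# Payoffs (kicker, goalie); rows = kicker's kick, columns = goalie's lean.
payoff = {
    ("L", "L"): (0, 0),  ("L", "R"): (2, -2),
    ("R", "L"): (1, -1), ("R", "R"): (0, 0),
}

# Goalie leans left with probability g; the kicker is indifferent when
# g*0 + (1-g)*2 == g*1 + (1-g)*0, i.e. 2 - 2g == g, so g = 2/3.
g = Fraction(2, 3)
kick_left  = g * payoff[("L", "L")][0] + (1 - g) * payoff[("L", "R")][0]
kick_right = g * payoff[("R", "L")][0] + (1 - g) * payoff[("R", "R")][0]
assert kick_left == kick_right == Fraction(2, 3)

# Kicker kicks left with probability k; the goalie is indifferent when
# k*0 + (1-k)*(-1) == k*(-2) + (1-k)*0, i.e. k - 1 == -2k, so k = 1/3.
k = Fraction(1, 3)
lean_left  = k * payoff[("L", "L")][1] + (1 - k) * payoff[("R", "L")][1]
lean_right = k * payoff[("L", "R")][1] + (1 - k) * payoff[("R", "R")][1]
assert lean_left == lean_right == Fraction(-2, 3)

print(kick_left, lean_left)   # 2/3 -2/3
```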
In equilibrium, the kicker kicks to their best side only 1/3 of the time, because the goalie is guarding that side more. Also, in equilibrium the kicker is indifferent about which way they kick, but for it to be an equilibrium they must choose Kick Left with probability exactly 1/3.
Chiappori, Levitt, and Groseclose try to measure how important it is for the kicker to kick to their favored side, add center kicks, etc., and look at how professional players actually behave. They find that players do randomize: kickers kick to their favored side 45% of the time, and goalies lean to that side 57% of the time. Their article is well known as an example of how people in real life use mixed strategies.
In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can divide Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players are playing pure strategies. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, see Matching pennies. However, many games do have pure strategy Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag hunt). Further, games can have both pure strategy and mixed strategy equilibria. An easy example is the pure coordination game, where in addition to the pure strategies (A,A) and (B,B) a mixed equilibrium exists in which both players play either strategy with probability 1/2.
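For small matrix games, pure-strategy Nash equilibria can be found by brute force: check every profile of pure strategies for a profitable unilateral deviation. A Python sketch, applied to Matching pennies (no pure equilibrium) and to the pure coordination game (two pure equilibria):

```python
# Brute-force search for pure-strategy Nash equilibria in a two-player
# bimatrix game: a profile (i, j) is an equilibrium if neither player
# can gain by unilaterally switching to another pure strategy.
import itertools

def pure_nash(p1, p2):
    """p1[i][j], p2[i][j]: payoffs when row plays i and column plays j."""
    rows, cols = range(len(p1)), range(len(p1[0]))
    return [
        (i, j)
        for i, j in itertools.product(rows, cols)
        if all(p1[i][j] >= p1[i2][j] for i2 in rows)    # row can't gain
        and all(p2[i][j] >= p2[i][j2] for j2 in cols)   # column can't gain
    ]

# Matching pennies: no pure-strategy equilibrium.
mp1 = [[ 1, -1], [-1,  1]]
mp2 = [[-1,  1], [ 1, -1]]
print(pure_nash(mp1, mp2))    # []

# Pure coordination: two pure equilibria, (A, A) and (B, B).
co = [[1, 0], [0, 1]]
print(pure_nash(co, co))      # [(0, 0), (1, 1)]
```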
During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic", since they are weak Nash equilibria: a player is indifferent about whether to follow their equilibrium strategy probability or deviate to some other probability. [4] [5] Game theorist Ariel Rubinstein describes alternative ways of understanding the concept. The first, due to Harsanyi (1973), [6] is called purification, and supposes that the mixed strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors. [5] A second interpretation imagines the game players standing for a large population of agents. Each of the agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy hence represents the distribution of pure strategies chosen within each population. However, this does not provide any justification for the case when players are individual agents.
Later, Aumann and Brandenburger (1995), [7] re-interpreted Nash equilibrium as an equilibrium in beliefs, rather than actions. For instance, in rock paper scissors an equilibrium in beliefs would have each player believing the other was equally likely to play each strategy. This interpretation weakens the descriptive power of Nash equilibrium, however, since it is possible in such an equilibrium for each player to actually play a pure strategy of Rock in each play of the game, even though over time the probabilities are those of the mixed strategy.
While a mixed strategy assigns a probability distribution over pure strategies, a behavior strategy assigns at each information set a probability distribution over the set of possible actions. While the two concepts are very closely related in the context of normal form games, they have very different implications for extensive form games. Roughly, a mixed strategy randomly chooses a deterministic path through the game tree, while a behavior strategy can be seen as a stochastic path. The relationship between mixed and behavior strategies is the subject of Kuhn's theorem. The result establishes that in any finite extensive-form game with perfect recall, for any player and any mixed strategy, there exists a behavior strategy that, against all profiles of strategies (of other players), induces the same distribution over terminal nodes as the mixed strategy does. The converse is also true.
A famous example of why perfect recall is required for the equivalence is given by Piccione and Rubinstein (1997) with their Absent-Minded Driver game.
Outcome equivalence compares the mixed and behavior strategies of Player i against the pure strategies of Player i's opponents. Outcome equivalence is defined as the situation in which, for any mixed and behavior strategy that Player i takes, in response to any pure strategy that Player i's opponents play, the outcome distributions of the mixed and behavior strategy must be equal: Q^(U_i, s_-i)(z) = Q^(β_i, s_-i)(z) for every terminal node z, where U_i denotes Player i's mixed strategy, β_i denotes Player i's behavior strategy, and s_-i is the opponents' pure strategy profile. [8]
Perfect recall is defined as the ability of every player in the game to remember and recall all past actions within the game. Perfect recall is required for the equivalence because, in finite games with imperfect recall, there exist mixed strategies of Player i for which no equivalent behavior strategy exists. This is fully described in the Absent-Minded Driver game formulated by Piccione and Rubinstein. In short, this game is based on the decision-making of a driver with imperfect recall, who needs to take the second exit off the highway to reach home but does not remember which intersection they are at when they reach it.
Without perfect information (i.e. with imperfect information), players make a choice at each decision node without knowledge of the decisions that preceded it. Therefore, a player's mixed strategy can produce outcomes that their behavior strategy cannot, and vice versa. This is demonstrated in the Absent-Minded Driver game. With perfect recall and information, the driver has a single pure strategy, [continue, exit], as the driver knows which intersection (or decision node) they are at when they arrive at it. Looking at the planning stage alone, on the other hand, the driver's maximum expected payoff is achieved by continuing at both intersections with some probability p, and is maximized at p = 2/3. This simple one-player game demonstrates the importance of perfect recall for outcome equivalence, and its impact on normal- and extensive-form games. [9]
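The p = 2/3 figure follows from a one-line maximization. A Python sketch, assuming the standard payoffs of Piccione and Rubinstein's example (0 for exiting at the first intersection, 4 for exiting at the second, and 1 for never exiting):

```python
# Planning-stage calculation for the Absent-Minded Driver, assuming the
# standard payoffs: exit first = 0, exit second = 4, never exit = 1.

def planning_payoff(p):
    """Expected payoff when the driver continues with probability p at
    every intersection (they cannot condition on which one it is)."""
    exit_first  = (1 - p) * 0        # exits at the first intersection
    exit_second = p * (1 - p) * 4    # continues once, then exits
    no_exit     = p * p * 1          # continues at both intersections
    return exit_first + exit_second + no_exit

# 4p(1-p) + p^2 = 4p - 3p^2 is maximized where 4 - 6p = 0, i.e. p = 2/3,
# giving an expected payoff of 4/3. A grid search confirms this:
best = max((planning_payoff(i / 1000), i / 1000) for i in range(1001))
print(best)   # approximately (1.333, 0.667)
```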
In game theory, the Nash equilibrium is the most commonly used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by unilaterally changing their own strategy. The idea of the Nash equilibrium dates back to Cournot, who in 1838 applied it to his model of competition in an oligopoly.
The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield, individuals try to avoid it out of pride, not wanting to look like "chickens." Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends.
In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.
A coordination game is a type of simultaneous game found in game theory. It describes a situation where a player earns a higher payoff when they select the same course of action as another player. The game is not one of pure conflict, which results in multiple pure strategy Nash equilibria in which players choose matching strategies.
Matching pennies is a non-cooperative game studied in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even wins and keeps both pennies. If the pennies do not match, then Odd wins and keeps both pennies.
In game theory, the battle of the sexes is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by R. Duncan Luce and Howard Raiffa in their classic book, Games and Decisions. Some authors prefer to avoid assigning sexes to the players and instead use Players 1 and 2, and some refer to the game as Bach or Stravinsky, using two concerts as the two events. The game description here follows Luce and Raiffa's original story.
In game theory, a dominant strategy is a strategy that is better than any other strategy for one player, no matter how that player's opponent will play. Some very simple games can be solved using dominance.
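Dominance-solvable games can be handled by iterated elimination of strictly dominated strategies. A Python sketch, illustrated with standard Prisoner's dilemma payoffs:

```python
# Iterated elimination of strictly dominated strategies: repeatedly
# delete any pure strategy that is strictly worse than another
# alternative against every remaining strategy of the opponent.

def eliminate_dominated(p1, p2):
    rows = list(range(len(p1)))
    cols = list(range(len(p1[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:                     # is row r strictly dominated?
            if any(all(p1[r2][c] > p1[r][c] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:                     # is column c strictly dominated?
            if any(all(p2[r][c2] > p2[r][c] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Prisoner's dilemma; strategies: 0 = cooperate, 1 = defect.
p1 = [[3, 0], [5, 1]]           # defect strictly dominates cooperate
p2 = [[3, 5], [0, 1]]
print(eliminate_dominated(p1, p2))   # ([1], [1]): (defect, defect) survives
```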
Rationalizability is a solution concept in game theory. It is the most permissive possible solution concept that still requires both players to be at least somewhat rational and to know that the other players are also somewhat rational, i.e. that they do not play dominated strategies. A strategy is rationalizable if there exists some possible set of beliefs that both players could have about each other's actions that would still result in the strategy being played.
In game theory, trembling hand perfect equilibrium is a type of refinement of a Nash equilibrium that was first proposed by Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.
In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.
In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.
In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.
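A classic illustration of a correlated equilibrium is a traffic-signal-like public signal in the game of chicken. The Python sketch below uses commonly cited illustrative payoffs (these exact numbers are an assumption, not part of this article) and verifies that obeying a signal that recommends (Chicken, Chicken), (Dare, Chicken), or (Chicken, Dare), each with probability 1/3, is a best response for both players:

```python
# Checking a correlated equilibrium of chicken with illustrative payoffs.
from fractions import Fraction

D, C = 0, 1                      # 0 = Dare, 1 = Chicken
p1 = [[0, 7], [2, 6]]            # row's payoff: p1[row action][col action]
p2 = [[0, 2], [7, 6]]
third = Fraction(1, 3)
signal = {(C, C): third, (D, C): third, (C, D): third}

def obeys(payoff, me):
    """True if, for each recommended action, obeying is a best response
    to the conditional distribution over the opponent's recommendation."""
    for rec in (D, C):
        profiles = [s for s in signal if s[me] == rec]
        if not profiles:
            continue
        def value(action):
            return sum(signal[s] *
                       (payoff[action][s[1]] if me == 0
                        else payoff[s[0]][action])
                       for s in profiles)
        if any(value(dev) > value(rec) for dev in (D, C)):
            return False
    return True

print(obeys(p1, 0) and obeys(p2, 1))   # True: a correlated equilibrium
```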
In game theory, the purification theorem was contributed by Nobel laureate John Harsanyi in 1973. The theorem justifies a puzzling aspect of mixed strategy Nash equilibria: each player is wholly indifferent between each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.
In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".
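For finite games of perfect information, a subgame perfect equilibrium can be computed by backward induction: solve each subgame from the leaves up. A minimal Python sketch on a hypothetical two-stage entry game (the tree and its payoffs are illustrative assumptions):

```python
# Backward induction: at each decision node, the mover picks the action
# whose subgame value is best for them; the resulting plan is subgame
# perfect because it is optimal in every subgame.

def solve(node):
    """Return (payoffs, plan), where plan maps each decision node's
    label to the action chosen there."""
    if "payoffs" in node:                    # terminal node
        return node["payoffs"], {}
    best_action, best_payoffs, plan = None, None, {}
    for action, child in node["children"].items():
        payoffs, sub_plan = solve(child)
        plan.update(sub_plan)
        if best_payoffs is None or payoffs[node["player"]] > best_payoffs[node["player"]]:
            best_action, best_payoffs = action, payoffs
    plan[node["label"]] = best_action
    return best_payoffs, plan

# Player 0 moves first; player 1 replies after "In".
tree = {
    "label": "root", "player": 0,
    "children": {
        "Out": {"payoffs": (1, 3)},
        "In": {
            "label": "reply", "player": 1,
            "children": {"Fight": {"payoffs": (0, 0)},
                         "Accommodate": {"payoffs": (2, 2)}},
        },
    },
}
print(solve(tree))   # ((2, 2), {'reply': 'Accommodate', 'root': 'In'})
```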
Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.
Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium, since it offers each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.
In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
In game theory, a strong Nash equilibrium (SNE) is a combination of actions of the different players in which no coalition of players can cooperatively deviate in a way that strictly benefits all of its members, given that the actions of the other players remain fixed. This is in contrast to the simple Nash equilibrium, which considers only deviations by individual players. The concept was introduced by Robert Aumann in 1959. SNE is particularly useful in areas such as the study of voting systems, in which there are typically many more players than possible outcomes, and so plain Nash equilibria are far too abundant.
Jean-François Mertens was a Belgian game theorist and mathematical economist.
In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.