Solution concept

[Figure: Selected equilibrium refinements in game theory. Arrows point from a refinement to the more general concept (i.e., ESS ⊂ Proper).]

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.


Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented below improves on its predecessor by eliminating implausible equilibria in richer games.

Formal definition

Let $\Gamma$ be the class of all games and, for each game $G \in \Gamma$, let $S_G$ be the set of strategy profiles of $G$. A solution concept is an element of the direct product $\prod_{G \in \Gamma} 2^{S_G}$, i.e., a function $F\colon \Gamma \to \bigcup_{G \in \Gamma} 2^{S_G}$ such that $F(G) \subseteq S_G$ for all $G \in \Gamma$. For example, the Nash-equilibrium concept maps each game to the set of its Nash equilibria.

Rationalizability and iterated dominance

In this solution concept, players are assumed to be rational and so strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. A strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. (Strictly dominated strategies are also important in minimax game-tree search.) For example, in the (single period) prisoners' dilemma (shown below), cooperate is strictly dominated by defect for both players because either player is always better off playing defect, regardless of what his opponent does.

                         Prisoner 2: Cooperate    Prisoner 2: Defect
Prisoner 1: Cooperate    −0.5, −0.5               −10, 0
Prisoner 1: Defect       0, −10                   −2, −2
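
As an illustration, here is a minimal sketch of iterated elimination of strictly dominated strategies, written against the payoff table above; the data layout and function names are illustrative, not from any standard library:

```python
# A minimal sketch: iterated elimination of strictly dominated strategies
# for the prisoners' dilemma above.

# Payoffs for (row strategy, column strategy): (prisoner 1, prisoner 2).
payoffs = {
    ("cooperate", "cooperate"): (-0.5, -0.5),
    ("cooperate", "defect"):    (-10, 0),
    ("defect", "cooperate"):    (0, -10),
    ("defect", "defect"):       (-2, -2),
}

def strictly_dominated(s, own, other, u):
    """True if some alternative to s yields a strictly higher payoff
    against every strategy the opponent might play."""
    return any(
        all(u(alt, opp) > u(s, opp) for opp in other)
        for alt in own if alt != s
    )

def iterated_elimination(row, col):
    """Repeatedly delete strictly dominated strategies for both players."""
    u1 = lambda s, opp: payoffs[(s, opp)][0]   # prisoner 1 (row) payoff
    u2 = lambda s, opp: payoffs[(opp, s)][1]   # prisoner 2 (column) payoff
    changed = True
    while changed:
        changed = False
        for s in list(row):
            if strictly_dominated(s, row, col, u1):
                row.discard(s); changed = True
        for s in list(col):
            if strictly_dominated(s, col, row, u2):
                col.discard(s); changed = True
    return row, col

print(iterated_elimination({"cooperate", "defect"}, {"cooperate", "defect"}))
# ({'defect'}, {'defect'}): cooperate is eliminated for both players.
```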

Nash equilibrium

A Nash equilibrium is a strategy profile in which every player's strategy is a best response to the strategies played by all the other players. (A strategy profile specifies a strategy for every player; e.g., in the prisoners' dilemma above, (cooperate, defect) specifies that prisoner 1 plays cooperate and prisoner 2 plays defect.) A player's strategy is a best response to the other players' strategies if no other available strategy would yield a higher payoff when played against those strategies.
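
To make the definition concrete, here is a minimal sketch that enumerates the pure-strategy Nash equilibria of a two-player game by checking the best-response condition directly; the payoff numbers repeat the prisoners' dilemma above, and the function name is illustrative:

```python
from itertools import product

# Payoffs for (row strategy, column strategy): (prisoner 1, prisoner 2).
payoffs = {
    ("cooperate", "cooperate"): (-0.5, -0.5),
    ("cooperate", "defect"):    (-10, 0),
    ("defect", "cooperate"):    (0, -10),
    ("defect", "defect"):       (-2, -2),
}

def pure_nash_equilibria(payoffs, row, col):
    """Profiles from which no player gains by a unilateral deviation."""
    equilibria = []
    for r, c in product(row, col):
        u1, u2 = payoffs[(r, c)]
        r_is_best = all(payoffs[(alt, c)][0] <= u1 for alt in row)
        c_is_best = all(payoffs[(r, alt)][1] <= u2 for alt in col)
        if r_is_best and c_is_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs,
                           ["cooperate", "defect"],
                           ["cooperate", "defect"]))
# [('defect', 'defect')]: mutual defection is the unique pure equilibrium.
```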

Backward induction

In some games, there are multiple Nash equilibria, but not all of them are realistic. In dynamic games, backward induction can be used to eliminate unrealistic Nash equilibria. Backward induction assumes that players are rational and will make the best decisions based on their future expectations. This eliminates noncredible threats, which are threats that a player would not carry out if they were ever called upon to do so.

For example, consider a dynamic game with an incumbent firm and a potential entrant to the industry. The incumbent has a monopoly and wants to maintain its market share. If the entrant enters, the incumbent can either fight or accommodate the entrant. If the incumbent accommodates, the entrant will enter and gain profit. If the incumbent fights, it will lower its prices, run the entrant out of business (incurring exit costs), and damage its own profits.

The best response for the incumbent if the entrant enters is to accommodate, and the best response for the entrant if the incumbent accommodates is to enter. This results in a Nash equilibrium. However, if the incumbent chooses to fight, the best response for the entrant is to not enter. If the entrant does not enter, it does not matter what the incumbent chooses to do. Hence, fight can be considered a best response for the incumbent if the entrant does not enter, resulting in another Nash equilibrium.

However, this second Nash equilibrium can be eliminated by backward induction because it relies on a noncredible threat from the incumbent. By the time the incumbent reaches the decision node where it can choose to fight, it would be irrational to do so because the entrant has already entered. Therefore, backward induction eliminates this unrealistic Nash equilibrium.
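
A minimal sketch of backward induction on this entry game follows; the tree encoding and the payoff numbers (entrant's payoff first, incumbent's second) are illustrative assumptions chosen to match the story:

```python
# A decision node is (player, {action: subtree}); a leaf is a payoff tuple
# (entrant, incumbent). The numbers are illustrative assumptions.
entry_game = ("entrant", {
    "stay out": (0, 10),                # incumbent keeps its monopoly profit
    "enter": ("incumbent", {
        "accommodate": (3, 5),          # both earn a duopoly profit
        "fight": (-2, 2),               # a price war hurts both players
    }),
})

PLAYER_INDEX = {"entrant": 0, "incumbent": 1}

def backward_induction(node):
    """Return (payoffs, plan): the payoffs under optimal play and a map
    from each player to the action chosen at their decision node
    (each player moves at most once in this tree)."""
    if isinstance(node[1], dict):       # decision node
        player, actions = node
        i = PLAYER_INDEX[player]
        best_action, best_payoffs, plan = None, None, {}
        for action, subtree in actions.items():
            payoffs, subplan = backward_induction(subtree)
            plan.update(subplan)
            if best_payoffs is None or payoffs[i] > best_payoffs[i]:
                best_action, best_payoffs = action, payoffs
        plan[player] = best_action
        return best_payoffs, plan
    return node, {}                     # leaf: just the payoffs

print(backward_induction(entry_game))
# ((3, 5), {'incumbent': 'accommodate', 'entrant': 'enter'}):
# the entrant enters, the incumbent accommodates; "fight" never survives.
```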


Subgame perfect Nash equilibrium

A generalization of backward induction is subgame perfection. Backward induction assumes that all future play will be rational. In subgame perfect equilibria, play in every subgame is rational (specifically a Nash equilibrium). Backward induction can only be used in terminating (finite) games of definite length and cannot be applied to games with imperfect information. In these cases, subgame perfection can be used. The eliminated Nash equilibrium described above is subgame imperfect because it is not a Nash equilibrium of the subgame that starts at the node reached once the entrant has entered.
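
To see the connection in code, the noncredible equilibrium fails exactly this test: restricted to the post-entry subgame, "fight" is not a best response. A minimal check, reusing the illustrative payoffs from the backward-induction sketch above:

```python
# The subgame after entry: only the incumbent moves. Payoffs are the
# illustrative (entrant, incumbent) numbers from the sketch above.
post_entry = {"accommodate": (3, 5), "fight": (-2, 2)}

# "fight" is part of a Nash equilibrium of this subgame only if it
# maximizes the incumbent's payoff there.
best = max(post_entry, key=lambda a: post_entry[a][1])
print(best)               # 'accommodate'
print(best == "fight")    # False: the threat to fight is not credible
```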

Perfect Bayesian equilibrium

Sometimes subgame perfection does not impose a large enough restriction on unreasonable outcomes. For example, since subgames cannot cut through information sets, a game of imperfect information may have only one subgame – itself – and hence subgame perfection cannot be used to eliminate any Nash equilibria. A perfect Bayesian equilibrium (PBE) is a specification of players' strategies together with beliefs about which node in each information set has been reached by the play of the game. A belief about a decision node is the probability a particular player assigns to that node being or coming into play (on the equilibrium path). The intuition of PBE is that the strategies it specifies are rational given the beliefs it specifies, and the beliefs it specifies are consistent with those strategies.

In a Bayesian game, a strategy determines what a player plays at every information set controlled by that player. The requirement that beliefs be consistent with strategies is not part of subgame perfection; PBE thus adds a consistency condition on players' beliefs. Just as no player's strategy is strictly dominated in a Nash equilibrium, in a PBE no player's strategy is strictly dominated beginning at any information set: for every belief the player could hold at that information set, there is no other strategy that yields a greater expected payoff. Unlike the above solution concepts, this holds even at information sets off the equilibrium path, so players cannot threaten to play strategies that are strictly dominated beginning at any off-path information set.

The "Bayesian" in the name of this solution concept alludes to the fact that players update their beliefs according to Bayes' theorem: they calculate probabilities given what has already taken place in the game.
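
For instance, suppose a player believes the opponent is a "tough" type with prior probability 0.3, that a tough opponent would cut prices with probability 1, and that a normal opponent would cut prices with probability 0.5 (all numbers illustrative). After observing a price cut, the belief is updated by Bayes' theorem:

$$
\Pr(\text{tough}\mid\text{cut})
= \frac{\Pr(\text{cut}\mid\text{tough})\,\Pr(\text{tough})}
       {\Pr(\text{cut}\mid\text{tough})\,\Pr(\text{tough}) + \Pr(\text{cut}\mid\text{normal})\,\Pr(\text{normal})}
= \frac{1 \times 0.3}{1 \times 0.3 + 0.5 \times 0.7} \approx 0.46.
$$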

Forward induction

Forward induction is so called because, just as backward induction assumes future play will be rational, forward induction assumes past play was rational. Where a player does not know what type another player is (i.e., there is imperfect and asymmetric information), that player may form a belief about the other's type by observing the other's past actions. Hence, the probability the player assigns to the opponent being a certain type is based on the assumption that the opponent's past play was rational. A player may elect to signal his type through his actions.

Kohlberg and Mertens (1986) introduced the solution concept of stable equilibrium, a refinement that satisfies forward induction. A counterexample was found in which such a stable equilibrium failed to satisfy backward induction. To resolve the problem, Jean-François Mertens introduced what game theorists now call the Mertens-stable equilibrium, probably the first solution concept satisfying both forward and backward induction.

Forward induction yields a unique solution for the burning money game.


Related Research Articles

In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players. In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no one has anything to gain by changing only one's own strategy. The principle of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to competing firms choosing outputs.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.


In game theory, a signaling game is a simple type of dynamic Bayesian game.

Game theory is the branch of mathematics in which games – models describing human behaviour – are studied. This is a glossary of some terms of the subject.

In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.

Backward induction is the process of determining a sequence of optimal choices by reasoning backward from the end of a problem or situation to its beginning, choice by choice. It involves examining the last point at which a decision is to be made and identifying the optimal action at that point. Using this information, one can then determine what to do at the second-to-last point of decision. This process continues backward until the best action for every possible point along the sequence is determined. Backward induction was first utilized in 1875 by Arthur Cayley, who discovered the method while attempting to solve the secretary problem.

In game theory, trembling hand perfect equilibrium is a type of refinement of a Nash equilibrium that was first proposed by Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a Manipulated Nash equilibrium or MAPNASH is a refinement of subgame perfect equilibrium used in dynamic games of imperfect information. Informally, a strategy set is a MAPNASH of a game if it would be a subgame perfect equilibrium of the game if the game had perfect information. MAPNASH was first suggested by Amershi, Sadanand, and Sadanand (1988) and has been discussed in several papers since. It is a solution concept based on how players think about other players' thought processes.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a repeated game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.

The concept of coalition-proof Nash equilibrium applies to certain "noncooperative" environments in which players can freely discuss their strategies but cannot make binding commitments. It emphasizes the immunization to deviations that are self-enforcing. While the best-response property in Nash equilibrium is necessary for self-enforceability, it is not generally sufficient when players can jointly deviate in a way that is mutually beneficial.


Jean-François Mertens was a Belgian game theorist and mathematical economist.

In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.
