Manipulated Nash equilibrium

MAPNASH
A solution concept in game theory
Relationship: Subset of Nash equilibrium and subgame perfect equilibrium
Proposed by: A. Amershi, A. Sadanand, and V. Sadanand
Used for: Dynamic games of imperfect information
Example: Battle of the sexes

In game theory, a Manipulated Nash equilibrium or MAPNASH is a refinement of subgame perfect equilibrium used in dynamic games of imperfect information. Informally, a strategy profile is a MAPNASH of a game if it would be a subgame perfect equilibrium of that game under perfect information. MAPNASH was first suggested by Amershi, Sadanand, and Sadanand (1988) and has been discussed in several papers since. It is a solution concept based on how players think about other players' thought processes.

Game theory is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations and is an umbrella term for the science of logical decision making in humans, animals, and computers.


In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.


Formal definition and an example

Consider a dynamic game of imperfect information, G. Based on G, construct a game, PG, which has the same strategies, payoffs, and order of moves as G except PG is a game of perfect information (every player in PG is aware of the strategies chosen by those players who moved before). A strategy, S, in G is a MAPNASH of G if and only if S is a Nash equilibrium of G and S is a subgame perfect equilibrium of PG.
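
As a minimal illustration of this two-part check, the Python sketch below spells out the definition as a predicate. The two equilibrium sets are hypothetical placeholders anticipating the Battle of the Sexes example worked out below; in practice they would be obtained by equilibrium enumeration on G and backward induction on PG.

# Minimal sketch of the definitional check: a strategy profile S is a
# MAPNASH of G iff S is a Nash equilibrium of G and a subgame perfect
# equilibrium of PG.
def is_mapnash(profile, nash_equilibria_of_G, spe_of_PG):
    # Both conditions of the definition must hold.
    return profile in nash_equilibria_of_G and profile in spe_of_PG

# Hypothetical placeholder sets matching the Battle of the Sexes example
# below; profiles are identified by the moves played on the equilibrium path.
nash_of_G = {("O", "o"), ("F", "f")}   # pure Nash equilibria of G
spe_of_PG = {("O", "o")}               # play of the unique SPE of PG

print(is_mapnash(("O", "o"), nash_of_G, spe_of_PG))  # True
print(is_mapnash(("F", "f"), nash_of_G, spe_of_PG))  # False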

In game theory, a player's strategy is any of the options a player can choose in a setting where the outcome depends not only on their own actions but also on the actions of others. A player's strategy determines the action the player will take at any stage of the game.

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

Battle of the Sexes with imperfect information (G)
Battle of the Sexes with perfect information (PG)

As an example, consider a sequential version of Battle of the sexes (pictured above on the left). This game has three Nash equilibria: (O, o), (F, f), and one mixed equilibrium. We can construct a perfect information version (pictured above on the right). This game has only one subgame perfect equilibrium, (O, Oo). If the first player chooses O, the second will choose Oo because 2 is better than 0. If the first player chooses F, the second will choose Ff because 3 is better than 0. Player 1 is therefore choosing between a payoff of 3 if she chooses O and 2 if she chooses F. As a result, player 1 will choose O and player 2 will choose Oo.
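
The backward-induction argument can be reproduced with a short Python sketch. The payoffs below are read off the reasoning just given, namely (O, o) yields (3, 2) and (F, f) yields (2, 3); the assumption that mismatched choices give both players 0 follows the standard Battle of the Sexes payoffs and is not stated explicitly above.

# Backward induction on the perfect-information game PG.
# Payoffs are (player 1, player 2); mismatched choices assumed to give (0, 0).
PAYOFFS = {
    ("O", "o"): (3, 2), ("O", "f"): (0, 0),
    ("F", "o"): (0, 0), ("F", "f"): (2, 3),
}

# Step 1: at each of player 2's decision nodes, pick her best reply.
best_reply = {
    m1: max(("o", "f"), key=lambda m2: PAYOFFS[(m1, m2)][1])
    for m1 in ("O", "F")
}

# Step 2: player 1 anticipates those replies and picks her best move.
best_move = max(("O", "F"), key=lambda m1: PAYOFFS[(m1, best_reply[m1])][0])

print(best_reply)  # {'O': 'o', 'F': 'f'} -- player 2 plays o after O, f after F
print(best_move)   # 'O'
print(PAYOFFS[(best_move, best_reply[best_move])])  # (3, 2)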

In game theory, battle of the sexes (BoS) is a two-player coordination game. Some authors refer to the game as Bach or Stravinsky and designate the players simply as Player 1 and Player 2, rather than assigning sex.

In the imperfect information Battle of the sexes (G), the only MAPNASH is (O, o). Effectively, by moving first, player 1 can force the other player to coordinate on player 1's preferred equilibrium, hence the name "manipulated."
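
This claim can be checked directly by combining the two parts of the definition: enumerate the pure-strategy Nash equilibria of the simultaneous-move game G and keep those that coincide with the backward-induction play of PG. The sketch below reuses the assumed payoff matrix from above and checks pure-strategy profiles only; the mixed equilibrium of G is not examined here.

# Verifying that (O, o) is the only pure-strategy MAPNASH of G.
# Same assumed payoff matrix as above.
PAYOFFS = {
    ("O", "o"): (3, 2), ("O", "f"): (0, 0),
    ("F", "o"): (0, 0), ("F", "f"): (2, 3),
}
P1_MOVES, P2_MOVES = ("O", "F"), ("o", "f")

def is_nash_of_G(s1, s2):
    # No unilateral deviation improves either player's payoff in the
    # simultaneous-move (imperfect information) game G.
    u1, u2 = PAYOFFS[(s1, s2)]
    return (all(PAYOFFS[(d1, s2)][0] <= u1 for d1 in P1_MOVES)
            and all(PAYOFFS[(s1, d2)][1] <= u2 for d2 in P2_MOVES))

# Backward-induction play of PG, computed as in the previous sketch.
best_reply = {m1: max(P2_MOVES, key=lambda m2: PAYOFFS[(m1, m2)][1]) for m1 in P1_MOVES}
spe_move1 = max(P1_MOVES, key=lambda m1: PAYOFFS[(m1, best_reply[m1])][0])
spe_play = (spe_move1, best_reply[spe_move1])

mapnash = [(s1, s2) for s1 in P1_MOVES for s2 in P2_MOVES
           if is_nash_of_G(s1, s2) and (s1, s2) == spe_play]
print(mapnash)  # [('O', 'o')]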

Significance

In traditional game theory, the order of moves was relevant only if there was asymmetric information. In the case of battle of the sexes discussed above, the imperfect information game is equivalent to a game where player 2 moves first and to a game where both players move simultaneously. If players follow MAPNASH, the order of moves is relevant even if it does not introduce asymmetries in information. Experimental evidence suggests that actual players are influenced by the order of moves even when the order provides them with no additional information.

Cooper et al. (1993) study a version of battle of the sexes and find that when one player moves before the other, the first player tends to choose his favorite equilibrium more often and the second player chooses her less favored equilibrium more often. This is a reversal for the second player compared to the same game where both players choose simultaneously. Similar results are observed in public good games by Budescu, Au, and Chen (1997) and Rapoport (1997).

All of these games are coordination games in which equilibrium selection is an important problem. In these games one player has a preferred equilibrium, and one might suppose that the order of moves merely introduces an asymmetry that solves the coordination problem. To address this concern, Weber, Camerer, and Knez (2004) study a coordination game in which no player prefers one equilibrium over another. They find that introducing an order of moves still results in different equilibria being selected, and they conclude that MAPNASH may be an important predictive tool.

In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

Related Research Articles

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".

In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game. Otherwise, it is the case that some players cannot be sure exactly what has taken place so far in the game and what their position is.

In game theory, a perfect Bayesian equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. A PBE is a refinement of both Bayesian Nash equilibrium (BNE) and subgame perfect equilibrium (SPE). A PBE has two components: strategies and beliefs.

In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913, to prove that chess has pure optimal strategies.

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept, subgame-perfect Nash equilibrium, rather than Nash equilibrium.

Sequential equilibrium is a refinement of Nash Equilibrium for extensive form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each of the players but also a belief for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an assessment for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs and its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of beliefs and associated strategies with the same property.

A Markov perfect equilibrium is an equilibrium concept in game theory. It is the refinement of the concept of subgame perfect equilibrium to extensive form games for which a payoff-relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin. It has since been used, among other things, in the analysis of industrial organization, macroeconomics, and political economy.

Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

References