M equilibrium

M equilibrium is a set-valued solution concept in game theory that relaxes the rational-choice assumptions of perfect maximization ("no mistakes") and perfect beliefs ("no surprises"). The concept can be applied to any normal-form game with finite, discrete strategy sets. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis. [1]

Background

A large body of work in experimental game theory has documented systematic departures from Nash equilibrium, the cornerstone of classical game theory. [2] The lack of empirical support for Nash equilibrium led Nash himself to return to doing research in pure mathematics. [3] Selten, who shared the 1994 Nobel Prize with Nash, likewise concluded that "game theory is for proving theorems, not for playing games". [4] M equilibrium is motivated by the desire for an empirically relevant game theory.

M equilibrium accomplishes this by replacing the two main assumptions underlying classical game theory, perfect maximization and rational expectations, with the weaker notions of ordinal monotonicity – players' choice probabilities are ranked the same as the expected payoffs based on their beliefs – and ordinal consistency – players' beliefs yield the same ranking of expected payoffs as their choices.

M equilibria are not derived from the fixed-point methods that follow from imposing rational expectations and that have long dominated economics. Instead, the mathematical machinery used to characterize M equilibria is semi-algebraic geometry. Interestingly, some of this machinery was developed by Nash himself. [5] [6] [7] The characterization of M equilibria as semi-algebraic sets allows for mathematically precise and empirically testable predictions.

Definition

M equilibrium is based on the following two conditions: ordinal monotonicity and ordinal consistency.

Let σ and μ denote the concatenations of players' choice and belief profiles respectively, and let rank and π denote the concatenations of players' rank correspondences and payoff functions. We write π̄(μ) for the profile of expected payoffs based on players' beliefs and π̂(σ) for the profile of expected payoffs when beliefs are correct, i.e. π̂(σ) = π̄(μ) for μ = σ. The set of possible choice profiles is Σ and the set of possible belief profiles is M.

Definition: We say (M^σ, M^μ) form an M equilibrium if they are the closures of the largest non-empty sets S ⊆ Σ and B ⊆ M that satisfy:

rank(σ) = rank(π̄(μ)) = rank(π̂(σ)) for all σ ∈ S, μ ∈ B.
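The two rank conditions can be checked numerically for a given game. The following is an illustrative sketch, not code from the paper: the function names and the example game are hypothetical, and payoff ties are assumed away so that rankings are strict.

```python
import numpy as np

def rank_order(v):
    # Ordinal ranking of a vector's entries, best first (assumes no ties).
    return tuple(np.argsort(-np.asarray(v), kind="stable"))

def is_m_consistent(A, B, sigma1, sigma2, mu1, mu2):
    """Check ordinal monotonicity and ordinal consistency for one profile
    in the bimatrix game (A, B). sigma_i is player i's mixed choice;
    mu_i is player i's belief about the other player's mixed choice."""
    # Expected payoffs per action, under beliefs and under correct beliefs.
    pi1_belief, pi1_actual = A @ mu1, A @ sigma2
    pi2_belief, pi2_actual = B.T @ mu2, B.T @ sigma1
    # Ordinal monotonicity: choices ranked like belief-based expected payoffs.
    monotone = (rank_order(sigma1) == rank_order(pi1_belief)
                and rank_order(sigma2) == rank_order(pi2_belief))
    # Ordinal consistency: beliefs rank payoffs the same way actual choices do.
    consistent = (rank_order(pi1_belief) == rank_order(pi1_actual)
                  and rank_order(pi2_belief) == rank_order(pi2_actual))
    return monotone and consistent

# Example: a 2x2 game in which each player's first action strictly dominates.
A = np.array([[3.0, 1.0], [2.0, 0.0]])
B = np.array([[3.0, 2.0], [1.0, 0.0]])
ok = is_m_consistent(A, B,
                     sigma1=np.array([0.7, 0.3]), sigma2=np.array([0.8, 0.2]),
                     mu1=np.array([0.6, 0.4]), mu2=np.array([0.9, 0.1]))
```

In this example any choice profile that puts more weight on the dominant action, paired with any belief that does the same, passes both checks; an M equilibrium set collects all such profiles, which is why it has positive measure rather than being a single point.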

Properties

It can be shown that, generically, M equilibria satisfy the following properties:

  1. M equilibria have positive measure in the spaces of choice and belief profiles
  2. M equilibria are "colorable" by a unique rank vector
  3. Nash equilibria arise as boundary points of some M equilibrium

The number of M equilibria can generically be even or odd, and may be less than, equal to, or greater than the number of Nash equilibria. Also, any M equilibrium may contain zero, one, or multiple Nash equilibria. Importantly, the measure of any M equilibrium choice set is bounded and decreases exponentially with the number of players and the number of possible choices.

Meta Theory

Surprisingly, M equilibrium "minimally envelops" various parametric models based on fixed points, including quantal response equilibrium (QRE). [1] Unlike QRE, however, M equilibrium is parameter-free, easy to compute, and does not impose the rational-expectations condition of homogeneous and correct beliefs.

Behavioral stability

The interior of a colored M equilibrium set consists of choices and beliefs that are behaviorally stable. A profile is behaviorally stable when small perturbations of the game do not destroy its equilibrium nature: an M equilibrium is behaviorally stable if it remains an M equilibrium after the game is perturbed. Behavioral stability is a strengthening of the concept of strategic stability. [1] [8]

See also

Related Research Articles

In game theory, the Nash equilibrium is the most commonly-used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy. The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.
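The backward-induction logic described above can be made concrete with a short sketch. The payoff scheme below (pot size, shares, growth factor) is illustrative rather than Rosenthal's original parameterization: the mover may Take the big share of the current pot, or Pass, after which the pot grows and roles swap.

```python
def solve_centipede(n_rounds, pot=4.0, big=0.8, grow=1.5):
    """Backward induction on a simple centipede game. Returns the first
    mover's and the other player's payoffs, plus the subgame-perfect
    action at each round (first round to last)."""
    def value(k, pot):
        take = (big * pot, (1 - big) * pot)
        if k == n_rounds:
            # Passing at the last round ends the game with the grown pot
            # split in the other player's favour.
            passed, tail = ((1 - big) * grow * pot, big * grow * pot), []
        else:
            (m, o), tail = value(k + 1, grow * pot)
            passed = (o, m)  # roles swap after a pass
        if take[0] >= passed[0]:
            return take, ["take"] + tail
        return passed, ["pass"] + tail
    return value(1, pot)

payoffs, plan = solve_centipede(6)
```

Although both players would earn more if play continued for several rounds, the induction unravels from the last round and prescribes "take" at every node; this stark prediction is exactly what experimental play of the centipede game tends to violate.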

Matching pennies is a non-cooperative game studied in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even wins and keeps both pennies. If the pennies do not match, then Odd wins and keeps both pennies.
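The game can be written as a payoff bimatrix. The quick check below (a sketch using +1/−1 stakes) confirms that uniform 50/50 mixing leaves both players indifferent between their actions, which is why it is the game's unique, and fully mixed, Nash equilibrium.

```python
import numpy as np

# Rows/columns are (Heads, Tails); entries are Even's winnings (+1 on a
# match, -1 on a mismatch). The game is zero-sum, so Odd's payoffs negate.
even = np.array([[ 1, -1],
                 [-1,  1]])
odd = -even

mix = np.array([0.5, 0.5])      # uniform mixing by both players
even_payoffs = even @ mix       # Even's payoff per action vs Odd's mix
odd_payoffs = odd.T @ mix       # Odd's payoff per action vs Even's mix
```

Both payoff vectors come out as zeros: against a 50/50 opponent neither action does strictly better, so no profitable deviation exists for either player.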

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.

In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.

In game theory, a symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If one can change the identities of the players without changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties. Ordinally symmetric games are games that are symmetric with respect to the ordinal structure of the payoffs. A game is quantitatively symmetric if and only if it is symmetric with respect to the exact payoffs. A partnership game is a symmetric game where both players receive identical payoffs for any strategy set. That is, playing strategy a against strategy b yields the same payoff as playing strategy b against strategy a.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.

In mathematical logic, the Borel hierarchy is a stratification of the Borel algebra generated by the open subsets of a Polish space; elements of this algebra are called Borel sets. Each Borel set is assigned a unique countable ordinal number called the rank of the Borel set. The Borel hierarchy is of particular interest in descriptive set theory.

Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.
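As a sketch of how a logit QRE can be computed (the precision parameter λ and the use of matching pennies are illustrative choices, not part of any particular library), one can iterate each player's logit response to the other's current mixture; for matching pennies the logit QRE coincides with the 50/50 Nash mix at every precision.

```python
import math

def logit_prob(u_heads, u_tails, lam):
    # Logit choice probability of Heads, given action payoffs and precision lam.
    eh, et = math.exp(lam * u_heads), math.exp(lam * u_tails)
    return eh / (eh + et)

def logit_qre_matching_pennies(lam=0.5, iters=200):
    """Fixed-point iteration of logit responses in matching pennies.
    p and q are the Heads probabilities of Even and Odd."""
    p, q = 0.9, 0.2  # arbitrary starting point
    for _ in range(iters):
        # Even's expected payoffs: Heads earns 2q-1, Tails earns 1-2q;
        # Odd's payoffs are reversed (zero-sum game).
        p, q = (logit_prob(2 * q - 1, 1 - 2 * q, lam),
                logit_prob(1 - 2 * p, 2 * p - 1, lam))
    return p, q
```

For small enough λ the iteration is a contraction, so it converges to (0.5, 0.5) from any starting point; as λ grows, logit responses approach best responses and the QRE traces a path toward Nash equilibrium.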

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

Proper equilibrium is a refinement of Nash Equilibrium by Roger B. Myerson. Proper equilibrium further refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are made with significantly smaller probability than less costly ones.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

Rabin fairness is a fairness model invented by Matthew Rabin. It goes beyond the standard assumptions in modeling behavior, rationality and self-interest, to incorporate fairness. Rabin's fairness model incorporates findings from the economics and psychology fields to provide an alternative utility model. Fairness is one type of social preference.

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In set theory and mathematical logic, the Lévy hierarchy, introduced by Azriel Lévy in 1965, is a hierarchy of formulas in the formal language of the Zermelo–Fraenkel set theory, which is typically called just the language of set theory. This is analogous to the arithmetical hierarchy, which provides a similar classification for sentences of the language of arithmetic.

In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

References

  1. Goeree, Jacob K.; Louis, Philippos (2018). "M Equilibrium: A dual theory of beliefs and choices in games". arXiv:1811.05138 [econ.TH].
  2. Goeree, Jacob K.; Holt, Charles (2001). "Ten little treasures of game theory and ten intuitive contradictions". American Economic Review. 91 (5): 1402–1422. CiteSeerX 10.1.1.184.8700. doi:10.1257/aer.91.5.1402.
  3. Nasar, Sylvia (1998). A Beautiful Mind. New York: Simon & Schuster. ISBN 978-0743224574.
  4. Goeree, Jacob K.; Holt, Charles (1999). "Stochastic game theory: For playing games, not just for doing theory". Proceedings of the National Academy of Sciences. 96 (19): 10564–10567. Bibcode:1999PNAS...9610564G. doi:10.1073/pnas.96.19.10564. PMC 33741. PMID 10485862.
  5. Kollár, János (2017). "Nash's work in algebraic geometry". Bulletin of the American Mathematical Society. 54 (2): 307–324. doi:10.1090/bull/1543.
  6. Bochnak, Jacek; Coste, Michel; Roy, Marie-Françoise (2013). Real Algebraic Geometry. Springer Science & Business Media. doi:10.1007/978-3-662-03718-8. ISBN 978-3-642-08429-4. S2CID 118839789.
  7. Nash, John F. (1952). "Real algebraic manifolds". Annals of Mathematics. 56 (3): 405–421. doi:10.2307/1969649. JSTOR 1969649.
  8. Kohlberg, Elon; Mertens, Jean-François (1986). "On the strategic stability of equilibria". Econometrica. 54 (5): 1003–1037. CiteSeerX 10.1.1.295.4592. doi:10.2307/1912320. JSTOR 1912320.