Mertens-stable equilibrium

Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens [1] for games with finite numbers of players and strategies. Later, Mertens [2] proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. [3] This solution concept is now called Mertens stability, or just stability.

Like other refinements of Nash equilibrium [4] used in game theory, stability selects subsets of the set of Nash equilibria that have desirable properties. Stability invokes stronger criteria than other refinements, and thereby ensures that more desirable properties are satisfied.

Desirable Properties of a Refinement

Refinements have often been motivated by arguments for admissibility, backward induction, and forward induction. In a two-player game, an admissible decision rule for a player is one that does not use any strategy that is weakly dominated by another (see Strategic dominance). Backward induction posits that a player's optimal action in any event anticipates that his and others' subsequent actions are optimal. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium. Forward induction posits that a player's optimal action in any event presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction [5] is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information to be reached.
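
To make the admissibility criterion concrete, the following sketch (not from the cited literature; the payoff matrix and function name are invented, and it tests only dominance by pure strategies, whereas admissibility in general also excludes strategies dominated by mixtures) flags the row player's weakly dominated pure strategies in a small bimatrix game.

```python
# Minimal sketch: detect pure strategies of the row player that are weakly
# dominated by another *pure* strategy. Payoffs are hypothetical.
import numpy as np

def weakly_dominated_rows(A):
    """Indices of rows weakly dominated by some other pure row strategy."""
    dominated = []
    for i in range(A.shape[0]):
        for j in range(A.shape[0]):
            # Row j weakly dominates row i: never worse, strictly better somewhere.
            if i != j and np.all(A[j] >= A[i]) and np.any(A[j] > A[i]):
                dominated.append(i)
                break
    return dominated

# Row player's payoffs in a 3x2 game (columns index the opponent's strategies).
A = np.array([[3, 1],
              [3, 0],
              [2, 1]])
print(weakly_dominated_rows(A))  # [1, 2]: both are weakly dominated by row 0
```

An admissible decision rule for the row player would therefore avoid rows 1 and 2 in this example.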

Kohlberg and Mertens further emphasized that a solution concept should satisfy the invariance principle that it not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. Thus it should depend only on the reduced normal-form game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens [6][7] also emphasized the importance of the small worlds principle: a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.
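
As an illustration of passing to the reduced normal form, the sketch below (a hypothetical construction with invented payoffs and function name, not code from the cited papers) uses a feasibility linear program to test whether a pure strategy of the row player is redundant, i.e. whether a mixture of her other pure strategies replicates its payoffs for both players against every column.

```python
# Minimal sketch: test redundancy of a row strategy in a two-player game by
# solving a feasibility LP. Requires numpy and scipy; the example is hypothetical.
import numpy as np
from scipy.optimize import linprog

def is_redundant_row(A, B, i):
    """True if some mixture of the other rows gives both players exactly
    the payoffs of row i against every column."""
    others = [k for k in range(A.shape[0]) if k != i]
    # Equalities: match row i's payoffs in A and B column by column; weights sum to 1.
    A_eq = np.vstack([A[others].T, B[others].T, np.ones((1, len(others)))])
    b_eq = np.concatenate([A[i], B[i], [1.0]])
    res = linprog(c=np.zeros(len(others)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(others), method="highs")
    return res.success

# Hypothetical 3x2 game: row 2 is the 50/50 mixture of rows 0 and 1 for both players,
# so it would be deleted when forming the reduced normal form.
A = np.array([[4, 0], [0, 4], [2, 2]])
B = np.array([[1, 3], [3, 1], [2, 2]])
print(is_redundant_row(A, B, 2))  # True
```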

Kohlberg and Mertens demonstrated via examples that not all of these properties can be obtained from a solution concept that selects single Nash equilibria. Therefore, they proposed that a solution concept should select closed connected subsets of the set of Nash equilibria. [8]

Properties of Stable Sets

Stable sets satisfy the desirable properties described above: every equilibrium in a stable set is perfect, and hence admissible; a stable set contains a proper equilibrium of the normal form, which induces a quasi-perfect and hence sequential equilibrium in every extensive-form game with perfect recall having that normal form; and stable sets satisfy invariance, the small worlds property, and a decomposition property for products of independent games.

For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game. [10]

Definition of a Stable Set

A stable set is defined mathematically by essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This requires more than that every nearby game has a nearby equilibrium: essentiality further requires that the projection map cannot be deformed into a map to the boundary, which ensures that every perturbation of the fixed-point problem defining Nash equilibria has nearby solutions. This is apparently necessary to obtain all the desirable properties listed above.

Mertens provided several formal definitions depending on the coefficient module used for homology or cohomology.

A formal definition requires some notation. For a given game $G$, let $\Sigma$ be the product of the simplices of the players' mixed strategies. For each $0 < \delta \leq 1$, let $P_\delta$ be the space of perturbations $\eta$ that assign to each pure strategy $s_n$ of each player $n$ a weight $\eta_n(s_n) \geq 0$ with $\sum_{s_n} \eta_n(s_n) \leq \delta$, and let $\partial P_\delta$ be its topological boundary. For $\eta \in P_\delta$, let $\underline{\eta}$ be the minimum probability of any pure strategy. For any $\eta \in P_\delta$, define the perturbed game $G(\eta)$ as the game where the strategy set of each player $n$ is the same as in $G$, but where the payoff from a strategy profile $\tau$ is the payoff in $G$ from the profile $\sigma$ given by $\sigma_n = \bigl(1 - \sum_{s_n} \eta_n(s_n)\bigr)\tau_n + \eta_n$ for each player $n$. Say that $\sigma$ is a perturbed equilibrium of $G(\eta)$ if $\tau$ is an equilibrium of $G(\eta)$. Let $\mathcal{E}$ be the graph of the perturbed equilibrium correspondence over $P_\delta$, viz., the graph $\mathcal{E}$ is the set of those pairs $(\sigma, \eta) \in \Sigma \times P_\delta$ such that $\sigma$ is a perturbed equilibrium of $G(\eta)$. For $(\sigma, \eta) \in \mathcal{E}$, $\tau(\sigma, \eta)$ is the corresponding equilibrium of $G(\eta)$. Denote by $p$ the natural projection map from $\mathcal{E}$ to $P_\delta$. For $E \subseteq \mathcal{E}$, let $\partial E = E \cap (\Sigma \times \partial P_\delta)$, and for $(E, \partial E) \subseteq (\mathcal{E}, \partial \mathcal{E})$ let $(p, \partial p) : (E, \partial E) \to (P_\delta, \partial P_\delta)$ be the corresponding projection of pairs. Finally, $H^*$ refers to Čech cohomology with integer coefficients.
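
The numeric sketch below (a hypothetical 2x2 coordination game with invented function names, not the notation or code of the cited papers) illustrates the perturbation just described: a profile $\tau$ is mapped to $\sigma_n = (1 - \sum_{s_n}\eta_n(s_n))\tau_n + \eta_n$, and $\tau$ is a perturbed equilibrium of $G(\eta)$ exactly when each player's strategy puts weight only on pure strategies that are best replies against the opponent's induced strategy.

```python
# Minimal sketch: build the perturbed profile sigma from (tau, eta) in a 2x2 game
# and check the perturbed-equilibrium condition. All payoffs are hypothetical.
import numpy as np

A = np.array([[3, 0], [0, 1]])   # row player's payoffs in a coordination game
B = np.array([[3, 0], [0, 1]])   # column player's payoffs

def induced_sigma(tau, eta):
    """sigma = (1 - sum(eta)) * tau + eta for one player."""
    return (1.0 - eta.sum()) * tau + eta

def is_perturbed_equilibrium(tau1, tau2, eta1, eta2, tol=1e-9):
    sigma1, sigma2 = induced_sigma(tau1, eta1), induced_sigma(tau2, eta2)
    u1 = A @ sigma2      # row player's payoff to each pure strategy vs. sigma2
    u2 = B.T @ sigma1    # column player's payoff to each pure strategy vs. sigma1
    # tau_n may place probability only on pure best replies.
    ok1 = np.all(tau1[u1 < u1.max() - tol] < tol)
    ok2 = np.all(tau2[u2 < u2.max() - tol] < tol)
    return ok1 and ok2

eta1, eta2 = np.array([0.01, 0.02]), np.array([0.03, 0.01])  # small perturbations
tau1, tau2 = np.array([1.0, 0.0]), np.array([1.0, 0.0])      # candidate profile
print(is_perturbed_equilibrium(tau1, tau2, eta1, eta2))      # True
```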

The following is a version of the most inclusive of Mertens' definitions, called *-stability.

Definition of a *-stable set: $S$ is a *-stable set if, for some closed subset $E$ of $\mathcal{E}$ with $S = E \cap (\Sigma \times \{0\})$, it has the following two properties:

  1. Connectedness: For every closed neighborhood $V$ of $E$ in $\mathcal{E}$, the set $V \setminus \partial E$ has a connected component whose closure is a neighborhood of $E$ in $\mathcal{E}$.
  2. Cohomological essentiality: The induced map $p^* : H^*(P_\delta, \partial P_\delta) \to H^*(E, \partial E)$ in Čech cohomology is nonzero.

If essentiality in cohomology or homology is relaxed to homotopy, then a weaker definition is obtained, which differs chiefly in a weaker form of the decomposition property. [11]

Related Research Articles

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.
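
As a small worked illustration (hypothetical zero-sum payoffs, not part of the article), the sketch below verifies the Nash condition directly: a mixed profile is an equilibrium exactly when no pure-strategy deviation improves either player's expected payoff.

```python
# Minimal sketch: best-response check for a bimatrix game (Matching Pennies here).
import numpy as np

A = np.array([[1, -1], [-1, 1]])   # row player's payoffs
B = -A                             # zero-sum: column player's payoffs

def is_nash(x, y, tol=1e-9):
    """True if no pure deviation beats the profile (x, y)."""
    row_ok = (A @ y).max() <= x @ A @ y + tol
    col_ok = (B.T @ x).max() <= x @ B @ y + tol
    return row_ok and col_ok

print(is_nash(np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True: the mixed equilibrium
print(is_nash(np.array([1.0, 0.0]), np.array([0.5, 0.5])))  # False: column player can gain
```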

Game theory is the branch of mathematics in which games, that is, models describing human behaviour, are studied. This is a glossary of some terms of the subject.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.

Proper equilibrium is a refinement of Nash equilibrium due to Roger B. Myerson. Proper equilibrium further refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are made with significantly smaller probability than less costly ones.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
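
The sketch below (illustrative payoffs and function name, not from the article) computes the smallest such ε for a given mixed profile of a bimatrix game, namely the largest payoff gain either player could secure by a unilateral deviation.

```python
# Minimal sketch: epsilon for which a profile is an epsilon-equilibrium.
import numpy as np

def max_regret(A, B, x, y):
    """Largest unilateral gain available to either player at the profile (x, y)."""
    regret_row = (A @ y).max() - x @ A @ y
    regret_col = (B.T @ x).max() - x @ B @ y
    return max(regret_row, regret_col)

# A Prisoner's-Dilemma-like game; the profile below is only approximately optimal.
A = np.array([[2, 0], [3, 1]])
B = A.T
x = y = np.array([0.9, 0.1])
print(max_regret(A, B, x, y))   # 0.9, so (x, y) is a 0.9-equilibrium
```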

In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concepts allows games to include more general sets of pure strategies, which may be uncountably infinite.

Jean-François Mertens was a Belgian game theorist and mathematical economist.

M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.

References

  1. Kohlberg, Elon, and Jean-François Mertens (1986). "On the Strategic Stability of Equilibria," Econometrica, 54(5): 1003–1037. doi:10.2307/1912320. JSTOR 1912320.
  2. Mertens, Jean-François (1989, 1991). "Stable Equilibria - A Reformulation," Mathematics of Operations Research, 14: 575–625 and 16: 694–753.
  3. Govindan, Srihari, and Jean-François Mertens (2004). "An Equivalent Definition of Stable Equilibria," International Journal of Game Theory, 32(3): 339–357.
  4. Govindan, Srihari, and Robert Wilson (2008). "Refinements of Nash Equilibrium," The New Palgrave Dictionary of Economics, 2nd edition.
  5. Govindan, Srihari, and Robert Wilson (2009). "On Forward Induction," Econometrica, 77(1): 1–28.
  6. Mertens, Jean-François (2003). "Ordinality in Non Cooperative Games," International Journal of Game Theory, 32: 387–430.
  7. Mertens, Jean-François (1992). "The Small Worlds Axiom for Stable Equilibria," Games and Economic Behavior, 4: 553–564.
  8. The requirement that the set is connected excludes the trivial refinement that selects all equilibria. If only a single (possibly unconnected) subset is selected, then only the trivial refinement satisfies the conditions invoked by Norde, H., J. Potters, H. Reijnierse, and D. Vermeulen (1996). "Equilibrium Selection and Consistency," Games and Economic Behavior, 12: 219–225.
  9. See Appendix D of Govindan, Srihari, and Robert Wilson (2012). "Axiomatic Theory of Equilibrium Selection for Generic Two-Player Games," Econometrica, 70.
  10. Govindan, Srihari, and Robert Wilson (2012). "Axiomatic Theory of Equilibrium Selection for Generic Two-Player Games," Econometrica, 70.
  11. Govindan, Srihari, and Robert Wilson (2008). "Metastable Equilibria," Mathematics of Operations Research, 33: 787–820.