Strong Nash equilibrium

A solution concept in game theory

Subset of: evolutionarily stable strategy (if the strong Nash equilibrium is not also weak)
Used for: all non-cooperative games of more than 2 players

In game theory, a strong Nash equilibrium (SNE) is a combination of actions of the different players in which no coalition of players can cooperatively deviate in a way that strictly benefits all of its members, given that the actions of the other players remain fixed. This contrasts with the ordinary Nash equilibrium, which considers only deviations by individual players. The concept was introduced by Robert Aumann in 1959.[1] SNE is particularly useful in areas such as the study of voting systems, in which there are typically many more players than possible outcomes, so plain Nash equilibria are far too abundant.[2]
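Formally, let N be the set of players, let X_i be the strategy set of player i, and let u_i be the utility function of player i; for a coalition S and a profile x, write (y_S, x_{-S}) for the profile in which the members of S switch to y_S while everyone else keeps playing their part of x. In one standard notation (not taken verbatim from the cited sources), a profile x^* is a strong Nash equilibrium if for every non-empty coalition S ⊆ N there is no joint deviation y_S with

\[ u_i(y_S, x^*_{-S}) > u_i(x^*) \quad \text{for all } i \in S. \]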

Existence

Nessah and Tian [3] prove that an SNE exists if the following conditions are satisfied:

For example, consider a game with two players, with strategy spaces [1/3, 2] and [3/4, 2], which are clearly compact and convex. The utility functions are:

which are continuous and convex. It remains to check coalition consistency. For every strategy-tuple x, we check the weighted-best-response of each coalition:

So, with weights w1 = 0.6 and w2 = 0.4, the point (1/3, 3/4) is a consistent social-welfare best response for all coalitions simultaneously. Therefore, an SNE exists, at that same point (1/3, 3/4).

Here is an example in which coalition consistency fails, and indeed there is no SNE (Example 3.1 in [3]). There are two players, each with strategy space [0, 1]. Their utility functions are:

There is a unique Nash equilibrium at (0,0), with payoff vector (0,0). However, it is not an SNE, as the coalition {1,2} can deviate to (1,1), with payoff vector (1,1). Indeed, coalition consistency is violated at x = (0,0): for the coalition {1,2} and any weight vector wS, the social-welfare best response lies either on the segment from (1,0) to (1,1) or on the segment from (0,1) to (1,1); but at any such point, playing 1 is not a best response for the player who does so.
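Because there are only finitely many coalitions, and a discretised strategy space admits only finitely many deviations, the SNE condition can be checked by brute force. The following Python sketch illustrates this for a hypothetical two-player game; the utility functions u1 = 2*x2 - x1 and u2 = 2*x1 - x2 are illustrative stand-ins (not the utilities of the example above, which are omitted), and the grid resolution and function names are likewise arbitrary.

    import itertools

    # Hypothetical stand-in utilities with a prisoner's-dilemma flavour
    # (NOT the utilities of the example above, which are omitted here):
    # each player benefits from the other's contribution but pays for their own.
    def u(x):
        x1, x2 = x
        return (2 * x2 - x1, 2 * x1 - x2)

    grid = [i / 20 for i in range(21)]   # discretised strategy space [0, 1]
    coalitions = [(0,), (1,), (0, 1)]    # every non-empty coalition of the two players

    def is_strong_nash(x, tol=1e-9):
        """Brute-force check of the SNE definition on the grid: no coalition S
        may have a joint deviation that strictly improves every member of S,
        holding the outsiders' strategies fixed."""
        base = u(x)
        for S in coalitions:
            for dev in itertools.product(grid, repeat=len(S)):
                y = list(x)
                for i, yi in zip(S, dev):
                    y[i] = yi
                new = u(tuple(y))
                if all(new[i] > base[i] + tol for i in S):
                    return False         # profitable coalitional deviation found
        return True

    print(is_strong_nash((0.0, 0.0)))   # False: the grand coalition gains by moving to (1, 1)
    print(is_strong_nash((1.0, 1.0)))   # False: each player alone gains by dropping to 0

In this stand-in game, (0, 0) is the unique Nash equilibrium but fails the strong Nash test because the grand coalition's joint move to (1, 1) strictly improves both players, mirroring the reasoning in the example.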

Nessah and Tian [3] also present a necessary and sufficient condition for SNE existence, along with an algorithm that finds an SNE if and only if it exists.

Properties

Every SNE is a Nash equilibrium. This can be seen by considering deviations by each of the n singleton coalitions.
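Concretely, restricting the condition above to a singleton coalition S = {i} says that no single player i has a deviation y_i with

\[ u_i(y_i, x^*_{-i}) > u_i(x^*), \]

which is exactly the best-response requirement of a Nash equilibrium.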

Every SNE is weakly Pareto-efficient. This can be seen by considering a deviation by the grand coalition, the coalition of all players.
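Concretely, taking S = N (the grand coalition) says that there is no profile y with

\[ u_i(y) > u_i(x^*) \quad \text{for all } i \in N, \]

which is precisely the definition of weak Pareto efficiency.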

Every SNE is in the weak alpha-core and in the weak beta-core.[3]

Criticism

The strong Nash concept is criticized as too "strong" in that it implicitly assumes an environment that allows unlimited private communication among the players. As a result of this requirement, a strong Nash equilibrium rarely exists in games interesting enough to deserve study. Nevertheless, when strong Nash equilibria do exist, there can be more than one. For instance, in approval voting there is always a strong Nash equilibrium for any Condorcet winner that exists, but this equilibrium is unique (apart from inconsequential changes) only when there is a majority Condorcet winner.

A relatively weaker but related refinement is the coalition-proof Nash equilibrium (CPNE),[2] in which equilibria are required to be immune only to multilateral deviations that are themselves self-enforcing. Every correlated strategy supported by iterated strict dominance and lying on the Pareto frontier is a CPNE.[4] Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions smaller than a specified size k. CPNE is related to the theory of the core.

Confusingly, the concept of a strong Nash equilibrium is unrelated to that of a weak Nash equilibrium: a Nash equilibrium can be both strong and weak, only one of the two, or neither.

Related Research Articles

In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players. In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no one has anything to gain by changing only one's own strategy. The principle of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to competing firms choosing outputs.

Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even wins and keeps both pennies. If the pennies do not match, then Odd wins and keeps both pennies.

In game theory, a player's strategy is any of the options they can choose in a setting where the optimal outcome depends not only on their own actions but also on the actions of others. The discipline mainly concerns how a player's actions in a game affect the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, diplomacy, or battleship. A player's strategy determines the action the player will take at any stage of the game. In studying game theory, economists analyze such decisions through a rational-choice lens rather than through the psychological or sociological perspectives used in other disciplines to analyze relationships between the decisions of two or more parties.

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, a Perfect Bayesian Equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian Equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.

In game theory, strategic dominance occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous-game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

A Rubinstein bargaining model refers to a class of bargaining games that feature alternating offers through an infinite time horizon. The original proof is due to Ariel Rubinstein in a 1982 paper. For a long time, the solution to this type of game was a mystery; thus, Rubinstein's solution is one of the most influential findings in game theory.

The concept of coalition-proof Nash equilibrium applies to certain "noncooperative" environments in which players can freely discuss their strategies but cannot make binding commitments. It emphasizes immunity to deviations that are self-enforcing. While the best-response property of a Nash equilibrium is necessary for self-enforceability, it is not generally sufficient when players can jointly deviate in a way that is mutually beneficial.

In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

References

  1. R. Aumann (1959), "Acceptable points in general cooperative n-person games", in Contributions to the Theory of Games IV, Princeton University Press, Princeton, N.J.
  2. B. D. Bernheim; B. Peleg; M. D. Whinston (1987), "Coalition-Proof Equilibria I. Concepts", Journal of Economic Theory, 42: 1–12, doi:10.1016/0022-0531(87)90099-8.
  3. Nessah, Rabia; Tian, Guoqiang (2014-06-15), "On the existence of strong Nash equilibria", Journal of Mathematical Analysis and Applications, 414 (2): 871–885, doi:10.1016/j.jmaa.2014.01.030, ISSN 0022-247X.
  4. D. Moreno; J. Wooders (1996), "Coalition-Proof Equilibrium", Games and Economic Behavior, 17: 80–112, doi:10.1006/game.1996.0095, hdl:10016/4408.