Trembling hand perfect equilibrium

(Normal form) trembling hand perfect equilibrium
A solution concept in game theory
Relationship: subset of Nash equilibrium; superset of proper equilibrium
Proposed by: Reinhard Selten

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium first proposed by Reinhard Selten.[1] A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

Definition

First define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy in an $n$-player strategic game in which every pure strategy is played with positive probability. This captures the "trembling hands" of the players: with small probability they play a strategy other than the one they intended to play. Then define a mixed strategy profile $\sigma$ as being trembling hand perfect if there is a sequence of totally mixed strategy profiles $\{\sigma^k\}_{k=1}^{\infty}$ of perturbed games that converges to $\sigma$ such that for every $k$ and every player $i$, the strategy $\sigma_i$ is a best reply to $\sigma^k_{-i}$.
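
In symbols, the condition reads as follows (a compact restatement of the definition above; $u_i$ denotes player $i$'s expected payoff and $\sigma^k_{-i}$ the opponents' strategies in the $k$-th profile):

    $\sigma^k \to \sigma$, with each $\sigma^k$ totally mixed, and
    $u_i(\sigma_i, \sigma^k_{-i}) \ge u_i(s_i, \sigma^k_{-i})$ for every $k$, every player $i$, and every pure strategy $s_i$.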

Note: All totally mixed Nash equilibria are perfect.

Note 2: The mixed-strategy extension of any finite normal-form game has at least one perfect equilibrium.[2]

Example

The game represented in the following normal-form matrix has two pure-strategy Nash equilibria, namely <Up, Left> and <Down, Right>. However, only <Up, Left> is trembling-hand perfect.

         Left    Right
Up       1, 1    2, 0
Down     0, 2    2, 2

Assume player 1 (the row player) is playing a mixed strategy $(1-\varepsilon, \varepsilon)$, for $0 < \varepsilon < 1$, i.e. trembling onto Down with probability $\varepsilon$.

Player 2's expected payoff from playing Left is:

    $1 \cdot (1-\varepsilon) + 2 \cdot \varepsilon = 1 + \varepsilon$

Player 2's expected payoff from playing Right is:

    $0 \cdot (1-\varepsilon) + 2 \cdot \varepsilon = 2\varepsilon$
For small values of $\varepsilon$, player 2 maximizes his expected payoff by placing a minimal weight on Right and maximal weight on Left, since $1 + \varepsilon > 2\varepsilon$. By symmetry, player 1 should place a minimal weight on Down and maximal weight on Up if player 2 is playing the mixed strategy $(1-\varepsilon, \varepsilon)$. Hence <Up, Left> is trembling-hand perfect.

However, similar analysis fails for the strategy profile <Down, Right>.

Assume player 2 is playing a mixed strategy $(\varepsilon, 1-\varepsilon)$, for $0 < \varepsilon < 1$. Player 1's expected payoff from playing Up is:

    $1 \cdot \varepsilon + 2 \cdot (1-\varepsilon) = 2 - \varepsilon$

Player 1's expected payoff from playing Down is:

    $0 \cdot \varepsilon + 2 \cdot (1-\varepsilon) = 2 - 2\varepsilon$
For all positive values of $\varepsilon$, player 1 maximizes his expected payoff by placing a minimal weight on Down and maximal weight on Up, since $2 - \varepsilon > 2 - 2\varepsilon$. Hence <Down, Right> is not trembling-hand perfect, because player 1 (and, by symmetry, player 2) maximizes his expected payoff by deviating to Up (respectively Left) whenever there is a small chance of error in the behavior of the other player.
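
The computation above is easy to check numerically. Below is a minimal Python sketch (the matrices, the helper name, and the tremble sequence are ours, chosen for this example); it tests whether each pure equilibrium remains a mutual best reply against small uniform trembles by the opponent:

    import numpy as np

    # Payoff matrices for the game above. Rows index player 1's actions
    # (Up = 0, Down = 1); columns index player 2's (Left = 0, Right = 1).
    A = np.array([[1.0, 2.0], [0.0, 2.0]])  # player 1's payoffs
    B = np.array([[1.0, 0.0], [2.0, 2.0]])  # player 2's payoffs

    def survives_trembles(row, col, eps_seq=(0.1, 0.01, 0.001)):
        """Return True if the pure profile (row, col) remains a mutual
        best reply when the opponent trembles with probability eps."""
        for eps in eps_seq:
            p1 = np.full(2, eps); p1[row] = 1.0 - eps  # player 1's tremble
            p2 = np.full(2, eps); p2[col] = 1.0 - eps  # player 2's tremble
            u1 = A @ p2   # player 1's expected payoff for each row vs p2
            u2 = p1 @ B   # player 2's expected payoff for each column vs p1
            if u1[row] < u1.max() or u2[col] < u2.max():
                return False  # the intended action is no longer optimal
        return True

    print(survives_trembles(0, 0))  # <Up, Left>    -> True
    print(survives_trembles(1, 1))  # <Down, Right> -> False

Running it prints True for <Up, Left> and False for <Down, Right>: Down stops being a best reply as soon as player 2 plays Left with any positive probability.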

Equilibria of two-player games

For two-player games, the set of trembling-hand perfect equilibria coincides with the set of equilibria consisting of two undominated strategies. In the example above, the equilibrium <Down, Right> is imperfect, as Left (weakly) dominates Right for player 2 and Up (weakly) dominates Down for player 1.[3]
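
This dominance claim can be checked with a small helper, reusing the matrices A and B from the sketch above (the function is illustrative, not a standard library routine):

    def weakly_dominates(M, a, b):
        """True if action a weakly dominates action b for the player whose
        payoffs are the rows of M (>= in every column, > in at least one)."""
        return bool((M[a] >= M[b]).all() and (M[a] > M[b]).any())

    print(weakly_dominates(A, 0, 1))    # Up weakly dominates Down   -> True
    print(weakly_dominates(B.T, 0, 1))  # Left weakly dominates Right -> True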

Equilibria of extensive form games

Extensive-form trembling hand perfect equilibrium
A solution concept in game theory
Relationship: subset of subgame perfect equilibrium, perfect Bayesian equilibrium, and sequential equilibrium
Proposed by: Reinhard Selten
Used for: extensive form games

There are two possible ways of extending the definition of trembling hand perfection to extensive form games. One may interpret the extensive form as merely a concise description of a normal form game and apply the concept described above to that normal form; this yields the notion of normal-form trembling hand perfect equilibrium. Alternatively, because trembles are meant to model mistakes made during play rather than the choice of a wrong plan for the entire game, one may require that in the perturbed games every move at every information set is taken with positive probability; restricting the trembles in this way yields the notion of extensive-form trembling hand perfect equilibrium.

The notions of normal-form and extensive-form trembling hand perfect equilibria are incomparable, i.e., an equilibrium of an extensive-form game may be normal-form trembling hand perfect but not extensive-form trembling hand perfect and vice versa. As an extreme example of this, Jean-François Mertens has given an example of a two-player extensive form game where no extensive-form trembling hand perfect equilibrium is admissible, i.e., the sets of extensive-form and normal-form trembling hand perfect equilibria for this game are disjoint.[citation needed]

An extensive-form trembling hand perfect equilibrium is also a sequential equilibrium. A normal-form trembling hand perfect equilibrium of an extensive form game may be sequential but is not necessarily so. In fact, a normal-form trembling hand perfect equilibrium does not even have to be subgame perfect.

Problems with perfection

Myerson (1978) [4] pointed out that perfection is sensitive to the addition of a strictly dominated strategy, and instead proposed another refinement, known as proper equilibrium.
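
A sketch of Myerson's condition, in the notation of the Definition section (a totally mixed profile $\sigma$ is $\varepsilon$-proper if it satisfies the constraint below; a proper equilibrium is a limit of $\varepsilon$-proper profiles as $\varepsilon \to 0$):

    $u_i(s_i, \sigma_{-i}) < u_i(s'_i, \sigma_{-i}) \implies \sigma_i(s_i) \le \varepsilon \cdot \sigma_i(s'_i)$ for all players $i$ and pure strategies $s_i, s'_i$.

That is, a mistake that earns strictly less must be played with at most an $\varepsilon$ fraction of the probability of the better alternative.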

Related Research Articles

In game theory, the Nash equilibrium is the most commonly-used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy. The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.

In game theory, a move, action, or play is any one of the options which a player can choose in a setting where the optimal outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns the action of a player in a game affecting the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy, and Battleship.

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

Determinacy is a subfield of set theory, a branch of mathematics, that examines the conditions under which one or the other player of a game has a winning strategy, and the consequences of the existence of such strategies. Alternatively and similarly, "determinacy" is the property of a game whereby such a strategy exists. Determinacy was introduced by Gale and Stewart in 1950, under the name "determinateness".

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

Quasi-perfect equilibrium is a refinement of Nash equilibrium for extensive form games due to Eric van Damme.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Proper equilibrium is a refinement of Nash equilibrium by Roger B. Myerson. Proper equilibrium further refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are made with significantly smaller probability than less costly ones.

In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a repeated game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.

In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of $n$ players, each facing $s$ strategies, requires listing $n s^n$ utility values. Even trivial algorithms are capable of finding a Nash equilibrium in time polynomial in the length of such a large input. A succinct game is of polynomial type if in a game represented by a string of length $n$ the number of players, as well as the number of strategies of each player, is bounded by a polynomial in $n$.

In game theory, Mertens stability is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

References

  1. Selten, R. (1975). "A Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games". International Journal of Game Theory. 4 (1): 25–55. doi:10.1007/BF01766400.
  2. Selten, R. (1975). "A Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games". International Journal of Game Theory. 4 (1): 25–55. doi:10.1007/BF01766400.
  3. Van Damme, Eric (1987). Stability and Perfection of Nash Equilibria. doi:10.1007/978-3-642-96978-2. ISBN 978-3-642-96980-5.
  4. Myerson, Roger B. (1978). "Refinements of the Nash Equilibrium Concept". International Journal of Game Theory. 7 (2): 73–80.
