WikiMili The Free Encyclopedia

| Proper equilibrium | |
|---|---|
| A solution concept in game theory | |
| Subset of | Trembling hand perfect equilibrium |
| Proposed by | Roger B. Myerson |

**Proper equilibrium** is a refinement of Nash equilibrium due to Roger B. Myerson. Proper equilibrium further refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are made with significantly smaller probability than less costly ones.

**Reinhard Justus Reginald Selten** was a German economist, who won the 1994 Nobel Memorial Prize in Economic Sciences. He is also well known for his work in bounded rationality and can be considered as one of the founding fathers of experimental economics.

In game theory, **trembling hand perfect equilibrium** is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or **tremble,** may choose unintended strategies, albeit with negligible probability.

Given a normal form game and a parameter ε > 0, a totally mixed strategy profile σ is defined to be **ε-proper** if, whenever a player has two pure strategies s and s' such that the expected payoff of playing s is smaller than the expected payoff of playing s' (that is, u(s, σ) < u(s', σ)), then the probability assigned to s is at most ε times the probability assigned to s': σ(s) ≤ ε · σ(s').


A strategy profile of the game is then said to be a proper equilibrium if it is a limit point, as ε approaches 0, of a sequence of ε-proper strategy profiles.
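The ε-proper condition can be checked mechanically for a two-player game in normal form. The following sketch (a hypothetical helper, not part of the original article) tests whether a totally mixed profile satisfies the definition, using the grab-the-penny variant of Matching Pennies discussed below as a test case:

```python
import numpy as np

def is_epsilon_proper(A, B, p, q, eps, tol=1e-9):
    """Check whether the totally mixed profile (p, q) is eps-proper in the
    bimatrix game with row-player payoffs A and column-player payoffs B:
    whenever a pure strategy earns strictly less than another, it may carry
    at most eps times the other's probability."""
    def respects(payoffs, probs):
        n = len(probs)
        return all(
            payoffs[s] >= payoffs[t] - tol or probs[s] <= eps * probs[t] + tol
            for s in range(n) for t in range(n)
        )
    u_row = A @ q    # row player's expected payoff for each pure strategy
    u_col = B.T @ p  # column player's expected payoff for each pure strategy
    return respects(u_row, p) and respects(u_col, q)

# The grab-the-penny variant of Matching Pennies: rows = hide heads/tails,
# columns = guess heads, guess tails, grab.
A = np.array([[-1.0, 0.0, -1.0], [0.0, -1.0, -1.0]])  # row player payoffs
B = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])      # column player payoffs

eps = 0.01
p = np.array([0.5, 0.5])                     # hide each side equally
q = np.array([eps, eps, 1.0]) / (1 + 2 * eps)  # both guesses tremble equally
print(is_epsilon_proper(A, B, p, q, eps))    # True
```

Against p = (1/2, 1/2) both guesses tie and each receives exactly ε times the weight of grabbing, so the condition holds; shifting weight toward one guess (say q = (0.25, 0.25, 0.5)) violates it, since both guesses pay strictly less than grabbing.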

The following game is a variant of Matching Pennies.

| | Guess heads up | Guess tails up | Grab penny |
|---|---|---|---|
| Hide heads up | -1, 1 | 0, 0 | -1, 1 |
| Hide tails up | 0, 0 | -1, 1 | -1, 1 |

Player 1 (row player) hides a penny and, if Player 2 (column player) guesses correctly whether it is heads up or tails up, Player 2 gets the penny. In this variant, Player 2 has a third option: grabbing the penny without guessing. The Nash equilibria of the game are the strategy profiles where Player 2 grabs the penny with probability 1. Any mixed strategy of Player 1 is in (Nash) equilibrium with this pure strategy of Player 2. Any such pair is even trembling hand perfect. Intuitively, since Player 1 expects Player 2 to grab the penny, he is not concerned about leaving Player 2 uncertain about whether it is heads up or tails up.

However, the unique proper equilibrium of this game is the one where Player 1 hides the penny heads up with probability 1/2 and tails up with probability 1/2 (and Player 2 grabs the penny). This unique proper equilibrium can be motivated intuitively as follows: Player 1 fully expects Player 2 to grab the penny, but still prepares for the unlikely event that Player 2 does not grab the penny and instead for some reason decides to make a guess. Player 1 prepares for this event by making sure that Player 2 has no information about whether the penny is heads up or tails up, exactly as in the original Matching Pennies game.
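The chain of reasoning can also be traced numerically. In this sketch (an illustration under assumed tremble values, not from the original article), a candidate profile in which Player 1 hides heads with probability 0.9 is shown to be incompatible with ε-properness for small ε:

```python
eps = 1e-3
p_heads, p_tails = 0.9, 0.1  # candidate mixed strategy for Player 1

# Player 2's expected payoffs against (p_heads, p_tails):
u_guess_heads, u_guess_tails, u_grab = p_heads, p_tails, 1.0
assert u_guess_heads > u_guess_tails  # guessing heads is the better guess

# eps-properness therefore forces q_tails <= eps * q_heads; one such
# totally mixed tremble for Player 2 (grabbing keeps almost all weight):
q_heads, q_tails = eps, eps ** 2
q_grab = 1.0 - q_heads - q_tails

# Player 1's expected payoffs against this tremble:
u_hide_heads = -(q_heads + q_grab)  # loses to guess-heads and to grab
u_hide_tails = -(q_tails + q_grab)  # loses to guess-tails and to grab
assert u_hide_heads < u_hide_tails  # hiding heads is now strictly worse

# eps-properness would then require p_heads <= eps * p_tails:
print(p_heads <= eps * p_tails)  # False: (0.9, 0.1) cannot be eps-proper
```

Because any bias toward one side of the penny makes the matching guess strictly better for Player 2, and the forced tremble toward that guess in turn makes the biased side strictly worse for Player 1, only the even split (1/2, 1/2) survives as ε approaches 0.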

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

One may apply the properness notion to extensive form games in two different ways, completely analogous to the two different ways trembling hand perfection is applied to extensive games. This leads to the notions of **normal form proper equilibrium** and **extensive form proper equilibrium** of an extensive form game. It was shown by van Damme that a normal form proper equilibrium of an extensive form game is behaviorally equivalent to a quasi-perfect equilibrium of that game.

**Quasi-perfect equilibrium** is a refinement of Nash equilibrium for extensive form games due to Eric van Damme.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match, Odd keeps both pennies, so receives one from Even.

In game theory, a player's **strategy** is any of the options which he or she chooses in a setting where the outcome depends *not only* on their own actions *but* on the actions of others. A player's strategy will determine the action which the player will take at any stage of the game.



- Roger B. Myerson. "Refinements of the Nash equilibrium concept." *International Journal of Game Theory*, 15:133–154, 1978.
- Eric van Damme. "A relationship between perfect equilibria in extensive form games and proper equilibria in normal form games." *International Journal of Game Theory*, 13:1–13, 1984.

**Eric Eleterius Coralie van Damme** is a Dutch economist and Professor of Economics at the Tilburg University, known for his contributions to game theory.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
