
| Sequential equilibrium | |
|---|---|
| *A solution concept in game theory* | |
| **Relationship** | |
| Subset of | Subgame perfect equilibrium, perfect Bayesian equilibrium |
| Superset of | Extensive-form trembling hand perfect equilibrium, quasi-perfect equilibrium |
| **Significance** | |
| Proposed by | David M. Kreps and Robert Wilson |
| Used for | Extensive-form games |

**Sequential equilibrium** is a refinement of Nash equilibrium for extensive-form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each of the players but also a **belief** for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an **assessment** for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs **and** its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of strategies and associated beliefs with the same property.

**David Marc "Dave" Kreps** is a game theorist and economist and professor at the Graduate School of Business at Stanford University. He is known for his analysis of dynamic choice models and non-cooperative game theory, particularly the idea of sequential equilibrium, which he developed with Stanford Business School colleague Robert B. Wilson.

**Robert Butler "Bob" Wilson, Jr.** is an American economist and the Adams Distinguished Professor of Management, Emeritus at Stanford University. He is known for his contributions to management science and business economics. His doctoral thesis introduced sequential quadratic programming, which became a leading iterative method for nonlinear programming. With other mathematical economists at the Stanford Business School, he helped to reformulate the economics of industrial organization and organization theory using non-cooperative game theory. His research on nonlinear pricing has influenced policies for large firms, particularly in the energy industry, especially electricity.

In game theory, an **information set** is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game. Otherwise, it is the case that some players cannot be sure exactly what has taken place so far in the game and what their position is.

The formal definition of a strategy being sensible given a belief is straightforward; the strategy should simply maximize expected payoff in every information set. It is also straightforward to define what a sensible belief should be for those information sets that are reached with positive probability given the strategies; the beliefs should be the conditional probability distribution on the nodes of the information set, given that it is reached. This entails the application of Bayes' rule.
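For information sets reached with positive probability, this Bayes' rule computation can be sketched as follows (a minimal illustration with hypothetical node names and reaching probabilities, not taken from the article):

```python
def beliefs_at_information_set(reach_probs):
    """Given the probability of reaching each node in an information set
    (under a fixed strategy profile), return the conditional belief on
    the nodes via Bayes' rule: normalize by the total reaching probability."""
    total = sum(reach_probs.values())
    if total == 0:
        # Bayes' rule is undefined at an information set reached with
        # probability zero; this is exactly the case the text turns to next.
        raise ValueError("information set reached with probability zero")
    return {node: p / total for node, p in reach_probs.items()}

# Suppose node x is reached with probability 0.3 and node y with 0.1;
# conditional on the information set being reached, the belief is
# approximately 0.75 on x and 0.25 on y.
print(beliefs_at_information_set({"x": 0.3, "y": 0.1}))
```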

It is far from straightforward to define what a sensible belief should be for those information sets that are reached with probability zero, given the strategies. Indeed, this is the main conceptual contribution of Kreps and Wilson. Their **consistency** requirement is the following: The assessment should be a **limit point** of a sequence of totally mixed strategy profiles and associated sensible beliefs, in the above straightforward sense.
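The role of totally mixed perturbations can be illustrated with a toy game (an assumed example, not from the article): chance moves L or R with probability 1/2 each, player 1 then plays "out" with probability 1 in the candidate equilibrium, so player 2's information set is reached with probability zero. Trembles make every node reached with positive probability, so Bayes' rule applies, and the limiting belief depends on the tremble sequence:

```python
def perturbed_belief_on_L(eps_L, eps_R):
    """Player 2's belief on the L-node when player 1 trembles to 'in'
    with probability eps_L after L and eps_R after R (chance plays
    L and R with probability 1/2 each)."""
    reach_L = 0.5 * eps_L
    reach_R = 0.5 * eps_R
    return reach_L / (reach_L + reach_R)

# Trembles of the same order: the belief stays at 1/2 along the sequence,
# so belief 1/2 on the L-node is consistent with this equilibrium.
for eps in (1e-2, 1e-4, 1e-6):
    assert abs(perturbed_belief_on_L(eps, eps) - 0.5) < 1e-12

# Trembles of different orders (eps after L, eps**2 after R): the belief
# on L tends to 1, so belief 1 on the L-node is also consistent, supported
# by a different perturbation sequence.
print(perturbed_belief_on_L(1e-6, 1e-12))  # close to 1
```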

In mathematics, a **limit point** of a set *S* in a topological space *X* is a point *x* that can be "approximated" by points of *S* in the sense that every neighbourhood of *x* with respect to the topology on *X* also contains a point of *S* other than *x* itself. A limit point of a set *S* does not itself have to be an element of *S*.
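A standard example (not from the article): 0 is a limit point of the set S = {1/n : n = 1, 2, 3, ...} in the real line, even though 0 itself is not in S, since every neighbourhood of 0 contains some 1/n. A quick numerical check:

```python
# S is a finite truncation of {1/n}; enough to check small neighbourhoods.
S = [1 / n for n in range(1, 10**6 + 1)]

for delta in (1e-1, 1e-3, 1e-5):
    # every neighbourhood (-delta, delta) of 0 contains a point of S
    # other than 0 itself
    assert any(0 < s < delta for s in S)

assert 0 not in S  # the limit point need not belong to the set
```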

Sequential equilibrium is a further refinement of subgame perfect equilibrium and even perfect Bayesian equilibrium. It is itself refined by extensive-form trembling hand perfect equilibrium and proper equilibrium. Strategies of sequential equilibria (or even extensive-form trembling hand perfect equilibria) are not necessarily admissible. A refinement of sequential equilibrium that guarantees admissibility is quasi-perfect equilibrium.

In game theory, a **subgame perfect equilibrium** is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game has a subgame perfect equilibrium.

In game theory, a **Bayesian game** is a game in which the players have incomplete information about the other players, but hold beliefs about them given by a known probability distribution.

In game theory, **trembling hand perfect equilibrium** is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or **tremble,** may choose unintended strategies, albeit with negligible probability.

In game theory, a **signaling game** is a simple type of dynamic Bayesian game.

In game theory, a player's **strategy** is any of the options a player can choose in a setting where the outcome depends *not only* on their own actions *but also* on the actions of others. A player's strategy determines the action the player will take at any stage of the game.

In game theory, a **solution concept** is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

An **extensive-form game** is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".

In game theory, a **perfect Bayesian equilibrium** (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. A PBE is a refinement of both Bayesian Nash equilibrium (BNE) and subgame perfect equilibrium (SPE). A PBE has two components: *strategies* and *beliefs*.

In game theory, the **purification theorem** was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.

**Quasi-perfect equilibrium** is a refinement of Nash equilibrium for extensive-form games due to Eric van Damme.

**Quantal response equilibrium** (**QRE**) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

**Risk dominance** and **payoff dominance** are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered **payoff dominant** if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium, since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered **risk dominant** if it has the largest basin of attraction: the more uncertainty players have about the actions of the other player(s), the more likely they are to choose the strategy corresponding to the risk-dominant equilibrium.

**Proper equilibrium** is a refinement of Nash equilibrium due to Roger B. Myerson. Proper equilibrium further refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are made with significantly smaller probability than less costly ones.

The **intuitive criterion** is a technique for equilibrium refinement in signaling games. It aims to reduce the set of possible outcomes by first restricting the set of types to those who could obtain higher utility by deviating to off-the-equilibrium messages, and second by retaining, within this subset, the types for which the off-the-equilibrium message is not equilibrium dominated.

A **Markov perfect equilibrium** is an equilibrium concept in game theory. It is the refinement of the concept of subgame perfect equilibrium to extensive-form games for which a payoff-relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin. It has since been used, among other applications, in the analysis of industrial organization, macroeconomics, and political economy.

**Jean-François Mertens** was a Belgian game theorist and mathematical economist.

**Mertens stability** is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

David M. Kreps and Robert Wilson. "Sequential Equilibria". *Econometrica* 50: 863–894, 1982.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.