Information set (game theory)

In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game. Otherwise, some players cannot be sure exactly what has taken place so far in the game and what their position is.

More specifically, in the extensive form, an information set is a set of decision nodes such that:

  1. Every node in the set belongs to one player.
  2. When the game reaches the information set, the player with the move cannot differentiate between nodes within the information set; i.e., if the information set contains more than one node, the player to whom that set belongs does not know which node in the set has been reached (a small illustrative sketch follows this list).
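
As a rough illustration of the definition, the sketch below writes down both versions of the battle of the sexes game discussed in the example section as plain Python data, identifying each decision node with the history of moves leading to it. The representation, names, and exact payoff numbers are choices made for this sketch (the payoffs are the conventional ones, consistent with the example below), not a standard game theory API.

    # Terminal histories map to payoff pairs (player 1, player 2).
    payoffs = {
        ("O", "o"): (3, 2), ("O", "f"): (0, 0),
        ("F", "o"): (0, 0), ("F", "f"): (2, 3),
    }

    # Perfect information: player 2 observes player 1's move, so each of
    # player 2's decision nodes ("O",) and ("F",) is its own singleton
    # information set.
    perfect_info_sets_p2 = [{("O",)}, {("F",)}]

    # Imperfect information: player 2 cannot tell the two nodes apart,
    # so both belong to a single information set.
    imperfect_info_sets_p2 = [{("O",), ("F",)}]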

The notion of information set was introduced by John von Neumann, motivated by studying the game of Poker.

Example

Battle of the sexes 1 (extensive form, perfect information)
Battle of the sexes 2 (extensive form, imperfect information)

Shown above are two versions of the battle of the sexes game in extensive form; the underlying payoffs are given in normal form below.
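
A payoff matrix consistent with the numbers quoted in this example (player 1's payoff listed first; the value of 3 for each player's preferred outcome is the conventional choice and is assumed here):

                     o (opera)    f (football)
    O (opera)          3, 2          0, 0
    F (football)       0, 0          2, 3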

The first game is simply sequential: when player 2 has the chance to move, he or she knows whether player 1 has chosen O(pera) or F(ootball).

The second game is also sequential, but the dotted line shows player 2's information set. This is the common way to show that when player 2 moves, he or she is not aware of what player 1 did.

This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand: they know that they can choose O(pera) safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for o(pera) and get 2 than choose f(ootball) and get 0. Formally, this is an application of subgame perfection, as sketched below.
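
A minimal backward-induction sketch of that argument, in the same informal Python representation used above (the function names and payoff numbers are assumptions of this sketch, not part of any standard library):

    # Payoffs (player 1, player 2) for each pair of choices.
    payoffs = {
        ("O", "o"): (3, 2), ("O", "f"): (0, 0),
        ("F", "o"): (0, 0), ("F", "f"): (2, 3),
    }

    def best_response_p2(move1):
        """Player 2's best reply once player 1's move has been observed."""
        return max(("o", "f"), key=lambda move2: payoffs[(move1, move2)][1])

    def solve_by_backward_induction():
        """Player 1 anticipates player 2's best replies, then maximizes."""
        move1 = max(("O", "F"),
                    key=lambda m: payoffs[(m, best_response_p2(m))][0])
        move2 = best_response_p2(move1)
        return move1, move2, payoffs[(move1, move2)]

    print(solve_by_backward_induction())   # ('O', 'o', (3, 2))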

In the second game, player 2 cannot observe what player 1 did, so it might as well be a simultaneous game. Subgame perfection therefore adds nothing beyond Nash equilibrium, and the game has the standard three equilibria:

  1. Both choose opera.
  2. Both choose football.
  3. Both use a mixed strategy, with player 1 choosing O(pera) 3/5 of the time and player 2 choosing f(ootball) 3/5 of the time (the short calculation after this list shows where these fractions come from).
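
The fractions follow from the usual indifference conditions, using the payoffs above. If player 1 plays O(pera) with probability p, player 2 is indifferent between o and f exactly when 2p = 3(1 - p), giving p = 3/5. Likewise, if player 2 plays o(pera) with probability q, player 1 is indifferent when 3q = 2(1 - q), giving q = 2/5, i.e. player 2 plays f(ootball) 3/5 of the time.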

Related Research Articles

Minimax is a decision rule used in artificial intelligence, decision theory, game theory, statistics and philosophy for minimizing the possible loss for a worst case scenario. When dealing with gains, it is referred to as "maximin"—to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
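
A toy illustration of the rule; the tree, function name and payoff numbers below are invented for this sketch:

    def minimax(tree, maximizing=True):
        """Value of a two-player zero-sum game tree given as nested lists;
        leaves are payoffs to the maximizing player."""
        if not isinstance(tree, list):              # leaf node
            return tree
        values = [minimax(child, not maximizing) for child in tree]
        return max(values) if maximizing else min(values)

    # The maximizing player can guarantee a payoff of at least 3 here.
    print(minimax([[3, 5], [1, 8]]))   # 3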

In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.

In game theory, a subgame is any part of a game that meets the following criteria (an informal check is sketched after the list):

  1. It has a single initial node that is the only member of that node's information set.
  2. If a node is contained in the subgame then so are all of its successors.
  3. If a node in a particular information set is in the subgame then all members of that information set belong to the subgame.
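
Because two of these criteria refer to information sets, the definition can be restated as a small check over histories, in the same informal representation used earlier (the function name and arguments are assumptions of this sketch):

    def is_subgame(root_history, all_histories, info_sets):
        """True if the part of the game starting at root_history is a subgame.

        all_histories: every decision history in the game (tuples of moves).
        info_sets: a list of sets of histories, one set per information set.
        """
        part = {h for h in all_histories if h[:len(root_history)] == root_history}
        # 1. The initial node must be alone in its information set.
        initial_alone = any(s == {root_history} for s in info_sets)
        # 2. Successor closure holds by construction: `part` contains every
        #    history extending root_history.
        # 3. Each information set must lie entirely inside or outside `part`.
        sets_respected = all(s <= part or not (s & part) for s in info_sets)
        return initial_alone and sets_respected

    # In the imperfect-information battle of the sexes, the part of the game
    # after player 1 chooses O is not a subgame: player 2's information set
    # {("O",), ("F",)} straddles it.
    histories = [(), ("O",), ("F",)]
    info_sets = [{()}, {("O",), ("F",)}]
    print(is_subgame(("O",), histories, info_sets))   # False
    print(is_subgame((), histories, info_sets))       # True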

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".

In game theory, a Perfect Bayesian Equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components: strategies and beliefs.

In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.


In game theory, a sequential game is a game where one player chooses their action before the others choose theirs. Importantly, the later players must have some information about the first player's choice; otherwise the difference in time would have no strategic effect. Sequential games are hence governed by the time axis and represented in the form of decision trees.

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by first considering the last time a decision might be made and choosing what to do in any situation at that time. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. It was first used by Zermelo in 1913, to prove that chess has pure optimal strategies.

In game theory, strategic dominance occurs when one strategy is better than another strategy for one player, no matter how that player's opponents may play. Many simple games can be solved using dominance. The opposite, intransitivity, occurs in games where one strategy may be better or worse than another strategy for one player, depending on how the player's opponents may play.

The chainstore paradox is an apparent game theory paradox involving the chain store game, where a "deterrence strategy" appears optimal instead of the backward induction strategy of standard game theory reasoning.

In game theory, trembling hand perfect equilibrium is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or tremble, may choose unintended strategies, albeit with negligible probability.

In game theory, folk theorems are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept, subgame-perfect Nash equilibrium, rather than Nash equilibrium.

Sequential equilibrium is a refinement of Nash equilibrium for extensive form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each of the players but also a belief for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an assessment for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs and its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of beliefs and associated strategies with the same property.

In game theory, a Manipulated Nash equilibrium or MAPNASH is a refinement of subgame perfect equilibrium used in dynamic games of imperfect information. Informally, a strategy set is a MAPNASH of a game if it would be a subgame perfect equilibrium of the game, were the game one of perfect information. MAPNASH was first suggested by Amershi, Sadanand, and Sadanand (1988) and has been discussed in several papers since. It is a solution concept based on how players think about other players' thought processes.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

A non-credible threat is a term used in game theory and economics to describe a threat in a sequential game that a rational player would actually not carry out, because it would not be in his best interest to do so.


In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. In other words, both players normally act at the same time in a simultaneous game. Even if the players do not act at the same time, both players are uninformed of each other's move while making their decisions. Normal form representations are usually used for simultaneous games. Simultaneous games are often called static games. Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential because they have less information to act on at each step in the game. For example, in a two player continuous game that is sequential, the second player can act in response to the action taken by the first player. However, this is not possible in a simultaneous game where both players act at the same time.

A Markov perfect equilibrium is an equilibrium concept in game theory. It is the refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin. It has since been used, among other things, in the analysis of industrial organization, macroeconomics and political economy.
