Information set (game theory)

In game theory, an information set is a set of decision vertices for a particular player which, given what that player has observed, are indistinguishable to them at the current point in the game: the player knows the game has reached one of the vertices in the set, but not which one. The information set is the basis for decision making in a game, since it encodes what a player knows when choosing among the actions available to them, and it is a central concept in games of imperfect information. For a better idea of decision vertices, refer to Figure 1. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game, since each player knows the exact mix of chance moves and player strategies up to the current point in the game. Otherwise, some players cannot be sure what the game state is; for instance, they may not know exactly what happened in the past or what is happening right now.

Figure 1: A game tree which depicts each player's possible information set by showing the options at each vertex (A and B for players 1 and 2 respectively)

Information sets are used in extensive form games and are often depicted in game trees. Game trees show the path from the start of a game and the subsequent paths that can be taken depending on each player's next move. In games of imperfect information there is hidden information: each player lacks complete knowledge of the opponent's private information, such as the cards that do not appear in a poker game. When constructing a game tree for such a game, a player cannot determine precisely which node has been reached from known information alone, since certain information about the opponent is unknown; the player can only be sure of being at one of a range of possible nodes. A set of nodes that a particular player cannot distinguish between in this way is an information set. Information sets can easily be depicted in game trees, typically using dotted lines or circles around the nodes, or simply by labelling the vertices with a particular player's options at the current stage of the game, as shown in Figure 1.
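As a concrete illustration, the following Python sketch represents a tiny poker-like tree in which player 2 cannot observe a chance move, so two of player 2's decision nodes fall into one information set. The Node class and all names here are invented for illustration, not a standard API.

```python
# Minimal sketch of an extensive-form tree with an information set.
# All classes and names are illustrative, not from any standard library.
from dataclasses import dataclass, field

@dataclass
class Node:
    player: str                      # whose move it is ("chance", "1", or "2")
    history: tuple                   # actions taken to reach this node
    children: dict = field(default_factory=dict)

# Chance deals player 1 a High or Low card; player 1 bets either way;
# player 2 must respond without seeing the card.
root = Node("chance", ())
for card in ("High", "Low"):
    bet = Node("2", (card, "bet"))
    root.children[card] = Node("1", (card,), {"bet": bet})

# Player 2's information set: both "bet" nodes are indistinguishable to
# player 2, who observes only the bet, not the card.
info_set_p2 = [root.children[c].children["bet"] for c in ("High", "Low")]
assert all(n.player == "2" for n in info_set_p2)  # one player per set
print([n.history for n in info_set_p2])  # [('High', 'bet'), ('Low', 'bet')]
```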

More specifically, in the extensive form, an information set is a set of decision nodes such that:

  1. Every node in the set belongs to one player.
  2. When the game reaches the information set, the player with the move cannot differentiate between the nodes within it; i.e. if an information set contains multiple nodes, the player whose turn it is does not know which node in the set has actually been reached (a programmatic check of both conditions is sketched after this list).
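A minimal sketch of these two conditions as a check on a candidate information set follows. The Node attributes (.player and .actions) are illustrative assumptions, not a standard interface; indistinguishability itself cannot be verified mechanically, but a necessary consequence of it is that the same actions must be available at every node in the set.

```python
# Sketch: check the two defining conditions of an information set.
# Node is assumed to expose .player (who moves there) and .actions
# (the moves available there); these names are invented for illustration.

def is_valid_information_set(nodes):
    # Condition 1: every node in the set belongs to the same single player.
    if len({n.player for n in nodes}) != 1:
        return False
    # Condition 2 (necessary consequence): since the player cannot tell the
    # nodes apart, the same set of actions must be available at each node.
    return len({frozenset(n.actions) for n in nodes}) == 1
```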

Games in extensive form often involve each player making multiple moves, which results in the formation of multiple information sets as well. A player makes a choice at each of these vertices based on the options in the information set. The complete specification of these choices is the player's strategy, and the strategies of all players together determine the path from the start of the game to the end, also known as the play of the game. From the play of the game, the outcome follows deterministically from the players' strategies unless chance moves are involved. When chance moves are involved, a profile of strategies instead induces a probability distribution over the possible outcomes of the game, since the chance moves can turn out differently each time. Depending on the strategies chosen, some outcomes will have higher probabilities than others.
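For instance, the following sketch computes the outcome distribution induced by a strategy profile in the presence of a single chance move; the toy game and all names are invented for illustration.

```python
# Sketch: a strategy profile plus a chance move induces a distribution
# over outcomes. The toy game and names are invented for illustration.
from collections import defaultdict

# Chance flips a biased coin; each player's strategy maps what they
# observe to an action; the outcome is just the pair of actions taken.
chance = {"heads": 0.7, "tails": 0.3}
strategy_1 = {"heads": "raise", "tails": "fold"}   # player 1 sees the coin
strategy_2 = {"raise": "call", "fold": "pass"}     # player 2 sees only 1's action

distribution = defaultdict(float)
for coin, p in chance.items():
    a1 = strategy_1[coin]
    a2 = strategy_2[a1]
    distribution[(a1, a2)] += p

print(dict(distribution))  # {('raise', 'call'): 0.7, ('fold', 'pass'): 0.3}
```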

When a game contains multiple information sets reached in sequence, it is a dynamic rather than a static game. The key to solving a dynamic game is to work out each player's information sets and the decisions made at each stage. For example, when player A chooses first, player B will make the best decision available given A's choice. Player A, in turn, can predict B's reaction and make the choice that works out best in light of it. The notion of an information set was introduced by John von Neumann, motivated by studying the game of poker.
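This anticipate-the-response reasoning is backward induction; a minimal sketch on an invented two-stage game follows (the payoff table and action names are assumptions for illustration).

```python
# Sketch of backward induction in a two-stage game with made-up payoffs.
# payoffs[(a, b)] = (payoff to A, payoff to B); all numbers are invented.
payoffs = {
    ("left", "up"): (3, 1), ("left", "down"): (0, 0),
    ("right", "up"): (1, 2), ("right", "down"): (2, 3),
}
actions_A = ("left", "right")
actions_B = ("up", "down")

def best_reply_B(a):
    # B observes A's move and maximizes B's own payoff.
    return max(actions_B, key=lambda b: payoffs[(a, b)][1])

# A predicts B's reply and maximizes A's payoff given that prediction.
best_a = max(actions_A, key=lambda a: payoffs[(a, best_reply_B(a))][0])
print(best_a, best_reply_B(best_a))  # left up: A's optimal move and B's response
```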

Example

Battle of the sexes 1 (perfect information)
Battle of the sexes 2 (imperfect information)

Above are two versions of the battle of the sexes game, shown in extensive form; each also has a normal-form representation.

The first game is simply sequential: when player 2 makes a choice, both parties are already aware of whether player 1 has chosen O(pera) or F(ootball).

The second game is also sequential, but the dotted line shows player 2's information set. This is the common way to show that when player 2 moves, he or she is not aware of what player 1 did.

This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand. They know that they can choose O(pera) safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along with o(pera) and get 2 than choose f(ootball) and get 0. Formally, this is an application of subgame perfection to solve the game.

In the second game, player 2 cannot observe what player 1 did, so it might as well be a simultaneous game. Subgame perfection therefore adds nothing that Nash equilibrium cannot already give us, and we have the standard three possible equilibria:

  1. Both choose opera;
  2. Both choose football;
  3. Both use a mixed strategy, with player 1 choosing O(pera) 3/5 of the time and player 2 choosing f(ootball) 3/5 of the time (a derivation is sketched below).
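The mixed equilibrium follows from the players' indifference conditions. The sketch below assumes the usual battle-of-the-sexes payoffs, with 3 for coordinating on one's preferred event, 2 for coordinating on the other event, and 0 for miscoordination; the exact numbers are an assumption consistent with the payoffs quoted above.

```latex
% Let p = Pr[player 1 plays O] and q = Pr[player 2 plays o], assuming
% payoffs (O,o) -> (3,2), (F,f) -> (2,3), and (0,0) for miscoordination.
\begin{align*}
  \text{Player 2 indifferent between } o \text{ and } f:\quad
    2p &= 3(1-p) \implies p = \tfrac{3}{5},\\
  \text{Player 1 indifferent between } O \text{ and } F:\quad
    3q &= 2(1-q) \implies q = \tfrac{2}{5},
\end{align*}
% so player 1 chooses opera with probability 3/5 and player 2 chooses
% football with probability 1 - 2/5 = 3/5.
```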

Related Research Articles

Game theory is the study of mathematical models of strategic interactions among rational agents. It has applications in all fields of social science, as well as in logic, systems science and computer science. The concepts of game theory are used extensively in economics as well. The traditional methods of game theory addressed two-person zero-sum games, in which each participant's gains or losses are exactly balanced by the losses and gains of the other participants. In the 21st century, game theory applies to a wider range of behavioral relations; it is now an umbrella term for the science of logical decision making in humans, animals, and computers.

Minimax (sometimes minmax) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
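A minimal recursive sketch of the minimax rule on a hand-built game tree follows; the tree and its leaf values are invented for illustration.

```python
# Sketch: minimax on a tiny hand-built tree. Leaves are payoffs to the
# maximizing player; internal nodes are lists of children. Values invented.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):      # leaf: payoff to the maximizer
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Maximizer moves first; minimizer responds at the second level.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # 3: pick the branch whose worst case is best
```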

In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. The game tree can be used to measure the complexity of a game, as it represents all the possible ways the game can pan out. Due to the large game trees of complex games such as chess, algorithms that are designed to play this class of games use partial game trees, which makes computation feasible on modern computers. Various methods exist to solve game trees. If a complete game tree can be generated, a deterministic algorithm such as backward induction or retrograde analysis can be used. Randomized algorithms such as Monte Carlo tree search (MCTS), as well as depth-limited minimax search, can be used in cases where a complete game tree is not feasible.

In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round, but after an additional switch the potential payoff will be higher. Therefore, although at each round a player has an incentive to take the pot, it would be better for them to wait. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.
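The backward-induction argument for the centipede game can be sketched in a few lines of code; the four-round pot values below are invented for illustration, arranged so that taking now always beats having the opponent take next round.

```python
# Sketch: backward induction on a short centipede game with invented pots.
# pots[t] = (payoff to mover, payoff to other) if the mover takes at round t.
pots = [(2, 0), (3, 1), (4, 2), (5, 3)]    # the pot grows, but passing and
                                           # having the opponent take pays less

def solve(t):
    """Return (mover's value, other's value) from round t onward."""
    if t == len(pots) - 1:
        return pots[t]                     # last round: taking is all that's left
    passed = solve(t + 1)                  # roles swap if the mover passes
    continuation = (passed[1], passed[0])  # re-expressed for the current mover
    return max(pots[t], continuation, key=lambda v: v[0])

print(solve(0))  # (2, 0): the first mover takes immediately, as theory predicts
```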

In game theory, a subgame is any part of a game that meets the following criteria:

  1. It has a single initial node that is the only member of that node's information set.
  2. If a node is contained in the subgame then so are all of its successors.
  3. If a node in a particular information set is in the subgame then all members of that information set belong to the subgame.

In game theory, a player's strategy is any of the options which they choose in a setting where the optimal outcome depends not only on their own actions but also on the actions of others. The discipline mainly concerns how the action of a player in a game affects the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy and Battleship. A player's strategy determines the action the player will take at any stage of the game. In studying game theory, economists adopt a more rational lens in analyzing decisions, rather than the psychological or sociological perspectives taken when analyzing relationships between the decisions of two or more parties in other disciplines.

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, an extensive-form game is a specification of a game allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.

In game theory, a perfect Bayesian equilibrium (PBE) is a solution with Bayesian probability to a turn-based game with incomplete information. More specifically, it is an equilibrium concept that uses Bayesian updating to describe player behavior in dynamic games with incomplete information. Perfect Bayesian equilibria are used to solve the outcome of games where players take turns but are unsure of the "type" of their opponent, which occurs when players don't know their opponent's preference between individual moves. A classic example of a dynamic game with types is a war game where the player is unsure whether their opponent is a risk-taking "hawk" type or a pacifistic "dove" type. Perfect Bayesian equilibria are a refinement of Bayesian Nash equilibrium (BNE), which is a solution concept with Bayesian probability for non-turn-based games.

In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.

In game theory, a sequential game is a game where one player chooses their action before the others choose theirs. The other players must have some information about the first player's choice; otherwise the difference in time would have no strategic effect. Sequential games are governed by the time axis and represented in the form of decision trees.

Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and then identifying what action would be optimal at that moment. Using this information, one can then determine what to do at the second-to-last time of decision. This process continues backwards until one has determined the best action for every possible situation at every point in time. Backward induction was first used in 1875 by Arthur Cayley, who uncovered the method while trying to solve the Secretary problem.

In computational complexity theory, generalized geography is a well-known PSPACE-complete problem.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

Sequential equilibrium is a refinement of Nash equilibrium for extensive form games due to David M. Kreps and Robert Wilson. A sequential equilibrium specifies not only a strategy for each of the players but also a belief for each of the players. A belief gives, for each information set of the game belonging to the player, a probability distribution on the nodes in the information set. A profile of strategies and beliefs is called an assessment for the game. Informally speaking, an assessment is a perfect Bayesian equilibrium if its strategies are sensible given its beliefs and its beliefs are confirmed on the outcome path given by its strategies. The definition of sequential equilibrium further requires that there be arbitrarily small perturbations of beliefs and associated strategies with the same property.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

In game theory, a simultaneous game or static game is a game where each player chooses their action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. In other words, both players normally act at the same time in a simultaneous game. Even if the players do not act at the same time, both players are uninformed of each other's move while making their decisions. Normal form representations are usually used for simultaneous games. Given a continuous game, players will have different information sets if the game is simultaneous than if it is sequential because they have less information to act on at each step in the game. For example, in a two player continuous game that is sequential, the second player can act in response to the action taken by the first player. However, this is not possible in a simultaneous game where both players act at the same time.

A Markov perfect equilibrium is an equilibrium concept in game theory. It has been used in analyses of industrial organization, macroeconomics, and political economy. It is a refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin.

The one-shot deviation principle is the principle of optimality of dynamic programming applied to game theory. It says that a strategy profile of a finite multi-stage extensive-form game with observed actions is a subgame perfect equilibrium (SPE) if and only if there exists no profitable single deviation for any subgame and any player. In simpler terms, if no player can increase their expected payoff by deviating from their original strategy via a single action, then the strategy profile is an SPE. In other words, no player can profit by deviating from the strategy in one period and then reverting to it.
