Complete information

In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Complete information is the concept that each player in the game is aware of the sequence, strategies, and payoffs throughout gameplay. Given this information, the players can plan accordingly and choose strategies that maximize their utility at the end of the game.

Conversely, in a game with incomplete information, players do not possess full information about their opponents. Some players possess private information, a fact that the others should take into account when forming expectations about how those players will behave. A typical example is an auction: each player knows their own utility function (valuation for the item), but does not know the utility function of the other players.[1]

Applications

Games of incomplete information arise frequently in social science. For instance, John Harsanyi was motivated by consideration of arms control negotiations, where the players may be uncertain both of the capabilities of their opponents and of their desires and beliefs.

It is often assumed that the players have some statistical information about the other players, e.g. in an auction, each player knows that the valuations of the other players are drawn from some probability distribution. In this case, the game is called a Bayesian game.
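
To make the auction example concrete, consider a first-price sealed-bid auction with two bidders whose valuations are drawn independently and uniformly on [0, 1]; in the symmetric Bayesian Nash equilibrium each bidder bids half of their valuation. The Python sketch below is a minimal illustration under those assumptions (the simulation setup, function name and parameters are invented for this example, not taken from the cited sources); it checks numerically that bidding v/2 is approximately a best response when the opponent follows that strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_payoff(valuation, bid, n_draws=200_000):
    """Monte Carlo estimate of a bidder's expected payoff in a two-bidder
    first-price auction when the opponent's valuation is uniform on [0, 1]
    and the opponent plays the candidate equilibrium strategy bid = value / 2."""
    opponent_values = rng.uniform(0.0, 1.0, n_draws)
    opponent_bids = opponent_values / 2.0      # candidate equilibrium strategy
    wins = bid > opponent_bids                 # ties occur with probability zero
    return np.mean(wins * (valuation - bid))

valuation = 0.8
candidate_bids = np.linspace(0.0, valuation, 81)
payoffs = [expected_payoff(valuation, b) for b in candidate_bids]
best_bid = candidate_bids[int(np.argmax(payoffs))]
print(f"best response for v = {valuation}: bid {best_bid:.2f} (theory: {valuation / 2:.2f})")
```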

Depending on the game's type and the degree of information available, players have different methods for solving the game. In static games with complete information, the standard approach is to use Nash equilibrium to find viable strategies. In dynamic games with complete information, backward induction is the solution concept, which eliminates non-credible threats as potential strategies for players.
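
As a minimal sketch of the static case, the Python snippet below enumerates the pure-strategy Nash equilibria of a two-player normal-form game by checking every strategy profile for profitable unilateral deviations. The payoff matrices are an illustrative Prisoner's Dilemma, not an example taken from the source.

```python
import numpy as np

# Illustrative Prisoner's Dilemma: strategies are 0 = Cooperate, 1 = Defect.
row_payoff = np.array([[3, 0],
                       [5, 1]])   # payoffs of the row player
col_payoff = np.array([[3, 5],
                       [0, 1]])   # payoffs of the column player

def pure_nash_equilibria(A, B):
    """Return every strategy profile (i, j) at which neither player can gain
    by a unilateral deviation, i.e. every pure-strategy Nash equilibrium."""
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()   # row player cannot improve given column j
            col_best = B[i, j] >= B[i, :].max()   # column player cannot improve given row i
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(row_payoff, col_payoff))   # [(1, 1)] -> (Defect, Defect)
```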

A classic example of a dynamic game with complete information is Stackelberg's (1934) sequential-move version of Cournot duopoly. Other examples include Leontief's (1946) monopoly-union model and Rubinstein's bargaining model.[2]
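
As a hedged sketch of the Stackelberg example, assume linear inverse demand P(Q) = a - Q and a constant marginal cost c (notation chosen here for illustration, not taken from Gibbons). Backward induction first derives the follower's best response to the leader's quantity and then solves the leader's problem, which reproduces the textbook quantities q1 = (a - c)/2 and q2 = (a - c)/4.

```python
import sympy as sp

a, c, q1, q2 = sp.symbols('a c q1 q2', positive=True)
price = a - q1 - q2                       # linear inverse demand P(Q) = a - Q

# Step 1 (backward induction): the follower best-responds to the leader's q1.
follower_profit = (price - c) * q2
q2_star = sp.solve(sp.diff(follower_profit, q2), q2)[0]    # (a - c - q1)/2

# Step 2: the leader maximizes profit anticipating the follower's reaction.
leader_profit = (price.subs(q2, q2_star) - c) * q1
q1_star = sp.solve(sp.diff(leader_profit, q1), q1)[0]      # (a - c)/2

print("leader quantity:  ", sp.simplify(q1_star))
print("follower quantity:", sp.simplify(q2_star.subs(q1, q1_star)))   # (a - c)/4
```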

Lastly, when complete information is unavailable (incomplete-information games), the relevant solution concept becomes Bayesian Nash equilibrium, since games with incomplete information are modeled as Bayesian games.[2] In a game of complete information, the players' payoff functions are common knowledge, whereas in a game of incomplete information at least one player is uncertain about another player's payoff function.

Extensive form

In extensive form, each player knows exactly where they are in the game and what moves have been previously made.

The extensive form can be used to visualize the concept of complete information. By definition, players know where they are in the game, as depicted by the nodes, and they know the final outcomes, as illustrated by the utility payoffs. The players also understand each other's potential strategies and, as a result, their own best course of action to maximize their payoffs.
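
A minimal Python sketch of that idea follows, using a made-up two-stage game rather than the tree in the figure: each decision node is solved by backward induction, so a player's choice at any node is optimal given the payoffs induced by all subsequent choices.

```python
# Toy two-stage game: each internal node records whose turn it is (0 or 1) and
# the available moves; each leaf records the payoff vector (player 1, player 2).
game_tree = {
    "player": 0,
    "moves": {
        "Left": {
            "player": 1,
            "moves": {"l": {"payoffs": (3, 1)}, "r": {"payoffs": (0, 0)}},
        },
        "Right": {
            "player": 1,
            "moves": {"l": {"payoffs": (1, 2)}, "r": {"payoffs": (2, 1)}},
        },
    },
}

def backward_induction(node):
    """Solve a finite game tree with complete (and perfect) information by
    working backwards from the terminal payoffs; returns (payoffs, chosen path)."""
    if "payoffs" in node:                  # terminal node
        return node["payoffs"], []
    best = None
    for move, child in node["moves"].items():
        payoffs, path = backward_induction(child)
        if best is None or payoffs[node["player"]] > best[0][node["player"]]:
            best = (payoffs, [move] + path)
    return best

payoffs, path = backward_induction(game_tree)
print("backward-induction outcome:", path, "with payoffs", payoffs)
```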

Complete versus perfect information

Complete information is importantly different from perfect information.

In a game of complete information, the structure of the game and the payoff functions of the players are commonly known but players may not see all of the moves made by other players (for instance, the initial placement of ships in Battleship); there may also be a chance element (as in most card games). Conversely, in games of perfect information, every player observes other players' moves, but may lack some information on others' payoffs, or on the structure of the game.[3] A game with complete information may or may not have perfect information, and vice versa.

Related Research Articles

Game theory

Game theory is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. In the 21st century, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

Perfect information

In economics, perfect information is a feature of perfect competition. With perfect information in a market, all consumers and producers have perfect and instantaneous knowledge of all market prices, their own utility, and their own cost functions.

In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

Signaling game

In game theory, a signaling game is a simple type of a dynamic Bayesian game.

In game theory, a player's strategy is any of the options which they choose in a setting where the outcome depends not only on their own actions but on the actions of others. The discipline mainly concerns how a player's actions in a game affect the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy and Battleship. A player's strategy determines the action the player will take at any stage of the game. In studying game theory, economists analyze decisions through a rationality-based lens, rather than through the psychological or sociological perspectives used in other disciplines to analyze the relationships between the decisions of two or more parties.

In game theory, the battle of the sexes (BoS) is a two-player coordination game that also involves elements of conflict. The game was introduced in 1957 by Luce and Raiffa in their classic book, Games and Decisions.

Solution concept

In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature".

In game theory, a perfect Bayesian equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A perfect Bayesian equilibrium has two components: strategies and beliefs.

In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.

In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.

In game theory, the war of attrition is a dynamic timing game in which players choose a time to stop, and fundamentally trade off the strategic gains from outlasting other players and the real costs expended with the passage of time. Its precise opposite is the pre-emption game, in which players elect a time to stop, and fundamentally trade off the strategic costs from outlasting other players and the real gains occasioned by the passage of time. The model was originally formulated by John Maynard Smith; a mixed evolutionarily stable strategy (ESS) was determined by Bishop & Cannings. An example is a second price all-pay auction, in which the prize goes to the player with the highest bid and each player pays the loser's low bid.

In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.
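
As a hedged illustration of that last condition, take the standard game of chicken (the payoff numbers below are common textbook values, chosen here for illustration): a mediator draws one of the profiles (C, C), (C, D), (D, C) with probability 1/3 each and privately recommends the corresponding action to each player. The short Python check below confirms that neither player gains by deviating from their recommendation, so this distribution is a correlated equilibrium.

```python
# Game of chicken: C = "Chicken" (swerve), D = "Dare".
C, D = "C", "D"

# Payoffs (row player, column player); common textbook values used for illustration.
payoff = {
    (C, C): (6, 6), (C, D): (2, 7),
    (D, C): (7, 2), (D, D): (0, 0),
}

# Mediator's distribution over recommended action profiles.
distribution = {(C, C): 1/3, (C, D): 1/3, (D, C): 1/3, (D, D): 0.0}

def obeying_is_optimal(player):
    """Check that whenever `player` is told to play `rec`, no deviation gives a
    higher expected payoff conditional on that recommendation."""
    for rec in (C, D):
        cond = {prof: p for prof, p in distribution.items() if prof[player] == rec}
        total = sum(cond.values())
        if total == 0:
            continue  # this recommendation is never issued

        def value(action):
            # Expected payoff of playing `action` against the conditional
            # distribution of the opponent's recommended action.
            return sum(p * payoff[prof[:player] + (action,) + prof[player + 1:]][player]
                       for prof, p in cond.items()) / total

        if any(value(dev) > value(rec) + 1e-12 for dev in (C, D)):
            return False
    return True

print(all(obeying_is_optimal(p) for p in (0, 1)))   # True -> correlated equilibrium
```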

In game theory, the purification theorem was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.

In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game, no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves".

Cooperative bargaining is a process in which two people decide how to share a surplus that they can jointly generate. In many cases, the surplus created by the two players can be shared in many ways, forcing the players to negotiate which division of payoffs to choose. Such surplus-sharing problems are faced by management and labor in the division of a firm's profit, by trade partners in the specification of the terms of trade, and more.

Jean-François Mertens

Jean-François Mertens was a Belgian game theorist and mathematical economist.

References

  1. Levin, Jonathan (2002). "Games with Incomplete Information" (PDF). Retrieved 25 August 2016.
  2. Gibbons, Robert (1992). A Primer in Game Theory. Harvester-Wheatsheaf. p. 133.
  3. Osborne, M. J.; Rubinstein, A. (1994). "Chapter 6: Extensive Games with Perfect Information". A Course in Game Theory. Cambridge, MA: The MIT Press. ISBN 0-262-65040-1.
  4. Thomas, L. C. (2003). Games, Theory and Applications. Mineola, NY: Dover Publications. p. 19. ISBN 0-486-43237-8.
  5. Osborne, M. J.; Rubinstein, A. (1994). "Chapter 11: Extensive Games with Imperfect Information". A Course in Game Theory. Cambridge, MA: The MIT Press. ISBN 0-262-65040-1.