In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley. [1]
The properties of several types of potential games have since been studied. Games can be either ordinal or cardinal potential games. In cardinal games, the change in a player's payoff from unilaterally changing strategy, other things equal, must equal the change in the value of the potential function. In ordinal games, only the signs of these changes have to agree.
The potential function is a useful tool to analyze equilibrium properties of games, since the incentives of all players are mapped into one function, and the set of pure Nash equilibria can be found by locating the local optima of the potential function. Convergence and finite-time convergence of an iterated game towards a Nash equilibrium can also be understood by studying the potential function.
Potential games can be studied as repeated games with state, so that every round played has a direct consequence on the game's state in the next round. [2] This approach has applications in distributed control, such as distributed resource allocation, where players without a central correlation mechanism can cooperate to achieve a globally optimal resource distribution.
Let N be the number of players, A = A1 × … × AN the set of action profiles over the action sets Ai of each player, and ui the payoff function for player i.
Given a game G = (N, A, u), we say that G is a potential game with an exact (weighted, ordinal, generalized ordinal, best response) potential function if Φ is an exact (weighted, ordinal, generalized ordinal, best response, respectively) potential function for G. Here, a function Φ from A to the real numbers is called:
- an exact potential function if, for every player i, every profile a−i of the other players, and all actions ai′, ai″: Φ(ai′, a−i) − Φ(ai″, a−i) = ui(ai′, a−i) − ui(ai″, a−i);
- a weighted potential function if there are positive weights wi such that Φ(ai′, a−i) − Φ(ai″, a−i) = wi [ui(ai′, a−i) − ui(ai″, a−i)];
- an ordinal potential function if ui(ai′, a−i) − ui(ai″, a−i) > 0 if and only if Φ(ai′, a−i) − Φ(ai″, a−i) > 0;
- a generalized ordinal potential function if ui(ai′, a−i) − ui(ai″, a−i) > 0 implies Φ(ai′, a−i) − Φ(ai″, a−i) > 0;
- a best-response potential function if, for every player i and every a−i, the actions maximizing ui(·, a−i) are exactly the actions maximizing Φ(·, a−i).
Note that while there are N utility functions, one for each player, there is only one potential function. Thus, through the lens of potential functions, the players become interchangeable (in the sense of one of the definitions above). Because of this symmetry of the game, decentralized algorithms based on the shared potential function often lead to convergence (in some sense) to a Nash equilibrium.
In a 2-player, 2-action game with externalities, individual players' payoffs are given by the function ui(ai, aj) = bi ai + w ai aj, where ai is player i's action, aj is the opponent's action, and w is a positive externality from choosing the same action. The action choices are +1 and −1, as seen in the payoff matrix in Figure 1.
This game has a potential function P(a1, a2) = b1 a1 + b2 a2 + w a1 a2.
If player 1 moves from −1 to +1, the payoff difference is Δu1 = u1(+1, a2) − u1(−1, a2) = 2 b1 + 2 w a2.
The change in potential is ΔP = P(+1, a2) − P(−1, a2) = (b1 + b2 a2 + w a2) − (−b1 + b2 a2 − w a2) = 2 b1 + 2 w a2 = Δu1.
The derivation for player 2 is analogous. Using numerical values b1 = 2, b2 = −1, w = 3, this example becomes a simple battle of the sexes, as shown in Figure 2. The game has two pure Nash equilibria, (+1, +1) and (−1, −1). These are also the local maxima of the potential function (Figure 3). The only stochastically stable equilibrium is (+1, +1), the global maximum of the potential function.
Figure 1: Payoff matrix of the 2-player, 2-action externality game.
Figure 2: Payoff matrix for b1 = 2, b2 = −1, w = 3 (a battle of the sexes).
Figure 3: Values of the potential function P.
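To make the verification concrete, here is a minimal Python sketch (written for this article, not part of the source): it hard-codes the numeric example b1 = 2, b2 = −1, w = 3, checks the exact-potential identity Δui = ΔP for every unilateral deviation, and recovers the two pure Nash equilibria.

```python
# Verify the exact-potential identity and find the pure Nash equilibria
# of the 2-player, 2-action externality game with b1 = 2, b2 = -1, w = 3.
from itertools import product

b = {1: 2, 2: -1}    # private payoff coefficients b1, b2
w = 3                # positive externality for matching the opponent
ACTIONS = (-1, +1)

def u(i, own, other):
    """Payoff u_i(a_i, a_j) = b_i*a_i + w*a_i*a_j."""
    return b[i] * own + w * own * other

def P(a1, a2):
    """Potential P(a1, a2) = b1*a1 + b2*a2 + w*a1*a2."""
    return b[1] * a1 + b[2] * a2 + w * a1 * a2

# Every unilateral deviation changes the deviator's payoff and P equally.
for a1, a2 in product(ACTIONS, repeat=2):
    for d in ACTIONS:  # player 1 deviates from a1 to d
        assert u(1, d, a2) - u(1, a1, a2) == P(d, a2) - P(a1, a2)
    for d in ACTIONS:  # player 2 deviates from a2 to d
        assert u(2, d, a1) - u(2, a2, a1) == P(a1, d) - P(a1, a2)

# A profile is a pure Nash equilibrium iff no player gains by deviating.
nash = [(a1, a2) for a1, a2 in product(ACTIONS, repeat=2)
        if all(u(1, d, a2) <= u(1, a1, a2) for d in ACTIONS)
        and all(u(2, d, a1) <= u(2, a2, a1) for d in ACTIONS)]
print(nash)  # [(-1, -1), (1, 1)] -- exactly the local maxima of P
```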
A 2-player, 2-action game cannot be an exact potential game unless [u1(+1, −1) + u1(−1, +1)] − [u1(+1, +1) + u1(−1, −1)] = [u2(+1, −1) + u2(−1, +1)] − [u2(+1, +1) + u2(−1, −1)]; that is, the deviating players' payoff changes must sum to zero around the four-cycle of unilateral deviations.
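A small Python check of this condition (the helper name has_exact_potential_2x2 is invented here for illustration): it passes on the externality game above and fails on matching pennies, a 2×2 game with no exact potential and no pure equilibrium.

```python
# Test the 2x2 exact-potential condition stated above.
# u1 and u2 map an action profile (a1, a2) to that player's payoff.

def has_exact_potential_2x2(u1, u2):
    """Four-cycle criterion: both players' 'off-diagonal minus diagonal'
    payoff sums must be equal."""
    lhs = (u1[(+1, -1)] + u1[(-1, +1)]) - (u1[(+1, +1)] + u1[(-1, -1)])
    rhs = (u2[(+1, -1)] + u2[(-1, +1)]) - (u2[(+1, +1)] + u2[(-1, -1)])
    return lhs == rhs

# The externality game of Figure 2 passes ...
b1, b2, w = 2, -1, 3
profiles = [(a1, a2) for a1 in (-1, +1) for a2 in (-1, +1)]
u1 = {(a1, a2): b1 * a1 + w * a1 * a2 for a1, a2 in profiles}
u2 = {(a1, a2): b2 * a2 + w * a1 * a2 for a1, a2 in profiles}
print(has_exact_potential_2x2(u1, u2))   # True

# ... while matching pennies (zero-sum, no pure equilibrium) fails.
mp1 = {(+1, +1): 1, (-1, -1): 1, (+1, -1): -1, (-1, +1): -1}
mp2 = {p: -v for p, v in mp1.items()}
print(has_exact_potential_2x2(mp1, mp2))  # False
```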
Exact potential games are equivalent to congestion games: Rosenthal [3] proved that every congestion game has an exact potential; Monderer and Shapley [1] proved the converse: every game with an exact potential function is isomorphic to a congestion game.
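Rosenthal's exact potential has a standard closed form (the notation below is chosen here; the source text does not display it). With xe(a) the number of players using resource e under the profile a, and de the delay function of e:

```latex
\Phi(a) \;=\; \sum_{e \in E} \;\sum_{k=1}^{x_e(a)} d_e(k)
```

A unilateral move by one player changes Φ by exactly that player's change in total delay, which is what makes Φ an exact potential.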
An improvement path (also called Nash dynamics) is a sequence of strategy-vectors, in which each vector is attained from the previous vector by a single player switching his strategy to a strategy that strictly increases his utility. If a game has a generalized-ordinal-potential function Φ, then Φ is strictly increasing along every improvement path, so every improvement path is acyclic. If, in addition, the game has finitely many strategies, then every improvement path must be finite. This property is called the finite improvement property (FIP). We have just proved that every finite generalized-ordinal-potential game has the FIP. The converse is also true: every finite game that has the FIP has a generalized-ordinal-potential function. [4] The terminal state of every finite improvement path is a Nash equilibrium, so the FIP implies the existence of a pure-strategy Nash equilibrium. Moreover, it implies that a Nash equilibrium can be computed by a distributed process, in which each agent only has to improve his own strategy.
A best-response path is a special case of an improvement path, in which each vector is attained from the previous vector by a single player switching his strategy to a best-response strategy. The property that every best-response path is finite is called the finite best-response property (FBRP). FBRP is weaker than FIP, and it still implies the existence of a pure-strategy Nash equilibrium. It also implies that a Nash equilibrium can be computed by a distributed process, but the computational burden on the agents is higher than with FIP, since they have to compute a best response.
An even weaker property is weak-acyclicity (WA). [5] It means that, for any initial strategy-vector, there exists a finite best-response path starting at that vector. Weak-acyclicity is not sufficient for the existence of a potential function (since some improvement paths may be cyclic), but it is sufficient for the existence of a pure-strategy Nash equilibrium. It implies that a Nash equilibrium can be computed almost surely by a stochastic distributed process, in which at each point a player is chosen at random, and this player chooses a best response at random. [4]
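A minimal sketch of such a distributed process (written for this article, reusing the Figure 2 payoffs; in a 2-action game any strict improvement is automatically a best response): because every switch strictly raises the potential, the loop must terminate at a pure Nash equilibrium.

```python
# Best-response dynamics on the 2x2 example game (b1 = 2, b2 = -1, w = 3).
# Each switch strictly increases the potential P, so the loop terminates.
ACTIONS = (-1, +1)
b1, b2, w = 2, -1, 3

def payoff(player, a1, a2):
    """Payoff of `player` at the profile (a1, a2)."""
    return b1 * a1 + w * a1 * a2 if player == 1 else b2 * a2 + w * a1 * a2

def best_response_dynamics(a1, a2):
    while True:
        improved = False
        for player in (1, 2):
            for d in ACTIONS:
                trial = (d, a2) if player == 1 else (a1, d)
                if payoff(player, *trial) > payoff(player, a1, a2):
                    a1, a2 = trial      # strict improvement: potential rises
                    improved = True
        if not improved:
            return a1, a2               # no player can improve: Nash equilibrium

print(best_response_dynamics(-1, +1))   # (1, 1)
```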
Minimax (sometimes minmax) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss in a worst-case scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for two-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
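As a toy illustration (the 3×3 payoff matrix below is invented for this example), the maximin rule selects the row whose worst-case payoff is largest:

```python
# Maximin for the row player: pick the action whose minimum payoff over all
# column responses is largest. The matrix is invented for illustration.
payoffs = [
    [3, -1, 2],   # worst case: -1
    [1,  1, 0],   # worst case:  0  <- maximin choice
    [4, -2, 5],   # worst case: -2
]
maximin_action = max(range(len(payoffs)), key=lambda i: min(payoffs[i]))
print(maximin_action, min(payoffs[maximin_action]))  # 1 0
```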
In game theory, the Nash equilibrium, named after the mathematician John Nash, is the most common way to define the solution of a non-cooperative game involving two or more players. In a Nash equilibrium, each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. The principle of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to competing firms choosing outputs.
In game theory, a cooperative game is a game with competition between groups of players ("coalitions") due to the possibility of external enforcement of cooperative behavior. These are opposed to non-cooperative games, in which there is either no possibility to forge alliances or all agreements need to be self-enforcing.
Mechanism design is a field in economics and game theory that takes an objectives-first approach to designing economic mechanisms or incentives in strategic settings where players act rationally. Because it starts at the end of the game and then goes backwards, it is also called reverse game theory. It has broad applications, from economics and politics in fields such as market design, auction theory and social choice theory to networked systems.
In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.
In game theory, cheap talk is communication between players that does not directly affect the payoffs of the game. Providing and receiving information is free. This is in contrast to signalling, in which sending certain messages may be costly for the sender depending on the state of the world.
Game theory is the branch of mathematics that studies games: models describing human behaviour. This is a glossary of some terms of the subject.
In game theory, an extensive-form game is a specification of a game allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal-form in that they provide a more complete description of the game in question, whereas normal-form simply boils down the game into a payoff matrix.
In game theory, a Bayesian game is a strategic decision-making model which assumes players have incomplete information. Players hold private information relevant to the game, meaning that the payoffs are not common knowledge. Bayesian games model the outcome of player interactions using aspects of Bayesian probability. They are notable because they allowed, for the first time in game theory, for the specification of the solutions to games with incomplete information.
In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player.
In game theory, a symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If one can change the identities of the players without changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties. Ordinally symmetric games are games that are symmetric with respect to the ordinal structure of the payoffs. A game is quantitatively symmetric if and only if it is symmetric with respect to the exact payoffs. A partnership game is a symmetric game where both players receive identical payoffs for any strategy set. That is, the payoff for playing strategy a against strategy b is the same as the payoff for playing strategy b against strategy a.
In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.
In game theory, a repeated game is an extensive-form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. "Single-stage game" and "single-shot game" are names for non-repeated games.
In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their private observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their strategy, the distribution from which the signals are drawn is called a correlated equilibrium.
In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a repeated game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
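In symbols (notation chosen here; the text above states it in words), with stage payoffs gi(st, at) and a discount factor γ in [0, 1), the two evaluation criteria mentioned are:

```latex
u_i^{\gamma} \;=\; \sum_{t=0}^{\infty} \gamma^{t}\, g_i(s_t, a_t)
\qquad\text{and}\qquad
u_i^{\mathrm{avg}} \;=\; \liminf_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} g_i(s_t, a_t).
```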
In mathematics, and in particular in the study of game theory, graph continuity is a property of functions originally defined by Partha Dasgupta and Eric Maskin in 1986; it is a version of continuity that finds application in the study of continuous games.
Congestion games (CG) are a class of games in game theory. They represent situations which commonly occur in roads, communication networks, oligopoly markets and natural habitats. There is a set of resources; there are several players who need resources; each player chooses a subset of these resources; the delay of each resource is determined by the number of players choosing a subset that contains it. The cost of each player is the sum of the delays of all resources he chooses. Naturally, each player wants to minimize his own delay; however, each player's choices impose a negative externality on the other players, which may lead to inefficient outcomes.
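A minimal sketch of the cost structure just described (the two-road network and its delay functions are invented here), together with Rosenthal's potential from the equivalence discussed earlier:

```python
# A tiny congestion game: delay[e][k-1] is the delay of resource e when
# k players use it. The network and numbers are invented for illustration.
from collections import Counter

delay = {"road_A": [1, 4, 9], "road_B": [2, 3, 6]}   # nondecreasing delays
strategies = {"p1": {"road_A"}, "p2": {"road_A"}, "p3": {"road_B"}}

load = Counter(e for s in strategies.values() for e in s)

# Each player's cost: the delays of the resources they use, at current load.
costs = {p: sum(delay[e][load[e] - 1] for e in s) for p, s in strategies.items()}
print(costs)      # {'p1': 4, 'p2': 4, 'p3': 2}

# Rosenthal's exact potential: per resource, the partial sum of its delays.
potential = sum(sum(delay[e][:load[e]]) for e in load)
print(potential)  # (1 + 4) + 2 = 7
```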
In game theory, a bimatrix game is a simultaneous game for two players in which each player has a finite number of possible actions. The name comes from the fact that the normal form of such a game can be described by two matrices: a matrix A describing the payoffs of player 1 and a matrix B describing the payoffs of player 2.
The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.