
| Correlated equilibrium | |
|---|---|
| *A solution concept in game theory* | |
| **Relationship** | |
| Superset of | Nash equilibrium |
| **Significance** | |
| Proposed by | Robert Aumann |
| Example | Chicken |

In game theory, a **correlated equilibrium** is a solution concept that is more general than the well-known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974.^{ [1] }^{ [2] } The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from their recommended strategy (assuming the others don't deviate), the distribution of recommendations is called a correlated equilibrium.

**Game theory** is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science, and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

In game theory, a **solution concept** is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

An $n$-player strategic game is characterized by an action set $A_i$ and utility function $u_i$ for each player $i$. When player $i$ chooses strategy $a_i \in A_i$ and the remaining players choose a strategy profile described by the $(n-1)$-tuple $a_{-i}$, then player $i$'s utility is $u_i(a_i, a_{-i})$.

A *strategy modification* for player $i$ is a function $\phi_i \colon A_i \to A_i$. That is, $\phi_i$ tells player $i$ to modify his behavior by playing action $\phi_i(a_i)$ when instructed to play $a_i$.

Let $(\Omega, \pi)$ be a countable probability space. For each player $i$, let $P_i$ be his information partition, $q_i$ be $i$'s posterior, and let $s_i \colon \Omega \to A_i$ assign the same value to states in the same cell of $i$'s information partition. Then $((\Omega, \pi), P_i, s_i)$ is a correlated equilibrium of the strategic game if, for every player $i$ and for every strategy modification $\phi_i$:

$$\sum_{\omega \in \Omega} \pi(\omega)\, u_i\big(s_i(\omega), s_{-i}(\omega)\big) \;\geq\; \sum_{\omega \in \Omega} \pi(\omega)\, u_i\big(\phi_i(s_i(\omega)), s_{-i}(\omega)\big)$$

In mathematics, a **countable set** is a set with the same cardinality as some subset of the set of natural numbers. A countable set is either a finite set or a **countably infinite** set. Whether finite or infinite, the elements of a countable set can always be counted one at a time and, although the counting may never finish, every element of the set is associated with a unique natural number.

In probability theory, a **probability space** or a **probability triple** is a mathematical construct that models a real-world process consisting of states that occur randomly. A probability space is constructed with a specific kind of situation or experiment in mind. One proposes that each time a situation of that kind arises, the set of possible outcomes is the same and the probabilities are also the same.

In Bayesian statistics, the **posterior probability** of a random event or an uncertain proposition is the conditional probability that is assigned after the relevant evidence or background is taken into account. Similarly, the **posterior probability distribution** is the probability distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained from an experiment or survey. "Posterior", in this context, means after taking into account the relevant evidence related to the particular case being examined. For instance, there is a ("non-posterior") probability of a person finding buried treasure if they dig in a random spot, and a posterior probability of finding buried treasure if they dig in a spot where their metal detector rings.

In other words, $((\Omega, \pi), P_i, s_i)$ is a correlated equilibrium if no player can improve his or her expected utility via a strategy modification.
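As a sanity check on the definition (a standard observation, not stated explicitly above), every mixed-strategy Nash equilibrium $(\sigma_1, \dots, \sigma_n)$ induces a correlated equilibrium in which the components of the signal are independent:

```latex
\Omega = A_1 \times \cdots \times A_n, \qquad
\pi(a) = \prod_{j=1}^{n} \sigma_j(a_j), \qquad
s_i(a) = a_i,
```

with $P_i$ revealing only player $i$'s own coordinate. Because the components are independent, a player's posterior over the opponents' play is the same after every recommendation, so the equilibrium condition reduces to the usual Nash best-response condition. This is the sense in which correlated equilibrium is a superset of Nash equilibrium.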

| | Dare | Chicken out |
|---|---|---|
| **Dare** | 0, 0 | 7, 2 |
| **Chicken out** | 2, 7 | 6, 6 |

*A game of Chicken*

Consider the game of chicken pictured. In this game two individuals challenge each other to a contest in which each can either *dare* or *chicken out*. If one player is going to dare, it is better for the other to chicken out; but if one is going to chicken out, it is better for the other to dare. This leads to an interesting situation: each wants to dare, but only if the other might chicken out.

In this game, there are three Nash equilibria. The two pure-strategy Nash equilibria are (*D*, *C*) and (*C*, *D*). There is also a mixed-strategy equilibrium in which each player dares with probability 1/3, giving each player an expected payoff of 14/3 ≈ 4.67.

Now consider a third party (or some natural event) that draws one of three cards labeled (*C*, *C*), (*D*, *C*), and (*C*, *D*) with equal probability, i.e. probability 1/3 for each card. After drawing a card, the third party informs each player of the strategy assigned to them on the card (but **not** the strategy assigned to their opponent). Suppose a player is assigned *D*. The only card assigning them *D* also assigns the opponent *C*, so, assuming the opponent follows their assignment, playing *D* yields 7 (the highest payoff possible) and there is no reason to deviate. Now suppose a player is assigned *C*. Then the opponent plays *C* with probability 1/2 and *D* with probability 1/2. The expected utility of daring is 0(1/2) + 7(1/2) = 3.5, and the expected utility of chickening out is 2(1/2) + 6(1/2) = 4. So the player prefers to chicken out.

Since neither player has an incentive to deviate, this is a correlated equilibrium. The expected payoff for each player in this equilibrium is 7(1/3) + 2(1/3) + 6(1/3) = 5, which is higher than the 14/3 ≈ 4.67 expected payoff of the mixed-strategy Nash equilibrium.

The following correlated equilibrium has an even higher payoff to both players: Recommend (*C*, *C*) with probability 1/2, and (*D*, *C*) and (*C*, *D*) with probability 1/4 each. Then when a player is recommended to play *C*, she knows that the other player will play *D* with (conditional) probability 1/3 and *C* with probability 2/3, and gets expected payoff 14/3, which is equal to (not less than) the expected payoff when she plays *D*. In this correlated equilibrium, both players get 5.25 in expectation. It can be shown that this is the correlated equilibrium with maximal sum of expected payoffs to the two players.
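The incentive calculations above can be checked mechanically. The sketch below (plain Python; the helper names are my own, not a standard API) tests the correlated-equilibrium incentive constraints for both recommendation devices discussed above:

```python
# Chicken payoffs indexed by (row action, column action); 0 = Dare, 1 = Chicken out.
u1 = {(0, 0): 0, (0, 1): 7, (1, 0): 2, (1, 1): 6}
u2 = {(a1, a2): u1[(a2, a1)] for (a1, a2) in u1}  # symmetric game

def is_correlated_eq(dist):
    """Check that no player gains by deviating from any recommended action."""
    for i, u in ((0, u1), (1, u2)):
        for rec in (0, 1):          # recommended action for player i
            for dev in (0, 1):      # candidate deviation
                gain = sum(p * (u[s[:i] + (dev,) + s[i+1:]] - u[s])
                           for s, p in dist.items() if s[i] == rec)
                if gain > 1e-12:    # profitable deviation found
                    return False
    return True

def payoff(dist, u):
    """Expected payoff under the joint distribution of recommendations."""
    return sum(p * u[s] for s, p in dist.items())

# Cards (C,C), (D,C), (C,D) with probability 1/3 each.
cards = {(1, 1): 1/3, (0, 1): 1/3, (1, 0): 1/3}
# The higher-payoff device: (C,C) w.p. 1/2, (D,C) and (C,D) w.p. 1/4 each.
better = {(1, 1): 1/2, (0, 1): 1/4, (1, 0): 1/4}

print(is_correlated_eq(cards), round(payoff(cards, u1), 6))    # True 5.0
print(is_correlated_eq(better), round(payoff(better, u1), 6))  # True 5.25
```

Both devices pass the incentive check, reproducing the expected payoffs of 5 and 5.25 computed in the text.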

One of the advantages of correlated equilibria is that they are computationally less expensive to find than Nash equilibria: computing a correlated equilibrium only requires solving a linear program, whereas computing a Nash equilibrium requires finding a fixed point of the best-response correspondence.^{ [3] } Another way of seeing this is that it is possible for two players to respond to each other's historical plays of a game and end up converging to a correlated equilibrium.^{ [4] }
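To make the linear-programming claim concrete, here is a sketch (assuming SciPy is available; the variable ordering and names are my own) that computes the welfare-maximizing correlated equilibrium of Chicken as an LP over the joint distribution of recommendations:

```python
import numpy as np
from scipy.optimize import linprog

# Chicken payoffs (u1 = row player, u2 = column player), actions ordered (Dare, Chicken out).
u1 = np.array([[0, 7], [2, 6]])
u2 = np.array([[0, 2], [7, 6]])

# Decision variables: joint probabilities p[a1, a2], flattened row-major
# as [p_DD, p_DC, p_CD, p_CC].
A_ub, b_ub = [], []
for rec in range(2):          # recommended action
    for dev in range(2):      # candidate deviation
        if rec == dev:
            continue
        # Row player: the expected gain from deviating rec -> dev must be <= 0.
        row = np.zeros(4)
        for a2 in range(2):
            row[2 * rec + a2] = u1[dev, a2] - u1[rec, a2]
        A_ub.append(row); b_ub.append(0.0)
        # Column player: same incentive constraint on the other coordinate.
        col = np.zeros(4)
        for a1 in range(2):
            col[2 * a1 + rec] = u2[a1, dev] - u2[a1, rec]
        A_ub.append(col); b_ub.append(0.0)

# Maximize total expected payoff (linprog minimizes, so negate the objective).
c = -(u1 + u2).flatten()
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(4)], b_eq=[1.0], bounds=[(0, 1)] * 4)
print(res.x.round(3))  # optimal joint distribution over (D,D),(D,C),(C,D),(C,C)
print(-res.fun)        # maximal total expected payoff
```

On this game the optimum puts probability 1/2 on (*C*, *C*) and 1/4 on each of (*D*, *C*) and (*C*, *D*), for a total payoff of 10.5 (5.25 per player), matching the payoff-maximizing correlated equilibrium described above.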

In game theory and economic theory, a **zero-sum game** is a mathematical representation of a situation in which each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a larger piece reduces the amount of cake available for others, is a zero-sum game if all participants value each unit of cake equally.

The **game of chicken**, also known as the **hawk–dove game** or **snowdrift game**, is a model of conflict for two players in game theory. The principle of the game is that while it is to both players' benefit if one player yields, each player's optimal choice depends on what their opponent is doing: if the opponent yields, the player should not, but if the opponent fails to yield, the player should.

In game theory, the **best response** is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.

In game theory, **coordination games** are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies.

**Matching pennies** is the name for a simple game used in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd. If the pennies do not match Odd keeps both pennies, so receives one from Even.

In game theory, a player's **strategy** is any of the options which he or she chooses in a setting where the outcome depends *not only* on their own actions *but* on the actions of others. A player's strategy will determine the action which the player will take at any stage of the game.

In game theory, a **Perfect Bayesian Equilibrium** (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. A PBE is a refinement of both Bayesian Nash equilibrium (BNE) and subgame perfect equilibrium (SPE). A PBE has two components, *strategies* and *beliefs*.

In game theory, a **Bayesian game** is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

In game theory, **trembling hand perfect equilibrium** is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or **tremble,** may choose unintended strategies, albeit with negligible probability.

In game theory, **folk theorems** are a class of theorems about possible Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept, subgame-perfect Nash equilibrium, rather than Nash equilibrium.

In game theory, a **repeated game** is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. *Single stage game* or *single shot game* are names for non-repeated games.

In game theory, the **purification theorem** was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.

**Quantal response equilibrium** (**QRE**) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

**Risk dominance** and **payoff dominance** are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered **payoff dominant** if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered **risk dominant** if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, a game is said to be a **potential game** if the incentive of all players to change their strategy can be expressed using a single global function called the **potential function**. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.

**Proper equilibrium** is a refinement of Nash Equilibrium due to Roger B. Myerson. Proper equilibrium further refines Reinhard Selten's notion of a trembling hand perfect equilibrium by assuming that more costly trembles are made with significantly smaller probability than less costly ones.

In game theory, an **epsilon-equilibrium**, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, the **price of stability (PoS)** of a game is the ratio between the best objective function value of one of its equilibria and that of an optimal outcome. The PoS is relevant for games in which there is some objective authority that can influence the players a bit, and maybe help them converge to a good Nash equilibrium. When measuring how efficient a Nash equilibrium is in a specific game, we often also talk about the price of anarchy (PoA).

In game theory, **congestion games** are a class of games first proposed by Rosenthal in 1973. In a congestion game we define players and resources, where the payoff of each player depends on the resources they choose and the number of players choosing the same resource. Congestion games are a special case of potential games. Rosenthal proved that any congestion game is a potential game, and Monderer and Shapley (1996) proved the converse: for any potential game, there is a congestion game with the same potential function.

- ↑ Aumann, Robert (1974). "Subjectivity and correlation in randomized strategies". *Journal of Mathematical Economics*. **1** (1): 67–96. CiteSeerX 10.1.1.120.1740. doi:10.1016/0304-4068(74)90037-8.
- ↑ Aumann, Robert (1987). "Correlated Equilibrium as an Expression of Bayesian Rationality". *Econometrica*. **55** (1): 1–18. CiteSeerX 10.1.1.295.4243. doi:10.2307/1911154. JSTOR 1911154.
- ↑ Papadimitriou, Christos H.; Roughgarden, Tim (2008). "Computing correlated equilibria in multi-player games". *J. ACM*. **55** (3): 14:1–14:29. CiteSeerX 10.1.1.335.2634. doi:10.1145/1379759.1379762.
- ↑ Foster, Dean P.; Vohra, Rakesh V. (1996). "Calibrated Learning and Correlated Equilibrium". *Games and Economic Behavior*.

- Fudenberg, Drew; Tirole, Jean (1991). *Game Theory*. MIT Press. ISBN 0-262-06141-4.
- Leyton-Brown, Kevin; Shoham, Yoav (2008). *Essentials of Game Theory: A Concise, Multidisciplinary Introduction*. San Rafael, CA: Morgan & Claypool Publishers. ISBN 978-1-59829-593-1. An 88-page mathematical introduction; see Section 3.5. Free online at many universities.
- Osborne, Martin J.; Rubinstein, Ariel (1994). *A Course in Game Theory*. MIT Press. ISBN 0-262-65040-1. A modern introduction at the graduate level.
- Shoham, Yoav; Leyton-Brown, Kevin (2009). *Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations*. New York: Cambridge University Press. ISBN 978-0-521-89943-7. A comprehensive reference from a computational perspective; see Sections 3.4.5 and 4.6. Downloadable free online.
- Éva Tardos (2004). Class notes from *Algorithmic Game Theory* (note an important typo).
- Iskander Karibzhanov. MATLAB code to plot the set of correlated equilibria in a two-player normal-form game.
- Noam Nisan (2005). Lecture notes from the course *Topics on the Border of Economics and Computation* (lowercase u should be replaced by u_i).

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
