In game theory, cheap talk is communication between players that does not directly affect the payoffs of the game. Providing and receiving information is free. This is in contrast to signaling in which sending certain messages may be costly for the sender depending on the state of the world.
One actor has information and the other has the ability to act. The informed player can choose strategically what to say and what not to say. Things become interesting when the interests of the players are not aligned. The classic example is of an expert (say, an ecologist) trying to explain the state of the world to an uninformed decision maker (say, a politician voting on a deforestation bill). The decision maker, after hearing the report from the expert, must then make a decision which affects the payoffs of both players.
This basic setting, established by Crawford and Sobel, has given rise to a variety of variants.
To give a formal definition, cheap talk is communication that is costless to transmit and receive, non-binding (it does not limit the strategic choices of either party), and unverifiable (it cannot be verified by a third party such as a court).
Therefore, an agent engaging in cheap talk could lie with impunity, but may choose in equilibrium not to do so.
In the basic form of the game, there are two players communicating, one sender S and one receiver R.
Type. Sender S gets knowledge of the state of the world or of his "type" t. Receiver R does not know t; he has only ex-ante beliefs about it, and relies on a message from S to possibly improve the accuracy of his beliefs.
Message. S decides to send message m. Message m may disclose full information, but it may also give limited, blurred information: it will typically say "The state of the world is between t1 and t2". It may give no information at all.
The form of the message does not matter, as long as there is mutual understanding, common interpretation. It could be a general statement from a central bank's chairman, a political speech in any language, etc. Whatever the form, it is eventually taken to mean "The state of the world is between t1 and t2".
Action. Receiver R receives message m. R updates his beliefs about the state of the world given new information that he might get, using Bayes's rule. R decides to take action a. This action impacts both his own utility and the sender's utility.
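The receiver's updating step can be sketched in a few lines. This is a minimal illustration, with interval bounds chosen arbitrarily for the example:

```python
# Receiver's Bayesian update after the message "t is between t1 and t2",
# starting from a uniform prior on [0, 1]. The bounds are illustrative.
t1, t2 = 0.25, 0.75

# With a uniform prior, Bayes's rule yields a posterior that is uniform
# on [t1, t2]: density 1/(t2 - t1) inside the interval, 0 outside.
posterior_density = 1 / (t2 - t1)

# The receiver's updated expectation of the state is the interval midpoint.
posterior_mean = (t1 + t2) / 2

print(posterior_density, posterior_mean)  # 2.0 0.5
```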
Utility. The decision of S regarding the content of m is based on maximizing his utility, given what he expects R to do. Utility is a way to quantify satisfaction or wishes. It can be financial profits, or non-financial satisfaction—for instance the extent to which the environment is protected.
→ Quadratic utilities:
The respective utilities of S and R can be specified by the following:
US(a, t) = −(a − t − b)²
UR(a, t) = −(a − t)²
The theory applies to more general forms of utility, but quadratic preferences make the exposition easier. Thus S and R have different objectives if b ≠ 0. Parameter b is interpreted as the conflict of interest between the two players, or alternatively as a bias.
UR is maximized when a = t, meaning that the receiver wants to take action that matches the state of the world, which he does not know in general. US is maximized when a = t + b, meaning that S wants a slightly higher action to be taken. Since S does not control action, S must obtain the desired action by choosing what information to reveal. Each player’s utility depends on the state of the world and on both players’ decisions that eventually lead to action a.
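A minimal sketch of these quadratic preferences; the state t and bias b below are assumed values for illustration:

```python
# Quadratic cheap-talk utilities:
#   U_R(a, t)    = -(a - t)^2        the receiver wants a = t
#   U_S(a, t, b) = -(a - t - b)^2    the sender wants a = t + b
def u_receiver(a, t):
    return -(a - t) ** 2

def u_sender(a, t, b):
    return -(a - t - b) ** 2

# Illustrative values (assumed): state t = 0.5, bias b = 0.1.
t, b = 0.5, 0.1
actions = [i / 100 for i in range(101)]  # grid of candidate actions

# Each player's preferred action on the grid
best_for_r = max(actions, key=lambda a: u_receiver(a, t))
best_for_s = max(actions, key=lambda a: u_sender(a, t, b))
print(best_for_r, best_for_s)  # 0.5 0.6
```

The gap of b between the two maximizers is exactly the conflict of interest described above.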
Nash equilibrium. We look for an equilibrium where each player decides optimally, assuming that the other player also decides optimally. Players are rational, although R has only limited information. In equilibrium, expectations are fulfilled, and no player has an incentive to deviate.
Crawford and Sobel characterize possible Nash equilibria.
When interests are aligned, information is fully disclosed. When the conflict of interest is very large, all information is kept hidden. These are extreme cases. The model also allows for more subtle cases, in which interests are close but not identical; there, optimal behavior leads to some, but not all, information being disclosed, producing the kinds of carefully worded statements that we may observe.
More generally:
- There exists N* > 0 such that for all N with 1 ≤ N ≤ N*,
- there exists at least an equilibrium in which the set of induced actions has cardinality N; and moreover
- there is no equilibrium that induces more than N* actions.
Messages. While messages could ex-ante assume an infinite number of possible values µ(t) for the infinite number of possible states of the world t, actually they may take only a finite number of values (m1, m2, . . . , mN).
Thus an equilibrium may be characterized by a partition (t0(N), t1(N). . . tN(N)) of the set of types [0, 1], where 0 = t0(N) < t1(N) < . . . < tN(N) = 1. This partition is shown on the top right segment of Figure 1.
The ti(N)’s are the bounds of intervals where the messages are constant: for ti-1(N) < t < ti(N), µ(t) = mi.
Actions. Since actions are functions of messages, actions are also constant over these intervals: for ti-1(N) < t < ti(N), α(t) = α(mi) = ai.
The action function is now indirectly characterized by the fact that each value ai maximizes the receiver's expected utility, knowing that t is between ti-1(N) and ti(N). Mathematically (assuming that t is uniformly distributed over [0, 1]),
ai = arg max over a of ∫ UR(a, t) dt, the integral being taken from ti-1(N) to ti(N).
→ Quadratic utilities:
Given that R knows that t is between ti-1(N) and ti(N), and in the special case of quadratic utility where R wants action a to be as close to t as possible, we can show that, quite intuitively, the optimal action is the middle of the interval:
ai = (ti-1(N) + ti(N)) / 2.
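This midpoint property can be checked numerically. The sketch below uses an assumed interval and a Riemann sum in place of the exact integral:

```python
# Check (sketch): with t uniform on an interval and quadratic loss,
# the action maximizing the receiver's expected utility is the midpoint.
t_lo, t_hi = 0.4, 1.0  # illustrative interval bounds (assumed)

def expected_utility(a, n=1000):
    # Midpoint Riemann sum of -(a - t)^2 for t uniform on [t_lo, t_hi]
    ts = [t_lo + (t_hi - t_lo) * (k + 0.5) / n for k in range(n)]
    return sum(-(a - t) ** 2 for t in ts) / n

# Grid search over candidate actions
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=expected_utility)
print(best)  # 0.7, the midpoint (t_lo + t_hi) / 2
```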
Indifference condition. What happens at t = ti? The sender has to be indifferent between sending either message mi or message mi+1, for 1 ≤ i ≤ N−1:
US(ai, ti) = US(ai+1, ti).
This gives information about N and the ti.
We consider a partition of size N.
One can show that the boundary types must then satisfy
ti(N) = i/N + 2bi(i − N), for i = 0, 1, . . . , N.
N must be small enough so that the numerator of t1(N) = (1 + 2bN(1 − N))/N is positive. This determines the maximum allowed value
N* = ⌈−1/2 + (1/2)√(1 + 2/b)⌉,
where ⌈z⌉ is the ceiling of z, i.e. the smallest positive integer greater than or equal to z. Example: We assume that b = 1/20. Then N* = 3. We now describe all the equilibria for N = 1, 2, or 3 (see Figure 2).
N = 1: This is the babbling equilibrium. t0 = 0, t1 = 1; a1 = 1/2 = 0.5.
N = 2: t0 = 0, t1 = 2/5 = 0.4, t2 = 1; a1 = 1/5 = 0.2, a2 = 7/10 = 0.7.
N = N* = 3: t0 = 0, t1 = 2/15, t2 = 7/15, t3 = 1; a1 = 1/15, a2 = 3/10 = 0.3, a3 = 11/15.
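These numbers can be reproduced directly. The sketch below assumes the standard uniform-quadratic cutoff formula ti(N) = i/N + 2bi(i − N) together with midpoint actions, and uses exact rational arithmetic so the fractions come out cleanly:

```python
import math
from fractions import Fraction

def n_star(b):
    # Maximum number of induced actions: N* = ceil(-1/2 + (1/2)sqrt(1 + 2/b))
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2 / b))

def partition(N, b):
    # Equilibrium cutoffs t_i(N) = i/N + 2*b*i*(i - N), for i = 0, ..., N
    return [Fraction(i, N) + 2 * b * i * (i - N) for i in range(N + 1)]

def actions(N, b):
    # Each induced action is the midpoint of its interval (quadratic utilities)
    t = partition(N, b)
    return [(t[i] + t[i + 1]) / 2 for i in range(N)]

b = Fraction(1, 20)
print(n_star(1 / 20))   # 3
print(partition(2, b))  # cutoffs 0, 2/5, 1
print(actions(3, b))    # actions 1/15, 3/10, 11/15
```

The printed values match the equilibria listed above for b = 1/20.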
With N = 1, we get the coarsest possible message, which does not give any information. So everything is red on the top left panel. With N = 3, the message is finer. However, it remains quite coarse compared to full revelation, which would be the 45° line, but which is not a Nash equilibrium.
With a higher N, and hence a finer message, the blue area is larger. This implies higher utility. Disclosing more information benefits both parties.
Cheap talk can, in general, be added to any game and has the potential to enhance the set of possible equilibrium outcomes. For example, one can add a round of cheap talk at the beginning of the Battle of the Sexes. Each player announces whether they intend to go to the football game or the opera. Because the Battle of the Sexes is a coordination game, this initial round of communication may enable the players to select among multiple equilibria, thereby achieving higher payoffs than in the uncoordinated case. The messages and strategies which yield this outcome are symmetric for each player. They are: 1) announce opera or football with equal probability; 2) if a person announces opera (or football), then upon hearing this message the other person will say opera (or football) as well (Farrell and Rabin, 1996). If they both announce different options, then no coordination is achieved. In the case of only one player sending a message, this could also give that player a first-mover advantage.
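A small calculation illustrates the coordination gain. The payoffs below are assumed for the sake of the example, with (2, 1) and (1, 2) for the two coordinated outcomes and (0, 0) for a mismatch:

```python
# Battle of the Sexes with assumed payoffs:
#   (Opera, Opera) -> (2, 1); (Football, Football) -> (1, 2); mismatch -> (0, 0).
# Sketch comparing the row player's expected payoff with and without one
# round of symmetric cheap talk.

# Without talk: symmetric mixed equilibrium. Row plays Opera with prob 2/3
# (making column indifferent); column plays Opera with prob 1/3 (making row
# indifferent), so row's equilibrium payoff is q * 2.
q_col_opera = 1 / 3
payoff_no_talk = q_col_opera * 2  # = 2/3

# With talk: each announces Opera/Football with equal probability; if the
# announcements match (prob 1/2), both follow them, splitting (2, 1) and
# (1, 2) evenly; if they differ (prob 1/2), coordination fails and payoffs
# are 0.
payoff_talk = 0.5 * (0.5 * 2 + 0.5 * 1) + 0.5 * 0  # = 0.75

print(payoff_no_talk, payoff_talk)
```

Under these assumed payoffs the talk round raises the expected payoff from 2/3 to 3/4.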
It is not guaranteed, however, that cheap talk will have an effect on equilibrium payoffs. Another game, the Prisoner's Dilemma, is a game whose only equilibrium is in dominant strategies. Any pre-play cheap talk will be ignored and players will play their dominant strategies (Defect, Defect) regardless of the messages sent.
It has been commonly argued that cheap talk will have no effect on the underlying structure of the game. In biology, authors have often argued that costly signalling best explains signalling between animals (see Handicap principle, Signalling theory). This general belief has been challenged (see work by Carl Bergstrom and Brian Skyrms, 2002, 2004). In particular, several models using evolutionary game theory indicate that cheap talk can have effects on the evolutionary dynamics of particular games.
In game theory, the Nash equilibrium, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.
In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. The concept of a best response is central to John Nash's best-known contribution, the Nash equilibrium, the point at which each player in a game has selected the best response to the other players' strategies.
In game theory, the centipede game, first introduced by Robert Rosenthal in 1981, is an extensive form game in which two players take turns choosing either to take a slightly larger share of an increasing pot, or to pass the pot to the other player. The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure but a different number of rounds is called a centipede game.
In game theory, a signaling game is a simple type of a dynamic Bayesian game.
In game theory, the stag hunt, sometimes referred to as the assurance game or trust dilemma, describes a conflict between safety and social cooperation. The stag hunt is based on a story told by the philosopher Jean-Jacques Rousseau in his Discourse on Inequality. Rousseau describes a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag. This has been taken to be a useful analogy for social cooperation, such as international agreements on climate change. The stag hunt differs from the Prisoner's Dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate and one where both players defect. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect.
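With an assumed payoff matrix, the two pure-strategy equilibria can be found by brute force:

```python
from itertools import product

# Stag hunt with assumed payoffs: stag-stag is best, a hare is safe but
# worth less, and hunting stag alone fails.
payoffs = {
    ("Stag", "Stag"): (4, 4), ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0), ("Hare", "Hare"): (3, 3),
}
strategies = ("Stag", "Hare")

def is_nash(profile):
    r, c = profile
    u_r, u_c = payoffs[profile]
    # Pure Nash: no unilateral deviation strictly improves a player's payoff.
    return all(payoffs[(d, c)][0] <= u_r for d in strategies) and \
           all(payoffs[(r, d)][1] <= u_c for d in strategies)

pure_nash = [p for p in product(strategies, repeat=2) if is_nash(p)]
print(pure_nash)  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```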
In game theory, a Perfect Bayesian Equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. It is a refinement of Bayesian Nash equilibrium (BNE). A PBE has two components: strategies and beliefs.
In game theory, a Bayesian game is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.
In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.
In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. Single stage game or single shot game are names for non-repeated games.
In game theory, a correlated equilibrium is a solution concept that is more general than the well known Nash equilibrium. It was first discussed by mathematician Robert Aumann in 1974. The idea is that each player chooses their action according to their observation of the value of the same public signal. A strategy assigns an action to every possible observation a player can make. If no player would want to deviate from the recommended strategy, the distribution is called a correlated equilibrium.
In game theory, the purification theorem was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.
Quantal response equilibrium (QRE) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.
Risk dominance and payoff dominance are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered payoff dominant if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered risk dominant if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.
In game theory, a game is said to be a potential game if the incentive of all players to change their strategy can be expressed using a single global function called the potential function. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.
In game theory, an epsilon-equilibrium, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.
The intuitive criterion (IC) is a technique for equilibrium refinement in signaling games. It aims to reduce possible outcome scenarios by first restricting the type group to types of agents who could obtain higher utility levels by deviating to off-the-equilibrium messages and second by considering in this sub-set of types the types for which the off-the-equilibrium message is not equilibrium dominated.
The Price of Anarchy (PoA) is a concept in economics and game theory that measures how the efficiency of a system degrades due to selfish behavior of its agents. It is a general notion that can be extended to diverse systems and notions of efficiency. For example, consider the system of transportation of a city and many agents trying to go from some initial location to a destination. Let efficiency in this case mean the average time for an agent to reach the destination. In the 'centralized' solution, a central authority can tell each agent which path to take in order to minimize the average travel time. In the 'decentralized' version, each agent chooses its own path. The Price of Anarchy measures the ratio between average travel time in the two cases.
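A classic two-road (Pigou-style) routing instance, not taken from the text above, makes the ratio concrete:

```python
# Price of Anarchy sketch for a Pigou-style routing example: one unit of
# traffic from A to B. Road 1 has constant travel time 1; road 2 has travel
# time equal to its flow x.

def avg_time(x):
    # fraction x takes road 2 (time x), fraction 1 - x takes road 1 (time 1)
    return (1 - x) * 1 + x * x

# Decentralized: road 2's time never exceeds road 1's, so all traffic takes
# road 2 (x = 1) and the average travel time is 1.
selfish = avg_time(1.0)

# Centralized optimum: minimize the average travel time; grid search over x.
grid = [i / 1000 for i in range(1001)]
optimal = min(avg_time(x) for x in grid)  # 3/4, attained at x = 1/2

print(selfish / optimal)  # Price of Anarchy = 4/3
```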
In game theory a Poisson game is a game with a random number of players, where the distribution of the number of players follows a Poisson random process. An extension of games of imperfect information, Poisson games have mostly seen application to models of voting.
In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of n players, each facing s strategies, requires listing n·s^n utility values. Even trivial algorithms are capable of finding a Nash equilibrium in a time polynomial in the length of such a large input. A succinct game is of polynomial type if in a game represented by a string of length n the number of players, as well as the number of strategies of each player, is bounded by a polynomial in n.
Jean-François Mertens was a Belgian game theorist and mathematical economist.