
In game theory, **cheap talk** is communication between players that does not directly affect the payoffs of the game. Providing and receiving information is free. This is in contrast to signaling in which sending certain messages may be costly for the sender depending on the state of the world.


One actor has information and the other has ability to act. The informed player can choose strategically what to say and what not to say. Things become interesting when the interests of the players are not aligned. The classic example is of an expert (say, an ecologist) trying to explain the state of the world to an uninformed decision maker (say, politician voting on a deforestation bill). The decision maker, after hearing the report from the expert, must then make a decision which affects the payoffs of both players.

This basic setting, introduced by Crawford and Sobel,^{ [1] } has given rise to a variety of variants.

To give a formal definition, cheap talk is communication that is:^{ [2] }

- costless to transmit and receive
- non-binding (i.e. does not limit strategic choices by either party)
- unverifiable (i.e. cannot be verified by a third party like a court)

Therefore, an agent engaging in cheap talk could lie with impunity, but may choose in equilibrium not to do so.

In the basic form of the game, there are two players communicating, one sender *S* and one receiver *R*.

**Type.** Sender *S* gets knowledge of the state of the world or of his "type" *t*. Receiver *R* does not know *t*; he has only ex-ante beliefs about it, and relies on a message from *S* to possibly improve the accuracy of his beliefs.

**Message.** *S* decides to send message *m*. Message *m* may disclose full information, but it may also give limited, blurred information: it will typically say "The state of the world is between *t _{1}* and *t _{2}*."

The form of the message does not matter, as long as there is mutual understanding and a common interpretation. It could be a general statement from a central bank's chairman, a political speech in any language, etc. Whatever the form, it is eventually taken to mean "The state of the world is between *t _{1}* and *t _{2}*."

**Action.** Receiver *R* receives message *m*. *R* updates his beliefs about the state of the world given new information that he might get, using Bayes's rule. *R* decides to take action *a*. This action impacts both his own utility and the sender's utility.

**Utility.** The decision of *S* regarding the content of *m* is based on maximizing his utility, given what he expects *R* to do. Utility is a way to quantify satisfaction or wishes. It can be financial profits, or non-financial satisfaction—for instance the extent to which the environment is protected.

→ **Quadratic utilities:** The respective utilities of *S* and *R* can be specified by the following:

*U ^{S}(a, t) = −(a − t − b)^{2}*

*U ^{R}(a, t) = −(a − t)^{2}*

The theory applies to more general forms of utility, but quadratic preferences make the exposition easier. Thus *S* and *R* have different objectives if *b ≠ 0*. Parameter *b* is interpreted as the conflict of interest between the two players, or alternatively as a bias.

*U ^{R}* is maximized when *a = t*, meaning that the receiver wants to take the action that matches the state of the world, which he does not know in general. *U ^{S}* is maximized when *a = t + b*, meaning that *S* wants a slightly higher action to be taken. Since *S* does not control the action, *S* must obtain the desired action by choosing what information to reveal. Each player's utility depends on the state of the world and on both players' decisions that eventually lead to action *a*.
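As a quick numerical illustration of these preferences (not part of the original exposition; the values *t* = 0.3 and *b* = 0.05 are arbitrary), the sketch below recovers each player's ideal action by grid search over [0, 1] and confirms the closed-form answers *a = t* for *R* and *a = t + b* for *S*:

```python
import numpy as np

def u_receiver(a, t):
    """Receiver utility: wants the action to match the state."""
    return -(a - t) ** 2

def u_sender(a, t, b):
    """Sender utility: prefers an action b above the state."""
    return -(a - t - b) ** 2

# Arbitrary illustrative values for the state and the bias.
t, b = 0.3, 0.05
grid = np.linspace(0.0, 1.0, 100_001)

best_for_R = grid[np.argmax(u_receiver(grid, t))]
best_for_S = grid[np.argmax(u_sender(grid, t, b))]
print(best_for_R)  # ≈ 0.3  (a = t)
print(best_for_S)  # ≈ 0.35 (a = t + b)
```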

**Nash equilibrium.** We look for an equilibrium where each player decides optimally, assuming that the other player also decides optimally. Players are rational, although *R* has only limited information. Expectations get realized, and there is no incentive to deviate from this situation.

Crawford and Sobel characterize possible Nash equilibria.

- There are typically **multiple equilibria**, but in a finite number.
- **Separating**, which means full information revelation, is not a Nash equilibrium.
- **Babbling**, which means no information transmitted, is always an equilibrium outcome.

When interests are aligned, information is fully disclosed. When the conflict of interest is very large, all information is kept hidden. These are the extreme cases. The model also allows for the more subtle cases in which interests are close but not identical; in these cases optimal behavior leads to some, but not all, information being disclosed, resulting in the various kinds of carefully worded sentences that we may observe.

More generally:

- There exists a positive integer *N ^{*}* such that for all *N* with *1 ≤ N ≤ N^{*}*,
  - there exists at least one equilibrium in which the set of induced actions has cardinality *N*; and moreover
  - there is no equilibrium that induces more than *N ^{*}* actions.

**Messages.** While messages could ex-ante assume an infinite number of possible values *µ(t)* for the infinite number of possible states of the world *t*, actually they may take only a finite number of values *(m _{1}, m_{2}, . . . , m_{N})*.

Thus an equilibrium may be characterized by a partition *(t _{0}(N), t_{1}(N), . . . , t_{N}(N))* of the set of types [0, 1], where *0 = t _{0}(N) < t_{1}(N) < . . . < t_{N}(N) = 1*.

The *t _{i}(N)*'s are the bounds of the intervals on which the message is constant: for *t _{i−1}(N) < t < t_{i}(N)*, *m(t) = m _{i}*.

**Actions.** Since actions are functions of messages, actions are also constant over these intervals: for *t _{i−1}(N) < t < t_{i}(N)*, *a(t) = a(m _{i}) = a_{i}*.

The action function is now indirectly characterized by the fact that each value *a _{i}* optimizes the receiver's expected utility, given that *t* lies in the corresponding interval.

→ **Quadratic utilities:** Given that *R* knows that *t* is between *t _{i−1}* and *t _{i}*, and in the special case of quadratic utility where *R* wants action *a* to be as close to *t* as possible, we can show that, quite intuitively, the optimal action is the middle of the interval:

*a _{i} = (t_{i−1} + t_{i}) / 2*

**Indifference condition.** What happens at *t = t _{i}*? The sender has to be indifferent between sending message *m _{i}* and message *m _{i+1}*:

*U ^{S}(a_{i}, t_{i}) = U^{S}(a_{i+1}, t_{i})*

This gives information about *N* and the *t _{i}*.
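With quadratic utilities and midpoint actions, the indifference condition at each boundary type reduces to the standard Crawford–Sobel difference equation *t _{i+1} − t_{i} = t_{i} − t_{i−1} + 4b*: successive intervals grow by *4b*. The sketch below (with arbitrary values *b* = 0.05, *t _{i−1}* = 0.1, *t _{i}* = 0.3) verifies numerically that a boundary type defined this way is exactly indifferent:

```python
b = 0.05
t_prev, t_i = 0.1, 0.3
t_next = 2 * t_i - t_prev + 4 * b   # difference equation: gaps grow by 4b

# Receiver's optimal (midpoint) action on each adjacent interval.
a_i = (t_prev + t_i) / 2
a_next = (t_i + t_next) / 2

# Sender's quadratic utility evaluated at the boundary type t_i.
u = lambda a: -(a - t_i - b) ** 2
print(abs(u(a_i) - u(a_next)) < 1e-12)  # True: the boundary type is indifferent
```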

→ **Practically:** We consider a partition of size *N*. One can show that

*t _{i} = i/N + 2b i(i − N)*

*N* must be small enough so that *t _{1} = (1 − 2bN(N − 1))/N* is positive, i.e. so that the numerator is positive. This determines the maximum allowed value

*N ^{*} = ⌈ −1/2 + (1/2)√(1 + 2/b) ⌉*

where *⌈z⌉* is the ceiling of *z*, i.e. the smallest positive integer greater than or equal to *z*. Example: We assume that *b = 1/20*. Then *N ^{*} = 3*. We now describe all the equilibria for *N = 1, 2,* or *3* (see Figure 2).

- *N = 1*: *t _{0} = 0, t_{1} = 1*; *a _{1} = 1/2 = 0.5*. This is the babbling equilibrium.
- *N = 2*: *t _{0} = 0, t_{1} = 2/5 = 0.4, t_{2} = 1*; *a _{1} = 1/5 = 0.2, a_{2} = 7/10 = 0.7*.
- *N = N ^{*} = 3*: *t _{0} = 0, t_{1} = 2/15, t_{2} = 7/15, t_{3} = 1*; *a _{1} = 1/15, a_{2} = 3/10 = 0.3, a_{3} = 11/15*.

With *N = 1*, we get the *coarsest* possible message, which does not give any information. So everything is red on the top left panel of Figure 2. With *N = 3*, the message is *finer*. However, it remains quite coarse compared to full revelation, which would be the 45° line, but which is not a Nash equilibrium.

With a higher *N*, and thus a finer message, the blue area is larger. This implies higher utility: disclosing more information benefits both parties.

Cheap talk can, in general, be added to any game and has the potential to enhance the set of possible equilibrium outcomes. For example, one can add a round of cheap talk at the beginning of the Battle of the Sexes: each player announces whether they intend to go to the football game or to the opera. Because the Battle of the Sexes is a coordination game, this initial round of communication may enable the players to select among multiple equilibria, thereby achieving higher payoffs than in the uncoordinated case. The messages and strategies which yield this outcome are symmetric for each player: (1) announce opera or football with equal probability; (2) if a person announces opera (or football), then upon hearing this message the other person will say opera (or football) as well (Farrell and Rabin, 1996). If they both announce different options, then no coordination is achieved. In the case of only one player messaging, this could also give that player a first-mover advantage.
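To make the coordination gain concrete, the sketch below enumerates the four announcement profiles under an assumed (hypothetical) Battle of the Sexes payoff matrix, with the additional assumption that mismatched announcements lead each player to their own favorite and hence to miscoordination; neither the payoffs nor that convention are specified in the text above:

```python
from itertools import product

# Hypothetical payoffs: both at football -> (2, 1); both at opera -> (1, 2);
# miscoordination -> (0, 0). "F" = football, "O" = opera.
payoff = {("F", "F"): (2, 1), ("O", "O"): (1, 2),
          ("F", "O"): (0, 0), ("O", "F"): (0, 0)}

expected = [0.0, 0.0]
for ann1, ann2 in product("FO", repeat=2):  # each profile has probability 1/4
    if ann1 == ann2:
        act1 = act2 = ann1        # matching announcements: coordinate on them
    else:
        act1, act2 = "F", "O"     # assumed convention: each goes to their favorite
    p1, p2 = payoff[(act1, act2)]
    expected[0] += 0.25 * p1
    expected[1] += 0.25 * p2

print(expected)  # [0.75, 0.75]
```

Under these assumed payoffs the cheap-talk protocol yields 0.75 per player, above the 2/3 each player gets in the game's mixed-strategy equilibrium, because announcements match (and coordination succeeds) half the time.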

It is not guaranteed, however, that cheap talk will have an effect on equilibrium payoffs. The Prisoner's Dilemma, for example, has a unique equilibrium in dominant strategies: any pre-play cheap talk will be ignored, and players will play their dominant strategies (Defect, Defect) regardless of the messages sent.

It has been commonly argued that cheap talk will have no effect on the underlying structure of the game. In biology, authors have often argued that costly signalling best explains signalling between animals (see Handicap principle, Signalling theory). This general belief has been challenged (see work by Carl Bergstrom^{ [3] } and Brian Skyrms 2002, 2004). In particular, several models using evolutionary game theory indicate that cheap talk can have effects on the evolutionary dynamics of particular games.

- ↑ Crawford, Vincent P.; Sobel, Joel (November 1982). "Strategic Information Transmission". *Econometrica*. **50** (6): 1431–1451. doi:10.2307/1913390. JSTOR 1913390.
- ↑ Farrell, Joseph (1987). "Cheap Talk, Coordination, and Entry". *The RAND Journal of Economics*. **18** (1): 34–39. doi:10.2307/2555533. JSTOR 2555533.
- ↑ "*The Biology of Information*". Archived from the original on 2005-03-04. Retrieved 2005-03-17.


- Crawford, V. P.; Sobel, J. (1982). "Strategic Information Transmission". *Econometrica*. **50** (6): 1431–1451. doi:10.2307/1913390. JSTOR 1913390.
- Farrell, J.; Rabin, M. (1996). "Cheap Talk". *Journal of Economic Perspectives*. **10** (3): 103–118. doi:10.1257/jep.10.3.103. JSTOR 2138522.
- Robson, A. J. (1990). "Efficiency in Evolutionary Games: Darwin, Nash, and the Secret Handshake". *Journal of Theoretical Biology*. **144** (3): 379–396. doi:10.1016/S0022-5193(05)80082-7.
- Skyrms, B. (2002). "Signals, Evolution and the Explanatory Power of Transient Information". *Philosophy of Science*. **69** (3): 407–428. doi:10.1086/342451.
- Skyrms, B. (2004). *The Stag Hunt and the Evolution of Social Structure*. New York: Cambridge University Press. ISBN 0-521-82651-9.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.