Non-credible threat

Illustration that shows the difference between a SPNE and another NE. The blue equilibrium is not subgame perfect because player two makes a non-credible threat at 2(2) to be unkind (U).

A non-credible threat is a term used in game theory and economics to describe a threat in a sequential game that a rational player would not actually carry out, because carrying it out would not be in that player's best interest.


A threat and its counterpart, a commitment, were both defined by the American economist and Nobel laureate T. C. Schelling, who stated: "A announces that B's behaviour will lead to a response from A. If this response is a reward, then the announcement is a commitment; if this response is a penalty, then the announcement is a threat."[1] While a player might make a threat, it is only deemed credible if carrying it out serves the best interest of the player.[2] In other words, the player must be willing to follow through with the threatened action regardless of the choice of the other player.[3] This rests on the assumption that the player is rational.[1]

A non-credible threat is made in the hope that it will be believed, so that the threatened undesirable action never needs to be carried out.[4] For a threat to be credible within an equilibrium, whenever a node is reached at which the threat is supposed to be carried out, it is carried out.[3] Nash equilibria that rely on non-credible threats can be eliminated through backward induction; the remaining equilibria are called subgame perfect Nash equilibria.[2][5]

Examples of non-credible threats

Player 2 threatening action A if Player 1 chooses action B is a non-credible threat. This is because if Player 1 chooses action B, Player 2 will choose action B, as this results in a higher payoff than action A for Player 2.

Market entry game

An example of a non-credible threat is demonstrated by Shaorong Sun and Na Sun in their book Management Game Theory. In their market entry game, an existing firm, firm 2, has a strong hold on the market, and a new firm, firm 1, is considering entering. If firm 1 does not enter, the payoff is (4,10). If firm 1 does enter, firm 2 can choose either to attack or not to attack. If firm 2 attacks, the payoff is (3,3), whereas if firm 2 does not attack, the payoff is (6,6). Since firm 2's best outcome is for firm 1 to stay out, firm 2 can threaten to attack if firm 1 enters, in order to discourage entry. This threat, however, is non-credible: if firm 1 does decide to enter, the action in firm 2's best interest is not to attack, as this yields a payoff of 6 rather than the payoff of 3 from attacking.[1]
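The backward-induction reasoning can be made concrete with a short, purely illustrative Python sketch. It is not part of the cited source; the payoff tuples follow the (firm 1, firm 2) ordering above, and the action labels are hypothetical names chosen for readability.

    # Backward-induction sketch of the market entry game described above.
    # Payoffs are ordered (firm 1, firm 2); the tree is hard-coded for illustration.

    # Firm 2's decision node, reached only if firm 1 enters.
    firm2_payoffs = {"attack": (3, 3), "don't attack": (6, 6)}

    # Step 1: at its node, firm 2 picks the action that maximises its own payoff.
    firm2_choice = max(firm2_payoffs, key=lambda action: firm2_payoffs[action][1])
    print("If firm 1 enters, firm 2 plays:", firm2_choice)  # don't attack

    # Step 2: firm 1 compares staying out with entering, anticipating firm 2's
    # actual best response rather than its threat.
    stay_out = (4, 10)
    enter = firm2_payoffs[firm2_choice]
    firm1_choice = "enter" if enter[0] > stay_out[0] else "stay out"
    print("Firm 1 therefore chooses to:", firm1_choice)  # enter

Because attacking is never firm 2's best response once firm 1 has entered, the threat is eliminated by backward induction, leaving the subgame perfect outcome in which firm 1 enters and firm 2 does not attack.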

Eric van Damme's extensive form game

Eric van Damme's extensive form game provides another example of a non-credible threat. In this game, player 1 chooses between L and R, and if player 1 chooses R, player 2 then chooses between l and r. Player 2 can threaten to choose l, with a payoff of (0,0), to entice player 1 to choose L, with a payoff of (2,2), since this is the highest payoff for player 2. However, this is a non-credible threat: if player 1 does decide to choose R, player 2 will choose r, which gives player 2 a payoff of 1 rather than the payoff of 0 from l. Since playing l is not in player 2's best interest, the threat to play it is non-credible.[4]
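A similar minimal sketch (again not taken from the cited source) shows why the threat fails; since the text does not give player 1's payoff after (R, r), only player 2's decision node is modelled here.

    # Player 2's payoffs at the node reached after player 1 plays R.
    player2_payoffs = {"l": 0, "r": 1}

    # Backward induction: once R has been played, player 2 maximises its own payoff,
    # so the threatened action l (payoff 0 rather than 1) is never actually chosen.
    best_response = max(player2_payoffs, key=player2_payoffs.get)
    print("After R, player 2 plays:", best_response)  # r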

Rationality

The notion of credibility is contingent on the principle of rationality. A rational player always makes decisions that maximise their own utility; however, players are not always rational.[6] Therefore, in real-world applications, the assumption that all players are rational and act to maximise their utility is not always realistic, so non-credible threats cannot simply be ignored.[7]

Experiment using the Beard and Beil Game (1994)

Nicolas Jacquemet and Adam Zylbersztejn conducted experiments based on the Beard and Beil game to investigate whether people act to maximise their payoffs. From the study, Jacquemet and Zylbersztejn found that failure to maximise utility stemmed from two observations: "subjects are not willing to rely on others' self-interested maximization, and self-interested maximization is not ubiquitous."[8] A key component of the utility-maximising strategy in the game was the elimination of non-credible threats; however, the study found that suboptimal payoffs were a direct result of players following through on these non-credible threats.[8] In real-world applications, non-credible threats must therefore be taken into account, as there is a high possibility that players will not act rationally.[7]


Notes

  1. Sun, Shaorong; Sun, Na (2018). Management Game Theory. doi:10.1007/978-981-13-1062-1. ISBN 978-981-13-1061-4. S2CID 169075970.
  2. Heifetz, A.; Yalon-Fortus, J. (2012). Game Theory: Interactive Strategies in Economics and Management. Cambridge University Press – via ProQuest Ebook Central.
  3. Schelling, Thomas C. (1956). "An Essay on Bargaining". The American Economic Review. 46 (3): 281–306. JSTOR 1805498.
  4. van Damme, Eric (1989). "Extensive Form Games". In Eatwell, John; Milgate, Murray; Newman, Peter (eds.). Game Theory. Palgrave Macmillan. pp. 139–144. ISBN 978-1-349-20181-5.
  5. Harrington, J. E. (1989). "Noncooperative Games". In Eatwell, John; Milgate, Murray; Newman, Peter (eds.). Game Theory. Palgrave Macmillan. pp. 178–184. doi:10.1007/978-1-349-20181-5. ISBN 978-1-349-20181-5.
  6. Monahan, K. (2018). How Behavioral Economics Influences Management Decision-Making: A New Paradigm. Academic Press. doi:10.1016/C2016-0-05106-9. ISBN 9780128135310.
  7. Khalil, Elias L. (2011). "Rational, Normative and Procedural Theories of Beliefs: Can They Explain Internal Motivations?". Journal of Economic Issues. 45 (3): 641–664. doi:10.2753/JEI0021-3624450307. S2CID 143987777.
  8. Jacquemet, Nicolas; Zylbersztejn, Adam (June 2014). "What Drives Failure to Maximize Payoffs in the Lab? A Test of the Inequality Aversion Hypothesis". doi:10.2139/ssrn.1895287. SSRN 1895287. S2CID 219374150.

See also

Backward induction
Centipede game
Chain store paradox
Chicken (game)
Cooperative bargaining
Equilibrium selection
Extensive-form game
Folk theorem (game theory)
Game theory
Markov perfect equilibrium
Nash equilibrium
Outcome (game theory)
Perfect Bayesian equilibrium
Quantal response equilibrium
Repeated game
Satisficing
Solution concept
Stackelberg leadership model
Subgame perfect equilibrium
Trembling hand perfect equilibrium