Berge equilibrium

The Berge equilibrium is a game theory solution concept named after the mathematician Claude Berge. It is similar to the standard Nash equilibrium, except that it aims to capture a type of altruism rather than purely non-cooperative play. Whereas a Nash equilibrium is a situation in which each player of a strategic game ensures that they personally will receive the highest payoff given other players' strategies, in a Berge equilibrium every player ensures that all other players will receive the highest payoff possible. Although Berge introduced the intuition for this equilibrium notion in 1957, it was only formally defined by Vladislav Iosifovich Zhukovskii in 1985, and it was not in widespread use until half a century after Berge originally developed it.

History

The Berge equilibrium was first introduced in Claude Berge's 1957 book Théorie générale des jeux à n personnes. [1] Moussa Larbani and Vladislav Iosifovich Zhukovskii write that the ideas in this book were not widely used in Russia partly due to a harsh review that it received shortly after its translation into Russian in 1961, and they were not used in the English speaking world because the book had only received French and Russian printings. [2] These explanations are echoed by other authors, [3] with Pierre Courtois et al. adding that the impact of the book was likely dampened by its lack of economic examples, as well as by its reliance on tools from graph theory that would have been less familiar to economists of the time. [4]

Berge introduced his original equilibrium notion only in intuitive terms, and the first formal definition of the Berge equilibrium was published by Vladislav Iosifovich Zhukovskii in 1985. [5] The topic of Berge equilibria was then studied in detail by Konstantin Semenovich Vaisman in his 1995 PhD dissertation, [6] and Larbani and Zhukovskii document that the tool became more widely used in the mid-2000s as economists became interested in increasingly complex systems in which players might be more inclined to seek globally favourable equilibria and attach value to other players' payoffs. [2] Colman et al. connect interest in the Berge equilibrium to interest in cooperative game theory, the evolution of cooperation, and topics like altruism in evolutionary game theory. [5]

Definition

Formal definition

Consider a normal-form game Γ = (N, {X_i}, {u_i}), where N = {1, 2, ..., n} is the set of players, X_i is the (nonempty) strategy set of player i, where i ∈ N, and u_i : X → ℝ is that player's utility function, with X = X_1 × X_2 × ... × X_n. Denote a strategy profile as x = (x_1, x_2, ..., x_n) ∈ X, and denote the incomplete strategy profile obtained by removing player i's strategy as x_{−i} = (x_1, ..., x_{i−1}, x_{i+1}, ..., x_n) ∈ X_{−i}. A strategy profile x* ∈ X is called a Berge equilibrium if, for any player i ∈ N and any x_{−i} ∈ X_{−i}, the strategy profile (x*_i, x_{−i}) satisfies u_i(x*_i, x_{−i}) ≤ u_i(x*). [7]

Informal definition

The players in a game are playing a Berge equilibrium if they have chosen a strategy profile such that, if any given player i sticks with their chosen strategy while some of the other players change their strategies, then player i's payoff will not increase. So, every player in a Berge equilibrium guarantees the best possible payoff for every other player who is playing their Berge equilibrium strategy. This contrasts with a Nash equilibrium, in which each player i is concerned only with maximizing their own payoff, and no other player cares about the payoff obtained by player i. [5]
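For finite games, the definition above can be checked by brute force: hold player i's strategy fixed and let all of the other players vary jointly. The following sketch (the function name and the dictionary-based payoff representation are illustrative assumptions, not taken from the cited sources) implements this test in Python:

```python
from itertools import product

def is_berge_equilibrium(payoffs, profile):
    """Brute-force test of the Berge equilibrium condition.

    payoffs[i] maps each full strategy profile (a tuple) to player i's
    payoff; profile is the candidate strategy profile x*.
    """
    n = len(profile)
    # Recover each player's strategy set from the payoff table's keys.
    strategy_sets = [sorted({p[j] for p in payoffs[0]}) for j in range(n)]
    for i in range(n):
        base = payoffs[i][profile]
        # Player i keeps profile[i]; every other player may deviate jointly.
        choices = [[profile[j]] if j == i else strategy_sets[j]
                   for j in range(n)]
        for alt in product(*choices):
            if payoffs[i][alt] > base:
                return False  # the others could raise player i's payoff
    return True
```

Letting the other players deviate jointly, rather than one at a time, matches the formal condition, which quantifies over the entire incomplete profile x_{−i}.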

Example

Consider the following prisoner's dilemma game, from Larbani and Zhukovskii (2017): [2]

Payoffs are listed as (Blue, Red), with Blue as the row player and Red as the column player:

                     Red: Cooperate    Red: Defect
    Blue: Cooperate      20, 20           5, 25
    Blue: Defect         25, 5           10, 10

Berge result

A Berge equilibrium of this game is the situation in which both players pick "cooperate"; denote it (Cooperate, Cooperate). This is a Berge equilibrium because neither player can raise the other player's payoff by switching strategies: if either player switched from "cooperate" to "defect", then they would lower the other player's payoff from 20 down to 5.

Berge versus Nash result

Notice first that the Berge equilibrium is not a Nash equilibrium, because either the row player or the column player could increase their own payoff from 20 to 25 by switching to "defect" instead of "cooperate".

A Nash equilibrium of this prisoner's dilemma game is the situation in which both players pick "defect"; denote it (Defect, Defect). That strategy pair yields a payoff of 10 to the row player and 10 to the column player, and neither player can increase their own payoff by unilaterally switching strategies. However, (Defect, Defect) is not a Berge equilibrium: the row player could ensure a higher payoff for the column player by switching strategies, giving the column player a payoff of 25 instead of 10, and the column player could do the same for the row player.

The cooperative nature of the Berge equilibrium therefore avoids the mutual defection problem that has made the prisoner's dilemma a notorious example of the potential for Nash equilibrium reasoning to produce a mutually suboptimal result. [8]
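The two verifications above can be written out directly. In the sketch below (variable and function names are our own), u maps each strategy pair to the payoffs (row player, column player), and the two-player Nash and Berge conditions are checked strategy by strategy:

```python
# Payoff table from Larbani and Zhukovskii (2017): strategies are
# 'C' (cooperate) and 'D' (defect); u[(a, b)] = (row payoff, column payoff)
# when the row player plays a and the column player plays b.
u = {('C', 'C'): (20, 20), ('C', 'D'): (5, 25),
     ('D', 'C'): (25, 5), ('D', 'D'): (10, 10)}
S = ['C', 'D']

def is_nash(profile):
    # Nash: no player can raise their OWN payoff by a unilateral switch.
    a, b = profile
    return (all(u[(s, b)][0] <= u[profile][0] for s in S) and
            all(u[(a, s)][1] <= u[profile][1] for s in S))

def is_berge(profile):
    # Berge: no player can raise the OTHER player's payoff by switching,
    # while that other player keeps their own strategy fixed.
    a, b = profile
    return (all(u[(a, s)][0] <= u[profile][0] for s in S) and
            all(u[(s, b)][1] <= u[profile][1] for s in S))
```

Running both checks confirms the discussion above: (C, C) is a Berge equilibrium but not a Nash equilibrium, while (D, D) is a Nash equilibrium but not a Berge equilibrium.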

Motivation

The Berge equilibrium has been motivated as the exact opposite of a Nash equilibrium, in that while the Nash equilibrium models selfish behaviours, the Berge equilibrium models altruistic behaviours. [9] Moussa Larbani and Vladislav Iosifovich Zhukovskii note that Berge equilibria could be interpreted as a method for formalising the Golden Rule in strategic interactions. [2]

One advantage of the Berge equilibrium over the Nash equilibrium is that Berge results may agree more closely with findings from experimental psychology and experimental economics. Several authors have noted that players asked to play games like the prisoner's dilemma or the ultimatum game in laboratory settings rarely reach the Nash equilibrium result, in part because people in real situations often do attach value to the well-being of others; Berge equilibria may therefore better fit observed behaviour in such situations. [5] [4]

A challenge for the use of Berge equilibria is that they do not have existence properties as strong as those of Nash equilibria, although their existence can be guaranteed under additional conditions. [10] The Berge equilibrium solution concept may also be applied to games that do not satisfy the conditions of Nash's existence theorem and have no Nash equilibria, such as certain games with infinite strategy sets, or to situations where pure-strategy equilibria are desired and yet no Nash equilibrium exists among the pure strategy profiles. [2] [11]


References

  1. Berge, Claude (1957). Théorie générale des jeux à n personnes (in French). Gauthier-Villars.
  2. Larbani, Moussa; Zhukovskii, Vladislav Iosifovich (2017). "Berge equilibrium in normal form static games: a literature review". Izvestiya Instituta Matematiki i Informatiki Udmurtskogo Gosudarstvennogo Universiteta. 49: 80–110. doi:10.20537/2226-3594-2017-49-04.
  3. Pykacz, Jarosław; Bytner, Paweł; Frąckiewicz, Piotr (2019). "Example of a Finite Game with No Berge Equilibria at All". Games. 10 (1): 7. arXiv:1807.05821. doi:10.3390/g10010007.
  4. Courtois, Pierre; Nessah, Rabia; Tazdaït, Tarik (2015). "How to play the games? Nash versus Berge behavior rules". Economics & Philosophy. 31 (1): 123–139. doi:10.1017/S026626711400042X. hdl:20.500.12210/21717.
  5. Colman, Andrew M.; Körner, Tom W.; Musy, Olivier; Tazdaït, Tarik (April 2011). "Mutual support in games: Some properties of Berge equilibria". Journal of Mathematical Psychology. 55 (2): 166–175. doi:10.1016/j.jmp.2011.02.001. hdl:2381/9716.
  6. Vaisman, Konstantin Semenovich (1995). The Berge equilibrium. Saint Petersburg State University.
  7. Zhukovskii, Vladislav Iosifovich. P. Kenderov (ed.). Some Problems of Non-Antagonistic Differential Games. Matematiceskie Metody v Issledovanii Operacij (Mathematical Methods in Operations Research). Sofia: Bulgarian Academy of Sciences. pp. 103–195.
  8. Lung, Rodica Ioana; Suciu, Mihai; Gaskó, Noémi; Dumitrescu, D. (15 July 2015). "Characterization and Detection of ϵ-Berge-Zhukovskii Equilibria". PLOS One. 10 (7): e0131983. Bibcode:2015PLoSO..1031983L. doi:10.1371/journal.pone.0131983. PMC 4503462. PMID 26177217.
  9. Kudryavtsev, Konstantin; Ukhobotov, Viktor; Zhukovskiy, Vladislav (2018). "The Berge Equilibrium in Cournot Oligopoly Model". Optimization and Applications. OPTIMA 2018. Communications in Computer and Information Science: 415–426.
  10. Nessah, Rabia; Larbani, Moussa; Tazdaït, Tarik (August 2007). "A note on Berge equilibrium". Applied Mathematics Letters. 20 (8): 926–932. doi:10.1016/j.aml.2006.09.005.
  11. Musy, Olivier; Pottier, Antonin; Tazdaït, Tarik (2012). "A new theorem to find Berge equilibria". International Game Theory Review. 14 (1). doi:10.1142/S0219198912500053.