| Jean-François Mertens | |
|---|---|
| Born | 11 March 1946, Antwerp, Belgium |
| Died | 17 July 2012 (aged 66)^{ [1] } |
| Nationality | Belgian |
| Alma mater | Université Catholique de Louvain (Docteur ès Sciences, 1970) |
| Awards | Econometric Society Fellow; von Neumann Lecturer of the Game Theory Society |
| **Scientific career** | |
| Fields | Game theory, mathematical economics |
| Doctoral advisors | José Paris, Jacques Neveu |
| Influences | Robert Aumann, Reinhard Selten, John Harsanyi, John von Neumann |
| Influenced | Claude d'Aspremont, Bernard De Meyer, Amrita Dhillon, Françoise Forges, Jean Gabszewicz, Srihari Govindan, Abraham Neyman, Anna Rubinchik, Sylvain Sorin |

**Jean-François Mertens** (11 March 1946 – 17 July 2012) was a Belgian game theorist and mathematical economist.^{ [1] }

- Epistemic models
- Repeated games with incomplete information
- Stochastic games
- Market games: limit price mechanism
- Shapley value
- Refinements and Mertens-stable equilibria
- Social choice theory and relative utilitarianism
- Intergenerational equity in policy evaluation
- References

Mertens contributed to economic theory in the areas of order books for market games, cooperative games, noncooperative games, repeated games, epistemic models of strategic behavior, and refinements of Nash equilibrium (see solution concept). In cooperative game theory he contributed to the solution concepts called the core and the Shapley value.

Regarding repeated games and stochastic games, Mertens's 1982^{ [2] } and 1986^{ [3] } survey articles, and his 1994^{ [4] } survey co-authored with Sylvain Sorin and Shmuel Zamir, are compendiums of results on this topic, including his own contributions. Mertens also made contributions to probability theory^{ [5] } and published articles on elementary topology.^{ [6] }^{ [7] }

Mertens and Zamir^{ [8] }^{ [9] } implemented John Harsanyi's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs as well as a probability distribution over other players' types. They constructed a universal space of types in which, subject to specified consistency conditions, each type corresponds to the infinite hierarchy of his probabilistic beliefs about others' probabilistic beliefs. They also showed that any subspace can be approximated arbitrarily closely by a finite subspace, which is the usual tactic in applications.^{ [10] }
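In now-standard notation (a textbook formulation, not necessarily the original paper's), the hierarchy of beliefs and the universal type space can be sketched as:

```latex
% k-th order beliefs of a player over the basic space of uncertainty S,
% in a game with n players:
X_1 = \Delta(S), \qquad
X_{k+1} = X_k \times \Delta\bigl(S \times X_k^{\,n-1}\bigr), \qquad k \ge 1.
% A consistent type is an element of the projective limit of the X_k,
% and the universal type space satisfies the fixed-point property
T_i \;\cong\; \Delta\bigl(S \times T_{-i}\bigr),
% i.e., a type is identified with a joint belief over the state of
% nature and the other players' types.
```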

Repeated games with incomplete information were pioneered by Aumann and Maschler.^{ [11] }^{ [12] } Two of Jean-François Mertens's contributions to the field are extensions of repeated two-person zero-sum games with incomplete information on both sides, concerning both (1) the type of information available to players and (2) the signalling structure.^{ [13] }

- (1) Information: Mertens extended the theory from the independent case where the private information of the players is generated by independent random variables, to the dependent case where correlation is allowed.
- (2) Signalling structures: the standard signalling structure, in which after each stage both players are informed of the previous moves played, was extended to general signalling structures in which after each stage each player receives a private signal that may depend on the moves and on the state.

In those set-ups Jean-François Mertens provided an extension of the characterization of the minmax and maxmin values for the infinite game in the dependent case with state-independent signals.^{ [14] } Additionally, with Shmuel Zamir,^{ [15] } Jean-François Mertens showed the existence of a limiting value. Such a value can be thought of either as the limit of the values of the *n*-stage games, as *n* goes to infinity, or as the limit of the values of the *λ*-discounted games, as agents become more patient and *λ* goes to zero.

A building block of Mertens and Zamir's approach is the construction of an operator, now simply referred to as the MZ operator in the field in their honor. In continuous time (differential games with incomplete information), the MZ operator becomes an infinitesimal operator at the core of the theory of such games.^{ [16] }^{ [17] }^{ [18] } Characterizing the limit value as the unique solution of a pair of functional equations, Mertens and Zamir showed that it may be a transcendental function, unlike the maxmin or the minmax (the value in the complete information case). Mertens also found the exact rate of convergence in the case of games with incomplete information on one side and general signalling structures.^{ [19] } A detailed analysis of the speed of convergence of the value of the *n*-stage (finitely repeated) game to its limit has profound links to the central limit theorem and the normal law, as well as to the maximal variation of bounded martingales.^{ [20] }^{ [21] } Attacking the difficult case of games with state-dependent signals and without recursive structure, Mertens and Zamir introduced new tools based on an auxiliary game, reducing the set of strategies to a core that is "statistically sufficient."^{ [22] }^{ [23] }
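In standard notation, writing u(p, q) for the value of the non-revealing game at the pair of beliefs (p, q), and Cav_p and Vex_q for concavification in p and convexification in q, the Mertens–Zamir system characterizing the limit value is usually stated as:

```latex
v \;=\; \operatorname{Cav}_{p}\, \min(u, v),
\qquad
v \;=\; \operatorname{Vex}_{q}\, \max(u, v),
```

the limit value being the unique function satisfying both equations simultaneously.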

Collectively, Jean-François Mertens's contributions with Zamir (and also with Sorin) provide the foundation for a general theory of two-person zero-sum repeated games that encompasses stochastic and incomplete-information aspects, and in which concepts of wide relevance are deployed, such as reputation and bounds on rational levels for the payoffs, as well as tools like the splitting lemma, signalling and approachability. While in many ways Mertens's work here goes back to the original von Neumann roots of game theory, with its zero-sum two-person set-up, vitality and innovations with wider application have been pervasive.

Stochastic games were introduced by Lloyd Shapley in 1953.^{ [24] } The first paper studied the discounted two-person zero-sum stochastic game with finitely many states and actions, and demonstrated the existence of a value and of stationary optimal strategies. The study of the undiscounted case evolved over the following three decades, with solutions of special cases by Blackwell and Ferguson in 1968^{ [25] } and Kohlberg in 1974. The existence of an undiscounted value in a very strong sense, both a uniform value and a limiting average value, was proved in 1981 by Jean-François Mertens and Abraham Neyman.^{ [26] } The study of the non-zero-sum case with general state and action spaces attracted much attention, and Mertens and Parthasarathy^{ [27] } proved a general existence result under the condition that the transitions, as a function of the state and actions, are norm-continuous in the actions.
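Shapley's discounted result is constructive: the value is the unique fixed point of a dynamic-programming ("Shapley") operator and can be computed by value iteration. The sketch below illustrates this on a deliberately simple, hypothetical two-state example in which every stage game has a pure saddle point and transitions depend only on the state, so no linear programming is needed (in general, each stage matrix game must be solved by LP):

```python
def pure_saddle_value(M):
    """Value of a zero-sum matrix game that has a pure saddle point.
    Row player maximizes, column player minimizes."""
    maxmin = max(min(row) for row in M)
    minmax = min(max(M[a][b] for a in range(len(M))) for b in range(len(M[0])))
    assert maxmin == minmax, "stage game needs a pure saddle point"
    return maxmin

def stochastic_game_value(r, p, beta, iters=200):
    """Value iteration with the Shapley operator:
    v(s) = val_{a,b} [ r[s][a][b] + beta * sum_t p[s][t] * v(t) ].
    Here transitions depend only on the state, so the continuation
    term is a constant shift of the stage matrix."""
    n = len(r)
    v = [0.0] * n
    for _ in range(iters):
        new_v = []
        for s in range(n):
            shift = beta * sum(p[s][t] * v[t] for t in range(n))
            M = [[r[s][a][b] + shift for b in range(len(r[s][0]))]
                 for a in range(len(r[s]))]
            new_v.append(pure_saddle_value(M))
        v = new_v
    return v

# Hypothetical two-state game; each stage matrix has a pure saddle point.
r = [
    [[2, 3], [1, 0]],    # state 0: saddle value 2
    [[0, 1], [-1, 4]],   # state 1: saddle value 0
]
p = [[0.0, 1.0],         # state 0 moves to state 1
     [0.0, 1.0]]         # state 1 is absorbing
v = stochastic_game_value(r, p, beta=0.5)
# Fixed point: v1 = 0 + 0.5*v1 => v1 = 0, and v0 = 2 + 0.5*v1 => v0 = 2.
```

The contraction factor beta guarantees geometric convergence, which is exactly the discounted half of the story; the Mertens–Neyman uniform value concerns the much harder undiscounted limit.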

Mertens had the idea of using linear competitive economies as an order book to model limit orders and to generalize double auctions to a multivariate set-up.^{ [28] } Acceptable relative prices of players are conveyed by their linear preferences; money can be one of the goods, and it is acceptable for agents to have positive marginal utility for money in this case (after all, the agents are really just orders!). In fact, this is the case for most orders in practice. More than one order (and corresponding order-agent) can come from the same actual agent. In equilibrium, a good sold must have fetched a relative price, compared to the good bought, no less than the one implied by the utility function. Goods brought to the market (the quantities in the order) are conveyed by initial endowments. Limit orders are represented as follows: the order-agent brings one good to the market and has non-zero marginal utilities in that good and in another one (money or numeraire). An *at market* sell order has zero utility for the good sold *at market* and positive utility for money or the numeraire. Mertens clears orders, creating a matching engine, by using the competitive equilibrium, in spite of most of the usual interiority conditions being violated for the auxiliary linear economy. Mertens's mechanism provides a generalization of Shapley–Shubik trading posts and has the potential of a real-life implementation with limit orders across markets, rather than with just one specialist in one market.
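As a toy special case (an assumption-laden sketch, not Mertens's multi-market mechanism): with a single good traded against money, each limit order is a linear order-agent, and the competitive-equilibrium price reduces to the familiar uniform clearing price where cumulative demand from buy orders meets cumulative supply from sell orders. The order data below are hypothetical:

```python
def clearing(buys, sells):
    """Uniform-price clearing of limit orders in one market.
    buys: (limit, qty) pairs willing to buy at any price <= limit;
    sells: (limit, qty) pairs willing to sell at any price >= limit.
    Returns a volume-maximizing price and the traded volume."""
    prices = sorted({p for p, _ in buys} | {p for p, _ in sells})
    best_price, best_vol = None, 0
    for p in prices:
        demand = sum(q for lim, q in buys if lim >= p)   # orders still willing to buy
        supply = sum(q for lim, q in sells if lim <= p)  # orders still willing to sell
        vol = min(demand, supply)
        if vol > best_vol:
            best_price, best_vol = p, vol
    return best_price, best_vol

price, volume = clearing(buys=[(10, 2), (8, 1)], sells=[(7, 1), (9, 1)])
# Both sell orders execute against the buyer with limit 10; the buy
# order with limit 8 is priced out.
```

Mertens's contribution is precisely that this logic extends coherently to many goods at once, via the competitive equilibrium of the auxiliary linear economy.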

The diagonal formula in the theory of non-atomic cooperative games elegantly attributes to each infinitesimal player, as his Shapley value, his marginal contribution to the worth of a perfect sample of the population of players, averaged over all possible sample sizes. Such a marginal contribution is most easily expressed as a derivative, leading to the diagonal formula formulated by Aumann and Shapley. This is the historical reason why some differentiability conditions were originally required to define the Shapley value of non-atomic cooperative games. By first exchanging the order of taking the "average over all possible sample sizes" and taking the derivative, Jean-François Mertens used the smoothing effect of the averaging process to extend the applicability of the diagonal formula.^{ [29] } This trick alone works well for majority games (represented by a step function applied to the percentage of the population in the coalition). Exploiting this commutation idea of taking averages before taking the derivative even further, Jean-François Mertens looked at invariant transformations and took averages over those before taking the derivative. In doing so, Mertens extended the diagonal formula to a much larger space of games, defining a Shapley value at the same time.^{ [30] }^{ [31] }
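In standard notation (a textbook rendering, not a quotation), the Aumann–Shapley diagonal formula for a smooth game v reads:

```latex
(\varphi v)(S) \;=\; \int_0^1 \lim_{\varepsilon \to 0^+}
\frac{\bar v\bigl(t\,\chi_I + \varepsilon\,\chi_S\bigr)
      - \bar v\bigl(t\,\chi_I\bigr)}{\varepsilon}\, dt,
```

where χ_I is the indicator of the whole player set I (so tχ_I is a "perfect sample" of size t), χ_S that of the coalition S, and \bar{v} the natural extension of v to ideal (fractional) coalitions; Mertens's extension averages over suitable invariant transformations before differentiating, which smooths games for which this derivative does not exist.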

Solution concepts that are refinements^{ [32] } of Nash equilibrium have been motivated primarily by arguments for backward induction and forward induction. Backward induction posits that a player's optimal action now anticipates the optimality of his and others' future actions. The refinement called subgame perfect equilibrium implements a weak version of backward induction, and increasingly stronger versions are sequential equilibrium, perfect equilibrium, quasi-perfect equilibrium, and proper equilibrium, where the latter three are obtained as limits of perturbed strategies. Forward induction posits that a player's optimal action now presumes the optimality of others' past actions whenever that is consistent with his observations. Forward induction^{ [33] } is satisfied by a sequential equilibrium for which a player's belief at an information set assigns probability only to others' optimal strategies that enable that information set to be reached. In particular, since completely mixed Nash equilibria are sequential, such equilibria, when they exist, satisfy both forward and backward induction. In his work Mertens managed for the first time to select Nash equilibria that satisfy both forward and backward induction. The method is to let these features be inherited from perturbed games that are forced to have completely mixed strategies; the goal is achieved only with Mertens-stable equilibria, not with the simpler Kohlberg–Mertens equilibria.

Elon Kohlberg and Mertens^{ [34] } emphasized that a solution concept should be consistent with an admissible decision rule. Moreover, it should satisfy the *invariance* principle that it should not depend on which among the many equivalent representations of the strategic situation as an extensive-form game is used. In particular, it should depend only on the reduced normal form of the game obtained after elimination of pure strategies that are redundant because their payoffs for all players can be replicated by a mixture of other pure strategies. Mertens^{ [35] }^{ [36] } emphasized also the importance of the *small worlds* principle that a solution concept should depend only on the ordinal properties of players' preferences, and should not depend on whether the game includes extraneous players whose actions have no effect on the original players' feasible strategies and payoffs.

Kohlberg and Mertens defined tentatively a set-valued solution concept called stability for games with finite numbers of pure strategies that satisfies admissibility, invariance and forward induction, but a counterexample showed that it need not satisfy backward induction; viz. the set might not include a sequential equilibrium. Subsequently, Mertens^{ [37] }^{ [38] } defined a refinement, also called stability and now often called a set of Mertens-stable equilibria, that has several desirable properties:

- Admissibility and Perfection: All equilibria in a stable set are perfect, hence admissible.
- Backward Induction and Forward Induction: A stable set includes a proper equilibrium of the normal form of the game that induces a quasi-perfect and sequential equilibrium in every extensive-form game with perfect recall that has the same normal form. A subset of a stable set survives iterative elimination of weakly dominated strategies and strategies that are inferior replies at every equilibrium in the set.
- Invariance and Small Worlds: The stable sets of a game are the projections of the stable sets of any larger game in which it is embedded while preserving the original players' feasible strategies and payoffs.
- Decomposition and Player Splitting. The stable sets of the product of two independent games are the products of their stable sets. Stable sets are not affected by splitting a player into agents such that no path through the game tree includes actions of two agents.

For two-player games with perfect recall and generic payoffs, stability is equivalent to just three of these properties: a stable set uses only undominated strategies, includes a quasi-perfect equilibrium, and is immune to embedding in a larger game.^{ [39] }

A stable set is defined mathematically by (in brief) essentiality of the projection map from a closed connected neighborhood in the graph of the Nash equilibria over the space of perturbed games obtained by perturbing players' strategies toward completely mixed strategies. This definition entails more than the property that every nearby game has a nearby equilibrium. Essentiality requires further that no deformation of the projection maps to the boundary, which ensures that perturbations of the fixed point problem defining Nash equilibria have nearby solutions. This is apparently necessary to obtain all the desirable properties listed above.

A social welfare function (SWF) maps profiles of individual preferences to social preferences over a fixed set of alternatives. In a seminal paper, Arrow (1950)^{ [40] } proved the famous "Impossibility Theorem": there does not exist an SWF that satisfies a very minimal system of axioms: *Unrestricted Domain*, *Independence of Irrelevant Alternatives*, the *Pareto criterion* and *Non-dictatorship*. A large literature documents various ways to relax Arrow's axioms to obtain possibility results. Relative Utilitarianism (RU) (Dhillon and Mertens, 1999)^{ [41] } is an SWF that consists of normalizing individual utilities between 0 and 1 and then adding them; it is a "possibility" result derived from a system of axioms very close to Arrow's original ones, but modified for the space of preferences over lotteries. Unlike classical utilitarianism, RU does not assume cardinal utility or interpersonal comparability. Starting from individual preferences over lotteries, which are assumed to satisfy the von Neumann–Morgenstern axioms (or equivalent), the axiom system uniquely fixes the interpersonal comparisons. The theorem can be interpreted as providing an axiomatic foundation for the "right" interpersonal comparisons, a problem that has plagued social choice theory for a long time. The axioms are:

- *Individualism:* If all individuals are indifferent between all alternatives, then so is society.
- *Non-Triviality:* The SWF is not constantly totally indifferent between all alternatives.
- *No Ill Will:* It is not true that when all individuals but one are totally indifferent, society's preferences are opposite to his.
- *Anonymity:* A permutation of all individuals leaves the social preferences unchanged.
- *Independence of Redundant Alternatives:* This axiom restricts Arrow's Independence of Irrelevant Alternatives (IIA) to the case where, both before and after the change, the "irrelevant" alternatives are lotteries on the other alternatives.
- *Monotonicity:* This is much weaker than the following "good will" axiom: consider two lotteries *p* and *q* and two preference profiles which coincide for all individuals except one; if that individual is indifferent between *p* and *q* in the first profile but strictly prefers *p* to *q* in the second, then society strictly prefers *p* to *q* in the second profile as well.
- *Continuity:* This axiom is basically a closed-graph property, taking the strongest possible convergence for preference profiles.

The main theorem shows that RU satisfies all the axioms, and that if the number of individuals is bigger than three and the number of candidates is bigger than five, then any SWF satisfying the above axioms is equivalent to RU whenever there exist at least two individuals who do not have exactly the same or exactly opposite preferences.
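As a computational illustration (the utility profile below is hypothetical, not from the paper), the RU rule itself is simple: affinely rescale each individual's von Neumann–Morgenstern utilities to span [0, 1] and rank alternatives by the sum:

```python
def relative_utilitarian_scores(profiles):
    """profiles: one utility vector per individual, all over the same
    alternatives. Each vector is affinely rescaled to [0, 1]; totally
    indifferent individuals contribute nothing (cf. Individualism)."""
    n_alt = len(profiles[0])
    scores = [0.0] * n_alt
    for u in profiles:
        lo, hi = min(u), max(u)
        if hi > lo:  # skip totally indifferent individuals
            for j in range(n_alt):
                scores[j] += (u[j] - lo) / (hi - lo)
    return scores

# Two individuals, three alternatives:
# individual 1 normalizes to [0, 0.5, 1]; individual 2 to [1, 1, 0].
scores = relative_utilitarian_scores([[0, 2, 4], [6, 6, 0]])
best = max(range(len(scores)), key=scores.__getitem__)
# scores == [1.0, 1.5, 1.0]; alternative 1 is socially best
```

The normalization is what makes the rule immune to the scale and origin of each individual's cardinal representation, which is why no interpersonal comparability needs to be assumed.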

Relative utilitarianism^{ [41] } can serve to rationalize using 2% as an intergenerationally fair social discount rate for cost-benefit analysis. Mertens and Rubinchik^{ [42] } show that a shift-invariant welfare function defined on a rich space of (temporary) policies, if differentiable, has as its derivative a discounted sum of the policy (change), with a fixed discount rate, i.e., the induced social discount rate. (Shift-invariance requires a function evaluated on a shifted policy to return an affine transformation of the value of the original policy, with coefficients that depend on the time-shift only.) In an overlapping-generations model with exogenous growth (with time being the whole real line), the relative utilitarian function is shift-invariant when evaluated on (small temporary) policies around a balanced growth equilibrium (with the capital stock growing exponentially). When policies are represented as changes in the endowments of individuals (transfers or taxes), and the utilities of all generations are weighted equally, the social discount rate induced by relative utilitarianism is the growth rate of per capita GDP (2% in the U.S.^{ [43] }). This is also consistent with current practices described in Circular A-4 of the US Office of Management and Budget, which states:

- If your rule will have important intergenerational benefits or costs you might consider a further sensitivity analysis using a lower but positive discount rate in addition to calculating net benefits using discount rates of 3 and 7 percent.^{ [44] }
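Schematically (a simplified rendering of the shift-invariance argument, not the paper's exact statement): if evaluating the welfare function W on a policy shifted in time by τ returns an affine transformation of its value on the original policy, with coefficients depending only on τ, then differentiability forces the derivative at the status quo to be a discounted integral,

```latex
DW(0)\,\delta \;=\; c \int_{-\infty}^{+\infty} e^{-\rho t}\,\delta(t)\,dt,
```

for some constants c > 0 and ρ, where ρ is the induced social discount rate; with equal weights on generations it equals the per-capita growth rate.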

**Game theory** is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers.

In game theory, the **Nash equilibrium**, named after the mathematician John Forbes Nash Jr., is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy.

Game theory is the branch of mathematics in which games are studied: that is, models describing human behaviour. This is a glossary of some terms of the subject.

In game theory, a **solution concept** is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium.

In game theory, a **Bayesian game** is a game in which players have incomplete information about the other players. For example, a player may not know the exact payoff functions of the other players, but instead have beliefs about these payoff functions. These beliefs are represented by a probability distribution over the possible payoff functions.

In game theory, **trembling hand perfect equilibrium** is a refinement of Nash equilibrium due to Reinhard Selten. A trembling hand perfect equilibrium is an equilibrium that takes the possibility of off-the-equilibrium play into account by assuming that the players, through a "slip of the hand" or **tremble,** may choose unintended strategies, albeit with negligible probability.

In game theory, **folk theorems** are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.

In game theory, a **repeated game** is an extensive form game that consists of a number of repetitions of some base game. The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of his or her current action on the future actions of other players; this impact is sometimes called his or her reputation. *Single stage game* or *single shot game* are names for non-repeated games.

In game theory, the **purification theorem** was contributed by Nobel laureate John Harsanyi in 1973. The theorem aims to justify a puzzling aspect of mixed strategy Nash equilibria: that each player is wholly indifferent amongst each of the actions he puts non-zero weight on, yet he mixes them so as to make every other player also indifferent.

**Hobart Peyton Young** is an American game theorist and economist known for his contributions to evolutionary game theory and its application to the study of institutional and technological change, as well as the theory of learning in games. He is currently centennial professor at the London School of Economics, James Meade Professor of Economics Emeritus at the University of Oxford, professorial fellow at Nuffield College Oxford, and research principal at the Office of Financial Research at the U.S. Department of the Treasury.

In game theory, a **subgame perfect equilibrium** is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that if the players played any smaller game that consisted of only one part of the larger game, their behavior would represent a Nash equilibrium of that smaller game. Every finite extensive game with perfect recall has a subgame perfect equilibrium.

**Quantal response equilibrium** (**QRE**) is a solution concept in game theory. First introduced by Richard McKelvey and Thomas Palfrey, it provides an equilibrium notion with bounded rationality. QRE is not an equilibrium refinement, and it can give significantly different results from Nash equilibrium. QRE is only defined for games with discrete strategies, although there are continuous-strategy analogues.

**Risk dominance** and **payoff dominance** are two related refinements of the Nash equilibrium (NE) solution concept in game theory, defined by John Harsanyi and Reinhard Selten. A Nash equilibrium is considered **payoff dominant** if it is Pareto superior to all other Nash equilibria in the game. When faced with a choice among equilibria, all players would agree on the payoff dominant equilibrium since it offers to each player at least as much payoff as the other Nash equilibria. Conversely, a Nash equilibrium is considered **risk dominant** if it has the largest basin of attraction. This implies that the more uncertainty players have about the actions of the other player(s), the more likely they will choose the strategy corresponding to it.

In game theory, a game is said to be a **potential game** if the incentive of all players to change their strategy can be expressed using a single global function called the **potential function**. The concept originated in a 1996 paper by Dov Monderer and Lloyd Shapley.

In game theory, an **epsilon-equilibrium**, or near-Nash equilibrium, is a strategy profile that approximately satisfies the condition of Nash equilibrium. In a Nash equilibrium, no player has an incentive to change his behavior. In an approximate Nash equilibrium, this requirement is weakened to allow the possibility that a player may have a small incentive to do something different. This may still be considered an adequate solution concept, assuming for example status quo bias. This solution concept may be preferred to Nash equilibrium due to being easier to compute, or alternatively due to the possibility that in games of more than 2 players, the probabilities involved in an exact Nash equilibrium need not be rational numbers.

In game theory, a **stochastic game**, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with **probabilistic transitions** played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some **state**. The players select actions and each player receives a **payoff** that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.

A **Markov perfect equilibrium** is an equilibrium concept in game theory. It is the refinement of the concept of subgame perfect equilibrium to extensive form games for which a pay-off relevant state space can be readily identified. The term appeared in publications starting about 1988 in the work of economists Jean Tirole and Eric Maskin. It has since been used, among other applications, in the analysis of industrial organization, macroeconomics and political economy.

**Mertens stability** is a solution concept used to predict the outcome of a non-cooperative game. A tentative definition of stability was proposed by Elon Kohlberg and Jean-François Mertens for games with finite numbers of players and strategies. Later, Mertens proposed a stronger definition that was elaborated further by Srihari Govindan and Mertens. This solution concept is now called Mertens stability, or just stability.

**Abraham Neyman** is an Israeli mathematician and game theorist, Professor of Mathematics at the Federmann Center for the Study of Rationality and the Einstein Institute of Mathematics at the Hebrew University of Jerusalem in Israel. He served as president of the Israeli Chapter of the Game Theory Society (2014–2018).

**M equilibrium** is a set valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization and perfect beliefs. The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.

- 1 2 "Jean-François Mertens, 1946–2012". The Leisure of the Theory Class. Theoryclass.wordpress.com. 2012-08-07. Retrieved 2012-10-01.
- ↑ Mertens, Jean-François (1982). "Repeated Games: An Overview of the Zero-sum Case". In W. Hildenbrand (ed.), *Advances in Economic Theory*. Cambridge University Press, London and New York.
- ↑ Mertens, Jean-François (1986). "Repeated Games". *International Congress of Mathematicians*.
- ↑ Mertens, Jean-François; Sorin, Sylvain; Zamir, Shmuel (1994). "Repeated Games", Parts A, B, C. Discussion Papers 1994020, 1994021, 1994022. Université Catholique de Louvain, Center for Operations Research and Econometrics (CORE).
- ↑ Mertens, Jean-François (1973). "Strongly supermedian functions and optimal stopping". *Probability Theory and Related Fields*. **26**(2): 119–139. doi:10.1007/BF00533481.
- ↑ Mertens, Jean-François (1992). "Essential Maps and Manifolds". *Proceedings of the American Mathematical Society*. **115**(2): 513. doi:10.1090/s0002-9939-1992-1116269-x.
- ↑ Mertens, Jean-François (2003). "Localization of the Degree on Lower-dimensional Sets". *International Journal of Game Theory*. **32**(3): 379–386. doi:10.1007/s001820400164.
- ↑ Mertens, Jean-François; Zamir, Shmuel (1985). "Formulation of Bayesian analysis for games with incomplete information" (PDF). *International Journal of Game Theory*. **14**(1): 1–29. doi:10.1007/bf01770224.
- ↑ An exposition for the general reader is Zamir, Shmuel (2008). "Bayesian games: Games with incomplete information". Discussion Paper 486, Center for Rationality, Hebrew University.
- ↑ A popular version in the form of a sequence of dreams about dreams appears in the film "Inception". The logical aspects of players' beliefs about others' beliefs are related to players' knowledge about others' knowledge; see the prisoners and hats puzzle for an entertaining example, and common knowledge (logic) for another example and a precise definition.
- ↑ Aumann, R. J.; Maschler, M. (1995). *Repeated Games with Incomplete Information*. Cambridge, London: MIT Press.
- ↑ Sorin, S. (2002). *A First Course on Zero-Sum Repeated Games*. Springer, Berlin.
- ↑ Mertens, J.-F. (1987). "Repeated games". In *Proceedings of the International Congress of Mathematicians, Berkeley 1986*. American Mathematical Society, Providence, pp. 1528–1577.
- ↑ Mertens, J.-F. (1972). "The value of two-person zero-sum repeated games: the extensive case". *International Journal of Game Theory*. 1: 217–227.
- ↑ Mertens, J.-F.; Zamir, S. (1971). "The value of two-person zero-sum repeated games with lack of information on both sides". *International Journal of Game Theory*. 1: 39–64.
- ↑ Cardaliaguet, P. (2007). "Differential games with asymmetric information". *SIAM Journal on Control and Optimization*. 46: 816–838.
- ↑ De Meyer, B. (1996). "Repeated games and partial differential equations". *Mathematics of Operations Research*. 21: 209–236.
- ↑ De Meyer, B. (1999). "From repeated games to Brownian games". *Annales de l'Institut Henri Poincaré, Probabilités et Statistiques*. 35: 1–48.
- ↑ Mertens, J.-F. (1998). "The speed of convergence in repeated games with incomplete information on one side". *International Journal of Game Theory*. 27: 343–359.
- ↑ Mertens, J.-F.; Zamir, S. (1976). "The normal distribution and repeated games". *International Journal of Game Theory*. 5: 187–197.
- ↑ De Meyer, B. (1996). "Repeated games, duality and the Central Limit theorem". *Mathematics of Operations Research*. 21: 237–251.
- ↑ Mertens, J.-F.; Zamir, S. (1976). "On a repeated game without a recursive structure". *International Journal of Game Theory*. 5: 173–182.
- ↑ Sorin, S. (1989). "On repeated games without a recursive structure: existence of lim v_n". *International Journal of Game Theory*. 18: 45–55.
- ↑ Shapley, L. S. (1953). "Stochastic games". *PNAS*. **39**(10): 1095–1100. Bibcode:1953PNAS...39.1095S. doi:10.1073/pnas.39.10.1095. PMC 1063912. PMID 16589380.
- ↑ Blackwell, D.; Ferguson, T. S. (1968). "The Big Match". *Annals of Mathematical Statistics*. **39**(1): 159–163.
- ↑ Mertens, Jean-François; Neyman, Abraham (1981). "Stochastic Games". *International Journal of Game Theory*. **10**(2): 53–66. doi:10.1007/bf01769259.
- ↑ Mertens, J.-F.; Parthasarathy, T. P. (2003). "Equilibria for discounted stochastic games". In Neyman, A.; Sorin, S. (eds.), *Stochastic Games and Applications*. Kluwer Academic Publishers, 131–172.
- ↑ Mertens, J.-F. (2003). "The limit-price mechanism". *Journal of Mathematical Economics*. **39**(5–6): 433–528. doi:10.1016/S0304-4068(03)00015-6.
- ↑ Mertens, Jean-François (1980). "Values and Derivatives". *Mathematics of Operations Research*. **5**(4): 523–552. doi:10.1287/moor.5.4.523. JSTOR 3689325.
- ↑ Mertens, Jean-François (1988). "The Shapley Value in the Non Differentiable Case". *International Journal of Game Theory*. **17**: 1–65. doi:10.1007/BF01240834.
- ↑ Neyman, A. (2002). "Values of Games with Infinitely Many Players". In Aumann, R. J.; Hart, S. (eds.), *Handbook of Game Theory with Economic Applications*, Volume 3. Elsevier.
- ↑ Govindan, Srihari; Wilson, Robert (2008). "Refinements of Nash Equilibrium". *The New Palgrave Dictionary of Economics*, 2nd Edition.
- ↑ Govindan, Srihari; Wilson, Robert (2009). "On Forward Induction". *Econometrica*. 77(1): 1–28.
- ↑ Kohlberg, Elon; Mertens, Jean-François (1986). "On the Strategic Stability of Equilibria" (PDF). *Econometrica*. **54**(5): 1003–1037. doi:10.2307/1912320. JSTOR 1912320.
- ↑ Mertens, Jean-François (2003). "Ordinality in Non Cooperative Games". *International Journal of Game Theory*. **32**(3): 387–430. doi:10.1007/s001820400166.
- ↑ Mertens, Jean-François (1992). "The Small Worlds Axiom for Stable Equilibria". *Games and Economic Behavior*. 4: 553–564.
- ↑ Mertens, Jean-François (1989). "Stable Equilibria – A Reformulation". *Mathematics of Operations Research*. **14**(4): 575–625. doi:10.1287/moor.14.4.575; Mertens, Jean-François (1991). "Stable Equilibria – A Reformulation". *Mathematics of Operations Research*. **16**(4): 694–753. doi:10.1287/moor.16.4.694.
- ↑ Govindan, Srihari; Mertens, Jean-François (2004). "An Equivalent Definition of Stable Equilibria". *International Journal of Game Theory*. **32**(3): 339–357. doi:10.1007/s001820400165.
- ↑ Govindan, Srihari; Wilson, Robert (2012). "Axiomatic Theory of Equilibrium Selection for Generic Two-Player Games". *Econometrica*. 70.
- ↑ Arrow, K. J. (1950). "A Difficulty in the Concept of Social Welfare". *Journal of Political Economy*. 58(4): 328–346.
- 1 2 Dhillon, A.; Mertens, J.-F. (1999). "Relative Utilitarianism". *Econometrica*. 67(3): 471–498.
- ↑ Mertens, Jean-François; Rubinchik, Anna (2012). "Intergenerational Equity and the Discount Rate for Policy Analysis". *Macroeconomic Dynamics*. **16**(1): 61–93. doi:10.1017/S1365100510000386. hdl:2078/115068. Retrieved 5 October 2012.
- ↑ Johnston, L. D.; Williamson, S. H. "What Was the U.S. GDP Then?" Economic History Services, MeasuringWorth. Retrieved 5 October 2012.
- ↑ The U.S. Office of Management and Budget. "Circular A-4". Retrieved 5 October 2012.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
