Mean-field game theory

Mean-field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, [1] in the engineering literature by Peter E. Caines and his co-workers, [2] [3] and independently and around the same time by the mathematicians Jean-Michel Lasry and Pierre-Louis Lions. [4] [5] [6] [7]

Use of the term "mean field" is inspired by mean-field theory in physics, which considers the behaviour of systems of large numbers of particles where individual particles have negligible impact upon the system.

In continuous time a mean-field game typically consists of a Hamilton–Jacobi–Bellman equation that describes the optimal control problem of an individual agent and a Fokker–Planck equation that describes the dynamics of the aggregate distribution of agents. Under fairly general assumptions it can be proved that a class of mean-field games is the limit as $N \to \infty$ of an N-player Nash equilibrium. [8]
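In a standard formulation (a schematic sketch; the precise operators, couplings and boundary conditions vary across models), the resulting forward–backward system for the value function $u$ of a representative agent and the distribution $m$ of the population takes the form

$$
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m(t)), \\
\partial_t m - \nu \Delta m - \operatorname{div}\big( m \, \nabla_p H(x, \nabla u) \big) = 0, \\
m(0) = m_0, \qquad u(x, T) = g(x, m(T)),
\end{cases}
$$

where $H$ is a Hamiltonian and $\nu \ge 0$ a diffusion coefficient. The Hamilton–Jacobi–Bellman equation is solved backward in time from the terminal condition, while the Fokker–Planck equation is solved forward from the initial distribution; an equilibrium is a pair $(u, m)$ satisfying both simultaneously.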

A related concept to that of mean-field games is "mean-field-type control". In this case a social planner controls the distribution of states and chooses a control strategy. The solution to a mean-field-type control problem can typically be expressed as a dual adjoint Hamilton–Jacobi–Bellman equation coupled with a Kolmogorov equation. Mean-field-type game theory [9] [10] [11] [12] is the multi-agent generalization of single-agent mean-field-type control. [13] [14]
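Schematically (a sketch under standard assumptions, with notation introduced here for illustration rather than taken from the cited works), the planner's problem is an optimal control problem for a McKean–Vlasov dynamics in which both the cost and the state dynamics may depend on the law $\mathcal{L}(X_t)$ of the state:

$$
\inf_{\alpha} \; \mathbb{E}\left[ \int_0^T \ell\big(X_t, \alpha_t, \mathcal{L}(X_t)\big)\, dt + g\big(X_T, \mathcal{L}(X_T)\big) \right]
\quad \text{subject to} \quad
dX_t = b\big(X_t, \alpha_t, \mathcal{L}(X_t)\big)\, dt + \sigma\, dW_t .
$$

The distinction from a mean-field game is that a single planner optimizes over the distribution directly, rather than the distribution arising as a fixed point of many individual best responses.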

Linear-quadratic Gaussian game problem

From Caines (2009), a relatively simple model of large-scale games is the linear-quadratic Gaussian model. The individual agent's dynamics are modeled as a stochastic differential equation

$$ dX_i = \left( a X_i + b u_i \right) dt + \sigma \, dW_i, \qquad i = 1, \dots, N, $$

where $X_i$ is the state of the $i$-th agent and $u_i$ is the control. The individual agent's cost is

$$ J_i(u_i, \nu) = \mathbb{E}\left[ \int_0^\infty e^{-\rho t} \left( (X_i - \nu)^2 + r\, u_i^2 \right) dt \right], \qquad \nu = \Phi\!\left( \frac{1}{N} \sum_{k \neq i} X_k + \eta \right). $$

The coupling between agents occurs in the cost function, through the mean-field term $\nu$.
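The following Python sketch simulates the N-agent dynamics above under a simple linear feedback. It is illustrative only: the parameter values and the feedback rule $u_i = -k(X_i - \bar{X})$ are assumptions chosen for demonstration, not the equilibrium (Nash certainty equivalence) control derived in the cited literature.

```python
import numpy as np

# Illustrative simulation of the N-agent linear-quadratic dynamics
#   dX_i = (a X_i + b u_i) dt + sigma dW_i
# with an assumed feedback u_i = -k (X_i - mean), so that each agent is
# coupled to the others only through the empirical mean of the population.

rng = np.random.default_rng(0)

N, T, dt = 500, 5.0, 0.01             # number of agents, horizon, time step
a, b, sigma, k = 0.5, 1.0, 0.3, 2.0   # model and feedback parameters (assumed)

X = rng.normal(0.0, 1.0, size=N)      # initial states

for _ in range(int(T / dt)):
    mean_field = X.mean()             # empirical mean seen by every agent
    u = -k * (X - mean_field)         # simple tracking control (not the MFG equilibrium)
    dW = rng.normal(0.0, np.sqrt(dt), size=N)
    X = X + (a * X + b * u) * dt + sigma * dW   # Euler–Maruyama step

print("final spread around the mean:", X.std())  # shrinks when b*k > a
```

As N grows, the empirical mean becomes effectively deterministic, which is the regime in which the mean-field approximation replaces the interaction with the other N−1 agents by an interaction with a distribution.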

Related Research Articles

Brownian motion: Random motion of particles suspended in a fluid

Brownian motion, or pedesis, is the random motion of particles suspended in a medium.

Stochastic process: A mathematical object usually defined as a collection of random variables

In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a family of random variables. Historically, the random variables were associated with or indexed by a set of numbers, usually viewed as points in time, giving the interpretation of a stochastic process representing numerical values of some system randomly changing over time, such as the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. They have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, cryptography and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
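As a minimal illustration of the idea (an example added here, not taken from any of the works cited in this article), the following Python snippet estimates π by repeated uniform sampling:

```python
import random

# Monte Carlo estimate of pi: sample points uniformly in the unit square
# and count the fraction that falls inside the quarter disc of radius 1.
def estimate_pi(n_samples: int = 1_000_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi())  # approaches 3.14159... as n_samples grows
```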

Prospect theory: A theory in behavioural economics describing how individuals assess their loss and gain perspectives in an asymmetric manner

Prospect theory is a theory of behavioral economics and behavioral finance that was developed by Daniel Kahneman and Amos Tversky in 1979. The theory was cited in the decision to award Kahneman the 2002 Nobel Memorial Prize in Economics.

In mathematics, the replicator equation is a deterministic monotone non-linear and non-innovative game dynamic used in evolutionary game theory. The replicator equation differs from other equations used to model replication, such as the quasispecies equation, in that it allows the fitness function to incorporate the distribution of the population types rather than setting the fitness of a particular type constant. This important property allows the replicator equation to capture the essence of selection. Unlike the quasispecies equation, the replicator equation does not incorporate mutation and so is not able to innovate new types or pure strategies.
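In a common form (the standard statement, reproduced here for illustration), the replicator equation for the share $x_i$ of type $i$ with fitness $f_i(x)$ reads

$$ \dot{x}_i = x_i \big( f_i(x) - \phi(x) \big), \qquad \phi(x) = \sum_j x_j f_j(x), $$

so a type grows exactly when its fitness exceeds the population-average fitness $\phi(x)$; the fitness functions $f_i$ may depend on the entire distribution $x$ of types, which is the property emphasized above.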

Pierre-Louis Lions: French mathematician and Fields Medalist

Pierre-Louis Lions is a French mathematician. He is a recipient of the 1994 Fields Medal.

In mathematics, the viscosity solution concept was introduced in the early 1980s by Pierre-Louis Lions and Michael G. Crandall as a generalization of the classical concept of what is meant by a 'solution' to a partial differential equation (PDE). It has been found that the viscosity solution is the natural solution concept to use in many applications of PDEs, including for example first-order equations arising in optimal control, differential games or front evolution problems, as well as second-order equations such as the ones arising in stochastic optimal control or stochastic differential games.

In statistics, stochastic volatility models are those in which the variance of a stochastic process is itself randomly distributed. They are used in the field of mathematical finance to evaluate derivative securities, such as options. The name derives from the models' treatment of the underlying security's volatility as a random process, governed by state variables such as the price level of the underlying security, the tendency of volatility to revert to some long-run mean value, and the variance of the volatility process itself, among others.
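One widely used example of such a model (the Heston model, included here for illustration; it is not singled out in the text above) is

$$ dS_t = \mu S_t \, dt + \sqrt{v_t}\, S_t \, dW_t^{(1)}, \qquad dv_t = \kappa(\theta - v_t)\, dt + \xi \sqrt{v_t}\, dW_t^{(2)}, $$

where $v_t$ is the instantaneous variance, $\theta$ the long-run variance, $\kappa$ the speed of mean reversion, $\xi$ the volatility of volatility, and the two Brownian motions are correlated.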

In game theory, a stochastic game, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with probabilistic transitions played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some state. The players select actions and each player receives a payoff that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
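In symbols (a standard formulation, added here for concreteness), if $r_t$ denotes a player's payoff at stage $t$ and $\gamma \in (0,1)$ is a discount factor, the two payoff criteria mentioned above are

$$ \sum_{t=0}^{\infty} \gamma^{t} r_t \qquad \text{and} \qquad \liminf_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} r_t . $$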

Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with a known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, suitably defined, despite the presence of this noise. The context may be either discrete time or continuous time.

In game theory, differential games are a group of problems related to the modeling and analysis of conflict in the context of a dynamical system. More specifically, a state variable or variables evolve over time according to a differential equation. Early analyses reflected military interests, considering two actors—the pursuer and the evader—with diametrically opposed goals. More recent analyses have reflected engineering or economic considerations.

In mathematics, the Kardar–Parisi–Zhang (KPZ) equation is a non-linear stochastic partial differential equation, introduced by Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang in 1986. It describes the temporal change of a height field $h(x,t)$ with spatial coordinate $x$ and time coordinate $t$:

$$ \frac{\partial h(x,t)}{\partial t} = \nu \nabla^2 h + \frac{\lambda}{2} \left( \nabla h \right)^2 + \eta(x,t), $$

where $\eta(x,t)$ is Gaussian white noise.

Jean-François Mertens: Belgian game theorist

Jean-François Mertens was a Belgian game theorist and mathematical economist.

Mean field particle methods are a broad class of interacting-type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states. A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods, these mean field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property. The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean field kinetic gas model.
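The following Python sketch (with an assumed toy model and parameters, for illustration only) approximates the McKean–Vlasov diffusion $dX_t = -(X_t - \mathbb{E}[X_t])\,dt + dW_t$ by replacing the unknown mean $\mathbb{E}[X_t]$ with the empirical mean of N simulated particles, in the spirit described above:

```python
import numpy as np

# Interacting-particle approximation of the McKean-Vlasov diffusion
#   dX_t = -(X_t - E[X_t]) dt + dW_t
# The law of X_t is unknown, so E[X_t] is replaced by the empirical mean of
# N simulated copies ("particles"), which interact only through that mean.

rng = np.random.default_rng(1)

N, T, dt = 2000, 2.0, 0.01
X = rng.normal(3.0, 1.0, size=N)       # "chaotic" i.i.d. initial configuration

for _ in range(int(T / dt)):
    empirical_mean = X.mean()          # stands in for the unknown E[X_t]
    dW = rng.normal(0.0, np.sqrt(dt), size=N)
    X = X - (X - empirical_mean) * dt + dW

print("empirical mean:", round(X.mean(), 3))  # stays near the initial mean
print("empirical std :", round(X.std(), 3))   # settles near sqrt(1/2)
```

As N tends to infinity the empirical mean becomes deterministic and finite groups of particles become asymptotically independent, which is the propagation-of-chaos property described above.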

Peter Edwin Caines, FRSC is a control theorist and James McGill Professor and Macdonald Chair in the Department of Electrical and Computer Engineering at McGill University, Montreal, Quebec, Canada, which he joined in 1980.

In quantum probability, the Belavkin equation, also known as the Belavkin–Schrödinger equation, quantum filtering equation, or stochastic master equation, is a quantum stochastic differential equation describing the dynamics of a quantum system undergoing observation in continuous time. It was derived and subsequently studied by Viacheslav Belavkin in 1988.

Olivier Guéant is a French mathematician, working on mean field game theory and financial mathematics. He is currently Full Professor of applied mathematics at Université Paris 1 Panthéon Sorbonne.

Hamidou Tembine is a French game theorist and researcher specializing in evolutionary games and co-opetitive mean-field-type games. He is a Global Network Assistant Professor at New York University. He is also the principal investigator and director of the Game Theory and Learning Laboratory at New York University.

Panagiotis E. Souganidis is a Greek-American mathematician, specializing in partial differential equations.

Benjamin Moll: economist

Benjamin Moll is a German macroeconomist who is Professor of Economics at the London School of Economics. He is the recipient of the 2017 Bernacer Prize for his "path-breaking contributions to incorporate consumer and firm heterogeneity into macroeconomic models and use such models to study rich interactions between inequality and the macroeconomy".

References

  1. Jovanovic, Boyan; Rosenthal, Robert W. (1988). "Anonymous Sequential Games". Journal of Mathematical Economics . 17 (1): 77–87. doi:10.1016/0304-4068(88)90029-8.
  2. Huang, M. Y.; Malhame, R. P.; Caines, P. E. (2006). "Large Population Stochastic Dynamic Games: Closed-Loop McKean–Vlasov Systems and the Nash Certainty Equivalence Principle". Communications in Information and Systems. 6 (3): 221–252. doi: 10.4310/CIS.2006.v6.n3.a5 . Zbl   1136.91349.
  3. Nourian, M.; Caines, P. E. (2013). "ε–Nash mean field game theory for nonlinear stochastic dynamical systems with major and minor agents". SIAM Journal on Control and Optimization. 51 (4): 3302–3331. arXiv: 1209.5684 . doi:10.1137/120889496. S2CID   36197045.
  4. Lions, Pierre-Louis; Lasry, Jean-Michel (March 2007). "Large investor trading impacts on volatility". Annales de l'Institut Henri Poincaré C. 24 (2): 311–323. Bibcode:2007AIHPC..24..311L. doi:10.1016/j.anihpc.2005.12.006.
  5. Lasry, Jean-Michel; Lions, Pierre-Louis (28 March 2007). "Mean field games". Japanese Journal of Mathematics. 2 (1): 229–260. doi:10.1007/s11537-007-0657-8. S2CID   1963678.
  6. Lasry, Jean-Michel; Lions, Pierre-Louis (November 2006). "Jeux à champ moyen. II – Horizon fini et contrôle optimal" [Mean field games. II – Finite horizon and optimal control]. Comptes Rendus Mathématique (in French). 343 (10): 679–684. doi:10.1016/j.crma.2006.09.018.
  7. Lasry, Jean-Michel; Lions, Pierre-Louis (November 2006). "Jeux à champ moyen. I – Le cas stationnaire" [Mean field games. I – The stationary case]. Comptes Rendus Mathématique (in French). 343 (9): 619–625. doi:10.1016/j.crma.2006.09.019.
  8. Cardaliaguet, Pierre (September 27, 2013). "Notes on Mean Field Games" (PDF).
  9. Tembine, Hamidou (September 2015). "Risk-sensitive mean-field-type games with Lp-norm drifts". Automatica. 59: 224–237. arXiv: 1505.06280 . doi:10.1016/j.automatica.2015.06.036. S2CID   8161026.
  10. Djehiche, Boualem; Tcheukam, Alain; Tembine, Hamidou (2017). "Mean-Field-Type Games in Engineering". AIMS Electronics and Electrical Engineering. 1 (1): 18–73. arXiv: 1605.03281 . doi:10.3934/ElectrEng.2017.1.18. S2CID   16055840.
  11. Tembine, Hamidou (2017). "Mean-field-type games". AIMS Mathematics. 2 (4): 706–735. doi: 10.3934/Math.2017.4.706 .
  12. Duncan, Tyrone; Tembine, Hamidou (12 February 2018). "Linear–Quadratic Mean-Field-Type Games: A Direct Method". Games. 9 (1): 7. doi: 10.3390/g9010007 .
  13. Andersson, Daniel; Djehiche, Boualem (30 October 2010). "A Maximum Principle for SDEs of Mean-Field Type". Applied Mathematics & Optimization. 63 (3): 341–356. doi:10.1007/s00245-010-9123-8. S2CID   121265168.
  14. Bensoussan, Alain; Frehse, Jens; Yam, Phillip (2013). Mean Field Games and Mean Field Type Control Theory. SpringerBriefs in Mathematics. New York: Springer-Verlag. ISBN 9781461485070.