In game theory, **differential games** are a group of problems related to the modeling and analysis of conflict in the context of a dynamical system. More specifically, a state variable or variables evolve over time according to a differential equation. Early analyses reflected military interests, considering two actors—the pursuer and the evader—with diametrically opposed goals. More recent analyses have reflected engineering or economic considerations.^{ [1] }^{ [2] }

Differential games are closely related to optimal control problems. In an optimal control problem there is a single control and a single criterion to be optimized; differential game theory generalizes this to two controls and two criteria, one for each player.^{ [3] } Each player attempts to control the state of the system so as to achieve its goal; the system responds to the inputs of all players.
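In one common notation (illustrative, not drawn from the cited sources), a two-player differential game couples a single state equation to two separate criteria:

```latex
% State dynamics driven by both players' controls
\dot{x}(t) = f\bigl(t, x(t), u_1(t), u_2(t)\bigr), \qquad x(0) = x_0
% Player i's criterion: running payoff plus terminal payoff
J_i = \int_0^T g_i\bigl(t, x(t), u_1(t), u_2(t)\bigr)\,dt + S_i\bigl(x(T)\bigr), \qquad i = 1, 2
```

Setting $g_2 = -g_1$ recovers the zero-sum pursuit-evasion case; dropping one player recovers a standard optimal control problem.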

In the study of competition, differential games have been employed since a 1925 article by Charles F. Roos.^{ [4] } The first to study the formal theory of differential games was Rufus Isaacs, who published a textbook treatment in 1965.^{ [5] } One of the first games analyzed was the 'homicidal chauffeur game'.

Games with a random time horizon are a particular case of differential games.^{ [6] } In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.^{ [7] }^{ [8] }
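Concretely (in illustrative notation), if the terminal time $T$ has distribution function $F$, interchanging expectation and integration gives

```latex
\mathbb{E}\!\left[\int_0^{T} g\bigl(x(t), u(t)\bigr)\,dt\right]
  = \int_0^{\infty} \bigl(1 - F(t)\bigr)\, g\bigl(x(t), u(t)\bigr)\,dt
```

so for an exponentially distributed horizon, where $1 - F(t) = e^{-\lambda t}$, the problem becomes a discounted infinite-horizon game with discount rate $\lambda$.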

Differential games have been applied to economics. Recent developments include adding stochasticity to differential games and the derivation of the stochastic feedback Nash equilibrium (SFNE). A recent example is the stochastic differential game of capitalism by Leong and Huang (2010).^{ [9] } In 2016 Yuliy Sannikov received the Clark Medal from the *American Economic Association* for his contributions to the analysis of continuous time dynamic games using stochastic calculus methods.^{ [10] }^{ [11] }

For a survey of pursuit-evasion differential games see Pachter.^{ [12] }

- ↑ Tembine, Hamidou (2017-12-06). "Mean-field-type games". *AIMS Mathematics*. **2** (4): 706–735. doi:10.3934/Math.2017.4.706.
- ↑ Djehiche, Boualem; Tcheukam, Alain; Tembine, Hamidou (2017-09-27). "Mean-Field-Type Games in Engineering". *AIMS Electronics and Electrical Engineering*. **1**: 18–73. arXiv:1605.03281. doi:10.3934/ElectrEng.2017.1.18.
- ↑ Kamien, Morton I.; Schwartz, Nancy L. (1991). "Differential Games". *Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management*. Amsterdam: North-Holland. pp. 272–288. ISBN 0-444-01609-0.
- ↑ Roos, C. F. (1925). "A Mathematical Theory of Competition". *American Journal of Mathematics*. **47** (3): 163–175. doi:10.2307/2370550. JSTOR 2370550.
- ↑ Isaacs, Rufus (1999) [1965]. *Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization* (Dover ed.). London: John Wiley and Sons. ISBN 0-486-40682-2 – via Google Books.
- ↑ Petrosjan, L.A.; Murzov, N.V. (1966). "Game-theoretic problems of mechanics". *Litovsk. Mat. Sb.* (in Russian). **6**: 423–433.
- ↑ Petrosjan, L.A.; Shevkoplyas, E.V. (2000). "Cooperative games with random duration". *Vestnik of St. Petersburg Univ.* (in Russian). **4** (1).
- ↑ Marín-Solano, Jesús; Shevkoplyas, Ekaterina V. (December 2011). "Non-constant discounting and differential games with random time horizon". *Automatica*. **47** (12): 2626–2638. doi:10.1016/j.automatica.2011.09.010.
- ↑ Leong, C. K.; Huang, W. (2010). "A stochastic differential game of capitalism". *Journal of Mathematical Economics*. **46** (4): 552. doi:10.1016/j.jmateco.2010.03.007.
- ↑ "American Economic Association". *www.aeaweb.org*. Retrieved 2017-08-21.
- ↑ Tembine, H.; Duncan, Tyrone E. (2018). "Linear–Quadratic Mean-Field-Type Games: A Direct Method". *Games*. **9** (1): 7. doi:10.3390/g9010007.
- ↑ Pachter, Meir (2002). "Simple-motion pursuit-evasion differential games" (PDF). Archived from the original (PDF) on July 20, 2011.

- Dockner, Engelbert; Jorgensen, Steffen; Long, Ngo Van; Sorger, Gerhard (2001), *Differential Games in Economics and Management Science*, Cambridge University Press, ISBN 978-0-521-63732-9
- Petrosyan, Leon (1993), *Differential Games of Pursuit (Series on Optimization, Vol 2)*, World Scientific Publishers, ISBN 978-981-02-0979-7

- Bressan, Alberto (December 8, 2010). "Noncooperative Differential Games: A Tutorial" (PDF). Department of Mathematics, Penn State University.


**Mathematical optimization** or **mathematical programming** is the selection of a best element from some set of available alternatives. Optimization problems arise in all quantitative disciplines, from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
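As a minimal illustration of selecting a best element from a continuum of alternatives, here is a gradient-descent sketch (the objective function, step size, and iteration count are arbitrary choices made for this example):

```python
def minimize(f, x0, lr=0.1, steps=200, eps=1e-6):
    """Minimize a smooth scalar function by gradient descent,
    approximating the derivative with central differences."""
    x = x0
    for _ in range(steps):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)  # numerical derivative
        x -= lr * grad                                 # step downhill
    return x
```

For instance, `minimize(lambda x: (x - 3) ** 2, 0.0)` converges to the minimizer at 3.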

**Optimal control theory** is a branch of applied mathematics that deals with finding a control law for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in both science and engineering. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy.
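In one standard (Bolza) form, with illustrative symbols, the optimal control problem reads:

```latex
\min_{u(\cdot)} \; \Phi\bigl(x(T)\bigr) + \int_0^T L\bigl(x(t), u(t), t\bigr)\,dt
\quad \text{subject to} \quad \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \quad x(0) = x_0
```

Here $x$ is the state (e.g. the spacecraft's position and velocity), $u$ the control (thruster settings), $L$ a running cost (fuel rate), and $\Phi$ a terminal cost.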

In mathematical optimization and decision theory, a **loss function** or **cost function** is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An **objective function** is either a loss function or its negative, in which case it is to be maximized.
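A minimal sketch of the loss/objective relationship (squared error is just one common choice of loss, used here for illustration):

```python
def squared_error_loss(y_true, y_pred):
    """Loss function: maps an event (a prediction) to a
    nonnegative real number representing its 'cost'."""
    return (y_true - y_pred) ** 2

def objective(y_true, y_pred):
    """Objective as the negative of the loss: maximizing
    this is equivalent to minimizing the loss."""
    return -squared_error_loss(y_true, y_pred)
```

The loss is minimized (and the objective maximized) exactly when the prediction matches the true value.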

In optimal control theory, the **Hamilton–Jacobi–Bellman** (**HJB**) **equation** gives a necessary and sufficient condition for optimality of a control with respect to a loss function. It is, in general, a nonlinear partial differential equation in the value function, which means its solution *is* the value function itself. Once the solution is known, it can be used to obtain the optimal control by taking the maximizer/minimizer of the Hamiltonian involved in the HJB equation.
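In one common form (symbols illustrative: $f$ the system dynamics, $C$ a running cost, $D$ a terminal cost), the HJB equation for the value function $V$ is:

```latex
\frac{\partial V}{\partial t}(x, t)
  + \min_{u} \Bigl\{ \nabla_x V(x, t) \cdot f(x, u) + C(x, u) \Bigr\} = 0,
\qquad V(x, T) = D(x)
```

The minimizing $u$ at each $(x, t)$ is the optimal feedback control once $V$ is known.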

**Model predictive control** (**MPC**) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries, such as chemical plants and oil refineries, since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it optimizes the current timeslot while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but implementing only the current timeslot and then optimizing again, repeatedly; this receding horizon distinguishes MPC from the linear-quadratic regulator (LQR). MPC can also anticipate future events and take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.
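A minimal sketch of the receding-horizon loop described above, using an unconstrained scalar LQ subproblem (solved by a backward Riccati recursion) as a stand-in for the constrained optimization a real MPC solver would perform; all names and parameter values are illustrative:

```python
def mpc_step(a, b, q, r, x0, horizon):
    """One receding-horizon step for the scalar system x+ = a*x + b*u:
    solve the finite-horizon LQ problem backward, return only the
    first control of the optimal sequence."""
    p = q      # terminal cost weight
    k = 0.0
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)  # stage feedback gain
        p = q + a * p * (a - b * k)        # Riccati update
    return -k * x0                          # apply only the first move

def simulate(a, b, q, r, x0, steps, horizon):
    """Closed loop: re-optimize at every step, implement the first move."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = mpc_step(a, b, q, r, x, horizon)
        x = a * x + b * u
        traj.append(x)
    return traj
```

Even for the unstable plant `a = 1.2`, the closed loop drives the state to zero; a production MPC would replace `mpc_step` with a constrained solver over the same horizon.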

A **Bellman equation**, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's “principle of optimality” prescribes.
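In a typical discrete-time formulation (illustrative notation: $F$ the stage payoff, $\Gamma(x)$ the feasible choices at state $x$, $T$ the transition map, $\beta$ a discount factor), the Bellman equation reads:

```latex
V(x) = \max_{a \in \Gamma(x)} \bigl\{ F(x, a) + \beta\, V\bigl(T(x, a)\bigr) \bigr\}
```

The left side is the value of the whole problem; the right side splits it into the payoff from the initial choice $a$ plus the discounted value of the remaining problem.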

In game theory, a **stochastic game**, introduced by Lloyd Shapley in the early 1950s, is a dynamic game with **probabilistic transitions** played by one or more players. The game is played in a sequence of stages. At the beginning of each stage the game is in some **state**. The players select actions and each player receives a **payoff** that depends on the current state and the chosen actions. The game then moves to a new random state whose distribution depends on the previous state and the actions chosen by the players. The procedure is repeated at the new state and play continues for a finite or infinite number of stages. The total payoff to a player is often taken to be the discounted sum of the stage payoffs or the limit inferior of the averages of the stage payoffs.
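The two payoff criteria mentioned above can be written (in illustrative notation, with $s_t$ the state, $a_t$ the action profile, $g_i$ player $i$'s stage payoff, and $\lambda$ a discount factor) as:

```latex
u_i = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \lambda^{t}\, g_i(s_t, a_t)\right]
\qquad \text{or} \qquad
u_i = \liminf_{T \to \infty} \mathbb{E}\!\left[\frac{1}{T}\sum_{t=0}^{T-1} g_i(s_t, a_t)\right]
```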

In game theory, a **princess and monster game** is a pursuit-evasion game played by two players in a region. The game was devised by Rufus Isaacs and published in his book *Differential Games* (1965) as follows:

The monster searches for the princess, the time required being the payoff. They are both in a totally dark room, but they are each cognizant of its boundary. Capture means that the distance between the princess and the monster is within the capture radius, which is assumed to be small in comparison with the dimension of the room. The monster, supposed highly intelligent, moves at a known speed. We permit the princess full freedom of locomotion.

In the theory of stochastic processes, the **filtering problem** is a mathematical model for a number of state estimation problems in signal processing and related fields. The general idea is to establish a "best estimate" for the true value of some system from an incomplete, potentially noisy set of observations of that system. The problem of optimal non-linear filtering was solved by Ruslan L. Stratonovich; see also the work of Harold J. Kushner and of Moshe Zakai, who introduced a simplified dynamics for the unnormalized conditional law of the filter known as the Zakai equation. The solution, however, is infinite-dimensional in the general case. Certain approximations and special cases are well understood: for example, linear filters are optimal for Gaussian random variables, and are known as the Wiener filter and the Kalman–Bucy filter. More generally, because the solution is infinite-dimensional, implementing it in a computer with finite memory requires finite-dimensional approximations. A finite-dimensional approximate nonlinear filter may be based more on heuristics, such as the extended Kalman filter or the assumed density filters, or be more methodologically oriented, such as the projection filters, some sub-families of which are shown to coincide with the assumed density filters.
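As a concrete instance of the linear-Gaussian case mentioned above, here is a minimal scalar Kalman filter sketch (the scalar setting and all variable names are simplifying assumptions made for this example):

```python
def kalman_filter(observations, a, c, q, r, x0, p0):
    """Scalar Kalman filter for the model
        x_{t+1} = a * x_t + w_t   (process noise variance q)
        y_t     = c * x_t + v_t   (observation noise variance r)
    starting from prior mean x0 and prior variance p0.
    Returns the sequence of filtered state estimates."""
    x_hat, p = x0, p0
    estimates = []
    for y in observations:
        # predict: propagate mean and variance through the dynamics
        x_pred = a * x_hat
        p_pred = a * p * a + q
        # update: correct the prediction with the new observation
        k = p_pred * c / (c * p_pred * c + r)   # Kalman gain
        x_hat = x_pred + k * (y - c * x_pred)
        p = (1.0 - k * c) * p_pred
        estimates.append(x_hat)
    return estimates
```

With a static state (`a = 1`, `q = 0`) the recursion reduces to a running average of the observations, so the estimate converges to the true value.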

**Stochastic control** or **stochastic optimal control** is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, suitably defined, despite the presence of this noise. The context may be either discrete time or continuous time.

**Yu-Chi "Larry" Ho** is a Chinese-American mathematician, control theorist, and a professor at the School of Engineering and Applied Sciences, Harvard University.

**Mathematical economics** is the application of mathematical methods to represent theories and analyze problems in economics. By convention, these applied methods go beyond simple geometry and include differential and integral calculus, difference and differential equations, matrix algebra, mathematical programming, and other computational methods. Proponents of this approach claim that it allows the formulation of theoretical relationships with rigor, generality, and simplicity.

**Jan Hendrik van Schuppen** is a Dutch mathematician and Professor at the Department of Mathematics of the Vrije Universiteit, known for his contributions in the field of systems theory, particularly on control theory and system identification, on probability, and on a number of related practical applications.

The **Sethi model** was developed by Suresh P. Sethi and describes the process of how sales evolve over time in response to advertising. The rate of change in sales depends on three effects: response to advertising that acts positively on the unsold portion of the market, the loss due to forgetting or possibly due to competitive factors that act negatively on the sold portion of the market, and a random effect that can go either way.
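The model is commonly stated as a stochastic differential equation of roughly the following form (notation varies across presentations, so treat the symbols as illustrative):

```latex
dx_t = \bigl[\, r\, u_t \sqrt{1 - x_t} - \delta\, x_t \,\bigr]\,dt + \sigma(x_t)\,dw_t, \qquad x_0 \in [0, 1]
```

Here $x_t$ is the sold portion of the market, $u_t$ the advertising rate, $r$ the advertising effectiveness, $\delta$ the decay constant, and $w_t$ a Wiener process capturing the random effect.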

**Viability theory** is an area of mathematics that studies the evolution of dynamical systems under constraints on the system state. It was developed to formalize problems arising in the study of various natural and social phenomena, and has close ties to the theories of optimal control and set-valued analysis.

**Leon Petrosjan** is a professor of Applied Mathematics and the Head of the Department of Mathematical Game theory and Statistical Decision Theory at the St. Petersburg University, Russia.

**Mean field game theory** is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines and his co-workers, and independently and around the same time by mathematicians Jean-Michel Lasry and Pierre-Louis Lions.
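In the Lasry-Lions PDE formulation, a mean field game is often written schematically (signs, boundary conditions, and regularity assumptions vary) as a coupled system: a backward HJB equation for a representative agent's value function $u$ and a forward Fokker-Planck equation for the population density $m$:

```latex
-\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m),
\qquad
\partial_t m - \nu \Delta m - \operatorname{div}\bigl( m\, \partial_p H(x, \nabla u) \bigr) = 0
```

Each agent optimizes against the population distribution $m$, while $m$ in turn evolves under the agents' optimal behavior.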

**Ji-Feng Zhang** was born in Shandong, China. He is currently the vice-chair of the technical board of the International Federation of Automatic Control (IFAC), the vice-president of the Systems Engineering Society of China (SESC), the vice-president of the Chinese Association of Automation (CAA), the chair of the technical committee on Control Theory (CAA), and the editor-in-chief for both *All About Systems and Control* and the *Journal of Systems Science and Mathematical Sciences*.

**Vivek Shripad Borkar** is an Indian electrical engineer, mathematician and an Institute chair professor at the Indian Institute of Technology, Mumbai. He is known for introducing an analytical paradigm in stochastic optimal control processes and is an elected fellow of all three major Indian science academies, viz. the Indian Academy of Sciences, the Indian National Science Academy and the National Academy of Sciences, India. He also holds elected fellowships of The World Academy of Sciences, the Institute of Electrical and Electronics Engineers, the Indian National Academy of Engineering and the American Mathematical Society. The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards, for his contributions to engineering sciences in 1992. He received the TWAS Prize of the World Academy of Sciences in 2009.

**Hamidou Tembine** is a French game theorist and researcher specializing in evolutionary games and co-opetitive mean-field-type games. He is a Global Network Assistant Professor at New York University. He is also the principal investigator and director of the Game Theory and Learning Laboratory at New York University.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
