Differential dynamic programming

Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne [1] and subsequently analysed in Jacobson and Mayne's eponymous book. [2] The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method. [3] [4]

Finite-horizon discrete-time problems

The dynamics

$$\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i, \mathbf{u}_i) \qquad (1)$$

describe the evolution of the state $\mathbf{x}_i$ given the control $\mathbf{u}_i$ from time $i$ to time $i+1$. The total cost $J_0$ is the sum of running costs $\ell$ and final cost $\ell_f$, incurred when starting from state $\mathbf{x}$ and applying the control sequence $\mathbf{U} \equiv \{\mathbf{u}_0, \mathbf{u}_1, \dots, \mathbf{u}_{N-1}\}$ until the horizon $N$ is reached:

$$J_0(\mathbf{x}, \mathbf{U}) = \sum_{i=0}^{N-1} \ell(\mathbf{x}_i, \mathbf{u}_i) + \ell_f(\mathbf{x}_N),$$

where $\mathbf{x}_0 \equiv \mathbf{x}$ and the $\mathbf{x}_i$ for $i > 0$ are given by Eq. 1. The solution of the optimal control problem is the minimizing control sequence $\mathbf{U}^*(\mathbf{x}) \equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x}, \mathbf{U})$. Trajectory optimization means finding $\mathbf{U}^*$ for a particular $\mathbf{x}_0$, rather than for all possible initial states.
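This cost definition maps directly onto a rollout loop. The following is a minimal sketch, assuming generic Python callables `f`, `running_cost`, and `final_cost` (all names here are illustrative, not from the literature):

```python
import numpy as np

def total_cost(x0, U, f, running_cost, final_cost):
    """Roll out the dynamics (Eq. 1) from x0 under the control sequence U
    and accumulate the total cost J_0."""
    x, J = np.asarray(x0, dtype=float), 0.0
    for u in U:
        J += running_cost(x, u)   # running cost l(x_i, u_i)
        x = f(x, u)               # dynamics x_{i+1} = f(x_i, u_i)
    return J + final_cost(x)      # final cost l_f(x_N)

# Example: a double integrator with quadratic costs (illustrative choices).
dt = 0.1
f = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u[0]])
running_cost = lambda x, u: 0.5 * dt * (x @ x + u @ u)
final_cost = lambda x: 10.0 * (x @ x)
print(total_cost([1.0, 0.0], [np.zeros(1)] * 50, f, running_cost, final_cost))
```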

Dynamic programming

Let $\mathbf{U}_i \equiv \{\mathbf{u}_i, \mathbf{u}_{i+1}, \dots, \mathbf{u}_{N-1}\}$ be the partial control sequence and define the cost-to-go $J_i$ as the partial sum of costs from $i$ to $N$:

$$J_i(\mathbf{x}, \mathbf{U}_i) = \sum_{j=i}^{N-1} \ell(\mathbf{x}_j, \mathbf{u}_j) + \ell_f(\mathbf{x}_N).$$

The optimal cost-to-go or value function at time $i$ is the cost-to-go given the minimizing control sequence:

$$V(\mathbf{x}, i) \equiv \min_{\mathbf{U}_i} J_i(\mathbf{x}, \mathbf{U}_i).$$

Setting $V(\mathbf{x}, N) \equiv \ell_f(\mathbf{x}_N)$, the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

$$V(\mathbf{x}, i) = \min_{\mathbf{u}} \left[ \ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1) \right] \qquad (2)$$

This is the Bellman equation.
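To make Eq. 2 concrete, here is a minimal sketch of the exact backward recursion for a one-dimensional problem with discretized state and control grids (all names are illustrative). The exhaustive minimization over grids below is exactly the cost that DDP avoids by working with local quadratic models instead:

```python
import numpy as np

def dp_backward(f, l, lf, xs, us, N):
    """Exact dynamic programming (Eq. 2) for a scalar state discretized on
    the grid xs, with controls discretized on the grid us."""
    V = np.array([lf(x) for x in xs])          # V(., N) = l_f
    for i in range(N - 1, -1, -1):             # proceed backwards in time
        V_new = np.empty_like(V)
        for j, x in enumerate(xs):
            # min over u of l(x,u) + V(f(x,u), i+1), snapping f(x,u) to the grid
            V_new[j] = min(l(x, u) + V[np.argmin(np.abs(xs - f(x, u)))] for u in us)
        V = V_new
    return V                                   # approximation of V(., 0)
```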

Differential dynamic programming

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

$$\ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1)$$

is the argument of the $\min$ operator in Eq. 2, let $Q$ be the variation of this quantity around the $i$-th $(\mathbf{x}, \mathbf{u})$ pair:

$$Q(\delta\mathbf{x}, \delta\mathbf{u}) \equiv \ell(\mathbf{x} + \delta\mathbf{x}, \mathbf{u} + \delta\mathbf{u}) + V(\mathbf{f}(\mathbf{x} + \delta\mathbf{x}, \mathbf{u} + \delta\mathbf{u}), i+1) - \left[ \ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1) \right]$$

and expand to second order:

$$\approx \frac{1}{2} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}^{\mathsf{T}} \begin{bmatrix} 0 & Q_{\mathbf{x}}^{\mathsf{T}} & Q_{\mathbf{u}}^{\mathsf{T}} \\ Q_{\mathbf{x}} & Q_{\mathbf{xx}} & Q_{\mathbf{xu}} \\ Q_{\mathbf{u}} & Q_{\mathbf{ux}} & Q_{\mathbf{uu}} \end{bmatrix} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix} \qquad (3)$$

The notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout. [5] Dropping the index $i$ for readability and letting primes denote the next time-step, $V' \equiv V(i+1)$, the expansion coefficients are

$$Q_{\mathbf{x}} = \ell_{\mathbf{x}} + \mathbf{f}_{\mathbf{x}}^{\mathsf{T}} V'_{\mathbf{x}}$$
$$Q_{\mathbf{u}} = \ell_{\mathbf{u}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{x}}$$
$$Q_{\mathbf{xx}} = \ell_{\mathbf{xx}} + \mathbf{f}_{\mathbf{x}}^{\mathsf{T}} V'_{\mathbf{xx}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{xx}}$$
$$Q_{\mathbf{ux}} = \ell_{\mathbf{ux}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{xx}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{ux}}$$
$$Q_{\mathbf{uu}} = \ell_{\mathbf{uu}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{xx}} \mathbf{f}_{\mathbf{u}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{uu}}$$

The last terms in the last three equations denote the contraction of a vector with a tensor.
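Under this convention the coefficients translate line-for-line into code. A minimal sketch, assuming the derivative arrays are precomputed by the caller, and assuming (a choice of this sketch, not fixed by the article) that the second-derivative tensors of the dynamics are stored with the output index of $\mathbf{f}$ first:

```python
import numpy as np

def q_coefficients(lx, lu, lxx, lux, luu, fx, fu, fxx, fux, fuu, Vx, Vxx):
    """Expansion coefficients of Eq. 3 at one time step; Vx, Vxx are the
    next-step value derivatives V'_x, V'_xx."""
    Qx = lx + fx.T @ Vx
    Qu = lu + fu.T @ Vx
    # np.tensordot contracts V'_x with the first (output) index of the
    # second-derivative tensors of the dynamics.
    Qxx = lxx + fx.T @ Vxx @ fx + np.tensordot(Vx, fxx, axes=(0, 0))
    Qux = lux + fu.T @ Vxx @ fx + np.tensordot(Vx, fux, axes=(0, 0))
    Quu = luu + fu.T @ Vxx @ fu + np.tensordot(Vx, fuu, axes=(0, 0))
    return Qx, Qu, Qxx, Qux, Quu
```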

Minimizing the quadratic approximation (3) with respect to $\delta\mathbf{u}$, we have

$$\delta\mathbf{u}^* = \operatorname{argmin}_{\delta\mathbf{u}} Q(\delta\mathbf{x}, \delta\mathbf{u}) = -Q_{\mathbf{uu}}^{-1}(Q_{\mathbf{u}} + Q_{\mathbf{ux}}\,\delta\mathbf{x}) \qquad (4)$$

giving an open-loop term $\mathbf{k} = -Q_{\mathbf{uu}}^{-1} Q_{\mathbf{u}}$ and a feedback gain term $\mathbf{K} = -Q_{\mathbf{uu}}^{-1} Q_{\mathbf{ux}}$. Plugging the result back into (3), we now have a quadratic model of the value at time $i$:

$$\Delta V(i) = -\tfrac{1}{2} Q_{\mathbf{u}}^{\mathsf{T}} Q_{\mathbf{uu}}^{-1} Q_{\mathbf{u}}$$
$$V_{\mathbf{x}}(i) = Q_{\mathbf{x}} - Q_{\mathbf{xu}} Q_{\mathbf{uu}}^{-1} Q_{\mathbf{u}}$$
$$V_{\mathbf{xx}}(i) = Q_{\mathbf{xx}} - Q_{\mathbf{xu}} Q_{\mathbf{uu}}^{-1} Q_{\mathbf{ux}}$$

Recursively computing the local quadratic models of $V(i)$ and the control modifications $\{\mathbf{k}(i), \mathbf{K}(i)\}$, from $i = N-1$ down to $i = 1$, constitutes the backward pass. As above, the value function is initialized with $V(\mathbf{x}, N) \equiv \ell_f(\mathbf{x}_N)$. Once the backward pass is completed, a forward pass computes a new trajectory:

$$\hat{\mathbf{x}}(1) = \mathbf{x}(1)$$
$$\hat{\mathbf{u}}(i) = \mathbf{u}(i) + \mathbf{k}(i) + \mathbf{K}(i)\left(\hat{\mathbf{x}}(i) - \mathbf{x}(i)\right)$$
$$\hat{\mathbf{x}}(i+1) = \mathbf{f}(\hat{\mathbf{x}}(i), \hat{\mathbf{u}}(i))$$

The backward passes and forward passes are iterated until convergence.
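Both passes fit in a few dozen lines. The sketch below uses the common iLQR simplification of dropping the second-order dynamics tensors from the coefficients above; the callback names (`f_derivs`, `l_derivs`, `lf_derivs`) are assumptions of this sketch, supplied by the user:

```python
import numpy as np

def ddp_iteration(xs, us, f, f_derivs, l_derivs, lf_derivs):
    """One backward/forward cycle.  xs: list of N+1 states, us: list of N
    controls.  f_derivs(x, u) -> (fx, fu); l_derivs(x, u) ->
    (lx, lu, lxx, lux, luu); lf_derivs(x) -> (Vx, Vxx)."""
    N = len(us)
    Vx, Vxx = lf_derivs(xs[N])                 # initialize with l_f
    ks, Ks = [None] * N, [None] * N
    for i in range(N - 1, -1, -1):             # backward pass
        fx, fu = f_derivs(xs[i], us[i])
        lx, lu, lxx, lux, luu = l_derivs(xs[i], us[i])
        Qx, Qu = lx + fx.T @ Vx, lu + fu.T @ Vx
        Qxx = lxx + fx.T @ Vxx @ fx
        Qux = lux + fu.T @ Vxx @ fx
        Quu = luu + fu.T @ Vxx @ fu
        k = -np.linalg.solve(Quu, Qu)          # open-loop term
        K = -np.linalg.solve(Quu, Qux)         # feedback gain
        # Quadratic model of V at time i (general form, valid for any k, K).
        Vx = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
        Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
        ks[i], Ks[i] = k, K
    new_xs, new_us = [xs[0]], []
    for i in range(N):                         # forward pass
        u = us[i] + ks[i] + Ks[i] @ (new_xs[i] - xs[i])
        new_us.append(u)
        new_xs.append(f(new_xs[i], u))
    return new_xs, new_us
```

A full implementation would also track $\Delta V$ for convergence tests and wrap the forward pass in a line-search, as discussed next.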

Differential dynamic programming is a second-order algorithm like Newton's method. It can therefore take large steps toward the minimum, and often requires regularization and/or line-search to achieve convergence. [6] [7] Regularization in the DDP context means ensuring that the matrix $Q_{\mathbf{uu}}$ in Eq. 4 is positive definite. Line-search in DDP amounts to scaling the open-loop control modification $\mathbf{k}$ by some $0 < \alpha \leq 1$.
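One common regularization scheme (a sketch of one option, not the only one studied in [6] [7]) shifts $Q_{\mathbf{uu}}$ by a multiple of the identity until a Cholesky factorization succeeds, and the line-search scales only the open-loop term:

```python
import numpy as np

def regularized_gains(Quu, Qu, Qux, mu=1e-6):
    """Shift Quu by mu*I until it is positive definite, then solve for the
    open-loop term k and feedback gain K (Levenberg-Marquardt-style)."""
    n = Quu.shape[0]
    while True:
        try:
            np.linalg.cholesky(Quu + mu * np.eye(n))   # positive-definite check
            break
        except np.linalg.LinAlgError:
            mu *= 10.0                                 # increase the shift
    Quu_reg = Quu + mu * np.eye(n)
    return -np.linalg.solve(Quu_reg, Qu), -np.linalg.solve(Quu_reg, Qux)

# Line-search in the forward pass: scale only the open-loop modification,
#     u = u_nominal + alpha * k + K @ (x - x_nominal),   0 < alpha <= 1,
# backtracking on alpha until the new trajectory's total cost decreases.
```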

Monte Carlo version

Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming. [8] [9] [10] It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. This way the quantities of DDP can be matched to the statistics of a multidimensional normal distribution. The statistics can be recomputed from sampled trajectories without differentiation.
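The statistics-matching step can be illustrated as follows. This is only a sketch of the general idea (treating cost as the energy of a Boltzmann distribution and fitting a Gaussian to weighted samples), not the specific estimator of [8] [9] [10]:

```python
import numpy as np

def boltzmann_weighted_stats(controls, costs, temperature=1.0):
    """Fit a Gaussian to sampled controls weighted by exp(-cost / T).
    controls: (K, m) array of K sampled controls; costs: (K,) array."""
    w = np.exp(-(costs - costs.min()) / temperature)   # Boltzmann weights
    w /= w.sum()
    mean = w @ controls                                # weighted mean
    centered = controls - mean
    cov = centered.T @ (w[:, None] * centered)         # weighted covariance
    return mean, cov
```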

Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming. [11] This creates a link between differential dynamic programming and path integral control, [12] which is a framework of stochastic optimal control.

Constrained problems

Interior Point Differential Dynamic Programming (IPDDP) is an interior-point method generalization of DDP that can address optimal control problems with nonlinear state and input constraints. [13]


References

  1. Mayne, D. Q. (1966). "A second-order gradient method of optimizing non-linear discrete time systems". International Journal of Control. 3: 85–95. doi:10.1080/00207176608921369.
  2. Jacobson, David H.; Mayne, David Q. (1970). Differential Dynamic Programming. New York: American Elsevier. ISBN 978-0-444-00070-5.
  3. de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control. 47 (5): 1539–1553. doi:10.1080/00207178808906114. ISSN 0020-7179.
  4. Liao, L. Z.; Shoemaker, C. A. (1992). "Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems". Cornell University. hdl:1813/5474.
  5. Morimoto, J.; Zeglin, G.; Atkeson, C. G. (2003). "Minimax differential dynamic programming: Application to a biped walking robot". Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003). Vol. 2. pp. 1927–1932.
  6. Liao, L. Z.; Shoemaker, C. A. (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control. 36 (6): 692. doi:10.1109/9.86943.
  7. Tassa, Y. (2011). Theory and implementation of bio-mimetic motor controllers (PDF) (Thesis). Hebrew University. Archived from the original (PDF) on 2016-03-04. Retrieved 2012-02-27.
  8. "Sampled differential dynamic programming". 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). doi:10.1109/IROS.2016.7759229. S2CID 1338737.
  9. Rajamäki, Joose; Hämäläinen, Perttu (June 2018). "Regularizing Sampled Differential Dynamic Programming". 2018 Annual American Control Conference (ACC). pp. 2182–2189. doi:10.23919/ACC.2018.8430799. S2CID 243932441. Retrieved 2018-10-19.
  10. Rajamäki, Joose (2018). Random Search Algorithms for Optimal Control. Aalto University. ISBN 978-952-60-8156-4. ISSN 1799-4942.
  11. Lefebvre, Tom; Crevecoeur, Guillaume (July 2019). "Path Integral Policy Improvement with Differential Dynamic Programming". 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). pp. 739–745. doi:10.1109/AIM.2019.8868359. hdl:1854/LU-8623968. ISBN 978-1-7281-2493-3. S2CID 204816072.
  12. Theodorou, Evangelos; Buchli, Jonas; Schaal, Stefan (May 2010). "Reinforcement learning of motor skills in high dimensions: A path integral approach". 2010 IEEE International Conference on Robotics and Automation. pp. 2397–2403. doi:10.1109/ROBOT.2010.5509336. ISBN 978-1-4244-5038-1. S2CID 15116370.
  13. Pavlov, Andrei; Shames, Iman; Manzie, Chris (2020). "Interior Point Differential Dynamic Programming". arXiv:2004.12710 [math.OC].