Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3][4]
Finite-horizon discrete-time problems

The dynamics

$$\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i, \mathbf{u}_i) \qquad (1)$$

describe the evolution of the state $\mathbf{x}$ given the control $\mathbf{u}$ from time $i$ to time $i+1$. The total cost $J_0$ is the sum of running costs $\ell$ and final cost $\ell_f$, incurred when starting from state $\mathbf{x}$ and applying the control sequence $\mathbf{U} \equiv \{\mathbf{u}_0, \mathbf{u}_1, \dots, \mathbf{u}_{N-1}\}$ until the horizon is reached:

$$J_0(\mathbf{x}, \mathbf{U}) = \sum_{i=0}^{N-1} \ell(\mathbf{x}_i, \mathbf{u}_i) + \ell_f(\mathbf{x}_N),$$

where $\mathbf{x}_0 \equiv \mathbf{x}$, and the $\mathbf{x}_i$ for $i > 0$ are given by Eq. 1. The solution of the optimal control problem is the minimizing control sequence $\mathbf{U}^*(\mathbf{x}) \equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x}, \mathbf{U})$. Trajectory optimization means finding $\mathbf{U}^*(\mathbf{x})$ for a particular initial state $\mathbf{x}_0$, rather than for all possible initial states.
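For concreteness, the following minimal sketch (not part of the original article) shows how the total cost of a candidate control sequence might be evaluated. The double-integrator dynamics, the cost weights, the horizon, and all function names are assumptions made purely for the example.

```python
import numpy as np

# Hypothetical discrete-time double-integrator dynamics x_{i+1} = f(x_i, u_i).
def f(x, u, dt=0.1):
    pos, vel = x
    return np.array([pos + dt * vel, vel + dt * u[0]])

# Assumed quadratic running cost l(x, u) and final cost l_f(x).
def running_cost(x, u):
    return 0.5 * (x @ x) + 0.05 * (u @ u)

def final_cost(x):
    return 5.0 * (x @ x)

def total_cost(x0, U):
    """J_0(x, U): roll the dynamics forward from x0 under the control
    sequence U, accumulating running costs, then add the final cost."""
    x, J = x0.copy(), 0.0
    for u in U:
        J += running_cost(x, u)
        x = f(x, u)
    return J + final_cost(x)

x0 = np.array([1.0, 0.0])   # a particular initial state
U = np.zeros((50, 1))       # candidate control sequence over horizon N = 50
print(total_cost(x0, U))
```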
Dynamic programming
Let $\mathbf{U}_i \equiv \{\mathbf{u}_i, \mathbf{u}_{i+1}, \dots, \mathbf{u}_{N-1}\}$ be the partial control sequence and define the cost-to-go $J_i$ as the partial sum of costs from $i$ to $N$:

$$J_i(\mathbf{x}, \mathbf{U}_i) = \sum_{j=i}^{N-1} \ell(\mathbf{x}_j, \mathbf{u}_j) + \ell_f(\mathbf{x}_N).$$
The optimal cost-to-go or value function at time $i$ is the cost-to-go given the minimizing control sequence:

$$V(\mathbf{x}, i) \equiv \min_{\mathbf{U}_i} J_i(\mathbf{x}, \mathbf{U}_i).$$
Setting $V(\mathbf{x}, N) \equiv \ell_f(\mathbf{x}_N)$, the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

$$V(\mathbf{x}, i) = \min_{\mathbf{u}} \bigl[ \ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1) \bigr]. \qquad (2)$$

This is the Bellman equation.
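The following toy sketch (an illustration, not from the article) applies this backward recursion exactly on a coarse grid for an assumed one-dimensional problem; the grids, dynamics, and costs are arbitrary choices for the example.

```python
import numpy as np

# Toy 1-D problem: exact dynamic programming on a grid (not yet DDP).
xs = np.linspace(-2.0, 2.0, 41)     # state grid
us = np.linspace(-1.0, 1.0, 21)     # control grid
N = 20                              # horizon

f = lambda x, u: np.clip(x + 0.1 * u, xs[0], xs[-1])   # dynamics
l = lambda x, u: 0.1 * (x**2 + 0.1 * u**2)              # running cost
lf = lambda x: x**2                                      # final cost

nearest = lambda x: np.abs(xs - x).argmin()              # grid index lookup

V = lf(xs)                          # V(x, N) = l_f(x)
for i in reversed(range(N)):        # proceed backwards in time
    V_new = np.empty_like(V)
    for j, x in enumerate(xs):
        # minimize over a single control: l(x, u) + V(f(x, u), i+1)
        V_new[j] = min(l(x, u) + V[nearest(f(x, u))] for u in us)
    V = V_new

print(V[nearest(1.0)])              # approximate optimal cost-to-go from x = 1
```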
Differential dynamic programming

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

$$\ell(\mathbf{x}, \mathbf{u}) + V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1)$$
is the argument of the $\min[\cdot]$ operator in Eq. 2, let $Q$ be the variation of this quantity around the $i$-th $(\mathbf{x}, \mathbf{u})$ pair:

$$Q(\delta\mathbf{x}, \delta\mathbf{u}) \equiv \ell(\mathbf{x}+\delta\mathbf{x}, \mathbf{u}+\delta\mathbf{u}) + V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x}, \mathbf{u}+\delta\mathbf{u}), i+1) - \ell(\mathbf{x}, \mathbf{u}) - V(\mathbf{f}(\mathbf{x}, \mathbf{u}), i+1)$$
and expand to second order:

$$Q(\delta\mathbf{x}, \delta\mathbf{u}) \approx \frac{1}{2} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}^{\mathsf T} \begin{bmatrix} 0 & Q_{\mathbf{x}}^{\mathsf T} & Q_{\mathbf{u}}^{\mathsf T} \\ Q_{\mathbf{x}} & Q_{\mathbf{x}\mathbf{x}} & Q_{\mathbf{x}\mathbf{u}} \\ Q_{\mathbf{u}} & Q_{\mathbf{u}\mathbf{x}} & Q_{\mathbf{u}\mathbf{u}} \end{bmatrix} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix} \qquad (3)$$
The $Q$ notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index $i$ for readability and using primes to denote the next time-step ($V' \equiv V(i+1)$), the expansion coefficients are

$$Q_{\mathbf{x}} = \ell_{\mathbf{x}} + \mathbf{f}_{\mathbf{x}}^{\mathsf T} V'_{\mathbf{x}}$$
$$Q_{\mathbf{u}} = \ell_{\mathbf{u}} + \mathbf{f}_{\mathbf{u}}^{\mathsf T} V'_{\mathbf{x}}$$
$$Q_{\mathbf{x}\mathbf{x}} = \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_{\mathbf{x}}^{\mathsf T} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{x}\mathbf{x}}$$
$$Q_{\mathbf{u}\mathbf{u}} = \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_{\mathbf{u}}^{\mathsf T} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_{\mathbf{u}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u}\mathbf{u}}$$
$$Q_{\mathbf{u}\mathbf{x}} = \ell_{\mathbf{u}\mathbf{x}} + \mathbf{f}_{\mathbf{u}}^{\mathsf T} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u}\mathbf{x}}.$$
The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation (3) with respect to $\delta\mathbf{u}$, we have

$$\delta\mathbf{u}^* = \operatorname{argmin}_{\delta\mathbf{u}} Q(\delta\mathbf{x}, \delta\mathbf{u}) = -Q_{\mathbf{u}\mathbf{u}}^{-1} (Q_{\mathbf{u}} + Q_{\mathbf{u}\mathbf{x}}\,\delta\mathbf{x}), \qquad (4)$$

giving an open-loop term $\mathbf{k} = -Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}}$ and a feedback gain term $\mathbf{K} = -Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}\mathbf{x}}$. Plugging the result back into (3), we now have a quadratic model of the value at time $i$:

$$\Delta V(i) = -\tfrac{1}{2} Q_{\mathbf{u}}^{\mathsf T} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}}$$
$$V_{\mathbf{x}}(i) = Q_{\mathbf{x}} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}}$$
$$V_{\mathbf{x}\mathbf{x}}(i) = Q_{\mathbf{x}\mathbf{x}} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}\mathbf{x}}.$$
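As a rough illustration of how these formulas might be coded, here is a sketch of one backward-pass step in NumPy. The function name backward_step, the argument layout, and the use of an explicit matrix inverse are assumptions made for the example, not part of the article; a practical implementation would typically use a Cholesky solve instead of forming the inverse.

```python
import numpy as np

def backward_step(l_x, l_u, l_xx, l_uu, l_ux, f_x, f_u, V_x, V_xx,
                  f_xx=None, f_ux=None, f_uu=None):
    """One backward-pass step at time i.

    Inputs are the derivatives of the running cost and dynamics at the
    nominal (x, u) pair (denominator layout, n-dim state, m-dim control)
    and the quadratic model (V_x, V_xx) of the value at time i+1.  The
    optional tensors f_xx (n,n,n), f_ux (n,m,n), f_uu (n,m,m) carry the
    second-order dynamics terms; omitting them gives the Gauss-Newton
    (iLQR) approximation.
    """
    Q_x = l_x + f_x.T @ V_x
    Q_u = l_u + f_u.T @ V_x
    Q_xx = l_xx + f_x.T @ V_xx @ f_x
    Q_ux = l_ux + f_u.T @ V_xx @ f_x
    Q_uu = l_uu + f_u.T @ V_xx @ f_u
    if f_xx is not None:
        # Contraction of V'_x with the second-derivative tensors of f.
        Q_xx = Q_xx + np.tensordot(V_x, f_xx, axes=1)
        Q_ux = Q_ux + np.tensordot(V_x, f_ux, axes=1)
        Q_uu = Q_uu + np.tensordot(V_x, f_uu, axes=1)

    # Eq. 4: minimize the quadratic model over du.
    Q_uu_inv = np.linalg.inv(Q_uu)
    k = -Q_uu_inv @ Q_u            # open-loop term
    K = -Q_uu_inv @ Q_ux           # feedback gain

    # Quadratic model of the value at time i.
    dV = -0.5 * Q_u @ Q_uu_inv @ Q_u
    V_x_new = Q_x - Q_ux.T @ Q_uu_inv @ Q_u
    V_xx_new = Q_xx - Q_ux.T @ Q_uu_inv @ Q_ux
    return k, K, dV, V_x_new, V_xx_new
```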
Recursively computing the local quadratic models of $V(i)$ and the control modifications $\{\mathbf{k}(i), \mathbf{K}(i)\}$, from $i = N-1$ down to $i = 1$, constitutes the backward pass. As above, the value is initialized with $V(\mathbf{x}, N) \equiv \ell_f(\mathbf{x}_N)$. Once the backward pass is completed, a forward pass computes a new trajectory:

$$\hat{\mathbf{x}}(1) = \mathbf{x}(1)$$
$$\hat{\mathbf{u}}(i) = \mathbf{u}(i) + \mathbf{k}(i) + \mathbf{K}(i) \bigl(\hat{\mathbf{x}}(i) - \mathbf{x}(i)\bigr)$$
$$\hat{\mathbf{x}}(i+1) = \mathbf{f}\bigl(\hat{\mathbf{x}}(i), \hat{\mathbf{u}}(i)\bigr)$$
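A corresponding sketch of the forward pass, again with hypothetical names (forward_pass, x_nom, u_nom) and under the assumption that the per-step gains k(i) and K(i) from the backward pass are available; the alpha argument anticipates the line-search discussed in the next section.

```python
import numpy as np

def forward_pass(f, x_nom, u_nom, k, K, alpha=1.0):
    """Roll out a new trajectory from the nominal one using the open-loop
    terms k[i] and feedback gains K[i] computed by the backward pass.
    alpha scales the open-loop modification (1.0 is a full step)."""
    N = len(u_nom)
    x_new = np.zeros_like(x_nom)    # x_nom has shape (N+1, n)
    u_new = np.zeros_like(u_nom)    # u_nom has shape (N, m)
    x_new[0] = x_nom[0]             # the initial state is unchanged
    for i in range(N):
        u_new[i] = u_nom[i] + alpha * k[i] + K[i] @ (x_new[i] - x_nom[i])
        x_new[i + 1] = f(x_new[i], u_new[i])
    return x_new, u_new
```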
The backward passes and forward passes are iterated until convergence. If the second-order dynamics terms (the tensor contractions above) are dropped, so that the $Q$ Hessians reduce to their Gauss–Newton approximation, the method becomes the iterative Linear Quadratic Regulator (iLQR).[6]
Regularization and line-search
Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[7][8] Regularization in the DDP context means ensuring that the matrix $Q_{\mathbf{u}\mathbf{u}}$ in Eq. 4 is positive definite. Line-search in DDP amounts to scaling the open-loop control modification $\mathbf{k}$ by some $0 < \alpha \leq 1$.
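A minimal sketch of both ideas follows, with assumed names (regularize, line_search, evaluate) and an assumed doubling schedule for the regularization parameter; the article itself does not prescribe these particular choices.

```python
import numpy as np

def regularize(Q_uu, mu=1e-6):
    """Add mu*I to Q_uu, doubling mu until the matrix is positive definite
    (checked via a Cholesky factorization), so the inverse in Eq. 4 exists."""
    I = np.eye(Q_uu.shape[0])
    while True:
        try:
            np.linalg.cholesky(Q_uu + mu * I)
            return Q_uu + mu * I
        except np.linalg.LinAlgError:
            mu *= 2.0

def line_search(evaluate, J_old, alphas=(1.0, 0.5, 0.25, 0.125, 0.0625)):
    """Backtracking on the open-loop term: evaluate(alpha) is assumed to run
    a forward pass with the open-loop modification scaled by alpha and to
    return the resulting total cost.  The first alpha that improves on the
    nominal cost J_old is accepted."""
    for alpha in alphas:
        J_new = evaluate(alpha)
        if J_new < J_old:
            return alpha, J_new
    return None, J_old   # no improving step; the caller may increase regularization
```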
Monte Carlo version
Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.[9][10][11] It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. This way the quantities of DDP can be matched to the statistics of a multidimensional normal distribution. The statistics can be recomputed from sampled trajectories without differentiation.
Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.[12] This creates a link between differential dynamic programming and path integral control,[13] which is a framework of stochastic optimal control.
Constrained problems
Interior Point Differential Dynamic Programming (IPDDP) is an interior-point generalization of DDP that can address optimal control problems with nonlinear state and input constraints.[14]
References

de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control. 47 (5): 1539–1553. doi:10.1080/00207178808906114. ISSN 0020-7179.
Liao, L. Z.; Shoemaker, C. A. (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control. 36 (6): 692. doi:10.1109/9.86943.
"Sampled differential dynamic programming". 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). doi:10.1109/IROS.2016.7759229. S2CID 1338737.
Theodorou, Evangelos; Buchli, Jonas; Schaal, Stefan (May 2010). "Reinforcement learning of motor skills in high dimensions: A path integral approach". 2010 IEEE International Conference on Robotics and Automation. pp. 2397–2403. doi:10.1109/ROBOT.2010.5509336. ISBN 978-1-4244-5038-1. S2CID 15116370.
The open-source software framework acados provides an efficient and embeddable implementation of DDP.