# Optimal control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. [1] It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. [2] Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. [3] A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory. [4] [5]

Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. [6] The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane. [7] Optimal control can be seen as a control strategy in control theory. [1]

## General method

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's Principle), [8] or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).

We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the road, and the optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits, etc.

A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. Constraints are often interchangeable with the cost function.

Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows. [1] Minimize the continuous-time cost functional

${\displaystyle J[{\textbf {x}}(\cdot ),{\textbf {u}}(\cdot ),t_{0},t_{f}]:=E\,[\,{\textbf {x}}(t_{0}),t_{0},{\textbf {x}}(t_{f}),t_{f}\,]+\int \limits _{t_{0}}^{t_{f}}F\,[\,{\textbf {x}}(t),{\textbf {u}}(t),t\,]\,\operatorname {d} t}$

subject to the first-order dynamic constraints (the state equation)

${\displaystyle {\dot {\textbf {x}}}(t)={\textbf {f}}\,[\,{\textbf {x}}(t),{\textbf {u}}(t),t\,],}$

the algebraic path constraints

${\displaystyle {\textbf {h}}\,[\,{\textbf {x}}(t),{\textbf {u}}(t),t\,]\leq {\textbf {0}},}$

and the endpoint conditions

${\displaystyle {\textbf {e}}\,[\,{\textbf {x}}(t_{0}),t_{0},{\textbf {x}}(t_{f}),t_{f}\,]=0}$

where ${\displaystyle {\textbf {x}}(t)}$ is the state, ${\displaystyle {\textbf {u}}(t)}$ is the control, ${\displaystyle t}$ is the independent variable (generally speaking, time), ${\displaystyle t_{0}}$ is the initial time, and ${\displaystyle t_{f}}$ is the terminal time. The terms ${\displaystyle E}$ and ${\displaystyle F}$ are called the endpoint cost and the running cost, respectively. In the calculus of variations, ${\displaystyle E}$ and ${\displaystyle F}$ are referred to as the Mayer term and the Lagrangian, respectively. Furthermore, it is noted that the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution ${\displaystyle [{\textbf {x}}^{*}(t),{\textbf {u}}^{*}(t),t_{0}^{*},t_{f}^{*}]}$ to the optimal control problem is locally minimizing.
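As a concrete illustration, a candidate pair ${\displaystyle ({\textbf {x}}(\cdot ),{\textbf {u}}(\cdot ))}$ sampled on a time grid can be scored against this Bolza-form functional by quadrature. The following minimal Python sketch is purely illustrative; the trajectory, the choice ${\displaystyle E=0}$, and the running cost ${\displaystyle F=\Vert u\Vert ^{2}}$ are assumptions, not part of the problem statement above.

```python
import numpy as np

def bolza_cost(t, x, u, E, F):
    # J = E[x(t0), t0, x(tf), tf] + integral from t0 to tf of F[x(t), u(t), t]
    running = np.trapz(F(x, u, t), t)        # quadrature of the running cost
    endpoint = E(x[:, 0], t[0], x[:, -1], t[-1])
    return endpoint + running

# Illustrative check with E = 0 and F = |u|^2 ("control energy"):
t = np.linspace(0.0, 1.0, 101)               # time grid on [t0, tf] = [0, 1]
x = np.vstack([t, np.ones_like(t)])          # an arbitrary state history (assumed)
u = np.sin(np.pi * t)[None, :]               # an arbitrary control history (assumed)
J = bolza_cost(t, x, u,
               E=lambda x0, t0, xf, tf: 0.0,
               F=lambda x, u, t: np.sum(u ** 2, axis=0))
print(J)                                     # integral of sin^2(pi t) over [0, 1] = 0.5
```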

## Linear quadratic control

A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional

${\displaystyle J={\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}(t_{f})\mathbf {S} _{f}\mathbf {x} (t_{f})+{\tfrac {1}{2}}\int \limits _{t_{0}}^{t_{f}}[\,\mathbf {x} ^{\mathsf {T}}(t)\mathbf {Q} (t)\mathbf {x} (t)+\mathbf {u} ^{\mathsf {T}}(t)\mathbf {R} (t)\mathbf {u} (t)\,]\,\operatorname {d} t}$

Subject to the linear first-order dynamic constraints

${\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} (t)\mathbf {x} (t)+\mathbf {B} (t)\mathbf {u} (t),}$

and the initial condition

${\displaystyle \mathbf {x} (t_{0})=\mathbf {x} _{0}}$

A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR) where all of the matrices (i.e., ${\displaystyle \mathbf {A} }$, ${\displaystyle \mathbf {B} }$, ${\displaystyle \mathbf {Q} }$, and ${\displaystyle \mathbf {R} }$) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limit ${\displaystyle t_{f}\rightarrow \infty }$ (this last assumption is what is known as infinite horizon). The LQR problem is stated as follows. Minimize the infinite horizon quadratic continuous-time cost functional

${\displaystyle J={\tfrac {1}{2}}\int \limits _{0}^{\infty }[\,\mathbf {x} ^{\mathsf {T}}(t)\mathbf {Q} \mathbf {x} (t)+\mathbf {u} ^{\mathsf {T}}(t)\mathbf {R} \mathbf {u} (t)\,]\,\operatorname {d} t}$

Subject to the linear time-invariant first-order dynamic constraints

${\displaystyle {\dot {\mathbf {x} }}(t)=\mathbf {A} \mathbf {x} (t)+\mathbf {B} \mathbf {u} (t),}$

and the initial condition

${\displaystyle \mathbf {x} (t_{0})=\mathbf {x} _{0}}$

In the finite-horizon case the matrices are restricted in that ${\displaystyle \mathbf {Q} }$ and ${\displaystyle \mathbf {R} }$ are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices ${\displaystyle \mathbf {Q} }$ and ${\displaystyle \mathbf {R} }$ are not only positive semi-definite and positive definite, respectively, but are also constant. These additional restrictions on ${\displaystyle \mathbf {Q} }$ and ${\displaystyle \mathbf {R} }$ in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost functional is bounded, the additional restriction is imposed that the pair ${\displaystyle (\mathbf {A} ,\mathbf {B} )}$ is controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form).
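The controllability assumption on ${\displaystyle (\mathbf {A} ,\mathbf {B} )}$ can be verified numerically with the Kalman rank test: the pair is controllable exactly when ${\displaystyle [\mathbf {B} \;\mathbf {A} \mathbf {B} \;\cdots \;\mathbf {A} ^{n-1}\mathbf {B} ]}$ has full row rank. A brief sketch, with a double-integrator example chosen purely for illustration:

```python
import numpy as np

# Assumed example system: a double integrator (not from the article).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

n = A.shape[0]
# Kalman rank test: stack [B, AB, ..., A^(n-1) B] and check its rank.
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(ctrb) == n)      # True: the pair is controllable
```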

The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to the zero state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved once the zero-output problem is. In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form

${\displaystyle \mathbf {u} (t)=-\mathbf {K} (t)\mathbf {x} (t)}$

where ${\displaystyle \mathbf {K} (t)}$ is a properly dimensioned matrix, given as

${\displaystyle \mathbf {K} (t)=\mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} (t),}$

and ${\displaystyle \mathbf {S} (t)}$ is the solution of the differential Riccati equation. The differential Riccati equation is given as

${\displaystyle {\dot {\mathbf {S} }}(t)=-\mathbf {S} (t)\mathbf {A} -\mathbf {A} ^{\mathsf {T}}\mathbf {S} (t)+\mathbf {S} (t)\mathbf {B} \mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} (t)-\mathbf {Q} }$

For the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

${\displaystyle \mathbf {S} (t_{f})=\mathbf {S} _{f}}$
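Concretely, the backward integration can be carried out with any standard ODE integrator by running time from ${\displaystyle t_{f}}$ down to ${\displaystyle t_{0}}$. The scalar sketch below is illustrative only; the system values and horizon are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, B, Q, R, Sf = 1.0, 1.0, 1.0, 1.0, 0.0     # assumed scalar data, S(tf) = 0
t0, tf = 0.0, 5.0                            # assumed horizon

def riccati(t, S):
    # Sdot = -S A - A' S + S B R^{-1} B' S - Q, written in scalar form
    return -S * A - A * S + S * B * (1.0 / R) * B * S - Q

# integrating over (tf, t0) runs the equation backward in time
sol = solve_ivp(riccati, (tf, t0), [Sf], dense_output=True)
K = lambda t: (1.0 / R) * B * sol.sol(t)[0]  # time-varying gain K(t) = R^{-1} B' S(t)
print(K(0.0))                                # approaches the ARE gain 1 + sqrt(2) for long horizons
```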

For the infinite horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE) given as

${\displaystyle \mathbf {0} =-\mathbf {S} \mathbf {A} -\mathbf {A} ^{\mathsf {T}}\mathbf {S} +\mathbf {S} \mathbf {B} \mathbf {R} ^{-1}\mathbf {B} ^{\mathsf {T}}\mathbf {S} -\mathbf {Q} }$

Understanding that the ARE arises from the infinite-horizon problem, the matrices ${\displaystyle \mathbf {A} }$, ${\displaystyle \mathbf {B} }$, ${\displaystyle \mathbf {Q} }$, and ${\displaystyle \mathbf {R} }$ are all constant. It is noted that there are, in general, multiple solutions to the algebraic Riccati equation and that the positive definite (or positive semi-definite) solution is the one used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf Kalman. [9]
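In practice, the stabilizing ARE solution and the LQR gain are computed with standard numerical routines. A short sketch using SciPy's continuous-time ARE solver; the double-integrator matrices are illustrative assumptions, not from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])       # assumed example: double integrator
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                                # positive semi-definite weight
R = np.array([[1.0]])                        # positive definite weight

S = solve_continuous_are(A, B, Q, R)         # the stabilizing ARE solution
K = np.linalg.solve(R, B.T @ S)              # K = R^{-1} B' S
print(K)                                     # u = -K x is the LQR feedback law
```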

## Numerical methods for optimal control

Optimal control problems are generally nonlinear and therefore generally do not have analytic solutions (unlike, for example, the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (c. 1950s to 1980s) the favored approach for solving optimal control problems was that of indirect methods. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is a Hamiltonian system of the form [1]

${\displaystyle {\begin{array}{lcl}{\dot {\textbf {x}}}&=&\partial H/\partial {\boldsymbol {\lambda }}\\{\dot {\boldsymbol {\lambda }}}&=&-\partial H/\partial {\textbf {x}}\end{array}}}$

where

${\displaystyle H=F+{\boldsymbol {\lambda }}^{\mathsf {T}}{\textbf {f}}-{\boldsymbol {\mu }}^{\mathsf {T}}{\textbf {h}}}$

is the augmented Hamiltonian, and in an indirect method the boundary-value problem is solved (using the appropriate boundary or transversality conditions). The beauty of using an indirect method is that the state and adjoint (i.e., ${\displaystyle {\boldsymbol {\lambda }}}$) are solved for and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO. [10]
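To make the indirect approach concrete, consider the toy problem of minimizing ${\displaystyle {\tfrac {1}{2}}\int _{0}^{1}u^{2}\,dt}$ for a double integrator moving from rest at the origin to rest at ${\displaystyle x_{1}=1}$. Pontryagin's conditions give ${\displaystyle u=-\lambda _{2}}$, ${\displaystyle {\dot {\lambda }}_{1}=0}$, and ${\displaystyle {\dot {\lambda }}_{2}=-\lambda _{1}}$, and the resulting two-point boundary-value problem can be handed to a generic BVP solver. The sketch below is illustrative only; the problem and the solver choice are assumptions, and this is not how BNDSCO works internally.

```python
import numpy as np
from scipy.integrate import solve_bvp

def odes(t, y):
    # y = [x1, x2, lam1, lam2]; stationarity dH/du = u + lam2 = 0 gives u = -lam2
    x1, x2, lam1, lam2 = y
    return np.vstack([x2, -lam2, np.zeros_like(lam1), -lam1])

def bc(ya, yb):
    # states fixed at both ends; the costates carry no boundary conditions here
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 50)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
u = -sol.sol(t)[3]                 # recover the extremal control from lam2
print(u[0], u[-1])                 # analytic extremal: u(t) = 6 - 12 t
```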

The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called direct methods. In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the cost functional is approximated as a cost function. Then, the coefficients of the function approximations are treated as optimization variables and the problem is "transcribed" to a nonlinear optimization problem of the form:

Minimize

${\displaystyle F(\mathbf {z} )\,}$

subject to the algebraic constraints

${\displaystyle {\begin{array}{lcl}\mathbf {g} (\mathbf {z} )&=&\mathbf {0} \\\mathbf {h} (\mathbf {z} )&\leq &\mathbf {0} \end{array}}}$

Depending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., pseudospectral optimal control [11] ), or quite large (e.g., a direct collocation method [12] ). In the latter case (i.e., a collocation method), the nonlinear optimization problem may have literally thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. Nevertheless, the NLP is indeed easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is sparse and many well-known software programs exist (e.g., SNOPT [13] ) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct collocation methods, which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular that many elaborate software programs employing them have been written, including DIRCOL, [14] SOCS, [15] OTIS, [16] GESOP/ASTOS, [17] DITAN, [18] and PyGMO/PyKEP. [19]

In recent years, due to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common. Examples of academically developed MATLAB software tools implementing direct methods include RIOTS, [20] DIDO, [21] DIRECT, [22] FALCON.m, [23] and GPOPS, [24] while an example of an industry-developed MATLAB tool is PROPT. [25] These software tools have significantly increased the opportunity for people to explore complex optimal control problems, both for academic research and for industrial applications. Finally, it is noted that general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN.
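A minimal direct transcription can be written in a few lines: parameterize the control as piecewise constant, propagate the dynamics with a fixed-step integrator, and hand the resulting NLP of the form above to a generic solver. The sketch below reuses the toy double-integrator problem from the indirect-method example; the mesh, the explicit Euler integrator, and the use of SciPy's SLSQP in place of a large sparse solver such as SNOPT are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

N = 50                                   # assumed number of control segments
h = 1.0 / N                              # step length on the horizon [0, 1]

def simulate(u):
    # propagate x1' = x2, x2' = u with explicit Euler (a direct shooting transcription)
    x = np.zeros(2)
    for uk in u:
        x = x + h * np.array([x[1], uk])
    return x

cost = lambda u: 0.5 * h * np.sum(u ** 2)                 # F(z): discretized running cost
defect = lambda u: simulate(u) - np.array([1.0, 0.0])     # g(z) = 0: endpoint conditions

res = minimize(cost, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": defect})
print(res.x[0], res.x[-1])               # approaches the analytic u(t) = 6 - 12 t
```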

## Discrete-time optimal control

The examples thus far have shown continuous time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is now primarily concerned with discrete time systems and solutions. The Theory of Consistent Approximations [26] [27] provides conditions under which solutions to a sequence of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones. [28] For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct method RIOTS is based on the Theory of Consistent Approximations.

## Examples

A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) ${\displaystyle \lambda (t)}$. The costate summarizes in one number the marginal value of expanding or contracting the state variable in the next period. The marginal value comprises not only the gains accruing in the next period but also those associated with the remaining duration of the program. It is nice when ${\displaystyle \lambda (t)}$ can be solved analytically, but usually the most one can do is describe it sufficiently well that one can grasp the character of the solution and an equation solver can solve numerically for the values.

Having obtained ${\displaystyle \lambda (t)}$, the optimal value for the control at time ${\displaystyle t}$ can usually be solved as a differential equation conditional on knowledge of ${\displaystyle \lambda (t)}$. Again, it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control and use a numerical solver to isolate the actual choice values in time.

### Finite time

Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date ${\displaystyle 0}$ to date ${\displaystyle T}$. At date ${\displaystyle 0}$ there is ${\displaystyle x_{0}}$ ore in the ground, and the time-dependent amount of ore ${\displaystyle x(t)}$ left in the ground declines at the rate ${\displaystyle u(t)}$ at which the mine owner extracts it. The mine owner extracts ore at cost ${\displaystyle u(t)^{2}/x(t)}$ (the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price ${\displaystyle p}$. Any ore left in the ground at time ${\displaystyle T}$ cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction ${\displaystyle u(t)}$, varying with time, to maximize profits over the period of ownership with no time discounting.

**1. Discrete-time version**

The manager maximizes profit ${\displaystyle \Pi }$:

${\displaystyle \Pi =\sum \limits _{t=0}^{T-1}\left[pu_{t}-{\frac {u_{t}^{2}}{x_{t}}}\right]}$

subject to the law of evolution for the state variable ${\displaystyle x_{t}}$

${\displaystyle x_{t+1}-x_{t}=-u_{t}}$

Form the Hamiltonian and differentiate:

${\displaystyle H=pu_{t}-{\frac {u_{t}^{2}}{x_{t}}}-\lambda _{t+1}u_{t}}$

${\displaystyle {\frac {\partial H}{\partial u_{t}}}=p-\lambda _{t+1}-2{\frac {u_{t}}{x_{t}}}=0}$

${\displaystyle \lambda _{t+1}-\lambda _{t}=-{\frac {\partial H}{\partial x_{t}}}=-\left({\frac {u_{t}}{x_{t}}}\right)^{2}}$

As the mine owner does not value the ore remaining at time ${\displaystyle T}$,

${\displaystyle \lambda _{T}=0}$

Using the above equations, it is easy to solve for the ${\displaystyle x_{t}}$ and ${\displaystyle \lambda _{t}}$ series

${\displaystyle \lambda _{t}=\lambda _{t+1}+{\frac {(p-\lambda _{t+1})^{2}}{4}}}$

${\displaystyle x_{t+1}=x_{t}{\frac {2-p+\lambda _{t+1}}{2}}}$

and using the initial and turn-T conditions, the ${\displaystyle x_{t}}$ series can be solved explicitly, giving ${\displaystyle u_{t}}$ (these recursions are implemented numerically in the sketch following this example).

**2. Continuous-time version**

The manager maximizes profit ${\displaystyle \Pi }$:

${\displaystyle \Pi =\int \limits _{0}^{T}\left[pu(t)-{\frac {u(t)^{2}}{x(t)}}\right]dt}$

where the state variable ${\displaystyle x(t)}$ evolves as follows:

${\displaystyle {\dot {x}}(t)=-u(t)}$

Form the Hamiltonian and differentiate:

${\displaystyle H=pu(t)-{\frac {u(t)^{2}}{x(t)}}-\lambda (t)u(t)}$

${\displaystyle {\frac {\partial H}{\partial u}}=p-\lambda (t)-2{\frac {u(t)}{x(t)}}=0}$

${\displaystyle {\dot {\lambda }}(t)=-{\frac {\partial H}{\partial x}}=-\left({\frac {u(t)}{x(t)}}\right)^{2}}$

As the mine owner does not value the ore remaining at time ${\displaystyle T}$,

${\displaystyle \lambda (T)=0}$

Using the above equations, it is easy to solve for the differential equations governing ${\displaystyle u(t)}$ and ${\displaystyle \lambda (t)}$

${\displaystyle {\dot {\lambda }}(t)=-{\frac {(p-\lambda (t))^{2}}{4}}}$

${\displaystyle u(t)=x(t){\frac {p-\lambda (t)}{2}}}$

and using the initial and turn-T conditions, the functions can be solved to yield

${\displaystyle x(t)={\frac {(4-pt+pT)^{2}}{(4+pT)^{2}}}x_{0}}$
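The discrete-time recursions above can be evaluated directly: sweep the costate backward from ${\displaystyle \lambda _{T}=0}$, then run the state and control forward. A minimal Python sketch follows; the parameter values ${\displaystyle p}$, ${\displaystyle T}$, and ${\displaystyle x_{0}}$ are illustrative assumptions, not part of the example itself.

```python
import numpy as np

p, T, x0 = 0.5, 10, 100.0                  # assumed price, horizon, initial ore

# Backward sweep: lambda_T = 0 (no scrap value), then
# lambda_t = lambda_{t+1} + (p - lambda_{t+1})^2 / 4.
lam = np.zeros(T + 1)
for t in range(T - 1, -1, -1):
    lam[t] = lam[t + 1] + (p - lam[t + 1]) ** 2 / 4

# Forward sweep: u_t = x_t (p - lambda_{t+1}) / 2 from dH/du_t = 0,
# and x_{t+1} = x_t - u_t from the law of motion.
x = np.zeros(T + 1)
x[0] = x0
u = np.zeros(T)
for t in range(T):
    u[t] = x[t] * (p - lam[t + 1]) / 2
    x[t + 1] = x[t] - u[t]

profit = np.sum(p * u - u ** 2 / x[:-1])
print(profit, x[T])                        # total profit and the ore left unsold
```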

## Related Research Articles

Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.

In optimal control theory, the Hamilton–Jacobi–Bellman (HJB) equation gives a necessary and sufficient condition for optimality of a control with respect to a loss function. It is, in general, a nonlinear partial differential equation in the value function, which means its solution is the value function itself. Once this solution is known, it can be used to obtain the optimal control by taking the maximizer of the Hamiltonian involved in the HJB equation.

Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized, while keeping future timeslots in account. This is achieved by optimizing a finite time-horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from Linear-Quadratic Regulator (LQR). Also MPC has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.

The Gauss–Newton algorithm is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.

A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's “principle of optimality” prescribes.

In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.

Trajectory optimization is the process of designing a trajectory that minimizes some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).

The theory of optimal control is concerned with operating a dynamic system at minimum cost. The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller whose equations are given below. The LQR is an important part of the solution to the LQG (linear–quadratic–Gaussian) problem. Like the LQR problem itself, the LQG problem is one of the most fundamental problems in control theory.

In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.

The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by, but distinct from, the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.

Given a set of images depicting a number of 3D points from different viewpoints, bundle adjustment can be defined as the problem of simultaneously refining the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, according to an optimality criterion involving the corresponding image projections of all points.

DIDO is a software product for solving general-purpose optimal control problems. It is widely used in academia, industry, and NASA. Hailed as breakthrough software, DIDO is based on the pseudospectral optimal control theory of Ross and Fahroo. The latest enhancements to DIDO are described in Ross.

An algebraic Riccati equation is a type of nonlinear equation that arises in the context of infinite-horizon optimal control problems in continuous time or discrete time.

Stochastic control or stochastic optimal control is a sub field of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, somehow defined, despite the presence of this noise. The context may be either discrete time or continuous time.

Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne and subsequently analysed in Jacobson and Mayne's eponymous book. The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.

Introduced by I. Michael Ross and F. Fahroo, the Ross–Fahroo pseudospectral methods are a broad collection of pseudospectral methods for optimal control. Examples of the Ross–Fahroo pseudospectral methods are the pseudospectral knotting method, the flat pseudospectral method, the Legendre-Gauss-Radau pseudospectral method and pseudospectral methods for infinite-horizon optimal control.

The covector mapping principle is a special case of Riesz' representation theorem, a fundamental theorem in functional analysis. The name was coined by Ross and co-workers. It provides conditions under which dualization can be commuted with discretization in the case of computational optimal control.

Isaac Michael Ross is a Distinguished Professor and Program Director of Control and Optimization at the Naval Postgraduate School in Monterey, CA. He has published papers on pseudospectral optimal control theory, energy-sink theory, the optimization and deflection of near-Earth asteroids and comets, robotics, attitude dynamics and control, real-time optimal control, and unscented optimal control, as well as a textbook on optimal control. The Kang-Ross-Gong theorem, Ross' π lemma, Ross' time constant, the Ross–Fahroo lemma, and the Ross–Fahroo pseudospectral method are all named after him.

A Carathéodory-π solution is a generalized solution to an ordinary differential equation. The concept is due to I. Michael Ross and named in honor of Constantin Carathéodory. Its practicality was demonstrated in 2008 by Ross et al. in a laboratory implementation of the concept. The concept is most useful for implementing feedback controls, particularly those generated by an application of Ross' pseudospectral optimal control theory.

GPOPS-II is a general-purpose MATLAB software for solving continuous optimal control problems using hp-adaptive Gaussian quadrature collocation and sparse nonlinear programming. The acronym GPOPS stands for "General Purpose OPtimal Control Software", and the Roman numeral "II" refers to the fact that GPOPS-II is the second software of its type.

## References

1. Ross, Isaac (2015). A primer on Pontryagin's principle in optimal control. San Francisco: Collegiate Publishers. ISBN   978-0-9843571-0-9. OCLC   625106088.
2. Luenberger, David G. (1979). "Optimal Control". Introduction to Dynamic Systems: Theory, Models, and Applications. New York: John Wiley & Sons. pp. 393–435. ISBN 0-471-02594-1.
3. Kamien, Morton I. (2013). Dynamic Optimization : the Calculus of Variations and Optimal Control in Economics and Management. Dover Publications. ISBN   978-1-306-39299-0. OCLC   869522905.
4. Ross, I. M.; Proulx, R. J.; Karpenko, M. (6 May 2020). "An Optimal Control Theory for the Traveling Salesman Problem and Its Variants". arXiv preprint (math.OC).
5. Ross, Isaac M.; Karpenko, Mark; Proulx, Ronald J. (1 January 2016). "A Nonsmooth Calculus for Solving Some Graph-Theoretic Control Problems". IFAC-PapersOnLine. 10th IFAC Symposium on Nonlinear Control Systems NOLCOS 2016. 49 (18): 462–467. ISSN 2405-8963.
6. Sargent, R. W. H. (2000). "Optimal Control". Journal of Computational and Applied Mathematics. 124 (1–2): 361–371. Bibcode:2000JCoAM.124..361S.
7. Bryson, A. E. (1996). "Optimal Control—1950 to 1985". IEEE Control Systems Magazine. 16 (3): 26–33. doi:10.1109/37.506395.
8. Ross, I. M. (2009). A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers. ISBN   978-0-9843571-0-9.
9. Kalman, Rudolf. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82:34–45, 1960
10. Oberle, H. J. and Grimm, W., "BNDSCO-A Program for the Numerical Solution of Optimal Control Problems," Institute for Flight Systems Dynamics, DLR, Oberpfaffenhofen, 1989
11. Ross, I. M.; Karpenko, M. (2012). "A Review of Pseudospectral Optimal Control: From Theory to Flight". Annual Reviews in Control. 36 (2): 182–197. doi:10.1016/j.arcontrol.2012.09.002.
12. Betts, J. T. (2010). Practical Methods for Optimal Control Using Nonlinear Programming (2nd ed.). Philadelphia, Pennsylvania: SIAM Press. ISBN   978-0-89871-688-7.
13. Gill, P. E., Murray, W. M., and Saunders, M. A., User's Manual for SNOPT Version 7: Software for Large-Scale Nonlinear Programming, University of California, San Diego Report, 24 April 2007
14. von Stryk, O., User's Guide for DIRCOL (version 2.1): A Direct Collocation Method for the Numerical Solution of Optimal Control Problems, Fachgebiet Simulation und Systemoptimierung (SIM), Technische Universität Darmstadt (2000, Version of November 1999).
15. Betts, J.T. and Huffman, W. P., Sparse Optimal Control Software, SOCS, Boeing Information and Support Services, Seattle, Washington, July 1997
16. Hargraves, C. R.; Paris, S. W. (1987). "Direct Trajectory Optimization Using Nonlinear Programming and Collocation". Journal of Guidance, Control, and Dynamics. 10 (4): 338–342. Bibcode:1987JGCD...10..338H. doi:10.2514/3.20223.
17. Gath, P.F., Well, K.H., "Trajectory Optimization Using a Combination of Direct Multiple Shooting and Collocation", AIAA 2001–4047, AIAA Guidance, Navigation, and Control Conference, Montréal, Québec, Canada, 6–9 August 2001
18. Vasile M., Bernelli-Zazzera F., Fornasari N., Masarati P., "Design of Interplanetary and Lunar Missions Combining Low-Thrust and Gravity Assists", Final Report of the ESA/ESOC Study Contract No. 14126/00/D/CS, September 2002
19. Izzo, Dario. "PyGMO and PyKEP: open source tools for massively parallel optimization in astrodynamics (the case of interplanetary trajectory optimization)." Proceed. Fifth International Conf. Astrodynam. Tools and Techniques, ICATT. 2012.
20. RIOTS Archived 16 July 2011 at the Wayback Machine, based on Schwartz, Adam (1996). Theory and Implementation of Methods based on Runge–Kutta Integration for Solving Optimal Control Problems (Ph.D.). University of California at Berkeley. OCLC 35140322.
21. Ross, I. M., Enhancements to the DIDO Optimal Control Toolbox, arXiv 2020. https://arxiv.org/abs/2004.13112
22. Williams, P., User's Guide to DIRECT, Version 2.00, Melbourne, Australia, 2008
23. FALCON.m, described in Rieck, M., Bittner, M., Grüter, B., Diepolder, J., and Piprek, P., FALCON.m - User Guide, Institute of Flight System Dynamics, Technical University of Munich, October 2019
24. GPOPS Archived 24 July 2011 at the Wayback Machine, described in Rao, A. V., Benson, D. A., Huntington, G. T., Francolin, C., Darby, C. L., and Patterson, M. A., User's Manual for GPOPS: A MATLAB Package for Dynamic Optimization Using the Gauss Pseudospectral Method, University of Florida Report, August 2008.
25. Rutquist, P. and Edvall, M. M., PROPT – MATLAB Optimal Control Software, Pullman, WA: Tomlab Optimization, Inc.
26. E. Polak, On the use of consistent approximations in the solution of semi-infinite optimization and optimal control problems Math. Prog. 62 pp. 385–415 (1993).
27. Ross, I M. (1 December 2005). "A Roadmap for Optimal Control: The Right Way to Commute". Annals of the New York Academy of Sciences. 1065 (1): 210–231. Bibcode:2005NYASA1065..210R. doi:10.1196/annals.1370.015. ISSN   0077-8923. PMID   16510411. S2CID   7625851.
28. Fahroo, Fariba; Ross, I. Michael (September 2008). "Convergence of the Costates Does Not Imply Convergence of the Control". Journal of Guidance, Control, and Dynamics. 31 (5): 1492–1497. Bibcode:2008JGCD...31.1492F. doi:10.2514/1.37331. ISSN   0731-5090.