Covector mapping principle

The covector mapping principle is a special case of the Riesz representation theorem, a fundamental theorem in functional analysis. The name was coined by Ross and his coauthors.[1][2][3][4][5][6] It provides conditions under which dualization can be commuted with discretization in computational optimal control.

Description

An application of Pontryagin's minimum principle to Problem B, a given optimal control problem, generates a boundary value problem. According to Ross, this boundary value problem is a Pontryagin lift and is represented as Problem B^λ.

Illustration of the covector mapping principle (adapted from Ross and Fahroo).

Now suppose one discretizes Problem B^λ. This generates Problem B^{λN}, where N represents the number of discrete points. For convergence, it is necessary to prove that, as N → ∞, the solutions of Problem B^{λN} converge to those of Problem B^λ.

In the 1960s Kalman and others[8] showed that solving Problem B^λ is extremely difficult. This difficulty, known as the curse of complexity,[9] is complementary to the curse of dimensionality.

In a series of papers starting in the late 1990s, Ross and Fahroo showed that one could arrive at a solution to Problem B^λ (and hence Problem B) more easily by discretizing first (Problem B^N) and dualizing afterwards (Problem B^{Nλ}). The sequence of operations must be done carefully to ensure consistency and convergence. The covector mapping principle asserts that a covector mapping theorem can be discovered to map the solutions of Problem B^{Nλ} to those of Problem B^{λN}, thus completing the circuit. A minimal numerical sketch of the discretize-then-dualize route is given below.
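
The following sketch is not taken from the cited papers: the toy problem, the Euler discretization, and the sign conventions are illustrative assumptions (an actual Ross–Fahroo formulation uses a pseudospectral discretization). It discretizes the problem min ∫₀¹ ½u² dt subject to ẋ = u, x(0) = 1, x(1) = 0, whose exact costate is λ(t) ≡ 1, dualizes the discretized problem by solving its KKT system, and recovers a costate estimate from the KKT multipliers.

    import numpy as np

    # Minimal sketch (illustrative assumptions, not the Ross-Fahroo formulation):
    # discretize-then-dualize for  min ∫ (1/2) u^2 dt,  xdot = u,  x(0)=1, x(1)=0.
    # The exact continuous costate is lambda(t) = 1 for all t.
    N = 20                       # number of Euler steps (discretization parameter)
    h = 1.0 / N                  # step size
    x0, xN = 1.0, 0.0            # boundary conditions

    # Decision vector z = [u_0..u_{N-1}, x_1..x_{N-1}]; x_0 and x_N are fixed.
    n_u, n_x = N, N - 1
    n = n_u + n_x

    # Quadratic objective (1/2) z^T Q z with Q = diag(h,...,h, 0,...,0).
    Q = np.zeros((n, n))
    Q[:n_u, :n_u] = h * np.eye(n_u)

    # Equality constraints A z = b encoding the Euler dynamics
    # x_{k+1} - x_k - h*u_k = 0 for k = 0..N-1 (boundary values moved into b).
    A = np.zeros((N, n))
    b = np.zeros(N)
    for k in range(N):
        A[k, k] = -h                    # -h * u_k
        if k + 1 <= N - 1:
            A[k, n_u + k] = 1.0         # +x_{k+1}
        else:
            b[k] -= xN                  # x_N is fixed
        if k >= 1:
            A[k, n_u + k - 1] = -1.0    # -x_k
        else:
            b[k] += x0                  # x_0 is fixed

    # Dualize the discretized problem: solve its KKT system directly.
    KKT = np.block([[Q, A.T], [A, np.zeros((N, N))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(KKT, rhs)
    mu = sol[n:]                        # KKT multipliers of the dynamics constraints

    # A covector-mapping-style estimate of the continuous costate; with the
    # constraint sign convention above, lambda_k is approximately -mu_k.
    costate_estimate = -mu
    print("discrete costate estimate:", costate_estimate[:3], "...")
    print("exact continuous costate :", 1.0)

For this toy problem the multipliers reproduce the exact costate up to sign; for pseudospectral discretizations the relationship between the NLP multipliers and the discrete costates is nontrivial, and supplying that transformation is precisely the role of a covector mapping theorem.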

Related Research Articles

The costate equation is related to the state equation used in optimal control. It is also referred to as the auxiliary, adjoint, influence, or multiplier equation. It is stated as a vector of first-order differential equations, λ̇ᵀ(t) = −∂H/∂x, where H is the Hamiltonian of the optimal control problem.

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.

Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.
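
As a sketch of these conditions (stated here for one common sign convention; details vary across references), the Hamiltonian system and the maximum condition read

    \dot{x}^*(t) = \frac{\partial H}{\partial \lambda}, \qquad
    \dot{\lambda}(t) = -\frac{\partial H}{\partial x}, \qquad
    u^*(t) \in \arg\max_{u \in U} H\big(x^*(t), u, \lambda(t), t\big),

together with the given boundary conditions on x and transversality conditions on λ; this structure is the two-point boundary value problem referred to above.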

Trajectory optimization is the process of designing a trajectory that minimizes some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).

In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics.

The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.
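
In one common formulation (one notation and sign convention among several), the control Hamiltonian combines the running cost L and the dynamics f through the costate λ:

    H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\mathsf{T}} f(x, u, t),

which makes precise the reading of H as the instantaneous increment of the Lagrangian expression being optimized.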

The Gauss pseudospectral method (GPM), one of many topics named after Carl Friedrich Gauss, is a direct transcription method for discretizing a continuous optimal control problem into a nonlinear program (NLP). The Gauss pseudospectral method differs from several other pseudospectral methods in that the dynamics are not collocated at either endpoint of the time interval. This collocation, in conjunction with the proper approximation to the costate, leads to a set of KKT conditions that are identical to the discretized form of the first-order optimality conditions. This equivalence between the KKT conditions and the discretized first-order optimality conditions leads to an accurate costate estimate using the KKT multipliers of the NLP.
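
As a generic sketch of this kind of transcription (not the specific GPM formulas; the notation here is an illustrative assumption), the state is approximated by an interpolating polynomial through the nodes and the dynamics are enforced only at the collocation points:

    x(\tau) \approx \sum_{i=0}^{N} x_i\, L_i(\tau), \qquad
    \sum_{i=0}^{N} D_{ki}\, x_i = \frac{t_f - t_0}{2}\, f(x_k, u_k), \quad k = 1, \dots, N,

where the L_i are Lagrange interpolating polynomials, D is the differentiation matrix associated with the nodes, and the collocation points τ_k in the GPM are the Legendre–Gauss points, which exclude both endpoints of the interval.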

Pseudospectral optimal control is a joint theoretical-computational method for solving optimal control problems. It combines pseudospectral (PS) theory with optimal control theory to produce PS optimal control theory. PS optimal control theory has been used in ground and flight systems in military and industrial applications. The techniques have been extensively used to solve a wide range of problems such as those arising in UAV trajectory generation, missile guidance, control of robotic arms, vibration damping, lunar guidance, magnetic control, swing-up and stabilization of an inverted pendulum, orbit transfers, tether libration control, ascent guidance and quantum control.

DIDO is a MATLAB optimal control toolbox for solving general-purpose optimal control problems. It is widely used in academia, in industry, and at NASA. Hailed as breakthrough software, DIDO is based on the pseudospectral optimal control theory of Ross and Fahroo. The latest enhancements to DIDO are described in Ross.

In applied mathematics, the pseudospectral knotting method is a generalization and enhancement of a standard pseudospectral method for optimal control. The concept was introduced by I. Michael Ross and F. Fahroo in 2004, and forms part of the collection of the Ross–Fahroo pseudospectral methods.

The Legendre pseudospectral method for optimal control problems is based on Legendre polynomials. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. A basic version of the Legendre pseudospectral was originally proposed by Elnagar and his coworkers in 1995. Since then, Ross, Fahroo and their coworkers have extended, generalized and applied the method for a large range of problems. An application that has received wide publicity is the use of their method for generating real time trajectories for the International Space Station.

The Chebyshev pseudospectral method for optimal control problems is based on Chebyshev polynomials of the first kind. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. Unlike the Legendre pseudospectral method, the Chebyshev pseudospectral (PS) method does not immediately offer high-accuracy quadrature solutions. Consequently, two different versions of the method have been proposed: one by Elnagar et al., and another by Fahroo and Ross. The two versions differ in their quadrature techniques. The Fahroo–Ross method is more commonly used today due to the ease in implementation of the Clenshaw–Curtis quadrature technique. In 2008, Trefethen showed that the Clenshaw–Curtis method was nearly as accurate as Gauss quadrature. This breakthrough result opened the door for a covector mapping theorem for Chebyshev PS methods. A complete mathematical theory for Chebyshev PS methods was finally developed in 2009 by Gong, Ross and Fahroo.

Introduced by I. Michael Ross and F. Fahroo, the Ross–Fahroo pseudospectral methods are a broad collection of pseudospectral methods for optimal control. Examples of the Ross–Fahroo pseudospectral methods are the pseudospectral knotting method, the flat pseudospectral method, the Legendre-Gauss-Radau pseudospectral method and pseudospectral methods for infinite-horizon optimal control.

Named after I. Michael Ross and F. Fahroo, the Ross–Fahroo lemma is a fundamental result in optimal control theory.

The Bellman pseudospectral method is a pseudospectral method for optimal control based on Bellman's principle of optimality. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. The method is named after Richard E. Bellman. It was introduced by Ross et al. first as a means to solve multiscale optimal control problems, and later expanded to obtain suboptimal solutions for general optimal control problems.

Isaac Michael Ross is a Distinguished Professor and Program Director of Control and Optimization at the Naval Postgraduate School in Monterey, CA. He has published a highly-regarded textbook on optimal control theory and seminal papers in pseudospectral optimal control theory, energy-sink theory, the optimization and deflection of near-Earth asteroids and comets, robotics, attitude dynamics and control, orbital mechanics, real-time optimal control and unscented optimal control. The Kang–Ross–Gong theorem, Ross' π lemma, Ross' time constant, the Ross–Fahroo lemma, and the Ross–Fahroo pseudospectral method are all named after him.

Fariba Fahroo is a Persian-American mathematician, a program manager at the Air Force Office of Scientific Research, and a former program manager at the Defense Sciences Office. Along with I. M. Ross, she has published papers in pseudospectral optimal control theory. The Ross–Fahroo lemma and the Ross–Fahroo pseudospectral method are named after her. In 2010, she received the AIAA Mechanics and Control of Flight Award for fundamental contributions to flight mechanics.

A Carathéodory-π solution is a generalized solution to an ordinary differential equation. The concept is due to I. Michael Ross and named in honor of Constantin Carathéodory. Its practicality was demonstrated in 2008 by Ross et al. in a laboratory implementation of the concept. The concept is most useful for implementing feedback controls, particularly those generated by an application of Ross' pseudospectral optimal control theory.

References

  1. Ross, I. M., “A Historical Introduction to the Covector Mapping Principle,” Proceedings of the 2005 AAS/AIAA Astrodynamics Specialist Conference, August 7–11, 2005, Lake Tahoe, CA. AAS 05-332.
  2. Q. Gong, I. M. Ross, W. Kang, F. Fahroo, Connections between the covector mapping theorem and convergence of pseudospectral methods for optimal control, Computational Optimization and Applications, Vol. 41, pp. 307–335, 2008
  3. Ross, I. M. and Fahroo, F., “Legendre Pseudospectral Approximations of Optimal Control Problems,” Lecture Notes in Control and Information Sciences, Vol. 295, Springer-Verlag, New York, 2003, pp. 327–342.
  4. Ross, I. M. and Fahroo, F., “Discrete Verification of Necessary Conditions for Switched Nonlinear Optimal Control Systems,” Proceedings of the American Control Conference, June 2004, Boston, MA.
  5. Ross, I. M. and Fahroo, F., “A Pseudospectral Transformation of the Covectors of Optimal Control Systems,” Proceedings of the First IFAC Symposium on System Structure and Control, Prague, Czech Republic, 29–31 August 2001.
  6. W. Kang, I. M. Ross, Q. Gong, Pseudospectral optimal control and its convergence theorems, Analysis and Design of Nonlinear Control Systems, Springer, pp. 109–124, 2008.
  7. I. M. Ross and F. Fahroo, A Perspective on Methods for Trajectory Optimization, Proceedings of the AIAA/AAS Astrodynamics Conference, Monterey, CA, August 2002. Invited Paper No. AIAA 2002-4727.
  8. Bryson, A.E. and Ho, Y.C. Applied optimal control. Hemisphere, Washington, DC, 1969.
  9. Ross, I. M. A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers, Carmel, CA, 2009. ISBN 978-0-9843571-0-9.