Henry J. Kelley

Henry J. Kelley (1926-1988) was the Christopher C. Kraft Professor of Aerospace and Ocean Engineering at the Virginia Polytechnic Institute. He made major contributions to control theory, especially in aeronautical engineering and flight optimization. [1]

In 1948, he received a Bachelor of Science in aeronautical engineering from NYU and began working for Grumman Aircraft the same year. He continued his studies as a part-time student, earning a Master of Science in mathematics (1951) and an Sc.D. in aeronautical engineering (1958). [1] In 1963, he left Grumman (as Assistant Chief of the Research Department) and founded Analytical Mechanics Associates with two partners. In 1978, he became a professor in the Aerospace and Ocean Engineering Department at the Virginia Polytechnic Institute. [1]

His paper "Gradient Theory of Optimal Flight Paths" (1960) [2] is considered a major contribution to the field. [1] In the context of control theory, Kelley derived the basics of backpropagation, [3] [4] now widely used for machine learning and artificial neural networks.

Kelley received NYU's Founder's Day Award in 1959, the IAS New York Section Award in 1961, the AIAA Guidance and Control of Flight Award in 1973, and the AIAA Pendray Award in 1979. He was a Fellow of the AIAA, and member of the AAS, the IEEE and SIAM, and founder and first chairman of IFAC's Mathematics of Control Committee. [1]

Notes

  1. E. M. Cliff (1989). "Editorial: In Memory of Henry J. Kelley". Journal of Optimization Theory and Applications, Vol. 60, No. 1, January 1989.
  2. Henry J. Kelley (1960). "Gradient theory of optimal flight paths". ARS Journal, 30(10), 947-954.
  3. Stuart Dreyfus (1990). "Artificial Neural Networks, Back Propagation and the Kelley-Bryson Gradient Procedure". J. Guidance, Control and Dynamics, 1990.
  4. Jürgen Schmidhuber (2015). "Deep Learning". Scholarpedia, 10(11):32832. Section on Backpropagation

Related Research Articles

Artificial neural network: Computational model used in machine learning, based on connected, hierarchical functions

Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains.

Backpropagation: Optimization algorithm for artificial neural networks

In machine learning, backpropagation is a widely used algorithm for training feedforward neural networks. Generalizations of backpropagation exist for other artificial neural networks (ANNs), and for functions generally. These classes of algorithms are all referred to generically as "backpropagation". In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually. This efficiency makes it feasible to use gradient methods for training multilayer networks, updating weights to minimize loss; gradient descent, or variants such as stochastic gradient descent, are commonly used. The backpropagation algorithm works by computing the gradient of the loss function with respect to each weight by the chain rule, computing the gradient one layer at a time, iterating backward from the last layer to avoid redundant calculations of intermediate terms in the chain rule; this is an example of dynamic programming.
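The backward chain-rule sweep described above can be sketched in a few lines of Python. This is an illustrative toy (a two-layer network with one unit per layer and a squared-error loss), not Kelley's original formulation; the gradient is checked against a naive finite-difference computation.

```python
import math

def forward(x, w1, w2):
    """Forward pass, caching intermediate values for the backward sweep."""
    z1 = w1 * x
    a1 = math.tanh(z1)
    z2 = w2 * a1
    y_hat = math.tanh(z2)
    return z1, a1, z2, y_hat

def backprop(x, y, w1, w2):
    """Compute dL/dw1 and dL/dw2 by the chain rule, last layer first."""
    z1, a1, z2, y_hat = forward(x, w1, w2)
    dL_dyhat = 2.0 * (y_hat - y)            # L = (y_hat - y)^2
    dL_dz2 = dL_dyhat * (1 - y_hat ** 2)    # tanh'(z) = 1 - tanh(z)^2
    dL_dw2 = dL_dz2 * a1                    # gradient for the last layer
    dL_da1 = dL_dz2 * w2                    # propagate back one layer
    dL_dz1 = dL_da1 * (1 - a1 ** 2)
    dL_dw1 = dL_dz1 * x
    return dL_dw1, dL_dw2

# Sanity check against a naive central finite difference.
x, y, w1, w2 = 0.5, 0.8, 0.3, -0.7
g1, g2 = backprop(x, y, w1, w2)
eps = 1e-6
loss = lambda a, b: (forward(x, a, b)[3] - y) ** 2
fd1 = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
fd2 = (loss(w1, w2 + eps) - loss(w1, w2 - eps)) / (2 * eps)
print(abs(g1 - fd1) < 1e-6, abs(g2 - fd2) < 1e-6)
```

Note how each intermediate term (`dL_dz2`, `dL_da1`, ...) is computed once and reused, which is the redundancy-avoiding, dynamic-programming character of the algorithm.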

Recurrent neural network: Computational model used in machine learning

A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

Neural network: Structure in biology and artificial intelligence

A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or between −1 and 1.
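The weighted sum and activation described above amount to just a few lines of code. The sketch below is illustrative (the logistic function is one common choice of activation bounding the output to (0, 1)):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: linear combination plus activation."""
    # Each input is modified by its weight, then summed.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The logistic activation squashes the amplitude into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# A positive weight acts as an excitatory connection, a negative
# weight as an inhibitory one: here z = 1.0*2.0 + 0.5*(-1.0) = 1.5.
out = neuron([1.0, 0.5], [2.0, -1.0])
print(0.0 < out < 1.0)
```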

Trajectory optimization is the process of designing a trajectory that minimizes some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical, or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Carathéodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).
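A minimal open-loop trajectory optimization can be sketched with a first-order gradient method, in the spirit of Kelley's gradient procedure for flight paths. The problem below is an assumed toy example, not one from the literature: a scalar single integrator x[k+1] = x[k] + dt·u[k] starting at x = 0, with cost J = (x[N] − 1)² + r·dt·Σu[k]² (reach x = 1 with little control effort). The gradient is obtained from a backward adjoint sweep.

```python
N, dt, r = 20, 0.05, 1e-3   # horizon, step size, control-effort weight

def rollout(u):
    """Integrate the dynamics forward under control sequence u."""
    x = [0.0]
    for k in range(N):
        x.append(x[k] + dt * u[k])
    return x

def cost(u):
    x = rollout(u)
    return (x[N] - 1.0) ** 2 + r * dt * sum(uk ** 2 for uk in u)

def gradient(u):
    """dJ/du via a backward adjoint sweep."""
    x = rollout(u)
    lam = 2.0 * (x[N] - 1.0)        # adjoint at the final time
    g = [0.0] * N
    for k in reversed(range(N)):
        g[k] = 2.0 * r * dt * u[k] + lam * dt
        # lam propagates backward unchanged here, since dx[k+1]/dx[k] = 1

    return g

u = [0.0] * N                        # initial guess: no control
for _ in range(500):                 # steepest descent with a fixed step
    u = [uk - 5.0 * gk for uk, gk in zip(u, gradient(u))]

x_final = rollout(u)[N]
print(abs(x_final - 1.0) < 0.01)     # trajectory ends near the target
```

The result is an open-loop control sequence; re-solving from each measured state and applying only the first control would turn this into the MPC scheme mentioned above.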

Nguyễn Xuân Vinh is a noted Vietnamese-American aerospace scientist and educator. Vinh is Professor Emeritus of Aerospace Engineering at the University of Michigan, where he taught for nearly thirty years. His seminal work on the guidance, dynamics and optimal control of space vehicles and their interaction with the atmosphere has played a fundamental role in space exploration and technological development.

A native of Terre Haute, Indiana, Stuart E. Dreyfus is Professor Emeritus at the University of California, Berkeley in the Industrial Engineering and Operations Research Department. While at the Rand Corporation he was a programmer of the JOHNNIAC computer, and there he coauthored Applied Dynamic Programming with Richard Bellman. Following that work, he was encouraged to pursue a Ph.D., which he completed in applied mathematics at Harvard University in 1964, on the calculus of variations. In 1962, Dreyfus simplified the dynamic-programming-based derivation of backpropagation using only the chain rule. He also coauthored Mind Over Machine with his brother Hubert Dreyfus in 1986.

The Gauss pseudospectral method (GPM), one of many topics named after Carl Friedrich Gauss, is a direct transcription method for discretizing a continuous optimal control problem into a nonlinear program (NLP). The Gauss pseudospectral method differs from several other pseudospectral methods in that the dynamics are not collocated at either endpoint of the time interval. This collocation, in conjunction with the proper approximation to the costate, leads to a set of KKT conditions that are identical to the discretized form of the first-order optimality conditions. This equivalence between the KKT conditions and the discretized first-order optimality conditions leads to an accurate costate estimate using the KKT multipliers of the NLP.

DIDO is a software product for solving general-purpose optimal control problems. It is widely used in academia, industry, and NASA. Hailed as breakthrough software, DIDO is based on the pseudospectral optimal control theory of Ross and Fahroo. The latest enhancements to DIDO are described in Ross.

Arthur Earl Bryson Jr. is the Paul Pigott Professor of Engineering Emeritus at Stanford University and the "father of modern optimal control theory". With Henry J. Kelley, he also pioneered an early version of the backpropagation procedure, now widely used for machine learning and artificial neural networks.

Yann LeCun

Yann André LeCun is a French computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics, and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University, and Vice President, Chief AI Scientist at Facebook.

Mehran Mesbahi is an Iranian-American control theorist and aerospace engineer. He is a Professor of Aeronautics and Astronautics, and Adjunct Professor of Electrical Engineering and Mathematics at the University of Washington in Seattle. His research is on systems and control theory over networks, optimization, and aerospace controls.

John L. Junkins is a Distinguished Professor of Aerospace Engineering at Texas A&M University specializing in spacecraft navigation, guidance, dynamics and control. He holds the Royce E. Wisenbaker Endowed Chair at Texas A&M University and also serves as the founding Director of the Texas A&M University Institute for Advanced Study. On November 24, 2020, Junkins was announced as the interim president of Texas A&M starting January 2021.

Named after I. Michael Ross and F. Fahroo, the Ross–Fahroo lemma is a fundamental result in optimal control theory.

Isaac Michael Ross is a Distinguished Professor and Program Director of Control and Optimization at the Naval Postgraduate School in Monterey, CA. He has published papers on pseudospectral optimal control theory, energy-sink theory, the optimization and deflection of near-Earth asteroids and comets, robotics, attitude dynamics and control, real-time optimal control, and unscented optimal control, as well as a textbook on optimal control. The Kang-Ross-Gong theorem, Ross' π lemma, Ross' time constant, the Ross–Fahroo lemma, and the Ross–Fahroo pseudospectral method are all named after him.

Naira Hovakimyan: Armenian control theorist

Naira Hovakimyan is an Armenian control theorist who holds the W. Grafton and Lillian B. Wilkins Professorship in Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. From 2015 to 2017 she was the inaugural director of the Intelligent Robotics Laboratory, associated with the Coordinated Science Laboratory at the University of Illinois at Urbana-Champaign.

Daniele Mortari

Daniele Mortari is Professor of Aerospace Engineering at Texas A&M University and Chief Scientist for Space for the Texas A&M ASTRO Center. Mortari is known for inventing the Flower Constellations, the k-vector range-searching technique, and the Theory of Functional Connections.

Machine learning control (MLC) is a subfield of machine learning, intelligent control and control theory which solves optimal control problems with methods of machine learning. Key applications are complex nonlinear systems for which linear control theory methods are not applicable.

In mathematics, unscented optimal control combines the notion of the unscented transform with deterministic optimal control to address a class of uncertain optimal control problems. It is a specific application of Riemann–Stieltjes optimal control theory, a concept introduced by Ross and his coworkers.

Silvia Ferrari: American aerospace engineer

Silvia Ferrari is an American aerospace engineer. She is John Brancaccio Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University and also the director of the Laboratory for Intelligent Systems and Control (LISC) at the same university.