Hilbert's twenty-third problem is the last of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. In contrast with Hilbert's other 22 problems, his 23rd is not so much a specific "problem" as an encouragement towards further development of the calculus of variations. His statement of the problem is a summary of the state of the art (in 1900) of the theory of the calculus of variations, with some introductory comments decrying the lack of work that had been done on the theory in the mid to late 19th century.
The problem statement begins with the following paragraph:
So far, I have generally mentioned problems as definite and special as possible.... Nevertheless, I should like to close with a general problem, namely with the indication of a branch of mathematics repeatedly mentioned in this lecture—which, in spite of the considerable advancement lately given it by Weierstrass, does not receive the general appreciation which, in my opinion, is its due—I mean the calculus of variations.[1]
Calculus of variations is a field of mathematical analysis that deals with maximizing or minimizing functionals, which are mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. The interest is in extremal functions, which make the functional attain a maximum or minimum value, and in stationary functions, for which the rate of change of the functional is zero.
Following the problem statement, David Hilbert, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard, among others, made significant contributions to the calculus of variations.[2] Marston Morse applied the calculus of variations in what is now called Morse theory.[3] Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory.[3] The dynamic programming of Richard Bellman is an alternative to the calculus of variations.[4][5][6]
David Hilbert was a German mathematician and one of the most influential mathematicians of his time.
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.
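As a minimal illustration of the paradigm (a textbook example, not drawn from Bellman's own applications), the following Python sketch computes Fibonacci numbers bottom-up, storing each subproblem's answer so it is solved only once:

```python
# Minimal dynamic-programming sketch: overlapping subproblems are
# solved once and stored in a table, then reused, instead of being
# recomputed recursively. Computes the n-th Fibonacci number.
def fib(n: int) -> int:
    if n < 2:
        return n
    table = [0, 1]  # table[i] holds the i-th Fibonacci number
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

The same store-and-reuse idea underlies dynamic-programming treatments of shortest-path, scheduling and resource-allocation problems.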
Hilbert's problems are 23 problems in mathematics published by German mathematician David Hilbert in 1900. They were all unsolved at the time, and several proved to be very influential for 20th-century mathematics. Hilbert presented ten of the problems at the Paris conference of the International Congress of Mathematicians, speaking on August 8 at the Sorbonne. The complete list of 23 problems was published later, in English translation in 1902 by Mary Frances Winston Newson in the Bulletin of the American Mathematical Society. Earlier publications appeared in Archiv der Mathematik und Physik.
The calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.
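In the simplest setting (one unknown function of one variable; the notation here is the standard textbook one, not taken from this article), the functional and its Euler–Lagrange equation can be written as:

```latex
% Functional J defined on curves y(x) with fixed endpoints, where the
% integrand L(x, y, y') is called the Lagrangian:
J[y] = \int_{a}^{b} L\bigl(x, y(x), y'(x)\bigr)\, dx
% A smooth function y that maximizes or minimizes J must satisfy the
% Euler--Lagrange equation:
\frac{\partial L}{\partial y} - \frac{d}{dx}\,\frac{\partial L}{\partial y'} = 0
```

For example, with the arc-length integrand $L = \sqrt{1 + (y')^2}$, the Euler–Lagrange equation forces $y'' = 0$, so the extremals are straight lines.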
Ernst Friedrich Ferdinand Zermelo was a German logician and mathematician, whose work has major implications for the foundations of mathematics. He is known for his role in developing Zermelo–Fraenkel axiomatic set theory and his proof of the well-ordering theorem. Furthermore, his 1929 work on ranking chess players is the first description of a model for pairwise comparison that continues to have a profound impact on various applied fields utilizing this method.
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy. A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.
Richard Ernest Bellman was an American applied mathematician, who introduced dynamic programming in 1953, and made important contributions in other fields of mathematics, such as biomathematics. He founded the leading biomathematical journal Mathematical Biosciences, as well as the Journal of Mathematical Analysis and Applications.
The Hamilton–Jacobi–Bellman (HJB) equation is a nonlinear partial differential equation that provides necessary and sufficient conditions for optimality of a control with respect to a loss function. Its solution is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer of the Hamiltonian involved in the HJB equation.
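In one common formulation (the symbols below are standard and introduced here for illustration), for dynamics $\dot{x}(t) = f(x(t), u(t))$, running cost $C(x,u)$ and terminal cost $D(x)$, the HJB equation reads:

```latex
% Value function V(x, t): least total cost attainable from state x at time t.
\frac{\partial V}{\partial t}(x,t)
  + \min_{u}\left\{ \nabla_x V(x,t) \cdot f(x,u) + C(x,u) \right\} = 0
% with the terminal condition at the final time T:
V(x,T) = D(x)
```

The optimal control at $(x,t)$ is then any minimizer $u$ of the bracketed expression, which is why knowing the value function suffices to recover the control.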
Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.
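In outline (standard notation, introduced here for illustration; sign conventions vary between texts), for dynamics $\dot{x} = f(x,u,t)$, running cost $L(x,u,t)$ and costate $\lambda$, the necessary conditions can be written as:

```latex
% Control Hamiltonian built from the dynamics f and running cost L:
H(x, u, \lambda, t) = \lambda^{\top} f(x, u, t) - L(x, u, t)
% State and costate solve the Hamiltonian system, a two-point
% boundary value problem:
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}
% Maximum condition: the optimal control maximizes H pointwise in time:
H\bigl(x^*(t), u^*(t), \lambda^*(t), t\bigr)
  \ge H\bigl(x^*(t), u, \lambda^*(t), t\bigr)
  \quad \text{for all admissible } u
```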
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time in terms of the payoff from some initial choices and the "value" of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "principle of optimality" prescribes. The equation applies to algebraic structures with a total ordering; for algebraic structures with a partial ordering, the generic Bellman's equation can be used.
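Schematically (notation introduced here for illustration: $\Gamma(x)$ is the set of actions feasible in state $x$, $T$ the state-transition map, and $\beta$ a discount factor), a Bellman equation takes the form:

```latex
% Value V of being in state x: best immediate payoff F plus the
% discounted value of the state T(x, a) that the chosen action a leads to:
V(x) = \max_{a \in \Gamma(x)} \bigl\{ F(x, a) + \beta\, V\bigl(T(x, a)\bigr) \bigr\}
```

The recursion on the right-hand side is exactly the decomposition into subproblems that the principle of optimality describes.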
In mathematics, Hilbert's fourth problem in the 1900 list of Hilbert's problems is a foundational question in geometry. In one statement derived from the original, it was to find — up to an isomorphism — all geometries that have an axiomatic system of the classical geometry, with those axioms of congruence that involve the concept of the angle dropped, and the "triangle inequality", regarded as an axiom, added.
Hilbert's twenty-second problem is the penultimate entry in the celebrated list of 23 Hilbert problems compiled in 1900 by David Hilbert. It entails the uniformization of analytic relations by means of automorphic functions.
Hilbert's fifteenth problem is one of the 23 Hilbert problems set out in a list compiled in 1900 by David Hilbert. The problem is to put Schubert's enumerative calculus on a rigorous foundation.
Hilbert's nineteenth problem is one of the 23 Hilbert problems, set out in a list compiled by David Hilbert in 1900. It asks whether the solutions of regular problems in the calculus of variations are always analytic. Informally, and perhaps less directly, since Hilbert's concept of a "regular variational problem" identifies this precisely as a variational problem whose Euler–Lagrange equation is an elliptic partial differential equation with analytic coefficients, Hilbert's nineteenth problem, despite its seemingly technical statement, simply asks whether, in this class of partial differential equations, any solution inherits the relatively simple and well understood property of being an analytic function from the equation it satisfies. Hilbert's nineteenth problem was solved independently in the late 1950s by Ennio De Giorgi and John Forbes Nash, Jr.
Hilbert's twentieth problem is one of the 23 Hilbert problems set out in a celebrated list compiled in 1900 by David Hilbert. It asks whether all boundary value problems can be solved.
Gilbert Ames Bliss was an American mathematician, known for his work on the calculus of variations.
A native of Terre Haute, Indiana, Stuart E. Dreyfus is professor emeritus at the University of California, Berkeley, in the Industrial Engineering and Operations Research Department. While at the RAND Corporation he was a programmer of the JOHNNIAC computer, and there he coauthored Applied Dynamic Programming with Richard Bellman. Following that work, he was encouraged to pursue a Ph.D., which he completed in applied mathematics at Harvard University in 1964, on the calculus of variations. In 1962, Dreyfus simplified the dynamic-programming-based derivation of backpropagation using only the chain rule. He also coauthored Mind Over Machine with his brother Hubert Dreyfus in 1986.
Moshe Zakai was a Distinguished Professor at the Technion, Israel in electrical engineering, member of the Israel Academy of Sciences and Humanities and Rothschild Prize winner.
In economics, non-convexity refers to violations of the convexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers with convex preferences and convex budget sets and on producers with convex production sets; for convex models, the predicted economic behavior is well understood. When convexity assumptions are violated, many of the good properties of competitive markets need not hold: thus, non-convexity is associated with market failures, where supply and demand differ or where market equilibria can be inefficient. Non-convex economies are studied with nonsmooth analysis, which is a generalization of convex analysis.