| Ralph Tyrrell Rockafellar | |
| --- | --- |
| *R. Tyrrell ("Terry") Rockafellar in 1977* | |
| Born | February 10, 1935, Milwaukee, Wisconsin, U.S. |
| Alma mater | Harvard University |
| Known for | Convex analysis, monotone operators, calculus of variations, stochastic programming, oriented matroids |
| Awards | Dantzig Prize of SIAM and MPS (1982); John von Neumann Lecture of SIAM (1992); Frederick W. Lanchester Prize of INFORMS (1998); John von Neumann Theory Prize of INFORMS (1999); Doctor honoris causa: Groningen, Montpellier, Chile, Alicante |
| **Scientific career** | |
| Fields | Mathematical optimization |
| Institutions | University of Washington (1966– ); University of Florida (adjunct, 2003– ); University of Texas, Austin (1963–1965) |
| Thesis | *Convex Functions and Dual Extremum Problems* (1963) |
| Doctoral advisor | Garrett Birkhoff |
| Notable students | Peter Wolenski, Francis Clarke |
Ralph Tyrrell Rockafellar (born February 10, 1935) is an American mathematician and one of the leading scholars in optimization theory and related fields of analysis and combinatorics. He is the author of four major books, including the landmark text "Convex Analysis" (1970), [1] which has been cited more than 27,000 times according to Google Scholar and remains the standard reference on the subject, and "Variational Analysis" (1998, with Roger J-B Wets), for which the authors received the Frederick W. Lanchester Prize from the Institute for Operations Research and the Management Sciences (INFORMS).
He is professor emeritus in the Departments of Mathematics and Applied Mathematics at the University of Washington, Seattle.
Ralph Tyrrell Rockafellar was born in Milwaukee, Wisconsin. [2] He is named after his father, Ralph Rockafellar, with Tyrrell being his mother’s maiden name. Since his mother was fond of the name Terry, his parents adopted it as a nickname for Tyrrell, and soon everybody referred to him as Terry. [3]
Rockafellar is a distant relative of the American business magnate and philanthropist John D. Rockefeller. Both can trace their ancestry back to two brothers named Rockenfelder who came to America from the Rhineland-Palatinate region of Germany in 1728. The spelling of the family name soon diverged, resulting in Rockafellar, Rockefeller, and many other variants. [4]
Rockafellar moved to Cambridge, Massachusetts to attend Harvard College in 1953. Majoring in mathematics, he graduated summa cum laude from Harvard in 1957 and was elected to the Phi Beta Kappa honor society. Rockafellar was a Fulbright Scholar at the University of Bonn in 1957–58 and completed a Master of Science degree at Marquette University in 1959. Formally under the guidance of Professor Garrett Birkhoff, Rockafellar completed his Doctor of Philosophy degree in mathematics at Harvard University in 1963 with the dissertation “Convex Functions and Dual Extremum Problems.” However, at the time there was little interest in convexity and optimization at Harvard, and Birkhoff was neither involved with the research nor familiar with the subject. [5] The dissertation was inspired by the duality theory of linear programming developed by John von Neumann, which Rockafellar learned about through volumes of recent papers compiled by Albert W. Tucker at Princeton University. [6] Rockafellar’s dissertation, together with the contemporary work of Jean-Jacques Moreau in France, is regarded as the birth of convex analysis.
After graduating from Harvard, Rockafellar became Assistant Professor of Mathematics at the University of Texas, Austin, where he was also affiliated with the Department of Computer Science. After two years he moved to the University of Washington in Seattle, where he held a joint position in the Departments of Mathematics and Applied Mathematics from 1966 until his retirement in 2003. He is presently Professor Emeritus at the university. He has held adjunct positions at the University of Florida and the Hong Kong Polytechnic University.
Rockafellar was a visiting professor at the Mathematics Institute, Copenhagen (1964), Princeton University (1965–66), University of Grenoble (1973–74), University of Colorado, Boulder (1978), International Institute for Applied Systems Analysis, Vienna (1980–81), University of Pisa (1991), University of Paris-Dauphine (1996), University of Pau (1997), Keio University (2009), National University of Singapore (2011), University of Vienna (2011), and Yale University (2012).
Rockafellar received the Dantzig Prize from the Society for Industrial and Applied Mathematics (SIAM) and the Mathematical Optimization Society in 1982, and delivered SIAM's John von Neumann Lecture in 1992. With Roger J-B Wets, he received the Frederick W. Lanchester Prize from the Institute for Operations Research and the Management Sciences (INFORMS) in 1998 for the book “Variational Analysis.” In 1999, he was awarded the John von Neumann Theory Prize from INFORMS. He was elected to the 2002 class of Fellows of INFORMS. [7] He is the recipient of honorary doctoral degrees from the University of Groningen (1984), University of Montpellier (1995), University of Chile (1998), and University of Alicante (2000). The Institute for Scientific Information (ISI) lists Rockafellar as a highly cited researcher. [8]
Rockafellar’s research is motivated by the goal of organizing mathematical ideas and concepts into robust frameworks that yield new insights and relations. [9] This approach is most salient in his seminal book "Variational Analysis" (1998, with Roger J-B Wets), where numerous threads developed in the areas of convex analysis, nonlinear analysis, the calculus of variations, mathematical optimization, equilibrium theory, and control systems were brought together to produce a unified approach to variational problems in finite dimensions. These various fields of study are now referred to as variational analysis. In particular, the text dispenses with differentiability as a necessary property in many areas of analysis and embraces nonsmoothness, set-valuedness, and extended real-valuedness, while still developing far-reaching calculus rules.
The approach of extending the real line with the values infinity and negative infinity and then allowing (convex) functions to take these values can be traced back to Rockafellar’s dissertation and, independently, the work by Jean-Jacques Moreau around the same time. The central role of set-valued mappings (also called multivalued functions) was also recognized in Rockafellar’s dissertation and, in fact, the standard notation ∂f(x) for the set of subgradients of a function f at x originated there.
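For reference, in the notation the dissertation helped standardize, a vector v is a subgradient of a convex function f at x when it defines an affine minorant of f that is exact at x, and the subdifferential collects all such vectors:

```latex
\[
  \partial f(x) \;=\; \{\, v \;:\; f(y) \,\ge\, f(x) + \langle v,\, y - x \rangle \ \text{ for all } y \,\}.
\]
```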
Rockafellar contributed to nonsmooth analysis by extending the rule of Fermat, which characterizes solutions of optimization problems, to composite problems using subgradient calculus and variational geometry and thereby bypassing the implicit function theorem. The approach broadens the notion of Lagrange multipliers to settings beyond smooth equality and inequality systems. In his doctoral dissertation and numerous later publications, Rockafellar developed a general duality theory based on convex conjugate functions that centers on embedding a problem within a family of problems obtained by a perturbation of parameters. This encapsulates linear programming duality and Lagrangian duality, and extends to general convex problems as well as nonconvex ones, especially when combined with an augmentation.
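In the notation now standard for this perturbational scheme (a sketch of the general pattern, not Rockafellar's original formulation), the objective F(x, 0) is embedded in a parametrized family F(x, u) and dualized through the convex conjugate F*:

```latex
\[
  p(u) \;=\; \inf_x F(x,u), \qquad
  \sup_y \,\bigl(-F^*(0,y)\bigr) \;=\; p^{**}(0) \;\le\; p(0),
\]
```

with equality (strong duality) when the value function p is convex and lower semicontinuous at u = 0. Choosing F appropriately recovers linear programming duality and the classical Lagrangian dual.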
Rockafellar also worked on applied problems and computational aspects. In the 1970s, he contributed to the development of the proximal point method, which underpins several successful algorithms including the proximal gradient method often used in statistical applications. He placed the analysis of expectation functions in stochastic programming on solid footing by defining and analyzing normal integrands. Rockafellar also contributed to the analysis of control systems and general equilibrium theory in economics.
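To make the connection to statistical applications concrete, here is a minimal sketch of the proximal gradient method for a lasso-type problem; the matrix `A`, vector `b`, penalty `lam`, and step size below are illustrative placeholders, not anything from the source.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps.

    `step` should be at most 1 / ||A||_2^2 (the reciprocal of the
    gradient's Lipschitz constant) for convergence.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the l1 part
    return x

# Usage on a small random instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x_hat = proximal_gradient(A, b, lam=0.5, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(x_hat)
```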
Since the late 1990s, Rockafellar has been actively involved with organizing and expanding the mathematical concepts for risk assessment and decision making in financial engineering and reliability engineering. This includes examining the mathematical properties of risk measures and coining the terms "conditional value-at-risk" in 2000 as well as "superquantile" and "buffered failure probability" in 2010, which either coincide with or are closely related to expected shortfall.
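The conditional value-at-risk coined in that work admits a minimization formula, due to Rockafellar and Uryasev, that makes it directly usable inside optimization models:

```latex
\[
  \mathrm{CVaR}_\alpha(X) \;=\; \min_{c \in \mathbb{R}}
  \Bigl\{\, c + \tfrac{1}{1-\alpha}\, \mathbb{E}\bigl[(X - c)_+\bigr] \Bigr\},
\]
```

where (t)₊ = max{t, 0} and α ∈ (0, 1) is the confidence level; the minimum is attained at the value-at-risk.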
Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain that never takes the value −∞ and is not identically equal to +∞.
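In symbols, an extended real-valued function f is proper exactly when

```latex
\[
  f(x) > -\infty \ \text{ for every } x
  \qquad \text{and} \qquad
  f(x) < +\infty \ \text{ for at least one } x .
\]
```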
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
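In standard form, such a problem can be written as

```latex
\[
  \min_{x \in \mathbb{R}^n} \ f(x)
  \quad \text{subject to} \quad
  g_i(x) \le 0, \quad i = 1, \dots, m,
\]
```

where f and each g_i are convex, so that the feasible set is itself convex.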
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.
In mathematics, the subderivative generalizes the derivative to convex functions that are not necessarily differentiable. The set of subderivatives at a point is called the subdifferential at that point. Subderivatives arise in convex analysis, the study of convex functions, often in connection with convex optimization.
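A standard one-dimensional example: the absolute-value function fails to be differentiable at the origin, yet its subdifferential there consists of every slope of a supporting line:

```latex
\[
  \partial |x| \;=\;
  \begin{cases}
    \{-1\}, & x < 0,\\
    [-1,\, 1], & x = 0,\\
    \{1\}, & x > 0.
  \end{cases}
\]
```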
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
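Equivalently, f is quasiconvex when

```latex
\[
  f\bigl(\lambda x + (1-\lambda)\, y\bigr) \;\le\; \max\{\, f(x),\, f(y) \,\}
  \qquad \text{for all } x,\, y \text{ and } \lambda \in [0,1].
\]
```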
Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods converge even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
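A minimal sketch of the subgradient method, here on the nonsmooth one-dimensional objective f(x) = |x − 3| with the classical diminishing step size; the objective and step rule are illustrative choices, not anything from the source.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, iters=200):
    """Subgradient method with diminishing steps t_k = 1/(k+1).

    `subgrad(x)` returns any subgradient of the convex objective `f`
    at x; since f need not decrease at each step, we return the best
    iterate seen so far rather than the last one.
    """
    x, best = x0, x0
    for k in range(iters):
        x = x - subgrad(x) / (k + 1)   # step along a negative subgradient
        if f(x) < f(best):
            best = x
    return best

# Illustrative nonsmooth objective f(x) = |x - 3|.
f = lambda x: abs(x - 3.0)
g = lambda x: np.sign(x - 3.0)   # a valid subgradient (0 works at x = 3)
print(subgradient_method(f, g, x0=0.0))   # prints a value close to 3
```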
A set-valued function, also called a correspondence or set-valued relation, is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.
Roger Jean-Baptiste Robert Wets is a "pioneer" in stochastic programming and a leader in variational analysis who publishes as Roger J-B Wets. His research, expositions, graduate students, and his collaboration with R. Tyrrell Rockafellar have had a profound influence on optimization theory, computations, and applications. Since 2009, Wets has been a distinguished research professor at the mathematics department of the University of California, Davis.
Claude Lemaréchal is a French applied mathematician, and former senior researcher at INRIA near Grenoble, France.
The Shapley–Folkman lemma is a result in convex geometry that describes the Minkowski addition of sets in a vector space. It is named after mathematicians Lloyd Shapley and Jon Folkman, but was first published by the economist Ross M. Starr.
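For reference, the lemma states that if x lies in the convex hull of the Minkowski sum of sets S₁, ..., S_m in Rⁿ, then

```latex
\[
  x \;=\; \sum_{i=1}^{m} x_i
  \qquad \text{with } x_i \in \operatorname{conv}(S_i) \text{ for every } i,
\]
```

and x_i ∈ S_i for all but at most n of the indices, so the non-convexity of the sum is controlled by the dimension rather than by the number of summands.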
In economics, non-convexity refers to violations of the convexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers with convex preferences and convex budget sets and on producers with convex production sets; for convex models, the predicted economic behavior is well understood. When convexity assumptions are violated, then many of the good properties of competitive markets need not hold: Thus, non-convexity is associated with market failures, where supply and demand differ or where market equilibria can be inefficient. Non-convex economies are studied with nonsmooth analysis, which is a generalization of convex analysis.
Convexity is a geometric property with a variety of applications in economics. Informally, an economic phenomenon is convex when "intermediates are better than extremes". For example, an economic agent with convex preferences prefers combinations of goods over having a lot of any one sort of good; this represents a kind of diminishing marginal utility of having more of the same good.
Robert Ralph Phelps was an American mathematician who was known for his contributions to analysis, particularly to functional analysis and measure theory. He was a professor of mathematics at the University of Washington from 1962 until his death.
Ivar I. Ekeland is a French mathematician of Norwegian descent. Ekeland has written influential monographs and textbooks on nonlinear functional analysis, the calculus of variations, and mathematical economics, as well as popular books on mathematics, which have been published in French, English, and other languages. Ekeland is known as the author of Ekeland's variational principle and for his use of the Shapley–Folkman lemma in optimization theory. He has contributed to the theory of periodic solutions of Hamiltonian systems and particularly to the theory of Kreĭn indices for linear systems. Ekeland is cited in the credits of Steven Spielberg's 1993 movie Jurassic Park as an inspiration for the fictional chaos theory specialist Ian Malcolm appearing in Michael Crichton's 1990 novel Jurassic Park.
In convex analysis, a branch of mathematics, the effective domain is an extension of the notion of the domain of a function; it is defined for functions that take values in the extended real number line [−∞, +∞].
Andrzej Piotr Ruszczyński is a Polish-American applied mathematician, noted for his contributions to mathematical optimization, in particular, stochastic programming and risk-averse optimization.
In mathematics, variational analysis is the combination and extension of methods from convex optimization and the classical calculus of variations to a more general theory. This includes the more general problems of optimization theory, including topics in set-valued analysis, e.g. generalized derivatives.
In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions.
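Concretely, a set-valued mapping T is cyclically monotone when every finite cycle of points x₁, ..., x_n, x_{n+1} = x₁ with v_i ∈ T(x_i) satisfies

```latex
\[
  \sum_{i=1}^{n} \bigl\langle v_i,\; x_{i+1} - x_i \bigr\rangle \;\le\; 0 .
\]
```

A theorem of Rockafellar identifies the maximal cyclically monotone mappings as exactly the subdifferentials ∂f of proper lower semicontinuous convex functions.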
Buffered probability of exceedance (bPOE) is a function of a random variable used in statistics and risk management, including financial risk. The bPOE at a threshold z is the probability of a tail whose mean value equals z. By definition, bPOE is therefore equal to one minus the confidence level at which the conditional value at risk (CVaR) equals z. bPOE is similar to the probability of exceedance of the threshold z, but the tail is identified by its mean rather than by its lowest point.
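Restating the defining relation of the paragraph above in symbols, with CVaR_α denoting the conditional value at risk at confidence level α:

```latex
\[
  \mathrm{bPOE}_z(X) \;=\; 1 - \alpha^*,
  \qquad \text{where } \alpha^* \text{ solves } \mathrm{CVaR}_{\alpha^*}(X) = z .
\]
```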