R. Tyrrell Rockafellar

Ralph Tyrrell Rockafellar
R. Tyrrell ("Terry") Rockafellar in 1977
Born: February 10, 1935 (age 88), Milwaukee, Wisconsin, U.S.
Alma mater: Harvard University
Known for: Convex analysis, monotone operators, calculus of variations, stochastic programming, oriented matroids
Awards: Dantzig Prize of SIAM and MPS (1982); John von Neumann Lecture of SIAM (1992); Frederick W. Lanchester Prize of INFORMS (1998); John von Neumann Theory Prize of INFORMS (1999); Doctor Honoris Causa: Groningen, Montpellier, Chile, Alicante
Scientific career
Fields: Mathematical optimization
Institutions: University of Washington (1966–), University of Florida (adjunct, 2003–), University of Texas at Austin (1963–1965)
Thesis: "Convex Functions and Dual Extremum Problems" (1963)
Doctoral advisor: Garrett Birkhoff
Notable students: Peter Wolenski, Francis Clarke

Ralph Tyrrell Rockafellar (born February 10, 1935) is an American mathematician and one of the leading scholars in optimization theory and related fields of analysis and combinatorics. He is the author of four major books including the landmark text "Convex Analysis" (1970), [1] which has been cited more than 27,000 times according to Google Scholar and remains the standard reference on the subject, and "Variational Analysis" (1998, with Roger J-B Wets) for which the authors received the Frederick W. Lanchester Prize from the Institute for Operations Research and the Management Sciences (INFORMS).

He is professor emeritus at the departments of mathematics and applied mathematics at the University of Washington, Seattle.

Early life and education

Ralph Tyrrell Rockafellar was born in Milwaukee, Wisconsin. [2] He is named after his father Ralph Rockafellar, with Tyrrell being his mother’s maiden name. Since his mother was fond of the name Terry, the parents adopted it as a nickname for Tyrrell and soon everybody referred to him as Terry. [3]

Rockafellar is a distant relative of the American business magnate and philanthropist John D. Rockefeller. Both can trace their ancestry back to two brothers named Rockenfelder who came to America from the Rhineland-Palatinate (Rheinland-Pfalz) region of Germany in 1728. The spelling of the family name soon diverged, resulting in Rockafellar, Rockefeller, and many other versions of the name. [4]

Rockafellar moved to Cambridge, Massachusetts to attend Harvard College in 1953. Majoring in mathematics, he graduated from Harvard summa cum laude in 1957 and was elected to the Phi Beta Kappa honor society. Rockafellar was a Fulbright Scholar at the University of Bonn in 1957–58 and completed a Master of Science degree at Marquette University in 1959. Formally under the guidance of Professor Garrett Birkhoff, Rockafellar completed his Doctor of Philosophy degree in mathematics at Harvard University in 1963 with the dissertation “Convex Functions and Dual Extremum Problems.” However, at the time there was little interest in convexity and optimization at Harvard, and Birkhoff was neither involved with the research nor familiar with the subject. [5] The dissertation was inspired by the duality theory of linear programming developed by John von Neumann, which Rockafellar learned about through volumes of recent papers compiled by Albert W. Tucker at Princeton University. [6] Rockafellar’s dissertation, together with the contemporaneous work of Jean-Jacques Moreau in France, is regarded as the birth of convex analysis.

Career

After graduating from Harvard, Rockafellar became Assistant Professor of Mathematics at the University of Texas, Austin, where he was also affiliated with the Department of Computer Science. After two years, he moved to the University of Washington in Seattle, where he held joint positions in the Departments of Mathematics and Applied Mathematics from 1966 until his retirement in 2003. He is presently Professor Emeritus at the university. He has held adjunct positions at the University of Florida and Hong Kong Polytechnic University.

Rockafellar was a visiting professor at the Mathematics Institute, Copenhagen (1964), Princeton University (1965–66), University of Grenoble (1973–74), University of Colorado, Boulder (1978), International Institute of Applied Systems Analysis, Vienna (1980–81), University of Pisa (1991), University of Paris-Dauphine (1996), University of Pau (1997), Keio University (2009), National University of Singapore (2011), University of Vienna (2011), and Yale University (2012).

Rockafellar received the Dantzig Prize from the Society for Industrial and Applied Mathematics (SIAM) and the Mathematical Optimization Society in 1982, and he delivered SIAM's John von Neumann Lecture in 1992. In 1998, he and Roger J-B Wets received the Frederick W. Lanchester Prize from the Institute for Operations Research and the Management Sciences (INFORMS) for the book “Variational Analysis.” In 1999, he was awarded the John von Neumann Theory Prize from INFORMS, and he was elected to the 2002 class of Fellows of INFORMS. [7] He is the recipient of honorary doctoral degrees from the University of Groningen (1984), University of Montpellier (1995), University of Chile (1998), and University of Alicante (2000). The Institute for Scientific Information (ISI) lists Rockafellar as a highly cited researcher. [8]

Research

Rockafellar’s research is motivated by the goal of organizing mathematical ideas and concepts into robust frameworks that yield new insights and relations. [9] This approach is most salient in his seminal book "Variational Analysis" (1998, with Roger J-B Wets), where numerous threads developed in the areas of convex analysis, nonlinear analysis, the calculus of variations, mathematical optimization, equilibrium theory, and control systems were brought together to produce a unified approach to variational problems in finite dimensions. These various fields of study are now referred to collectively as variational analysis. In particular, the text dispenses with differentiability as a necessary property in many areas of analysis and embraces nonsmoothness, set-valuedness, and extended real-valuedness, while still developing far-reaching calculus rules.

Contributions to Mathematics

The approach of extending the real line with the values infinity and negative infinity and then allowing (convex) functions to take these values can be traced back to Rockafellar’s dissertation and, independently, the work by Jean-Jacques Moreau around the same time. The central role of set-valued mappings (also called multivalued functions) was also recognized in Rockafellar’s dissertation and, in fact, the standard notation ∂f(x) for the set of subgradients of a function f at x originated there.
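
For reference, the standard definition reads as follows (a routine statement in LaTeX, not a quotation from the dissertation):

    % Subgradients of a convex function f : R^n -> (-infty, +infty].
    % v is a subgradient of f at x when the affine function through (x, f(x))
    % with slope v underestimates f everywhere:
    \[
      \partial f(x) = \{\, v \in \mathbb{R}^n :
        f(y) \ge f(x) + \langle v,\, y - x \rangle \ \text{for all } y \in \mathbb{R}^n \,\}.
    \]
    % When f is differentiable at x, this set collapses to the single
    % element \nabla f(x).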

Rockafellar contributed to nonsmooth analysis by extending the rule of Fermat, which characterizes solutions of optimization problems, to composite problems using subgradient calculus and variational geometry and thereby bypassing the implicit function theorem. The approach broadens the notion of Lagrange multipliers to settings beyond smooth equality and inequality systems. In his doctoral dissertation and numerous later publications, Rockafellar developed a general duality theory based on convex conjugate functions that centers on embedding a problem within a family of problems obtained by a perturbation of parameters. This encapsulates linear programming duality and Lagrangian duality, and extends to general convex problems as well as nonconvex ones, especially when combined with an augmentation.
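
In outline, the scheme runs as follows (a standard rendering, with the perturbation notation chosen here for illustration):

    % Convex conjugate of f:
    \[ f^*(v) = \sup_{x} \{ \langle v, x \rangle - f(x) \}. \]
    % Embed the problem min_x f(x) in a family F(x, u) with F(x, 0) = f(x),
    % where u is the perturbation parameter; the primal-dual pair is
    \[ \text{(P)} \quad \min_{x} F(x, 0), \qquad
       \text{(D)} \quad \max_{v} \, -F^*(0, v), \]
    % and weak duality inf(P) >= sup(D) always holds, with equality under
    % convexity plus a regularity (constraint qualification) condition.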

Contributions to Applications

Rockafellar also worked on applied problems and computational aspects. In the 1970s, he contributed to the development of the proximal point method, which underpins several successful algorithms including the proximal gradient method often used in statistical applications. He placed the analysis of expectation functions in stochastic programming on solid footing by defining and analyzing normal integrands. Rockafellar also contributed to the analysis of control systems and general equilibrium theory in economics.
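
As a small numerical illustration (a sketch in Python, not code from the literature; the lasso-style example, function names, and step-size choice are ours), the proximal gradient method alternates a gradient step on the smooth part of the objective with a proximal step on the nonsmooth part:

    import numpy as np

    def prox_l1(z, t):
        """Proximal mapping of t*||.||_1, i.e. soft-thresholding."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def proximal_gradient(grad_g, prox_h, x0, step, iters=500):
        """Minimize g(x) + h(x): gradient step on the smooth g,
        proximal step on the nonsmooth h."""
        x = x0
        for _ in range(iters):
            x = prox_h(x - step * grad_g(x), step)
        return x

    # Lasso-type problem: min 0.5*||Ax - b||^2 + lam*||x||_1
    rng = np.random.default_rng(0)
    A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
    grad_g = lambda x: A.T @ (A @ x - b)
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, L = Lipschitz constant
    x_star = proximal_gradient(grad_g, lambda z, t: prox_l1(z, lam * t),
                               np.zeros(5), step)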

Since the late 1990s, Rockafellar has been actively involved with organizing and expanding the mathematical concepts for risk assessment and decision making in financial engineering and reliability engineering. This includes examining the mathematical properties of risk measures and coining the terms "conditional value-at-risk" in 2000 as well as "superquantile" and "buffered failure probability" in 2010, which either coincide with or are closely related to expected shortfall.
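
For orientation, the Rockafellar–Uryasev formula expresses conditional value-at-risk as a one-dimensional minimization, which makes it straightforward to estimate from samples. The sketch below is ours (sample data and function name included) and uses the empirical alpha-quantile as the minimizer:

    import numpy as np

    def cvar(losses, alpha):
        """Conditional value-at-risk (superquantile) at level alpha via
        CVaR_alpha(X) = min_c { c + E[max(X - c, 0)] / (1 - alpha) };
        here c is estimated by the empirical alpha-quantile (value-at-risk)."""
        c = np.quantile(losses, alpha)
        return c + np.mean(np.maximum(losses - c, 0.0)) / (1.0 - alpha)

    losses = np.random.default_rng(1).standard_normal(100000)
    print(cvar(losses, 0.95))   # approx. 2.06 for the standard normal tail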

Selected publications

Books

  - "Convex Analysis" (Princeton University Press, 1970)
  - "Variational Analysis" (with Roger J-B Wets, Springer, 1998)

Notes

  1. Rockafellar, R. Tyrrell (1997). Convex Analysis (Princeton Landmarks in Mathematics and Physics, PMS-28). Princeton University Press. ISBN 978-0691015866.
  2. Kalte, Pamela M.; Nemeh, Katherine H.; Schusterbauer, Noah (2005). Q–S. ISBN 9780787673987.
  3. Rockafellar, R. T. "About my name". Personal webpage. Retrieved 7 August 2020.
  4. Rockafellar, R. T. "About my name". Personal webpage. Retrieved 7 August 2020.
  5. "An Interview with R. Tyrrell Rockafellar" (PDF). SIAG/Opt News and Views. 15 (1). 2004.
  6. "An Interview with R. Tyrrell Rockafellar" (PDF). SIAG/Opt News and Views. 15 (1). 2004.
  7. "Fellows: Alphabetical List". Institute for Operations Research and the Management Sciences. Archived from the original on 2019-05-10. Retrieved 2019-10-09.
  8. In the Institute for Scientific Information highly cited researcher list, Rockafellar's author id is "A0071-2003-A".
  9. "An Interview with R. Tyrrell Rockafellar" (PDF). SIAG/Opt News and Views. 15 (1). 2004.

Related Research Articles

Mathematical optimization

Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.

In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain that never takes the value $-\infty$ and is not identically equal to $+\infty$.

In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate. It allows in particular for a far reaching generalization of Lagrangian duality.
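
In symbols, the standard definition and its immediate consequence are:

    \[ f^*(y) = \sup_{x} \{ \langle y, x \rangle - f(x) \}, \]
    % which yields the Fenchel-Young inequality:
    % f(x) + f^*(y) >= <y, x> for all x and y.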

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

Convex analysis

Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem, then the dual is a maximization problem. The objective value of any feasible solution to the primal (minimization) problem is at least as large as that of any feasible solution to the dual (maximization) problem. Therefore, the optimal value of the primal is an upper bound on the optimal value of the dual, and the optimal value of the dual is a lower bound on that of the primal. This fact is called weak duality.

Subderivative

In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.

Quasiconvex function

In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form $(-\infty, a)$ is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
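
Equivalently, in the usual formulation:

    % f is quasiconvex iff every sublevel set {x : f(x) <= a} is convex,
    % or equivalently, for all x, y and 0 <= t <= 1,
    \[ f(t x + (1 - t) y) \le \max\{ f(x), f(y) \}. \]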

Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
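
A minimal sketch of the iteration (the toy objective and the diminishing step rule are chosen here for illustration):

    import numpy as np

    def subgradient_method(subgrad, x0, iters=1000):
        """x_{k+1} = x_k - t_k * g_k, with g_k a subgradient at x_k and
        diminishing steps t_k = 1/(k+1) (square-summable but not summable)."""
        x = x0
        for k in range(iters):
            x = x - (1.0 / (k + 1)) * subgrad(x)
        return x

    # f(x) = ||x||_1 is nondifferentiable at 0; sign(x) is a subgradient.
    x_min = subgradient_method(np.sign, np.array([3.0, -2.0]))  # -> near 0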

Set-valued function

A set-valued function is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.

Roger Jean-Baptiste Robert Wets is a "pioneer" in stochastic programming and a leader in variational analysis who publishes as Roger J-B Wets. His research, expositions, graduate students, and his collaboration with R. Tyrrell Rockafellar have had a profound influence on optimization theory, computations, and applications. Since 2009, Wets has been a distinguished research professor at the mathematics department of the University of California, Davis.

Claude Lemaréchal

Claude Lemaréchal is a French applied mathematician, and former senior researcher at INRIA near Grenoble, France.

Shapley–Folkman lemma

The Shapley–Folkman lemma is a result in convex geometry that describes the Minkowski addition of sets in a vector space. It is named after mathematicians Lloyd Shapley and Jon Folkman, but was first published by the economist Ross M. Starr.

Convexity is an important topic in economics. In the Arrow–Debreu model of general economic equilibrium, agents have convex budget sets and convex preferences: At equilibrium prices, the budget hyperplane supports the best attainable indifference curve. The profit function is the convex conjugate of the cost function. Convex analysis is the standard tool for analyzing textbook economics. Non‑convex phenomena in economics have been studied with nonsmooth analysis, which generalizes convex analysis.

Robert Phelps

Robert Ralph Phelps was an American mathematician who was known for his contributions to analysis, particularly to functional analysis and measure theory. He was a professor of mathematics at the University of Washington from 1962 until his death.

Ivar Ekeland

Ivar I. Ekeland is a French mathematician of Norwegian descent. Ekeland has written influential monographs and textbooks on nonlinear functional analysis, the calculus of variations, and mathematical economics, as well as popular books on mathematics, which have been published in French, English, and other languages. Ekeland is known as the author of Ekeland's variational principle and for his use of the Shapley–Folkman lemma in optimization theory. He has contributed to the periodic solutions of Hamiltonian systems and particularly to the theory of Kreĭn indices for linear systems. Ekeland helped to inspire the discussion of chaos theory in Michael Crichton's 1990 novel Jurassic Park.

In convex analysis, a branch of mathematics, the effective domain is an extension of the notion of the domain of a function, defined for functions that take values in the extended real number line $[-\infty, +\infty]$.

Andrzej Piotr Ruszczyński

Andrzej Piotr Ruszczyński is a Polish-American applied mathematician, noted for his contributions to mathematical optimization, in particular, stochastic programming and risk-averse optimization.

In mathematics, the term variational analysis usually denotes the combination and extension of methods from convex optimization and the classical calculus of variations to a more general theory. This includes the more general problems of optimization theory, including topics in set-valued analysis, e.g. generalized derivatives.

In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions.
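
In the standard formulation:

    % A mapping T is cyclically monotone if, for every cycle
    % x_0, x_1, ..., x_n with x_{n+1} = x_0 and any choices y_i in T(x_i),
    \[ \sum_{i=0}^{n} \langle y_i,\, x_{i+1} - x_i \rangle \le 0. \]
    % Rockafellar's theorem: the maximal cyclically monotone operators are
    % exactly the subdifferentials of proper lower semicontinuous convex
    % functions.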
