In mathematics, the subderivative (or subgradient) generalizes the derivative to convex functions which are not necessarily differentiable. The set of subderivatives at a point is called the subdifferential at that point. [1] Subderivatives arise in convex analysis, the study of convex functions, often in connection with convex optimization.
Let f : I → ℝ be a real-valued convex function defined on an open interval I of the real line. Such a function need not be differentiable at all points: for example, the absolute value function f(x) = |x| is non-differentiable at x = 0. However, as seen in the graph on the right (where f, in blue, has non-differentiable kinks similar to the absolute value function), for any x₀ in the domain of the function one can draw a line which goes through the point (x₀, f(x₀)) and which is everywhere either touching or below the graph of f. The slope of such a line is called a subderivative.
Rigorously, a subderivative of a convex function f : I → ℝ at a point x₀ in the open interval I is a real number c such that f(x) − f(x₀) ≥ c(x − x₀) for all x in I. By the converse of the mean value theorem, the set of subderivatives at x₀ for a convex function is a nonempty closed interval [a, b], where a and b are the one-sided limits a = lim_{x → x₀⁻} (f(x) − f(x₀))/(x − x₀) and b = lim_{x → x₀⁺} (f(x) − f(x₀))/(x − x₀), which are guaranteed to exist and satisfy a ≤ b. The interval [a, b] of all subderivatives is called the subdifferential of the function f at x₀, denoted by ∂f(x₀). If f is convex, then its subdifferential at any point is non-empty. Moreover, if its subdifferential at x₀ contains exactly one subderivative, then f is differentiable at x₀ and ∂f(x₀) = {f′(x₀)}. [2]
Consider the function f(x) = |x|, which is convex. Then the subdifferential at the origin is the interval [−1, 1]. The subdifferential at any point x₀ < 0 is the singleton set {−1}, while the subdifferential at any point x₀ > 0 is the singleton set {1}. This is similar to the sign function, but is not single-valued at 0, instead including all possible subderivatives.
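As a numerical illustration (a minimal Python sketch, not from the original article; the helper names are invented for this example), the following code returns the subdifferential of f(x) = |x| at a point and checks the defining inequality f(x) ≥ f(x₀) + c(x − x₀) on a grid:

```python
def subdifferential_abs(x0):
    """Subdifferential of f(x) = |x| at x0, returned as a closed interval (a, b)."""
    if x0 < 0:
        return (-1.0, -1.0)
    if x0 > 0:
        return (1.0, 1.0)
    return (-1.0, 1.0)  # at the origin, every slope in [-1, 1] works

def is_subgradient(f, c, x0, grid):
    """Check the defining inequality f(x) >= f(x0) + c*(x - x0) on a grid of points."""
    return all(f(x) >= f(x0) + c * (x - x0) - 1e-12 for x in grid)

if __name__ == "__main__":
    f = abs
    grid = [i / 10 for i in range(-50, 51)]
    for x0 in (-2.0, 0.0, 3.0):
        a, b = subdifferential_abs(x0)
        print(x0, (a, b), is_subgradient(f, a, x0, grid), is_subgradient(f, b, x0, grid))
    # A slope outside the subdifferential violates the inequality somewhere:
    print(is_subgradient(f, 2.0, 0.0, grid))  # False
```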
The concepts of subderivative and subdifferential can be generalized to functions of several variables. If f : U → ℝ is a real-valued convex function defined on a convex open set U in the Euclidean space ℝⁿ, a vector v in that space is called a subgradient at a point x₀ in U if for any x in U one has that f(x) − f(x₀) ≥ v ⋅ (x − x₀),
where the dot denotes the dot product. The set of all subgradients at x₀ is called the subdifferential at x₀ and is denoted ∂f(x₀). The subdifferential is always a nonempty convex compact set.
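To make the several-variable inequality concrete, here is a hedged Python sketch (an illustrative example, not taken from the article): it uses f(x) = ‖x‖₁, takes the coordinate-wise sign vector as one valid subgradient at x₀, and verifies f(x) ≥ f(x₀) + v ⋅ (x − x₀) at randomly sampled points.

```python
import random

def f(x):                     # f(x) = ||x||_1, a convex function on R^n
    return sum(abs(t) for t in x)

def sign_subgradient(x0):     # the sign vector is one valid subgradient of the 1-norm
    return [(-1.0 if t < 0 else 1.0 if t > 0 else 0.0) for t in x0]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x0 = [1.5, 0.0, -2.0]
v = sign_subgradient(x0)
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(len(x0))]
    assert f(x) >= f(x0) + dot(v, [a - b for a, b in zip(x, x0)]) - 1e-9
print("subgradient inequality holds at all sampled points")
```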
These concepts generalize further to convex functions f : U → ℝ on a convex set U in a locally convex space V. A functional v∗ in the dual space V∗ is called a subgradient at a point x₀ in U if for all x in U, f(x) − f(x₀) ≥ v∗(x − x₀).
The set of all subgradients at x₀ is called the subdifferential at x₀ and is again denoted ∂f(x₀). The subdifferential is always a convex closed set. It can be an empty set; consider for example an unbounded (discontinuous) linear functional, which is convex but has no subgradient at any point. If f is continuous, the subdifferential is nonempty.
The subdifferential of convex functions was introduced by Jean Jacques Moreau and R. Tyrrell Rockafellar in the early 1960s. The generalized subdifferential for nonconvex functions was introduced by F. H. Clarke and R. T. Rockafellar in the early 1980s. [4]
In mathematics, the mean value theorem states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval.
In mathematics, the branch of real analysis studies the behavior of real numbers, sequences and series of real numbers, and real functions. Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.
In mathematics, the sign function or signum function is a function that has the value −1, +1 or 0 according to whether the sign of a given real number is positive or negative, or the given number is itself zero. In mathematical notation the sign function is often represented as sgn x or sgn(x).
In mathematical analysis, in particular the subfields of convex analysis and optimization, a proper convex function is an extended real-valued convex function with a non-empty domain, that never takes on the value −∞ and also is not identically equal to +∞.
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.
In mathematics, the derivative is a fundamental construction of differential calculus and admits many possible generalizations within the fields of mathematical analysis, combinatorics, algebra, geometry, etc.
In mathematics, the Fréchet derivative is a derivative defined on normed spaces. Named after Maurice Fréchet, it is commonly used to generalize the derivative of a real-valued function of a single real variable to the case of a vector-valued function of multiple real variables, and to define the functional derivative used widely in the calculus of variations.
In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a differentiable function is pseudoconvex if it is increasing in any direction where it has a positive directional derivative. The property must hold on all of the function's domain, and not only for nearby points.
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.
In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be quasiconcave.
In mathematics, the support function h_A of a non-empty closed convex set A in ℝⁿ describes the (signed) distances of supporting hyperplanes of A from the origin. The support function is a convex function on ℝⁿ. Any non-empty closed convex set A is uniquely determined by h_A. Furthermore, the support function, as a function of the set A, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition. Due to these properties, the support function is one of the most central basic concepts in convex geometry.
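As a small illustrative sketch (an assumed example, not part of the original text): for the convex hull of a finite point set A, the support function reduces to h_A(d) = max over a in A of a ⋅ d, which the following Python snippet computes.

```python
def support_function(points, direction):
    """h_A(direction) = max over a in A of <a, direction>, for A = conv(points)."""
    return max(sum(a_i * d_i for a_i, d_i in zip(a, direction)) for a in points)

# Vertices of the unit square [0, 1] x [0, 1]:
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(support_function(square, (1, 1)))   # 2: attained at the vertex (1, 1)
print(support_function(square, (-1, 0)))  # 0: attained on the edge x = 0
```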
Subgradient methods are convex optimization methods which use subderivatives. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of steepest descent.
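Below is a minimal, hedged Python sketch of the basic subgradient iteration x_{k+1} = x_k − α_k g_k with a diminishing step size α_k = 1/k, applied to the non-differentiable convex function f(x) = |x − 2| + |x + 1| (the objective is an illustrative choice, not from the source):

```python
def f(x):
    return abs(x - 2) + abs(x + 1)

def subgrad(x):
    """One subgradient of f at x (any element of the subdifferential would do)."""
    g = 0.0
    g += 1.0 if x > 2 else (-1.0 if x < 2 else 0.0)
    g += 1.0 if x > -1 else (-1.0 if x < -1 else 0.0)
    return g

x, best = 10.0, float("inf")
for k in range(1, 2001):
    g = subgrad(x)
    x -= (1.0 / k) * g          # diminishing step size, a standard convergent choice
    best = min(best, f(x))
print(best)                     # approaches the minimum value 3 (any x in [-1, 2] is optimal)
```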
In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form f(x) = max_{z ∈ Z} φ(x, z).
In vector calculus, an invex function is a differentiable function f from ℝⁿ to ℝ for which there exists a vector-valued function η such that f(x) − f(u) ≥ η(x, u) ⋅ ∇f(u) for all x and u.
Ralph Tyrrell Rockafellar is an American mathematician and one of the leading scholars in optimization theory and related fields of analysis and combinatorics. He is the author of four major books including the landmark text "Convex Analysis" (1970), which has been cited more than 27,000 times according to Google Scholar and remains the standard reference on the subject, and "Variational Analysis" for which the authors received the Frederick W. Lanchester Prize from the Institute for Operations Research and the Management Sciences (INFORMS).
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article.
In mathematical optimization, the proximal operator is an operator associated with a proper, lower semi-continuous convex function f from a Hilbert space X to (−∞, +∞], and is defined by: prox_f(v) = argmin_{x ∈ X} ( f(x) + ½‖x − v‖² ).
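As a concrete instance (an assumed illustration, not from the original paragraph): for f(x) = λ|x| on the real line, the proximal operator prox_f(v) = argmin_x ( λ|x| + ½(x − v)² ) reduces to soft-thresholding, sketched below in Python.

```python
def prox_abs(v, lam):
    """prox of f(x) = lam * |x|: the soft-thresholding operator."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

print(prox_abs(3.0, 1.0))   # 2.0
print(prox_abs(-0.5, 1.0))  # 0.0  (small inputs are shrunk to zero)
```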
In mathematics, cyclical monotonicity is a generalization of the notion of monotonicity to the case of vector-valued functions.
In mathematics, the Clarke generalized derivatives are types of generalized derivatives that allow for the differentiation of nonsmooth functions. The Clarke derivatives were introduced by Francis Clarke in 1975.