[Figure: a convex real function; the secant line between any two points lies above the graph.]
In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above or on the graph between the two points. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. In simple terms, the graph of a convex function is shaped like a cup (or a straight line, as for a linear function), while the graph of a concave function is shaped like a cap.
Convex functions play an important role in many areas of mathematics. They are especially important in the study of optimization problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an open set has no more than one minimum. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the calculus of variations. In probability theory, a convex function applied to the expected value of a random variable is always bounded above by the expected value of the convex function of the random variable. This result, known as Jensen's inequality, can be used to deduce inequalities such as the arithmetic–geometric mean inequality and Hölder's inequality.
Let X be a convex subset of a real vector space, and let f : X → ℝ be a function. Then f is called convex if and only if any of the following equivalent conditions hold:
For all 0 ≤ t ≤ 1 and all x1, x2 ∈ X:
f(t x1 + (1 − t) x2) ≤ t f(x1) + (1 − t) f(x2)
The right hand side represents the straight line between (x1, f(x1)) and (x2, f(x2)) in the graph of f as a function of t; increasing t from 0 to 1 or decreasing t from 1 to 0 sweeps this line. Similarly, the argument of the function f in the left hand side represents the straight line between x1 and x2 in X, the x-axis of the graph of f. So, this condition requires that the straight line between any pair of points on the curve of f lie above or just meet the graph.[2]
For all 0 < t < 1 and all x1, x2 ∈ X such that x1 ≠ x2:
f(t x1 + (1 − t) x2) ≤ t f(x1) + (1 − t) f(x2)
The difference between this second condition and the first condition above is that this condition excludes the intersection points (for example, (x1, f(x1)) and (x2, f(x2))) between the straight line passing through a pair of points on the curve of f (the straight line is represented by the right hand side of this condition) and the curve of f; the first condition includes the intersection points, as it becomes f(x1) ≤ f(x1) or f(x2) ≤ f(x2) at t = 0 or t = 1, or at x1 = x2. In fact, the intersection points do not need to be considered in a condition of convexity, because f(x1) ≤ f(x1) and f(x2) ≤ f(x2) are always true (so not useful to be a part of a condition).
The second statement characterizing convex functions that are valued in the real line ℝ is also the statement used to define convex functions that are valued in the extended real number line [−∞, ∞] = ℝ ∪ {±∞}, where such a function f is allowed to take ±∞ as a value. The first statement is not used because it permits t to take 0 or 1 as a value, in which case, if f(x1) = ±∞ or f(x2) = ±∞, respectively, then t f(x1) + (1 − t) f(x2) would be undefined (because the multiplications 0 · ∞ and 0 · (−∞) are undefined). The sum −∞ + ∞ is also undefined, so a convex extended real-valued function is typically only allowed to take exactly one of −∞ and +∞ as a value.
The second statement can also be modified to get the definition of strict convexity, where the latter is obtained by replacing ≤ with the strict inequality <. Explicitly, the map f is called strictly convex if and only if for all real 0 < t < 1 and all x1, x2 ∈ X such that x1 ≠ x2:
f(t x1 + (1 − t) x2) < t f(x1) + (1 − t) f(x2)
A strictly convex function f is a function for which the straight line between any pair of points on the curve of f is above the curve, except for the intersection points between the straight line and the curve. An example of a function which is convex but not strictly convex is f(x, y) = x^2 + y. This function is not strictly convex because any two points sharing an x coordinate are joined by a straight line that lies on the graph, while any two points not sharing an x coordinate give a straight line strictly above the graph between them.
The function f is said to be concave (resp. strictly concave) if −f (f multiplied by −1) is convex (resp. strictly convex).
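The defining inequality lends itself to a quick numerical spot-check. The sketch below (the helper name is invented for illustration) samples random pairs of points and tests f(t x1 + (1 − t) x2) ≤ t f(x1) + (1 − t) f(x2); passing is only evidence of convexity, not a proof, but a single violation refutes it:

```python
import random

random.seed(0)

def is_convex_on_samples(f, points, trials=1000, tol=1e-9):
    """Test the defining chord inequality on randomly sampled pairs of points."""
    for _ in range(trials):
        x1, x2 = random.choice(points), random.choice(points)
        t = random.random()
        mid = tuple(t * a + (1 - t) * b for a, b in zip(x1, x2))
        if f(*mid) > t * f(*x1) + (1 - t) * f(*x2) + tol:
            return False  # found a chord that dips below the graph
    return True

pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
convex_example = is_convex_on_samples(lambda x, y: x**2 + y, pts)   # convex
nonconvex_example = is_convex_on_samples(lambda x, y: x * y, pts)   # not convex
```

A sampling check like this can only refute convexity; an exact argument (for instance via the Hessian, discussed below) is needed for a genuine proof.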
Alternative naming
The term convex is often referred to as convex down or concave upward, and the term concave is often referred to as concave down or convex upward.[3][4][5] If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup-shaped graph. As an example, Jensen's inequality refers to an inequality involving a convex or convex-(down) function.[6]
Properties
Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. See below for the properties in the case of many variables; some of them are not listed for functions of one variable.
Functions of one variable
Suppose f is a function of one real variable defined on an interval, and let
R(x1, x2) = (f(x2) − f(x1)) / (x2 − x1)
(note that R(x1, x2) is the slope of the purple line in the first drawing; the function R is symmetric in (x1, x2), meaning that R does not change by exchanging x1 and x2). f is convex if and only if R(x1, x2) is monotonically non-decreasing in x1 for every fixed x2 (or vice versa). This characterization of convexity is quite useful to prove the following results.
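This slope characterization is easy to illustrate numerically; the sketch below (helper names invented) fixes x2 and checks that the chord slope R(x1, x2) is non-decreasing in x1 for the convex function x^2:

```python
def slope(f, x1, x2):
    """Slope R(x1, x2) of the chord of f between x1 and x2."""
    return (f(x2) - f(x1)) / (x2 - x1)

f = lambda x: x * x                  # a convex function
x2 = 3.0                             # fixed second argument
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]     # increasing x1 values, all distinct from x2
slopes = [slope(f, x1, x2) for x1 in xs]
# For f(x) = x^2 the chord slope works out to 3 + x1, so it increases with x1.
monotone = all(a <= b for a, b in zip(slopes, slopes[1:]))
```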
A convex function f of one real variable defined on some open interval C is continuous on C and admits left and right derivatives, and these are monotonically non-decreasing. In addition, the left derivative is left-continuous and the right derivative is right-continuous. As a consequence, f is differentiable at all but at most countably many points; the set on which f is not differentiable can however still be dense. If C is closed, then f may fail to be continuous at the endpoints of C (an example is shown in the examples section).
A differentiable function f of one variable is convex on an interval if and only if its graph lies above all of its tangents:[7]: 69 
f(y) ≥ f(x) + f'(x)(y − x)
for all x and y in the interval.
A twice differentiable function of one variable is convex on an interval if and only if its second derivative is non-negative there; this gives a practical test for convexity. Visually, a twice differentiable convex function "curves up", without any bends the other way (inflection points). If its second derivative is positive at all points then the function is strictly convex, but the converse does not hold. For example, the second derivative of f(x) = x^4 is f''(x) = 12x^2, which is zero for x = 0, but x^4 is strictly convex.
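The second-derivative test can be sketched with a finite-difference approximation (the step size and tolerance below are arbitrary choices for illustration):

```python
def second_derivative(f, x, h=1e-3):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

grid = [i / 10 for i in range(-30, 31)]   # points in [-3, 3]
# x^4 is convex: f''(x) = 12 x^2 >= 0 everywhere (zero only at x = 0).
quartic_convex = all(second_derivative(lambda x: x**4, x) >= -1e-6 for x in grid)
# x^3 is not convex on [-3, 3]: f''(x) = 6x < 0 for x < 0.
cubic_convex = all(second_derivative(lambda x: x**3, x) >= -1e-6 for x in grid)
```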
This property and the property above in terms of "...its derivative is monotonically non-decreasing..." are not equal: if f'' is non-negative on an interval X, then f' is monotonically non-decreasing on X, but the converse is not true; for example, f' can be monotonically non-decreasing on X while its derivative f'' is not defined at some points of X.
If f is a convex function of one real variable and f(0) ≤ 0, then f is superadditive on the positive reals, that is, f(a + b) ≥ f(a) + f(b) for positive real numbers a and b.
Proof: Since f is convex, by using one of the convex function definitions above and letting x2 = 0, it follows that for all real 0 ≤ t ≤ 1,
f(t x1) = f(t x1 + (1 − t) · 0) ≤ t f(x1) + (1 − t) f(0) ≤ t f(x1).
From f(t x1) ≤ t f(x1), it follows that
f(a) + f(b) = f((a/(a + b)) (a + b)) + f((b/(a + b)) (a + b)) ≤ (a/(a + b)) f(a + b) + (b/(a + b)) f(a + b) = f(a + b).
Namely, f(a) + f(b) ≤ f(a + b).
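The superadditivity claim can be spot-checked for f(x) = x^2, which is convex with f(0) = 0 (a sketch, not a proof):

```python
import random

random.seed(1)
f = lambda x: x * x   # convex, with f(0) = 0
superadditive = all(
    f(a + b) >= f(a) + f(b)
    for a, b in ((random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000))
)
# Here the inequality is easy to see directly:
# (a + b)^2 = a^2 + b^2 + 2ab >= a^2 + b^2 for positive a, b.
```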
A function f is called midpoint convex on an interval C if for all x1, x2 ∈ C:
f((x1 + x2)/2) ≤ (f(x1) + f(x2))/2
This condition is only slightly weaker than convexity. For example, a real-valued Lebesgue measurable function that is midpoint-convex is convex: this is a theorem of Sierpiński.[8] In particular, a continuous function that is midpoint convex will be convex.
Functions of several variables
A function that is marginally convex in each individual variable is not necessarily (jointly) convex. For example, the function f(x, y) = x y is marginally linear, and thus marginally convex, in each variable, but not (jointly) convex.
For a convex function f, the sublevel sets {x : f(x) < a} and {x : f(x) ≤ a} with a ∈ ℝ are convex sets. A function that satisfies this property is called a quasiconvex function and may fail to be a convex function.
Consequently, the set of global minimisers of a convex function f is a convex set: argmin f is convex.
Any local minimum of a convex function is also a global minimum. A strictly convex function will have at most one global minimum.[9]
Jensen's inequality applies to every convex function f. If X is a random variable taking values in the domain of f, then E[f(X)] ≥ f(E[X]), where E denotes the mathematical expectation. Indeed, convex functions are exactly those that satisfy the hypothesis of Jensen's inequality.
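Jensen's inequality can be illustrated with a small Monte Carlo experiment; for f(x) = x^2 the gap E[f(X)] − f(E[X]) is exactly the variance of X, so it must come out non-negative:

```python
import random

random.seed(42)
samples = [random.uniform(-1, 3) for _ in range(100_000)]
f = lambda x: x * x                                     # a convex function
mean_of_f = sum(f(x) for x in samples) / len(samples)   # E[f(X)]
f_of_mean = f(sum(samples) / len(samples))              # f(E[X])
jensen_gap = mean_of_f - f_of_mean                      # Var(X) when f(x) = x^2
```

For X uniform on [−1, 3] the variance is 4^2/12 = 4/3, so the estimated gap should land near that value.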
A first-order homogeneous function of two positive variables x and y (that is, a function satisfying f(a x, a y) = a f(x, y) for all positive real a, x, y) that is convex in one variable must be convex in the other variable.[10]
Operations that preserve convexity
−f is concave if and only if f is convex.
If c is any real number, then c + f is convex if and only if f is convex.
Nonnegative weighted sums:
if w1, …, wn ≥ 0 and f1, …, fn are all convex, then so is w1 f1 + ⋯ + wn fn. In particular, the sum of two convex functions is convex.
This property extends to infinite sums, integrals and expected values as well (provided that they exist).
Elementwise maximum: let {f_i : i ∈ I} be a collection of convex functions. Then g(x) = sup_{i∈I} f_i(x) is convex. The domain of g is the collection of points where the expression is finite. Important special cases:
If f1, …, fn are convex functions, then so is g(x) = max{f1(x), …, fn(x)}.
Danskin's theorem: if f(x, y) is convex in x, then g(x) = sup_{y∈C} f(x, y) is convex in x even if C is not a convex set.
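The elementwise-maximum rule above can be sketched numerically: g(x) = max(|x|, x^2 − 1) is a pointwise maximum of two convex functions, so the chord inequality should hold on random samples (a spot-check, not a proof):

```python
import random

random.seed(7)
g = lambda x: max(abs(x), x * x - 1)   # pointwise max of two convex functions
max_is_convex = all(
    g(t * x1 + (1 - t) * x2) <= t * g(x1) + (1 - t) * g(x2) + 1e-9
    for x1, x2, t in (
        (random.uniform(-4, 4), random.uniform(-4, 4), random.random())
        for _ in range(1000)
    )
)
```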
Composition:
If f and g are convex functions and g is non-decreasing over a univariate domain, then h(x) = g(f(x)) is convex. For example, if f is convex, then so is e^{f(x)}, because e^x is convex and monotonically increasing.
If f is concave and g is convex and non-increasing over a univariate domain, then h(x) = g(f(x)) is convex.
Convexity is invariant under affine maps: that is, if f is convex with domain D_f ⊆ ℝ^m, then so is g(x) = f(Ax + b), where A ∈ ℝ^{m×n} and b ∈ ℝ^m, with domain {x : Ax + b ∈ D_f} ⊆ ℝ^n.
Minimization: If f(x, y) is convex in (x, y), then g(x) = inf_{y∈C} f(x, y) is convex in x, provided that C is a convex set and that g(x) > −∞ for some x.
If f(x) is convex, then its perspective g(x, t) = t f(x/t), with domain {(x, t) : x/t in the domain of f, t > 0}, is convex.
Let X be a vector space. A function f : X → ℝ is convex and satisfies f(0) ≤ 0 if and only if f(ax + by) ≤ a f(x) + b f(y) for any x, y ∈ X and any non-negative real numbers a, b that satisfy a + b ≤ 1.
Strongly convex functions
The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly convex function is a function that grows at least as fast as a quadratic function.[11] A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional function f is twice continuously differentiable and the domain is the real line, then we can characterize it as follows:
f convex if and only if f''(x) ≥ 0 for all x.
f strictly convex if f''(x) > 0 for all x (note: this is sufficient, but not necessary).
f strongly convex if and only if f''(x) ≥ m > 0 for all x.
For example, let f be strictly convex, and suppose there is a sequence of points (x_n) such that f''(x_n) = 1/n. Even though f''(x_n) > 0 at every point, the function is not strongly convex because f''(x) will become arbitrarily small.
More generally, a differentiable function f is called strongly convex with parameter m > 0 if the following inequality holds for all points x, y in its domain:[12]
(∇f(x) − ∇f(y))^T (x − y) ≥ m ‖x − y‖_2^2
or, more generally,
⟨∇f(x) − ∇f(y), x − y⟩ ≥ m ‖x − y‖^2
where ⟨·, ·⟩ is any inner product, and ‖·‖ is the corresponding norm. Some authors, such as [13], refer to functions satisfying this inequality as elliptic functions.
It is not necessary for a function to be differentiable in order to be strongly convex. A third definition[14] for a strongly convex function, with parameter m, is that, for all x, y in the domain and t ∈ [0, 1],
f(t x + (1 − t) y) ≤ t f(x) + (1 − t) f(y) − (1/2) m t (1 − t) ‖x − y‖_2^2
Notice that this definition approaches the definition for strict convexity as m → 0, and is identical to the definition of a convex function when m = 0. Despite this, functions exist that are strictly convex but are not strongly convex for any m > 0 (see example below).
If the function f is twice continuously differentiable, then it is strongly convex with parameter m if and only if ∇²f(x) ⪰ m I for all x in the domain, where I is the identity and ∇²f is the Hessian matrix, and the inequality ⪰ means that ∇²f(x) − m I is positive semi-definite. This is equivalent to requiring that the minimum eigenvalue of ∇²f(x) be at least m for all x. If the domain is just the real line, then ∇²f(x) is just the second derivative f''(x), so the condition becomes f''(x) ≥ m. If m = 0, then this means the Hessian is positive semidefinite (or, if the domain is the real line, that f''(x) ≥ 0), which implies the function is convex, and perhaps strictly convex, but not strongly convex.
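As a small worked instance of the eigenvalue criterion (using NumPy's symmetric eigensolver), the function f(x, y) = x^2 + x y + y^2 has constant Hessian [[2, 1], [1, 2]] with eigenvalues 1 and 3, so it is strongly convex with parameter m = 1:

```python
import numpy as np

# Constant Hessian of f(x, y) = x^2 + x*y + y^2.
hessian = np.array([[2.0, 1.0], [1.0, 2.0]])
eigenvalues = np.linalg.eigvalsh(hessian)   # sorted ascending: [1.0, 3.0]
m = eigenvalues[0]                          # minimum eigenvalue = strong convexity parameter
strongly_convex = m > 0
```

For a non-quadratic function the Hessian varies with x, and the infimum of the minimum eigenvalue over the domain plays the role of m.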
Assuming still that the function is twice continuously differentiable, one can show that the lower bound on ∇²f implies that it is strongly convex. Using Taylor's theorem there exists
z ∈ {t x + (1 − t) y : t ∈ [0, 1]}
such that
f(y) = f(x) + ∇f(x)^T (y − x) + (1/2) (y − x)^T ∇²f(z) (y − x)
Then
(y − x)^T ∇²f(z) (y − x) ≥ m ‖y − x‖^2
by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above.
A function f is strongly convex with parameter m if and only if the function
x ↦ f(x) − (m/2) ‖x‖^2
is convex.
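This characterization can be spot-checked for f(x) = 2x^2, which is strongly convex with parameter m = 4 (its second derivative): subtracting (4/2)x^2 leaves the zero function, which is convex, while pretending m = 5 and subtracting (5/2)x^2 leaves −x^2/2, which is not (a numeric sketch with an invented helper name):

```python
import random

random.seed(3)

def convex_on_samples(f, trials=1000, tol=1e-9):
    """Spot-check the chord inequality on random pairs; can only refute convexity."""
    for _ in range(trials):
        x1, x2 = random.uniform(-5, 5), random.uniform(-5, 5)
        t = random.random()
        if f(t * x1 + (1 - t) * x2) > t * f(x1) + (1 - t) * f(x2) + tol:
            return False
    return True

g = lambda x: 2 * x * x - (4 / 2) * x * x   # m = 4: identically zero, convex
h = lambda x: 2 * x * x - (5 / 2) * x * x   # m = 5 is too large: -x^2/2, concave
g_convex = convex_on_samples(g)
h_convex = convex_on_samples(h)
```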
A twice continuously differentiable function f on a compact domain X that satisfies f''(x) > 0 for all x ∈ X is strongly convex. The proof of this statement follows from the extreme value theorem, which states that a continuous function on a compact set has a maximum and minimum.
Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets.
Properties of strongly-convex functions
If f is a strongly-convex function with parameter m, then:[15]:Prop.6.1.4
For every real number r, the level set {x | f(x) ≤ r} is compact.
A uniformly convex function,[16][17] with modulus φ, is a function f that, for all x, y in the domain and t ∈ [0, 1], satisfies
f(t x + (1 − t) y) ≤ t f(x) + (1 − t) f(y) − t (1 − t) φ(‖x − y‖)
where φ is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by taking φ(α) = (m/2) α^2 we recover the definition of strong convexity.
It is worth noting that some authors require the modulus to be an increasing function,[17] but this condition is not required by all authors.[16]
Examples
Functions of one variable
The function f(x) = x^2 has f''(x) = 2 > 0, so f is a convex function. It is also strongly convex (and hence strictly convex too), with strong convexity constant 2.
The function f(x) = x^4 has f''(x) = 12x^2 ≥ 0, so f is a convex function. It is strictly convex, even though the second derivative is not strictly positive at all points. It is not strongly convex.
The absolute value function f(x) = |x| is convex (as reflected in the triangle inequality), even though it does not have a derivative at the point x = 0. It is not strictly convex.
The function f(x) = |x|^p for p ≥ 1 is convex.
The exponential function f(x) = e^x is convex. It is also strictly convex, since f''(x) = e^x > 0, but it is not strongly convex since the second derivative can be arbitrarily close to zero. More generally, the function g(x) = e^{f(x)} is logarithmically convex if f is a convex function. The term "superconvex" is sometimes used instead.[18]
The function f with domain [0, 1] defined by f(0) = f(1) = 1 and f(x) = 0 for 0 < x < 1 is convex; it is continuous on the open interval (0, 1), but not continuous at 0 and 1.
The function x^3 has second derivative 6x; thus it is convex on the set where x ≥ 0 and concave on the set where x ≤ 0.
Every real-valued linear transformation is convex but not strictly convex, since if f is linear, then f(a + b) = f(a) + f(b). This statement also holds if we replace "convex" by "concave".
Every real-valued affine function, that is, each function of the form f(x) = a^T x + b, is simultaneously convex and concave.
Altenberg, L. (2012). "Resolvent positive linear operators exhibit the reduction phenomenon". Proceedings of the National Academy of Sciences. 109 (10): 3705–3710.
Kingman, J. F. C. (1961). "A Convexity Property of Positive Matrices". The Quarterly Journal of Mathematics. 12: 283–284. doi:10.1093/qmath/12.1.283.
References
Bertsekas, Dimitri (2003). Convex Analysis and Optimization. Athena Scientific.
Borwein, Jonathan, and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer.
Donoghue, William F. (1969). Distributions and Fourier Transforms. Academic Press.
Hiriart-Urruty, Jean-Baptiste, and Lemaréchal, Claude. (2004). Fundamentals of Convex analysis. Berlin: Springer.
Lauritzen, Niels (2013). Undergraduate Convexity. World Scientific Publishing.
Luenberger, David (1984). Linear and Nonlinear Programming. Addison-Wesley.
Luenberger, David (1969). Optimization by Vector Space Methods. Wiley & Sons.
Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
Thomson, Brian (1994). Symmetric Properties of Real Functions. CRC Press.
Zălinescu, C. (2002). Convex Analysis in General Vector Spaces. River Edge, NJ: World Scientific Publishing Co., Inc. pp. xx+367. ISBN 981-238-067-1. MR 1921556.