# Concave function

In mathematics, a concave function is one for which the function value at any convex combination of elements in the domain is greater than or equal to the same convex combination of the function values at those elements. Equivalently, a concave function is any function whose hypograph is convex. The class of concave functions is in a sense the opposite of the class of convex functions. A concave function is also synonymously called concave downwards, concave down, convex upwards, convex cap, or upper convex.

## Definition

A real-valued function ${\displaystyle f}$ on an interval (or, more generally, on a convex set in a vector space) is said to be concave if, for any ${\displaystyle x}$ and ${\displaystyle y}$ in the interval and for any ${\displaystyle \alpha \in [0,1]}$, [1]

${\displaystyle f((1-\alpha )x+\alpha y)\geq (1-\alpha )f(x)+\alpha f(y)}$

A function is called strictly concave if

${\displaystyle f((1-\alpha )x+\alpha y)>(1-\alpha )f(x)+\alpha f(y)\,}$

for any ${\displaystyle \alpha \in (0,1)}$ and ${\displaystyle x\neq y}$.

For a function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$, this second definition merely states that for every ${\displaystyle z}$ strictly between ${\displaystyle x}$ and ${\displaystyle y}$, the point ${\displaystyle (z,f(z))}$ on the graph of ${\displaystyle f}$ lies strictly above the straight line joining the points ${\displaystyle (x,f(x))}$ and ${\displaystyle (y,f(y))}$.

A function ${\displaystyle f}$ is quasiconcave if the upper contour sets of the function ${\displaystyle S(a)=\{x:f(x)\geq a\}}$ are convex sets. [2]
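The defining inequality is easy to spot-check numerically. The following is a minimal Python sketch (the function name, sampling scheme, and tolerance are illustrative choices, not from the article) that samples random points and random weights ${\displaystyle \alpha }$ and tests the inequality; a single failed check refutes concavity, while passing checks merely fail to refute it.

```python
import math
import random

random.seed(0)  # deterministic sampling for reproducibility

def is_concave_on_samples(f, lo, hi, trials=1000):
    """Spot-check f((1-a)x + a*y) >= (1-a)f(x) + a*f(y) at random points.

    A single failure disproves concavity on [lo, hi]; passing all
    trials is only evidence, not a proof.
    """
    for _ in range(trials):
        x = random.uniform(lo, hi)
        y = random.uniform(lo, hi)
        a = random.random()
        lhs = f((1 - a) * x + a * y)
        rhs = (1 - a) * f(x) + a * f(y)
        if lhs < rhs - 1e-12:  # small tolerance for floating-point noise
            return False
    return True

print(is_concave_on_samples(math.sqrt, 0.0, 10.0))         # True: sqrt is concave
print(is_concave_on_samples(lambda t: t ** 3, -2.0, 2.0))  # False: t^3 is not
```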

## Properties

### Functions of a single variable

1. A differentiable function f is (strictly) concave on an interval if and only if its derivative f′ is (strictly) monotonically decreasing on that interval; that is, a concave function has a non-increasing (respectively, decreasing) slope. [3] [4]
2. Points where concavity changes (between concave and convex) are inflection points. [5]
3. If f is twice-differentiable, then f is concave if and only if f″ is non-positive (or, informally, if the "acceleration" is non-positive). If f″ is negative then f is strictly concave, but the converse is not true, as shown by f(x) = −x⁴.
4. If f is concave and differentiable, then it is bounded above by its first-order Taylor approximation: [2] ${\displaystyle f(y)\leq f(x)+f'(x)[y-x]}$
5. A Lebesgue measurable function on an interval C is concave if and only if it is midpoint concave, that is, for any x and y in C: ${\displaystyle f\left({\frac {x+y}{2}}\right)\geq {\frac {f(x)+f(y)}{2}}}$
6. If a function f is concave, and f(0) ≥ 0, then f is subadditive on ${\displaystyle [0,\infty )}$. Proof:
• Since f is concave and 1 ≥ t ≥ 0, letting y = 0 we have ${\displaystyle f(tx)=f(tx+(1-t)\cdot 0)\geq tf(x)+(1-t)f(0)\geq tf(x).}$
• For ${\displaystyle a,b\in [0,\infty )}$: ${\displaystyle f(a)+f(b)=f\left((a+b){\frac {a}{a+b}}\right)+f\left((a+b){\frac {b}{a+b}}\right)\geq {\frac {a}{a+b}}f(a+b)+{\frac {b}{a+b}}f(a+b)=f(a+b)}$
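Properties 4 and 6 can likewise be checked on concrete functions. The sketch below (the point grid and tolerance are illustrative choices) verifies the first-order bound for f = log, whose derivative is 1/x, and subadditivity for f = sqrt, which is concave with f(0) = 0.

```python
import math

# Property 4: f(y) <= f(x) + f'(x)(y - x) for f = log, with f'(x) = 1/x.
def tangent_bound_holds(x, y):
    return math.log(y) <= math.log(x) + (1.0 / x) * (y - x) + 1e-12

# Property 6: f(a + b) <= f(a) + f(b) for f = sqrt (concave, f(0) = 0).
def subadditive_holds(a, b):
    return math.sqrt(a + b) <= math.sqrt(a) + math.sqrt(b) + 1e-12

pts = [0.5, 1.0, 2.0, 5.0, 10.0]
print(all(tangent_bound_holds(x, y) for x in pts for y in pts))  # True
print(all(subadditive_holds(a, b) for a in pts for b in pts))    # True
```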

### Functions of n variables

1. A function f is concave over a convex set if and only if the function −f is a convex function over the set.
2. The sum of two concave functions is itself concave, and so is the pointwise minimum of two concave functions; i.e., the set of concave functions on a given domain forms a semifield.
3. Near a strict local maximum in the interior of the domain of a function, the function must be concave; as a partial converse, if the derivative of a strictly concave function is zero at some point, then that point is a local maximum.
4. Any local maximum of a concave function is also a global maximum. A strictly concave function will have at most one global maximum.
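The closure under pointwise minimum in property 2 can be illustrated numerically. In this sketch (the sample functions, interval, and trial count are arbitrary illustrative choices) the minimum of two concave functions is tested against the defining inequality at random points.

```python
import random

random.seed(1)  # reproducible sampling

def concave_violations(f, lo, hi, trials=2000):
    """Count sampled violations of the concavity inequality for f on [lo, hi]."""
    bad = 0
    for _ in range(trials):
        x = random.uniform(lo, hi)
        y = random.uniform(lo, hi)
        a = random.random()
        if f((1 - a) * x + a * y) < (1 - a) * f(x) + a * f(y) - 1e-12:
            bad += 1
    return bad

f = lambda t: -t * t           # concave (negative of the convex t^2)
g = lambda t: -abs(t - 1.0)    # concave (negative of the convex |t - 1|)
h = lambda t: min(f(t), g(t))  # pointwise minimum of two concave functions
print(concave_violations(h, -3.0, 3.0))  # 0: the minimum is again concave
```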

## Examples

• The functions ${\displaystyle f(x)=-x^{2}}$ and ${\displaystyle g(x)={\sqrt {x}}}$ are concave on their domains, as their second derivatives ${\displaystyle f''(x)=-2}$ and ${\textstyle g''(x)=-{\frac {1}{4x^{3/2}}}}$ are always negative.
• The logarithm function ${\displaystyle f(x)=\log {x}}$ is concave on its domain ${\displaystyle (0,\infty )}$, as its derivative ${\displaystyle {\frac {1}{x}}}$ is a strictly decreasing function.
• Any affine function ${\displaystyle f(x)=ax+b}$ is both concave and convex, but neither strictly concave nor strictly convex.
• The sine function is concave on the interval ${\displaystyle [0,\pi ]}$.
• The function ${\displaystyle f(B)=\log |B|}$, where ${\displaystyle |B|}$ is the determinant of a positive-definite matrix B, is concave. [6]


## References

1. Lenhart, S.; Workman, J. T. (2007). Optimal Control Applied to Biological Models. Mathematical and Computational Biology Series. Chapman & Hall/CRC. ISBN 978-1-58488-640-2.
2. Varian, Hal R. (1992). Microeconomic Analysis (3rd ed.). New York: Norton. p. 489. ISBN 0-393-95735-7. OCLC 24847759.
3. Rudin, Walter (1976). Analysis. p. 101.
4. Gradshteyn, I. S.; Ryzhik, I. M.; Hays, D. F. (1976-07-01). "Table of Integrals, Series, and Products". Journal of Lubrication Technology. 98 (3): 479. ISSN 0022-2305.
5. Hass, Joel (13 March 2017). Thomas' Calculus. Heil, Christopher; Weir, Maurice D.; Thomas, George B. Jr. (14th ed.). p. 203. ISBN 978-0-13-443898-6. OCLC 965446428.
6. Cover, Thomas M.; Thomas, J. A. (1988). "Determinant inequalities via information theory". SIAM Journal on Matrix Analysis and Applications. 9 (3): 384–392. doi:10.1137/0609033. S2CID 5491763.
7. Pemberton, Malcolm; Rau, Nicholas (2015). Mathematics for Economists: An Introductory Textbook. Oxford University Press. pp. 363–364. ISBN 978-1-78499-148-7.
8. Callen, Herbert B. (1985). "8.1: Intrinsic Stability of Thermodynamic Systems". Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: Wiley. pp. 203–206. ISBN 978-0-471-86256-7.