In mathematics, a **concave function** is one for which the function value at any convex combination of elements in the domain is greater than or equal to the same convex combination of the corresponding function values. Equivalently, a concave function is any function whose hypograph is convex. The class of concave functions is in a sense the opposite of the class of convex functions. A concave function is also synonymously called **concave downwards**, **concave down**, **convex upwards**, **convex cap**, or **upper convex**.

A real-valued function *f* on an interval (or, more generally, a convex set in a vector space) is said to be *concave* if, for any *x* and *y* in the interval and for any *t* ∈ [0, 1],^{ [1] }

*f*((1 − *t*)*x* + *ty*) ≥ (1 − *t*)*f*(*x*) + *tf*(*y*).

A function is called *strictly concave* if

*f*((1 − *t*)*x* + *ty*) > (1 − *t*)*f*(*x*) + *tf*(*y*)

for any *t* ∈ (0, 1) and *x* ≠ *y*.

For a function *f* : ℝ → ℝ, this second definition merely states that for every *z* strictly between *x* and *y*, the point (*z*, *f*(*z*)) on the graph of *f* is above the straight line joining the points (*x*, *f*(*x*)) and (*y*, *f*(*y*)).
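The defining inequality can be checked numerically on a grid of sample points. A minimal sketch (the test functions, grid, and tolerance are illustrative choices, not part of the definition):

```python
# Numerical check of the concavity inequality:
# f((1-t)x + t*y) >= (1-t)*f(x) + t*f(y) for all sampled x, y and t in [0, 1].

def is_concave_on_samples(f, xs, ts):
    """Return True if the concavity inequality holds on all sampled pairs."""
    for x in xs:
        for y in xs:
            for t in ts:
                lhs = f((1 - t) * x + t * y)
                rhs = (1 - t) * f(x) + t * f(y)
                if lhs < rhs - 1e-12:  # small tolerance for rounding
                    return False
    return True

xs = [i / 10 for i in range(-20, 21)]  # sample points in [-2, 2]
ts = [i / 10 for i in range(11)]       # t in [0, 1]

print(is_concave_on_samples(lambda x: -x * x, xs, ts))  # concave: True
print(is_concave_on_samples(lambda x: x * x, xs, ts))   # convex, not concave: False
```

A grid check of this kind can only refute concavity, never prove it, but it makes the chord-below-graph picture concrete.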

A function is quasiconcave if the upper contour sets of the function are convex sets.^{ [2] }

- A differentiable function f is (strictly) concave on an interval if and only if its derivative function f ′ is (strictly) monotonically decreasing on that interval; that is, a concave function has a non-increasing (decreasing) slope.^{ [3] }^{ [4] }
- Points where concavity changes (between concave and convex) are inflection points.^{ [5] }
- If f is twice-differentiable, then f is concave if and only if f ′′ is non-positive (or, informally, if the "acceleration" is non-positive). If f ′′ is negative then f is strictly concave, but the converse is not true, as shown by *f*(*x*) = −*x*^{4}.
- If f is concave and differentiable, then it is bounded above by its first-order Taylor approximation: *f*(*y*) ≤ *f*(*x*) + *f* ′(*x*)(*y* − *x*).^{ [2] }
- A Lebesgue measurable function on an interval **C** is concave if and only if it is midpoint concave, that is, for any x and y in **C**, *f*((*x* + *y*)/2) ≥ (*f*(*x*) + *f*(*y*))/2.
- If a function f is concave and *f*(0) ≥ 0, then f is subadditive on [0, ∞). Proof:
  - Since f is concave and 1 ≥ t ≥ 0, letting *y* = 0 we have *f*(*tx*) = *f*(*tx* + (1 − *t*)·0) ≥ *t f*(*x*) + (1 − *t*)*f*(0) ≥ *t f*(*x*).
  - For *a*, *b* ∈ [0, ∞): *f*(*a*) + *f*(*b*) = *f*((*a* + *b*)·*a*/(*a* + *b*)) + *f*((*a* + *b*)·*b*/(*a* + *b*)) ≥ (*a*/(*a* + *b*))·*f*(*a* + *b*) + (*b*/(*a* + *b*))·*f*(*a* + *b*) = *f*(*a* + *b*).
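The first-order Taylor bound *f*(*y*) ≤ *f*(*x*) + *f* ′(*x*)(*y* − *x*) for a concave differentiable function can be verified numerically for a concrete case such as the logarithm; a small sketch (the grid is an illustrative choice):

```python
import math

# Check the first-order Taylor bound for a concave function:
# f(y) <= f(x) + f'(x) * (y - x).  Here f = log, concave on (0, inf),
# with its derivative 1/x supplied in closed form.

def taylor_bound_holds(f, fprime, xs):
    for x in xs:
        for y in xs:
            if f(y) > f(x) + fprime(x) * (y - x) + 1e-12:
                return False
    return True

xs = [0.1 * k for k in range(1, 51)]  # sample points in (0, 5]
print(taylor_bound_holds(math.log, lambda x: 1.0 / x, xs))  # True
```

Geometrically this says every tangent line of the logarithm lies above its graph.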

- A function f is concave over a convex set if and only if the function −f is a convex function over the set.
- The sum of two concave functions is itself concave, and so is the pointwise minimum of two concave functions; i.e., the set of concave functions on a given domain forms a semifield.
- Near a strict local maximum in the interior of the domain of a function, the function must be concave; as a partial converse, if the derivative of a strictly concave function is zero at some point, then that point is a local maximum.
- Any local maximum of a concave function is also a global maximum. A *strictly* concave function will have at most one global maximum.

- The functions *f*(*x*) = −*x*^{2} and *g*(*x*) = √*x* are concave on their domains, as their second derivatives *f* ′′(*x*) = −2 and *g* ′′(*x*) = −1/(4*x*^{3/2}) are always negative.
- The logarithm function *f*(*x*) = log *x* is concave on its domain (0, ∞), as its derivative 1/*x* is a strictly decreasing function.
- Any affine function *f*(*x*) = *ax* + *b* is both concave and convex, but neither strictly concave nor strictly convex.
- The sine function is concave on the interval [0, π].
- The function *f*(*B*) = log |*B*|, where |*B*| is the determinant of a nonnegative-definite matrix *B*, is concave.^{ [6] }
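The second-derivative criterion used in these examples can be checked with central finite differences; a quick sketch (step size and sample points are illustrative choices):

```python
import math

# Central finite-difference estimate of f'' at a point, used to confirm that
# the second derivative is negative for the examples above:
# -x^2 on R, sqrt and log on (0, inf), and sin on (0, pi).

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

assert all(second_derivative(lambda x: -x * x, x) < 0 for x in [-1.0, 0.0, 1.0])
assert all(second_derivative(math.sqrt, x) < 0 for x in [0.5, 1.0, 2.0])
assert all(second_derivative(math.log, x) < 0 for x in [0.5, 1.0, 2.0])
assert all(second_derivative(math.sin, x) < 0 for x in [0.5, 1.5, 3.0])
print("all sampled second derivatives are negative")
```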

- Rays bending in the computation of radiowave attenuation in the atmosphere involve concave functions.
- In expected utility theory for choice under uncertainty, cardinal utility functions of risk averse decision makers are concave.
- In microeconomic theory, production functions are usually assumed to be concave over some or all of their domains, resulting in diminishing returns to input factors.^{ [7] }
- In thermodynamics and information theory, entropy is a concave function. In the case of thermodynamic entropy, without phase transition, entropy as a function of extensive variables is strictly concave. If the system can undergo a phase transition, and is allowed to split into two subsystems of different phase (phase separation, e.g. boiling), the entropy-maximal parameters of the subsystems will yield a combined entropy lying precisely on the straight line between the two phases. This means that the "effective entropy" of a system with phase transition is the convex envelope of the entropy without phase separation; therefore the entropy of a system including phase separation will be non-strictly concave.^{ [8] }
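The concavity of Shannon entropy can be verified directly on mixtures of distributions; a small sketch (the two distributions and mixing weights are illustrative choices):

```python
import math

# Shannon entropy H(p) = -sum_i p_i log p_i is concave in p: for distributions
# p, q and t in [0, 1], H(t*p + (1-t)*q) >= t*H(p) + (1-t)*H(q).

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.7, 0.2, 0.1]
q = [0.1, 0.3, 0.6]
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    mix = [t * a + (1 - t) * b for a, b in zip(p, q)]
    assert entropy(mix) >= t * entropy(p) + (1 - t) * entropy(q) - 1e-12
print("entropy of every sampled mixture dominates the mixed entropies")
```

This is the same concavity that, for thermodynamic entropy, underlies the convex-envelope construction across a phase transition.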

In probability theory and statistics, the **exponential distribution** or **negative exponential distribution** is the probability distribution of the distance between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate; the distance parameter could be any meaningful mono-dimensional measure of the process, such as time between production errors, or length along a roll of fabric in the weaving manufacturing process. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes it is found in various other contexts.
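The memoryless property mentioned above, P(X > s + t | X > s) = P(X > t), follows directly from the exponential survival function; a quick sketch (the rate λ = 1.5 and the points s, t are illustrative choices):

```python
import math

# Memorylessness of the exponential distribution, checked via the survival
# function S(x) = P(X > x) = exp(-lam * x).
lam = 1.5
S = lambda x: math.exp(-lam * x)

s, t = 0.8, 2.0
conditional = S(s + t) / S(s)   # P(X > s + t | X > s)
print(abs(conditional - S(t)) < 1e-12)  # True: memoryless
```

The algebra is immediate: exp(−λ(s + t)) / exp(−λs) = exp(−λt) for every s, t ≥ 0.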

A **sigmoid function** is any mathematical function whose graph has a characteristic S-shaped or **sigmoid curve**.

In probability theory and statistics, the **beta distribution** is a family of continuous probability distributions defined on the interval [0, 1] or (0, 1) in terms of two positive parameters, denoted by *alpha* (*α*) and *beta* (*β*), that appear as exponents of the variable and its complement to 1, respectively, and control the shape of the distribution.

In mathematics, a real-valued function is called **convex** if the line segment between any two distinct points on the graph of the function lies above the graph between the two points. Equivalently, a function is convex if its *epigraph* is a convex set. In simple terms, a convex function's graph is shaped like a cup ∪, while a concave function's graph is shaped like a cap ∩.

In mathematics, **subadditivity** is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots. Additive maps are special cases of subadditive functions.
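As a concrete instance, the square root is concave with √0 = 0 and hence subadditive by the property proved earlier: √(a + b) ≤ √a + √b. A quick numerical sketch (the grid is an illustrative choice):

```python
import math

# Subadditivity of sqrt on [0, inf): sqrt(a + b) <= sqrt(a) + sqrt(b),
# checked on a small illustrative grid with a rounding tolerance.
pts = [0.0, 0.5, 1.0, 2.0, 10.0]
ok = all(math.sqrt(a + b) <= math.sqrt(a) + math.sqrt(b) + 1e-12
         for a in pts for b in pts)
print(ok)  # True
```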

In quantum mechanics, information theory, and Fourier analysis, the **entropic uncertainty** or **Hirschman uncertainty** is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is *stronger* than the usual statement of the uncertainty principle in terms of the product of standard deviations.

In mathematics, a function *f* is **logarithmically convex** or **superconvex** if log ∘ *f*, the composition of the logarithm with *f*, is itself a convex function.

In convex analysis, a non-negative function *f* : **R**^{n} → **R**_{+} is **logarithmically concave** if its domain is a convex set, and if it satisfies the inequality

*f*(*θx* + (1 − *θ*)*y*) ≥ *f*(*x*)^{*θ*} *f*(*y*)^{1 − *θ*}

for all *x*, *y* ∈ dom *f* and 0 < *θ* < 1.
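The log-concavity inequality *f*(*θx* + (1 − *θ*)*y*) ≥ *f*(*x*)^{*θ*} *f*(*y*)^{1 − *θ*} can be checked for a standard example, the Gaussian density (which is log-concave because its logarithm is a concave quadratic); the sample points and weights below are illustrative:

```python
import math

# Log-concavity check for the standard Gaussian density
# f(x) = exp(-x^2/2) / sqrt(2*pi):
# f(theta*x + (1-theta)*y) >= f(x)**theta * f(y)**(1-theta).
f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

xs = [-2.0, -0.5, 0.0, 1.0, 3.0]
thetas = [0.1, 0.5, 0.9]
ok = all(f(th * x + (1 - th) * y) >= f(x) ** th * f(y) ** (1 - th) - 1e-15
         for x in xs for y in xs for th in thetas)
print(ok)  # True
```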

In information theory, the **Rényi entropy** is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, **collision entropy**, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events. In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of **generalized dimensions**.

In statistics and information theory, a **maximum entropy probability distribution** has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class, then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.

**Convex analysis** is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

In mathematics, **subharmonic** and **superharmonic** functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.

In mathematics, a **Schur-convex function**, also known as **S-convex**, **isotonic function** and **order-preserving function**, is a function *f* : ℝ^{*d*} → ℝ such that for all *x*, *y* ∈ ℝ^{*d*} where *x* is majorized by *y*, one has *f*(*x*) ≤ *f*(*y*). Named after Issai Schur, Schur-convex functions are used in the study of majorization.

In mathematics, a real or complex-valued function *f* on *d*-dimensional Euclidean space satisfies a **Hölder condition**, or is **Hölder continuous**, when there are real constants *C* ≥ 0, *α* > 0, such that |*f*(*x*) − *f*(*y*)| ≤ *C*‖*x* − *y*‖^{*α*} for all x and y in the domain of *f*. More generally, the condition can be formulated for functions between any two metric spaces. The number *α* is called the *exponent* of the Hölder condition. A function on an interval satisfying the condition with *α* > 1 is constant. If *α* = 1, then the function satisfies a Lipschitz condition. For any *α* > 0, the condition implies the function is uniformly continuous. The condition is named after Otto Hölder.
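As an illustration, √x is Hölder continuous on [0, 1] with exponent α = 1/2 and constant C = 1, i.e. |√x − √y| ≤ |x − y|^{1/2}; a grid check (the grid is an illustrative choice):

```python
import math

# Check the Holder condition |sqrt(x) - sqrt(y)| <= |x - y|**0.5
# (alpha = 1/2, C = 1) on a grid over [0, 1], with a rounding tolerance.
pts = [k / 20 for k in range(21)]
ok = all(abs(math.sqrt(x) - math.sqrt(y)) <= abs(x - y) ** 0.5 + 1e-12
         for x in pts for y in pts)
print(ok)  # True
```

Note that √x is not Lipschitz near 0 (its derivative blows up), so α = 1/2 is the best exponent at that endpoint.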

In mathematics, a **quasiconvex function** is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, *a*) is a convex set. For a function of a single variable, along any stretch of the curve the highest point is one of the endpoints. The negative of a quasiconvex function is said to be **quasiconcave**.

In probability theory, an ***f*-divergence** is a certain type of function that measures the difference between two probability distributions *P* and *Q*. Many common divergences, such as KL-divergence, Hellinger distance, and total variation distance, are special cases of *f*-divergence.

In the mathematical field of analysis, a well-known theorem describes the set of discontinuities of a monotone real-valued function of a real variable; all discontinuities of such a (monotone) function are necessarily jump discontinuities and there are at most countably many of them.

In mathematics, there are many kinds of inequalities involving matrices and linear operators on Hilbert spaces. This article covers some important operator inequalities connected with traces of matrices.

**K-convex functions**, first introduced by Scarf, are a special weakening of the concept of convex function which is crucial in the proof of the optimality of the (*s*, *S*) policy in inventory control theory. The policy is characterized by two numbers *s* and *S*, *s* ≤ *S*, such that when the inventory level falls below level *s*, an order is issued for a quantity that brings the inventory up to level *S*, and nothing is ordered otherwise. Gallego and Sethi have generalized the concept of *K*-convexity.
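The (s, S) ordering rule described above fits in a few lines; a minimal sketch (the threshold s = 5 and order-up-to level S = 20 are illustrative choices):

```python
# (s, S) inventory policy: when the inventory level falls below s,
# order up to level S; otherwise order nothing.

def order_quantity(inventory, s, S):
    """Quantity to order under an (s, S) policy."""
    return S - inventory if inventory < s else 0

print(order_quantity(3, s=5, S=20))   # below s: order 17, bringing stock to S
print(order_quantity(8, s=5, S=20))   # at or above s: order 0
```

K-convexity of the cost-to-go function is what guarantees that a rule of exactly this threshold form is optimal.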


- ↑ Lenhart, S.; Workman, J. T. (2007). *Optimal Control Applied to Biological Models*. Mathematical and Computational Biology Series. Chapman & Hall/CRC. ISBN 978-1-58488-640-2.
- ↑ Varian, Hal R. (1992). *Microeconomic Analysis* (3rd ed.). New York: Norton. p. 489. ISBN 0-393-95735-7. OCLC 24847759.
- ↑ Rudin, Walter (1976). *Principles of Mathematical Analysis*. p. 101.
- ↑ Gradshteyn, I. S.; Ryzhik, I. M.; Hays, D. F. (1976). "Table of Integrals, Series, and Products". *Journal of Lubrication Technology*. **98**(3): 479. doi:10.1115/1.3452897. ISSN 0022-2305.
- ↑ Hass, Joel; Heil, Christopher; Weir, Maurice D. (2017). *Thomas' Calculus* (14th ed.). p. 203. ISBN 978-0-13-443898-6. OCLC 965446428.
- ↑ Cover, Thomas M.; Thomas, J. A. (1988). "Determinant inequalities via information theory". *SIAM Journal on Matrix Analysis and Applications*. **9**(3): 384–392. doi:10.1137/0609033. S2CID 5491763.
- ↑ Pemberton, Malcolm; Rau, Nicholas (2015). *Mathematics for Economists: An Introductory Textbook*. Oxford University Press. pp. 363–364. ISBN 978-1-78499-148-7.
- ↑ Callen, Herbert B. (1985). "8.1: Intrinsic Stability of Thermodynamic Systems". *Thermodynamics and an Introduction to Thermostatistics* (2nd ed.). New York: Wiley. pp. 203–206. ISBN 978-0-471-86256-7.


This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
