In mathematics, subadditivity is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots. Additive maps are special cases of subadditive functions.
A subadditive function is a function $f \colon A \to B$, having a domain A and an ordered codomain B that are both closed under addition, with the following property:
$$f(x+y) \le f(x) + f(y) \quad \text{for all } x, y \in A.$$
An example is the square root function, having the non-negative real numbers as domain and codomain: for all $x, y \ge 0$ we have:
$$\sqrt{x+y} \le \sqrt{x} + \sqrt{y}.$$
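As a quick numerical illustration (not a proof), the defining inequality can be spot-checked on random inputs; the following Python sketch, with a helper name of our own choosing, does this for the square root:

```python
# Spot-check the subadditivity inequality f(x + y) <= f(x) + f(y)
# on random non-negative pairs. Illustrative only: sampling cannot
# prove subadditivity, it can only fail to find a counterexample.
import math
import random

def is_subadditive_on_samples(f, trials=10_000, scale=100.0):
    """Return False if a random pair violates f(x + y) <= f(x) + f(y)."""
    for _ in range(trials):
        x, y = random.uniform(0, scale), random.uniform(0, scale)
        if f(x + y) > f(x) + f(y) + 1e-12:  # tolerance for rounding error
            return False
    return True

print(is_subadditive_on_samples(math.sqrt))  # True: sqrt is subadditive
```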
A sequence $(a_n)_{n \ge 1}$ is called subadditive if it satisfies the inequality $a_{n+m} \le a_n + a_m$ for all m and n. This is a special case of a subadditive function, if a sequence is interpreted as a function on the set of natural numbers.
Note that while a concave sequence is subadditive, the converse is false. For example, randomly assign $a_1, a_2, \ldots$ with values in $[\tfrac{1}{2}, 1]$; then the sequence is subadditive (since $a_{n+m} \le 1 \le a_n + a_m$) but not concave.
A useful result pertaining to subadditive sequences is the following lemma due to Michael Fekete. [1]
Fekete's Subadditive Lemma — For every subadditive sequence $(a_n)_{n=1}^{\infty}$, the limit $\lim_{n \to \infty} \frac{a_n}{n}$ exists and is equal to the infimum $\inf_n \frac{a_n}{n}$. (The limit may be $-\infty$.)
Let $L := \inf_n \frac{a_n}{n}$.
By definition, $\liminf_{n \to \infty} \frac{a_n}{n} \ge L$. So it suffices to show $\limsup_{n \to \infty} \frac{a_n}{n} \le L$.
If not, then there exists a subsequence $(a_{n_k})_k$ and an $\varepsilon > 0$ such that $\frac{a_{n_k}}{n_k} > L + \varepsilon$ for all $k$.
Since $L = \inf_n \frac{a_n}{n}$, there exists an $m$ such that $\frac{a_m}{m} < L + \varepsilon$.
By the infinite pigeonhole principle, there exists a sub-subsequence of $(a_{n_k})_k$ whose indices all belong to the same residue class modulo $m$, and so they advance by multiples of $m$. This sequence, continued for long enough, would be forced by subadditivity to dip below the line of slope $L + \varepsilon$, a contradiction.
In more detail, write $N$ for the first index of this sub-subsequence. For any later index $n$ in it, $n - N$ is a multiple of $m$, so by subadditivity we have
$$a_n \le a_N + \frac{n - N}{m}\, a_m,$$
which implies
$$\frac{a_n}{n} \le \frac{a_N}{n} + \left(1 - \frac{N}{n}\right)\frac{a_m}{m} \xrightarrow[n \to \infty]{} \frac{a_m}{m} < L + \varepsilon,$$
contradicting $\frac{a_n}{n} > L + \varepsilon$.
The analogue of Fekete's lemma holds for superadditive sequences as well, that is: $a_{n+m} \ge a_n + a_m$, in which case $\lim_{n \to \infty} \frac{a_n}{n} = \sup_n \frac{a_n}{n}$. (The limit then may be positive infinity: consider the sequence $a_n = \log n!$.)
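Both statements are easy to observe numerically. In the following sketch the two sequences are our own illustrative choices: $a_n = n + \sqrt{n}$ is subadditive and its ratios decrease to the infimum 1, while $b_n = \log n!$ is superadditive and its ratios grow without bound.

```python
# Fekete's lemma, numerically: a_n/n tends to inf_n a_n/n for a
# subadditive sequence, and to +infinity for this superadditive one.
import math

for n in [1, 10, 100, 10_000, 1_000_000]:
    a_n = n + math.sqrt(n)    # subadditive: sqrt(n+m) <= sqrt(n) + sqrt(m)
    b_n = math.lgamma(n + 1)  # log(n!), superadditive: (n+m)! >= n! * m!
    print(f"n = {n:>9}: a_n/n = {a_n/n:.6f}, log(n!)/n = {b_n/n:.2f}")
# a_n/n decreases to 1 (its infimum); log(n!)/n increases without bound.
```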
There are extensions of Fekete's lemma that do not require the inequality $a_{n+m} \le a_n + a_m$ to hold for all m and n, but only for m and n such that $\frac{1}{2} \le \frac{m}{n} \le 2$.
Continue the proof as before, until we have just used the infinite pigeonhole principle.
Consider the sequence $a_m, a_{2m}, a_{4m}, \ldots$. Since $\frac{2m}{m} = 2$, we have $a_{2m} \le a_m + a_m = 2a_m$. Similarly, we have $a_{4m} \le 2a_{2m} \le 4a_m$, etc.; splitting a multiple in half in the same way gives $a_{qm} \le q\,a_m$ for every $q \ge 1$.
By the assumption, for any $s, t$, we can use subadditivity on them if $\frac{1}{2} \le \frac{s}{t} \le 2$.
If we were dealing with continuous variables, then we could use subadditivity to go from $[m, 2m]$ to $[2m, 4m]$, then to $[4m, 8m]$, and so on, which covers the entire interval $[m, +\infty)$.
Though we don't have continuous variables, we can still cover enough integers to complete the proof. Let $n_1$ be the first index of the sub-subsequence. From any index $s$ of its residue class that has already been reached, we can use subadditivity on $s$ and any multiple $qm \in [\frac{s}{2}, 2s]$, reaching every index of the class in $[\frac{3s}{2}, 3s]$ at cost at most $a_s + q\,a_m$. It's easy to see (draw a picture) that consecutive intervals of this form touch in the middle. Thus, by repeating this process, we cover the entirety of the residue class beyond $\frac{3 n_1}{2}$.
With that, all the ratios $\frac{a_{n_k}}{n_k}$ are forced down as in the previous proof.
Moreover, the condition $a_{n+m} \le a_n + a_m$ may be weakened as follows: $a_{n+m} \le a_n + a_m + \phi(n+m)$, provided that $\phi$ is an increasing function such that the integral $\int \phi(t)\, t^{-2}\, dt$ converges (near infinity). [2] For instance, $\phi(t) = \sqrt{t}$ qualifies, since $\int^{\infty} t^{-3/2}\, dt$ converges.
There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some form of both superadditivity and subadditivity is present. [3] [4]
Besides, analogues of Fekete's lemma have been proved for subadditive real maps (with additional assumptions) from the finite subsets of an amenable group [5] [6] [7] and, further, of a cancellative left-amenable semigroup. [8]
Theorem [9] — For every measurable subadditive function $f : (0, \infty) \to \mathbb{R}$, the limit $\lim_{t \to \infty} \frac{f(t)}{t}$ exists and is equal to $\inf_{t > 0} \frac{f(t)}{t}$. (The limit may be $-\infty$.)
If $f$ is a subadditive function, and if 0 is in its domain, then $f(0) \ge 0$. To see this, take the inequality at the top with $x = y = 0$: $f(0) = f(0 + 0) \le f(0) + f(0)$. Hence $f(0) \ge 0$.
A concave function $f : [0, \infty) \to [0, \infty)$ with $f(0) \ge 0$ is also subadditive. To see this, one first observes that $f(x) \ge \frac{y}{x+y} f(0) + \frac{x}{x+y} f(x+y)$, by concavity applied to the convex combination $x = \frac{y}{x+y} \cdot 0 + \frac{x}{x+y} \cdot (x+y)$. Then looking at the sum of this bound for $f(x)$ and $f(y)$ finally verifies that $f$ is subadditive, as spelled out below. [10]
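Spelled out, the verification is the following routine computation (the first two lines are the concavity bound applied at $x$ and at $y$; summing them gives the claim):

```latex
\begin{align*}
f(x) &\ge \tfrac{y}{x+y}\, f(0) + \tfrac{x}{x+y}\, f(x+y), \\
f(y) &\ge \tfrac{x}{x+y}\, f(0) + \tfrac{y}{x+y}\, f(x+y), \\
f(x) + f(y) &\ge f(0) + f(x+y) \ge f(x+y),
\end{align*}
```

where the last inequality uses $f(0) \ge 0$.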
The negative of a subadditive function is superadditive.
Entropy plays a fundamental role in information theory and statistical physics, as well as in quantum mechanics in a generalized formulation due to von Neumann. In all of its formulations, entropy appears as a subadditive quantity: the entropy of a supersystem, or of a set union of random variables, is always less than or equal to the sum of the entropies of its individual components. Additionally, entropy in physics satisfies several stricter inequalities, such as the strong subadditivity of entropy in classical statistical mechanics and its quantum analog.
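As a minimal sketch of the discrete case (the joint distribution below is an arbitrary example of ours), one can check $H(X, Y) \le H(X) + H(Y)$ directly:

```python
# Subadditivity of Shannon entropy: H(X, Y) <= H(X) + H(Y).
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

joint = [[0.4, 0.1],   # p(x, y) on a 2x2 alphabet; rows index x,
         [0.1, 0.4]]   # columns index y (correlated on purpose)
p_x = [sum(row) for row in joint]        # marginal distribution of X
p_y = [sum(col) for col in zip(*joint)]  # marginal distribution of Y
h_joint = entropy([p for row in joint for p in row])
print(h_joint, entropy(p_x) + entropy(p_y))  # ~1.722 <= 2.0
```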
Subadditivity is an essential property of some particular cost functions. It is, generally, a necessary and sufficient condition for the verification of a natural monopoly. It implies that production by a single firm is socially less expensive (in terms of average costs) than the same total output produced by several firms.
Economies of scale are represented by subadditive average cost functions.
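A toy example (with hypothetical numbers) is a cost function with a fixed cost and a constant marginal cost: average cost falls with scale, and splitting output across firms duplicates the fixed cost.

```python
# C(q) = F + c*q is subadditive for F >= 0: one firm producing q1 + q2
# is cheaper than two firms producing q1 and q2 separately, since
# C(q1) + C(q2) - C(q1 + q2) = F.
def cost(q, fixed=100.0, marginal=2.0):
    return fixed + marginal * q

q1, q2 = 30.0, 70.0
print(cost(q1 + q2))        # 300.0: single-firm production
print(cost(q1) + cost(q2))  # 400.0: duplicated fixed cost
```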
Except in the case of complementary goods, the price of goods (as a function of quantity) must be subadditive. Otherwise, if the sum of the costs of two items were cheaper than the cost of the bundle of the two together, nobody would ever buy the bundle, effectively causing the price of the bundle to "become" the sum of the prices of the two separate items. This shows that price subadditivity is not a sufficient condition for a natural monopoly, since the unit of exchange may not be the actual cost of an item. This situation is familiar to everyone in the political arena, where some minority asserts that the loss of some particular freedom at some particular level of government means that many governments are better, whereas the majority asserts that there is some other correct unit of cost.[ citation needed ]
Subadditivity is one of the desirable properties of coherent risk measures in risk management. [11] The economic intuition behind risk measure subadditivity is that a portfolio risk exposure should, at worst, simply equal the sum of the risk exposures of the individual positions that compose the portfolio. The lack of subadditivity is one of the main critiques of VaR models which do not rely on the assumption of normality of risk factors. The Gaussian VaR ensures subadditivity: for example, the Gaussian VaR of a portfolio of two unitary long positions at the confidence level $1 - p$ is, assuming that the mean portfolio value variation is zero and the VaR is defined as a negative loss,
$$\text{VaR}_p \equiv z_p \sigma_{x+y} = z_p \sqrt{\sigma_x^2 + \sigma_y^2 + 2 \rho_{xy} \sigma_x \sigma_y},$$
where $z_p$ is the inverse of the normal cumulative distribution function at probability level $p$, $\sigma_x^2$ and $\sigma_y^2$ are the individual positions' returns variances and $\rho_{xy}$ is the linear correlation measure between the two individual positions' returns. Since variance is always positive,
$$\sqrt{\sigma_x^2 + \sigma_y^2 + 2 \rho_{xy} \sigma_x \sigma_y} \le \sqrt{\sigma_x^2 + \sigma_y^2 + 2 \sigma_x \sigma_y} = \sigma_x + \sigma_y.$$
Thus the Gaussian VaR is subadditive for any value of $\rho_{xy} \in [-1, 1]$ and, in particular, it equals the sum of the individual risk exposures when $\rho_{xy} = 1$, which is the case of no diversification effects on portfolio risk.
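The computation above is easy to reproduce; the following Python sketch (parameter values are illustrative) compares the portfolio Gaussian VaR with the sum of the individual VaRs:

```python
# Gaussian VaR of a two-position portfolio vs. the sum of individual VaRs.
# With VaR defined as a negative loss, the portfolio exposure is smaller
# in magnitude than the sum of the individual exposures.
from statistics import NormalDist

def gaussian_var(sigma, p=0.01):
    """Gaussian VaR of a zero-mean position: z_p * sigma (negative for p < 0.5)."""
    return NormalDist().inv_cdf(p) * sigma

sigma_x, sigma_y, rho = 0.02, 0.03, 0.5
sigma_xy = (sigma_x**2 + sigma_y**2 + 2 * rho * sigma_x * sigma_y) ** 0.5
print(gaussian_var(sigma_xy))                         # about -0.101
print(gaussian_var(sigma_x) + gaussian_var(sigma_y))  # about -0.116
```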
Subadditivity occurs in the thermodynamic properties of non-ideal solutions and mixtures, such as the excess molar volume and the heat of mixing (excess enthalpy).
A factorial language $L$ is one where if a word is in $L$, then all factors of that word are also in $L$. In combinatorics on words, a common problem is to determine the number $A(n)$ of words of length $n$ in a factorial language. Clearly $A(m+n) \le A(m) A(n)$, so $\log A(n)$ is subadditive, and hence Fekete's lemma can be used to estimate the growth of $A(n)$. [12]
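As a concrete sketch (the language is our own example), take the factorial language of binary words avoiding the factor 11; brute-force counting shows $\log_2 A(n) / n$ decreasing toward its infimum, which here is $\log_2$ of the golden ratio:

```python
# Fekete's lemma for a factorial language: binary words with no factor "11".
# Every length-(m+n) word splits into a length-m prefix and a length-n
# suffix that both avoid "11", so A(m+n) <= A(m)*A(n) and log A(n) is
# subadditive; hence log2(A(n))/n converges to its infimum.
import math
from itertools import product

def count_words(n):
    """Number of binary words of length n avoiding the factor '11'."""
    return sum("11" not in "".join(w) for w in product("01", repeat=n))

for n in [2, 4, 8, 16]:
    print(f"n = {n:>2}: log2(A(n))/n = {math.log2(count_words(n)) / n:.4f}")
# Ratios: 0.7925, 0.7500, 0.7227, 0.7085 -> log2((1+sqrt(5))/2) ~ 0.6942
```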
For every $n$, sample two strings of length $n$ uniformly at random on the alphabet $\{0, 1\}$. The expected length of their longest common subsequence is a super-additive function of $n$, and thus there exists a number $\gamma_2 \ge 0$ such that the expected length grows as $\sim \gamma_2 n$. By checking the case with $n = 1$, we easily have $\frac{1}{2} \le \gamma_2 \le 1$. The exact value of even $\gamma_2$, however, is only known to be between 0.788 and 0.827. [13]
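A Monte Carlo sketch (sample sizes here are arbitrary) shows the ratio approaching this range from below, as superadditivity predicts:

```python
# Estimate E[LCS(s, t)] / n for two uniformly random binary strings of
# length n. By superadditivity and Fekete's lemma the ratio increases
# toward the Chvatal-Sankoff constant gamma_2 (between 0.788 and 0.827).
import random

def lcs_length(s, t):
    """Length of the longest common subsequence, via dynamic programming."""
    prev = [0] * (len(t) + 1)
    for a in s:
        curr = [0]
        for j, b in enumerate(t, start=1):
            curr.append(prev[j - 1] + 1 if a == b else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

random.seed(0)
n, trials = 200, 50
total = sum(lcs_length(random.choices("01", k=n), random.choices("01", k=n))
            for _ in range(trials))
print(total / (trials * n))  # roughly 0.80 at this modest n
```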
See also: entire function; uncertainty principle; absolute convergence; monotone convergence theorem; Fatou's lemma; Dirichlet series; general Dirichlet series; Kullback–Leibler divergence; generalized extreme value distribution; differential entropy; Stirling polynomials; Fejér's theorem; optimal stopping; expected shortfall; generalized Pareto distribution; uncertainty theory; Sanov's theorem; generalized relative entropy; Kingman's subadditive ergodic theorem; subgaussian distribution.
This article incorporates material from subadditivity on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.