Generalized mean

In mathematics, generalized means (also called power means or Hölder means, after Otto Hölder) [1] are a family of functions for aggregating sets of numbers. They include as special cases the Pythagorean means (arithmetic, geometric, and harmonic means).



If p is a nonzero real number, and x1, ..., xn are positive real numbers, then the generalized mean or power mean with exponent p of these positive real numbers is: [2]

$$ M_p(x_1,\dots,x_n) = \left( \frac{1}{n} \sum_{i=1}^n x_i^p \right)^{1/p} $$

(See p-norm.) For p = 0 we set it equal to the geometric mean (which is the limit of means with exponents approaching zero, as proved below):

$$ M_0(x_1,\dots,x_n) = \left( \prod_{i=1}^n x_i \right)^{1/n} $$

Furthermore, for a sequence of positive weights wi with sum Σ wi = 1 we define the weighted power mean as:

$$ M_p(x_1,\dots,x_n) = \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p}, \qquad M_0(x_1,\dots,x_n) = \prod_{i=1}^n x_i^{w_i} $$

The unweighted means correspond to setting all wi = 1/n.
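For concreteness, the definitions above can be sketched as follows. The `power_mean` helper is a hypothetical name introduced here for illustration; the p = 0 and p = ±∞ cases (discussed below) are handled as separate branches:

```python
import math

def power_mean(xs, p, ws=None):
    """Weighted power mean M_p of positive numbers xs.

    ws defaults to the uniform weights 1/n (the unweighted case).
    Illustrative helper: p = 0 is treated as the geometric mean,
    and p = +/-inf as the maximum/minimum, matching the limits.
    """
    n = len(xs)
    if ws is None:
        ws = [1.0 / n] * n
    if p == 0:
        # geometric mean: product of x_i ** w_i
        return math.exp(sum(w * math.log(x) for w, x in zip(ws, xs)))
    if p == math.inf:
        return max(xs)
    if p == -math.inf:
        return min(xs)
    return sum(w * x ** p for w, x in zip(ws, xs)) ** (1.0 / p)
```

For example, `power_mean([1, 4, 4], -1)` gives the harmonic mean 2.0, while `power_mean([1, 4, 4], 1)` gives the arithmetic mean 3.0.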

Special cases

A visual depiction of some of the specified cases for n = 2 with a = x1 = M+∞ and b = x2 = M−∞:
  harmonic mean, H = M−1(a, b),
  geometric mean, G = M0(a, b)
  arithmetic mean, A = M1(a, b)
  quadratic mean, Q = M2(a, b)

A few particular values of p yield special cases with their own names: [3]

minimum (limit p → −∞):
$$ M_{-\infty}(x_1,\dots,x_n) = \min(x_1,\dots,x_n) $$
harmonic mean (p = −1):
$$ M_{-1}(x_1,\dots,x_n) = \frac{n}{\tfrac{1}{x_1} + \dots + \tfrac{1}{x_n}} $$
geometric mean (limit p → 0):
$$ M_0(x_1,\dots,x_n) = \sqrt[n]{x_1 x_2 \cdots x_n} $$
arithmetic mean (p = 1):
$$ M_1(x_1,\dots,x_n) = \frac{x_1 + \dots + x_n}{n} $$
root mean square or quadratic mean (p = 2): [4] [5]
$$ M_2(x_1,\dots,x_n) = \sqrt{\frac{x_1^2 + \dots + x_n^2}{n}} $$
cubic mean (p = 3):
$$ M_3(x_1,\dots,x_n) = \sqrt[3]{\frac{x_1^3 + \dots + x_n^3}{n}} $$
maximum (limit p → +∞):
$$ M_{+\infty}(x_1,\dots,x_n) = \max(x_1,\dots,x_n) $$

Proof of the limit p → 0 (geometric mean)

We can rewrite the definition of Mp using the exponential function:

$$ M_p(x_1,\dots,x_n) = \exp\left( \frac{\ln\left( \sum_{i=1}^n w_i x_i^p \right)}{p} \right) $$

In the limit p → 0, we can apply L'Hôpital's rule to the argument of the exponential function. Differentiating the numerator and denominator with respect to p, and using Σ wi = 1, we have

$$ \lim_{p \to 0} \frac{\ln\left( \sum_{i=1}^n w_i x_i^p \right)}{p} = \lim_{p \to 0} \frac{\sum_{i=1}^n w_i x_i^p \ln x_i}{\sum_{i=1}^n w_i x_i^p} = \sum_{i=1}^n w_i \ln x_i $$

By the continuity of the exponential function, we can substitute back into the above relation to obtain

$$ \lim_{p \to 0} M_p(x_1,\dots,x_n) = \exp\left( \sum_{i=1}^n w_i \ln x_i \right) = \prod_{i=1}^n x_i^{w_i} = M_0(x_1,\dots,x_n) $$

as desired. [2]
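As an informal numerical sanity check of this limit (not part of the proof), one can watch the unweighted power mean approach the geometric mean as p shrinks:

```python
import math

# Illustrative check of the limit p -> 0: the unweighted power mean
# of a few positive numbers should approach their geometric mean.
xs = [2.0, 3.0, 7.0]
n = len(xs)

def m(p):
    # unweighted power mean with exponent p (p != 0)
    return (sum(x ** p for x in xs) / n) ** (1.0 / p)

geometric = math.prod(xs) ** (1.0 / n)

# The gap |M_p - M_0| shrinks as p approaches 0.
gaps = [abs(m(p) - geometric) for p in (1e-1, 1e-2, 1e-3)]
```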

Proof of the limits p → ±∞ (maximum and minimum)

Assume (possibly after relabeling and combining terms together) that x1 ≥ x2 ≥ ... ≥ xn. Then

$$ w_1^{1/p} x_1 \le \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p} \le x_1 . $$

Since w1^(1/p) → 1 as p → ∞, both bounds tend to x1, so

$$ \lim_{p \to \infty} M_p(x_1,\dots,x_n) = x_1 = \max(x_1,\dots,x_n). $$

The formula for M−∞ follows from

$$ M_{-\infty}(x_1,\dots,x_n) = \frac{1}{M_{+\infty}(1/x_1,\dots,1/x_n)} = x_n = \min(x_1,\dots,x_n). $$

Properties

Let x1, ..., xn be a sequence of positive real numbers; then the following properties hold: [1]

  1. min(x1, ..., xn) ≤ Mp(x1, ..., xn) ≤ max(x1, ..., xn).
    Each generalized mean always lies between the smallest and largest of the x values.
  2. Mp(x1, ..., xn) = Mp(P(x1, ..., xn)), where P is a permutation operator.
    Each generalized mean is a symmetric function of its arguments; permuting the arguments of a generalized mean does not change its value.
  3. Mp(b·x1, ..., b·xn) = b·Mp(x1, ..., xn).
    Like most means, the generalized mean is a homogeneous function of its arguments x1, ..., xn. That is, if b is a positive real number, then the generalized mean with exponent p of the numbers b·x1, ..., b·xn is equal to b times the generalized mean of the numbers x1, ..., xn.
  4. Mp(x1, ..., xn·k) = Mp(Mp(x1, ..., xk), Mp(xk+1, ..., x2·k), ..., Mp(x(n−1)·k+1, ..., xn·k)).
    Like the quasi-arithmetic means, the computation of the mean can be split into computations of equal-sized sub-blocks. This enables use of a divide-and-conquer algorithm to calculate the means, when desirable.
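Property 4 is what makes the divide-and-conquer evaluation work. A small sketch checking it on six numbers split into two blocks of three (the `power_mean` helper is an illustrative name, not from the original):

```python
def power_mean(xs, p):
    """Unweighted power mean with nonzero exponent p (illustrative)."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
p = 3.0

# Mean of the whole list...
whole = power_mean(xs, p)
# ...equals the mean of the means of two equal-sized sub-blocks.
blocks = power_mean([power_mean(xs[:3], p), power_mean(xs[3:], p)], p)
```

The two values agree (up to floating-point rounding), because averaging the p-th powers commutes with splitting the sum into equal-sized blocks.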

Generalized mean inequality

Geometric proof without words that max(a, b) > root mean square (RMS) or quadratic mean (QM) > arithmetic mean (AM) > geometric mean (GM) > harmonic mean (HM) > min(a, b) of two positive numbers a and b [6]

In general, if p < q, then

$$ M_p(x_1,\dots,x_n) \le M_q(x_1,\dots,x_n) $$

and the two means are equal if and only if x1 = x2 = ... = xn.

The inequality holds for all real values of p and q, and also when p or q is positive or negative infinity.

It follows from the fact that, for all real p,

$$ \frac{\partial M_p(x_1,\dots,x_n)}{\partial p} \ge 0, $$

which can be proved using Jensen's inequality.

In particular, for p in {−1, 0, 1} , the generalized mean inequality implies the Pythagorean means inequality as well as the inequality of arithmetic and geometric means.
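The monotonicity of Mp in the exponent p can be illustrated numerically (an informal check, not a proof; the `power_mean` helper is introduced here for illustration):

```python
import math

def power_mean(xs, p):
    """Unweighted power mean; p = 0 handled as the geometric mean."""
    if p == 0:
        return math.prod(xs) ** (1.0 / len(xs))
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

xs = [1.0, 4.0, 4.0]

# HM <= GM <= AM <= QM <= cubic mean, i.e. M_p is non-decreasing in p.
means = [power_mean(xs, p) for p in (-1, 0, 1, 2, 3)]
assert all(a <= b for a, b in zip(means, means[1:]))
```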

Proof of power means inequality

We will prove the weighted power means inequality. For the purpose of the proof we will assume the following without loss of generality:

$$ w_i \in [0, 1], \qquad \sum_{i=1}^n w_i = 1. $$

The proof for unweighted power means is easily obtained by substituting wi = 1/n.

Equivalence of inequalities between means of opposite signs

Suppose an inequality between power means with exponents p and q holds:

$$ \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p} \le \left( \sum_{i=1}^n w_i x_i^q \right)^{1/q}. $$

Applying this to the reciprocals of the numbers:

$$ \left( \sum_{i=1}^n \frac{w_i}{x_i^p} \right)^{1/p} \le \left( \sum_{i=1}^n \frac{w_i}{x_i^q} \right)^{1/q}. $$

We raise both sides to the power of −1 (a strictly decreasing function on the positive reals):

$$ \left( \sum_{i=1}^n w_i x_i^{-p} \right)^{-1/p} \ge \left( \sum_{i=1}^n w_i x_i^{-q} \right)^{-1/q}. $$
We get the inequality for means with exponents −q and −p, and we can use the same reasoning backwards, thus proving the inequalities to be equivalent; this will be used in some of the later proofs.

Geometric mean

For any q > 0 and non-negative weights summing to 1, the following inequality holds:

$$ \left( \sum_{i=1}^n w_i x_i^{-q} \right)^{-1/q} \le \prod_{i=1}^n x_i^{w_i} \le \left( \sum_{i=1}^n w_i x_i^{q} \right)^{1/q}. $$

The proof follows from Jensen's inequality, making use of the fact that the logarithm is concave:

$$ \log \prod_{i=1}^n x_i^{w_i} = \sum_{i=1}^n w_i \log x_i \le \log \sum_{i=1}^n w_i x_i. $$

By applying the exponential function to both sides and observing that as a strictly increasing function it preserves the sign of the inequality, we get

$$ \prod_{i=1}^n x_i^{w_i} \le \sum_{i=1}^n w_i x_i. $$

Taking q-th powers of the xi (and then the q-th root of both sides), we are done for the inequality with positive q; the case of negative q is identical, applying the same argument to the reciprocals 1/xi.

Inequality between any two power means

We are to prove that for any p < q the following inequality holds:

$$ \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p} \le \left( \sum_{i=1}^n w_i x_i^q \right)^{1/q}. $$

If p is negative and q is positive, the inequality is equivalent to the one proved above:

$$ \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p} \le \prod_{i=1}^n x_i^{w_i} \le \left( \sum_{i=1}^n w_i x_i^q \right)^{1/q}. $$

The proof for positive p and q is as follows. Define the function f : R+ → R+ by f(x) = x^(q/p). f is a power function, so it does have a second derivative:

$$ f''(x) = \frac{q}{p} \left( \frac{q}{p} - 1 \right) x^{\frac{q}{p} - 2}, $$

which is strictly positive within the domain of f, since q > p > 0, so we know f is convex.

Using this and Jensen's inequality we get:

$$ f\left( \sum_{i=1}^n w_i x_i^p \right) \le \sum_{i=1}^n w_i f(x_i^p), \qquad \text{i.e.} \qquad \left( \sum_{i=1}^n w_i x_i^p \right)^{q/p} \le \sum_{i=1}^n w_i x_i^q. $$

After raising both sides to the power of 1/q (an increasing function, since 1/q is positive) we get the inequality which was to be proved:

$$ \left( \sum_{i=1}^n w_i x_i^p \right)^{1/p} \le \left( \sum_{i=1}^n w_i x_i^q \right)^{1/q}. $$

Using the previously shown equivalence we can prove the inequality for negative p and q by replacing them with −q and −p, respectively.

Generalized f-mean

The power mean could be generalized further to the generalized f-mean:

$$ M_f(x_1,\dots,x_n) = f^{-1}\left( \frac{1}{n} \sum_{i=1}^n f(x_i) \right). $$

This covers the geometric mean without using a limit with f(x) = log(x). The power mean is obtained for f(x) = x^p.
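A small sketch of the generalized f-mean (the `f_mean` helper and its argument names are illustrative assumptions):

```python
import math

def f_mean(xs, f, f_inv):
    """Generalized f-mean: f_inv of the arithmetic mean of f(x_i)."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

# f(x) = log(x) recovers the geometric mean without taking a limit:
g = f_mean([2.0, 8.0], math.log, math.exp)           # sqrt(2 * 8) = 4

# f(x) = x**p recovers the power mean, e.g. p = 2 (quadratic mean):
q = f_mean([3.0, 4.0], lambda x: x ** 2, math.sqrt)  # sqrt((9 + 16) / 2)
```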


Signal processing

A power mean serves as a non-linear moving average: it is shifted towards small signal values for small p and emphasizes big signal values for big p. Given an efficient implementation of a moving arithmetic mean, here called smooth, one can implement a moving power mean by raising the samples to the power p, smoothing, and taking the p-th root of the result.
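A minimal sketch of this composition in Python (the window-based `smooth` helper and the name `power_smooth` are illustrative assumptions, not from the original):

```python
def smooth(xs, k):
    """Hypothetical moving arithmetic mean over windows of length k."""
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

def power_smooth(xs, p, k):
    """Moving power mean with nonzero exponent p: raise the samples
    to the power p, apply the moving arithmetic mean, then take the
    p-th root of each smoothed value."""
    return [m ** (1.0 / p) for m in smooth([x ** p for x in xs], k)]
```

For p = 1 this reduces to the ordinary moving average; for large p it tracks the peaks of the signal, and for p near or below zero it is pulled towards the small values.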


Notes
  1. Sýkora, Stanislav (2009). Mathematical means and averages: basic properties. Stan's Library, Vol. 3. Castano Primo, Italy. doi:10.3247/SL3Math09.001.
  2. P. S. Bullen: Handbook of Means and Their Inequalities. Dordrecht, Netherlands: Kluwer, 2003, pp. 175–177.
  3. Weisstein, Eric W. "Power Mean". MathWorld. (retrieved 2019-08-17)
  4. Thompson, Sylvanus P. (1965). Calculus Made Easy. Macmillan International Higher Education. p. 185. ISBN 9781349004874. Retrieved 5 July 2020.
  5. Jones, Alan R. (2018). Probability, Statistics and Other Frightening Stuff. Routledge. p. 48. ISBN 9781351661386. Retrieved 5 July 2020.
  6. If AC = a and BC = b, then OC = AM of a and b, and radius r = QO = OG.
    Using Pythagoras' theorem, QC² = QO² + OC², so QC = √(QO² + OC²) = QM.
    Using Pythagoras' theorem, OC² = OG² + GC², so GC = √(OC² − OG²) = GM.
    Using similar triangles, HC/GC = GC/OC, so HC = GC²/OC = HM.

