Convolution power

In mathematics, the convolution power is the n-fold iteration of the convolution of a function with itself. Thus if x is a function on Euclidean space Rd and n is a natural number, then the convolution power is defined by

$$x^{*n} = \underbrace{x * x * \cdots * x * x}_{n}, \qquad x^{*0} = \delta_0,$$

where * denotes the convolution operation of functions on Rd and δ0 is the Dirac delta distribution. This definition makes sense if x is an integrable function (in L1), a rapidly decreasing distribution (in particular, a compactly supported distribution), or a finite Borel measure.
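For discrete data, the definition can be exercised directly by iterating a convolution routine. Below is a minimal sketch using NumPy (the helper name conv_power is introduced here for illustration and is not part of the article):

```python
import numpy as np

def conv_power(x, n):
    """Return the n-fold convolution power of a 1-D array.
    n = 0 yields the discrete Dirac delta, the identity for convolution."""
    result = np.array([1.0])          # delta_0
    for _ in range(n):
        result = np.convolve(result, x)
    return result

# The pmf of a fair coin flip, convolved with itself 3 times, gives the
# distribution of the sum of 3 independent flips: binomial(3, 1/2).
x = np.array([0.5, 0.5])
print(conv_power(x, 3))               # [0.125 0.375 0.375 0.125]
```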

If x is the distribution function of a random variable on the real line, then the nth convolution power of x gives the distribution function of the sum of n independent random variables with identical distribution x. The central limit theorem states that if x is in L1 and L2 with mean zero and variance σ2, then

$$P\left(\frac{x^{*n}}{\sigma \sqrt{n}} < \beta\right) \to \Phi(\beta) \quad \text{as } n \to \infty,$$

where Φ is the cumulative standard normal distribution on the real line. Equivalently, x^{*n}/(σ√n) tends weakly to the standard normal distribution.
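This convergence can be observed numerically. The following sketch (using NumPy; the three-point pmf, the value n = 200, and the grid are illustrative choices, not from the article) rescales a high convolution power and compares it with the standard normal density:

```python
import numpy as np

# A centered pmf on {-1, 0, 1} with mean 0 and variance 1/2.
x = np.array([0.25, 0.5, 0.25])
sigma2 = 0.5

n = 200
pmf = np.array([1.0])                  # delta_0
for _ in range(n):
    pmf = np.convolve(pmf, x)          # x^{*n} after n steps

support = np.arange(-n, n + 1)         # x^{*n} lives on {-n, ..., n}
z = support / np.sqrt(n * sigma2)      # standardized abscissa
dz = 1 / np.sqrt(n * sigma2)           # grid spacing after rescaling

normal = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(pmf / dz - normal)))   # small; shrinks as n grows
```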

In some cases, it is possible to define powers x^{*t} for arbitrary real t > 0. If μ is a probability measure, then μ is infinitely divisible provided there exists, for each positive integer n, a probability measure μ1/n such that

$$\mu_{1/n}^{*n} = \mu.$$

That is, a measure is infinitely divisible if it is possible to define all nth roots. Not every probability measure is infinitely divisible, and a characterization of infinitely divisible measures is of central importance in the abstract theory of stochastic processes. Intuitively, a measure should be infinitely divisible provided it has a well-defined "convolution logarithm." The natural candidates for measures having such a logarithm are those of (generalized) Poisson type, given in the form

$$\pi_{\mu,\alpha} = e^{-\alpha} \sum_{n=0}^{\infty} \frac{\alpha^n}{n!}\, \mu^{*n}.$$

In fact, the Lévy–Khinchin theorem states that a necessary and sufficient condition for a measure to be infinitely divisible is that it must lie in the closure, with respect to the vague topology, of the class of Poisson measures (Stroock 1993, §3.2).
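As a concrete example of an nth convolution root (a sketch assuming SciPy's scipy.stats.poisson; the values of λ and n are illustrative): the Poisson distribution with mean λ is infinitely divisible, and its nth root is the Poisson distribution with mean λ/n.

```python
import numpy as np
from scipy.stats import poisson

lam, n = 3.0, 4
k = np.arange(40)                      # truncated support; tail mass is negligible
root = poisson.pmf(k, lam / n)         # candidate nth convolution root

pmf = np.array([1.0])                  # delta_0
for _ in range(n):
    pmf = np.convolve(pmf, root)       # root^{*n}

target = poisson.pmf(np.arange(len(pmf)), lam)
print(np.max(np.abs(pmf - target)))    # ~0, up to truncation error
```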

Many applications of the convolution power rely on being able to define the analog of analytic functions as formal power series, with ordinary powers replaced by convolution powers. Thus if $F(z) = \sum_{n=0}^{\infty} a_n z^n$ is an analytic function, then one would like to be able to define

$$F^*(x) = a_0 \delta_0 + \sum_{n=1}^{\infty} a_n x^{*n}.$$

If x ∈ L1(Rd), or more generally is a finite Borel measure on Rd, then the latter series converges absolutely in norm provided that the norm of x is less than the radius of convergence of the original series defining F(z). In particular, it is possible for such measures to define the convolutional exponential

$$\exp^*(x) = \delta_0 + \sum_{n=1}^{\infty} \frac{x^{*n}}{n!}.$$

It is not generally possible to extend this definition to arbitrary distributions, although a class of distributions on which this series still converges in an appropriate weak sense is identified by Ben Chrouda, El Oued & Ouerdiane (2002).
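A truncated version of this series is straightforward to evaluate for a finite discrete measure. The sketch below (NumPy; the helper conv_exp, the truncation order, and the test measure are choices made here, not from the article) also checks the basic fact that convolution multiplies total masses, so the total mass of exp*(x) is e raised to the total mass of x:

```python
import math
import numpy as np

def conv_exp(x, terms=30):
    """Truncated convolutional exponential: delta_0 + sum_{n>=1} x^{*n} / n!."""
    result = np.array([1.0])           # delta_0
    power = np.array([1.0])            # running x^{*n}
    for n in range(1, terms + 1):
        power = np.convolve(power, x)
        if len(result) < len(power):   # align supports at index 0
            result = np.pad(result, (0, len(power) - len(result)))
        result = result + power / math.factorial(n)
    return result

x = np.array([0.2, 0.3])               # a finite measure on {0, 1}, total mass 0.5
y = conv_exp(x)
print(y.sum(), np.exp(x.sum()))        # both ~ 1.6487
```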

Properties

If x is itself suitably differentiable, then from the properties of convolution, one has

$$D\left(x^{*n}\right) = \left(Dx\right) * x^{*(n-1)},$$

where D denotes the derivative operator. Specifically, this holds if x is a compactly supported distribution or lies in the Sobolev space W1,1, which ensures that the derivative is sufficiently regular for the convolution to be well-defined.
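For n = 2 this identity can be checked numerically on a discretized smooth bump (a sketch; the grid, step size, and bump function are arbitrary choices):

```python
import numpy as np

h = 0.01
t = np.arange(-1, 1, h)
# Smooth compactly supported bump; the 1e-12 guards the endpoint division.
x = np.where(np.abs(t) < 1, np.exp(-1 / (1 - t**2 + 1e-12)), 0.0)

x2 = np.convolve(x, x) * h             # x * x (Riemann-sum approximation)

lhs = np.gradient(x2, h)               # D(x * x)
rhs = np.convolve(np.gradient(x, h), x) * h   # (Dx) * x
print(np.max(np.abs(lhs - rhs)))       # ~0, up to discretization error
```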

Applications

In the configuration model random graph, the size distribution of connected components can be expressed via the convolution power of the excess degree distribution (Kryven (2017)):

$$w(n) = \begin{cases} \dfrac{\mu_1}{n-1}\left[u_1^{*n}\right](n-2), & n > 1, \\ u(0), & n = 1, \end{cases}$$

Here, w(n) is the size distribution for connected components, u1 is the excess degree distribution, u denotes the degree distribution, and μ1 is its first moment.
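A sketch of evaluating this expression (NumPy/SciPy; the Poisson degree distribution with mean 2 and the truncation are illustrative choices, and the formula is as reconstructed above from Kryven (2017)):

```python
import numpy as np
from scipy.stats import poisson

k = np.arange(60)                      # truncated degree support
u = poisson.pmf(k, 2.0)                # degree distribution u(k)
mu1 = np.sum(k * u)                    # mean degree (first moment of u)
u1 = (k[:-1] + 1) * u[1:] / mu1        # excess degree distribution u1(k)

nmax = 10
w = np.zeros(nmax + 1)
w[1] = u[0]                            # isolated vertices
power = np.array([1.0])                # u1^{*0} = delta_0
for n in range(1, nmax + 1):
    power = np.convolve(power, u1)     # u1^{*n}
    if n > 1:
        w[n] = mu1 / (n - 1) * power[n - 2]

print(w[1:])                           # w(1), ..., w(10)
```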

As convolution algebras are special cases of Hopf algebras, the convolution power is a special case of the (ordinary) power in a Hopf algebra. In applications to quantum field theory, the convolution exponential, convolution logarithm, and other analytic functions based on the convolution are constructed as formal power series in the elements of the algebra (Brouder, Frabetti & Patras 2008). If, in addition, the algebra is a Banach algebra, then convergence of the series can be determined as above. In the formal setting, familiar identities such as

$$x = \log^*(\exp^* x)$$

continue to hold. Moreover, by the permanence of functional relations, they hold at the level of functions, provided all expressions are well-defined in an open set by convergent series.
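This identity can be verified numerically with truncated series (a sketch; the convolutional logarithm conv_log below is implemented, as an assumption, from the ordinary Mercator series log y = Σ (−1)^{n+1}(y − δ0)^{*n}/n, and all helper names and parameters are choices made here):

```python
import math
import numpy as np

def conv_power(x, n):
    result = np.array([1.0])           # delta_0
    for _ in range(n):
        result = np.convolve(result, x)
    return result

def pad_to(a, length):
    return np.pad(a, (0, length - len(a)))

def conv_exp(x, terms=20):
    """Truncated series delta_0 + sum_{n>=1} x^{*n} / n!."""
    length = terms * (len(x) - 1) + 1
    result = pad_to(np.array([1.0]), length)
    for n in range(1, terms + 1):
        result = result + pad_to(conv_power(x, n), length) / math.factorial(n)
    return result

def conv_log(y, terms=20):
    """Truncated series sum_{n>=1} (-1)^{n+1} (y - delta_0)^{*n} / n."""
    z = y.copy()
    z[0] -= 1.0                        # z = y - delta_0
    length = terms * (len(z) - 1) + 1
    result = np.zeros(length)
    for n in range(1, terms + 1):
        result = result + (-1) ** (n + 1) * pad_to(conv_power(z, n), length) / n
    return result

x = np.array([0.0, 0.4, 0.1])          # small measure with no mass at 0
print(conv_log(conv_exp(x))[:3])       # ~ [0.0, 0.4, 0.1]
```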

References