In mathematics, Young's convolution inequality is a mathematical inequality about the convolution of two functions, [1] named after William Henry Young.
In real analysis, the following result is called Young's convolution inequality: [2]
Suppose $f$ is in the Lebesgue space $L^p(\mathbb{R}^d)$ and $g$ is in $L^q(\mathbb{R}^d)$ and
$$\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1$$
with $1 \le p, q, r \le \infty.$ Then
$$\|f * g\|_r \le \|f\|_p \, \|g\|_q.$$
Here the star denotes convolution, $L^p$ is Lebesgue space, and $\|\cdot\|_p$ denotes the usual $L^p$ norm.
Equivalently, if $p, q, r \ge 1$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 2,$ then
$$\left| \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x) \, g(x - y) \, h(y) \, \mathrm{d}x \, \mathrm{d}y \right| \le \|f\|_p \, \|g\|_q \, \|h\|_r.$$
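As a simple worked illustration (this example is not taken from the cited sources), take $p = q = r = 1$: for nonnegative $f$ and $g,$ Fubini's theorem gives
$$\|f * g\|_1 = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(y) \, g(x - y) \, \mathrm{d}y \, \mathrm{d}x = \|f\|_1 \, \|g\|_1,$$
so in this case Young's inequality holds with equality for all nonnegative integrable functions.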
Young's convolution inequality has a natural generalization in which we replace $\mathbb{R}^d$ by a unimodular group $G.$ If we let $\mu$ be a bi-invariant Haar measure on $G$ and we let $f, g : G \to \mathbb{R}$ or $\mathbb{C}$ be integrable functions, then we define $f * g$ by
$$(f * g)(x) = \int_G f(y) \, g(y^{-1} x) \, \mathrm{d}\mu(y).$$
Then in this case, Young's inequality states that for $f \in L^p(G, \mu)$ and $g \in L^q(G, \mu)$ and $p, q, r \in [1, \infty]$ such that
$$\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1,$$
we have a bound
$$\|f * g\|_r \le \|f\|_p \, \|g\|_q.$$
Equivalently, if $p, q, r \ge 1$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 2,$ then
$$\left| \int_G \int_G f(x) \, g(y^{-1} x) \, h(y) \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(y) \right| \le \|f\|_p \, \|g\|_q \, \|h\|_r.$$
Since $\mathbb{R}^d$ is in fact a locally compact abelian group (and therefore unimodular) with the Lebesgue measure the desired Haar measure, this is in fact a generalization.
This generalization may be refined. Let $G$ and $\mu$ be as before and assume $1 < p, q, r < \infty$ satisfy
$$\frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1.$$
Then there exists a constant $C$ such that for any $f \in L^p(G, \mu)$ and any measurable function $g$ on $G$ that belongs to the weak $L^q$ space $L^{q,w}(G, \mu),$ which by definition means that the following supremum
$$\|g\|_{q,w}^q := \sup_{t > 0} t^q \, \mu(\{x : |g(x)| > t\})$$
is finite, we have $f * g \in L^r(G, \mu)$ and
$$\|f * g\|_r \le C \, \|f\|_p \, \|g\|_{q,w}.$$ [3]
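As an illustration of what the weak space contains (this example is not part of the statement above), take $G = \mathbb{R}^d$ with Lebesgue measure and the kernel $g(x) = |x|^{-d/q},$ which lies in $L^{q,w}$ but not in $L^q$:
$$\mu(\{x : |x|^{-d/q} > t\}) = \mu(\{x : |x| < t^{-q/d}\}) = \omega_d \, t^{-q}, \qquad \text{so} \qquad \sup_{t > 0} t^q \, \mu(\{|g| > t\}) = \omega_d < \infty,$$
where $\omega_d$ denotes the volume of the unit ball. It is for such kernels, which the original inequality cannot handle, that the weak-type refinement gives new information.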
An example application is that Young's inequality can be used to show that the heat semigroup is a contracting semigroup using the $L^p$ norm (that is, the Weierstrass transform does not enlarge the $L^p$ norm).
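Concretely (a one-line sketch, assuming the standard normalization of the heat kernel), writing $K_t(x) = (4\pi t)^{-d/2} e^{-|x|^2/(4t)}$ so that $\|K_t\|_1 = 1,$ Young's inequality with $q = 1$ and hence $r = p$ gives
$$\|K_t * f\|_p \le \|K_t\|_1 \, \|f\|_p = \|f\|_p.$$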
Young's inequality has an elementary proof with the non-optimal constant 1. [4]
We assume that the functions $f, g, h : G \to [0, \infty)$ are nonnegative and integrable, where $G$ is a unimodular group endowed with a bi-invariant Haar measure $\mu,$ and that $p, q, r \ge 1$ satisfy $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = 2,$ as in the equivalent formulation above. We use the fact that $\mu(S) = \mu(S^{-1})$ for any measurable $S \subseteq G.$ Since $p\left(2 - \tfrac{1}{q} - \tfrac{1}{r}\right) = q\left(2 - \tfrac{1}{p} - \tfrac{1}{r}\right) = r\left(2 - \tfrac{1}{p} - \tfrac{1}{q}\right) = 1,$
$$f(x) \, g(y^{-1} x) \, h(y) = \left( f(x)^p \, g(y^{-1} x)^q \right)^{1 - \frac{1}{r}} \left( f(x)^p \, h(y)^r \right)^{1 - \frac{1}{q}} \left( g(y^{-1} x)^q \, h(y)^r \right)^{1 - \frac{1}{p}}.$$
By the Hölder inequality for three functions we deduce that
$$\int_G \int_G f(x) \, g(y^{-1} x) \, h(y) \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(y) \le \left( \int_G \int_G f(x)^p g(y^{-1} x)^q \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(y) \right)^{1 - \frac{1}{r}} \left( \int_G \int_G f(x)^p h(y)^r \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(y) \right)^{1 - \frac{1}{q}} \left( \int_G \int_G g(y^{-1} x)^q h(y)^r \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(y) \right)^{1 - \frac{1}{p}}.$$
The conclusion follows then by left-invariance of the Haar measure, the fact that integrals are preserved by inversion of the domain, and by Fubini's theorem.
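Spelled out (a routine verification of the step just described), each double integral factors into a product of norms. For instance,
$$\int_G \int_G f(x)^p \, g(y^{-1} x)^q \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(y) = \int_G f(x)^p \left( \int_G g(y^{-1} x)^q \, \mathrm{d}\mu(y) \right) \mathrm{d}\mu(x) = \|f\|_p^p \, \|g\|_q^q,$$
since the inner integral equals $\|g\|_q^q$ after the substitutions $y \mapsto y^{-1}$ (integrals are preserved by inversion) and $y \mapsto y x^{-1}$ (invariance of $\mu$). Treating the other two factors similarly, the right-hand side above becomes
$$\left( \|f\|_p^p \, \|g\|_q^q \right)^{1 - \frac{1}{r}} \left( \|f\|_p^p \, \|h\|_r^r \right)^{1 - \frac{1}{q}} \left( \|g\|_q^q \, \|h\|_r^r \right)^{1 - \frac{1}{p}} = \|f\|_p \, \|g\|_q \, \|h\|_r.$$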
Young's inequality can also be proved by interpolation; see the article on Riesz–Thorin interpolation for a proof.
In case $p, q > 1,$ Young's inequality can be strengthened to a sharp form, via
$$\|f * g\|_r \le c_{p,q}^d \, \|f\|_p \, \|g\|_q,$$
where the constant $c_{p,q} < 1.$ [5] [6] [7] When this optimal constant is achieved, the functions $f$ and $g$ are multidimensional Gaussian functions.
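For reference (this explicit form is quoted from the general literature on the sharp inequality, not from the text above), the optimal constant is commonly written as
$$c_{p,q} = A_p \, A_q \, A_{r'}, \qquad A_m = \left( \frac{m^{1/m}}{m'^{\,1/m'}} \right)^{1/2}, \qquad \frac{1}{m} + \frac{1}{m'} = 1,$$
where $r'$ denotes the conjugate exponent of $r.$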
In mathematics, convolution is a mathematical operation on two functions that produces a third function. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result. Graphically, it expresses how the 'shape' of one function is modified by the other.
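In symbols (the standard one-dimensional form, included here for concreteness), the convolution of $f$ and $g$ is
$$(f * g)(t) = \int_{-\infty}^{\infty} f(\tau) \, g(t - \tau) \, \mathrm{d}\tau = \int_{-\infty}^{\infty} f(t - \tau) \, g(\tau) \, \mathrm{d}\tau,$$
where the second equality reflects the symmetry mentioned above: it does not matter which function is reflected and shifted.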
In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.
In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.
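In its basic integral form (stated here for completeness, since the proof above invokes the three-function version), Hölder's inequality says that for conjugate exponents $\frac{1}{p} + \frac{1}{q} = 1$ with $1 \le p, q \le \infty,$
$$\int |f g| \, \mathrm{d}\mu \le \|f\|_p \, \|g\|_q.$$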
In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let $(S, \Sigma, \mu)$ be a measure space, let $1 \le p \le \infty,$ and let $f$ and $g$ be elements of $L^p(S).$ Then $f + g$ is in $L^p(S),$ and we have the triangle inequality
$$\|f + g\|_p \le \|f\|_p + \|g\|_p.$$
In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.
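In its probabilistic form (one common way of writing the statement just described), for a convex function $\varphi$ and an integrable random variable $X,$
$$\varphi\!\left( \mathbb{E}[X] \right) \le \mathbb{E}\!\left[ \varphi(X) \right],$$
with the inequality reversed when $\varphi$ is concave.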
In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin.
In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.
In mathematical statistics, the Kullback–Leibler (KL) divergence, denoted $D_{\text{KL}}(P \parallel Q),$ is a type of statistical distance: a measure of how one probability distribution P is different from a second, reference probability distribution Q. A simple interpretation of the KL divergence of P from Q is the expected excess surprise from using Q as a model instead of P when the actual distribution is P. While it is a measure of how different two distributions are, and in some sense is thus a "distance", it is not actually a metric, which is the most familiar and formal type of distance. In particular, it is not symmetric in the two distributions, and does not satisfy the triangle inequality. Instead, in terms of information geometry, it is a type of divergence, a generalization of squared distance, and for certain classes of distributions, it satisfies a generalized Pythagorean theorem.
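For discrete distributions (one standard form of the definition, included for concreteness),
$$D_{\text{KL}}(P \parallel Q) = \sum_{x} P(x) \, \log \frac{P(x)}{Q(x)}.$$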
In probability theory and statistics, the generalized extreme value (GEV) distribution is a family of continuous probability distributions developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families also known as type I, II and III extreme value distributions. By the extreme value theorem the GEV distribution is the only possible limit distribution of properly normalized maxima of a sequence of independent and identically distributed random variables. Note that a limit distribution needs to exist, which requires regularity conditions on the tail of the distribution. Despite this, the GEV distribution is often used as an approximation to model the maxima of long (finite) sequences of random variables.
Differential entropy is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy, a measure of average (surprisal) of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.
In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space $M.$ It is named after Leonid Vaseršteĭn.
In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.
In mathematical analysis, and especially in real, harmonic analysis and functional analysis, an Orlicz space is a type of function space which generalizes the Lp spaces. Like the Lp spaces, they are Banach spaces. The spaces are named for Władysław Orlicz, who was the first to define them in 1932.
Beliefs depend on the available information. This idea is formalized in probability theory by conditioning. Conditional probabilities, conditional expectations, and conditional probability distributions are treated on three levels: discrete probabilities, probability density functions, and measure theory. Conditioning leads to a non-random result if the condition is completely specified; otherwise, if the condition is left random, the result of conditioning is also random.
In mathematics, the Babenko–Beckner inequality (after Konstantin I. Babenko and William E. Beckner) is a sharpened form of the Hausdorff–Young inequality having applications to uncertainty principles in the Fourier analysis of Lp spaces. The (q, p)-norm of the n-dimensional Fourier transform is defined to be
$$\|\mathcal{F}\|_{q,p} = \sup_{f \in L^p(\mathbb{R}^n)} \frac{\|\mathcal{F} f\|_q}{\|f\|_p}, \qquad 1 < p \le 2, \quad \frac{1}{p} + \frac{1}{q} = 1.$$
In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar $L^p$ spaces.
In the field of mathematical analysis, an interpolation inequality is an inequality of the form
$$\|u_0\|_{X_0} \le C \, \|u_1\|_{X_1}^{\alpha_1} \cdots \|u_n\|_{X_n}^{\alpha_n}, \qquad n \ge 2,$$
where $C$ is a constant and the $X_i$ are normed spaces.
In mathematics, Young's inequality for products is a mathematical inequality about the product of two numbers. The inequality is named after William Henry Young and should not be confused with Young's convolution inequality.
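For completeness, that inequality states (in its standard form) that for nonnegative real numbers $a$ and $b$ and conjugate exponents $\frac{1}{p} + \frac{1}{q} = 1$ with $p, q > 1,$
$$ab \le \frac{a^p}{p} + \frac{b^q}{q}.$$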
In mathematical analysis, Young's inequality for integral operators is a bound on the operator norm of an integral operator in terms of $L^p$ norms of the kernel itself.