Hardy's inequality

From Wikipedia, The Free Encyclopedia

Hardy's inequality is an inequality in mathematics, named after G. H. Hardy.

Its discrete version states that if $a_1, a_2, a_3, \dots$ is a sequence of non-negative real numbers, then for every real number $p > 1$ one has

$$\sum_{n=1}^\infty \left(\frac{a_1 + a_2 + \cdots + a_n}{n}\right)^p \le \left(\frac{p}{p-1}\right)^p \sum_{n=1}^\infty a_n^p.$$

If the right-hand side is finite, equality holds if and only if $a_n = 0$ for all $n$.
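As a quick numerical illustration (a sketch that is not part of the article; the sample sequence and exponents are arbitrary choices), the discrete inequality can be checked directly:

```python
# Numerical sanity check of the discrete Hardy inequality:
#   sum_n ((a_1 + ... + a_n)/n)^p  <=  (p/(p-1))^p * sum_n a_n^p.

def hardy_sides(a, p):
    """Return (left, right) sides of the discrete Hardy inequality."""
    partial_sum = 0.0
    left = 0.0
    for n, a_n in enumerate(a, start=1):
        partial_sum += a_n
        left += (partial_sum / n) ** p
    right = (p / (p - 1)) ** p * sum(a_n ** p for a_n in a)
    return left, right

a = [1.0 / n for n in range(1, 5001)]  # arbitrary non-negative sequence
for p in (1.5, 2.0, 3.0):
    left, right = hardy_sides(a, p)
    assert left <= right  # the inequality holds for every p > 1
```

The finite truncation is harmless here: the inequality is also valid for finite sums, as the direct proof below shows.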

An integral version of Hardy's inequality states the following: if $f$ is a measurable function with non-negative values, then

$$\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_0^\infty f(x)^p\,dx.$$

If the right-hand side is finite, equality holds if and only if f(x) = 0 almost everywhere.
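The integral version can likewise be spot-checked numerically. In the sketch below (not part of the article) the test function $f(x) = (1+x)^{-2}$ is an arbitrary choice for which $\frac1x\int_0^x f(t)\,dt = \frac{1}{1+x}$, so both sides reduce to integrals of powers of $1/(1+x)$, approximated by the trapezoid rule:

```python
# Illustrative numerical check of the integral Hardy inequality with
# f(x) = (1+x)^(-2), for which (1/x) * ∫_0^x f(t) dt = 1/(1+x).

def trapezoid(g, a, b, steps):
    """Trapezoid-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps)))

for p in (1.5, 2.0, 3.0):
    # left side: ∫_0^∞ ((1/x) ∫_0^x f)^p dx = ∫_0^∞ (1+x)^(-p) dx
    left = trapezoid(lambda x: (1 + x) ** (-p), 0.0, 200.0, 100000)
    # right side: (p/(p-1))^p * ∫_0^∞ f(x)^p dx = (p/(p-1))^p * ∫_0^∞ (1+x)^(-2p) dx
    right = (p / (p - 1)) ** p * trapezoid(lambda x: (1 + x) ** (-2 * p), 0.0, 200.0, 100000)
    assert left <= right
```

Truncating the domain at a finite bound only decreases the left-hand side, so the comparison remains valid.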

Hardy's inequality was first published and proved (at least the discrete version, with a worse constant) in 1920 in a note by Hardy.[1] The original formulation was in an integral form slightly different from the above.

Statements

General discrete Hardy inequality

The general weighted one-dimensional version reads as follows:[2]:§332 if $p > 1$, $r > 1$ and $a_1, a_2, \dots \ge 0$, then

$$\sum_{n=1}^\infty \frac{(a_1 + a_2 + \cdots + a_n)^p}{n^r} \le \left(\frac{p}{r-1}\right)^p \sum_{n=1}^\infty \frac{(n a_n)^p}{n^r}.$$

General one-dimensional integral Hardy inequality

The general weighted one-dimensional version reads as follows:[2]:§330 if $p > 1$ and $r \ne 1$, then

$$\int_0^\infty \frac{F(x)^p}{x^r}\,dx \le \left(\frac{p}{|r-1|}\right)^p \int_0^\infty \frac{\big(x f(x)\big)^p}{x^r}\,dx,$$

where $F(x) = \int_0^x f(t)\,dt$ if $r > 1$, and $F(x) = \int_x^\infty f(t)\,dt$ if $r < 1$.

Multidimensional Hardy inequalities with gradient

Multidimensional Hardy inequality around a point

In the multidimensional case, Hardy's inequality can be extended to $L^p$-spaces, taking the form[3]

$$\left\|\frac{u}{|x|}\right\|_{L^p(\mathbb{R}^n)} \le \frac{p}{n-p}\,\|\nabla u\|_{L^p(\mathbb{R}^n)}, \qquad u \in C_c^\infty(\mathbb{R}^n),$$

where $1 \le p < n$, and where the constant $\frac{p}{n-p}$ is known to be sharp; by density the inequality then extends to the Sobolev space $W^{1,p}(\mathbb{R}^n)$.

Similarly, if $p > n$, then one has for every $u \in C_c^\infty(\mathbb{R}^n)$

$$\int_{\mathbb{R}^n} \frac{|u(x) - u(0)|^p}{|x|^p}\,dx \le \left(\frac{p}{p-n}\right)^p \int_{\mathbb{R}^n} |\nabla u(x)|^p\,dx.$$
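As an illustrative check (not from the article), the pointwise Hardy inequality $\int |u|^p/|x|^p \le (p/(n-p))^p \int |\nabla u|^p$ can be tested on a radial function in dimension $n = 3$ with $p = 2$, where the integrals over $\mathbb{R}^3$ reduce to one-dimensional radial integrals; the test function $u(x) = e^{-|x|}$ is an arbitrary choice:

```python
import math

# Radial spot check of the Hardy inequality in R^3 with p = 2:
#   ∫ |u|^2 / |x|^2 dx  <=  (p/(n-p))^p * ∫ |∇u|^2 dx,  (2/(3-2))^2 = 4.
# For radial u, ∫_{R^3} G(|x|) dx = 4π ∫_0^∞ G(r) r^2 dr (trapezoid rule below).

def radial_integral(g, rmax=40.0, steps=40000):
    """Approximate 4π ∫_0^rmax g(r) r^2 dr by the trapezoid rule."""
    h = rmax / steps
    total = 0.5 * (g(0.0) * 0.0 + g(rmax) * rmax ** 2)
    total += sum(g(i * h) * (i * h) ** 2 for i in range(1, steps))
    return 4 * math.pi * h * total

p, ndim = 2, 3
u = lambda r: math.exp(-r)        # radial profile of the test function
du = lambda r: -math.exp(-r)      # u'(r); for radial u, |∇u(x)| = |u'(|x|)|
left = radial_integral(lambda r: (u(r) / r) ** p if r > 0 else 0.0)
right = (p / (ndim - p)) ** p * radial_integral(lambda r: abs(du(r)) ** p)
assert left <= right
```

For this particular $u$ both sides are exactly computable ($2\pi$ versus $4\pi$), so the margin in the assertion is comfortable.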

Multidimensional Hardy inequality near the boundary

If $\Omega \subsetneq \mathbb{R}^n$ is a nonempty convex open set, then for every $u \in W^{1,p}_0(\Omega)$,

$$\int_\Omega \frac{|u(x)|^p}{\operatorname{dist}(x, \partial\Omega)^p}\,dx \le \left(\frac{p}{p-1}\right)^p \int_\Omega |\nabla u(x)|^p\,dx,$$

and the constant $\left(\frac{p}{p-1}\right)^p$ cannot be improved.[4]

Fractional Hardy inequality

If $1 \le p < \infty$, $0 < s < 1$ and $sp > 1$, there exists a constant $C$ such that for every continuous, compactly supported $f : [0, \infty) \to \mathbb{R}$ satisfying $f(0) = 0$, one has[5]:Lemma 2

$$\int_0^\infty \frac{|f(x)|^p}{x^{sp}}\,dx \le C \int_0^\infty \int_0^\infty \frac{|f(x) - f(y)|^p}{|x - y|^{1+sp}}\,dx\,dy.$$

Proof of the inequality

Integral version (integration by parts and Hölder)

Hardy's original proof[1][2]:§327 (ii) begins with an integration by parts to get

$$\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx = \frac{p}{p-1}\int_0^\infty f(x)\left(\frac{1}{x}\int_0^x f(t)\,dt\right)^{p-1} dx.$$

Then, by Hölder's inequality,

$$\frac{p}{p-1}\int_0^\infty f(x)\left(\frac{1}{x}\int_0^x f(t)\,dt\right)^{p-1} dx \le \frac{p}{p-1}\left(\int_0^\infty f(x)^p\,dx\right)^{1/p}\left(\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx\right)^{(p-1)/p},$$

and the conclusion follows by dividing both sides by the last factor (which may be assumed finite and positive, the inequality being trivial otherwise) and raising the result to the power $p$.
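The integration-by-parts identity $\int_0^\infty (F(x)/x)^p\,dx = \frac{p}{p-1}\int_0^\infty f(x)(F(x)/x)^{p-1}\,dx$, with $F(x) = \int_0^x f(t)\,dt$, can be verified numerically on a concrete example. In this sketch (not part of the article) the choices $f(x) = e^{-x}$ and $p = 3$ are arbitrary:

```python
import math

# Numerical spot check of the integration-by-parts identity with
# f(x) = exp(-x), so F(x) = ∫_0^x f(t) dt = 1 - exp(-x), and p = 3:
#   ∫_0^∞ (F(x)/x)^p dx  =  p/(p-1) * ∫_0^∞ f(x) (F(x)/x)^(p-1) dx

def avg_f(x):
    """F(x)/x = (1 - exp(-x))/x, extended continuously by 1 at x = 0."""
    return 1.0 if x == 0.0 else -math.expm1(-x) / x

def trapezoid(g, a, b, steps):
    """Trapezoid-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps)))

p = 3
lhs = trapezoid(lambda x: avg_f(x) ** p, 0.0, 200.0, 200000)
rhs = p / (p - 1) * trapezoid(lambda x: math.exp(-x) * avg_f(x) ** (p - 1), 0.0, 200.0, 200000)
assert abs(lhs - rhs) < 1e-3  # agreement up to quadrature and truncation error
```

The residual difference comes from truncating the domain at a finite bound and from the quadrature step, both of which are well below the tolerance used.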

Integral version (scaling and Minkowski)

A change of variables $t = sx$ gives

$$\left(\int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx\right)^{1/p} = \left(\int_0^\infty \left(\int_0^1 f(sx)\,ds\right)^p dx\right)^{1/p},$$

which, by Minkowski's integral inequality, is at most

$$\int_0^1 \left(\int_0^\infty f(sx)^p\,dx\right)^{1/p} ds.$$

Finally, by another change of variables $y = sx$, the last expression equals

$$\int_0^1 s^{-1/p}\,ds\,\left(\int_0^\infty f(y)^p\,dy\right)^{1/p} = \frac{p}{p-1}\left(\int_0^\infty f(x)^p\,dx\right)^{1/p}.$$

Discrete version: from the continuous version

Assuming the right-hand side to be finite, we must have $a_n \to 0$ as $n \to \infty$. Hence, for any positive integer $j$, there are only finitely many terms bigger than $2^{-j}$. This allows us to construct a decreasing sequence $b_1 \ge b_2 \ge \cdots$ containing the same positive terms as the original sequence (but possibly no zero terms). Since $a_1 + a_2 + \cdots + a_n \le b_1 + b_2 + \cdots + b_n$ for every $n$, it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining $f(x) = b_n$ if $n - 1 < x < n$ and $f(x) = 0$ otherwise. Indeed, one has

$$\int_0^\infty f(x)^p\,dx = \sum_{n=1}^\infty b_n^p$$

and, for $n - 1 < x < n$, there holds

$$\left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p = \left(\frac{b_1 + \cdots + b_{n-1} + (x - n + 1)b_n}{x}\right)^p \ge \left(\frac{b_1 + \cdots + b_n}{n}\right)^p$$

(the last inequality is equivalent to $(n - x)(b_1 + \cdots + b_{n-1}) \ge (n - 1)(n - x)b_n$, which is true as the new sequence is decreasing) and thus

$$\sum_{n=1}^\infty \left(\frac{b_1 + \cdots + b_n}{n}\right)^p \le \int_0^\infty \left(\frac{1}{x}\int_0^x f(t)\,dt\right)^p dx \le \left(\frac{p}{p-1}\right)^p \int_0^\infty f(x)^p\,dx = \left(\frac{p}{p-1}\right)^p \sum_{n=1}^\infty b_n^p.$$

Discrete version: Direct proof

Let $p > 1$ and let $a_1, a_2, \dots, a_N$ be positive real numbers. Set $S_k = a_1 + a_2 + \cdots + a_k$. First we prove the inequality

$$\sum_{k=1}^N \frac{S_k^p}{k^p} \le \frac{p}{p-1}\sum_{k=1}^N \frac{a_k\,S_k^{p-1}}{k^{p-1}}. \tag{*}$$

Let $T_n = \frac{S_n}{n}$ and let $b_n$ be the difference between the $n$-th terms in the right-hand side and the left-hand side of $(*)$, that is, $b_n := \frac{p}{p-1}\frac{a_n S_n^{p-1}}{n^{p-1}} - \frac{S_n^p}{n^p}$. We have:

$$b_n = \frac{p}{p-1}\frac{a_n S_n^{p-1}}{n^{p-1}} - T_n^p$$

or, using $a_n = S_n - S_{n-1} = nT_n - (n-1)T_{n-1}$,

$$b_n = \frac{p}{p-1}\big(nT_n - (n-1)T_{n-1}\big)T_n^{p-1} - T_n^p = \left(\frac{np}{p-1} - 1\right)T_n^p - \frac{p(n-1)}{p-1}\,T_{n-1}T_n^{p-1}.$$

According to Young's inequality we have:

$$T_{n-1}T_n^{p-1} \le \frac{T_{n-1}^p}{p} + (p-1)\frac{T_n^p}{p},$$

from which it follows that:

$$b_n \ge \left(\frac{np}{p-1} - 1 - (n-1)\right)T_n^p - \frac{n-1}{p-1}\,T_{n-1}^p = \frac{n}{p-1}\,T_n^p - \frac{n-1}{p-1}\,T_{n-1}^p.$$

By telescoping we have:

$$\sum_{n=1}^N b_n \ge \frac{N}{p-1}\,T_N^p \ge 0,$$

proving $(*)$. Applying Hölder's inequality to the right-hand side of $(*)$ we have:

$$\sum_{k=1}^N \frac{S_k^p}{k^p} \le \frac{p}{p-1}\sum_{k=1}^N \frac{a_k S_k^{p-1}}{k^{p-1}} \le \frac{p}{p-1}\left(\sum_{k=1}^N a_k^p\right)^{1/p}\left(\sum_{k=1}^N \frac{S_k^p}{k^p}\right)^{(p-1)/p},$$

from which we immediately obtain:

$$\sum_{k=1}^N \frac{S_k^p}{k^p} \le \left(\frac{p}{p-1}\right)^p \sum_{k=1}^N a_k^p.$$

Letting $N \to \infty$ we obtain Hardy's inequality.
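The intermediate inequality $(*)$, $\sum_k S_k^p/k^p \le \frac{p}{p-1}\sum_k a_k S_k^{p-1}/k^{p-1}$, holds for every finite $N$ by the telescoping argument, so it can be checked numerically as well (an illustrative sketch, not part of the article; the sequence is an arbitrary choice):

```python
# Numerical check of the intermediate inequality (*) from the direct proof:
#   sum_k S_k^p / k^p  <=  p/(p-1) * sum_k a_k * S_k^(p-1) / k^(p-1),
# where S_k = a_1 + ... + a_k.

def star_sides(a, p):
    """Return (left, right) sides of the intermediate inequality (*)."""
    S, left, right = 0.0, 0.0, 0.0
    for k, a_k in enumerate(a, start=1):
        S += a_k
        left += (S / k) ** p
        right += a_k * S ** (p - 1) / k ** (p - 1)
    return left, p / (p - 1) * right

a = [1.0 / (k ** 0.7) for k in range(1, 1001)]  # arbitrary positive terms
for p in (1.2, 2.0, 4.0):
    left, right = star_sides(a, p)
    assert left <= right
```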

Notes

  1. Hardy, G. H. (1920). "Note on a theorem of Hilbert". Mathematische Zeitschrift. 6 (3–4): 314–317. doi:10.1007/BF01199965. S2CID 122571449.
  2. Hardy, G. H.; Littlewood, J. E.; Pólya, G. (1952). Inequalities (Second ed.). Cambridge, UK: Cambridge University Press.
  3. Ruzhansky, Michael; Suragan, Durvudkhan (2019). Hardy Inequalities on Homogeneous Groups: 100 Years of Hardy Inequalities. Birkhäuser Basel. ISBN 978-3-030-02894-7.
  4. Marcus, Moshe; Mizel, Victor J.; Pinchover, Yehuda (1998). "On the best constant for Hardy's inequality in $\mathbb{R}^n$". Transactions of the American Mathematical Society. 350 (8): 3237–3255. doi:10.1090/S0002-9947-98-02122-9.
  5. Mironescu, Petru (2018). "The role of the Hardy type inequalities in the theory of function spaces" (PDF). Revue roumaine de mathématiques pures et appliquées. 63 (4): 447–525.
