Clarkson's inequalities

In mathematics, Clarkson's inequalities, named after James A. Clarkson, are results in the theory of Lp spaces. They give bounds for the Lp-norms of the sum and difference of two measurable functions in Lp in terms of the Lp-norms of those functions individually.

Statement of the inequalities

Let (X, Σ, μ) be a measure space and let f, g : X → R be measurable functions in Lp. Then, for 2 ≤ p < +∞,

$$\left\| \frac{f+g}{2} \right\|_{L^p}^p + \left\| \frac{f-g}{2} \right\|_{L^p}^p \le \frac{1}{2} \|f\|_{L^p}^p + \frac{1}{2} \|g\|_{L^p}^p .$$

For 1 < p < 2,

$$\left\| \frac{f+g}{2} \right\|_{L^p}^q + \left\| \frac{f-g}{2} \right\|_{L^p}^q \le \left( \frac{1}{2} \|f\|_{L^p}^p + \frac{1}{2} \|g\|_{L^p}^p \right)^{\frac{q}{p}} ,$$

where

$$\frac{1}{p} + \frac{1}{q} = 1,$$

i.e., q = p/(p − 1).
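
As a quick illustration, the inequalities can be checked numerically in the special case of counting measure on a finite set, where the Lp norm is just the ordinary p-norm of a vector. The following Python sketch is illustrative only; the helper names lp_norm and clarkson_sides, and the sample vectors f and g, are our own choices, not part of the original article.

```python
# Numerical sanity check of Clarkson's inequalities for counting measure on
# {1, ..., n}, where the L^p norm is the usual p-norm of a vector.
# Illustrative sketch only; f and g are arbitrary sample data.
import numpy as np

def lp_norm(v, p):
    # p-norm of a vector: (sum_i |v_i|^p)^(1/p)
    return float(np.sum(np.abs(v) ** p) ** (1.0 / p))

def clarkson_sides(f, g, p):
    # Return (lhs, rhs) of the Clarkson inequality appropriate to p.
    if p >= 2:
        lhs = lp_norm((f + g) / 2, p) ** p + lp_norm((f - g) / 2, p) ** p
        rhs = 0.5 * lp_norm(f, p) ** p + 0.5 * lp_norm(g, p) ** p
    else:  # 1 < p < 2, with conjugate exponent q = p / (p - 1)
        q = p / (p - 1)
        lhs = lp_norm((f + g) / 2, p) ** q + lp_norm((f - g) / 2, p) ** q
        rhs = (0.5 * lp_norm(f, p) ** p + 0.5 * lp_norm(g, p) ** p) ** (q / p)
    return lhs, rhs

rng = np.random.default_rng(0)
f, g = rng.normal(size=10), rng.normal(size=10)
for p in (1.5, 2.0, 3.0):
    lhs, rhs = clarkson_sides(f, g, p)
    print(f"p = {p}: lhs = {lhs:.6f} <= rhs = {rhs:.6f} : {lhs <= rhs}")
```

Note that for p = 2 the first inequality holds with equality, since it reduces to the parallelogram law.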

Related Research Articles

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let (S, Σ, μ) be a measure space, let 1 ≤ p ≤ ∞, and let f and g be elements of Lp(S). Then f + g is in Lp(S), and we have the triangle inequality ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p, with equality for 1 < p < ∞ if and only if f and g are positively linearly dependent; that is, f = λg for some λ ≥ 0, or g = 0. Here, the norm is given by ‖f‖_p = (∫ |f|^p dμ)^{1/p} if p < ∞, or in the case p = ∞ by the essential supremum ‖f‖_∞ = ess sup |f|.

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation.

In complex analysis, the Hardy spaces Hp are certain spaces of holomorphic functions on the unit disk or upper half plane. They were introduced by Frigyes Riesz, who named them after G. H. Hardy because of a 1915 paper of Hardy's. In real analysis, Hardy spaces are certain spaces of distributions on the real line, which are boundary values of the holomorphic functions of the complex Hardy spaces, and are related to the Lp spaces of functional analysis. For 1 ≤ p < ∞ these real Hardy spaces Hp are certain subsets of Lp, while for p < 1 the Lp spaces have some undesirable properties and the Hardy spaces are much better behaved.

In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin.

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

In mathematics, the Marcinkiewicz interpolation theorem, discovered by Józef Marcinkiewicz, is a result bounding the norms of non-linear operators acting on Lp spaces.

In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.

In mathematics, there is in mathematical analysis a class of Sobolev inequalities, relating norms including those of Sobolev spaces. These are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev.

In harmonic analysis in mathematics, a function of bounded mean oscillation, also known as a BMO function, is a real-valued function whose mean oscillation is bounded (finite). The space of functions of bounded mean oscillation (BMO) is a function space that, in some precise sense, plays the same role in the theory of Hardy spaces Hp that the space L∞ of essentially bounded functions plays in the theory of Lp-spaces; it is also called the John–Nirenberg space, after Fritz John and Louis Nirenberg, who introduced and studied it for the first time.

In mathematics, uniformly convex spaces are common examples of reflexive Banach spaces. The concept of uniform convexity was first introduced by James A. Clarkson in 1936.

In mathematics, the Hardy–Littlewood maximal operator M is a significant non-linear operator used in real analysis and harmonic analysis.

In mathematics, the modulus of convexity and the characteristic of convexity are measures of "how convex" the unit ball in a Banach space is. In some sense, the modulus of convexity has the same relationship to the ε-δ definition of uniform convexity as the modulus of continuity does to the ε-δ definition of continuity.

In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space.

In mathematical analysis, and especially in real analysis, harmonic analysis, and functional analysis, an Orlicz space is a type of function space which generalizes the Lp spaces. Like the Lp spaces, they are Banach spaces. The spaces are named for Władysław Orlicz, who was the first to define them in 1932.

In the field of mathematical analysis, an interpolation space is a space which lies "in between" two other Banach spaces. The main applications are in Sobolev spaces, where spaces of functions that have a noninteger number of derivatives are interpolated from the spaces of functions with integer number of derivatives.

In mathematics, the Babenko–Beckner inequality (after Konstantin I. Babenko and William E. Beckner) is a sharpened form of the Hausdorff–Young inequality having applications to uncertainty principles in the Fourier analysis of Lp spaces. The (q, p)-norm of the n-dimensional Fourier transform is defined to be ‖F‖_{q,p} = sup_{f ∈ L^p(R^n)} ‖Ff‖_q / ‖f‖_p, where 1 < p ≤ 2 and q = p/(p − 1) is the conjugate exponent.

In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar Lp spaces.

In mathematics, Young's convolution inequality is a mathematical inequality about the convolution of two functions, named after William Henry Young.
