Clarkson's inequalities


In mathematics, Clarkson's inequalities, named after James A. Clarkson, are results in the theory of Lp spaces. They give bounds for the Lp-norms of the sum and difference of two measurable functions in Lp in terms of the Lp-norms of those functions individually.


Statement of the inequalities

Let (X, Σ, μ) be a measure space; let f, g : X → R be measurable functions in Lp. Then, for 2 ≤ p < +∞,

\[
\left\| \frac{f+g}{2} \right\|_{L^p}^p + \left\| \frac{f-g}{2} \right\|_{L^p}^p \le \frac{1}{2} \left\| f \right\|_{L^p}^p + \frac{1}{2} \left\| g \right\|_{L^p}^p .
\]

For 1 < p < 2,

\[
\left\| \frac{f+g}{2} \right\|_{L^p}^q + \left\| \frac{f-g}{2} \right\|_{L^p}^q \le \left( \frac{1}{2} \left\| f \right\|_{L^p}^p + \frac{1}{2} \left\| g \right\|_{L^p}^p \right)^{q/p} ,
\]

where

\[
\frac{1}{p} + \frac{1}{q} = 1 ,
\]

i.e., q = p/(p − 1).

The case p ≥ 2 is somewhat easier to prove, being a simple application of the triangle inequality and the convexity of x ↦ x^p.
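As a concrete illustration (not part of the original article), both inequalities can be checked numerically in the finite-dimensional case: on Rⁿ with counting measure the Lp-norm is the ordinary p-norm. The Python sketch below verifies the appropriate inequality on random vectors; the helper names lp_norm and clarkson_holds are ad hoc.

```python
import numpy as np

def lp_norm(f, p):
    """L^p norm of a vector under counting measure (the ordinary p-norm)."""
    return np.sum(np.abs(f) ** p) ** (1.0 / p)

def clarkson_holds(f, g, p, tol=1e-12):
    """Check the Clarkson inequality appropriate to the exponent p > 1."""
    if p >= 2:
        # ||(f+g)/2||_p^p + ||(f-g)/2||_p^p <= 1/2 ||f||_p^p + 1/2 ||g||_p^p
        lhs = lp_norm((f + g) / 2, p) ** p + lp_norm((f - g) / 2, p) ** p
        rhs = 0.5 * lp_norm(f, p) ** p + 0.5 * lp_norm(g, p) ** p
    else:
        # 1 < p < 2, with q = p/(p - 1) the conjugate exponent:
        # ||(f+g)/2||_p^q + ||(f-g)/2||_p^q
        #     <= (1/2 ||f||_p^p + 1/2 ||g||_p^p)^(q/p)
        q = p / (p - 1)
        lhs = lp_norm((f + g) / 2, p) ** q + lp_norm((f - g) / 2, p) ** q
        rhs = (0.5 * lp_norm(f, p) ** p + 0.5 * lp_norm(g, p) ** p) ** (q / p)
    return lhs <= rhs + tol

rng = np.random.default_rng(0)
f, g = rng.normal(size=200), rng.normal(size=200)
print(all(clarkson_holds(f, g, p) for p in (1.2, 1.5, 2.0, 3.0, 8.0)))  # True
```

For p = 2 the first inequality reduces to the parallelogram law and holds with equality, which is why a small floating-point tolerance is used in the comparison.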

Related Research Articles

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let (S, Σ, μ) be a measure space, let 1 ≤ p ≤ ∞, and let f and g be elements of Lp(S). Then f + g is in Lp(S), and we have the triangle inequality ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p.
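A brief numerical sanity check of this triangle inequality in the finite-dimensional case (an illustration added here, not taken from the source; the p-norm on Rⁿ plays the role of the Lp-norm under counting measure):

```python
import numpy as np

p = 3.0
norm_p = lambda h: np.sum(np.abs(h) ** p) ** (1.0 / p)  # counting-measure L^p norm

rng = np.random.default_rng(1)
f, g = rng.normal(size=50), rng.normal(size=50)

# Minkowski's inequality: ||f + g||_p <= ||f||_p + ||g||_p
print(norm_p(f + g) <= norm_p(f) + norm_p(g))  # True
```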

Convex function: Real function with secant line between points above the graph itself

In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above the graph between the two points. Equivalently, a function is convex if its epigraph is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include the quadratic function x² and the exponential function eˣ. In simple terms, a convex function is one whose graph is shaped like a cup ∪, while a concave function's graph is shaped like a cap ∩.

Jensen's inequality: Theorem of convex functions

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.
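For a finite sample with uniform probability weights, this simplest form reads φ(mean of x) ≤ mean of φ(x) for convex φ. A small numerical illustration (added here; the choice φ(t) = t² is arbitrary):

```python
import numpy as np

x = np.array([0.5, 1.0, 2.5, 4.0])   # finite sample, uniform probability weights
phi = lambda t: t ** 2               # a convex function

# Jensen's inequality: phi(E[x]) <= E[phi(x)]
print(phi(x.mean()), phi(x).mean())    # 4.0 5.875
print(phi(x.mean()) <= phi(x).mean())  # True
```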

In complex analysis, the Hardy spaces Hp are certain spaces of holomorphic functions on the unit disk or upper half plane. They were introduced by Frigyes Riesz, who named them after G. H. Hardy because of a 1915 paper of Hardy's. In real analysis, Hardy spaces are certain spaces of distributions on the real line, which are boundary values of the holomorphic functions of the complex Hardy spaces and are related to the Lp spaces of functional analysis. For 1 ≤ p < ∞ these real Hardy spaces Hp are certain subsets of Lp, while for p < 1 the Lp spaces have some undesirable properties and the Hardy spaces are much better behaved.

In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin.

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

In mathematics, the Marcinkiewicz interpolation theorem, discovered by Józef Marcinkiewicz (1939), is a result bounding the norms of non-linear operators acting on Lp spaces.

In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.

In mathematical analysis, the Sobolev inequalities are a class of inequalities relating norms, including those of Sobolev spaces. They are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem, showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev.

In harmonic analysis in mathematics, a function of bounded mean oscillation, also known as a BMO function, is a real-valued function whose mean oscillation is bounded (finite). The space of functions of bounded mean oscillation (BMO) is a function space that, in some precise sense, plays the same role in the theory of Hardy spaces Hp that the space L∞ of essentially bounded functions plays in the theory of Lp-spaces: it is also called John–Nirenberg space, after Fritz John and Louis Nirenberg, who introduced and studied it for the first time.

In mathematics, uniformly convex spaces are common examples of reflexive Banach spaces. The concept of uniform convexity was first introduced by James A. Clarkson in 1936.

In mathematics, the modulus of convexity and the characteristic of convexity are measures of "how convex" the unit ball in a Banach space is. In some sense, the modulus of convexity has the same relationship to the ε-δ definition of uniform convexity as the modulus of continuity does to the ε-δ definition of continuity.

In mathematics, Hanner's inequalities are results in the theory of Lp spaces. Their proof was published in 1956 by Olof Hanner. They provide a simpler way of proving the uniform convexity of Lp spaces for p ∈ (1, +∞) than the approach proposed by James A. Clarkson in 1936.

In mathematical analysis, and especially in real analysis, harmonic analysis, and functional analysis, an Orlicz space is a type of function space which generalizes the Lp spaces. Like the Lp spaces, they are Banach spaces. The spaces are named for Władysław Orlicz, who was the first to define them in 1932.

In mathematics, the Babenko–Beckner inequality (after K. Ivan Babenko and William E. Beckner) is a sharpened form of the Hausdorff–Young inequality having applications to uncertainty principles in the Fourier analysis of Lp spaces. The (q, p)-norm of the n-dimensional Fourier transform F is defined to be ‖F‖_{q,p} = sup ‖Ff‖_q / ‖f‖_p, the supremum taken over nonzero f in Lp(Rⁿ), where 1 < p ≤ 2 and q = p/(p − 1).

In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar Lp spaces.

In mathematics, Young's convolution inequality is a mathematical inequality about the convolution of two functions, named after William Henry Young.

In mathematics, a uniformly smooth space is a normed vector space satisfying the property that for every ε > 0 there exists δ > 0 such that if x and y are vectors with ‖x‖ = 1 and ‖y‖ ≤ δ, then ‖x + y‖ + ‖x − y‖ ≤ 2 + ε‖y‖.

References