In the field of mathematical analysis, an interpolation inequality is an inequality of the form

$$\|u_0\|_0 \le C\,\|u_1\|_1^{\alpha_1} \|u_2\|_2^{\alpha_2} \cdots \|u_n\|_n^{\alpha_n},$$

where for $0 \le k \le n$, $u_k$ is an element of some particular vector space $X_k$ equipped with norm $\|\cdot\|_k$, $\alpha_k$ is some real exponent, and $C$ is some constant independent of $u_0, \ldots, u_n$. The vector spaces concerned are usually function spaces, and many interpolation inequalities assume $u_0 = u_1 = \cdots = u_n = u$ and so bound the norm of an element $u$ in one space with a combination of its norms in other spaces, such as Ladyzhenskaya's inequality and the Gagliardo–Nirenberg interpolation inequality, both given below. Nonetheless, some important interpolation inequalities involve distinct elements $u_0, \ldots, u_n$, including Hölder's inequality and Young's inequality for convolutions, which are also presented below.
The main applications of interpolation inequalities lie in fields of study, such as partial differential equations, where various function spaces are used. An important example is the family of Sobolev spaces, consisting of functions whose weak derivatives up to some (not necessarily integer) order lie in $L^p$ spaces for some $p$. There, interpolation inequalities are used, roughly speaking, to bound derivatives of some order with a combination of derivatives of other orders. They can also be used to bound products, convolutions, and other combinations of functions, often with some flexibility in the choice of function space. Interpolation inequalities are fundamental to the notion of an interpolation space, such as the space $W^{s,p}$, which loosely speaking is composed of functions whose order-$s$ weak derivatives lie in $L^p$. Interpolation inequalities are also applied when working with Besov spaces $B^{s}_{p,q}$, which are a generalization of the Sobolev spaces. [1] Another class of spaces admitting interpolation inequalities are the Hölder spaces.
A simple example of an interpolation inequality — one in which all the $u_k$ are the same $u$, but the norms $\|\cdot\|_k$ are different — is Ladyzhenskaya's inequality for functions $u\colon \mathbb{R}^2 \to \mathbb{R}$, which states that whenever $u$ is a compactly supported function such that both $u$ and its gradient $\nabla u$ are square integrable, it follows that the fourth power of $u$ is integrable and [2]

$$\int_{\mathbb{R}^2} |u|^4 \,dx \le C \int_{\mathbb{R}^2} |u|^2 \,dx \int_{\mathbb{R}^2} |\nabla u|^2 \,dx,$$

i.e.

$$\|u\|_{L^4} \le C^{1/4}\,\|u\|_{L^2}^{1/2}\,\|\nabla u\|_{L^2}^{1/2}.$$
A slightly weaker form of Ladyzhenskaya's inequality applies in dimension 3, and Ladyzhenskaya's inequality is actually a special case of a general result that subsumes many of the interpolation inequalities involving Sobolev spaces, the Gagliardo-Nirenberg interpolation inequality. [3]
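As an informal illustration, the two-dimensional inequality can be spot-checked numerically. The following sketch (assuming NumPy; the Gaussian test function and the grid are arbitrary choices, not taken from the sources above) approximates the three norms for $u(x,y) = e^{-(x^2+y^2)/2}$ and computes the ratio $\|u\|_{L^4}^2 / (\|u\|_{L^2}\,\|\nabla u\|_{L^2})$, whose exact value for this Gaussian is $1/\sqrt{2\pi} \approx 0.399$:

```python
import numpy as np

# Sample the Gaussian u(x, y) = exp(-(x^2 + y^2)/2) on a square grid.
x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2) / 2)

# Finite-difference gradient and Riemann-sum approximations of the norms.
ux, uy = np.gradient(u, dx, dx)
l2 = np.sqrt(np.sum(u**2) * dx**2)                 # ||u||_{L^2}
l4_sq = np.sqrt(np.sum(u**4) * dx**2)              # ||u||_{L^4}^2
grad_l2 = np.sqrt(np.sum(ux**2 + uy**2) * dx**2)   # ||grad u||_{L^2}

ratio = l4_sq / (l2 * grad_l2)   # exact value for this u: 1/sqrt(2*pi)
print(ratio)
```

A single test function of course only gives a lower bound on the admissible constant; it illustrates the inequality but does not prove it.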
The following example, this one allowing interpolation of non-integer Sobolev spaces, is also a special case of the Gagliardo-Nirenberg interpolation inequality. [4] Denoting the $L^2$-based Sobolev spaces by $H^s = W^{s,2}$, and given real numbers $1 \le k < \ell$ and a function $u \in H^{\ell}$, we have

$$\|u\|_{H^k} \le \|u\|_{L^2}^{1 - k/\ell}\,\|u\|_{H^\ell}^{k/\ell}.$$
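Written on the Fourier side, with $\|u\|_{H^s}^2 = \sum_{\xi} (1+|\xi|^2)^s |\hat u(\xi)|^2$, the bound $\|u\|_{H^k} \le \|u\|_{L^2}^{1-k/\ell}\|u\|_{H^\ell}^{k/\ell}$ reduces to Hölder's inequality applied to the weights $(1+|\xi|^2)^k$, so it holds exactly even for finite sums. The following sketch (assuming NumPy; the frequencies and coefficients are arbitrary hypothetical choices) verifies this for a random trigonometric polynomial:

```python
import numpy as np

# Hypothetical Fourier coefficients of a 1-D trigonometric polynomial.
rng = np.random.default_rng(0)
xi = np.arange(-50, 51).astype(float)   # frequencies
coeff_sq = rng.random(xi.size)          # |u_hat(xi)|^2, chosen at random

def h_norm(s):
    """Sobolev H^s norm via the Fourier weights (1 + |xi|^2)^s."""
    return np.sqrt(np.sum((1 + xi**2)**s * coeff_sq))

k, l = 1.0, 3.0
lhs = h_norm(k)
rhs = h_norm(0) ** (1 - k/l) * h_norm(l) ** (k/l)   # h_norm(0) is the L^2 norm
print(lhs <= rhs)   # True, by Hölder's inequality on the weighted sums
```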
The elementary interpolation inequality for Lebesgue spaces, which is a direct consequence of Hölder's inequality, [3] reads: for exponents $1 \le p < r < q \le \infty$, every $u \in L^p \cap L^q$ is also in $L^r$ and one has

$$\|u\|_{L^r} \le \|u\|_{L^p}^{\alpha}\,\|u\|_{L^q}^{1-\alpha},$$

where, in the case $q < \infty$, $1/r$ is written as a convex combination of $1/p$ and $1/q$, that is, $1/r = \alpha/p + (1-\alpha)/q$ with $\alpha \in (0,1)$; in the case $q = \infty$, $1/r$ is written as $1/r = \alpha/p$ with $\alpha = p/r$ and $1 - \alpha = (r-p)/r$.
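For a concrete instance, take $p = 2$, $q = 4$, and $r = 3$; then $1/3 = \alpha/2 + (1-\alpha)/4$ gives $\alpha = 1/3$. The following sketch (assuming NumPy; the test function is an arbitrary hypothetical choice) checks the resulting bound $\|u\|_{L^3} \le \|u\|_{L^2}^{1/3}\|u\|_{L^4}^{2/3}$, which the Riemann sums satisfy exactly by the same Hölder argument:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
u = np.exp(-x**2) * (1.0 + 0.5 * np.sin(5 * x))   # arbitrary test function

def lp_norm(f, p):
    """Riemann-sum approximation of the L^p norm of f."""
    return (np.sum(np.abs(f)**p) * dx) ** (1.0 / p)

lhs = lp_norm(u, 3)
rhs = lp_norm(u, 2) ** (1/3) * lp_norm(u, 4) ** (2/3)
print(lhs <= rhs)   # True: the discrete sums obey the same Hölder bound
```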
An example of an interpolation inequality where the elements $u_k$ differ is Young's inequality for convolutions. [5] Given exponents $1 \le p, q, r \le \infty$ such that $1/p + 1/q = 1 + 1/r$, and functions $f \in L^p$, $g \in L^q$, their convolution $f * g$ lies in $L^r$ and

$$\|f * g\|_{L^r} \le \|f\|_{L^p}\,\|g\|_{L^q}.$$
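Young's inequality also holds for convolution on a uniform grid (with norms taken as Riemann sums), since the exponent relation makes the grid-spacing factors cancel, so it can be illustrated numerically. The following sketch (assuming NumPy; the two functions are arbitrary hypothetical choices) uses $p = q = 3/2$, for which $2/3 + 2/3 = 1 + 1/r$ forces $r = 3$:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
f = np.exp(-x**2)          # arbitrary test functions
g = np.exp(-np.abs(x))

def lp_norm(h, p):
    """Riemann-sum approximation of the L^p norm of h."""
    return (np.sum(np.abs(h)**p) * dx) ** (1.0 / p)

conv = np.convolve(f, g) * dx   # Riemann-sum approximation of (f * g)
lhs = lp_norm(conv, 3)
rhs = lp_norm(f, 1.5) * lp_norm(g, 1.5)
print(lhs <= rhs)   # True: the discrete inequality holds with constant 1
```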