Interpolation inequality

In the field of mathematical analysis, an interpolation inequality is an inequality of the form

$$\|u_0\|_0 \leq C\, \|u_1\|_1^{\alpha_1} \|u_2\|_2^{\alpha_2} \cdots \|u_n\|_n^{\alpha_n}, \qquad n \geq 2,$$

where for $0 \leq k \leq n$, $u_k$ is an element of some particular vector space $X_k$ equipped with norm $\|\cdot\|_k$, $\alpha_k$ is some real exponent, and $C$ is some constant independent of $u_0, \ldots, u_n$. The vector spaces concerned are usually function spaces, and many interpolation inequalities assume $u_0 = u_1 = \cdots = u_n = u$ and so bound the norm of an element in one space by a combination of norms in other spaces, such as Ladyzhenskaya's inequality and the Gagliardo–Nirenberg interpolation inequality, both given below. Nonetheless, some important interpolation inequalities involve distinct elements $u_0, \ldots, u_n$, including Hölder's inequality and Young's inequality for convolutions, which are also presented below.

Applications

The main applications of interpolation inequalities lie in fields of study, such as partial differential equations, where various function spaces are used. An important example is the Sobolev spaces $W^{s,p}$, consisting of functions whose weak derivatives up to some (not necessarily integer) order $s$ lie in $L^p$ spaces for some $p$. There, interpolation inequalities are used, roughly speaking, to bound derivatives of some order with a combination of derivatives of other orders. They can also be used to bound products, convolutions, and other combinations of functions, often with some flexibility in the choice of function space. Interpolation inequalities are fundamental to the notion of an interpolation space, such as the space $W^{s,p}$, which, loosely speaking, is composed of functions whose order-$s$ weak derivatives lie in $L^p$. Interpolation inequalities are also applied when working with Besov spaces $B^s_{p,q}$, which are a generalization of the Sobolev spaces. [1] Another class of spaces admitting interpolation inequalities are the Hölder spaces.

Examples

A simple example of an interpolation inequality — one in which all the $u_k$ are the same function $u$, but the norms $\|\cdot\|_k$ are different — is Ladyzhenskaya's inequality for functions $u \colon \mathbb{R}^2 \to \mathbb{R}$, which states that whenever $u$ is a compactly supported function such that both $u$ and its gradient $\nabla u$ are square integrable, it follows that the fourth power of $u$ is integrable and [2]

$$\int_{\mathbb{R}^2} |u|^4 \, dx \leq C \int_{\mathbb{R}^2} |u|^2 \, dx \cdot \int_{\mathbb{R}^2} |\nabla u|^2 \, dx,$$

i.e.

$$\|u\|_{L^4} \leq C^{1/4}\, \|u\|_{L^2}^{1/2}\, \|\nabla u\|_{L^2}^{1/2}.$$
A slightly weaker form of Ladyzhenskaya's inequality applies in dimension 3, and Ladyzhenskaya's inequality is actually a special case of a general result that subsumes many of the interpolation inequalities involving Sobolev spaces: the Gagliardo–Nirenberg interpolation inequality. [3]: 276–280
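As a quick numerical illustration of the two-dimensional inequality (a sketch, not a proof), one can compute the dimensionless ratio $\|u\|_{L^4}^2 / (\|u\|_{L^2} \|\nabla u\|_{L^2})$ for a concrete test function on a grid and observe that it is bounded. The Gaussian test function, grid, and quadrature rule below are illustrative choices:

```python
import numpy as np

# Numerical sanity check of Ladyzhenskaya's inequality in 2D:
# compute the scale-invariant ratio ||u||_{L4}^2 / (||u||_{L2} ||grad u||_{L2})
# for a Gaussian bump, which decays fast enough to be effectively
# compactly supported on this grid.
h = 0.05
x = np.arange(-6, 6, h)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.exp(-(X**2 + Y**2) / 2)

ux, uy = np.gradient(u, h, h)             # finite-difference gradient
l2 = np.sqrt(np.sum(u**2) * h**2)         # ||u||_{L2}, simple quadrature
l4 = (np.sum(u**4) * h**2) ** 0.25        # ||u||_{L4}
grad_l2 = np.sqrt(np.sum(ux**2 + uy**2) * h**2)

ratio = l4**2 / (l2 * grad_l2)
print(f"||u||_4^2 / (||u||_2 ||grad u||_2) = {ratio:.4f}")
```

For this Gaussian the ratio works out to $1/\sqrt{2\pi} \approx 0.40$, comfortably finite; trying other test functions gives other values of the ratio, all bounded by the universal constant in the inequality.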

The following example, this one allowing interpolation of non-integer Sobolev spaces, is also a special case of the Gagliardo–Nirenberg interpolation inequality. [4] Denoting the $L^2$-based Sobolev spaces by $H^s = W^{s,2}(\mathbb{R}^d)$, and given real numbers $1 \leq s_1 < s_2$ and $\theta \in (0,1)$, and a function $u \in H^{s_2}$, we have

$$\|u\|_{H^{\theta s_1 + (1-\theta) s_2}} \leq \|u\|_{H^{s_1}}^{\theta}\, \|u\|_{H^{s_2}}^{1-\theta}.$$
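Since the $H^s$ norm can be expressed through the Fourier transform as $\|u\|_{H^s}^2 = \int (1+|\xi|^2)^s |\hat{u}(\xi)|^2 \, d\xi$, this inequality reduces to Hölder's inequality applied to that integral, and the same argument applies verbatim to discrete Fourier sums. The following NumPy sketch checks the discrete analogue; the signal and the exponents $s_1, s_2, \theta$ are arbitrary illustrative choices:

```python
import numpy as np

# Check ||u||_{H^s} <= ||u||_{H^{s1}}^theta * ||u||_{H^{s2}}^(1-theta)
# with s = theta*s1 + (1-theta)*s2, using the Fourier-side definition
# ||u||_{H^s}^2 = sum_k (1 + k^2)^s |u_hat(k)|^2 on a periodic grid.
# The discrete inequality is exact (Hölder's inequality on the sum).

def h_norm(u_hat, freqs, s):
    """Discrete H^s norm computed from Fourier coefficients."""
    return np.sqrt(np.sum((1 + freqs**2) ** s * np.abs(u_hat) ** 2))

rng = np.random.default_rng(0)
n = 256
u = rng.standard_normal(n)
u_hat = np.fft.fft(u) / n
freqs = np.fft.fftfreq(n, d=1.0 / n)   # integer frequencies

s1, s2, theta = 1.0, 3.0, 0.4
s = theta * s1 + (1 - theta) * s2
lhs = h_norm(u_hat, freqs, s)
rhs = h_norm(u_hat, freqs, s1) ** theta * h_norm(u_hat, freqs, s2) ** (1 - theta)
print(lhs <= rhs * (1 + 1e-12))   # True
```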
The elementary interpolation inequality for Lebesgue spaces, which is a direct consequence of Hölder's inequality, [3]: 707 reads: for exponents $1 \leq p \leq r \leq q \leq \infty$, every $u \in L^p \cap L^q$ is also in $L^r$ and one has

$$\|u\|_{L^r} \leq \|u\|_{L^p}^{\theta}\, \|u\|_{L^q}^{1-\theta},$$

where, in the case of $q < \infty$, $\tfrac{1}{r}$ is written as a convex combination of $\tfrac{1}{p}$ and $\tfrac{1}{q}$, that is, $\tfrac{1}{r} = \tfrac{\theta}{p} + \tfrac{1-\theta}{q}$ with $\theta \in [0,1]$; in the case of $q = \infty$, $\tfrac{1}{r}$ is written as $\tfrac{1}{r} = \tfrac{\theta}{p}$ with $\theta = \tfrac{p}{r} \in [0,1]$.
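Because the inequality holds on any measure space, the discrete $\ell^p$ norms (counting measure) give a convenient way to check it numerically. The vector and exponents in the sketch below are illustrative choices:

```python
import numpy as np

# Check ||u||_r <= ||u||_p^theta * ||u||_q^(1-theta)
# with 1/r = theta/p + (1-theta)/q, for a random vector
# under counting measure (discrete l^p norms).

def lp_norm(u, p):
    return np.sum(np.abs(u) ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
u = rng.standard_normal(1000)

p, q, theta = 2.0, 6.0, 0.5
r = 1.0 / (theta / p + (1 - theta) / q)   # r = 3 for these choices
lhs = lp_norm(u, r)
rhs = lp_norm(u, p) ** theta * lp_norm(u, q) ** (1 - theta)
print(lhs <= rhs)   # True
```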


An example of an interpolation inequality in which the elements $u_k$ differ is Young's inequality for convolutions. [5] Given exponents $1 \leq p, q, r \leq \infty$ such that $\tfrac{1}{p} + \tfrac{1}{q} = 1 + \tfrac{1}{r}$, and functions $f \in L^p$, $g \in L^q$, their convolution $f * g$ lies in $L^r$ and

$$\|f * g\|_{L^r} \leq \|f\|_{L^p}\, \|g\|_{L^q}.$$
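Young's inequality also holds for sequences on the integers with the full discrete convolution, which makes a direct numerical check easy. The sequences and the choice of exponents below are illustrative:

```python
import numpy as np

# Check Young's convolution inequality ||f*g||_r <= ||f||_p * ||g||_q
# with 1/p + 1/q = 1 + 1/r, for discrete sequences on the integers.

def lp_norm(u, p):
    return np.sum(np.abs(u) ** p) ** (1.0 / p)

rng = np.random.default_rng(2)
f = rng.standard_normal(50)
g = rng.standard_normal(80)
conv = np.convolve(f, g)   # full discrete convolution

p, q = 1.0, 2.0            # then 1/r = 1/p + 1/q - 1 = 1/2, so r = 2
r = 2.0
lhs = lp_norm(conv, r)
rhs = lp_norm(f, p) * lp_norm(g, q)
print(lhs <= rhs)   # True
```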

Related Research Articles

Convolution: Integral expressing the amount of overlap of one function as it is shifted over another

In mathematics, convolution is a mathematical operation on two functions that produces a third function. The term convolution refers to both the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result. Graphically, it expresses how the 'shape' of one function is modified by the other.

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let $(S, \Sigma, \mu)$ be a measure space, let $1 \leq p \leq \infty$, and let $f$ and $g$ be elements of $L^p(S)$. Then $f + g$ is in $L^p(S)$, and we have the triangle inequality $\|f + g\|_p \leq \|f\|_p + \|g\|_p$.

In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function together with its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, i.e. a Banach space. Intuitively, a Sobolev space is a space of functions possessing sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.

Bump function: Smooth and compactly supported function

In mathematics, a bump function is a function on a Euclidean space $\mathbb{R}^n$ which is both smooth and compactly supported. The set of all bump functions with domain $\mathbb{R}^n$ forms a vector space, denoted $C_0^\infty(\mathbb{R}^n)$ or $C_c^\infty(\mathbb{R}^n)$. The dual space of this space endowed with a suitable topology is the space of distributions.

In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin.

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude or length of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

In mathematical analysis, the Sobolev inequalities form a class of inequalities relating norms, including those of Sobolev spaces. These are used to prove the Sobolev embedding theorem, giving inclusions between certain Sobolev spaces, and the Rellich–Kondrachov theorem, showing that under slightly stronger conditions some Sobolev spaces are compactly embedded in others. They are named after Sergei Lvovich Sobolev.

In harmonic analysis in mathematics, a function of bounded mean oscillation, also known as a BMO function, is a real-valued function whose mean oscillation is bounded (finite). The space of functions of bounded mean oscillation (BMO) is a function space that, in some precise sense, plays the same role in the theory of Hardy spaces Hp that the space L^∞ of essentially bounded functions plays in the theory of Lp-spaces: it is also called John–Nirenberg space, after Fritz John and Louis Nirenberg who introduced and studied it for the first time.

The Sobolev conjugate of $p$ for $1 \leq p < n$, where $n$ is space dimensionality, is $p^* = \frac{pn}{n - p} > p$.

In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is Friedrichs' inequality.

In the field of mathematical analysis, an interpolation space is a space which lies "in between" two other Banach spaces. The main applications are in Sobolev spaces, where spaces of functions that have a noninteger number of derivatives are interpolated from the spaces of functions with integer number of derivatives.

In mathematical analysis, the Pólya–Szegő inequality states that the Sobolev energy of a function in a Sobolev space does not increase under symmetric decreasing rearrangement. The inequality is named after the mathematicians George Pólya and Gábor Szegő.

In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar $L^p$ spaces.

In mathematics, Ladyzhenskaya's inequality is any of a number of related functional inequalities named after the Soviet Russian mathematician Olga Aleksandrovna Ladyzhenskaya. The original such inequality, for functions of two real variables, was introduced by Ladyzhenskaya in 1958 to prove the existence and uniqueness of long-time solutions to the Navier–Stokes equations in two spatial dimensions. There is an analogous inequality for functions of three real variables, but the exponents are slightly different; much of the difficulty in establishing existence and uniqueness of solutions to the three-dimensional Navier–Stokes equations stems from these different exponents. Ladyzhenskaya's inequality is one member of a broad class of inequalities known as interpolation inequalities.

In mathematics, and in particular in mathematical analysis, the Gagliardo–Nirenberg interpolation inequality is a result in the theory of Sobolev spaces that relates the -norms of different weak derivatives of a function through an interpolation inequality. The theorem is of particular importance in the framework of elliptic partial differential equations and was originally formulated by Emilio Gagliardo and Louis Nirenberg in 1958. The Gagliardo-Nirenberg inequality has found numerous applications in the investigation of nonlinear partial differential equations, and has been generalized to fractional Sobolev spaces by Haïm Brezis and Petru Mironescu in the late 2010s.

In mathematics, Young's convolution inequality is a mathematical inequality about the convolution of two functions, named after William Henry Young.

Normalized solutions (nonlinear Schrödinger equation)

In mathematics, a normalized solution to an ordinary or partial differential equation is a solution with prescribed norm, that is, a solution which satisfies a condition such as $\int_{\mathbb{R}^N} |u(x)|^2 \, dx = 1$. In this article, the normalized solution is introduced by using the nonlinear Schrödinger equation. The nonlinear Schrödinger equation (NLSE) is a fundamental equation in quantum mechanics and various other fields of physics, describing the evolution of complex wave functions. In quantum physics, normalization means that the total probability of finding a quantum particle anywhere in the universe is unity.

References

  1. DeVore, Ronald A.; Popov, Vasil A. (1988). "Interpolation of Besov spaces". Transactions of the American Mathematical Society. 305 (1): 397–414. doi:10.1090/S0002-9947-1988-0920166-3. ISSN 0002-9947.
  2. Foias, C.; Manley, O.; Rosa, R.; Temam, R. (2001). Navier–Stokes Equations and Turbulence. Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511546754. ISBN 978-0-521-36032-6.
  3. Evans, Lawrence C. (2010). Partial Differential Equations (2nd ed.). Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-4974-3. OCLC 465190110.
  4. Brézis, Haïm (2011). Functional Analysis, Sobolev Spaces and Partial Differential Equations. New York: Springer. p. 233. ISBN 978-0-387-70914-7. OCLC 695395895.
  5. Leoni, Giovanni (2017). A First Course in Sobolev Spaces (2nd ed.). Providence, R.I.: American Mathematical Society. ISBN 978-1-4704-2921-8. OCLC 976406106.