Bounded mean oscillation

In harmonic analysis in mathematics, a function of bounded mean oscillation, also known as a BMO function, is a real-valued function whose mean oscillation is bounded (finite). The space of functions of bounded mean oscillation (BMO) is a function space that, in some precise sense, plays the same role in the theory of Hardy spaces H^p that the space L^∞ of essentially bounded functions plays in the theory of L^p-spaces: it is also called the John–Nirenberg space, after Fritz John and Louis Nirenberg, who introduced and studied it for the first time.

Historical note

According to Nirenberg (1985, p. 703 and p. 707),[1] the space of functions of bounded mean oscillation was introduced by John (1961, pp. 410–411) in connection with his studies of mappings from a bounded set Ω ⊂ R^n into R^n and the corresponding problems arising from elasticity theory, precisely from the concept of elastic strain: the basic notation was introduced in a closely following paper by John & Nirenberg (1961),[2] where several properties of this function space were proved. The next important step in the development of the theory was the proof by Charles Fefferman[3] of the duality between BMO and the Hardy space H^1, in the noted paper Fefferman & Stein 1972: a constructive proof of this result, introducing new methods and starting a further development of the theory, was given by Akihito Uchiyama.[4]

Definition

Definition 1. The mean oscillation of a locally integrable function u over a hypercube[5] Q in R^n is defined as the value of the following integral:

\[ \frac{1}{|Q|} \int_Q |u(y) - u_Q| \, dy, \]

where

  • |Q| is the volume of Q, i.e. its Lebesgue measure,
  • u_Q is the average value of u on the cube Q, i.e.

\[ u_Q = \frac{1}{|Q|} \int_Q u(y) \, dy. \]

Definition 2. A BMO function is a locally integrable function u whose mean oscillation supremum, taken over the set of all cubes Q contained in R^n, is finite:

\[ \|u\|_{BMO} = \sup_Q \frac{1}{|Q|} \int_Q |u(y) - u_Q| \, dy < +\infty. \]

Note 1. The supremum of the mean oscillation is called the BMO norm of u[6] and is denoted by \|u\|_{BMO} (in some instances it is also denoted \|u\|_{*}).

Note 2. The use of cubes in R^n as the integration domains on which the mean oscillation is calculated is not mandatory: Wiegerinck (2001) uses balls instead and, as remarked by Stein (1993, p. 140), a perfectly equivalent definition of functions of bounded mean oscillation arises in doing so.
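Definition 1 lends itself to a quick numerical experiment. The sketch below (the helper name `mean_oscillation` is ours, and plain midpoint quadrature is a crude but adequate approximation) estimates the mean oscillation of the classic unbounded BMO function log|x| on intervals shrinking toward its singularity; since substituting y = εt only shifts log|y| by the constant log ε, the mean oscillation is the same at every scale.

```python
import numpy as np

def mean_oscillation(u, a, b, n=200_000):
    """Approximate (1/|Q|) * integral over Q of |u(y) - u_Q| dy on Q = [a, b].

    Midpoint sampling avoids evaluating u exactly at an endpoint singularity.
    """
    y = a + (b - a) * (np.arange(n) + 0.5) / n
    vals = u(y)
    u_Q = vals.mean()                      # average of u over Q
    return float(np.abs(vals - u_Q).mean())

u = lambda y: np.log(np.abs(y))

# log|x| is unbounded near 0, yet its mean oscillation over [eps, 2*eps]
# stays the same (about 0.17) as eps shrinks by five orders of magnitude.
oscs = [mean_oscillation(u, 10.0**(-k), 2 * 10.0**(-k)) for k in range(6)]
```

The six values agree to several digits, consistent with log|x| having finite BMO norm; a bounded function, by contrast, would give values at most twice its sup norm.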

Notation

The universally adopted notation for the set of functions of bounded mean oscillation on a given domain Ω is BMO(Ω); when Ω = R^n, BMO(R^n) is simply written BMO.

Basic properties

BMO functions are locally p–integrable

BMO functions are locally L^p if 0 < p < ∞, but need not be locally bounded. In fact, using the John–Nirenberg inequality, one can prove that

\[ \|u\|_{BMO} \simeq \sup_Q \left( \frac{1}{|Q|} \int_Q |u(y) - u_Q|^p \, dy \right)^{1/p} \]

for 1 ≤ p < ∞.

BMO is a Banach space

Constant functions have zero mean oscillation; therefore two functions differing by a constant share the same BMO norm value even though their difference is not zero almost everywhere. Hence \|·\|_{BMO} is properly a norm on the quotient space of BMO functions modulo the space of constant functions on the domain considered.

Averages of adjacent cubes are comparable

As the name suggests, the mean or average of a function in BMO does not oscillate very much when computed over cubes close to each other in position and scale. Precisely, if Q and R are dyadic cubes such that their boundaries touch and the side length of Q is no less than one-half the side length of R (and vice versa), then

\[ |u_Q - u_R| \le C \|u\|_{BMO}, \]

where C > 0 is some universal constant. This property is, in fact, equivalent to u being in BMO, that is, if u is a locally integrable function such that |u_Q − u_R| ≤ C for all dyadic cubes Q and R adjacent in the sense described above, and u is in dyadic BMO (where the supremum is taken only over dyadic cubes Q), then u is in BMO.[7]
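This comparability is easy to see for log|x| in one dimension: the averages over the dyadic intervals [2^k, 2^(k+1)] are unbounded as k → −∞, yet any two adjacent ones differ by exactly log 2. A small check, using the exact antiderivative ∫ log x dx = x log x − x (the helper name `avg_log` is ours):

```python
import math

def avg_log(a, b):
    """Exact average of log x over [a, b], 0 < a < b."""
    return ((b * math.log(b) - b) - (a * math.log(a) - a)) / (b - a)

# Averages over the dyadic intervals Q_k = [2^k, 2^(k+1)]
avgs = [avg_log(2.0**k, 2.0**(k + 1)) for k in range(-20, 21)]
# Differences between averages over adjacent dyadic intervals
gaps = [avgs[i + 1] - avgs[i] for i in range(len(avgs) - 1)]
```

Every gap equals log 2 ≈ 0.693, even though the averages themselves range over tens of units: the averages drift slowly, cube by cube, which is exactly the comparability property above.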

BMO is the dual vector space of H1

Fefferman (1971) showed that the space BMO(R^n) is dual to H^1(R^n), the Hardy space with p = 1.[8] The pairing between f ∈ H^1 and g ∈ BMO is given by

\[ \langle f, g \rangle = \int_{\mathbb{R}^n} f(x) g(x) \, dx, \]

though some care is needed in defining this integral, as it does not in general converge absolutely.

The John–Nirenberg Inequality

The John–Nirenberg inequality is an estimate that quantifies how much a function of bounded mean oscillation may deviate from its average on a given cube.

Statement

There are constants c_1, c_2 > 0, depending only on the dimension n (and independent of u), such that for every u ∈ BMO(R^n), every cube Q in R^n and every λ > 0,

\[ \left| \{ x \in Q : |u(x) - u_Q| > \lambda \} \right| \le c_1 |Q| \exp\!\left( - \frac{c_2 \lambda}{\|u\|_{BMO}} \right). \]

Conversely, if this inequality holds over all cubes with some constant C in place of \|u\|_{BMO}, then u is in BMO with norm at most a constant times C.
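The exponential decay can be observed directly for u(x) = log(1/|x|) on Q = [−1, 1], where u_Q = 1 and, for λ ≥ 1, the level set {|u − u_Q| > λ} is exactly {|x| < e^{−(1+λ)}}, of measure 2e^{−(1+λ)}. A numerical sketch (midpoint sampling to dodge the singularity at 0; `level_measure` is our own helper name):

```python
import numpy as np

n = 2_000_000
x = -1.0 + 2.0 * (np.arange(n) + 0.5) / n   # midpoints of a grid on Q = [-1, 1]
u = np.log(1.0 / np.abs(x))
u_Q = u.mean()                              # close to 1, the average of u over Q

def level_measure(lam):
    """Estimated measure of {x in Q : |u(x) - u_Q| > lam}."""
    return 2.0 * float(np.mean(np.abs(u - u_Q) > lam))

measures = {lam: level_measure(lam) for lam in (2.0, 3.0, 4.0)}
```

Each estimate is close to 2e^{−(1+λ)}: every unit increase in λ shrinks the level set by a factor of e, the exponential decay the John–Nirenberg inequality predicts for a general BMO function.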

A consequence: the distance in BMO to L^∞

The John–Nirenberg inequality can actually give more information than just the BMO norm of a function. For a locally integrable function f, let A(f) be the infimal A > 0 for which

\[ \sup_Q \frac{1}{|Q|} \int_Q e^{|f(y) - f_Q|/A} \, dy < \infty. \]

The John–Nirenberg inequality implies that A(f) ≤ C\|f\|_{BMO} for some universal constant C. For an L^∞ function, however, the above inequality will hold for all A > 0; in other words, A(f) = 0 if f is in L^∞. Hence the constant A(f) gives us a way of measuring how far a function in BMO is from the subspace L^∞. This statement can be made more precise:[9] there is a constant C, depending only on the dimension n, such that for any function f ∈ BMO(R^n) the following two-sided inequality holds:

\[ \frac{1}{C} A(f) \le \inf_{g \in L^\infty} \|f - g\|_{BMO} \le C\, A(f). \]

Generalizations and extensions

The spaces BMOH and BMOA

When the dimension of the ambient space is 1, the space BMO can be seen as a linear subspace of harmonic functions on the unit disk, and it plays a major role in the theory of Hardy spaces: by using definition 2, it is possible to define the BMO(T) space on the unit circle as the space of functions f : T → R such that

\[ \frac{1}{|I|} \int_I |f(y) - f_I| \, dy < +\infty \]

for every arc I, i.e. such that the mean oscillation of f over every arc I of the unit circle[10] is bounded. Here as before f_I is the mean value of f over the arc I.

Definition 3. A harmonic function on the unit disk is said to belong to harmonic BMO, or to the BMOH space, if and only if it is the Poisson integral of a BMO(T) function. Therefore, BMOH is the space of all functions u of the form

\[ u(a) = \frac{1}{2\pi} \int_{\mathbf{T}} \frac{1 - |a|^2}{|a - e^{i\theta}|^2} \, f(e^{i\theta}) \, d\theta, \qquad f \in BMO(\mathbf{T}), \]

equipped with the norm

\[ \|u\|_{BMOH} = \sup_{|a| < 1} \frac{1}{2\pi} \int_{\mathbf{T}} \frac{1 - |a|^2}{|a - e^{i\theta}|^2} \, \left| f(e^{i\theta}) - u(a) \right| \, d\theta. \]

The subspace of analytic functions belonging to BMOH is called the analytic BMO space, or the BMOA space.

BMOA as the dual space of H1(D)

Charles Fefferman in his original work proved that the real BMO space is dual to the real-valued harmonic Hardy space on the upper half-space R^n × (0, ∞).[11] In the theory of complex and harmonic analysis on the unit disk, his result is stated as follows.[12] Let H^p(D) be the analytic Hardy space on the unit disc. For p = 1 we identify (H^1)* with BMOA by pairing f ∈ H^1(D) and g ∈ BMOA using the anti-linear transformation T_g:

\[ T_g(f) = \lim_{r \to 1^-} \frac{1}{2\pi} \int_0^{2\pi} \overline{g(e^{i\theta})} \, f(r e^{i\theta}) \, d\theta. \]

Notice that although the limit always exists for an H^1 function f and T_g is an element of the dual space (H^1)*, since the transformation is anti-linear we do not have an isometric isomorphism between (H^1)* and BMOA. However, an isometry can be obtained by considering a certain space of conjugate BMOA functions.

The space VMO

The space VMO of functions of vanishing mean oscillation is the closure in BMO of the continuous functions that vanish at infinity. It can also be defined as the space of functions whose "mean oscillations" on cubes Q are not only bounded, but also tend to zero uniformly as the radius of the cube Q tends to 0 or ∞. The space VMO is a sort of Hardy space analogue of the space of continuous functions vanishing at infinity, and in particular the real valued harmonic Hardy space H1 is the dual of VMO. [13]

Relation to the Hilbert transform

A locally integrable function f on R is BMO if and only if it can be written as

\[ f = f_1 + H f_2 + \alpha, \]

where f_1, f_2 ∈ L^∞, α is a constant and H is the Hilbert transform.

The BMO norm of f is then equivalent to the infimum of \|f_1\|_∞ + \|f_2\|_∞ over all such representations.

Similarly, f is VMO if and only if it can be represented in the above form with f_1, f_2 bounded uniformly continuous functions on R.[14]
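The representation can be made concrete with the standard example f = H χ_{[0,1]}: with the convention H u(x) = (1/π) p.v. ∫ u(t)/(x − t) dt, one gets H χ_{[0,1]}(x) = (1/π) log|x/(x − 1)|, an unbounded BMO function produced from a single bounded one. The sketch below (our own helper names; the principal value is symmetrized to remove the singularity, then approximated by midpoint quadrature) checks this closed form at a few points:

```python
import numpy as np

def hilbert_pv(u, x, R=5.0, n=200_000):
    """H u(x) = (1/pi) p.v. ∫ u(t)/(x - t) dt, computed in the symmetrized form
    (1/pi) ∫_0^R [u(x - s) - u(x + s)] / s ds, which removes the singularity
    at s = 0 (R must be large enough to cover the support of u around x)."""
    s = R * (np.arange(n) + 0.5) / n      # midpoint nodes on (0, R]
    h = R / n
    return float(((u(x - s) - u(x + s)) / s).sum() * h / np.pi)

chi = lambda t: ((t >= 0.0) & (t <= 1.0)).astype(float)

# Closed form: H(chi_[0,1])(x) = (1/pi) * log|x / (x - 1)|
closed = lambda x: np.log(abs(x / (x - 1.0))) / np.pi

pairs = [(hilbert_pv(chi, x), closed(x)) for x in (0.25, 0.5, 2.0, -0.5)]
```

The numeric and closed-form values agree to a few decimal places; the logarithmic blow-up of the closed form at x = 0 and x = 1 shows why H maps L^∞ into BMO rather than back into L^∞.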

The dyadic BMO space

Let Δ denote the set of dyadic cubes in R^n. The space of dyadic BMO, written BMO_d, is the space of functions satisfying the same inequality as BMO functions, except that the supremum is taken over dyadic cubes only. This supremum is sometimes denoted \|·\|_{BMO_d}.

This space properly contains BMO. In particular, the function log(x)χ_[0,∞)(x) is in dyadic BMO but not in BMO. However, if a function f is such that \|f(· − x)\|_{BMO_d} ≤ C for all x in R^n and some C > 0, then by the one-third trick f is also in BMO. In the case of BMO on T^n instead of R^n, if a function f is such that \|f(· − x)\|_{BMO_d} ≤ C for n + 1 suitably chosen x, then f is also in BMO. This means that BMO(T^n) is the intersection of n + 1 translates of dyadic BMO. By duality, H^1(T^n) is the sum of n + 1 translates of dyadic H^1.[15]
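The asymmetry of log(x)χ_{[0,∞)} shows up numerically: a dyadic interval touching the origin is either [−2^k, 0], where the function vanishes, or [0, 2^k], where the mean oscillation is scale-invariant and bounded; but on the symmetric intervals [−ε, ε], which are not dyadic, the mean oscillation works out to (1 − log ε)/2 and so grows without bound. A sketch with midpoint quadrature (`mean_oscillation` is our own helper name):

```python
import numpy as np

def mean_oscillation(u, a, b, n=400_000):
    """Approximate (1/|Q|) * integral over Q of |u - u_Q| on Q = [a, b]."""
    y = a + (b - a) * (np.arange(n) + 0.5) / n   # midpoints avoid y = 0 exactly
    vals = u(y)
    return float(np.abs(vals - vals.mean()).mean())

def f(y):
    """log(y) for y > 0, zero otherwise: the function log(x) * chi_[0,inf)."""
    out = np.zeros_like(y)
    pos = y > 0
    out[pos] = np.log(y[pos])
    return out

# Dyadic intervals [0, 2^k]: mean oscillation is the same bounded value (2/e)
dyadic = [mean_oscillation(f, 0.0, 2.0**k) for k in range(-10, 1)]

# Symmetric intervals [-eps, eps] straddling 0: oscillation grows like |log eps|/2
straddle = [mean_oscillation(f, -eps, eps) for eps in (1e-2, 1e-3, 1e-4)]
```

The dyadic values all sit below 1 while the straddling values climb past 5, illustrating why the function is in dyadic BMO but not in BMO.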

Although dyadic BMO is a much narrower class than BMO, many theorems that are true for BMO are much simpler to prove for dyadic BMO, and in some cases one can recover the original BMO theorems by proving them first in the special dyadic case. [16]

Examples

Examples of BMO functions include the following:

  • All bounded (measurable) functions: the BMO norm of a bounded function is at most twice its essential supremum norm, while the converse inclusion fails, as the next example shows.
  • The function log|P| for any polynomial P that is not identically zero: in particular, log|x| is in BMO but is not bounded.
  • If w is an A_∞ weight, then log w is BMO; conversely, if f is BMO, then e^{δf} is an A_∞ weight for δ > 0 small enough, a fact which is a consequence of the John–Nirenberg inequality.

Notes

  1. Along with the collected papers of Fritz John, a general reference for the theory of functions of bounded mean oscillation, with many (short) historical notes, is the noted book by Stein (1993, chapter IV).
  2. The paper ( John 1961 ) just precedes the paper ( John & Nirenberg 1961 ) in volume 14 of the Communications on Pure and Applied Mathematics.
  3. Elias Stein credits only Fefferman for the discovery of this fact: see ( Stein 1993 , p. 139).
  4. See his proof in the paper Uchiyama 1982.
  5. When n = 3 or n = 2, Q is respectively a cube or a square, while when n = 1 the domain of integration is a bounded closed interval.
  6. Since, as shown in the " Basic properties " section, it is exactly a norm.
  7. Jones, Peter (1980). "Extension Theorems for BMO". Indiana University Mathematics Journal. 29 (1): 41–66. doi: 10.1512/iumj.1980.29.29005 .
  8. See the original paper by Fefferman & Stein (1972), or the paper by Uchiyama (1982) or the comprehensive monograph of Stein (1993 , p. 142) for a proof.
  9. See the paper Garnett & Jones 1978 for the details.
  10. An arc in the unit circle T can be defined as the image of a finite interval on the real line R under a continuous function whose codomain is T itself: a simpler, somewhat naive definition can be found in the entry "Arc (geometry)".
  11. See the section on Fefferman theorem of the present entry.
  12. See for example Girela (2001 , pp. 102–103).
  13. See reference Stein 1993 , p. 180.
  14. Garnett 2007
  15. T. Mei, BMO is the intersection of two translates of dyadic BMO. C. R. Math. Acad. Sci. Paris 336 (2003), no. 12, 1003-1006.
  16. See the referenced paper by Garnett & Jones 1982 for a comprehensive development of these themes.
  17. See reference Stein 1993, p. 140.
  18. See reference Stein 1993 , p. 197.

References

Historical references

Scientific references