Set of uniqueness

In mathematics, a set of uniqueness is a concept relevant to trigonometric expansions which are not necessarily Fourier series. Their study is a relatively pure branch of harmonic analysis.

Definition

A subset E of the circle is called a set of uniqueness, or a U-set, if any trigonometric expansion

$$\sum_{n=-\infty}^{\infty} c(n)e^{int}$$

which converges to zero for $t \notin E$ is identically zero; that is, such that

c(n) = 0 for all n.

Otherwise, E is a set of multiplicity (sometimes called an M-set or a Menshov set). Analogous definitions apply on the real line, and in higher dimensions. In the latter case, one needs to specify the order of summation, e.g. "a set of uniqueness with respect to summing over balls".
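
Stated in symbols (keeping the notation above, and with convergence understood, as is customary for trigonometric series, as pointwise convergence of the symmetric partial sums $\sum_{|n|\le N} c(n)e^{int}$ as $N \to \infty$), E is a set of uniqueness exactly when

$$\Bigl(\sum_{n=-\infty}^{\infty} c(n)e^{int} = 0 \ \text{ for every } t \notin E\Bigr) \;\Longrightarrow\; c(n) = 0 \ \text{ for every } n \in \mathbb{Z},$$

and a set of multiplicity exactly when some series with not all coefficients zero converges to zero at every point outside E.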

To understand the importance of the definition, it is important to get out of the Fourier mind-set. In Fourier analysis there is no question of uniqueness, since the coefficients c(n) are derived by integrating the function. Hence, in Fourier analysis the order of actions is

1. Start with a function f.
2. Calculate the Fourier coefficients $c(n) = \frac{1}{2\pi}\int_0^{2\pi} f(t)e^{-int}\,dt$.
3. Ask whether, and in what sense, the sum $\sum_n c(n)e^{int}$ converges back to f.

In the theory of uniqueness, the order is different:

1. Start with some coefficients c(n) for which the series $\sum_n c(n)e^{int}$ converges, in some sense, to a function.
2. Ask whether the c(n) are necessarily the Fourier coefficients of that function; in other words, whether the representation is unique.

In effect, it is usually already sufficiently interesting (as in the definition above) to assume that the sum converges to zero and to ask whether that forces all the c(n) to be zero. As is usual in analysis, the most interesting questions arise when one discusses pointwise convergence. Hence the definition above, which arose when it became clear that neither convergence everywhere nor convergence almost everywhere gives a satisfactory answer.

Early research

The empty set is a set of uniqueness. This simply means that if a trigonometric series converges to zero everywhere then it is trivial, i.e. all of its coefficients vanish. This was proved by Riemann, using a delicate technique of double formal integration and showing that the resulting sum has a generalized kind of second derivative, obtained using Toeplitz operators. Later, Georg Cantor generalized Riemann's techniques to show that any countable, closed set is a set of uniqueness, a discovery which led him to the development of set theory. Paul Cohen, another innovator in set theory, started his career with a thesis on sets of uniqueness.
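
In modern presentations Riemann's double integration is usually phrased as follows (this is only a sketch of the standard argument, not Riemann's original formulation): if the series $\sum_{n=-\infty}^{\infty} c(n)e^{int}$ converges everywhere, then by the Cantor–Lebesgue theorem the coefficients tend to zero, so the Riemann function obtained by integrating the series twice term by term,

$$F(t) = \frac{c(0)\,t^{2}}{2} \;-\; \sum_{n \neq 0} \frac{c(n)}{n^{2}}\, e^{int},$$

is continuous. Riemann's first theorem states that at every point where the original series converges, its sum is recovered as the second symmetric (Schwarz) derivative

$$D^{2}F(t) \;=\; \lim_{h \to 0} \frac{F(t+h) + F(t-h) - 2F(t)}{h^{2}}.$$

Thus a series converging to zero everywhere yields $D^{2}F \equiv 0$; a classical lemma of Schwarz then shows that F is linear, and comparing with the formula for F forces c(n) = 0 for all n.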

As the theory of Lebesgue integration developed, it was assumed that any set of zero measure would be a set of uniqueness (in one dimension the locality principle for Fourier series shows that any set of positive measure is a set of multiplicity; in higher dimensions this is still an open question). This assumption was disproved by Dimitrii E. Menshov, who in 1916 constructed an example of a set of multiplicity which has measure zero.

Transformations

Every translation or dilation of a set of uniqueness is again a set of uniqueness, and the union of a countable family of closed sets of uniqueness is a set of uniqueness. There exists an example of two sets of uniqueness whose union is not a set of uniqueness, but the sets in this example are not Borel. It is an open problem whether the union of any two Borel sets of uniqueness is a set of uniqueness.

Singular distributions

A closed set is a set of multiplicity if and only if there exists a non-zero distribution S supported on the set (so in particular S must be singular) such that

$$\hat{S}(n) \to 0 \quad \text{as } |n| \to \infty$$

(here the $\hat{S}(n)$ are the Fourier coefficients of S). In all early examples of sets of multiplicity, the distribution in question was in fact a measure. In 1954, though, Ilya Piatetski-Shapiro constructed an example of a set of multiplicity which does not support any measure with Fourier coefficients tending to zero. In other words, the generalization from measures to distributions is necessary.
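
To make this precise one uses Fourier coefficients of distributions: for a distribution S on the circle these are obtained by testing S against the exponentials, $\hat{S}(n) = \langle S, e^{-int} \rangle$ (up to a normalizing constant that depends on the convention chosen), and a distribution satisfying $\hat{S}(n) \to 0$ as $|n| \to \infty$ is called a pseudofunction. In this language the criterion above reads: a closed set is a set of multiplicity if and only if it supports a non-zero pseudofunction, and a set of uniqueness if and only if it does not.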

Complexity of structure

The first evidence that sets of uniqueness have complex structure came from the study of Cantor-like sets. Raphaël Salem and Antoni Zygmund showed that a Cantor-like set with dissection ratio ξ is a set of uniqueness if and only if 1/ξ is a Pisot number, that is, an algebraic integer greater than 1 all of whose conjugates (if any) have absolute value smaller than 1. This was the first demonstration that being a set of uniqueness depends on arithmetic properties and not only on some notion of size (Nina Bari had proved the case of rational ξ a few years earlier: the Cantor-like set is then a set of uniqueness if and only if 1/ξ is an integer).
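
Two illustrations of the Salem–Zygmund criterion (with the dissection ratio satisfying $0 < \xi < 1/2$, so that the construction makes sense): for the classical middle-thirds Cantor set, $\xi = 1/3$ and $1/\xi = 3$ is an integer, hence trivially a Pisot number, so the set is a set of uniqueness; for $\xi = \sqrt{2}-1$ one has $1/\xi = 1+\sqrt{2}$, a root of $x^{2}-2x-1$ whose other root $1-\sqrt{2}$ has absolute value less than 1, so $1+\sqrt{2}$ is a Pisot number and the corresponding Cantor-like set is again a set of uniqueness. By contrast, for $\xi = 2/5$ the reciprocal $1/\xi = 5/2$ is rational but not an integer, so by Bari's result (or by the Salem–Zygmund criterion, since 5/2 is not an algebraic integer) the corresponding set is a set of multiplicity.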

Since the 1950s, much work has gone into formalizing this complexity. The family of sets of uniqueness, considered as a subset of the space of compact sets (see Hausdorff distance), was located inside the analytical hierarchy; in particular, the family of closed sets of uniqueness is a coanalytic set which is not Borel. A crucial part in this research is played by the index of the set, which is an ordinal between 1 and ω₁, first defined by Piatetski-Shapiro. Nowadays the study of sets of uniqueness is just as much a branch of descriptive set theory as it is of harmonic analysis.

