Hausdorff moment problem

In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m0, m1, m2, ...) be the sequence of moments

\[
  m_n = \int_0^1 x^n \, d\mu(x), \qquad n = 0, 1, 2, \ldots,
\]

of some Borel measure μ supported on the closed unit interval [0, 1]. In the case m_0 = 1, this is equivalent to the existence of a random variable X supported on [0, 1] such that E[X^n] = m_n for all n.
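For illustration (an example chosen here, not part of the original statement), take μ to be the Lebesgue measure on [0, 1], so that X is uniform and m_n = 1/(n + 1). A minimal Python sketch compares this closed form with a Monte Carlo estimate of E[X^n]:

```python
# Illustration with an assumed example measure: mu = Lebesgue measure on [0, 1],
# i.e. X uniform, for which m_n = integral_0^1 x^n dx = 1/(n + 1).
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=1_000_000)   # X supported on [0, 1]

for n in range(6):
    exact = 1.0 / (n + 1)                # m_n for the uniform measure
    estimate = np.mean(samples ** n)     # Monte Carlo estimate of E[X^n]
    print(f"n={n}: m_n={exact:.4f}, E[X^n] ~ {estimate:.4f}")
```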

The essential difference between this and other well-known moment problems is that it is posed on a bounded interval, whereas the Stieltjes moment problem considers a half-line [0, ∞) and the Hamburger moment problem considers the whole line (−∞, ∞). The Stieltjes and Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem), whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem). In the indeterminate case there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. When the moment problem is indeterminate, the set of polynomials may or may not be dense in the associated Hilbert spaces, depending on whether the measure is extremal or not. But in the determinate case, the set of polynomials is dense in the associated Hilbert space.
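A classical illustration of the indeterminate case (recalled here for context; it is not part of this article) is the lognormal distribution on the half-line: for every ε with −1 ≤ ε ≤ 1, the densities

\[
  f_\varepsilon(x) = \frac{1}{x\sqrt{2\pi}}\, e^{-(\ln x)^2/2}\,\bigl(1 + \varepsilon \sin(2\pi \ln x)\bigr), \qquad x > 0,
\]

all have the same moments

\[
  \int_0^\infty x^n f_\varepsilon(x)\, dx = e^{n^2/2}, \qquad n = 0, 1, 2, \ldots,
\]

since the perturbation integrates to zero against every power x^n. They therefore form an infinite convex family of solutions of one and the same moment problem on [0, ∞).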

Completely monotonic sequences

In 1921, Hausdorff showed that (m0, m1, m2, ...) is such a moment sequence if and only if the sequence is completely monotonic, that is, its difference sequences satisfy the inequality

\[
  (-1)^k (\Delta^k m)_n \ge 0
\]

for all n, k ≥ 0. Here, Δ is the difference operator given by

\[
  (\Delta m)_n = m_{n+1} - m_n .
\]

The necessity of this condition is easily seen by the identity

\[
  (-1)^k (\Delta^k m)_n = \int_0^1 x^n (1-x)^k \, d\mu(x),
\]

which is non-negative since it is the integral of a non-negative function. For example, it is necessary to have

\[
  \Delta^4 m_6 = m_6 - 4m_7 + 6m_8 - 4m_9 + m_{10} = \int_0^1 x^6 (1-x)^4 \, d\mu(x) \ge 0 .
\]
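Since only finitely many of these conditions can ever be checked numerically, the following Python sketch (an illustration added here, with the uniform-measure moments m_n = 1/(n + 1) as an assumed test sequence) verifies the inequalities for a small range of n and k:

```python
# Sketch: check (-1)^k (Delta^k m)_n >= 0 for small n, k on a finite prefix of a
# candidate moment sequence.  The test sequence m_n = 1/(n + 1) (moments of the
# uniform measure on [0, 1]) is an assumed example.
from math import comb

def signed_difference(m, n, k):
    """Return (-1)^k (Delta^k m)_n, where (Delta m)_n = m_{n+1} - m_n."""
    # Expanding the iterated forward difference gives
    # (-1)^k (Delta^k m)_n = sum_{j=0}^{k} (-1)^j C(k, j) m_{n+j}.
    return sum((-1) ** j * comb(k, j) * m[n + j] for j in range(k + 1))

m = [1.0 / (n + 1) for n in range(20)]        # candidate moment sequence (prefix)
ok = all(signed_difference(m, n, k) >= 0
         for k in range(10) for n in range(10))
print("completely monotonic on the tested range:", ok)
```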

Related Research Articles

In mathematics, specifically in measure theory, a Borel measure on a topological space is a measure that is defined on all open sets. Some authors require additional restrictions on the measure, as described below.

In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum.

In calculus and real analysis, absolute continuity is a smoothness property of functions that is stronger than continuity and uniform continuity. The notion of absolute continuity allows one to obtain generalizations of the relationship between the two central operations of calculus, differentiation and integration. This relationship is commonly characterized in the framework of Riemann integration, but with absolute continuity it may be formulated in terms of Lebesgue integration. For real-valued functions on the real line, two interrelated notions appear: absolute continuity of functions and absolute continuity of measures. These two notions are generalized in different directions. The usual derivative of a function is related to the Radon–Nikodym derivative, or density, of a measure. We have the following chains of inclusions for functions over a compact subset of the real line: continuously differentiable ⊆ Lipschitz continuous ⊆ absolutely continuous ⊆ of bounded variation ⊆ differentiable almost everywhere, and absolutely continuous ⊆ uniformly continuous = continuous.

In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.
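As a small illustration (the density f(x) = 2x on [0, 1] is an assumed example, not taken from the text), the zeroth, first and second moments can be computed by numerical quadrature:

```python
# Assumed example density f(x) = 2x on [0, 1]: zeroth moment (total mass),
# first moment (mean) and second central moment (variance) via quadrature.
from scipy.integrate import quad

f = lambda x: 2.0 * x
mass = quad(f, 0.0, 1.0)[0]                        # zeroth moment      -> 1
mean = quad(lambda x: x * f(x), 0.0, 1.0)[0]       # first moment       -> 2/3
second = quad(lambda x: x**2 * f(x), 0.0, 1.0)[0]  # second raw moment  -> 1/2
variance = second - mean**2                        # second central moment -> 1/18
print(mass, mean, variance)
```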

In mathematics, a moment problem arises as the result of trying to invert the mapping that takes a measure μ to the sequence of moments m_n = ∫ x^n dμ(x), n = 0, 1, 2, ....

In mathematics, the Gauss–Kuzmin–Wirsing operator is the transfer operator of the Gauss map that takes a positive number to the fractional part of its reciprocal. It is named after Carl Gauss, Rodion Kuzmin, and Eduard Wirsing. It occurs in the study of continued fractions; it is also related to the Riemann zeta function.

In general relativity, the Gibbons–Hawking–York boundary term is a term that needs to be added to the Einstein–Hilbert action when the underlying spacetime manifold has a boundary.

In mathematics, the Hamburger moment problem, named after Hans Ludwig Hamburger, is formulated as follows: given a sequence (m0, m1, m2, ...), does there exist a positive Borel measure μ (for instance, the measure determined by the cumulative distribution function of a random variable) on the real line such that m_n = ∫ x^n dμ(x) for all n = 0, 1, 2, ...?

In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (m0, m1, m2, ...) to be of the form m_n = ∫ x^n dμ(x) for some measure μ on the half-line [0, ∞).

In real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line [0, ∞) that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value.
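For instance (an example supplied here for illustration), the totally monotone function 1/(1 + t) on [0, ∞) is such a mixture, with exponential weight e^(−s) ds:

\[
  \frac{1}{1+t} = \int_0^\infty e^{-ts}\, e^{-s}\, ds, \qquad t \ge 0 .
\]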

Stochastic dominance is a partial order between random variables. It is a form of stochastic ordering. The concept arises in decision theory and decision analysis in situations where one gamble can be ranked as superior to another gamble for a broad class of decision-makers. It is based on shared preferences regarding sets of possible outcomes and their associated probabilities. Only limited knowledge of preferences is required for determining dominance. Risk aversion is a factor only in second order stochastic dominance.

In mathematical analysis, the Chebyshev–Markov–Stieltjes inequalities are inequalities related to the problem of moments that were formulated in the 1880s by Pafnuty Chebyshev and proved independently by Andrey Markov and by Thomas Jan Stieltjes. Informally, they provide sharp bounds on a measure from above and from below in terms of its first moments.

In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure satisfies Carleman's condition, there is no other measure having the same moments as μ. The condition was discovered by Torsten Carleman in 1922.
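For reference (the statement is recalled here, not contained in the excerpt above), for the Hamburger problem on (−∞, ∞) the condition requires divergence of the series built from the even moments:

\[
  \sum_{n=1}^{\infty} m_{2n}^{-1/(2n)} = \infty .
\]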

In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.

In mathematics, the concept of a generalised metric is a generalisation of that of a metric, in which the distance is not a real number but taken from an arbitrary ordered field.

In mathematical physics, the Belinfante–Rosenfeld tensor is a modification of the energy–momentum tensor that is constructed from the canonical energy–momentum tensor and the spin current so as to be symmetric yet still conserved.

In mathematical analysis, Krein's condition provides a necessary and sufficient condition for exponential sums to be dense in a weighted L² space on the real line.

In mathematics, in the field of complex analysis, a Nevanlinna function is a complex function which is an analytic function on the open upper half-plane and has non-negative imaginary part. A Nevanlinna function maps the upper half-plane to itself or to a real constant, but is not necessarily injective or surjective. Functions with this property are sometimes also known as Herglotz, Pick or R functions.

In functional analysis, double operator integrals (DOI) are integrals of the form

In mathematics, the notions of an absolutely monotonic function and a completely monotonic function are two very closely related concepts. Both imply very strong monotonicity properties, and both types of functions have derivatives of all orders. An absolutely monotonic function and all of its derivatives are non-negative on its domain of definition, so the function and its derivatives are all monotonically increasing there. A completely monotonic function and its derivatives are alternately non-negative and non-positive on its domain of definition, so the function and its derivatives are alternately monotonically decreasing and increasing. Such functions were first studied by Sergei Bernstein in 1914, and the terminology is also due to him. There are several related notions, such as almost completely monotonic, logarithmically completely monotonic, strongly logarithmically completely monotonic, strongly completely monotonic, and almost strongly completely monotonic functions. Another related concept is that of a completely/absolutely monotonic sequence; this notion was introduced by Hausdorff in 1921.
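A simple example connecting these notions to the Hausdorff moment problem (chosen here for illustration): f(x) = e^(−x) is completely monotonic, and sampling it at the non-negative integers yields a completely monotonic sequence, namely the moment sequence of the point mass at 1/e:

\[
  (-1)^n f^{(n)}(x) = e^{-x} \ge 0, \qquad
  m_n = e^{-n} = \int_0^1 x^n \, d\delta_{1/e}(x), \qquad
  (-1)^k (\Delta^k m)_n = e^{-n}\bigl(1 - e^{-1}\bigr)^k \ge 0 .
\]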
