Absolutely and completely monotonic functions and sequences

In mathematics, the notions of an absolutely monotonic function and a completely monotonic function are two very closely related concepts. Both imply very strong monotonicity properties, and both types of functions have derivatives of all orders. An absolutely monotonic function, together with its derivatives of all orders, is non-negative throughout its domain of definition, which implies that the function and its derivatives are all monotonically increasing there. A completely monotonic function and its derivatives are alternately non-negative and non-positive in the domain of definition, so that the function and its derivatives are alternately monotonically decreasing and monotonically increasing. Such functions were first studied by S. Bernstein in 1914, and the terminology is also due to him. [1] [2] [3] There are several other related notions, such as almost completely monotonic, logarithmically completely monotonic, strongly logarithmically completely monotonic, strongly completely monotonic, and almost strongly completely monotonic functions. [4] [5] Another related concept is that of a completely/absolutely monotonic sequence, a notion introduced by Hausdorff in 1921.

The notions of completely and absolutely monotone function/sequence play an important role in several areas of mathematics. For example, in classical analysis they occur in the proof of the positivity of integrals involving Bessel functions or the positivity of Cesàro means of certain Jacobi series. [6] Such functions occur in other areas of mathematics such as probability theory, numerical analysis, and elasticity. [7]

Definitions

Functions

A real-valued function $f$ defined over an interval $I$ in the real line is called an absolutely monotonic function if it has derivatives of all orders and $f^{(n)}(x) \ge 0$ for all $x$ in $I$ and all $n = 0, 1, 2, \ldots$ [1] The function $f$ is called a completely monotonic function if $(-1)^n f^{(n)}(x) \ge 0$ for all $x$ in $I$ and all $n = 0, 1, 2, \ldots$ [1]

The two notions are mutually related. The function $f(x)$ is completely monotonic if and only if $f(-x)$ is absolutely monotonic on $-I$, where $-I$ denotes the interval obtained by reflecting $I$ with respect to the origin. (Thus, if $I$ is the interval $(a, b)$, then $-I$ is the interval $(-b, -a)$.)
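For instance, the function $f(x) = e^{-x}$ is completely monotonic on $[0, \infty)$, while its reflection $f(-x) = e^{x}$ is absolutely monotonic on $(-\infty, 0]$:

```latex
% complete monotonicity of e^{-x}:
(-1)^n \frac{d^n}{dx^n} e^{-x} = (-1)^n \, (-1)^n e^{-x} = e^{-x} \ge 0,
\qquad n = 0, 1, 2, \ldots
% absolute monotonicity of the reflected function e^{x}:
\frac{d^n}{dx^n} e^{x} = e^{x} \ge 0 .
```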

In applications, the interval on the real line that is usually considered is the closed-open right half of the real line, that is, the interval $[0, \infty)$.
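As an illustrative numerical sketch (not part of the original treatment), the alternating sign pattern in the definition can be checked with forward differences, since for a completely monotonic function every difference $(-1)^n \Delta_h^n f(x)$ is non-negative; the standard completely monotonic example $f(x) = 1/(1+x)$ on $[0, \infty)$ is used below.

```python
# Check the completely monotonic sign pattern (-1)^n f^(n)(x) >= 0
# for f(x) = 1/(1+x) on [0, oo) via forward differences:
# Delta_h^n f(x) = sum_k (-1)^(n-k) C(n,k) f(x + k h) ~ h^n f^(n)(x).
from math import comb

def forward_difference(f, x, n, h=0.1):
    """n-th forward difference of f at x with step h."""
    return sum((-1) ** (n - k) * comb(n, k) * f(x + k * h)
               for k in range(n + 1))

f = lambda x: 1.0 / (1.0 + x)

ok = all((-1) ** n * forward_difference(f, x, n) >= 0.0
         for n in range(5)
         for x in (0.0, 0.5, 1.0, 2.0, 5.0))
print(ok)  # True: the signs alternate exactly as the definition requires
```

For a completely monotonic function these differences are non-negative exactly (not just approximately), since each difference of a mixture of decaying exponentials keeps the alternating sign.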

Examples

The following functions are absolutely monotonic in the specified regions. [8]

  1. $f(x) = c$, where $c$ is a non-negative constant, in the region $-\infty < x < \infty$
  2. $f(x) = \sum_{n=0}^{\infty} a_n x^n$, where $a_n \ge 0$ for all $n$, in the region $0 \le x < R$, where $R$ is the radius of convergence of the series
  3. $f(x) = e^x$ in the region $-\infty < x < \infty$
  4. $f(x) = -\log(-x)$ in the region $-1 \le x < 0$
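One instructive case of this kind is $f(x) = -\log(-x)$ on $-1 \le x < 0$, whose absolute monotonicity can be checked by direct differentiation:

```latex
f(x) = -\log(-x), \qquad
f^{(n)}(x) = \frac{(n-1)!}{(-x)^{n}} > 0
\quad (x < 0,\; n \ge 1),
% while f itself satisfies f(x) = -\log(-x) \ge 0 exactly when
% -x \le 1, which gives the region -1 \le x < 0.
```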

Sequences

A sequence $\{\mu_n\}_{n=0}^{\infty}$ is called an absolutely monotonic sequence if its elements are non-negative and its successive differences are all non-negative, that is, if

$$\Delta^k \mu_n \ge 0, \qquad n, k = 0, 1, 2, \ldots,$$

where $\Delta^0 \mu_n = \mu_n$ and $\Delta^{k+1} \mu_n = \Delta^{k} \mu_{n+1} - \Delta^{k} \mu_n$.

A sequence $\{\mu_n\}_{n=0}^{\infty}$ is called a completely monotonic sequence if its elements are non-negative and its successive differences are alternately non-positive and non-negative, [9] that is, if

$$(-1)^k \Delta^k \mu_n \ge 0, \qquad n, k = 0, 1, 2, \ldots$$

Examples

The sequences $\{1/(n+1)\}_{n=0}^{\infty}$ and $\{r^n\}_{n=0}^{\infty}$ for $0 < r < 1$ are completely monotonic sequences.
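For example, complete monotonicity of the sequence $\mu_n = 1/(n+1)$ can be verified exactly with rational arithmetic; the computation below is a small sketch reflecting the identity $(-1)^k \Delta^k \mu_n = \int_0^1 t^n (1-t)^k \, dt > 0$.

```python
# Verify (-1)^k Delta^k mu_n >= 0 exactly for mu_n = 1/(n+1),
# using exact rational arithmetic to avoid rounding issues.
from fractions import Fraction
from math import comb

def forward_difference(mu, n, k):
    """k-th forward difference Delta^k mu_n of the sequence mu."""
    return sum((-1) ** (k - j) * comb(k, j) * mu(n + j)
               for j in range(k + 1))

mu = lambda n: Fraction(1, n + 1)

ok = all((-1) ** k * forward_difference(mu, n, k) >= 0
         for n in range(8) for k in range(8))
print(ok)  # True
```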

Some important properties

Both the extensions and applications of the theory of absolutely monotonic functions derive from two fundamental theorems. Bernstein's theorem states that a function $f$ is completely monotonic on $[0, \infty)$ if and only if it is the Laplace transform of a non-negative measure, that is,

$$f(x) = \int_0^{\infty} e^{-xt} \, d\alpha(t),$$

where $\alpha$ is non-decreasing and bounded on $[0, \infty)$. Similarly, by a theorem of Hausdorff, a sequence $\{\mu_n\}_{n=0}^{\infty}$ is completely monotonic if and only if there exists a non-decreasing bounded function $\alpha$ on $[0, 1]$ such that

$$\mu_n = \int_0^1 t^n \, d\alpha(t), \qquad n = 0, 1, 2, \ldots$$

The determination of this function $\alpha$ from the sequence of moments $\{\mu_n\}$ is referred to as the Hausdorff moment problem.
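By Bernstein's theorem, a completely monotonic function on $[0, \infty)$ is the Laplace transform of a non-negative measure. As a concrete sketch, for the measure $d\alpha(t) = e^{-t}\,dt$ the transform is $1/(x+1)$, which simple midpoint quadrature confirms:

```python
# Bernstein's theorem: a completely monotonic function on [0, oo) is
# f(x) = \int_0^oo e^{-x t} d(alpha)(t) for a non-negative measure.
# Here d(alpha)(t) = e^{-t} dt, whose transform is 1/(x + 1).
import math

def laplace_transform(x, upper=60.0, steps=200_000):
    """Midpoint-rule approximation of int_0^upper e^{-x t} e^{-t} dt."""
    h = upper / steps
    return h * sum(math.exp(-(x + 1.0) * (j + 0.5) * h) for j in range(steps))

max_err = max(abs(laplace_transform(x) - 1.0 / (1.0 + x))
              for x in (0.0, 0.5, 2.0))
print(max_err < 1e-4)  # True: the quadrature reproduces 1/(1+x)
```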

References

  1. "Absolutely monotonic function". Encyclopedia of Mathematics. Retrieved 28 December 2023.
  2. S. Bernstein (1914). "Sur la définition et les propriétés des fonctions analytiques d'une variable réelle". Mathematische Annalen. 75: 449–468.
  3. S. Bernstein (1928). "Sur les fonctions absolument monotones". Acta Mathematica. 52: 1–66.
  4. Senlin Guo (2017). "Some Properties of Functions Related to Completely Monotonic Functions" (PDF). Filomat. 31 (2): 247–254. Retrieved 29 December 2023.
  5. Senlin Guo, Andrea Laforgia, Necdet Batir and Qiu-Ming Luo (2014). "Completely Monotonic and Related Functions: Their Applications" (PDF). Journal of Applied Mathematics. 2014: 1–3. Retrieved 28 December 2023.
  6. R. Askey (1973). "Summability of Jacobi series". Transactions of the American Mathematical Society. 179: 71–84.
  7. William Feller (1971). An Introduction to Probability Theory and Its Applications, Vol. 2 (3 ed.). New York: Wiley.
  8. David Vernon Widder (1946). The Laplace Transform. Princeton University Press. pp. 142–143.
  9. David Vernon Widder (1946). The Laplace Transform. Princeton University Press. p. 101.