Gautschi's inequality

In real analysis, a branch of mathematics, Gautschi's inequality is an inequality for ratios of gamma functions. It is named after Walter Gautschi.

Statement

Let $x$ be a positive real number, and let $s \in (0, 1)$. Then, [1]

$$x^{1-s} < \frac{\Gamma(x+1)}{\Gamma(x+s)} < (x+1)^{1-s}.$$
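
For a concrete illustration (values rounded), take $x = 2$ and $s = \tfrac{1}{2}$, so that $\Gamma(x+1) = 2$ and $\Gamma(x+s) = \tfrac{3}{4}\sqrt{\pi}$:

$$\sqrt{2} \approx 1.414 \;<\; \frac{\Gamma(3)}{\Gamma(5/2)} = \frac{2}{\tfrac{3}{4}\sqrt{\pi}} \approx 1.505 \;<\; \sqrt{3} \approx 1.732.$$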

History

In 1948, Wendel proved the inequalities

$$\left(\frac{x}{x+s}\right)^{1-s} \le \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \le 1$$

for $0 < s < 1$ and $x > 0$. [2] He used this to determine the asymptotic behavior of a ratio of gamma functions. Rearranged into the form of the statement above, Wendel's inequalities give the same lower bound $x^{1-s}$ and the stronger upper bound $(x+s)^{1-s}$.
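
The rearrangement uses only the recurrence $\Gamma(x+1) = x\,\Gamma(x)$:

$$\frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \le 1 \;\Longleftrightarrow\; x^{1-s} \le \frac{\Gamma(x+1)}{\Gamma(x+s)}, \qquad \left(\frac{x}{x+s}\right)^{1-s} \le \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} \;\Longleftrightarrow\; \frac{\Gamma(x+1)}{\Gamma(x+s)} \le (x+s)^{1-s}.$$

Since $(x+s)^{1-s} < (x+1)^{1-s}$, Wendel's bounds imply those in the statement above.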

In 1959, Gautschi independently proved two inequalities for ratios of gamma functions. His lower bounds were identical to Wendel's. One of his upper bounds was the one given in the statement above, while the other one was sometimes stronger and sometimes weaker than Wendel's.

Consequences

An immediate consequence is the following description of the asymptotic behavior of ratios of gamma functions:

$$\lim_{x \to \infty} \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} = 1.$$
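
One way to see this is to rewrite Gautschi's inequality using $\Gamma(x+1) = x\,\Gamma(x)$ and take reciprocals, which gives

$$\left(\frac{x}{x+1}\right)^{1-s} < \frac{\Gamma(x+s)}{x^{s}\,\Gamma(x)} < 1,$$

and the left-hand side tends to $1$ as $x \to \infty$.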

Proofs

There are several known proofs of Gautschi's inequality. One simple proof is based on the strict logarithmic convexity of Euler's gamma function. By definition, this means that for every $u$ and $v$ with $u \ne v$ and every $t \in (0, 1)$, we have

$$\Gamma\bigl(tu + (1-t)v\bigr) < \Gamma(u)^{t}\,\Gamma(v)^{1-t}.$$

Apply this inequality with $u = x$, $v = x + 1$, and $t = 1 - s$. Also apply it with $u = x + s$, $v = x + s + 1$, and $t = s$. The resulting inequalities are:

$$\Gamma(x+s) < \Gamma(x)^{1-s}\,\Gamma(x+1)^{s} = x^{s-1}\,\Gamma(x+1),$$
$$\Gamma(x+1) < \Gamma(x+s)^{s}\,\Gamma(x+s+1)^{1-s} = (x+s)^{1-s}\,\Gamma(x+s).$$

Rearranging the first of these gives the lower bound, while rearranging the second and applying the trivial estimate $x + s < x + 1$ gives the upper bound.
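
Explicitly, the two rearrangements read

$$x^{1-s} < \frac{\Gamma(x+1)}{\Gamma(x+s)} \qquad \text{and} \qquad \frac{\Gamma(x+1)}{\Gamma(x+s)} < (x+s)^{1-s} < (x+1)^{1-s},$$

which together are Gautschi's inequality.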

A survey of inequalities for ratios of gamma functions was written by Qi. [3]

The proof by logarithmic convexity gives the stronger upper bound

$$\frac{\Gamma(x+1)}{\Gamma(x+s)} < (x+s)^{1-s}.$$

Gautschi's original paper proved a different, stronger upper bound,

$$\frac{\Gamma(x+1)}{\Gamma(x+s)} \le \exp\bigl((1-s)\,\psi(x+1)\bigr),$$

where $\psi$ is the digamma function. Neither of these upper bounds is always stronger than the other. [4]
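
For example (values rounded), at $x = 2$ and $s = \tfrac{1}{2}$ the two upper bounds are $(x+s)^{1-s} = \sqrt{2.5} \approx 1.5811$ and $e^{(1-s)\psi(x+1)} = e^{\psi(3)/2} \approx 1.5863$, so the first is sharper; at $x = 1$ and $s = 0.9$ they are $1.9^{0.1} \approx 1.0663$ and $e^{0.1\,\psi(2)} \approx 1.0432$, so the second is sharper.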

Kershaw proved two tighter inequalities. Again assuming that $x > 0$ and $0 < s < 1$, [5]

$$\left(x + \frac{s}{2}\right)^{1-s} < \frac{\Gamma(x+1)}{\Gamma(x+s)} < \left(x - \frac{1}{2} + \sqrt{s + \frac{1}{4}}\right)^{1-s},$$

$$\exp\!\left((1-s)\,\psi\!\left(x + \sqrt{s}\right)\right) < \frac{\Gamma(x+1)}{\Gamma(x+s)} < \exp\!\left((1-s)\,\psi\!\left(x + \frac{s+1}{2}\right)\right).$$
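
As a numerical check (values rounded), at $x = 2$ and $s = \tfrac{1}{2}$ the first pair of bounds gives $2.25^{1/2} = 1.5$ and $\bigl(1.5 + \sqrt{0.75}\bigr)^{1/2} \approx 1.5382$, which bracket $\Gamma(3)/\Gamma(5/2) \approx 1.5045$ more tightly than the bounds $\sqrt{2} \approx 1.4142$ and $\sqrt{3} \approx 1.7321$ of Gautschi's inequality.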

Gautschi's inequality is specific to a quotient of gamma functions evaluated at two real numbers having a small difference. However, there are extensions to other situations. If $x$ and $y$ are positive real numbers with $y > x$, then the concavity of the digamma function $\psi$ leads to the inequality: [6]

$$\exp\!\left(\frac{(y-x)\bigl(\psi(x) + \psi(y)\bigr)}{2}\right) \;\le\; \frac{\Gamma(y)}{\Gamma(x)} \;\le\; \exp\!\left((y-x)\,\psi\!\left(\frac{x+y}{2}\right)\right)$$
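
One way to obtain bounds of this shape is to write the logarithm of the quotient as an integral of $\psi$ and apply the trapezoidal and midpoint estimates, which bound the integral of a concave function from below and above respectively:

$$\ln\frac{\Gamma(y)}{\Gamma(x)} = \int_x^y \psi(t)\,dt, \qquad \frac{(y-x)\bigl(\psi(x)+\psi(y)\bigr)}{2} \;\le\; \int_x^y \psi(t)\,dt \;\le\; (y-x)\,\psi\!\left(\frac{x+y}{2}\right).$$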

Applied to the arguments $x + s$ and $x + 1$, so that the difference is $1 - s$, this leads to the estimates

$$\exp\!\left(\frac{(1-s)\bigl(\psi(x+s) + \psi(x+1)\bigr)}{2}\right) \;\le\; \frac{\Gamma(x+1)}{\Gamma(x+s)} \;\le\; \exp\!\left((1-s)\,\psi\!\left(x + \frac{s+1}{2}\right)\right)$$

A related but weaker inequality can be easily derived from the mean value theorem and the monotonicity of $\psi$. [7]
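
Concretely, the mean value theorem applied to $\ln\Gamma$ gives $\ln\bigl(\Gamma(y)/\Gamma(x)\bigr) = (y-x)\,\psi(\xi)$ for some $\xi \in (x, y)$, and since $\psi$ is increasing,

$$\exp\bigl((y-x)\,\psi(x)\bigr) \;\le\; \frac{\Gamma(y)}{\Gamma(x)} \;\le\; \exp\bigl((y-x)\,\psi(y)\bigr).$$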

A more explicit inequality valid for a wider class of arguments is due to Kečkić and Vasić, who proved that if $y > x > 1$, then: [8]

$$\frac{y^{y-1}}{x^{x-1}}\,e^{x-y} \;\le\; \frac{\Gamma(y)}{\Gamma(x)} \;\le\; \frac{y^{y-1/2}}{x^{x-1/2}}\,e^{x-y}$$
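
As a numerical check (values rounded), for $x = 2$ and $y = 3$ the bounds read $\tfrac{9}{2}e^{-1} \approx 1.66$ and $\tfrac{3^{5/2}}{2^{3/2}}e^{-1} \approx 2.03$, which bracket $\Gamma(3)/\Gamma(2) = 2$.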

In particular, applying this with the arguments $x + s$ and $x + 1$ (which requires $x + s > 1$), we have:

$$\frac{(x+1)^{x}}{(x+s)^{x+s-1}}\,e^{s-1} \;\le\; \frac{\Gamma(x+1)}{\Gamma(x+s)} \;\le\; \frac{(x+1)^{x+1/2}}{(x+s)^{x+s-1/2}}\,e^{s-1}$$

Guo, Qi, and Srivastava proved a similar-looking inequality, valid for all $y > x > 0$, which again yields bounds on the quotient $\Gamma(x+1)/\Gamma(x+s)$. [9]

References

  1. NIST Digital Library of Mathematical Functions, 5.6.4.
  2. J.G. Wendel, Note on the Gamma function, Amer. Math. Monthly 55 (9) (1948) 563–564.
  3. Feng Qi, Bounds for the Ratio of Two Gamma Functions, Journal of Inequalities and Applications, Volume 2010, doi:10.1155/2010/493058.
  4. Feng Qi, Bounds for the ratio of two Gamma functions, J. Inequal. Appl. (2010) 1–84.
  5. D. Kershaw, Some extensions of W. Gautschi’s inequalities for the gamma function, Math. Comp. 41 (1983) 607–611.
  6. M. Merkle, Conditions for convexity of a derivative and applications to the Gamma and Digamma function, Facta Universitatis (Niš), Ser. Math. Inform. 16 (2001), 13–20.
  7. A. Laforgia, P. Natalini, Exponential, gamma and polygamma functions: Simple proofs of classical and new inequalities, J. Math. Anal. Appl. 407 (2013), 495–504.
  8. J. D. Kečkić and P. M. Vasić, Some inequalities for the gamma function, Publications de l’Institut Mathématique, vol. 11 (25), pp. 107–114, 1971.
  9. S. Guo, F. Qi, and H. M. Srivastava, Necessary and sufficient conditions for two classes of functions to be logarithmically completely monotonic, Integral Transforms and Special Functions, vol. 18, no. 11-12, pp. 819–826, 2007, https://dx.doi.org/10.1080/10652460701528933.