Abel's inequality


In mathematics, Abel's inequality, named after Niels Henrik Abel, supplies a simple bound on the absolute value of the inner product of two vectors in an important special case.


Mathematical description

Let {a1, a2,...} be a sequence of real numbers that is either nonincreasing or nondecreasing, and let {b1, b2,...} be a sequence of real or complex numbers. If {an} is nondecreasing, it holds that

$$\left|\sum_{k=1}^{n} a_k b_k\right| \le B_n \left(|a_n| + a_n - a_1\right),$$


and if {an} is nonincreasing, it holds that

$$\left|\sum_{k=1}^{n} a_k b_k\right| \le B_n \left(|a_n| + a_1 - a_n\right),$$

where

$$B_n = \max\left\{|b_1|,\ |b_1 + b_2|,\ \ldots,\ |b_1 + b_2 + \cdots + b_n|\right\}$$

is the largest of the absolute values of the partial sums of {bk}.

In particular, if the sequence {an} is nonincreasing and nonnegative, it follows that

$$\left|\sum_{k=1}^{n} a_k b_k\right| \le a_1 B_n.$$
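As a quick sanity check of the last bound, the inequality can be verified numerically. The following Python sketch is illustrative only; the sequences a and b are arbitrary choices (not taken from any source) satisfying the hypotheses of the nonincreasing, nonnegative case.

# Numerical check of |sum a_k b_k| <= a_1 * B_n for a nonincreasing,
# nonnegative sequence a and a complex sequence b (illustrative values).
a = [5.0, 3.0, 2.5, 1.0]          # nonincreasing and nonnegative
b = [1 + 2j, -3j, 2.0, -1 + 1j]   # arbitrary complex terms

# Left-hand side: |a_1 b_1 + ... + a_n b_n|
lhs = abs(sum(ak * bk for ak, bk in zip(a, b)))

# B_n: the largest absolute value among the partial sums b_1 + ... + b_k
partial = 0
B_n = 0.0
for bk in b:
    partial += bk
    B_n = max(B_n, abs(partial))

# Right-hand side of the special case: a_1 * B_n
rhs = a[0] * B_n

print(lhs, "<=", rhs, lhs <= rhs)   # the last value should print True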

Relation to Abel's transformation

Abel's inequality follows easily from Abel's transformation, which is the discrete version of integration by parts: If {a1, a2, ...} and {b1, b2, ...} are sequences of real or complex numbers, it holds that

$$\sum_{k=1}^{n} a_k b_k = a_n S_n - \sum_{k=1}^{n-1} S_k \left(a_{k+1} - a_k\right),$$

where $S_k = b_1 + b_2 + \cdots + b_k$ denotes the $k$-th partial sum of {bk}, so that $B_n = \max_{1 \le k \le n} |S_k|$ in the notation above.
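The step from the transformation to the inequality is a standard triangle-inequality argument (a brief sketch, not spelled out in the text above): bounding each partial sum $|S_k|$ by $B_n$ gives

$$\left|\sum_{k=1}^{n} a_k b_k\right| \le |a_n|\, B_n + B_n \sum_{k=1}^{n-1} |a_{k+1} - a_k|,$$

and when {ak} is monotone the last sum telescopes to $a_n - a_1$ (nondecreasing case) or $a_1 - a_n$ (nonincreasing case), which yields the two bounds stated in the mathematical description.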


Related Research Articles

In mathematics, the absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x, and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.

In probability theory, the expected value of a random variable, intuitively, is the long-run average value of repetitions of the same experiment it represents. For example, the expected value in rolling a six-sided die is 3.5, because the average of all the numbers that come up is 3.5 as the number of rolls approaches infinity. In other words, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.

In mathematics, the Cauchy–Schwarz inequality, also known as the Cauchy–Bunyakovsky–Schwarz inequality, is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability theory, vector algebra and other areas. It is considered to be one of the most important inequalities in all of mathematics.

In mathematics, an infinite series of numbers is said to converge absolutely if the sum of the absolute values of the summands is finite. More precisely, a real or complex series $\sum_{n=0}^{\infty} a_n$ is said to converge absolutely if $\sum_{n=0}^{\infty} |a_n| = L$ for some real number $L$. Similarly, an improper integral of a function, $\int_0^{\infty} f(x)\,dx$, is said to converge absolutely if the integral of the absolute value of the integrand is finite, that is, if $\int_0^{\infty} |f(x)|\,dx = L$.

In the mathematical field of real analysis, the monotone convergence theorem is any of a number of related theorems proving the convergence of monotonic sequences that are also bounded. Informally, the theorems state that if a sequence is increasing and bounded above by a supremum, then the sequence will converge to the supremum; in the same way, if a sequence is decreasing and is bounded below by an infimum, it will converge to the infimum.

In probability theory, Chebyshev's inequality guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than 1/k² of the distribution's values can be more than k standard deviations away from the mean. In statistics, the rule is often called Chebyshev's theorem. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

Jensen's inequality

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proven by Jensen in 1906. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.

In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers.

In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x1, ..., xN has a low discrepancy.

In mathematics, the binomial series is the Maclaurin series for the function $f$ given by $f(x) = (1 + x)^{\alpha}$, where $\alpha$ is an arbitrary complex number. Explicitly,

$$(1 + x)^{\alpha} = \sum_{k=0}^{\infty} \binom{\alpha}{k} x^{k},$$

where $\binom{\alpha}{k}$ denotes the generalized binomial coefficient.

In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963.

In mathematics, especially functional analysis, Bessel's inequality is a statement about the coefficients of an element in a Hilbert space with respect to an orthonormal sequence.

In mathematics, a matrix norm is a vector norm in a vector space whose elements (vectors) are matrices.

In mathematics, a univariate polynomial of degree n with real or complex coefficients has n complex roots, if counted with their multiplicities. They form a set of n points in the complex plane. This article concerns the geometry of these points, that is, the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial.

Carleman's inequality is an inequality in mathematics, named after Torsten Carleman, who proved it in 1923 and used it to prove the Denjoy–Carleman theorem on quasi-analytic classes.

In mathematics, the Khintchine inequality, named after Aleksandr Khinchin and spelled in multiple ways in the Latin alphabet, is a theorem from probability, and is also frequently used in analysis. Heuristically, it says that if we pick $N$ complex numbers $x_1, \dots, x_N$, and add them together each multiplied by a random sign $\pm 1$, then the expected value of the sum's modulus, or the modulus it will be closest to on average, will be not too far off from $\sqrt{|x_1|^{2} + \cdots + |x_N|^{2}}$.

In analysis, a branch of mathematics, Hilbert's inequality states that

$$\left|\sum_{r \neq s} \frac{u_r \overline{u_s}}{r - s}\right| \le \pi \sum_{r} |u_r|^{2}$$

for any sequence $u_1, u_2, \ldots$ of complex numbers.

In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value. The law of large numbers of classical probability theory states that sums of independent random variables are, under very mild conditions, close to their expectation with a large probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables.

For certain applications in linear algebra, it is useful to know properties of the probability distribution of the largest eigenvalue of a finite sum of random matrices. Suppose $\{X_k\}$ is a finite sequence of random matrices. Analogous to the well-known Chernoff bound for sums of scalars, a bound on the following is sought for a given parameter $t$:

$$\Pr\left\{\lambda_{\max}\left(\sum_k X_k\right) \ge t\right\}.$$
