Popoviciu's inequality

In convex analysis, Popoviciu's inequality is an inequality about convex functions. It is similar to Jensen's inequality and was found in 1965 by the Romanian mathematician Tiberiu Popoviciu.[1][2]

Formulation

Let f be a function from an interval $I \subseteq \mathbb{R}$ to $\mathbb{R}$. If f is convex, then for any three points x, y, z in I,

$$\frac{f(x)+f(y)+f(z)}{3} + f\!\left(\frac{x+y+z}{3}\right) \;\ge\; \frac{2}{3}\left[\, f\!\left(\frac{x+y}{2}\right) + f\!\left(\frac{y+z}{2}\right) + f\!\left(\frac{z+x}{2}\right) \right].$$

If a function f is continuous, then it is convex if and only if the above inequality holds for all x, y, z from I. When f is strictly convex, the inequality is strict except for x = y = z. [3]
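
Since the inequality holds for arbitrary points of I, it can be spot-checked numerically. The following Python sketch is our own illustration, not part of the article; the helper name `popoviciu_gap` is invented, and the test uses the convex function $f(x) = x^2$ at randomly sampled points.

```python
import random

def popoviciu_gap(f, x, y, z):
    """Left side minus right side of Popoviciu's inequality:

        (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
            >= (2/3) * [f((x+y)/2) + f((y+z)/2) + f((z+x)/2)]

    The gap is nonnegative whenever f is convex.
    """
    lhs = (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
    rhs = (2 / 3) * (f((x + y) / 2) + f((y + z) / 2) + f((z + x) / 2))
    return lhs - rhs

f = lambda t: t * t  # a convex function on all of R
for _ in range(10_000):
    x, y, z = (random.uniform(-10.0, 10.0) for _ in range(3))
    assert popoviciu_gap(f, x, y, z) >= -1e-9  # small tolerance for rounding
```

For a strictly convex f such as this one, the gap is strictly positive unless x = y = z, matching the equality condition above.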

Generalizations

Popoviciu's inequality can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time: [4]

Let f be a continuous function from an interval $I \subseteq \mathbb{R}$ to $\mathbb{R}$. Then f is convex if and only if, for any integers n and k with $n \ge 3$ and $2 \le k \le n-1$, and any n points $x_1, \dots, x_n$ from I,

$$\frac{1}{k}\binom{n-2}{k-2}\left(\frac{n-k}{k-1}\sum_{i=1}^{n} f(x_i) \;+\; n\, f\!\left(\frac{x_1 + \cdots + x_n}{n}\right)\right) \;\ge\; \sum_{1 \le i_1 < \cdots < i_k \le n} f\!\left(\frac{x_{i_1} + \cdots + x_{i_k}}{k}\right).$$

[5][6][7][8]
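
As a sanity check of the general statement, the brute-force Python sketch below (again our own, under the assumption that the displayed form of the inequality is the one intended by the sources) enumerates all k-element subsets with `itertools.combinations` and compares the two sides for a convex function.

```python
import itertools
import math
import random

def generalized_popoviciu_holds(f, xs, k, tol=1e-9):
    """Check the n-point, k-at-a-time generalization for the points xs:

        (1/k) * C(n-2, k-2) * [ (n-k)/(k-1) * sum_i f(x_i) + n * f(mean) ]
            >= sum over all k-subsets of f(subset mean)
    """
    n = len(xs)
    assert n >= 3 and 2 <= k <= n - 1
    lhs = (math.comb(n - 2, k - 2) / k) * (
        (n - k) / (k - 1) * sum(f(x) for x in xs)
        + n * f(sum(xs) / n)
    )
    rhs = sum(f(sum(sub) / k) for sub in itertools.combinations(xs, k))
    return lhs - rhs >= -tol

f = math.exp  # a convex function
for _ in range(1_000):
    xs = [random.uniform(-3.0, 3.0) for _ in range(6)]
    assert all(generalized_popoviciu_holds(f, xs, k) for k in range(2, 6))
```

Setting n = 3 and k = 2 recovers the three-point inequality of the previous section, since $\binom{1}{0} = 1$ and $(n-k)/(k-1) = 1$.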

Weighted inequality

Popoviciu's inequality can also be generalized to a weighted inequality. [9]

Let f be a convex function from an interval $I \subseteq \mathbb{R}$ to $\mathbb{R}$. Let $x_1, x_2, x_3$ be three points from I, and let $w_1, w_2, w_3$ be three nonnegative reals such that $w_2 + w_3 > 0$, $w_3 + w_1 > 0$, and $w_1 + w_2 > 0$. Then,

$$w_1 f(x_1) + w_2 f(x_2) + w_3 f(x_3) + (w_1 + w_2 + w_3)\, f\!\left(\frac{w_1 x_1 + w_2 x_2 + w_3 x_3}{w_1 + w_2 + w_3}\right) \;\ge\; (w_1 + w_2)\, f\!\left(\frac{w_1 x_1 + w_2 x_2}{w_1 + w_2}\right) + (w_2 + w_3)\, f\!\left(\frac{w_2 x_2 + w_3 x_3}{w_2 + w_3}\right) + (w_3 + w_1)\, f\!\left(\frac{w_3 x_3 + w_1 x_1}{w_3 + w_1}\right).$$
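
The sketch below numerically spot-checks this weighted form. It is our own illustration under stated assumptions, not code from the cited source: the name `weighted_popoviciu_gap` is invented, and the test draws strictly positive weights so every pairwise weight sum is positive.

```python
import random

def weighted_popoviciu_gap(f, x, w):
    """Left side minus right side of the weighted Popoviciu inequality.

    Requires every pairwise weight sum w[i] + w[j] to be positive.
    The gap is nonnegative whenever f is convex.
    """
    x1, x2, x3 = x
    w1, w2, w3 = w
    s = w1 + w2 + w3
    lhs = (w1 * f(x1) + w2 * f(x2) + w3 * f(x3)
           + s * f((w1 * x1 + w2 * x2 + w3 * x3) / s))
    rhs = ((w1 + w2) * f((w1 * x1 + w2 * x2) / (w1 + w2))
           + (w2 + w3) * f((w2 * x2 + w3 * x3) / (w2 + w3))
           + (w3 + w1) * f((w3 * x3 + w1 * x1) / (w3 + w1)))
    return lhs - rhs

f = lambda t: abs(t)  # convex but not differentiable at 0
for _ in range(10_000):
    x = [random.uniform(-5.0, 5.0) for _ in range(3)]
    w = [random.uniform(0.1, 1.0) for _ in range(3)]  # strictly positive weights
    assert weighted_popoviciu_gap(f, x, w) >= -1e-9
```

Setting $w_1 = w_2 = w_3 = 1$ recovers the unweighted inequality of the Formulation section (after dividing both sides by 3).
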
Notes

  1. Tiberiu Popoviciu (1965), "Sur certaines inégalités qui caractérisent les fonctions convexes", Analele ştiinţifice Univ. "Al. I. Cuza" Iaşi, Secţia I a Mat., 11: 155–164.
  2. Popoviciu's paper was published in Romanian, but the interested reader can find his results in the review Zbl 0166.06303.
  3. Constantin Niculescu; Lars-Erik Persson (2006), Convex functions and their applications: a contemporary approach, Springer Science & Business Media, p. 12, ISBN 978-0-387-24300-9.
  4. J. E. Pečarić; Frank Proschan; Yung Liang Tong (1992), Convex functions, partial orderings, and statistical applications, Academic Press, p. 171, ISBN 978-0-12-549250-8.
  5. P. M. Vasić; Lj. R. Stanković (1976), "Some inequalities for convex functions", Math. Balkanica, 6: 281–288.
  6. Grinberg, Darij (2008). "Generalizations of Popoviciu's inequality". arXiv:0803.2958v1 [math.FA].
  7. M. Mihai; F.-C. Mitroi-Symeonidis (2016), "New extensions of Popoviciu's inequality", Mediterr. J. Math., vol. 13, no. 5, pp. 3121–3133, arXiv:1507.05304, doi:10.1007/s00009-015-0675-3, ISSN 1660-5446, S2CID 119720352.
  8. M. W. Alomari (2021), "Popoviciu's type inequalities for h-MN-convex functions", e-Journal of Analysis and Applied Mathematics, vol. 2021, no. 1, pp. 48–89, doi:10.2478/ejaam-2021-0005.
  9. Darij Grinberg, "Generalizations of Popoviciu's inequality" (PDF).

Related Research Articles

The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting". Another version of the Hahn–Banach theorem is known as the Hahn–Banach separation theorem or the hyperplane separation theorem, and has numerous uses in convex geometry.

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.

In mathematics, a topological vector space is one of the basic structures investigated in functional analysis. A topological vector space is a vector space that is also a topological space with the property that the vector space operations are continuous functions. Such a topology is called a vector topology, and every topological vector space has a uniform topological structure, allowing a notion of uniform convergence and completeness. Some authors also require that the space be a Hausdorff space. One of the most widely studied categories of TVSs is that of locally convex topological vector spaces. Banach spaces, Hilbert spaces and Sobolev spaces are well-known examples of TVSs.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

Convex function

In mathematics, a real-valued function is called convex if the line segment between any two distinct points on the graph of the function lies above the graph between the two points. Equivalently, a function is convex if its epigraph is a convex set. A twice-differentiable function of a single variable is convex if and only if its second derivative is nonnegative on its entire domain. Well-known examples of convex functions of a single variable include the quadratic function $x^2$ and the exponential function $e^x$. In simple terms, a convex function refers to a function whose graph is shaped like a cup $\cup$, while a concave function's graph is shaped like a cap $\cap$.

In mathematics, the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm.

Jensen's inequality

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.
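
For comparison with the checks above, here is a minimal sketch (our own, with the invented helper name `jensen_gap`) of the finite form of Jensen's inequality: f applied to the average is at most the average of f, for a convex f.

```python
import random

def jensen_gap(f, xs):
    """Average of f over xs minus f of the average; nonnegative for convex f."""
    mean = sum(xs) / len(xs)
    return sum(f(x) for x in xs) / len(xs) - f(mean)

f = lambda t: t ** 4  # a convex function
for _ in range(10_000):
    xs = [random.uniform(-2.0, 2.0) for _ in range(5)]
    assert jensen_gap(f, xs) >= -1e-9
```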

In functional analysis and related areas of mathematics, locally convex topological vector spaces (LCTVS) or locally convex spaces are examples of topological vector spaces (TVS) that generalize normed spaces. They can be defined as topological vector spaces whose topology is generated by translations of balanced, absorbent, convex sets. Alternatively they can be defined as a vector space with a family of seminorms, and a topology can be defined in terms of that family. Although in general such spaces are not necessarily normable, the existence of a convex local base for the zero vector is strong enough for the Hahn–Banach theorem to hold, yielding a sufficiently rich theory of continuous linear functionals.

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

In mathematics, a function f is logarithmically convex or superconvex if $\log \circ f$, the composition of the logarithm with f, is itself a convex function.

In functional analysis and related areas of mathematics an absorbing set in a vector space is a set which can be "inflated" or "scaled up" to eventually always include any given point of the vector space. Alternative terms are radial or absorbent set. Every neighborhood of the origin in every topological vector space is an absorbing subset.

Convex analysis

Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory.

In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory.

In mathematics, the Brunn–Minkowski theorem is an inequality relating the volumes of compact subsets of Euclidean space. The original version of the Brunn–Minkowski theorem applied to convex sets; the generalization to compact nonconvex sets stated here is due to Lazar Lyusternik (1935).

In metric geometry, the metric envelope or tight span of a metric space M is an injective metric space into which M can be embedded. In some sense it consists of all points "between" the points of M, analogous to the convex hull of a point set in a Euclidean space. The tight span is also sometimes known as the injective envelope or hyperconvex hull of M. It has also been called the injective hull, but should not be confused with the injective hull of a module in algebra, a concept with a similar description relative to the category of R-modules rather than metric spaces.

In probability theory and theoretical computer science, McDiarmid's inequality is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot cause too large of a change in the value of the function.

In optimization, a self-concordant function is a function $f : \mathbb{R} \to \mathbb{R}$ for which $|f'''(x)| \le 2\,f''(x)^{3/2}$ for all x in its domain.

In mathematical analysis, Ekeland's variational principle, discovered by Ivar Ekeland, is a theorem that asserts that there exist nearly optimal solutions to some optimization problems.

In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value. The law of large numbers of classical probability theory states that sums of independent random variables are, under very mild conditions, close to their expectation with a large probability. Such sums are the most basic examples of random variables concentrated around their mean. Recent results show that such behavior is shared by other functions of independent random variables.

In mathematics, a submodular set function is a set function whose value, informally, has the property that the difference in the incremental value of the function that a single element makes when added to an input set decreases as the size of the input set increases. Submodular functions have a natural diminishing returns property which makes them suitable for many applications, including approximation algorithms, game theory and electrical networks. Recently, submodular functions have also found immense utility in several real-world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization and many other domains.