Divergence (statistics)

In statistics and information geometry, a divergence is a kind of statistical distance: a binary function which establishes the "distance" from one probability distribution to another on a statistical manifold. Historically the term "divergence" was used informally for various statistical distances, and what are now known as divergences were known by various names (see § History), but today there is a commonly used definition (see § Definition).


Divergences differ from metrics (a more familiar notion of distance) in a number of ways. Firstly, divergences do not need to be symmetric (though a divergence can always be symmetrized), and the asymmetry is an important part of their structure. [1] Accordingly, one often refers asymmetrically to the divergence "of q from p" or "from p to q", in contrast to referring symmetrically to the distance "between p and q". Secondly, divergences generalize squared distance, not linear distance, and thus do not satisfy the triangle inequality (which applies to linear distances), [1] but instead in some cases satisfy a form of the Pythagorean theorem (which applies to squared distances). [2]

The simplest divergence is squared Euclidean distance (SED), and divergences can be viewed as generalizations of SED. The other most important divergence is relative entropy (Kullback–Leibler divergence, KL divergence), which is central to information theory. There are numerous other specific divergences and classes of divergences, notably f-divergences and Bregman divergences (see § Examples).
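As a concrete illustration, both divergences can be computed for a pair of hypothetical discrete distributions; the sketch below (Python with NumPy, with p and q chosen arbitrarily) shows that SED is symmetric while the KL divergence of q from p differs from that of p from q:

```python
import numpy as np

def sed(p, q):
    """Squared Euclidean distance between two points (symmetric)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum((p - q) ** 2))

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions
    with common support (asymmetric in p and q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(sed(p, q), sed(q, p))  # equal: SED is symmetric
print(kl(p, q), kl(q, p))    # unequal: KL is asymmetric
```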


Definition

Given a differentiable manifold [lower-alpha 1] M, a divergence on M is a function D(p, q): M × M → R satisfying: [3] [4]

  1. D(p, q) ≥ 0 for all p, q ∈ M (non-negativity),
  2. D(p, q) = 0 if and only if p = q (identity of indiscernibles),
  3. the quadratic part of the Taylor expansion of D(p, p + dp) defines a Riemannian metric on M.
    • Concretely, for every point in M, given a coordinate chart with coordinate denoted by x, the divergence is infinitesimally expressed as D(x, x + dx) = (1/2) dxᵀ g_x dx + O(|dx|³) for a positive-definite matrix g_x, where the matrix depends on the coordinate chart and the point x. The corresponding inner product g_p on the tangent space T_pM, which is independent of the coordinate chart (but varies by point p), is a Riemannian metric on M.

Conditions 1 and 2 together produce (global) positive definiteness, and are common general conditions for statistical distances; condition 3 is the distinguishing characteristic of divergences. Conditions 1 and 2 imply that D(p, p + dp) has no constant part (since D(p, p) = 0), no linear part, and that g_p has no negative direction (there is no dp for which g_p(dp, dp) < 0; i.e. it is positive semi-definite), as either a linear or negative quadratic part would mean that D is negative for a sufficiently small step in that direction. Condition 3 additionally requires that g_p not only be positive semi-definite (positive or zero), but positive definite (non-zero if dp is non-zero).

The factor of 1/2 is the coefficient of the quadratic term in the Taylor expansion, and means that g agrees with the Riemannian metric that is induced by the divergence, instead of differing by a factor of 2; see § Geometrical properties. Dimensional analysis of condition 3 shows that divergence has the dimension of squared distance. [1]

Informally, a divergence is a globally positive-definite statistical distance that is infinitesimally positive-definite on its diagonal (equivalently, that infinitesimally agrees with a Riemannian metric g on its diagonal).

In statistics, the manifold M is typically a space of probability distributions with common support.
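Condition 3 can be checked numerically for the KL divergence on the probability simplex, where the induced metric is the Fisher metric g = diag(1/p). The sketch below (a hypothetical distribution p and a small perturbation dp that keeps p + dp on the simplex) confirms that D(p, p + dp) agrees with (1/2) dpᵀ g dp up to third-order terms:

```python
import numpy as np

def kl(p, q):
    """KL divergence for discrete distributions with common support."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.3, 0.2])
eps = 1e-4
dp = eps * np.array([1.0, -1.0, 0.0])  # sums to zero, so p + dp stays on the simplex

# Quadratic part (1/2) dp' g dp with the Fisher metric g = diag(1/p)
quadratic = 0.5 * np.sum(dp**2 / p)
print(kl(p, p + dp), quadratic)  # agree up to O(|dp|^3)
```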

The dual divergence D* is defined as

  D*(p, q) = D(q, p).

When necessary to specify the original function D, it may be referred to as the primal divergence.

A divergence can always be symmetrized by averaging it with its dual divergence: [1]

  D_S(p, q) = (1/2)(D(p, q) + D(q, p)).
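Dualization and symmetrization are simple to express as higher-order functions; the sketch below (with hypothetical distributions p and q) symmetrizes the KL divergence, which yields half the Jeffreys divergence:

```python
import numpy as np

def kl(p, q):
    """KL divergence for discrete distributions with common support."""
    return float(np.sum(p * np.log(p / q)))

def dual(D):
    """Dual divergence: D*(p, q) = D(q, p)."""
    return lambda p, q: D(q, p)

def symmetrize(D):
    """Symmetrized divergence: average of D and its dual."""
    return lambda p, q: 0.5 * (D(p, q) + D(q, p))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
sym_kl = symmetrize(kl)
print(dual(kl)(p, q))            # equals kl(q, p)
print(sym_kl(p, q), sym_kl(q, p))  # symmetric in p and q
```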


Notation

Notation for divergences varies significantly between fields, though there are some conventions.

Divergences are generally notated with an uppercase 'D', as in D(p, q), to distinguish them from metric distances, which are notated with a lowercase 'd'. When multiple divergences are in use, they are commonly distinguished with subscripts, as in D_KL for the Kullback–Leibler divergence (KL divergence).

Often a different separator between parameters is used, particularly to emphasize the asymmetry. In information theory, a double bar is commonly used: D(p ∥ q); this is similar to, but distinct from, the notation for conditional probability, P(A | B), and emphasizes interpreting the divergence as a relative measurement, as in relative entropy; this notation is common for the KL divergence. A colon may be used instead, [lower-alpha 2] as D(p : q); this emphasizes the relative information supporting the two distributions.

The notation for parameters varies as well. Uppercase letters such as P, Q interpret the parameters as probability distributions, while lowercase letters such as p, q interpret them geometrically as points in a space, and μ, ν interpret them as measures.

Geometrical properties

Many properties of divergences can be derived if we restrict S to be a statistical manifold, meaning that it can be parametrized with a finite-dimensional coordinate system θ, so that for a distribution pS we can write p = p(θ).

For a pair of points p, q ∈ S with coordinates θ_p and θ_q, denote the partial derivatives of D(p, q) as

  D((∂_i)_p, q) = (∂/∂θ_p^i) D(p, q),
  D((∂_i ∂_j)_p, (∂_k)_q) = (∂/∂θ_p^i)(∂/∂θ_p^j)(∂/∂θ_q^k) D(p, q),   etc.

Now we restrict these functions to the diagonal p = q, and denote [5]

  D[∂_i, ·] : p ↦ D((∂_i)_p, p),
  D[∂_i, ∂_j] : p ↦ D((∂_i)_p, (∂_j)_p).

By definition, the function D(p, q) is minimized at p = q, and therefore

  D[∂_i, ·] = D[·, ∂_i] = 0,
  D[∂_i ∂_j, ·] = D[·, ∂_i ∂_j] = −D[∂_i, ∂_j] ≡ g_ij^(D),

where the matrix g^(D) is positive semi-definite and defines a unique Riemannian metric on the manifold S.

The divergence D(·, ·) also defines a unique torsion-free affine connection ∇^(D) with coefficients

  Γ_ij,k^(D) = −D[∂_i ∂_j, ∂_k],

and the dual to this connection ∇* is generated by the dual divergence D*.

Thus, a divergence D(·, ·) generates on a statistical manifold a unique dualistic structure (g(D), ∇(D), ∇(D*)). The converse is also true: every torsion-free dualistic structure on a statistical manifold is induced from some globally defined divergence function (which however need not be unique). [6]

For example, when D is an f-divergence for some function ƒ(·), then it generates the metric g^(D_f) = c·g and the connection ∇^(D_f) = ∇^(α), where g is the canonical Fisher information metric, ∇^(α) is the α-connection, c = ƒ′′(1), and α = 3 + 2ƒ′′′(1)/ƒ′′(1).
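The relation g^(D_f) = c·g with c = ƒ′′(1) can be checked numerically. The sketch below uses the squared Hellinger generator f(u) = (√u − 1)², for which f′′(1) = 1/2, and confirms that for a small perturbation dp of a hypothetical distribution p, the divergence agrees with (c/2) dpᵀ g dp with g the Fisher metric diag(1/p):

```python
import numpy as np

def f_div(f, p, q):
    """f-divergence D_f(p, q) = sum_x p(x) f(q(x)/p(x)) for discrete p, q."""
    return float(np.sum(p * f(q / p)))

f_hell = lambda u: (np.sqrt(u) - 1.0) ** 2  # squared Hellinger generator
c = 0.5                                     # f''(u) = u**(-1.5)/2, so f''(1) = 1/2

p = np.array([0.5, 0.3, 0.2])
eps = 1e-4
dp = eps * np.array([1.0, -1.0, 0.0])  # stays on the simplex

quad = (c / 2) * np.sum(dp**2 / p)     # (c/2) dp' g dp with g = diag(1/p)
print(f_div(f_hell, p, p + dp), quad)  # agree up to higher-order terms
```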


Examples

The two most important divergences are the relative entropy (Kullback–Leibler divergence, KL divergence), which is central to information theory and statistics, and the squared Euclidean distance (SED). Minimizing these two divergences is the main way that linear inverse problems are solved, via the principle of maximum entropy and least squares, notably in logistic regression and linear regression. [7]

The two most important classes of divergences are the f-divergences and Bregman divergences; however, other types of divergence functions are also encountered in the literature. The only divergence that is both an f-divergence and a Bregman divergence is the Kullback–Leibler divergence; the squared Euclidean divergence is a Bregman divergence (corresponding to the function x²), but not an f-divergence.

f-divergences

This family of divergences is generated through functions f(u), convex on u > 0 and such that f(1) = 0. An f-divergence is then defined as

  D_f(p, q) = ∫ p(x) f( q(x) / p(x) ) dx.

  • Kullback–Leibler divergence: D_KL(p ∥ q) = ∫ p log(p/q) dx
  • squared Hellinger distance: H²(p, q) = 2 ∫ (√p − √q)² dx
  • Jeffreys divergence: ∫ (p − q) log(p/q) dx
  • Chernoff's α-divergence: (4 / (1 − α²)) [ 1 − ∫ p^((1−α)/2) q^((1+α)/2) dx ]
  • exponential divergence: ∫ p (log(p/q))² dx
  • Kagan's divergence: (1/2) ∫ (p − q)² / p dx
  • (α,β)-product divergence: (2 / ((1−α)(1−β))) ∫ ( 1 − (q/p)^((1−α)/2) )( 1 − (q/p)^((1−β)/2) ) p dx
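A generic f-divergence is a one-liner for discrete distributions; the sketch below (with hypothetical p and q) instantiates it for two of the generators above and checks the identity that the Jeffreys divergence is the sum of the two directed KL divergences:

```python
import numpy as np

def f_div(f, p, q):
    """f-divergence D_f(p, q) = sum_x p(x) f(q(x)/p(x)) for discrete p, q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * f(q / p)))

# Generators f(u), convex on u > 0 with f(1) = 0
f_kl = lambda u: -np.log(u)               # gives KL(p || q)
f_jeffreys = lambda u: (u - 1) * np.log(u)  # gives the Jeffreys divergence

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(f_div(f_kl, p, q))        # equals sum p log(p/q)
print(f_div(f_jeffreys, p, q))  # equals KL(p||q) + KL(q||p)
```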

If a Markov process has a positive equilibrium probability distribution P*, then D_f(p(t) ∥ P*) is a monotonic (non-increasing) function of time, where the probability distribution p(t) is a solution of the Kolmogorov forward equations (or Master equation), used to describe the time evolution of the probability distribution in the Markov process. This means that all f-divergences D_f(p ∥ P*) are Lyapunov functions of the Kolmogorov forward equations. The converse statement is also true: if H(p) is a Lyapunov function for all Markov chains with positive equilibrium P* and is of trace form (H(p) = Σ_x h(p_x, P*_x)), then H(p) = D_f(p ∥ P*) for some convex function f. [8] [9] Bregman divergences in general do not have this property and can increase in Markov processes.
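This monotonicity can be observed in simulation. The sketch below (a hypothetical 3-state chain with a column-stochastic transition matrix T, in discrete time rather than the continuous-time forward equations) tracks the KL divergence from the equilibrium distribution and confirms it never increases:

```python
import numpy as np

def kl(p, q):
    """KL divergence for discrete distributions with common support."""
    return float(np.sum(p * np.log(p / q)))

# Hypothetical column-stochastic transition matrix (columns sum to 1)
T = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.7, 0.3],
              [0.1, 0.2, 0.5]])

# Equilibrium distribution: eigenvector of T for eigenvalue 1, normalized
w, v = np.linalg.eig(T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

p = np.array([0.9, 0.05, 0.05])  # arbitrary initial distribution
vals = []
for _ in range(20):
    vals.append(kl(p, pi))
    p = T @ p                    # one step of the chain
print(vals[0], vals[-1])         # KL to equilibrium shrinks over time
```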

Bregman divergences

Bregman divergences correspond to convex functions on convex sets. Given a strictly convex, continuously differentiable function F on a convex set, known as the Bregman generator, the Bregman divergence measures the convexity of F: the error of the linear approximation of F from q as an approximation of the value at p:

  B_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩.

The dual divergence to a Bregman divergence is the divergence generated by the convex conjugate F* of the Bregman generator of the original divergence. For example, for the squared Euclidean distance, the generator is x², while for the relative entropy the generator is the negative entropy Σ x log x.
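The general Bregman formula recovers both examples directly. The sketch below (with hypothetical points p, q on the probability simplex, and gradients supplied by hand) checks that the generator ‖x‖² yields the squared Euclidean distance and the negative-entropy generator yields the KL divergence:

```python
import numpy as np

def bregman(F, grad_F, p, q):
    """Bregman divergence B_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    return float(F(p) - F(q) - np.dot(grad_F(q), p - q))

# Generator ||x||^2: Bregman divergence is the squared Euclidean distance
F_sq = lambda x: np.dot(x, x)
gF_sq = lambda x: 2.0 * x

# Negative entropy generator: Bregman divergence is KL (on the simplex)
F_ent = lambda x: np.sum(x * np.log(x))
gF_ent = lambda x: np.log(x) + 1.0

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(bregman(F_sq, gF_sq, p, q))    # equals sum((p - q)**2)
print(bregman(F_ent, gF_ent, p, q))  # equals sum(p * log(p/q))
```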


History

The use of the term "divergence" – both what functions it refers to, and what various statistical distances are called – has varied significantly over time, but by c. 2000 had settled on the current usage within information geometry, notably in the textbook Amari & Nagaoka (2000). [3]

The term "divergence" for a statistical distance was used informally in various contexts from c. 1910 to c. 1940. Its formal use dates at least to Bhattacharyya (1943) , entitled "On a measure of divergence between two statistical populations defined by their probability distributions", which defined the Bhattacharyya distance, and Bhattacharyya (1946) , entitled "On a Measure of Divergence between Two Multinomial Populations", which defined the Bhattacharyya angle. The term was popularized by its use for the Kullback–Leibler divergence in Kullback & Leibler (1951) and its use in the textbook Kullback (1959). The term "divergence" was used generally by Ali & Silvey (1966) for statistically distances. Numerous references to earlier uses of statistical distances are given in Adhikari & Joshi (1956) and Kullback (1959 , pp. 6–7, 1.3 Divergence).

Kullback & Leibler (1951) actually used "divergence" to refer to the symmetrized divergence (this function had already been defined and used by Harold Jeffreys in 1948 [10] ), referring to the asymmetric function as "the mean information for discrimination ... per observation", [11] while Kullback (1959) referred to the asymmetric function as the "directed divergence". [12] Ali & Silvey (1966) referred generally to such a function as a "coefficient of divergence", and showed that many existing functions could be expressed as f-divergences, referring to Jeffreys' function as "Jeffreys' measure of divergence" (today "Jeffreys divergence"), and Kullback–Leibler's asymmetric function (in each direction) as "Kullback's and Leibler's measures of discriminatory information" (today "Kullback–Leibler divergence"). [13]

The information geometry definition of divergence (the subject of this article) was initially referred to by alternative terms, including "quasi-distance" Amari (1982 , p. 369) and "contrast function" Eguchi (1985), though "divergence" was used in Amari (1985) for the α-divergence, and has become standard for the general class. [3] [4]

The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. [14] For example, the term "Bregman distance" is still found, but "Bregman divergence" is now preferred.

Notationally, Kullback & Leibler (1951) denoted their asymmetric function as I(1:2), while Ali & Silvey (1966) denote their functions with a lowercase 'd' as d(P₁, P₂).


Notes

  1. Throughout, we only require differentiability class C2 (continuous with continuous first and second derivatives), since only second derivatives are required. In practice, commonly used statistical manifolds and divergences are infinitely differentiable ("smooth").
  2. A colon is used in Kullback & Leibler (1951, p. 80), where the KL divergence between measures μ₁ and μ₂ is written as I(1:2).


References

  1. Amari 2016, p. 10.
  2. Amari 2016, 1.6.1 Generalized Pythagorean Theorem.
  3. Amari & Nagaoka 2000, chapter 3.2.
  4. Amari 2016, p. 10, Definition 1.1.
  5. Eguchi (1992)
  6. Matumoto (1993)
  7. Csiszár 1991.
  8. Gorban, Pavel A. (15 October 2003). "Monotonically equivalent entropies and solution of additivity equation". Physica A. 328 (3–4): 380–390. arXiv:cond-mat/0304131. doi:10.1016/S0378-4371(03)00578-8.
  9. Amari, Shun'ichi (2009). Leung, C.S.; Lee, M.; Chan, J.H. (eds.). Divergence, Optimization, Geometry. The 16th International Conference on Neural Information Processing (ICONIP 2009), Bangkok, Thailand, 1–5 December 2009. Lecture Notes in Computer Science, vol. 5863. Berlin, Heidelberg: Springer. pp. 185–193. doi:10.1007/978-3-642-10677-4_21.
  10. Jeffreys 1948, p. 158.
  11. Kullback & Leibler 1951, p. 80.
  12. Kullback 1959, p. 7.
  13. Ali & Silvey 1966, p. 139.
  14. Kullback 1959, p. 6.