
In statistics, probability theory, and information theory, a **statistical distance** quantifies the distance between two statistical objects, which can be two random variables, or two probability distributions or samples, or the distance can be between an individual sample point and a population or a wider sample of points.


A distance between populations can be interpreted as measuring the distance between two probability distributions and hence they are essentially measures of distances between probability measures. Where statistical distance measures relate to the differences between random variables, these may have statistical dependence,^{ [1] } and hence these distances are not directly related to measures of distances between probability measures. Again, a measure of distance between random variables may relate to the extent of dependence between them, rather than to their individual values.

Statistical distance measures are not typically metrics, and they need not be symmetric. Some types of distance measures are referred to as (statistical) **divergences**.

Many terms are used to refer to various notions of distance; these are often confusingly similar, and may be used inconsistently between authors and over time, either loosely or with precise technical meaning. In addition to "distance", similar terms include deviance, deviation, discrepancy, discrimination, and divergence, as well as others such as contrast function and metric. Terms from information theory include cross entropy, relative entropy, discrimination information, and information gain.

A **metric** on a set *X* is a function (called the *distance function* or simply **distance**) *d* : *X* × *X* → **R**^{+} (where **R**^{+} is the set of non-negative real numbers). For all *x*, *y*, *z* in *X*, this function is required to satisfy the following conditions:

1. *d*(*x*, *y*) ≥ 0 (*non-negativity*)
2. *d*(*x*, *y*) = 0 if and only if *x* = *y* (*identity of indiscernibles*; note that conditions 1 and 2 together produce *positive definiteness*)
3. *d*(*x*, *y*) = *d*(*y*, *x*) (*symmetry*)
4. *d*(*x*, *z*) ≤ *d*(*x*, *y*) + *d*(*y*, *z*) (*subadditivity* / *triangle inequality*)

Many statistical distances are not metrics, because they lack one or more properties of proper metrics. For example, pseudometrics violate the "positive definiteness" (alternatively, "identity of indiscernibles") property (1 & 2 above); quasimetrics violate the symmetry property (3); and semimetrics violate the triangle inequality (4). Statistical distances that satisfy (1) and (2) are referred to as divergences.
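As a concrete check of these axioms, the total variation distance between discrete distributions satisfies all four; a minimal Python sketch (with illustrative pmfs, not values from any source) verifies them numerically:

```python
import itertools

def total_variation(p, q):
    """Total variation distance: half the L1 distance between two pmfs."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Three illustrative distributions over the same three outcomes.
dists = [
    [0.5, 0.3, 0.2],
    [0.1, 0.4, 0.5],
    [0.3, 0.3, 0.4],
]

for p, q, r in itertools.permutations(dists, 3):
    assert total_variation(p, q) >= 0                      # (1) non-negativity
    assert total_variation(p, p) == 0                      # (2) identity of indiscernibles
    assert total_variation(p, q) == total_variation(q, p)  # (3) symmetry
    # (4) triangle inequality, with a tiny slack for floating-point error
    assert total_variation(p, r) <= total_variation(p, q) + total_variation(q, r) + 1e-12
```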

Some important statistical distances include the following:

- f-divergence: includes
  - Kullback–Leibler divergence
  - Hellinger distance
  - Total variation distance (sometimes just called "the" statistical distance)

- Rényi's divergence
- Jensen–Shannon divergence
- Lévy–Prokhorov metric
- Bhattacharyya distance
- Wasserstein metric: also known as the Kantorovich metric, or earth mover's distance
- The Kolmogorov–Smirnov statistic represents a distance between two probability distributions defined on a single real variable
- The **maximum mean discrepancy**, which is defined in terms of the kernel embedding of distributions
- Signal-to-noise ratio distance
- Mahalanobis distance
- Discriminability index, specifically the Bayes discriminability index, is a positive-definite symmetric measure of the overlap of two distributions.
- Energy distance
- Distance correlation is a measure of dependence between two random variables; it is zero if and only if the random variables are independent.

- The *continuous ranked probability score* measures how well forecasts that are expressed as probability distributions match observed outcomes. Both the location and spread of the forecast distribution are taken into account in judging how close the distribution is to the observed value: see probabilistic forecasting.
- The Łukaszyk–Karmowski metric is a function defining a distance between two random variables or two random vectors. It does not satisfy the identity of indiscernibles condition of a metric, and is zero if and only if both its arguments are certain events described by Dirac delta probability density functions.
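Several of the distances listed above have short closed forms for discrete distributions. A minimal Python sketch, using illustrative pmfs (not values from any source):

```python
import math

# Illustrative discrete distributions over three outcomes.
p = [0.36, 0.48, 0.16]
q = [0.30, 0.50, 0.20]

def total_variation(p, q):
    """Total variation distance: half the L1 distance between the pmfs."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    """Hellinger distance: Euclidean distance between the sqrt-pmfs, scaled by 1/sqrt(2)."""
    return math.sqrt(sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q)) / 2)

def bhattacharyya(p, q):
    """Bhattacharyya distance: -ln of the Bhattacharyya coefficient."""
    return -math.log(sum(math.sqrt(pi * qi) for pi, qi in zip(p, q)))

# All three vanish when the distributions coincide.
assert total_variation(p, p) == 0 and hellinger(p, p) == 0
assert abs(bhattacharyya(p, p)) < 1e-9
```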


- ↑ Dodge, Y. (2003), entry for distance

**Information theory** is the scientific study of the quantification, storage, and communication of digital information. The field was fundamentally established by the works of Harry Nyquist and Ralph Hartley, in the 1920s, and Claude Shannon in the 1940s. The field is at the intersection of probability theory, statistics, computer science, statistical mechanics, information engineering, and electrical engineering.

**Probability theory** is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of the sample space is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion. Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.

In statistics, **correlation** or **dependence** is any statistical relationship, whether causal or not, between two random variables or bivariate data. In the broadest sense **correlation** is any statistical association, though it commonly refers to the degree to which a pair of variables are linearly related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the so-called demand curve.

In probability theory and information theory, the **mutual information** (**MI**) of two random variables is a measure of the mutual dependence between the two variables. More specifically, it quantifies the "amount of information" obtained about one random variable by observing the other random variable. The concept of mutual information is intimately linked to that of entropy of a random variable, a fundamental notion in information theory that quantifies the expected "amount of information" held in a random variable.
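For finite distributions, mutual information can be computed directly from a joint pmf; a small Python sketch with illustrative joint tables:

```python
import math

def mutual_information(joint):
    """Mutual information, in bits, of a joint pmf given as a 2-D list of rows."""
    px = [sum(row) for row in joint]          # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]    # marginal of Y (columns)
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return mi

# Perfectly dependent binary variables share 1 bit of information...
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-9
# ...while independent ones share none.
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-9
```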

In Bayesian statistical inference, a **prior probability distribution**, often simply called the **prior**, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.

In mathematical statistics, the **Fisher information** is a way of measuring the amount of information that an observable random variable *X* carries about an unknown parameter *θ* of a distribution that models *X*. Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher. The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.
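As an illustration, for a single Bernoulli(p) observation the variance of the score reproduces the known closed form I(p) = 1/(p(1 − p)); a small Python sketch:

```python
def bernoulli_fisher_information(p):
    """Fisher information of one Bernoulli(p) observation, computed as the
    variance of the score d/dp log f(x; p)."""
    score = {1: 1.0 / p, 0: -1.0 / (1.0 - p)}
    mean = p * score[1] + (1.0 - p) * score[0]  # the score has mean zero
    return p * (score[1] - mean) ** 2 + (1.0 - p) * (score[0] - mean) ** 2

# Matches the closed form I(p) = 1 / (p (1 - p)).
p = 0.3
assert abs(bernoulli_fisher_information(p) - 1.0 / (p * (1.0 - p))) < 1e-9
```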

The **Mahalanobis distance** is a measure of the distance between a point *P* and a distribution *D*, introduced by P. C. Mahalanobis in 1936. It is a multi-dimensional generalization of the idea of measuring how many standard deviations away *P* is from the mean of *D*. This distance is zero for *P* at the mean of *D* and grows as *P* moves away from the mean along each principal component axis. If each of these axes is re-scaled to have unit variance, then the Mahalanobis distance corresponds to standard Euclidean distance in the transformed space. The Mahalanobis distance is thus unitless, scale-invariant, and takes into account the correlations of the data set.
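A minimal Python sketch of the two-dimensional case, using the closed-form inverse of a 2×2 covariance matrix (the point, mean, and covariance values are illustrative):

```python
import math

def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance of a 2-D point from a distribution with the
    given mean and 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # closed-form 2x2 inverse
    dx = [point[0] - mean[0], point[1] - mean[1]]
    # Quadratic form dx^T inv(cov) dx, then the square root.
    q = sum(dx[i] * inv[i][j] * dx[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

# With identity covariance, Mahalanobis distance reduces to Euclidean distance.
assert abs(mahalanobis_2d((3, 4), (0, 0), [[1, 0], [0, 1]]) - 5.0) < 1e-9
# A point 4 units out along an axis with standard deviation 2 is 2 "sigmas" away.
assert abs(mahalanobis_2d((4, 0), (0, 0), [[4, 0], [0, 1]]) - 2.0) < 1e-9
```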

In mathematical statistics, the **Kullback–Leibler divergence** is a measure of how one probability distribution is different from a second, reference probability distribution. Applications include characterizing the relative (Shannon) entropy in information systems, randomness in continuous time-series, and information gain when comparing statistical models of inference. In contrast to variation of information, it is a distribution-wise *asymmetric* measure and thus does not qualify as a statistical *metric* of spread – it also does not satisfy the triangle inequality. In the simple case, a relative entropy of 0 indicates that the two distributions in question have identical quantities of information. In simplified terms, it is a measure of surprise, with diverse applications such as applied statistics, fluid mechanics, neuroscience and bioinformatics.
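The asymmetry, and the fact that the divergence vanishes only for identical distributions, are easy to see numerically; a small Python sketch with illustrative pmfs:

```python
import math

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q), in bits, for discrete pmfs."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.1, 0.4, 0.5]

# Zero when the distributions match...
assert relative_entropy(p, p) == 0
# ...but asymmetric in general, so it is not a metric.
assert relative_entropy(p, q) != relative_entropy(q, p)
```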

The following is a glossary of terms used in the mathematical sciences of statistics and probability.

The **Jaccard index**, also known as the **Jaccard similarity coefficient**, is a statistic used for gauging the similarity and diversity of sample sets. It was developed by Paul Jaccard, who originally gave it the French name *coefficient de communauté*, and was independently formulated again by T. Tanimoto. Thus, the names **Tanimoto index** or **Tanimoto coefficient** are also used in some fields. However, the two are identical, generally taking the ratio of **intersection over union**. The Jaccard coefficient measures similarity between finite sample sets, and is defined as the size of the intersection divided by the size of the union of the sample sets: J(A, B) = |A ∩ B| / |A ∪ B|.
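A minimal Python sketch of the definition for finite sets (returning 1 for two empty sets is a common convention, assumed here rather than taken from the source):

```python
def jaccard_index(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| for finite sets."""
    if not a and not b:
        return 1.0  # convention for two empty sets
    return len(a & b) / len(a | b)

assert jaccard_index({1, 2, 3}, {2, 3, 4}) == 0.5  # 2 shared / 4 total
assert jaccard_index({1, 2}, {1, 2}) == 1.0        # identical sets
assert jaccard_index({1}, {2}) == 0.0              # disjoint sets
```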

**Differential entropy** is a concept in information theory that began as an attempt by Shannon to extend the idea of (Shannon) entropy, a measure of average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.
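For a concrete case, the differential entropy of Uniform(a, b) is log(b − a), which, unlike discrete entropy, can be negative; a small Python sketch:

```python
import math

def uniform_differential_entropy(a, b):
    """Differential entropy, in nats, of the Uniform(a, b) distribution: log(b - a)."""
    return math.log(b - a)

# A distribution concentrated on a short interval has negative differential
# entropy, something impossible for discrete Shannon entropy.
assert uniform_differential_entropy(0.0, 0.5) < 0
assert abs(uniform_differential_entropy(0.0, 1.0)) < 1e-12  # log 1 = 0
```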

In mathematics, **probabilistic metric spaces** are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers **R**_{ ≥ 0}, but in distribution functions.

In decision theory, a **score function**, or **scoring rule**, measures the accuracy of probabilistic predictions. It is applicable to tasks in which predictions must assign probabilities to a set of mutually exclusive outcomes or classes. The set of possible outcomes can be either binary or categorical in nature, and the probabilities assigned to this set of outcomes must sum to one. A score can be thought of as either a measure of the "calibration" of a set of probabilistic predictions, or as a "cost function" or "loss function".
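One common example of a scoring rule for binary outcomes is the Brier score, the mean squared error of the predicted probabilities; a minimal Python sketch with illustrative forecasts:

```python
def brier_score(forecasts, outcomes):
    """Brier score: mean squared error between predicted probabilities and
    binary outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, correct forecaster scores 0; always hedging at 0.5 scores 0.25.
assert brier_score([1.0, 0.0, 1.0], [1, 0, 1]) == 0.0
assert brier_score([0.5, 0.5], [1, 0]) == 0.25
```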

The mathematical theory of information is based on probability theory and statistics, and measures information with several **quantities of information**. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. The most common unit of information is the bit, based on the binary logarithm. Other units include the nat, based on the natural logarithm, and the hartley, based on the base 10 or common logarithm.
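A small Python sketch showing how the choice of logarithmic base sets the unit, using a fair coin as the illustrative distribution:

```python
import math

def entropy(p, base):
    """Shannon entropy of a pmf, in the unit set by the logarithmic base."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

fair_coin = [0.5, 0.5]

assert abs(entropy(fair_coin, 2) - 1.0) < 1e-9                  # 1 bit
assert abs(entropy(fair_coin, math.e) - math.log(2)) < 1e-9     # ~0.693 nats
assert abs(entropy(fair_coin, 10) - math.log10(2)) < 1e-9       # ~0.301 hartleys
```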

This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to number of random variables involved and the type of the distribution. For example (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics. The list of codes can be found in the table of contents.

In statistics and information geometry, **divergence** or a **contrast function** is a function which establishes the "distance" of one probability distribution to the other on a statistical manifold. The divergence is a weaker notion than that of the distance, in particular the divergence need not be symmetric, and need not satisfy the triangle inequality.

**Energy distance** is a statistical distance between probability distributions. If X and Y are independent random vectors in **R**^{d} with cumulative distribution functions (cdf) F and G respectively, then the energy distance between the distributions F and G is defined to be the square root of D²(F, G) = 2E‖X − Y‖ − E‖X − X′‖ − E‖Y − Y′‖, where X′ and Y′ are independent copies of X and Y.
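A minimal Python sketch of the one-dimensional sample version, replacing the expectations with empirical means over two illustrative samples:

```python
import math

def energy_distance(xs, ys):
    """Sample energy distance between two 1-D samples:
    sqrt(2 E|X-Y| - E|X-X'| - E|Y-Y'|), with empirical expectations."""
    def mean_abs(a, b):
        return sum(abs(ai - bj) for ai in a for bj in b) / (len(a) * len(b))
    d2 = 2 * mean_abs(xs, ys) - mean_abs(xs, xs) - mean_abs(ys, ys)
    return math.sqrt(max(d2, 0.0))  # clamp tiny negative rounding error

# Identical samples are at distance zero; well-separated samples are not.
assert energy_distance([1, 2, 3], [1, 2, 3]) == 0.0
assert energy_distance([0, 0, 0], [5, 5, 5]) > 0
```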

In mathematics, the **Łukaszyk–Karmowski metric** is a function defining a distance between two random variables or two random vectors. This function is not a metric as it does not satisfy the identity of indiscernibles condition of the metric, that is for two identical arguments its value is greater than zero. The concept is named after Szymon Łukaszyk and Wojciech Karmowski.
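For independent discrete random variables the Łukaszyk–Karmowski distance reduces to E|X − Y|; a small Python sketch showing that it is positive even for two copies of the same non-degenerate distribution:

```python
def lk_distance(p, q):
    """Lukaszyk-Karmowski distance E|X - Y| for independent discrete random
    variables given as {value: probability} dicts."""
    return sum(px * qy * abs(x - y)
               for x, px in p.items() for y, qy in q.items())

fair_coin = {0: 0.5, 1: 0.5}  # non-degenerate distribution
dirac = {3: 1.0}              # a certain event (Dirac delta)

# Identity of indiscernibles fails: two i.i.d. copies of the same
# non-degenerate variable are a positive "distance" apart...
assert lk_distance(fair_coin, fair_coin) == 0.5
# ...while two identical Dirac deltas are at distance zero.
assert lk_distance(dirac, dirac) == 0.0
```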

- Dodge, Y. (2003) *Oxford Dictionary of Statistical Terms*, OUP. ISBN 0-19-920613-9

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
