Wrapped normal distribution

Wrapped Normal
Probability density function: [plot not shown; the support is chosen to be [-π, π] with μ = 0]
Cumulative distribution function: [plot not shown; the support is chosen to be [-π, π] with μ = 0]
Parameters: μ ∈ ℝ, σ > 0
Support: any interval of length 2π
PDF: \frac{1}{\sigma\sqrt{2\pi}} \sum_{k=-\infty}^{\infty} e^{-(\theta-\mu+2\pi k)^2/(2\sigma^2)}
Mean: μ, if support is on interval [μ − π, μ + π)
Median: μ, if support is on interval [μ − π, μ + π)
Mode: μ
Variance (circular): 1 - e^{-\sigma^2/2}
Entropy: (see text)
CF: e^{in\mu - n^2\sigma^2/2}

In probability theory and directional statistics, a wrapped normal distribution is a wrapped probability distribution that results from the "wrapping" of the normal distribution around the unit circle. It finds application in the theory of Brownian motion and is a solution to the heat equation for periodic boundary conditions. It is closely approximated by the von Mises distribution, which, due to its mathematical simplicity and tractability, is the most commonly used distribution in directional statistics.[1]

Definition

The probability density function of the wrapped normal distribution is[2]

    f_{WN}(\theta;\mu,\sigma) = \frac{1}{\sigma\sqrt{2\pi}} \sum_{k=-\infty}^{\infty} \exp\left[ -\frac{(\theta-\mu+2\pi k)^2}{2\sigma^2} \right]

where μ and σ are the mean and standard deviation of the unwrapped distribution, respectively. Expressing the above density function in terms of the characteristic function of the normal distribution yields:[2]

    f_{WN}(\theta;\mu,\sigma) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} e^{-\sigma^2 n^2/2 + in(\theta-\mu)} = \frac{1}{2\pi} \vartheta\!\left( \frac{\theta-\mu}{2\pi}, \frac{i\sigma^2}{2\pi} \right)

where \vartheta(\theta,\tau) is the Jacobi theta function, given by

    \vartheta(\theta,\tau) = \sum_{n=-\infty}^{\infty} (w^2)^n q^{n^2}

and

    w = e^{i\pi\theta}, \qquad q = e^{i\pi\tau}

The wrapped normal distribution may also be expressed in terms of the Jacobi triple product:[3]

    f_{WN}(\theta;\mu,\sigma) = \frac{1}{2\pi} \prod_{n=1}^{\infty} (1-q^n)\left(1+q^{n-1/2}z\right)\left(1+q^{n-1/2}z^{-1}\right)

where q = e^{-\sigma^2} and z = e^{i(\theta-\mu)}.
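The wrapping sum in the density converges quickly, so in practice it can be truncated. A minimal Python sketch (not from the source; the truncation depth `n_terms` is an arbitrary choice) evaluates the density this way and checks that it integrates to 1 over one period:

```python
import math

def wrapped_normal_pdf(theta, mu=0.0, sigma=1.0, n_terms=50):
    # Truncated wrapping sum: f(theta) = sum_k N(theta + 2*pi*k; mu, sigma).
    total = 0.0
    for k in range(-n_terms, n_terms + 1):
        x = theta - mu + 2.0 * math.pi * k
        total += math.exp(-x * x / (2.0 * sigma * sigma))
    return total / (sigma * math.sqrt(2.0 * math.pi))

# The density should integrate to 1 over any interval of length 2*pi.
# The midpoint rule is spectrally accurate for smooth periodic integrands.
N = 2000
step = 2.0 * math.pi / N
mass = sum(wrapped_normal_pdf(-math.pi + (i + 0.5) * step) * step
           for i in range(N))
```

For moderate σ only a handful of wrapping terms contribute; for large σ the theta-function (Fourier) representation converges faster.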

Moments

In terms of the circular variable z = e^{i\theta}, the circular moments of the wrapped normal distribution are the characteristic function of the normal distribution evaluated at integer arguments:

    \langle z^n \rangle = \int_\Gamma e^{in\theta} f_{WN}(\theta;\mu,\sigma)\, d\theta = e^{in\mu - n^2\sigma^2/2}

where \Gamma is some interval of length 2\pi. The first moment is then the average value of z, also known as the mean resultant, or mean resultant vector:

    \langle z \rangle = e^{i\mu - \sigma^2/2}

The mean angle is

    \theta_\mu = \mathrm{Arg}\,\langle z \rangle = \mu

and the length of the mean resultant is

    R = |\langle z \rangle| = e^{-\sigma^2/2}

The circular standard deviation, which is a useful measure of dispersion for the wrapped normal distribution and its close relative, the von Mises distribution, is given by:

    s = \sqrt{\ln(1/R^2)} = \sigma

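As an illustrative check (not part of the source; the parameter values and grid size are arbitrary), one can confirm numerically that the first circular moment equals the normal characteristic function at n = 1:

```python
import cmath
import math

def wn_pdf(theta, mu, sigma, terms=50):
    # Truncated wrapping sum for the wrapped normal density.
    s = sum(math.exp(-(theta - mu + 2 * math.pi * k) ** 2 / (2 * sigma ** 2))
            for k in range(-terms, terms + 1))
    return s / (sigma * math.sqrt(2 * math.pi))

def circular_moment(n, mu, sigma, grid=4096):
    # <z^n> = integral of e^{i n theta} f(theta) dtheta over one period
    # (midpoint rule, spectrally accurate for periodic integrands).
    step = 2 * math.pi / grid
    total = 0j
    for i in range(grid):
        theta = -math.pi + (i + 0.5) * step
        total += cmath.exp(1j * n * theta) * wn_pdf(theta, mu, sigma) * step
    return total

mu, sigma = 0.7, 0.8  # arbitrary illustrative values
m1 = circular_moment(1, mu, sigma)
predicted = cmath.exp(1j * mu - sigma ** 2 / 2)  # e^{i mu - sigma^2/2}
```

The same comparison holds for any integer n, with the predicted moment e^{in\mu - n^2\sigma^2/2}.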
Estimation of parameters

A series of N measurements z_n = e^{i\theta_n} drawn from a wrapped normal distribution may be used to estimate certain parameters of the distribution. The average of the series \overline{z} is defined as

    \overline{z} = \frac{1}{N} \sum_{n=1}^{N} z_n

and its expectation value will be just the first moment:

    \langle \overline{z} \rangle = e^{i\mu - \sigma^2/2}

In other words, \overline{z} is an unbiased estimator of the first moment. If we assume that the mean μ lies in the interval [−π, π), then Arg \overline{z} will be a (biased) estimator of the mean μ.

Viewing the z_n as a set of vectors in the complex plane, the \overline{R}^2 statistic is the square of the length of the averaged vector:

    \overline{R}^2 = \overline{z}\,\overline{z}^* = \left( \frac{1}{N} \sum_{n=1}^{N} \cos\theta_n \right)^2 + \left( \frac{1}{N} \sum_{n=1}^{N} \sin\theta_n \right)^2

and its expected value is:

    \langle \overline{R}^2 \rangle = \frac{1}{N} + \frac{N-1}{N}\, e^{-\sigma^2}

In other words, the statistic

    R_e^2 = \frac{N}{N-1} \left( \overline{R}^2 - \frac{1}{N} \right)

will be an unbiased estimator of e^{-\sigma^2}, and \ln(1/R_e^2) will be a (biased) estimator of \sigma^2.
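A small simulation (a sketch, not from the source; the sample size and parameter values are arbitrary) illustrates these moment-based estimators:

```python
import cmath
import math
import random

def estimate_wrapped_normal(thetas):
    # Moment-based estimates: Arg(zbar) for mu, ln(1/Re^2) for sigma^2.
    n = len(thetas)
    zbar = sum(cmath.exp(1j * t) for t in thetas) / n
    mu_hat = cmath.phase(zbar)                    # (biased) estimator of mu
    r_bar_sq = abs(zbar) ** 2
    re_sq = (n / (n - 1)) * (r_bar_sq - 1.0 / n)  # unbiased estimator of e^{-sigma^2}
    sigma_sq_hat = math.log(1.0 / re_sq)          # (biased) estimator of sigma^2
    return mu_hat, sigma_sq_hat

random.seed(42)
mu_true, sigma_true = 0.5, 0.4  # arbitrary illustrative values
# Draw from the unwrapped normal and wrap the angles into (-pi, pi].
sample = [(random.gauss(mu_true, sigma_true) + math.pi) % (2 * math.pi) - math.pi
          for _ in range(20000)]
mu_hat, sigma_sq_hat = estimate_wrapped_normal(sample)
```

With 20,000 draws the estimates land close to μ = 0.5 and σ² = 0.16; the bias of the log transform is negligible at this sample size.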

Entropy

The information entropy of the wrapped normal distribution is defined as:[2]

    H = -\int_\Gamma f_{WN}(\theta;\mu,\sigma) \ln f_{WN}(\theta;\mu,\sigma)\, d\theta

where \Gamma is any interval of length 2\pi. Defining z = e^{i(\theta-\mu)} and q = e^{-\sigma^2}, the Jacobi triple product representation for the wrapped normal is:

    f_{WN}(\theta;\mu,\sigma) = \frac{\phi(q)}{2\pi} \prod_{n=1}^{\infty} \left(1+q^{n-1/2}z\right)\left(1+q^{n-1/2}z^{-1}\right)

where \phi(q) is the Euler function. The logarithm of the density of the wrapped normal distribution may be written:

    \ln f_{WN}(\theta;\mu,\sigma) = \ln\frac{\phi(q)}{2\pi} + \sum_{n=1}^{\infty} \left[ \ln\left(1+q^{n-1/2}z\right) + \ln\left(1+q^{n-1/2}z^{-1}\right) \right]

Using the series expansion for the logarithm:

    \ln(1+x) = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}\, x^k

the logarithmic sums may be written as:

    \sum_{n=1}^{\infty} \ln\left(1+q^{n-1/2}z^{\pm 1}\right) = \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}\, \frac{q^{k/2}}{1-q^k}\, z^{\pm k}

so that the logarithm of density of the wrapped normal distribution may be written as:

    \ln f_{WN}(\theta;\mu,\sigma) = \ln\frac{\phi(q)}{2\pi} + \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}\, \frac{q^{k/2}}{1-q^k} \left( z^k + z^{-k} \right)

which is essentially a Fourier series in \theta. Using the characteristic function representation for the wrapped normal distribution in the left side of the integral:

    \int_\Gamma z^{\pm k}\, f_{WN}(\theta;\mu,\sigma)\, d\theta = e^{-k^2\sigma^2/2} = q^{k^2/2}

the entropy may be written:

    H = -\ln\frac{\phi(q)}{2\pi} - \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}\, \frac{q^{k/2}}{1-q^k} \int_\Gamma \left( z^k + z^{-k} \right) f_{WN}(\theta;\mu,\sigma)\, d\theta

which may be integrated to yield:

    H = -\ln\frac{\phi(q)}{2\pi} - 2 \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}\, \frac{q^{k(k+1)/2}}{1-q^k}

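As a sanity check (a sketch, not from the source; the truncation depths and test value of σ are arbitrary), the closed-form series can be compared against direct numerical integration of -f ln f:

```python
import math

def wn_pdf(theta, sigma, terms=60):
    # Wrapped normal density with mu = 0 (the entropy does not depend on mu).
    s = sum(math.exp(-(theta + 2 * math.pi * k) ** 2 / (2 * sigma ** 2))
            for k in range(-terms, terms + 1))
    return s / (sigma * math.sqrt(2 * math.pi))

def entropy_numeric(sigma, grid=4096):
    # Midpoint-rule evaluation of H = -integral of f ln f over one period.
    step = 2 * math.pi / grid
    h = 0.0
    for i in range(grid):
        f = wn_pdf(-math.pi + (i + 0.5) * step, sigma)
        h -= f * math.log(f) * step
    return h

def entropy_series(sigma, terms=200):
    # Closed-form series with q = e^{-sigma^2}:
    # H = -ln(phi(q)/(2 pi)) - 2 sum_k ((-1)^(k-1)/k) q^(k(k+1)/2) / (1 - q^k).
    q = math.exp(-sigma * sigma)
    log_phi = sum(math.log(1.0 - q ** n) for n in range(1, terms + 1))  # ln phi(q)
    tail = sum(((-1) ** (k - 1) / k) * q ** ((k * (k + 1)) // 2) / (1.0 - q ** k)
               for k in range(1, terms + 1))
    return -(log_phi - math.log(2 * math.pi)) - 2.0 * tail
```

For small σ the result approaches the unwrapped normal entropy (1/2) ln(2πeσ²); as σ grows, q → 0 and H approaches the circular-uniform value ln(2π).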
See also

Wrapped probability distribution
Wrapped Cauchy distribution
Wrapped Lévy distribution
von Mises distribution
Directional statistics

References

  1. Collett, D.; Lewis, T. (1981). "Discriminating Between the Von Mises and Wrapped Normal Distributions". Australian Journal of Statistics. 23 (1): 73–79. doi:10.1111/j.1467-842X.1981.tb00763.x.
  2. Mardia, Kantilal; Jupp, Peter E. (1999). Directional Statistics. Wiley. ISBN 978-0-471-95333-3.
  3. Whittaker, E. T.; Watson, G. N. (2009). A Course of Modern Analysis. Book Jungle. ISBN 978-1-4385-2815-1.