Probability density function
Cumulative distribution function

| Parameters | $\nu \ge 0$ — distance between the reference point and the center of the bivariate distribution; $\sigma \ge 0$ — scale |
|---|---|
| Support | $x \in [0, +\infty)$ |
| PDF | $\frac{x}{\sigma^2}\exp\left(\frac{-(x^2+\nu^2)}{2\sigma^2}\right) I_0\left(\frac{x\nu}{\sigma^2}\right)$ |
| CDF | $1 - Q_1\left(\frac{\nu}{\sigma}, \frac{x}{\sigma}\right)$, where $Q_1$ is the Marcum Q-function |
| Mean | $\sigma\sqrt{\pi/2}\,\, L_{1/2}\left(-\nu^2/2\sigma^2\right)$ |
| Variance | $2\sigma^2 + \nu^2 - \frac{\pi\sigma^2}{2} L_{1/2}^2\left(-\nu^2/2\sigma^2\right)$ |
| Skewness | (complicated) |
| Ex. kurtosis | (complicated) |
In probability theory, the Rice distribution or Rician distribution (or, less commonly, Ricean distribution) is the probability distribution of the magnitude of a circularly-symmetric bivariate normal random variable, possibly with non-zero mean (noncentral). It was named after Stephen O. Rice (1907–1986).
The probability density function is

$$f(x \mid \nu, \sigma) = \frac{x}{\sigma^2} \exp\left( \frac{-(x^2 + \nu^2)}{2\sigma^2} \right) I_0\left( \frac{x\nu}{\sigma^2} \right),$$

where $I_0(z)$ is the modified Bessel function of the first kind with order zero.
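A minimal Python sketch of this density, checked against SciPy's implementation, which parameterizes the Rice distribution by the shape $b = \nu/\sigma$ and scale $\sigma$ (the helper name `rice_pdf` is illustrative, not from the article):

```python
import numpy as np
from scipy import stats
from scipy.special import i0e  # exponentially scaled I_0, avoids overflow

def rice_pdf(x, nu, sigma):
    """Rice pdf f(x | nu, sigma). Uses the identity
    I_0(z) * exp(-(x^2+nu^2)/(2 s^2)) == i0e(z) * exp(-(x-nu)^2/(2 s^2))
    with z = x*nu/s^2, so large arguments stay finite."""
    x = np.asarray(x, dtype=float)
    z = x * nu / sigma**2
    return (x / sigma**2) * i0e(z) * np.exp(-((x - nu) ** 2) / (2 * sigma**2))

nu, sigma = 4.0, 1.0
x = np.linspace(0.01, 10, 5)
print(np.allclose(rice_pdf(x, nu, sigma),
                  stats.rice.pdf(x, nu / sigma, scale=sigma)))  # True
```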
In the context of Rician fading, the distribution is often rewritten using the shape parameter $K = \frac{\nu^2}{2\sigma^2}$, defined as the ratio of the power contribution of the line-of-sight path to the power of the remaining multipath, and the scale parameter $\Omega = \nu^2 + 2\sigma^2$, defined as the total power received in all paths. [1] In that parameterization the density becomes

$$f(x) = \frac{2(K+1)x}{\Omega} \exp\left( -K - \frac{(K+1)x^2}{\Omega} \right) I_0\left( 2\sqrt{\frac{K(K+1)}{\Omega}}\, x \right).$$
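Inverting the two definitions above gives $\nu^2 = K\Omega/(1+K)$ and $2\sigma^2 = \Omega/(1+K)$; a small sketch of the conversion (function name ours):

```python
import numpy as np

def k_omega_to_nu_sigma(K, Omega):
    """Convert Rician-fading parameters (K, Omega) back to (nu, sigma),
    inverting K = nu^2 / (2 sigma^2) and Omega = nu^2 + 2 sigma^2."""
    nu = np.sqrt(K * Omega / (1 + K))
    sigma = np.sqrt(Omega / (2 * (1 + K)))
    return nu, sigma

nu, sigma = k_omega_to_nu_sigma(K=3.0, Omega=2.0)
print(nu**2 / (2 * sigma**2), nu**2 + 2 * sigma**2)  # recovers K=3.0, Omega=2.0
```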
The characteristic function of the Rice distribution is given as: [2] [3]

$$\chi_X(t \mid \nu, \sigma) = \exp\left( -\frac{\nu^2}{2\sigma^2} \right) \left[ \Psi_2\left( 1; 1, \tfrac{1}{2}; \frac{\nu^2}{2\sigma^2}, -\frac{1}{2}\sigma^2 t^2 \right) + i \sqrt{2}\, \sigma t\, \Gamma\!\left(\tfrac{3}{2}\right) \Psi_2\left( \tfrac{3}{2}; 1, \tfrac{3}{2}; \frac{\nu^2}{2\sigma^2}, -\frac{1}{2}\sigma^2 t^2 \right) \right],$$

where $\Psi_2\left(\alpha; \gamma, \gamma'; x, y\right)$ is one of Horn's confluent hypergeometric functions with two variables, convergent for all finite values of $x$ and $y$. It is given by: [4] [5]

$$\Psi_2\left(\alpha; \gamma, \gamma'; x, y\right) = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{(\alpha)_{m+n}}{(\gamma)_m (\gamma')_n} \frac{x^m y^n}{m!\, n!},$$

where

$$(x)_n = x(x+1)\cdots(x+n-1)$$

is the rising factorial.
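A rough numerical sketch of this characteristic function, truncating the $\Psi_2$ double series and comparing against a Monte Carlo estimate of $E[e^{itX}]$ (the helper names `psi2` and `rice_cf` are ours, and the truncation depth is only adequate for moderate $|x|$, $|y|$):

```python
import numpy as np
from math import factorial
from scipy.special import gamma, poch  # poch(a, n) is the rising factorial (a)_n

def psi2(alpha, g1, g2, x, y, terms=40):
    """Horn's Psi_2 by truncating the double series above."""
    return sum(poch(alpha, m + n) / (poch(g1, m) * poch(g2, n))
               * x**m / factorial(m) * y**n / factorial(n)
               for m in range(terms) for n in range(terms))

def rice_cf(t, nu, sigma):
    """Characteristic function of Rice(nu, sigma) per the formula above."""
    a = nu**2 / (2 * sigma**2)
    y = -0.5 * sigma**2 * t**2
    return np.exp(-a) * (psi2(1.0, 1.0, 0.5, a, y)
                         + 1j * np.sqrt(2) * sigma * t * gamma(1.5)
                         * psi2(1.5, 1.0, 1.5, a, y))

# Monte Carlo check: X = |nu + sigma*Z1 + i*sigma*Z2| is Rice(nu, sigma).
rng = np.random.default_rng(0)
z = rng.standard_normal((2, 200_000))
x = np.abs(1.0 + z[0] + 1j * z[1])  # nu = sigma = 1
t = 0.7
print(rice_cf(t, 1.0, 1.0), np.mean(np.exp(1j * t * x)))  # should nearly agree
```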
The first few raw moments are:

$$\begin{aligned}
\mu_1' &= \sigma \sqrt{\pi/2}\,\, L_{1/2}\left(-\nu^2/2\sigma^2\right) \\
\mu_2' &= 2\sigma^2 + \nu^2 \\
\mu_3' &= 3\sigma^3 \sqrt{\pi/2}\,\, L_{3/2}\left(-\nu^2/2\sigma^2\right) \\
\mu_4' &= 8\sigma^4 + 8\sigma^2\nu^2 + \nu^4 \\
\mu_5' &= 15\sigma^5 \sqrt{\pi/2}\,\, L_{5/2}\left(-\nu^2/2\sigma^2\right) \\
\mu_6' &= 48\sigma^6 + 72\sigma^4\nu^2 + 18\sigma^2\nu^4 + \nu^6
\end{aligned}$$

and, in general, the raw moments are given by

$$\mu_k' = \sigma^k\, 2^{k/2}\, \Gamma(1 + k/2)\, L_{k/2}\left(-\nu^2/2\sigma^2\right).$$

Here $L_q(x)$ denotes a Laguerre polynomial:

$$L_q(x) = L_q^{(0)}(x) = M(-q, 1, x) = {}_1F_1(-q; 1; x),$$

where $M(a, b, z) = {}_1F_1(a; b; z)$ is the confluent hypergeometric function of the first kind. When $k$ is even, the raw moments become simple polynomials in $\sigma$ and $\nu$, as in the examples above.

For the case $q = 1/2$:

$$L_{1/2}(x) = {}_1F_1\!\left(-\tfrac{1}{2}; 1; x\right) = e^{x/2}\left[ (1 - x)\, I_0\!\left(-\frac{x}{2}\right) - x\, I_1\!\left(-\frac{x}{2}\right) \right].$$
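A short Python check of this closed form and of the mean formula $\mu_1'$, assuming SciPy's `hyp1f1` for ${}_1F_1$ and its `rice` parameterization $b = \nu/\sigma$ (the helper name `laguerre_half` is ours):

```python
import numpy as np
from scipy import stats
from scipy.special import hyp1f1, i0e, i1e

def laguerre_half(x):
    """L_{1/2}(x) for x <= 0 via the closed form above; the e^{x/2} factor is
    absorbed into the exponentially scaled Bessel functions i0e/i1e."""
    return (1 - x) * i0e(-x / 2) - x * i1e(-x / 2)

nu, sigma = 2.0, 1.5
a = -nu**2 / (2 * sigma**2)
print(np.isclose(laguerre_half(a), hyp1f1(-0.5, 1, a)))          # closed form vs 1F1
print(np.isclose(sigma * np.sqrt(np.pi / 2) * laguerre_half(a),  # mean formula
                 stats.rice.mean(nu / sigma, scale=sigma)))
```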
The second central moment, the variance, is

$$\mu_2 = 2\sigma^2 + \nu^2 - \frac{\pi \sigma^2}{2} L_{1/2}^2\!\left( \frac{-\nu^2}{2\sigma^2} \right).$$

Note that $L_{1/2}^2(\cdot)$ indicates the square of the Laguerre polynomial $L_{1/2}(\cdot)$, not the generalized Laguerre polynomial $L_{1/2}^{(2)}(\cdot)$.
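The variance formula can likewise be checked numerically against SciPy (a small sketch, reusing the ${}_1F_1$ representation of $L_{1/2}$):

```python
import numpy as np
from scipy import stats
from scipy.special import hyp1f1

# Verify the variance formula above against scipy's implementation.
nu, sigma = 2.0, 1.5
L_half = hyp1f1(-0.5, 1, -nu**2 / (2 * sigma**2))
var_formula = 2 * sigma**2 + nu**2 - (np.pi * sigma**2 / 2) * L_half**2
print(np.isclose(var_formula, stats.rice.var(nu / sigma, scale=sigma)))  # True
```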
For large values of the argument, the Laguerre polynomial becomes [8]

$$L_\nu(x) \approx \frac{|x|^\nu}{\Gamma(1+\nu)} \quad \text{as } x \to -\infty.$$
It is seen that as ν becomes large or σ becomes small, the mean approaches ν and the variance approaches σ².
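This limiting behavior is easy to observe numerically (illustrative values; SciPy's `rice` uses $b = \nu/\sigma$ and scale $\sigma$):

```python
from scipy import stats

# For nu >> sigma, the Rice mean approaches nu and the variance approaches sigma^2.
nu, sigma = 50.0, 1.0
print(stats.rice.mean(nu / sigma, scale=sigma))  # ~50.01, close to nu
print(stats.rice.var(nu / sigma, scale=sigma))   # ~1.00, close to sigma^2
```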
The transition to a Gaussian approximation proceeds as follows. From Bessel function theory we have

$$I_0(z) \approx \frac{e^z}{\sqrt{2\pi z}} \quad \text{as } z \to \infty,$$

so, in the large $\frac{x\nu}{\sigma^2}$ region, an asymptotic expansion of the Rician distribution is:

$$f(x \mid \nu, \sigma) = \frac{x}{\sigma^2} \exp\left( \frac{-(x^2 + \nu^2)}{2\sigma^2} \right) I_0\left( \frac{x\nu}{\sigma^2} \right) \approx \frac{1}{\sigma\sqrt{2\pi}} \sqrt{\frac{x}{\nu}} \exp\left( \frac{-(x-\nu)^2}{2\sigma^2} \right).$$

Moreover, when the density is concentrated around $\nu$ because of the Gaussian exponent, we can also write $\sqrt{x/\nu} \approx 1$ and finally get the Normal approximation

$$f(x \mid \nu, \sigma) \approx \frac{1}{\sigma\sqrt{2\pi}} \exp\left( \frac{-(x-\nu)^2}{2\sigma^2} \right), \quad \frac{\nu}{\sigma} \gg 1.$$

The approximation becomes usable for $\frac{\nu}{\sigma} > 3$.
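A quick numerical look at the quality of the approximation (illustrative values chosen above the $\nu/\sigma > 3$ rule of thumb):

```python
import numpy as np
from scipy import stats

# Compare the exact Rice density with its normal approximation N(nu, sigma^2).
sigma = 1.0
for nu in (4.0, 8.0):
    x = np.linspace(nu - sigma, nu + sigma, 201)
    exact = stats.rice.pdf(x, nu / sigma, scale=sigma)
    approx = stats.norm.pdf(x, loc=nu, scale=sigma)
    # worst relative error within one sigma of nu; shrinks as nu/sigma grows
    print(nu, np.max(np.abs(exact - approx) / exact))
```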
There are three different methods for estimating the parameters of the Rice distribution: (1) the method of moments, [9] [10] [11] [12] (2) the method of maximum likelihood, [9] [10] [11] [13] and (3) the method of least squares.[citation needed] In the first two methods the interest is in estimating the parameters of the distribution, ν and σ, from a sample of data. This can be done using the method of moments, e.g., via the sample mean and the sample standard deviation. The sample mean is an estimate of $\mu_1'$ and the sample standard deviation is an estimate of $\mu_2^{1/2}$.
The following is an efficient method, known as the "Koay inversion technique", [14] for solving the estimating equations based on the sample mean and the sample standard deviation simultaneously. This inversion technique is also known as the fixed-point formula of SNR. Earlier works [9] [15] on the method of moments usually use a root-finding method to solve the problem, which is not efficient.
First, the ratio of the sample mean to the sample standard deviation is defined as $r$, i.e., $r = \mu_1'/\mu_2^{1/2}$. The fixed-point formula of SNR is expressed as

$$g(\theta) = \sqrt{\xi(\theta)\left[1 + r^2\right] - 2},$$

where $\theta$ is the ratio of the parameters, i.e., $\theta = \frac{\nu}{\sigma}$, and $\xi(\theta)$ is given by:

$$\xi(\theta) = 2 + \theta^2 - \frac{\pi}{8} \exp\left(-\theta^2/2\right) \left[ \left(2 + \theta^2\right) I_0\!\left(\frac{\theta^2}{4}\right) + \theta^2 I_1\!\left(\frac{\theta^2}{4}\right) \right]^2,$$

where $I_0$ and $I_1$ are modified Bessel functions of the first kind.

Note that $\xi(\theta)$ is a scaling factor of $\sigma$ and is related to $\nu$ by:

$$\nu = \sqrt{\mu_1'^{\,2} + \left(\xi(\theta) - 2\right)\sigma^2}.$$

To find the fixed point, $\theta^*$, of $g$, an initial solution is selected, $\theta_0$, that is greater than the lower bound, which is $\theta_{\text{lower bound}} = 0$ and occurs when $r = \sqrt{\pi/(4-\pi)}$ [14] (notice that this is the $r = \mu_1'/\mu_2^{1/2}$ of a Rayleigh distribution). This provides a starting point for the iteration $\theta_i = g(\theta_{i-1})$, which uses functional composition and continues until $\left| g^i(\theta_0) - g^{i-1}(\theta_0) \right|$ is less than some small positive value. Here, $g^i$ denotes the composition of the same function, $g$, $i$ times. In practice, we associate the final $\theta_n$ for some integer $n$ as the fixed point, $\theta^*$, i.e., $\theta^* = g(\theta^*)$.

Once the fixed point is found, the estimates $\hat{\nu}$ and $\hat{\sigma}$ are found through the scaling function, $\xi(\theta)$, as follows:

$$\hat{\sigma} = \sqrt{\frac{\mu_2}{\xi(\theta^*)}},$$

and

$$\hat{\nu} = \sqrt{\mu_1'^{\,2} + \left(\xi(\theta^*) - 2\right)\hat{\sigma}^2}.$$
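A minimal Python sketch of the fixed-point iteration described above, tested on simulated data (function names are ours; it assumes $r$ exceeds the Rayleigh lower bound so the square root stays real):

```python
import numpy as np
from scipy.special import i0e, i1e

def xi(theta):
    """Scaling factor xi(theta). The factor exp(-theta^2/2) multiplying the
    squared bracket is absorbed by the exponentially scaled Bessels i0e/i1e."""
    t2 = theta**2
    b = (2 + t2) * i0e(t2 / 4) + t2 * i1e(t2 / 4)
    return 2 + t2 - (np.pi / 8) * b**2

def koay_inversion(mean, sd, theta0=1.0, tol=1e-9, max_iter=500):
    """Iterate theta <- g(theta) for theta = nu/sigma, then recover
    (nu, sigma) through xi, given the sample mean and standard deviation."""
    r = mean / sd
    theta = theta0
    for _ in range(max_iter):
        theta_new = np.sqrt(max(xi(theta) * (1 + r**2) - 2, 0.0))
        if abs(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    sigma_hat = sd / np.sqrt(xi(theta))
    nu_hat = theta * sigma_hat
    return nu_hat, sigma_hat

# Simulated check: draw Rice(nu=3, sigma=1) samples and invert the moments.
rng = np.random.default_rng(1)
z = rng.standard_normal((2, 100_000))
samples = np.abs(3.0 + z[0] + 1j * z[1])
print(koay_inversion(samples.mean(), samples.std(ddof=1)))  # ~ (3.0, 1.0)
```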
To speed up the iteration even more, one can use Newton's method of root finding. [14] This particular approach is highly efficient.
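Since the fixed point satisfies $g(\theta) - \theta = 0$, the same estimate can be obtained by handing that equation to a standard root finder; a small sketch using SciPy's `newton` (the value of $r$ is illustrative, and `xi` is the same scaling factor defined in the previous snippet):

```python
import numpy as np
from scipy.optimize import newton
from scipy.special import i0e, i1e

def xi(theta):  # same scaling factor as in the previous sketch
    t2 = theta**2
    return 2 + t2 - (np.pi / 8) * ((2 + t2) * i0e(t2 / 4) + t2 * i1e(t2 / 4))**2

r = 3.28  # sample mean / sample sd; this value corresponds to theta* near 3
theta_star = newton(lambda th: np.sqrt(xi(th) * (1 + r**2) - 2) - th, x0=2.0)
print(theta_star)  # ~3.0
```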