Circular error probable (CEP), [1] also circular error probability [2] or circle of equal probability, [3] is a measure of a weapon system's precision in the military science of ballistics. It is defined as the radius of a circle, centered on the aimpoint, that is expected to enclose the landing points of 50% of the rounds; in other words, it is the median error radius. [1] [4] That is, if a given munition design has a CEP of 100 m, then when 100 munitions are aimed at the same point, an average of 50 will fall within a circle of radius 100 m around that point.
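Stated compactly (with $\mathbf{X}$ the impact point and $\mathbf{a}$ the aimpoint, notation introduced here only for illustration), the CEP is the median of the miss distance:

$$P\big(\lVert \mathbf{X} - \mathbf{a} \rVert \le \mathrm{CEP}\big) = \tfrac{1}{2}$$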
There are associated concepts, such as the DRMS (distance root mean square), which is the square root of the average squared distance error, and R95, which is the radius of the circle within which 95% of the values fall.
The concept of CEP also plays a role when measuring the accuracy of a position obtained by a navigation system, such as GPS or older systems such as LORAN and Loran-C.
The original concept of CEP was based on a circular bivariate normal distribution (CBN), with CEP as a parameter of the CBN just as μ and σ are parameters of the normal distribution. Munitions with this distribution behavior tend to cluster around the mean impact point, with most landing reasonably close, progressively fewer farther away, and very few at long distance. That is, if CEP is n metres, 50% of shots land within n metres of the mean impact point, 43.7% between n and 2n, and 6.1% between 2n and 3n metres, while the proportion of shots that land farther than three times the CEP from the mean is only 0.2%.
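A quick Monte Carlo check of those shell percentages; a minimal sketch in Python/NumPy, assuming a circular bivariate normal with the mean impact at the origin (the factor $\sqrt{2\ln 2} \approx 1.1774$ relating CEP to $\sigma$ follows from the Rayleigh quantile given later in this article):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
sigma = 1.0
cep = sigma * np.sqrt(2 * np.log(2))  # radius enclosing 50% of shots

# One million simulated impacts around the mean point of impact.
x = rng.normal(0.0, sigma, 1_000_000)
y = rng.normal(0.0, sigma, 1_000_000)
r = np.hypot(x, y)  # miss distance of each shot

for lo, hi in ((0, 1), (1, 2), (2, 3)):
    share = np.mean((r > lo * cep) & (r <= hi * cep))
    print(f"between {lo}x and {hi}x CEP: {share:.1%}")  # ~50.0%, ~43.7%, ~6.1%
print(f"beyond 3x CEP: {np.mean(r > 3 * cep):.1%}")     # ~0.2%
```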
CEP is not a good measure of accuracy when this distribution behavior is not met. Munitions may also have a larger standard deviation of range errors than of azimuth (deflection) errors, resulting in an elliptical confidence region. Munition samples may not be exactly on target; that is, the mean vector will not be (0, 0). This is referred to as bias.
To incorporate accuracy into the CEP concept in these conditions, CEP can be defined as the square root of the mean square error (MSE). The MSE is the sum of the variance of the range error, the variance of the azimuth error, the covariance of the range error with the azimuth error, and the square of the bias. Thus the MSE results from pooling all these sources of error, geometrically corresponding to the radius of a circle within which 50% of rounds will land.
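As a sketch of that pooling (function and variable names here are illustrative, not taken from the sources): the RMSE-style CEP is the root mean squared miss distance about the aimpoint, which automatically folds the component variances and the squared bias together.

```python
import numpy as np

def cep_rmse(impacts: np.ndarray, aim: np.ndarray) -> float:
    """Root mean squared miss distance about the aimpoint.

    For errors e = impacts - aim, the mean of |e|^2 equals the
    range-error variance plus the azimuth-error variance plus the
    squared bias, so the square root pools these sources of error.
    """
    errors = impacts - aim                      # per-shot (range, azimuth) errors
    return float(np.sqrt(np.mean(np.sum(errors**2, axis=1))))

# Toy example: three shots (range, azimuth) aimed at the origin.
shots = np.array([[120.0, -40.0], [60.0, 15.0], [-30.0, 80.0]])
print(cep_rmse(shots, np.array([0.0, 0.0])))
```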
Several methods have been introduced to estimate CEP from shot data. These include the plug-in approach of Blischke and Halpin (1966), the Bayesian approach of Spall and Maryak (1992), and the maximum likelihood approach of Winkler and Bickert (2012). The Spall and Maryak approach applies when the shot data represent a mixture of different projectile characteristics (e.g., shots from multiple munitions types or from multiple locations directed at one target).
While 50% is a very common definition for CEP, the circle dimension can be defined for other percentages. Percentiles can be determined by recognizing that the horizontal position error is defined by a 2D vector whose components are two orthogonal Gaussian random variables (one for each axis), assumed uncorrelated, each having a standard deviation $\sigma$. The distance error is the magnitude of that vector; it is a property of 2D Gaussian vectors that the magnitude follows the Rayleigh distribution with scale factor $\sigma$. The distance root mean square (DRMS), $\sigma_d = \sqrt{2}\,\sigma$, doubles as a sort of standard deviation, since errors within this value make up 63% of the sample represented by the bivariate circular distribution. In turn, the properties of the Rayleigh distribution are that its percentile at level $F$ (with $0 \le F < 1$) is given by the following formula:

$$Q(F, \sigma) = \sigma \sqrt{-2 \ln(1 - F)}$$

or, expressed in terms of the DRMS:

$$Q(F, \sigma_d) = \sigma_d \sqrt{-\ln(1 - F)}$$
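The quantile formula translates directly into code; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def radius_at(level: float, sigma: float) -> float:
    """Radius enclosing a fraction `level` of shots, for Rayleigh(sigma) miss distance."""
    return sigma * np.sqrt(-2.0 * np.log(1.0 - level))

sigma = 1.0
drms = np.sqrt(2.0) * sigma
print(radius_at(0.50, sigma))          # CEP   ~ 1.1774 * sigma
print(radius_at(0.95, sigma))          # R95   ~ 2.4477 * sigma
print(radius_at(0.997, sigma))         # R99.7 ~ 3.4086 * sigma
print(drms * np.sqrt(-np.log(0.05)))   # R95 again, via the DRMS form
```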
The relation between $Q$ and $F$ is given by the following table, where the values for DRMS and 2DRMS (twice the distance root mean square) are specific to the Rayleigh distribution and are found numerically, while the CEP, R95 (95% radius) and R99.7 (99.7% radius) values are defined based on the 68–95–99.7 rule.
| Measure | Probability (%) |
| --- | --- |
| DRMS | 63.212... |
| CEP | 50 |
| 2DRMS | 98.168... |
| R95 | 95 |
| R99.7 | 99.7 |
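For reference, the DRMS and 2DRMS probabilities follow from substituting $Q = \sqrt{2}\,\sigma$ and $Q = 2\sqrt{2}\,\sigma$ into the Rayleigh CDF $F(Q) = 1 - e^{-Q^2 / 2\sigma^2}$:

$$F(\sqrt{2}\,\sigma) = 1 - e^{-1} \approx 63.212\%, \qquad F(2\sqrt{2}\,\sigma) = 1 - e^{-4} \approx 98.168\%$$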
We can then derive a conversion table to convert values expressed for one percentile level into another. [5] [6] Said conversion table, giving the coefficients $\alpha$ to convert a value $X$ into $Y = \alpha X$, is given by:
| From \ To | RMS ($\sigma$) | CEP | DRMS | R95 | 2DRMS | R99.7 |
| --- | --- | --- | --- | --- | --- | --- |
| RMS ($\sigma$) | 1.00 | 1.18 | 1.41 | 2.45 | 2.83 | 3.41 |
| CEP | 0.849 | 1.00 | 1.20 | 2.08 | 2.40 | 2.90 |
| DRMS | 0.707 | 0.833 | 1.00 | 1.73 | 2.00 | 2.41 |
| R95 | 0.409 | 0.481 | 0.578 | 1.00 | 1.16 | 1.39 |
| 2DRMS | 0.354 | 0.416 | 0.500 | 0.865 | 1.00 | 1.21 |
| R99.7 | 0.293 | 0.345 | 0.415 | 0.718 | 0.830 | 1.00 |
For example, a GPS receiver having a 1.25 m DRMS will have a 1.25 m × 1.73 = 2.16 m 95% radius.
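The coefficients in the conversion table (and the GPS example above) can be regenerated from the same Rayleigh quantile; a minimal sketch, with names chosen here for illustration:

```python
import numpy as np

def radius(level):  # Rayleigh quantile for sigma = 1
    return np.sqrt(-2.0 * np.log(1.0 - level))

measures = {
    "RMS (sigma)": 1.0,
    "CEP":   radius(0.50),
    "DRMS":  np.sqrt(2.0),
    "R95":   radius(0.95),
    "2DRMS": 2.0 * np.sqrt(2.0),
    "R99.7": radius(0.997),
}

def convert(value, src, dst):
    """Convert a value of measure `src` into the equivalent `dst` measure."""
    return value * measures[dst] / measures[src]

print(round(convert(1.25, "DRMS", "R95"), 2))  # 2.16, matching the GPS example
```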
See also: normal distribution, standard deviation, Box–Muller transform, log-normal distribution, mean squared error, Rayleigh distribution, errors and residuals, standard error, coefficient of variation, dilution of precision (DOP), Monte Carlo integration, directional statistics, circular distribution, Rice distribution, chi distribution, Gaussian filter, sensitivity index, Bayes estimator, root mean square deviation (RMSD), wrapped normal distribution.