Circular error probable

Figure: CEP concept and hit probability; 0.2% of impacts fall outside the outermost circle.

Circular error probable (CEP), [1] also circular error probability [2] or circle of equal probability, [3] is a measure of a weapon system's precision in the military science of ballistics. It is defined as the radius of a circle, centered on the aimpoint, that is expected to enclose the landing points of 50% of the rounds; in other words, it is the median error radius. [1] [4] That is, if a given munitions design has a CEP of 100 m, then when 100 munitions are targeted at the same point, an average of 50 will fall within a circle of radius 100 m about that point.


There are associated concepts, such as the DRMS (distance root mean square), which is the square root of the average squared distance error, and R95, which is the radius of the circle within which 95% of the values fall.

The concept of CEP also plays a role when measuring the accuracy of a position obtained by a navigation system, such as GPS or older systems such as LORAN and Loran-C.

Concept

Figure: a circular bivariate normal distribution.
Figure: example distribution of 20 hits.

The original concept of CEP was based on a circular bivariate normal distribution (CBN) with CEP as a parameter of the CBN just as μ and σ are parameters of the normal distribution. Munitions with this distribution behavior tend to cluster around the mean impact point, with most reasonably close, progressively fewer and fewer further away, and very few at long distance. That is, if CEP is n metres, 50% of shots land within n metres of the mean impact, 43.7% between n and 2n, and 6.1% between 2n and 3n metres, and the proportion of shots that land farther than three times the CEP from the mean is only 0.2%.
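
Under the circular bivariate normal assumption, the radial miss distance follows a Rayleigh distribution (see the Conversion section below), and the shell percentages above follow directly from its cumulative distribution function. The following Python sketch is a minimal illustration of this arithmetic, assuming an ideal circular normal impact pattern; the function name is illustrative only.

```python
import math

# For a circular bivariate normal impact pattern with per-axis standard
# deviation sigma, the radial miss distance is Rayleigh-distributed and
# CEP = sigma * sqrt(2 * ln 2), so measuring radii in CEP units gives
# P(R <= k * CEP) = 1 - exp(-k**2 * ln 2) = 1 - 2**(-k**2).
def fraction_within(k_cep):
    """Fraction of rounds landing within k_cep times the CEP of the mean impact point."""
    return 1.0 - math.exp(-(k_cep ** 2) * math.log(2.0))

bands = [
    ("within 1 CEP",        fraction_within(1)),
    ("between 1 and 2 CEP", fraction_within(2) - fraction_within(1)),
    ("between 2 and 3 CEP", fraction_within(3) - fraction_within(2)),
    ("beyond 3 CEP",        1.0 - fraction_within(3)),
]
for label, p in bands:
    print(f"{label}: {100 * p:.2f}%")
# prints approximately 50.00%, 43.75%, 6.05%, 0.20%, matching the figures quoted above
```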

CEP is not a good measure of accuracy when this distribution behavior is not met. Precision-guided munitions generally have more "close misses" and so are not normally distributed. Munitions may also have larger standard deviation of range errors than the standard deviation of azimuth (deflection) errors, resulting in an elliptical confidence region. Munition samples may not be exactly on target, that is, the mean vector will not be (0,0). This is referred to as bias.

To incorporate accuracy into the CEP concept in these conditions, CEP can be defined as the square root of the mean square error (MSE). The MSE will be the sum of the variance of the range error plus the variance of the azimuth error plus the covariance of the range error with the azimuth error plus the square of the bias. Thus the MSE results from pooling all these sources of error; geometrically, its square root corresponds to the radius of a circle within which 50% of rounds will land.

Several methods have been introduced to estimate CEP from shot data. Included in these methods are the plug-in approach of Blischke and Halpin (1966), the Bayesian approach of Spall and Maryak (1992), and the maximum likelihood approach of Winkler and Bickert (2012). The Spall and Maryak approach applies when the shot data represent a mixture of different projectile characteristics (e.g., shots from multiple munitions types or from multiple locations directed at one target).
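
As a rough illustration of the plug-in idea only (not the specific procedures of Blischke and Halpin, Spall and Maryak, or Winkler and Bickert), a naive estimate simply takes the median radial miss distance of a sample of impact points; measuring the radii from the aimpoint rather than from the mean impact point folds the bias discussed above into the estimate. A minimal Python sketch, with illustrative function and variable names:

```python
import math
import statistics

def naive_cep_estimate(impacts, aimpoint=(0.0, 0.0), include_bias=True):
    """Median radial miss distance of a sample of (x, y) impact points.

    If include_bias is True, radii are measured from the aimpoint, so a
    systematic offset inflates the estimate; otherwise they are measured
    from the mean impact point and reflect precision only.
    """
    if include_bias:
        cx, cy = aimpoint
    else:
        cx = statistics.fmean(x for x, _ in impacts)
        cy = statistics.fmean(y for _, y in impacts)
    radii = [math.hypot(x - cx, y - cy) for x, y in impacts]
    return statistics.median(radii)

# Example: four shots scattered around an aimpoint at the origin (metres)
shots = [(12.0, -8.0), (-20.0, 5.0), (7.0, 15.0), (-3.0, -25.0)]
print(f"naive CEP estimate: {naive_cep_estimate(shots):.1f} m")
```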

Conversion

While 50% is a very common definition for CEP, the circle dimension can be defined for other percentages. Percentiles can be determined by recognizing that the horizontal position error is defined by a 2D vector whose components are two orthogonal Gaussian random variables (one for each axis), assumed uncorrelated, each having a standard deviation $\sigma$. The distance error is the magnitude of that vector; it is a property of 2D Gaussian vectors that the magnitude follows the Rayleigh distribution with scale factor $\sigma$. The distance root mean square (DRMS) is $\sqrt{2}\,\sigma$ and doubles as a sort of standard deviation, since errors within this value make up 63% of the sample represented by the bivariate circular distribution. In turn, the properties of the Rayleigh distribution are that its percentile at level $F \in [0\%, 100\%]$ is given by the following formula:

$$Q(F, \sigma) = \sigma \sqrt{-2 \ln\left(1 - \frac{F}{100\%}\right)}$$

or, expressed in terms of the DRMS:

$$Q(F, \mathrm{DRMS}) = \frac{\mathrm{DRMS}}{\sqrt{2}} \sqrt{-2 \ln\left(1 - \frac{F}{100\%}\right)}$$

The relation between $Q$ and $F$ is given by the following table, where the $F$ values for DRMS and 2DRMS (twice the distance root mean square) are specific to the Rayleigh distribution and are found numerically, while the CEP, R95 (95% radius) and R99.7 (99.7% radius) values are defined based on the 68–95–99.7 rule.

Measure   Probability (%)
DRMS      63.213...
CEP       50
2DRMS     98.169...
R95       95
R99.7     99.7

We can then derive a conversion table to convert values expressed for one percentile level to another. [5] [6] This conversion table, giving the coefficient by which a value quoted at the row's level is multiplied to obtain the corresponding value at the column's level, is given by:

From \ To    RMS (σ)   CEP      DRMS     R95      2DRMS    R99.7
RMS (σ)      1.00      1.18     1.41     2.45     2.83     3.41
CEP          0.849     1.00     1.20     2.08     2.40     2.90
DRMS         0.707     0.833    1.00     1.73     2.00     2.41
R95          0.409     0.481    0.578    1.00     1.16     1.39
2DRMS        0.354     0.416    0.500    0.865    1.00     1.21
R99.7        0.293     0.345    0.415    0.718    0.830    1.00

For example, a GPS receiver having a 1.25 m DRMS will have a 1.25 m × 1.73 = 2.16 m 95% radius.
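
The coefficients in the table can be reproduced from the Rayleigh percentile formula above. The Python sketch below is a minimal illustration (the function names `rayleigh_quantile` and `conversion` are illustrative); the 63.213% and 98.169% levels are the Rayleigh CDF evaluated at the DRMS and 2DRMS radii, i.e. 1 − e⁻¹ and 1 − e⁻⁴.

```python
import math

def rayleigh_quantile(p, sigma=1.0):
    """Radius containing a fraction p of a Rayleigh distribution with scale sigma."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - p))

# Characteristic radii for unit sigma (the per-axis standard deviation).
# The DRMS and 2DRMS probability levels are the Rayleigh CDF at sqrt(2) and 2*sqrt(2).
radii = {
    "RMS (sigma)": 1.0,
    "CEP":         rayleigh_quantile(0.50),                  # ~1.1774
    "DRMS":        rayleigh_quantile(1.0 - math.exp(-1.0)),  # = sqrt(2)
    "R95":         rayleigh_quantile(0.95),                  # ~2.4477
    "2DRMS":       rayleigh_quantile(1.0 - math.exp(-4.0)),  # = 2*sqrt(2)
    "R99.7":       rayleigh_quantile(0.997),                 # ~3.4086
}

def conversion(src, dst):
    """Coefficient to convert a value quoted as `src` into one quoted as `dst`."""
    return radii[dst] / radii[src]

print(f"sigma -> CEP: {conversion('RMS (sigma)', 'CEP'):.2f}")  # ~1.18
print(f"DRMS -> R95:  {conversion('DRMS', 'R95'):.2f}")         # ~1.73 (the GPS example above)
print(f"CEP -> R95:   {conversion('CEP', 'R95'):.2f}")          # ~2.08
```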

Warning: sensor datasheets and other publications often state "RMS" values, which in general, but not always, [7] stand for DRMS values. Also, be wary of habits carried over from the 1D normal distribution, such as the 68–95–99.7 rule, which would in essence suggest that "R95 = 2DRMS"; as shown above, these properties do not carry over to distance errors. Finally, note that these values are derived for a theoretical distribution; while they generally hold for real data, the data may be affected by effects that the model does not represent.

See also

Related Research Articles

Central tendency
Estimator
Normal distribution
Standard deviation
Mean squared error
Pearson correlation coefficient
Rayleigh distribution
Margin of error
Standard error
Propagation of uncertainty
Studentized residual
Circular distribution
Unimodality
Gaussian blur
Gaussian filter
Bias of an estimator
Root-mean-square deviation
Median absolute deviation
Unbiased estimation of standard deviation
Error analysis for the Global Positioning System

References

  1. Circular Error Probable (CEP), Air Force Operational Test and Evaluation Center Technical Paper 6, Ver 2, July 1987, p. 1.
  2. Nelson, William (1988). "Use of Circular Error Probability in Target Detection". Bedford, MA: The MITRE Corporation; United States Air Force. Archived (PDF) from the original on October 28, 2014.
  3. Ehrlich, Robert (1985). Waging Nuclear Peace: The Technology and Politics of Nuclear Weapons. Albany, NY: State University of New York Press. p. 63.
  4. Payne, Craig, ed. (2006). Principles of Naval Weapon Systems. Annapolis, MD: Naval Institute Press. p. 342.
  5. Frank van Diggelen, "GPS Accuracy: Lies, Damn Lies, and Statistics", GPS World, Vol 9 No. 1, January 1998
  6. Frank van Diggelen, "GNSS Accuracy – Lies, Damn Lies and Statistics", GPS World, Vol 18 No. 1, January 2007. Sequel to previous article with similar title
  7. For instance, the International Hydrographic Organization, in the IHO standard for hydrographic survey S-44 (fifth edition) defines "the 95% confidence level for 2D quantities (e.g. position) is defined as 2.45 × standard deviation", which is true only if we are speaking about the standard deviation of the underlying 1D variable, defined as above.

Further reading