Confidence and prediction bands

Figure: 96% confidence bands around a local polynomial fit to botanical data (first bloom index).

A confidence band is used in statistical analysis to represent the uncertainty in an estimate of a curve or function based on limited or noisy data. Similarly, a prediction band is used to represent the uncertainty about the value of a new data-point on the curve, but subject to noise. Confidence and prediction bands are often used as part of the graphical presentation of results of a regression analysis.


Confidence bands are closely related to confidence intervals, which represent the uncertainty in an estimate of a single numerical value. "As confidence intervals, by construction, only refer to a single point, they are narrower (at this point) than a confidence band which is supposed to hold simultaneously at many points." [1]

Pointwise and simultaneous confidence bands

Suppose our aim is to estimate a function f(x). For example, f(x) might be the proportion of people of a particular age x who support a given candidate in an election. If x is measured at the precision of a single year, we can construct a separate 95% confidence interval for each age. Each of these confidence intervals covers the corresponding true value f(x) with confidence 0.95. Taken together, these confidence intervals constitute a 95% pointwise confidence band for f(x).

In mathematical terms, a pointwise confidence band $\hat{f}(x) \pm w(x)$ with coverage probability 1 − α satisfies the following condition separately for each value of x:

$$\Pr\left(\hat{f}(x) - w(x) \le f(x) \le \hat{f}(x) + w(x)\right) \ge 1 - \alpha,$$

where $\hat{f}(x)$ is the point estimate of f(x) and w(x) is the half-width of the band at x.

The simultaneous coverage probability of a collection of confidence intervals is the probability that all of them cover their corresponding true values simultaneously. In the example above, the simultaneous coverage probability is the probability that the intervals for x = 18,19,... all cover their true values (assuming that 18 is the youngest age at which a person can vote). If each interval individually has coverage probability 0.95, the simultaneous coverage probability is generally less than 0.95. A 95% simultaneous confidence band is a collection of confidence intervals for all values x in the domain of f(x) that is constructed to have simultaneous coverage probability 0.95.

In mathematical terms, a simultaneous confidence band with coverage probability 1 − α satisfies the following condition:

$$\Pr\left(\hat{f}(x) - w(x) \le f(x) \le \hat{f}(x) + w(x) \text{ for all } x\right) \ge 1 - \alpha.$$

In nearly all cases, a simultaneous confidence band will be wider than a pointwise confidence band with the same coverage probability. In the definition of the pointwise band, the universal quantifier ("for all x") moves outside the probability statement, so coverage is required only separately at each x rather than jointly over all x.

Figure: Confidence bands for simulated data depicting the proportion of voters supporting a given candidate in an election, as a function of the voters' ages. Pointwise 95% confidence bands and simultaneous 95% confidence bands constructed using the Bonferroni correction are shown.
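
A construction like the one in the figure can be sketched in a few lines of code. The sketch below is illustrative only and is not taken from the analysis behind the figure: the age range, sample size per age, and the shape of the "true" support curve are invented, and the per-age intervals use the normal approximation to the binomial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented setup: ages 18..90, 200 respondents polled at each age,
# and a made-up "true" support curve f(x).
ages = np.arange(18, 91)
n_per_age = 200
true_p = np.clip(0.5 + 0.3 * np.sin((ages - 18) / 25.0), 0.05, 0.95)
p_hat = rng.binomial(n_per_age, true_p) / n_per_age         # estimated proportions

alpha = 0.05
se = np.sqrt(p_hat * (1 - p_hat) / n_per_age)               # normal-approximation standard error

# Pointwise band: each interval covers its own f(x) with probability 0.95.
z_point = stats.norm.ppf(1 - alpha / 2)

# Simultaneous band via Bonferroni: split alpha across all ages so that
# every interval covers its f(x) at once with probability at least 0.95.
z_simul = stats.norm.ppf(1 - alpha / (2 * len(ages)))

print(f"pointwise half-width at age 18:    {z_point * se[0]:.3f}")
print(f"simultaneous half-width at age 18: {z_simul * se[0]:.3f}")  # noticeably wider
```

Because the Bonferroni correction divides α among all of the intervals, its critical value is larger than the pointwise one, which is why the simultaneous band in the figure is wider at every age.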

Confidence bands in regression analysis

Confidence bands commonly arise in regression analysis. [2] In the case of a simple regression involving a single independent variable, results can be presented in the form of a plot showing the estimated regression line along with either point-wise or simultaneous confidence bands. Commonly used methods for constructing simultaneous confidence bands in regression are the Bonferroni and Scheffé methods; see Family-wise error rate controlling procedures for more.

Figure: Confidence bands for a simple linear regression analysis using simulated data. Pointwise 95% confidence bands and simultaneous 95% confidence bands constructed using Scheffé's method are shown.
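
For simple linear regression the Scheffé-type simultaneous band (the Working–Hotelling–Scheffé band) has a closed form: the pointwise t critical value is replaced by the square root of 2F, where F is the upper-α quantile of the F distribution with 2 and n − 2 degrees of freedom. The sketch below, on invented data, compares the two half-widths; the function name and data are illustrative, not a prescribed implementation.

```python
import numpy as np
from scipy import stats

def regression_bands(x, y, x_grid, alpha=0.05):
    """Half-widths of the pointwise and Scheffé-type (Working–Hotelling)
    simultaneous confidence bands for the mean response of a simple
    linear regression fit, evaluated on x_grid."""
    n = len(x)
    x_bar = x.mean()
    Sxx = np.sum((x - x_bar) ** 2)

    # Ordinary least-squares fit y = b0 + b1 * x.
    b1 = np.sum((x - x_bar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x_bar
    resid = y - (b0 + b1 * x)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))               # residual standard error

    y_hat = b0 + b1 * x_grid
    se_mean = s * np.sqrt(1.0 / n + (x_grid - x_bar) ** 2 / Sxx)

    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)              # pointwise critical value
    w_crit = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))  # simultaneous critical value
    return y_hat, t_crit * se_mean, w_crit * se_mean

# Invented example data.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)
fit, half_pointwise, half_simultaneous = regression_bands(x, y, np.linspace(0, 10, 101))
print(half_pointwise.max(), half_simultaneous.max())        # the simultaneous band is wider
```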

Confidence bands for probability distributions

Confidence bands can be constructed around estimates of the empirical distribution function. Simple theory allows the construction of point-wise confidence intervals, but it is also possible to construct a simultaneous confidence band for the cumulative distribution function as a whole by inverting the Kolmogorov–Smirnov test, or by using non-parametric likelihood methods. [3]
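
One simple route to such a simultaneous band, closely related to inverting the Kolmogorov–Smirnov test but slightly conservative, is the Dvoretzky–Kiefer–Wolfowitz (DKW) inequality, which gives a band of constant half-width √(ln(2/α)/(2n)) around the empirical distribution function. The sketch below uses an invented sample and is only one possible construction, not the method of reference [3].

```python
import numpy as np

def ecdf_band_dkw(sample, alpha=0.05):
    """Simultaneous (1 - alpha) confidence band for the CDF, based on the
    Dvoretzky–Kiefer–Wolfowitz inequality."""
    x = np.sort(sample)
    n = x.size
    ecdf = np.arange(1, n + 1) / n                      # empirical CDF at the sorted points
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))      # constant DKW half-width
    return x, np.clip(ecdf - eps, 0.0, 1.0), np.clip(ecdf + eps, 0.0, 1.0)

# Invented sample of size 100.
rng = np.random.default_rng(2)
grid, lower, upper = ecdf_band_dkw(rng.normal(size=100))
print(f"DKW half-width for n = 100: {np.sqrt(np.log(2 / 0.05) / 200):.3f}")   # about 0.136
```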

Other applications of confidence bands

Confidence bands arise whenever a statistical analysis focuses on estimating a function.

Confidence bands have also been devised for estimates of density functions, spectral density functions, [4] quantile functions, scatterplot smooths, survival functions, and characteristic functions.

Prediction bands

Prediction bands are related to prediction intervals in the same way that confidence bands are related to confidence intervals. Prediction bands commonly arise in regression analysis. The goal of a prediction band is to cover with a prescribed probability the values of one or more future observations from the same population from which a given data set was sampled. Just as prediction intervals are wider than confidence intervals, prediction bands will be wider than confidence bands.

In mathematical terms, a prediction band $\hat{f}(x) \pm w(x)$ with coverage probability 1 − α satisfies the following condition for each value of x:

$$\Pr\left(\hat{f}(x) - w(x) \le y^{*} \le \hat{f}(x) + w(x)\right) \ge 1 - \alpha,$$

where y* is an observation taken from the data-generating process at the given point x, independent of the data used to construct the point estimate $\hat{f}(x)$ and the half-width w(x). This is a pointwise prediction band. A simultaneous prediction band for a finite number of independent future observations could be constructed by, for example, using the Bonferroni method to widen the band by an appropriate amount.
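
In the simple linear regression setting, the pointwise prediction band differs from the corresponding confidence band only through an extra "+1" term in the standard error, which accounts for the noise in the new observation y* itself. The sketch below uses invented data and a hypothetical helper function; it shows one way to compute such a band, not a prescribed implementation.

```python
import numpy as np
from scipy import stats

def prediction_band(x, y, x_grid, alpha=0.05):
    """Pointwise (1 - alpha) prediction band for a new observation y* at each
    point of x_grid, under a simple linear regression model with normal errors."""
    n = len(x)
    x_bar = x.mean()
    Sxx = np.sum((x - x_bar) ** 2)
    b1 = np.sum((x - x_bar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * x_bar
    s = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))   # residual standard error

    y_hat = b0 + b1 * x_grid
    # The leading "1" is the variance contribution of the new observation;
    # it is what makes prediction bands wider than confidence bands.
    se_pred = s * np.sqrt(1.0 + 1.0 / n + (x_grid - x_bar) ** 2 / Sxx)
    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
    return y_hat - t_crit * se_pred, y_hat + t_crit * se_pred

# Invented data.
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)
lower, upper = prediction_band(x, y, np.linspace(0, 10, 101))
```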


References

  1. Härdle, W.; Müller, M.; Sperlich, S.; Werwatz, A. (2004). Nonparametric and Semiparametric Models. Springer. ISBN 3540207228. Section 3.5, "Confidence Intervals and Confidence Bands", p. 65. Archived from the original on 2013-04-12. Retrieved 2013-02-06.
  2. Liu, W.; Lin, S.; Piegorsch, W. W. (2008). "Construction of Exact Simultaneous Confidence Bands for a Simple Linear Regression Model". International Statistical Review. 76 (1): 39–57. doi:10.1111/j.1751-5823.2007.00027.x.
  3. Owen, A. B. (1995). "Nonparametric likelihood confidence bands for a distribution function". Journal of the American Statistical Association. 90 (430): 516–521. doi:10.2307/2291062. JSTOR 2291062.
  4. Neumann, M. H.; Paparoditis, E. (2008). "Simultaneous confidence bands in spectral density estimation". Biometrika. 95 (2): 381. CiteSeerX 10.1.1.569.3978. doi:10.1093/biomet/asn005.