Bootstrapping (statistics)

Bootstrapping is any test or metric that uses random sampling with replacement (e.g. mimicking the sampling process), and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates. [1] [2] This technique allows estimation of the sampling distribution of almost any statistic using random sampling methods. [3] [4]


Bootstrapping estimates the properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution function of the observed data. In the case where a set of observations can be assumed to be from an independent and identically distributed population, this can be implemented by constructing a number of resamples of the observed data set, each drawn with replacement and of the same size as the observed data set.

It may also be used for constructing hypothesis tests. It is often used as an alternative to statistical inference based on the assumption of a parametric model when that assumption is in doubt, or where parametric inference is impossible or requires complicated formulas for the calculation of standard errors.


The bootstrap was published by Bradley Efron in "Bootstrap methods: another look at the jackknife" (1979), [5] [6] [7] inspired by earlier work on the jackknife. [8] [9] [10] Improved estimates of the variance were developed later. [11] [12] A Bayesian extension was developed in 1981. [13] The bias-corrected and accelerated (BCa) bootstrap was developed by Efron in 1987, [14] and the ABC procedure in 1992. [15]


The basic idea of bootstrapping is that inference about a population from sample data (sample → population) can be modelled by resampling the sample data and performing inference about a sample from resampled data (resampled → sample). As the population is unknown, the true error in a sample statistic against its population value is unknown. In bootstrap-resamples, the 'population' is in fact the sample, and this is known; hence the quality of inference of the 'true' sample from resampled data (resampled → sample) is measurable.

More formally, the bootstrap works by treating inference of the true probability distribution J, given the original data, as being analogous to inference of the empirical distribution Ĵ, given the resampled data. The accuracy of inferences regarding Ĵ using the resampled data can be assessed because we know Ĵ. If Ĵ is a reasonable approximation to J, then the quality of inference on J can in turn be inferred.

As an example, assume we are interested in the average (or mean) height of people worldwide. We cannot measure all the people in the global population, so instead we sample only a tiny part of it, and measure that. Assume the sample is of size N; that is, we measure the heights of N individuals. From that single sample, only one estimate of the mean can be obtained. In order to reason about the population, we need some sense of the variability of the mean that we have computed. The simplest bootstrap method involves taking the original data set of heights, and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size N. The bootstrap sample is taken from the original by using sampling with replacement (e.g. we might 'resample' 5 times from [1,2,3,4,5] and get [2,5,4,4,1]), so, assuming N is sufficiently large, for all practical purposes there is virtually zero probability that it will be identical to the original "real" sample. This process is repeated a large number of times (typically 1,000 or 10,000 times), and for each of these bootstrap samples we compute its mean (each of these is called a bootstrap estimate). We can now create a histogram of bootstrap means. This histogram provides an estimate of the shape of the distribution of the sample mean, from which we can answer questions about how much the mean varies across samples. (The method here, described for the mean, can be applied to almost any other statistic or estimator.)
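To make the procedure concrete, here is a minimal sketch in Python with NumPy; the height values are simulated placeholders, not real measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Pretend these are the N = 50 measured heights (in cm); placeholder data.
heights = rng.normal(170, 10, size=50)

B = 10_000                      # number of bootstrap resamples
boot_means = np.empty(B)
for b in range(B):
    # Resample N observations with replacement from the original sample.
    resample = rng.choice(heights, size=heights.size, replace=True)
    boot_means[b] = resample.mean()

# The spread of the bootstrap means estimates the variability of the sample
# mean; a histogram of boot_means estimates the shape of its distribution.
print("sample mean:", heights.mean())
print("bootstrap standard error:", boot_means.std(ddof=1))
```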



A great advantage of the bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients. The bootstrap is also an appropriate way to control and check the stability of the results. Although for most problems it is impossible to know the true confidence interval, the bootstrap is asymptotically more accurate than the standard intervals obtained using sample variance and assumptions of normality. [16] Bootstrapping is also a convenient method that avoids the cost of repeating the experiment to get other groups of sample data.


Although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. The result may depend on the representative sample. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g. independence of samples) where these would be more formally stated in other approaches. Also, bootstrapping can be time-consuming.


Scholars have recommended more bootstrap samples as available computing power has increased. If the results may have substantial real-world consequences, then one should use as many samples as is reasonable, given available computing power and time. Increasing the number of samples cannot increase the amount of information in the original data; it can only reduce the effects of random sampling errors which can arise from a bootstrap procedure itself. Moreover, there is evidence that numbers of samples greater than 100 lead to negligible improvements in the estimation of standard errors. [17] In fact, according to the original developer of the bootstrapping method, even setting the number of samples at 50 is likely to lead to fairly good standard error estimates. [18]

Adèr et al. recommend the bootstrap procedure for the following situations: [19]

  • When the theoretical distribution of a statistic of interest is complicated or unknown. Since the bootstrapping procedure is distribution-independent it provides an indirect method to assess the properties of the distribution underlying the sample and the parameters of interest that are derived from this distribution.
  • When the sample size is insufficient for straightforward statistical inference. If the underlying distribution is well-known, bootstrapping provides a way to account for the distortions caused by the specific sample that may not be fully representative of the population.
  • When power calculations have to be performed, and a small pilot sample is available. Most power and sample size calculations are heavily dependent on the standard deviation of the statistic of interest. If the estimate used is incorrect, the required sample size will also be wrong. One method to get an impression of the variation of the statistic is to use a small pilot sample and perform bootstrapping on it to get an impression of the variance.

However, Athreya has shown [20] that if one performs a naive bootstrap on the sample mean when the underlying population lacks a finite variance (for example, a power law distribution), then the bootstrap distribution will not converge to the same limit as the sample mean. As a result, confidence intervals on the basis of a Monte Carlo simulation of the bootstrap could be misleading. Athreya states that "Unless one is reasonably sure that the underlying distribution is not heavy tailed, one should hesitate to use the naive bootstrap".

Types of bootstrap scheme

In univariate problems, it is usually acceptable to resample the individual observations with replacement ("case resampling" below), unlike subsampling, in which resampling is without replacement and is valid under much weaker conditions than the bootstrap. In small samples, a parametric bootstrap approach might be preferred. For other problems, a smooth bootstrap will likely be preferred.

For regression problems, various other alternatives are available. [21]

Case resampling

Bootstrap is generally useful for estimating the distribution of a statistic (e.g. mean, variance) without using normal theory (e.g. z-statistic, t-statistic). Bootstrap comes in handy when there is no analytical form or normal theory to help estimate the distribution of the statistics of interest, since bootstrap methods can apply to most random quantities, e.g., the ratio of variance and mean. There are at least two ways of performing case resampling.

  1. The Monte Carlo algorithm for case resampling is quite simple. First, we resample the data with replacement, and the size of the resample must be equal to the size of the original data set. Then the statistic of interest is computed from the resample from the first step. We repeat this routine many times to get a more precise estimate of the bootstrap distribution of the statistic.
  2. The 'exact' version for case resampling is similar, but we exhaustively enumerate every possible resample of the data set. This can be computationally expensive, as there are a total of $\binom{2n-1}{n}$ different resamples, where n is the size of the data set. Thus for n = 5, 10, 20, 30 there are 126, 92378, 6.89 × 10^10 and 5.91 × 10^16 different resamples respectively [22] (the sketch below checks these counts).
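The counts quoted for the 'exact' case follow directly from the binomial coefficient; a quick check using only the Python standard library:

```python
from math import comb

# Number of distinct resamples with replacement of an n-element data set,
# C(2n - 1, n), for the sample sizes quoted above.
for n in (5, 10, 20, 30):
    print(n, comb(2 * n - 1, n))
# 5  126
# 10 92378
# 20 68923264410          (about 6.89 × 10^10)
# 30 59132290782430712    (about 5.91 × 10^16)
```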

Estimating the distribution of sample mean

Consider a coin-flipping experiment. We flip the coin and record whether it lands heads or tails. Let X = x1, x2, …, x10 be 10 observations from the experiment. xi = 1 if the i-th flip lands heads, and 0 otherwise. From normal theory, we can use the t-statistic to estimate the distribution of the sample mean, $\bar{x} = \frac{1}{10}(x_1 + x_2 + \cdots + x_{10})$.

Instead, we use the bootstrap, specifically case resampling, to derive the distribution of $\bar{x}$. We first resample the data to obtain a bootstrap resample. An example of the first resample might look like this: X1* = x2, x1, x10, x10, x3, x4, x6, x7, x1, x9. There are some duplicates, since a bootstrap resample comes from sampling with replacement from the data. Also, the number of data points in a bootstrap resample is equal to the number of data points in our original observations. Then we compute the mean of this resample and obtain the first bootstrap mean: μ1*. We repeat this process to obtain the second resample X2* and compute the second bootstrap mean μ2*. If we repeat this 100 times, then we have μ1*, μ2*, ..., μ100*. This represents an empirical bootstrap distribution of the sample mean. From this empirical distribution, one can derive a bootstrap confidence interval for the purpose of hypothesis testing.


In regression problems, case resampling refers to the simple scheme of resampling individual cases – often rows of a data set. For regression problems, as long as the data set is fairly large, this simple scheme is often acceptable. However, the method is open to criticism.[citation needed]

In regression problems, the explanatory variables are often fixed, or at least observed with more control than the response variable. Also, the range of the explanatory variables defines the information available from them. Therefore, to resample cases means that each bootstrap sample will lose some information. As such, alternative bootstrap procedures should be considered.

Bayesian bootstrap

Bootstrapping can be interpreted in a Bayesian framework using a scheme that creates new data sets through reweighting the initial data. Given a set of $n$ data points, the weighting assigned to data point $i$ in a new data set is $w_i = x_{(i)} - x_{(i-1)}$, where $x_{(1)}, \dots, x_{(n-1)}$ is a low-to-high ordered list of $n-1$ uniformly distributed random numbers on $[0, 1]$, preceded by $x_{(0)} = 0$ and succeeded by $x_{(n)} = 1$. The distributions of a parameter inferred from considering many such data sets are then interpretable as posterior distributions on that parameter. [23]
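A minimal sketch of one such reweighting step, following the gap-based construction above (the data values are hypothetical; the resulting weight vector is equivalently a single draw from a flat Dirichlet distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([2.1, 3.5, 4.8, 5.0, 7.3])   # hypothetical observations
n = data.size

# Gaps between 0, the n - 1 sorted uniforms, and 1 give n weights summing to 1.
u = np.sort(rng.uniform(size=n - 1))
edges = np.concatenate(([0.0], u, [1.0]))
weights = np.diff(edges)

# One draw from the posterior of the mean under the Bayesian bootstrap;
# repeating this many times traces out the posterior distribution.
posterior_mean_draw = np.sum(weights * data)
print(weights, posterior_mean_draw)
```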

Smooth bootstrap

Under this scheme, a small amount of (usually normally distributed) zero-centered random noise is added onto each resampled observation. This is equivalent to sampling from a kernel density estimate of the data. Assume K to be a symmetric kernel density function with unit variance. The standard kernel estimator $\hat{f}_h(x)$ of $f(x)$ is

$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right),$$

where $h$ is the smoothing parameter. The corresponding distribution function estimator $\hat{F}_h(x)$ is

$$\hat{F}_h(x) = \int_{-\infty}^{x} \hat{f}_h(t)\, dt.$$
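In practice the smooth bootstrap does not require evaluating these estimators explicitly; it suffices to resample and add kernel noise. A minimal sketch with a Gaussian kernel, where the bandwidth h is an assumed illustrative value:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(size=100)   # hypothetical observations
h = 0.2                            # smoothing parameter (assumed for illustration)

B = 5000
smooth_boot_medians = np.empty(B)
for b in range(B):
    resample = rng.choice(data, size=data.size, replace=True)
    # Adding zero-centred Gaussian noise scaled by h is equivalent to
    # sampling from the kernel density estimate of the data.
    smooth_boot_medians[b] = np.median(resample + h * rng.standard_normal(data.size))
```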


Parametric bootstrap

Based on the assumption that the original data set is a realization of a random sample from a distribution of a specific parametric type, a parametric model $F_\theta$ is fitted, with the parameter $\theta$ often estimated by maximum likelihood, and samples of random numbers are drawn from this fitted model. Usually the sample drawn has the same sample size as the original data. Then the estimate of the original function $F$ can be written as $\hat{F} = F_{\hat{\theta}}$. This sampling process is repeated many times as for other bootstrap methods. Considering the centered sample mean in this case, the original distribution function $F$ of the random sample is replaced by a bootstrap random sample with function $F_{\hat{\theta}}$, and the probability distribution of $\bar{X}_n - \mu$ is approximated by that of $\bar{X}^*_n - \mu^*$, where $\mu^* = \mu_{\hat{\theta}}$, which is the expectation corresponding to $F_{\hat{\theta}}$. [25] The use of a parametric model at the sampling stage of the bootstrap methodology leads to procedures which are different from those obtained by applying basic statistical theory to inference for the same model.
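As an illustration, a minimal sketch of a parametric bootstrap under an assumed normal model, with the parameters fitted by maximum likelihood (the data are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, size=80)   # hypothetical observations

mu_hat = data.mean()                   # MLE of the mean
sigma_hat = data.std(ddof=0)           # MLE of the standard deviation

B = 5000
boot_means = np.empty(B)
for b in range(B):
    # Resample from the fitted parametric model, not from the data themselves.
    sample = rng.normal(mu_hat, sigma_hat, size=data.size)
    boot_means[b] = sample.mean()

# Distribution of the centred mean, approximating that of the sample mean minus mu.
centred = boot_means - mu_hat
print("parametric bootstrap SE of the mean:", boot_means.std(ddof=1))
```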

Resampling residuals

Another approach to bootstrapping in regression problems is to resample residuals. The method proceeds as follows.

  1. Fit the model and retain the fitted values $\hat{y}_i$ and the residuals $\hat{\varepsilon}_i = y_i - \hat{y}_i$, for $i = 1, \dots, n$.
  2. For each pair, (xi, yi), in which xi is the (possibly multivariate) explanatory variable, add a randomly resampled residual, $\hat{\varepsilon}_j$, to the fitted value $\hat{y}_i$. In other words, create synthetic response variables $y^*_i = \hat{y}_i + \hat{\varepsilon}_j$, where j is selected randomly from the list (1, ..., n) for every i.
  3. Refit the model using the fictitious response variables $y^*_i$, and retain the quantities of interest (often the parameters, $\hat{\mu}^*$, estimated from the synthetic $y^*_i$).
  4. Repeat steps 2 and 3 a large number of times.

This scheme has the advantage that it retains the information in the explanatory variables. However, a question arises as to which residuals to resample. Raw residuals are one option; another is studentized residuals (in linear regression). Although there are arguments in favour of using studentized residuals, in practice it often makes little difference, and it is easy to compare the results of both schemes.
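A minimal sketch of residual resampling for a simple linear regression, following the numbered steps above with raw residuals (the data are simulated placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
y = 1.5 + 0.8 * x + rng.normal(0, 1, size=x.size)   # hypothetical data

X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)    # step 1: fit the model
fitted = X @ beta_hat
resid = y - fitted                                  # step 1: retain residuals

B = 2000
boot_betas = np.empty((B, 2))
for b in range(B):
    # Step 2: add randomly resampled residuals to the fitted values.
    y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
    # Step 3: refit and retain the parameter estimates.
    boot_betas[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

print("bootstrap SEs of (intercept, slope):", boot_betas.std(axis=0, ddof=1))
```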

Gaussian process regression bootstrap

When data are temporally correlated, straightforward bootstrapping destroys the inherent correlations. This method uses Gaussian process regression (GPR) to fit a probabilistic model from which replicates may then be drawn. GPR is a Bayesian non-linear regression method. A Gaussian process (GP) is a collection of random variables, any finite number of which have a joint Gaussian (normal) distribution. A GP is defined by a mean function and a covariance function, which specify the mean vectors and covariance matrices for each finite collection of the random variables. [26]

Regression model:

$$y(x) = f(x) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2),$$

where $\varepsilon$ is a noise term.

Gaussian process prior:

For any finite collection of variables, $x_1, \dots, x_n$, the function outputs $f(x_1), \dots, f(x_n)$ are jointly distributed according to a multivariate Gaussian with mean $m = [m(x_1), \dots, m(x_n)]^\top$ and covariance matrix $(K)_{ij} = k(x_i, x_j)$.

Assume $f(x) \sim \mathcal{GP}(m, k)$. Then $y(x) \sim \mathcal{GP}(m, l)$,

where $l(x_i, x_j) = k(x_i, x_j) + \sigma^2 \delta(x_i, x_j)$, and $\delta(x_i, x_j)$ is the standard Kronecker delta function. [26]

Gaussian process posterior:

According to the GP prior, the observed outputs are jointly distributed as

$$[y(x_1), \dots, y(x_r)] \sim \mathcal{N}(m_0, K_0),$$

where $m_0 = [m(x_1), \dots, m(x_r)]^\top$ and $K_0 = K + \sigma^2 I_r$, with $(K)_{ij} = k(x_i, x_j)$ and $I_r$ the $r \times r$ identity matrix.

Let $x_1^*, \dots, x_s^*$ be another finite collection of variables. It follows that

$$[y(x_1), \dots, y(x_r), f(x_1^*), \dots, f(x_s^*)]^\top \sim \mathcal{N}\!\left( \begin{bmatrix} m_0 \\ m_* \end{bmatrix}, \begin{bmatrix} K_0 & K_* \\ K_*^\top & K_{**} \end{bmatrix} \right),$$

where $m_* = [m(x_1^*), \dots, m(x_s^*)]^\top$, $(K_*)_{ij} = k(x_i, x_j^*)$, and $(K_{**})_{ij} = k(x_i^*, x_j^*)$.

According to the equations above, the function values at the new points, conditional on the observed outputs $y$, are also jointly distributed according to a multivariate Gaussian. Thus,

$$[f(x_1^*), \dots, f(x_s^*)]^\top \mid y \sim \mathcal{N}(m_{\text{post}}, K_{\text{post}}),$$

where $m_{\text{post}} = m_* + K_*^\top K_0^{-1} (y - m_0)$ and $K_{\text{post}} = K_{**} - K_*^\top K_0^{-1} K_*$. [26]
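A compact sketch of drawing bootstrap-style replicates from the GP posterior, assuming a zero prior mean and a squared-exponential kernel with illustrative hyperparameters:

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, amp=1.0):
    # Squared-exponential covariance k(x, x'); an assumed illustrative choice.
    d = a[:, None] - b[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 30)
y = np.sin(x) + rng.normal(0, 0.1, size=x.size)   # hypothetical time-course data
sigma2 = 0.1 ** 2                                 # assumed noise variance

K0 = sq_exp_kernel(x, x) + sigma2 * np.eye(x.size)   # K + sigma^2 I
x_star = np.linspace(0, 5, 100)
K_star = sq_exp_kernel(x, x_star)
K_ss = sq_exp_kernel(x_star, x_star)

# Posterior mean and covariance (zero prior mean assumed).
m_post = K_star.T @ np.linalg.solve(K0, y)
K_post = K_ss - K_star.T @ np.linalg.solve(K0, K_star)
K_post += 1e-9 * np.eye(x_star.size)              # jitter for numerical stability

# Replicates drawn from the fitted probabilistic model.
replicates = rng.multivariate_normal(m_post, K_post, size=200)
```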

Wild bootstrap

The wild bootstrap, proposed originally by Wu (1986), [27] is suited when the model exhibits heteroskedasticity. The idea is, like the residual bootstrap, to leave the regressors at their sample value, but to resample the response variable based on the residual values. That is, for each replicate, one computes a new $y$ based on

$$y_i^* = \hat{y}_i + \hat{\varepsilon}_i v_i,$$

so the residuals are randomly multiplied by a random variable $v_i$ with mean 0 and variance 1. For most distributions of $v_i$ (but not Mammen's), this method assumes that the 'true' residual distribution is symmetric and can offer advantages over simple residual sampling for smaller sample sizes. Different forms are used for the random variable $v_i$, such as

  • A two-point distribution suggested by Mammen (1993): [28]

$$v_i = \begin{cases} -(\sqrt{5}-1)/2 & \text{with probability } (\sqrt{5}+1)/(2\sqrt{5}), \\ (\sqrt{5}+1)/2 & \text{with probability } (\sqrt{5}-1)/(2\sqrt{5}). \end{cases}$$

Approximately, Mammen's distribution is:

$$v_i = \begin{cases} -0.6180 & \text{with probability } 0.7236, \\ +1.6180 & \text{with probability } 0.2764. \end{cases}$$
Block bootstrap

The block bootstrap is used when the data, or the errors in a model, are correlated. In this case, a simple case or residual resampling will fail, as it is not able to replicate the correlation in the data. The block bootstrap tries to replicate the correlation by resampling inside blocks of data. The block bootstrap has been used mainly with data correlated in time (i.e. time series) but can also be used with data correlated in space, or among groups (so-called cluster data).

Time series: Simple block bootstrap

In the (simple) block bootstrap, the variable of interest is split into non-overlapping blocks.

Time series: Moving block bootstrap

In the moving block bootstrap, introduced by Künsch (1989), [29] data is split into n − b + 1 overlapping blocks of length b: observation 1 to b will be block 1, observation 2 to b + 1 will be block 2, etc. Then from these n − b + 1 blocks, n/b blocks will be drawn at random with replacement. Aligning these n/b blocks in the order they were picked gives the bootstrap observations.
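A minimal sketch of the moving block bootstrap, assuming for simplicity that the block length b divides n:

```python
import numpy as np

rng = np.random.default_rng(0)
series = rng.standard_normal(100)   # hypothetical time series, n = 100
n, b = series.size, 10              # block length b, assumed to divide n

# The n - b + 1 overlapping blocks of length b.
blocks = np.array([series[i:i + b] for i in range(n - b + 1)])

# Draw n/b blocks with replacement and concatenate them in order.
picks = rng.integers(0, n - b + 1, size=n // b)
boot_series = np.concatenate(blocks[picks])   # one length-n bootstrap replicate
```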

This bootstrap works with dependent data; however, the bootstrapped observations will not be stationary anymore by construction. But it was shown that varying the block length randomly can avoid this problem. [30] This method is known as the stationary bootstrap. Other related modifications of the moving block bootstrap are the Markovian bootstrap and a stationary bootstrap method that matches subsequent blocks based on standard deviation matching.

Time series: Maximum entropy bootstrap

Vinod (2006) [31] presents a method that bootstraps time series data using maximum entropy principles satisfying the ergodic theorem with mean-preserving and mass-preserving constraints. There is an R package, meboot, [32] that utilizes the method, which has applications in econometrics and computer science.

Cluster data: block bootstrap

Cluster data describes data where many observations per unit are observed. This could be observing many firms in many states, or observing students in many classes. In such cases, the correlation structure is simplified, and one does usually make the assumption that data is correlated within a group/cluster, but independent between groups/clusters. The structure of the block bootstrap is easily obtained (where the block just corresponds to the group), and usually only the groups are resampled, while the observations within the groups are left unchanged. Cameron et al. (2008) discuss this for clustered errors in linear regression. [33]

Methods for improving computational efficiency

The bootstrap is a powerful technique, although it may require substantial computing resources in both time and memory. Some techniques have been developed to reduce this burden. They can generally be combined with many of the different types of bootstrap schemes and various choices of statistic.

Poisson bootstrap

[Figure: Poisson approximation to the binomial. The binomial distribution converges to the Poisson distribution as n grows with np = 1 held fixed.]

The ordinary bootstrap requires the random selection of n elements from a list, which is equivalent to drawing from a multinomial distribution. This may require a large number of passes over the data, and it is challenging to run these computations in parallel. For large values of n, the Poisson bootstrap is an efficient method of generating bootstrapped data sets. [34] When generating a single bootstrap sample, instead of randomly drawing from the sample data with replacement, each data point is assigned a random weight distributed according to the Poisson distribution with $\lambda = 1$. For large sample data, this will approximate random sampling with replacement. This is due to the following approximation:

$$\lim_{n \to \infty} \operatorname{Binomial}(n, 1/n) = \operatorname{Poisson}(1).$$

This method also lends itself well to streaming data and growing data sets, since the total number of samples does not need to be known in advance of beginning to take bootstrap samples.
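A minimal sketch of one Poisson bootstrap replicate for the sample mean; each point receives an independent Poisson(1) weight instead of being drawn with replacement:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(10_000)           # hypothetical large sample

weights = rng.poisson(1.0, size=data.size)   # per-point Poisson(1) counts
# Weighted mean of this replicate; the weights need no global coordination,
# which is what makes the scheme easy to parallelise or stream.
boot_mean = np.sum(weights * data) / np.sum(weights)
```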

Bag of Little Bootstraps

For massive data sets, it is often computationally prohibitive to hold all the sample data in memory and resample from the sample data. The Bag of Little Bootstraps (BLB) [35] provides a method of pre-aggregating data before bootstrapping to reduce computational constraints. This works by partitioning the data set into equal-sized buckets and aggregating the data within each bucket. This pre-aggregated data set becomes the new sample data over which to draw samples with replacement. This method is similar to the block bootstrap, but the motivations and definitions of the blocks are very different. Under certain assumptions, the sample distribution should approximate the full bootstrapped scenario. One constraint is the bucket size $b(n) = n^\gamma$, and the authors recommend usage of $b(n) = n^{0.7}$ as a general solution.
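A rough sketch of BLB for the standard error of the mean, with an illustrative bucket size of n^0.7 and a fixed number of inner resamples per bucket:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(100_000)          # hypothetical large sample
n = data.size
bucket_size = int(n ** 0.7)                  # b(n) = n^0.7, as recommended
s = n // bucket_size                         # number of disjoint buckets

perm = rng.permutation(data)
se_per_bucket = []
for k in range(s):
    bucket = perm[k * bucket_size:(k + 1) * bucket_size]
    boot_means = np.empty(100)               # 100 inner resamples (illustrative)
    for r in range(100):
        # Multinomial counts summing to the FULL sample size n, so each
        # weighted statistic behaves like one computed on n points.
        counts = rng.multinomial(n, np.full(bucket_size, 1 / bucket_size))
        boot_means[r] = np.sum(counts * bucket) / n
    se_per_bucket.append(boot_means.std(ddof=1))

print("BLB estimate of the SE of the mean:", np.mean(se_per_bucket))
```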

Choice of statistic

The bootstrap distribution of a point estimator of a population parameter has been used to produce a bootstrapped confidence interval for the parameter's true value, if the parameter can be written as a function of the population's distribution.

Population parameters are estimated with many point estimators. Popular families of point-estimators include mean-unbiased minimum-variance estimators, median-unbiased estimators, Bayesian estimators (for example, the posterior distribution's mode, median, mean), and maximum-likelihood estimators.

A Bayesian point estimator and a maximum-likelihood estimator have good performance when the sample size is infinite, according to asymptotic theory. For practical problems with finite samples, other estimators may be preferable. Asymptotic theory suggests techniques that often improve the performance of bootstrapped estimators; the bootstrapping of a maximum-likelihood estimator may often be improved using transformations related to pivotal quantities. [36]

Deriving confidence intervals from the bootstrap distribution

The bootstrap distribution of a parameter-estimator has been used to calculate confidence intervals for its population parameter.[citation needed]

Bias, asymmetry, and confidence intervals

Methods for bootstrap confidence intervals

There are several methods for constructing confidence intervals from the bootstrap distribution of a real parameter:

  • Basic bootstrap, also known as the reverse percentile interval:

$$\big(2\hat{\theta} - \theta^*_{(1-\alpha/2)},\ 2\hat{\theta} - \theta^*_{(\alpha/2)}\big),$$

where $\theta^*_{(1-\alpha/2)}$ denotes the $1-\alpha/2$ percentile of the bootstrapped coefficients $\theta^*$.

  • Percentile bootstrap:

$$\big(\theta^*_{(\alpha/2)},\ \theta^*_{(1-\alpha/2)}\big),$$

where $\theta^*_{(1-\alpha/2)}$ denotes the $1-\alpha/2$ percentile of the bootstrapped coefficients $\theta^*$. See Davison and Hinkley (1997, equ. 5.18 p. 203) and Efron and Tibshirani (1993, equ 13.5 p. 171).

This method can be applied to any statistic. It will work well in cases where the bootstrap distribution is symmetrical and centered on the observed statistic [38] and where the sample statistic is median-unbiased and has maximum concentration (or minimum risk with respect to an absolute value loss function). When working with small sample sizes (i.e., less than 50), the basic / reversed percentile and percentile confidence intervals for (for example) the variance statistic will be too narrow. For example, with a sample of 20 points, a 90% confidence interval will include the true variance only 78% of the time. [39] The basic / reverse percentile confidence intervals are easier to justify mathematically [40] [37] but they are less accurate in general than percentile confidence intervals, and some authors discourage their use. [37]

  • Studentized bootstrap:

$$\big(\hat{\theta} - t^*_{(1-\alpha/2)} \cdot \widehat{\text{se}}_\theta,\ \hat{\theta} - t^*_{(\alpha/2)} \cdot \widehat{\text{se}}_\theta\big),$$

where $t^*_{(1-\alpha/2)}$ denotes the $1-\alpha/2$ percentile of the bootstrapped Student's t-statistic $t^* = (\hat{\theta}^* - \hat{\theta})/\widehat{\text{se}}^*_\theta$, and $\widehat{\text{se}}_\theta$ is the estimated standard error of the coefficient in the original model.

The studentized test enjoys optimal properties, as the statistic that is bootstrapped is pivotal (i.e. it does not depend on nuisance parameters; the t-statistic asymptotically follows a N(0,1) distribution), unlike the percentile bootstrap.
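A sketch computing the basic (reverse percentile) and percentile intervals from a vector of bootstrap replicates; the sample and the statistic (the mean) are illustrative:

```python
import numpy as np

def bootstrap_intervals(theta_hat, theta_star, alpha=0.05):
    # Empirical quantiles of the bootstrap distribution.
    lo, hi = np.quantile(theta_star, [alpha / 2, 1 - alpha / 2])
    percentile = (lo, hi)
    basic = (2 * theta_hat - hi, 2 * theta_hat - lo)   # reverse percentile
    return basic, percentile

rng = np.random.default_rng(0)
data = rng.exponential(size=60)   # hypothetical sample
theta_hat = data.mean()
theta_star = np.array([rng.choice(data, data.size, replace=True).mean()
                       for _ in range(5000)])
print(bootstrap_intervals(theta_hat, theta_star))
```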

Bootstrap hypothesis testing

Efron and Tibshirani [1] suggest the following algorithm for comparing the means of two independent samples: Let $x_1, \dots, x_n$ be a random sample from distribution F with sample mean $\bar{x}$ and sample variance $\sigma_x^2$. Let $y_1, \dots, y_m$ be another, independent random sample from distribution G with mean $\bar{y}$ and variance $\sigma_y^2$.

  1. Calculate the test statistic $t = \dfrac{\bar{x} - \bar{y}}{\sqrt{\sigma_x^2/n + \sigma_y^2/m}}$.
  2. Create two new data sets whose values are $x_i' = x_i - \bar{x} + \bar{z}$ and $y_i' = y_i - \bar{y} + \bar{z}$, where $\bar{z}$ is the mean of the combined sample.
  3. Draw a random sample ($x_i^*$) of size $n$ with replacement from $x_i'$ and another random sample ($y_i^*$) of size $m$ with replacement from $y_i'$.
  4. Calculate the test statistic $t^* = \dfrac{\bar{x}^* - \bar{y}^*}{\sqrt{\sigma_x^{*2}/n + \sigma_y^{*2}/m}}$.
  5. Repeat 3 and 4 $B$ times (e.g. $B = 1000$) to collect $B$ values of the test statistic.
  6. Estimate the p-value as $p = \frac{1}{B}\sum_{i=1}^{B} I\{t_i^* \geq t_{\text{obs}}\}$, where $I(\text{condition}) = 1$ when the condition is true and 0 otherwise (see the sketch below).
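A sketch of this algorithm for two hypothetical samples (plug-in variances could be used in place of ddof=1 without changing the idea):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=30)   # hypothetical sample from F
y = rng.normal(0.0, 1.2, size=40)   # hypothetical sample from G

def t_stat(a, b):
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / a.size
                                           + b.var(ddof=1) / b.size)

t_obs = t_stat(x, y)                        # step 1
z_bar = np.concatenate([x, y]).mean()
x_tilde = x - x.mean() + z_bar              # step 2: shift to a common mean
y_tilde = y - y.mean() + z_bar

B = 10_000
t_star = np.empty(B)
for i in range(B):                          # steps 3-5
    xs = rng.choice(x_tilde, size=x.size, replace=True)
    ys = rng.choice(y_tilde, size=y.size, replace=True)
    t_star[i] = t_stat(xs, ys)

p_value = np.mean(t_star >= t_obs)          # step 6
print(t_obs, p_value)
```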

Example applications

Smoothed bootstrap

In 1878, Simon Newcomb took observations on the speed of light. [41] The data set contains two outliers, which greatly influence the sample mean. (The sample mean need not be a consistent estimator for any population mean, because no mean need exist for a heavy-tailed distribution.) A well-defined and robust statistic for central tendency is the sample median, which is consistent and median-unbiased for the population median.

The bootstrap distribution for Newcomb's data appears below. A convolution method of regularization reduces the discreteness of the bootstrap distribution by adding a small amount of $N(0, \sigma^2)$ random noise to each bootstrap sample. A conventional choice is $\sigma = 1/\sqrt{n}$ for sample size n.[citation needed]

Histograms of the bootstrap distribution and the smooth bootstrap distribution appear below. The bootstrap distribution of the sample median has only a small number of values. The smoothed bootstrap distribution has a richer support.


In this example, the bootstrapped 95% (percentile) confidence interval for the population median is (26, 28.5), which is close to the interval (25.98, 28.46) for the smoothed bootstrap.
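A sketch of the smoothing step for the median, using placeholder data in place of Newcomb's measurements and noise standard deviation 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(66)      # placeholder for Newcomb's 66 observations
sigma = 1 / np.sqrt(data.size)      # conventional noise scale

B = 10_000
plain = np.empty(B)
smooth = np.empty(B)
for b in range(B):
    res = rng.choice(data, size=data.size, replace=True)
    plain[b] = np.median(res)                                    # ordinary bootstrap
    smooth[b] = np.median(res + rng.normal(0, sigma, res.size))  # smoothed bootstrap

# 'plain' takes only a handful of distinct values; 'smooth' has richer support.
print(np.quantile(smooth, [0.025, 0.975]))
```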

Relation to other approaches to inference

Relationship to other resampling methods

The bootstrap is distinguished from:

  • the jackknife procedure, used to estimate biases of sample statistics and to estimate variances, and
  • cross-validation, in which the parameters (e.g., regression weights, factor loadings) that are estimated in one subsample are applied to another subsample.

For more details see bootstrap resampling.

Bootstrap aggregating (bagging) is a meta-algorithm based on averaging the results of multiple bootstrap samples.


In situations where an obvious statistic can be devised to measure a required characteristic using only a small number, r, of data items, a corresponding statistic based on the entire sample can be formulated. Given an r-sample statistic, one can create an n-sample statistic by something similar to bootstrapping (taking the average of the statistic over all subsamples of size r). This procedure is known to have certain good properties and the result is a U-statistic. The sample mean and sample variance are of this form, for r = 1 and r = 2.
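A small sketch of the r = 2 case: averaging the kernel h(x, y) = (x - y)^2 / 2 over all pairs reproduces the unbiased sample variance:

```python
import numpy as np
from itertools import combinations

def u_statistic(data, kernel, r):
    # Average an r-sample kernel over all size-r subsamples of the data.
    return np.mean([kernel(*sub) for sub in combinations(data, r)])

data = np.array([1.0, 4.0, 2.0, 8.0])
var_u = u_statistic(data, lambda x, y: (x - y) ** 2 / 2, r=2)
print(var_u, data.var(ddof=1))   # both print 9.5833...: the U-statistic equals s^2
```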

References


  1. Efron, B.; Tibshirani, R. (1993). An Introduction to the Bootstrap. Boca Raton, FL: Chapman & Hall/CRC. ISBN 0-412-04231-2.
  2. Second Thoughts on the Bootstrap – Bradley Efron, 2003
  3. Varian, H.(2005). "Bootstrap Tutorial". Mathematica Journal, 9, 768–775.
  4. Weisstein, Eric W. "Bootstrap Methods." From MathWorld--A Wolfram Web Resource.
  5. Notes for Earliest Known Uses of Some of the Words of Mathematics: Bootstrap (John Aldrich)
  6. Earliest Known Uses of Some of the Words of Mathematics (B) (Jeff Miller)
  7. Efron, B. (1979). "Bootstrap methods: Another look at the jackknife". The Annals of Statistics . 7 (1): 1–26. doi: 10.1214/aos/1176344552 .
  8. Quenouille M (1949) Approximate tests of correlation in time-series. J Roy Statist Soc Ser B 11 68–84
  9. Tukey J (1958) Bias and confidence in not-quite large samples (abstract). Ann Math Statist 29 614
  10. Jaeckel L (1972) The infinitesimal jackknife. Memorandum MM72-1215-11, Bell Lab
  11. Bickel P, Freeman D (1981) Some asymptotic theory for the bootstrap. Ann Statist 9 1196–1217
  12. Singh K (1981) On the asymptotic accuracy of Efron’s bootstrap. Ann Statist 9 1187–1195
  13. Rubin D (1981). The Bayesian bootstrap. Ann Statist 9 130–134
  14. Efron, B. (1987). "Better Bootstrap Confidence Intervals". Journal of the American Statistical Association. 82 (397): 171–185. doi:10.2307/2289144. JSTOR 2289144.
  15. Diciccio T, Efron B (1992) More accurate confidence intervals in exponential families. Biometrika 79 231–245
  16. DiCiccio TJ, Efron B (1996) Bootstrap confidence intervals (with Discussion). Statistical Science 11: 189–228
  17. Goodhue, D.L., Lewis, W., & Thompson, R. (2012). Does PLS have advantages for small sample size or non-normal data? MIS Quarterly, 36(3), 981–1001.
  18. Efron, B., Rogosa, D., & Tibshirani, R. (2004). Resampling methods of estimation. In N.J. Smelser, & P.B. Baltes (Eds.). International Encyclopedia of the Social & Behavioral Sciences (pp. 13216–13220). New York, NY: Elsevier.
  19. Adèr, H. J., Mellenbergh G. J., & Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing. ISBN   978-90-79418-01-5.
  20. Bootstrap of the mean in the infinite variance case Athreya, K.B. Ann Stats vol 15 (2) 1987 724–731
  21. Efron B., R. J. Tibshirani, An introduction to the bootstrap, Chapman & Hall/CRC 1998
  22. How many different bootstrap samples are there?
  23. Rubin, D. B. (1981). "The Bayesian bootstrap". Annals of Statistics , 9, 130.
  24. Wang, Suojin (1995). "Optimizing the smoothed bootstrap". Ann. Inst. Statist. Math. 47: 65–80. doi:10.1007/BF00773412. S2CID 122041565.
  25. Dekking, Michel; et al. (2005). A Modern Introduction to Probability and Statistics: Understanding Why and How. London: Springer. ISBN 978-1-85233-896-1. OCLC 262680588.
  26. Kirk, Paul (2009). "Gaussian process regression bootstrapping: exploring the effects of uncertainty in time course data". Bioinformatics. 25 (10): 1300–1306. doi:10.1093/bioinformatics/btp139. PMC 2677737. PMID 19289448.
  27. Wu, C.F.J. (1986). "Jackknife, bootstrap and other resampling methods in regression analysis (with discussions)" (PDF). Annals of Statistics. 14: 1261–1350. doi: 10.1214/aos/1176350142 .
  28. Mammen, E. (Mar 1993). "Bootstrap and wild bootstrap for high dimensional linear models". Annals of Statistics. 21 (1): 255–285. doi: 10.1214/aos/1176349025 .
  29. Künsch, H. R. (1989). "The Jackknife and the Bootstrap for General Stationary Observations". Annals of Statistics. 17 (3): 1217–1241. doi: 10.1214/aos/1176347265 .
  30. Politis, D. N.; Romano, J. P. (1994). "The Stationary Bootstrap". Journal of the American Statistical Association. 89 (428): 1303–1313. doi:10.1080/01621459.1994.10476870. hdl: 10983/25607 .
  31. Vinod, HD (2006). "Maximum entropy ensembles for time series inference in economics". Journal of Asian Economics. 17 (6): 955–978. doi:10.1016/j.asieco.2006.09.001.
  32. Vinod, Hrishikesh; López-de-Lacalle, Javier (2009). "Maximum entropy bootstrap for time series: The meboot R package". Journal of Statistical Software. 29 (5): 1–19. doi: 10.18637/jss.v029.i05 .
  33. Cameron, A. C.; Gelbach, J. B.; Miller, D. L. (2008). "Bootstrap-based improvements for inference with clustered errors" (PDF). Review of Economics and Statistics . 90 (3): 414–427. doi:10.1162/rest.90.3.414.
  34. Chamandy, N; Muralidharan, O; Najmi, A; Naidu, S (2012). "Estimating Uncertainty for Massive Data Streams".
  35. Kleiner, A; Talwalkar, A; Sarkar, P; Jordan, M. I. (2014). "A scalable bootstrap for massive data". Journal of the Royal Statistical Society, Series B (Statistical Methodology). 76 (4): 795–816. arXiv: 1112.5016 . doi:10.1111/rssb.12050. ISSN   1369-7412. S2CID   3064206.
  36. Davison, A. C.; Hinkley, D. V. (1997). Bootstrap Methods and Their Application. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press. ISBN 0-521-57391-2.
  37. Hesterberg, Tim C (2014). "What Teachers Should Know about the Bootstrap: Resampling in the Undergraduate Statistics Curriculum". arXiv:1411.5279 [stat.OT].
  38. Efron, B. (1982). The jackknife, the bootstrap, and other resampling plans. 38. Society of Industrial and Applied Mathematics CBMS-NSF Monographs. ISBN   0-89871-179-7.
  39. Scheiner, S. (1998). Design and Analysis of Ecological Experiments . CRC Press. ISBN   0412035618.
  40. Rice, John. Mathematical Statistics and Data Analysis (2 ed.). p. 272. "Although this direct equation of quantiles of the bootstrap sampling distribution with confidence limits may seem initially appealing, it’s rationale is somewhat obscure."
  41. Data from examples in Bayesian Data Analysis
