Best linear unbiased prediction

In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects. BLUP was derived by Charles Roy Henderson in 1950, but the term "best linear unbiased predictor" (or "prediction") seems not to have been used until 1962.[1] "Best linear unbiased predictions" (BLUPs) of random effects are similar to best linear unbiased estimates (BLUEs) (see Gauss–Markov theorem) of fixed effects. The distinction arises because it is conventional to talk about estimating fixed effects but predicting random effects; otherwise the two terms are equivalent. (This is a bit strange, since the random effects have already been "realized"; they already exist. The use of the term "prediction" may be because in the field of animal breeding in which Henderson worked, the random effects were usually genetic merit, which could be used to predict the quality of offspring (Robinson[1] page 28).) However, the equations for the "fixed" effects and for the random effects are different.

In practice, it is often the case that the parameters associated with the random effect(s) term(s) are unknown; these parameters are the variances of the random effects and residuals. Typically the parameters are estimated and plugged into the predictor, leading to the empirical best linear unbiased predictor (EBLUP). Notice that by simply plugging the estimated parameters into the predictor, the additional variability introduced by that estimation is unaccounted for, leading to overly optimistic (too small) prediction variances for the EBLUP.[citation needed]
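
As a concrete illustration, the following is a minimal sketch, not taken from the article itself, of EBLUP in a balanced one-way random-effects model: the two variance components are estimated from ANOVA mean squares and then plugged into the BLUP shrinkage formula. All names and simulated numbers are illustrative assumptions.

```python
# A minimal sketch of EBLUP in a balanced one-way random-effects model,
#   Yij = mu + xi_i + eps_ij,  xi_i ~ N(0, s2_xi),  eps_ij ~ N(0, s2_eps).
# Variance components are estimated from ANOVA mean squares and then
# plugged into the BLUP shrinkage formula; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 8                              # m groups, n observations per group
xi = rng.normal(0.0, np.sqrt(2.0), size=m)             # true random effects
y = 10.0 + xi[:, None] + rng.normal(0.0, np.sqrt(4.0), size=(m, n))

# Method-of-moments (ANOVA) estimates of the variance components.
group_means = y.mean(axis=1)
grand_mean = y.mean()
msw = ((y - group_means[:, None]) ** 2).sum() / (m * (n - 1))  # within groups
msb = n * ((group_means - grand_mean) ** 2).sum() / (m - 1)    # between groups
s2_eps_hat = msw
s2_xi_hat = max((msb - msw) / n, 0.0)

# EBLUP: the BLUP shrinkage factor with the estimated variances plugged in.
shrink = n * s2_xi_hat / (n * s2_xi_hat + s2_eps_hat)
eblup = shrink * (group_means - grand_mean)

print(f"variance estimates: s2_xi {s2_xi_hat:.2f}, s2_eps {s2_eps_hat:.2f}")
print("corr(EBLUP, true effects):", np.corrcoef(eblup, xi)[0, 1].round(3))
```

Because the shrinkage factor here uses estimated rather than true variances, a naive variance for these predictions would, as noted above, be overly optimistic.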

Best linear unbiased predictions are similar to empirical Bayes estimates of random effects in linear mixed models, except that in the latter case, where the weights depend on unknown variance components, those unknown variances are replaced by sample-based estimates.

Example

Suppose that the model for observations {Yj ; j = 1, ..., n} is written as

Yj = μ + xjᵀβ + ξj + εj,

where μ is the mean of all observations, and ξj and εj represent the random effect and observation error for observation j, and suppose they are uncorrelated and have known variances σξ² and σε², respectively. Further, xj is a vector of independent variables for the jth observation and β is a vector of regression parameters.

The BLUP problem of providing an estimate of the observation-error-free value for the kth observation,

Ỹk = μ + xkᵀβ + ξk,

can be formulated as requiring that the coefficients of a linear predictor, defined as

Ŷk = c1Y1 + c2Y2 + ... + cnYn,

should be chosen so as to minimise the variance of the prediction error,

V = Var(Ỹk − Ŷk),

subject to the condition that the predictor is unbiased,

E(Ỹk − Ŷk) = 0.
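
With known variances this constrained minimisation has a closed-form solution; one standard route, due to Henderson, is to solve the mixed model equations, which yield the BLUE of the fixed effects and the BLUP of the random effects jointly. The sketch below is a hypothetical illustration rather than the article's own derivation: it applies these equations to the model above with one random effect per observation (so the random-effects design matrix is the identity) and a known variance ratio λ = σε²/σξ².

```python
# A minimal sketch of the example above, solved with Henderson's mixed model
# equations. Here Z = I (one random effect per observation) and the variance
# ratio lam = s2_eps / s2_xi is assumed known; all values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
s2_xi, s2_eps = 1.0, 2.0

X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # includes mu
beta = np.array([10.0, 1.5, -0.7])
xi = rng.normal(0.0, np.sqrt(s2_xi), size=n)
y = X @ beta + xi + rng.normal(0.0, np.sqrt(s2_eps), size=n)

# Henderson's mixed model equations with Z = I:
#   [ X'X        X'         ] [beta_hat]   [X'y]
#   [ X    (1 + lam) * I    ] [u_hat   ] = [y  ]
lam = s2_eps / s2_xi
top = np.hstack([X.T @ X, X.T])
bottom = np.hstack([X, (1.0 + lam) * np.eye(n)])
sol = np.linalg.solve(np.vstack([top, bottom]), np.concatenate([X.T @ y, y]))
beta_hat, u_hat = sol[:p], sol[p:]

# BLUP of the observation-error-free value for the k-th observation.
k = 0
y_tilde_k = X[k] @ beta_hat + u_hat[k]
print("beta_hat:", beta_hat.round(2), " prediction for k=0:", round(y_tilde_k, 2))
```

For this model the solution reduces to shrinking each residual Yk − xkᵀβ̂ toward zero by the factor σξ²/(σξ² + σε²), which is the familiar BLUP shrinkage.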

BLUP vs BLUE

In contrast to the case of best linear unbiased estimation, the "quantity to be estimated", Ỹk, not only has a contribution from a random element, but one of the observed quantities, specifically Yk, which contributes to Ŷk, also has a contribution from this same random element ξk.

In contrast to BLUE, BLUP takes into account known or estimated variances.[2]
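
To make the contrast concrete, the following minimal sketch, with purely illustrative numbers, compares the raw group means (the unshrunk estimates one would use if the effects were treated as fixed) with the variance-weighted BLUP shrinkage in a one-way model whose overall mean is known to be zero; the shrunk predictions typically have smaller mean squared error.

```python
# Illustrative comparison of unshrunk estimates vs. BLUP in a one-way model
#   Yij = xi_i + eps_ij with known variances and known zero overall mean.
import numpy as np

rng = np.random.default_rng(2)
m, n, s2_xi, s2_eps = 200, 5, 1.0, 4.0
xi = rng.normal(0.0, np.sqrt(s2_xi), size=m)
y = xi[:, None] + rng.normal(0.0, np.sqrt(s2_eps), size=(m, n))

unshrunk = y.mean(axis=1)                               # fixed-effect view
blup = (n * s2_xi / (n * s2_xi + s2_eps)) * unshrunk    # shrunk predictions

print("MSE unshrunk:", np.mean((unshrunk - xi) ** 2).round(3))
print("MSE BLUP:    ", np.mean((blup - xi) ** 2).round(3))
```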

History of BLUP in breeding

Henderson explored breeding from a statistical point of view. His work assisted the development of the selection index (SI) and estimated breeding value (EBV). These statistical methods influenced the artificial insemination stud rankings used in the United States, and these early methods are sometimes confused with the BLUP now common in livestock breeding.

The actual term BLUP originated from work at the University of Guelph in Canada by Daniel Sorensen and Brian Kennedy, in which they extended Henderson's results to a model that includes several cycles of selection.[3] This model was popularized by the University of Guelph in the dairy industry under the name BLUP. Further work by the university showed BLUP's superiority over EBV and SI, leading to it becoming the primary genetic predictor.[citation needed]

There is thus confusion between the BLUP model popularized above and the best linear unbiased prediction statistical method, which was considered too theoretical for general use; the model was supplied to farmers for use on computers.

In Canada, all dairies report nationally. Because the genetics were shared across the country, Canada had the largest genetic pool and thus the greatest source of improvements. This, combined with BLUP, drove a rapid increase in Holstein cattle quality.

Notes

  1. Robinson, G.K. (1991). "That BLUP is a Good Thing: The Estimation of Random Effects". Statistical Science. 6 (1): 15–32. doi:10.1214/ss/1177011926. JSTOR 2245695. MR 1108815. Zbl 0955.62500.
  2. Stanek, Edward J. III; Well, Arnold; Ockene, Ira (1999). "Why not routinely use best linear unbiased predictors (BLUPs) as estimates of cholesterol, per cent fat from kcal and physical activity?". Statistics in Medicine. 18 (21): 2943–2959. doi:10.1002/(sici)1097-0258(19991115)18:21<2943::aid-sim241>3.0.co;2-0. PMID 10523752.
  3. Sorensen, D. A.; Kennedy, B. W. (1 May 1984). "Estimation of Response to Selection Using Least-Squares and Mixed Model Methodology". Journal of Animal Science. 58 (5): 1097–1106. doi:10.2527/jas1984.5851097x.
