MINQUE

In statistics, the theory of minimum norm quadratic unbiased estimation (MINQUE)[1][2][3] was developed by C. R. Rao. It was originally applied to the problem of heteroscedasticity and to the estimation of variance components in random effects models.

The theory involves three stages:

  • defining a general class of potential estimators as quadratic functions of the observed data, where the estimators relate to a vector of model parameters;
  • specifying certain constraints on the desired properties of the estimators, such as unbiasedness;
  • choosing the optimal estimator by minimising a "norm" which measures the size of the covariance matrix of the estimators (a sketch of the resulting optimisation problem is given below).
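
To make the three stages concrete, the following is a minimal sketch of the optimisation problem in the variance-components setting of Rao (1971). The notation (y, X, U_i, σ_i², p_i, A, α_i) is introduced here for illustration and follows common textbook conventions; it is a sketch of the general idea, not a reproduction of the sources.

    % Model: response y with fixed effects beta and k independent random-effect vectors xi_i
    \[ y = X\beta + U_1 \xi_1 + \cdots + U_k \xi_k, \qquad \operatorname{E}(\xi_i) = 0, \quad \operatorname{Var}(\xi_i) = \sigma_i^2 I . \]
    % Stage 1: candidate estimators of a linear combination of the variance components
    % are quadratic forms in the observed data.
    \[ \widehat{\theta} = y^{\mathsf T} A y \quad \text{as an estimator of} \quad \theta = \sum_{i=1}^{k} p_i \sigma_i^2 . \]
    % Stage 2: constraints. Invariance to the fixed effects and unbiasedness require
    \[ AX = 0, \qquad \operatorname{tr}\!\left(A\, U_i U_i^{\mathsf T}\right) = p_i, \quad i = 1, \dots, k . \]
    % Stage 3: among symmetric matrices A satisfying these constraints, choose the one
    % minimising a Euclidean-type norm, e.g. weighted by a prior guess V of the covariance structure
    \[ \lVert A \rVert^{2} = \operatorname{tr}(A V A V), \qquad V = \sum_{i=1}^{k} \alpha_i\, U_i U_i^{\mathsf T} \ \text{for chosen prior weights } \alpha_i . \]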

Related Research Articles

Least squares: approximation method in statistics

The method of least squares is a standard approach in regression analysis to approximate the solution of overdetermined systems by minimizing the sum of the squares of the residuals made in the results of each individual equation.

In statistics, point estimation involves the use of sample data to calculate a single value which is to serve as a "best guess" or "best estimate" of an unknown population parameter. More formally, it is the application of a point estimator to the data to obtain a point estimate.

Optimal design

In the design of experiments, optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith.

In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model when there is a certain degree of correlation between the residuals in a regression model. In these cases, ordinary least squares and weighted least squares can be statistically inefficient, or even give misleading inferences. GLS was first described by Alexander Aitken in 1936.

In statistics, the White test is a statistical test that establishes whether the variance of the errors in a regression model is constant: that is, it tests for homoskedasticity.

A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units, or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures analysis of variance.

In statistics, the restricted maximum likelihood (REML) approach is a particular form of maximum likelihood estimation that does not base estimates on a maximum likelihood fit of all the information, but instead uses a likelihood function calculated from a transformed set of data, so that nuisance parameters have no effect.

Shayle Robert Searle PhD was a New Zealand mathematician who was Professor Emeritus of Biological Statistics at Cornell University. He was a leader in the field of linear and mixed models in statistics, and published widely on the topics of linear models, mixed models, and variance component estimation.

Charles Roy Henderson was an American statistician and a pioneer in animal breeding — the application of quantitative methods for the genetic evaluation of domestic livestock. This is critically important because it allows farmers and geneticists to predict whether a crop or animal will have a desired trait, and to what extent the trait will be expressed. He developed mixed model equations to obtain best linear unbiased predictions of breeding values and, in general, any random effect. He invented three methods for the estimation of variance components in unbalanced settings of mixed models, and invented a method for constructing the inverse of Wright's numerator relationship matrix based on a simple list of pedigree information. He, with his Ph.D. student Shayle R. Searle, greatly extended the use of matrix notation in statistics. His methods are widely used by the domestic livestock industry throughout the world and are a cornerstone of linear model theory.

In statistical theory, the field of high-dimensional statistics studies data whose dimension is larger than typically considered in classical multivariate analysis. The area arose owing to the emergence of many modern data sets in which the dimension of the data vectors may be comparable to, or even larger than, the sample size, so that justification for the use of traditional techniques, often based on asymptotic arguments with the dimension held fixed as the sample size increased, was lacking.

Minimum-distance estimation (MDE) is a conceptual method for fitting a statistical model to data, usually the empirical distribution. Often-used estimators such as ordinary least squares can be thought of as special cases of minimum-distance estimation.

A Newey–West estimator is used in statistics and econometrics to provide an estimate of the covariance matrix of the parameters of a regression-type model when this model is applied in situations where the standard assumptions of regression analysis do not apply. It was devised by Whitney K. Newey and Kenneth D. West in 1987, although there are a number of later variants. The estimator is used to try to overcome autocorrelation and heteroskedasticity in the error terms of the models, often for regressions applied to time series data. The abbreviation "HAC," sometimes used for the estimator, stands for "heteroskedasticity and autocorrelation consistent."

Pietro Balestra was a Swiss economist specializing in econometrics. He was born in Lugano and earned a B.A. in economics from the University of Fribourg. Balestra moved to the University of Kansas and then to Stanford University for graduate work, and was awarded the Ph.D. in Economics by Stanford University in 1965.

Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.
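
As a brief illustration of the two numerical approaches mentioned above, the following Python sketch (using NumPy; the data and variable names are hypothetical and not drawn from the article) solves the same linear least-squares problem via the normal equations and via a QR decomposition.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # design matrix: 100 observations, 3 predictors
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + rng.normal(scale=0.1, size=100)

    # Normal equations: solve (X'X) beta = X'y directly.
    beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

    # Orthogonal decomposition: X = QR, then solve R beta = Q'y (typically better conditioned).
    Q, R = np.linalg.qr(X)
    beta_qr = np.linalg.solve(R, Q.T @ y)

    print(beta_normal)   # both estimates are close to beta_true
    print(beta_qr)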

Theil–Sen estimator: statistical method for fitting a line

In non-parametric statistics, the Theil–Sen estimator is a method for robustly fitting a line to sample points in the plane by choosing the median of the slopes of all lines through pairs of points. It has also been called Sen's slope estimator, slope selection, the single median method, the Kendall robust line-fit method, and the Kendall–Theil robust line. It is named after Henri Theil and Pranab K. Sen, who published papers on this method in 1950 and 1968 respectively, and after Maurice Kendall because of its relation to the Kendall tau rank correlation coefficient.
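
As an illustration of the pairwise-slope idea described above, here is a minimal Python sketch; the function name and data are hypothetical, and it computes only the slope (the intercept is commonly taken afterwards as the median of y_i − m·x_i).

    import itertools
    import statistics

    def theil_sen_slope(xs, ys):
        # Median of the slopes of all lines through pairs of points with distinct x-values.
        slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
                  for i, j in itertools.combinations(range(len(xs)), 2)
                  if xs[i] != xs[j]]
        return statistics.median(slopes)

    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 4.0, 6.2, 7.9, 10.1]
    print(theil_sen_slope(xs, ys))   # roughly 2, the underlying slope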

Anil K. Bera is an Indian econometrician. He is Professor of Economics in the Department of Economics at the University of Illinois at Urbana–Champaign. He is most noted for his work with Carlos Jarque on the Jarque–Bera test.

In statistical theory, the Pitman closeness criterion, named after E. J. G. Pitman, is a way of comparing two candidate estimators for the same parameter. Under this criterion, estimator A is preferred to estimator B if the probability that estimator A is closer to the true value than estimator B is greater than one half. Here the meaning of closer is determined by the absolute difference in the case of a scalar parameter, or by the Mahalanobis distance for a vector parameter.
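
In symbols, for a scalar parameter θ with candidate estimators written here (for illustration) as θ̂_A and θ̂_B, the criterion in the paragraph above can be stated as

    \[ \Pr_{\theta}\!\left( \lvert \hat{\theta}_A - \theta \rvert < \lvert \hat{\theta}_B - \theta \rvert \right) > \tfrac{1}{2}, \]

with the Mahalanobis distance replacing the absolute difference in the vector-parameter case.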

Homoscedasticity and heteroscedasticity: statistical property

In statistics, a sequence of random variables is homoscedastic if all its random variables have the same finite variance. This is also known as homogeneity of variance. The complementary notion is called heteroscedasticity. The spellings homoskedasticity and heteroskedasticity are also frequently used.

References

  1. Rao, C.R. (1970). "Estimation of heteroscedastic variances in linear models". Journal of the American Statistical Association. 65 (329): 161–172. doi:10.1080/01621459.1970.10481070. JSTOR 2283583.
  2. Rao, C.R. (1971). "Estimation of variance and covariance components – MINQUE theory". Journal of Multivariate Analysis. 1: 257–275. doi:10.1016/0047-259x(71)90001-7. hdl:10338.dmlcz/104230.
  3. Rao, C.R. (1972). "Estimation of variance and covariance components in linear models". Journal of the American Statistical Association. 67 (337): 112–115. doi:10.1080/01621459.1972.10481212. JSTOR 2284708.