Homoscedasticity and heteroscedasticity

Plot with random data showing homoscedasticity: at each value of x, the y-values of the dots have about the same variance.
Plot with random data showing heteroscedasticity: the variance of the y-values of the dots increases with increasing values of x.

In statistics, a sequence (or a vector) of random variables is homoscedastic /ˌhoʊmoʊskəˈdæstɪk/ if all its random variables have the same finite variance. This is also known as homogeneity of variance. The complementary notion is called heteroscedasticity. The spellings homoskedasticity and heteroskedasticity are also frequently used. [1] [2] [3]


Assuming a variable is homoscedastic when in reality it is heteroscedastic ( /ˌhɛtəroʊskəˈdæstɪk/ ) results in unbiased but inefficient point estimates and in biased estimates of standard errors, and may result in overestimating the goodness of fit as measured by the Pearson coefficient.

The existence of heteroscedasticity is a major concern in regression analysis and the analysis of variance, as it invalidates statistical tests of significance that assume that the modelling errors all have the same variance. While the ordinary least squares estimator is still unbiased in the presence of heteroscedasticity, it is inefficient and generalized least squares should be used instead. [4] [5]

Because heteroscedasticity concerns expectations of the second moment of the errors, its presence is referred to as misspecification of the second order. [6]

The econometrician Robert Engle was awarded the 2003 Nobel Memorial Prize in Economic Sciences for his studies on regression analysis in the presence of heteroscedasticity, which led to his formulation of the autoregressive conditional heteroscedasticity (ARCH) modeling technique. [7]

Definition

Consider the linear regression equation $y_i = x_i \beta + \varepsilon_i,\ i = 1, \ldots, N$, where the dependent random variable $y_i$ equals the deterministic variable $x_i$ times coefficient $\beta$ plus a random disturbance term $\varepsilon_i$ that has mean zero. The disturbances are homoscedastic if the variance of $\varepsilon_i$ is a constant $\sigma^2$; otherwise, they are heteroscedastic. In particular, the disturbances are heteroscedastic if the variance of $\varepsilon_i$ depends on $i$ or on the value of $x_i$. One way they might be heteroscedastic is if $\sigma_i^2 = x_i \sigma^2$ (an example of a scedastic function), so the variance is proportional to the value of $x$.
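
As a concrete illustration, the following sketch (a minimal example using NumPy; the sample size, coefficient value, and scedastic function are illustrative choices, not taken from the article) simulates both kinds of disturbance and compares the spread of the disturbances over the lower and upper halves of the range of x.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 1000, 2.0
x = np.linspace(1, 10, n)

# Homoscedastic disturbances: constant variance sigma^2 for every observation.
eps_homo = rng.normal(0, 1.0, n)
# Heteroscedastic disturbances: variance proportional to x (sigma_i^2 = x_i * sigma^2).
eps_hetero = rng.normal(0, np.sqrt(x), n)

y_homo = beta * x + eps_homo
y_hetero = beta * x + eps_hetero

# Compare the disturbance variance in the lower and upper halves of the x range.
half = n // 2
print("homoscedastic:   var(lower half) = %.2f, var(upper half) = %.2f"
      % (eps_homo[:half].var(), eps_homo[half:].var()))
print("heteroscedastic: var(lower half) = %.2f, var(upper half) = %.2f"
      % (eps_hetero[:half].var(), eps_hetero[half:].var()))
```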

More generally, if the variance-covariance matrix of the disturbance across i has a nonconstant diagonal, the disturbance is heteroscedastic. [8] The matrices below are covariances when there are just three observations across time. The disturbance in matrix A is homoscedastic; this is the simple case where OLS is the best linear unbiased estimator. The disturbances in matrices B and C are heteroscedastic. In matrix B, the variance is time-varying, increasing steadily across time; in matrix C, the variance depends on the value of x. The disturbance in matrix D is homoscedastic because the diagonal variances are constant, even though the off-diagonal covariances are non-zero and ordinary least squares is inefficient for a different reason: serial correlation.

$$A = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad B = \sigma^2 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix} \qquad C = \sigma^2 \begin{bmatrix} x_1 & 0 & 0 \\ 0 & x_2 & 0 \\ 0 & 0 & x_3 \end{bmatrix} \qquad D = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 \\ \rho & 1 & \rho \\ \rho^2 & \rho & 1 \end{bmatrix}$$
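
As a sketch of how a known covariance structure like matrix D can be handled (using statsmodels; the AR(1)-style covariance, the coefficients, and the data below are invented for the example), generalized least squares takes the full error covariance matrix as an input, whereas OLS ignores it:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, rho, sigma2 = 100, 0.6, 1.0

# AR(1)-style covariance: constant diagonal (homoscedastic) but serially
# correlated off-diagonal entries, like matrix D above.
idx = np.arange(n)
Sigma = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

x = rng.uniform(0, 10, n)
X = sm.add_constant(x)
errors = rng.multivariate_normal(np.zeros(n), Sigma)
y = 1.0 + 2.0 * x + errors

ols = sm.OLS(y, X).fit()               # unbiased but inefficient in this setting
gls = sm.GLS(y, X, sigma=Sigma).fit()  # uses the known error covariance
print("OLS estimates:", ols.params)
print("GLS estimates:", gls.params)
```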

Examples

Heteroscedasticity often occurs when there is a large difference among the sizes of the observations. A classic example is the relationship between income and expenditure on meals: a person with a low income spends a fairly constant, small amount on food, whereas a wealthier person may sometimes buy inexpensive food and at other times eat expensive meals, so expenditure is more variable at higher incomes.

Consequences of heteroscedasticity

One of the assumptions of the classical linear regression model is that there is no heteroscedasticity. Breaking this assumption means that the Gauss–Markov theorem does not apply, so OLS estimators are not the best linear unbiased estimators (BLUE) and no longer have the lowest variance among linear unbiased estimators. Heteroscedasticity does not cause ordinary least squares coefficient estimates to be biased, but it does cause ordinary least squares estimates of the variance (and, thus, the standard errors) of the coefficients to be biased, possibly above or below the true population variance. Thus, regression analysis using heteroscedastic data will still provide an unbiased estimate of the relationship between the predictor variable and the outcome, but the standard errors, and therefore the inferences drawn from the analysis, are suspect. Biased standard errors lead to biased inference, so the results of hypothesis tests may be wrong. For example, if OLS is performed on a heteroscedastic data set, yielding biased standard error estimates, a researcher might fail to reject a null hypothesis at a given significance level even though that null hypothesis is actually false for the population, committing a type II error.
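
The following rough simulation sketch (plain NumPy; the data-generating process, sample size, and number of replications are illustrative assumptions, not taken from the article) shows the point made above: under heteroscedasticity the OLS slope estimate stays centred on the true value, while the usual OLS standard-error formula misstates its actual sampling variability.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta = 200, 2000, 1.5
slopes, naive_ses = [], []

for _ in range(reps):
    x = rng.uniform(1, 10, n)
    # Heteroscedastic errors: the standard deviation grows with x.
    y = beta * x + rng.normal(0, x, n)
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (n - 2)            # usual (homoscedastic) error-variance estimate
    naive_se = np.sqrt(s2 * XtX_inv[1, 1])  # textbook OLS standard error of the slope
    slopes.append(b[1])
    naive_ses.append(naive_se)

print("true slope:", beta)
print("mean OLS slope estimate:", np.mean(slopes))         # close to beta: unbiased
print("empirical sd of slope estimates:", np.std(slopes))  # actual sampling variability
print("mean naive OLS standard error:", np.mean(naive_ses))  # differs from the empirical sd
```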

Under certain assumptions, the OLS estimator has a normal asymptotic distribution when properly normalized and centered (even when the data do not come from a normal distribution). This result is used to justify using a normal distribution, or a chi-squared distribution (depending on how the test statistic is calculated), when conducting a hypothesis test. This holds even under heteroscedasticity. More precisely, the OLS estimator in the presence of heteroscedasticity is asymptotically normal, when properly normalized and centered, with a variance-covariance matrix that differs from the case of homoscedasticity. In 1980, White proposed a consistent estimator for the variance-covariance matrix of the asymptotic distribution of the OLS estimator. [2] This validates the use of hypothesis testing using OLS estimators and White's variance-covariance estimator under heteroscedasticity.
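
As an illustration of White's idea, the sketch below (plain NumPy; the variable names and data are invented for the example) computes the classical OLS covariance estimate alongside the heteroskedasticity-consistent "sandwich" estimate $(X^\top X)^{-1} X^\top \operatorname{diag}(\hat{e}_i^2)\, X (X^\top X)^{-1}$, the HC0 form of White's estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, x, n)   # heteroscedastic errors

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b

# Classical OLS covariance: s^2 (X'X)^-1, valid only under homoscedasticity.
s2 = e @ e / (n - X.shape[1])
cov_classical = s2 * XtX_inv

# White's heteroskedasticity-consistent (HC0) covariance:
# (X'X)^-1 X' diag(e_i^2) X (X'X)^-1.
meat = X.T @ (X * (e ** 2)[:, None])
cov_hc0 = XtX_inv @ meat @ XtX_inv

print("classical SEs:  ", np.sqrt(np.diag(cov_classical)))
print("White (HC0) SEs:", np.sqrt(np.diag(cov_hc0)))
```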

Heteroscedasticity is also a major practical issue encountered in ANOVA problems. [9] The F test can still be used in some circumstances. [10]

However, it has been said that students in econometrics should not overreact to heteroscedasticity. [3] One author wrote, "unequal error variance is worth correcting only when the problem is severe." [11] Another cautioned that "heteroscedasticity has never been a reason to throw out an otherwise good model." [3] [12] With the advent of heteroscedasticity-consistent standard errors, which allow for inference without specifying the conditional second moment of the error term, testing for conditional homoscedasticity is not as important as it once was.[ citation needed ]

For any non-linear model (for instance Logit and Probit models), however, heteroscedasticity has more severe consequences: the maximum likelihood estimates (MLE) of the parameters will be biased as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroscedasticity). [13] Yet, in the context of binary choice models (Logit or Probit), heteroscedasticity will only result in a positive scaling effect on the asymptotic mean of the misspecified MLE (i.e., the model that ignores heteroscedasticity). [14] As a result, predictions based on the misspecified MLE will remain correct. In addition, the misspecified Probit and Logit MLE will be asymptotically normally distributed, which allows performing the usual significance tests (with the appropriate variance-covariance matrix). However, regarding general hypothesis testing, as pointed out by Greene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption. Consequently, the virtue of a robust covariance matrix in this setting is unclear.” [15]

Correcting for heteroscedasticity

There are five common corrections for heteroscedasticity (two of them are illustrated in the code sketch after this list). They are:

- View logarithmized data. Non-logarithmized series that grow exponentially often appear to have increasing variability as the series rises over time; taking logarithms can remove much of this apparent heteroscedasticity.
- Use a different specification for the model (different X variables, or non-linear transformations of the X variables).
- Apply a weighted least squares estimation method, in which OLS is applied to transformed or weighted values of X and Y. The weights vary over observations, usually depending on the changing error variances; in one variation the weights are directly related to the magnitude of the dependent variable, which corresponds to least squares percentage regression. [16]
- Use heteroscedasticity-consistent standard errors (HCSE). [2] The coefficient estimates are unchanged; only the estimated covariance matrix of the estimates is adjusted to remain valid under heteroscedasticity.
- Use MINQUE or related estimators of the individual error variances that do not assume a common error variance. [17]
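
As a sketch of the last two kinds of correction above that rely on re-estimation (using statsmodels; the data are simulated, and the assumed weight structure for WLS, variance proportional to x squared, is an invented assumption that would in practice have to be estimated or justified):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(1, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, x, n)   # error standard deviation grows with x
X = sm.add_constant(x)

# Plain OLS with the usual (non-robust) standard errors.
ols = sm.OLS(y, X).fit()

# Same OLS coefficients, but heteroscedasticity-consistent (White/HC1) standard errors.
ols_robust = sm.OLS(y, X).fit(cov_type="HC1")

# Weighted least squares, assuming Var(e_i) is proportional to x_i^2,
# i.e. weights proportional to 1 / x_i^2.
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()

print("OLS SEs (naive):", ols.bse)
print("OLS SEs (HC1):  ", ols_robust.bse)
print("WLS coefficients:", wls.params)
```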

Testing for heteroscedasticity

Absolute value of residuals for simulated first-order heteroscedastic data.

Residuals can be tested for homoscedasticity using the Breusch–Pagan test, [18] which performs an auxiliary regression of the squared residuals on the independent variables. From this auxiliary regression, the explained sum of squares is retained and divided by two, giving a test statistic that follows a chi-squared distribution with degrees of freedom equal to the number of independent variables. [19] The null hypothesis of this chi-squared test is homoscedasticity; the alternative hypothesis indicates heteroscedasticity. Because the Breusch–Pagan test is sensitive to departures from normality and to small sample sizes, the Koenker–Bassett or 'generalized Breusch–Pagan' test is commonly used instead. [20] [ additional citation(s) needed ] This test retains the R-squared value from the auxiliary regression, multiplies it by the sample size, and uses the product as a test statistic that follows a chi-squared distribution with the same degrees of freedom. Although it is not necessary for the Koenker–Bassett test, the Breusch–Pagan test requires that the squared residuals first be divided by the residual sum of squares divided by the sample size. [20] Testing for groupwise heteroscedasticity can be done with the Goldfeld–Quandt test. [21]
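
For illustration, the nR-squared (Koenker–Bassett) form of the statistic can be computed directly from the auxiliary regression, or a Lagrange-multiplier statistic and p-value can be obtained from statsmodels' het_breuschpagan; the data below are simulated for the example.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(1, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, x, n)   # heteroscedastic errors
X = sm.add_constant(x)

resid = sm.OLS(y, X).fit().resid

# Koenker-Bassett form computed by hand: n * R^2 of the auxiliary
# regression of the squared residuals on the regressors.
aux = sm.OLS(resid**2, X).fit()
lm_stat = n * aux.rsquared
df = X.shape[1] - 1               # number of independent variables (excluding the constant)
p_value = stats.chi2.sf(lm_stat, df)
print("nR^2 statistic: %.2f, p-value: %.4f" % (lm_stat, p_value))

# The same family of tests via statsmodels.
lm, lm_p, fval, f_p = het_breuschpagan(resid, X)
print("statsmodels LM statistic: %.2f, p-value: %.4f" % (lm, lm_p))
```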

List of heteroscedasticity tests

Tests for heteroscedasticity within a regression framework include the Breusch–Pagan test, [18] the White test, [2] the Goldfeld–Quandt test, the Park test, [22] and the Glejser test. [23] [24]

Although tests for heteroscedasticity between groups can formally be considered as a special case of testing within regression models, some tests have structures specific to this case. Such tests, which compare variances across groups, include Bartlett's test, Levene's test, and the Brown–Forsythe test.

Generalisations

Homoscedastic distributions

Two or more normal distributions, $N(\mu_i, \Sigma_i)$, are homoscedastic if they share a common covariance (or correlation) matrix, $\Sigma_i = \Sigma \ \forall i$. Homoscedastic distributions are especially useful to derive statistical pattern recognition and machine learning algorithms. One popular example of an algorithm that assumes homoscedasticity is Fisher's linear discriminant analysis. The concept of homoscedasticity can be applied to distributions on spheres. [25]
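
As a brief illustration of the homoscedasticity assumption in linear discriminant analysis (a sketch using scikit-learn; the two Gaussian classes are simulated here with a deliberately shared covariance matrix, and all numbers are arbitrary):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
# Two classes drawn from normal distributions with different means but a
# common (shared) covariance matrix -- the homoscedastic setting LDA assumes.
cov = np.array([[1.0, 0.4],
                [0.4, 1.0]])
X0 = rng.multivariate_normal([0, 0], cov, 200)
X1 = rng.multivariate_normal([2, 2], cov, 200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
# Because the classes share a covariance matrix, the Bayes-optimal boundary is
# linear, which is exactly the kind of boundary LDA can represent.
print("training accuracy:", lda.score(X, y))
```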

Multivariate data

The study of homoscedasticity and heteroscedasticity has been generalized to the multivariate case, which deals with the covariances of vector observations instead of the variance of scalar observations. One version of this is to use covariance matrices as the multivariate measure of dispersion. Several authors have considered tests in this context, for both regression and grouped-data situations. [26] [27] Bartlett's test for heteroscedasticity between grouped data, used most commonly in the univariate case, has also been extended to the multivariate case, but a tractable solution only exists for two groups. [28] Approximate tests exist for more than two groups; these are collectively referred to as Box's M test.
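
A minimal sketch of the chi-squared approximation to Box's M test follows (plain NumPy/SciPy, using the standard textbook formula for the statistic and its correction factor; the simulated data and group sizes are illustrative only):

```python
import numpy as np
from scipy import stats

def box_m_test(groups):
    """Chi-squared approximation to Box's M test for equality of covariance matrices.

    groups: list of 2-D arrays, each of shape (n_i, p).
    Returns the M statistic, the chi-squared statistic, degrees of freedom, and p-value.
    """
    g = len(groups)
    p = groups[0].shape[1]
    n = np.array([grp.shape[0] for grp in groups])
    N = n.sum()

    covs = [np.cov(grp, rowvar=False) for grp in groups]          # unbiased S_i
    pooled = sum((ni - 1) * Si for ni, Si in zip(n, covs)) / (N - g)

    M = (N - g) * np.log(np.linalg.det(pooled)) \
        - sum((ni - 1) * np.log(np.linalg.det(Si)) for ni, Si in zip(n, covs))

    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (g - 1))) \
        * (np.sum(1.0 / (n - 1)) - 1.0 / (N - g))

    chi2 = M * (1 - c)
    df = p * (p + 1) * (g - 1) / 2
    return M, chi2, df, stats.chi2.sf(chi2, df)

# Example: three groups simulated with the same covariance matrix.
rng = np.random.default_rng(6)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
data = [rng.multivariate_normal([0, 0], cov, 60) for _ in range(3)]
M, chi2, df, p_value = box_m_test(data)
print("M = %.3f, chi2 = %.3f, df = %.0f, p = %.3f" % (M, chi2, df, p_value))
```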


Related Research Articles

Variance: Statistical measure of how far values spread from their average

In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Variance is an important tool in the sciences, where statistical analysis of data is common. The variance is the square of the standard deviation, the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by $\sigma^2$, $s^2$, $\operatorname{Var}(X)$, $V(X)$, or $\mathbb{V}(X)$.

Multivariate normal distribution: Generalization of the one-dimensional normal distribution to higher dimensions

In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.

In statistics, the Gauss–Markov theorem states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances, and have an expected value of zero. The errors do not need to be normal, nor do they need to be independent and identically distributed. The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator, ridge regression, or simply any degenerate estimator.

In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value". The error of an observation is the deviation of the observed value from the true value of a quantity of interest. The residual is the difference between the observed value and the estimated value of the quantity of interest. The distinction is most important in regression analysis, where the concepts are sometimes called the regression errors and regression residuals and where they lead to the concept of studentized residuals. In econometrics, "errors" are also called disturbances.

In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the given dataset and those predicted by the linear function of the independent variable.

Vector autoregression (VAR) is a statistical model used to capture the relationship between multiple quantities as they change over time. VAR is a type of stochastic process model. VAR models generalize the single-variable (univariate) autoregressive model by allowing for multivariate time series. VAR models are often used in economics and the natural sciences.

Weighted least squares (WLS), also known as weighted linear regression, is a generalization of ordinary least squares and linear regression in which knowledge of the variance of observations is incorporated into the regression. WLS is also a specialization of generalized least squares.

In robust statistics, robust regression is a form of regression analysis designed to overcome some limitations of traditional parametric and non-parametric methods. Regression analysis seeks to find the relationship between one or more independent variables and a dependent variable. Certain widely used methods of regression, such as ordinary least squares, have favourable properties if their underlying assumptions are true, but can give misleading results if those assumptions are not true; thus ordinary least squares is said to be not robust to violations of its assumptions. Robust regression methods are designed to be not overly affected by violations of assumptions by the underlying data-generating process.

In econometrics, the seemingly unrelated regressions (SUR) or seemingly unrelated regression equations (SURE) model, proposed by Arnold Zellner in 1962, is a generalization of a linear regression model that consists of several regression equations, each having its own dependent variable and potentially different sets of exogenous explanatory variables. Each equation is a valid linear regression on its own and can be estimated separately, which is why the system is called seemingly unrelated, although some authors suggest that the term seemingly related would be more appropriate, since the error terms are assumed to be correlated across the equations.

In statistics, the Breusch–Pagan test, developed in 1979 by Trevor Breusch and Adrian Pagan, is used to test for heteroskedasticity in a linear regression model. It was independently suggested with some extension by R. Dennis Cook and Sanford Weisberg in 1983. Derived from the Lagrange multiplier test principle, it tests whether the variance of the errors from a regression is dependent on the values of the independent variables. In that case, heteroskedasticity is present.

The Durbin–Wu–Hausman test is a statistical hypothesis test in econometrics named after James Durbin, De-Min Wu, and Jerry A. Hausman. The test evaluates the consistency of an estimator when compared to an alternative, less efficient estimator which is already known to be consistent. It helps one evaluate if a statistical model corresponds to the data.

In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model when there is a certain degree of correlation between the residuals in a regression model. In these cases, ordinary least squares and weighted least squares can be statistically inefficient, or even give misleading inferences. GLS was first described by Alexander Aitken in 1936.

In statistics, the White test is a statistical test that establishes whether the variance of the errors in a regression model is constant: that is, it tests for homoskedasticity.

In statistics, the Durbin–Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals from a regression analysis. It is named after James Durbin and Geoffrey Watson. The small sample distribution of this ratio was derived by John von Neumann. Durbin and Watson applied this statistic to the residuals from least squares regressions, and developed bounds tests for the null hypothesis that the errors are serially uncorrelated against the alternative that they follow a first order autoregressive process. Note that the distribution of this test statistic does not depend on the estimated regression coefficients and the variance of the errors.

The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors or Eicker–Huber–White standard errors, the latter name recognizing the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White.

A Newey–West estimator is used in statistics and econometrics to provide an estimate of the covariance matrix of the parameters of a regression-type model when this model is applied in situations where the standard assumptions of regression analysis do not apply. It was devised by Whitney K. Newey and Kenneth D. West in 1987, although there are a number of later variants. The estimator is used to try to overcome autocorrelation, and heteroskedasticity in the error terms in the models, often for regressions applied to time series data. The abbreviation "HAC," sometimes used for the estimator, stands for "heteroskedasticity and autocorrelation consistent."

Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.

In econometrics, the Park test is a test for heteroscedasticity. The test is based on the method proposed by Rolla Edward Park for estimating linear regression parameters in the presence of heteroscedastic error terms.

In statistics, the variance function is a smooth function which depicts the variance of a random quantity as a function of its mean. The variance function is a measure of heteroscedasticity and plays a large role in many settings of statistical modelling. It is a main ingredient in the generalized linear model framework and a tool used in non-parametric regression, semiparametric regression and functional data analysis. In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be a smooth function.

In statistics, linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

References

  1. For the Greek etymology of the term, see McCulloch, J. Huston (1985). "On Heteros*edasticity". Econometrica . 53 (2): 483. JSTOR   1911250.
  2. White, Halbert (1980). "A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity". Econometrica. 48 (4): 817–838. CiteSeerX 10.1.1.11.7646. doi:10.2307/1912934. JSTOR 1912934.
  3. Gujarati, D. N.; Porter, D. C. (2009). Basic Econometrics (Fifth ed.). Boston: McGraw-Hill Irwin. p. 400. ISBN 9780073375779.
  4. Goldberger, Arthur S. (1964). Econometric Theory . New York: John Wiley & Sons. pp.  238–243.
  5. Johnston, J. (1972). Econometric Methods. New York: McGraw-Hill. pp. 214–221.
  6. Long, J. Scott; Trivedi, Pravin K. (1993). "Some Specification Tests for the Linear Regression Model". In Bollen, Kenneth A.; Long, J. Scott (eds.). Testing Structural Equation Models. London: Sage. pp. 66–110. ISBN   978-0-8039-4506-7.
  7. Engle, Robert F. (July 1982). "Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation". Econometrica. 50 (4): 987–1007. doi:10.2307/1912773. ISSN   0012-9682. JSTOR   1912773.
  8. Peter Kennedy, A Guide to Econometrics, 5th edition, p. 137.
  9. Jinadasa, Gamage; Weerahandi, Sam (1998). "Size performance of some tests in one-way anova". Communications in Statistics - Simulation and Computation. 27 (3): 625. doi:10.1080/03610919808813500.
  10. Bathke, A (2004). "The ANOVA F test can still be used in some balanced designs with unequal variances and nonnormal data". Journal of Statistical Planning and Inference. 126 (2): 413–422. doi:10.1016/j.jspi.2003.09.010.
  11. Fox, J. (1997). Applied Regression Analysis, Linear Models, and Related Methods. California: Sage Publications. p. 306. (Cited in Gujarati et al. 2009, p. 400)
  12. Mankiw, N. G. (1990). "A Quick Refresher Course in Macroeconomics". Journal of Economic Literature . 28 (4): 1645–1660 [p. 1648]. doi: 10.3386/w3256 . JSTOR   2727441.
  13. Giles, Dave (May 8, 2013). "Robust Standard Errors for Nonlinear Models". Econometrics Beat.
  14. Ginker, T.; Lieberman, O. (2017). "Robustness of binary choice models to conditional heteroscedasticity". Economics Letters. 150: 130–134. doi:10.1016/j.econlet.2016.11.024.
  15. Greene, William H. (2012). "Estimation and Inference in Binary Choice Models". Econometric Analysis (Seventh ed.). Boston: Pearson Education. pp. 730–755 [p. 733]. ISBN   978-0-273-75356-8.
  16. Tofallis, C (2008). "Least Squares Percentage Regression". Journal of Modern Applied Statistical Methods. 7: 526–534. doi:10.2139/ssrn.1406472. SSRN   1406472.
  17. J. N. K. Rao (March 1973). "On the Estimation of Heteroscedastic Variances". Biometrics. 29 (1): 11–24. doi:10.2307/2529672. JSTOR   2529672.
  18. Breusch, T. S.; Pagan, A. R. (1979). "A Simple Test for Heteroscedasticity and Random Coefficient Variation". Econometrica. 47 (5): 1287–1294. doi:10.2307/1911963. ISSN   0012-9682. JSTOR   1911963.
  19. Ullah, Muhammad Imdad (2012-07-26). "Breusch Pagan Test for Heteroscedasticity". Basic Statistics and Data Analysis. Retrieved 2020-11-28.
  20. Pryce, Gwilym. "Heteroscedasticity: Testing and Correcting in SPSS" (PDF). pp. 12–18. Archived (PDF) from the original on 2017-03-27. Retrieved 26 March 2017.
  21. Baum, Christopher F. (2006). "Stata Tip 38: Testing for Groupwise Heteroskedasticity". The Stata Journal: Promoting communications on statistics and Stata. 6 (4): 590–592. doi:10.1177/1536867X0600600412. ISSN   1536-867X.
  22. R. E. Park (1966). "Estimation with Heteroscedastic Error Terms". Econometrica. 34 (4): 888. doi:10.2307/1910108. JSTOR   1910108.
  23. Glejser, H. (1969). "A new test for heteroscedasticity". Journal of the American Statistical Association . 64 (325): 316–323. doi:10.1080/01621459.1969.10500976.
  24. Machado, José A. F.; Silva, J. M. C. Santos (2000). "Glejser's test revisited". Journal of Econometrics . 97 (1): 189–202. doi:10.1016/S0304-4076(00)00016-6.
  25. Hamsici, Onur C.; Martinez, Aleix M. (2007) "Spherical-Homoscedastic Distributions: The Equivalency of Spherical and Normal Distributions in Classification", Journal of Machine Learning Research, 8, 1583-1623
  26. Holgersson, H. E. T.; Shukur, G. (2004). "Testing for multivariate heteroscedasticity". Journal of Statistical Computation and Simulation. 74 (12): 879. doi:10.1080/00949650410001646979. hdl: 2077/24416 . S2CID   121576769.
  27. Gupta, A. K.; Tang, J. (1984). "Distribution of likelihood ratio statistic for testing equality of covariance matrices of multivariate Gaussian models". Biometrika. 71 (3): 555–559. doi:10.1093/biomet/71.3.555. JSTOR   2336564.
  28. d'Agostino, R. B.; Russell, H. K. (2005). "Multivariate Bartlett Test". Encyclopedia of Biostatistics. doi:10.1002/0470011815.b2a13048. ISBN   978-0470849071.

Further reading

Most statistics textbooks will include at least some material on homoscedasticity and heteroscedasticity.