Structural break

Figure: Linear regression with a structural break (Chowtest2.svg)

In econometrics and statistics, a structural break is an unexpected change over time in the parameters of regression models, which can lead to large forecasting errors and unreliability of the model in general. [1] [2] [3] This issue was popularised by David Hendry, who argued that lack of stability of coefficients frequently caused forecast failure, and that structural stability should therefore be tested routinely. Structural stability, i.e., the time-invariance of regression coefficients, is a central issue in all applications of linear regression models. [4]

Structural break tests

A single break in mean with a known breakpoint

For linear regression models, the Chow test is often used to test for a single break in mean at a known time period K, for K ∈ [1, T]. [5] [6] This test assesses whether the coefficients in a regression model are the same for the periods [1, 2, ..., K] and [K + 1, ..., T]. [6]
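
As an illustrative sketch (not drawn from the cited sources), the Chow test can be run in R with the strucchange package; the simulated series y and x, the sample size n, and the break index k below are hypothetical placeholders.

```r
# Minimal sketch of a Chow test at a known break point, assuming the
# strucchange package is installed; y, x, n, and k are hypothetical.
library(strucchange)

set.seed(1)
n <- 100
k <- 50                                                 # assumed known break point K
x <- rnorm(n)
y <- ifelse(seq_len(n) <= k, 1, 4) + 2 * x + rnorm(n)   # shift in mean after period K

# H0: the regression coefficients are the same over [1, K] and [K + 1, T]
sctest(y ~ x, type = "Chow", point = k)
```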

Other forms of structural breaks

Other challenges occur where there are:

Case 1: a known number of breaks in mean with unknown break points;
Case 2: an unknown number of breaks in mean with unknown break points;
Case 3: breaks in variance.

The Chow test is not applicable in these situations, since it applies only to models with a known break point and an error variance that remains constant before and after the break. [7] [5] [6]

In general, the CUSUM (cumulative sum) and CUSUM-sq (CUSUM squared) tests can be used to test the constancy of the coefficients in a model. The bounds test can also be used. [6] [8] For cases 1 and 2, the sup-Wald (i.e., the supremum of a set of Wald statistics), sup-LM (i.e., the supremum of a set of Lagrange multiplier statistics), and sup-LR (i.e., the supremum of a set of likelihood ratio statistics) tests developed by Andrews (1993, 2003) may be used to test for parameter instability when the number and location of structural breaks are unknown. [9] [10] These tests were shown to be superior to the CUSUM test in terms of statistical power, [9] and are the most commonly used tests for the detection of structural change involving an unknown number of breaks in mean with unknown break points. [4] The sup-Wald, sup-LM, and sup-LR tests are asymptotic in general (i.e., the asymptotic critical values for these tests are applicable for sample size n as n → ∞), [9] and involve the assumption of homoskedasticity across break points for finite samples; [4] however, an exact test with the sup-Wald statistic may be obtained for a linear regression model with a fixed number of regressors and independent and identically distributed (IID) normal errors. [9] A method developed by Bai and Perron (2003) also allows for the detection of multiple structural breaks from data. [11]
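
As a hedged illustration of cases 1 and 2, the strucchange package in R provides CUSUM-type fluctuation tests and the sup-F (sup-Wald-type) test of Andrews (1993); the simulated series y and x below are hypothetical.

```r
# Sketch of tests for an unknown break point, assuming the strucchange
# package; the data are simulated with a single shift in mean.
library(strucchange)

set.seed(1)
n <- 100
x <- rnorm(n)
y <- ifelse(seq_len(n) <= 50, 1, 4) + 2 * x + rnorm(n)

# OLS-based CUSUM test of coefficient constancy
cus <- efp(y ~ x, type = "OLS-CUSUM")
sctest(cus)   # boundary crossing of the fluctuation process indicates a break

# sup-F test: supremum of F statistics over candidate break dates
# in the central 70% of the sample
fs <- Fstats(y ~ x, from = 0.15, to = 0.85)
sctest(fs, type = "supF")
```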

The MZ test developed by Maasoumi, Zaman, and Ahmed (2010) allows for the simultaneous detection of one or more breaks in both mean and variance at a known break point. [4] [12] The sup-MZ test developed by Ahmed, Haider, and Zaman (2016) is a generalization of the MZ test which allows for the detection of breaks in mean and variance at an unknown break point. [4]

Structural breaks in cointegration models

For a cointegration model, the Gregory–Hansen test (1996) can be used for one unknown structural break, [13] and the Hatemi–J test (2006) can be used for two unknown breaks. [14]

Statistical packages

There are several statistical packages that can be used to find structural breaks, including R, [15] GAUSS, and Stata, among others.
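
As one hedged example, the strucchange package for R implements the Bai–Perron procedure for estimating an unknown number of breaks by least squares; the simulated data below are hypothetical.

```r
# Sketch of Bai-Perron multiple break-point estimation, assuming the
# strucchange package; the simulated data contain two shifts in mean.
library(strucchange)

set.seed(1)
n <- 150
x <- rnorm(n)
mu <- c(rep(1, 50), rep(4, 50), rep(2, 50))   # two hypothetical breaks in mean
y <- mu + 2 * x + rnorm(n)

bp <- breakpoints(y ~ x)   # candidate break dates for 0, 1, 2, ... breaks
summary(bp)                # RSS and BIC for each number of breaks
breakdates(bp)             # break locations for the BIC-selected model
```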

Related Research Articles

In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models based on the ratio of their likelihoods, specifically one found by maximization over the entire parameter space and another found after imposing some constraint. If the constraint is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.
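
In symbols (a standard textbook formulation, not specific to this article's sources), with ℓ denoting the log-likelihood, the statistic is

\[
\lambda_{\mathrm{LR}} = -2\left[\ell(\hat\theta_0) - \ell(\hat\theta)\right],
\]

where \(\hat\theta_0\) is the estimate under the constraint and \(\hat\theta\) the unconstrained estimate; under regularity conditions it is asymptotically χ2-distributed with degrees of freedom equal to the number of constraints.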

An F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis. It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. Exact "F-tests" mainly arise when the models have been fitted to the data using least squares. The name was coined by George W. Snedecor, in honour of Sir Ronald A. Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.

Heteroscedasticity

In statistics, a vector of random variables is heteroscedastic if the variability of the random disturbance is different across elements of the vector. Here, variability could be quantified by the variance or any other measure of statistical dispersion. Thus heteroscedasticity is the absence of homoscedasticity. A typical example is the set of observations of income in different cities.

Mathematical statistics

Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.

In statistics, multicollinearity is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. Multicollinearity does not reduce the predictive power or reliability of the model as a whole, at least within the sample data set; it only affects calculations regarding individual predictors. That is, a multivariate regression model with collinear predictors can indicate how well the entire bundle of predictors predicts the outcome variable, but it may not give valid results about any individual predictor, or about which predictors are redundant with respect to others.

Coefficient of determination

In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variance in the dependent variable that is predictable from the independent variable(s).
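
In the usual notation (a standard definition, not taken from this article's sources),

\[
R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2}.
\]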

In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function, known as the score, evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, the test statistic has an asymptotic χ2-distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.

In statistics, the Wald test assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite-sample distributions of Wald tests are generally unknown, the test statistic has an asymptotic χ2-distribution under the null hypothesis, a fact that can be used to determine statistical significance.
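
For a vector of q constraints θ = θ0, the standard textbook form of the statistic (not specific to this article's sources) is

\[
W = (\hat\theta - \theta_0)^\top \left[\widehat{\operatorname{Var}}(\hat\theta)\right]^{-1} (\hat\theta - \theta_0),
\]

which is asymptotically χ2-distributed with q degrees of freedom under the null hypothesis.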

Ordinary least squares

In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the given dataset and those predicted by the linear function of the independent variable.
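
In matrix form (the standard closed-form solution), with design matrix X and response vector y, the OLS estimator is

\[
\hat\beta = (X^\top X)^{-1} X^\top y.
\]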

Cointegration is a statistical property of a collection (X1, X2, ..., Xk) of time series variables. First, all of the series must be integrated of order d. Next, if a linear combination of this collection is integrated of order less than d, then the collection is said to be co-integrated. Formally, if (X, Y, Z) are each integrated of order d, and there exist coefficients a, b, c such that aX + bY + cZ is integrated of order less than d, then X, Y, and Z are cointegrated. Cointegration has become an important property in contemporary time series analysis. Time series often have trends, either deterministic or stochastic. In an influential paper, Charles Nelson and Charles Plosser (1982) provided statistical evidence that many US macroeconomic time series have stochastic trends.

The Chow test, proposed by econometrician Gregory Chow in 1960, is a test of whether the true coefficients in two linear regressions on different data sets are equal. In econometrics, it is most commonly used in time series analysis to test for the presence of a structural break at a period which can be assumed to be known a priori. In program evaluation, the Chow test is often used to determine whether the independent variables have different impacts on different subgroups of the population.

In statistics, the Breusch–Pagan test, developed in 1979 by Trevor Breusch and Adrian Pagan, is used to test for heteroskedasticity in a linear regression model. It was independently suggested with some extension by R. Dennis Cook and Sanford Weisberg in 1983. Derived from the Lagrange multiplier test principle, it tests whether the variance of the errors from a regression is dependent on the values of the independent variables. In that case, heteroskedasticity is present.
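
A minimal sketch in R, assuming the lmtest package; the simulated y and x below are hypothetical, with error variance tied to x so that the test has something to detect.

```r
# Sketch of the Breusch-Pagan test, assuming the lmtest package;
# y and x are hypothetical simulated data with heteroskedastic errors.
library(lmtest)

set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100, sd = abs(x))   # error variance depends on x
fit <- lm(y ~ x)

bptest(fit)   # H0: homoskedastic errors; a small p-value suggests heteroskedasticity
```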

In statistics, the White test is a statistical test that establishes whether the variance of the errors in a regression model is constant; that is, it tests for homoskedasticity.

In statistics, the Durbin–Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals from a regression analysis. It is named after James Durbin and Geoffrey Watson. The small-sample distribution of this ratio was derived by John von Neumann. Durbin and Watson applied this statistic to the residuals from least squares regressions, and developed bounds tests for the null hypothesis that the errors are serially uncorrelated against the alternative that they follow a first-order autoregressive process. Note that the distribution of this test statistic does not depend on the estimated regression coefficients or the variance of the errors.
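
As a rough sketch, the statistic can be computed directly from the residuals or obtained via the lmtest package in R; the simulated y and x below are hypothetical, with AR(1) errors so that autocorrelation is present.

```r
# Sketch of the Durbin-Watson statistic, assuming the lmtest package;
# y and x are hypothetical simulated data with AR(1) errors.
library(lmtest)

set.seed(1)
x <- rnorm(100)
e <- as.numeric(arima.sim(list(ar = 0.7), n = 100))   # autocorrelated errors
y <- 1 + 2 * x + e
fit <- lm(y ~ x)

r <- residuals(fit)
d <- sum(diff(r)^2) / sum(r^2)   # values near 2 suggest no lag-1 autocorrelation
d
dwtest(fit)                      # same statistic with a p-value
```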

Omnibus tests are a kind of statistical test. They test whether the explained variance in a set of data is significantly greater than the unexplained variance, overall. One example is the F-test in the analysis of variance. There can be legitimate significant effects within a model even if the omnibus test is not significant. For instance, in a model with two independent variables, if only one variable exerts a significant effect on the dependent variable and the other does not, then the omnibus test may be non-significant. This fact does not affect the conclusions that may be drawn from the one significant variable. In order to test effects within an omnibus test, researchers often use contrasts.

Regression validation

In statistics, regression validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are acceptable as descriptions of the data. The validation process can involve analyzing the goodness of fit of the regression, analyzing whether the regression residuals are random, and checking whether the model's predictive performance deteriorates substantially when applied to data that were not used in model estimation.

Linear regression

In statistics, linear regression is a linear approach to modelling the relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.

In statistics, spike-and-slab regression is a Bayesian variable selection technique that is particularly useful when the number of possible predictors is larger than the number of observations.

In statistics and econometrics, optimal instruments are a technique for improving the efficiency of estimators in conditional moment models, a class of semiparametric models that generate conditional expectation functions. To estimate parameters of a conditional moment model, the statistician can derive an expectation function and use the generalized method of moments (GMM). However, there are infinitely many moment conditions that can be generated from a single model; optimal instruments provide the most efficient moment conditions.

References

  1. Antoch, Jaromír; Hanousek, Jan; Horváth, Lajos; Hušková, Marie; Wang, Shixuan (25 April 2018). "Structural breaks in panel data: Large number of panels and short length time series" (PDF). Econometric Reviews. 38 (7): 828–855. doi:10.1080/07474938.2018.1454378. S2CID 150379490. "Structural changes and model stability in panel data are of general concern in empirical economics and finance research. Model parameters are assumed to be stable over time if there is no reason to believe otherwise. It is well-known that various economic and political events can cause structural breaks in financial data. ... In both the statistics and econometrics literature we can find very many of papers related to the detection of changes and structural breaks."
  2. Kruiniger, Hugo (December 2008). "Not So Fixed Effects: Correlated Structural Breaks in Panel Data" (PDF). IZA Institute of Labor Economics. pp. 1–33. Retrieved 20 February 2019.
  3. Hansen, Bruce E. (November 2001). "The New Econometrics of Structural Change: Dating Breaks in U.S. Labor Productivity". Journal of Economic Perspectives. 15 (4): 117–128. doi:10.1257/jep.15.4.117.
  4. Ahmed, Mumtaz; Haider, Gulfam; Zaman, Asad (October 2016). "Detecting structural change with heteroskedasticity". Communications in Statistics – Theory and Methods. 46 (21): 10446–10455. doi:10.1080/03610926.2016.1235200. S2CID 126189844. "The hypothesis of structural stability that the regression coefficients do not change over time is central to all applications of linear regression models."
  5. Hansen, Bruce E. (November 2001). "The New Econometrics of Structural Change: Dating Breaks in U.S. Labor Productivity". Journal of Economic Perspectives. 15 (4): 117–128. doi:10.1257/jep.15.4.117.
  6. Greene, William (2012). "Section 6.4: Modeling and testing for a structural break". Econometric Analysis (7th ed.). Pearson Education. pp. 208–211. ISBN 9780273753568. "An important assumption made in using the Chow test is that the disturbance variance is the same in both (or all) regressions. ... 6.4.4 Tests of structural break with unequal variances ... In a small or moderately sized sample, the Wald test has the unfortunate property that the probability of a type I error is persistently larger than the critical level we use to carry it out. (That is, we shall too frequently reject the null hypothesis that the parameters are the same in the subsamples.) We should be using a larger critical value. Ohtani and Kobayashi (1986) have devised a 'bounds' test that gives a partial remedy for the problem."
  7. Gujarati, Damodar (2007). Basic Econometrics. New Delhi: Tata McGraw-Hill. pp. 278–284. ISBN 978-0-07-066005-2.
  8. Pesaran, M. H.; Shin, Y.; Smith, R. J. (2001). "Bounds testing approaches to the analysis of level relationships". Journal of Applied Econometrics. 16 (3): 289–326. doi:10.1002/jae.616. hdl:10983/25617.
  9. Andrews, Donald (July 1993). "Tests for Parameter Instability and Structural Change with Unknown Change Point" (PDF). Econometrica. 61 (4): 821–856. doi:10.2307/2951764. JSTOR 2951764. Archived (PDF) from the original on 6 November 2017.
  10. Andrews, Donald (January 2003). "Tests for Parameter Instability and Structural Change with Unknown Change Point: A Corrigendum" (PDF). Econometrica. 71 (1): 395–397. doi:10.1111/1468-0262.00405. S2CID 55464774. Archived (PDF) from the original on 6 November 2017.
  11. Bai, Jushan; Perron, Pierre (January 2003). "Computation and analysis of multiple structural change models". Journal of Applied Econometrics. 18 (1): 1–22. doi:10.1002/jae.659. hdl:10.1002/jae.659.
  12. Maasoumi, Esfandiar; Zaman, Asad; Ahmed, Mumtaz (November 2010). "Tests for structural change, aggregation, and homogeneity". Economic Modelling. 27 (6): 1382–1391. doi:10.1016/j.econmod.2010.07.009.
  13. Gregory, Allan; Hansen, Bruce (1996). "Tests for Cointegration in Models with Regime and Trend Shifts". Oxford Bulletin of Economics and Statistics. 58 (3): 555–560. doi:10.1111/j.1468-0084.1996.mp58003008.x.
  14. Hacker, R. Scott; Hatemi-J, Abdulnasser (2006). "Tests for Causality between Integrated Variables Using Asymptotic and Bootstrap Distributions: Theory and Application". Applied Economics. 38 (15): 1489–1500. doi:10.1080/00036840500405763. S2CID 121999615.
  15. Kleiber, Christian; Zeileis, Achim (2008). Applied Econometrics with R. New York: Springer. pp. 169–176. ISBN 978-0-387-77316-2.