Heckman correction

The Heckman correction is a statistical technique to correct bias from non-randomly selected samples or otherwise incidentally truncated dependent variables, a pervasive issue in quantitative social sciences when using observational data. [1] Conceptually, this is achieved by explicitly modelling the individual sampling probability of each observation (the so-called selection equation) together with the conditional expectation of the dependent variable (the so-called outcome equation). The resulting likelihood function is mathematically similar to the tobit model for censored dependent variables, a connection first drawn by James Heckman in 1974. [2] Heckman also developed a two-step control function approach to estimate this model, [3] which avoids the computational burden of having to estimate both equations jointly, albeit at the cost of inefficiency. [4] Heckman received the Nobel Memorial Prize in Economic Sciences in 2000 for his work in this field. [5]

Method

Statistical analyses based on non-randomly selected samples can lead to erroneous conclusions. The Heckman correction, a two-step statistical approach, offers a means of correcting for non-randomly selected samples.

Heckman framed the bias from using non-randomly selected samples to estimate behavioral relationships as a specification error, and he proposed a two-stage estimation method to correct it. The correction rests on a control function idea and is straightforward to implement. It requires a normality assumption for the error terms, and it yields both a test for sample selection bias and a formula for the bias-corrected model.

Suppose that a researcher wants to estimate the determinants of wage offers, but has access to wage observations for only those who work. Since people who work are selected non-randomly from the population, estimating the determinants of wages from the subpopulation who work may introduce bias. The Heckman correction takes place in two stages.

In the first stage, the researcher formulates a model, based on economic theory, for the probability of working. The canonical specification for this relationship is a probit regression of the form

Prob(D = 1 | Z) = Φ(Zγ),

where D indicates employment (D = 1 if the respondent is employed and D = 0 otherwise), Z is a vector of explanatory variables, γ is a vector of unknown parameters, and Φ is the cumulative distribution function of the standard normal distribution. Estimation of the model yields results that can be used to predict this employment probability for each individual.
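As a minimal illustration of this first stage, the probit can be fitted with standard software. The short Python sketch below, using statsmodels on simulated data, is an illustration only: the variable names and the data-generating process are invented for the example and are not part of the method itself.

```python
# First-stage probit on simulated data (illustrative names and design).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
z = rng.normal(size=n)                                     # selection covariate Z
d = (0.5 + 1.0 * z + rng.normal(size=n) > 0).astype(int)   # D = 1 if "employed"

Zmat = sm.add_constant(z)                                   # design matrix [1, Z]
probit = sm.Probit(d, Zmat).fit(disp=0)                     # estimates gamma
p_hat = probit.predict(Zmat)                                # P(D = 1 | Z) = Phi(Z*gamma)
print(p_hat[:5])                                            # predicted employment probabilities
```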

In the second stage, the researcher corrects for self-selection by incorporating a transformation of these predicted individual probabilities as an additional explanatory variable. The wage equation may be specified as

w* = Xβ + u,

where w* denotes an underlying wage offer, which is not observed if the respondent does not work. The conditional expectation of wages given that the person works is then

E[w | X, D = 1] = Xβ + E[u | X, D = 1].

Under the assumption that the error terms are jointly normal, we have

E[w | X, D = 1] = Xβ + ρσ_u λ(Zγ),

where ρ is the correlation between the unobserved determinants of the propensity to work and the unobserved determinants of wage offers u, σ_u is the standard deviation of u, and λ(Zγ) = φ(Zγ)/Φ(Zγ) is the inverse Mills ratio evaluated at Zγ. This equation demonstrates Heckman's insight that sample selection can be viewed as a form of omitted-variables bias, as conditional on both X and on λ it is as if the sample is randomly selected. The wage equation can be estimated by replacing γ with probit estimates from the first stage, constructing the λ term, and including it as an additional explanatory variable in linear regression estimation of the wage equation. Since σ_u > 0, the coefficient on λ can only be zero if ρ = 0, so testing the null that the coefficient on λ is zero is equivalent to testing for sample selectivity.
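To make the two steps concrete, the following self-contained Python sketch applies the two-step estimator to simulated data. The data-generating process, variable names, and the use of the statsmodels and scipy libraries are illustrative assumptions rather than anything prescribed by the method; in this simulation the coefficient on the inverse Mills ratio estimates ρσ_u.

```python
# Heckman two-step estimator on simulated data (illustrative design and names).
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # selection covariate Z
x = rng.normal(size=n)                        # wage covariate X
rho, sigma_u = 0.5, 1.0                       # error correlation and outcome-error s.d.
eps = rng.normal(size=n)                      # selection-equation error
u = sigma_u * (rho * eps + np.sqrt(1 - rho**2) * rng.normal(size=n))
d = (0.5 + 1.0 * z + eps > 0).astype(int)     # D = 1: person works, wage observed
wage = 1.0 + 2.0 * x + u                      # latent wage offer w*
wage_obs = np.where(d == 1, wage, np.nan)     # wages seen only for workers

# Step 1: probit of D on Z, then the inverse Mills ratio lambda(Z*gamma_hat).
Zmat = sm.add_constant(z)
gamma_hat = sm.Probit(d, Zmat).fit(disp=0).params
zg = Zmat @ gamma_hat
imr = norm.pdf(zg) / norm.cdf(zg)

# Step 2: OLS of observed wages on X and the estimated inverse Mills ratio.
work = d == 1
Xmat = sm.add_constant(np.column_stack([x[work], imr[work]]))
second_stage = sm.OLS(wage_obs[work], Xmat).fit()
print(second_stage.params)   # approx. [1.0, 2.0, rho*sigma_u = 0.5]
```

Because the second stage treats the estimated inverse Mills ratio as if it were observed data, the default OLS standard errors it reports are not valid, which motivates the corrections discussed in the next section.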

Heckman's achievements have generated a large number of empirical applications in economics as well as in other social sciences. The original method has subsequently been generalized, by Heckman and by others. [6]

Statistical inference

The Heckman correction is a two-step M-estimator where the covariance matrix generated by OLS estimation of the second stage is inconsistent. [7] Correct standard errors and other statistics can be generated from an asymptotic approximation or by resampling, such as through a bootstrap. [8]
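One practical way to obtain valid standard errors is to bootstrap the entire two-step procedure, re-estimating both the probit and the second-stage regression on each resample. The sketch below illustrates this on simulated data; as above, the data-generating process and variable names are illustrative assumptions, not part of the method.

```python
# Bootstrapping the two-step estimator's standard errors (illustrative data).
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def heckman_two_step(d, z, x, y):
    """Return second-stage coefficients: const, beta on x, coefficient on the inverse Mills ratio."""
    Zmat = sm.add_constant(z)
    gamma = sm.Probit(d, Zmat).fit(disp=0).params
    zg = Zmat @ gamma
    imr = norm.pdf(zg) / norm.cdf(zg)                 # inverse Mills ratio
    m = d == 1
    Xmat = sm.add_constant(np.column_stack([x[m], imr[m]]))
    return sm.OLS(y[m], Xmat).fit().params

rng = np.random.default_rng(2)
n = 2000
z, x = rng.normal(size=n), rng.normal(size=n)
eps = rng.normal(size=n)
u = 0.5 * eps + np.sqrt(1 - 0.25) * rng.normal(size=n)
d = (0.5 + z + eps > 0).astype(int)
y = np.where(d == 1, 1.0 + 2.0 * x + u, np.nan)

# Resample whole observations and redo both stages each time.
boot = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    boot.append(heckman_two_step(d[idx], z[idx], x[idx], y[idx]))
se = np.std(np.array(boot), axis=0, ddof=1)           # bootstrap standard errors
print(se)
```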

Disadvantages

Implementations in statistics packages

Heckman-type selection models can be estimated in R with the sampleSelection package [13] [14] and in Stata with the heckman command. [15] [16]

See also

References

  1. Winship, Christopher; Mare, Robert D. (1992). "Models for Sample Selection Bias". Annual Review of Sociology. 18: 327–350. doi:10.1146/annurev.so.18.080192.001551.
  2. Heckman, James (1974). "Shadow Prices, Market Wages, and Labor Supply". Econometrica. 42 (4): 679–694. doi:10.2307/1913937. JSTOR 1913937.
  3. Heckman, James (1976). "The Common Structure of Statistical Models of Truncation, Sample Selection and Limited Dependent Variables and a Simple Estimator for Such Models". Annals of Economic and Social Measurement. 5 (4): 475–492.
  4. Nawata, Kazumitsu (1994). "Estimation of Sample Selection Bias Models by the Maximum Likelihood Estimator and Heckman's Two-Step Estimator". Economics Letters. 45 (1): 33–40. doi:10.1016/0165-1765(94)90053-1.
  5. Uchitelle, Louis (October 12, 2000). "2 Americans Win the Nobel For Economics". New York Times.
  6. Lee, Lung-Fei (2001). "Self-selection". In Baltagi, B. (ed.). A Companion to Theoretical Econometrics. Oxford: Blackwell. pp. 383–409. doi:10.1002/9780470996249.ch19. ISBN 9780470996249.
  7. Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 368–372. ISBN 0-674-00560-0.
  8. Cameron, A. Colin; Trivedi, Pravin K. (2005). "Sequential Two-Step m-Estimation". Microeconometrics: Methods and Applications. New York: Cambridge University Press. pp. 200–202. ISBN 0-521-84805-9.
  9. Puhani, P. (2000). "The Heckman Correction for sample selection and its critique". Journal of Economic Surveys. 14 (1): 53–68. doi:10.1111/1467-6419.00104.
  10. Goldberger, A. (1983). "Abnormal Selection Bias". In Karlin, Samuel; Amemiya, Takeshi; Goodman, Leo (eds.). Studies in Econometrics, Time Series, and Multivariate Statistics. New York: Academic Press. pp. 67–84. ISBN 0-12-398750-4.
  11. Newey, Whitney; Powell, J.; Walker, James R. (1990). "Semiparametric Estimation of Selection Models: Some Empirical Results". American Economic Review. 80 (2): 324–328. JSTOR 2006593.
  12. Lewbel, Arthur (2019). "The Identification Zoo: Meanings of Identification in Econometrics". Journal of Economic Literature. 57 (4): 835–903. doi:10.1257/jel.20181361. ISSN 0022-0515.
  13. Toomet, O.; Henningsen, A. (2008). "Sample Selection Models in R: Package sampleSelection". Journal of Statistical Software. 27 (7): 1–23. doi:10.18637/jss.v027.i07.
  14. "sampleSelection: Sample Selection Models". R Project. 3 May 2019.
  15. "heckman — Heckman selection model" (PDF). Stata Manual.
  16. Cameron, A. Colin; Trivedi, Pravin K. (2010). Microeconometrics Using Stata (Revised ed.). College Station: Stata Press. pp. 556–562. ISBN 978-1-59718-073-3.

Further reading