# Econometrics

Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference". An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships". The first known use of the term "econometrics" (in cognate form) was by Polish economist Paweł Ciompa in 1910. Jan Tinbergen is considered by many to be one of the founding fathers of econometrics. Ragnar Frisch is credited with coining the term in the sense in which it is used today.


## Basic models: linear regression

A basic tool for econometrics is the multiple linear regression model. In modern econometrics, other statistical tools are frequently used, but linear regression remains the most common starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables; the fitted line is found using regression analysis.

For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in unemployment rate ($\Delta \ {\text{Unemployment}}$ ) is a function of an intercept ($\beta _{0}$ ), a given value of GDP growth multiplied by a slope coefficient $\beta _{1}$ and an error term, $\varepsilon$ :

$\Delta \ {\text{Unemployment}}=\beta _{0}+\beta _{1}{\text{Growth}}+\varepsilon .$

The unknown parameters $\beta _{0}$ and $\beta _{1}$ can be estimated. Here $\beta _{1}$ is estimated to be −1.77 and $\beta _{0}$ is estimated to be 0.83. This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 points. The model could then be tested for statistical significance as to whether an increase in GDP growth is associated with a decrease in the unemployment rate, as hypothesized. If the estimate of $\beta _{1}$ were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and the unemployment rate were related. The variance of a prediction of the dependent variable (unemployment) as a function of the independent variable (GDP growth) is given in the article on polynomial least squares.
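The estimation behind such numbers can be sketched in a few lines. The snippet below is illustrative only: it uses synthetic data generated so that the true coefficients match the estimates quoted above (0.83 and −1.77), then recovers them by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: annual GDP growth (%) and change in the unemployment
# rate (points), generated so the true intercept is 0.83 and the true
# slope is -1.77.
growth = rng.uniform(-2.0, 6.0, size=50)
d_unemployment = 0.83 - 1.77 * growth + rng.normal(0.0, 0.5, size=50)

# Fit  delta_unemployment = b0 + b1 * growth  by ordinary least squares.
X = np.column_stack([np.ones_like(growth), growth])
beta, *_ = np.linalg.lstsq(X, d_unemployment, rcond=None)
b0, b1 = beta
print(f"intercept = {b0:.2f}, slope = {b1:.2f}")  # estimates near 0.83 and -1.77
```

Because the noise is small relative to the variation in growth, the estimates land close to the values used to generate the data.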

## Theory

Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties including unbiasedness, efficiency, and consistency. An estimator is unbiased if its expected value is the true value of the parameter; it is consistent if it converges to the true value as the sample size gets larger; and it is efficient if it has a lower standard error than other unbiased estimators for a given sample size. Ordinary least squares (OLS) is often used for estimation since it provides the BLUE or "best linear unbiased estimator" (where "best" means most efficient, unbiased estimator) given the Gauss–Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used. Estimators that incorporate prior beliefs are advocated by those who favour Bayesian statistics over traditional, classical or "frequentist" approaches.
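Unbiasedness and consistency can be illustrated with a small Monte Carlo sketch (synthetic data; the model y = βx + ε with β = 2 is an arbitrary choice for illustration): averaging the OLS estimate over many small samples shows unbiasedness, while a single large sample shows consistency.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = 2.0

def ols_slope(n):
    """One draw: estimate the slope of y = beta*x + e by OLS on n observations."""
    x = rng.normal(size=n)
    y = beta_true * x + rng.normal(size=n)
    return (x @ y) / (x @ x)  # OLS slope for a no-intercept model

# Unbiasedness: the average estimate over many samples is close to the true value,
# even though each small-sample estimate is noisy.
small = np.array([ols_slope(20) for _ in range(2000)])
print(f"mean over 2000 samples (n=20): {small.mean():.3f}")  # close to 2.0

# Consistency: a single estimate from a large sample is close to the true value.
big = ols_slope(200_000)
print(f"one large sample (n=200000): {big:.3f}")  # close to 2.0
```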

## Methods

Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting. 

Econometrics may use standard statistical models to study economic questions, but most often these models are applied to observational data, rather than data from controlled experiments. In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology, sociology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses. Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. Consequently, the field of econometrics has developed methods for identification and estimation of simultaneous equations models. These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system.
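The identification problem for simultaneous equations can be sketched with a toy supply-and-demand system (all numbers hypothetical). Because price and quantity are determined jointly, regressing quantity on price does not recover the demand slope; an observed supply shifter, used as an instrument, does.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Structural model (hypothetical numbers): demand  q = 10 - 1.0*p + u_d,
# supply  q = 2 + 0.5*p + 1.0*z + u_s, with z an observed supply shifter.
u_d = rng.normal(0, 1, n)
u_s = rng.normal(0, 1, n)
z = rng.normal(0, 1, n)

# Equilibrium price solves the two equations simultaneously.
p = (10 - 2 - 1.0 * z + u_d - u_s) / (1.0 + 0.5)
q = 10 - 1.0 * p + u_d

def slope(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

# OLS of quantity on price is biased: price is correlated with the demand shock u_d.
ols_est = slope(p, q)
print(f"OLS slope: {ols_est:.2f}")  # not -1.0

# Instrumental variables: z shifts supply but not demand, so
# cov(z, q) / cov(z, p) recovers the demand slope.
zc, pc, qc = z - z.mean(), p - p.mean(), q - q.mean()
iv_est = (zc @ qc) / (zc @ pc)
print(f"IV slope:  {iv_est:.2f}")  # close to -1.0
```

With these parameter values the OLS slope converges to −0.5 rather than the true demand slope of −1.0, while the instrumental-variables ratio is consistent for −1.0.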

One of the fundamental statistical methods used by econometricians is regression analysis. Regression methods are important in econometrics because economists typically cannot use controlled experiments. Econometricians often seek illuminating natural experiments in the absence of evidence from controlled experiments. Observational data may be subject to omitted-variable bias and other problems that must be addressed using causal analysis of simultaneous-equation models.

In addition to natural experiments, quasi-experimental methods have been used increasingly by econometricians since the 1980s, in order to credibly identify causal effects.
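One common quasi-experimental design is difference-in-differences, which can be sketched as follows (synthetic data; the group levels, common trend, and treatment effect are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical quasi-experiment: a policy hits a treated group in period 1.
# Outcomes share a common trend (+2.0) and the true treatment effect is 1.5.
treated = rng.integers(0, 2, n)          # group indicator (0 = control, 1 = treated)
group_level = 4.0 * treated              # treated group starts at a higher level
y0 = 10.0 + group_level + rng.normal(0, 1, n)                          # before
y1 = 10.0 + 2.0 + group_level + 1.5 * treated + rng.normal(0, 1, n)    # after

# Naive after-minus-before for the treated group mixes in the common trend.
naive = (y1 - y0)[treated == 1].mean()

# Difference-in-differences subtracts the control group's change,
# removing the trend and isolating the treatment effect.
did = naive - (y1 - y0)[treated == 0].mean()
print(f"naive: {naive:.2f}, diff-in-diff: {did:.2f}")  # ~3.5 vs ~1.5
```

The naive comparison attributes the common trend to the policy; differencing against the control group removes it, under the (untestable) assumption that both groups would have followed parallel trends without treatment.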

## Example

A simple example of a relationship in econometrics from the field of labour economics is:

$\ln({\text{wage}})=\beta _{0}+\beta _{1}({\text{years of education}})+\varepsilon .$

This example assumes that the natural logarithm of a person's wage is a linear function of the number of years of education that person has acquired. The parameter $\beta _{1}$ measures the increase in the natural log of the wage attributable to one more year of education. The term $\varepsilon$ is a random variable representing all other factors that may have a direct influence on wage. The econometric goal is to estimate the parameters $\beta _{0}$ and $\beta _{1}$ under specific assumptions about the random variable $\varepsilon$. For example, if $\varepsilon$ is uncorrelated with years of education, then the equation can be estimated with ordinary least squares.
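Under that assumption, the estimation can be sketched as follows (synthetic data; the 8% return per year of education is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Hypothetical: the true return to a year of education is 8% (beta1 = 0.08),
# and the error term is drawn independently of education, so OLS is consistent.
educ = rng.integers(8, 21, n).astype(float)
log_wage = 1.5 + 0.08 * educ + rng.normal(0, 0.4, n)

# Ordinary least squares on  ln(wage) = b0 + b1 * educ + e.
X = np.column_stack([np.ones(n), educ])
b0, b1 = np.linalg.lstsq(X, log_wage, rcond=None)[0]
print(f"estimated return per year of education: {b1:.3f}")  # near 0.08
```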

If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. In reality, those experiments cannot be conducted. Instead, the econometrician observes the years of education of and the wages paid to people who differ along many dimensions. Given this kind of data, the estimated coefficient on years of education in the equation above reflects both the effect of education on wages and the effect of other variables on wages, to the extent that those other variables are correlated with education. For example, people born in certain places may have higher wages and higher levels of education. Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages.

The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. Exclusion of birthplace, together with the assumption that $\varepsilon$ is uncorrelated with education, produces a misspecified model. Another technique is to include in the equation an additional set of measured covariates which are not instrumental variables, yet render $\beta _{1}$ identifiable. An overview of econometric methods used to study this problem was provided by Card (1999).
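The bias from omitting birthplace can be sketched with simulated data (all numbers hypothetical): when a factor raises both education and wages, the short regression overstates the return to education, while controlling for the factor recovers it.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Hypothetical confounder: a "birthplace" factor that raises both education
# and wages. The true return to a year of education is 0.08.
birthplace = rng.normal(0, 1, n)
educ = 12 + 2.0 * birthplace + rng.normal(0, 2, n)
log_wage = 1.5 + 0.08 * educ + 0.10 * birthplace + rng.normal(0, 0.4, n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
short_fit = ols(np.column_stack([ones, educ]), log_wage)              # omits birthplace
long_fit = ols(np.column_stack([ones, educ, birthplace]), log_wage)   # controls for it
print(f"omitting birthplace: {short_fit[1]:.3f}")  # biased upward, ~0.105
print(f"controlling for it:  {long_fit[1]:.3f}")   # near 0.08
```

With these parameter values the omitted-variable bias is 0.10 × cov(educ, birthplace)/var(educ) = 0.025, so the short regression converges to about 0.105 instead of 0.08.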

## Journals

The main journals that publish work in econometrics are Econometrica, the Journal of Econometrics, The Review of Economics and Statistics, Econometric Theory, the Journal of Applied Econometrics, Econometric Reviews, The Econometrics Journal, Applied Econometrics and International Development, and the Journal of Business & Economic Statistics.

## Limitations and criticisms

Like other forms of statistical analysis, badly specified econometric models may show a spurious relationship where two variables are correlated but causally unrelated. In a study of the use of econometrics in major economics journals, McCloskey concluded that some economists report p-values (following the Fisherian tradition of tests of significance of point null hypotheses) while neglecting concerns about type II errors; some economists fail to report estimates of the size of effects (apart from statistical significance) or to discuss their economic importance. She also argues that some economists fail to use economic reasoning for model selection, especially when deciding which variables to include in a regression.

In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects.  In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Regarding the plurality of models compatible with observational data-sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions". 

## Further reading

• Econometric Theory book on Wikibooks
• Giovannini, Enrico. Understanding Economic Statistics, OECD Publishing, 2008, ISBN 978-92-64-03312-2


## References

1. M. Hashem Pesaran (1987). "Econometrics," The New Palgrave: A Dictionary of Economics, v. 2, p. 8 [pp. 8–22]. Reprinted in J. Eatwell et al., eds. (1990). Econometrics: The New Palgrave, p. 1 [pp. 1–34]. Abstract Archived 18 May 2012 at the Wayback Machine (2008 revision by J. Geweke, J. Horowitz, and H. P. Pesaran).
2. P. A. Samuelson, T. C. Koopmans, and J. R. N. Stone (1954). "Report of the Evaluative Committee for Econometrica," Econometrica 22(2), p. 142 [pp. 141–146], as described and cited in Pesaran (1987) above.
3. Paul A. Samuelson and William D. Nordhaus, 2004. Economics . 18th ed., McGraw-Hill, p. 5.
4. "Archived copy". Archived from the original on 2 May 2014. Retrieved 1 May 2014.CS1 maint: archived copy as title (link)
5. "1969 - Jan Tinbergen: Nobelprijs economie - Elsevierweekblad.nl". elsevierweekblad.nl. 12 October 2015. Archived from the original on 1 May 2018. Retrieved 1 May 2018.
6. Magnus, Jan & Mary S. Morgan (1987). "The ET Interview: Professor J. Tinbergen," Econometric Theory 3, 117–142.
7. Willekens, Frans (2008). International Migration in Europe: Data, Models and Estimates. New Jersey: John Wiley & Sons, p. 117.
8. • H. P. Pesaran (1990), "Econometrics," Econometrics: The New Palgrave, p. 2, citing Ragnar Frisch (1936), "A Note on the Term 'Econometrics'," Econometrica, 4(1), p. 95.
• Aris Spanos (2008), "statistics and economics," The New Palgrave Dictionary of Economics , 2nd Edition. Abstract. Archived 18 May 2012 at the Wayback Machine
9. Greene, William (2012). "Chapter 1: Econometrics". Econometric Analysis (7th ed.). Pearson Education. pp. 47–48. ISBN   9780273753568. Ultimately, all of these will require a common set of tools, including, for example, the multiple regression model, the use of moment conditions for estimation, instrumental variables (IV) and maximum likelihood estimation. With that in mind, the organization of this book is as follows: The first half of the text develops fundamental results that are common to all the applications. The concept of multiple regression and the linear regression model in particular constitutes the underlying platform of most modeling, even if the linear model itself is not ultimately used as the empirical specification.
10. Greene, William (2012). Econometric Analysis (7th ed.). Pearson Education. pp. 34, 41–42. ISBN   9780273753568.
11. Wooldridge, Jeffrey (2012). "Chapter 1: The Nature of Econometrics and Economic Data". Introductory Econometrics: A Modern Approach (5th ed.). South-Western Cengage Learning. p. 2. ISBN   9781111531041.
12. Clive Granger (2008). "forecasting," The New Palgrave Dictionary of Economics, 2nd Edition. Abstract. Archived 18 May 2012 at the Wayback Machine
13. Wooldridge, Jeffrey (2013). Introductory Econometrics, A modern approach. South-Western, Cengage learning. ISBN   978-1-111-53104-1.
14. Herman O. Wold (1969). "Econometrics as Pioneering in Nonexperimental Model Building," Econometrica, 37(3), pp. 369-381.
15. For an overview of a linear implementation of this framework, see linear regression.
16. Edward E. Leamer (2008). "specification problems in econometrics," The New Palgrave Dictionary of Economics. Abstract. Archived 23 September 2015 at the Wayback Machine
17. Angrist, Joshua D.; Pischke, Jörn-Steffen (May 2010). "The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con out of Econometrics". Journal of Economic Perspectives. 24 (2): 3–30. ISSN 0895-3309.
18. Pearl, Judea (2000). Cambridge University Press. ISBN 978-0521773621.
19. Card, David (1999). "The Causal Effect of Education on Earning". In Ashenfelter, O.; Card, D. (eds.). Handbook of Labor Economics. Amsterdam: Elsevier. pp. 1801–1863. ISBN   978-0444822895.
20. "The Econometrics Journal – Wiley Online Library". Wiley.com. Retrieved 8 October 2013.
21. McCloskey (May 1985). "The Loss Function has been mislaid: the Rhetoric of Significance Tests". American Economic Review. 75 (2).
22. Stephen T. Ziliak and Deirdre N. McCloskey (2004). "Size Matters: The Standard Error of Regressions in the American Economic Review," Journal of Socio-economics, 33(5), pp. 527–546. Archived 25 June 2010 at the Wayback Machine.
23. Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review. 73 (1): 31–43. JSTOR   1803924.