Criticisms of econometrics

Econometrics has faced many criticisms, directed both at its usefulness as a discipline and at perceived widespread methodological shortcomings in econometric modelling practice.

Difficulties in model specification

Like other forms of statistical analysis, badly specified econometric models may show a spurious correlation in which two variables are correlated but causally unrelated. Economist Ronald Coase is widely reported to have said, "if you torture the data long enough it will confess". [1] McCloskey argues that in published econometric work economists often fail to use economic reasoning when including or excluding variables, conflate statistical significance with substantive significance, and fail to report the statistical power of their findings. [2]
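
The spurious-correlation problem is easy to reproduce with two trending series that share no causal link, such as a pair of independent random walks. The following is a minimal pure-Python sketch; the sample size, seed, and variable names are illustrative assumptions, not anything drawn from the works cited here.

```python
import random

def random_walk(n, rng):
    """Cumulative sum of i.i.d. Gaussian steps: a trending, non-stationary series."""
    walk, level = [], 0.0
    for _ in range(n):
        level += rng.gauss(0, 1)
        walk.append(level)
    return walk

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(0)
# Two walks driven by entirely independent shocks: no causal link whatsoever.
x = random_walk(200, rng)
y = random_walk(200, rng)
print(f"correlation of two unrelated random walks: {pearson_r(x, y):+.2f}")
```

Re-running this with different seeds shows that large correlations between causally unrelated trending series are the rule rather than the exception, which is why a regression on non-stationary data can "confess" to almost anything.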

Economic variables are not readily isolated for experimental testing, but Edward Leamer argues that there is no essential difference between econometric analysis and randomized or controlled trials, provided that statistical techniques reduce specification bias and the effects of collinearity between the variables to the same order as the uncertainty due to the sample size. [3]
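
Leamer's comparison between collinearity effects and sample-size uncertainty can be illustrated by simulation: when two regressors are highly collinear, the spread of the estimated coefficient widens much as it would with a far smaller sample. This is a hedged sketch under assumed parameter values (sample size, correlation, replication count), not a reconstruction of Leamer's own calculations.

```python
import random

def ols2_b1(y, x1, x2):
    """Closed-form OLS coefficient on x1 in y = b1*x1 + b2*x2 + e."""
    n = len(y)
    def cross(u, v):
        mu, mv = sum(u) / n, sum(v) / n
        return sum((a - mu) * (b - mv) for a, b in zip(u, v))
    s11, s22, s12 = cross(x1, x1), cross(x2, x2), cross(x1, x2)
    s1y, s2y = cross(x1, y), cross(x2, y)
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 ** 2)

def sd_of_b1(rho, n=100, reps=200, seed=0):
    """Empirical spread of the x1 coefficient when corr(x1, x2) = rho."""
    rng = random.Random(seed)
    est = []
    for _ in range(reps):
        x1 = [rng.gauss(0, 1) for _ in range(n)]
        x2 = [rho * a + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1) for a in x1]
        y = [a + b + rng.gauss(0, 1) for a, b in zip(x1, x2)]  # true b1 = b2 = 1
        est.append(ols2_b1(y, x1, x2))
    m = sum(est) / reps
    return (sum((e - m) ** 2 for e in est) / reps) ** 0.5

print(f"sd of b1, orthogonal regressors: {sd_of_b1(0.0):.3f}")
print(f"sd of b1, corr(x1, x2) = 0.95:   {sd_of_b1(0.95):.3f}")
```

With correlation 0.95 the coefficient's standard deviation is roughly three times larger than in the orthogonal case, the same inflation a drastically reduced sample would produce.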

Economists are often faced with a large number of frequently highly collinear potential explanatory variables, leaving researcher bias to play an important role in their selection. Leamer argues that economists can mitigate this by running statistical tests across differently specified models and discarding any inferences that prove "fragile", concluding that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions". [4] However, as Sala-i-Martin showed, [5] it is often possible to specify two models that suggest contrary relationships between the same two variables. Robert Goldfarb labeled this the phenomenon of emerging contrary results. [6]
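
How two defensible specifications can support contrary conclusions about the same pair of variables is easy to demonstrate. In the illustrative simulation below (an assumption-laden sketch, not any published data-generating process), x has no true effect on y, yet the specification that omits the confounder z reports a strong positive effect:

```python
import random

def slope(y, x):
    """OLS slope from a simple regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def residuals(v, z):
    """Residuals of v after a simple regression on z (partialling-out step)."""
    b = slope(v, z)
    mv, mz = sum(v) / len(v), sum(z) / len(z)
    return [a - mv - b * (c - mz) for a, c in zip(v, z)]

rng = random.Random(1)
n = 5000
z = [rng.gauss(0, 1) for _ in range(n)]        # confounder
x = [a + rng.gauss(0, 1) for a in z]           # x has NO true effect on y
y = [2 * a + rng.gauss(0, 1) for a in z]       # y is driven by z alone

b_naive = slope(y, x)                              # specification omitting z
b_ctrl = slope(residuals(y, z), residuals(x, z))  # specification controlling for z
print(f"effect of x without control: {b_naive:+.2f}")   # roughly +1.0
print(f"effect of x with control:    {b_ctrl:+.2f}")    # roughly  0.0
```

In Leamer's terms, the inference about x is fragile: it does not survive a reasonable change of assumptions, so belief should be withheld.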

Lucas critique

Robert Lucas criticised the use of overly simplistic econometric models of the macroeconomy to predict the implications of economic policy, arguing that the structural relationships observed in historical models break down if decision makers adjust their behaviour to reflect policy changes. Lucas argued that policy conclusions drawn from contemporary large-scale macroeconometric models were invalid, as economic actors would change their expectations of the future and adjust their behaviour accordingly.

Lucas argued that a good macroeconometric model should incorporate microfoundations to model the effects of policy change, with equations representing representative agents who respond to economic changes on the basis of rational expectations of the future, implying that their pattern of behaviour might be quite different if economic policy changed.

Modern complex econometric models tend to be designed with the Lucas critique and rational expectations in mind, but Robert Solow argued that some of these modern dynamic stochastic general equilibrium models were no better, since the assumptions they made about economic behaviour at the micro level were "generally phony". [7]

Other mainstream critiques

Looking primarily at macroeconomics, Lawrence Summers has criticized econometric formalism, arguing that "the empirical facts of which we are most confident and which provide the most secure basis for theory are those that require the least sophisticated statistical analysis to perceive." He examines two highly praised macroeconometric studies (Hansen & Singleton (1982, 1983) and Bernanke (1986)) and argues that while both make brilliant use of econometric methods, neither paper really proves anything that future theory can build on. Noting that in the natural sciences "investigators rush to check out the validity of claims made by rival laboratories and then build on them," Summers points out that this rarely happens in economics, which to him results from the fact that "the results [of econometric studies] are rarely an important input to theory creation or the evolution of professional opinion more generally." To Summers: [8]

Successful empirical research has been characterized by attempts to gauge the strength of associations rather than to estimate structural parameters, verbal characterizations of how causal relations might operate rather than explicit mathematical models, and the skillful use of carefully chosen natural experiments rather than sophisticated statistical technique to achieve identification.

Austrian School critique

The present-day Austrian School of economics typically rejects much of econometric modeling. The historical data used to build econometric models, they claim, represent behavior under circumstances idiosyncratic to the past; econometric models therefore show correlational, not causal, relationships. Econometricians have addressed this criticism by adopting quasi-experimental methodologies. Austrian school economists remain skeptical of these corrected models, maintaining that statistical methods are unsuited to the social sciences. [9]

The Austrian School holds that the counterfactual must be known for a causal relationship to be established. The changes due to the counterfactual could then be subtracted from the observed changes, leaving only the changes caused by the variable of interest. Meeting this critique with historical data is very challenging, since "there is no dependable method for ascertaining the uniquely correct counterfactual". [10] For non-historical data, the Austrian critique is met with randomized controlled trials: the control group acts as the counterfactual, since it experiences, on average, what the treatment group would have experienced had it not been treated. This is the footing on which parametric statistics (in the Gaussian sense) rests. Randomized controlled trials must be purposefully prepared, which historical data are not. [11] The use of randomized controlled trials is becoming more common in social science research. In the United States, for example, the Education Sciences Reform Act of 2002 made funding for education research contingent on scientific validity, defined in part as "experimental designs using random assignment, when feasible". [12] In answering questions of causation, parametric statistics therefore addresses the Austrian critique only in randomized controlled trials.
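
The counterfactual logic of a randomized trial can be made concrete in a few lines. In this sketch (effect size, sample size, and seed are illustrative assumptions), each unit carries two potential outcomes but only one is ever observed; random assignment makes the control group's average a valid stand-in for the treated group's unobserved counterfactual.

```python
import random

rng = random.Random(2)
n = 10000

# Each unit has two potential outcomes; only one is ever observed.
y0 = [rng.gauss(0, 1) for _ in range(n)]   # outcome if untreated
y1 = [a + 2.0 for a in y0]                 # outcome if treated (true effect: 2.0)

# Random assignment: treatment is independent of the potential outcomes,
# so the control group's mean estimates the treated group's counterfactual.
treat = [rng.random() < 0.5 for _ in range(n)]
observed_t = [b for b, t in zip(y1, treat) if t]
observed_c = [a for a, t in zip(y0, treat) if not t]

est = sum(observed_t) / len(observed_t) - sum(observed_c) / len(observed_c)
print(f"difference in means: {est:.2f}")   # close to the true effect of 2.0
```

Without the random assignment step, nothing in the observed data guarantees that the control mean approximates the missing counterfactual, which is precisely the gap the Austrian critique points to for historical data.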

If the data are not from a randomized controlled trial, econometricians meet the Austrian critique with quasi-experimental methodologies. These methodologies attempt to extract the counterfactual post hoc so that the tools of parametric statistics can be justifiably applied. Since parametric statistics depends on observations following a Gaussian distribution, which the central limit theorem guarantees only under a randomization methodology, tools such as the confidence interval are used outside their specification: the amount of selection bias always remains unknown. [13] The better a quasi-experimental method approximates a randomized controlled trial, the smaller this selection bias, but the methods are not rigorous, and one cannot deduce precisely how far off the familiar parametric measures, such as power and statistical significance, will be when they are calculated under these additional assumptions. When parametric statistics are used beyond their specifications, econometricians argue that the insight will exceed the inaccuracy, while Austrians argue that the inaccuracy will exceed the insight. A historical example of this debate is the Frisch–Leontief "Pitfalls" debate, with Frisch holding the Austrian position and Leontief the econometric position. [14] Structural causal modeling, an emerging discipline originating with the work of Judea Pearl, attempts to formalize the limitations of quasi-experimental methods from a causality perspective, allowing experimenters to quantify precisely the risks of quasi-experimental research.
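
The unknown-selection-bias problem can be illustrated by letting units self-select into treatment on an unobserved trait. In this hedged sketch (the "ability" confounder, effect size, and selection rule are invented for illustration), the naive difference in means roughly doubles the true effect, and nothing in the observed data reveals by how much:

```python
import random

rng = random.Random(3)
n = 20000
TRUE_EFFECT = 1.0

ability = [rng.uniform(-1, 1) for _ in range(n)]   # unobserved confounder
y0 = [a + rng.gauss(0, 0.3) for a in ability]      # untreated outcome tracks ability
y1 = [a + TRUE_EFFECT for a in y0]                 # treatment adds a constant effect

# Self-selection instead of randomization: high-ability units take the treatment.
treat = [a > 0 for a in ability]
obs_t = [b for b, t in zip(y1, treat) if t]
obs_c = [a for a, t in zip(y0, treat) if not t]

naive = sum(obs_t) / len(obs_t) - sum(obs_c) / len(obs_c)
print(f"true effect:    {TRUE_EFFECT:.2f}")
print(f"naive estimate: {naive:.2f}")   # roughly 2.0: selection bias of about 1.0 is mixed in
```

A quasi-experimental design aims to strip this bias out after the fact, but since the analyst never observes the confounder, the residual bias cannot be verified from the data alone, which is the crux of the disagreement described above.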

Notes

  1. Gordon Tullock, "A Comment on Daniel Klein's 'A Plea to Economists Who Favor Liberty'", Eastern Economic Journal, Spring 2001, note 2 (Text: "As Ronald Coase says, 'if you torture the data long enough it will confess'." Note: "I have heard him say this several times. So far as I know he has never published it.")
  2. McCloskey, D.N. (May 1985). "The Loss Function has been mislaid: the Rhetoric of Significance Tests" (PDF). American Economic Review. 75 (2): 201–205.
  3. Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review. 73 (1): 31–43. JSTOR   1803924.
  4. Leamer, Edward (March 1983). "Let's Take the Con out of Econometrics". American Economic Review. 73 (1): 31–43. JSTOR   1803924.
  5. Sala-i-Martin, Xavier X. (November 1997). "I Just Ran Four Million Regressions".
  6. Goldfarb, Robert S. (December 1997). "Now you see it, now you don't: emerging contrary results in economics". Journal of Economic Methodology. 4 (2): 221–244. doi:10.1080/13501789700000016. ISSN   1350-178X.
  7. Solow, R. (2010). "Building a Science of Economics for the Real World". Archived February 4, 2011, at the Wayback Machine. Prepared statement of Robert Solow, Professor Emeritus, MIT, to the House Committee on Science and Technology, Subcommittee on Investigations and Oversight, July 20, 2010.
  8. Summers, Lawrence (June 1991). "The Scientific Illusion in Empirical Macroeconomics". Scandinavian Journal of Economics. 93 (2): 129–148. doi:10.2307/3440321. JSTOR   3440321.
  9. Garrison, Roger. "Mises and His Methods", in The Meaning of Ludwig von Mises: Contributions in Economics, Sociology, Epistemology, and Political Philosophy, ed. Herbener, pp. 102–117.
  10. DeMartino, George F. (2021). "The specter of irreparable ignorance: counterfactuals and causality in economics". Review of Evolutionary Political Economy. 2 (2): 253–276. doi:10.1007/s43253-020-00029-w. ISSN   2662-6136. PMC   7792558 .
  11. Angrist, Joshua; Pischke, Jörn-Steffen (15 December 2008). Mostly Harmless Econometrics. Princeton University Press. ISBN   978-1400829828.
  12. Education Sciences Reform Act of 2002, Pub. L. 107–279; Approved Nov. 5, 2002; 116 Stat. 1941, As Amended Through P.L. 117–286, Enacted December 27, 2022 "https://www.govinfo.gov/content/pkg/COMPS-747/pdf/COMPS-747.pdf"
  13. Harris, Anthony D.; McGregor, Jessina C.; Perencevich, Eli N.; Furuno, Jon P.; Zhu, Jingkun; Peterson, Dan E.; Finkelstein, Joseph (2006). "The Use and Interpretation of Quasi-Experimental Studies in Medical Informatics". Journal of the American Medical Informatics Association : JAMIA. 13 (1): 16–23. doi:10.1197/jamia.M1749. ISSN   1067-5027. PMC   1380192 . PMID   16221933.
  14. Leontief, Wassily W. (1934). "Pitfalls in the Construction of Demand and Supply Curves: A Reply". The Quarterly Journal of Economics. 48 (2): 355–361. doi:10.2307/1885615. ISSN   0033-5533. JSTOR   1885615.
