There have been many criticisms of the usefulness of econometrics as a discipline and of perceived widespread methodological shortcomings in econometric modelling practice.
Like other forms of statistical analysis, badly specified econometric models may show a spurious correlation where two variables are correlated but causally unrelated. Economist Ronald Coase is widely reported to have said "if you torture the data long enough it will confess". [1] McCloskey argues that in published econometric work, economists often fail to use economic reasoning for including or excluding variables, conflate statistical significance with substantive significance, and fail to report the statistical power of their findings. [2]
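A purely illustrative simulation (not drawn from the cited sources) shows how easily a spurious correlation can arise: two independently generated random walks are causally unrelated by construction, yet their sample correlation is frequently large in absolute value.

```python
import numpy as np

# Illustrative simulation (not from the cited sources): two independent random
# walks are causally unrelated by construction, yet their sample correlation is
# frequently sizeable.
rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))   # first random walk
y = np.cumsum(rng.normal(size=n))   # second, independent random walk

r = np.corrcoef(x, y)[0, 1]
print(f"sample correlation between causally unrelated series: {r:.2f}")
```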
Economic variables are not readily isolated for experimental testing, but Edward Leamer argues that there is no essential difference between econometric analysis and randomized or controlled trials, provided statistical techniques reduce the specification bias (the effects of collinearity between the variables) to the same order as the uncertainty due to the sample size. [3]
Economists are often faced with a high number of often highly collinear potential explanatory variables, leaving researcher bias to play an important role in their selection. Leamer argues that economists can mitigate this by running statistical tests with differently specified models and discarding any inferences which prove to be "fragile", concluding that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions". [4] However, as Sala-i-Martin [5] showed, it is often possible to specify two models that suggest contrary relationships between two variables. Robert Goldfarb labelled this the 'emerging recalcitrant result' phenomenon. [6]
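A minimal sketch of the kind of sensitivity (extreme-bounds style) check Leamer advocates is given below. The data and variable names are hypothetical, and a real application would also report standard errors and use a more structured set of candidate specifications.

```python
# Illustrative Leamer-style sensitivity check: re-estimate the coefficient on a
# focus variable under every subset of candidate controls and report the range
# of estimates; an inference is "fragile" if the range is wide or straddles zero.
from itertools import combinations
import numpy as np

def coef_range(y, focus, controls):
    """Return (min, max) OLS coefficient on `focus` across all control subsets."""
    estimates = []
    names = list(controls)
    for k in range(len(names) + 1):
        for subset in combinations(names, k):
            X = np.column_stack([np.ones_like(focus), focus] +
                                [controls[c] for c in subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            estimates.append(beta[1])          # coefficient on the focus variable
    return min(estimates), max(estimates)

# Hypothetical data in which the focus variable is collinear with one control.
rng = np.random.default_rng(1)
n = 500
z1, z2 = rng.normal(size=n), rng.normal(size=n)
focus = 0.8 * z1 + rng.normal(size=n)
y = 0.5 * focus + 1.0 * z1 + rng.normal(size=n)
print(coef_range(y, focus, {"z1": z1, "z2": z2}))
```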
Robert Lucas criticised the use of overly simplistic econometric models of the macroeconomy to predict the implications of economic policy, arguing that the structural relationships observed in historical models break down if decision makers adjust their preferences to reflect policy changes. Lucas argued that policy conclusions drawn from contemporary large-scale macroeconometric models were invalid as economic actors would change their expectations of the future and adjust their behaviour accordingly.
Lucas argued that a good macroeconometric model should incorporate microfoundations to model the effects of policy change, with equations representing representative economic agents responding to economic changes based on rational expectations of the future, implying that their pattern of behaviour might be quite different if economic policy changed.
Modern complex econometric models tend to be designed with the Lucas critique and rational expectations in mind, but Robert Solow argued that some of these modern dynamic stochastic general equilibrium models were no better, as the assumptions they made about economic behaviour at the micro level were "generally phony". [7]
Looking primarily at macroeconomics, Lawrence Summers has criticized econometric formalism, arguing that "the empirical facts of which we are most confident and which provide the most secure basis for theory are those that require the least sophisticated statistical analysis to perceive." He looks at two highly praised macroeconometric studies (Hansen & Singleton (1982, 1983) and Bernanke (1986)), and argues that while both make brilliant use of econometric methods, neither paper really proves anything that future theory can build on. Noting that in the natural sciences, "investigators rush to check out the validity of claims made by rival laboratories and then build on them," Summers points out that this rarely happens in economics, which to him is a result of the fact that "the results [of econometric studies] are rarely an important input to theory creation or the evolution of professional opinion more generally." To Summers: [8]
Successful empirical research has been characterized by attempts to gauge the strength of associations rather than to estimate structural parameters, verbal characterizations of how causal relations might operate rather than explicit mathematical models, and the skillful use of carefully chosen natural experiments rather than sophisticated statistical technique to achieve identification.
The current-day Austrian School of Economics typically rejects much of econometric modeling. The historical data used to make econometric models, they claim, represents behavior under circumstances idiosyncratic to the past; thus econometric models show correlational, not causal, relationships. Econometricians have addressed this criticism by adopting quasi-experimental methodologies. Austrian school economists remain skeptical of these corrected models, continuing in their belief that statistical methods are unsuited for the social sciences. [9]
The Austrian School holds that the counterfactual must be known for a causal relationship to be established. The changes due to the counterfactual could then be extracted from the observed changes, leaving only the changes caused by the variable. Meeting this critique is very challenging since "there is no dependable method for ascertaining the uniquely correct counterfactual" for historical data. [10] For non-historical data, the Austrian critique is met with randomized controlled trials. In randomized controlled trials, the control group acts as the counterfactual since it experiences, on average, what the treatment group would have experienced had it not been treated. This is the foundation on which parametric statistics (in the Gaussian sense) rests. Randomized controlled trials must be purposefully prepared, which historical data are not. [11] The use of randomized controlled trials is becoming more common in social science research. In the United States, for example, the Education Sciences Reform Act of 2002 made funding for education research contingent on scientific validity, defined in part as "experimental designs using random assignment, when feasible." [12] In answering questions of causation, parametric statistics only addresses the Austrian critique in randomized controlled trials.
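A minimal sketch with simulated (hypothetical) data illustrates the point: under random assignment the control group's average outcome stands in for the counterfactual, so a simple difference in means recovers the average treatment effect.

```python
# Sketch with simulated data: under random assignment the control-group mean
# approximates the counterfactual, so a difference in means estimates the
# average treatment effect (the true effect here is 2.0 by construction).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
ability = rng.normal(size=n)                         # unobserved characteristic
treated = rng.integers(0, 2, size=n).astype(bool)    # random assignment
outcome = ability + 2.0 * treated + rng.normal(size=n)

ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated average treatment effect: {ate_hat:.2f}")
```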
If the data are not from a randomized controlled trial, econometricians meet the Austrian critique with quasi-experimental methodologies. These methodologies attempt to extract the counterfactual post hoc so that the use of the tools of parametric statistics is justified. Since parametric statistics depends on observations following a Gaussian distribution, which is only guaranteed by the central limit theorem under a randomization methodology, tools such as the confidence interval are then used outside of their specification: the amount of selection bias will always be unknown. [13] A quasi-experimental method that better approximates a randomized controlled trial will reduce this selection bias, but these methods are not rigorous, and one cannot deduce precisely how incorrect the familiar parametric measures such as power and statistical significance will be when they are calculated under these additional assumptions. When parametric statistics are used beyond their specifications, econometricians argue that the insight will exceed the inaccuracy, while Austrians argue that the inaccuracy will exceed the insight. A historical example of this debate is the Frisch–Leontief "Pitfalls" debate, with Frisch holding the Austrian position and Leontief holding the econometric position. [14] Structural causal modeling, an emerging discipline originating with the work of Judea Pearl, attempts to formalize the limitations of quasi-experimental methods from a causality perspective, allowing experimenters to quantify more precisely the risks of quasi-experimental research.
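Continuing the illustrative simulation above, if treatment is instead self-selected on the unobserved characteristic, the naive difference in means mixes the true effect with a selection bias whose size an analyst who cannot observe that characteristic has no way to determine.

```python
# Sketch: when assignment depends on the unobserved characteristic, the naive
# difference in means mixes the true effect (2.0) with selection bias whose
# size is unknown to an analyst who cannot observe `ability`.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
ability = rng.normal(size=n)
treated = ability + rng.normal(size=n) > 0     # self-selection, not randomisation
outcome = ability + 2.0 * treated + rng.normal(size=n)

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive estimate (true effect plus selection bias): {naive:.2f}")
```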
Kennedy (1998, pp. 1–2) reports econometricians as being accused of using sledgehammers to crack open peanuts: that is, they use a wide range of complex statistical techniques while turning a blind eye to data deficiencies and the many questionable assumptions required for the application of these techniques. [15] Kennedy quotes the critique of practice from Stefan Valavanis's 1959 econometrics textbook:
Econometric theory is like an exquisitely balanced French recipe, spelling out precisely with how many turns to mix the sauce, how many carats of spice to add, and for how many milliseconds to bake the mixture at exactly 474 degrees of temperature. But when the statistical cook turns to raw materials, he finds that hearts of cactus fruit are unavailable, so he substitutes chunks of cantaloupe; where the recipe calls for vermicelli he uses shredded wheat; and he substitutes green garment dye for curry, ping-pong balls for turtle's eggs, and, for Chalifougnac vintage 1883, a can of turpentine. (1959, p. 83) [16]
Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference." An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." Jan Tinbergen is one of the two founding fathers of econometrics. The other, Ragnar Frisch, also coined the term in the sense in which it is used today.
Economic data are data describing an actual economy, past or present. They are typically found in time-series form, covering more than one time period, or in cross-sectional form, covering a single time period. Data may also be collected from surveys of, for example, individuals and firms, or aggregated to sectors and industries of a single economy or of the international economy. A collection of such data in table form comprises a data set.
The Lucas critique argues that it is naïve to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data, especially highly aggregated historical data. More formally, it states that the decision rules of Keynesian models—such as the consumption function—cannot be considered as structural in the sense of being invariant with respect to changes in government policy variables. It was named after American economist Robert Lucas's work on macroeconomic policymaking.
An economic model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes. Frequently, economic models posit structural parameters. A model may have various exogenous variables, and those variables may change to create various responses by economic variables. Methodological uses of models include investigation, theorizing, and fitting theories to the world.
A macroeconomic model is an analytical tool designed to describe the operation of the economy of a country or a region. These models are usually designed to examine the comparative statics and dynamics of aggregate quantities such as the total amount of goods and services produced, total income earned, the level of employment of productive resources, and the level of prices.
Econometric models are statistical models used in econometrics. An econometric model specifies the statistical relationship that is believed to hold between the various economic quantities pertaining to a particular economic phenomenon. An econometric model can be derived from a deterministic economic model by allowing for uncertainty, or from an economic model which itself is stochastic. However, it is also possible to use econometric models that are not tied to any specific economic theory.
Computational economics is an interdisciplinary research discipline that combines methods in computational science and economics to solve complex economic problems. This subject encompasses computational modeling of economic systems. Some of these areas are unique, while others extend established areas of economics by allowing robust data analytics and solutions of problems that would be arduous to research without computers and associated numerical methods.
The Granger causality test is a statistical hypothesis test for determining whether one time series is useful in forecasting another, first proposed in 1969. Ordinarily, regressions reflect "mere" correlations, but Clive Granger argued that causality in economics could be tested for by measuring the ability to predict the future values of a time series using prior values of another time series. Since the question of "true causality" is deeply philosophical, and because of the post hoc ergo propter hoc fallacy of assuming that one thing preceding another can be used as a proof of causation, econometricians assert that the Granger test finds only "predictive causality". Using the term "causality" alone is a misnomer, as Granger-causality is better described as "precedence", or, as Granger himself later claimed in 1977, "temporally related". Rather than testing whether X causes Y, Granger causality tests whether X forecasts Y.
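The following is a minimal, illustrative implementation of the idea on simulated data (hand-picked lag length; a real application would use an established routine and choose the lag length by an information criterion): lags of x are added to an autoregression of y, and an F-test asks whether they improve the fit.

```python
# Sketch of the idea behind the Granger test: do lags of x help predict y
# beyond y's own lags?  Compare restricted and unrestricted regressions by F-test.
import numpy as np
from scipy import stats

def granger_f_test(y, x, p=2):
    """F-test of whether p lags of x add predictive power for y beyond y's own p lags."""
    T = len(y)
    y_t = y[p:]                                            # dependent variable, t = p..T-1
    ylags = np.column_stack([y[p - i:T - i] for i in range(1, p + 1)])
    xlags = np.column_stack([x[p - i:T - i] for i in range(1, p + 1)])
    const = np.ones((T - p, 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        resid = y_t - X @ beta
        return resid @ resid

    rss_r = rss(np.hstack([const, ylags]))                 # restricted: y's own lags only
    rss_u = rss(np.hstack([const, ylags, xlags]))          # unrestricted: plus lags of x
    df_num, df_den = p, (T - p) - (2 * p + 1)
    F = ((rss_r - rss_u) / df_num) / (rss_u / df_den)
    return F, stats.f.sf(F, df_num, df_den)                # F statistic and p-value

# Hypothetical series in which x leads y by one period.
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = np.concatenate([[0.0], 0.7 * x[:-1]]) + 0.3 * rng.normal(size=500)
print(granger_f_test(y, x, p=2))
```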
The McCloskey critique refers to a critique of post-1940s "official modernist" methodology in economics, inherited from logical positivism in philosophy. The critique maintains that the methodology neglects how economics can be done, is done, and should be done to advance the subject. Its recommendations include use of good rhetorical devices for "disciplined conversation."
In economics, Leontief's paradox is that a country with higher capital per worker has a lower capital/labor ratio in exports than in imports.
Economic methodology is the study of methods, especially the scientific method, in relation to economics, including principles underlying economic reasoning. In contemporary English, 'methodology' may reference theoretical or systematic aspects of a method. Philosophy and economics also takes up methodology at the intersection of the two subjects.
Joshua David Angrist is an Israeli–American economist and Ford Professor of Economics at the Massachusetts Institute of Technology. Angrist, together with Guido Imbens, was awarded the Nobel Memorial Prize in Economics in 2021 "for their methodological contributions to the analysis of causal relationships".
In statistics, econometrics, political science, epidemiology, and related disciplines, a regression discontinuity design (RDD) is a quasi-experimental pretest–posttest design that aims to determine the causal effects of interventions by assigning a cutoff or threshold above or below which an intervention is assigned. By comparing observations lying closely on either side of the threshold, it is possible to estimate the average treatment effect in environments in which randomisation is unfeasible. However, it remains impossible to make true causal inference with this method alone, as it does not automatically reject causal effects by any potential confounding variable. First applied by Donald Thistlethwaite and Donald Campbell (1960) to the evaluation of scholarship programs, the RDD has become increasingly popular in recent years. Recent study comparisons of randomised controlled trials (RCTs) and RDDs have empirically demonstrated the internal validity of the design.
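A minimal sketch of a sharp regression discontinuity estimate on simulated data is shown below (hand-picked bandwidth and a simple local linear fit; practical work uses data-driven bandwidth choice and robustness checks): separate linear regressions are fitted on each side of the cutoff, and the jump in the fitted values at the cutoff is the treatment-effect estimate.

```python
import numpy as np

def rdd_estimate(running, outcome, cutoff=0.0, bandwidth=1.0):
    """Sharp RDD: jump at the cutoff from separate local linear fits on each side."""
    fitted = {}
    sides = {
        "below": (running < cutoff) & (running > cutoff - bandwidth),
        "above": (running >= cutoff) & (running < cutoff + bandwidth),
    }
    for side, mask in sides.items():
        X = np.column_stack([np.ones(mask.sum()), running[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
        fitted[side] = beta[0]               # intercept = fitted value at the cutoff
    return fitted["above"] - fitted["below"]

# Hypothetical sharp design with a true jump of 3.0 at the cutoff.
rng = np.random.default_rng(7)
running = rng.uniform(-2, 2, size=5_000)
outcome = 1.5 * running + 3.0 * (running >= 0) + rng.normal(size=5_000)
print(f"estimated jump at the cutoff: {rdd_estimate(running, outcome):.2f}")
```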
The methodology of econometrics is the study of the range of differing approaches to undertaking econometric analysis.
Following the development of Keynesian economics, applied economics began developing forecasting models based on economic data including national income and product accounting data. In contrast with typical textbook models, these large-scale macroeconometric models used large amounts of data and based forecasts on past correlations instead of theoretical relations. These models estimated the relations between different macroeconomic variables using regression analysis on time series data. These models grew to include hundreds or thousands of equations describing the evolution of hundreds or thousands of prices and quantities over time, making computers essential for their solution. While the choice of which variables to include in each equation was partly guided by economic theory, variable inclusion was mostly determined on purely empirical grounds. A large-scale macroeconometric model consists of systems of dynamic equations of the economy, with parameters estimated from time-series data on a quarterly to yearly basis.
Edward Emory Leamer is a professor of economics and statistics at UCLA. He is Chauncey J. Medberry Professor of Management and director of the UCLA Anderson Forecast.
Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning.
The LSE approach to econometrics, named for the London School of Economics, involves viewing econometric models as reductions from some unknown data generation process (DGP). A complex DGP is typically modelled as the starting point, and this complexity allows information present in the real-world data, but absent from the theory, to be drawn upon. The econometrician then reduces the complexity through a series of restrictions which are tested.
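The sketch below is a deliberate caricature of such a general-to-specific reduction (hypothetical data and function names; real general-to-specific modelling also applies diagnostic and encompassing tests at each step): starting from a general regression, regressors are dropped one at a time so long as the corresponding exclusion restriction is not rejected.

```python
# Caricature of a general-to-specific reduction: drop the weakest regressor
# while a simple t-test does not reject its exclusion at the chosen level.
import numpy as np
from scipy import stats

def gets_reduction(y, X, names, alpha=0.05):
    """Drop regressors (never the constant in column 0) while exclusion is not rejected."""
    names = list(names)
    while X.shape[1] > 1:
        k = X.shape[1]
        beta = np.linalg.solve(X.T @ X, X.T @ y)
        resid = y - X @ beta
        sigma2 = resid @ resid / (len(y) - k)
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
        t_abs = np.abs(beta / se)
        weakest = 1 + int(np.argmin(t_abs[1:]))            # weakest non-constant regressor
        p_value = 2 * stats.t.sf(t_abs[weakest], len(y) - k)
        if p_value < alpha:                                # exclusion rejected: stop reducing
            break
        X = np.delete(X, weakest, axis=1)
        del names[weakest - 1]
    return names

# Hypothetical general model with four candidate regressors, only one of which matters.
rng = np.random.default_rng(11)
n = 400
Z = rng.normal(size=(n, 4))
y = 1.0 + 2.0 * Z[:, 0] + rng.normal(size=n)
X = np.column_stack([np.ones(n), Z])
print(gets_reduction(y, X, names=["z1", "z2", "z3", "z4"]))
```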
Control functions are statistical methods to correct for endogeneity problems by modelling the endogeneity in the error term. The approach thereby differs in important ways from other models that try to account for the same econometric problem. Instrumental variables, for example, attempt to model the endogenous variable X as an often invertible model with respect to a relevant and exogenous instrument Z. Panel analysis uses special data properties to difference out unobserved heterogeneity that is assumed to be fixed over time.
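A minimal sketch of a linear control-function estimator on hypothetical data is shown below (in this linear case it coincides with two-stage least squares): the endogenous regressor is first regressed on the instrument, and the first-stage residual is then included in the outcome regression to absorb the endogenous part of the error term.

```python
import numpy as np

# Illustrative linear control-function estimator; variable names and data are hypothetical.
rng = np.random.default_rng(5)
n = 20_000
u = rng.normal(size=n)                        # unobserved confounder in the error term
z = rng.normal(size=n)                        # instrument: shifts x, excluded from y
x = 1.0 * z + u + rng.normal(size=n)          # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)          # true coefficient on x is 2.0

def ols(Y, X):
    return np.linalg.lstsq(X, Y, rcond=None)[0]

ones = np.ones(n)
first_stage = np.column_stack([ones, z])
v_hat = x - first_stage @ ols(x, first_stage)              # first-stage residual (control function)
cf = ols(y, np.column_stack([ones, x, v_hat]))             # include v_hat to absorb endogeneity
naive = ols(y, np.column_stack([ones, x]))
print(f"naive OLS slope: {naive[1]:.2f}, control-function slope: {cf[1]:.2f}")
```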
In economics, the credibility revolution was the movement towards improved reliability in empirical economics through a focus on the quality of research design and the use of more experimental and quasi-experimental methods. Developing in the 1990s and early 2000s, this movement was aided by advances in theoretical econometric understanding, but was especially driven by research studies that focused on the use of clean and credible research designs.