Goodhart's law is an adage often stated as, "When a measure becomes a target, it ceases to be a good measure". [1] It is named after British economist Charles Goodhart, who is credited with expressing the core idea of the adage in a 1975 article on monetary policy in the United Kingdom: [2]
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes. [3]
It was used to criticize the British Thatcher government for trying to conduct monetary policy on the basis of targets for broad and narrow money, [4] but the law reflects a much more general phenomenon. [5]
Numerous concepts are related to this idea, at least one of which predates Goodhart's statement. [6] Notably, Campbell's law likely has precedence, as Jeff Rodamar has argued, since various formulations date to 1969. [7] Other academics had similar insights at the time. Jerome Ravetz's 1971 book Scientific Knowledge and Its Social Problems [8] also predates Goodhart, though it does not formulate the same law. Ravetz discusses how systems in general can be gamed, focusing on cases where the goals of a task are complex, sophisticated, or subtle. In such cases, the persons possessing the skills to execute the tasks properly pursue their own goals to the detriment of the assigned tasks. When the goals are instantiated as metrics, this could be seen as equivalent to Goodhart's and Campbell's claims.
Shortly after Goodhart's publication, others suggested closely related ideas, including the Lucas critique (1976). As applied in economics, the law is also implicit in the idea of rational expectations, a theory in economics that states that those who are aware of a system of rewards and punishments will optimize their actions within that system to achieve their desired results. For example, if an employee is rewarded by the number of cars sold each month, they will try to sell more cars, even at a loss.
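The car-sales example can be made concrete with a small simulation. All prices, costs, and demand figures in the sketch below are invented purely for illustration: an employee rewarded on the proxy metric (units sold) picks a different price than someone maximizing the true goal (profit).

```python
# Hypothetical prices, costs, and demand, invented purely for illustration:
# optimizing the rewarded metric (cars sold) diverges from the true goal (profit).

COST_PER_CAR = 20_000
DEMAND_AT_PRICE = {   # cars sold per month at each sticker price
    25_000: 10,
    22_000: 14,
    19_000: 20,       # below cost: volume is highest here
}

def profit(price):
    """True goal: margin times volume."""
    return (price - COST_PER_CAR) * DEMAND_AT_PRICE[price]

def units_sold(price):
    """Proxy metric the employee is actually rewarded on."""
    return DEMAND_AT_PRICE[price]

print(max(DEMAND_AT_PRICE, key=profit))      # 25000: best for the dealership
print(max(DEMAND_AT_PRICE, key=units_sold))  # 19000: best for the bonus, sold at a loss
```

So long as no one optimizes against it, units sold tracks profit well enough; once it becomes the target, the correlation that justified the metric breaks down.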
While it originated in the context of market responses, the law has profound implications for the selection of high-level targets in organizations. [3] Jon Danielsson states the law as
Any statistical relationship will break down when used for policy purposes.
and suggests a corollary for use in financial risk modelling:
A risk model breaks down when used for regulatory purposes. [9]
Mario Biagioli related the concept to consequences of using citation impact measures to estimate the importance of scientific publications: [10] [11]
All metrics of scientific evaluation are bound to be abused. Goodhart's law [...] states that when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.
Later writers generalized Goodhart's point about monetary policy into a more general adage about measures and targets in accounting and evaluation systems. In a book chapter published in 1996, Keith Hoskin wrote:
'Goodhart's Law' – That every measure which becomes a target becomes a bad measure – is inexorably, if ruefully, becoming recognized as one of the overriding laws of our times. Ruefully, for this law of the unintended consequence seems so inescapable. But it does so, I suggest, because it is the inevitable corollary of that invention of modernity: accountability. [12]
In a 1997 paper responding to the work of Hoskin and others on financial accounting and grades in education, anthropologist Marilyn Strathern expressed Goodhart's Law as "When a measure becomes a target, it ceases to be a good measure", and linked the sentiment to the history of accounting stretching back into Britain in the 1800s:
When a measure becomes a target, it ceases to be a good measure. The more a 2.1 examination performance becomes an expectation, the poorer it becomes as a discriminator of individual performances. Hoskin describes this as 'Goodhart's law', after the latter's observation on instruments for monetary control which led to other devices for monetary flexibility having to be invented. However, targets that seem measurable become enticing tools for improvement. The linking of improvement to commensurable increase produced practices of wide application. It was that conflation of 'is' and 'ought', alongside the techniques of quantifiable written assessments, which led in Hoskin's view to the modernist invention of accountability. This was articulated in Britain for the first time around 1800 as 'the awful idea of accountability' (Ref. 3, p. 268). [1]
Macroeconomics is a branch of economics that deals with the performance, structure, behavior, and decision-making of an economy as a whole. This includes regional, national, and global economies. Macroeconomists study topics such as output/GDP and national income, unemployment, price indices and inflation, consumption, saving, investment, energy, international trade, and international finance.
In public-sector economics, economic and social development is the process by which the economic well-being and quality of life of a nation, region, local community, or individual are improved according to targeted goals and objectives.
In macroeconomics, money supply refers to the total volume of money held by the public at a particular point in time. There are several ways to define "money", but standard measures usually include currency in circulation and demand deposits. Money supply data is recorded and published, usually by the national statistical agency or the central bank of the country. Empirical money supply measures are usually named M1, M2, M3, etc., according to how wide a definition of money they embrace. The precise definitions vary from country to country, in part depending on national financial institutional traditions.
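The nesting of these aggregates can be sketched in a few lines. The component names and figures below are invented for illustration; as noted above, the actual definitions vary from country to country.

```python
# Invented component figures (in billions). The exact components of each
# aggregate vary by country; this only shows how each measure nests
# inside the next, wider one.

components = {
    "currency_in_circulation": 2_300,
    "demand_deposits":         5_100,
    "savings_deposits":        9_800,
    "small_time_deposits":     1_200,
    "large_time_deposits":     2_000,
}

m1 = components["currency_in_circulation"] + components["demand_deposits"]
m2 = m1 + components["savings_deposits"] + components["small_time_deposits"]
m3 = m2 + components["large_time_deposits"]

assert m1 < m2 < m3  # each aggregate embraces a wider definition of money
print(f"M1={m1}, M2={m2}, M3={m3}")  # M1=7400, M2=18400, M3=20400
```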
The No Child Left Behind Act of 2001 (NCLB) was a U.S. Act of Congress, signed into law in January 2002, promoted by the presidency of George W. Bush. It reauthorized the Elementary and Secondary Education Act and included Title I provisions applying to disadvantaged students. It mandated standards-based education reform based on the premise that setting high standards and establishing measurable goals could improve individual outcomes in education. To receive federal school funding, states had to create and administer assessments to all students at select grade levels.
The Lucas critique argues that it is naïve to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data, especially highly aggregated historical data. More formally, it states that the decision rules of Keynesian models—such as the consumption function—cannot be considered as structural in the sense of being invariant with respect to changes in government policy variables. It was named after American economist Robert Lucas's work on macroeconomic policymaking.
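A toy model makes the critique concrete. The sketch below assumes a hypothetical expectations-augmented relationship in which output responds only to unexpected inflation: a reduced-form slope fitted to data from a stable 2% regime seems to show that higher inflation buys output, but the prediction fails once the announced policy change shifts expectations.

```python
# Toy illustration of the Lucas critique; all coefficients are hypothetical.
# Output responds only to *unexpected* inflation: y = B * (pi - expected_pi).
import random

random.seed(0)
B = 2.0

def output(pi, expected_pi):
    return B * (pi - expected_pi)

# Historical regime: inflation fluctuates around 2% and agents expect 2%,
# so the data trace out the line y = B*pi - 2*B.
pis = [random.gauss(2.0, 1.0) for _ in range(1000)]
ys = [output(pi, expected_pi=2.0) for pi in pis]

# Naive reduced-form fit (ordinary least squares slope) on historical data:
mean_pi, mean_y = sum(pis) / len(pis), sum(ys) / len(ys)
slope = sum((p - mean_pi) * (y - mean_y) for p, y in zip(pis, ys)) \
        / sum((p - mean_pi) ** 2 for p in pis)
print(f"slope estimated from history: {slope:.2f}")  # recovers B = 2.0

# The fitted rule predicts that a 5% target raises output by B * 3.
print("predicted by the old data:", output(5.0, expected_pi=2.0))   # 6.0
# But the rule is not structural: agents revise expectations to 5%.
print("after expectations adjust:", output(5.0, expected_pi=5.0))   # 0.0
```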
Post-normal science (PNS) was developed in the 1990s by Silvio Funtowicz and Jerome R. Ravetz. It is a problem-solving strategy appropriate when "facts [are] uncertain, values in dispute, stakes high and decisions urgent", conditions often present in policy-relevant research. In those situations, PNS recommends suspending temporarily the traditional scientific ideal of truth, concentrating on quality as assessed by internal and extended peer communities.
Charles Albert Eric Goodhart is a British economist. He worked at the Bank of England on its public policy from 1968 to 1985, and at the London School of Economics from 1966 to 1968 and from 1986 to 2002. His work focuses on central bank governance practices and monetary frameworks, and he also conducted academic research into foreign exchange markets. He is best known for formulating Goodhart's law, which states: "When a measure becomes a target, it ceases to be a good measure."
Campbell's law is an adage developed by Donald T. Campbell, a psychologist and social scientist who often wrote about research methodology, which states:
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
Reification is a fallacy of ambiguity that occurs when an abstraction is treated as if it were a concrete real event or physical entity. In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory".
The neoclassical synthesis (NCS), or neoclassical–Keynesian synthesis, is an academic movement and paradigm in economics that worked towards reconciling the macroeconomic thought of John Maynard Keynes in his book The General Theory of Employment, Interest and Money (1936) with neoclassical economics.
Jerome (Jerry) Ravetz is a philosopher of science. He is best known for his books analysing scientific knowledge from a social and ethical perspective, focussing on issues of quality. He is the co-author of the NUSAP notational system and of Post-normal science. He is currently an Associate Fellow at the Institute for Science, Innovation and Society, University of Oxford.
The McNamara fallacy, named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations and ignoring all others. The reason given is often that these other observations cannot be proven.
But when the McNamara discipline is applied too literally, the first step is to measure whatever can be easily measured. The second step is to disregard that which can't easily be measured or given a quantitative value. The third step is to presume that what can't be measured easily really isn't important. The fo[u]rth step is to say that what can't be easily measured really doesn't exist. This is suicide.
Market monetarism is a school of macroeconomics that advocates that central banks use a nominal GDP level target instead of inflation, unemployment, or other measures of economic activity, with the goal of mitigating demand shocks such as those experienced in the 2007–2008 financial crisis and during the post-pandemic inflation surge. Market monetarists criticize the fallacy that low interest rates always correspond to easy money. Market monetarists are sceptical about fiscal stimulus, noting that it is usually offset by monetary policy.
The zero lower bound (ZLB) or zero nominal lower bound (ZNLB) is a macroeconomic problem that occurs when the short-term nominal interest rate is at or near zero, causing a liquidity trap and limiting the central bank's capacity to stimulate economic growth.
Sensitivity auditing is an extension of sensitivity analysis for use in policy-relevant modelling studies. Its use is recommended, inter alia, in the European Commission's impact assessment guidelines and by the European science academies, when a sensitivity analysis (SA) of a model-based study is meant to demonstrate the robustness of the evidence provided by the model in contexts where the inference feeds into a policy or decision-making process.
Silvio O. Funtowicz is a philosopher of science active in the field of science and technology studies. He created NUSAP, a notational system for characterising uncertainty and quality in quantitative expressions, and together with Jerome R. Ravetz he introduced the concept of post-normal science. He is currently a guest researcher at the Centre for the Study of the Sciences and the Humanities (SVT), University of Bergen (Norway).
Science on the Verge is a 2016 book by a group of eight scholars working in the tradition of post-normal science. The book analyzes the main features and possible causes of the present crisis of science.
Metric fixation refers to decision-makers' tendency to place excessive emphasis on selected metrics.
How to Read Numbers: A Guide to Statistics in the News is a 2021 British book by Tom and David Chivers. It describes misleading uses of statistics in the news, with contemporary examples about the COVID-19 pandemic, healthcare, politics and crime. The book was conceived by the authors, who are cousins, in early 2020. It received positive reviews for being readable, engaging, accessible to non-mathematicians, and applicable to journalistic writing.
A study of citation metrics reached a similar conclusion: "Our results suggest that the use of the h-index in ranking scientists should be reconsidered, and that fractional allocation measures such as h-frac provide more robust alternatives."
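For context, the h-index is the largest h such that an author has h papers with at least h citations each. The sketch below implements that definition together with a fractional variant that divides each paper's citations by its author count before applying the same rule; this fractional rule is an assumption about what measures such as h-frac do, and the quoted study's exact definition may differ.

```python
# The h-index and a fractional-allocation variant. The fractional rule here
# (citations divided by author count) is an assumption about what measures
# like h-frac do; the quoted study's exact definition may differ.

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def h_frac(papers):
    """papers: (citations, n_authors) pairs; allocate citations fractionally."""
    return h_index([c / n for c, n in papers])

# Hypothetical record dominated by heavily coauthored papers:
papers = [(90, 30), (80, 20), (70, 10), (6, 1), (3, 1)]
print(h_index([c for c, _ in papers]))  # 4
print(h_frac(papers))                   # 3: coauthored citations count less
```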