The Taylor rule is a monetary policy targeting rule. It was proposed in 1992 by American economist John B. Taylor [1] as a guide for central banks to stabilize economic activity by setting short-term interest rates appropriately. [2] The rule considers the federal funds rate, the price level and changes in real income. [3] The Taylor rule computes the optimal federal funds rate from two gaps: the gap between the actual inflation rate and the desired (targeted) inflation rate, and the output gap between the actual and natural output level. According to Taylor, monetary policy is stabilizing when the nominal interest rate is raised or lowered by more than the rise or fall in inflation. [4] Thus the Taylor rule prescribes a relatively high interest rate when actual inflation is higher than the inflation target.
In the United States, the Federal Open Market Committee controls monetary policy. The committee attempts to achieve an average inflation rate of 2% (with an equal likelihood of higher or lower inflation). The main advantage of a general targeting rule is that a central bank gains the discretion to apply multiple means to achieve the set target. [5]
The monetary policy of the Federal Reserve changed throughout the 20th century. Taylor and others evaluate the period between the 1960s and the 1970s as a period of poor monetary policy; the later years are typically characterized as stagflation. The inflation rate was high and increasing, while interest rates were kept low. [6] Since the mid-1970s monetary targets have been used in many countries as a means to target inflation. [7] However, in the 2000s the actual interest rate in advanced economies, notably in the US, was kept below the value suggested by the Taylor rule. [8]
The Taylor rule represents a rules-based approach to monetary policy, standing in contrast to discretionary policy where central bankers make decisions based on their judgment and interpretation of economic conditions. While the rule provides a systematic framework that can enhance policy predictability and transparency, critics argue that its simplified formula—focusing primarily on inflation and output—may not adequately capture important factors such as financial stability, exchange rates, or structural changes in the economy. This debate between rules and discretion remains central to discussions of monetary policy implementation.
According to Taylor's original version of the rule, the real policy interest rate should respond to divergences of actual inflation rates from target inflation rates and of actual Gross Domestic Product (GDP) from potential GDP:
$i_t = \pi_t + r_t^* + a_\pi(\pi_t - \pi_t^*) + a_y(y_t - \bar{y}_t)$

In this equation, $i_t$ is the target short-term nominal policy interest rate (e.g. the federal funds rate in the US, the Bank of England base rate in the UK), $\pi_t$ is the rate of inflation as measured by the GDP deflator, $\pi_t^*$ is the desired rate of inflation, $r_t^*$ is the assumed natural/equilibrium interest rate, [9] $y_t$ is the logarithm of actual GDP, and $\bar{y}_t$ is the logarithm of potential output, as determined by a linear trend. $y_t - \bar{y}_t$ is the output gap, in percentage points.

Because of $\ln Y_t - \ln \bar{Y}_t \approx \tfrac{Y_t - \bar{Y}_t}{\bar{Y}_t}$, this difference of logarithms approximates the percentage deviation of actual output $Y_t$ from potential output $\bar{Y}_t$.

In this equation, both $a_\pi$ and $a_y$ should be positive (as a rough rule of thumb, Taylor's 1993 paper proposed setting $a_\pi = a_y = 0.5$). [10] That is, the rule produces a relatively high real interest rate (a "tight" monetary policy) when inflation is above its target or when output is above its full-employment level, in order to reduce inflationary pressure. It recommends a relatively low real interest rate ("easy" monetary policy) in the opposite situation, to stimulate output. Sometimes monetary policy goals may conflict, as in the case of stagflation, when inflation is above its target with a substantial output gap. In such a situation, a Taylor rule specifies the relative weights given to reducing inflation versus increasing output.

By specifying $a_\pi > 0$, the Taylor rule says that an increase in inflation by one percentage point should prompt the central bank to raise the nominal interest rate by more than one percentage point (specifically, by $1 + a_\pi$, the sum of the two coefficients on $\pi_t$ in the equation). Since the real interest rate is (approximately) the nominal interest rate minus inflation, stipulating $a_\pi > 0$ implies that when inflation rises, the real interest rate should be increased. The idea that the nominal interest rate should be raised "more than one-for-one" to cool the economy when inflation increases (that is, increasing the real interest rate) has been called the Taylor principle. The Taylor principle presumes a unique bounded equilibrium for inflation. If the Taylor principle is violated, then the inflation path may be unstable. [11]
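To make the formula concrete, the following Python sketch applies the rule with Taylor's 1993 coefficients $a_\pi = a_y = 0.5$. It is an illustrative, hypothetical example only: the function name, the 2% inflation target, the 2% natural rate, and the sample inputs are assumptions chosen for the demonstration, not figures used by any central bank.

```python
# Illustrative sketch of the Taylor (1993) rule. The function name, default
# values, and sample inputs are hypothetical, chosen only for demonstration.

def taylor_rule_rate(inflation, target_inflation=2.0, natural_rate=2.0,
                     output_gap=0.0, a_pi=0.5, a_y=0.5):
    """Prescribed nominal policy rate (percent per year).

    i = pi + r* + a_pi * (pi - pi*) + a_y * (y - y_bar)
    """
    return (inflation + natural_rate
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

# Taylor principle: a one-point rise in inflation raises the prescribed nominal
# rate by 1 + a_pi = 1.5 points, so the real rate rises as well.
print(taylor_rule_rate(inflation=2.0))                    # 4.0: inflation at target, zero output gap
print(taylor_rule_rate(inflation=3.0))                    # 5.5: nominal rate up 1.5 points for +1 inflation
print(taylor_rule_rate(inflation=2.0, output_gap=-2.0))   # 3.0: "easy" policy when output is below potential
```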
The concept of a policy rule emerged as part of the discussion on whether monetary policy should be based on intuition/discretion. The discourse began at the beginning of the 19th century. The first formal debate forum was launched in the 1920s by the US House Committee on Banking and Currency. In the hearing on the so-called Strong bill, introduced in 1923 by Representative James G. Strong of Kansas, the conflict in the views on monetary policy clearly appeared. New York Fed Governor Benjamin Strong Jr. (no relation to Representative Strong), supported by Professors John R. Commons and Irving Fisher, was concerned about the Fed's practices that attempted to ensure price stability. In his opinion, Federal Reserve policy regarding the price level could not guarantee long-term stability. After the death of Governor Strong in 1928, political debate on changing the Fed's policy was suspended. The Fed had been dominated by Strong and his New York Reserve Bank.
After the Great Depression hit the country, policies came under debate. Irving Fisher opined, "this depression was almost wholly preventable and that it would have been prevented if Governor Strong had lived, who was conducting open-market operations with a view of bringing about stability". [12] Later on, monetarists such as Milton Friedman and Anna Schwartz agreed that high inflation could be avoided if the Fed managed the quantity of money more consistently. [4]
The economic downturn of the early 1960s in the United States occurred despite the Federal Reserve maintaining relatively high interest rates to defend the dollar under the Bretton Woods system. After the collapse of Bretton Woods in 1971, the Federal Reserve shifted its focus toward stimulating economic growth through expansionary monetary policy and lower interest rates. This accommodative policy stance, combined with supply shocks from oil price increases, contributed to the Great Inflation of the 1970s when annual inflation rates reached double digits.
Beginning in the mid-1970s, central banks increasingly adopted monetary targeting frameworks to combat inflation. During the Great Moderation from the mid-1980s through the early 2000s, major central banks including the Federal Reserve and the Bank of England generally followed policy approaches aligned with the Taylor rule, which provided a systematic framework for setting interest rates. This period was marked by low and stable inflation in most advanced economies. A significant shift in monetary policy frameworks began in 1990 when New Zealand pioneered explicit inflation targeting. The Reserve Bank of New Zealand underwent reforms that enhanced its independence and established price stability as its primary mandate. This approach was soon adopted by other central banks: the Bank of Canada implemented inflation targeting in 1991, followed by the central banks of Sweden, Finland, Australia, Spain, Israel, and Chile by 1994. [7]
From the early 2000s onward, major central banks in advanced economies, particularly the Federal Reserve, maintained policy rates consistently below levels prescribed by the Taylor rule. This deviation reflected a new policy framework where central banks increasingly focused on financial stability while still operating under inflation-targeting mandates. Central banks adopted an asymmetric approach: they responded aggressively to financial market stress and economic downturns with substantial rate cuts, but were more gradual in raising rates during recoveries. This pattern became especially pronounced following shocks like the dot-com bubble burst, the 2008 financial crisis, and subsequent economic disruptions, leading to extended periods of accommodative monetary policy. [8]
While the Taylor principle has proven influential, debate remains about what else the rule should incorporate. According to some New Keynesian macroeconomic models, insofar as the central bank keeps inflation stable, the degree of fluctuation in output will be optimized (economists Olivier Blanchard and Jordi Galí call this property the 'divine coincidence'). In this case, the central bank does not need to take fluctuations in the output gap into account when setting interest rates (that is, it may optimally set $a_y = 0$).
Other economists proposed adding terms to the Taylor rule to take into account financial conditions: for example, the interest rate might be raised when stock prices, housing prices, or interest rate spreads increase. Taylor offered a modified rule in 1999 that specified $a_y = 1$.
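Under the assumptions of the hypothetical sketch above, the 1999 variant amounts to doubling the weight on the output gap; the figures below are again illustrative, not policy recommendations.

```python
# Taylor (1999) variant: same rule as before, but with a_y = 1.0 instead of 0.5,
# so the prescribed rate reacts twice as strongly to the output gap.
print(taylor_rule_rate(inflation=2.0, output_gap=-2.0, a_y=1.0))  # 2.0, versus 3.0 under the 1993 weights
```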
The solvency rule was presented by Emiliano Brancaccio after the 2008 financial crisis. Under this rule, the central banker aims to control the economy's solvency. [13] The inflation target and the output gap are neglected; instead, the interest rate is made conditional on the solvency of workers and firms. The solvency rule was presented more as a benchmark than as a mechanistic formula. [14] [15]
The McCallum rule was offered by economist Bennett T. McCallum at the end of the 20th century. Instead of reacting to inflation and the output gap, it has the central bank adjust the monetary base so as to stabilize nominal gross domestic product. Because the rule is stated in terms of precisely measured financial data, [16] it can sidestep the problem of unobservable variables such as potential output.
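As a rough illustration of a nominal-GDP-oriented rule (a simplified sketch loosely in the spirit of the McCallum rule, not its exact published specification; every name and number here is an assumption), the growth rate of the monetary base can be set from observed data alone:

```python
# Simplified nominal-GDP-growth rule, loosely in the spirit of the McCallum rule.
# All names and numbers are illustrative assumptions, not McCallum's published formula.

def base_growth(target_ngdp_growth, last_ngdp_growth, velocity_growth, feedback=0.5):
    """Prescribed growth rate of the monetary base (percent per year).

    base growth = target NGDP growth - trend velocity growth
                  + feedback * (target NGDP growth - last observed NGDP growth)
    """
    return (target_ngdp_growth - velocity_growth
            + feedback * (target_ngdp_growth - last_ngdp_growth))

# If nominal GDP grew only 3% against a 5% target and base velocity is flat,
# the rule calls for 6% base growth to help close the shortfall.
print(base_growth(target_ngdp_growth=5.0, last_ngdp_growth=3.0, velocity_growth=0.0))  # 6.0
```

Every input in this sketch is an observable series, which is the point the text makes about rules of this type.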
Market monetarism extended the idea of NGDP targeting to include level targeting (targeting a specific amount of growth per time period, and accelerating/decelerating growth to compensate for prior periods of weakness/strength). It also introduced the concept of targeting the forecast, such that policy is set to achieve the goal rather than merely to lean in one direction or the other. One proposed mechanism for assessing the impact of policy was to establish an NGDP futures market and use its prices to direct policy.
Although the Federal Reserve does not follow the Taylor rule, [17] many analysts have argued that it provides a fairly accurate explanation of US monetary policy under Paul Volcker and Alan Greenspan, [18] [19] as well as of policy in other developed economies. [20] [21] This observation has been cited by Clarida, Galí, and Gertler as a reason why inflation had remained under control and the economy had been relatively stable in most developed countries from the 1980s through the 2000s. [18] However, according to Taylor, the rule was not followed in part of the 2000s, possibly inflating the housing bubble. [22] [23] Some research has reported that households form expectations about the future path of interest rates, inflation, and unemployment in a way that is consistent with Taylor-type rules. [24] Others show that estimates of monetary policy rules may differ under limited information, implying different conclusions about central bank objectives and about the type of policy rule being followed. [25]
The Taylor rule features in the broader debate over rules versus discretion. Several limitations of the rule have been raised.
Taylor highlighted that the rule should not be followed blindly: "…There will be episodes where monetary policy will need to be adjusted to deal with special factors." [3]
Athanasios Orphanides (2003) claimed that the Taylor rule can mislead policymakers who face real-time data. He claimed that the Taylor rule matches the US funds rate less perfectly when accounting for informational limitations and that an activist policy following the Taylor rule would have resulted in inferior macroeconomic performance during the 1970s. [27]
In 2015, "Bond King"[ clarification needed ] Bill Gross said the Taylor rule "must now be discarded into the trash bin of history", in light of tepid GDP growth in the years after 2009. [28] Gross believed that low interest rates were not the cure for decreased growth, but the source of the problem.
In economics, inflation is a general increase in the prices of goods and services in an economy. This is usually measured using a consumer price index (CPI). When the general price level rises, each unit of currency buys fewer goods and services; consequently, inflation corresponds to a reduction in the purchasing power of money. The opposite of CPI inflation is deflation, a decrease in the general price level of goods and services. The common measure of inflation is the inflation rate, the annualized percentage change in a general price index; because prices faced by households do not all increase at the same rate, the CPI is often used for this purpose.
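As a minimal illustration of this definition, the inflation rate can be computed directly from two price-index readings a year apart; the CPI figures below are made-up numbers, not official data.

```python
# Inflation rate as the annualized percentage change in a price index.
# The CPI values below are illustrative, not official statistics.

def inflation_rate(cpi_now, cpi_year_ago):
    """Year-over-year inflation rate, in percent."""
    return 100.0 * (cpi_now - cpi_year_ago) / cpi_year_ago

print(inflation_rate(cpi_now=309.7, cpi_year_ago=300.0))  # about 3.2% inflation
```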
The IS–LM model, or Hicks–Hansen model, is a two-dimensional macroeconomic model which is used as a pedagogical tool in macroeconomic teaching. The IS–LM model shows the relationship between interest rates and output in the short run in a closed economy. The intersection of the "investment–saving" (IS) and "liquidity preference–money supply" (LM) curves illustrates a "general equilibrium" where supposed simultaneous equilibria occur in both the goods and the money markets. The IS–LM model shows the importance of various demand shocks on output and consequently offers an explanation of changes in national income in the short run when prices are fixed or sticky. Hence, the model can be used as a tool to suggest potential levels for appropriate stabilisation policies. It is also used as a building block for the demand side of the economy in more comprehensive models like the AD–AS model.
New Keynesian economics is a school of macroeconomics that strives to provide microeconomic foundations for Keynesian economics. It developed partly as a response to criticisms of Keynesian macroeconomics by adherents of new classical macroeconomics.
The Phillips curve is an economic model, named after Bill Phillips, that correlates reduced unemployment with increasing wages in an economy. While Phillips did not directly link employment and inflation, this was a trivial deduction from his statistical findings. Paul Samuelson and Robert Solow made the connection explicit and subsequently Milton Friedman and Edmund Phelps put the theoretical structure in place.
An interest rate is the amount of interest due per period, as a proportion of the amount lent, deposited, or borrowed. The total interest on an amount lent or borrowed depends on the principal sum, the interest rate, the compounding frequency, and the length of time over which it is lent, deposited, or borrowed.
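A small sketch of that dependence, using assumed figures: the principal, rate, horizon, and compounding frequencies below are illustrative only.

```python
# Total interest depends on principal, rate, compounding frequency, and time.
# The figures below are assumed for illustration.

def total_interest(principal, annual_rate, years, compounds_per_year=12):
    """Interest accrued on `principal` at `annual_rate` (a decimal), with compounding."""
    balance = principal * (1 + annual_rate / compounds_per_year) ** (compounds_per_year * years)
    return balance - principal

print(round(total_interest(10_000, 0.05, years=3), 2))                        # ~1614.72 with monthly compounding
print(round(total_interest(10_000, 0.05, years=3, compounds_per_year=1), 2))  # 1576.25 with annual compounding
```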
Monetary policy is the policy adopted by the monetary authority of a nation to affect monetary and other financial conditions to accomplish broader objectives like high employment and price stability. Further purposes of a monetary policy may be to contribute to economic stability or to maintain predictable exchange rates with other currencies. Today most central banks in developed countries conduct their monetary policy within an inflation targeting framework, whereas the monetary policies of most developing countries' central banks target some kind of a fixed exchange rate system. A third monetary policy strategy, targeting the money supply, was widely followed during the 1980s, but has diminished in popularity since then, though it is still the official strategy in a number of emerging economies.
The quantity theory of money is a hypothesis within monetary economics which states that the general price level of goods and services is directly proportional to the amount of money in circulation, and that the causality runs from money to prices. This implies that the theory potentially explains inflation. It originated in the 16th century and has been proclaimed the oldest surviving theory in economics.
The Mundell–Fleming model, also known as the IS–LM–BoP model, is an economic model first set forth (independently) by Robert Mundell and Marcus Fleming. The model is an extension of the IS–LM model. Whereas the traditional IS–LM model deals with an economy under autarky, the Mundell–Fleming model describes a small open economy.
In macroeconomics, inflation targeting is a monetary policy where a central bank follows an explicit target for the inflation rate for the medium-term and announces this inflation target to the public. The assumption is that the best that monetary policy can do to support long-term growth of the economy is to maintain price stability, and price stability is achieved by controlling inflation. The central bank uses interest rates as its main short-term monetary instrument.
The real interest rate is the rate of interest an investor, saver or lender receives after allowing for inflation. It can be described more formally by the Fisher equation, which states that the real interest rate is approximately the nominal interest rate minus the inflation rate.
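A minimal numerical sketch, with assumed rates, comparing the exact Fisher relation to the usual subtraction approximation:

```python
# Fisher equation: (1 + nominal) = (1 + real) * (1 + inflation).
# The rates below are assumed for illustration.

nominal, inflation = 0.05, 0.03
real_exact = (1 + nominal) / (1 + inflation) - 1   # ~0.0194 (about 1.94%)
real_approx = nominal - inflation                  # 0.02 (2%), the familiar approximation
print(real_exact, real_approx)
```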
A monetary policy reaction function describes how a central bank systematically adjusts its policy instruments in response to changes in economic conditions. This function provides a framework for understanding how central banks make policy decisions based on observable economic indicators.
In monetary economics, the demand for money is the desired holding of financial assets in the form of money: that is, cash or bank deposits rather than investments. It can refer to the demand for money narrowly defined as M1, or for money in the broader sense of M2 or M3.
Dynamic stochastic general equilibrium modeling is a macroeconomic method which is often employed by monetary and fiscal authorities for policy analysis, explaining historical time-series data, as well as future forecasting purposes. DSGE econometric modelling applies general equilibrium theory and microeconomic principles in a tractable manner to postulate economic phenomena, such as economic growth and business cycles, as well as policy effects and market shocks.
The Great Moderation is a period of macroeconomic stability in the United States of America coinciding with the rise of independent central banking that began around 1980 and continues to the present day. It is characterized by generally milder business cycle fluctuations in developed nations, compared with decades before. Throughout this period, major economic variables such as real GDP growth, industrial production, unemployment, and price levels have become less volatile, while average inflation has fallen and recessions have become less common.
Athanasios Orphanides is a Cypriot economist who served as Governor of the Central Bank of Cyprus from 3 May 2007 to 2 May 2012 and as a member of the Governing Council of the European Central Bank from 1 January 2008 to 2 May 2012.
A nominal income target is a monetary policy target. Such targets are adopted by central banks to manage national economic activity. Nominal aggregates are not adjusted for inflation. Nominal income aggregates that can serve as targets include nominal gross domestic product (NGDP) and nominal gross domestic income (GDI). Central banks use a variety of techniques to hit their targets, including conventional tools such as interest rate targeting or open market operations, unconventional tools such as quantitative easing or interest on excess reserves, and expectations management. The concept of NGDP targeting was formally proposed by neo-Keynesian economists James Meade in 1977 and James Tobin in 1980, although Austrian School economist Friedrich Hayek argued in favor of the stabilization of nominal income as a monetary policy norm as early as 1931 and as late as 1975.
In economics, divine coincidence refers to the property of New Keynesian models that there is no trade-off between the stabilization of inflation and the stabilization of the welfare-relevant output gap for central banks. This property is attributed to a feature of the model, namely the absence of real imperfections such as real wage rigidities. Conversely, if New Keynesian models are extended to account for these real imperfections, divine coincidence disappears and central banks again face a trade-off between inflation and output gap stabilization. The definition of divine coincidence is usually attributed to the seminal article by Olivier Blanchard and Jordi Galí in 2007.
The interest rate channel is a mechanism of monetary policy, whereby a policy-induced change in the short-term nominal interest rate by the central bank affects the price level, and subsequently output and employment.
A Calvo contract is the name given in macroeconomics to a pricing model in which, once a firm sets a nominal price, it faces in each period a constant probability of being able to reset that price, independent of the time elapsed since the price was last set. The model was first put forward by Guillermo Calvo in his 1983 article "Staggered Prices in a Utility-Maximizing Framework". The original article was written in a continuous time mathematical framework, but nowadays it is mostly used in its discrete time version. The Calvo model is the most common way to model nominal rigidity in new Keynesian DSGE macroeconomic models.
The neutral or natural rate of interest is the real interest rate that supports the economy at full employment/maximum output while keeping inflation constant. It cannot be observed directly. Rather, policy makers and economic researchers aim to estimate the neutral rate of interest as a guide to monetary policy, usually using various economic models to help them do so.