Taylor contract (economics)


The Taylor contract or staggered contract was first formulated by John B. Taylor in two articles: "Staggered wage setting in a macro model" (1979) [1] and "Aggregate Dynamics and Staggered Contracts" (1980). [2] In its simplest form, one can think of two equal-sized unions who set wages in an industry. Each period, one of the unions sets the nominal wage for two periods (i.e. it is constant over the two periods). This means that in any one period, only one of the unions (representing half of the labor in the industry) can reset its wage and react to events that have just happened. When the union sets its wage, it sets it for a known and fixed period of time (two periods). Whilst it will know what is happening in the first period when it sets the new wage, it will have to form expectations about the factors in the second period that determine the optimal wage to set. Although the model was first used to model wage setting, in the new Keynesian models that followed it was also used to model price-setting by firms.


The importance of the Taylor contract is that it introduces nominal rigidity into the economy. In macroeconomics, if all wages and prices are perfectly flexible, then money is neutral and the classical dichotomy holds. In previous Keynesian models, such as the IS–LM model, it had simply been assumed that wages and/or prices were fixed in the short run so that money could affect GDP and employment. John Taylor saw that by introducing staggered or overlapping contracts, he could allow some wages to respond to current shocks immediately, while the fact that others were set one period ago was enough to introduce dynamics into wages (and prices). Even a one-off shock to the money supply will, with Taylor contracts, set off a process of wage adjustment that takes time to work through, during which output (GDP) and employment can differ from the long-run equilibrium.

Historical importance

The Taylor contract came as a response to results of new classical macroeconomics, in particular the policy-ineffectiveness proposition proposed in 1975 by Thomas J. Sargent and Neil Wallace [3] based upon the theory of rational expectations, which posits that monetary policy cannot systematically manage the levels of output and employment in the economy and that monetary shocks can only give rise to transitory deviations of output from equilibrium. The policy-ineffectiveness proposition relied on flexible wages and prices. With the Taylor overlapping contract approach, even with rational expectations, monetary shocks can have a sustained effect on output and employment.

Evaluation

Taylor contracts have not become the standard way of modelling nominal rigidity in new Keynesian DSGE models, which have instead favoured the Calvo model of nominal rigidity. The main reason is that Taylor models do not generate enough nominal rigidity to fit the data on the persistence of output shocks. [4] Calvo models appear to generate more persistence than comparable Taylor models. [5]

Development of the concept

The notion that contracts last for just two periods can of course be generalized to any number. For example, if you believe that wages are set for periods of one year and you have a quarterly model, then the length of the contract will be 4 periods (4 quarters). There would then be 4 unions, each representing 25% of the market. Each period, one of the unions resets its wage for four periods: i.e. 25% of wages change in a given period. In general, if contracts last for i periods, there are i unions and one resets its wage (or price) each period. So, if contracts last 10 periods, there are 10 unions and one resets every period, as illustrated in the sketch below.
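The staggering structure can be made concrete with a minimal sketch (illustrative Python only; the contract length and function name are this example's assumptions, not anything from Taylor's papers): with contracts lasting n periods there are n cohorts of wage setters, and exactly one cohort, covering a fraction 1/n of wages, resets in any given period.

```python
def resetting_cohort(t, n):
    """With n-period staggered contracts there are n cohorts (unions), each
    covering a fraction 1/n of wages; cohort t mod n resets in period t."""
    return t % n

n = 4  # e.g. one-year contracts in a quarterly model
for t in range(8):
    print(f"period {t}: union {resetting_cohort(t, n)} resets its wage "
          f"({100 / n:.0f}% of wages change this period)")
```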

However, Taylor realized that in practice there is much heterogeneity in the length of wage contracts across the economy.

"There is a great deal of heterogeneity in wage and price setting. In fact, the data suggest that there is as much a difference between the average lengths of different types of price setting arrangements, or between the average lengths of different types of wage setting arrangements, as there is between wage setting and price setting. Grocery prices change much more frequently than magazine prices – frozen orange juice prices change every two weeks, while magazine prices change every three years! Wages in some industries change once per year on average, while others change per quarter and others once every two years. One might hope that a model with homogenous representative price or wage setting would be a good approximation to this more complex world, but most likely some degree of heterogeneity will be required to describe reality accurately." [6]

In his 1991 book Macroeconomic Policy in a World Economy, [7] Taylor developed a model of the US economy in which there is a variety of contract lengths, from 1 to 8 quarters inclusive. The approach of having several sectors with different contract lengths is known as a Generalized Taylor Economy [8] and has been used in several new Keynesian studies. [9] [10] [11]

Mathematical example

We take a simple macro model, from Romer (2011, pp. 322–328), to illustrate the mechanics of the two-period Taylor contract. We express it in terms of wages, but the same algebra would apply to a Taylor model of prices. For the derivation of the Taylor model under a variety of assumptions, see the survey by Guido Ascari. [12] The variables are expressed in log-linear form, i.e. as proportional deviations from some steady state.

The economy is divided into two sectors of equal size: in each sector there are unions which set nominal wages for two periods. The sectors reset their wages in alternate periods (hence the overlapping or staggered nature of contracts). The reset wage in period t is denoted $x_t$. Nominal prices are a markup on the wages in each sector, so that the price can be expressed as a markup on the prevailing wages: the reset wage for this period and the wage in the other sector, which was set in the previous period:

$$p_t = \frac{x_t + x_{t-1}}{2}.$$

We can define the optimal flex-wage $w_t^{*}$ as the wage the union would like to set if it were free to reset the wage every period. This is usually assumed to take the form:

$$w_t^{*} = p_t + \gamma y_t,$$

where $y_t$ is GDP and $\gamma$ is a coefficient which captures the sensitivity of wages to demand. If $\gamma = 0$, then the optimal flex wage depends only on prices and is insensitive to the level of demand (in effect, we have real rigidity). Larger values of $\gamma$ indicate that the nominal wage responds to demand: more output means a higher real wage. The microfoundations for the optimal flex-wage or price can be found in Walsh (2011), chapter 5, and Woodford (2003), chapter 3.
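The two relationships so far translate directly into code. The following is a minimal sketch (the numbers are arbitrary illustrations, not values from the source): the price is the average of the two prevailing reset wages, and the flex wage adds $\gamma$ times output to the price, so $\gamma = 0$ reproduces the real-rigidity case.

```python
def price(x_t, x_prev):
    """Log price level: markup over the average of the prevailing reset wages."""
    return 0.5 * (x_t + x_prev)

def flex_wage(p_t, y_t, gamma):
    """Optimal flexible wage w* = p + gamma * y; gamma = 0 ignores demand."""
    return p_t + gamma * y_t

p = price(1.0, 0.8)
print(flex_wage(p, y_t=0.5, gamma=0.0))   # 0.9   -- tracks prices only (real rigidity)
print(flex_wage(p, y_t=0.5, gamma=0.25))  # 1.025 -- higher output raises the desired wage
```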

In the Taylor model, the union has to set the same nominal wage for two periods. The reset wage is thus the expected average of the optimal flex wage over the two periods of the contract:

$$x_t = \frac{w_t^{*} + E_t w_{t+1}^{*}}{2},$$

where $E_t$ denotes the expectation conditional on information available at t.

To close the model we need a simple model of output determination. For simplicity, we can assume the simple Quantity Theory (QT) model with a constant velocity. Letting $m_t$ be the money supply:

$$y_t = m_t - p_t.$$

Using the optimal flex wage equation, we can substitute for $w_t^{*}$ in terms of output and price (current and expected) to give the reset wage:

$$2x_t = p_t + \gamma y_t + E_t\left[p_{t+1} + \gamma y_{t+1}\right].$$

Using the QT equation, we can then eliminate output in terms of the money supply and price:

$$2x_t = (1-\gamma)p_t + \gamma m_t + E_t\left[(1-\gamma)p_{t+1} + \gamma m_{t+1}\right].$$

Using the markup equation, we can express the price in each period in terms of the reset wages, which gives us a second-order stochastic difference equation in $x_t$:

$$x_t = A\left(x_{t-1} + E_t x_{t+1}\right) + \frac{\gamma}{1+\gamma}\left(m_t + E_t m_{t+1}\right),$$

where $A = \dfrac{1-\gamma}{2(1+\gamma)}$.

Lastly, we need to assume something about the stochastic process driving the money supply. The simplest case to consider is a random walk:

$$m_t = m_{t-1} + \varepsilon_t,$$

where $\varepsilon_t$ is a monetary shock with zero mean and no serial correlation (so-called white noise). In this case, the solution for the nominal reset wage can be shown to be:

$$x_t = \lambda x_{t-1} + (1-\lambda) m_t,$$

where $\lambda$ is the stable eigenvalue of the difference equation:

$$\lambda = \frac{1-\sqrt{\gamma}}{1+\sqrt{\gamma}}.$$
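A quick numerical check (a sketch with an arbitrary value of $\gamma$) confirms that this $\lambda$ solves the characteristic equation $A\lambda^{2} - \lambda + A = 0$ of the homogeneous part of the difference equation above, and that it lies inside the unit circle, which is what makes it the stable eigenvalue.

```python
import math

gamma = 0.25                        # arbitrary demand-sensitivity parameter
A = (1 - gamma) / (2 * (1 + gamma))
lam = (1 - math.sqrt(gamma)) / (1 + math.sqrt(gamma))

# The homogeneous part x_t = A*(x_{t-1} + x_{t+1}) has characteristic
# equation A*lam^2 - lam + A = 0; the stable root lies inside the unit circle.
print(A * lam**2 - lam + A)         # ~0.0 (up to floating-point error)
print(abs(lam) < 1)                 # True
```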

If $\gamma = 0$ then $\lambda = 1$: there is perfect nominal rigidity, the reset wage this period is the same as the reset wage last period, and wages and prices remain fixed in both real and nominal terms. For $\gamma > 0$ we have $\lambda < 1$ and nominal prices adjust to the new steady state. Since money follows a random walk, the monetary shock lasts forever and the new steady-state price and wage are equal to $m_t$. The wage will adjust towards the new steady state more quickly the smaller $\lambda$ is. We can rewrite the above solution as:

$$x_t - m_t = \lambda\left(x_{t-1} - m_t\right).$$

The left-hand side expresses the gap between the current reset wage and the new steady state: this is a proportion $\lambda$ of the preceding gap. Thus a smaller $\lambda$ implies that the gap will shrink more rapidly. The value of $\lambda$ therefore determines how rapidly the nominal wage adjusts to its new steady-state value.
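The adjustment process can be simulated directly. The sketch below (illustrative Python; the value of $\gamma$, the horizon, and the unit size of the shock are this example's assumptions) feeds a single permanent monetary shock through the solution, recovers prices from the markup equation and output from the quantity-theory equation, and shows output rising on impact and then decaying back to its long-run level at rate $\lambda$, even though wages are only fixed for two periods.

```python
import math

gamma = 0.25                                  # arbitrary demand sensitivity
lam = (1 - math.sqrt(gamma)) / (1 + math.sqrt(gamma))

T = 12
m = [0.0] + [1.0] * (T - 1)                   # one-off permanent rise in (log) money at t = 1
x = [0.0] * T                                 # reset wages, starting at the old steady state
p = [0.0] * T                                 # price level
y = [0.0] * T                                 # output (deviation from steady state)

for t in range(1, T):
    x[t] = lam * x[t - 1] + (1 - lam) * m[t]  # solution of the two-period Taylor model
    p[t] = 0.5 * (x[t] + x[t - 1])            # price: average of the prevailing reset wages
    y[t] = m[t] - p[t]                        # quantity theory: y = m - p

for t in range(T):
    print(f"t={t:2d}  m={m[t]:.2f}  x={x[t]:.3f}  p={p[t]:.3f}  y={y[t]:.3f}")
# Output jumps up when the shock hits and then shrinks geometrically (at rate lambda)
# as wages and prices converge to the new steady state m = 1.
```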

See also

  * Calvo contract
  * Nominal rigidity
  * New Keynesian economics
  * Policy-ineffectiveness proposition
  * Rational expectations

References

  1. John B Taylor (1979), "Staggered wage setting in a macro model". American Economic Review, Papers and Proceedings 69 (2), pp. 108–13
  2. John B Taylor (1980). "Aggregate Dynamics and Staggered Contracts," Journal of Political Economy, 88(1), pp. 1–23, February.
  3. Sargent, T & Wallace, N (1975). "'Rational' Expectations, the Optimal Monetary Instrument, and the Optimal Money Supply Rule". Journal of Political Economy 83 (2): 241–254. doi:10.1086/260321
  4. Chari, V. V., Kehoe, P. J. and McGrattan, E. R. (2000), "Sticky price models of the business cycle: Can the contract multiplier solve the persistence problem?", Econometrica, 68, (5), 1151–1179.
  5. Kiley, Michael (2002). "Price adjustment and Staggered Price-Setting." Journal of Money, Credit and Banking 34, 283–298
  6. John B Taylor, (1999) "Staggered Wage and Price Setting in Macroeconomics" in: J.B. Taylor and M. Woodford, eds, Handbook of Macroeconomics, Vol. 1, North-Holland, Amsterdam.
  7. John B. Taylor (1994), Macroeconomic Policy in a World Economy, Norton. ISBN 978-0393963168
  8. Taylor J.B. (2016), "The Staying Power of Staggered Wage and Price Setting Models in Macroeconomics", Chapter 25 in Handbook of Macroeconomics, volume 2, pp. 2009–2042. doi:10.1016/bs.hesmac.2016.04.008
  9. Coenen G, Levin AT, Christoffel K (2007), "Identifying the influences of nominal and real rigidities in aggregate price-setting behavior", Journal of Monetary Economics, 54, 2439–2466
  10. Kara, E (2010). "Optimal Monetary Policy in the Generalised Taylor Economy", Journal of Economic Dynamics and Control, 34, pp. 2023–2037
  11. Dixon H, Le Bihan H (2012), "Generalised Taylor and Generalised Calvo Price and Wage Setting: Micro-evidence with Macro Implications", The Economic Journal, volume 122, pp. 532–554. doi:10.1111/j.1468-0297.2012.02497.x
  12. Guido Ascari (2003), "Price/Wage Staggering and Persistence: a Unifying Framework", The Journal of Economic Surveys, 17 (4), pp. 511–540.
