Welfare cost of business cycles

In macroeconomics, the cost of business cycles is the decrease in social welfare, if any, caused by business cycle fluctuations.

Nobel economist Robert Lucas proposed measuring the cost of business cycles as the percentage increase in consumption that would be necessary to make a representative consumer indifferent between a smooth, non-fluctuating, consumption trend and one that is subject to business cycles.

Under the assumption that business cycles represent random shocks around a trend growth path, Lucas argued that the cost of business cycles is extremely small, [1] [2] and that as a result the focus of both academic economists and policy makers on economic stabilization policy rather than on long-term growth has been misplaced. [3] [4] Lucas himself, after calculating this cost in 1987, reoriented his own macroeconomic research program away from the study of short-run fluctuations.[citation needed]

However, Lucas' conclusion is controversial. In particular, Keynesian economists typically argue that business cycles should not be understood as fluctuations above and below a trend. Instead, they argue that booms are times when the economy is near its potential output trend, and that recessions are times when the economy is substantially below trend, so that there is a large output gap. [4] [5] Under this viewpoint, the welfare cost of business cycles is larger, because an economy with cycles not only suffers more variable consumption, but also lower consumption on average.

Basic intuition

Figure: Compensating an individual for volatility in consumption.

Consider two consumption paths with the same trend and the same initial level of consumption – and therefore the same level of consumption per period on average – but with different levels of volatility. According to economic theory, the less volatile consumption path will be preferred to the more volatile one, because individual agents are risk averse. One way to calculate how costly this greater volatility is in terms of individual (or, under some restrictive conditions, social) welfare is to ask what percentage of their average annual consumption individuals would be willing to sacrifice in order to eliminate the volatility entirely. Equivalently, one can ask how much an individual with a smooth consumption path would have to be compensated, in terms of average consumption, to accept the volatile path instead. The resulting amount of compensation, expressed as a percentage of average annual consumption, is the cost of fluctuations calculated by Lucas. It is a function of people's degree of risk aversion and of the magnitude of the fluctuations to be eliminated, as measured by the standard deviation of the natural log of consumption. [6]
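This thought experiment can be illustrated numerically. The sketch below (not Lucas' own calculation) simulates mean-one lognormal consumption shocks and solves for the percentage compensation that makes an agent with constant relative risk aversion indifferent between the smooth and the volatile path; the function names are ad hoc, and the parameter values are the ones used later in this article.

```python
import numpy as np
from scipy.optimize import brentq

def crra_utility(c, gamma):
    """CRRA per-period utility; log utility when gamma == 1."""
    if gamma == 1:
        return np.log(c)
    return (c ** (1 - gamma) - 1) / (1 - gamma)

def cost_of_fluctuations(sigma, gamma, n_draws=1_000_000, seed=0):
    """Share of average consumption an agent would give up to remove
    mean-one lognormal consumption shocks, found by numerical indifference."""
    rng = np.random.default_rng(seed)
    # Mean-one lognormal shocks: ln(eps) ~ N(-sigma^2/2, sigma^2), so E[eps] = 1
    eps = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=n_draws)
    smooth_u = crra_utility(1.0, gamma)  # smooth consumption normalized to 1
    # Find lambda such that E[u((1 + lambda) * eps)] = u(1)
    f = lambda lam: crra_utility((1 + lam) * eps, gamma).mean() - smooth_u
    return brentq(f, 0.0, 1.0)

print(cost_of_fluctuations(sigma=0.032, gamma=1))   # roughly 0.0005, i.e. 0.05%
print(cost_of_fluctuations(sigma=0.032, gamma=4))   # roughly 0.002, i.e. 0.2%
```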

Lucas' formula

Robert Lucas' baseline formula for the welfare cost of business cycles is given by (see the mathematical derivation below):

$$\lambda \approx \frac{1}{2}\gamma\sigma^2$$

where $\lambda$ is the cost of fluctuations (the percentage of average annual consumption that a person would be willing to pay to eliminate all fluctuations in her consumption), $\sigma$ is the standard deviation of the natural log of consumption and $\gamma$ measures the degree of relative risk aversion. [6]

It is straightforward to measure $\sigma$ from available data; using US data from between 1947 and 2001, Lucas obtained $\sigma = .032$. It is a little harder to obtain an empirical estimate of $\gamma$; although it should be theoretically possible, many controversies in economics revolve around the precise and appropriate measurement of this parameter. However, it is doubtful that $\gamma$ is particularly high (most estimates are no higher than 4).
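As a sketch of how $\sigma$ might be estimated from a consumption series (Lucas' own calculation uses a particular detrending of US data; the detrending choice and the data below are illustrative assumptions only):

```python
import numpy as np

def consumption_volatility(consumption):
    """Estimate sigma as the standard deviation of log consumption
    around a fitted log-linear trend (one of several possible detrending choices)."""
    log_c = np.log(np.asarray(consumption, dtype=float))
    t = np.arange(len(log_c))
    # Fit log c_t = a + b*t and treat the residuals as cyclical deviations
    b, a = np.polyfit(t, log_c, 1)
    deviations = log_c - (a + b * t)
    return deviations.std(ddof=1)

# Hypothetical annual real consumption-per-capita series (illustrative numbers only)
series = [100, 103, 104, 108, 107, 112, 115, 114, 119, 123]
print(consumption_volatility(series))
```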

As an illustrative example, consider the case of log utility (see below), in which case $\gamma = 1$. In this case the welfare cost of fluctuations is

$$\lambda \approx \frac{1}{2}(1)(.032)^2 \approx .0005$$

In other words, eliminating all the fluctuations from a person's consumption path (i.e., eliminating the business cycle entirely) is worth only 1/20 of 1 percent of average annual consumption. For example, an individual who consumes $50,000 worth of goods a year on average would be willing to pay only $25 to eliminate consumption fluctuations.

The implication is that, if the calculation is correct and appropriate, the ups and downs of the business cycle, the recessions and the booms, hardly matter for individual and possibly social welfare. It is the long-run trend of economic growth that is crucial.

If $\gamma$ is at the upper range of estimates found in the literature, around 4, then

$$\lambda \approx \frac{1}{2}(4)(.032)^2 \approx .002$$

or 1/5 of 1 percent. An individual with average consumption of $50,000 would be willing to pay $100 to eliminate fluctuations. This is still a very small amount compared to the implications of long run growth on income.
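These back-of-the-envelope numbers can be reproduced directly from the formula. The snippet below simply evaluates $\lambda \approx \tfrac{1}{2}\gamma\sigma^2$ and the implied dollar cost for the illustrative $50,000 consumer used above (the figures quoted in the text are rounded).

```python
sigma = 0.032                  # standard deviation of log consumption (Lucas' estimate)
average_consumption = 50_000   # illustrative average annual consumption in dollars

for gamma in (1, 4):
    cost_share = 0.5 * gamma * sigma**2   # Lucas' approximation
    print(f"gamma={gamma}: {cost_share:.4%} of consumption "
          f"= ${cost_share * average_consumption:.0f} per year")
# gamma=1: 0.0512% of consumption = $26 per year
# gamma=4: 0.2048% of consumption = $102 per year
```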

One way to get an upper bound on the degree of risk aversion is to use the Ramsey model of intertemporal savings and consumption. In that case, the equilibrium real interest rate is given by

$$r = \theta + \gamma g$$

where $r$ is the real (after-tax) rate of return on capital (the real interest rate), $\theta$ is the subjective rate of time preference (which measures impatience) and $g$ is the annual growth rate of consumption. $r$ is generally estimated to be around 5% (.05) and the annual growth rate of consumption is about 2% (.02). Then the upper bound on the cost of fluctuations occurs when $\gamma$ is at its highest, which in this case occurs if $\theta = 0$. This implies that the highest possible degree of risk aversion is

$$\gamma = \frac{r}{g} = \frac{.05}{.02} = 2.5$$

which in turn, combined with the estimates given above, yields a cost of fluctuations of

$$\lambda \approx \frac{1}{2}(2.5)(.032)^2 \approx .0013$$

which is still extremely small (13% of 1%).
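The same bound can be checked in a few lines; as above, the numbers are the illustrative estimates quoted in the text.

```python
r, g, sigma = 0.05, 0.02, 0.032   # real return, consumption growth, log-consumption volatility

gamma_max = r / g                  # Ramsey Euler equation r = theta + gamma*g with theta = 0
cost_share = 0.5 * gamma_max * sigma**2
print(gamma_max)      # 2.5
print(cost_share)     # ~0.00128, i.e. about 0.13% of average consumption
```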

Mathematical representation and formula

Lucas sets up an infinitely lived representative agent model where total lifetime utility ($U$) is given by the present discounted value (with $\beta$ representing the discount factor) of per-period utilities ($u(\cdot)$), which in turn depend on consumption in each period ($c_t$): [4]

$$U = E\left[\sum_{t=0}^{\infty}\beta^t\, u(c_t)\right]$$

In the case of a certain consumption path, consumption in each period is given by

$$c_t = A(1+g)^t$$

where $A$ is initial consumption and $g$ is the growth rate of consumption (neither of these parameters turns out to matter for the cost of fluctuations in the baseline model, so they can be normalized to 1 and 0 respectively).

In the case of a volatile, uncertain consumption path, consumption in each period is given by

$$c_t = (1+\lambda)\,A(1+g)^t\, e^{-\frac{\sigma^2}{2}}\varepsilon_t, \qquad \ln\varepsilon_t \sim N(0,\sigma^2)$$

where $\sigma$ is the standard deviation of the natural log of consumption and $\varepsilon_t$ is a random shock which is assumed to be log-normally distributed, so that the mean of $\ln\varepsilon_t$ is zero, which in turn implies that the expected value of $e^{-\sigma^2/2}\varepsilon_t$ is 1 (i.e., on average, the volatile consumption path without the compensation factor is the same as the certain one). In this case $\lambda$ is the "compensation parameter", which measures the percentage by which average consumption has to be increased for the consumer to be indifferent between the certain path of consumption and the volatile one. $\lambda$ is the cost of fluctuations.

We find this cost of fluctuations by setting

$$E\left[\sum_{t=0}^{\infty}\beta^t\, u\!\left((1+\lambda)\,A(1+g)^t e^{-\frac{\sigma^2}{2}}\varepsilon_t\right)\right] \;=\; \sum_{t=0}^{\infty}\beta^t\, u\!\left(A(1+g)^t\right)$$

and solving for $\lambda$.
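One convenient way to solve this condition (a derivation step made explicit here; it is left implicit in the sources) is to note that, with the shocks independent across periods and utility additively separable, the equality holds when it holds period by period. With the normalizations $A = 1$ and $g = 0$ mentioned above, this reduces to

$$E\!\left[u\!\left((1+\lambda)\,e^{-\frac{\sigma^2}{2}}\varepsilon_t\right)\right] = u(1) \qquad \text{for every } t.$$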

For the case of isoelastic utility, given by

$$u(c_t) = \frac{c_t^{1-\gamma}-1}{1-\gamma}$$

we can obtain an (approximate) closed-form solution, which has already been given above:

$$\lambda \approx \frac{1}{2}\gamma\sigma^2$$

A special case of the above formula occurs if utility is logarithmic, $u(c_t) = \ln c_t$, which corresponds to the case of $\gamma = 1$; the formula then simplifies to $\lambda \approx \frac{1}{2}\sigma^2$. In other words, with log utility the cost of fluctuations is equal to one half the variance of the natural logarithm of consumption. [6]
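For completeness, here is a sketch of the derivation under the lognormal specification above (standard algebra, not taken from the cited references). Substituting the isoelastic utility into the per-period condition and using $E[\varepsilon^{1-\gamma}] = e^{(1-\gamma)^2\sigma^2/2}$ for $\ln\varepsilon \sim N(0,\sigma^2)$:

\begin{align*}
(1+\lambda)^{1-\gamma}\, e^{-(1-\gamma)\sigma^2/2}\, E\!\left[\varepsilon^{1-\gamma}\right] &= 1 \\
(1+\lambda)^{1-\gamma}\, e^{-\gamma(1-\gamma)\sigma^2/2} &= 1 \\
1+\lambda &= e^{\gamma\sigma^2/2},
\end{align*}

so that $\lambda = e^{\gamma\sigma^2/2} - 1 \approx \tfrac{1}{2}\gamma\sigma^2$ to first order. The exact value always exceeds the first-order approximation, consistent with the remark below that more accurate solutions give somewhat larger losses.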

An alternative, more accurate solution gives losses that are somewhat larger, especially when volatility is large. [7]

Risk aversion and the equity premium puzzle

However, a major problem related to the above way of estimating $\gamma$ (and hence $\lambda$), and in fact possibly to Lucas' entire approach, is the so-called equity premium puzzle, first observed by Mehra and Prescott in 1985. [8] The analysis above implies that, since macroeconomic risk is unimportant, the premium associated with systematic risk, that is, risk in returns to an asset that is correlated with aggregate consumption, should be small (less than 0.5 percentage points for the values of risk aversion considered above). In fact the premium has averaged around six percentage points.
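To see why the observed premium is puzzling, a rough consumption-CAPM calculation can be sketched. With CRRA utility the predicted premium is approximately $\gamma$ times the covariance between consumption growth and equity returns; the moment values below are illustrative assumptions, not the figures used by Mehra and Prescott.

```python
# Back-of-the-envelope consumption-CAPM premium: E[r_e] - r_f ≈ gamma * cov(dln c, r_e)
gamma = 2.5          # upper-bound risk aversion from the Ramsey argument above
sigma_c = 0.02       # volatility of annual consumption growth (illustrative)
sigma_m = 0.17       # volatility of annual equity returns (illustrative)
correlation = 0.4    # correlation between consumption growth and returns (illustrative)

predicted_premium = gamma * correlation * sigma_c * sigma_m
print(f"predicted equity premium ≈ {predicted_premium:.2%}")   # ≈ 0.34%, versus ~6% observed
```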

In a survey of the implications of the equity premium, Simon Grant and John Quiggin note that 'A high cost of risk means that recessions are extremely destructive'. [9]

Evidence from effects on subjective wellbeing

Justin Wolfers has shown that macroeconomic volatility reduces subjective wellbeing; the effects are somewhat larger than expected under the Lucas approach. According to Wolfers, 'eliminating unemployment volatility would raise well-being by an amount roughly equal to that from lowering the average level of unemployment by a quarter of a percentage point'. [10]

See also

Elasticity of intertemporal substitution


References

  1. Otrok, Christopher (2001). "On measuring the welfare cost of business cycles" (PDF). Journal of Monetary Economics. 47 (1): 61–92. doi:10.1016/S0304-3932(00)00052-0.
  2. Imrohoroglu, Ayse. "Welfare costs of business cycles" (PDF). The New Palgrave Dictionary of Economics Online.
  3. Barlevy, Gadi (2004). "The Cost of Business Cycles under Endogenous Growth" (PDF). American Economic Review. 94 (4): 964–990. doi:10.1257/0002828042002615. JSTOR 3592801.
  4. Yellen, Janet L.; Akerlof, George A. (January 1, 2006). "Stabilization policy: a reconsideration". Economic Inquiry. 44: 1–22. CiteSeerX 10.1.1.298.6467. doi:10.1093/ei/cbj002.
  5. Galí, Jordi; Gertler, Mark; López-Salido, J. David (2007). "Markups, Gaps, and the Welfare Costs of Business Fluctuations". Review of Economics and Statistics. 89 (1): 44–59. CiteSeerX 10.1.1.384.1686. doi:10.1162/rest.89.1.44.
  6. Lucas, Robert E. Jr. (2003). "Macroeconomic Priorities". American Economic Review. 93 (1): 1–14. CiteSeerX 10.1.1.366.2404. doi:10.1257/000282803321455133.
  7. Latty (2011). "A note on the relationship between the Atkinson index and the Generalised entropy class of decomposable inequality indexes under the assumption of log-normality of income distribution or volatility". https://www.academia.edu/1816869/A_note_on_the_relationship_between_the_Atkinson_index_and_the_generalised_entropy_class_of_decomposable_inequality_indexes_under_the_assumption_of_log-normality_of_income_distribution_or_volatility
  8. Mehra, Rajnish; Prescott, Edward C. (1985). "The Equity Premium: A Puzzle" (PDF). Journal of Monetary Economics. 15 (2): 145–161. doi:10.1016/0304-3932(85)90061-3.
  9. Grant, Simon; Quiggin, John (2005). "What does the equity premium mean?". The Economists' Voice. 2 (4): Article 2. doi:10.2202/1553-3832.1088. S2CID 153516437.
  10. Wolfers, Justin (April 2003). "Is Business Cycle Volatility Costly? Evidence from Surveys of Subjective Wellbeing". National Bureau of Economic Research.

Further reading