Multiplier uncertainty

In macroeconomics, multiplier uncertainty is the lack of perfect knowledge of the multiplier effect of a particular policy action, such as a monetary or fiscal policy change, upon the intended target of the policy. For example, a fiscal policy maker may have a prediction as to the value of the fiscal multiplier (the ratio of the effect of a government spending change on GDP to the size of that spending change), but is not likely to know the exact value of this ratio. Similar uncertainty may surround the magnitude of the effect of a change in the monetary base or its growth rate upon some target variable, which could be the money supply, the exchange rate, the inflation rate, or GDP.

In macroeconomics, a multiplier is a factor of proportionality that measures how much an endogenous variable changes in response to a change in some exogenous variable.
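
For a standard textbook example of such a proportionality factor (not taken from the sources cited here): in the simple Keynesian cross model with consumption $C = cY$, where $0 < c < 1$ is the marginal propensity to consume, equilibrium output satisfies $Y = C + I + G$, so a change $\Delta G$ in the exogenous variable $G$ moves the endogenous variable $Y$ by

$$\Delta Y = \frac{1}{1 - c}\,\Delta G,$$

giving a spending multiplier of $1/(1 - c)$; with $c = 0.8$, for example, the multiplier is 5.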

There are several policy implications of multiplier uncertainty: (1) If the multiplier uncertainty is uncorrelated with additive uncertainty, its presence causes greater cautiousness to be optimal (the policy tools should be used to a lesser extent). (2) In the presence of multiplier uncertainty, it is no longer redundant to have more policy tools than there are targeted economic variables. (3) Certainty equivalence no longer applies under quadratic loss: optimal policy is not equivalent to a policy of ignoring uncertainty.

Effect of multiplier uncertainty on the optimal magnitude of policy

For the simplest possible case, [1] let P be the size of a policy action (a government spending change, for example), let y be the value of the target variable (GDP, for example), let a be the policy multiplier, and let u be an additive term capturing both the linear intercept and all unpredictable components of the determination of y. Both a and u are random variables (assumed here for simplicity to be uncorrelated), with respective means $Ea$ and $Eu$ and respective variances $\sigma_a^2$ and $\sigma_u^2$. Then

$$y = aP + u.$$

Suppose the policy maker cares about the expected squared deviation of GDP from a preferred value $y_d$; then its loss function L is quadratic, so the objective function, expected loss, is given by:

$$E[L] = E\big[(y - y_d)^2\big] = E\big[(aP + u - y_d)^2\big] = (Ea \cdot P + Eu - y_d)^2 + \sigma_a^2 P^2 + \sigma_u^2,$$

where the last equality assumes there is no covariance between a and u. Optimizing with respect to the policy variable P gives the optimal value $P^{\text{opt}}$:

$$P^{\text{opt}} = \frac{Ea \,(y_d - Eu)}{(Ea)^2 + \sigma_a^2}.$$

Here the last factor in the numerator, $y_d - Eu$, is the gap between the preferred value $y_d$ of the target variable and its expected value $Eu$ in the absence of any policy action. If there were no uncertainty about the policy multiplier, $\sigma_a^2$ would be zero, and policy would be chosen so that the contribution of policy (the policy action P times its known multiplier a) exactly closes this gap, making $Ey$ equal to $y_d$. However, the optimal policy equation shows that, to the extent that there is multiplier uncertainty (the extent to which $\sigma_a^2 > 0$), the magnitude of the optimal policy action is diminished.

Thus the basic effect of multiplier uncertainty is to make policy actions more cautious, although this effect can be modified in more complicated models.
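
To make the attenuation result concrete, here is a minimal numerical sketch in Python; the parameter values ($Ea = 1$, $\sigma_a = 0.5$, $Eu = 2$, $\sigma_u = 1$, $y_d = 10$) are illustrative assumptions, not values from the cited literature. It estimates the expected loss by Monte Carlo over a grid of policy actions and checks the grid minimizer against both the closed-form $P^{\text{opt}}$ above and the certainty-equivalent action that ignores $\sigma_a^2$.

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not from the sources):
# multiplier a ~ N(Ea, sa^2), additive term u ~ N(Eu, su^2), preferred value yd.
rng = np.random.default_rng(0)
Ea, sa = 1.0, 0.5
Eu, su = 2.0, 1.0
yd = 10.0

a = rng.normal(Ea, sa, 1_000_000)
u = rng.normal(Eu, su, 1_000_000)

def expected_loss(P):
    """Monte Carlo estimate of E[(y - yd)^2] for y = a*P + u."""
    return np.mean((a * P + u - yd) ** 2)

# Numerical minimizer over a grid of candidate policy actions.
grid = np.linspace(0.0, 12.0, 1201)
P_numeric = min(grid, key=expected_loss)

# Closed-form optimum from the text, and the certainty-equivalent action.
P_formula = Ea * (yd - Eu) / (Ea**2 + sa**2)
P_certain = (yd - Eu) / Ea

print(P_numeric, P_formula, P_certain)  # approx. 6.4, 6.4, 8.0
```

With these values the certainty-equivalent action is 8 but the optimal action is only 6.4: the variance penalty $\sigma_a^2 P^2$ grows with the square of the action, so a bolder policy narrows the expected gap only at the cost of more outcome variance.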

Multiple targets or policy instruments

The above analysis of one target variable and one policy tool readily extends to multiple targets and tools. [2] A key result in this case is that, unlike in the absence of multiplier uncertainty, it is not superfluous to have more policy tools than targets: with multiplier uncertainty, the more tools are available, the lower the expected loss that can be achieved.
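
A sketch of this result under assumed illustrative values: extend the one-tool model above to $y = a_1 P_1 + a_2 P_2 + u$ with independent uncertain multipliers, so that expected loss is $(Ea_1 P_1 + Ea_2 P_2 + Eu - y_d)^2 + \sigma_1^2 P_1^2 + \sigma_2^2 P_2^2 + \sigma_u^2$ and the first-order conditions form a linear system in $(P_1, P_2)$.

```python
import numpy as np

# Two-tool extension with assumed illustrative values: y = a1*P1 + a2*P2 + u.
m1, s1 = 1.0, 0.5   # E[a1] and sd(a1)
m2, s2 = 0.8, 0.4   # E[a2] and sd(a2)
su = 1.0            # sd(u)
g = 8.0             # gap yd - Eu that policy tries to close

def loss(p1, p2):
    """Expected loss: (m1*p1 + m2*p2 - g)^2 + s1^2*p1^2 + s2^2*p2^2 + su^2."""
    return (m1 * p1 + m2 * p2 - g) ** 2 + (s1 * p1) ** 2 + (s2 * p2) ** 2 + su**2

# First-order conditions in (P1, P2): a 2x2 linear system.
A = np.array([[m1**2 + s1**2, m1 * m2],
              [m1 * m2, m2**2 + s2**2]])
b = np.array([m1 * g, m2 * g])
P1, P2 = np.linalg.solve(A, b)

# Each tool used alone follows the one-tool formula P = Ea*g / (Ea^2 + sa^2).
only1 = loss(m1 * g / (m1**2 + s1**2), 0.0)
only2 = loss(0.0, m2 * g / (m2**2 + s2**2))

print(loss(P1, P2), only1, only2)  # approx. 8.11 versus 13.8 and 13.8
```

Splitting the action across two imperfect tools lowers the total variance penalty for the same expected gap closure, which is the same diversification logic as the portfolio analogy discussed next.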

Analogy to portfolio theory

There is a mathematical and conceptual analogy between policy optimization with multiple policy tools subject to multiplier uncertainty, on the one hand, and portfolio optimization with multiple investment choices subject to rate-of-return uncertainty, on the other. [2] The settings of the policy variables correspond to the holdings of the risky assets, and the uncertain policy multipliers correspond to the uncertain rates of return on the assets. In both models, mutual fund theorems apply: under certain conditions, the optimal portfolio of any investor, regardless of preferences, and likewise the optimal policy mix of any policy maker, can be expressed as a linear combination of any two optimal portfolios or optimal policy mixes.

Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. It uses the variance of asset prices as a proxy for risk.

Dynamic policy optimization

The above discussion assumed a static world in which policy actions and outcomes for only one moment in time were considered. However, the analysis generalizes to a context of multiple time periods in which both policy actions take place and target variable outcomes matter, and in which time lags in the effects of policy actions exist. In this dynamic stochastic control context with multiplier uncertainty, [3] [4] [5] a key result is that the "certainty equivalence principle" does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, this no longer holds in the presence of multiplier uncertainty.
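
The failure of certainty equivalence can be seen in a minimal backward-induction sketch. The scalar linear-quadratic model below ($x_{t+1} = b x_t + a P_t + u_t$ with zero-mean noise $u_t$, per-period loss $x_t^2$) and its parameter values are assumptions chosen for illustration, not the models of the cited papers. With only additive uncertainty the optimal feedback rule is the certainty-equivalent gain $b/Ea$; once the multiplier variance $\sigma_a^2$ is positive, the gain shrinks, so the optimal rule genuinely depends on the uncertainty.

```python
# Assumed scalar model: x_{t+1} = b*x_t + a*P_t + u_t, total loss = sum of x_t^2,
# multiplier a with mean Ea and variance sa2. Backward induction gives value
# functions V_t(x) = k_t*x^2 + const and a linear feedback rule P_t = -F*x_t.
b, Ea, T = 0.9, 1.0, 5

def solve(sa2):
    # The gain minimizes E[V_{t+1}(b*x + a*P + u)]; it is constant over time
    # here because the loss puts no direct cost on the control itself.
    F = Ea * b / (Ea**2 + sa2)
    k = 1.0                                        # terminal value V_T(x) = x^2
    for _ in range(T):
        k = 1.0 + k * b**2 * sa2 / (Ea**2 + sa2)   # value-coefficient recursion
    return F, k

print(solve(0.0))    # gain 0.9 = b/Ea: additive noise alone leaves the rule unchanged
print(solve(0.25))   # gain 0.72: multiplier uncertainty dampens the policy response
```

The additive disturbance $u_t$ never enters the rule at all (it only shifts the constant term of the value function), which is exactly the certainty-equivalence property that multiplicative uncertainty breaks.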

References

  1. Brainard, William (1967). "Uncertainty and the effectiveness of policy". American Economic Review. 57 (2): 411–425. JSTOR 1821642.
  2. Mitchell, Douglas W. (1990). "The efficient policy frontier under parameter uncertainty and multiple tools". Journal of Macroeconomics. 12 (1): 137–145. doi:10.1016/0164-0704(90)90061-E.
  3. Chow, Gregory C. (1976). Analysis and Control of Dynamic Economic Systems. New York: Wiley. ISBN 0-471-15616-7.
  4. Turnovsky, Stephen (1976). "Optimal stabilization policies for stochastic linear systems: The case of correlated multiplicative and additive disturbances". Review of Economic Studies. 43 (1): 191–194. JSTOR 2296741.
  5. Turnovsky, Stephen (1974). "The stability properties of optimal economic policies". American Economic Review. 64 (1): 136–148. JSTOR 1814888.