# Rational expectations

In economics, "rational expectations" are model-consistent expectations, in that agents inside the model are assumed to "know the model" and on average take the model's predictions as valid. [1] Rational expectations ensure internal consistency in models involving uncertainty. To obtain consistency within a model, the predictions of future values of economically relevant variables from the model are assumed to be the same as that of the decision-makers in the model, given their information set, the nature of the random processes involved, and model structure. The rational expectations assumption is used especially in many contemporary macroeconomic models.

Since most macroeconomic models today study decisions under uncertainty and over many periods, the expectations of individuals, firms, and government institutions about future economic conditions are an essential part of the model. To assume rational expectations is to assume that agents' expectations may be wrong, but are correct on average over time. In other words, although the future is not fully predictable, agents' expectations are assumed not to be systematically biased and collectively use all relevant information in forming expectations of economic variables. This way of modeling expectations was originally proposed by John F. Muth (1961) [2] and later became influential when it was used by Robert Lucas Jr. in macroeconomics.

Deirdre McCloskey emphasizes that "rational expectations" is an expression of intellectual modesty: [3]

Muth's notion was that the professors [of economics], even if correct in their model of man, could do no better in predicting than could the hog farmer or steelmaker or insurance company. The notion is one of intellectual modesty. The common sense is "rationality": therefore Muth called the argument "rational expectations".

Hence, it is important to distinguish the rational-expectations assumption from assumptions of individual rationality: the former does not imply the latter. Rational expectations is an assumption of aggregate consistency in dynamic models. In contrast, rational choice theory studies individual decision making and is used extensively in, among other fields, game theory and contract theory. [4] In fact, Muth cited survey data exhibiting "considerable cross-sectional differences of opinion" and was quite explicit in stating that his rational-expectations hypothesis "does not assert... that predictions of entrepreneurs are perfect or that their expectations are all the same". In Muth's version of rational expectations, each individual holds beliefs that are model-inconsistent, although the distribution of these diverse beliefs is unbiased relative to the data generated by the actions resulting from those expectations.

## Theory

Rational expectations theory defines such expectations as the best guess of the future (the optimal forecast) given all available information. Thus, it is assumed that forecast outcomes do not differ systematically from the market equilibrium results: rational expectations deviate from equilibrium outcomes neither systematically nor predictably. That is, the theory assumes that people make no systematic errors when predicting the future, so deviations from perfect foresight are purely random. In an economic model, this is typically captured by assuming that agents' expectation of a variable equals the expected value predicted by the model itself.

For example, suppose that P is the equilibrium price in a simple market, determined by supply and demand. The theory of rational expectations says that the actual price will only deviate from the expectation if there is an 'information shock' caused by information unforeseeable at the time expectations were formed. In other words, ex ante the price is anticipated to equal its rational expectation:

${\displaystyle P=P^{*}+\epsilon }$
${\displaystyle E[P]=P^{*}}$

where ${\displaystyle P^{*}}$ is the rational expectation and ${\displaystyle \epsilon }$ is the random error term, which has an expected value of zero, and is independent of ${\displaystyle P^{*}}$.
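A quick numerical sketch of this setup (the values ${\displaystyle P^{*}=10}$ and the shock standard deviation 0.5 are arbitrary illustrative choices): simulated prices scatter around the rational expectation, and the average forecast error is close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

p_star = 10.0                        # rational-expectations (equilibrium) price
eps = rng.normal(0.0, 0.5, 100_000)  # unforecastable information shocks
p = p_star + eps                     # realized prices: P = P* + eps

forecast_error = p - p_star          # deviation of outcome from expectation
print(abs(forecast_error.mean()) < 0.01)  # no systematic bias on average
```

No single realization equals the expectation, but the errors average out: exactly the sense in which rational expectations are "correct on average over time".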

## Mathematical derivation

If rational expectations are applied to Phillips curve analysis, the distinction between the long run and the short run is completely negated: there is no exploitable Phillips curve, i.e. no trade-off between the inflation rate and the unemployment rate that policy can utilize.

The mathematical derivation is as follows:

A rational expectation coincides with the objective mathematical expectation, so the actual value differs from the expectation only by a random error:

${\displaystyle {\dot {P}}_{t}=E{\dot {P}}_{t}+\varepsilon _{t}}$

Mathematical derivation (1)

Assuming that the actual process is known, the rate of inflation depends on previous monetary changes and changes in short-term variables such as X (for example, oil prices):

(1) ${\displaystyle {\dot {P}}_{t}=q{\dot {M}}_{t-1}+z{\dot {X}}_{t-1}+\varepsilon _{t}}$

(2) ${\displaystyle E{\dot {P}}_{t}=q{\dot {M}}_{t-1}+z{\dot {X}}_{t-1}}$

(3) ${\displaystyle {\dot {P}}_{t}=\alpha -\beta u_{t}+\gamma E_{t-1}({\dot {P}}_{t})}$ , ${\displaystyle \gamma =1}$

(4) ${\displaystyle \alpha -\beta u_{t}+q{\dot {M}}_{t-1}+z{\dot {X}}_{t-1}=q{\dot {M}}_{t-1}+z{\dot {X}}_{t-1}+\varepsilon _{t}}$

(5) ${\displaystyle u_{t}={\frac {\alpha -\varepsilon _{t}}{\beta }}}$

Thus, even in the short run, there is no substitute relationship between inflation and unemployment. Random shocks, which are completely unpredictable, are the only reason why the unemployment rate deviates from the natural rate.
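The algebra in (1)–(5) can be checked numerically. In this sketch the parameter values and the distributions of money growth, the short-term variable, and the shock are all arbitrary illustrative choices; whatever values are used, unemployment depends only on the unpredictable shock.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, q, z = 2.0, 0.5, 0.8, 0.3   # illustrative parameters
n = 10_000
m_lag = rng.normal(3.0, 1.0, n)          # lagged money growth
x_lag = rng.normal(1.0, 0.5, n)          # lagged short-term variable (e.g. oil prices)
eps = rng.normal(0.0, 0.2, n)            # unpredictable shock

p_dot = q * m_lag + z * x_lag + eps      # (1) actual inflation
e_p_dot = q * m_lag + z * x_lag          # (2) rational expectation
# (3) with gamma = 1: p_dot = alpha - beta*u + e_p_dot, solved for u:
u = (alpha - (p_dot - e_p_dot)) / beta
# (5) predicts u_t = (alpha - eps_t)/beta: only the shock moves unemployment
print(np.allclose(u, (alpha - eps) / beta))       # True
print(abs(u.mean() - alpha / beta) < 0.05)        # mean near natural rate alpha/beta
```

Lagged money growth and the lagged short-term variable cancel out of (4) entirely, which is why neither appears in the unemployment expression (5).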

Mathematical derivation (2)

Even if the actual rate of inflation depends on current monetary changes, the public can form rational expectations as long as it knows how monetary policy is decided:

(1) ${\displaystyle {\dot {P}}_{t}=q{\dot {M}}_{t}+z{\dot {X}}_{t-1}+\varepsilon _{t}}$

(2) ${\displaystyle {\dot {M}}_{t}=g{\dot {M}}_{t-1}+\mu _{t}}$

(3) ${\displaystyle {\dot {P}}_{t}=qg{\dot {M}}_{t-1}+z{\dot {X}}_{t-1}+q\mu _{t}+\varepsilon _{t}}$

(4) ${\displaystyle E{\dot {P}}_{t}=qg{\dot {M}}_{t-1}+z{\dot {X}}_{t-1}}$

(5) ${\displaystyle u_{t}={\frac {\alpha -q\mu _{t}-\varepsilon _{t}}{\beta }}}$

The conclusion is essentially the same: random shocks that are completely unpredictable are the only thing that can cause the unemployment rate to deviate from the natural rate.
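Derivation (2) can be verified symbolically; a sketch using sympy (the symbol names simply stand in for the variables above): substituting the policy rule (2) into the inflation equation (1) and solving the Phillips curve for ${\displaystyle u_{t}}$ reproduces (5).

```python
import sympy as sp

alpha, beta, q, z, g = sp.symbols('alpha beta q z g', positive=True)
M_lag, X_lag, mu, eps, u = sp.symbols('Mdot_lag Xdot_lag mu varepsilon u')

M = g * M_lag + mu                       # (2) monetary policy rule
p_dot = q * M + z * X_lag + eps          # (1) actual inflation, current money
e_p_dot = q * g * M_lag + z * X_lag      # (4) rational expectation (mu, eps unknown)
# Phillips curve with gamma = 1: p_dot = alpha - beta*u + e_p_dot; solve for u
sol = sp.solve(sp.Eq(p_dot, alpha - beta * u + e_p_dot), u)[0]
print(sp.simplify(sol - (alpha - q * mu - eps) / beta))  # 0: matches (5)
```

The only terms that survive are the policy surprise ${\displaystyle \mu _{t}}$ (scaled by ${\displaystyle q}$) and the shock ${\displaystyle \varepsilon _{t}}$: the systematic part of policy, ${\displaystyle g{\dot {M}}_{t-1}}$, is fully anticipated and has no real effect.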

## Implications

Rational expectations theories were developed in response to perceived flaws in theories based on adaptive expectations. Under adaptive expectations, expectations of the future value of an economic variable are based on past values. For example, people would be assumed to predict inflation by looking at inflation last year and in previous years. Under adaptive expectations, if the economy suffers from constantly rising inflation rates (perhaps due to government policies), people would be assumed to always underestimate inflation. Many economists have regarded this as unrealistic, believing that rational individuals would sooner or later realize the trend and take it into account in forming their expectations.
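The systematic bias of adaptive expectations under trending inflation is easy to exhibit. A minimal sketch, assuming a steadily rising inflation path and the simplest adaptive rule ("expect last period's inflation"):

```python
import numpy as np

# Inflation climbs by 0.5 points every period (hypothetical path).
inflation = np.arange(2.0, 12.0, 0.5)        # 2.0%, 2.5%, ..., 11.5%
adaptive_forecast = inflation[:-1]           # forecast = previous period's value
errors = inflation[1:] - adaptive_forecast   # realized minus expected
print(errors.mean())                         # 0.5: always half a point too low
print((errors > 0).all())                    # True: error never changes sign
```

An adaptive forecaster here underestimates inflation every single period by the same amount; a rational forecaster would notice the trend and incorporate it.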

The rational expectations hypothesis has been used to support some strong conclusions about economic policymaking. An example is the policy ineffectiveness proposition developed by Thomas Sargent and Neil Wallace. If the Federal Reserve attempts to lower unemployment through expansionary monetary policy, economic agents will anticipate the effects of the change of policy and raise their expectations of future inflation accordingly. This in turn will counteract the expansionary effect of the increased money supply. All that the government can do is raise the inflation rate, not employment. This is a distinctly New Classical outcome. During the 1970s rational expectations appeared to have made previous macroeconomic theory largely obsolete, a development that culminated in the Lucas critique. Since then, rational expectations theory has been widely adopted and is often considered an innocuous assumption in macroeconomics. [5]

If agents do not (or cannot) form rational expectations, or if prices are not completely flexible, then discretionary and even fully anticipated economic policy actions can trigger real changes. [6]

## Criticism

Rational expectations are expected values in the mathematical sense. In order to be able to compute expected values, individuals must know the true economic model, its parameters, and the nature of the stochastic processes that govern its evolution. If these extreme assumptions are violated, individuals simply cannot form rational expectations. [7]

### Testing empirically for rational expectations

Suppose we have data on inflationary expectations, such as that from the Michigan survey. [8] We can test whether these expectations are rational by regressing the actual realized inflation rate ${\displaystyle I}$ on its prior expectation, ${\displaystyle X}$, formed at some specified lead time ${\displaystyle k}$:

${\displaystyle I_{t}=a+bX_{t-k}+\varepsilon _{t},}$

where a and b are parameters to be estimated and ${\displaystyle \varepsilon }$ is the error term. We can test the rationality of expectations by testing the joint null hypothesis that

${\displaystyle H_{0}:\quad a=0\quad {\text{and}}\quad b=1;}$

failure to reject this null hypothesis is evidence in favor of rational expectations. A stronger test can be conducted if the one above has failed to reject the null: the residuals of the above regression can themselves be regressed on other variables whose values were available to agents when they formed their expectations. If any of these variables has a significant effect on the residuals, agents failed to take them sufficiently into account, leading to needlessly high variance of the forecast errors and thus to more uncertainty than necessary in the predictions agents use for economic choices such as money demand, consumption, and fixed investment.
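A sketch of this regression test on simulated data (the data-generating process is hypothetical and deliberately constructed to be rational, i.e. ${\displaystyle a=0}$, ${\displaystyle b=1}$, so the OLS estimates should land near those values):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
expected = rng.normal(3.0, 1.0, n)            # surveyed expectations X_{t-k}
actual = expected + rng.normal(0.0, 0.5, n)   # realized inflation I_t, unbiased here

# OLS of I_t = a + b * X_{t-k} + e_t
X = np.column_stack([np.ones(n), expected])
a_hat, b_hat = np.linalg.lstsq(X, actual, rcond=None)[0]
print(abs(a_hat) < 0.1)        # intercept near 0
print(abs(b_hat - 1.0) < 0.05)  # slope near 1
```

With real survey data, the joint hypothesis ${\displaystyle a=0,\ b=1}$ would be tested formally (e.g. with an F-test) rather than by eyeballing the point estimates.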

## Notes

1. Snowdon, B., Vane, H., & Wynarczyk, P. (1994). A Modern Guide to Macroeconomics (pp. 236–79). Cambridge: Edward Elgar Publishing Limited.
2. Muth, John F. (1961). "Rational Expectations and the Theory of Price Movements". Econometrica. 29 (3): 315–335. doi:10.2307/1909635. JSTOR 1909635. Reprinted in Hoover, Kevin D., ed. (1992). The New Classical Macroeconomics, Volume 1. International Library of Critical Writings in Economics, vol. 19. Aldershot, Hants, England: Elgar. pp. 3–23. ISBN 978-1-85278-572-7.
3. McCloskey, Deirdre N. (1998). The Rhetoric of Economics (2nd ed.). University of Wisconsin Press. p. 53. ISBN 978-0-299-15814-9.
4. Levine, David K. (2012-01-26). "Why Economists Are Right: Rational Expectations and the Uncertainty Principle in Economics". Huffington Post. Retrieved 2017-07-18.
5. Mankiw, Greg (2006). "The Macroeconomist as Scientist and Engineer". Journal of Economic Perspectives. 20 (4): 29–46. doi:10.1257/jep.20.4.29.
6. Galbács, Peter (2015). The Theory of New Classical Macroeconomics: A Positive Critique. Contributions to Economics. Heidelberg/New York/Dordrecht/London: Springer. doi:10.1007/978-3-319-17578-2. ISBN 978-3-319-17578-2.
7. Evans, G. W. and G. Ramey (2006). "Adaptive Expectations, Underparameterization and the Lucas Critique". Journal of Monetary Economics. 53: 249–264.
8. "University of Michigan: Inflation Expectation". Economic Research, Federal Reserve Bank of St. Louis. January 1978.

