In econometrics and official statistics, and particularly in banking, the Divisia monetary aggregates index is an index of the money supply constructed using Divisia index methods.
The monetary aggregates used by most central banks (notably the US Federal Reserve) are simple-sum indexes in which all monetary components are assigned the same weight:

$$M_t = \sum_{j=1}^{n} x_{jt},$$

in which $x_{jt}$ is one of the $n$ monetary components of the monetary aggregate $M_t$. The summation index implies that all monetary components contribute equally to the money total, and it views all components as dollar-for-dollar perfect substitutes. It has been argued that such an index does not weight the components in a way that properly summarizes the services of the quantities of money.
There have been many attempts at weighting monetary components within a simple-sum aggregate. An index can rigorously apply microeconomic- and aggregation-theoretic foundations in the construction of monetary aggregates. That approach to monetary aggregation was derived and advocated by William A. Barnett (1980) and has led to the construction of monetary aggregates based on Diewert's (1976) class of superlative quantity index numbers. The new aggregates are called the Divisia aggregates or Monetary Services Indexes. Salam Fayyad's 1986 PhD dissertation did early research with those aggregates using U.S. data.
This index is a discrete-time approximation with this definition:

$$\log M_t^D - \log M_{t-1}^D = \sum_{j=1}^{n} \bar{s}_{jt} \left( \log x_{jt} - \log x_{j,t-1} \right).$$

Here, the growth rate of the aggregate is the weighted average of the growth rates of the component quantities. The discrete-time Divisia weights are defined as the expenditure shares averaged over the two periods of the change,

$$\bar{s}_{jt} = \frac{1}{2}\left( s_{jt} + s_{j,t-1} \right)$$

for $j = 1, \dots, n$, where

$$s_{jt} = \frac{\pi_{jt} x_{jt}}{\sum_{k=1}^{n} \pi_{kt} x_{kt}}$$

is the expenditure share of asset $j$ during period $t$, and $\pi_{jt}$ is the user cost of asset $j$, derived by Barnett (1978),

$$\pi_{jt} = \frac{R_t - r_{jt}}{1 + R_t},$$

which is the opportunity cost of holding a dollar's worth of the $j$-th asset. In the last equation, $r_{jt}$ is the market yield on the $j$-th asset, and $R_t$ is the yield available on a benchmark asset, held only to carry wealth between different time periods.
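To make the formulas concrete, here is a minimal Python sketch (illustrative only; the two components, their yields, and the benchmark rate are hypothetical numbers, not official data) that computes one period of Divisia growth from component quantities and user-cost expenditure shares:

```python
import math

def user_cost(R, r):
    # Barnett (1978) user cost: opportunity cost of holding a dollar
    # of an asset yielding r when the benchmark asset yields R.
    return (R - r) / (1 + R)

def divisia_growth(x_prev, x_curr, r_prev, r_curr, R_prev, R_curr):
    """Discrete-time (Tornqvist-Theil) Divisia growth rate:
    log M_t - log M_{t-1} = sum_j sbar_jt * (log x_jt - log x_j,t-1)."""
    def shares(x, r, R):
        expend = [user_cost(R, ri) * xi for xi, ri in zip(x, r)]
        total = sum(expend)
        return [e / total for e in expend]
    s_prev = shares(x_prev, r_prev, R_prev)
    s_curr = shares(x_curr, r_curr, R_curr)
    return sum(0.5 * (sp + sc) * (math.log(xc) - math.log(xp))
               for sp, sc, xp, xc in zip(s_prev, s_curr, x_prev, x_curr))

# Hypothetical two-asset example: currency (zero yield) and deposits.
g = divisia_growth(x_prev=[100.0, 400.0], x_curr=[105.0, 420.0],
                   r_prev=[0.0, 0.02], r_curr=[0.0, 0.025],
                   R_prev=0.05, R_curr=0.05)
print(f"Divisia growth: {g:.4%}")
```

In this example both components grow by 5 percent, so the index reproduces that common growth rate regardless of the weights; the weights matter precisely when components grow at different rates.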
In the literature on aggregation and index number theory, the Divisia approach to monetary aggregation, $M_t^D$, is widely viewed as a viable and theoretically appropriate alternative to the simple-sum approach. See, for example, International Monetary Fund (2008), Macroeconomic Dynamics (2009), and Journal of Econometrics (2011). The simple-sum approach, $M_t$, which is still in use by some central banks, adds up imperfect substitutes, such as currency and non-negotiable certificates of deposit, without weights reflecting differences in their contributions to the economy's liquidity. A primary source of theory, applications, and data from the aggregation-theoretic approach to monetary aggregation is the Center for Financial Stability in New York City. More details regarding the Divisia approach to monetary aggregation are provided by Barnett, Fisher, and Serletis (1992), Barnett and Serletis (2000), and Serletis (2007). Divisia monetary aggregates are published by the Bank of England for the United Kingdom, by the Federal Reserve Bank of St. Louis for the United States, and by the National Bank of Poland for Poland. Divisia monetary aggregates are maintained for internal use by the European Central Bank, the Bank of Japan, the Bank of Israel, and the International Monetary Fund.
Recent empirical research has explored the potential advantages of Divisia monetary aggregates compared to the federal funds rate in monetary policy shock analysis. Keating et al. (2019) [1] develop an econometric framework to evaluate monetary policy transmission mechanisms, conducting a systematic comparison between the federal funds rate and Divisia M4 over the period 1960-2017. Their findings suggest that Divisia M4 may provide more theoretically consistent counterfactuals across both crisis and non-crisis periods, whereas federal funds rate specifications sometimes produce empirical puzzles. The authors' model incorporating Divisia M4 appears to capture certain aspects of temporal heterogeneity in policy shock effects.
In mathematics, the gamma function is the most common extension of the factorial function to complex numbers. Derived by Daniel Bernoulli, the gamma function $\Gamma(z)$ is defined for all complex numbers $z$ except the non-positive integers, and for every positive integer $n$,

$$\Gamma(n) = (n-1)!\,.$$

The gamma function can be defined via a convergent improper integral for complex numbers with positive real part:

$$\Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t} \, dt, \qquad \operatorname{Re}(z) > 0.$$
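A quick numerical illustration (a sketch using only Python's standard library; the quadrature step size and truncation point are arbitrary accuracy choices) checks both the factorial identity and the integral definition:

```python
import math

# Gamma(n) = (n-1)! for positive integers n.
for n in range(1, 6):
    print(n, math.gamma(n), math.factorial(n - 1))

# Crude midpoint-rule check of the integral definition at z = 5:
# Gamma(5) = integral_0^inf t^4 e^(-t) dt = 24.
z, dt, T = 5.0, 1e-3, 60.0
approx = sum(((k + 0.5) * dt) ** (z - 1) * math.exp(-(k + 0.5) * dt) * dt
             for k in range(int(T / dt)))
print(approx)  # close to 24
```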
In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as $\varphi(n)$ or $\phi(n)$, and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n.
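The definition translates directly into code; a brute-force sketch (fine for small n, not an efficient implementation):

```python
from math import gcd

def totient(n):
    """Count integers k in 1..n with gcd(n, k) == 1 (Euler's phi)."""
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

print([totient(n) for n in range(1, 11)])  # [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]
```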
A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described in Common forms of the Fourier series below.
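As an illustration of how the coefficient integrals work, the following sketch (assuming NumPy; the square-wave example and grid size are arbitrary choices) recovers the well-known sine coefficients 4/(πn) of a square wave by numerical integration:

```python
import numpy as np

# Fourier sine coefficients of the 2*pi-periodic square wave
# f(x) = sign(sin x), via a simple Riemann sum:
#   b_n = (1/pi) * integral_{-pi}^{pi} f(x) sin(nx) dx
N = 20000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = np.sign(np.sin(x))

for n in range(1, 6):
    b = np.sum(f * np.sin(n * x)) * dx / np.pi
    exact = 4 / (np.pi * n) if n % 2 else 0.0  # known closed form
    print(n, round(b, 4), round(exact, 4))
```

The even harmonics vanish by symmetry, and the odd ones decay like 1/n, which is why partial sums converge slowly near the jump discontinuities.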
New Keynesian economics is a school of macroeconomics that strives to provide microeconomic foundations for Keynesian economics. It developed partly as a response to criticisms of Keynesian macroeconomics by adherents of new classical macroeconomics.
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
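A minimal sketch (assuming NumPy and SciPy; the normal model and synthetic data are illustrative) comparing a numerical maximizer against the normal distribution's closed-form MLE:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)

def neg_log_likelihood(params):
    # Negative log-likelihood of N(mu, sigma^2), dropping constants;
    # sigma is parameterized on the log scale to keep it positive.
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + len(data) * log_sigma

res = minimize(neg_log_likelihood, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))  # numerical MLE for (mu, sigma)
print(data.mean(), data.std())     # closed-form MLE for the normal model
```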
Euler's constant is a mathematical constant, usually denoted by the lowercase Greek letter gamma ($\gamma$), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by log:

$$\gamma = \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} - \log n \right).$$
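The convergence is easy to observe numerically; a short sketch using only the standard library:

```python
import math

def harmonic(n):
    # n-th harmonic number H_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / k for k in range(1, n + 1))

for n in (10, 1_000, 100_000):
    print(n, harmonic(n) - math.log(n))
# The differences approach gamma = 0.5772156649...
```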
The Phillips curve is an economic model, named after Bill Phillips, that correlates reduced unemployment with increasing wages in an economy. While Phillips did not directly link employment and inflation, this was a trivial deduction from his statistical findings. Paul Samuelson and Robert Solow made the connection explicit and subsequently Milton Friedman and Edmund Phelps put the theoretical structure in place.
In mathematics, the prime-counting function is the function counting the number of prime numbers less than or equal to some real number x. It is denoted by π(x) (unrelated to the number π).
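A straightforward way to compute π(x) for modest x is a sieve of Eratosthenes; a minimal sketch:

```python
def prime_pi(x):
    """pi(x): number of primes <= x, via a sieve of Eratosthenes."""
    n = int(x)
    if n < 2:
        return 0
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Mark all multiples of p starting from p*p as composite.
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sum(sieve)

print([prime_pi(x) for x in (10, 100, 1000)])  # [4, 25, 168]
```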
In mathematics, the Hurwitz zeta function is one of the many zeta functions. It is formally defined for complex variables s with Re(s) > 1 and a ≠ 0, −1, −2, … by

$$\zeta(s, a) = \sum_{n=0}^{\infty} \frac{1}{(n+a)^{s}}.$$
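For real s > 1 the series can be summed directly; a sketch with simple truncation (the number of terms is an arbitrary accuracy choice, and the truncation error is roughly 1/terms for s = 2):

```python
import math

def hurwitz_zeta(s, a, terms=100_000):
    """Partial sum of sum_{n>=0} 1/(n+a)^s; converges for s > 1."""
    return sum(1.0 / (n + a) ** s for n in range(terms))

# At a = 1 the Hurwitz zeta function reduces to the Riemann zeta function:
print(hurwitz_zeta(2.0, 1.0))   # approximately zeta(2)
print(math.pi ** 2 / 6)          # zeta(2) = pi^2 / 6
```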
In finance and economics, systematic risk is vulnerability to events which affect aggregate outcomes such as broad market returns, total economy-wide resource holdings, or aggregate income. In many contexts, events like earthquakes, epidemics and major weather catastrophes pose aggregate risks that affect not only the distribution but also the total amount of resources. That is why it is also known as contingent risk, unplanned risk or risk events. If every possible outcome of a stochastic economic process is characterized by the same aggregate result, the process then has no aggregate risk.
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.
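A small experiment (assuming NumPy; the matrix size, symmetrization, and bin count are arbitrary choices) illustrates the flavor of such results by comparing the eigenvalue histogram of a large symmetric Gaussian matrix with Wigner's semicircle density:

```python
import numpy as np

# Sample a large symmetric (GOE-like) random matrix; its eigenvalue
# density approaches Wigner's semicircle law as n grows.
rng = np.random.default_rng(42)
n = 1000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)        # symmetrize and scale
eigs = np.linalg.eigvalsh(H)

hist, edges = np.histogram(eigs, bins=20, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.maximum(0, 4 - centers ** 2)) / (2 * np.pi)
for c, h, s in zip(centers[::4], hist[::4], semicircle[::4]):
    print(f"{c:+.1f}  empirical {h:.3f}  semicircle {s:.3f}")
```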
William Arnold Barnett is an American economist, whose current work is in the fields of chaos, bifurcation, and nonlinear dynamics in socioeconomic contexts, econometric modeling of consumption and production, and the study of the aggregation problem and the challenges of measurement in economics.
Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when one wants to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. Many economic phenomena have seasonal cycles, such as agricultural production and consumer consumption. It is necessary to adjust for this component in order to understand underlying trends in the economy, so official statistics are often adjusted to remove seasonal components. Typically, seasonally adjusted data are reported for unemployment rates to reveal the underlying trends and cycles in labor markets.
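As a toy illustration of the idea (a classical additive decomposition on synthetic monthly data; official statistics use far more elaborate procedures such as X-13ARIMA-SEATS):

```python
import numpy as np

# Toy additive seasonal adjustment on synthetic monthly data: estimate the
# seasonal component as the average deviation from a centered 12-month
# moving average, then subtract it from the series.
rng = np.random.default_rng(1)
t = np.arange(120)                          # ten years of monthly data
series = 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, 120)

trend = np.convolve(series, np.ones(12) / 12, mode="same")
detrended = (series - trend)[6:-6]          # drop edge-biased values
months = t[6:-6] % 12
seasonal = np.array([detrended[months == m].mean() for m in range(12)])
seasonal -= seasonal.mean()                 # factors sum to (about) zero
adjusted = series - seasonal[t % 12]

print(np.round(seasonal, 2))                # roughly tracks 2*sin(2*pi*m/12)
```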
In monetary economics, the demand for money is the desired holding of financial assets in the form of money: that is, cash or bank deposits rather than investments. It can refer to the demand for money narrowly defined as M1, or for money in the broader sense of M2 or M3.
A Divisia index is a theoretical construct to create index number series for continuous-time data on prices and quantities of goods exchanged. The name comes from François Divisia who first proposed and formally analyzed the indexes in 1926, and discussed them in related 1925 and 1928 works.
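In continuous time the index is defined through its growth rate; a standard statement, consistent with the discrete-time approximation given earlier in this article, is

$$\frac{d \log D_t}{dt} = \sum_{j=1}^{n} s_{jt} \, \frac{d \log x_{jt}}{dt},$$

where $x_{jt}$ are the component quantities and $s_{jt}$ their expenditure shares at time $t$.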
In financial econometrics, the Markov-switching multifractal (MSM) is a model of asset returns developed by Laurent E. Calvet and Adlai J. Fisher that incorporates stochastic volatility components of heterogeneous durations. MSM captures the outliers, long-memory-like volatility persistence and power variation of financial returns. In currency and equity series, MSM compares favorably with standard volatility models such as GARCH(1,1) and FIGARCH both in- and out-of-sample. MSM is used by practitioners in the financial industry for different types of forecasts.
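A minimal simulator for the binomial MSM (a sketch under assumed parameters; the values of m0, b, the highest-frequency switching probability, and sigma are hypothetical, and this is not Calvet and Fisher's code):

```python
import numpy as np

# Binomial MSM: volatility is sigma times the square root of a product of
# kbar multipliers; each multiplier switches to a fresh draw from
# {m0, 2 - m0} with a frequency-specific probability.
rng = np.random.default_rng(7)
kbar, m0, b, gamma_kbar, sigma, T = 5, 1.4, 3.0, 0.5, 0.01, 1000

# Switching probabilities for frequencies k = 1..kbar (geometric spacing).
gammas = 1 - (1 - gamma_kbar) ** (b ** (np.arange(1, kbar + 1) - kbar))
M = rng.choice([m0, 2 - m0], size=kbar)   # initial multipliers, mean 1

returns = np.empty(T)
for t in range(T):
    switch = rng.random(kbar) < gammas
    M[switch] = rng.choice([m0, 2 - m0], size=switch.sum())
    vol = sigma * np.sqrt(M.prod())
    returns[t] = vol * rng.normal()

print(returns.std(), np.abs(returns).mean())
```

Because the low-frequency multipliers switch rarely, the simulated series exhibits the long swings in volatility that the model is designed to capture.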
Stochastic portfolio theory (SPT) is a mathematical theory for analyzing stock market structure and portfolio behavior introduced by E. Robert Fernholz in 2002. It is descriptive as opposed to normative, and is consistent with the observed behavior of actual markets. Normative assumptions, which serve as a basis for earlier theories like modern portfolio theory (MPT) and the capital asset pricing model (CAPM), are absent from SPT.
The Barnett critique, named for the work of William A. Barnett in monetary economics, argues that internal inconsistency between the aggregation theory used to produce monetary aggregates and the economic theory used to produce the models within which the aggregates are used is responsible for the appearance of unstable demand and supply for money. The Barnett critique has produced a long and growing literature on monetary aggregation and index number theory and the use of the resulting aggregates in econometric modeling and monetary policy.
The sparse Fourier transform (SFT) is a kind of discrete Fourier transform (DFT) for handling big data signals. Specifically, it is used in GPS synchronization, spectrum sensing and analog-to-digital converters.
The price puzzle is a phenomenon in monetary economics observed within structural vector autoregression (SVAR) models. It refers to the counterintuitive result where a contractionary monetary policy shock—typically modeled as an increase in short-term interest rates—is followed by an increase, rather than a decrease, in the price level. This anomaly challenges conventional macroeconomic theories that predict a decline in prices as monetary tightening reduces aggregate demand.