Seasonal adjustment

Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when one wants to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. Many economic phenomena have seasonal cycles, such as agricultural production (crop yields fluctuate with the seasons) and consumer spending (personal spending rises in the lead-up to Christmas). It is necessary to adjust for this component in order to understand underlying trends in the economy, so official statistics are often adjusted to remove seasonal components. [1] Typically, seasonally adjusted data are reported for unemployment rates to reveal the underlying trends and cycles in labor markets. [2] [3]


Time series components

The investigation of many economic time series becomes problematic due to seasonal fluctuations. Time series are made up of four components:

- S_t: the seasonal component
- T_t: the trend component
- C_t: the cyclical component
- E_t: the error, or irregular, component

Seasonal and cyclic patterns differ in their periodicity: a seasonal pattern recurs with a fixed and known frequency tied to the calendar (e.g. monthly or quarterly), whereas a cyclic pattern rises and falls without a fixed period, usually over horizons longer than a year. [4] In decomposition, these components are combined either additively or multiplicatively to reconstruct the observed series.
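The additive combination of these components, and the classical moving-average method of separating them, can be sketched in pure Python. This is a minimal illustration, not any statistical office's production method; the quarterly toy series, the period of 4, and the function names are all illustrative.

```python
# Classical additive decomposition: Y_t = T_t + S_t + E_t (toy example).

def centered_ma(y, period):
    """Centered moving average; returns None where the window is incomplete."""
    half = period // 2
    out = [None] * len(y)
    for i in range(half, len(y) - half):
        window = y[i - half:i + half + 1]
        if period % 2 == 0:
            # 2 x m-MA for even periods: half-weight the window's endpoints
            out[i] = (window[0] / 2 + sum(window[1:-1]) + window[-1] / 2) / period
        else:
            out[i] = sum(window) / period
    return out

def decompose_additive(y, period):
    trend = centered_ma(y, period)
    detrended = [yi - ti if ti is not None else None for yi, ti in zip(y, trend)]
    # Seasonal component: mean of detrended values at each position in the cycle
    means = []
    for k in range(period):
        vals = [d for i, d in enumerate(detrended) if d is not None and i % period == k]
        means.append(sum(vals) / len(vals))
    centre = sum(means) / period          # centre so seasonal sums to zero
    means = [m - centre for m in means]
    seasonal = [means[i % period] for i in range(len(y))]
    remainder = [yi - ti - si if ti is not None else None
                 for yi, ti, si in zip(y, trend, seasonal)]
    return trend, seasonal, remainder

# Quarterly toy series: linear trend plus a fixed seasonal pattern
pattern = [2.0, -1.0, -3.0, 2.0]
series = [10 + i + pattern[i % 4] for i in range(16)]
trend, seasonal, remainder = decompose_additive(series, 4)
adjusted = [yi - si for yi, si in zip(series, seasonal)]  # seasonally adjusted series
```

Subtracting the estimated seasonal component from the observed series yields the seasonally adjusted series, which here recovers the underlying linear trend.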

Seasonal adjustment

Unlike the trend and cyclical components, seasonal components, in theory, occur with similar magnitude during the same time period each year. The seasonal components of a series are sometimes considered to be uninteresting and to hinder the interpretation of a series. Removing the seasonal component focuses attention on the remaining components and allows better analysis. [5]

Different statistical research groups have developed different methods of seasonal adjustment, for example X-13-ARIMA and X-12-ARIMA developed by the United States Census Bureau; TRAMO/SEATS developed by the Bank of Spain; [6] MoveReg (for weekly data) developed by the United States Bureau of Labor Statistics; [7] STAMP developed by a group led by S. J. Koopman; [8] and “Seasonal and Trend decomposition using Loess” (STL) developed by Cleveland et al. (1990). [9] While X-12/13-ARIMA can only be applied to monthly or quarterly data, STL decomposition can be used on data with any type of seasonality. Furthermore, unlike X-12-ARIMA, STL allows the user to control the degree of smoothness of the trend cycle and how much the seasonal component changes over time. X-12-ARIMA can handle both additive and multiplicative decomposition whereas STL can only be used for additive decomposition. In order to achieve a multiplicative decomposition using STL, the user can take the log of the data before decomposing, and then back-transform after the decomposition. [9]
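The log-transform workaround described above (take logs, decompose additively, back-transform) can be sketched as follows. This is an illustrative toy, not the STL algorithm itself: `seasonal_means_adjust` is a trivial stand-in for any additive decomposer, and the series and seasonal factors are made up.

```python
import math

def seasonal_means_adjust(y, period):
    """Additive adjustment by subtracting mean-centred seasonal averages."""
    means = [sum(y[k::period]) / len(y[k::period]) for k in range(period)]
    centre = sum(means) / period
    return [yi - (means[i % period] - centre) for i, yi in enumerate(y)]

factors = [1.2, 0.9, 0.8, 1.1]                        # multiplicative seasonal factors
series = [100.0 * factors[i % 4] for i in range(12)]  # flat level x seasonality

logged = [math.log(v) for v in series]           # multiplicative -> additive
adjusted_log = seasonal_means_adjust(logged, 4)  # additive adjustment on log scale
adjusted = [math.exp(v) for v in adjusted_log]   # back-transform to original scale
```

On the log scale the multiplicative seasonality becomes additive, so an additive method removes it; exponentiating back yields a multiplicatively adjusted series.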


Each group provides software supporting their methods. Some versions are also included as parts of larger products, and some are commercially available. For example, SAS includes X-12-ARIMA, while OxMetrics includes STAMP. A recent move by public organisations to harmonise seasonal adjustment practices has resulted in the development of Demetra+ by Eurostat and the National Bank of Belgium, which currently includes both X-12-ARIMA and TRAMO/SEATS. [10] R includes STL decomposition. [11] The X-12-ARIMA method can be utilized via the R package "X12". [12] EViews supports X-12, X-13, Tramo/Seats, STL and MoveReg.


One well-known example is the rate of unemployment, which is represented by a time series. This rate depends particularly on seasonal influences, which is why it is important to free the unemployment rate of its seasonal component. Such seasonal influences can be due to school graduates or dropouts looking to enter the workforce, and to regular fluctuations during holiday periods. Once the seasonal influence is removed from this time series, the unemployment rate data can be meaningfully compared across different months and predictions for the future can be made. [3]

When seasonal adjustment is not performed with monthly data, year-on-year changes are utilised in an attempt to avoid contamination with seasonality.
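The year-on-year comparison mentioned above can be sketched as follows: each observation is compared with the same period one year earlier, so a stable seasonal pattern cancels out. The growth rate, seasonal pattern, and series are illustrative.

```python
def yoy_change(y, period=12):
    """Percent change versus the same period one year earlier.

    The first `period` entries are None (no year-earlier comparison exists).
    """
    return [None] * period + [
        100.0 * (y[i] - y[i - period]) / y[i - period]
        for i in range(period, len(y))
    ]

# Monthly toy data: 5% annual growth plus a fixed monthly seasonal pattern
season = [1.0, 0.9, 1.1, 1.0, 0.95, 1.05] * 2          # 12 monthly factors
series = [100.0 * (1.05 ** (i / 12)) * season[i % 12] for i in range(36)]
changes = yoy_change(series)
```

Because the seasonal factor for month *m* appears in both the numerator and the denominator, every defined year-on-year change equals the underlying 5% annual growth.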

Indirect seasonal adjustment

When seasonality is removed from a time series directly, the series is said to be directly seasonally adjusted. If the series is instead formed as a sum or index aggregation of component time series that have each been seasonally adjusted, it is said to be indirectly seasonally adjusted. Indirect seasonal adjustment is used for large components of GDP which are made up of many industries, which may have different seasonal patterns and which are therefore analyzed and seasonally adjusted separately. Indirect seasonal adjustment also has the advantage that the aggregate series is the exact sum of the component series. [13] [14] [15] Seasonality can still appear in an indirectly adjusted series; this is sometimes called residual seasonality.
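The bookkeeping of direct versus indirect adjustment can be sketched with a toy aggregate of two components, each with its own seasonal pattern. The adjuster here is a simple seasonal-means method standing in for any production procedure; because it is linear, direct and indirect adjustment happen to coincide exactly in this example, whereas for nonlinear methods (multiplicative adjustment, X-13, etc.) the two generally differ, which is why residual seasonality can arise.

```python
def adjust(y, period):
    """Additive adjustment by subtracting mean-centred seasonal averages."""
    means = [sum(y[k::period]) / len(y[k::period]) for k in range(period)]
    centre = sum(means) / period
    return [yi - (means[i % period] - centre) for i, yi in enumerate(y)]

# Two components with different quarterly seasonal patterns (illustrative)
pat_a = [3.0, -1.0, -2.0, 0.0]
pat_b = [-1.0, 2.0, 0.0, -1.0]
comp_a = [50 + i + pat_a[i % 4] for i in range(16)]
comp_b = [30 + 2 * i + pat_b[i % 4] for i in range(16)]
aggregate = [a + b for a, b in zip(comp_a, comp_b)]

# Indirect: adjust each component, then sum; the adjusted aggregate is the
# exact sum of the adjusted components by construction.
indirect = [a + b for a, b in zip(adjust(comp_a, 4), adjust(comp_b, 4))]
# Direct: adjust the aggregate series itself.
direct = adjust(aggregate, 4)
```

Note also that this adjuster preserves the total of the series over whole years, since the centred seasonal means sum to zero across each cycle.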

Moves to standardise seasonal adjustment processes

Because seasonal adjustment practices vary across institutions, a group was created by Eurostat and the European Central Bank to promote standard processes. In 2009 a small group composed of experts from European Union statistical institutions and central banks produced the ESS Guidelines on Seasonal Adjustment, [16] which are being implemented in all the European Union statistical institutions. They are also being adopted voluntarily by other public statistical institutions outside the European Union.

Use of seasonally adjusted data in regressions

By the Frisch–Waugh–Lovell theorem it does not matter whether dummy variables for all but one of the seasons are introduced into the regression equation, or if the independent variable is first seasonally adjusted (by the same dummy variable method), and the regression then run.
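This equivalence can be checked numerically. The sketch below uses synthetic data (the seasonal effects, slope, and sample size are all illustrative): "seasonal adjustment by the dummy variable method" amounts to subtracting each season's mean, and by the Frisch–Waugh–Lovell theorem the slope on x from the full dummy regression equals the slope from regressing y on the adjusted x alone.

```python
import random

random.seed(0)
PERIOD, N, BETA = 4, 80, 1.7
season_fx = [5.0, -2.0, 1.0, -4.0]              # seasonal (dummy) effects in y
x = [random.gauss(0, 1) for _ in range(N)]
y = [BETA * x[i] + season_fx[i % PERIOD] + random.gauss(0, 0.1) for i in range(N)]

def seasonal_demean(v, period=PERIOD):
    """Dummy-variable seasonal adjustment: subtract each season's mean."""
    means = [sum(v[k::period]) / len(v[k::period]) for k in range(period)]
    return [vi - means[i % period] for i, vi in enumerate(v)]

def slope(xs, ys):
    """OLS slope of ys on xs (valid here because xs is already demeaned)."""
    return sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

x_tilde = seasonal_demean(x)
# By FWL, the slope on x in the regression with seasonal dummies equals both:
beta_full = slope(x_tilde, seasonal_demean(y))  # both variables adjusted
beta_x_only = slope(x_tilde, y)                 # only the regressor adjusted
```

The two estimates agree to floating-point precision because the adjusted regressor is orthogonal to the seasonal dummies, so the seasonal part of y contributes nothing to the slope.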

Since seasonal adjustment introduces a "non-invertible" moving average (MA) component into time series data, unit root tests (such as the Phillips–Perron test) will be biased towards non-rejection of the unit root null. [17]

Shortcomings of using seasonally adjusted data

Use of seasonally adjusted time series data can be misleading because a seasonally adjusted series contains both the trend-cycle component and the error component. As such, what appear to be "downturns" or "upturns" may actually be randomness in the data. For this reason, if the purpose is finding turning points in a series, using the trend-cycle component is recommended rather than the seasonally adjusted data. [3]


  1. "Retail spending rise boosts hopes UK can avoid double-dip recession". The Guardian . 17 February 2012. Archived from the original on 8 March 2017.
  2. "What is seasonal adjustment?". Archived from the original on 2011-12-20.
  3. Hyndman, Rob J.; Athanasopoulos, George. Forecasting: Principles and Practice, Chapter 6.1. Archived from the original on 12 May 2018.
  4. "2.1 Graphics". OTexts. Archived from the original on 2018-01-17.
  5. "MCD - Seasonal Adjustment Frequently Asked Questions". Archived from the original on 2017-01-13.
  6. Directorate, OECD Statistics. "OECD Glossary of Statistical Terms - Seasonal adjustment Definition". Archived from the original on 2014-04-26.
  7. MoveReg
  8. "STAMP". Archived from the original on 2015-05-09.
  9. "6.5 STL decomposition". OTexts. Archived from the original on 2018-05-12. Retrieved 2016-05-12.
  10. OECD, Short-Term Economic Statistics Expert Group (June 2002), Harmonising Seasonal Adjustment Methods in European Union and OECD Countries
  11. Hyndman, R.J. "6.4 X-12-ARIMA decomposition". OTexts. Archived from the original on 2018-01-17. Retrieved 2016-05-15.
  12. Kowarik, Alexander (February 20, 2015). "x12" (PDF). Archived (PDF) from the original on December 6, 2016. Retrieved 2016-08-02.
  13. Hungarian Central Statistical Office.Seasonal adjustment methods and practices, Budapest, July 2007
  14. Thomas D. Evans. Direct vs. Indirect Seasonal Adjustment for CPS National Labor Force Series, Proceedings of the Joint Statistical Meetings, 2009, Business and Economic Statistics Section
  15. Marcus Scheiblecker, 2014. "Direct Versus Indirect Approach in Seasonal Adjustment," WIFO Working Papers 460, WIFO. Abstract at IDEAS/REPEC
  17. Maddala, G. S.; Kim, In-Moo (1998). Unit Roots, Cointegration, and Structural Change . Cambridge: Cambridge University Press. pp.  364–365. ISBN   0-521-58782-4.

Further reading