Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when one wants to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. Many economic phenomena have seasonal cycles, such as agricultural production (crop yields fluctuate with the seasons) and consumer spending (personal spending rises in the lead-up to Christmas). It is necessary to adjust for this component in order to understand underlying trends in the economy, so official statistics are often adjusted to remove seasonal components.[1] Typically, seasonally adjusted data is reported for unemployment rates to reveal the underlying trends and cycles in labor markets.[2][3]
The investigation of many economic time series becomes problematic due to seasonal fluctuations. Time series are made up of four components:

- S_t: the seasonal component
- T_t: the trend component
- C_t: the cyclical component
- I_t: the irregular (error) component

Seasonal and cyclic patterns differ in that a seasonal pattern has a fixed, known period, whereas a cyclic pattern has a variable and unknown length, is usually longer on average than a seasonal pattern, and has a more variable magnitude. The components are related to the original series Y_t either additively, Y_t = S_t + T_t + C_t + I_t, or multiplicatively, Y_t = S_t × T_t × C_t × I_t; taking logarithms converts a multiplicative relationship into an additive one.
Unlike the trend and cyclical components, the seasonal component, in theory, recurs with similar magnitude during the same time period each year. The seasonal component of a series is sometimes considered uninteresting in itself and a hindrance to the interpretation of the series. Removing it directs focus on the other components and allows better analysis.[5]
Different statistical research groups have developed different methods of seasonal adjustment, for example X-13ARIMA-SEATS and X-12-ARIMA developed by the United States Census Bureau; TRAMO/SEATS developed by the Bank of Spain;[6] MoveReg (for weekly data) developed by the United States Bureau of Labor Statistics;[7] STAMP developed by a group led by S. J. Koopman;[8] and "Seasonal and Trend decomposition using Loess" (STL) developed by Cleveland et al. (1990).[9] While X-12/13-ARIMA can only be applied to monthly or quarterly data, STL decomposition can be used on data with any type of seasonality. Furthermore, unlike X-12-ARIMA, STL allows the user to control the degree of smoothness of the trend cycle and how much the seasonal component changes over time. X-12-ARIMA can handle both additive and multiplicative decomposition, whereas STL can only be used for additive decomposition. To achieve a multiplicative decomposition with STL, the user can take the logarithm of the data before decomposing and back-transform after the decomposition.[9]
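As an illustration, a minimal sketch of an STL decomposition using the statsmodels Python library; the simulated series, its length, and the period are assumptions for demonstration only:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Hypothetical monthly series with a rising trend and annual seasonality.
idx = pd.date_range("2010-01", periods=120, freq="MS")
series = pd.Series(
    np.linspace(50, 80, 120)
    + 10 * np.sin(2 * np.pi * np.arange(120) / 12)
    + np.random.default_rng(0).normal(0, 1, 120),
    index=idx,
)

# Additive STL decomposition; period=12 for monthly data.
res = STL(series, period=12).fit()
seasonally_adjusted = series - res.seasonal

# Multiplicative decomposition via a log transform, as described above:
# decompose the logged data, then back-transform with exp.
log_res = STL(np.log(series), period=12).fit()
adjusted_mult = np.exp(np.log(series) - log_res.seasonal)
```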
Each group provides software supporting its methods. Some versions are also included as parts of larger products, and some are commercially available. For example, SAS includes X-12-ARIMA, while OxMetrics includes STAMP. A recent move by public organisations to harmonise seasonal adjustment practices has resulted in the development of Demetra+ by Eurostat and the National Bank of Belgium, which currently includes both X-12-ARIMA and TRAMO/SEATS.[10] R includes STL decomposition.[11] The X-12-ARIMA method can be used via the R package "X12".[12] EViews supports X-12, X-13, TRAMO/SEATS, STL and MoveReg.
One well-known example is the unemployment rate, which is represented by a time series. This rate is particularly subject to seasonal influences, which is why it is important to free the unemployment rate of its seasonal component. Such seasonal influences can be due to school graduates or dropouts looking to enter the workforce and to regular fluctuations during holiday periods. Once the seasonal influence is removed from this time series, the unemployment rate data can be meaningfully compared across different months and predictions for the future can be made.[3]
When seasonal adjustment is not performed with monthly data, year-on-year changes (comparing each month with the same month one year earlier) are used instead, in an attempt to avoid contamination with seasonality.
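For monthly data held in a pandas Series (an assumption for illustration), a year-on-year comparison is a one-line operation:

```python
# Percentage change relative to the same month one year (12 periods) earlier.
yoy = series.pct_change(periods=12)
```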
When time series data has its seasonality removed, it is said to be directly seasonally adjusted. If it is made up of a sum or index aggregation of time series that have themselves been seasonally adjusted, it is said to have been indirectly seasonally adjusted. Indirect seasonal adjustment is used for large components of GDP that are made up of many industries, which may have different seasonal patterns and are therefore analyzed and seasonally adjusted separately. Indirect seasonal adjustment also has the advantage that the aggregate series is the exact sum of the component series.[13][14][15] Seasonality can still appear in an indirectly adjusted series; this is sometimes called residual seasonality.
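A minimal sketch of the distinction, assuming a hypothetical dict `industries` of monthly pandas Series and using STL as a stand-in for the official adjustment methods named above:

```python
from statsmodels.tsa.seasonal import STL

def adjust(series, period=12):
    """Remove the STL-estimated seasonal component from one series."""
    return series - STL(series, period=period).fit().seasonal

# Indirect adjustment: each industry is adjusted separately, then summed,
# so the aggregate is the exact sum of the adjusted component series.
indirect = sum(adjust(s) for s in industries.values())

# Direct adjustment: the aggregate itself is adjusted in one pass.
direct = adjust(sum(industries.values()))
```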
Due to the various seasonal adjustment practices by different institutions, a group was created by Eurostat and the European Central Bank to promote standard processes. In 2009 a small group composed of experts from European Union statistical institutions and central banks produced the ESS Guidelines on Seasonal Adjustment, [16] which is being implemented in all the European Union statistical institutions. It is also being adopted voluntarily by other public statistical institutions outside the European Union.
By the Frisch–Waugh–Lovell theorem, it does not matter whether dummy variables for all but one of the seasons are introduced into the regression equation or whether the independent variable is first seasonally adjusted (by the same dummy variable method) and the regression then run: the estimated coefficient of interest is the same.
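A minimal numerical check of this equivalence in NumPy; all data here are simulated purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 120, 12
month = np.arange(n) % s

# Simulated data: x and y both carry a seasonal pattern.
x = np.sin(2 * np.pi * month / s) + rng.normal(size=n)
y = 2.0 * x + np.cos(2 * np.pi * month / s) + rng.normal(size=n)

# Intercept plus dummies for all but one season.
D = np.column_stack([np.ones(n)] + [(month == k).astype(float) for k in range(1, s)])

# (1) One-step regression: y on x and the seasonal dummies.
beta_full = np.linalg.lstsq(np.column_stack([x, D]), y, rcond=None)[0][0]

# (2) Two-step: seasonally adjust x with the same dummies (take the
# residual from regressing x on D), then regress y on the adjusted x.
x_adj = x - D @ np.linalg.lstsq(D, x, rcond=None)[0]
beta_two_step = np.linalg.lstsq(x_adj[:, None], y, rcond=None)[0][0]

print(beta_full, beta_two_step)  # equal up to floating-point error
```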
Since seasonal adjustment introduces a "non-revertible" moving average (MA) component into time series data, unit root tests (such as the Phillips–Perron test) will be biased towards non-rejection of the unit root null. [17]
Use of seasonally adjusted time series data can be misleading because a seasonally adjusted series contains both the trend-cycle component and the error component. As such, what appear to be "downturns" or "upturns" may actually be randomness in the data. For this reason, if the purpose is finding turning points in a series, using the trend-cycle component is recommended rather than the seasonally adjusted data. [3]
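In terms of an STL decomposition (a sketch; `series` is assumed to be a monthly pandas Series, and the component names follow statsmodels conventions):

```python
from statsmodels.tsa.seasonal import STL

res = STL(series, period=12).fit()
seasonally_adjusted = res.trend + res.resid  # still contains the error component
trend_cycle = res.trend                      # smoother; better for locating turning points
```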
Forecasting is the process of making predictions based on past and present data. Later these can be compared (resolved) against what happens. For example, a company might estimate its revenue in the next year, then compare it against the actual results, creating a variance analysis. Prediction is a similar, but more general, term. Forecasting might refer to specific formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods or the process of prediction and resolution itself. Usage can vary between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.
The Hodrick–Prescott filter is a mathematical tool used in macroeconomics, especially in real business cycle theory, to remove the cyclical component of a time series from raw data. It is used to obtain a smoothed-curve representation of a time series, one that is more sensitive to long-term than to short-term fluctuations. The sensitivity of the trend to short-term fluctuations is adjusted by modifying a multiplier λ.
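A sketch using the hpfilter function from statsmodels; λ = 1600 is the conventional value for quarterly data, and `series` is an assumed quarterly pandas Series:

```python
from statsmodels.tsa.filters.hp_filter import hpfilter

# Returns the cyclical and trend components; lamb controls how
# sensitive the estimated trend is to short-term fluctuations.
cycle, trend = hpfilter(series, lamb=1600)
```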
X-13ARIMA-SEATS, successor to X-12-ARIMA and X-11, is a set of statistical methods for seasonal adjustment and other descriptive analysis of time series data that are implemented in the U.S. Census Bureau's software package. These methods are or have been used by Statistics Canada, Australian Bureau of Statistics, and the statistical offices of many other countries.
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. Both of these models are fitted to time series data, either to better understand the data or to forecast future points in the series. ARIMA models are applied in some cases where data show evidence of non-stationarity in the mean, in which case an initial differencing step can be applied one or more times to eliminate the non-stationarity of the mean function. When a time series shows seasonality, seasonal differencing can be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem, is theoretically sufficient to describe a regular wide-sense stationary time series, a non-stationary series is first made stationary, e.g. by differencing, before an ARMA model can be used. Note that if the time series contains a predictable sub-process, the predictable component is treated as a non-zero-mean, periodic component in the ARIMA framework, so that it is eliminated by the seasonal differencing.
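A sketch of both differencing steps and a seasonal ARIMA fit with statsmodels; the model orders here are illustrative assumptions, not recommendations, and `series` is an assumed monthly pandas Series:

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# A first difference removes a (stochastic) trend in the mean;
# a lag-12 seasonal difference removes monthly seasonality.
d1 = series.diff(1).dropna()
d12 = series.diff(12).dropna()

# Seasonal ARIMA: (p, d, q) non-seasonal and (P, D, Q, s) seasonal orders.
results = SARIMAX(series, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()
forecast = results.forecast(steps=12)
```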
Exponential smoothing or exponential moving average (EMA) is a rule-of-thumb technique for smoothing time series data using the exponential window function. Whereas in the simple moving average the past observations are weighted equally, exponential functions are used to assign exponentially decreasing weights over time. It is an easily learned and easily applied procedure for making some determination based on prior assumptions by the user, such as seasonality. Exponential smoothing is often used for the analysis of time-series data.
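The basic recursion is s_t = α·x_t + (1 − α)·s_{t−1}, with smoothing factor 0 < α < 1. A minimal sketch in Python (the sample values and α are arbitrary):

```python
def exponential_smoothing(x, alpha):
    """Simple exponential smoothing: weights on past observations decay by (1 - alpha)."""
    s = [x[0]]  # initialize with the first observation
    for t in range(1, len(x)):
        s.append(alpha * x[t] + (1 - alpha) * s[-1])
    return s

smoothed = exponential_smoothing([3.0, 5.0, 9.0, 20.0], alpha=0.3)
```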
The Berlin procedure (BV) is a mathematical procedure for time series decomposition and seasonal adjustment of monthly and quarterly economic time series. The mathematical foundations of the procedure were developed in the 1960s at the Technical University of Berlin and the German Institute for Economic Research (DIW). The most important user of the procedure is the Federal Statistical Office of Germany.
The decomposition of time series is a statistical task that deconstructs a time series into several components, each representing one of the underlying categories of patterns. There are two principal types of decomposition: one based on rates of change (additive or multiplicative) and one based on predictability (deterministic versus stochastic components).
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory processes or 1/f noise.
In time series data, seasonality is the presence of variations that occur at specific regular intervals of less than a year, such as weekly, monthly, or quarterly. Seasonality may be caused by various factors, such as weather, vacations, and holidays, and consists of periodic, repetitive, and generally regular and predictable patterns in the levels of a time series.
Agustín Maravall Herrero is a Spanish economist. He is known for his contributions to the analysis of statistics and econometrics, particularly in seasonal adjustment and the estimation of signals in economic time series. He created a methodology and several computer programs for such analysis that are used throughout the world by analysts, researchers, and data producers. Maravall retired in December 2014 from the Bank of Spain.