In time series data, seasonality refers to variations that occur at specific regular intervals of less than a year, such as weekly, monthly, or quarterly. Seasonality may be caused by various factors, such as weather, vacations, and holidays, [1] and consists of periodic, repetitive, and generally regular and predictable patterns in the levels [2] of a time series.
Seasonal fluctuations in a time series can be contrasted with cyclical patterns. The latter occur when the data exhibit rises and falls that are not of a fixed period. Such non-seasonal fluctuations are usually due to economic conditions and are often related to the "business cycle"; their period usually extends beyond a single year, typically lasting at least two years. [3]
Organisations facing seasonal variations, such as ice-cream vendors, are often interested in knowing their performance relative to the normal seasonal variation. Seasonal variations in the labour market, for example, can be attributed to school leavers entering the job market upon completing their schooling. These regular changes are of less interest to those who study employment data than the variations that occur due to the underlying state of the economy; their focus is on how unemployment in the workforce has changed once the regular seasonal variations are accounted for. [3]
It is necessary for organisations to identify and measure seasonal variations within their market to help them plan for the future. This can prepare them for temporary increases or decreases in labour requirements and inventory as demand for their product or service fluctuates over certain periods. This may require training, periodic maintenance, and so forth that can be organised in advance. Apart from these considerations, organisations need to know whether the variation they have experienced has been more or less than the expected amount, beyond what the usual seasonal variations account for. [4]
There are several main reasons for studying seasonal variation: it gives a better understanding of how the seasonal component affects a particular series; it allows the seasonal pattern to be measured and then removed from the series (deseasonalization); and it allows past seasonal patterns to be projected into the future when forecasting.
The following graphical techniques can be used to detect seasonality: the run sequence plot, the seasonal plot, the seasonal subseries plot, multiple box plots, and the autocorrelation plot. These are discussed in turn below.
An effective way to find periodicity, including seasonality, in any regular series of data is to remove any overall trend first and then to inspect the residual series for time periodicity. [6]
The run sequence plot is a recommended first step for analysing any time series. Although seasonality can sometimes be indicated by this plot, it is shown more clearly by the seasonal subseries plot or the box plot. The seasonal subseries plot does an excellent job of showing both the seasonal differences (between-group patterns) and the within-group patterns. The box plot shows the seasonal differences (between-group patterns) quite well, but it does not show within-group patterns. However, for large data sets, the box plot is usually easier to read than the seasonal subseries plot.
The seasonal plot, seasonal subseries plot, and box plot all assume that the seasonal periods are known. In most cases, the analyst will, in fact, know this. For example, for monthly data, the period is 12 since there are 12 months in a year. However, if the period is not known, the autocorrelation plot can help. If there is significant seasonality, the autocorrelation plot should show spikes at lags equal to the period. For example, for monthly data, we would expect to see significant peaks at lags 12, 24, 36, and so on (although the intensity may decrease the further out we go).
An autocorrelation plot (ACF) can be used to identify seasonality because it measures the correlation between each value of Y and the values of Y at earlier lags. Lags at which the autocorrelation is large indicate the seasonal period of the data, while lags at which it is small indicate no seasonal relationship.
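As an illustration (not part of the original text), a minimal Python sketch of this procedure: remove a linear trend, then inspect sample autocorrelations. The synthetic monthly series and the period of 12 are assumptions.

```python
import numpy as np

def autocorrelation(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# Hypothetical monthly series: trend plus an annual (period-12) cycle plus noise.
rng = np.random.default_rng(0)
t = np.arange(120)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

# Remove the overall (linear) trend first, then inspect periodicity.
detrended = y - np.polyval(np.polyfit(t, y, 1), t)

# Spikes at lags 12, 24, 36, ... suggest monthly seasonality.
for lag in (6, 12, 24, 36):
    print(lag, round(autocorrelation(detrended, lag), 2))
```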
Semiregular cyclic variations might be dealt with by spectral density estimation.
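For such semiregular cycles, a periodogram gives a quick spectral view. A sketch using SciPy's `periodogram`; the phase-jittered series is a constructed example:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(1)
t = np.arange(480)
# Hypothetical series with a semiregular cycle of roughly 12 samples.
x = np.sin(2 * np.pi * t / 12 + 0.3 * rng.standard_normal(t.size))

freqs, power = periodogram(x)
peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero frequency
print(f"dominant period = {1 / peak:.1f} samples")
```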
Seasonal variation is measured in terms of an index, called a seasonal index. It is an average that can be used to compare an actual observation relative to what it would be if there were no seasonal variation. An index value is attached to each period of the time series within a year. This implies that if monthly data are considered there are 12 separate seasonal indices, one for each month. The following methods use seasonal indices to measure seasonal variations of time-series data.
The measurement of seasonal variation by the ratio-to-moving-average method provides an index of the degree of seasonal variation in a time series. The index is based on a mean of 100, with the degree of seasonality measured by variations away from this base. For example, if we observe hotel rentals in a winter resort and find that the winter-quarter index is 124, then 124 percent of the average quarterly rentals occur in winter. If the hotel management records 1436 rentals for the whole of last year, the average quarterly rental is 1436/4 = 359. Since the winter-quarter index is 124, we estimate the number of winter rentals as follows:
359 × (124/100) = 445
Here, 359 is the average quarterly rental, 124 is the winter-quarter index, and 445 is the seasonalized winter-quarter rental.
This method is also called the percentage moving average method. In this method, the original data values in the time-series are expressed as percentages of moving averages. The steps and the tabulations are given below.
Let us calculate the seasonal index by the ratio-to-moving-average method from the following data:
Year/Quarters | 1 | 2 | 3 | 4 |
---|---|---|---|---|
1996 | 75 | 60 | 54 | 59 |
1997 | 86 | 65 | 63 | 80 |
1998 | 90 | 72 | 66 | 85 |
1999 | 100 | 78 | 72 | 93 |
The calculations of the 4-quarter moving averages and the ratios to moving average are shown in the table below; each moving total and moving average is centred between the quarters it spans.
Year | Quarter | Original Value (Y) | 4-Quarter Moving Total | 4-Quarter Moving Average | 2-Figure Moving Total | Centred Moving Average (T) | Ratio to Moving Average (%) = Y/T × 100 |
---|---|---|---|---|---|---|---|
1996 | 1 | 75 | | | — | — | — |
| 2 | 60 | | | — | — | — |
| | | 248 | 62.00 | | | |
| 3 | 54 | | | 126.75 | 63.375 | 85.21 |
| | | 259 | 64.75 | | | |
| 4 | 59 | | | 130.75 | 65.375 | 90.25 |
| | | 264 | 66.00 | | | |
1997 | 1 | 86 | | | 134.25 | 67.125 | 128.12 |
| | | 273 | 68.25 | | | |
| 2 | 65 | | | 141.75 | 70.875 | 91.71 |
| | | 294 | 73.50 | | | |
| 3 | 63 | | | 148.00 | 74.00 | 85.13 |
| | | 298 | 74.50 | | | |
| 4 | 80 | | | 150.75 | 75.375 | 106.14 |
| | | 305 | 76.25 | | | |
1998 | 1 | 90 | | | 153.25 | 76.625 | 117.45 |
| | | 308 | 77.00 | | | |
| 2 | 72 | | | 155.25 | 77.625 | 92.75 |
| | | 313 | 78.25 | | | |
| 3 | 66 | | | 159.00 | 79.50 | 83.02 |
| | | 323 | 80.75 | | | |
| 4 | 85 | | | 163.00 | 81.50 | 104.29 |
| | | 329 | 82.25 | | | |
1999 | 1 | 100 | | | 166.00 | 83.00 | 120.48 |
| | | 335 | 83.75 | | | |
| 2 | 78 | | | 169.50 | 84.75 | 92.03 |
| | | 343 | 85.75 | | | |
| 3 | 72 | | | — | — | — |
| 4 | 93 | | | — | — | — |
The quarterly ratios to moving average are then arranged by quarter and averaged:
Year/Quarter | 1 | 2 | 3 | 4 | Total |
---|---|---|---|---|---|
1996 | — | — | 85.21 | 90.25 | |
1997 | 128.12 | 91.71 | 85.13 | 106.14 | |
1998 | 117.45 | 92.75 | 83.02 | 104.29 | |
1999 | 120.48 | 92.03 | — | — | |
Total | 366.05 | 276.49 | 253.36 | 300.68 | |
Seasonal average | 122.01 | 92.16 | 84.45 | 100.23 | 398.85 |
Adjusted seasonal index | 122.36 | 92.43 | 84.69 | 100.52 | 400.00 |
The total of the seasonal averages is 398.85, so the corresponding correction factor is 400/398.85 = 1.00288. Each seasonal average is multiplied by this correction factor to obtain the adjusted seasonal indices shown in the table above.
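The tabulated calculation can be reproduced programmatically. A sketch of the ratio-to-moving-average method in NumPy, applied to the quarterly data above (variable names are illustrative):

```python
import numpy as np

# Quarterly observations, 1996-1999, as in the table above.
y = np.array([75, 60, 54, 59, 86, 65, 63, 80,
              90, 72, 66, 85, 100, 78, 72, 93], dtype=float)

# 4-quarter moving average, then centre it by averaging adjacent pairs.
ma4 = np.convolve(y, np.ones(4) / 4, mode="valid")   # 13 values
centered = (ma4[:-1] + ma4[1:]) / 2                  # trend T, 12 values

# Ratio to moving average, in percent; the first and last two quarters are lost.
ratios = y[2:-2] / centered * 100

# Average the ratios by quarter, then rescale so the four indices sum to 400.
quarters = np.arange(2, len(y) - 2) % 4              # 0..3 = Q1..Q4
averages = np.array([ratios[quarters == q].mean() for q in range(4)])
indices = averages * 400 / averages.sum()
# Approximately [122.37, 92.43, 84.69, 100.51]: the adjusted indices above,
# up to rounding differences in the hand calculation.
print(np.round(indices, 2))
```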
1. In an additive time-series model, the seasonal component is estimated as
S = Y − (T + C + I)
where S is the seasonal component, Y is the observed value, T is the trend, C is the cyclical component, and I is the irregular component.
2. In a multiplicative time-series model, Y = T × S × C × I, and the seasonal component is expressed in terms of a ratio or percentage as
S = Y / (T × C × I).
However, in practice the time series is detrended to arrive at S × C × I. This is done by dividing both sides of Y = T × S × C × I by the trend values T, so that Y/T = S × C × I.
3. The deseasonalized time series will have only the trend (T), cyclical (C) and irregular (I) components. It is expressed as Y/S = T × C × I in the multiplicative model, or Y − S = T + C + I in the additive model.
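A sketch of such a decomposition using `seasonal_decompose` from statsmodels, one common implementation; note that it merges the cyclical and irregular components into a single residual, and the quarterly series here is the worked example above:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Quarterly series from the worked example, 1996Q1-1999Q4.
idx = pd.period_range("1996Q1", periods=16, freq="Q").to_timestamp()
y = pd.Series([75, 60, 54, 59, 86, 65, 63, 80,
               90, 72, 66, 85, 100, 78, 72, 93], index=idx, dtype=float)

# Multiplicative model: Y = T * S * C * I (C and I are merged in "resid").
result = seasonal_decompose(y, model="multiplicative", period=4)
print(result.seasonal.head(4))          # estimated seasonal component S
deseasonalized = y / result.seasonal    # Y / S = T * C * I
```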
A completely regular cyclic variation in a time series might be dealt with in time series analysis by using a sinusoidal model with one or more sinusoids whose period-lengths may be known or unknown depending on the context. A less completely regular cyclic variation might be dealt with by using a special form of an ARIMA model which can be structured so as to treat cyclic variations semi-explicitly. Such models represent cyclostationary processes.
Another method of modelling periodic seasonality is the use of pairs of Fourier terms. Similar to the sinusoidal model, Fourier terms added to a regression model use sine and cosine terms to capture seasonality. However, the seasonality of such a regression is represented as a sum of sine and cosine terms, instead of the single sine or cosine term of a sinusoidal model. Any periodic function can be approximated by including sufficiently many Fourier terms.
The difference between a sinusoidal model and a regression with Fourier terms can be summarised as follows:
Sinusoidal model: Y_t = a + b sin(2πt/m + φ) + ε_t
Regression with Fourier terms: Y_t = a + Σ_{k=1..K} [α_k sin(2πkt/m) + β_k cos(2πkt/m)] + ε_t
where m is the seasonal period, K is the number of Fourier pairs, and ε_t is the error term.
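A sketch of fitting such Fourier terms by ordinary least squares with NumPy; the monthly series, the linear trend term, and the choice K = 2 are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(120)          # ten years of hypothetical monthly data
m, K = 12, 2                # seasonal period and number of Fourier pairs
y = 50 + 0.3 * t + 8 * np.sin(2 * np.pi * t / m) + rng.normal(0, 1, t.size)

# Design matrix: intercept, linear trend, and K sine/cosine pairs.
cols = [np.ones_like(t, dtype=float), t.astype(float)]
for k in range(1, K + 1):
    cols.append(np.sin(2 * np.pi * k * t / m))
    cols.append(np.cos(2 * np.pi * k * t / m))
X = np.column_stack(cols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
seasonal_part = X[:, 2:] @ beta[2:]   # the fitted sum of Fourier terms
```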
Seasonal adjustment or deseasonalization is any method for removing the seasonal component of a time series. The resulting seasonally adjusted data are used, for example, when analyzing or reporting non-seasonal trends over durations rather longer than the seasonal period. An appropriate method for seasonal adjustment is chosen on the basis of a particular view taken of the decomposition of the time series into components designated with names such as "trend", "cyclic", "seasonal" and "irregular", including how these interact with each other. For example, such components might act additively or multiplicatively. Thus, if a seasonal component acts additively, the adjustment method has two stages: first, the seasonal component is estimated; then the estimated seasonal component is subtracted from the time series.
If the model is multiplicative, the magnitude of the seasonal fluctuations will vary with the level of the series, which is more likely to occur with economic series. [3] When taking seasonality into account, the seasonally adjusted multiplicative decomposition can be written as Y_t/S_t = T_t × C_t × I_t, whereby the original time series is divided by the estimated seasonal component.
The multiplicative model can be transformed into an additive model by taking the log of the time series: [3]
log Y_t = log T_t + log S_t + log C_t + log I_t
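A brief sketch of this log transformation, reusing `seasonal_decompose` from statsmodels on the worked-example data (the series must be positive for the log to apply):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.period_range("1996Q1", periods=16, freq="Q").to_timestamp()
y = pd.Series([75, 60, 54, 59, 86, 65, 63, 80,
               90, 72, 66, 85, 100, 78, 72, 93], index=idx, dtype=float)

# Additive decomposition of log(Y) corresponds to a multiplicative model of Y.
log_result = seasonal_decompose(np.log(y), model="additive", period=4)
seasonally_adjusted = y / np.exp(log_result.seasonal)   # divide out exp(log S)
```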
One particular implementation of seasonal adjustment is provided by X-12-ARIMA.
In regression analysis such as ordinary least squares, with a seasonally varying dependent variable being influenced by one or more independent variables, the seasonality can be accounted for and measured by including n-1 dummy variables, one for each of the seasons except for an arbitrarily chosen reference season, where n is the number of seasons (e.g., 4 in the case of meteorological seasons, 12 in the case of months, etc.). Each dummy variable is set to 1 if the data point is drawn from the dummy's specified season and 0 otherwise. Then the predicted value of the dependent variable for the reference season is computed from the rest of the regression, while for any other season it is computed using the rest of the regression and by inserting the value 1 for the dummy variable for that season.
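As an illustration, a minimal sketch of this dummy-variable approach using NumPy least squares; the quarterly data, the coefficients, and the choice of Q1 as reference season are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_years = 10
quarter = np.tile([0, 1, 2, 3], n_years)   # 0 = reference season (Q1)
x = rng.normal(size=quarter.size)          # one independent variable
y = 5 + 2 * x + np.array([0, -3, 1, 4])[quarter] + rng.normal(0, 0.5, quarter.size)

# n - 1 = 3 dummy columns, one per season except the reference season.
dummies = np.column_stack([(quarter == q).astype(float) for q in (1, 2, 3)])
X = np.column_stack([np.ones(quarter.size), x, dummies])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] + beta[1]*x predicts the reference season; for season q in (1, 2, 3),
# add beta[1 + q], the coefficient of that season's dummy variable.
print(np.round(beta, 2))
```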
It is important to distinguish seasonal patterns from related patterns. A seasonal pattern occurs when a time series is affected by the season or the time of the year, such as annual, semiannual, quarterly, etc. A cyclic pattern, or simply a cycle, occurs when the data exhibit rises and falls over other periods, i.e., much longer (e.g., decadal) or much shorter (e.g., weekly) than a season. Quasiperiodicity is a more general, irregular periodicity.
Forecasting is the process of making predictions based on past and present data. Later these can be compared (resolved) against what actually happens. For example, a company might estimate its revenue for the next year, then compare the estimate against the actual results, creating a variance analysis. Prediction is a similar but more general term. Forecasting might refer to specific formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods or the process of prediction and resolution itself. Usage can vary between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.
In statistics, originally in geostatistics, kriging or Kriging, also known as Gaussian process regression, is a method of interpolation based on Gaussian processes governed by prior covariances. Under suitable assumptions of the prior, kriging gives the best linear unbiased prediction (BLUP) at unsampled locations. Interpolating methods based on other criteria such as smoothness may not yield the BLUP. The method is widely used in the domains of spatial analysis and computer experiments. The technique is also known as Wiener–Kolmogorov prediction, after Norbert Wiener and Andrey Kolmogorov.
Linear trend estimation is a statistical technique used to analyze data patterns. Data patterns, or trends, occur when the information gathered tends to increase or decrease over time or is influenced by changes in an external factor. Linear trend estimation essentially creates a straight line on a graph of data that models the general direction that the data is heading.
In statistics, a moving average is a calculation to analyze data points by creating a series of averages of different selections of the full data set. Variations include: simple, cumulative, or weighted forms.
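A minimal sketch of the simple form with NumPy, using a window of 4 as in the worked example earlier:

```python
import numpy as np

x = np.array([75, 60, 54, 59, 86, 65, 63, 80], dtype=float)

# Simple moving average: equal weights over a sliding window of 4 points.
window = 4
sma = np.convolve(x, np.ones(window) / window, mode="valid")
print(np.round(sma, 2))   # [62.   64.75 66.   68.25 73.5 ]
```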
The Hodrick–Prescott filter is a mathematical tool used in macroeconomics, especially in real business cycle theory, to remove the cyclical component of a time series from raw data. It is used to obtain a smoothed-curve representation of a time series, one that is more sensitive to long-term than to short-term fluctuations. The adjustment of the sensitivity of the trend to short-term fluctuations is achieved by modifying a multiplier λ.
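A brief sketch using the `hpfilter` function from statsmodels; the random-walk series is hypothetical, and λ = 1600 is the conventional value for quarterly data:

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(5)
# Hypothetical quarterly macroeconomic series: a random walk with drift.
y = np.cumsum(rng.normal(0.1, 1.0, 200))

# lamb controls how sensitive the trend is to short-term fluctuations.
cycle, trend = hpfilter(y, lamb=1600)
```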
X-13ARIMA-SEATS, successor to X-12-ARIMA and X-11, is a set of statistical methods for seasonal adjustment and other descriptive analysis of time series data that are implemented in the U.S. Census Bureau's software package. These methods are or have been used by Statistics Canada, Australian Bureau of Statistics, and the statistical offices of many other countries.
In statistics and econometrics, and in particular in time series analysis, an autoregressive integrated moving average (ARIMA) model is a generalization of an autoregressive moving average (ARMA) model. To better comprehend the data or to forecast upcoming series points, both of these models are fitted to time series data. ARIMA models are applied in some cases where data show evidence of non-stationarity in the mean, where an initial differencing step can be applied one or more times to eliminate the non-stationarity of the mean function. When seasonality appears in a time series, seasonal differencing can be applied to eliminate the seasonal component. Since the ARMA model, according to Wold's decomposition theorem, is theoretically sufficient to describe a regular wide-sense stationary time series, we are motivated to make a non-stationary time series stationary, e.g., by differencing, before we can use the ARMA model. Note that if the time series contains a predictable sub-process, the predictable component is treated as a non-zero-mean but periodic component in the ARIMA framework, so that it is eliminated by the seasonal differencing.
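A minimal sketch of the seasonal and first differencing steps mentioned above, in NumPy; the monthly period of 12 and the synthetic series are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(72)
# Hypothetical monthly series: linear trend plus an annual cycle plus noise.
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

# Seasonal differencing at lag 12 removes a stable annual component...
seasonal_diff = y[12:] - y[:-12]
# ...and first differencing removes the remaining non-stationarity in the mean.
stationary = np.diff(seasonal_diff)
```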
Exponential smoothing or exponential moving average (EMA) is a rule of thumb technique for smoothing time series data using the exponential window function. Whereas in the simple moving average the past observations are weighted equally, exponential functions are used to assign exponentially decreasing weights over time. It is an easily learned and easily applied procedure for making some determination based on prior assumptions by the user, such as seasonality. Exponential smoothing is often used for analysis of time-series data.
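A minimal sketch of simple exponential smoothing; the smoothing factor alpha = 0.3 and the initialisation at the first observation are assumptions:

```python
import numpy as np

def exponential_smoothing(x, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}."""
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]                        # initialise at the first observation
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

print(np.round(exponential_smoothing(np.array([75.0, 60, 54, 59, 86])), 2))
```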
Functional data analysis (FDA) is a branch of statistics that analyses data providing information about curves, surfaces or anything else varying over a continuum. In its most general form, under an FDA framework, each sample element of functional data is considered to be a random function. The physical continuum over which these functions are defined is often time, but may also be spatial location, wavelength, probability, etc. Intrinsically, functional data are infinite dimensional. The high intrinsic dimensionality of these data brings challenges for theory as well as computation, where these challenges vary with how the functional data were sampled. However, the high or infinite dimensional structure of the data is a rich source of information and there are many interesting challenges for research and data analysis.
In time series analysis, the Box–Jenkins method, named after the statisticians George Box and Gwilym Jenkins, applies autoregressive moving average (ARMA) or autoregressive integrated moving average (ARIMA) models to find the best fit of a time-series model to past values of a time series.
In probability theory, stochastic drift is the change of the average value of a stochastic (random) process. A related concept is the drift rate, which is the rate at which the average changes. For example, a process that counts the number of heads in a series of fair coin tosses has a drift rate of 1/2 per toss. This is in contrast to the random fluctuations about this average value. The stochastic mean of that coin-toss process is 1/2 and the drift rate of the stochastic mean is 0, assuming 1 = heads and 0 = tails.
In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a plot of the sample autocorrelations r_h versus the time lags h is an autocorrelogram. If cross-correlation is plotted, the result is called a cross-correlogram.
Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when wanting to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. Many economic phenomena have seasonal cycles, such as agricultural production, and consumer consumption. It is necessary to adjust for this component in order to understand underlying trends in the economy, so official statistics are often adjusted to remove seasonal components. Typically, seasonally adjusted data is reported for unemployment rates to reveal the underlying trends and cycles in labor markets.
The decomposition of time series is a statistical task that deconstructs a time series into several components, each representing one of the underlying categories of patterns. There are two principal types of decomposition, which are outlined below.
In time series analysis, singular spectrum analysis (SSA) is a nonparametric spectral estimation method. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. Its roots lie in the classical Karhunen (1946)–Loève spectral decomposition of time series and random fields and in the Mañé (1981)–Takens (1981) embedding theorem. SSA can be an aid in the decomposition of time series into a sum of components, each having a meaningful interpretation. The name "singular spectrum analysis" relates to the spectrum of eigenvalues in a singular value decomposition of a covariance matrix, and not directly to a frequency domain decomposition.
Demand forecasting is the prediction of the quantity of goods and services that will be demanded by consumers at a future point in time. More specifically, the methods of demand forecasting entail using predictive analytics to estimate customer demand in consideration of key economic conditions. This is an important tool in optimizing business profitability through efficient supply chain management. Demand forecasting methods are divided into two major categories, qualitative and quantitative methods. Qualitative methods are based on expert opinion and information gathered from the field. This method is mostly used in situations when there is minimal data available for analysis such as when a business or product has recently been introduced to the market. Quantitative methods, however, use available data, and analytical tools in order to produce predictions. Demand forecasting may be used in resource allocation, inventory management, assessing future capacity requirements, or making decisions on whether to enter a new market.
In statistics and in machine learning, a linear predictor function is a linear function of a set of coefficients and explanatory variables, whose value is used to predict the outcome of a dependent variable. This sort of function usually comes in linear regression, where the coefficients are called regression coefficients. However, they also occur in various types of linear classifiers, as well as in various other models, such as principal component analysis and factor analysis. In many of these models, the coefficients are referred to as "weights".
In applied statistics and geostatistics, regression-kriging (RK) is a spatial prediction technique that combines a regression of the dependent variable on auxiliary variables with interpolation (kriging) of the regression residuals. It is mathematically equivalent to the interpolation method variously called universal kriging and kriging with external drift, where auxiliary predictors are used directly to solve the kriging weights.
In statistics, linear regression is a statistical model which estimates the linear relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression. This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable. If the explanatory variables are measured with error then errors-in-variables models are required, also known as measurement error models.
Trend periodic non-stationary processes are a type of cyclostationary process that exhibits both periodic behavior and a statistical trend. The trend can be linear or nonlinear, and it can result from systematic changes in the data over time. A cyclostationary process can be formed by removing the trend component. This approach is utilized in the analysis of the trend-stationary process.
This article incorporates public domain material from the NIST/SEMATECH e-Handbook of Statistical Methods, National Institute of Standards and Technology.