Partial autocorrelation function

Figure: Partial autocorrelation function of Lake Huron's depth with confidence interval (in blue, plotted around 0)

In time series analysis, the partial autocorrelation function (PACF) gives the partial correlation of a stationary time series with its own lagged values, regressed on the values of the time series at all shorter lags. It contrasts with the autocorrelation function, which does not control for other lags.


This function plays an important role in data analysis aimed at identifying the extent of the lag in an autoregressive (AR) model. The use of this function was introduced as part of the Box–Jenkins approach to time series modelling, whereby plotting the partial autocorrelation function one could determine the appropriate lag p in an AR(p) model or in an extended ARIMA(p,d,q) model.

Definition

Given a time series $z_t$, the partial autocorrelation of lag $k$, denoted $\alpha_k$, is the autocorrelation between $z_t$ and $z_{t+k}$ with the linear dependence of $z_{t+k}$ on $z_{t+1}$ through $z_{t+k-1}$ removed. Equivalently, it is the autocorrelation between $z_t$ and $z_{t+k}$ that is not accounted for by lags $1$ through $k-1$, inclusive: [1]

$$\alpha_1 = \operatorname{corr}(z_{t+1}, z_t), \qquad k = 1,$$
$$\alpha_k = \operatorname{corr}(z_{t+k} - \hat{z}_{t+k},\; z_t - \hat{z}_t), \qquad k \geq 2,$$

where $\hat{z}_{t+k}$ and $\hat{z}_t$ are linear combinations of $\{z_{t+1}, z_{t+2}, \ldots, z_{t+k-1}\}$ that minimize the mean squared error of $z_{t+k}$ and $z_t$ respectively. For stationary processes, the coefficients in $\hat{z}_{t+k}$ and $\hat{z}_t$ are the same, but reversed: [2]

$$\hat{z}_{t+k} = \beta_1 z_{t+k-1} + \cdots + \beta_{k-1} z_{t+1} \qquad \text{and} \qquad \hat{z}_t = \beta_1 z_{t+1} + \cdots + \beta_{k-1} z_{t+k-1}.$$
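For instance (a standard consequence of the definition rather than a formula stated in the sources above), the lag-2 partial autocorrelation can be written directly in terms of the ordinary autocorrelations $\rho_1$ and $\rho_2$:

$$\alpha_2 = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2},$$

that is, the correlation remaining between $z_t$ and $z_{t+2}$ once the best linear prediction from $z_{t+1}$ has been removed from each.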

Calculation

The theoretical partial autocorrelation function of a stationary time series can be calculated by using the Durbin–Levinson algorithm:

$$\phi_{n,n} = \frac{\rho(n) - \sum_{k=1}^{n-1} \phi_{n-1,k}\,\rho(n-k)}{1 - \sum_{k=1}^{n-1} \phi_{n-1,k}\,\rho(k)},$$

where $\phi_{n,k} = \phi_{n-1,k} - \phi_{n,n}\,\phi_{n-1,n-k}$ for $1 \leq k \leq n-1$, $\phi_{1,1} = \rho(1)$, and $\rho(n)$ is the autocorrelation function. [3] [4] [5]

The formula above can be used with sample autocorrelations to find the sample partial autocorrelation function of any given time series. [6] [7]
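The recursion above translates almost directly into code. Below is a minimal NumPy sketch, assuming mean-adjusted sample autocorrelations; the helper names sample_acf and pacf_durbin_levinson are illustrative rather than taken from any library:

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelations rho(0), ..., rho(nlags) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[: n - k], x[k:]) / denom for k in range(nlags + 1)])

def pacf_durbin_levinson(x, nlags):
    """Sample partial autocorrelations phi_{1,1}, ..., phi_{nlags,nlags}."""
    rho = sample_acf(x, nlags)
    phi_prev = np.zeros(nlags + 1)          # row n-1 of the recursion
    phi_curr = np.zeros(nlags + 1)          # row n of the recursion
    pacf = np.zeros(nlags + 1)
    pacf[1] = phi_prev[1] = rho[1]          # base case: phi_{1,1} = rho(1)
    for n in range(2, nlags + 1):
        num = rho[n] - np.dot(phi_prev[1:n], rho[n - 1:0:-1])
        den = 1.0 - np.dot(phi_prev[1:n], rho[1:n])
        phi_nn = num / den                  # partial autocorrelation at lag n
        phi_curr[1:n] = phi_prev[1:n] - phi_nn * phi_prev[n - 1:0:-1]
        phi_curr[n] = pacf[n] = phi_nn
        phi_prev[: n + 1] = phi_curr[: n + 1]
    return pacf[1:]

# Example: the sample PACF of white noise should be close to 0 at every lag.
rng = np.random.default_rng(42)
print(np.round(pacf_durbin_levinson(rng.standard_normal(500), 5), 2))
```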

Examples

The following table summarizes the partial autocorrelation function of different models: [5] [8]

Model | PACF
White noise | The partial autocorrelation is 0 for all lags.
Autoregressive model | The partial autocorrelation for an AR(p) model is nonzero for lags less than or equal to p and 0 for lags greater than p.
Moving-average model | If $\theta_1 > 0$, the partial autocorrelation oscillates to 0. If $\theta_1 < 0$, the partial autocorrelation geometrically decays to 0.
Autoregressive–moving-average model | An ARMA(p, q) model's partial autocorrelation geometrically decays to 0, but only after lags greater than p.

The behavior of the partial autocorrelation function mirrors that of the autocorrelation function for autoregressive and moving-average models. For example, the partial autocorrelation function of an AR(p) series cuts off after lag p, just as the autocorrelation function of an MA(q) series cuts off after lag q. Likewise, the autocorrelation function of an AR(p) process tails off, just as the partial autocorrelation function of an MA(q) process does. [2]
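As a quick empirical check of these patterns, the following sketch uses statsmodels' ArmaProcess and pacf utilities (the coefficient value 0.8 and the sample size are arbitrary illustrations) to simulate MA(1) series; under the $x_t = \varepsilon_t + \theta_1 \varepsilon_{t-1}$ sign convention, the sample partial autocorrelations oscillate for a positive coefficient and keep a constant sign while decaying for a negative one:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import pacf

# MA(1): x_t = e_t + theta * e_{t-1}. statsmodels expects the full lag
# polynomials, so the AR side is [1] and the MA side is [1, theta].
for theta in (0.8, -0.8):
    x = ArmaProcess(ar=[1.0], ma=[1.0, theta]).generate_sample(nsample=5000)
    print(f"theta = {theta:+.1f}:", np.round(pacf(x, nlags=5)[1:], 2))
```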

Autoregressive model identification

Figure: Sample partial autocorrelation function with confidence interval of a simulated AR(3) time series

Partial autocorrelation is a commonly used tool for identifying the order of an autoregressive model. [6] As previously mentioned, the partial autocorrelation of an AR(p) process is zero at lags greater than p. [5] [8] If an AR model is determined to be appropriate, then the sample partial autocorrelation plot is examined to help identify the order.

The sample partial autocorrelations at lags greater than p for an AR(p) time series are approximately independent and normally distributed with a mean of 0. [9] Therefore, a confidence interval can be constructed by dividing a selected z-score by $\sqrt{n}$, where $n$ is the length of the time series. Lags with partial autocorrelations outside of the confidence interval indicate that the AR model's order is likely greater than or equal to the lag. Plotting the partial autocorrelation function and drawing the lines of the confidence interval is a common way to analyze the order of an AR model. To evaluate the order, one examines the plot to find the lag after which the partial autocorrelations are all within the confidence interval. This lag is likely the AR model's order. [1]
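The following sketch illustrates this procedure end to end. It is a minimal NumPy example, assuming a simulated AR(3) series with arbitrarily chosen coefficients, and it computes each sample partial autocorrelation as the last coefficient of the Yule–Walker solution at that lag (equivalent to the Durbin–Levinson value):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary AR(3) series (coefficients chosen only for illustration).
n, phi = 1000, (0.4, -0.3, 0.2)
x = np.zeros(n)
for t in range(3, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + phi[2] * x[t - 3] + rng.standard_normal()

# Sample autocorrelations rho(0), ..., rho(nlags).
nlags = 10
xc = x - x.mean()
rho = np.array([np.dot(xc[: n - k], xc[k:]) for k in range(nlags + 1)]) / np.dot(xc, xc)

# Sample PACF: at each lag k, the last coefficient of the Yule-Walker fit of order k.
pacf = np.zeros(nlags + 1)
for k in range(1, nlags + 1):
    R = np.array([[rho[abs(i - j)] for j in range(k)] for i in range(k)])
    pacf[k] = np.linalg.solve(R, rho[1 : k + 1])[-1]

# 95% band: z-score divided by sqrt(n); occasional spurious exceedances at
# higher lags are expected by chance.
bound = 1.96 / np.sqrt(n)
significant = [k for k in range(1, nlags + 1) if abs(pacf[k]) > bound]
order = max(significant) if significant else 0
print("significant lags:", significant, "-> suggested AR order:", order)
```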


References

1. "6.4.4.6.3. Partial Autocorrelation Plot". www.itl.nist.gov. Retrieved 2022-07-14.
2. Shumway, Robert H.; Stoffer, David S. (2017). Time Series Analysis and Its Applications: With R Examples. Springer Texts in Statistics. Cham: Springer International Publishing. pp. 97–99. doi:10.1007/978-3-319-52452-8. ISBN 978-3-319-52451-1.
3. Durbin, J. (1960). "The Fitting of Time-Series Models". Revue de l'Institut International de Statistique / Review of the International Statistical Institute. 28 (3): 233–244. doi:10.2307/1401322. ISSN 0373-1138. JSTOR 1401322.
4. Shumway, Robert H.; Stoffer, David S. (2017). Time Series Analysis and Its Applications: With R Examples. Springer Texts in Statistics. Cham: Springer International Publishing. pp. 103–104. doi:10.1007/978-3-319-52452-8. ISBN 978-3-319-52451-1.
5. Enders, Walter (2004). Applied Econometric Time Series (2nd ed.). Hoboken, NJ: J. Wiley. pp. 65–67. ISBN 0-471-23065-0. OCLC 52387978.
6. Box, George E. P.; Reinsel, Gregory C.; Jenkins, Gwilym M. (2008). Time Series Analysis: Forecasting and Control (4th ed.). Hoboken, New Jersey: John Wiley. ISBN 9780470272848.
7. Brockwell, Peter J.; Davis, Richard A. (1991). Time Series: Theory and Methods (2nd ed.). New York, NY: Springer. pp. 102, 243–245. ISBN 9781441903198.
8. Das, Panchanan (2019). Econometrics in Theory and Practice: Analysis of Cross Section, Time Series and Panel Data with Stata 15.1. Singapore: Springer. pp. 294–299. ISBN 978-981-329-019-8. OCLC 1119630068.
9. Quenouille, M. H. (1949). "Approximate Tests of Correlation in Time-Series". Journal of the Royal Statistical Society, Series B (Methodological). 11 (1): 68–84. doi:10.1111/j.2517-6161.1949.tb00023.x.