In econometrics, the autoregressive conditional heteroscedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms; often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model; if an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.
ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm. ARCH-type models are sometimes considered to be in the family of stochastic volatility models, although this is strictly incorrect since at time t the volatility is completely pre-determined (deterministic) given previous values.
To model a time series using an ARCH process, let $\epsilon_t$ denote the error terms (return residuals, with respect to a mean process), i.e. the series terms. These $\epsilon_t$ are split into a stochastic piece $z_t$ and a time-dependent standard deviation $\sigma_t$ characterizing the typical size of the terms so that

$$\epsilon_t = \sigma_t z_t.$$

The random variable $z_t$ is a strong white noise process. The series $\sigma_t^2$ is modeled by

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2,$$

where $\alpha_0 > 0$ and $\alpha_i \ge 0$ for $i > 0$.
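For concreteness, here is a minimal Python sketch of this recursion; the function name simulate_arch1 and its default parameter values are illustrative choices, not part of any standard library.

```python
import numpy as np

def simulate_arch1(n, alpha0=0.2, alpha1=0.6, seed=0):
    """Simulate eps_t = sigma_t * z_t with sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)           # strong white noise z_t
    eps = np.empty(n)
    sigma2 = np.empty(n)
    sigma2[0] = alpha0 / (1.0 - alpha1)  # unconditional variance as starting value
    eps[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
        eps[t] = np.sqrt(sigma2[t]) * z[t]
    return eps, sigma2
```

Because large shocks feed into the next period's variance, the simulated series exhibits the volatility clustering described above.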
An ARCH(q) model can be estimated using ordinary least squares. A method for testing whether the residuals exhibit time-varying heteroskedasticity using the Lagrange multiplier test was proposed by Engle (1982). This procedure is as follows:

1. Estimate the best fitting autoregressive model AR(q): $y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t$.
2. Obtain the squares of the error $\hat\epsilon^2$ and regress them on a constant and $q$ lagged values: $\hat\epsilon_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \hat\epsilon_{t-i}^2$.
3. The null hypothesis is that, in the absence of ARCH components, $\alpha_i = 0$ for all $i = 1, \ldots, q$. Under the null, the test statistic $T R^2$ follows a $\chi^2$ distribution with $q$ degrees of freedom, where $T$ is the number of observations and $R^2$ is the coefficient of determination of the auxiliary regression; if $T R^2$ exceeds the $\chi^2$ critical value, the null is rejected and an ARCH effect is present (see the sketch after this list).
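The auxiliary regression in step 2 can be run with plain least squares. The following sketch, assuming a NumPy environment, computes the $TR^2$ statistic; arch_lm_test is a hypothetical helper, not a library function.

```python
import numpy as np

def arch_lm_test(resid, q):
    """Engle's LM test: regress squared residuals on q of their own lags.
    Under H0 (no ARCH effect), T * R^2 is asymptotically chi2 with q df."""
    e2 = resid ** 2
    y = e2[q:]                                    # dependent variable
    X = np.column_stack(
        [np.ones(len(y))] +
        [e2[q - i:len(e2) - i] for i in range(1, q + 1)])  # constant + q lags
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    return len(y) * r2    # compare against the chi2(q) critical value
```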
If an autoregressive moving average (ARMA) model is assumed for the error variance, the model is a generalized autoregressive conditional heteroskedasticity (GARCH) model.
In that case, the GARCH(p, q) model (where p is the order of the GARCH terms $\sigma^2$ and q is the order of the ARCH terms $\epsilon^2$), following the notation of the original paper, is given by

$$y_t = x_t' b + \epsilon_t,$$
$$\epsilon_t \mid \psi_{t-1} \sim \mathcal{N}(0, \sigma_t^2),$$
$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i \epsilon_{t-i}^2 + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2.$$
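Given parameter values, the conditional-variance recursion is straightforward to evaluate. Below is a minimal sketch for the common GARCH(1,1) case; garch11_variance is an illustrative name, and in practice $\omega$, $\alpha$, $\beta$ are obtained by quasi-maximum likelihood rather than assumed.

```python
import numpy as np

def garch11_variance(eps, omega, alpha, beta):
    """GARCH(1,1) recursion: sigma_t^2 = omega + alpha*eps_{t-1}^2 + beta*sigma_{t-1}^2."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance (assumes alpha + beta < 1)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```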
Generally, when testing for heteroskedasticity in econometric models, the best test is the White test. However, when dealing with time series data, this means testing for ARCH and GARCH errors.
Exponentially weighted moving average (EWMA) is an alternative model in a separate class of exponential smoothing models. As an alternative to GARCH modelling it has some attractive properties such as a greater weight upon more recent observations, but also drawbacks such as an arbitrary decay factor that introduces subjectivity into the estimation.
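A sketch of the EWMA variance recursion follows; the decay factor $\lambda = 0.94$ is the value popularized by RiskMetrics for daily data, but, as noted above, the choice is ultimately arbitrary.

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA recursion: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)          # sample variance as starting value
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1.0 - lam) * returns[t - 1] ** 2
    return sigma2
```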
The lag length p of a GARCH(p, q) process is established in three steps:

1. Estimate the best fitting AR(q) model $y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \epsilon_t$.
2. Compute and plot the autocorrelations of $\epsilon^2$.
3. The asymptotic (large-sample) standard deviation of the autocorrelation $\rho(i)$ is $1/\sqrt{T}$; individual values larger than this indicate GARCH errors. The Ljung–Box Q-statistic computed over n lags follows a $\chi^2$ distribution with n degrees of freedom when the squared residuals are uncorrelated; a value of Q exceeding the critical value indicates GARCH errors (a numerical sketch follows this list).
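Step 3 can be checked numerically as in the sketch below; ljung_box_q is a hypothetical helper that computes the Q-statistic on the squared residuals.

```python
import numpy as np

def ljung_box_q(resid2, n_lags):
    """Ljung-Box Q-statistic on squared residuals; under the null of no
    ARCH errors, Q is asymptotically chi2 with n_lags degrees of freedom."""
    T = len(resid2)
    x = resid2 - resid2.mean()
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, n_lags + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k sample autocorrelation
        q += rho_k ** 2 / (T - k)
    return T * (T + 2) * q
```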
Nonlinear Asymmetric GARCH(1,1) (NAGARCH) is a model with the specification:

$$\sigma_t^2 = \omega + \alpha (\epsilon_{t-1} - \theta \sigma_{t-1})^2 + \beta \sigma_{t-1}^2,$$

where $\alpha, \beta \ge 0$ and $\omega > 0$.
For stock returns, the parameter $\theta$ is usually estimated to be positive; in this case, it reflects a phenomenon commonly referred to as the "leverage effect", signifying that negative returns increase future volatility by a larger amount than positive returns of the same magnitude.
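A minimal sketch of the NAGARCH(1,1) variance recursion; nagarch_variance is an illustrative name, and the starting value assumes the stationarity condition $\alpha(1 + \theta^2) + \beta < 1$.

```python
import numpy as np

def nagarch_variance(eps, omega, alpha, beta, theta):
    """NAGARCH(1,1): sigma_t^2 = omega + alpha*(eps_{t-1} - theta*sigma_{t-1})^2
    + beta*sigma_{t-1}^2; theta > 0 makes negative shocks raise variance more."""
    sigma2 = np.empty(len(eps))
    # stationary starting value (assumes alpha*(1 + theta**2) + beta < 1)
    sigma2[0] = omega / (1.0 - alpha * (1.0 + theta ** 2) - beta)
    for t in range(1, len(eps)):
        s = np.sqrt(sigma2[t - 1])
        sigma2[t] = omega + alpha * (eps[t - 1] - theta * s) ** 2 + beta * sigma2[t - 1]
    return sigma2
```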
This model should not be confused with the NARCH model and its NGARCH extension, introduced by Higgins and Bera in 1992.
Integrated Generalized Autoregressive Conditional heteroskedasticity (IGARCH) is a restricted version of the GARCH model, where the persistent parameters sum up to one, which imposes a unit root in the GARCH process. The condition for this is

$$\sum_{i=1}^{p} \beta_i + \sum_{i=1}^{q} \alpha_i = 1.$$
The exponential generalized autoregressive conditional heteroskedastic (EGARCH) model by Nelson (1991) is another form of the GARCH model. Formally, an EGARCH(p,q):

$$\log \sigma_t^2 = \omega + \sum_{k=1}^{q} \beta_k g(Z_{t-k}) + \sum_{k=1}^{p} \alpha_k \log \sigma_{t-k}^2,$$

where $g(Z_t) = \theta Z_t + \lambda (|Z_t| - E(|Z_t|))$, $\sigma_t^2$ is the conditional variance, and $\omega$, $\beta_k$, $\alpha_k$, $\theta$ and $\lambda$ are coefficients. $Z_t$ may be a standard normal variable or come from a generalized error distribution. The formulation for $g(Z_t)$ allows the sign and the magnitude of $Z_t$ to have separate effects on the volatility. This is particularly useful in an asset pricing context.
Since $\log \sigma_t^2$ may be negative, there are no sign restrictions for the parameters.
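The recursion for the EGARCH(1,1) case can be sketched as follows, assuming standard normal $Z_t$ so that $E|Z_t| = \sqrt{2/\pi}$; egarch11_logvar is an illustrative name, not a library function.

```python
import numpy as np

SQRT_2_OVER_PI = np.sqrt(2.0 / np.pi)   # E|Z| for standard normal Z

def egarch11_logvar(z, omega, alpha, beta, theta, lam):
    """EGARCH(1,1): log sigma_t^2 = omega + beta*g(z_{t-1}) + alpha*log sigma_{t-1}^2,
    with g(z) = theta*z + lam*(|z| - E|z|)."""
    log_s2 = np.empty(len(z))
    log_s2[0] = omega / (1.0 - alpha)    # unconditional mean of log sigma^2 (assumes |alpha| < 1)
    for t in range(1, len(z)):
        g = theta * z[t - 1] + lam * (np.abs(z[t - 1]) - SQRT_2_OVER_PI)
        log_s2[t] = omega + beta * g + alpha * log_s2[t - 1]
    return np.exp(log_s2)                # conditional variances
```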
The GARCH-in-mean (GARCH-M) model adds a heteroskedasticity term into the mean equation. It has the specification:

$$y_t = \beta x_t + \lambda \sigma_t + \epsilon_t.$$

The residual $\epsilon_t$ is defined as:

$$\epsilon_t = \sigma_t z_t.$$
The Quadratic GARCH (QGARCH) model by Sentana (1995) is used to model asymmetric effects of positive and negative shocks.
In the example of a GARCH(1,1) model, the residual process $\epsilon_t$ is

$$\epsilon_t = \sigma_t z_t,$$

where $z_t$ is i.i.d. and

$$\sigma_t^2 = K + \gamma \epsilon_{t-1} + \alpha \epsilon_{t-1}^2 + \beta \sigma_{t-1}^2.$$
Similar to QGARCH, the Glosten–Jagannathan–Runkle GARCH (GJR-GARCH) model by Glosten, Jagannathan and Runkle (1993) also models asymmetry in the ARCH process. The suggestion is to model $\epsilon_t = \sigma_t z_t$, where $z_t$ is i.i.d., and

$$\sigma_t^2 = K + \delta \sigma_{t-1}^2 + \alpha \epsilon_{t-1}^2 + \phi \epsilon_{t-1}^2 I_{t-1},$$

where $I_{t-1} = 0$ if $\epsilon_{t-1} \ge 0$, and $I_{t-1} = 1$ if $\epsilon_{t-1} < 0$.
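A sketch of the GJR-GARCH(1,1) recursion, with the indicator implemented directly; gjr_garch_variance is an illustrative name.

```python
import numpy as np

def gjr_garch_variance(eps, K, delta, alpha, phi):
    """GJR-GARCH(1,1): sigma_t^2 = K + delta*sigma_{t-1}^2 + alpha*eps_{t-1}^2
    + phi*eps_{t-1}^2 * I(eps_{t-1} < 0)."""
    sigma2 = np.empty(len(eps))
    sigma2[0] = np.var(eps)              # sample variance as starting value
    for t in range(1, len(eps)):
        indicator = 1.0 if eps[t - 1] < 0 else 0.0
        sigma2[t] = (K + delta * sigma2[t - 1] + alpha * eps[t - 1] ** 2
                     + phi * indicator * eps[t - 1] ** 2)
    return sigma2
```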
The Threshold GARCH (TGARCH) model by Zakoian (1994) is similar to GJR-GARCH. The specification is on the conditional standard deviation instead of the conditional variance:

$$\sigma_t = K + \delta \sigma_{t-1} + \alpha_1^{+} \epsilon_{t-1}^{+} + \alpha_1^{-} \epsilon_{t-1}^{-},$$

where $\epsilon_{t-1}^{+} = \epsilon_{t-1}$ if $\epsilon_{t-1} > 0$, and $\epsilon_{t-1}^{+} = 0$ if $\epsilon_{t-1} \le 0$. Likewise, $\epsilon_{t-1}^{-} = \epsilon_{t-1}$ if $\epsilon_{t-1} \le 0$, and $\epsilon_{t-1}^{-} = 0$ if $\epsilon_{t-1} > 0$.
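The analogous sketch for TGARCH(1,1), now updating the conditional standard deviation itself; tgarch_std is an illustrative name.

```python
import numpy as np

def tgarch_std(eps, K, delta, alpha_plus, alpha_minus):
    """TGARCH(1,1): sigma_t = K + delta*sigma_{t-1}
    + alpha_plus*eps+_{t-1} + alpha_minus*eps-_{t-1}."""
    sigma = np.empty(len(eps))
    sigma[0] = np.std(eps)               # sample standard deviation as starting value
    for t in range(1, len(eps)):
        e = eps[t - 1]
        eps_plus = e if e > 0 else 0.0   # positive part of the shock
        eps_minus = e if e <= 0 else 0.0 # non-positive part of the shock
        sigma[t] = K + delta * sigma[t - 1] + alpha_plus * eps_plus + alpha_minus * eps_minus
    return sigma
```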
Hentschel's fGARCH model, also known as Family GARCH, is an omnibus model that nests a variety of other popular symmetric and asymmetric GARCH models, including APARCH, GJR, AVGARCH and NGARCH.
In 2004, Claudia Klüppelberg, Alexander Lindner and Ross Maller proposed a continuous-time generalization of the discrete-time GARCH(1,1) process. The idea is to start with the GARCH(1,1) model equations

$$\epsilon_t = \sigma_t z_t,$$
$$\sigma_t^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2 = \alpha_0 + \alpha_1 \sigma_{t-1}^2 z_{t-1}^2 + \beta_1 \sigma_{t-1}^2,$$

and then to replace the strong white noise process $z_t$ by the infinitesimal increments $dL_t$ of a Lévy process $L_t$, and the squared noise process $z_t^2$ by the increments $d[L, L]_t^d$, where

$$[L, L]_t^d = \sum_{0 \le s \le t} (\Delta L_s)^2, \quad t \ge 0,$$

is the purely discontinuous part of the quadratic variation process of $L$. The result is the following system of stochastic differential equations:

$$dG_t = \sigma_{t-} \, dL_t,$$
$$d\sigma_t^2 = (\beta - \eta \sigma_t^2) \, dt + \varphi \sigma_{t-}^2 \, d[L, L]_t^d,$$

where the positive parameters $\beta$, $\eta$ and $\varphi$ are determined by $\alpha_0$, $\alpha_1$ and $\beta_1$. Now given some initial condition $(G_0, \sigma_0^2)$, the system above has a pathwise unique solution $(G_t, \sigma_t^2)_{t \ge 0}$ which is then called the continuous-time GARCH (COGARCH) model.
Unlike the classical GARCH model, the Zero-Drift GARCH (ZD-GARCH) model by Li, Zhang, Zhu and Ling (2018) lets the drift term $\omega = 0$ in the first-order GARCH model. The ZD-GARCH model is to model $\epsilon_t = \sigma_t z_t$, where $z_t$ is i.i.d., and

$$\sigma_t^2 = \alpha_1 \epsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2.$$

The ZD-GARCH model does not require $\alpha_1 + \beta_1 = 1$, and hence it nests the exponentially weighted moving average (EWMA) model in "RiskMetrics". Since the drift term $\omega = 0$, the ZD-GARCH model is always non-stationary, and its statistical inference methods are quite different from those for the classical GARCH model. Based on the historical data, the parameters $\alpha_1$ and $\beta_1$ can be estimated by the generalized QMLE method.
Spatial GARCH processes by Otto, Schmid and Garthoff (2018) are considered as the spatial equivalent to the temporal generalized autoregressive conditional heteroscedasticity (GARCH) models. In contrast to the temporal ARCH model, in which the distribution is known given the full information set for the prior periods, the distribution is not straightforward in the spatial and spatiotemporal setting due to the interdependence between neighboring spatial locations. The spatial model is given by

$$\epsilon(s_i) = h(s_i)^{1/2} z(s_i),$$
$$h(s_i) = \alpha_i + \rho \sum_{v=1}^{n} w_{iv} \epsilon(s_v)^2,$$

where $s_i$ denotes the $i$-th spatial location, $w_{iv}$ refers to the $iv$-th entry of a spatial weight matrix, and $w_{ii} = 0$ for $i = 1, \ldots, n$. The spatial weight matrix defines which locations are considered to be adjacent.
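Given observed squared residuals and a weight matrix, the volatility equation above is a single matrix-vector product. A minimal sketch follows; spatial_arch_h is an illustrative name, and a common $\rho$ is assumed for all locations as in the equation above.

```python
import numpy as np

def spatial_arch_h(eps_sq, W, alpha, rho):
    """Spatial volatility: h_i = alpha_i + rho * sum_v w_{iv} * eps(s_v)^2.
    W has a zero diagonal, so each location depends only on its neighbours."""
    return alpha + rho * W @ eps_sq
```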
In a different vein, the machine learning community has proposed the use of Gaussian process regression models to obtain a GARCH scheme. This results in a nonparametric modelling scheme, which allows for: (i) advanced robustness to overfitting, since the model marginalises over its parameters to perform inference, under a Bayesian inference rationale; and (ii) capturing highly non-linear dependencies without increasing model complexity.
In statistics, the Gauss–Markov theorem states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, if the errors in the linear regression model are uncorrelated, have equal variances and expectation value of zero. The errors do not need to be normal, nor do they need to be independent and identically distributed. The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator, ridge regression, or simply any degenerate estimator.
In statistics, a sequence of random variables is homoscedastic if all its random variables have the same finite variance. This is also known as homogeneity of variance. The complementary notion is called heteroscedasticity. The spellings homoskedasticity and heteroskedasticity are also frequently used.
In statistics, a vector of random variables is heteroscedastic if the variability of the random disturbance is different across elements of the vector. Here, variability could be quantified by the variance or any other measure of statistical dispersion. Thus heteroscedasticity is the absence of homoscedasticity. A typical example is the set of observations of income in different cities.
In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. OLS chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares: minimizing the sum of the squares of the differences between the observed dependent variable in the given dataset and those predicted by the linear function of the independent variable.
In statistics, simple linear regression is a linear regression model with a single explanatory variable. That is, it concerns two-dimensional sample points with one independent variable and one dependent variable and finds a linear function that, as accurately as possible, predicts the dependent variable values as a function of the independent variable. The adjective simple refers to the fact that the outcome variable is related to a single predictor.
In statistics, the Breusch–Pagan test, developed in 1979 by Trevor Breusch and Adrian Pagan, is used to test for heteroskedasticity in a linear regression model. It was independently suggested with some extension by R. Dennis Cook and Sanford Weisberg in 1983. Derived from the Lagrange multiplier test principle, it tests whether the variance of the errors from a regression is dependent on the values of the independent variables. In that case, heteroskedasticity is present.
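As an illustration, the studentized (Koenker) variant of the test reduces to an $nR^2$ statistic from an auxiliary regression of squared residuals on the regressors; breusch_pagan_lm below is a hypothetical helper, not a library function.

```python
import numpy as np

def breusch_pagan_lm(resid, X):
    """Studentized (Koenker) Breusch-Pagan statistic: regress squared residuals
    on the regressors X (plus intercept); LM = n * R^2 ~ chi2(k) under H0."""
    e2 = resid ** 2
    Xc = np.column_stack([np.ones(len(e2)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(Xc, e2, rcond=None)
    fitted = Xc @ beta
    r2 = 1.0 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return len(e2) * r2
```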
In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model when there is a certain degree of correlation between the residuals in a regression model. In these cases, ordinary least squares and weighted least squares can be statistically inefficient, or even give misleading inferences. GLS was first described by Alexander Aitken in 1936.
In statistics, stochastic volatility models are those in which the variance of a stochastic process is itself randomly distributed. They are used in the field of mathematical finance to evaluate derivative securities, such as options. The name derives from the models' treatment of the underlying security's volatility as a random process, governed by state variables such as the price level of the underlying security, the tendency of volatility to revert to some long-run mean value, and the variance of the volatility process itself, among others.
The single-index model (SIM) is a simple asset pricing model to measure both the risk and the return of a stock. The model was developed by William Sharpe in 1963 and is commonly used in the finance industry. Mathematically the SIM is expressed as:

$$r_{it} - r_f = \alpha_i + \beta_i (r_{mt} - r_f) + \epsilon_{it},$$

where $r_{it}$ is the return to stock $i$ in period $t$, $r_f$ is the risk-free rate, $r_{mt}$ is the return to the market portfolio in period $t$, $\alpha_i$ is the stock's alpha (abnormal return), $\beta_i$ is the stock's beta (responsiveness to the market return), and $\epsilon_{it}$ is the residual return.
In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above. The truncated normal distribution has wide applications in statistics and econometrics. For example, it is used to model the probabilities of the binary outcomes in the probit model and to model censored data in the tobit model.
The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors or Eicker–Huber–White standard errors, to recognize the contributions of Friedhelm Eicker, Peter J. Huber, and Halbert White.
In financial econometrics, an autoregressive conditional duration (ACD) model considers irregularly spaced and autocorrelated intertrade durations. ACD is analogous to GARCH. Indeed, in a continuous double auction, waiting times between two consecutive trades vary at random.
Financial models with long-tailed distributions and volatility clustering have been introduced to overcome problems with the realism of classical financial models. These classical models of financial time series typically assume homoskedasticity and normality, and so cannot explain stylized phenomena such as skewness, heavy tails, and volatility clustering of empirical asset returns in finance. In 1963, Benoit Mandelbrot first used the stable distribution to model the empirical distributions which have the skewness and heavy-tail property. Since $\alpha$-stable distributions have infinite $p$-th moments for all $p > \alpha$, the tempered stable processes have been proposed for overcoming this limitation of the stable distribution.
In Finance the Treynor–Black model is a mathematical model for security selection published by Fischer Black and Jack Treynor in 1973. The model assumes an investor who considers that most securities are priced efficiently, but who believes they have information that can be used to predict the abnormal performance (Alpha) of a few of them; the model finds the optimum portfolio to hold under such conditions.
In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.
In econometrics, the Park test is a test for heteroscedasticity. The test is based on the method proposed by Rolla Edward Park for estimating linear regression parameters in the presence of heteroscedastic error terms.
It is not yet clear in the finance literature that the asymmetric properties of variances are due to changing leverage. The name "leverage effect" is used simply because it is popular among researchers when referring to such a phenomenon.
Special attention to the model is given by the parameter of asymmetry [theta (θ)], which describes the correlation between returns and variance.⁶
⁶ In the case of analyzing stock returns, a positive value of [theta] reflects the empirically well-known leverage effect, indicating that a downward movement in the price of a stock causes more of an increase in variance than a same-value upward movement in the price of a stock, meaning that returns and variance are negatively correlated.