Vector autoregression (VAR) is a statistical model used to capture the relationship between multiple quantities as they change over time. VAR is a type of stochastic process model. VAR models generalize the single-variable (univariate) autoregressive model by allowing for multivariate time series. VAR models are often used in economics and the natural sciences.
As in the univariate autoregressive model, each variable in a VAR has an equation modelling its evolution over time. This equation includes the variable's own lagged (past) values, the lagged values of the other variables in the model, and an error term. VAR models do not require as much knowledge about the forces influencing a variable as do structural models with simultaneous equations. The only prior knowledge required is a list of variables which can be hypothesized to affect each other over time.
A VAR model describes the evolution of a set of k variables, called endogenous variables, over time. Each period of time is numbered, t = 1, ..., T. The variables are collected in a vector, yt, which is of length k. (Equivalently, this vector might be described as a (k × 1)-matrix.) The vector is modelled as a linear function of its previous value. The vector's components are referred to as yi,t, meaning the observation at time t of the i th variable. For example, if the first variable in the model measures the price of wheat over time, then y1,1998 would indicate the price of wheat in the year 1998.
VAR models are characterized by their order, which refers to the number of earlier time periods the model will use. Continuing the above example, a 5th-order VAR would model each year's wheat price as a linear combination of the last five years of wheat prices. A lag is the value of a variable in a previous time period. So in general a pth-order VAR refers to a VAR model which includes lags for the last p time periods. A pth-order VAR is denoted "VAR(p)" and sometimes called "a VAR with p lags". A pth-order VAR model is written as

y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + e_t.
The variables of the form y_{t-i} indicate that variable's value i time periods earlier and are called the "ith lag" of y_t. The variable c is a k-vector of constants serving as the intercept of the model. A_i is a time-invariant (k × k)-matrix and e_t is a k-vector of error terms. The error terms must satisfy three conditions:

1. E(e_t) = 0: every error term has a mean of zero;
2. E(e_t e_t') = Ω: the contemporaneous covariance matrix of the error terms is a (k × k) positive-semidefinite matrix denoted Ω;
3. E(e_t e_{t-k}') = 0 for any non-zero k: there is no correlation across time; in particular, no serial correlation in individual error terms.
The process of choosing the maximum lag p in the VAR model requires special attention because inference depends on the correctness of the selected lag order. [2] [3]
Note that all variables have to be of the same order of integration. The following cases are distinct:

- All variables are I(0) (stationary): this is the standard case, i.e. a VAR in levels.
- All variables are I(d) (non-stationary) with d > 0:
  - The variables are cointegrated: an error correction term has to be included in the VAR. The model becomes a vector error correction model (VECM), which can be seen as a restricted VAR.
  - The variables are not cointegrated: the variables have first to be differenced d times, giving a VAR in differences.
One can stack the vectors in order to write a VAR(p) as a stochastic matrix difference equation, with a concise matrix notation:

Y = BZ + U,

where Y collects the observations y_t, B = [c, A_1, ..., A_p] collects the intercept and coefficient matrices, Z collects a row of ones and the lagged observations, and U collects the error terms.
A VAR(1) in two variables can be written in matrix form (more compact notation) as

\begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix} c_{1} \\ c_{2} \end{bmatrix} + \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} e_{1,t} \\ e_{2,t} \end{bmatrix}
(in which only a single A matrix appears because this example has a maximum lag p equal to 1), or, equivalently, as the following system of two equations

y_{1,t} = c_{1} + a_{1,1} y_{1,t-1} + a_{1,2} y_{2,t-1} + e_{1,t}
y_{2,t} = c_{2} + a_{2,1} y_{1,t-1} + a_{2,2} y_{2,t-1} + e_{2,t}.
Each variable in the model has one equation. The current (time t) observation of each variable depends on its own lagged values as well as on the lagged values of each other variable in the VAR.
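To make this data-generating process concrete, here is a minimal Python sketch that simulates the bivariate VAR(1) written out above. The intercepts, coefficient matrix and error covariance are arbitrary illustrative values, not estimates from any data set.

```python
import numpy as np

# Hypothetical VAR(1) parameters for a two-variable system (illustrative only)
c = np.array([0.5, 1.0])                      # intercept vector
A = np.array([[0.6, 0.2],
              [0.1, 0.4]])                    # lag-1 coefficient matrix (eigenvalues inside the unit circle)
omega = np.array([[1.0, 0.3],
                  [0.3, 0.5]])                # contemporaneous error covariance

rng = np.random.default_rng(0)
T = 200
y = np.zeros((T, 2))
for t in range(1, T):
    e_t = rng.multivariate_normal(np.zeros(2), omega)   # draw e_t with mean zero and covariance omega
    y[t] = c + A @ y[t - 1] + e_t                        # the VAR(1) recursion

print(y[-5:])   # last few simulated observations
```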
A VAR with p lags can always be equivalently rewritten as a VAR with only one lag by appropriately redefining the dependent variable. The transformation amounts to stacking the lags of the VAR(p) variable in the new VAR(1) dependent variable and appending identities to complete the precise number of equations.
For example, the VAR(2) model

y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + e_t
can be recast as the VAR(1) model

\begin{bmatrix} y_t \\ y_{t-1} \end{bmatrix} = \begin{bmatrix} c \\ 0 \end{bmatrix} + \begin{bmatrix} A_1 & A_2 \\ I & 0 \end{bmatrix} \begin{bmatrix} y_{t-1} \\ y_{t-2} \end{bmatrix} + \begin{bmatrix} e_t \\ 0 \end{bmatrix}
where I is the identity matrix.
The equivalent VAR(1) form is more convenient for analytical derivations and allows more compact statements.
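As a small illustration of this rewriting, the following sketch stacks hypothetical lag matrices A_1, ..., A_p into the VAR(1) companion matrix and checks stability through its eigenvalues; the coefficient values are assumptions chosen only for demonstration.

```python
import numpy as np

def companion_matrix(lag_matrices):
    """Stack the (k x k) lag matrices A_1..A_p of a VAR(p) into the (kp x kp) VAR(1) companion matrix."""
    p = len(lag_matrices)
    k = lag_matrices[0].shape[0]
    top = np.hstack(lag_matrices)                                            # [A_1  A_2  ...  A_p]
    bottom = np.hstack([np.eye(k * (p - 1)), np.zeros((k * (p - 1), k))])    # identities that carry the lags forward
    return np.vstack([top, bottom])

# Hypothetical VAR(2) coefficient matrices (illustrative values)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.3]])
A2 = np.array([[0.2, 0.0],
               [0.0, 0.1]])

F = companion_matrix([A1, A2])
eigenvalues = np.linalg.eigvals(F)
print("stable:", np.all(np.abs(eigenvalues) < 1))   # stable if all eigenvalues lie inside the unit circle
```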
A structural VAR with p lags (sometimes abbreviated SVAR) is

B_0 y_t = c_0 + B_1 y_{t-1} + B_2 y_{t-2} + \cdots + B_p y_{t-p} + \varepsilon_t,
where c0 is a k × 1 vector of constants, Bi is a k × k matrix (for every i = 0, ..., p) and εt is a k × 1 vector of error terms. The main diagonal terms of the B0 matrix (the coefficients on the ith variable in the ith equation) are scaled to 1.
The error terms ε_t (structural shocks) satisfy the conditions (1)–(3) in the definition above, with the particularity that all the off-diagonal elements of their covariance matrix are zero. That is, the structural shocks are uncorrelated.
For example, a two variable structural VAR(1) is:

\begin{bmatrix} 1 & B_{0;1,2} \\ B_{0;2,1} & 1 \end{bmatrix} \begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix} c_{0;1} \\ c_{0;2} \end{bmatrix} + \begin{bmatrix} B_{1;1,1} & B_{1;1,2} \\ B_{1;2,1} & B_{1;2,2} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1,t} \\ \varepsilon_{2,t} \end{bmatrix}
where

\Sigma = E(\varepsilon_t \varepsilon_t') = \begin{bmatrix} \sigma_{1}^2 & 0 \\ 0 & \sigma_{2}^2 \end{bmatrix};

that is, the variances of the structural shocks are denoted var(ε_i) = σ_i^2 (i = 1, 2) and the covariance is cov(ε_1, ε_2) = 0.
Writing the first equation explicitly and passing y_{2,t} to the right hand side one obtains

y_{1,t} = c_{0;1} - B_{0;1,2} y_{2,t} + B_{1;1,1} y_{1,t-1} + B_{1;1,2} y_{2,t-1} + \varepsilon_{1,t}.
Note that y_{2,t} can have a contemporaneous effect on y_{1,t} if B_{0;1,2} is not zero. This is different from the case when B_0 is the identity matrix (all off-diagonal elements are zero, the case in the initial definition), in which y_{2,t} can directly impact y_{1,t+1} and subsequent future values, but not y_{1,t}.
Because of the parameter identification problem, ordinary least squares estimation of the structural VAR would yield inconsistent parameter estimates. This problem can be overcome by rewriting the VAR in reduced form.
From an economic point of view, if the joint dynamics of a set of variables can be represented by a VAR model, then the structural form is a depiction of the underlying, "structural", economic relationships. Two features of the structural form make it the preferred candidate to represent the underlying relations:

1. Error terms are not correlated. The structural, economic shocks which drive the dynamics of the economic variables are assumed to be independent, which implies zero correlation between error terms as a desired property. This is helpful for separating out the effects of economically unrelated influences in the VAR.
2. Variables can have a contemporaneous impact on other variables. This is a desirable feature, especially when using low-frequency (e.g. quarterly) data.
By premultiplying the structural VAR with the inverse of B_0

y_t = B_0^{-1} c_0 + B_0^{-1} B_1 y_{t-1} + \cdots + B_0^{-1} B_p y_{t-p} + B_0^{-1} \varepsilon_t,
and denoting

c = B_0^{-1} c_0, \quad A_i = B_0^{-1} B_i \ (i = 1, \ldots, p), \quad e_t = B_0^{-1} \varepsilon_t,
one obtains the pth order reduced VAR

y_t = c + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_p y_{t-p} + e_t.
Note that in the reduced form all right hand side variables are predetermined at time t. As there are no time t endogenous variables on the right hand side, no variable has a direct contemporaneous effect on other variables in the model.
However, the error terms in the reduced VAR are composites of the structural shocks: e_t = B_0^{-1} ε_t. Thus, the occurrence of one structural shock ε_{i,t} can potentially lead to the occurrence of shocks in all error terms e_{j,t}, thus creating contemporaneous movement in all endogenous variables. Consequently, the covariance matrix of the reduced VAR,

\Omega = E(e_t e_t') = E(B_0^{-1} \varepsilon_t \varepsilon_t' (B_0^{-1})') = B_0^{-1} \Sigma (B_0^{-1})',

can have non-zero off-diagonal elements, thus allowing non-zero correlation between error terms.
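A short numerical sketch can make this point concrete: given a hypothetical B_0 with unit diagonal and a diagonal structural covariance Σ, the implied reduced-form covariance Ω = B_0^{-1} Σ (B_0^{-1})' generally has non-zero off-diagonal entries. The matrices below are assumed values used purely for illustration.

```python
import numpy as np

# Hypothetical structural parameters (illustrative only)
B0 = np.array([[1.0, 0.5],
               [0.2, 1.0]])             # contemporaneous coefficient matrix, unit diagonal
sigma = np.diag([1.0, 0.5])             # diagonal covariance of the structural shocks

B0_inv = np.linalg.inv(B0)
omega = B0_inv @ sigma @ B0_inv.T       # covariance of the reduced-form errors e_t = B0^{-1} eps_t
print(omega)                            # off-diagonal entries are generally non-zero
```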
Starting from the concise matrix notation

Y = BZ + U,

the multivariate least squares (MLS) approach for estimating B yields

\hat{B} = Y Z' (Z Z')^{-1}.

This can be written alternatively as:

\operatorname{Vec}(\hat{B}) = \left( (Z Z')^{-1} Z \otimes I_k \right) \operatorname{Vec}(Y),

where \otimes denotes the Kronecker product and Vec the vectorization of the indicated matrix.
This estimator is consistent and asymptotically efficient. It is furthermore equal to the conditional maximum likelihood estimator. [4]
As in the standard case, the maximum likelihood estimator (MLE) of the covariance matrix differs from the ordinary least squares (OLS) estimator.
MLE estimator: [citation needed]

\hat{\Sigma} = \frac{1}{T} \sum_{t=1}^{T} \hat{e}_t \hat{e}_t'
OLS estimator: [citation needed]

\hat{\Sigma} = \frac{1}{T - kp - 1} \sum_{t=1}^{T} \hat{e}_t \hat{e}_t'

for a model with a constant, k variables and p lags.
In a matrix notation, this gives:

\hat{\Sigma} = \frac{1}{T - kp - 1} (Y - \hat{B} Z)(Y - \hat{B} Z)'.
The covariance matrix of the parameters can be estimated as [citation needed]

\widehat{\operatorname{Cov}}(\operatorname{Vec}(\hat{B})) = (Z Z')^{-1} \otimes \hat{\Sigma}.
Vector autoregression models often involve the estimation of many parameters. For example, with seven variables and four lags, each matrix of coefficients for a given lag length is 7 by 7, and the vector of constants has 7 elements, so a total of 49×4 + 7 = 203 parameters are estimated, substantially lowering the degrees of freedom of the regression (the number of data points minus the number of parameters to be estimated). This can hurt the accuracy of the parameter estimates and hence of the forecasts given by the model.
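The closed-form estimators above amount to a few lines of linear algebra. The sketch below computes B̂ = Y Z'(Z Z')^{-1} and the degrees-of-freedom-adjusted residual covariance from a (T × k) data array; the lag order and the simulated white-noise input are assumptions made purely for illustration.

```python
import numpy as np

def estimate_var(y, p):
    """Multivariate least squares for a VAR(p): returns (B_hat, sigma_hat).

    y is a (T x k) array of observations; B_hat has shape (k, 1 + k*p),
    with the intercept in the first column followed by A_1, ..., A_p.
    """
    T, k = y.shape
    rows = []
    for t in range(p, T):
        # regressor z_t = [1, y_{t-1}', ..., y_{t-p}']'
        rows.append(np.concatenate([[1.0]] + [y[t - i] for i in range(1, p + 1)]))
    Z = np.array(rows).T                        # (1 + k*p) x (T - p)
    Y = y[p:].T                                 # k x (T - p)
    B_hat = Y @ Z.T @ np.linalg.inv(Z @ Z.T)    # B_hat = Y Z'(Z Z')^{-1}
    resid = Y - B_hat @ Z
    n_obs = T - p
    sigma_hat = resid @ resid.T / (n_obs - k * p - 1)   # OLS (degrees-of-freedom adjusted) covariance
    return B_hat, sigma_hat

# Example usage on white-noise data with two variables (illustrative only)
rng = np.random.default_rng(1)
data = rng.standard_normal((200, 2))
B_hat, sigma_hat = estimate_var(data, p=2)
print(B_hat.shape, sigma_hat.shape)             # (2, 5) and (2, 2)
```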
Consider the first-order case (i.e., with only one lag), with equation of evolution

y_t = A y_{t-1} + e_t,
for evolving (state) vector y_t and vector e_t of shocks. To find, say, the effect of the j-th element of the vector of shocks upon the i-th element of the state vector 2 periods later, which is a particular impulse response, first write the above equation of evolution one period lagged:

y_{t-1} = A y_{t-2} + e_{t-1}.
Use this in the original equation of evolution to obtain

y_t = A^2 y_{t-2} + A e_{t-1} + e_t,
then repeat using the twice lagged equation of evolution, to obtain

y_t = A^3 y_{t-3} + A^2 e_{t-2} + A e_{t-1} + e_t.
From this, the effect of the j-th component of e_{t-2} upon the i-th component of y_t is the i, j element of the matrix A^2.
It can be seen from this induction process that any shock will have an effect on the elements of y infinitely far forward in time, although the effect will become smaller and smaller over time assuming that the AR process is stable — that is, that all the eigenvalues of the matrix A are less than 1 in absolute value.
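Following this derivation, impulse responses of a VAR(1) can be read off as successive powers of the coefficient matrix: the response of variable i to a unit shock in variable j after h periods is the (i, j) element of A^h. The sketch below computes these for a hypothetical stable matrix A chosen only for illustration.

```python
import numpy as np

# Hypothetical stable VAR(1) coefficient matrix (illustrative only)
A = np.array([[0.6, 0.2],
              [0.1, 0.4]])

horizon = 10
responses = [np.linalg.matrix_power(A, h) for h in range(horizon + 1)]  # A^0 = I, A^1, A^2, ...

# Effect of a unit shock to variable j on variable i after h periods: responses[h][i, j]
print(responses[2][0, 1])   # e.g. response of variable 1 to a shock in variable 2 after 2 periods
```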
An estimated VAR model can be used for forecasting, and the quality of the forecasts can be judged, in ways that are completely analogous to the methods used in univariate autoregressive modelling.
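In practice such forecasts are rarely computed by hand; one commonly used option is the VAR implementation in the Python statsmodels library. The sketch below, which assumes statsmodels is installed and uses simulated placeholder data, fits a VAR with two lags and produces five-step-ahead point forecasts.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# Placeholder data: 200 observations of 2 variables (white noise, purely illustrative)
rng = np.random.default_rng(2)
data = rng.standard_normal((200, 2))

model = VAR(data)
results = model.fit(2)                                       # fit a VAR(2) by least squares
forecast = results.forecast(data[-results.k_ar:], steps=5)   # 5-step-ahead point forecasts
print(forecast.shape)                                        # (5, 2): one row per horizon, one column per variable
```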
Christopher Sims advocated VAR models, criticizing the claims and performance of earlier modelling in macroeconomic econometrics. [6] VAR models had previously appeared in time series statistics and in system identification, a statistical specialty in control theory. Sims argued that VAR models provide a theory-free method to estimate economic relationships, an alternative to the "incredible identification restrictions" in structural models. [6] VAR models are also increasingly used in health research for automatic analyses of diary data [7] or sensor data. Sio Iong Ao and R. E. Caraka found that artificial neural networks can improve their performance with the addition of a hybrid vector autoregression component. [8] [9]