Granger causality

When time series X Granger-causes time series Y, the patterns in X are approximately repeated in Y after some time lag (two examples are indicated with arrows). Thus, past values of X can be used for the prediction of future values of Y.

The Granger causality test is a statistical hypothesis test for determining whether one time series is useful in forecasting another, first proposed in 1969. [1] Ordinarily, regressions reflect "mere" correlations, but Clive Granger argued that causality in economics could be tested for by measuring the ability to predict the future values of a time series using prior values of another time series. Since the question of "true causality" is deeply philosophical, and because of the post hoc ergo propter hoc fallacy of assuming that one thing preceding another can be used as proof of causation, econometricians assert that the Granger test finds only "predictive causality". [2] Using the term "causality" alone is a misnomer, as Granger causality is better described as "precedence", [3] or, as Granger himself later claimed in 1977, "temporally related". [4] Rather than testing whether X causes Y, the Granger causality test assesses whether X forecasts Y. [5]


A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.

Granger also stressed that some studies using "Granger causality" testing in areas outside economics reached "ridiculous" conclusions. [6] "Of course, many ridiculous papers appeared", he said in his Nobel lecture. [7] However, it remains a popular method for causality analysis in time series due to its computational simplicity. [8] [9] The original definition of Granger causality does not account for latent confounding effects and does not capture instantaneous and non-linear causal relationships, though several extensions have been proposed to address these issues. [8]


We say that a variable X that evolves over time Granger-causes another evolving variable Y if predictions of the value of Y based on its own past values and on the past values of X are better than predictions of Y based only on Y's own past values.

Underlying principles

Granger defined the causality relationship based on two principles: [8] [10]

  1. The cause happens prior to its effect.
  2. The cause has unique information about the future values of its effect.

Given these two assumptions about causality, Granger proposed to test the following hypothesis for identification of a causal effect of X on Y:

  P[Y(t+1) ∈ A | I(t)] ≠ P[Y(t+1) ∈ A | I_{−X}(t)]

where P refers to probability, A is an arbitrary non-empty set, and I(t) and I_{−X}(t) respectively denote the information available as of time t in the entire universe, and that in the modified universe in which X is excluded. If the above hypothesis is accepted, we say that X Granger-causes Y. [8] [10]


If a time series is a stationary process, the test is performed using the level values of two (or more) variables. If the variables are non-stationary, then the test is done using first (or higher) differences. The number of lags to be included is usually chosen using an information criterion, such as the Akaike information criterion or the Schwarz information criterion. Any particular lagged value of one of the variables is retained in the regression if (1) it is significant according to a t-test, and (2) it and the other lagged values of the variable jointly add explanatory power to the model according to an F-test. Then the null hypothesis of no Granger causality is not rejected if and only if no lagged values of an explanatory variable have been retained in the regression.

In practice it may be found that neither variable Granger-causes the other, or that each of the two variables Granger-causes the other.

Mathematical statement

Let y and x be stationary time series. To test the null hypothesis that x does not Granger-cause y, one first finds the proper lagged values of y to include in a univariate autoregression of y:

  y_t = a_0 + a_1·y_{t−1} + a_2·y_{t−2} + … + a_m·y_{t−m} + e_t

Next, the autoregression is augmented by including lagged values of x:

  y_t = a_0 + a_1·y_{t−1} + … + a_m·y_{t−m} + b_p·x_{t−p} + b_{p+1}·x_{t−p−1} + … + b_q·x_{t−q} + e_t

One retains in this regression all lagged values of x that are individually significant according to their t-statistics, provided that collectively they add explanatory power to the regression according to an F-test (whose null hypothesis is no explanatory power jointly added by the x's). In the notation of the above augmented regression, p is the shortest, and q is the longest, lag length for which the lagged value of x is significant.

The null hypothesis that x does not Granger-cause y is accepted if and only if no lagged values of x are retained in the regression.
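To make the procedure concrete, the comparison of the restricted and augmented regressions can be summarized by an F-statistic. The sketch below is illustrative only: it uses simulated data and fixed lag orders m and q, in place of the t-test and F-test lag selection described above, and estimates both regressions by ordinary least squares.

```python
import numpy as np

def granger_f_test(y, x, m=4, q=4):
    """F-statistic for the null that x does not Granger-cause y.

    Compares a restricted autoregression of y on its own m lags with an
    unrestricted regression that adds q lags of x. Illustrative sketch:
    lag orders are fixed rather than selected by significance tests.
    """
    n = len(y)
    k = max(m, q)
    Y = y[k:]                                   # dependent variable
    # Restricted design: constant plus m lags of y
    Xr = np.column_stack([np.ones(n - k)] +
                         [y[k - j:n - j] for j in range(1, m + 1)])
    # Unrestricted design: also include q lags of x
    Xu = np.column_stack([Xr] + [x[k - j:n - j] for j in range(1, q + 1)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = n - k - Xu.shape[1]                    # residual degrees of freedom
    return ((rss_r - rss_u) / q) / (rss_u / df)

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):                         # y depends on x lagged by 2
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

print(granger_f_test(y, x))   # large: x helps predict y
print(granger_f_test(x, y))   # small: y does not help predict x
```

In this simulated example x drives y at lag 2, so the first F-statistic is large while the second is small; in practice each statistic would be compared against the appropriate F critical value.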

Multivariate analysis

Multivariate Granger causality analysis is usually performed by fitting a vector autoregressive model (VAR) to the time series. In particular, let X(t) for t = 1, …, T be a d-dimensional multivariate time series. Granger causality is performed by fitting a VAR model with L time lags as follows:

  X(t) = A_1·X(t−1) + A_2·X(t−2) + … + A_L·X(t−L) + ε(t)

where ε(t) is a white Gaussian random vector, and A_τ is a matrix for every τ. A time series X_i is called a Granger cause of another time series X_j if at least one of the elements A_τ(j, i) for τ = 1, …, L is significantly larger than zero (in absolute value). [11]
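A minimal sketch of this procedure, assuming a 2-dimensional series and a single lag (so one coefficient matrix A), with the coefficients estimated by least squares on simulated data; a full analysis would also attach significance tests to the estimated coefficients.

```python
import numpy as np

# Sketch of multivariate Granger analysis with a VAR(1) on simulated data.
rng = np.random.default_rng(1)
T, d = 1000, 2
X = np.zeros((T, d))
A_true = np.array([[0.5, 0.0],     # series 0 does not depend on series 1
                   [0.4, 0.5]])    # series 1 depends on lagged series 0
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + rng.normal(scale=0.1, size=d)

# Estimate A by regressing X[t] on X[t-1] (no intercept; data is zero-mean)
B, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = B.T                        # row j: coefficients predicting series j
print(A_hat)
# A_hat[1, 0] clearly nonzero: series 0 Granger-causes series 1;
# A_hat[0, 1] near zero: series 1 does not Granger-cause series 0.
```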

Non-parametric test

The above linear methods are appropriate for testing Granger causality in the mean. However they are not able to detect Granger causality in higher moments, e.g., in the variance. Non-parametric tests for Granger causality are designed to address this problem. [12] The definition of Granger causality in these tests is general and does not involve any modelling assumptions, such as a linear autoregressive model. The non-parametric tests for Granger causality can be used as diagnostic tools to build better parametric models including higher order moments and/or non-linearity. [13]


As its name implies, Granger causality is not necessarily true causality. In fact, Granger-causality tests fulfill only the Humean definition of causality, which identifies cause-effect relations with constant conjunctions. [14] If both X and Y are driven by a common third process with different lags, the test may still reject the null hypothesis of no Granger causality, even though manipulating one of the variables would not change the other. Indeed, Granger-causality tests are designed to handle pairs of variables, and may produce misleading results when the true relationship involves three or more variables. That said, it has been argued that, given a probabilistic view of causation, Granger causality can be considered true causality in that sense, especially when Reichenbach's "screening off" notion of probabilistic causation is taken into account. [15] Other possible sources of misleading test results are: (1) sampling that is too infrequent or too frequent, (2) a nonlinear causal relationship, (3) time series nonstationarity and nonlinearity, and (4) the existence of rational expectations. [14] A similar test involving more variables can be applied with vector autoregression.


A method for Granger causality has been developed that is not sensitive to deviations from the assumption that the error term is normally distributed. [16] This method is especially useful in financial economics, since many financial variables are non-normally distributed. [17] Recently, asymmetric causality testing has been suggested in the literature in order to separate the causal impact of positive changes from that of negative ones. [18] An extension of Granger (non-)causality testing to panel data is also available. [19] A modified Granger causality test based on the GARCH (generalized autoregressive conditional heteroscedasticity) type of integer-valued time series models has also been developed and applied in several fields. [20] [21]

In neuroscience

A long-held belief about neural function maintained that different areas of the brain were task specific: that the structural connectivity local to a certain area somehow dictated the function of that piece. Drawing on work performed over many years, there has been a move toward a different, network-centric approach to describing information flow in the brain. Explanation of function is beginning to include the concept of networks existing at different levels and throughout different locations in the brain. [22] The behavior of these networks can be described by non-deterministic processes that evolve through time; that is, given the same input stimulus, the network will not always produce the same output. The dynamics of these networks are governed by probabilities, so they are treated as stochastic (random) processes in order to capture these kinds of dynamics between different areas of the brain.

Different methods of obtaining some measure of information flow from the firing activities of a neuron and its surrounding ensemble have been explored in the past, but they are limited in the kinds of conclusions that can be drawn and provide little insight into the directional flow of information, its effect size, and how it can change with time. [23] Recently, Granger causality has been applied to address some of these issues with great success. [24] Put plainly, one examines how best to predict the future of a neuron: using either the entire ensemble or the entire ensemble excluding a certain target neuron. If the prediction is made worse by excluding the target neuron, then we say it has a "g-causal" relationship with the current neuron.

Extensions to point process models

Previous Granger-causality methods could only operate on continuous-valued data so the analysis of neural spike train recordings involved transformations that ultimately altered the stochastic properties of the data, indirectly altering the validity of the conclusions that could be drawn from it. In 2011, however, a new general-purpose Granger-causality framework was proposed that could directly operate on any modality, including neural-spike trains. [23]

Neural spike train data can be modeled as a point process. A temporal point process is a stochastic time series of binary events that occur in continuous time: at each point in time it takes one of only two values, indicating whether or not an event has actually occurred. This type of binary-valued representation suits the activity of neural populations because a single neuron's action potential has a typical waveform; what carries the information a neuron outputs is the occurrence of a "spike", together with the times between successive spikes. Using this approach, one can abstract the flow of information in a neural network to simply the spiking times of each neuron through an observation period. A point process can be represented by the timing of the spikes themselves, by the waiting times between spikes via a counting process, or, if time is discretized finely enough that each window can contain at most one event, as a binary sequence of 1s and 0s.
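As a small illustration of the discretized representation, hypothetical spike times can be binned into a binary sequence (assuming the bin width is small enough that no bin holds more than one spike):

```python
import numpy as np

# Discretize spike times (in seconds) into binary bins; the spike times
# and observation window here are hypothetical.
spike_times = np.array([0.013, 0.021, 0.034, 0.058, 0.079])
bin_width = 0.005                       # 5 ms bins
n_bins = int(np.ceil(0.1 / bin_width))  # 100 ms observation period
counts = np.histogram(spike_times, bins=n_bins, range=(0.0, 0.1))[0]
assert counts.max() <= 1                # binary representation is valid
binary_train = counts.astype(int)       # the 1s-and-0s representation
isi = np.diff(spike_times)              # waiting times between spikes
print(binary_train)
print(isi)
```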

One of the simplest types of neural spiking model is the Poisson process. This, however, is limited in that it is memoryless: it does not account for any spiking history when calculating the current probability of firing. Neurons, however, exhibit a fundamental (biophysical) history dependence by way of their relative and absolute refractory periods. To address this, a conditional intensity function is used to represent the probability of a neuron spiking, conditioned on its own history. The conditional intensity function expresses the instantaneous firing probability and implicitly defines a complete probability model for the point process. It defines a probability per unit time, so if this unit of time is taken small enough to ensure that only one spike could occur in that window, then the conditional intensity function completely specifies the probability that a given neuron will fire at a certain time.
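The following toy sketch illustrates the idea: a conditional intensity that is suppressed immediately after each spike and recovers with an assumed exponential time constant. The functional form and parameters here are illustrative choices, not fitted to data.

```python
import numpy as np

# Toy conditional-intensity simulation with refractoriness: the firing
# probability in each small bin is suppressed just after a spike.
rng = np.random.default_rng(2)
dt = 0.001                  # 1 ms bins: at most one spike per bin
base_rate = 20.0            # spikes per second absent any history effect
tau_ref = 0.005             # assumed recovery time constant (s)

spikes = []
last_spike = -np.inf
for i in range(2000):       # simulate a 2 s observation window
    t = i * dt
    # Conditional intensity: baseline rate scaled by a recovery factor
    lam = base_rate * (1.0 - np.exp(-(t - last_spike) / tau_ref))
    if rng.random() < lam * dt:   # probability of a spike in this bin
        spikes.append(t)
        last_spike = t
print(len(spikes))          # roughly base_rate * 2 s, reduced by refractoriness
```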


References

  1. Granger, C. W. J. (1969). "Investigating Causal Relations by Econometric Models and Cross-spectral Methods". Econometrica. 37 (3): 424–438. doi:10.2307/1912791. JSTOR 1912791.
  2. Diebold, Francis X. (2007). Elements of Forecasting (4th ed.). Thomson South-Western. pp. 230–231. ISBN 978-0324359046.
  3. Leamer, Edward E. (1985). "Vector Autoregressions for Causal Inference?". Carnegie-Rochester Conference Series on Public Policy. 22: 283. doi:10.1016/0167-2231(85)90035-1.
  4. Granger, C. W. J.; Newbold, Paul (1977). Forecasting Economic Time Series. New York: Academic Press. p. 225. ISBN 0122951506.
  5. Hamilton, James D. (1994). Time Series Analysis. Princeton University Press. pp. 306–308. ISBN 0-691-04289-6.
  6. Thurman, Walter (1988). "Chickens, Eggs, and Causality or Which Came First?". American Journal of Agricultural Economics. 70 (2): 237–238. doi:10.2307/1242062. JSTOR 1242062. Retrieved 2 April 2022.
  7. Granger, Clive W. J. (2004). "Time Series Analysis, Cointegration, and Applications". American Economic Review. 94 (3): 421–425. doi:10.1257/0002828041464669. S2CID 154709108. Retrieved 12 June 2019.
  8. Eichler, Michael (2012). "Causal Inference in Time Series Analysis". In Berzuini, Carlo (ed.). Causality: Statistical Perspectives and Applications (3rd ed.). Hoboken, N.J.: Wiley. pp. 327–352. ISBN 978-0470665565.
  9. Seth, Anil (2007). "Granger causality". Scholarpedia. 2 (7): 1667. Bibcode:2007SchpJ...2.1667S. doi:10.4249/scholarpedia.1667.
  10. Granger, C. W. J. (1980). "Testing for causality: A personal viewpoint". Journal of Economic Dynamics and Control. 2: 329–352. doi:10.1016/0165-1889(80)90069-X.
  11. Lütkepohl, Helmut (2005). New Introduction to Multiple Time Series Analysis (3rd ed.). Berlin: Springer. pp. 41–51. ISBN 978-3540262398.
  12. Diks, Cees; Panchenko, Valentyn (2006). "A new statistic and practical guidelines for nonparametric Granger causality testing". Journal of Economic Dynamics and Control. 30 (9): 1647–1669. doi:10.1016/j.jedc.2005.08.008.
  13. Francis, Bill B.; Mougoue, Mbodja; Panchenko, Valentyn (2010). "Is there a Symmetric Nonlinear Causal Relationship between Large and Small Firms?". Journal of Empirical Finance. 17 (1): 23–28. doi:10.1016/j.jempfin.2009.08.003.
  14. Maziarz, Mariusz (2015). "A review of the Granger-causality fallacy". The Journal of Philosophical Economics. VIII (2). ISSN 1843-2298.
  15. Mannino, Michael; Bressler, Steven L. (2015). "Foundational perspectives on causality in large-scale brain networks". Physics of Life Reviews. 15: 107–123. Bibcode:2015PhLRv..15..107M. doi:10.1016/j.plrev.2015.09.002. PMID 26429630.
  16. Hacker, R. Scott; Hatemi-J, A. (2006). "Tests for causality between integrated variables using asymptotic and bootstrap distributions: Theory and application". Applied Economics. 38 (13): 1489–1500. doi:10.1080/00036840500405763. S2CID 121999615.
  17. Mandelbrot, Benoit (1963). "The Variation of Certain Speculative Prices". The Journal of Business. 36 (4): 394–419. doi:10.1086/294632.
  18. Hatemi-J, A. (2012). "Asymmetric causality tests with an application". Empirical Economics. 43: 447–456. doi:10.1007/s00181-011-0484-x. S2CID 153562476.
  19. Dumitrescu, E.-I.; Hurlin, C. (2012). "Testing for Granger non-causality in heterogeneous panels". Economic Modelling. 29 (4): 1450–1460. doi:10.1016/j.econmod.2012.02.014. S2CID 9227921.
  20. Chen, Cathy W. S.; Hsieh, Ying-Hen; Su, Hung-Chieh; Wu, Jia Jing (2018). "Causality test of ambient fine particles and human influenza in Taiwan: Age group-specific disparity and geographic heterogeneity". Environment International. 111: 354–361. doi:10.1016/j.envint.2017.10.011. ISSN 0160-4120. PMID 29173968.
  21. Chen, Cathy W. S.; Lee, Sangyeol (2017). "Bayesian causality test for integer-valued time series models with applications to climate and crime data". Journal of the Royal Statistical Society, Series C (Applied Statistics). 66 (4): 797–814. doi:10.1111/rssc.12200. ISSN 1467-9876. S2CID 125296454.
  22. Knight, R. T. (2007). "Neuroscience: Neural Networks Debunk Phrenology". Science. 316 (5831): 1578–1579. doi:10.1126/science.1144677. PMID 17569852. S2CID 15065228.
  23. Kim, Sanggyun; Putrino, David; Ghosh, Soumya; Brown, Emery N. (2011). "A Granger Causality Measure for Point Process Models of Ensemble Neural Spiking Activity". PLOS Computational Biology. 7 (3): e1001110. Bibcode:2011PLSCB...7E1110K. doi:10.1371/journal.pcbi.1001110. PMC 3063721. PMID 21455283.
  24. Bressler, Steven L.; Seth, Anil K. (2011). "Wiener–Granger Causality: A well established methodology". NeuroImage. 58 (2): 323–329. doi:10.1016/j.neuroimage.2010.02.059. PMID 20202481. S2CID 36616970.
