SETAR (model)

In statistics, Self-Exciting Threshold AutoRegressive (SETAR) models are typically applied to time series data as an extension of autoregressive models, in order to allow for a higher degree of flexibility in model parameters through a regime-switching behaviour.

Given a time series of data x_t, the SETAR model is a tool for understanding and, perhaps, predicting future values in this series, assuming that the behaviour of the series changes once the series enters a different regime. The switch from one regime to another depends on the past values of the x series (hence the Self-Exciting portion of the name).

The model consists of k autoregressive (AR) parts, each for a different regime. The model is usually referred to as the SETAR(k, p) model, where k is the number of regimes (separated by k − 1 thresholds) and p is the order of the autoregressive part. Since the order can differ between regimes, the p portion is sometimes dropped and models are denoted simply as SETAR(k).
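
The regime-switching mechanism can be illustrated with a short sketch (not part of the original article; the threshold values used below are arbitrary): the regime active at time t is found by comparing a lagged value of the series with the ordered thresholds.

```python
import bisect

def regime(y_past, thresholds):
    """Return the index of the regime implied by a past value of the series.

    `thresholds` must be sorted in increasing order; with k regimes there are
    k - 1 finite thresholds, so the returned index lies in 0, ..., k - 1.
    Ties go to the lower regime, matching the convention r_{j-1} < z_t <= r_j
    used in the Definition section below.
    """
    return bisect.bisect_left(thresholds, y_past)

# Two (arbitrary) thresholds split the real line into three regimes.
print(regime(-1.0, [-0.5, 0.5]))  # 0: lowest regime
print(regime(0.0,  [-0.5, 0.5]))  # 1: middle regime
print(regime(2.0,  [-0.5, 0.5]))  # 2: highest regime
```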

Definition

Autoregressive Models

Consider a simple AR(p) model for a time series y_t:

y_t = \gamma_0 + \gamma_1 y_{t-1} + \gamma_2 y_{t-2} + \dots + \gamma_p y_{t-p} + \epsilon_t

where:

\gamma_i, for i = 1, 2, \dots, p, are the autoregressive coefficients, assumed to be constant over time, and \gamma_0 is an intercept term;
\epsilon_t stands for a white-noise error term with constant variance.

This can be written in the following vector form:

y_t = X_t \gamma + \epsilon_t

where:

X_t = (1, y_{t-1}, y_{t-2}, \dots, y_{t-p}) is a row vector of variables;
\gamma = (\gamma_0, \gamma_1, \dots, \gamma_p)' is the vector of parameters;
\epsilon_t stands for a white-noise error term with constant variance.
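
As a brief illustrative sketch (not from the article), the vector form can be used directly to simulate an AR(p) process; the coefficient values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar(gamma, sigma, n, burn=200):
    """Simulate y_t = X_t @ gamma + eps_t with X_t = (1, y_{t-1}, ..., y_{t-p}).

    `gamma` holds (gamma_0, gamma_1, ..., gamma_p): intercept plus AR coefficients.
    """
    p = len(gamma) - 1
    y = np.zeros(n + burn)
    for t in range(p, n + burn):
        X_t = np.r_[1.0, y[t - p:t][::-1]]          # row vector (1, y_{t-1}, ..., y_{t-p})
        y[t] = X_t @ gamma + sigma * rng.standard_normal()
    return y[burn:]                                  # discard the burn-in period

# Example: AR(2) with arbitrary coefficients gamma = (0.1, 0.5, -0.3).
y = simulate_ar(np.array([0.1, 0.5, -0.3]), sigma=1.0, n=500)
print(y[:5])
```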

SETAR as an Extension of the Autoregressive Model

SETAR models were introduced by Howell Tong in 1977 and more fully developed in the seminal paper (Tong and Lim, 1980). They can be thought of as an extension of autoregressive models that allows for changes in the model parameters according to the value of a weakly exogenous threshold variable z_t, assumed to be a past value of y, e.g. y_{t-d}, where d is the delay parameter triggering the changes.

Defined in this way, the SETAR model can be presented as follows:

y_t = X_t \gamma^{(j)} + \epsilon_t^{(j)} \quad \text{if} \quad r_{j-1} < z_t \leq r_j, \qquad j = 1, \dots, k

where:

X_t = (1, y_{t-1}, y_{t-2}, \dots, y_{t-p}) is a row vector of variables;
\gamma^{(j)} is the vector of parameters in regime j;
\epsilon_t^{(j)} stands for a white-noise error term whose variance may differ across regimes;
-\infty = r_0 < r_1 < \dots < r_{k-1} < r_k = +\infty, where r_1, \dots, r_{k-1} are the k − 1 non-trivial thresholds dividing the domain of z_t into k different regimes.
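
As an illustration (a sketch, not taken from the article), the following simulates a SETAR model with two regimes, AR order p = 1, delay d = 1 and a single threshold at zero; all numerical values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_setar(gammas, threshold, d, sigma, n, burn=200):
    """Simulate a two-regime SETAR process of order p with delay d.

    gammas: array of shape (2, p + 1); row j holds (gamma_0^(j), ..., gamma_p^(j)).
    Regime 0 applies when y_{t-d} <= threshold, regime 1 otherwise.
    """
    p = gammas.shape[1] - 1
    start = max(p, d)
    y = np.zeros(n + burn)
    for t in range(start, n + burn):
        j = 0 if y[t - d] <= threshold else 1        # self-exciting regime choice
        X_t = np.r_[1.0, y[t - p:t][::-1]]           # (1, y_{t-1}, ..., y_{t-p})
        y[t] = X_t @ gammas[j] + sigma * rng.standard_normal()
    return y[burn:]

# Two regimes, p = 1: y_t = 0.5 + 0.8 y_{t-1} + e_t if y_{t-1} <= 0,
#                     y_t = -0.5 - 0.2 y_{t-1} + e_t otherwise.
gammas = np.array([[0.5, 0.8], [-0.5, -0.2]])
y = simulate_setar(gammas, threshold=0.0, d=1, sigma=1.0, n=500)
print(y[:5])
```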

The SETAR model is a special case of Tong's general threshold autoregressive models (Tong and Lim, 1980, p. 248). The latter allows the threshold variable to be very flexible, such as an exogenous time series in the open-loop threshold autoregressive system (Tong and Lim, 1980, p. 249), or a Markov chain in the Markov-chain-driven threshold autoregressive model (Tong and Lim, 1980, p. 285), which is now also known as the Markov switching model.

For a comprehensive review of developments over the 30 years since the birth of the model, see Tong (2011).

Basic Structure

In each of the k regimes, the AR(p) process is governed by a different set of coefficients \gamma^{(j)}. In such a setting, a change of regime, occurring because the past value of the series y_{t-d} has crossed a threshold, causes a different set of coefficients to govern the process y_t.
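
The regime-specific coefficients are commonly estimated by conditional least squares, with the threshold chosen by a grid search over candidate values of the threshold variable, a strategy along the lines surveyed in the threshold-autoregression literature (e.g. the Hansen paper listed in the References). The sketch below is an illustrative, simplified two-regime implementation assuming a known delay d and order p; the function name and trimming fraction are assumptions made here for the example.

```python
import numpy as np

def fit_setar2(y, p, d, trim=0.15):
    """Conditional least-squares fit of a two-regime SETAR model of order p and delay d.

    The threshold is chosen by grid search over observed values of y_{t-d},
    trimmed so that each regime retains at least a `trim` fraction of the sample.
    Returns (threshold, coefficients of the lower regime, coefficients of the upper regime).
    """
    start = max(p, d)
    T = len(y)
    Y = y[start:]
    X = np.column_stack([np.ones(T - start)] +
                        [y[start - i:T - i] for i in range(1, p + 1)])  # (1, y_{t-1}, ..., y_{t-p})
    z = y[start - d:T - d]                                              # threshold variable y_{t-d}
    grid = np.sort(z)[int(trim * len(z)):int((1 - trim) * len(z))]

    best_ssr, best = np.inf, None
    for r in grid:
        low = z <= r
        if low.sum() <= p + 1 or (~low).sum() <= p + 1:
            continue                                  # too few observations in a regime
        g_low = np.linalg.lstsq(X[low], Y[low], rcond=None)[0]
        g_high = np.linalg.lstsq(X[~low], Y[~low], rcond=None)[0]
        ssr = (np.sum((Y[low] - X[low] @ g_low) ** 2) +
               np.sum((Y[~low] - X[~low] @ g_high) ** 2))
        if ssr < best_ssr:
            best_ssr, best = ssr, (r, g_low, g_high)
    return best

# Usage with a simulated series y (e.g. from the sketch above):
# r_hat, g_low, g_high = fit_setar2(y, p=1, d=1)
```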

See also

References

https://www.ssc.wisc.edu/~bhansen/papers/saii_11.pdf