STAR model

Figure: Exponential transition function for the ESTAR model with $z_t$ varying from -10 to +10 and $\zeta$ varying from 0 to 1.

In statistics, Smooth Transition Autoregressive (STAR) models are typically applied to time series data as an extension of autoregressive models, in order to allow for a higher degree of flexibility in model parameters through a smooth transition.


Given a time series of data $x_t$, the STAR model is a tool for understanding and, perhaps, predicting future values in this series, assuming that the behaviour of the series changes depending on the value of a transition variable. The transition might depend on past values of the $x$ series (as in SETAR models) or on exogenous variables.

The model consists of two autoregressive (AR) parts linked by the transition function. The model is usually referred to as a STAR(p) model, preceded by a letter describing the transition function (see below), where p is the order of the autoregressive part. The most popular transition functions include the exponential function and the first- and second-order logistic functions. These give rise to the Logistic STAR (LSTAR) and Exponential STAR (ESTAR) models.

Definition

AutoRegressive Models

Consider a simple AR(p) model for a time series $y_t$:

$$y_t = \gamma_0 + \gamma_1 y_{t-1} + \gamma_2 y_{t-2} + \dots + \gamma_p y_{t-p} + \epsilon_t$$

where:

$\gamma_i$ for $i = 1, 2, \dots, p$ are autoregressive coefficients, assumed to be constant over time;
$\epsilon_t$ stands for a white-noise error term with constant variance.

This can be written in the following vector form:

$$y_t = \mathbf{X}_t \gamma + \epsilon_t$$

where:

$\mathbf{X}_t = (1, y_{t-1}, y_{t-2}, \dots, y_{t-p})$ is a column vector of variables;
$\gamma = (\gamma_0, \gamma_1, \gamma_2, \dots, \gamma_p)$ is the vector of parameters;
$\epsilon_t$ stands for a white-noise error term with constant variance.
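
As a concrete illustration of this vector form, the following minimal Python sketch (not part of the original article; function and variable names such as `ar_design_matrix` are illustrative) builds the regressor $\mathbf{X}_t = (1, y_{t-1}, \dots, y_{t-p})$ and estimates $\gamma$ by ordinary least squares on a simulated AR(2) series.

```python
# Minimal sketch of the AR(p) model in vector form: build the rows
# X_t = (1, y_{t-1}, ..., y_{t-p}) and estimate gamma by least squares.
import numpy as np

def ar_design_matrix(y, p):
    """Stack rows X_t = (1, y_{t-1}, ..., y_{t-p}) for t = p, ..., T-1."""
    T = len(y)
    X = np.column_stack(
        [np.ones(T - p)] + [y[p - i:T - i] for i in range(1, p + 1)]
    )
    return X, y[p:]

rng = np.random.default_rng(0)
# Simulate an AR(2) series: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + eps_t
T = 500
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + eps[t]

X, target = ar_design_matrix(y, p=2)
gamma_hat, *_ = np.linalg.lstsq(X, target, rcond=None)
print(gamma_hat)  # approximately (0, 0.5, -0.3)
```
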
Figure: Second-order logistic transition function for the LSTAR model with $z_t$ varying from -10 to +10, $\zeta$ varying from 0 to 1, and the two roots $c_1$ and $c_2$ equal to -7 and +3.

STAR as an Extension of the AutoRegressive Model

STAR models were introduced and comprehensively developed by Kung-sik Chan and Howell Tong in 1986 (esp. p. 187), in which the same acronym was used; it originally stood for Smooth Threshold AutoRegressive. For some background history, see Tong (2011, 2012). The models can be thought of as an extension of the autoregressive models discussed above, allowing the model parameters to change according to the value of a transition variable $z_t$. Chan and Tong (1986) rigorously proved that the family of STAR models includes the SETAR model as a limiting case, by showing uniform boundedness and equicontinuity with respect to the switching parameter; without this proof, the claim that STAR models nest the SETAR model lacks justification. Unfortunately, whether one should use a SETAR model or a STAR model for one's data has been a matter of subjective judgement, taste and inclination in much of the literature. Fortunately, a test procedure, based on David Cox's test of separate families of hypotheses and developed by Gao, Ling and Tong (2018, Statistica Sinica, volume 28, pp. 2857-2883), is now available to address this issue. Such a test is important before adopting a STAR model because, among other issues, the parameter controlling its rate of switching is notoriously data-hungry.

Defined in this way, the STAR model can be presented as follows:

$$y_t = \mathbf{X}_t \gamma^{(1)} \bigl(1 - G(z_t, \zeta, c)\bigr) + \mathbf{X}_t \gamma^{(2)}\, G(z_t, \zeta, c) + \epsilon_t$$

where:

$\mathbf{X}_t = (1, y_{t-1}, y_{t-2}, \dots, y_{t-p})$ is a column vector of variables;
$\gamma^{(1)}$ and $\gamma^{(2)}$ are the parameter vectors of the two regimes;
$G(z_t, \zeta, c)$ is the transition function bounded between 0 and 1.
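
To make the definition concrete, the following Python sketch (our illustration, not taken from the article; the parameter values are arbitrary) simulates a two-regime LSTAR(1) process in which a first-order logistic transition function moves the AR coefficient smoothly between regimes as the lagged value crosses the location parameter $c$.

```python
# Minimal LSTAR(1) simulation: the AR coefficient shifts smoothly from
# 0.9 (regime 1) to -0.5 (regime 2) as z_t = y_{t-1} crosses c.
import numpy as np

def logistic_transition(z, zeta, c):
    """First-order logistic transition function, bounded between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-zeta * (z - c)))

rng = np.random.default_rng(1)
T, zeta, c = 1000, 2.0, 0.0
gamma1, gamma2 = 0.9, -0.5           # AR(1) coefficients of the two regimes
y = np.zeros(T)
for t in range(1, T):
    z = y[t - 1]                      # transition variable: the lagged series
    G = logistic_transition(z, zeta, c)
    y[t] = gamma1 * (1 - G) * y[t - 1] + gamma2 * G * y[t - 1] + rng.normal()
```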

Basic Structure

STAR models can be understood as a two-regime SETAR model with a smooth transition between regimes, or as a continuum of regimes. In both cases the presence of the transition function is the defining feature of the model, as it allows for changes in the values of the parameters.

Transition Function

Figure: Logistic transition function for the LSTAR model with $z_t$ varying from -10 to +10 and $\zeta$ varying from 0 to 1. Calculated using GNU R.

Three basic transition functions and the names of the resulting models are:

first-order logistic function, $G(z_t, \zeta, c) = \bigl(1 + \exp(-\zeta (z_t - c))\bigr)^{-1}$, which gives the Logistic STAR (LSTAR) model;
exponential function, $G(z_t, \zeta, c) = 1 - \exp\bigl(-\zeta (z_t - c)^2\bigr)$, which gives the Exponential STAR (ESTAR) model;
second-order logistic function, $G(z_t, \zeta, c_1, c_2) = \bigl(1 + \exp(-\zeta (z_t - c_1)(z_t - c_2))\bigr)^{-1}$, which gives a second-order LSTAR model.
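
A minimal Python sketch of these three functions (our illustration; the parameter values mirror the figures above and are not prescribed by the article) is:

```python
# The three basic transition functions, each bounded between 0 and 1.
import numpy as np

def logistic1(z, zeta, c):
    """First-order logistic transition (LSTAR)."""
    return 1.0 / (1.0 + np.exp(-zeta * (z - c)))

def exponential(z, zeta, c):
    """Exponential transition (ESTAR)."""
    return 1.0 - np.exp(-zeta * (z - c) ** 2)

def logistic2(z, zeta, c1, c2):
    """Second-order logistic transition (LSTAR with two roots c1, c2)."""
    return 1.0 / (1.0 + np.exp(-zeta * (z - c1) * (z - c2)))

z = np.linspace(-10, 10, 201)            # same range as in the figures above
G_l1 = logistic1(z, zeta=0.5, c=0.0)
G_exp = exponential(z, zeta=0.5, c=0.0)
G_l2 = logistic2(z, zeta=0.5, c1=-7.0, c2=3.0)
```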

See also


References