Stationary process

In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. [1] Consequently, parameters such as mean and variance also do not change over time. To get an intuition of stationarity, one can imagine a frictionless pendulum. It swings back and forth in an oscillatory motion, yet the amplitude and frequency remain constant. Although the pendulum is moving, the process is stationary as its "statistics" are constant (frequency and amplitude). However, if a force were to be applied to the pendulum, either the frequency or amplitude would change, thus making the process non-stationary. [2]

Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend-stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean.

A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time.
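The contrast between the two cases can be illustrated with a minimal Python sketch (not drawn from the cited sources; numpy is assumed to be available): a least-squares line removes a deterministic trend, while first differencing removes a unit root.

```python
# Minimal sketch (assumes numpy): a trend-stationary series is made stationary
# by removing its deterministic trend, while a unit-root series is made
# stationary by differencing.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)

trend_stationary = 0.1 * t + rng.standard_normal(500)   # deterministic trend + noise
unit_root = np.cumsum(0.1 + rng.standard_normal(500))   # random walk with drift

# Detrending handles the first case ...
trend_fit = np.polyval(np.polyfit(t, trend_stationary, 1), t)
detrended = trend_stationary - trend_fit
# ... while differencing handles the second.
differenced = np.diff(unit_root)

# Both residual series now fluctuate around a constant level.
print(detrended.mean(), detrended.std())
print(differenced.mean(), differenced.std())
```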

For many applications strict-sense stationarity is too restrictive. Other forms of stationarity such as wide-sense stationarity or N-th-order stationarity are then employed. The definitions for different kinds of stationarity are not consistent among different authors (see Other terminology).

Strict-sense stationarity

Definition

Formally, let $\left\{X_t\right\}$ be a stochastic process and let $F_{X}(x_{t_1+\tau}, \ldots, x_{t_n+\tau})$ represent the cumulative distribution function of the unconditional (i.e., with no reference to any particular starting value) joint distribution of $\left\{X_t\right\}$ at times $t_1+\tau, \ldots, t_n+\tau$. Then, $\left\{X_t\right\}$ is said to be strictly stationary, strongly stationary or strict-sense stationary if [3]: p. 155

$$F_{X}(x_{t_1+\tau}, \ldots, x_{t_n+\tau}) = F_{X}(x_{t_1}, \ldots, x_{t_n}) \qquad \text{for all } \tau, t_1, \ldots, t_n \in \mathbb{R} \text{ and for all } n \in \mathbb{N} \tag{Eq.1}$$

Since $\tau$ does not affect $F_X(\cdot)$, $F_X$ is not a function of time.

Examples

Figure: two simulated time series processes, one stationary and the other non-stationary. The augmented Dickey–Fuller (ADF) test statistic is reported for each process; non-stationarity cannot be rejected for the second process at a 5% significance level.
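The comparison in the figure can be reproduced in outline with the following Python sketch (the particular series in the figure are not available, so a stationary AR(1) process and a random walk are simulated here; numpy and statsmodels are assumed to be installed).

```python
# Sketch: augmented Dickey–Fuller (ADF) test applied to a stationary AR(1)
# series and to a non-stationary random walk (assumes numpy and statsmodels).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
n = 500
eps = rng.standard_normal(n)

x = np.zeros(n)                 # stationary AR(1): x_t = 0.5 x_{t-1} + eps_t
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + eps[t]

y = np.cumsum(eps)              # non-stationary random walk (unit root)

for name, series in [("AR(1)", x), ("random walk", y)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```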

White noise is the simplest example of a stationary process.

An example of a discrete-time stationary process where the sample space is also discrete (so that the random variable may take one of N possible values) is a Bernoulli scheme. Other examples of a discrete-time stationary process with continuous sample space include some autoregressive and moving average processes which are both subsets of the autoregressive moving average model. Models with a non-trivial autoregressive component may be either stationary or non-stationary, depending on the parameter values, and important non-stationary special cases are where unit roots exist in the model.
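The stationarity condition for an autoregressive model can be read off its coefficients: an AR(p) process is covariance stationary when all roots of its characteristic polynomial lie outside the unit circle. A minimal Python sketch follows (numpy assumed; the helper ar_is_stationary is illustrative, not a standard library function).

```python
# Sketch: checking whether an AR model is stationary by examining the roots
# of its characteristic polynomial (assumes numpy).
import numpy as np

def ar_is_stationary(phi):
    """phi: AR coefficients [phi_1, ..., phi_p] in
    x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p} + eps_t.
    The process is covariance stationary when all roots of
    1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    poly = np.r_[1.0, -np.asarray(phi)][::-1]   # coefficients in decreasing powers of z
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) > 1.0))

print(ar_is_stationary([0.5]))           # True: AR(1) with |phi| < 1
print(ar_is_stationary([1.0]))           # False: unit root (random walk)
print(ar_is_stationary([0.75, -0.125]))  # True: stationary AR(2)
```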

Example 1

Let $Y$ be any scalar random variable, and define a time-series $\left\{X_t\right\}$ by

$$X_t = Y \qquad \text{for all } t.$$

Then $\left\{X_t\right\}$ is a stationary time series, for which realisations consist of a series of constant values, with a different constant value for each realisation. A law of large numbers does not apply in this case, as the limiting value of an average from a single realisation takes the random value determined by $Y$, rather than taking the expected value of $Y$.

The time average of $X_t$ does not converge since the process is not ergodic.
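A short simulation makes this concrete (a Python sketch assuming numpy; the normal distribution chosen for Y is an arbitrary illustration).

```python
# Sketch of Example 1 (assumes numpy): every realisation of X_t = Y is constant,
# so the time average over a single realisation equals that realisation's draw
# of Y rather than E[Y] = 0.
import numpy as np

rng = np.random.default_rng(2)
n_steps = 1000

for _ in range(3):
    Y = rng.normal()                 # one scalar draw per realisation
    X = np.full(n_steps, Y)          # X_t = Y for every t
    print(f"time average = {X.mean():+.3f}   (E[Y] = 0)")
```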

Example 2

As a further example of a stationary process for which any single realisation has an apparently noise-free structure, let $Y$ have a uniform distribution on $(0, 2\pi]$ and define the time series $\left\{X_t\right\}$ by

$$X_t = \cos(t + Y) \qquad \text{for } t \in \mathbb{R}.$$

Then $\left\{X_t\right\}$ is strictly stationary, since $(t + Y)$ modulo $2\pi$ follows the same uniform distribution on $(0, 2\pi]$ for any $t$.
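A quick numerical check (Python sketch, numpy assumed) shows that while each realisation is a deterministic cosine, the ensemble mean and variance are the same at every time.

```python
# Sketch of Example 2 (assumes numpy): X_t = cos(t + Y), Y ~ Uniform(0, 2*pi].
# Each realisation is a deterministic cosine, yet the distribution at every
# time t is the same across realisations.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(100)
Y = rng.uniform(0.0, 2 * np.pi, size=10000)       # one random phase per realisation
X = np.cos(t[None, :] + Y[:, None])               # shape: (realisations, time)

print(np.round(X.mean(axis=0)[:5], 3))            # ensemble mean ~ 0 at every t
print(np.round(X.var(axis=0)[:5], 3))             # ensemble variance ~ 0.5 at every t
```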

Example 3

Keep in mind that a white noise is not necessarily strictly stationary. Let $\omega$ be a random variable uniformly distributed in the interval $(0, 2\pi)$ and define the time series $\left\{z_t\right\}$ by

$$z_t = \cos(t\omega) \qquad (t = 1, 2, \ldots).$$

Then

$$\operatorname E(z_t) = \frac{1}{2\pi}\int_0^{2\pi} \cos(t\omega)\, d\omega = 0, \qquad \operatorname{Var}(z_t) = \frac{1}{2\pi}\int_0^{2\pi} \cos^2(t\omega)\, d\omega = \tfrac{1}{2}, \qquad \operatorname{Cov}(z_t, z_j) = \frac{1}{2\pi}\int_0^{2\pi} \cos(t\omega)\cos(j\omega)\, d\omega = 0 \quad \text{for all } t \neq j.$$

So $\left\{z_t\right\}$ is a white noise, however it is not strictly stationary.
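The following Python sketch (numpy assumed) verifies the white-noise moments numerically and exposes a deterministic relation between $z_1$ and $z_2$ that does not carry over to $(z_2, z_3)$, which is incompatible with strict stationarity.

```python
# Sketch of Example 3 (assumes numpy): z_t = cos(t * omega), omega ~ Uniform(0, 2*pi).
# The marginal moments match white noise, but the joint distributions are not
# shift-invariant: z_2 is a fixed function of z_1, while z_3 is not the same
# function of z_2.
import numpy as np

rng = np.random.default_rng(4)
omega = rng.uniform(0.0, 2 * np.pi, size=200_000)
t = np.arange(1, 5)
z = np.cos(t[:, None] * omega[None, :])          # rows hold z_1, z_2, z_3, z_4

print(np.round(z.mean(axis=1), 3))               # ~ 0 for each t
print(np.round(z.var(axis=1), 3))                # ~ 0.5 for each t
print(np.round(np.corrcoef(z)[0, 1:], 3))        # pairwise correlations ~ 0
print(np.allclose(z[1], 2 * z[0] ** 2 - 1))      # True:  z_2 = 2 z_1^2 - 1
print(np.allclose(z[2], 2 * z[1] ** 2 - 1))      # False: the relation does not shift
```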

Nth-order stationarity

In Eq.1, the distribution of $n$ samples of the stochastic process must be equal to the distribution of the samples shifted in time for all $n$. N-th-order stationarity is a weaker form of stationarity where this is only requested for all $n$ up to a certain order $N$. A random process $\left\{X_t\right\}$ is said to be N-th-order stationary if: [3]: p. 152

$$F_{X}(x_{t_1+\tau}, \ldots, x_{t_n+\tau}) = F_{X}(x_{t_1}, \ldots, x_{t_n}) \qquad \text{for all } \tau, t_1, \ldots, t_n \in \mathbb{R} \text{ and for all } n \in \{1, \ldots, N\} \tag{Eq.2}$$

Weak or wide-sense stationarity

Definition

A weaker form of stationarity commonly employed in signal processing is known as weak-sense stationarity, wide-sense stationarity (WSS), or covariance stationarity. WSS random processes only require that the 1st moment (i.e. the mean) and the autocovariance do not vary with respect to time and that the 2nd moment is finite for all times. Any strictly stationary process which has a finite mean and a finite covariance is also WSS. [4]: p. 299

So, a continuous time random process $\left\{X_t\right\}$ which is WSS has the following restrictions on its mean function $m_X(t) \triangleq \operatorname E[X_t]$ and autocovariance function $K_{XX}(t_1, t_2) \triangleq \operatorname E[(X_{t_1}-m_X(t_1))(X_{t_2}-m_X(t_2))]$:

$$\begin{aligned}
& m_X(t) = m_X(t + \tau) && \text{for all } \tau \in \mathbb{R} \\
& K_{XX}(t_1, t_2) = K_{XX}(t_1 - t_2, 0) && \text{for all } t_1, t_2 \in \mathbb{R} \\
& \operatorname E[|X_t|^2] < \infty && \text{for all } t \in \mathbb{R}
\end{aligned} \tag{Eq.3}$$

The first property implies that the mean function $m_X(t)$ must be constant. The second property implies that the autocovariance function depends only on the difference between $t_1$ and $t_2$ and only needs to be indexed by one variable rather than two. [3]: p. 159 Thus, instead of writing

$$K_{XX}(t_1 - t_2, 0),$$

the notation is often abbreviated by the substitution $\tau = t_1 - t_2$:

$$K_{XX}(\tau) \triangleq K_{XX}(t_1 - t_2, 0).$$

This also implies that the autocorrelation depends only on $\tau = t_1 - t_2$, that is

$$R_X(t_1, t_2) = R_X(t_1 - t_2, 0) \triangleq R_X(\tau).$$

The third property says that the second moments must be finite for any time $t$.
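In practice this means the autocovariance of a WSS series can be estimated as a function of the lag alone. A minimal Python sketch follows (numpy assumed; the simulated AR(1) input and the helper autocovariance are illustrative).

```python
# Sketch (assumes numpy): for a WSS series the sample autocovariance can be
# indexed by the lag alone.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
x = np.zeros(n)
for t in range(1, n):                       # simulated AR(1) with coefficient 0.8
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

def autocovariance(series, tau):
    """Biased sample estimate of K_X(tau) = E[(X_t - m)(X_{t+tau} - m)]."""
    centered = series - series.mean()
    return np.sum(centered[: len(series) - tau] * centered[tau:]) / len(series)

for tau in range(4):
    print(f"K({tau}) ~ {autocovariance(x, tau):.2f}")
# Theory for this AR(1): K(tau) = 0.8**tau / (1 - 0.8**2) ~ 2.78 * 0.8**tau.
```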

Motivation

The main advantage of wide-sense stationarity is that it places the time series in the context of Hilbert spaces. Let H be the Hilbert space generated by $\{x(t)\}$ (that is, the closure of the set of all linear combinations of these random variables in the Hilbert space of all square-integrable random variables on the given probability space). By the positive definiteness of the autocovariance function, it follows from Bochner's theorem that there exists a positive measure $\mu$ on the real line such that H is isomorphic to the Hilbert subspace of $L^2(\mu)$ generated by $\{e^{-2\pi i\xi\cdot t}\}$. This then gives the following Fourier-type decomposition for a continuous time stationary stochastic process: there exists a stochastic process $\omega_\xi$ with orthogonal increments such that, for all $t$,

$$X_t = \int e^{-2\pi i\lambda\cdot t}\, d\omega_\lambda,$$

where the integral on the right-hand side is interpreted in a suitable (Riemann) sense. The same result holds for a discrete-time stationary process, with the spectral measure now defined on the unit circle.

When processing WSS random signals with linear, time-invariant (LTI) filters, it is helpful to think of the correlation function as a linear operator. Since it is a circulant operator (depends only on the difference between the two arguments), its eigenfunctions are the Fourier complex exponentials. Additionally, since the eigenfunctions of LTI operators are also complex exponentials, LTI processing of WSS random signals is highly tractable—all computations can be performed in the frequency domain. Thus, the WSS assumption is widely employed in signal processing algorithms.
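The frequency-domain view can be checked numerically: for a WSS input passed through an LTI filter, the output power spectral density is $|H(f)|^2$ times the input density. A Python sketch assuming numpy and scipy are available (the FIR coefficients are arbitrary):

```python
# Sketch (assumes numpy and scipy): passing a WSS input through an LTI filter
# multiplies its power spectral density by |H(f)|^2.
import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
x = rng.standard_normal(2 ** 16)              # white WSS input (flat PSD)

b = [1.0, 0.5]                                # simple FIR (LTI) filter, no spectral null
y = signal.lfilter(b, [1.0], x)

f, Pxx = signal.welch(x, nperseg=1024)        # input PSD estimate
_, Pyy = signal.welch(y, nperseg=1024)        # output PSD estimate
_, H = signal.freqz(b, worN=2 * np.pi * f)    # filter response at the same frequencies

ratio = Pyy[1:] / (np.abs(H[1:]) ** 2 * Pxx[1:])   # skip the DC bin (removed by detrending)
print(f"median of Pyy / (|H|^2 Pxx) = {np.median(ratio):.2f}")   # ~ 1
```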

Definition for complex stochastic process

In the case where $\left\{X_t\right\}$ is a complex stochastic process the autocovariance function is defined as $K_{XX}(t_1, t_2) = \operatorname E[(X_{t_1}-m_X(t_1))\overline{(X_{t_2}-m_X(t_2))}]$ and, in addition to the requirements in Eq.3, it is required that the pseudo-autocovariance function $J_{XX}(t_1, t_2) = \operatorname E[(X_{t_1}-m_X(t_1))(X_{t_2}-m_X(t_2))]$ depends only on the time lag. In formulas, $\left\{X_t\right\}$ is WSS, if

$$\begin{aligned}
& m_X(t) = m_X(t + \tau) && \text{for all } \tau \in \mathbb{R} \\
& K_{XX}(t_1, t_2) = K_{XX}(t_1 - t_2, 0) && \text{for all } t_1, t_2 \in \mathbb{R} \\
& J_{XX}(t_1, t_2) = J_{XX}(t_1 - t_2, 0) && \text{for all } t_1, t_2 \in \mathbb{R} \\
& \operatorname E[|X_t|^2] < \infty && \text{for all } t \in \mathbb{R}
\end{aligned} \tag{Eq.4}$$

Joint stationarity

The concept of stationarity may be extended to two stochastic processes.

Joint strict-sense stationarity

Two stochastic processes $\left\{X_t\right\}$ and $\left\{Y_t\right\}$ are called jointly strict-sense stationary if their joint cumulative distribution $F_{XY}(x_{t_1}, \ldots, x_{t_m}, y_{t_1'}, \ldots, y_{t_n'})$ remains unchanged under time shifts, i.e. if

$$F_{XY}(x_{t_1}, \ldots, x_{t_m}, y_{t_1'}, \ldots, y_{t_n'}) = F_{XY}(x_{t_1+\tau}, \ldots, x_{t_m+\tau}, y_{t_1'+\tau}, \ldots, y_{t_n'+\tau}) \qquad \text{for all } \tau, t_1, \ldots, t_m, t_1', \ldots, t_n' \in \mathbb{R} \text{ and for all } m, n \in \mathbb{N} \tag{Eq.5}$$

Joint (M + N)th-order stationarity

Two random processes $\left\{X_t\right\}$ and $\left\{Y_t\right\}$ are said to be jointly (M + N)-th-order stationary if: [3]: p. 159

$$F_{XY}(x_{t_1}, \ldots, x_{t_m}, y_{t_1'}, \ldots, y_{t_n'}) = F_{XY}(x_{t_1+\tau}, \ldots, x_{t_m+\tau}, y_{t_1'+\tau}, \ldots, y_{t_n'+\tau}) \qquad \text{for all } \tau, t_1, \ldots, t_m, t_1', \ldots, t_n' \in \mathbb{R} \text{ and for all } m \in \{1, \ldots, M\},\, n \in \{1, \ldots, N\} \tag{Eq.6}$$

Joint weak or wide-sense stationarity

Two stochastic processes $\left\{X_t\right\}$ and $\left\{Y_t\right\}$ are called jointly wide-sense stationary if they are both wide-sense stationary and their cross-covariance function $K_{XY}(t_1, t_2) = \operatorname E[(X_{t_1}-m_X(t_1))(Y_{t_2}-m_Y(t_2))]$ depends only on the time difference $\tau = t_1 - t_2$. This may be summarized as follows:

$$\begin{aligned}
& m_X(t) = m_X(t + \tau) && \text{for all } \tau \in \mathbb{R} \\
& m_Y(t) = m_Y(t + \tau) && \text{for all } \tau \in \mathbb{R} \\
& K_{XX}(t_1, t_2) = K_{XX}(t_1 - t_2, 0) && \text{for all } t_1, t_2 \in \mathbb{R} \\
& K_{YY}(t_1, t_2) = K_{YY}(t_1 - t_2, 0) && \text{for all } t_1, t_2 \in \mathbb{R} \\
& K_{XY}(t_1, t_2) = K_{XY}(t_1 - t_2, 0) && \text{for all } t_1, t_2 \in \mathbb{R}
\end{aligned} \tag{Eq.7}$$
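A short Python sketch (numpy assumed; the delayed-copy relationship between the two simulated series is an arbitrary illustration) estimates the cross-covariance as a function of the lag only.

```python
# Sketch (assumes numpy): for jointly WSS processes the cross-covariance can be
# estimated as a function of the lag tau only.
import numpy as np

rng = np.random.default_rng(7)
n, delay = 10_000, 3
e = rng.standard_normal(n)
x = e
y = np.roll(e, delay) + 0.1 * rng.standard_normal(n)   # y_t ~ x_{t-3} + small noise

def cross_covariance(a, b, tau):
    """Sample estimate of K_XY(tau) = E[(X_t - m_X)(Y_{t+tau} - m_Y)], tau >= 0."""
    ac, bc = a - a.mean(), b - b.mean()
    m = len(a)
    return np.sum(ac[: m - tau] * bc[tau:]) / m

for tau in range(6):
    print(tau, round(cross_covariance(x, y, tau), 2))   # peaks near tau = 3
```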

Relation between types of stationarity

If a process is strict-sense stationary and has finite second moments, it is also wide-sense stationary. The converse does not hold in general; however, a wide-sense stationary Gaussian process is strict-sense stationary, since a Gaussian process is completely specified by its mean and autocovariance functions. Likewise, an N-th-order stationary process is M-th-order stationary for every M ≤ N.

Other terminology

The terminology used for types of stationarity other than strict stationarity can be rather mixed, and the same term may be defined differently by different authors.

Differencing

One way to make some time series stationary is to compute the differences between consecutive observations. This is known as differencing. Differencing can help stabilize the mean of a time series by removing changes in the level of a time series, and so eliminating trend and seasonality.

Transformations such as logarithms can help to stabilize the variance of a time series.

One of the ways for identifying non-stationary times series is the ACF plot. For a stationary time series, the ACF will drop to zero relatively quickly, while the ACF of non-stationary data decreases slowly. [9]
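This behaviour is easy to check numerically; a Python sketch assuming numpy and statsmodels are available:

```python
# Sketch (assumes numpy and statsmodels): the sample ACF of a unit-root series
# decays slowly, while the ACF of its first difference drops to ~0 quickly.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(8)
y = np.cumsum(rng.standard_normal(1000)) + 0.05 * np.arange(1000)  # random walk with drift
dy = np.diff(y)                                                    # first difference

print(np.round(acf(y, nlags=5), 2))    # decays slowly        -> non-stationary
print(np.round(acf(dy, nlags=5), 2))   # near zero after lag 0 -> stationary
```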

See also


References

  1. Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: John Wiley & Sons. pp. 1–256. ISBN 978-1-119-38755-8.
  2. Laumann, Timothy O.; Snyder, Abraham Z.; Mitra, Anish; Gordon, Evan M.; Gratton, Caterina; Adeyemo, Babatunde; Gilmore, Adrian W.; Nelson, Steven M.; Berg, Jeff J.; Greene, Deanna J.; McCarthy, John E. (2016-09-02). "On the Stability of BOLD fMRI Correlations". Cerebral Cortex. doi:10.1093/cercor/bhw265. ISSN 1047-3211. PMC 6248456. PMID 27591147.
  3. Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
  4. Florescu, Ionut (7 November 2014). Probability and Stochastic Processes. John Wiley & Sons. ISBN 978-1-118-59320-2.
  5. Priestley, M. B. (1981). Spectral Analysis and Time Series. Academic Press. ISBN 0-12-564922-3.
  6. Priestley, M. B. (1988). Non-linear and Non-stationary Time Series Analysis. Academic Press. ISBN 0-12-564911-8.
  7. Honarkhah, M.; Caers, J. (2010). "Stochastic Simulation of Patterns Using Distance-Based Pattern Modeling". Mathematical Geosciences. 42 (5): 487–517. doi:10.1007/s11004-010-9276-7.
  8. Tahmasebi, P.; Sahimi, M. (2015). "Reconstruction of nonstationary disordered materials and media: Watershed transform and cross-correlation function" (PDF). Physical Review E. 91 (3): 032401. doi:10.1103/PhysRevE.91.032401. PMID 25871117.
  9. "8.1 Stationarity and differencing | OTexts". www.otexts.org. Retrieved 2016-05-18.

Further reading