In statistics, signal processing, and econometrics, an unevenly (or unequally or irregularly) spaced time series is a sequence of observation time and value pairs (tₙ, Xₙ) in which the spacing of observation times is not constant.
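Concretely, such a series can be represented as a pair of parallel arrays of times and values, as in the following minimal Python sketch (the timestamps and values are illustrative):

```python
import numpy as np

# Observation times t_n (say, in seconds); note the irregular spacing.
t = np.array([0.0, 0.7, 1.1, 2.9, 3.0, 5.4, 8.2])
# Observed values X_n paired with the times above.
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7])

# The spacing between consecutive observations is not constant:
print(np.diff(t))  # [0.7 0.4 1.8 0.1 2.4 2.8]
```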
Unevenly spaced time series naturally occur in many industrial and scientific domains: natural disasters such as earthquakes, floods, or volcanic eruptions typically occur at irregular time intervals. In observational astronomy, measurements such as spectra of celestial objects are taken at times determined by weather conditions, availability of observation time slots, and suitable planetary configurations. In clinical trials (or more generally, longitudinal studies), a patient's state of health may be observed only at irregular time intervals, and different patients are usually observed at different points in time. Wireless sensors in the Internet of things often transmit information only when a state changes to conserve battery life. There are many more examples in climatology, ecology, high-frequency finance, geology, and signal processing.
A common approach to analyzing unevenly spaced time series is to transform the data into equally spaced observations using some form of interpolation, most often linear, and then to apply existing methods for equally spaced data. However, transforming data in such a way can introduce a number of significant and hard-to-quantify biases,[1][2][3][4][5] especially if the spacing of observations is highly irregular.
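As an illustration, the sketch below resamples the unevenly spaced series above onto an equally spaced grid with linear interpolation via numpy.interp; the grid step is an arbitrary choice, and the values filled in across large gaps are exactly the kind of artificial data points that can bias subsequent analysis:

```python
import numpy as np

t = np.array([0.0, 0.7, 1.1, 2.9, 3.0, 5.4, 8.2])
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7])

# Equally spaced target grid covering the observation window.
t_even = np.arange(t[0], t[-1], 0.5)

# Linear interpolation; points far from any true observation
# (e.g. inside the gap between t = 3.0 and t = 5.4) are pure artifacts.
x_even = np.interp(t_even, t, x)
```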
Ideally, unevenly spaced time series are analyzed in their unaltered form. However, most of the basic theory for time series analysis was developed at a time when limitations in computing resources favored an analysis of equally spaced data, since in this case efficient linear algebra routines can be used and many problems have an explicit solution. As a result, fewer methods currently exist specifically for analyzing unevenly spaced time series data.[5][6][7][8][9][10][11]
Least-squares spectral analysis methods are commonly used for computing a frequency spectrum from such time series without any data alterations.
Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are sequences of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
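In its classical form (a standard statement, assuming a 2π-periodic integrable function f), the representation reads:

```latex
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n \cos nx + b_n \sin nx\bigr),
\qquad
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \, dx,
\quad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \, dx.
```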
Harmonic analysis is a branch of mathematics concerned with investigating the connections between a function and its representation in frequency. The frequency representation is found by using the Fourier transform for functions on the real line, or by Fourier series for periodic functions. Generalizing these transforms to other domains is generally called Fourier analysis, although the term is sometimes used interchangeably with harmonic analysis. Harmonic analysis has become a vast subject with applications in areas as diverse as number theory, representation theory, signal processing, quantum mechanics, tidal analysis, and neuroscience.
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying, and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions and digital storage efficiency, to correct distorted signals, to improve subjective video quality, and to detect or pinpoint components of interest in a measured signal.
In Fourier analysis, the cepstrum is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech.
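As a minimal sketch of the computation (NumPy only; the test signal and its echo are synthetic), the power cepstrum is the squared magnitude of the inverse FFT of the log power spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                              # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
signal = s + 0.5 * np.roll(s, 100)     # add an echo 0.1 s after the signal

# Power cepstrum: |IFFT(log |FFT(x)|^2)|^2.
spectrum = np.abs(np.fft.fft(signal)) ** 2
cepstrum = np.abs(np.fft.ifft(np.log(spectrum))) ** 2

# The echo appears as a peak at quefrency 0.1 s (sample index 100).
```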
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times; wavelets are sometimes described as "brief oscillations". A taxonomy of wavelets has been established, based on the number and direction of their pulses. Wavelets are imbued with specific properties that make them useful for signal processing.
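One concrete instance is the Ricker ("Mexican hat") wavelet, sketched below in NumPy; the width parameter sigma is an illustrative choice:

```python
import numpy as np

def ricker(t, sigma=1.0):
    """Ricker (Mexican hat) wavelet: rises from ~0, dips negative, decays to ~0."""
    a = 2 / (np.sqrt(3 * sigma) * np.pi ** 0.25)
    return a * (1 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2 * sigma ** 2))

t = np.linspace(-5, 5, 201)
psi = ricker(t)   # a brief oscillation: nonzero only near t = 0
```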
Spectral analysis or spectrum analysis is analysis in terms of a spectrum of frequencies or related quantities such as energies and eigenvalues; its precise meaning depends on the field in which it is used.
In mathematics, physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies. A frequency-domain representation can also include information on the phase shift that must be applied to each sinusoid in order to be able to recombine the frequency components to recover the original time signal.
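For example (a short NumPy sketch with an arbitrary 5 Hz test signal), the discrete Fourier transform yields, for each frequency bin, both a magnitude and the phase shift mentioned above:

```python
import numpy as np

fs = 100                                # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 5 * t + 0.3)     # 5 Hz sinusoid with a phase offset

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, 1 / fs)

k = np.argmax(np.abs(X))
print(freqs[k])         # 5.0 Hz: the band where the signal's energy lies
print(np.angle(X[k]))   # phase needed to recombine this component correctly
```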
In mathematics, a time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.
In signal processing, a periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster in 1898. Today, the periodogram is a component of more sophisticated methods. It is the most common tool for examining the amplitude-versus-frequency characteristics of FIR filters and window functions. FFT spectrum analyzers are also implemented as a time-sequence of periodograms.
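A minimal usage sketch with SciPy's periodogram function (the test signal is illustrative):

```python
import numpy as np
from scipy.signal import periodogram

fs = 200
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

f, Pxx = periodogram(x, fs=fs)      # frequencies and power spectral density
print(f[np.argmax(Pxx)])            # ~12 Hz, the dominant frequency
```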
A cyclostationary process is a signal having statistical properties that vary cyclically with time. A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year.
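Continuing the temperature example, the interleaving can be made concrete (a sketch with synthetic data, assuming 365-day years and ignoring leap days):

```python
import numpy as np

# Hypothetical daily maximum temperatures for 10 years (no leap days).
rng = np.random.default_rng(2)
days = np.arange(10 * 365)
seasonal = 15 + 12 * np.sin(2 * np.pi * days / 365)   # annual cycle
temps = seasonal + 3 * rng.standard_normal(days.size)

# Reshape into 365 interleaved series: column d holds the values for
# calendar day d + 1 across the 10 years; each column is roughly stationary.
by_day = temps.reshape(10, 365)
jul_21 = by_day[:, 201]   # one value per year for day-of-year 202 (July 21)
```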
In finance, volatility is the degree of variation of a trading price series over time, usually measured by the standard deviation of logarithmic returns.
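As a short sketch (hypothetical prices; annualizing by the square root of 252 trading days is a common convention, not part of the definition):

```python
import numpy as np

prices = np.array([100.0, 101.5, 100.8, 102.3, 103.0, 101.9])
log_returns = np.diff(np.log(prices))    # logarithmic returns

daily_vol = log_returns.std(ddof=1)      # sample standard deviation
annual_vol = daily_vol * np.sqrt(252)    # annualized volatility
```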
In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density of a signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
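For instance, such periodicities can be located by searching an estimated density for peaks (a sketch; the prominence threshold is an arbitrary choice):

```python
import numpy as np
from scipy.signal import periodogram, find_peaks

fs = 200
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)
x = (np.sin(2 * np.pi * 12 * t) + np.sin(2 * np.pi * 31 * t)
     + 0.5 * rng.standard_normal(t.size))

f, Pxx = periodogram(x, fs=fs)
peaks, _ = find_peaks(Pxx, prominence=0.1)
print(f[peaks])   # frequencies of the detected periodicities (~12 and ~31 Hz)
```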
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in long and gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA.
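A minimal sketch using the Lomb–Scargle periodogram from SciPy, one common LSSA method, applied directly to unevenly spaced samples (the times and trial frequencies are illustrative):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 20, 150))          # irregular observation times
x = np.sin(2 * np.pi * 1.3 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(0.1, 3.0, 500)            # trial frequencies in Hz
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs)
print(freqs[np.argmax(pgram)])                # ~1.3 Hz despite uneven spacing
```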
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool.
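As a rough illustration of the first stage of the HHT, empirical mode decomposition, the sketch below performs a single, much-simplified sifting step; real implementations iterate this with careful boundary handling and stopping criteria:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting step: subtract the mean of the upper and lower envelopes."""
    maxima, _ = find_peaks(x)
    minima, _ = find_peaks(-x)
    if maxima.size < 2 or minima.size < 2:
        return x                       # too few extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2     # candidate intrinsic mode function

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 3 * t)
imf_candidate = sift_once(t, x)        # fast mode, slow mode mostly removed
```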
Within statistics, the Kolmogorov–Zurbenko (KZ) filter was first proposed by A. N. Kolmogorov and formally defined by Zurbenko. It is a series of iterations of a moving average filter of length m, where m is a positive, odd integer. The KZ filter belongs to the class of low-pass filters. The KZ filter has two parameters, the length m of the moving average window and the number of iterations k of the moving average itself. It can also be considered as a special window function designed to eliminate spectral leakage.
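A minimal implementation sketch (edge samples are only approximated by the 'same'-mode convolution; careful implementations treat boundaries and missing values explicitly):

```python
import numpy as np

def kz_filter(x, m, k):
    """Kolmogorov-Zurbenko filter: k iterations of a length-m moving average.

    m must be a positive odd integer; k is the number of iterations.
    """
    window = np.ones(m) / m
    for _ in range(k):
        x = np.convolve(x, window, mode="same")   # edges only approximate
    return x

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 500)
noisy = np.sin(t) + 0.5 * rng.standard_normal(t.size)
smooth = kz_filter(noisy, m=21, k=3)   # low-pass: keeps the slow sinusoid
```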
Multidimensional spectral estimation is a generalization of spectral estimation, normally formulated for one-dimensional signals, to multidimensional signals or multivariate data, such as wave vectors.
High frequency data refers to time-series data collected at an extremely fine scale. Advances in computational power in recent decades have made it possible to collect and analyze such data efficiently. Largely used in financial analysis and in high frequency trading, high frequency data provides intraday observations that can be used to understand market behaviors, dynamics, and microstructures.
Directional-change intrinsic time is an event-based operator that dissects a data series into a sequence of alternating trends of a defined size.
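A minimal sketch of the dissection (the threshold and tick series are illustrative; here a directional-change event is confirmed when the series retraces by a relative threshold delta from the running extreme of the current trend):

```python
def directional_changes(prices, delta=0.02):
    """Return indices where the series reverses by at least `delta` (relative)."""
    events = []
    extreme = prices[0]        # running extreme of the current trend
    up = True                  # current trend direction (initial guess)
    for i, p in enumerate(prices):
        if up:
            if p > extreme:
                extreme = p    # uptrend continues: new high
            elif p <= extreme * (1 - delta):
                events.append(i)        # downward directional change
                extreme, up = p, False
        else:
            if p < extreme:
                extreme = p    # downtrend continues: new low
            elif p >= extreme * (1 + delta):
                events.append(i)        # upward directional change
                extreme, up = p, True
    return events

ticks = [100, 101, 103, 102.5, 100.8, 100.2, 101, 102.4, 103.5]
print(directional_changes(ticks))   # [4, 7]: alternating trend reversals
```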