Multidimensional spectral estimation is a generalization of spectral estimation, normally formulated for one-dimensional signals, to multidimensional signals or multivariate data, such as wave vectors.
Multidimensional spectral estimation has gained popularity because of its applications in fields such as medicine, aerospace, sonar, radar, bioinformatics and geophysics. In recent years, a number of methods have been proposed for designing models with finite parameters to estimate the power spectrum of multidimensional signals. This article covers the basics of the methods used to estimate the power spectrum of multidimensional signals.
There are many applications of spectral estimation of multi-D signals, such as classification of signals as low-pass, high-pass, band-pass and band-stop. It is also used in compression and coding of audio and video signals, beamforming and direction finding in radar, [1] seismic data estimation and processing, arrays of sensors and antennas, and vibration analysis. In the field of radio astronomy, [1] it is used to synchronize the outputs of an array of telescopes.
In the one-dimensional case, a signal is characterized by an amplitude and a time scale. The basic concepts involved in spectral estimation include autocorrelation, the multi-D Fourier transform, mean square error and entropy. [2] For multidimensional signals, there are two main approaches to estimating the power spectrum: use a bank of filters, or estimate the parameters of the random process.
Classical estimation is a technique to estimate the power spectrum of a one-dimensional or multidimensional signal, since the spectrum cannot be calculated exactly. Given samples of a wide-sense stationary random process and its second-order statistics (measurements), the estimates are obtained by applying a multidimensional Fourier transform to the autocorrelation function of the random signal. The estimation begins by calculating a periodogram, obtained by squaring the magnitude of the multidimensional Fourier transform of the measurements r_i(n). The spectral estimates obtained from the periodogram have a large variance in amplitude between consecutive periodogram samples or wavenumbers. This problem is resolved using the following techniques, which constitute classical estimation theory (a sketch follows the list):

1. Bartlett suggested a method that averages spectral estimates to calculate the power spectrum: the measurements are divided into equally spaced segments in time, a periodogram is computed for each segment, and the periodograms are averaged. This gives a better estimate. [3]
2. The segments can also be partitioned based on the wavenumber and the index of the receiver/output. This increases the number of spectral estimates and decreases the variance between consecutive segments.
3. Welch suggested dividing the measurements using data window functions, calculating a periodogram for each segment, averaging the periodograms to get a spectral estimate, and calculating the power spectrum using the fast Fourier transform (FFT). This increases computational speed. [4]
4. A smoothing window smooths the estimate by convolving the periodogram with the spectrum of the window; the wider the main lobe of the smoothing spectrum, the smoother the estimate becomes, at the cost of frequency resolution. [2]
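The following is a minimal sketch of the raw 2-D periodogram and a Welch-style averaged estimate, using NumPy; the function names, the segment size and the Hann window choice are illustrative assumptions, not part of any standard:

```python
import numpy as np

def periodogram_2d(x):
    """Raw 2-D periodogram: squared magnitude of the 2-D DFT,
    normalized by the number of samples."""
    N, M = x.shape
    return np.abs(np.fft.fft2(x)) ** 2 / (N * M)

def welch_2d(x, seg=(32, 32)):
    """Welch-style estimate: window each non-overlapping 2-D segment,
    then average the segment periodograms to reduce variance."""
    n, m = seg
    win = np.outer(np.hanning(n), np.hanning(m))
    scale = np.sum(win ** 2)
    segs = []
    for i in range(0, x.shape[0] - n + 1, n):
        for j in range(0, x.shape[1] - m + 1, m):
            block = x[i:i + n, j:j + m] * win
            segs.append(np.abs(np.fft.fft2(block)) ** 2 / scale)
    return np.mean(segs, axis=0)   # averaging lowers the variance

# Example: a noisy 2-D sinusoid; the averaged estimate is much smoother.
rng = np.random.default_rng(0)
n1, n2 = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
x = np.cos(2 * np.pi * (0.1 * n1 + 0.2 * n2)) + rng.standard_normal((256, 256))
P = welch_2d(x)
```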
This method gives an estimate whose frequency resolution is higher than that of classical estimation theory. The high-resolution estimation method uses a variable wavenumber window, which passes only certain wavenumbers and suppresses the others. Capon's [5] work established an estimation method based on wavenumber-frequency components, resulting in an estimate with higher frequency resolution. It is similar to the maximum likelihood method, as both use a similar optimization tool.
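Capon's estimator is usually written as $P(k) = 1 / \left(v^H(k)\, R^{-1}\, v(k)\right)$, where $R$ is the covariance matrix of the sensor measurements and $v(k)$ a steering vector. Below is a minimal sketch for a uniform line array, assuming NumPy; the function name, the diagonal-loading constant and the unit sensor spacing are illustrative assumptions:

```python
import numpy as np

def capon_spectrum(snapshots, wavenumbers, d=1.0):
    """Capon (minimum-variance) wavenumber spectrum for a uniform
    line array.  snapshots: (num_sensors, num_snapshots) complex data."""
    M, T = snapshots.shape
    R = snapshots @ snapshots.conj().T / T          # sample covariance
    R += 1e-3 * np.trace(R).real / M * np.eye(M)    # diagonal loading
    Rinv = np.linalg.inv(R)
    pos = d * np.arange(M)                          # sensor positions
    P = np.empty(len(wavenumbers))
    for i, k in enumerate(wavenumbers):
        v = np.exp(1j * k * pos)                    # steering vector
        P[i] = 1.0 / np.real(v.conj() @ Rinv @ v)   # Capon power
    return P
```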
In this type of estimation, the multidimensional signal is chosen to be a separable function. [1] Because of this property, the Fourier analysis can be viewed as taking place successively in each dimension. A time delay in the magnitude-squaring operation helps process the Fourier transform along each dimension: a discrete-time multidimensional Fourier transform is applied along each dimension, and in the end a maximum entropy estimator is applied and the magnitude is squared.
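A small illustration of the separable idea, assuming NumPy: the multidimensional transform factors into successive one-dimensional transforms, one per dimension, with the magnitude squared at the end (the maximum entropy step mentioned above is omitted for brevity):

```python
import numpy as np

# For a separable estimator, the multidimensional transform factors into
# successive 1-D transforms, one per dimension.
rng = np.random.default_rng(1)
x = rng.standard_normal((64, 64))

X_rows = np.fft.fft(x, axis=0)        # 1-D DFT along the first dimension
X_full = np.fft.fft(X_rows, axis=1)   # then along the second dimension
assert np.allclose(X_full, np.fft.fft2(x))

P = np.abs(X_full) ** 2 / x.size      # magnitude squared at the end
```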
This method is an extension of the 1-D technique called autoregressive spectral estimation. In autoregressive models, the output variables depend linearly on their own previous values. In this model, the estimation of the power spectrum reduces to estimating the coefficients from the autocorrelation coefficients of the random process, which are assumed to be known for a specific region. The power spectrum of a random process $x(n_1,n_2)$ is given by: [2]

$$P_x(\omega_1,\omega_2) = P_w(\omega_1,\omega_2)\,\left|H(\omega_1,\omega_2)\right|^2$$

Above, $P_w(\omega_1,\omega_2)$ is the power spectrum of a random process $w(n_1,n_2)$, which is given as the input to a system with transfer function $H(\omega_1,\omega_2)$ to obtain $x(n_1,n_2)$, [2] and $H(\omega_1,\omega_2)$ is:

$$H(\omega_1,\omega_2) = \frac{1}{\displaystyle\sum_{(k,l)\in S} a(k,l)\, e^{-j(\omega_1 k + \omega_2 l)}}, \qquad a(0,0) = 1$$
Therefore, the power spectrum estimation reduces to estimating the coefficients $a(k,l)$ of $H(\omega_1,\omega_2)$ from the autocorrelation function of the random process. The coefficients can also be estimated using the linear prediction formulation, which minimizes the mean square error between the actual random signal and the predicted values of the random signal.
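As a sketch of the 1-D case that this method extends, the following estimates AR coefficients from sample autocorrelations via the Yule-Walker equations and evaluates $P(\omega)=\sigma^2/|A(\omega)|^2$; the function name and the sign convention $A(\omega)=1-\sum_k a_k e^{-j\omega k}$ are assumptions of this illustration:

```python
import numpy as np

def yule_walker_psd(x, order, nfft=512):
    """1-D autoregressive spectral estimate via the Yule-Walker
    (autocorrelation) equations; the multidimensional method extends
    this idea to coefficient arrays a(k, l)."""
    x = x - x.mean()
    N = len(x)
    r = np.array([x[:N - k] @ x[k:] for k in range(order + 1)]) / N
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])            # Toeplitz matrix
    a = np.linalg.solve(R, r[1:])                    # AR coefficients
    sigma2 = r[0] - a @ r[1:]                        # driving-noise variance
    w = np.linspace(0, np.pi, nfft)
    A = 1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a
    return w, sigma2 / np.abs(A) ** 2                # P(w) = sigma^2 / |A(w)|^2
```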
In this method of spectral estimation, we try to find the spectral estimate whose inverse Fourier transform matches the known autocorrelation coefficients, maximizing the entropy of the spectral estimate subject to those matching constraints. [2] The entropy is given as: [1] [2]

$$H = \frac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} \log P(\omega_1,\omega_2)\, d\omega_1\, d\omega_2$$
The power spectrum can be expressed as a sum over the known autocorrelation coefficients and the unknown autocorrelation coefficients. By adjusting the values of the unconstrained (unknown) coefficients, the entropy can be maximized.
The maximum entropy estimate is of the form: [2]

$$P_{ME}(\omega_1,\omega_2) = \frac{1}{\displaystyle\sum_{(l,m)\in S} \lambda(l,m)\, e^{-j(\omega_1 l + \omega_2 m)}}$$

where the coefficients $\lambda(l,m)$ must be chosen so that the known autocorrelation coefficients are matched.
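In 1-D, the maximum entropy solution coincides with an autoregressive model matched to the known autocorrelation values and can be computed with the Levinson-Durbin recursion; no comparable closed form exists in 2-D. A minimal 1-D sketch, assuming a valid (positive-definite) autocorrelation sequence r[0..p]:

```python
import numpy as np

def mem_psd_1d(r, nfft=512):
    """1-D maximum-entropy spectrum from known autocorrelation values
    r[0..p], via the Levinson-Durbin recursion."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    e = r[0]                                         # prediction error power
    for k in range(1, p + 1):
        lam = -(r[1:k + 1][::-1] @ a[:k]) / e        # reflection coefficient
        a[:k + 1] += lam * a[:k + 1][::-1]           # order-update of a
        e *= (1 - lam ** 2)
    w = np.linspace(0, np.pi, nfft)
    A = np.exp(-1j * np.outer(w, np.arange(p + 1))) @ a
    return w, e / np.abs(A) ** 2                     # maximum-entropy spectrum
```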
This is a relatively new approach. The improved maximum likelihood method (IMLM) is a combination of two MLM (maximum likelihood) estimators. [1] [7] The improved maximum likelihood of two two-dimensional arrays A and B at a wavenumber k (which gives information about the orientation of the array in space) is given by the relation: [8]

$$IMLM(k) = w(k)\, MLM_A(k)$$
Array B is a subset of A. Therefore, assuming that A is larger than B, if there is a difference between the MLM of A and the MLM of B, then a significant part of the estimated spectral energy at that frequency may be due to power leakage from other frequencies. De-emphasizing the MLM of A may improve the spectral estimate. This is accomplished by multiplying by a weighting function which is smaller when there is a greater difference between the MLM of B and the MLM of A.
Here $w(k)$ is the weighting function, whose expression is given in [7]; it decreases as the difference between $MLM_B(k)$ and $MLM_A(k)$ grows.
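The sketch below shows only the skeleton of the approach, using the Capon/MLM estimator from the earlier sketch; the weighting used here is an illustrative stand-in that satisfies the stated property (smaller where the two estimates diverge), not the published expression of [7]:

```python
import numpy as np

def mlm(snapshots, wavenumbers, d=1.0):
    """Capon/MLM wavenumber spectrum for a uniform line array
    (same estimator as in the earlier sketch)."""
    M, T = snapshots.shape
    R = snapshots @ snapshots.conj().T / T              # sample covariance
    R += 1e-3 * np.trace(R).real / M * np.eye(M)        # diagonal loading
    Rinv = np.linalg.inv(R)
    v = np.exp(1j * np.outer(d * np.arange(M), wavenumbers))
    return 1.0 / np.real(np.einsum("mk,mn,nk->k", v.conj(), Rinv, v))

def imlm(snapshots, wavenumbers, sub_sensors=4):
    """IMLM skeleton: de-emphasize the full-array MLM where it
    disagrees with the MLM of a subarray.  The weight is an
    illustrative stand-in, NOT the published expression of [7]."""
    P_A = mlm(snapshots, wavenumbers)                   # full array A
    P_B = mlm(snapshots[:sub_sensors], wavenumbers)     # subarray B of A
    w = np.minimum(P_A, P_B) / np.maximum(P_A, P_B)     # in (0, 1]
    return w * P_A                                      # de-emphasized MLM of A
```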
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
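A minimal NumPy illustration of finding a repeating pattern with the sample autocorrelation; the function name and the hidden period are choices of this example:

```python
import numpy as np

def autocorr(x):
    """Sample autocorrelation of a 1-D signal for lags 0..N-1."""
    x = x - x.mean()
    N = len(x)
    full = np.correlate(x, x, mode="full")[N - 1:]   # lags 0..N-1
    return full / full[0]                            # normalize so r[0] = 1

# A noisy periodic signal: peaks in r reveal the hidden period of 20.
rng = np.random.default_rng(2)
n = np.arange(400)
x = np.sin(2 * np.pi * n / 20) + rng.standard_normal(400)
r = autocorr(x)
```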
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
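A small numerical check of this sampling relationship, assuming NumPy; the sequence values are arbitrary:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])        # finite sequence, N = 4
N = len(x)

def dtft(x, w):
    """DTFT of a finite sequence, evaluated at arbitrary frequencies w."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * wi * n))
                     for wi in np.atleast_1d(w)])

# The N-point DFT samples the DTFT at the frequencies w_k = 2*pi*k/N.
w_k = 2 * np.pi * np.arange(N) / N
assert np.allclose(np.fft.fft(x), dtft(x, w_k))
```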
In mathematics, Fourier analysis is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The power spectrum of a time series describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of a signal, analyzed in terms of its frequency content, is called its spectrum.
In signal processing, a periodogram is an estimate of the spectral density of a signal. The term was coined by Arthur Schuster in 1898. Today, the periodogram is a component of more sophisticated methods. It is the most common tool for examining the amplitude vs frequency characteristics of FIR filters and window functions. FFT spectrum analyzers are also implemented as a time-sequence of periodograms.
In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.
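For additive noise uncorrelated with the signal, the non-causal Wiener filter has the well-known frequency response $H(f) = S_s(f)/\left(S_s(f)+S_n(f)\right)$. A minimal frequency-domain sketch, assuming the signal and noise spectra are known on the FFT grid (the function name is illustrative):

```python
import numpy as np

def wiener_denoise(y, S_signal, S_noise):
    """Frequency-domain (non-causal) Wiener filter for additive noise:
    H(f) = S_s(f) / (S_s(f) + S_n(f)), applied via the FFT.
    S_signal, S_noise: known spectra sampled on the FFT grid of y."""
    H = S_signal / (S_signal + S_noise)          # MMSE frequency response
    return np.real(np.fft.ifft(H * np.fft.fft(y)))
```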
In mathematics, the discrete-time Fourier transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of values.
Array processing is a wide area of research in the field of signal processing that extends from the simplest form of one-dimensional line arrays to two- and three-dimensional array geometries. Array structure can be defined as a set of sensors that are spatially separated, e.g. radio antennas and seismic arrays. The sensors used for a specific problem may vary widely, for example microphones, accelerometers and telescopes. However, many similarities exist, the most fundamental of which may be an assumption of wave propagation. Wave propagation means there is a systematic relationship between the signals received on spatially separated sensors. By creating a physical model of the wave propagation, or in machine learning applications a training data set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.
In statistics, econometrics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term; thus the model is in the form of a stochastic difference equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.
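Concretely, the AR(p) model is the stochastic difference equation

$$X_t = c + \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t,$$

where $\varphi_1,\ldots,\varphi_p$ are the model parameters, $c$ is a constant and $\varepsilon_t$ is white noise.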
A cyclostationary process is a signal having statistical properties that vary cyclically with time. A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year.
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectrum of that process.
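In discrete time, the theorem states

$$S_x(\omega) = \sum_{k=-\infty}^{\infty} r_x[k]\, e^{-j\omega k}, \qquad r_x[k] = \mathbb{E}\!\left[x[n]\, x^*[n+k]\right],$$

i.e. the power spectrum is the (discrete-time) Fourier transform of the autocorrelation sequence.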
Geophysical survey is the systematic collection of geophysical data for spatial studies. Detection and analysis of geophysical signals form the core of geophysical signal processing. The magnetic and gravitational fields emanating from the Earth's interior hold essential information concerning seismic activities and the internal structure, so detection and analysis of the electric and magnetic fields is crucial. Since electromagnetic and gravitational waves are multi-dimensional signals, all the 1-D transformation techniques can be extended to their analysis as well, making multi-dimensional signal processing techniques directly relevant to geophysical data.
In signal processing, multitaper is a spectral density estimation technique developed by David J. Thomson. It can estimate the power spectrum $S_X$ of a stationary ergodic finite-variance random process X, given a finite contiguous realization of X as data.
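A minimal sketch of the multitaper idea using SciPy's DPSS (Slepian) windows; the time-bandwidth product, taper count and normalization are illustrative choices:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=3, K=5, nfft=1024):
    """Thomson multitaper estimate: average the periodograms computed
    with K orthogonal DPSS tapers of time-bandwidth product NW."""
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                          # shape (K, N)
    tapers /= np.sqrt(np.sum(tapers ** 2, axis=1, keepdims=True))  # unit energy
    specs = np.abs(np.fft.rfft(tapers * x, n=nfft, axis=1)) ** 2
    return specs.mean(axis=0)                             # average over tapers
```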
Maximum entropy spectral estimation is a method of spectral density estimation. The goal is to improve the spectral quality based on the principle of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whose autocorrelation function agrees with the known values. This assumption, which corresponds to the concept of maximum entropy as used in both statistical mechanics and information theory, is maximally non-committal with regard to the unknown values of the autocorrelation function of the time series. It is simply the application of maximum entropy modeling to any type of spectrum and is used in all fields where data is presented in spectral form. The usefulness of the technique varies based on the source of the spectral data since it is dependent on the amount of assumed knowledge about the spectrum that can be applied to the model.
In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems. Unlike with Fourier analysis, data need not be equally spaced to use LSSA.
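A hedged sketch of the least-squares fit at each trial frequency, assuming NumPy; the grid of trial frequencies and the power definition (squared amplitude of the fitted sinusoid) are choices of this illustration:

```python
import numpy as np

def lssa_power(t, y, freqs):
    """Least-squares spectral power: at each trial frequency, fit
    a*cos + b*sin to the (possibly unevenly spaced) samples (t, y)."""
    y = y - y.mean()
    P = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        P[i] = coef @ coef                 # squared amplitude of the fit
    return P

# Unevenly sampled sinusoid: the 0.7 Hz peak is still recovered.
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 50, 300))
y = np.sin(2 * np.pi * 0.7 * t) + 0.5 * rng.standard_normal(300)
P = lssa_power(t, y, np.linspace(0.05, 2.0, 400))
```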
In computer networks, self-similarity is a feature of network data transfer dynamics. When modeling network data dynamics, traditional time series models such as the autoregressive moving average model are not appropriate, because these models provide only a finite number of parameters, and thus interaction within a finite time window, whereas network data usually have a long-range dependent temporal structure. A self-similar process is one way of modeling network data dynamics with such long-range correlation; its parameters can be estimated and graphed to characterize the self-similarity of network data.
In mathematical analysis and applications, multidimensional transforms are used to analyze the frequency content of signals in a domain of two or more dimensions.
In statistics, Whittle likelihood is an approximation to the likelihood function of a stationary Gaussian time series. It is named after the mathematician and statistician Peter Whittle, who introduced it in his PhD thesis in 1951. It is commonly used in time series analysis and signal processing for parameter estimation and signal detection.