MUSIC (MUltiple SIgnal Classification) is an algorithm used for frequency estimation [1] [2] [3] and radio direction finding. [4]
In many practical signal processing problems, the objective is to estimate from measurements a set of constant parameters upon which the received signals depend. There have been several approaches to such problems including the so-called maximum likelihood (ML) method of Capon (1969) and Burg's maximum entropy (ME) method. Although often successful and widely used, these methods have certain fundamental limitations (especially bias and sensitivity in parameter estimates), largely because they use an incorrect model (e.g., AR rather than special ARMA) of the measurements.
Pisarenko (1973) was one of the first to exploit the structure of the data model, doing so in the context of estimation of parameters of complex sinusoids in additive noise using a covariance approach. Schmidt (1977), while working at Northrop Grumman, and independently Bienvenu and Kopp (1979), were the first to correctly exploit the measurement model in the case of sensor arrays of arbitrary form. Schmidt, in particular, accomplished this by first deriving a complete geometric solution in the absence of noise, then cleverly extending the geometric concepts to obtain a reasonable approximate solution in the presence of noise. The resulting algorithm was called MUSIC (MUltiple SIgnal Classification) and has been widely studied.
In a detailed evaluation based on thousands of simulations, the Massachusetts Institute of Technology's Lincoln Laboratory concluded in 1998 that, among currently accepted high-resolution algorithms, MUSIC was the most promising and a leading candidate for further study and actual hardware implementation. [5] However, although the performance advantages of MUSIC are substantial, they are achieved at a cost in computation (searching over parameter space) and storage (of array calibration data). [6]
The MUSIC method assumes that a signal vector, $\mathbf{x}$, consists of $p$ complex exponentials, whose frequencies $\omega$ are unknown, in the presence of Gaussian white noise, $\mathbf{n}$, as given by the linear model

$$\mathbf{x} = \mathbf{A}\mathbf{s} + \mathbf{n}.$$

Here $\mathbf{A} = [\mathbf{a}(\omega_1), \dots, \mathbf{a}(\omega_p)]$ is an $M \times p$ Vandermonde matrix of steering vectors $\mathbf{a}(\omega) = [1, e^{j\omega}, e^{j2\omega}, \dots, e^{j(M-1)\omega}]^T$ and $\mathbf{s} = [s_1, \dots, s_p]^T$ is the amplitude vector. A crucial assumption is that the number of sources, $p$, is less than the number of elements in the measurement vector, $M$, i.e. $p < M$.
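As a rough illustration of this model, here is a minimal numpy sketch with made-up values; the names (`steering`, `M`, `p`) and the chosen frequencies are assumptions for this example only, not taken from the article or any particular library.

```python
import numpy as np

# Illustrative parameters: M sensors/samples, p = 2 complex exponentials, p < M.
M = 8
freqs = np.array([0.3, 0.7])        # "unknown" normalized frequencies (rad/sample)
p = len(freqs)

def steering(w, M):
    # Steering vector a(w) = [1, e^{jw}, ..., e^{j(M-1)w}]^T
    return np.exp(1j * w * np.arange(M))

A = np.column_stack([steering(w, M) for w in freqs])     # M x p Vandermonde matrix
s = np.array([1.0 + 0.5j, 0.8 - 0.2j])                   # amplitude vector
sigma = 0.1                                              # noise standard deviation
n = sigma * (np.random.randn(M) + 1j * np.random.randn(M)) / np.sqrt(2)

x = A @ s + n                                            # one snapshot: x = A s + n
```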
The $M \times M$ autocorrelation matrix of $\mathbf{x}$ is then given by

$$\mathbf{R}_x = \mathbf{A} \mathbf{R}_s \mathbf{A}^H + \sigma^2 \mathbf{I},$$

where $\sigma^2$ is the noise variance, $\mathbf{I}$ is the $M \times M$ identity matrix, and $\mathbf{R}_s$ is the $p \times p$ autocorrelation matrix of $\mathbf{s}$.
The autocorrelation matrix $\mathbf{R}_x$ is traditionally estimated using the sample correlation matrix

$$\widehat{\mathbf{R}}_x = \frac{1}{N} \mathbf{X} \mathbf{X}^H,$$

where $N$ is the number of vector observations and $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N]$. Given the estimate of $\mathbf{R}_x$, MUSIC estimates the frequency content of the signal or autocorrelation matrix using an eigenspace method.
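Continuing the illustrative sketch above (variable names remain assumptions, not part of the article), the sample correlation matrix can be formed from $N$ snapshots stacked as columns of a data matrix:

```python
# N snapshots with independent random amplitudes and noise (unit-power sources assumed).
N = 200
S = (np.random.randn(p, N) + 1j * np.random.randn(p, N)) / np.sqrt(2)
Noise = sigma * (np.random.randn(M, N) + 1j * np.random.randn(M, N)) / np.sqrt(2)
X = A @ S + Noise                      # M x N data matrix X = [x_1, ..., x_N]
R_hat = (X @ X.conj().T) / N           # sample correlation matrix (1/N) X X^H
```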
Since $\mathbf{R}_x$ is a Hermitian matrix, all of its $M$ eigenvectors $\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_M\}$ are orthogonal to each other. If the eigenvalues of $\mathbf{R}_x$ are sorted in decreasing order, the eigenvectors $\{\mathbf{v}_1, \dots, \mathbf{v}_p\}$ corresponding to the $p$ largest eigenvalues (i.e. directions of largest variability) span the signal subspace $\mathcal{U}_S$. The remaining $M - p$ eigenvectors correspond to eigenvalues equal to $\sigma^2$ and span the noise subspace $\mathcal{U}_N$, which is orthogonal to the signal subspace, $\mathcal{U}_S \perp \mathcal{U}_N$.
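In the same sketch, the eigendecomposition and the split into signal and noise subspaces (with $p$ assumed known) might look like this:

```python
# Hermitian eigendecomposition; numpy returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(R_hat)
order = np.argsort(eigvals)[::-1]      # re-sort in decreasing order
eigvecs = eigvecs[:, order]

U_S = eigvecs[:, :p]                   # signal subspace: p largest eigenvalues
U_N = eigvecs[:, p:]                   # noise subspace: remaining M - p eigenvectors
```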
Note that for $M = p + 1$, MUSIC is identical to Pisarenko harmonic decomposition. The general idea behind the MUSIC method is to use all the eigenvectors that span the noise subspace to improve the performance of the Pisarenko estimator.
Since any signal vector $\mathbf{e}$ that resides in the signal subspace $\mathbf{e} \in \mathcal{U}_S$ must be orthogonal to the noise subspace, $\mathbf{e} \perp \mathcal{U}_N$, it must be that $\mathbf{e} \perp \mathbf{v}_i$ for all the eigenvectors $\{\mathbf{v}_i\}_{i=p+1}^{M}$ that span the noise subspace. In order to measure the degree of orthogonality of $\mathbf{e}$ with respect to all the $\mathbf{v}_i \in \mathcal{U}_N$, the MUSIC algorithm defines the squared norm

$$d^2 = \|\mathbf{U}_N^H \mathbf{e}\|^2 = \mathbf{e}^H \mathbf{U}_N \mathbf{U}_N^H \mathbf{e} = \sum_{i=p+1}^{M} |\mathbf{e}^H \mathbf{v}_i|^2,$$

where the matrix $\mathbf{U}_N = [\mathbf{v}_{p+1}, \dots, \mathbf{v}_M]$ is the matrix of eigenvectors that span the noise subspace $\mathcal{U}_N$. If $\mathbf{e} \in \mathcal{U}_S$, then $d^2 = 0$ as implied by the orthogonality condition. Taking the reciprocal of the squared norm expression creates sharp peaks at the signal frequencies. The frequency estimation function for MUSIC (or the pseudo-spectrum) is

$$\hat{P}_{MU}(e^{j\omega}) = \frac{1}{\mathbf{e}^H \mathbf{U}_N \mathbf{U}_N^H \mathbf{e}} = \frac{1}{\sum_{i=p+1}^{M} |\mathbf{e}^H \mathbf{v}_i|^2},$$
where the $\mathbf{v}_i$ are the noise eigenvectors and

$$\mathbf{e} = [1, e^{j\omega}, e^{j2\omega}, \dots, e^{j(M-1)\omega}]^T$$

is the candidate steering vector. The locations of the $p$ largest peaks of the estimation function give the frequency estimates for the $p$ signal components.
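A possible way to evaluate the pseudo-spectrum on a dense frequency grid and read off the peaks, continuing the same sketch (the grid size and the simple peak-picking rule are arbitrary choices for this example):

```python
# Evaluate P(w) = 1 / ||U_N^H e(w)||^2 over a grid of candidate frequencies.
grid = np.linspace(0, np.pi, 4096)
P_mu = np.array([1.0 / np.linalg.norm(U_N.conj().T @ steering(w, M)) ** 2 for w in grid])

# Keep the p strongest local maxima as frequency estimates.
peaks = [k for k in range(1, len(grid) - 1)
         if P_mu[k] > P_mu[k - 1] and P_mu[k] > P_mu[k + 1]]
peaks = sorted(peaks, key=lambda k: P_mu[k], reverse=True)[:p]
freq_estimates = np.sort(grid[peaks])  # should lie close to the true 0.3 and 0.7
```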
MUSIC is a generalization of Pisarenko's method, and it reduces to Pisarenko's method when $M = p + 1$. In Pisarenko's method, only a single eigenvector is used to form the denominator of the frequency estimation function; the eigenvector is interpreted as a set of autoregressive coefficients, whose zeros can be found analytically or with polynomial root-finding algorithms. In contrast, MUSIC sums the contributions of several such eigenvectors, so exact zeros may no longer exist; instead the denominator has local minima, which appear as peaks of the estimation function and are located by a computational search.
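For contrast, here is a sketch of the Pisarenko special case $M = p + 1$, in which the single noise eigenvector is treated as polynomial coefficients and rooted directly; for simplicity the exact covariance is used, assuming unit-power uncorrelated sources and the variables from the sketch above.

```python
M_pis = p + 1
A_pis = np.column_stack([steering(w, M_pis) for w in freqs])
R_pis = A_pis @ A_pis.conj().T + sigma**2 * np.eye(M_pis)   # R_s = I assumed

vals, vecs = np.linalg.eigh(R_pis)
v_noise = vecs[:, 0]                    # eigenvector of the smallest eigenvalue
roots = np.roots(v_noise[::-1])         # zeros of v_0 + v_1 z + ... + v_{M-1} z^{M-1}
pisarenko_freqs = np.sort(-np.angle(roots))   # roots lie at e^{-j w_i}, so recover w_i
```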
The fundamental observation on which MUSIC and other subspace decomposition methods are based concerns the rank of the autocorrelation matrix $\mathbf{R}_x$, which is related to the number of signal sources $p$ as follows.

If the sources are complex, then $M > p$ and the dimension of the signal subspace $\mathcal{U}_S$ is $p$. If the sources are real, then $M > 2p$ and the dimension of the signal subspace is $2p$, i.e. each real sinusoid is generated by two basis vectors.

This fundamental result, although often skipped in spectral analysis books, is the reason why the input signal can be distributed into $p$ ($2p$ for real-valued signals) signal subspace eigenvectors spanning $\mathcal{U}_S$ and $M - p$ ($M - 2p$) noise subspace eigenvectors spanning $\mathcal{U}_N$. It is based on signal embedding theory [2] [7] and can also be explained by the topological theory of manifolds. [4]
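A quick numerical check of this rank statement, still within the illustrative sketch (unit-power, uncorrelated complex sources assumed):

```python
# The noise-free covariance A R_s A^H is M x M but has rank p for p complex exponentials.
R_signal = A @ A.conj().T               # R_s = I assumed
print(np.linalg.matrix_rank(R_signal))  # -> 2, i.e. p, even though the matrix is 8 x 8
```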
MUSIC outperforms simple methods such as picking peaks of DFT spectra in the presence of noise, when the number of components is known in advance, because it exploits knowledge of this number to ignore the noise in its final report.
Unlike the DFT, it is able to estimate frequencies with a resolution finer than one DFT bin, because its estimation function can be evaluated at any frequency, not just at the DFT bin frequencies. This is a form of superresolution.
Its chief disadvantage is that it requires the number of components to be known in advance, so the original method cannot be used in more general cases. Methods exist for estimating the number of source components purely from statistical properties of the autocorrelation matrix; see, e.g., [8]. In addition, MUSIC assumes that coexistent sources are uncorrelated, which limits its practical applications.
Recent iterative semi-parametric methods offer robust superresolution despite highly correlated sources, e.g., SAMV. [9] [10]
A modified version of MUSIC, denoted as Time-Reversal MUSIC (TR-MUSIC), has recently been applied to computational time-reversal imaging. [11] [12] The MUSIC algorithm has also been implemented for fast detection of DTMF (dual-tone multi-frequency signaling) frequencies in the form of a C library, libmusic [13] (including a MATLAB implementation). [14]
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT (IDFT) is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
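As a small self-contained illustration of the sampling relationship described here (numpy-based; the helper `dtft` exists only for this example):

```python
import numpy as np

# The DFT of a length-N sequence equals its DTFT sampled at frequencies 2*pi*k/N.
x = np.array([1.0, 2.0, 0.5, -1.0])
N = len(x)
X_dft = np.fft.fft(x)

def dtft(x, w):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-1j * w * n))

print(np.allclose(X_dft, [dtft(x, 2 * np.pi * k / N) for k in range(N)]))  # True
```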
In engineering, a transfer function of a system, sub-system, or component is a mathematical function that models the system's output for each possible input. It is widely used in electronic engineering tools like circuit simulators and control systems. In simple cases, this function can be represented as a two-dimensional graph of an independent scalar input versus the dependent scalar output. Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory.
In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function $f$ in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as $Df = \lambda f$ for some scalar eigenvalue $\lambda$. The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions.
H∞ methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use H∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. H∞ techniques have the advantage over classical control techniques in that H∞ techniques are readily applicable to problems involving multivariate systems with cross-coupling between channels; disadvantages of H∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well-handled. These methods were introduced into control theory in the late 1970s–early 1980s by George Zames, J. William Helton, and Allen Tannenbaum.
Synthetic-aperture radar (SAR) is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target region to provide finer spatial resolution than conventional stationary beam-scanning radars. SAR is typically mounted on a moving platform, such as an aircraft or spacecraft, and has its origins in an advanced form of side looking airborne radar (SLAR). The distance the SAR device travels over a target during the period when the target scene is illuminated creates the large synthetic antenna aperture. Typically, the larger the aperture, the higher the image resolution will be, regardless of whether the aperture is physical or synthetic – this allows SAR to create high-resolution images with comparatively small physical antennas. For a fixed antenna size and orientation, objects which are further away remain illuminated longer – therefore SAR has the property of creating larger synthetic apertures for more distant objects, which results in a consistent spatial resolution over a range of viewing distances.
A sensor array is a group of sensors, usually deployed in a certain geometric pattern, used for collecting and processing electromagnetic or acoustic signals. The advantage of using a sensor array over a single sensor lies in the fact that an array adds new dimensions to the observation, helping to estimate more parameters and improve estimation performance. For example, an array of radio antenna elements used for beamforming can increase antenna gain in the direction of the signal while decreasing the gain in other directions, i.e., increasing the signal-to-noise ratio (SNR) by amplifying the signal coherently. Another example of a sensor array application is estimating the direction of arrival of impinging electromagnetic waves. The related processing method is called array signal processing. A third example is chemical sensor arrays, which use multiple chemical sensors for fingerprint detection in complex mixtures or sensing environments. Application examples of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring, astronomical observations, fault diagnosis, etc.
Array processing is a wide area of research in the field of signal processing that extends from the simplest form of one-dimensional line arrays to two- and three-dimensional array geometries. An array structure can be defined as a set of sensors that are spatially separated, e.g. radio antennas and seismic arrays. The sensors used for a specific problem may vary widely, for example microphones, accelerometers and telescopes. However, many similarities exist, the most fundamental of which may be an assumption of wave propagation. Wave propagation means there is a systematic relationship between the signals received on spatially separated sensors. By creating a physical model of the wave propagation, or in machine learning applications a training data set, the relationships between the signals received on spatially separated sensors can be leveraged for many applications.
The Rayleigh–Ritz method is a direct numerical method for approximating eigenvalues, which originated in the context of solving physical boundary value problems and is named after Lord Rayleigh and Walther Ritz.
In statistics, generalized least squares (GLS) is a method used to estimate the unknown parameters in a linear regression model. It is used when there is a non-zero amount of correlation between the residuals in the regression model. GLS is employed to improve statistical efficiency and reduce the risk of drawing erroneous inferences, as compared to conventional least squares and weighted least squares methods. It was first described by Alexander Aitken in 1935.
In the mathematical discipline of functional analysis, the concept of a compact operator on Hilbert space is an extension of the concept of a matrix acting on a finite-dimensional vector space; in Hilbert space, compact operators are precisely the closure of finite-rank operators in the topology induced by the operator norm. As such, results from matrix theory can sometimes be extended to compact operators using similar arguments. By contrast, the study of general operators on infinite-dimensional spaces often requires a genuinely different approach.
Pisarenko harmonic decomposition, also referred to as Pisarenko's method, is a method of frequency estimation. This method assumes that a signal consists of $p$ complex exponentials in the presence of white noise. Because the number of complex exponentials must be known a priori, it is somewhat limited in its usefulness.
In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density of a signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
In time series analysis, singular spectrum analysis (SSA) is a nonparametric spectral estimation method. It combines elements of classical time series analysis, multivariate statistics, multivariate geometry, dynamical systems and signal processing. Its roots lie in the classical Karhunen (1946)–Loève spectral decomposition of time series and random fields and in the Mañé (1981)–Takens (1981) embedding theorem. SSA can be an aid in the decomposition of time series into a sum of components, each having a meaningful interpretation. The name "singular spectrum analysis" relates to the spectrum of eigenvalues in a singular value decomposition of a covariance matrix, and not directly to a frequency domain decomposition.
Estimation of signal parameters via rotational invariance techniques (ESPRIT) is a technique to determine the parameters of a mixture of sinusoids in background noise. This technique was first proposed for frequency estimation. However, with the introduction of phased-array systems in everyday technology, it is also used for angle-of-arrival estimation.
Common spatial pattern (CSP) is a mathematical procedure used in signal processing for separating a multivariate signal into additive subcomponents which have maximum differences in variance between two windows.
In machine learning, the kernel embedding of distributions comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space on which a sensible kernel function may be defined. For example, various kernels have been proposed for learning from data which are: vectors in $\mathbb{R}^d$, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf. A review of recent works on kernel embedding of distributions can be found in.
Light field microscopy (LFM) is a scanning-free 3-dimensional (3D) microscopic imaging method based on the theory of the light field. This technique allows sub-second (~10 Hz) large volumetric imaging with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, which has never been achieved by other methods. Just as in traditional light field rendering, there are two steps for LFM imaging: light field capture and processing. In most setups, a microlens array is used to capture the light field. As for processing, it can be based on two kinds of representations of light propagation: the ray optics picture and the wave optics picture. The Stanford University Computer Graphics Laboratory published their first prototype LFM in 2006 and has been working on the cutting edge since then.
The streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for the incompressible Navier–Stokes equations can be used for finite element computations of high-Reynolds-number incompressible flow using equal-order finite element spaces, by introducing additional stabilization terms into the Navier–Stokes Galerkin formulation.
Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency.