Estimation of signal parameters via rotational invariance techniques

Figure: Example of separation into subarrays (2D ESPRIT)

Estimation of signal parameters via rotational invariance techniques (ESPRIT) is a technique to determine the parameters of a mixture of sinusoids in background noise. The technique was first proposed for frequency estimation.[1] However, with the introduction of phased-array systems in everyday technology, it is also used for angle-of-arrival estimation.[2]


One-dimensional ESPRIT

At instance $t$, each of the $M$ (complex-valued) output signals (measurements) $y_m[t]$, $m = 1, \ldots, M$, of the system is related to the $K$ (complex-valued) input signals $x_k[t]$, $k = 1, \ldots, K$, as

$$y_m[t] = \sum_{k=1}^{K} a_{m,k}\, x_k[t] + n_m[t],$$

where $n_m[t]$ denotes the noise added by the system. The one-dimensional form of ESPRIT can be applied if the weights have the form $a_{m,k} = e^{j(m-1)\omega_k}$, whose phases are integer multiples of some radial frequency $\omega_k$. This frequency depends only on the index $k$ of the system's input. The goal of ESPRIT is to estimate the $\omega_k$'s, given the outputs $y_m[t]$ and the number of input signals, $K$. Since the radial frequencies are the actual objectives, $a_{m,k}$ is denoted as $a_m(\omega_k)$.

Collating the weights as $\mathbf a(\omega_k) = [a_1(\omega_k), \ldots, a_M(\omega_k)]^\mathsf{T} = [1, e^{j\omega_k}, \ldots, e^{j(M-1)\omega_k}]^\mathsf{T}$ and the $M$ output signals at instance $t$ as $\mathbf y[t] = [y_1[t], \ldots, y_M[t]]^\mathsf{T}$, we have

$$\mathbf y[t] = \sum_{k=1}^{K} \mathbf a(\omega_k)\, x_k[t] + \mathbf n[t],$$

where $\mathbf n[t] = [n_1[t], \ldots, n_M[t]]^\mathsf{T}$. Further, when the weight vectors are put into a Vandermonde matrix $A = [\mathbf a(\omega_1), \ldots, \mathbf a(\omega_K)]$, and the inputs at instance $t$ into a vector $\mathbf x[t] = [x_1[t], \ldots, x_K[t]]^\mathsf{T}$, we can write

$$\mathbf y[t] = A\,\mathbf x[t] + \mathbf n[t].$$

With several measurements at instances $t = 1, 2, \ldots, T$ and the notations $Y = [\mathbf y[1], \ldots, \mathbf y[T]]$, $X = [\mathbf x[1], \ldots, \mathbf x[T]]$ and $N = [\mathbf n[1], \ldots, \mathbf n[T]]$, the model equation becomes

$$Y = AX + N.$$
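To make the model concrete, here is a minimal NumPy sketch (not part of the original article) that generates synthetic measurements $Y = AX + N$; the array size, number of sources, frequencies, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, T = 8, 2, 200                        # sensors, sources, snapshots (assumed values)
omega = np.array([0.5, 1.2])               # hypothetical radial frequencies to recover

# Vandermonde steering matrix A: columns a(omega_k) = [1, e^{j w_k}, ..., e^{j(M-1) w_k}]^T
A = np.exp(1j * np.outer(np.arange(M), omega))                               # shape (M, K)
X = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))           # input signals
N = 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))   # additive noise
Y = A @ X + N                              # measurement matrix, shape (M, T)
```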

Dividing into virtual sub-arrays

Figure: Maximum overlapping of two sub-arrays ($N$ denotes the number of sensors in the array, $m$ is the number of sensors in each sub-array, and $J_1$ and $J_2$ are selection matrices)

The weight vector $\mathbf a(\omega_k)$ has the property that adjacent entries are related:

$$[\mathbf a(\omega_k)]_{m+1} = e^{j\omega_k}\, [\mathbf a(\omega_k)]_m.$$

To express this relation for the whole vector $\mathbf a(\omega_k)$, two selection matrices $J_1$ and $J_2$ are introduced:

$$J_1 = [\, I_{M-1} \;\; \mathbf 0 \,] \quad \text{and} \quad J_2 = [\, \mathbf 0 \;\; I_{M-1} \,].$$

Here, $I_{M-1}$ is an identity matrix of size $(M-1)$ and $\mathbf 0$ is a vector of zeros.

The vector $J_1\,\mathbf a(\omega_k)$ [$J_2\,\mathbf a(\omega_k)$] contains all elements of $\mathbf a(\omega_k)$ except the last [first] one. Thus,

$$J_2\,\mathbf a(\omega_k) = e^{j\omega_k}\, J_1\,\mathbf a(\omega_k) \quad \text{and} \quad J_2 A = J_1 A H, \quad \text{where } H = \operatorname{diag}\!\left(e^{j\omega_1}, \ldots, e^{j\omega_K}\right).$$

The above relation is the first major observation required for ESPRIT. The second major observation concerns the signal subspace that can be computed from the output signals.
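As an illustrative check (a sketch with assumed values, not from the article), the shift relation $J_2\,\mathbf a(\omega) = e^{j\omega} J_1\,\mathbf a(\omega)$ can be verified numerically:

```python
import numpy as np

M, w = 8, 0.5                                   # assumed array size and radial frequency
a = np.exp(1j * w * np.arange(M))               # steering vector a(omega)

# Selection matrices: J1 keeps the first M-1 entries, J2 keeps the last M-1 entries
J1 = np.hstack([np.eye(M - 1), np.zeros((M - 1, 1))])
J2 = np.hstack([np.zeros((M - 1, 1)), np.eye(M - 1)])

assert np.allclose(J2 @ a, np.exp(1j * w) * (J1 @ a))   # rotational invariance holds
```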

Signal subspace

The singular value decomposition (SVD) of $Y$ is given as

$$Y = U \Sigma V^*,$$

where $U \in \mathbb{C}^{M \times M}$ and $V \in \mathbb{C}^{T \times T}$ are unitary matrices and $\Sigma$ is a diagonal matrix of size $M \times T$ that holds the singular values from the largest (top left) in descending order. The operator $*$ denotes the complex-conjugate transpose (Hermitian transpose).

Let us assume that $T \geq M$. Notice that we have $K$ input signals. If there were no noise, there would only be $K$ non-zero singular values. We assume that the $K$ largest singular values stem from these input signals and that the other singular values stem from noise. The matrices in the SVD of $Y$ can be partitioned into submatrices, where some submatrices correspond to the signal subspace and some correspond to the noise subspace:

$$U = [\, U_\mathrm{S} \;\; U_\mathrm{N} \,], \quad \Sigma = \begin{bmatrix} \Sigma_\mathrm{S} & 0 & 0 \\ 0 & \Sigma_\mathrm{N} & 0 \end{bmatrix}, \quad V = [\, V_\mathrm{S} \;\; V_\mathrm{N} \;\; V_0 \,],$$

where $U_\mathrm{S} \in \mathbb{C}^{M \times K}$ and $V_\mathrm{S} \in \mathbb{C}^{T \times K}$ contain the first $K$ columns of $U$ and $V$, respectively, and $\Sigma_\mathrm{S} \in \mathbb{C}^{K \times K}$ is a diagonal matrix comprising the $K$ largest singular values.

Thus, the SVD can be written as

$$Y = U_\mathrm{S} \Sigma_\mathrm{S} V_\mathrm{S}^* + U_\mathrm{N} \Sigma_\mathrm{N} V_\mathrm{N}^*,$$

where $U_\mathrm{S}$, $\Sigma_\mathrm{S}$, and $V_\mathrm{S}$ represent the contribution of the input signals to $Y$. We term $U_\mathrm{S}$ the signal subspace. In contrast, $U_\mathrm{N}$, $\Sigma_\mathrm{N}$, and $V_\mathrm{N}$ represent the contribution of noise to $Y$.

Hence, from the system model, we can write $AX = U_\mathrm{S} \Sigma_\mathrm{S} V_\mathrm{S}^*$ and $N = U_\mathrm{N} \Sigma_\mathrm{N} V_\mathrm{N}^*$. Also, from the former, we can write

$$U_\mathrm{S} = A F, \quad \text{where } F = X V_\mathrm{S} \Sigma_\mathrm{S}^{-1}.$$

In the sequel, it is only important that such an invertible matrix $F$ exists; its actual content is not important.

Note: The signal subspace can also be extracted from the spectral decomposition of the auto-correlation matrix of the measurements, which is estimated as

$$R_{YY} = \frac{1}{T} \sum_{t=1}^{T} \mathbf y[t]\, \mathbf y^*[t] = \frac{1}{T} Y Y^*.$$
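The following sketch (synthetic data with assumed dimensions, not from the article) extracts the signal subspace both ways: from the SVD of $Y$ as described above, and from the eigendecomposition of the sample auto-correlation matrix mentioned in the note.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, T = 8, 2, 200
A = np.exp(1j * np.outer(np.arange(M), [0.5, 1.2]))         # Vandermonde steering matrix
Y = A @ (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))
Y += 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Route 1: SVD of the data matrix; the first K left singular vectors span the signal subspace
U, s, Vh = np.linalg.svd(Y)
U_S = U[:, :K]

# Route 2: eigendecomposition of the sample auto-correlation matrix R = (1/T) Y Y^*
R = (Y @ Y.conj().T) / T
eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues returned in ascending order
U_S_alt = eigvecs[:, ::-1][:, :K]             # K dominant eigenvectors

# Up to noise, both U_S and U_S_alt span the same K-dimensional signal subspace
```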

Estimation of radial frequencies

We have established two expressions so far: $J_2 A = J_1 A H$ and $U_\mathrm{S} = A F$. Now,

$$J_2 U_\mathrm{S} = J_2 A F = J_1 A H F = J_1 U_\mathrm{S} F^{-1} H F, \quad \text{that is,} \quad S_2 = S_1 P,$$

where $S_1 = J_1 U_\mathrm{S}$ and $S_2 = J_2 U_\mathrm{S}$ denote the truncated signal subspaces, and

$$P = F^{-1} H F.$$

The above equation has the form of an eigenvalue decomposition: the eigenvalues of $P$ are the diagonal entries of $H$, and their phases are used to estimate the radial frequencies.

Thus, after solving for $P$ in the relation $S_2 = S_1 P$, we would find the eigenvalues $\lambda_1, \ldots, \lambda_K$ of $P$, where $\lambda_k = \alpha_k e^{j\omega_k}$, and the radial frequencies $\omega_1, \ldots, \omega_K$ are estimated as the phases (arguments) of the eigenvalues.

Remark: In general, $S_1$ is not invertible. One can use the least squares estimate $P = (S_1^* S_1)^{-1} S_1^* S_2$. An alternative would be the total least squares estimate.
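As a small noise-free illustration (a sketch with assumed values), one can build $S_1$ and $S_2$ directly from $U_\mathrm{S} = AF$ with an arbitrary invertible $F$, solve the least squares problem, and verify that the eigenvalue phases return the frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8
omega = np.array([0.5, 1.2])                               # assumed radial frequencies
K = omega.size

A = np.exp(1j * np.outer(np.arange(M), omega))             # Vandermonde steering matrix
F = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))  # arbitrary invertible F
U_S = A @ F                                                # plays the role of the signal subspace

S1, S2 = U_S[:-1, :], U_S[1:, :]                           # J1 U_S and J2 U_S
P, *_ = np.linalg.lstsq(S1, S2, rcond=None)                # least squares estimate of P
print(np.sort(np.angle(np.linalg.eigvals(P))))             # approximately [0.5, 1.2]
```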

Algorithm summary

Input: Measurements $Y = [\mathbf y[1], \ldots, \mathbf y[T]]$ and the number of input signals $K$ (estimate it if not already known).

  1. Compute the singular value decomposition (SVD) $Y = U \Sigma V^*$ and extract the signal subspace $U_\mathrm{S} \in \mathbb{C}^{M \times K}$ as the first $K$ columns of $U$.
  2. Compute $S_1 = J_1 U_\mathrm{S}$ and $S_2 = J_2 U_\mathrm{S}$, where $J_1 = [\, I_{M-1} \;\; \mathbf 0 \,]$ and $J_2 = [\, \mathbf 0 \;\; I_{M-1} \,]$.
  3. Solve for $P$ in $S_2 = S_1 P$ (see the remark above).
  4. Compute the eigenvalues $\lambda_1, \ldots, \lambda_K$ of $P$.
  5. The phases (arguments) of the eigenvalues provide the radial frequencies, i.e., $\omega_k = \arg \lambda_k$.
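The summary above translates into a few lines of NumPy. The sketch below (assumed dimensions, frequencies, and noise level; not code from the article) follows the five steps on synthetic data:

```python
import numpy as np

def esprit_1d(Y, K):
    """Estimate K radial frequencies from the measurement matrix Y (sensors x snapshots)."""
    U, s, Vh = np.linalg.svd(Y)                  # step 1: SVD of the measurements
    U_S = U[:, :K]                               #         signal subspace: first K columns of U
    S1, S2 = U_S[:-1, :], U_S[1:, :]             # step 2: S1 = J1 U_S, S2 = J2 U_S
    P, *_ = np.linalg.lstsq(S1, S2, rcond=None)  # step 3: least squares solve of S2 = S1 P
    eigvals = np.linalg.eigvals(P)               # step 4: eigenvalues of P
    return np.angle(eigvals)                     # step 5: phases give the radial frequencies

# Synthetic test with assumed values
rng = np.random.default_rng(0)
M, K, T = 8, 2, 200
omega = np.array([0.5, 1.2])
A = np.exp(1j * np.outer(np.arange(M), omega))
Y = A @ (rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T)))
Y += 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
print(np.sort(esprit_1d(Y, K)))                  # close to [0.5, 1.2]
```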

Notes

Choice of selection matrices

In the derivation above, the selection matrices $J_1 = [\, I_{M-1} \;\; \mathbf 0 \,]$ and $J_2 = [\, \mathbf 0 \;\; I_{M-1} \,]$ were used. However, any appropriate matrices $J_1$ and $J_2$ may be used as long as the rotational invariance, i.e., $J_2 A = J_1 A H$, or some generalization of it (see below), holds; accordingly, the matrices $J_1 A$ and $J_2 A$ may contain any rows of $A$.
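For instance (an illustrative sketch with assumed values, not from the article), $J_1$ and $J_2$ may pick any subset of rows as long as each row selected by $J_2$ sits one sensor above the corresponding row selected by $J_1$; the invariance $J_2 A = J_1 A H$ then still holds:

```python
import numpy as np

M = 8
omega = np.array([0.5, 1.2])                          # assumed radial frequencies
A = np.exp(1j * np.outer(np.arange(M), omega))        # Vandermonde steering matrix
H = np.diag(np.exp(1j * omega))

rows = np.array([0, 2, 5])                            # any rows i such that row i+1 also exists
J1 = np.eye(M)[rows, :]                               # selects rows i
J2 = np.eye(M)[rows + 1, :]                           # selects rows i+1

assert np.allclose(J2 @ A, J1 @ A @ H)                # rotational invariance still holds
```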

Generalized rotational invariance

The rotational invariance used in the derivation may be generalized. So far, the matrix $H$ has been defined to be a diagonal matrix that stores the sought-after complex exponentials on its main diagonal. However, $H$ may also exhibit some other structure.[3] For instance, it may be an upper triangular matrix. In this case, $P = F^{-1} H F$ constitutes a triangularization of $P$.

References

  1. Paulraj, A.; Roy, R.; Kailath, T. (1985). "Estimation of Signal Parameters via Rotational Invariance Techniques - ESPRIT". Nineteenth Asilomar Conference on Circuits, Systems and Computers. pp. 83–89. doi:10.1109/ACSSC.1985.671426. ISBN 978-0-8186-0729-5. S2CID 2293566.
  2. Vasylyshyn, Volodymyr (2009). "The Direction of Arrival Estimation Using ESPRIT with Sparse Arrays". Proc. 2009 European Radar Conference (EuRAD), 30 Sept.–2 Oct. 2009. pp. 246–249.
  3. Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi (2014). "An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems". IEEE Journal of Selected Topics in Signal Processing. 8 (5): 996–1011. arXiv:1403.5352. Bibcode:2014ISTSP...8..996H. doi:10.1109/JSTSP.2014.2313409. ISSN 1932-4553. S2CID 11664051.
