Generalized pencil-of-function method

[Figure: Extraction of two sinusoids from noisy data through the GPOF method]

The generalized pencil-of-function method (GPOF), also known as the matrix pencil method, is a signal processing technique for estimating a signal or extracting information by modeling it as a sum of complex exponentials. It is similar to Prony's method and the original pencil-of-function method, but is generally preferred over both for its robustness and computational efficiency. [1]

History

The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems from their transient response, building on Sarkar's earlier work on the original pencil-of-function method. [1] [2] The method has a plethora of applications in electrical engineering, particularly for problems in computational electromagnetics, microwave engineering and antenna theory. [1]

Method

Mathematical basis

A transient electromagnetic signal can be represented as: [3]

$$y(t) = x(t) + n(t) \approx \sum_{i=1}^{M} R_i e^{s_i t} + n(t), \qquad 0 \le t \le T,$$

where

$y(t)$ is the observed time-domain signal,
$n(t)$ is the signal noise,
$x(t)$ is the actual signal,
$R_i$ are the residues (complex amplitudes),
$s_i$ are the poles of the system, defined as $s_i = -\alpha_i + j\omega_i$ by the identities of the Z-transform,
$\alpha_i$ are the damping factors and
$\omega_i$ are the angular frequencies.

The same sequence, sampled with a period $T_s$, can be written as the following:

$$y[kT_s] = x[kT_s] + n[kT_s] \approx \sum_{i=1}^{M} R_i z_i^k + n[kT_s], \qquad k = 0, 1, \dots, N-1,$$

where $z_i = e^{s_i T_s}$. The generalized pencil-of-function method estimates the optimal $M$ and the $z_i$'s. [4]
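As an illustration, the sampled model above can be generated numerically. This is only a sketch: the parameters below (two damped sinusoids, sampling period, noise level) are arbitrary choices for the example, not values prescribed by the method.

```python
import numpy as np

# Arbitrary example parameters: M = 2 damped sinusoids.
Ts = 0.01                                   # sampling period T_s
N = 200                                     # number of samples
R = np.array([1.0, 0.5])                    # residues R_i
s = np.array([-0.5 + 2j * np.pi * 10,       # poles s_i = -alpha_i + j*omega_i
              -1.0 + 2j * np.pi * 25])
z = np.exp(s * Ts)                          # z_i = e^{s_i T_s}

k = np.arange(N)
x = (z[None, :] ** k[:, None]) @ R          # x[k] = sum_i R_i z_i^k
rng = np.random.default_rng(0)
y = x + 0.01 * rng.standard_normal(N)       # observed noisy sequence y[k]
```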

Noise-free analysis

For the noiseless case, two $(N-L) \times L$ Hankel matrices, $Y_1$ and $Y_2$, are produced: [3]

$$Y_1 = \begin{bmatrix} y(0) & y(1) & \cdots & y(L-1) \\ y(1) & y(2) & \cdots & y(L) \\ \vdots & & & \vdots \\ y(N-L-1) & y(N-L) & \cdots & y(N-2) \end{bmatrix}, \qquad Y_2 = \begin{bmatrix} y(1) & y(2) & \cdots & y(L) \\ y(2) & y(3) & \cdots & y(L+1) \\ \vdots & & & \vdots \\ y(N-L) & y(N-L+1) & \cdots & y(N-1) \end{bmatrix},$$

where $L$ is defined as the pencil parameter. $Y_1$ and $Y_2$ can be decomposed into the following matrices: [3]

$$Y_1 = Z_1 R Z_2, \qquad Y_2 = Z_1 R Z_0 Z_2,$$

where

$$Z_1 = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_M \\ \vdots & & & \vdots \\ z_1^{N-L-1} & z_2^{N-L-1} & \cdots & z_M^{N-L-1} \end{bmatrix}, \qquad Z_2 = \begin{bmatrix} 1 & z_1 & \cdots & z_1^{L-1} \\ 1 & z_2 & \cdots & z_2^{L-1} \\ \vdots & & & \vdots \\ 1 & z_M & \cdots & z_M^{L-1} \end{bmatrix},$$

and $Z_0 = \operatorname{diag}(z_1, \dots, z_M)$ and $R = \operatorname{diag}(R_1, \dots, R_M)$ are diagonal matrices with sequentially-placed $z_i$ and $R_i$ values, respectively. [3]

If $M \le L \le N - M$, the generalized eigenvalues of the matrix pencil

$$Y_2 - \lambda Y_1$$

yield the poles of the system, which are the $z_i$. Then, the generalized eigenvectors $p_i$ can be obtained by the following identities: [3]

$$Y_1^+ Y_1 p_i = p_i,$$
$$Y_1^+ Y_2 p_i = z_i p_i, \qquad i = 1, \dots, M,$$

where $Y_1^+$ denotes the Moore–Penrose inverse of $Y_1$, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
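A minimal numerical sketch of the noise-free procedure (the helper name is illustrative, not from the cited papers): the poles $z_i$ appear as the nonzero eigenvalues of $Y_1^+ Y_2$, which are exactly the generalized eigenvalues of the pencil $Y_2 - \lambda Y_1$.

```python
import numpy as np

def pencil_poles_noiseless(y, M, L):
    """Return the M poles z_i of a noise-free sequence y[k] = sum_i R_i z_i^k.

    Y1 and Y2 are the shifted Hankel matrices defined above; the nonzero
    eigenvalues of pinv(Y1) @ Y2 equal the poles z_i (valid for M <= L <= N - M).
    """
    N = len(y)
    Y1 = np.array([y[i:i + L] for i in range(N - L)])
    Y2 = np.array([y[i + 1:i + L + 1] for i in range(N - L)])
    ev = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    return ev[np.argsort(-np.abs(ev))[:M]]   # keep the M dominant eigenvalues

# Example: two undamped sinusoids sampled at Ts = 0.01 s
Ts, k = 0.01, np.arange(60)
z_true = np.exp(np.array([2j * np.pi * 5, 2j * np.pi * 12]) * Ts)
y = (z_true[None, :] ** k[:, None]).sum(axis=1)
z_est = pencil_poles_noiseless(y, M=2, L=20)
```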

Noise filtering

If noise is present in the system, $Y_1$ and $Y_2$ are combined in a general data matrix, $Y$: [3]

$$Y = \begin{bmatrix} y(0) & y(1) & \cdots & y(L) \\ y(1) & y(2) & \cdots & y(L+1) \\ \vdots & & & \vdots \\ y(N-L-1) & y(N-L) & \cdots & y(N-1) \end{bmatrix},$$

where $y$ is the noisy data. For efficient filtering, $L$ is chosen between $N/3$ and $N/2$. A singular value decomposition on $Y$ yields:

$$Y = U \Sigma V^H.$$

In this decomposition, $U$ and $V$ are unitary matrices whose columns are the eigenvectors of $Y Y^H$ and $Y^H Y$, respectively, and $\Sigma$ is a diagonal matrix with the singular values of $Y$. The superscript $H$ denotes the conjugate transpose. [3] [4]

Then the parameter $M$ is chosen for filtering. Singular values after $\sigma_M$, which are below the filtering threshold, are set to zero; for an arbitrary singular value $\sigma_c$, the threshold is denoted by the following formula: [1]

$$\frac{\sigma_c}{\sigma_{\max}} \approx 10^{-p},$$

where $\sigma_{\max}$ and $p$ are the maximum singular value and the number of significant decimal digits, respectively. For data accurate up to $p$ significant digits, singular values below $\sigma_{\max} \cdot 10^{-p}$ are considered noise. [4]

$V_1'$ and $V_2'$ are obtained by removing the last and the first row of the filtered matrix $V'$, respectively; the columns of $V'$ are the $M$ dominant right singular vectors of $Y$. The filtered $Y_1$ and $Y_2$ matrices are obtained as: [4]

$$Y_1 = U \Sigma' V_1'^H, \qquad Y_2 = U \Sigma' V_2'^H,$$

where $\Sigma'$ contains the $M$ dominant singular values.
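The filtering steps above can be sketched as follows. This is a hypothetical helper under the stated conventions, not a reference implementation; note that `numpy.linalg.svd` returns $V^H$ directly, and that the poles reduce to the eigenvalues of $V_1'^+ V_2'$ once the common factor $U\Sigma'$ cancels.

```python
import numpy as np

def gpof_poles(y, M, L=None):
    """Estimate M poles z_i from noisy samples via SVD-based filtering.

    Builds the (N-L) x (L+1) Hankel matrix Y, keeps the M dominant right
    singular vectors, forms V1'/V2' by deleting the last/first row of the
    truncated matrix, and returns the eigenvalues of pinv(V1') @ V2'.
    """
    N = len(y)
    if L is None:
        L = N // 3                          # pencil parameter, N/3 <= L <= N/2
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh[:M].T                            # (L+1) x M, spans the signal row space
    V1, V2 = V[:-1], V[1:]                  # delete last / first row
    return np.linalg.eigvals(np.linalg.pinv(V1) @ V2)
```

With the poles in hand, the damping factors and angular frequencies follow from $s_i = \ln(z_i)/T_s$.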

Prefiltering can be used to combat noise and enhance the signal-to-noise ratio (SNR). [1] The band-pass matrix pencil (BPMP) method is a modification of GPOF that incorporates FIR or IIR band-pass filters. [1] [5]

GPOF remains accurate for SNRs down to about 25 dB. For GPOF, as well as for BPMP, the variance of the estimates approximately reaches the Cramér–Rao bound. [3] [5] [4]

Calculation of residues

The residues of the complex poles are obtained through the least-squares problem: [1]

$$\begin{bmatrix} y(0) \\ y(1) \\ \vdots \\ y(N-1) \end{bmatrix} = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ z_1 & z_2 & \cdots & z_M \\ \vdots & & & \vdots \\ z_1^{N-1} & z_2^{N-1} & \cdots & z_M^{N-1} \end{bmatrix} \begin{bmatrix} R_1 \\ R_2 \\ \vdots \\ R_M \end{bmatrix}.$$
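This least-squares step amounts to solving an overdetermined Vandermonde system. A short sketch (the helper name is illustrative):

```python
import numpy as np

def pencil_residues(y, z):
    """Least-squares solve y[k] = sum_i R_i z_i^k for the residues R_i."""
    k = np.arange(len(y))
    Z = z[None, :] ** k[:, None]            # N x M Vandermonde matrix in z_i
    R, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return R

# Recover known residues from a clean two-pole signal
z = np.array([0.95 * np.exp(2j * np.pi * 0.05),
              0.90 * np.exp(2j * np.pi * 0.12)])
y = (z[None, :] ** np.arange(50)[:, None]) @ np.array([1.0, 0.5])
R_est = pencil_residues(y, z)
```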

Applications

The method is generally used for the closed-form evaluation of Sommerfeld integrals in the discrete complex image method for method-of-moments applications, where the spectral Green's function is approximated as a sum of complex exponentials. [1] [6] Additionally, the method is used in antenna analysis, S-parameter estimation in microwave integrated circuits, wave propagation analysis, moving target indication, radar signal processing, [1] [7] [8] and series acceleration in electromagnetic problems. [9]


References

  1. Sarkar, T. K.; Pereira, O. (February 1995). "Using the matrix pencil method to estimate the parameters of a sum of complex exponentials". IEEE Antennas and Propagation Magazine. 37 (1): 48–55. Bibcode:1995IAPM...37...48S. doi:10.1109/74.370583.
  2. Sarkar, T.; Nebat, J.; Weiner, D.; Jain, V. (November 1980). "Suboptimal approximation/identification of transient waveforms from electromagnetic systems by pencil-of-function method". IEEE Transactions on Antennas and Propagation. 28 (6): 928–933. Bibcode:1980ITAP...28..928S. doi:10.1109/TAP.1980.1142411.
  3. Hua, Y.; Sarkar, T. K. (February 1989). "Generalized pencil-of-function method for extracting poles of an EM system from its transient response". IEEE Transactions on Antennas and Propagation. 37 (2): 229–234. Bibcode:1989ITAP...37..229H. doi:10.1109/8.18710.
  4. Hua, Y.; Sarkar, T. K. (May 1990). "Matrix pencil method for estimating parameters of exponentially damped/undamped sinusoids in noise". IEEE Transactions on Acoustics, Speech, and Signal Processing. 38 (5): 814–824. doi:10.1109/29.56027.
  5. Hu, Fengduo; Sarkar, T. K.; Hua, Yingbo (January 1993). "Utilization of Bandpass Filtering for the Matrix Pencil Method". IEEE Transactions on Signal Processing. 41 (1): 442–446. Bibcode:1993ITSP...41..442H. doi:10.1109/TSP.1993.193174.
  6. Dural, G.; Aksun, M. I. (July 1995). "Closed-form Green's functions for general sources and stratified media". IEEE Transactions on Microwave Theory and Techniques. 43 (7): 1545–1552. Bibcode:1995ITMTT..43.1545D. doi:10.1109/22.392913. hdl:11693/10756.
  7. Kahrizi, M.; Sarkar, T. K.; Maricevic, Z. A. (January 1994). "Analysis of a wide radiating slot in the ground plane of a microstrip line". IEEE Transactions on Microwave Theory and Techniques. 41 (1): 29–37. doi:10.1109/22.210226.
  8. Hua, Y. (January 1994). "High resolution imaging of continuously moving object using stepped frequency radar". Signal Processing. 35 (1): 33–40. doi:10.1016/0165-1684(94)90188-0.
  9. Karabulut, E. Pınar; Ertürk, Vakur B.; Alatan, Lale; Karan, S.; Alişan, Burak; Aksun, M. I. (2016). "A novel approach for the efficient computation of 1-D and 2-D summations". IEEE Transactions on Antennas and Propagation. 64 (3): 1014–1022. Bibcode:2016ITAP...64.1014K. doi:10.1109/TAP.2016.2521860.