Hankel matrix


In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is an $n \times m$ matrix in which each ascending skew-diagonal from left to right is constant. For example,

$$\begin{bmatrix}
a & b & c & d & e \\
b & c & d & e & f \\
c & d & e & f & g \\
d & e & f & g & h \\
e & f & g & h & i
\end{bmatrix}.$$


More generally, a Hankel matrix is any $n \times m$ matrix $A$ of the form

$$A = \begin{bmatrix}
a_0 & a_1 & a_2 & \cdots & a_{m-1} \\
a_1 & a_2 & a_3 & \cdots & a_m \\
a_2 & a_3 & a_4 & \cdots & a_{m+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n-1} & a_n & a_{n+1} & \cdots & a_{n+m-2}
\end{bmatrix}.$$

In terms of the components, if the $(i, j)$ element of $A$ is denoted $A_{i,j}$, and assuming $i \le j$, then we have $A_{i,j} = A_{i+k,\,j-k}$ for all $k = 0, 1, \ldots, j - i$.
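
A minimal sketch in Python of the definition above (the sequence used here is illustrative): it builds an $n \times m$ Hankel matrix from a sequence $a_0, \ldots, a_{n+m-2}$ and checks that each entry depends only on $i + j$, i.e. that every ascending skew-diagonal is constant.

```python
import numpy as np

# Build an n x m Hankel matrix H with H[i, j] = a[i + j] (illustrative data).
n, m = 4, 5
a = np.arange(n + m - 1)
H = np.array([[a[i + j] for j in range(m)] for i in range(n)])
print(H)

# Check the component identity A_{i,j} = A_{i+k, j-k} wherever both entries exist.
assert all(H[i, j] == H[i + k, j - k]
           for i in range(n) for j in range(m)
           for k in range(min(n - 1 - i, j) + 1))
```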

Properties
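
Two properties that follow directly from the definition can be checked numerically (a small sketch with illustrative data): a square Hankel matrix $[a_{i+j}]$ is symmetric, and reversing the order of its rows yields a Toeplitz matrix, since the reversed entries depend only on $j - i$.

```python
import numpy as np

# A square Hankel matrix is symmetric, and flipping its rows gives a Toeplitz
# matrix (constant descending diagonals). Illustrative check:
a = np.arange(2 * 4 - 1)
H = np.array([[a[i + j] for j in range(4)] for i in range(4)])

assert np.array_equal(H, H.T)              # symmetric
T = H[::-1, :]                             # rows in reversed order
assert all(T[i, j] == T[i + 1, j + 1]      # constant descending diagonals
           for i in range(3) for j in range(3))
```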

Hankel operator

Given a formal Laurent series

$$f(z) = \sum_{n=-\infty}^{N} a_n z^n,$$

the corresponding Hankel operator is defined as [2]

$$H_f : \mathbf{C}[z] \to z^{-1}\,\mathbf{C}[[z^{-1}]].$$

This takes a polynomial $g \in \mathbf{C}[z]$ and sends it to the product $fg$, but discards all powers of $z$ with a non-negative exponent, so as to give an element in $z^{-1}\,\mathbf{C}[[z^{-1}]]$, the formal power series with strictly negative exponents. The map $H_f$ is in a natural way $\mathbf{C}[z]$-linear, and its matrix with respect to the elements $1, z, z^2, \ldots \in \mathbf{C}[z]$ and $z^{-1}, z^{-2}, \ldots \in z^{-1}\,\mathbf{C}[[z^{-1}]]$ is the Hankel matrix

$$\begin{bmatrix}
a_{-1} & a_{-2} & a_{-3} & \cdots \\
a_{-2} & a_{-3} & a_{-4} & \cdots \\
a_{-3} & a_{-4} & a_{-5} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}.$$

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if $f$ is a rational function, that is, a fraction of two polynomials $f = p(z)/q(z)$.
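
Kronecker's theorem can be illustrated numerically (a sketch; the sequence and sizes are illustrative, not taken from the sources above). The coefficients $c_k = 0.5^k + 0.3^k$ come from a rational generating function whose denominator has degree two, so the Hankel matrix $[c_{i+j}]$ has rank 2, whereas a generic sequence gives full rank.

```python
import numpy as np

# Coefficients of a rational generating function (denominator of degree 2):
# c_k = 0.5**k + 0.3**k satisfies a fixed two-term linear recurrence.
c = np.array([0.5**k + 0.3**k for k in range(20)])

N = 10
H = np.array([[c[i + j] for j in range(N)] for i in range(N)])
print(np.linalg.matrix_rank(H, tol=1e-8))   # 2

# A generic sequence, by contrast, gives a full-rank Hankel matrix.
b = np.random.default_rng(0).random(20)
Hb = np.array([[b[i + j] for j in range(N)] for i in range(N)])
print(np.linalg.matrix_rank(Hb))            # typically 10
```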

Approximations

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation itself be a Hankel matrix; AAK theory shows that the optimal rank-constrained approximation in the operator norm can in fact be chosen to be a Hankel operator.
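
The following is a minimal finite-dimensional sketch (not the AAK construction itself, and with illustrative data): a noisy Hankel matrix is approximated by truncating its SVD, and the Hankel structure is then restored by the common heuristic of averaging each anti-diagonal, as in Cadzow/SSA-style denoising.

```python
import numpy as np

def truncated_svd(H, r):
    """Best rank-r approximation in the spectral norm (generally not Hankel)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def rehankelize(M):
    """Project onto Hankel matrices by averaging each anti-diagonal."""
    n, m = M.shape
    out = np.empty_like(M)
    for k in range(n + m - 1):
        idx = [(i, k - i) for i in range(max(0, k - m + 1), min(n, k + 1))]
        avg = np.mean([M[i, j] for i, j in idx])
        for i, j in idx:
            out[i, j] = avg
    return out

# Hankel matrix of a rank-2 signal plus a little noise (illustrative).
t = np.arange(16)
signal = 0.9**t + 0.6**t + 0.01 * np.random.default_rng(1).standard_normal(16)
H = np.array([[signal[i + j] for j in range(8)] for i in range(9)])

H2 = rehankelize(truncated_svd(H, 2))   # approximately rank 2, exactly Hankel
print(np.linalg.norm(H - H2, 2))        # small spectral-norm error
```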

Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, of a sequence $b_k$ is the sequence of the determinants of the Hankel matrices formed from $b_k$. Given an integer $n > 0$, define the corresponding $(n \times n)$-dimensional Hankel matrix $B_n$ as having the matrix elements $[B_n]_{i,j} = b_{i+j}$. Then the sequence $h_n$ given by

$$h_n = \det B_n$$

is the Hankel transform of the sequence $b_k$. The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

$$c_n = \sum_{k=0}^{n} \binom{n}{k} b_k$$

as the binomial transform of the sequence $b_n$, then one has $\det B_n = \det C_n$, where $C_n$ is the Hankel matrix formed from the sequence $c_k$.
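
A small, self-contained check of the two statements above (a sketch; the sequences and sizes are illustrative): the Hankel transform of the Catalan numbers is the all-ones sequence, and applying the binomial transform first leaves the Hankel transform unchanged.

```python
from math import comb

def int_det(M):
    """Exact determinant by Laplace expansion (fine for the small sizes used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               int_det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hankel_transform(b, nmax):
    """h_n = det [b_{i+j}] for the n x n Hankel matrices, n = 1..nmax."""
    return [int_det([[b[i + j] for j in range(n)] for i in range(n)])
            for n in range(1, nmax + 1)]

catalan = [comb(2 * n, n) // (n + 1) for n in range(12)]
binom_tf = [sum(comb(n, k) * catalan[k] for k in range(n + 1)) for n in range(12)]

print(hankel_transform(catalan, 5))    # [1, 1, 1, 1, 1]
print(hankel_transform(binom_tf, 5))   # [1, 1, 1, 1, 1] -- same Hankel transform
```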

Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired. [3] The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization. [4] The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
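
A minimal sketch of one standard SVD-based realization procedure along these lines (a Ho-Kalman-style construction for a SISO system; the function name, dimensions, and test system are illustrative, not taken from [3] or [4]):

```python
import numpy as np

def realize_from_markov(h, order):
    """Recover (A, B, C) of the given order from scalar Markov parameters h[k] = C A^k B."""
    rows = len(h) // 2
    cols = len(h) - rows
    # Hankel matrix of the impulse response: H[i, j] = h[i + j] = (C A^i)(A^j B)
    H = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    sqrt_s = np.sqrt(s)
    O = U * sqrt_s                      # observability-like factor (rows C A^i)
    R = (Vt.T * sqrt_s).T               # reachability-like factor (columns A^j B)
    C = O[0, :]
    B = R[:, 0]
    A = np.linalg.pinv(O[:-1, :]) @ O[1:, :]   # shift invariance of O gives A
    return A, B, C

# Markov parameters of a known second-order system (illustrative test data).
A0 = np.array([[0.9, 0.2], [0.0, 0.5]])
B0 = np.array([1.0, 1.0])
C0 = np.array([1.0, 0.0])
h = [C0 @ np.linalg.matrix_power(A0, k) @ B0 for k in range(20)]

A, B, C = realize_from_markov(h, order=2)
h_hat = [C @ np.linalg.matrix_power(A, k) @ B for k in range(20)]
print(np.allclose(h, h_hat))            # True: same impulse response, up to similarity
```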

Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation. [5]
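
As a simplified illustration of the idea (not the exact formulation of [5]; the degree, support, and data below are assumptions for the sketch): for a polynomial density $f(x) = \sum_j \alpha_j x^j$ on $[0, 1]$, the moments satisfy $m_i = \sum_j \alpha_j / (i + j + 1)$, so the coefficients are obtained by inverting a Hankel matrix, here the Hilbert matrix.

```python
import numpy as np

d = 3                                         # polynomial degree (illustrative)
# Hankel (Hilbert) matrix linking moments to coefficients: H[i, j] = 1 / (i + j + 1).
H = np.array([[1.0 / (i + j + 1) for j in range(d + 1)] for i in range(d + 1)])

# Moments of a known density f(x) = 6x(1 - x) on [0, 1], padded to degree d.
alpha_true = np.array([0.0, 6.0, -6.0, 0.0])
m = H @ alpha_true                            # in practice, estimated from data

alpha = np.linalg.solve(H, m)                 # invert the Hankel system
print(np.allclose(alpha, alpha_true))         # True
```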

Positive Hankel matrices and the Hamburger moment problems


Notes

  1. Yasuda, M. (2003). "A Spectral Characterization of Hermitian Centrosymmetric and Hermitian Skew-Centrosymmetric K-Matrices". SIAM J. Matrix Anal. Appl. 25 (3): 601–605. doi:10.1137/S0895479802418835.
  2. Fuhrmann 2012, §8.3
  3. Aoki, Masanao (1983). "Prediction of Time Series". Notes on Economic Time Series Analysis: System Theoretic Perspectives. New York: Springer. pp. 38–47. ISBN 0-387-12696-1.
  4. Aoki, Masanao (1983). "Rank determination of Hankel matrices". Notes on Economic Time Series Analysis: System Theoretic Perspectives. New York: Springer. pp. 67–68. ISBN 0-387-12696-1.
  5. Munkhammar, J.; Mattsson, L.; Rydén, J. (2017). "Polynomial probability distribution estimation using the method of moments". PLoS ONE. 12 (4): e0174573. doi:10.1371/journal.pone.0174573.

