Jacobi operator

A Jacobi operator, also known as a Jacobi matrix, is a symmetric linear operator acting on sequences, given by an infinite tridiagonal matrix. It is commonly used to specify systems of orthonormal polynomials over a finite, positive Borel measure. This operator is named after Carl Gustav Jacob Jacobi.

The name derives from a theorem of Jacobi, dating to 1848, which states that every symmetric matrix over a principal ideal domain is congruent to a tridiagonal matrix.

Self-adjoint Jacobi operators

The most important case is the one of self-adjoint Jacobi operators acting on the Hilbert space of square-summable sequences over the positive integers, $\ell^2(\mathbb{N})$. In this case it is given by

$$ (J\psi)(n) = a(n)\psi(n+1) + b(n)\psi(n) + a(n-1)\psi(n-1), \quad n \ge 2, \qquad (J\psi)(1) = a(1)\psi(2) + b(1)\psi(1), $$

where the coefficients are assumed to satisfy

$$ a(n) > 0, \qquad b(n) \in \mathbb{R}. $$

The operator will be bounded if and only if the coefficients are bounded.
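
To make the definition concrete, here is a minimal Python sketch that builds the $n \times n$ truncation of a Jacobi matrix from coefficient sequences $a(n)$ and $b(n)$; the particular values $a(n) = 1/2$, $b(n) = 0$ are an illustrative assumption, not taken from the article.

```python
import numpy as np

def jacobi_matrix(a, b):
    """n x n truncation of a Jacobi operator: b(1), ..., b(n) on the
    diagonal and a(1), ..., a(n-1) on the sub- and superdiagonals."""
    n = len(b)
    return np.diag(b) + np.diag(a[:n - 1], 1) + np.diag(a[:n - 1], -1)

# Illustrative coefficients (an assumption): a(n) > 0 and b(n) real,
# both bounded, so the corresponding operator is bounded.
n = 8
a = np.full(n, 0.5)
b = np.zeros(n)
J = jacobi_matrix(a, b)

assert np.allclose(J, J.T)        # the truncation is symmetric
print(np.linalg.eigvalsh(J))      # real spectrum, here contained in [-1, 1]
```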

There are close connections with the theory of orthogonal polynomials. In fact, the solution $p_n(x)$ of the recurrence relation

$$ J\,p_n(x) = x\,p_n(x), \qquad p_0(x) = 1, \quad p_{-1}(x) = 0, $$

is a polynomial of degree $n$, and these polynomials are orthonormal with respect to the spectral measure corresponding to the first basis vector $\delta_1$.

This recurrence relation is also commonly written as

$$ x\,p_n(x) = a(n+1)\,p_{n+1}(x) + b(n+1)\,p_n(x) + a(n)\,p_{n-1}(x). $$

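As a hedged numerical illustration of this connection, the sketch below generates $p_0, \dots, p_N$ from the three-term recurrence and checks that the zeros of $p_N$ coincide with the eigenvalues of the $N \times N$ truncation of $J$, the observation behind Golub–Welsch-type quadrature algorithms (see [1]). The coefficients $a(n) = 1/2$, $b(n) = 0$, which generate the Chebyshev polynomials of the second kind, are an assumed example.

```python
import numpy as np

def orthonormal_polys(a, b, N):
    """Generate p_0, ..., p_N from
    x p_n(x) = a(n+1) p_{n+1}(x) + b(n+1) p_n(x) + a(n) p_{n-1}(x),
    with p_0 = 1 and p_{-1} = 0.  Arrays are 0-based (a[k] = a(k+1),
    b[k] = b(k+1)); polynomial coefficients are stored lowest degree first."""
    polys = [np.zeros(N + 1), np.zeros(N + 1)]   # placeholders for p_{-1}, p_0
    polys[1][0] = 1.0                            # p_0 = 1
    for n in range(N):
        p_n, p_nm1 = polys[-1], polys[-2]
        xp = np.roll(p_n, 1)                     # multiply p_n by x (deg p_n < N, no wraparound)
        a_n = a[n - 1] if n > 0 else 0.0         # a(n); the a(0) p_{-1} term vanishes
        polys.append((xp - b[n] * p_n - a_n * p_nm1) / a[n])   # solve for p_{n+1}
    return polys[1:]                             # [p_0, p_1, ..., p_N]

# Illustrative coefficients (an assumption): a(n) = 1/2, b(n) = 0.
N = 6
a, b = np.full(N, 0.5), np.zeros(N)
p = orthonormal_polys(a, b, N)

# The zeros of p_N equal the eigenvalues of the N x N truncated Jacobi matrix.
J_N = np.diag(b) + np.diag(a[:N - 1], 1) + np.diag(a[:N - 1], -1)
zeros = np.sort(np.roots(p[N][::-1]))            # np.roots wants highest degree first
print(np.allclose(zeros, np.sort(np.linalg.eigvalsh(J_N))))   # True
```
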
Applications

The Jacobi operator arises in many areas of mathematics and physics. The case a(n) = 1 is known as the discrete one-dimensional Schrödinger operator. It also arises, for example, in the Lax pair of the Toda lattice and in the three-term recurrences produced by the Lanczos algorithm for symmetric matrices.
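
The discrete Schrödinger case is easy to write down explicitly. The following is a minimal sketch, assuming a finite sequence that is simply truncated at its endpoints and an illustrative single-defect potential b(n); both choices are assumptions made for the example.

```python
import numpy as np

def discrete_schroedinger(psi, b):
    """Apply the a(n) = 1 Jacobi operator (the discrete one-dimensional
    Schrödinger operator) to a finite sequence, truncated at the ends:
    (J psi)(n) = psi(n+1) + b(n) psi(n) + psi(n-1)."""
    out = b * psi
    out[:-1] += psi[1:]    # the psi(n+1) term
    out[1:] += psi[:-1]    # the psi(n-1) term
    return out

# Illustrative potential (an assumption): a single-site defect.
b = np.zeros(10)
b[4] = -2.0
psi = np.ones(10)
print(discrete_schroedinger(psi, b))
```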

Generalizations

When one considers the Bergman space, namely the space of square-integrable holomorphic functions over some domain, then, under fairly general circumstances, that space admits a basis of orthogonal polynomials, the Bergman polynomials. In this case, the analog of the tridiagonal Jacobi operator is a Hessenberg operator – an infinite-dimensional Hessenberg matrix. The system of orthogonal polynomials is given by

$$ z\,p_n(z) = \sum_{k=0}^{n+1} D_{kn}\, p_k(z) $$

and $p_0(z) = 1$. Here, $D$ is the Hessenberg operator that generalizes the tridiagonal Jacobi operator $J$ to this situation. [2][3][4] Note that $D$ is the right-shift operator on the Bergman space, that is, it is given by

$$ (Df)(z) = z\,f(z). $$

The zeros of the Bergman polynomial $p_n(z)$ correspond to the eigenvalues of the $n \times n$ principal submatrix of $D$. That is, the Bergman polynomials are the characteristic polynomials of the principal submatrices of the shift operator.
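
As a small numerical sketch of this correspondence, the following assumes the classical example of the unit disk, where the Bergman shift $D$ has the single nonzero subdiagonal $D_{n+1,n} = \sqrt{(n+1)/(n+2)}$; in this degenerate case the polynomials produced by the column recurrence are multiples of $z^n$, so all of their zeros sit at the origin, matching the eigenvalues of the nilpotent principal submatrices.

```python
import numpy as np

# Bergman shift for the unit disk (an assumed, classical example): in the
# orthonormal basis e_n(z) ~ z^n one has z e_n(z) = sqrt((n+1)/(n+2)) e_{n+1}(z),
# so the Hessenberg matrix D has a single nonzero subdiagonal.
N = 6
D = np.zeros((N + 1, N + 1))
for n in range(N):
    D[n + 1, n] = np.sqrt((n + 1) / (n + 2))

# Polynomials from the column recurrence z p_n(z) = sum_k D[k, n] p_k(z),
# with p_0 = 1 (coefficients stored lowest degree first).
polys = [np.zeros(N + 1)]
polys[0][0] = 1.0
for n in range(N):
    zp = np.roll(polys[n], 1)                                # multiply p_n by z
    rhs = zp - sum(D[k, n] * polys[k] for k in range(n + 1))
    polys.append(rhs / D[n + 1, n])                          # solve for p_{n+1}

# Zeros of p_N versus eigenvalues of the N x N principal submatrix of D:
# for the disk both are all zero, since p_n is a multiple of z^n.
zeros = np.roots(polys[N][::-1])
print(np.allclose(np.sort(zeros), np.sort(np.linalg.eigvals(D[:N, :N]))))
```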

See also

References

  1. Meurant, Gérard; Sommariva, Alvise (2014). "Fast variants of the Golub and Welsch algorithm for symmetric weight functions in Matlab". Numerical Algorithms. 67 (3): 491–506. doi:10.1007/s11075-013-9804-x. S2CID 7385259.
  2. Tomeo, V.; Torrano, E. (2011). "Two applications of the subnormality of the Hessenberg matrix related to general orthogonal polynomials". Linear Algebra and Its Applications. 435 (9): 2314–2320. doi:10.1016/j.laa.2011.04.027.
  3. Saff, Edward B.; Stylianopoulos, Nikos (2012). "Asymptotics for Hessenberg matrices for the Bergman shift operator on Jordan regions". arXiv:1205.4183.
  4. Escribano, Carmen; Giraldo, Antonio; Asunción Sastre, M.; Torrano, Emilio (2011). "The Hessenberg matrix and the Riemann mapping". arXiv:1107.6036.