Jacobi operator

A Jacobi operator, also known as a Jacobi matrix, is a symmetric linear operator acting on sequences that is given by an infinite tridiagonal matrix. It is commonly used to specify systems of orthonormal polynomials with respect to a finite, positive Borel measure. This operator is named after Carl Gustav Jacob Jacobi.

The name derives from a theorem of Jacobi, dating to 1848, stating that every symmetric matrix over a principal ideal domain is congruent to a tridiagonal matrix.

Self-adjoint Jacobi operators

The most important case is the one of self-adjoint Jacobi operators acting on the Hilbert space $\ell^2(\mathbb{N})$ of square summable sequences over the positive integers. In this case it is given by

$$(J\psi)(n) = a(n)\psi(n+1) + b(n)\psi(n) + a(n-1)\psi(n-1), \qquad (J\psi)(1) = a(1)\psi(2) + b(1)\psi(1),$$

where the coefficients are assumed to satisfy

$$a(n) > 0, \qquad b(n) \in \mathbb{R}.$$

The operator will be bounded if and only if the coefficients are bounded.
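
To make the definition concrete, here is a minimal numerical sketch (ours, not from the article's sources; the helper name `jacobi_matrix` and the sample coefficients are illustrative assumptions). It assembles the N × N leading principal submatrix of J from the coefficient sequences and confirms that it is symmetric with real spectrum.

```python
import numpy as np

def jacobi_matrix(a, b):
    """N x N truncation of a Jacobi operator.

    b : length-N array of diagonal entries b(1), ..., b(N) (real)
    a : length-(N-1) array of off-diagonal entries a(1), ..., a(N-1) (positive)
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# Bounded example: a(n) = 1, b(n) = (-1)^n
N = 6
J = jacobi_matrix(np.ones(N - 1), [(-1.0) ** n for n in range(1, N + 1)])
assert np.allclose(J, J.T)        # symmetric, as required
print(np.linalg.eigvalsh(J))      # real eigenvalues of the truncation
```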

There are close connections with the theory of orthogonal polynomials. In fact, the solution $p(x,n)$ of the recurrence relation

$$(J p(x,\cdot))(n) = x\, p(x,n), \qquad p(x,0) = 0,\quad p(x,1) = 1,$$

is a polynomial of degree $n-1$, and these polynomials are orthonormal with respect to the spectral measure corresponding to the first basis vector $\delta_1$.

This recurrence relation is also commonly written as (with $p_n(x) = p(x,n+1)$)

$$x\, p_n(x) = a(n+1)\, p_{n+1}(x) + b(n+1)\, p_n(x) + a(n)\, p_{n-1}(x), \qquad p_0(x) = 1,\quad p_{-1}(x) = 0.$$

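As an illustration of this connection (a sketch of ours, not code from the article; cf. the Golub–Welsch approach to Gaussian quadrature, e.g. [1]): the zeros of $p_N$ coincide with the eigenvalues of the N × N truncation of J. The coefficients below, a(1) = 1/√2, a(n) = 1/2 for n ≥ 2 and b(n) = 0, are those of the Chebyshev weight on [−1, 1], so the truncation's eigenvalues are the Chebyshev nodes.

```python
import numpy as np

def eval_polys(x, a, b, N):
    """Evaluate p_0, ..., p_N at x via the three-term recurrence
    x p_n = a(n+1) p_{n+1} + b(n+1) p_n + a(n) p_{n-1},  p_0 = 1, p_{-1} = 0.
    Arrays are 0-based: a[k] = a(k+1), b[k] = b(k+1)."""
    p = [np.ones_like(x), (x - b[0]) / a[0]]
    for n in range(1, N):
        p.append(((x - b[n]) * p[n] - a[n - 1] * p[n - 1]) / a[n])
    return p

# Jacobi coefficients of the Chebyshev weight on [-1, 1]
N = 8
a = np.array([1 / np.sqrt(2)] + [0.5] * (N - 1))   # a(1), ..., a(N)
b = np.zeros(N)                                    # b(1), ..., b(N)

# Eigenvalues of the N x N truncation of J ...
J = np.diag(b) + np.diag(a[:-1], 1) + np.diag(a[:-1], -1)
nodes = np.linalg.eigvalsh(J)

# ... are exactly the zeros of p_N (here the Chebyshev nodes cos((2k-1)pi/(2N)))
p = eval_polys(nodes, a, b, N)
print(np.max(np.abs(p[N])))                        # ~ 0 up to rounding
```
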
Applications

It arises in many areas of mathematics and physics. The case a(n) = 1 is known as the discrete one-dimensional Schrödinger operator. It also arises in the Lax pair of the Toda lattice.
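
A minimal sketch of the a(n) = 1 case (the truncation size and the sample potential b(n) = V(n) below are our illustrative choices, not taken from the article):

```python
import numpy as np

# Discrete one-dimensional Schrödinger operator: a(n) = 1, b(n) = V(n).
# Truncate to N x N; V is an arbitrary sample (periodic) potential.
N = 50
n = np.arange(1, N + 1)
V = 2.0 * np.cos(2 * np.pi * n / 5)                    # sample potential
H = np.diag(V) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
print(np.linalg.eigvalsh(H)[:5])                       # a few low-lying eigenvalues
```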

Generalizations

When one considers Bergman space, namely the space of square-integrable holomorphic functions over some domain, then, under general circumstances, one can give that space a basis of orthogonal polynomials, the Bergman polynomials. In this case, the analog of the tridiagonal Jacobi operator is a Hessenberg operator – an infinite-dimensional Hessenberg matrix. The system of orthogonal polynomials is given by

$$z\, p_n(z) = \sum_{k=0}^{n+1} D_{kn}\, p_k(z)$$

and $p_0(z) = 1$. Here, D is the Hessenberg operator that generalizes the tridiagonal Jacobi operator J for this situation.[2][3][4] Note that D is the right-shift operator on the Bergman space: that is, it is given by

$$(Df)(z) = z\, f(z).$$
The zeros of the Bergman polynomial $p_n$ correspond to the eigenvalues of the $n \times n$ principal submatrix of D. That is, the Bergman polynomials are the characteristic polynomials of the principal submatrices of the shift operator.
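
A deliberately simple check of this statement (our sketch, assuming the standard fact that the Bergman polynomials of the unit disk are the normalized monomials $p_n(z) = \sqrt{(n+1)/\pi}\, z^n$, so that D reduces to a weighted subdiagonal shift):

```python
import numpy as np

# Unit disk: p_n(z) = sqrt((n+1)/pi) z^n, hence z p_n = sqrt((n+1)/(n+2)) p_{n+1},
# i.e. the Hessenberg matrix D has a single nonzero subdiagonal D[n+1, n].
N = 6
D = np.zeros((N, N))
for n in range(N - 1):
    D[n + 1, n] = np.sqrt((n + 1) / (n + 2))

# The principal N x N submatrix is strictly lower triangular, so its eigenvalues
# are all 0 -- matching the N-fold zero of p_N(z) ~ z^N at the origin.
print(np.linalg.eigvals(D))
```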

References

  1. Meurant, Gérard; Sommariva, Alvise (2014). "Fast variants of the Golub and Welsch algorithm for symmetric weight functions in Matlab". Numerical Algorithms. 67 (3): 491–506. doi:10.1007/s11075-013-9804-x. S2CID 7385259.
  2. Tomeo, V.; Torrano, E. (2011). "Two applications of the subnormality of the Hessenberg matrix related to general orthogonal polynomials". Linear Algebra and Its Applications. 435 (9): 2314–2320. doi:10.1016/j.laa.2011.04.027.
  3. Saff, Edward B.; Stylianopoulos, Nikos (2014). "Asymptotics for Hessenberg matrices for the Bergman shift operator on Jordan regions". Complex Analysis and Operator Theory. 8 (1): 1–24. arXiv:1205.4183. doi:10.1007/s11785-012-0252-8. MR 3147709.
  4. Escribano, Carmen; Giraldo, Antonio; Sastre, M. Asunción; Torrano, Emilio (2013). "The Hessenberg matrix and the Riemann mapping function". Advances in Computational Mathematics. 39 (3–4): 525–545. arXiv:1107.6036. doi:10.1007/s10444-012-9291-y. MR 3116040.