Operator theory

In mathematics, operator theory is the study of linear operators on function spaces, beginning with differential operators and integral operators. The operators may be presented abstractly by their characteristics, such as bounded linear operators or closed operators, and consideration may be given to nonlinear operators. The study, which depends heavily on the topology of function spaces, is a branch of functional analysis.

If a collection of operators forms an algebra over a field, then it is an operator algebra. The description of operator algebras is part of operator theory.

Single operator theory

Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification of normal operators in terms of their spectra falls into this category.

Spectrum of operators

The spectral theorem is any of a number of results about linear operators or about matrices. [1] In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.

Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.

The spectral theorem also provides a canonical decomposition, called the spectral decomposition, eigenvalue decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.
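In the finite-dimensional case the theorem can be checked numerically. The following NumPy sketch (an illustration only, not part of the theory) computes the spectral decomposition of a Hermitian matrix, the simplest class to which the theorem applies:

```python
import numpy as np

# A random Hermitian matrix (a self-adjoint operator on C^4).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2

# Spectral decomposition: A = U diag(w) U*, with real eigenvalues w
# and orthonormal eigenvector columns in U.
w, U = np.linalg.eigh(A)

assert np.allclose(A, U @ np.diag(w) @ U.conj().T)   # A = U D U*
assert np.allclose(U.conj().T @ U, np.eye(4))        # U is unitary
assert np.allclose(w.imag, 0)                        # eigenvalues are real
```

In the basis of eigenvectors, A acts as multiplication by the entries of w, which is the finite-dimensional instance of the "multiplication operator" model described above.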

Normal operators

A normal operator on a complex Hilbert space H is a continuous linear operator N : H → H that commutes with its Hermitian adjoint N*, that is: NN* = N*N. [2]

Normal operators are important because the spectral theorem holds for them. Today, the class of normal operators is well understood. Examples of normal operators are unitary operators (N* = N^(-1)), self-adjoint operators (N* = N), and positive operators (N = MM* for some bounded operator M).

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A*A = AA*. One can show that A is normal if and only if it is unitarily diagonalizable: by the Schur decomposition, A = UTU*, where U is unitary and T is upper-triangular. Since A is normal, TT* = T*T, so T must be diagonal, since a normal upper-triangular matrix is diagonal. The converse is obvious.

In other words, A is normal if and only if there exists a unitary matrix U such that A = UDU*, where D is a diagonal matrix. The entries of the diagonal of D are the eigenvalues of A, and the column vectors of U are the corresponding eigenvectors; they are orthonormal. Unlike the Hermitian case, the entries of D need not be real.
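The equivalence can be illustrated numerically. The sketch below (NumPy only; the specific matrix is chosen for illustration) takes a normal but non-Hermitian matrix, the cyclic shift on C^4, and verifies that it is unitarily diagonalizable with non-real eigenvalues:

```python
import numpy as np

# A normal but non-Hermitian matrix: the 4x4 cyclic shift, a unitary permutation.
A = np.roll(np.eye(4), 1, axis=0).astype(complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)      # A*A = AA*: A is normal

# Its eigenvalues are the 4th roots of unity (not all real: A is not Hermitian).
w, V = np.linalg.eig(A)

# Because A is normal with distinct eigenvalues, the normalized eigenvectors
# returned by eig form a (numerically) unitary matrix: A = V diag(w) V*.
assert np.allclose(V.conj().T @ V, np.eye(4))
assert np.allclose(A, V @ np.diag(w) @ V.conj().T)
```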

Polar decomposition

The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator. [3]

The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.
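In the finite-dimensional case the factorization can be computed from the singular value decomposition: if A = WSV*, then U = WV* and P = VSV*. A minimal NumPy sketch (for a square invertible-in-general matrix, where U is genuinely unitary):

```python
import numpy as np

# Polar decomposition of a square matrix via the SVD: A = W S Vh gives
# A = U P with U = W Vh unitary and P = V S Vh positive semi-definite.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

W, s, Vh = np.linalg.svd(A)
U = W @ Vh                             # unitary factor
P = Vh.conj().T @ np.diag(s) @ Vh      # P = (A*A)^(1/2), positive semi-definite

assert np.allclose(A, U @ P)                       # A = UP
assert np.allclose(U @ U.conj().T, np.eye(3))      # U unitary (finite dimensions)
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)     # P >= 0
```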

The operator U must be weakened to a partial isometry, rather than unitary, because of the following issue. If A is the one-sided shift on ℓ²(N), then |A| = (A*A)^(1/2) = I. So if A = U|A|, U must equal A, which is not unitary.

The existence of a polar decomposition is a consequence of Douglas' lemma:

Lemma. If A, B are bounded operators on a Hilbert space H, and A*A ≤ B*B, then there exists a contraction C such that A = CB. Furthermore, C is unique if Ker(B*) ⊂ Ker(C).

The operator C can be defined by C(Bh) = Ah, extended by continuity to the closure of Ran(B), and by zero on the orthogonal complement of Ran(B). The operator C is well-defined since A*A ≤ B*B implies Ker(B) ⊂ Ker(A). The lemma then follows.
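The construction can be sketched for matrices, where "extend by continuity and by zero on the complement" amounts to the Moore-Penrose pseudoinverse, i.e. C = AB⁺. The example below (an illustration under the assumption A = KB with K a contraction, which forces A*A ≤ B*B) checks the conclusion of the lemma:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
K = 0.5 * np.linalg.qr(rng.standard_normal((3, 3)))[0]   # a contraction, ||K|| = 0.5
A = K @ B                                                # then A*A <= B*B

# Check the hypothesis: B*B - A*A is positive semi-definite.
assert np.all(np.linalg.eigvalsh(B.T @ B - A.T @ A) >= -1e-10)

# The operator of the lemma: C(Bh) = Ah on Ran(B), zero on Ran(B)^perp.
# In matrix form this is C = A B^+ (Moore-Penrose pseudoinverse).
C = A @ np.linalg.pinv(B)

assert np.allclose(C @ B, A)                 # A = CB
assert np.linalg.norm(C, 2) <= 1 + 1e-10     # C is a contraction
```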

In particular, if A*A = B*B, then C is a partial isometry, which is unique if Ker(B*) ⊂ Ker(C). In general, for any bounded operator A,

A*A = (A*A)^(1/2) (A*A)^(1/2),

where (A*A)^(1/2) is the unique positive square root of A*A given by the usual functional calculus. So by the lemma, we have

A = U (A*A)^(1/2)

for some partial isometry U, which is unique if Ker(A) ⊂ Ker(U). (Note that Ker(A) = Ker(A*A) = Ker(B) = Ker(B*), where B = B* = (A*A)^(1/2).) Take P to be (A*A)^(1/2) and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P′U′, where P′ is positive and U′ a partial isometry.

When H is finite dimensional, U can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.

By a property of the continuous functional calculus, |A| is in the C*-algebra generated by A. A similar but weaker statement holds for the partial isometry: the polar part U is in the von Neumann algebra generated by A. If A is invertible, U is in the C*-algebra generated by A as well.

Connection with complex analysis

Many operators that are studied are operators on Hilbert spaces of holomorphic functions, and the study of the operator is intimately linked to questions in function theory. For example, Beurling's theorem describes the invariant subspaces of the unilateral shift in terms of inner functions, which are bounded holomorphic functions on the unit disk with unimodular boundary values almost everywhere on the circle. Beurling interpreted the unilateral shift as multiplication by the independent variable on the Hardy space. [4] The success in studying multiplication operators, and more generally Toeplitz operators (which are multiplication, followed by projection onto the Hardy space) has inspired the study of similar questions on other spaces, such as the Bergman space.

Operator algebras

The theory of operator algebras brings algebras of operators such as C*-algebras to the fore.

C*-algebras

A C*-algebra, A, is a Banach algebra over the field of complex numbers, together with a map * : A → A. One writes x* for the image of an element x of A. The map * has the following properties: [5]

  1. For every x in A: (x*)* = x (the map is an involution).
  2. For all x, y in A: (x + y)* = x* + y* and (xy)* = y*x*.
  3. For every complex number λ and every x in A: (λx)* = λ*x*, where λ* is the complex conjugate of λ.
  4. For all x in A: ||x*x|| = ||x|| ||x*||.

Remark. The first three identities say that A is a *-algebra. The last identity is called the C* identity and is equivalent to: ||xx*|| = ||x||^2.

The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure: ||x||^2 = ||x*x|| = sup{ |λ| : x*x − λ1 is not invertible }.
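For the C*-algebra of n×n complex matrices with the operator norm, this relation between norm and spectrum can be verified directly. A small NumPy check (illustrative only):

```python
import numpy as np

# For matrices with the operator norm, the C*-norm is recovered from the
# spectral radius of x*x: ||x||^2 = r(x*x) = max |eigenvalue of x*x|.
rng = np.random.default_rng(3)
x = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

op_norm = np.linalg.norm(x, 2)                        # operator (spectral) norm
spec_radius = np.max(np.abs(np.linalg.eigvals(x.conj().T @ x)))

assert np.isclose(op_norm**2, spec_radius)
```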


References

  1. Sunder, V. S. (1997), Functional Analysis: Spectral Theory, Birkhäuser Verlag.
  2. Hoffman, Kenneth; Kunze, Ray (1971), Linear Algebra (2nd ed.), Englewood Cliffs, N.J.: Prentice-Hall, p. 312, MR 0276251.
  3. Conway, John B. (2000), A Course in Operator Theory, Graduate Studies in Mathematics, American Mathematical Society, ISBN 0821820656.
  4. Nikolski, N. (1986), A Treatise on the Shift Operator, Springer-Verlag, ISBN 0-387-90176-0. A sophisticated treatment of the connections between operator theory and function theory in the Hardy space.
  5. Arveson, W. (1976), An Invitation to C*-Algebras, Springer-Verlag, ISBN 0-387-90176-0. An excellent introduction to the subject, accessible for those with a knowledge of basic functional analysis.

Further reading