Extensions of symmetric operators

In functional analysis, one is interested in extensions of symmetric operators acting on a Hilbert space. Of particular importance is the existence, and sometimes explicit constructions, of self-adjoint extensions. This problem arises, for example, when one needs to specify domains of self-adjointness for formal expressions of observables in quantum mechanics. Other applications of solutions to this problem can be seen in various moment problems.

This article discusses a few related problems of this type. The unifying theme is that each problem has an operator-theoretic characterization which gives a corresponding parametrization of solutions. More specifically, finding self-adjoint extensions, with various requirements, of symmetric operators is equivalent to finding unitary extensions of suitable partial isometries.

Symmetric operators

Let H be a Hilbert space. A linear operator A acting on H with dense domain dom(A) is symmetric if ⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y ∈ dom(A).

If dom(A) = H, the Hellinger–Toeplitz theorem says that A is a bounded operator, in which case A is self-adjoint and the extension problem is trivial. In general, a symmetric operator A is self-adjoint if the domain of its adjoint, dom(A*), lies in dom(A).

When dealing with unbounded operators, it is often desirable to be able to assume that the operator in question is closed. In the present context, it is a convenient fact that every symmetric operator A is closable: A has a smallest closed extension Ā, called the closure of A. This can be shown by invoking the symmetry assumption and the Riesz representation theorem. Since A and its closure Ā have the same closed extensions, it can always be assumed that the symmetric operator of interest is closed.

In the next section, a symmetric operator will be assumed to be densely defined and closed.

Self-adjoint extensions of symmetric operators

If an operator A on the Hilbert space H is symmetric, when does it have self-adjoint extensions? An operator that has a unique self-adjoint extension is said to be essentially self-adjoint; equivalently, an operator is essentially self-adjoint if its closure (the operator whose graph is the closure of the graph of A) is self-adjoint. In general, a symmetric operator could have many self-adjoint extensions or none at all; thus, we would like a classification of its self-adjoint extensions.

The first basic criterion for essential self-adjointness is the following: [1]

Theorem   If A is a symmetric operator on H, then A is essentially self-adjoint if and only if the ranges of the operators A − i and A + i are both dense in H.

Equivalently, A is essentially self-adjoint if and only if the operators A* − i and A* + i have trivial kernels. [2] That is to say, A fails to be essentially self-adjoint if and only if A* has an eigenvector with eigenvalue i or −i.
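
This criterion can be illustrated in finite dimensions, where every Hermitian matrix is already self-adjoint. The sketch below (our own, with NumPy; not the unbounded setting of the theorem) verifies that A ∓ i has trivial kernel for a Hermitian A:

```python
import numpy as np

# Hedged illustration: for a Hermitian matrix A (the bounded, finite-
# dimensional analogue of a self-adjoint operator), A - z*I is invertible
# for z = +/- i, so both kernels are trivial and both ranges are all of C^n.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (X + X.conj().T) / 2           # Hermitian: A equals its adjoint
I = np.eye(4)

for z in (1j, -1j):
    smallest = np.linalg.svd(A - z * I, compute_uv=False).min()
    # |lambda - z| >= 1 for every real eigenvalue lambda, so the smallest
    # singular value of the normal matrix A - z*I is at least 1.
    assert smallest >= 1 - 1e-9
```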

Another way of looking at the issue is provided by the Cayley transform of a symmetric operator and the deficiency indices. [3]

Theorem  Suppose A is a symmetric operator. Then there is a unique densely defined linear operator

W(A) : ran(A + i) → ran(A − i)

such that

W(A)(Ax + ix) = Ax − ix,   x ∈ dom(A).

W(A) is isometric on its domain. Moreover, ran(1 − W(A)) is dense in H.

Conversely, given any densely defined operator U which is isometric on its (not necessarily closed) domain and such that ran(1 − U) is dense, there is a (unique) densely defined symmetric operator

S(U) : ran(1 − U) → ran(1 + U)

such that

S(U)(x − Ux) = i(x + Ux),   x ∈ dom(U).

The mappings W and S are inverses of each other: S(W(A)) = A and W(S(U)) = U.

The mapping A ↦ W(A) is called the Cayley transform. It associates a partially defined isometry to any symmetric densely defined operator. Note that the mappings W and S are monotone: this means that if B is a symmetric operator that extends the densely defined symmetric operator A, then W(B) extends W(A), and similarly for S.

Theorem  A necessary and sufficient condition for A to be self-adjoint is that its Cayley transform W(A) be unitary on H.
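
In finite dimensions this equivalence can be observed directly. The following sketch (our own numerical check, with NumPy) forms the Cayley transform of a Hermitian matrix, confirms it is unitary, and inverts the transform to recover the matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (X + X.conj().T) / 2                       # Hermitian (self-adjoint)
I = np.eye(5)

# Cayley transform W = (A - i)(A + i)^{-1}; A + i is invertible because
# the spectrum of A is real.
W = (A - 1j * I) @ np.linalg.inv(A + 1j * I)
assert np.allclose(W.conj().T @ W, I)          # W is unitary

# Inverse Cayley transform recovers A; 1 - W is invertible because
# (lambda - i)/(lambda + i) = 1 has no real solution lambda.
A_back = 1j * (I + W) @ np.linalg.inv(I - W)
assert np.allclose(A_back, A)
```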

This immediately gives us a necessary and sufficient condition for A to have a self-adjoint extension, as follows:

Theorem  A necessary and sufficient condition for A to have a self-adjoint extension is that W(A) have a unitary extension to H.

A partially defined isometric operator V on a Hilbert space H has a unique isometric extension to the norm closure of dom(V). A partially defined isometric operator with closed domain is called a partial isometry.

Define the deficiency subspaces of A by

K₊ = ran(A + i)^⊥,
K₋ = ran(A − i)^⊥.

In this language, the description of the self-adjoint extension problem given by the theorem can be restated as follows: a symmetric operator A has self-adjoint extensions if and only if the deficiency subspaces K₊ and K₋ have the same dimension. [4]

The deficiency indices of a partial isometry V are defined as the dimensions of the orthogonal complements of its domain and range:

n₊(V) = dim dom(V)^⊥,
n₋(V) = dim ran(V)^⊥.

Theorem  A partial isometry V has a unitary extension if and only if the deficiency indices are identical. Moreover, V has a unique unitary extension if and only if the deficiency indices are both zero.
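
A toy instance of the theorem (our own construction): a partial isometry on C³ whose domain and range each miss a one-dimensional subspace, extended to a unitary by matching up the two complements.

```python
import numpy as np

e = np.eye(3)
# Partial isometry V: domain span{e0, e1}, with V e0 = e1 and V e1 = e2.
V = np.outer(e[1], e[0]) + np.outer(e[2], e[1])

# Deficiency: dom(V)^perp = span{e2} and ran(V)^perp = span{e0},
# both one-dimensional, so a unitary extension exists.
U = V + np.outer(e[0], e[2])       # send e2 -> e0 (any unit phase works)
assert np.allclose(U.conj().T @ U, np.eye(3))   # U is unitary
```

Choosing a different unit phase e^{iθ} in place of 1 for the complement map gives a different unitary extension, mirroring the circle of self-adjoint extensions seen below.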

We see that there is a bijection between symmetric extensions of an operator and isometric extensions of its Cayley transform. The symmetric extension is self-adjoint if and only if the corresponding isometric extension is unitary.

A symmetric operator has a unique self-adjoint extension if and only if both its deficiency indices are zero. Such an operator is said to be essentially self-adjoint. Symmetric operators which are not essentially self-adjoint may still have a canonical self-adjoint extension. Such is the case for non-negative symmetric operators (or more generally, operators which are bounded below). These operators always have a canonically defined Friedrichs extension, and for these operators we can define a canonical functional calculus. Many operators that occur in analysis are bounded below (such as the negative of the Laplacian operator −Δ), so the issue of essential self-adjointness for these operators is less critical.

Suppose A is symmetric and densely defined. Then any symmetric extension of A is a restriction of A*. Indeed, if B is symmetric and A ⊆ B, then B ⊆ B* ⊆ A* by applying the definition of the adjoint. This observation leads to the von Neumann formulae: [5]

Theorem   Suppose A is a densely defined symmetric operator, with domain dom(A). Let

N± = ran(A ± i)^⊥

be its pair of deficiency subspaces. Then

dom(A*) = dom(Ā) ⊕ N₊ ⊕ N₋

and

dom(Ā) = dom(A*) ⊖ (N₊ ⊕ N₋),

where the decomposition is orthogonal relative to the graph inner product of dom(A*):

⟨ξ, η⟩_graph = ⟨ξ, η⟩ + ⟨A*ξ, A*η⟩.

Example

Consider the Hilbert space L²[0, 1]. On the subspace of absolutely continuous functions that vanish on the boundary, define the operator A by

A f = i df/dx.

Integration by parts shows A is symmetric. Its adjoint A* is the same differential operator, with dom(A*) being the absolutely continuous functions with no boundary condition. We will see that extending A amounts to modifying the boundary conditions, thereby enlarging dom(A) and reducing dom(A*), until the two coincide.

Direct calculation shows that K₊ and K₋ are the one-dimensional subspaces given by

K₊ = span{φ₊ = N₊ eˣ},
K₋ = span{φ₋ = N₋ e⁻ˣ},

where N₊ and N₋ are normalizing constants. The self-adjoint extensions A_γ of A are parametrized by the circle group T = {γ ∈ ℂ : |γ| = 1}. For each unitary transformation U_γ : K₊ → K₋ defined by

U_γ(φ₊) = γ φ₋,

there corresponds an extension A_γ with domain

dom(A_γ) = {f + c(φ₊ + γ φ₋) : f ∈ dom(Ā), c ∈ ℂ}.

If f ∈ dom(A_γ), then f is absolutely continuous and

f(0)/f(1) = (γe + 1)/(γ + e),

a number of modulus one. Conversely, if f is absolutely continuous and f(0) = β f(1) for some β with |β| = 1, then f lies in the above domain.

The self-adjoint operators A_γ are instances of the momentum operator in quantum mechanics.
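
A discretized version of this example can be checked numerically. The sketch below is our own finite-difference convention (the sign of the phase in the wrap-around entry is a choice): it builds a central-difference approximation of i d/dx with a twisted boundary condition and confirms that the resulting matrix is Hermitian for every phase, as the circle-parametrized family of self-adjoint extensions suggests.

```python
import numpy as np

N, alpha = 64, 0.7          # grid size and a sample phase on the circle
h = 1.0 / N

# Twisted forward shift: (S psi)_j = psi_{j+1}, with the wrap-around
# entry carrying the phase exp(i*alpha) that encodes the boundary condition.
S = np.zeros((N, N), dtype=complex)
for j in range(N - 1):
    S[j, j + 1] = 1.0
S[N - 1, 0] = np.exp(1j * alpha)

# Central-difference discretization of i d/dx.
P = 1j * (S - S.conj().T) / (2 * h)
assert np.allclose(P, P.conj().T)      # Hermitian for every alpha
spectrum = np.linalg.eigvalsh(P)       # hence a real spectrum
```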

Self-adjoint extension on a larger space

Every partial isometry can be extended, on a possibly larger space, to a unitary operator. Consequently, every symmetric operator has a self-adjoint extension, on a possibly larger space.

Positive symmetric operators

A symmetric operator A is called positive if

⟨Ax, x⟩ ≥ 0   for all x ∈ dom(A).

It is known that for every such A, one has dim K₊ = dim K₋. Therefore, every positive symmetric operator has self-adjoint extensions. The more interesting question in this direction is whether A has positive self-adjoint extensions.

For two positive operators A and B, we put A ≤ B if

(A + 1)⁻¹ ≥ (B + 1)⁻¹

in the sense of bounded operators.
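
For matrices, this resolvent order is implied by the usual semidefinite order, since inversion is operator-monotone decreasing on positive definite matrices. A quick numerical sketch (our own example):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 4))
A = X @ X.T                       # positive semidefinite
B = A + np.eye(4)                 # B >= A in the semidefinite order
I = np.eye(4)

# A <= B implies (A + 1)^{-1} >= (B + 1)^{-1}: the difference is PSD.
D = np.linalg.inv(A + I) - np.linalg.inv(B + I)
assert np.linalg.eigvalsh((D + D.T) / 2).min() >= -1e-10
```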

Structure of 2 × 2 matrix contractions

While the extension problem for general symmetric operators is essentially that of extending partial isometries to unitaries, for positive symmetric operators the question becomes one of extending contractions: by "filling out" certain unknown entries of a 2 × 2 self-adjoint contraction, we obtain the positive self-adjoint extensions of a positive symmetric operator.

Before stating the relevant result, we first fix some terminology. For a contraction Γ acting on H, we define its defect operators by

D_Γ = (1 − Γ*Γ)^{1/2},   D_{Γ*} = (1 − ΓΓ*)^{1/2}.

The defect spaces of Γ are

𝒟_Γ = closure of ran(D_Γ),   𝒟_{Γ*} = closure of ran(D_{Γ*}).

The defect operators indicate the non-unitarity of Γ, while the defect spaces ensure uniqueness in some parameterizations. Using this machinery, one can explicitly describe the structure of general matrix contractions. We will only need the 2 × 2 case. Every 2 × 2 contraction Γ can be uniquely expressed as

Γ = [ Γ₁          D_{Γ₁*} Γ₂                          ]
    [ Γ₃ D_{Γ₁}   −Γ₃ Γ₁* Γ₂ + D_{Γ₃*} Γ₄ D_{Γ₂} ],

where each Γᵢ is a contraction.
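
The defect operators can be computed via a matrix square root. The sketch below (the helper `defect` is our own) builds D_Γ = (1 − Γ*Γ)^{1/2} and D_{Γ*} for a small contraction and checks the standard intertwining relation Γ D_Γ = D_{Γ*} Γ:

```python
import numpy as np

def defect(G):
    """(1 - G* G)^{1/2} via the spectral decomposition of the PSD matrix."""
    M = np.eye(G.shape[1]) - G.conj().T @ G
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

G = np.array([[0.5, 0.2],
              [0.1, 0.3]])            # a contraction: operator norm < 1
assert np.linalg.norm(G, 2) < 1

D_G = defect(G)                       # defect operator of G
D_Gs = defect(G.conj().T)             # defect operator of G*
assert np.allclose(G @ D_G, D_Gs @ G) # intertwining: G D_G = D_{G*} G
```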

Extensions of positive symmetric operators

The Cayley transform for general symmetric operators can be adapted to this special case. For every non-negative number a,

|(a − 1)/(a + 1)| ≤ 1.

This suggests we assign to every positive symmetric operator A a contraction

C_A : ran(A + 1) → ran(A − 1)

defined by

C_A(Ax + x) = Ax − x,   x ∈ dom(A),

which has the matrix representation

C_A = [ Γ₁        ] : ran(A + 1) → ran(A + 1) ⊕ ran(A + 1)^⊥
      [ Γ₃ D_{Γ₁} ]

with respect to the decomposition H = ran(A + 1) ⊕ ran(A + 1)^⊥. It is easily verified that the Γ₁ entry, C_A projected onto its domain ran(A + 1), is self-adjoint. The operator C_A can be written as

C_A = (A − 1)(A + 1)⁻¹

with dom(C_A) = ran(A + 1). If C̃ is a contraction that extends C_A and its projection onto its domain is self-adjoint, then it is clear that its inverse Cayley transform

A′ = (1 + C̃)(1 − C̃)⁻¹

defined on ran(1 − C̃) is a positive symmetric extension of A. The symmetric property follows from the projection onto the domain being self-adjoint, and positivity follows from contractivity. The converse is also true: given a positive symmetric extension of A, its Cayley transform is a contraction satisfying the stated "partial" self-adjointness property.

Theorem  The positive symmetric extensions of A are in one-to-one correspondence with the extensions of its Cayley transform where, if C is such an extension, we require C projected onto dom(C) to be self-adjoint.

The unitarity criterion of the Cayley transform is replaced by self-adjointness for positive operators.

Theorem  A symmetric positive operator A is self-adjoint if and only if its Cayley transform C_A is a self-adjoint contraction defined on all of H, i.e. when dom(C_A) = H.
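
In finite dimensions both directions of this theorem collapse into one computation. The sketch below (our own, with NumPy) forms C_A = (A − 1)(A + 1)⁻¹ for a positive semidefinite matrix, checks that it is a self-adjoint contraction, and inverts the transform:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4))
A = X @ X.T                                   # positive semidefinite
I = np.eye(4)

C = (A - I) @ np.linalg.inv(A + I)            # Cayley transform for A >= 0
assert np.allclose(C, C.T)                    # self-adjoint
assert np.linalg.norm(C, 2) <= 1 + 1e-12      # contraction

# Inverse transform recovers A; 1 - C is invertible because the
# eigenvalues (a - 1)/(a + 1), a >= 0, lie in [-1, 1).
A_back = (I + C) @ np.linalg.inv(I - C)
assert np.allclose(A_back, A)
```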

Therefore, finding a self-adjoint extension of a positive symmetric operator becomes a "matrix completion problem". Specifically, we need to embed the column contraction C_A into a 2 × 2 self-adjoint contraction. This can always be done, and the structure of such contractions gives a parametrization of all possible extensions.

By the preceding subsection, all self-adjoint extensions of C_A take the form

C̃(Γ₄) = [ Γ₁          D_{Γ₁} Γ₃*                        ]
         [ Γ₃ D_{Γ₁}   −Γ₃ Γ₁ Γ₃* + D_{Γ₃*} Γ₄ D_{Γ₃*} ].

So the self-adjoint positive extensions of A are in bijective correspondence with the self-adjoint contractions Γ₄ on the defect space 𝒟_{Γ₃*} of Γ₃*. The contractions Γ₄ = −1 and Γ₄ = 1 give rise to positive extensions A₀ and A∞ respectively. These are the smallest and largest positive extensions of A in the sense that

A₀ ≤ B ≤ A∞

for any positive self-adjoint extension B of A. The operator A∞ is the Friedrichs extension of A and A₀ is the von Neumann–Krein extension of A.

Similar results can be obtained for accretive operators.

Notes

  1. Hall 2013, Theorem 9.21.
  2. Hall 2013, Corollary 9.22.
  3. Rudin 1991, pp. 356–357, §13.17.
  4. Jørgensen, Kornelson & Shuman 2011, p. 85.
  5. Akhiezer 1981, p. 354.
