Unbounded operator

In mathematics, more specifically functional analysis and operator theory, the notion of unbounded operator provides an abstract framework for dealing with differential operators, unbounded observables in quantum mechanics, and other cases.

The term "unbounded operator" can be misleading, since

In contrast to bounded operators, unbounded operators on a given space do not form an algebra, nor even a linear space, because each one is defined on its own domain.

The term "operator" often means "bounded linear operator", but in the context of this article it means "unbounded operator", with the reservations made above. The given space is assumed to be a Hilbert space.[ clarification needed ] Some generalizations to Banach spaces and more general topological vector spaces are possible.

Short history

The theory of unbounded operators developed in the late 1920s and early 1930s as part of developing a rigorous mathematical framework for quantum mechanics. [1] The theory's development is due to John von Neumann [2] and Marshall Stone. [3] Von Neumann introduced the use of graphs to analyze unbounded operators in 1936. [4]

Definitions and basic properties

Let X, Y be Banach spaces. An unbounded operator (or simply operator) T : X → Y is a linear map T from a linear subspace D(T) ⊆ X — the domain of T — to the space Y. [5] Contrary to the usual convention, T may not be defined on the whole space X. Two operators are equal if they have a common domain and they coincide on that common domain. [5]

An operator T is said to be closed if its graph Γ(T) is a closed set. [6] (Here, the graph Γ(T) is a linear subspace of the direct sum X ⊕ Y, defined as the set of all pairs (x, Tx), where x runs over the domain of T.) Explicitly, this means that for every sequence {xn} of points from the domain of T such that xn → x and Txn → y, it holds that x belongs to the domain of T and Tx = y. [6] The closedness can also be formulated in terms of the graph norm: an operator T is closed if and only if its domain D(T) is a complete space with respect to the norm [7]

||x||_T = ( ||x||^2 + ||Tx||^2 )^{1/2}.

An operator T is said to be densely defined if its domain is dense in X. [5] This also includes operators defined on the entire space X, since the whole space is dense in itself. The denseness of the domain is necessary and sufficient for the existence of the adjoint (if X and Y are Hilbert spaces) and the transpose; see the sections below.

If T : X → Y is closed, densely defined and continuous on its domain, then its domain is all of X. [8]

A densely defined operator T on a Hilbert space H is called bounded from below if T + a is a positive operator for some real number a. That is, ⟨Tx | x⟩ ≥ −a ||x||2 for all x in the domain of T (or alternatively ⟨Tx | x⟩ ≥ a ||x||2 since a is arbitrary). [9] If both T and −T are bounded from below then T is bounded. [9]

Example

Let C([0, 1]) denote the space of continuous functions on the unit interval, and let C1([0, 1]) denote the space of continuously differentiable functions. We equip C([0, 1]) with the supremum norm ||f||∞ = sup_{x ∈ [0, 1]} |f(x)|, making it a Banach space. Define the classical differentiation operator d/dx : C1([0, 1]) → C([0, 1]) by the usual formula:

(d/dx f)(x) = f′(x),  x ∈ [0, 1].

Every continuously differentiable function is continuous, so C1([0, 1]) ⊆ C([0, 1]). We claim that d/dx : C([0, 1]) → C([0, 1]) is a well-defined unbounded operator, with domain C1([0, 1]). For this, we need to show that d/dx is linear and then, for example, exhibit functions fn in C1([0, 1]) such that ||fn||∞ = 1 and ||(d/dx)fn||∞ → ∞.

This is a linear operator, since a linear combination af + bg of two continuously differentiable functions f, g is also continuously differentiable, and

(d/dx)(af + bg) = a (d/dx)f + b (d/dx)g.

The operator is not bounded. For example, the functions

fn(x) = sin(2πnx),  n ≥ 1,

satisfy

||fn||∞ = 1,

but

||(d/dx)fn||∞ = 2πn → ∞

as n → ∞.
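This growth can be checked numerically. The sketch below is an illustration, not part of the original text; it assumes NumPy and uses the functions fn(x) = sin(2πnx) as one standard choice. It approximates supremum norms on a fine grid and shows that no single constant K can satisfy ||(d/dx)f||∞ ≤ K ||f||∞ for all f in the domain:

```python
import numpy as np

def sup_norm(f, grid):
    """Approximate the supremum norm ||f||_inf by sampling on a grid."""
    return np.max(np.abs(f(grid)))

grid = np.linspace(0.0, 1.0, 100001)  # fine grid on [0, 1]

def norm_ratio(n):
    """Compute ||(d/dx) f_n||_inf / ||f_n||_inf for f_n(x) = sin(2*pi*n*x)."""
    f = lambda x: np.sin(2 * np.pi * n * x)
    df = lambda x: 2 * np.pi * n * np.cos(2 * np.pi * n * x)  # exact derivative
    return sup_norm(df, grid) / sup_norm(f, grid)

for n in (1, 10, 100):
    print(n, norm_ratio(n))  # the ratio grows like 2*pi*n, so d/dx is unbounded
```

Since the ratio is unbounded over the unit ball of C1([0, 1]), no operator norm exists.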

The operator is densely defined, and closed.

The same operator can be treated as an operator Z → Z for many choices of Banach space Z and not be bounded between any of them. At the same time, it can be bounded as an operator X → Y for other pairs of Banach spaces X, Y, and also as an operator Z → Z for some topological vector spaces Z. As an example let I ⊆ R be an open interval and consider

d/dx : ( C1(I), ||·||C1 ) → ( C(I), ||·||∞ ),

where:

||f||C1 = ||f||∞ + ||f′||∞.

Adjoint

The adjoint of an unbounded operator can be defined in two equivalent ways. Let T : D(T) ⊆ H1 → H2 be an unbounded operator between Hilbert spaces.

First, it can be defined in a way analogous to how one defines the adjoint of a bounded operator. Namely, the adjoint T∗ : D(T∗) ⊆ H2 → H1 of T is defined as an operator with the property:

⟨Tx | y⟩ = ⟨x | T∗y⟩,  x ∈ D(T), y ∈ D(T∗).

More precisely, T∗ is defined in the following way. If y ∈ H2 is such that x ↦ ⟨y | Tx⟩ is a continuous linear functional on the domain of T, then y is declared to be an element of D(T∗), and after extending the linear functional to the whole space via the Hahn–Banach theorem, it is possible to find a z in H1 such that

⟨z | x⟩ = ⟨y | Tx⟩,  x ∈ D(T),

since the dual of a Hilbert space can be identified with the set of linear functionals given by the inner product. For each y, z is uniquely determined if and only if the extended linear functional was densely defined; i.e., if T is densely defined. Finally, letting T∗y = z completes the construction of T∗. [10] Note that T∗ exists if and only if T is densely defined.

By definition, the domain of T∗ consists of elements y in H2 such that x ↦ ⟨y | Tx⟩ is continuous on the domain of T. Consequently, the domain of T∗ could be anything; it could be trivial (i.e., contain only zero). [11] It may happen that the domain of T∗ is a closed hyperplane and T∗ vanishes everywhere on the domain. [12] [13] Thus, boundedness of T∗ on its domain does not imply boundedness of T. On the other hand, if T∗ is defined on the whole space then T is bounded on its domain and therefore can be extended by continuity to a bounded operator on the whole space. [14] If the domain of T∗ is dense, then it has its adjoint T∗∗. [15] A closed densely defined operator T is bounded if and only if T∗ is bounded. [16]

The other equivalent definition of the adjoint can be obtained by noticing a general fact. Define a linear operator J as follows: [15]

J : H1 ⊕ H2 → H2 ⊕ H1,  J(x ⊕ y) = −y ⊕ x.

Since J is an isometric surjection, it is unitary. Hence: J(Γ(T))⊥ is the graph of some operator S if and only if T is densely defined. [17] A simple calculation shows that this "some" S satisfies:

⟨Sy | x⟩ = ⟨y | Tx⟩,

for every x in the domain of T. Thus, S is the adjoint of T.

It follows immediately from the above definition that the adjoint T∗ is closed. [15] In particular, a self-adjoint operator (i.e., T = T∗) is closed. An operator T is closed and densely defined if and only if T∗∗ = T. [18]

Some well-known properties for bounded operators generalize to closed densely defined operators. The kernel of a closed operator is closed. Moreover, the kernel of a closed densely defined operator T : H1 → H2 coincides with the orthogonal complement of the range of the adjoint. That is, [19]

ker T = (ran T∗)⊥.
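In finite dimensions every operator is bounded and this identity can be verified directly. The following sketch (an illustration assuming NumPy; a real matrix stands in for T, and its transpose for the adjoint T∗) checks that the kernel of T is the orthogonal complement of the range of T∗:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 7))  # a linear map R^7 -> R^5; its adjoint is the transpose

def kernel_basis(A, tol=1e-10):
    """Return a matrix whose columns form an orthonormal basis of ker A (via SVD)."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

K = kernel_basis(T)  # columns span ker T
# Columns of T.T span ran T*; T @ K = (T.T).T @ K = 0 says ker T is orthogonal to ran T*.
assert np.allclose(T @ K, 0)
# Dimension count: dim ker T + dim ran T* equals the dimension of the domain.
assert K.shape[1] + np.linalg.matrix_rank(T) == 7
```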

von Neumann's theorem states that T∗T and TT∗ are self-adjoint, and that I + T∗T and I + TT∗ both have bounded inverses. [20] If T∗ has trivial kernel, T has dense range (by the above identity). Moreover:
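A finite-dimensional sketch of the invertibility claim (an illustration assuming NumPy; T is an arbitrary real matrix, so T∗ is its transpose): I + T∗T is symmetric with spectrum contained in [1, ∞), hence it has a bounded inverse of norm at most 1.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 6))      # any real matrix; adjoint = transpose

A = np.eye(6) + T.T @ T              # the operator I + T*T
assert np.allclose(A, A.T)           # self-adjoint
eigs = np.linalg.eigvalsh(A)
assert np.all(eigs >= 1.0 - 1e-9)    # spectrum lies in [1, inf)
inv_norm = 1.0 / eigs.min()          # ||(I + T*T)^{-1}|| = 1 / (smallest eigenvalue)
assert inv_norm <= 1.0 + 1e-9        # the inverse is bounded, with norm <= 1
```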

T is surjective if and only if there is a K > 0 such that ||f||2 ≤ K ||T∗f||1 for all f in D(T∗). [21] (This is essentially a variant of the so-called closed range theorem.) In particular, T has closed range if and only if T∗ has closed range.

In contrast to the bounded case, it is not necessary that (TS)∗ = S∗T∗, since, for example, it is even possible that (TS)∗ doesn't exist. This is, however, the case if, for example, T is bounded. [22]

A densely defined, closed operator T is called normal if it satisfies the following equivalent conditions: [23]

  1. T∗T = TT∗;
  2. the domain of T is equal to the domain of T∗, and ||Tx|| = ||T∗x|| for every x in this domain;
  3. there exist self-adjoint operators A, B such that T = A + iB, T∗ = A − iB, and ||Tx||2 = ||Ax||2 + ||Bx||2 for every x in the domain of T.

Every self-adjoint operator is normal.

Transpose

Let T : B1 → B2 be an operator between Banach spaces. Then the transpose (or dual) T′ : B2∗ → B1∗ of T is an operator satisfying:

⟨Tx, y′⟩ = ⟨x, T′y′⟩

for all x in B1 and y′ in B2∗. Here, we used the notation ⟨x, x′⟩ = x′(x). [24]

The necessary and sufficient condition for the transpose of T to exist is that T is densely defined (for essentially the same reason as for adjoints, as discussed above).

For any Hilbert space H, there is the anti-linear isomorphism:

J : H∗ → H

given by Jf = y, where f(x) = ⟨x | y⟩ for x ∈ H. Through this isomorphism, the transpose T′ relates to the adjoint T∗ in the following way:

T∗ = J1 T′ J2−1, [25]

where Jj : Hj∗ → Hj. (For the finite-dimensional case, this corresponds to the fact that the adjoint of a matrix is its conjugate transpose.) Note that this gives the definition of adjoint in terms of a transpose.
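In the finite-dimensional case this reduction of the adjoint to the transpose is the familiar conjugate-transpose rule, which can be checked directly (an illustration assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

inner = lambda u, v: np.vdot(u, v)  # <u | v>, conjugate-linear in the first slot
A_star = A.conj().T                 # conjugate transpose

# Defining property of the adjoint: <Ax | y> = <x | A* y>.
assert np.isclose(inner(A @ x, y), inner(x, A_star @ y))
```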

Closed linear operators

Closed linear operators are a class of linear operators on Banach spaces. They are more general than bounded operators, and therefore not necessarily continuous, but they still retain nice enough properties that one can define the spectrum and (with certain assumptions) functional calculus for such operators. Many important linear operators which fail to be bounded turn out to be closed, such as the derivative and a large class of differential operators.

Let X, Y be two Banach spaces. A linear operator A : D(A) ⊆ X → Y is closed if for every sequence {xn} in D(A) converging to x in X such that Axn → y ∈ Y as n → ∞ one has x ∈ D(A) and Ax = y. Equivalently, A is closed if its graph is closed in the direct sum X ⊕ Y.

Given a linear operator A, not necessarily closed, if the closure of its graph in X ⊕ Y happens to be the graph of some operator, that operator is called the closure of A, and we say that A is closable. Denote the closure of A by A̅. It follows that A is the restriction of A̅ to D(A).

A core (or essential domain) of a closable operator A is a subset C of D(A) such that the closure of the restriction of A to C is A̅.

Example

Consider the derivative operator A = d/dx where X = Y = C([a, b]) is the Banach space of all continuous functions on an interval [a, b]. If one takes its domain D(A) to be C1([a, b]), then A is a closed operator which is not bounded. [26] On the other hand if D(A) = C∞([a, b]), then A will no longer be closed, but it will be closable, with the closure being its extension defined on C1([a, b]).

Symmetric operators and self-adjoint operators

An operator T on a Hilbert space is symmetric if and only if for each x and y in the domain of T we have ⟨Tx | y⟩ = ⟨x | Ty⟩. A densely defined operator T is symmetric if and only if it agrees with its adjoint T∗ restricted to the domain of T, in other words when T∗ is an extension of T. [27]

In general, if T is densely defined and symmetric, the domain of the adjoint T∗ need not equal the domain of T. If T is symmetric and the domain of T and the domain of the adjoint coincide, then we say that T is self-adjoint. [28] Note that, when T is self-adjoint, the existence of the adjoint implies that T is densely defined and since T∗ is necessarily closed, T is closed.

A densely defined operator T is symmetric if and only if the subspace Γ(T) (defined in a previous section) is orthogonal to its image J(Γ(T)) under J (where J(x, y) := (y, −x)). [29]

Equivalently, an operator T is self-adjoint if it is densely defined, closed, symmetric, and satisfies the fourth condition: both operators T − i, T + i are surjective, that is, map the domain of T onto the whole space H. In other words: for every x in H there exist y and z in the domain of T such that Ty − iy = x and Tz + iz = x. [30]

An operator T is self-adjoint if the two subspaces Γ(T), J(Γ(T)) are orthogonal and their sum is the whole space H ⊕ H. [15]

This approach does not cover non-densely defined closed operators. Non-densely defined symmetric operators can be defined directly or via graphs, but not via adjoint operators.

A symmetric operator is often studied via its Cayley transform.

An operator T on a complex Hilbert space is symmetric if and only if its quadratic form is real, that is, the number ⟨Tx | x⟩ is real for all x in the domain of T. [27]
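For an everywhere-defined operator on a finite-dimensional complex Hilbert space this criterion is easy to observe (an illustration assuming NumPy): the Hermitian part of a matrix has a real quadratic form, and the skew-Hermitian part a purely imaginary one.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (B + B.conj().T) / 2            # Hermitian part: a symmetric operator
S = (B - B.conj().T) / 2            # skew-Hermitian part

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
q_sym = np.vdot(x, H @ x)           # quadratic form of the symmetric operator
q_skew = np.vdot(x, S @ x)          # quadratic form of the skew part

assert abs(q_sym.imag) < 1e-9       # real, as the criterion predicts
assert abs(q_skew.real) < 1e-9      # purely imaginary, by the same computation
```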

A densely defined closed symmetric operator T is self-adjoint if and only if T∗ is symmetric. [31] It may happen that it is not. [32] [33]

A densely defined operator T is called positive [9] (or nonnegative [34]) if its quadratic form is nonnegative, that is, ⟨Tx | x⟩ ≥ 0 for all x in the domain of T. Such an operator is necessarily symmetric.

The operator T∗T is self-adjoint [35] and positive [9] for every densely defined, closed T.

The spectral theorem applies to self-adjoint operators [36] and moreover, to normal operators, [37] [38] but not to densely defined, closed operators in general, since in this case the spectrum can be empty. [39] [40]

A symmetric operator defined everywhere is closed, therefore bounded, [6] which is the Hellinger–Toeplitz theorem. [41]

By definition, an operator T is an extension of an operator S if Γ(S) ⊆ Γ(T). [42] An equivalent direct definition: for every x in the domain of S, x belongs to the domain of T and Sx = Tx. [5] [42]

Note that an everywhere defined extension exists for every operator, which is a purely algebraic fact explained at Discontinuous linear map#General existence theorem and based on the axiom of choice. If the given operator is not bounded then the extension is a discontinuous linear map. It is of little use since it cannot preserve important properties of the given operator (see below), and usually is highly non-unique.

An operator T is called closable if it satisfies the following equivalent conditions: [6] [42] [43]

  1. T has a closed extension;
  2. the closure of the graph of T is the graph of some operator;
  3. for every sequence {xn} of points from the domain of T such that xn → 0 and Txn → y, it holds that y = 0.

Not all operators are closable. [44]

A closable operator T has the least closed extension T̅, called the closure of T. The closure of the graph of T is equal to the graph of T̅. [6] [42]

Other, non-minimal closed extensions may exist. [32] [33]

A densely defined operator T is closable if and only if T∗ is densely defined. In this case T̅ = T∗∗ and (T̅)∗ = T∗. [15] [45]

If S is densely defined and T is an extension of S then S∗ is an extension of T∗. [46]

Every symmetric operator is closable. [47]

A symmetric operator is called maximal symmetric if it has no symmetric extensions, except for itself. [27]

Every self-adjoint operator is maximal symmetric. [27] The converse is false. [48]

An operator is called essentially self-adjoint if its closure is self-adjoint. [47]

An operator is essentially self-adjoint if and only if it has one and only one self-adjoint extension. [31]

A symmetric operator may have more than one self-adjoint extension, and even a continuum of them. [33]

A densely defined, symmetric operator T is essentially self-adjoint if and only if both operators T − i, T + i have dense range. [49]

Let T be a densely defined operator. Denoting the relation "T is an extension of S" by S ⊂ T (a conventional abbreviation for Γ(S) ⊆ Γ(T)) one has the following. [50]

  1. If T is symmetric then T ⊂ T∗∗ ⊂ T∗.
  2. If T is closed and symmetric then T = T∗∗ ⊂ T∗.
  3. If T is self-adjoint then T = T∗∗ = T∗.
  4. If T is essentially self-adjoint then T ⊂ T∗∗ = T∗.

Importance of self-adjoint operators

The class of self-adjoint operators is especially important in mathematical physics. Every self-adjoint operator is densely defined, closed and symmetric. The converse holds for bounded operators but fails in general. Self-adjointness is substantially more restricting than these three properties. The famous spectral theorem holds for self-adjoint operators. In combination with Stone's theorem on one-parameter unitary groups it shows that self-adjoint operators are precisely the infinitesimal generators of strongly continuous one-parameter unitary groups, see Self-adjoint operator#Self-adjoint extensions in quantum mechanics. Such unitary groups are especially important for describing time evolution in classical and quantum mechanics.

See also

Notes

  1. Reed & Simon 1980 , Notes to Chapter VIII, page 305
  2. von Neumann, J. (1930), "Allgemeine Eigenwerttheorie Hermitescher Funktionaloperatoren (General Eigenvalue Theory of Hermitian Functional Operators)", Mathematische Annalen, 102 (1): 49–131, doi:10.1007/BF01782338
  3. Stone, Marshall Harvey (1932). Linear Transformations in Hilbert Space and Their Applications to Analysis. Reprint of the 1932 Ed. American Mathematical Society. ISBN   978-0-8218-7452-3.
  4. von Neumann, J. (1936), "Über Adjungierte Funktionaloperatoren (On Adjoint Functional Operators)", Annals of Mathematics, Second Series, 33 (2): 294–310, doi:10.2307/1968331, JSTOR 1968331
  5. Pedersen 1989, 5.1.1
  6. Pedersen 1989, 5.1.4
  7. Berezansky, Sheftel & Us 1996 , page 5
  8. Suppose fj is a sequence in the domain of T that converges to gX. Since T is uniformly continuous on its domain, Tfj is Cauchy in Y. Thus, (fj, Tfj) is Cauchy and so converges to some (f, Tf) since the graph of T is closed. Hence, f = g, so the domain of T is closed; being also dense, it is all of X.
  9. Pedersen 1989, 5.1.12
  10. Verifying that T∗ is linear is trivial.
  11. Berezansky, Sheftel & Us 1996 , Example 3.2 on page 16
  12. Reed & Simon 1980 , page 252
  13. Berezansky, Sheftel & Us 1996 , Example 3.1 on page 15
  14. Proof: being closed, the everywhere defined T∗ is bounded, which implies boundedness of T∗∗, the latter being the closure of T. See also ( Pedersen 1989 , 2.3.11) for the case of everywhere defined T.
  15. Pedersen 1989, 5.1.5
  16. Proof: T∗∗ = T. So, if T∗ is bounded, then its adjoint T∗∗ = T is bounded.
  17. Berezansky, Sheftel & Us 1996 , page 12
  18. Proof: If T is closed densely defined, then T∗ exists and is densely defined. Thus, T∗∗ exists. The graph of T is dense in the graph of T∗∗; hence, since Γ(T) is closed, T = T∗∗. Conversely, the existence of T∗∗ implies that of T∗, which in turn implies that T is densely defined. Since T∗∗ is closed, T is densely defined and closed.
  19. Brezis, pp. 28.
  20. Yoshida, pp. 200.
  21. If T is surjective, then T : (ker T)⊥ → H2 has bounded inverse, denoted by S. The estimate then follows since
    ||f||2^2 = ⟨TSf | f⟩ = ⟨Sf | T∗f⟩ ≤ ||S|| ||f||2 ||T∗f||1.
    Conversely, suppose the estimate holds. Since T∗ has closed range, it is the case that ran(T) = ran(TT∗). Since ran(T) is dense, it suffices to show that TT∗ has closed range. If TT∗fj is convergent, then fj is convergent by the estimate, since
    ||T∗(fj − fk)||1^2 = ⟨TT∗(fj − fk) | fj − fk⟩ ≤ ||TT∗(fj − fk)||2 K ||T∗(fj − fk)||1,
    so that {T∗fj}, and then by the estimate {fj}, are Cauchy. Say, fj → g. Since TT∗ is self-adjoint, thus closed (von Neumann's theorem), TT∗fj → TT∗g. QED
  22. Yoshida, pp. 195.
  23. Pedersen 1989 , 5.1.11
  24. Yoshida, pp. 193.
  25. Yoshida, pp. 196.
  26. Kreyszig, Erwin (1978). Introductory Functional Analysis With Applications. USA: John Wiley & Sons. Inc. p. 294. ISBN   0-471-50731-8.
  27. Pedersen 1989, 5.1.3
  28. Kato 1995 , 5.3.3
  29. Follows from ( Pedersen 1989 , 5.1.5) and the definition via adjoint operators.
  30. Pedersen 1989 , 5.2.5
  31. Reed & Simon 1980, page 256
  32. Pedersen 1989, 5.1.16
  33. Reed & Simon 1980, Example on pages 257–259
  34. Berezansky, Sheftel & Us 1996 , page 25
  35. Pedersen 1989 , 5.1.9
  36. Pedersen 1989 , 5.3.8
  37. Berezansky, Sheftel & Us 1996 , page 89
  38. Pedersen 1989 , 5.3.19
  39. Reed & Simon 1980 , Example 5 on page 254
  40. Pedersen 1989 , 5.2.12
  41. Reed & Simon 1980 , page 84
  42. Reed & Simon 1980, page 250
  43. Berezansky, Sheftel & Us 1996 , pages 6,7
  44. Berezansky, Sheftel & Us 1996 , page 7
  45. Reed & Simon 1980 , page 253
  46. Pedersen 1989 , 5.1.2
  47. Pedersen 1989, 5.1.6
  48. Pedersen 1989 , 5.2.6
  49. Reed & Simon 1980 , page 257
  50. Reed & Simon 1980 , pages 255, 256


References

    This article incorporates material from Closed operator on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.