Spectral theory

In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. [1] It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. [2] The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter. [3]

Mathematical background

The name spectral theory was introduced by David Hilbert in his original formulation of Hilbert space theory, which was cast in terms of quadratic forms in infinitely many variables. The original spectral theorem was therefore conceived as a version of the theorem on principal axes of an ellipsoid, in an infinite-dimensional setting. The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous. Hilbert himself was surprised by the unexpected application of this theory, noting that "I developed my theory of infinitely many variables from purely mathematical interests, and even called it 'spectral analysis' without any presentiment that it would later find application to the actual spectrum of physics." [4]

There have been three main ways to formulate spectral theory, each of which finds use in different domains. After Hilbert's initial formulation, the later development of abstract Hilbert spaces and the spectral theory of single normal operators on them were well suited to the requirements of physics, exemplified by the work of von Neumann. [5] Further theory built on this to address Banach algebras in general. This development led to the Gelfand representation, which covers the commutative case, and further into non-commutative harmonic analysis.

The difference can be seen in making the connection with Fourier analysis. The Fourier transform on the real line is in one sense the spectral theory of differentiation as a differential operator. But for that to cover the phenomena one already has to deal with generalized eigenfunctions (for example, by means of a rigged Hilbert space). On the other hand, it is simple to construct a group algebra, the spectrum of which captures the Fourier transform's basic properties, and this is carried out by means of Pontryagin duality.

One can also study the spectral properties of operators on Banach spaces. For example, compact operators on Banach spaces have many spectral properties similar to those of matrices.

Physical background

The background in the physics of vibrations has been explained in this way: [6]

Spectral theory is connected with the investigation of localized vibrations of a variety of different objects, from atoms and molecules in chemistry to obstacles in acoustic waveguides. These vibrations have frequencies, and the issue is to decide when such localized vibrations occur, and how to go about computing the frequencies. This is a very complicated problem since every object has not only a fundamental tone but also a complicated series of overtones, which vary radically from one body to another.

Such physical ideas have nothing to do with the mathematical theory on a technical level, but there are examples of indirect involvement (see for example Mark Kac's question Can you hear the shape of a drum? ). Hilbert's adoption of the term "spectrum" has been attributed (by Jean Dieudonné) to an 1897 paper of Wilhelm Wirtinger on the Hill differential equation, and it was taken up by his students during the first decade of the twentieth century, among them Erhard Schmidt and Hermann Weyl. The conceptual basis for Hilbert space was developed from Hilbert's ideas by Erhard Schmidt and Frigyes Riesz. [7] [8] It was almost twenty years later, when quantum mechanics was formulated in terms of the Schrödinger equation, that the connection was made to atomic spectra; a connection with the mathematical physics of vibration had been suspected before, as remarked by Henri Poincaré, but rejected for simple quantitative reasons, absent an explanation of the Balmer series. [9] The later discovery in quantum mechanics that spectral theory could explain features of atomic spectra was therefore fortuitous, rather than being an object of Hilbert's spectral theory.

A definition of spectrum

Consider a bounded linear transformation T defined everywhere over a general Banach space. We form the transformation:

Rζ = (ζI − T)−1.

Here I is the identity operator and ζ is a complex number. The inverse of an operator T, that is T−1, is defined by:

TT−1 = T−1T = I.

If the inverse exists, T is called regular. If it does not exist, T is called singular.

With these definitions, the resolvent set of T is the set of all complex numbers ζ such that Rζ exists and is bounded. This set is often denoted ρ(T). The spectrum of T is the set of all complex numbers ζ such that Rζ fails to exist or is unbounded. Often the spectrum of T is denoted by σ(T). The function Rζ for all ζ in ρ(T) (that is, wherever Rζ exists as a bounded operator) is called the resolvent of T. The spectrum of T is therefore the complement of the resolvent set of T in the complex plane. [10] Every eigenvalue of T belongs to σ(T), but σ(T) may contain non-eigenvalues. [11]
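
As a finite-dimensional illustration of these definitions, the spectrum of a square matrix is exactly its set of eigenvalues, and the resolvent exists at every other point of the complex plane. A minimal sketch (the matrix is illustrative, not from the text):

```python
import numpy as np

# A small matrix standing in for the bounded operator T.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)

# In finite dimensions the spectrum sigma(T) is the set of eigenvalues: {2, 3} here.
spectrum = np.linalg.eigvals(T)

# For zeta in the resolvent set, R_zeta = (zeta*I - T)^(-1) exists and is bounded.
zeta = 5.0
R = np.linalg.inv(zeta * I - T)

# At an eigenvalue, zeta*I - T is singular, so R_zeta fails to exist.
det_at_eigenvalue = np.linalg.det(2.0 * I - T)
```

In infinite dimensions the spectrum can be strictly larger than the set of eigenvalues, which is why the definition is phrased in terms of the resolvent rather than eigenvectors.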

This definition applies to a Banach space, but of course other types of space exist as well; for example, topological vector spaces include Banach spaces, but can be more general. [12] [13] On the other hand, Banach spaces include Hilbert spaces, and it is these spaces that find the greatest application and the richest theoretical results. [14] With suitable restrictions, much can be said about the structure of the spectra of transformations in a Hilbert space. In particular, for self-adjoint operators, the spectrum lies on the real line and (in general) is a spectral combination of a point spectrum of discrete eigenvalues and a continuous spectrum. [15]

Spectral theory briefly

In functional analysis and linear algebra the spectral theorem establishes conditions under which an operator can be expressed in simple form as a sum of simpler operators. As a full rigorous presentation is not appropriate for this article, we take an approach that avoids much of the rigor and satisfaction of a formal treatment with the aim of being more comprehensible to a non-specialist.

This topic is easiest to describe by introducing the bra–ket notation of Dirac for operators. [16] [17] As an example, a very particular linear operator L might be written as a dyadic product: [18] [19]

L = |k1⟩⟨b1|,

in terms of the "bra" ⟨b1| and the "ket" |k1⟩. A function f is described by a ket as |f⟩. The function f(x) defined on the coordinates (x1, x2, x3, ...) is denoted as

f(x) = ⟨x|f⟩,

and the magnitude of f by

‖f‖² = ⟨f|f⟩ = ∫ f*(x)f(x) dx,

where the notation * denotes a complex conjugate. This inner product choice defines a very specific inner product space, restricting the generality of the arguments that follow. [14]

The effect of L upon a function f is then described as:

L|f⟩ = |k1⟩⟨b1|f⟩,

expressing the result that the effect of L on f is to produce a new function |k1⟩ multiplied by the inner product represented by ⟨b1|f⟩.

A more general linear operator L might be expressed as:

L = λ1|e1⟩⟨f1| + λ2|e2⟩⟨f2| + λ3|e3⟩⟨f3| + ...,

where the { λi } are scalars, the { |ei⟩ } are a basis, and the { ⟨fi| } a reciprocal basis for the space. The relation between the basis and the reciprocal basis is described, in part, by:

⟨fi|ej⟩ = δij.

If such a formalism applies, the { λi } are eigenvalues of L and the functions { |ei⟩ } are eigenfunctions of L. The eigenvalues are in the spectrum of L. [20]

Some natural questions are: under what circumstances does this formalism work, and for what operators L are expansions in series of other operators like this possible? Can any function f be expressed in terms of the eigenfunctions (are they a Schauder basis) and under what circumstances does a point spectrum or a continuous spectrum arise? How do the formalisms for infinite-dimensional spaces and finite-dimensional spaces differ, or do they differ? Can these ideas be extended to a broader class of spaces? Answering such questions is the realm of spectral theory and requires considerable background in functional analysis and matrix algebra.

Resolution of the identity

This section continues in the rough and ready manner of the above section using the bra–ket notation, and glossing over the many important details of a rigorous treatment. [21] A rigorous mathematical treatment may be found in various references. [22] In particular, the dimension n of the space will be finite.

Using the bra–ket notation of the above section, the identity operator may be written as:

I = Σi |ei⟩⟨fi|,

where it is supposed as above that { |ei⟩ } are a basis and the { ⟨fi| } a reciprocal basis for the space satisfying the relation:

⟨fi|ej⟩ = δij.

This expression of the identity operation is called a representation or a resolution of the identity. [21] [22] This formal representation satisfies the basic property of the identity:

Ik = I,

valid for every positive integer k.

Applying the resolution of the identity to any function ψ in the space, one obtains:

|ψ⟩ = Σi |ei⟩⟨fi|ψ⟩ = Σi ci|ei⟩,

which is the generalized Fourier expansion of ψ in terms of the basis functions { ei }. [23] Here ci = ⟨fi|ψ⟩.
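
In the finite-dimensional case with an orthonormal basis, the reciprocal basis coincides with the basis itself, and the expansion above becomes an ordinary coordinate expansion. A minimal numerical sketch (the matrix and vector are chosen only for illustration):

```python
import numpy as np

# Eigenvectors of a symmetric matrix form an orthonormal basis, so here
# the reciprocal basis {f_i} equals the basis {e_i}.
L = np.array([[2.0, 1.0],
              [1.0, 2.0]])
_, E = np.linalg.eigh(L)      # columns of E are the basis vectors e_i

psi = np.array([1.0, -2.0])

# Generalized Fourier coefficients c_i = <f_i|psi> = <e_i|psi>.
c = E.T @ psi

# Resolution of the identity: psi = sum_i c_i e_i recovers the function.
psi_reconstructed = E @ c
```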

Given some operator equation of the form:

O|ψ⟩ = |h⟩,

with h in the space, this equation can be solved in the above basis through the formal manipulations:

O|ψ⟩ = Σi ci O|ei⟩,    ⟨fj|O|ψ⟩ = Σi ci ⟨fj|O|ei⟩ = ⟨fj|h⟩,

which converts the operator equation to a matrix equation determining the unknown coefficients cj in terms of the generalized Fourier coefficients ⟨fj|h⟩ of h and the matrix elements Oji = ⟨fj|O|ei⟩ of the operator O.
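
The reduction to a matrix equation can be sketched in the same finite-dimensional setting; the operator O, the basis-generating matrix A, and the right-hand side h below are all illustrative assumptions:

```python
import numpy as np

# Orthonormal basis {e_i}: eigenvectors of an (illustrative) symmetric matrix A,
# so the reciprocal basis again coincides with the basis.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
_, E = np.linalg.eigh(A)          # columns are the e_i

O = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # the operator in O|psi> = |h>
h = np.array([1.0, 2.0])

# Matrix elements O_ji = <e_j|O|e_i> and Fourier coefficients h_j = <e_j|h>.
O_mat = E.T @ O @ E
h_coeff = E.T @ h

# Solve sum_i O_ji c_i = h_j for the unknown coefficients ...
c = np.linalg.solve(O_mat, h_coeff)

# ... and reassemble psi = sum_i c_i e_i, which solves the operator equation.
psi = E @ c
```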

The role of spectral theory arises in establishing the nature and existence of the basis and the reciprocal basis. In particular, the basis might consist of the eigenfunctions of some linear operator L:

L|ei⟩ = λi|ei⟩,

with the { λi } the eigenvalues of L from the spectrum of L. Then the resolution of the identity above provides the dyad expansion of L:

L = LI = Σi L|ei⟩⟨fi| = Σi λi|ei⟩⟨fi|.

Resolvent operator

Using spectral theory, the resolvent operator R:

R = (λI − L)−1,

can be evaluated in terms of the eigenfunctions and eigenvalues of L, and the Green's function corresponding to L can be found.
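
For a symmetric matrix (so that eigenfunctions and reciprocal basis coincide), the eigenfunction expansion of the resolvent can be checked directly against matrix inversion; the matrix and the value of λ below are illustrative:

```python
import numpy as np

# Symmetric stand-in for L, with eigenpairs (lambda_i, e_i).
L = np.array([[4.0, 1.0],
              [1.0, 2.0]])
lam_i, E = np.linalg.eigh(L)

lam = 7.0                                      # a point away from the spectrum
R_direct = np.linalg.inv(lam * np.eye(2) - L)  # R = (lam*I - L)^(-1)

# Eigenfunction expansion R = sum_i |e_i><e_i| / (lam - lambda_i).
R_spectral = sum(np.outer(E[:, i], E[:, i]) / (lam - lam_i[i]) for i in range(2))
```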

Applying R to some arbitrary function φ in the space,

R|φ⟩ = Σi |ei⟩⟨fi|φ⟩ / (λ − λi).

This function has poles in the complex λ-plane at each eigenvalue of L. Thus, using the calculus of residues:

(1/2πi) ∮C R|φ⟩ dλ = Σi |ei⟩⟨fi|φ⟩ = |φ⟩,

where the line integral is over a contour C, traversed counterclockwise, that encloses all the eigenvalues of L.

Suppose our functions are defined over some coordinates {xj}, that is:

ψ(x) = ⟨x|ψ⟩,    x = (x1, x2, x3, ...).

Introducing the notation

⟨x|y⟩ = δ(x − y),

where δ(x − y) = δ(x1 − y1, x2 − y2, x3 − y3, ...) is the Dirac delta function, [24] we can write

ψ(x) = ∫ ⟨x|y⟩⟨y|ψ⟩ dy.

Then:

⟨x|R|φ⟩ = Σi ei(x)⟨fi|φ⟩ / (λ − λi) = ∫ [ Σi ei(x) fi*(y) / (λ − λi) ] φ(y) dy.

The function G(x, y; λ) defined by:

G(x, y; λ) = Σi ei(x) fi*(y) / (λ − λi),

is called the Green's function for operator L, and satisfies: [25]

(λI − L)x G(x, y; λ) = δ(x − y),

where the subscript x indicates that the operator acts on the x variable.

Operator equations

Consider the operator equation:

(O − λI)|ψ⟩ = |h⟩;

in terms of coordinates:

∫ ⟨x|(O − λI)|y⟩ ψ(y) dy = h(x).

A particular case is λ = 0.

The Green's function is now the analogue, for O, of that of the previous section:

G(x, y; λ) = Σi ei(x) fi*(y) / (λi − λ),

where O|ei⟩ = λi|ei⟩, and it satisfies:

(O − λI)x G(x, z; λ) = δ(x − z).

Using this Green's function property:

(O − λI)x G(x, z; λ) = δ(x − z).

Then, multiplying both sides of this equation by h(z) and integrating:

∫ (O − λI)x G(x, z; λ) h(z) dz = (O − λI)x ∫ G(x, z; λ) h(z) dz = h(x),

which suggests the solution is:

ψ(x) = ∫ G(x, z; λ) h(z) dz.

That is, the function ψ(x) satisfying the operator equation is found if we can find the spectrum of O, and construct G, for example by using:

G(x, z; λ) = Σi ei(x) fi*(z) / (λi − λ).

There are many other ways to find G, of course. [26] See the articles on Green's functions and on Fredholm integral equations. It must be kept in mind that the above mathematics is purely formal; a rigorous treatment involves sophisticated mathematics, including a good background knowledge of functional analysis, Hilbert spaces, distributions and so forth. Consult these articles and the references for more detail.
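
A discrete sketch of the whole procedure, with a symmetric matrix standing in for the operator O and the integral against G replaced by a matrix–vector product (all values illustrative):

```python
import numpy as np

# Symmetric stand-in for the operator O, with eigenpairs (lambda_i, e_i).
O = np.array([[3.0, 1.0],
              [1.0, 4.0]])
lam_i, E = np.linalg.eigh(O)

lam = 1.0                         # lambda not in the spectrum of O
h = np.array([2.0, -1.0])

# Discrete Green's "function": G = sum_i e_i e_i^T / (lambda_i - lam).
G = sum(np.outer(E[:, i], E[:, i]) / (lam_i[i] - lam) for i in range(2))

# psi(x) = integral of G(x, z; lam) h(z) dz becomes psi = G h, ...
psi = G @ h

# ... and psi indeed solves (O - lam*I) psi = h.
residual = (O - lam * np.eye(2)) @ psi - h
```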

Spectral theorem and Rayleigh quotient

Optimization problems provide some of the most useful examples of the combinatorial significance of the eigenvalues and eigenvectors of symmetric matrices, especially for the Rayleigh quotient with respect to a matrix M, defined for non-zero x by

R(x) = ⟨x, Mx⟩ / ⟨x, x⟩.

Theorem. Let M be a symmetric matrix and let x be a non-zero vector that maximizes the Rayleigh quotient with respect to M. Then x is an eigenvector of M with eigenvalue equal to the Rayleigh quotient. Moreover, this eigenvalue is the largest eigenvalue of M.

Proof. Assume the spectral theorem. Let the eigenvalues of M be λ1 ≥ λ2 ≥ ... ≥ λn, with corresponding orthonormal eigenvectors v1, ..., vn. Since the vi form an orthonormal basis, any vector x can be expressed in this basis as

x = Σi ⟨vi, x⟩ vi.

The way to prove this formula is straightforward: take the inner product of both sides with vj; by orthonormality all terms vanish except the j-th, which equals ⟨vj, x⟩. Next, evaluate the Rayleigh quotient with respect to x:

R(x) = ⟨x, Mx⟩ / ⟨x, x⟩ = Σi λi ⟨vi, x⟩² / Σi ⟨vi, x⟩² ≤ λ1 · Σi ⟨vi, x⟩² / Σi ⟨vi, x⟩² = λ1,

where we used Parseval's identity in the second equality. Finally we obtain that

R(x) ≤ λ1,

so the Rayleigh quotient is always at most λ1, with equality precisely when x is an eigenvector for λ1. This proves the theorem. [27]
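
The theorem is easy to verify numerically for a small symmetric matrix (chosen here purely for illustration):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [1.0, 4.0]])
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
lam_max = eigvals[-1]                  # largest eigenvalue (= 5 here)
v_max = eigvecs[:, -1]                 # corresponding eigenvector

def rayleigh(x):
    # Rayleigh quotient <x, Mx> / <x, x> for non-zero x.
    return (x @ M @ x) / (x @ x)

# The top eigenvector attains lam_max; random vectors never exceed it.
rng = np.random.default_rng(0)
samples = [rayleigh(rng.standard_normal(2)) for _ in range(100)]
```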

Notes

  1. Jean Alexandre Dieudonné (1981). History of functional analysis. Elsevier. ISBN   0-444-86148-3.
  2. William Arveson (2002). "Chapter 1: spectral theory and Banach algebras". A short course on spectral theory. Springer. ISBN   0-387-95300-0.
  3. Viktor Antonovich Sadovnichiĭ (1991). "Chapter 4: The geometry of Hilbert space: the spectral theory of operators". Theory of Operators. Springer. p. 181 et seq. ISBN   0-306-11028-8.
  4. Steen, Lynn Arthur. "Highlights in the History of Spectral Theory" (PDF). St. Olaf College. St. Olaf College. Archived from the original (PDF) on 4 March 2016. Retrieved 14 December 2015.
  5. John von Neumann (1996). The mathematical foundations of quantum mechanics; Volume 2 in Princeton Landmarks in Mathematics series (Reprint of translation of original 1932 ed.). Princeton University Press. ISBN   0-691-02893-1.
  6. E. Brian Davies, quoted on the King's College London analysis group website "Research at the analysis group".
  7. Nicholas Young (1988). An introduction to Hilbert space. Cambridge University Press. p. 3. ISBN   0-521-33717-8.
  8. Jean-Luc Dorier (2000). On the teaching of linear algebra; Vol. 23 of Mathematics education library. Springer. ISBN   0-7923-6539-9.
  9. Cf. Spectra in mathematics and in physics Archived 2011-07-27 at the Wayback Machine by Jean Mawhin, p.4 and pp. 10-11.
  10. Edgar Raymond Lorch (2003). Spectral Theory (Reprint of Oxford 1962 ed.). Textbook Publishers. p. 89. ISBN   0-7581-7156-0.
  11. Nicholas Young (1988). op. cit. p. 81. ISBN 0-521-33717-8.
  12. Helmut H. Schaefer; Manfred P. H. Wolff (1999). Topological vector spaces (2nd ed.). Springer. p. 36. ISBN   0-387-98726-6.
  13. Dmitriĭ Petrovich Zhelobenko (2006). Principal structures and methods of representation theory. American Mathematical Society. ISBN   0821837311.
  14. Edgar Raymond Lorch (2003). "Chapter III: Hilbert Space". Spectral Theory. p. 57. ISBN 0-7581-7156-0.
  15. Edgar Raymond Lorch (2003). "Chapter V: The Structure of Self-Adjoint Transformations". Spectral Theory. p. 106 ff. ISBN   0-7581-7156-0.
  16. Bernard Friedman (1990). Principles and Techniques of Applied Mathematics (Reprint of 1956 Wiley ed.). Dover Publications. p. 26. ISBN   0-486-66444-9.
  17. PAM Dirac (1981). The principles of quantum mechanics (4th ed.). Oxford University Press. p. 29 ff. ISBN   0-19-852011-5.
  18. Jürgen Audretsch (2007). "Chapter 1.1.2: Linear operators on the Hilbert space". Entangled systems: new directions in quantum physics. Wiley-VCH. p. 5. ISBN   978-3-527-40684-5.
  19. R. A. Howland (2006). Intermediate dynamics: a linear algebraic approach (2nd ed.). Birkhäuser. p. 69 ff. ISBN   0-387-28059-6.
  20. Bernard Friedman (1990). "Chapter 2: Spectral theory of operators". op. cit. p. 57. ISBN   0-486-66444-9.
  21. See discussion in Dirac's book referred to above, and Milan Vujičić (2008). Linear algebra thoroughly explained. Springer. p. 274. ISBN 978-3-540-74637-9.
  22. See, for example, the fundamental text of John von Neumann (1955). op. cit. ISBN 0-691-02893-1; Arch W. Naylor, George R. Sell (2000). Linear Operator Theory in Engineering and Science; Vol. 40 of Applied mathematical science. Springer. p. 401. ISBN 0-387-95001-X; Steven Roman (2008). Advanced linear algebra (3rd ed.). Springer. ISBN 978-0-387-72828-5; I︠U︡riĭ Makarovich Berezanskiĭ (1968). Expansions in eigenfunctions of selfadjoint operators; Vol. 17 in Translations of mathematical monographs. American Mathematical Society. ISBN 0-8218-1567-9.
  23. See for example, Gerald B Folland (2009). "Convergence and completeness". Fourier Analysis and its Applications (Reprint of Wadsworth & Brooks/Cole 1992 ed.). American Mathematical Society. pp. 77 ff. ISBN   978-0-8218-4790-9.
  24. PAM Dirac (1981). op. cit. p. 60 ff. ISBN   0-19-852011-5.
  25. Bernard Friedman (1956). op. cit. p. 214, Eq. 2.14. ISBN   0-486-66444-9.
  26. For example, see Sadri Hassani (1999). "Chapter 20: Green's functions in one dimension". Mathematical physics: a modern introduction to its foundations. Springer. p. 553 et seq. ISBN   0-387-98579-4. and Qing-Hua Qin (2007). Green's function and boundary elements of multifield materials. Elsevier. ISBN   978-0-08-045134-3.
  27. Spielman, Daniel A. (2012). "Lecture Notes on Spectral Graph Theory". Yale University. http://cs.yale.edu/homes/spielman/561/
