Functional determinant


In functional analysis, a branch of mathematics, it is sometimes possible to generalize the notion of the determinant of a square matrix of finite order (representing a linear transformation from a finite-dimensional vector space to itself) to the infinite-dimensional case of a linear operator S mapping a function space V to itself. The corresponding quantity det(S) is called the functional determinant of S.


There are several formulas for the functional determinant. They are all based on the fact that the determinant of a finite square matrix is equal to the product of its eigenvalues. A mathematically rigorous definition is via the zeta function of the operator,

$$\zeta_S(s) = \operatorname{tr} S^{-s},$$

where tr stands for the functional trace: the determinant is then defined by

$$\det S = e^{-\zeta_S'(0)},$$
where the zeta function at the point s = 0 is defined by analytic continuation. Another possible generalization, often used by physicists working with the Feynman path integral formalism in quantum field theory (QFT), uses a functional integration:

$$\det S \propto \left(\int_V e^{-\langle\phi, S\phi\rangle}\,\mathcal{D}\phi\right)^{-2}.$$
This path integral is only well defined up to some divergent multiplicative constant. To give it a rigorous meaning it must be divided by another functional determinant, thus effectively cancelling the problematic 'constants'.

These are now, ostensibly, two different definitions for the functional determinant, one coming from quantum field theory and one coming from spectral theory. Each involves some kind of regularization: in the definition popular in physics, only the ratio of two determinants is meaningful; in mathematics, the zeta function is used. Osgood, Phillips & Sarnak (1988) have shown that the results obtained by comparing two functional determinants in the QFT formalism agree with the results obtained from the zeta-function determinant.

Defining formulae

Path integral version

For a positive self-adjoint operator S on a finite-dimensional Euclidean space V, the formula

$$\int_V e^{-\pi\langle x, Sx\rangle}\,dx = (\det S)^{-1/2}$$

holds.
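
As a quick numerical sanity check of this finite-dimensional identity (an illustration, not part of the original text), one can compare a quadrature evaluation of the Gaussian integral against det(S)^(-1/2) for an arbitrarily chosen 2×2 positive-definite matrix. A minimal Python sketch:

```python
import numpy as np
from scipy import integrate

# An arbitrary symmetric positive-definite matrix S (illustrative choice).
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Left-hand side: integral over R^2 of exp(-pi * <x, Sx>), by adaptive quadrature.
lhs, _ = integrate.dblquad(
    lambda y, x: np.exp(-np.pi * np.array([x, y]) @ S @ np.array([x, y])),
    -4, 4, lambda x: -4, lambda x: 4)

# Right-hand side: det(S)^(-1/2).
rhs = np.linalg.det(S) ** -0.5

print(lhs, rhs)   # both are approximately 0.756 for this S
```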

The problem is to find a way to make sense of the determinant of an operator S on an infinite-dimensional function space. One approach, favored in quantum field theory, in which the function space consists of continuous paths on a closed interval, is to formally attempt to calculate the integral

$$\int_V e^{-\pi\langle\phi, S\phi\rangle}\,\mathcal{D}\phi,$$

where V is the function space, $\langle\cdot,\cdot\rangle$ the L2 inner product, and $\mathcal{D}\phi$ the Wiener measure. The basic assumption on S is that it should be self-adjoint and have discrete spectrum λ1, λ2, λ3, … with a corresponding set of eigenfunctions f1, f2, f3, … which are complete in L2 (as would, for example, be the case for the second derivative operator on a compact interval Ω). This roughly means all functions φ can be written as linear combinations of the functions fi:

$$\phi = \sum_i c_i f_i.$$
Hence the inner product in the exponential can be written as

$$\langle\phi, S\phi\rangle = \sum_{i,j} c_i c_j \langle f_i, S f_j\rangle = \sum_i \lambda_i c_i^2.$$
In the basis of the functions fi, the functional integration reduces to an integration over all basis coefficients. Formally, assuming our intuition from the finite-dimensional case carries over into the infinite-dimensional setting, the measure should then be equal to

$$\mathcal{D}\phi = \prod_i \frac{dc_i}{\sqrt{2\pi}}.$$
This makes the functional integral a product of Gaussian integrals:

$$\int_V e^{-\pi\langle\phi, S\phi\rangle}\,\mathcal{D}\phi = \prod_i \int_{-\infty}^{\infty} \frac{dc_i}{\sqrt{2\pi}}\, e^{-\pi\lambda_i c_i^2}.$$
The integrals can then be evaluated, giving

$$\int_V e^{-\pi\langle\phi, S\phi\rangle}\,\mathcal{D}\phi = \frac{N}{\sqrt{\prod_i \lambda_i}},$$
where N is an infinite constant that needs to be dealt with by some regularization procedure. The product of all eigenvalues is equal to the determinant for finite-dimensional spaces, and we formally define this to be the case in our infinite-dimensional case also. This results in the formula

$$\int_V e^{-\pi\langle\phi, S\phi\rangle}\,\mathcal{D}\phi \propto \frac{1}{\sqrt{\det S}}.$$
If all quantities converge in an appropriate sense, then the functional determinant can be described as a classical limit (Watson and Whittaker). Otherwise, it is necessary to perform some kind of regularization. The most popular regularization for computing functional determinants is zeta function regularization.[1] For instance, this allows for the computation of the determinant of the Laplace and Dirac operators on a Riemannian manifold, using the Minakshisundaram–Pleijel zeta function. Otherwise, it is also possible to consider the quotient of two determinants, making the divergent constants cancel.

Zeta function version

Let S be an elliptic differential operator with smooth coefficients which is positive on functions of compact support. That is, there exists a constant c > 0 such that

$$\langle\phi, S\phi\rangle \geq c\,\langle\phi, \phi\rangle$$
for all compactly supported smooth functions φ. Then S has a self-adjoint extension to an operator on L2 with lower bound c. The eigenvalues of S can be arranged in a sequence

$$0 < \lambda_1 \leq \lambda_2 \leq \cdots, \qquad \lambda_n \to \infty.$$
Then the zeta function of S is defined by the series:[2]

$$\zeta_S(s) = \sum_{n=1}^{\infty} \frac{1}{\lambda_n^s}.$$
It is known that ζS has a meromorphic extension to the entire plane.[3] Moreover, although one can define the zeta function in more general situations, the zeta function of an elliptic differential operator (or pseudodifferential operator) is regular at s = 0.

Formally, differentiating this series term-by-term gives

$$\zeta_S'(s) = \sum_{n=1}^{\infty} \frac{-\ln\lambda_n}{\lambda_n^s},$$
and so if the functional determinant is well-defined, then it should be given by

$$\det S = e^{-\zeta_S'(0)}.$$
Since the analytic continuation of the zeta function is regular at zero, this can be rigorously adopted as a definition of the determinant.
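
As a concrete illustration (not part of the original text), take S = −d²/dx² on the interval (0, π) with Dirichlet boundary conditions, whose eigenvalues are λn = n². Then ζS(s) = ζ(2s), so ζS′(0) = 2ζ′(0) = −ln(2π) and the zeta-regularized determinant is det S = 2π. A minimal Python sketch with mpmath checks this numerically:

```python
from mpmath import mp, zeta, exp, pi

mp.dps = 30

# Eigenvalues of S = -d^2/dx^2 on (0, pi) with Dirichlet boundary conditions are n^2,
# so zeta_S(s) = sum_n n^(-2s) = zeta(2s) and zeta_S'(s) = 2 * zeta'(2s).
zeta_S_prime_at_0 = 2 * zeta(0, derivative=1)   # 2 * zeta'(0) = -log(2*pi)

det_S = exp(-zeta_S_prime_at_0)
print(det_S)        # 6.2831853... = 2*pi
print(2 * pi)
```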

This kind of zeta-regularized functional determinant also appears when evaluating sums of the form $\sum_{n=0}^{\infty} \frac{1}{n+a}$: integration over a gives $\sum_{n=0}^{\infty} \ln(n+a)$, which can just be considered as the logarithm of the determinant for a harmonic oscillator. This last value is just equal to $-\partial_s \zeta_H(s,a)\big|_{s=0}$, where $\zeta_H(s,a)$ is the Hurwitz zeta function.
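
For a quick numerical handle on this quantity (an illustration, not from the article), mpmath can evaluate the regularized sum and compare it with Lerch's classical formula ζH′(0, a) = ln Γ(a) − ½ ln(2π):

```python
from mpmath import mp, zeta, loggamma, log, pi

mp.dps = 25
a = mp.mpf('0.3')                 # arbitrary positive parameter, chosen for illustration

# Zeta-regularized value of sum_{n >= 0} log(n + a):
reg_sum = -zeta(0, a, 1)          # -d/ds zeta_H(s, a) evaluated at s = 0

# Lerch's formula gives the same value in closed form:
lerch = 0.5 * log(2 * pi) - loggamma(a)

print(reg_sum, lerch)             # the two numbers agree
```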


Practical example

[Figure: the infinite potential well with A = 0.]

The infinite potential well

We will compute the determinant of the following operator, which describes the motion of a quantum mechanical particle in an infinite potential well:

$$S = -\frac{d^2}{dx^2} + A \qquad (0 < x < L),$$
where A is the depth of the potential and L is the length of the well. We will compute this determinant by diagonalizing the operator and multiplying the eigenvalues. So as not to have to bother with the uninteresting divergent constant, we will compute the quotient between the determinants of the operator with depth A and the operator with depth A = 0. The eigenvalues of this potential are equal to

$$\lambda_n = A + \frac{n^2\pi^2}{L^2}, \qquad n = 1, 2, 3, \ldots$$
This means that

$$\frac{\det\left(-\frac{d^2}{dx^2}+A\right)}{\det\left(-\frac{d^2}{dx^2}\right)} = \prod_{n=1}^{\infty} \frac{A + \frac{n^2\pi^2}{L^2}}{\frac{n^2\pi^2}{L^2}} = \prod_{n=1}^{\infty} \left(1 + \frac{A L^2}{n^2\pi^2}\right).$$
Now we can use Euler's infinite product representation for the sine function:

$$\sin z = z \prod_{n=1}^{\infty} \left(1 - \frac{z^2}{n^2\pi^2}\right),$$
from which a similar formula for the hyperbolic sine function can be derived:

$$\sinh z = z \prod_{n=1}^{\infty} \left(1 + \frac{z^2}{n^2\pi^2}\right).$$
Applying this, we find that

$$\frac{\det\left(-\frac{d^2}{dx^2}+A\right)}{\det\left(-\frac{d^2}{dx^2}\right)} = \prod_{n=1}^{\infty} \left(1 + \frac{A L^2}{n^2\pi^2}\right) = \frac{\sinh\left(\sqrt{A}\,L\right)}{\sqrt{A}\,L}.$$
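
The infinite product converges quickly, so the result is easy to check numerically. The following Python sketch (an illustration with arbitrarily chosen values of A and L, not part of the original text) compares a truncated product of eigenvalue ratios with the closed form:

```python
import numpy as np

A, L = 2.0, 3.0           # illustrative depth and well length
n = np.arange(1, 20001)   # truncate the infinite product after 20000 modes

ratio_truncated = np.prod(1.0 + A * L**2 / (n**2 * np.pi**2))
ratio_closed_form = np.sinh(np.sqrt(A) * L) / (np.sqrt(A) * L)

print(ratio_truncated, ratio_closed_form)   # agree to roughly four significant digits
```
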
Another way for computing the functional determinant

For one-dimensional potentials, a short-cut yielding the functional determinant exists.[4] It is based on consideration of the following expression:

$$\frac{\det\left(-\partial_x^2 + V_1(x) - m\right)}{\det\left(-\partial_x^2 + V_2(x) - m\right)},$$
where m is a complex constant. This expression is a meromorphic function of m, having zeros when m equals an eigenvalue of the operator with potential V1(x) and poles when m is an eigenvalue of the operator with potential V2(x). We now consider the functions $\psi_1^m$ and $\psi_2^m$ with

$$\left(-\partial_x^2 + V_i(x) - m\right)\psi_i^m(x) = 0, \qquad i = 1, 2,$$
obeying the boundary conditions

$$\psi_i^m(0) = 0, \qquad \frac{d\psi_i^m}{dx}(0) = 1.$$
If we construct the function

$$\frac{\psi_1^m(L)}{\psi_2^m(L)},$$
which is also a meromorphic function of m, we see that it has exactly the same poles and zeros as the quotient of determinants we are trying to compute: if m is an eigenvalue of the first operator, then $\psi_1^m(x)$ will be an eigenfunction thereof, meaning $\psi_1^m(L) = 0$; and analogously for the denominator. By Liouville's theorem, two meromorphic functions with the same zeros and poles must be proportional to one another. In our case, the proportionality constant turns out to be one, and we get

$$\frac{\det\left(-\partial_x^2 + V_1(x) - m\right)}{\det\left(-\partial_x^2 + V_2(x) - m\right)} = \frac{\psi_1^m(L)}{\psi_2^m(L)}$$
for all values of m. For m = 0 we get

$$\frac{\det\left(-\partial_x^2 + V_1(x)\right)}{\det\left(-\partial_x^2 + V_2(x)\right)} = \frac{\psi_1^0(L)}{\psi_2^0(L)}.$$
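
This shortcut is straightforward to test numerically. The sketch below (an illustration, not from the original text) applies it to the constant potentials V1(x) = A and V2(x) = 0 of the previous example: it integrates the two initial value problems with SciPy and compares ψ1(L)/ψ2(L) with the closed form obtained above.

```python
import numpy as np
from scipy.integrate import solve_ivp

A, L = 2.0, 3.0   # illustrative depth and well length

def shoot(V):
    """Solve -psi'' + V(x) psi = 0 with psi(0) = 0, psi'(0) = 1 and return psi(L)."""
    rhs = lambda x, y: [y[1], V(x) * y[0]]        # y = (psi, psi')
    sol = solve_ivp(rhs, (0.0, L), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

ratio_shooting = shoot(lambda x: A) / shoot(lambda x: 0.0)
ratio_closed_form = np.sinh(np.sqrt(A) * L) / (np.sqrt(A) * L)

print(ratio_shooting, ratio_closed_form)   # both give sinh(sqrt(A) L) / (sqrt(A) L)
```
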
The infinite potential well revisited

The problem in the previous section can be solved more easily with this formalism. The functions $\psi_i^0(x)$ obey

$$\left(-\partial_x^2 + V_i(x)\right)\psi_i^0(x) = 0 \qquad \text{with } V_1(x) = A,\ V_2(x) = 0,$$
yielding the following solutions:

$$\psi_1^0(x) = \frac{1}{\sqrt{A}} \sinh\left(\sqrt{A}\,x\right), \qquad \psi_2^0(x) = x.$$
This gives the final expression

$$\frac{\det\left(-\partial_x^2 + A\right)}{\det\left(-\partial_x^2\right)} = \frac{\sinh\left(\sqrt{A}\,L\right)}{\sqrt{A}\,L}.$$

Notes

  1. (Branson 1993); (Osgood, Phillips & Sarnak 1988)
  2. See Osgood, Phillips & Sarnak (1988). For a more general definition in terms of the spectral function, see Hörmander (1968) or Shubin (1987).
  3. For the case of the generalized Laplacian, as well as regularity at zero, see Berline, Getzler & Vergne (2004, Proposition 9.35). For the general case of an elliptic pseudodifferential operator, see Seeley (1967).
  4. S. Coleman, The uses of instantons, Int. School of Subnuclear Physics (Erice, 1977).

