Eigenfunction

Figure: A solution of the vibrating drum problem is, at any point in time, an eigenfunction of the Laplace operator on a disk.

In mathematics, an eigenfunction of a linear operator D defined on some function space is any non-zero function f in that space that, when acted upon by D, is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as

Df = λf

for some scalar eigenvalue λ. [1] [2] [3] The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions.

An eigenfunction is a type of eigenvector.

Eigenfunctions

In general, an eigenvector of a linear operator D defined on some vector space is a nonzero vector in the domain of D that, when D acts upon it, is simply scaled by some scalar value called an eigenvalue. In the special case where D is defined on a function space, the eigenvectors are referred to as eigenfunctions. That is, a function f is an eigenfunction of D if it satisfies the equation

Df = λf        (1)

where λ is a scalar. [1] [2] [3] The solutions to Equation (1) may also be subject to boundary conditions. Because of the boundary conditions, the possible values of λ are generally limited, for example to a discrete set λ1, λ2, … or to a continuous set over some range. The set of all possible eigenvalues of D is sometimes called its spectrum, which may be discrete, continuous, or a combination of both. [1]

Each value of λ corresponds to one or more eigenfunctions. If multiple linearly independent eigenfunctions have the same eigenvalue, the eigenvalue is said to be degenerate and the maximum number of linearly independent eigenfunctions associated with the same eigenvalue is the eigenvalue's degree of degeneracy or geometric multiplicity. [4] [5]
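
For instance, the operator d²/dt² has, for each ω ≠ 0, the two linearly independent eigenfunctions sin(ωt) and cos(ωt) sharing the eigenvalue −ω², so that eigenvalue is doubly degenerate. The short SymPy sketch below (added here for illustration; the symbol names are arbitrary) checks this:

    # Illustrative check: sin(w t) and cos(w t) are two linearly independent
    # eigenfunctions of D = d^2/dt^2 sharing the eigenvalue -w^2, so that
    # eigenvalue has a degree of degeneracy of at least 2.
    import sympy as sp

    t, w = sp.symbols("t omega", real=True, positive=True)

    for f in (sp.sin(w * t), sp.cos(w * t)):
        Df = sp.diff(f, t, 2)            # apply the operator D = d^2/dt^2
        ratio = sp.simplify(Df / f)      # the scaling factor if f is an eigenfunction
        print(f, "->", ratio)            # prints -omega**2 for both functions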

Derivative example

A widely used class of linear operators acting on infinite-dimensional spaces are differential operators on the space C∞ of infinitely differentiable real or complex functions of a real or complex argument t. For example, consider the derivative operator d/dt with eigenvalue equation

d/dt f(t) = λ f(t).

This differential equation can be solved by multiplying both sides by dt/f(t) and integrating. Its solution, the exponential function

f(t) = f0 e^(λt),

is the eigenfunction of the derivative operator, where f0 is a parameter that depends on the boundary conditions. Note that in this case the eigenfunction is itself a function of its associated eigenvalue λ, which can take any real or complex value. In particular, note that for λ = 0 the eigenfunction f(t) is a constant.

Suppose in the example that f(t) is subject to the boundary conditions f(0) = 1 and df/dt|t=0 = 2. We then find that

f(t) = e^(2t),

where λ = 2 is the only eigenvalue of the differential equation that also satisfies the boundary condition.
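
A brief SymPy sketch of this example (assuming the boundary conditions as reconstructed above, f(0) = 1 and df/dt equal to 2 at t = 0), confirming that f(t) = e^(2t) satisfies the eigenvalue equation and both conditions:

    # Sketch: f(t) = exp(2 t) is the eigenfunction of d/dt singled out by the
    # boundary conditions f(0) = 1 and f'(0) = 2, so lambda = 2.
    import sympy as sp

    t = sp.symbols("t", real=True)
    lam = sp.Integer(2)
    f = sp.exp(lam * t)

    assert sp.simplify(sp.diff(f, t) - lam * f) == 0   # eigenvalue equation df/dt = lambda*f
    assert f.subs(t, 0) == 1                           # boundary condition f(0) = 1
    assert sp.diff(f, t).subs(t, 0) == lam             # boundary condition f'(0) = 2
    print("f(t) = exp(2 t) satisfies the eigenvalue equation and both boundary conditions")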

Eigenfunctions can be expressed as column vectors and linear operators can be expressed as matrices, although they may have infinite dimensions. As a result, many of the concepts related to eigenvectors of matrices carry over to the study of eigenfunctions.

Define the inner product in the function space on which D is defined as

⟨f(t), g(t)⟩ = ∫Ω f*(t) g(t) dt,

integrated over some range of interest for t called Ω. The * denotes the complex conjugate.

Suppose the function space has an orthonormal basis given by the set of functions {u1(t), u2(t), …, un(t)}, where n may be infinite. For the orthonormal basis,

⟨ui(t), uj(t)⟩ = ∫Ω ui*(t) uj(t) dt = δij,

where δij is the Kronecker delta and can be thought of as the elements of the identity matrix.

Functions can be written as a linear combination of the basis functions,

f(t) = Σj bj uj(t),

for example through a Fourier expansion of f(t). The coefficients bj can be stacked into an n by 1 column vector b = [b1 b2 … bn]^T. In some special cases, such as the coefficients of the Fourier series of a sinusoidal function, this column vector has finite dimension.

Additionally, define a matrix representation of the linear operator D with elements

Aij = ⟨ui(t), D uj(t)⟩ = ∫Ω ui*(t) D uj(t) dt.

We can write the function Df(t) either as a linear combination of the basis functions or as D acting upon the expansion of f(t),

Df(t) = Σj cj uj(t) = Σj bj D uj(t).

Taking the inner product of each side of this equation with an arbitrary basis function ui(t),

Σj cj ⟨ui(t), uj(t)⟩ = ci = Σj bj ⟨ui(t), D uj(t)⟩ = Σj Aij bj.

This is the matrix multiplication Ab = c written in summation notation and is a matrix equivalent of the operator D acting upon the function f(t) expressed in the orthonormal basis. If f(t) is an eigenfunction of D with eigenvalue λ, then Ab = λb.
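
As an illustration of this correspondence, the sketch below (not part of the original article) computes the matrix A for the derivative operator D = d/dt in a truncated orthonormal Fourier basis uk(t) = e^(ikt)/√(2π) on Ω = [0, 2π], an assumed choice of basis and interval. In this basis A is diagonal with entries ik, and the coordinate vector b of an eigenfunction um(t) satisfies Ab = im·b:

    # Sketch: matrix representation A of D = d/dt in a truncated orthonormal
    # Fourier basis u_k(t) = exp(i k t) / sqrt(2 pi) on Omega = [0, 2 pi].
    import numpy as np

    K = 3                                   # keep basis functions k = -K, ..., K
    ks = np.arange(-K, K + 1)
    N = 4096
    t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
    dt = t[1] - t[0]

    U = np.exp(1j * np.outer(ks, t)) / np.sqrt(2 * np.pi)   # rows are the basis functions u_k(t)
    DU = 1j * ks[:, None] * U                                # exact derivatives D u_k(t)

    # A_ij = <u_i, D u_j> = integral over Omega of u_i*(t) D u_j(t) dt
    A = (np.conj(U) @ DU.T) * dt
    print(np.allclose(A, np.diag(1j * ks), atol=1e-10))      # A is diagonal with entries i k

    # The eigenfunction f(t) = u_m(t) has coordinate vector b = e_m, and A b = (i m) b.
    m = 2
    b = (ks == m).astype(complex)
    print(np.allclose(A @ b, 1j * m * b, atol=1e-10))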

Eigenvalues and eigenfunctions of Hermitian operators

Many of the operators encountered in physics are Hermitian. Suppose the linear operator D acts on a function space that is a Hilbert space with an orthonormal basis given by the set of functions {u1(t), u2(t), …, un(t)}, where n may be infinite. In this basis, the operator D has a matrix representation A with elements

Aij = ⟨ui(t), D uj(t)⟩ = ∫Ω ui*(t) D uj(t) dt,

integrated over some range of interest for t denoted Ω.

By analogy with Hermitian matrices, D is a Hermitian operator if Aij = Aji*, or: [6]

⟨ui(t), D uj(t)⟩ = ⟨uj(t), D ui(t)⟩*, that is, ∫Ω ui*(t) D uj(t) dt = ∫Ω uj(t) [D ui(t)]* dt.

Consider the Hermitian operator D with eigenvalues λ1, λ2, … and corresponding eigenfunctions f1(t), f2(t), …. This Hermitian operator has the following properties:

    1. Its eigenvalues are real, λi = λi*.
    2. Its eigenfunctions obey an orthogonality condition, ⟨fi(t), fj(t)⟩ = 0, if i ≠ j.

The second condition always holds for λi ≠ λj. For degenerate eigenfunctions with the same eigenvalue λi, orthogonal eigenfunctions can always be chosen that span the eigenspace associated with λi, for example by using the Gram–Schmidt process. [5] Depending on whether the spectrum is discrete or continuous, the eigenfunctions can be normalized by setting the inner product of the eigenfunctions equal to either a Kronecker delta or a Dirac delta function, respectively. [8] [9]

For many Hermitian operators, notably Sturm–Liouville operators, a third property is

    3. Its eigenfunctions form a basis of the function space on which the operator is defined.

As a consequence, in many important cases, the eigenfunctions of the Hermitian operator form an orthonormal basis. In these cases, an arbitrary function can be expressed as a linear combination of the eigenfunctions of the Hermitian operator.
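
For example, the Sturm–Liouville operator −d²/dx² on [0, π] with the assumed Dirichlet conditions f(0) = f(π) = 0 has real eigenvalues n² and eigenfunctions fn(x) = sin(nx). The sketch below (added for illustration) numerically checks their mutual orthogonality:

    # Sketch: the eigenfunctions f_n(x) = sin(n x) of -d^2/dx^2 on [0, pi] with
    # f(0) = f(pi) = 0 have eigenvalues n^2 and satisfy <f_m, f_n> = (pi/2) delta_mn.
    import numpy as np

    N = 20001
    x = np.linspace(0.0, np.pi, N)
    dx = x[1] - x[0]

    def inner(fv, gv):
        """Trapezoidal-rule approximation of <f, g> = integral_0^pi f*(x) g(x) dx."""
        w = np.conj(fv) * gv
        return (np.sum(w) - 0.5 * (w[0] + w[-1])) * dx

    for m in range(1, 4):
        for n in range(1, 4):
            val = inner(np.sin(m * x), np.sin(n * x))
            expected = np.pi / 2 if m == n else 0.0
            assert abs(val - expected) < 1e-6
    print("sin(n x) eigenfunctions are mutually orthogonal, with <f_n, f_n> = pi/2")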

Applications

Vibrating strings

Figure: The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.

Let h(x, t) denote the transverse displacement of a stressed elastic cord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. Applying the laws of mechanics to infinitesimal portions of the string, the function h satisfies the partial differential equation

∂²h/∂t² = c² ∂²h/∂x²,

which is called the (one-dimensional) wave equation. Here c is a constant speed that depends on the tension and mass of the string.

This problem is amenable to the method of separation of variables. If we assume that h(x, t) can be written as the product of the form X(x)T(t), we can form a pair of ordinary differential equations:

d²X/dx² = −(ω/c)² X,        d²T/dt² = −ω² T.

Each of these is an eigenvalue equation with eigenvalues −(ω/c)² and −ω², respectively. For any values of ω and c, the equations are satisfied by the functions

X(x) = sin(ωx/c + φ),        T(t) = sin(ωt + ψ),

where the phase angles φ and ψ are arbitrary real constants.

If we impose boundary conditions, for example that the ends of the string are fixed at x = 0 and x = L, namely X(0) = X(L) = 0, and that T(0) = 0, we constrain the eigenvalues. For these boundary conditions, sin(φ) = 0 and sin(ψ) = 0, so the phase angles φ = ψ = 0, and

sin(ωL/c) = 0.

This last boundary condition constrains ω to take a value ωn = ncπ/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form

h(x, t) = sin(nπx/L) sin(ωn t).

In the example of a string instrument, the frequency ωn is the frequency of the n-th harmonic, which is called the (n − 1)-th overtone.
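
The sketch below (added here, with illustrative values for L and c) checks numerically that Xn(x) = sin(nπx/L) satisfies the spatial eigenvalue equation d²X/dx² = −(ωn/c)²X with ωn = ncπ/L and vanishes at both clamped ends:

    # Sketch: for a string of length L clamped at both ends, X_n(x) = sin(n pi x / L)
    # satisfies d^2X/dx^2 = -(omega_n / c)^2 X with omega_n = n c pi / L, and X_n(0) = X_n(L) = 0.
    import numpy as np

    L, c = 1.0, 340.0                       # illustrative length (m) and wave speed (m/s)
    x = np.linspace(0.0, L, 2001)
    dx = x[1] - x[0]

    for n in range(1, 5):
        omega_n = n * c * np.pi / L         # admissible angular frequency of the n-th mode
        X = np.sin(n * np.pi * x / L)

        # second derivative on the interior points by central differences
        d2X = (X[2:] - 2 * X[1:-1] + X[:-2]) / dx**2

        ode_ok = np.allclose(d2X, -(omega_n / c) ** 2 * X[1:-1], atol=1e-2 * (n * np.pi / L) ** 2)
        ends_ok = abs(X[0]) < 1e-12 and abs(X[-1]) < 1e-12
        print(f"n={n}: eigenvalue equation {'ok' if ode_ok else 'FAIL'}, fixed ends {'ok' if ends_ok else 'FAIL'}")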

Schrödinger equation

In quantum mechanics, the Schrödinger equation

iħ ∂/∂t Ψ(r, t) = H Ψ(r, t)

with the Hamiltonian operator

H = −ħ²/(2m) ∇² + V(r, t)

can be solved by separation of variables if the Hamiltonian does not depend explicitly on time. [10] In that case, the wave function Ψ(r, t) = φ(r)T(t) leads to the two differential equations,

H φ(r) = E φ(r)        (2)

iħ dT(t)/dt = E T(t)        (3)

Both of these differential equations are eigenvalue equations with eigenvalue E. As shown in an earlier example, the solution of Equation (3) is the exponential

T(t) = e^(−iEt/ħ).

Equation (2) is the time-independent Schrödinger equation. The eigenfunctions φk of the Hamiltonian operator are stationary states of the quantum mechanical system, each with a corresponding energy Ek. They represent allowable energy states of the system and may be constrained by boundary conditions.
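
As a concrete illustration (a sketch using the standard particle-in-an-infinite-square-well problem, which is not worked through in this article), the time-independent Schrödinger equation can be discretized with finite differences and solved as a matrix eigenvalue problem; in natural units ħ = m = 1 the numerical energies approach the analytic values Ek = k²π²/(2L²):

    # Sketch: particle in an infinite square well of width L (hbar = m = 1).
    # Discretize H = -(1/2) d^2/dx^2 on interior grid points with phi = 0 at the walls,
    # then solve the matrix eigenvalue problem H phi = E phi.
    import numpy as np

    L = 1.0
    N = 1000                                   # number of interior grid points
    dx = L / (N + 1)

    main = np.full(N, 1.0 / dx**2)             # diagonal of the finite-difference Hamiltonian
    off = np.full(N - 1, -0.5 / dx**2)         # off-diagonals
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    E_num = np.linalg.eigvalsh(H)[:4]                          # lowest numerical energies
    E_exact = np.array([(k * np.pi / L) ** 2 / 2 for k in range(1, 5)])

    for k, (e_num, e_exact) in enumerate(zip(E_num, E_exact), start=1):
        print(f"E_{k}: numerical {e_num:.6f}, exact {e_exact:.6f}")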

The Hamiltonian operator H is an example of a Hermitian operator whose eigenfunctions form an orthonormal basis. When the Hamiltonian does not depend explicitly on time, general solutions of the Schrödinger equation are linear combinations of the stationary states multiplied by the oscillatory T(t), [11]

Ψ(r, t) = Σk ck φk(r) e^(−iEk t/ħ),

or, for a system with a continuous spectrum,

Ψ(r, t) = ∫ cE φE(r) e^(−iEt/ħ) dE.

The success of the Schrödinger equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th century physics.

Signals and systems

In the study of signals and systems, an eigenfunction of a system is a signal f(t) that, when input into the system, produces a response y(t) = λf(t), where λ is a complex scalar eigenvalue. [12]
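
For the common case of a linear time-invariant system with impulse response h(t) (an assumption made here; the statement above is more general), complex exponentials e^(jωt) are eigenfunctions: convolving one with h(t) returns the same exponential scaled by the eigenvalue λ = H(jω), the system's frequency response at ω. A minimal numerical sketch with an illustrative first-order impulse response:

    # Sketch: for an LTI system with impulse response h(t), the complex exponential
    # f(t) = exp(j w t) is an eigenfunction: (h * f)(t) = H(jw) f(t), where the
    # eigenvalue H(jw) = integral of h(tau) exp(-j w tau) dtau is the frequency response.
    import numpy as np

    w = 2.0 * np.pi * 3.0                  # angular frequency of the input (3 Hz)
    dt = 1e-4
    tau = np.arange(0.0, 0.2, dt)          # support of the causal, decaying impulse response
    h = np.exp(-20.0 * tau)                # illustrative first-order impulse response

    t = np.arange(0.0, 1.0, dt)
    f = np.exp(1j * w * t)                 # candidate eigenfunction

    y = np.convolve(f, h)[: len(t)] * dt   # system output y(t) = (h * f)(t)

    lam = np.sum(h * np.exp(-1j * w * tau)) * dt     # eigenvalue H(jw)
    steady = slice(len(tau), len(t))                 # skip the start-up transient
    print(np.allclose(y[steady], lam * f[steady]))   # True: y(t) = lambda * f(t)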
