Green's function

If one knows the solution $G(x,x')$ to a differential equation subject to a point source $\hat{L}(x)\,G(x,x') = \delta(x-x')$ and the differential operator $\hat{L}(x)$ is linear, then one can superpose them to build the solution $u(x) = \int f(x')\,G(x,x')\,dx'$ for a general source $\hat{L}(x)\,u(x) = f(x)$.

In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.


This means that if $L$ is the linear differential operator, then

- the Green's function $G$ is the solution of the equation $L\,G = \delta$, where $\delta$ is Dirac's delta function;
- the solution of the problem $L\,y = f$ is the convolution $G * f$.

Through the superposition principle, given a linear ordinary differential equation (ODE) $L\,y(x) = f(x)$, one can first solve $L\,G(x,s) = \delta(x - s)$ for each $s$; since the source can be written as a superposition of delta functions, $f(x) = \int \delta(x - s)\,f(s)\,ds$, the solution is, by linearity of $L$, the corresponding superposition of Green's functions, $y(x) = \int G(x,s)\,f(s)\,ds$.
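This superposition can be sketched numerically (an illustrative setup not taken from the text above): discretizing a hypothetical operator $L = d^2/dx^2$ with Dirichlet boundary conditions turns it into a matrix, the columns of its inverse sample the Green's function (each column is the response to a discrete delta source), and superposing the point-source responses reproduces the solution of $Lu = f$.

```python
import numpy as np

# Assumed example: discretize L = d^2/dx^2 on (0, 1) with u(0) = u(1) = 0
# as the standard tridiagonal finite-difference matrix.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
off = np.diag(np.ones(n - 1), 1)
L = (-2.0 * np.eye(n) + off + off.T) / h**2

# Column j of inv(L), divided by h, samples G(x, s_j): it is the response
# to a discrete delta source e_j / h placed at s_j.
G = np.linalg.inv(L) / h

# Superposing point-source responses, u(x_i) = sum_j G(x_i, s_j) f(s_j) h,
# solves L u = f.  Test with f(x) = sin(pi x), whose exact solution of
# u'' = f is u = -sin(pi x) / pi^2.
f = np.sin(np.pi * x)
u_superposed = (G * h) @ f
u_exact = -np.sin(np.pi * x) / np.pi**2
```

The discrete Green's matrix is symmetric, mirroring the symmetry property $G(x,s) = G(s,x)$ of self-adjoint problems.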

Green's functions are named after the British mathematician George Green, who first developed the concept in the 1820s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.

The term is also used in physics, specifically in quantum field theory, many-body theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators.

Definition and uses

A Green's function, G(x,s), of a linear differential operator $L = L(x)$ acting on distributions over a subset of the Euclidean space $\mathbb{R}^n$, at a point s, is any solution of

$$L\,G(x,s) = \delta(s - x), \qquad (1)$$

where δ is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form

$$L\,u(x) = f(x). \qquad (2)$$

If the kernel of L is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Green's functions may be categorized, by the type of boundary conditions satisfied, by a Green's function number. Also, Green's functions in general are distributions, not necessarily functions of a real variable.

Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, the Green's function of the Hamiltonian is a key concept, with important links to the concept of density of states.

The Green's function as used in physics is usually defined with the opposite sign instead. That is,

$$L\,G(x,s) = -\delta(x - s).$$

This definition does not significantly change any of the properties of Green's function due to the evenness of the Dirac delta function.

If the operator is translation invariant, that is, when $L$ has constant coefficients with respect to x, then the Green's function can be taken to be a convolution kernel, that is,

$$G(x, s) = G(x - s).$$

In this case, the Green's function is the same as the impulse response of linear time-invariant system theory.
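The impulse-response view can be checked numerically. As a minimal sketch (with an assumed operator $L = d/dt + \gamma$, whose retarded Green's function is $G(t) = \Theta(t)\,e^{-\gamma t}$), convolving $G$ with a step forcing reproduces the known solution:

```python
import numpy as np

gamma = 2.0
dt = 1e-3
t = np.arange(0.0, 5.0, dt)

# Retarded Green's function (impulse response) of L = d/dt + gamma:
G = np.exp(-gamma * t)

# Step forcing f(t) = 1 for t >= 0; the solution is u = (G * f)(t),
# approximated by a discrete convolution.
f = np.ones_like(t)
u = np.convolve(G, f)[: t.size] * dt

# Exact solution of u' + gamma u = 1 with u(0) = 0:
u_exact = (1.0 - np.exp(-gamma * t)) / gamma
```
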

Motivation

Loosely speaking, if such a function G can be found for the operator $L$, then, if we multiply the equation ( 1 ) for the Green's function by f(s), and then integrate with respect to s, we obtain

$$\int L\,G(x,s)\,f(s)\,ds = \int \delta(x - s)\,f(s)\,ds = f(x).$$

Because the operator $L = L(x)$ is linear and acts only on the variable x (and not on the variable of integration s), one may take the operator outside of the integration, yielding

$$L\left(\int G(x,s)\,f(s)\,ds\right) = f(x).$$

This means that

$$u(x) = \int G(x,s)\,f(s)\,ds \qquad (3)$$

is a solution to the equation $L\,u(x) = f(x)$.

Thus, one may obtain the function u(x) through knowledge of the Green's function in equation ( 1 ) and the source term on the right-hand side in equation ( 2 ). This process relies upon the linearity of the operator $L$.

In other words, the solution of equation ( 2 ), u(x), can be determined by the integration given in equation ( 3 ). Although f(x) is known, this integration cannot be performed unless G is also known. The problem now lies in finding the Green's function G that satisfies equation ( 1 ). For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator $L$.

Not every operator admits a Green's function. A Green's function can also be thought of as a right inverse of $L$. Aside from the difficulties of finding a Green's function for a particular operator, the integral in equation ( 3 ) may be quite difficult to evaluate. However, the method gives a theoretically exact result.

This can be thought of as an expansion of f according to a Dirac delta function basis (projecting f over $\delta(x - s)$) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.

Green's functions for solving inhomogeneous boundary value problems

The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also usually used as propagators in Feynman diagrams; the term Green's function is often further used for any correlation function.

Framework

Let $L$ be the Sturm–Liouville operator, a linear differential operator of the form

$$L = \frac{d}{dx}\left[p(x)\,\frac{d}{dx}\right] + q(x),$$

and let $\mathbf{D}$ be the vector-valued boundary conditions operator

$$\mathbf{D}\,u = \begin{bmatrix} \alpha_1 u'(0) + \beta_1 u(0) \\ \alpha_2 u'(\ell) + \beta_2 u(\ell) \end{bmatrix}.$$

Let $f(x)$ be a continuous function in $[0, \ell]$. Further suppose that the problem

$$L\,u = f, \qquad \mathbf{D}\,u = \mathbf{0}$$

is "regular", i.e., the only solution for $f(x) = 0$ for all x is $u(x) = 0$. [note 1]

Theorem

There is one and only one solution $u(x)$ that satisfies

$$L\,u = f, \qquad \mathbf{D}\,u = \mathbf{0},$$

and it is given by

$$u(x) = \int_0^\ell f(s)\,G(x,s)\,ds,$$

where $G(x,s)$ is a Green's function satisfying the following conditions:

  1. $G(x,s)$ is continuous in $x$ and $s$.
  2. For $x \ne s$, $\;L\,G(x,s) = 0$.
  3. For $s \ne 0, \ell$, $\;\mathbf{D}\,G(x,s) = \mathbf{0}$.
  4. Derivative "jump": $\;G'(s_{+0}, s) - G'(s_{-0}, s) = 1/p(s)$.
  5. Symmetry: $\;G(x,s) = G(s,x)$.
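These conditions can be verified concretely. As an illustrative (assumed) example, take $L u = u''$ on $[0,1]$ (so $p = 1$, $q = 0$) with $u(0) = u(1) = 0$; its Green's function is $G(x,s) = x(s-1)$ for $x \le s$ and $s(x-1)$ for $x \ge s$:

```python
import numpy as np

# Green's function for u'' = f on [0, 1] with u(0) = u(1) = 0:
def G(x, s):
    return np.where(x <= s, x * (s - 1.0), s * (x - 1.0))

rng = np.random.default_rng(0)
xv, sv = rng.random(100), rng.random(100)
sym_ok = np.allclose(G(xv, sv), G(sv, xv))        # condition 5: symmetry

# Condition 4: derivative jump at x = s equals 1/p(s) = 1.
s0, eps = 0.37, 1e-6
left = (G(s0, s0) - G(s0 - eps, s0)) / eps        # dG/dx just below x = s
right = (G(s0 + eps, s0) - G(s0, s0)) / eps       # dG/dx just above x = s
jump = right - left

# u(x) = ∫ G(x, s) f(s) ds solves u'' = f; for f = 1 the exact solution
# is u = x (x - 1) / 2.  Trapezoid quadrature:
sgrid = np.linspace(0.0, 1.0, 2001)
x0 = 0.6
vals = G(x0, sgrid)
u = np.sum((vals[1:] + vals[:-1]) / 2) * (sgrid[1] - sgrid[0])
```
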

Advanced and retarded Green's functions

The Green's function is not necessarily unique, since the addition of any solution of the homogeneous equation to one Green's function results in another Green's function. Therefore, if the homogeneous equation has nontrivial solutions, multiple Green's functions exist. In some cases, it is possible to find one Green's function that is nonvanishing only for $s \le x$, which is called a retarded Green's function, and another Green's function that is nonvanishing only for $s \ge x$, which is called an advanced Green's function. In such cases, any linear combination of the two Green's functions is also a valid Green's function. The terminology advanced and retarded is especially useful when the variable x corresponds to time. In such cases, the solution provided by the use of the retarded Green's function depends only on the past sources and is causal, whereas the solution provided by the use of the advanced Green's function depends only on the future sources and is acausal. In these problems, it is often the case that the causal solution is the physically important one. The use of advanced and retarded Green's functions is especially common for the analysis of solutions of the inhomogeneous electromagnetic wave equation.

Finding Green's functions

Units

While it does not uniquely fix the form the Green's function will take, performing a dimensional analysis to find the units a Green's function must have is an important sanity check on any Green's function found through other means. A quick examination of the defining equation,

$$L\,G(x,s) = \delta(x - s),$$

shows that the units of $G$ depend not only on the units of $L$ but also on the number and units of the space of which the position vectors $x$ and $s$ are elements. This leads to the relationship:

$$[[G]] = [[L]]^{-1}\,[[dx]]^{-1},$$

where $[[G]]$ is defined as, "the physical units of $G$", and $dx$ is the volume element of the space (or spacetime).

For example, if $L = \partial_t^2$ and time is the only variable then:

$$[[L]] = [\text{time}]^{-2}, \qquad [[dx]] = [\text{time}], \qquad [[G]] = [\text{time}].$$

If $L = \square = \frac{1}{c^2}\partial_t^2 - \nabla^2$, the d'Alembert operator, and space has 3 dimensions then:

$$[[L]] = [\text{length}]^{-2}, \qquad [[dx]] = [\text{time}]\,[\text{length}]^3, \qquad [[G]] = [\text{time}]^{-1}\,[\text{length}]^{-1}.$$

Eigenvalue expansions

If a differential operator L admits a set of eigenvectors Ψn(x) (i.e., a set of functions Ψn and scalars λn such that LΨn = λn Ψn ) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues.

"Complete" means that the set of functions n} satisfies the following completeness relation,

Then the following holds,

where represents complex conjugation.

Applying the operator L to each side of this equation results in the completeness relation, which was assumed.

The general study of Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
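The eigenfunction expansion can be tested numerically on an assumed, illustrative operator: $L = -d^2/dx^2$ on $[0,1]$ with Dirichlet boundary conditions has eigenfunctions $\psi_n(x) = \sqrt{2}\,\sin(n\pi x)$ and eigenvalues $\lambda_n = (n\pi)^2$, and its Green's function has the closed form $G(x,s) = x(1-s)$ for $x \le s$:

```python
import numpy as np

# Truncated eigenfunction expansion G(x, s) = sum_n psi_n(x) psi_n(s) / lambda_n
# for L = -d^2/dx^2 on [0, 1] with Dirichlet boundary conditions.
def G_series(x, s, nmax=5000):
    n = np.arange(1, nmax + 1)
    psi_x = np.sqrt(2.0) * np.sin(n * np.pi * x)
    psi_s = np.sqrt(2.0) * np.sin(n * np.pi * s)
    return np.sum(psi_x * psi_s / (n * np.pi) ** 2)

# Compare against the closed form G(x, s) = x (1 - s) for x <= s:
x0, s0 = 0.3, 0.7
err = abs(G_series(x0, s0) - x0 * (1.0 - s0))
```

The terms decay like $1/n^2$, so the truncation error of the partial sum is bounded by the tail $\sum_{n > N} 2/(\pi^2 n^2) \approx 2/(\pi^2 N)$.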

There are several other methods for finding Green's functions, including the method of images, separation of variables, and Laplace transforms. [1]

Combining Green's functions

If the differential operator $L$ can be factored as $L = L_1 L_2$, then the Green's function of $L$ can be constructed from the Green's functions for $L_1$ and $L_2$:

$$G(x, s) = \int G_2(x, s_1)\,G_1(s_1, s)\,ds_1.$$

The above identity follows immediately from taking $G(x,s)$ to be the representation of the right operator inverse of $L$, analogous to how, for the invertible linear operator $C = AB$, the inverse $C^{-1} = B^{-1}A^{-1}$ is represented by its matrix elements $C^{-1}_{i,j} = \sum_k B^{-1}_{i,k}\,A^{-1}_{k,j}$.

A further identity follows for differential operators that are scalar polynomials of the derivative, $L = P_N(\partial_x)$. The fundamental theorem of algebra, combined with the fact that $\partial_x$ commutes with itself, guarantees that the polynomial can be factored, putting $L$ in the form:

$$L = \prod_{i=1}^N (\partial_x - z_i),$$

where $z_i$ are the zeros of $P_N(z)$. Taking the Fourier transform of $L\,G(x,s) = \delta(x - s)$ with respect to both $x$ and $s$ gives:

$$\widehat{G}(k_x, k_s) = \frac{\delta(k_x - k_s)}{\prod_{i=1}^N (i k_x - z_i)}.$$

The fraction can then be split into a sum using a partial fraction decomposition before Fourier transforming back to $x$ and $s$ space. This process yields identities that relate integrals of Green's functions and sums of the same. For example, if $L = (\partial_x + \gamma)(\partial_x + \alpha)$, then one form for its Green's function is:

$$G(x, s) = \frac{\Theta(x - s)}{\gamma - \alpha}\left(e^{-\alpha(x - s)} - e^{-\gamma(x - s)}\right) = \int \Theta(x - s_1)\,e^{-\gamma(x - s_1)}\,\Theta(s_1 - s)\,e^{-\alpha(s_1 - s)}\,ds_1.$$

While the example presented is tractable analytically, it illustrates a process that works when the integral is not trivial (for example, when $(\partial_x + \alpha)^2$ is the operator in the polynomial).
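The factorization identity can be sketched numerically for an assumed operator $L = (d/dt + a)(d/dt + b)$: the Green's function of the product operator is the convolution of the factors' Green's functions, and partial fractions give the same result in closed form.

```python
import numpy as np

a, b = 1.0, 3.0          # hypothetical constants, a != b
dt = 1e-3
t = np.arange(0.0, 5.0, dt)

G1 = np.exp(-a * t)      # retarded Green's function of d/dt + a
G2 = np.exp(-b * t)      # retarded Green's function of d/dt + b

# Green's function of the product operator = convolution of the factors' G's:
G_conv = np.convolve(G1, G2)[: t.size] * dt

# Partial fractions: 1/((z + a)(z + b)) = (1/(b - a)) (1/(z + a) - 1/(z + b)),
# so the same Green's function is (e^{-a t} - e^{-b t}) / (b - a) for t >= 0.
G_pf = (np.exp(-a * t) - np.exp(-b * t)) / (b - a)
```
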

Table of Green's functions

The following table gives an overview of Green's functions of frequently appearing differential operators, where $r = \sqrt{x^2 + y^2 + z^2}$, $\rho = \sqrt{x^2 + y^2}$, $\Theta(t)$ is the Heaviside step function, $J_\nu(z)$ is a Bessel function, $I_\nu(z)$ is a modified Bessel function of the first kind, and $K_\nu(z)$ is a modified Bessel function of the second kind. [2] Where time (t) appears in the first column, the retarded (causal) Green's function is listed.

| Differential operator $L$ | Green's function $G$ | Example of application |
| --- | --- | --- |
| $\partial_t + \gamma$ | $\Theta(t)\,e^{-\gamma t}$ | |
| $\partial_t^2 + 2\gamma\partial_t + \omega_0^2$, where $\gamma < \omega_0$ | $\Theta(t)\,e^{-\gamma t}\,\frac{\sin(\omega t)}{\omega}$, with $\omega = \sqrt{\omega_0^2 - \gamma^2}$ | 1D underdamped harmonic oscillator |
| $\partial_t^2 + 2\gamma\partial_t + \omega_0^2$, where $\gamma > \omega_0$ | $\Theta(t)\,e^{-\gamma t}\,\frac{\sinh(\omega t)}{\omega}$, with $\omega = \sqrt{\gamma^2 - \omega_0^2}$ | 1D overdamped harmonic oscillator |
| $\partial_t^2 + 2\gamma\partial_t + \omega_0^2$, where $\gamma = \omega_0$ | $\Theta(t)\,e^{-\gamma t}\,t$ | 1D critically damped harmonic oscillator |
| 1D Laplace operator $\frac{d^2}{dx^2}$ | $\frac{1}{2}\lvert x - s\rvert$ | 1D Poisson equation |
| 2D Laplace operator $\nabla^2_{2D} = \partial_x^2 + \partial_y^2$ | $\frac{1}{2\pi}\ln\rho$, with $\rho = \sqrt{x^2 + y^2}$ | 2D Poisson equation |
| 3D Laplace operator $\nabla^2_{3D}$ | $\frac{-1}{4\pi r}$, with $r = \sqrt{x^2 + y^2 + z^2}$ | Poisson equation |
| Helmholtz operator $\nabla^2_{3D} + k^2$ | $\frac{-e^{-ikr}}{4\pi r}$ | stationary 3D Schrödinger equation for free particle |
| Divergence operator $\nabla\cdot$ | $\frac{\mathbf{x} - \mathbf{x}'}{4\pi\lvert\mathbf{x} - \mathbf{x}'\rvert^3}$ | |
| Curl operator $\nabla\times$ | $\frac{\mathbf{x} - \mathbf{x}'}{4\pi\lvert\mathbf{x} - \mathbf{x}'\rvert^3}\times$ (Biot–Savart kernel, for divergence-free sources) | |
| $\nabla^2 - k^2$ in $n$ dimensions | $-(2\pi)^{-n/2}\left(\frac{k}{r}\right)^{n/2 - 1} K_{n/2 - 1}(kr)$ | Yukawa potential, Feynman propagator, Screened Poisson equation |
| $\partial_t^2 - c^2\partial_x^2$ | $\frac{1}{2c}\,\Theta(t - \lvert x\rvert / c)$ | 1D wave equation |
| $\partial_t^2 - c^2\nabla^2_{2D}$ | $\frac{1}{2\pi c\sqrt{c^2 t^2 - \rho^2}}\,\Theta(t - \rho/c)$ | 2D wave equation |
| D'Alembert operator $\square = \frac{1}{c^2}\partial_t^2 - \nabla^2_{3D}$ | $\frac{\delta(t - r/c)}{4\pi r}$ | 3D wave equation |
| $\partial_t - k\,\partial_x^2$ | $\Theta(t)\left(\frac{1}{4\pi k t}\right)^{1/2} e^{-x^2/4kt}$ | 1D diffusion |
| $\partial_t - k\,\nabla^2_{2D}$ | $\Theta(t)\,\frac{1}{4\pi k t}\,e^{-\rho^2/4kt}$ | 2D diffusion |
| $\partial_t - k\,\nabla^2_{3D}$ | $\Theta(t)\left(\frac{1}{4\pi k t}\right)^{3/2} e^{-r^2/4kt}$ | 3D diffusion |
| $\frac{1}{c^2}\partial_t^2 - \partial_x^2 + \mu^2$, with $\mu > 0$ | $\frac{c}{2}\,\Theta(ct - \lvert x\rvert)\,J_0\!\left(\mu\sqrt{c^2 t^2 - x^2}\right)$ | 1D Klein–Gordon equation |
| $\frac{1}{c^2}\partial_t^2 - \nabla^2_{2D} + \mu^2$, with $\mu > 0$ | $\frac{c}{2\pi}\,\Theta(ct - \rho)\,\frac{\cos\!\left(\mu\sqrt{c^2 t^2 - \rho^2}\right)}{\sqrt{c^2 t^2 - \rho^2}}$ | 2D Klein–Gordon equation |
| $\frac{1}{c^2}\partial_t^2 - \nabla^2_{3D} + \mu^2$, with $\mu > 0$ | $\frac{\delta(t - r/c)}{4\pi r} - \frac{\mu c}{4\pi}\,\Theta(ct - r)\,\frac{J_1\!\left(\mu\sqrt{c^2 t^2 - r^2}\right)}{\sqrt{c^2 t^2 - r^2}}$ | 3D Klein–Gordon equation |
| $\partial_t^2 + 2\gamma\partial_t - c^2\partial_x^2$, with $\gamma > 0$ | $\frac{e^{-\gamma t}}{2c}\,\Theta(ct - \lvert x\rvert)\,I_0\!\left(\frac{\gamma}{c}\sqrt{c^2 t^2 - x^2}\right)$ | telegrapher's equation |
| $\partial_t^2 + 2\gamma\partial_t - c^2\nabla^2_{2D}$, with $\gamma > 0$ | $\frac{e^{-\gamma t}}{2\pi c}\,\Theta(ct - \rho)\,\frac{\cosh\!\left(\frac{\gamma}{c}\sqrt{c^2 t^2 - \rho^2}\right)}{\sqrt{c^2 t^2 - \rho^2}}$ | 2D relativistic heat conduction |
| $\partial_t^2 + 2\gamma\partial_t - c^2\nabla^2_{3D}$, with $\gamma > 0$ | $e^{-\gamma t}\left[\frac{\delta(t - r/c)}{4\pi c^2 r} + \frac{\gamma}{4\pi c^2}\,\Theta(ct - r)\,\frac{I_1\!\left(\frac{\gamma}{c}\sqrt{c^2 t^2 - r^2}\right)}{\sqrt{c^2 t^2 - r^2}}\right]$ | 3D relativistic heat conduction |
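A quick numerical spot-check of a standard entry (the retarded Green's function of the underdamped harmonic oscillator, $L = \partial_t^2 + 2\gamma\partial_t + \omega_0^2$ with $\gamma < \omega_0$; constants here are illustrative): for $t > 0$ it satisfies $L\,G = 0$, and the unit impulse shows up as $G(0) = 0$, $G'(0) = 1$.

```python
import numpy as np

gamma, omega0 = 0.5, 2.0
omega = np.sqrt(omega0**2 - gamma**2)

def G(t):
    # smooth part of Theta(t) e^{-gamma t} sin(omega t)/omega
    return np.exp(-gamma * t) * np.sin(omega * t) / omega

# For t > 0, L G = 0 (finite-difference check of the homogeneous equation):
t = np.linspace(0.1, 5.0, 500)
h = 1e-5
Gpp = (G(t + h) - 2.0 * G(t) + G(t - h)) / h**2
Gp = (G(t + h) - G(t - h)) / (2.0 * h)
residual = Gpp + 2.0 * gamma * Gp + omega0**2 * G(t)

# Impulse conditions at t = 0 (using the smooth continuation of G):
G0 = G(0.0)
Gp0 = (G(h) - G(-h)) / (2.0 * h)
```
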

Green's functions for the Laplacian

Green's functions for linear differential operators involving the Laplacian may be readily put to use using the second of Green's identities.

To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem),

$$\int_V \nabla\cdot\mathbf{A}\;dV = \oint_S \mathbf{A}\cdot d\hat{\boldsymbol{\sigma}}.$$

Let $\mathbf{A} = \varphi\,\nabla\psi - \psi\,\nabla\varphi$ and substitute into Gauss' law.

Compute $\nabla\cdot\mathbf{A}$ and apply the product rule for the ∇ operator,

$$\nabla\cdot\mathbf{A} = \nabla\varphi\cdot\nabla\psi + \varphi\,\nabla^2\psi - \nabla\psi\cdot\nabla\varphi - \psi\,\nabla^2\varphi = \varphi\,\nabla^2\psi - \psi\,\nabla^2\varphi.$$

Plugging this into the divergence theorem produces Green's theorem,

$$\int_V \left(\varphi\,\nabla^2\psi - \psi\,\nabla^2\varphi\right) dV = \oint_S \left(\varphi\,\nabla\psi - \psi\,\nabla\varphi\right)\cdot d\hat{\boldsymbol{\sigma}}.$$

Suppose that the linear differential operator L is the Laplacian, ∇2, and that there is a Green's function G for the Laplacian. The defining property of the Green's function still holds,

$$\nabla^2 G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}').$$

Let $\psi = G$ in Green's second identity, see Green's identities. Then,

$$\int_V \left[\varphi(\mathbf{x}')\,\delta(\mathbf{x} - \mathbf{x}') - G(\mathbf{x}, \mathbf{x}')\,\nabla'^2\varphi(\mathbf{x}')\right] dV' = \oint_S \left[\varphi(\mathbf{x}')\,\nabla' G(\mathbf{x}, \mathbf{x}') - G(\mathbf{x}, \mathbf{x}')\,\nabla'\varphi(\mathbf{x}')\right]\cdot d\hat{\boldsymbol{\sigma}}'.$$

Using this expression, it is possible to solve Laplace's equation2φ(x) = 0 or Poisson's equation2φ(x) = −ρ(x), subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for φ(x) everywhere inside a volume where either (1) the value of φ(x) is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of φ(x) is specified on the bounding surface (Neumann boundary conditions).

Suppose the problem is to solve for φ(x) inside the region. Then the integral

$$\int_V \varphi(\mathbf{x}')\,\delta(\mathbf{x} - \mathbf{x}')\,dV'$$

reduces to simply φ(x) due to the defining property of the Dirac delta function, and we have

$$\varphi(\mathbf{x}) = \int_V G(\mathbf{x}, \mathbf{x}')\,\nabla'^2\varphi(\mathbf{x}')\,dV' + \oint_S \left[\varphi(\mathbf{x}')\,\nabla' G(\mathbf{x}, \mathbf{x}') - G(\mathbf{x}, \mathbf{x}')\,\nabla'\varphi(\mathbf{x}')\right]\cdot d\hat{\boldsymbol{\sigma}}'.$$

This form expresses the well-known property of harmonic functions, that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.

In electrostatics, φ(x) is interpreted as the electric potential, ρ(x) as electric charge density, and the normal derivative as the normal component of the electric field.

If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that G(x,x′) vanishes when either x or x′ is on the bounding surface. Thus only one of the two terms in the surface integral remains. If the problem is to solve a Neumann boundary value problem, it might seem logical to choose the Green's function so that its normal derivative vanishes on the bounding surface. However, application of Gauss's theorem to the differential equation defining the Green's function yields

$$\oint_S \nabla' G(\mathbf{x}, \mathbf{x}')\cdot d\hat{\boldsymbol{\sigma}}' = \int_V \nabla'^2 G(\mathbf{x}, \mathbf{x}')\,dV' = \int_V \delta(\mathbf{x} - \mathbf{x}')\,dV' = 1,$$

meaning the normal derivative of G(x,x′) cannot vanish on the surface, because it must integrate to 1 on the surface. [3]

The simplest form the normal derivative can take is that of a constant, namely 1/S, where S is the surface area of the surface. The surface term in the solution becomes

$$\oint_S \varphi(\mathbf{x}')\,\nabla' G(\mathbf{x}, \mathbf{x}')\cdot d\hat{\boldsymbol{\sigma}}' = \langle\varphi\rangle_S,$$

where $\langle\varphi\rangle_S$ is the average value of the potential on the surface. This number is not known in general, but is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.

With no boundary conditions, the Green's function for the Laplacian (Green's function for the three-variable Laplace equation) is

$$G(\mathbf{x}, \mathbf{x}') = -\frac{1}{4\pi\,\lvert\mathbf{x} - \mathbf{x}'\rvert}.$$

Supposing that the bounding surface goes out to infinity and plugging in this expression for the Green's function finally yields the standard expression for electric potential in terms of electric charge density as

$$\varphi(\mathbf{x}) = \int \frac{\rho(\mathbf{x}')}{4\pi\,\lvert\mathbf{x} - \mathbf{x}'\rvert}\,dV'.$$
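The free-space kernel can be sanity-checked numerically through a consequence of the potential integral (an assumed, illustrative setup): the potential of a uniform spherical shell, $\varphi(\mathbf{x}) \propto \oint dA'/\lvert\mathbf{x} - \mathbf{x}'\rvert$, looks from outside like that of a point charge, i.e. the spherical average of $1/\lvert\mathbf{x} - \mathbf{x}'\rvert$ over a shell of radius $R$ equals $1/d$ for observation distance $d > R$:

```python
import numpy as np

# Quasi-uniform (Fibonacci / golden-angle) points on a unit sphere:
N, R = 20000, 1.0
i = np.arange(N)
phi_g = np.pi * (3.0 - np.sqrt(5.0)) * i
z = 1.0 - 2.0 * (i + 0.5) / N
rho = np.sqrt(1.0 - z**2)
pts = R * np.stack([rho * np.cos(phi_g), rho * np.sin(phi_g), z], axis=1)

# Observation point outside the shell, at distance d from its center:
d = 2.0
x = np.array([0.0, 0.0, d])
avg = np.mean(1.0 / np.linalg.norm(pts - x, axis=1))   # should equal 1/d
```
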

Example

Find the Green's function for the following problem, whose Green's function number is X11:

$$L\,u = \frac{d^2 u}{dx^2} + k^2 u = f(x), \qquad u(0) = 0, \quad u\!\left(\tfrac{\pi}{2k}\right) = 0.$$

First step: The Green's function for the linear operator at hand is defined as the solution to

$$G''(x,s) + k^2\,G(x,s) = \delta(x - s). \qquad (*)$$

If $x \ne s$, then the delta function gives zero, and the general solution is

$$G(x,s) = c_1 \cos kx + c_2 \sin kx.$$

For $x < s$, the boundary condition at $x = 0$ implies

$$G(0,s) = c_1 \cdot 1 + c_2 \cdot 0 = 0, \qquad c_1 = 0,$$

if $0 < s$ and $s \ne \tfrac{\pi}{2k}$.

For $x > s$, the boundary condition at $x = \tfrac{\pi}{2k}$ implies (writing the general solution there as $c_3 \cos kx + c_4 \sin kx$)

$$G\!\left(\tfrac{\pi}{2k}, s\right) = c_3 \cdot 0 + c_4 \cdot 1 = 0, \qquad c_4 = 0.$$

The equation of $G(0,s) = 0$ is skipped for similar reasons.

To summarize the results thus far:

$$G(x,s) = \begin{cases} c_2 \sin kx, & x < s, \\ c_3 \cos kx, & s < x. \end{cases}$$

Second step: The next task is to determine $c_2$ and $c_3$.

Ensuring continuity in the Green's function at $x = s$ implies

$$c_2 \sin ks = c_3 \cos ks.$$

One can ensure proper discontinuity in the first derivative by integrating the defining differential equation (i.e., Eq. ( * )) from $x = s - \varepsilon$ to $x = s + \varepsilon$ and taking the limit as $\varepsilon$ goes to zero. Note that we only integrate the second derivative, as the remaining term will be continuous by construction:

$$c_3 \cdot (-k \sin ks) - c_2 \cdot (k \cos ks) = 1.$$

The two (dis)continuity equations can be solved for $c_2$ and $c_3$ to obtain

$$c_2 = -\frac{\cos ks}{k}, \qquad c_3 = -\frac{\sin ks}{k}.$$

So the Green's function for this problem is:

$$G(x,s) = \begin{cases} -\dfrac{\cos ks}{k}\,\sin kx, & x < s, \\[1ex] -\dfrac{\sin ks}{k}\,\cos kx, & s < x. \end{cases}$$
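This Green's function (restated explicitly in the code) can be verified with a manufactured solution: $u(x) = \sin(2kx)$ satisfies both boundary conditions, and $L\,u = u'' + k^2 u = -3k^2\sin(2kx)$, so integrating $G$ against that source must recover $u$. The constant $k$ below is an arbitrary illustrative choice.

```python
import numpy as np

k = 1.5
ell = np.pi / (2.0 * k)

# Green's function for u'' + k^2 u = f with u(0) = u(pi/2k) = 0:
def G(x, s):
    return np.where(x < s,
                    -np.cos(k * s) * np.sin(k * x) / k,
                    -np.sin(k * s) * np.cos(k * x) / k)

# Manufactured solution u(x) = sin(2kx), with source f = u'' + k^2 u:
s = np.linspace(0.0, ell, 4001)
f = -3.0 * k**2 * np.sin(2.0 * k * s)
x0 = 0.4 * ell
vals = G(x0, s) * f
u = np.sum((vals[1:] + vals[:-1]) / 2) * (s[1] - s[0])   # trapezoid rule
```
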

Footnotes

  1. In technical jargon "regular" means that only the trivial solution ($u(x) = 0$) exists for the homogeneous problem ($f(x) = 0$).


References

  1. (Cole 2011)
  2. Some examples taken from Schulz, Hermann (2001). Physik mit Bleistift. Frankfurt am Main: Deutsch. ISBN 3-8171-1661-6 (in German).
  3. Jackson, John David (1998-08-14). Classical Electrodynamics. John Wiley & Sons. p. 39.