Dirichlet problem

In mathematics, a Dirichlet problem is the problem of finding a function which solves a specified partial differential equation (PDE) in the interior of a given region and takes prescribed values on the boundary of the region. [1]

The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows:

Given a function f that has values everywhere on the boundary of a region in R^n, is there a unique continuous function u, twice continuously differentiable in the interior and continuous on the boundary, such that u is harmonic in the interior and u = f on the boundary?

This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle.
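The uniqueness half of this claim can be sketched in one line. As a sketch (the notation D for the region is assumed here):

```latex
% If u_1 and u_2 both solve the Dirichlet problem, their difference
% v = u_1 - u_2 is harmonic with zero boundary data, and the maximum
% principle pins v to zero throughout the region:
\[
  \Delta v = 0 \ \text{in } D, \quad v = 0 \ \text{on } \partial D
  \quad\Longrightarrow\quad
  \max_{\overline{D}} |v| = \max_{\partial D} |v| = 0
  \quad\Longrightarrow\quad
  u_1 = u_2 .
\]
```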

History

The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem to one of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments.

The next steps in the study of the Dirichlet problem were taken by Karl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet, judging by his 1850 paper submitted to the Prussian academy. Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this variational problem based on a method which he called Dirichlet's principle. The existence of a unique solution is very plausible by the "physical argument": any charge distribution on the boundary should, by the laws of electrostatics, determine an electrical potential as solution.

However, Karl Weierstrass found a flaw in Riemann's argument, and a rigorous proof of existence was found only in 1900 by David Hilbert, using his direct method in the calculus of variations. It turns out that the existence of a solution depends delicately on the smoothness of the boundary and the prescribed data.

General solution

For a domain D having a sufficiently smooth boundary ∂D, the general solution to the Dirichlet problem is given by

\[ u(x) = \int_{\partial D} \nu(s) \, \frac{\partial G(x, s)}{\partial n} \, ds, \]

where G(x, y) is the Green's function for the partial differential equation, and

\[ \frac{\partial G(x, s)}{\partial n} = \hat{n} \cdot \nabla_s G(x, s) \]

is the derivative of the Green's function along the inward-pointing unit normal vector \hat{n}. The integration is performed on the boundary, with measure ds. The function \nu(s) is given by the unique solution to the Fredholm integral equation of the second kind,

\[ f(x) = -\frac{\nu(x)}{2} + \int_{\partial D} \nu(s) \, \frac{\partial G(x, s)}{\partial n} \, ds. \]

The Green's function to be used in the above integral is one which vanishes on the boundary:

\[ G(x, s) = 0 \]

for s \in \partial D and x \in D. Such a Green's function is usually a sum of the free-field Green's function and a harmonic solution to the differential equation.

Existence

The Dirichlet problem for harmonic functions always has a solution, and that solution is unique, when the boundary ∂D is sufficiently smooth and f is continuous. More precisely, it has a solution when

\[ \partial D \in C^{1,\alpha} \]

for some \alpha \in (0, 1), where C^{1,\alpha} denotes the Hölder condition.

Example: the unit disk in two dimensions

In some simple cases the Dirichlet problem can be solved explicitly. For example, the solution to the Dirichlet problem for the unit disk in R^2 is given by the Poisson integral formula.

If f is a continuous function on the boundary ∂D of the open unit disk D, then the solution to the Dirichlet problem is given by

\[ u(z) = \begin{cases} \dfrac{1}{2\pi} \displaystyle\int_0^{2\pi} f(e^{i\psi}) \, \frac{1 - |z|^2}{|1 - z e^{-i\psi}|^2} \, d\psi & \text{if } z \in D, \\[1ex] f(z) & \text{if } z \in \partial D. \end{cases} \]

The solution u is continuous on the closed unit disk \bar{D} and harmonic on D.

The integrand is known as the Poisson kernel; this solution follows from the Green's function in two dimensions:

\[ G(z, x) = -\frac{1}{2\pi} \log |z - x| + \gamma(z, x), \]

where \gamma(z, x) is harmonic (\Delta_z \gamma(z, x) = 0) and chosen such that G(z, x) = 0 for x \in \partial D.
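The Poisson integral is easy to evaluate numerically. The following is a minimal sketch (the function name and the boundary data f(e^{iψ}) = cos ψ are illustrative choices, not from the text); for that boundary data the exact harmonic extension is u(z) = Re z, which the quadrature should reproduce.

```python
import numpy as np

def poisson_integral(z, f, n=20000):
    """Evaluate the Poisson integral for the unit disk at a point z = x + iy.

    f takes an angle psi and returns the boundary value f(e^{i psi}).
    The integral over [0, 2*pi] is approximated by an equispaced mean,
    which converges rapidly for smooth periodic integrands.
    """
    psi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    kernel = (1.0 - abs(z) ** 2) / np.abs(1.0 - z * np.exp(-1j * psi)) ** 2
    return np.mean(f(psi) * kernel)  # mean over psi = (1/2pi) * integral

# Boundary data f(e^{i psi}) = cos(psi); the harmonic extension is u(z) = Re z.
u = poisson_integral(0.3 + 0.4j, np.cos)
print(u)  # close to 0.3
```

At the centre z = 0 the kernel is identically 1, so the formula reduces to the mean value property of harmonic functions.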

Methods of solution

For bounded domains, the Dirichlet problem can be solved using the Perron method, which relies on the maximum principle for subharmonic functions. This approach is described in many textbooks. [2] It is not well suited to describing smoothness of solutions when the boundary is smooth. Another classical Hilbert space approach, through Sobolev spaces, does yield such information. [3] The solution of the Dirichlet problem using Sobolev spaces for planar domains can be used to prove the smooth version of the Riemann mapping theorem.

Bell (1992) has outlined a different approach for establishing the smooth Riemann mapping theorem, based on the reproducing kernels of Szegő and Bergman, and in turn used it to solve the Dirichlet problem. The classical methods of potential theory allow the Dirichlet problem to be solved directly in terms of integral operators, to which the standard theory of compact and Fredholm operators applies. The same methods work equally well for the Neumann problem. [4]
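Alongside these analytic methods, the Dirichlet problem for Laplace's equation is routinely approximated numerically. The sketch below is not one of the methods cited above: it uses plain Jacobi relaxation of the five-point discrete Laplacian on the unit square (the grid size and boundary data are illustrative assumptions). Each interior value is repeatedly replaced by the average of its four neighbours, which converges to the discrete harmonic function with the prescribed boundary values.

```python
import numpy as np

def solve_dirichlet(boundary, n=40, iters=5000):
    """Jacobi relaxation for Laplace's equation on the unit square.

    boundary(x, y) supplies the prescribed Dirichlet values on the edges
    of an n-by-n grid; interior values are iterated toward the discrete
    harmonic function. Indexing is u[i, j] ~ u(x=xs[j], y=xs[i]).
    """
    xs = np.linspace(0.0, 1.0, n)
    u = np.zeros((n, n))
    # Impose the Dirichlet data on all four edges.
    u[0, :] = boundary(xs, 0.0)
    u[-1, :] = boundary(xs, 1.0)
    u[:, 0] = boundary(0.0, xs)
    u[:, -1] = boundary(1.0, xs)
    for _ in range(iters):
        # The whole right-hand side is evaluated before assignment,
        # so this is a true Jacobi (not Gauss-Seidel) sweep.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    return u

# Boundary data x*y is itself (discretely) harmonic, so the interior
# values should reproduce it.
u = solve_dirichlet(lambda x, y: x * y)
print(u[20, 20])  # approaches xs[20]**2, since u(x, y) = x*y exactly
```

Jacobi iteration is the simplest choice; in practice one would use Gauss-Seidel, SOR, or a multigrid method for faster convergence.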

Generalizations

Dirichlet problems are typical of elliptic partial differential equations and of potential theory, the Laplace equation in particular. Other examples include the biharmonic equation and related equations in elasticity theory.

They are one of several classes of PDE problems defined by the information given at the boundary, which also include Neumann problems and Cauchy problems.

Example: equation of a finite string attached to one moving wall

Consider the Dirichlet problem for the wave equation describing a string attached between walls, with one end attached permanently and the other moving with the constant velocity \lambda, i.e. the d'Alembert equation on the triangular region of the Cartesian product of the space and the time:

\[ \frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} = 0, \]
\[ u(0, t) = 0, \]
\[ u(\lambda t, t) = 0. \]

As one can easily check by substitution, the solution fulfilling the first condition is

\[ u(x, t) = f(t - x) - f(x + t). \]

Additionally we want

\[ f(t - \lambda t) - f(\lambda t + t) = 0. \]

Substituting

\[ \tau = (\lambda + 1) t, \]

we get the condition of self-similarity

\[ f(k \tau) = f(\tau), \]

where

\[ k = \frac{1 - \lambda}{1 + \lambda}. \]

It is fulfilled, for example, by the composite function

\[ \sin[\log(e^{2\pi} x)] = \sin[\log x + 2\pi] = \sin[\log x], \]

with

\[ k = e^{2\pi}, \]

thus in general

\[ f(\tau) = g(\log \tau), \]

where g is a periodic function with a period \log k:

\[ g(\tau + \log k) = g(\tau), \]

and we get the general solution

\[ u(x, t) = g(\log(t - x)) - g(\log(x + t)). \]
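The general solution above can be checked numerically. In the sketch below the choices λ = 1/2 and g = sin scaled to period |log k| are illustrative assumptions; the code verifies both boundary conditions and, by finite differences, the wave equation itself.

```python
import numpy as np

lam = 0.5                      # wall velocity; the moving wall is at x = lam * t
k = (1.0 - lam) / (1.0 + lam)  # self-similarity ratio k = (1 - lam)/(1 + lam)
P = abs(np.log(k))             # required period of g

def g(s):
    # Any function with period |log k| works; here, sin scaled to that period.
    return np.sin(2.0 * np.pi * s / P)

def u(x, t):
    # General solution u(x, t) = g(log(t - x)) - g(log(x + t)),
    # valid where t - x > 0 and x + t > 0.
    return g(np.log(t - x)) - g(np.log(x + t))

# Boundary conditions: u vanishes at the fixed wall x = 0 (exactly) and
# at the moving wall x = lam * t (by the periodicity of g).
print(u(0.0, 2.0))        # 0.0
print(u(lam * 2.0, 2.0))  # 0 up to rounding

# The wave equation u_tt = u_xx, checked by centered finite differences
# at an interior point.
h = 1e-4
x0, t0 = 0.3, 2.0
utt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(abs(utt - uxx))  # small
```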

Notes

  1. "Dirichlet Problem".
  2. See for example:
  3. See for example:
  4. See:
