In mathematics, a well-posed problem is one for which the following properties hold:[a]
1. The problem has a solution (existence);
2. the solution is unique (uniqueness);
3. the solution's behaviour changes continuously with the initial conditions (stability).
Examples of archetypal well-posed problems include the Dirichlet problem for Laplace's equation, and the heat equation with specified initial conditions. These might be regarded as 'natural' problems in that they model actual physical processes.
Problems that are not well-posed in the sense above are termed ill-posed. A simple example is a global optimization problem, because the location of the optima is generally not a continuous function of the parameters specifying the objective, even when the objective itself is a smooth function of those parameters. Inverse problems are often ill-posed; for example, the inverse heat equation, deducing a previous distribution of temperature from final data, is not well-posed in that the solution is highly sensitive to changes in the final data.
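As a concrete illustration of this sensitivity, the following sketch (with an illustrative grid size, final time and noise level, none of which come from the text above) evolves smooth initial data forward under the one-dimensional heat equation with periodic boundary conditions, perturbs the final temperature slightly, and then inverts the evolution exactly; because the Fourier mode with wavenumber k is amplified by exp(k²t), the recovered initial data bear no resemblance to the true ones.

```python
import numpy as np

# Sketch: sensitivity of the backward (inverse) heat equation.
# We evolve u_t = u_xx on [0, 2*pi] with periodic boundary conditions
# forward in time spectrally, perturb the final temperature by a tiny
# amount, and then apply the exact inversion. Grid size, final time and
# noise level are illustrative choices, not taken from the text above.

n = 128
t = 0.1
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers

u0 = np.sin(x) + 0.5 * np.sin(3 * x)      # smooth initial temperature

# Forward solve: the Fourier mode with wavenumber k is damped by exp(-k^2 t).
u_final = np.fft.ifft(np.fft.fft(u0) * np.exp(-k**2 * t)).real

# Tiny perturbation of the final data (e.g. measurement noise).
noise = 1e-10 * np.random.default_rng(0).standard_normal(n)

# "Exact" backward solve: each mode is amplified by exp(+k^2 t).
u0_recovered = np.fft.ifft(np.fft.fft(u_final + noise) * np.exp(k**2 * t)).real

print("size of perturbation:   ", np.max(np.abs(noise)))
print("error in recovered data:", np.max(np.abs(u0_recovered - u0)))
# The recovery error is astronomically larger than the perturbation,
# because the mode with k = 64 is amplified by exp(64**2 * 0.1) ~ 1e178.
```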
Continuum models must often be discretized in order to obtain a numerical solution. While solutions may be continuous with respect to the initial conditions, they may suffer from numerical instability when solved with finite precision, or with errors in the data.
Even if a problem is well-posed, it may still be ill-conditioned, meaning that a small error in the initial data can result in much larger errors in the answers. Problems in nonlinear complex systems (so-called chaotic systems) provide well-known examples of instability. An ill-conditioned problem is indicated by a large condition number.
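A small numerical sketch of ill-conditioning, using the Hilbert matrix (a standard illustrative example that is not mentioned above): the condition number of the 10×10 Hilbert matrix is on the order of 10¹³, so a relative error of about 10⁻¹⁰ in the data can produce a relative error in the solution that is many orders of magnitude larger.

```python
import numpy as np

# Sketch: an ill-conditioned (but well-posed) linear system A x = b.
# The Hilbert matrix is a classic example; its condition number grows
# rapidly with its size. The size and noise level are illustrative choices.

n = 10
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
print("condition number:", np.linalg.cond(H))     # roughly 1.6e13 for n = 10

x_true = np.ones(n)
b = H @ x_true

# Perturb the right-hand side by a relative error of about 1e-10 ...
rng = np.random.default_rng(1)
db = 1e-10 * np.linalg.norm(b) * rng.standard_normal(n)

# ... and observe that the relative error of the solution is amplified
# by up to the condition number.
x = np.linalg.solve(H, b + db)
print("relative data error:    ", np.linalg.norm(db) / np.linalg.norm(b))
print("relative solution error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```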
If the problem is well-posed, then it stands a good chance of solution on a computer using a stable algorithm. If it is not well-posed, it needs to be re-formulated for numerical treatment. Typically this involves including additional assumptions, such as smoothness of solution. This process is known as regularization.[1] Tikhonov regularization is one of the most commonly used methods for the regularization of linear ill-posed problems.
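A minimal sketch of Tikhonov regularization, assuming a discrete linear model A x ≈ b with a smoothing (hence ill-conditioned) forward matrix A; the operator, the noise level and the regularization parameter alpha below are illustrative choices rather than anything prescribed in the text.

```python
import numpy as np

# Sketch of Tikhonov regularization for a discrete linear ill-posed problem
# A x ~ b: instead of minimizing ||A x - b||^2 alone, minimize
# ||A x - b||^2 + alpha * ||x||^2, i.e. solve (A^T A + alpha I) x = A^T b.

def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# An ill-conditioned forward operator: a discretized Gaussian blur.
n = 50
s = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (s[:, None] - s[None, :])**2) / n

x_true = np.sin(2 * np.pi * s)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

# Naive (unregularized) least-squares solution: dominated by amplified noise.
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
# Regularized solution: a small penalty on ||x|| restores stability.
x_reg = tikhonov(A, b, alpha=1e-6)

print("error without regularization:", np.linalg.norm(x_naive - x_true))
print("error with regularization:   ", np.linalg.norm(x_reg - x_true))
```

The choice of alpha trades the amplification of data noise against the bias introduced by the penalty; here it is simply fixed by hand for illustration.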
The existence of local solutions is often an important part of the well-posedness problem, and it is the foundation of many estimate methods, for example the energy method below.
There are many results on this topic. For example, the Cauchy–Kowalevski theorem for Cauchy initial value problems essentially states that if the terms in a partial differential equation are all made up of analytic functions and a certain transversality condition is satisfied (the hyperplane or more generally hypersurface where the initial data are posed must be non-characteristic with respect to the partial differential operator), then on certain regions there necessarily exist solutions which are themselves analytic functions. This is a fundamental result in the study of analytic partial differential equations. Surprisingly, the theorem does not hold in the setting of smooth functions; an example discovered by Hans Lewy in 1957 consists of a linear partial differential equation whose coefficients are smooth (i.e., have derivatives of all orders) but not analytic, and for which no solution exists. So the Cauchy–Kowalevski theorem is necessarily limited in its scope to analytic functions.
The energy method is useful for establishing both uniqueness and continuity with respect to initial conditions (i.e. it does not establish existence). The method is based upon deriving an upper bound of an energy-like functional for a given problem.
Example: Consider the diffusion equation on the unit interval with homogeneous Dirichlet boundary conditions and suitable initial data $f(x)$ (e.g. for which $f(0) = f(1) = 0$):

$$u_t = D u_{xx}, \quad 0 < x < 1, \ t > 0,$$
$$u(0, t) = u(1, t) = 0,$$
$$u(x, 0) = f(x).$$
Multiply the equation $u_t = D u_{xx}$ by $u$ and integrate in space over the unit interval to obtain

$$\int_0^1 u u_t \, dx = D \int_0^1 u u_{xx} \, dx \quad \Longrightarrow \quad \frac{1}{2} \frac{d}{dt} \int_0^1 u^2 \, dx = D \, [u u_x]_0^1 - D \int_0^1 (u_x)^2 \, dx \le 0,$$

where integration by parts and the boundary conditions have been used in the last step.
This tells us that $\|u\|_2$ (the $L^2$ norm) cannot grow in time. By multiplying by two and integrating in time, from $0$ up to $t$, one finds

$$\|u(\cdot, t)\|_2^2 \le \|f(\cdot)\|_2^2.$$
This result is the energy estimate for this problem.
To show uniqueness of solutions, assume there are two distinct solutions to the problem, call them $u$ and $v$, each satisfying the same initial data. Upon defining $w = u - v$ then, via the linearity of the equations, one finds that $w$ satisfies

$$w_t = D w_{xx}, \quad 0 < x < 1, \ t > 0,$$
$$w(0, t) = w(1, t) = 0,$$
$$w(x, 0) = 0.$$
Applying the energy estimate tells us $\|w(\cdot, t)\|_2^2 \le 0$, which implies $u = v$ (almost everywhere).
Similarly, to show continuity with respect to initial conditions, assume that $u$ and $v$ are solutions corresponding to different initial data $f$ and $g$. Considering $w = u - v$ once more, one finds that $w$ satisfies the same equations as above but with $w(x, 0) = f(x) - g(x)$. This leads to the energy estimate

$$\|w(\cdot, t)\|_2^2 \le \|f(\cdot) - g(\cdot)\|_2^2,$$

which establishes continuity (i.e. as $f$ and $g$ become closer, as measured by the $L^2$ norm of their difference, then $\|w(\cdot, t)\|_2 \to 0$).
The maximum principle is an alternative approach to establish uniqueness and continuity of solutions with respect to initial conditions for this example. The existence of solutions to this problem can be established using Fourier series.
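The example above can also be checked numerically. The sketch below (with an arbitrary diffusion coefficient D, arbitrary initial data f vanishing at the endpoints, and a finite truncation of the Fourier sine series; none of these choices come from the text) constructs the series solution and verifies that ‖u(·,t)‖₂² never exceeds ‖f‖₂², as the energy estimate requires.

```python
import numpy as np

# Sketch: solve u_t = D u_xx on (0, 1) with u(0, t) = u(1, t) = 0 by a
# truncated Fourier sine series and check the energy estimate
# ||u(., t)||_2^2 <= ||f||_2^2.  D, f and the truncation are illustrative.

D = 0.5
x = np.linspace(0.0, 1.0, 400)
dx = x[1] - x[0]
f = x * (1 - x)                       # initial data with f(0) = f(1) = 0
k = np.arange(1, 51)                  # sine modes k = 1..50

# Sine coefficients b_k = 2 * integral_0^1 f(x) sin(k pi x) dx (Riemann sum).
b = 2 * (np.sin(np.outer(k, np.pi * x)) @ f) * dx

def u(t):
    # Each mode decays like exp(-D (k pi)^2 t).
    return np.sin(np.outer(x, k * np.pi)) @ (b * np.exp(-D * (k * np.pi)**2 * t))

norm_f_sq = np.sum(f**2) * dx
for t in [0.0, 0.01, 0.1, 1.0]:
    norm_u_sq = np.sum(u(t)**2) * dx
    print(f"t = {t:5.2f}   ||u||^2 = {norm_u_sq:.6f}   (||f||^2 = {norm_f_sq:.6f})")
```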
If it is possible to denote the solution to the Cauchy problem

$$\frac{du(t)}{dt} = A u(t), \qquad u(0) = u_0, \qquad t \ge 0, \qquad (1)$$

where $A$ is a linear operator mapping a dense linear subspace $D(A)$ of a Banach space $X$ into $X$, as $u(t) = T(t) u_0$, where $\{T(t)\}_{t \ge 0}$ is a family of linear operators on $X$ satisfying

$$T(0) = I, \qquad T(t + s) = T(t) T(s) \ \text{ for all } t, s \ge 0, \qquad \lim_{t \to 0^+} T(t) u_0 = u_0 \ \text{ for every } u_0 \in X$$

(that is, a strongly continuous semigroup),
then (1) is well-posed.
The Hille–Yosida theorem states the criteria on $A$ for such a family $\{T(t)\}$ to exist.
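In the finite-dimensional case ($X = \mathbb{R}^n$ and $A$ a matrix, an illustrative special case rather than the general setting above) the family $T(t)$ is simply the matrix exponential $e^{tA}$, and the semigroup properties can be checked directly:

```python
import numpy as np
from scipy.linalg import expm

# Sketch: in the finite-dimensional case X = R^n the Cauchy problem
# u'(t) = A u(t), u(0) = u0 is solved by T(t) u0 with T(t) = expm(t * A),
# and {T(t)} satisfies the semigroup properties. A and u0 are arbitrary.

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
u0 = np.array([1.0, 0.0])

def T(t):
    return expm(t * A)

s, t = 0.3, 0.7
print("T(0) = I:              ", np.allclose(T(0.0), np.eye(2)))
print("T(s+t) = T(s) T(t):    ", np.allclose(T(s + t), T(s) @ T(t)))
print("T(t) u0 -> u0 as t -> 0:", np.allclose(T(1e-8) @ u0, u0, atol=1e-6))

# T(t) u0 solves the ODE: compare a crude finite-difference derivative
# of the semigroup solution with A u(t).
h = 1e-6
lhs = (T(t + h) @ u0 - T(t) @ u0) / h
rhs = A @ (T(t) @ u0)
print("u'(t) ~ A u(t):        ", np.allclose(lhs, rhs, atol=1e-5))
```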
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves or electromagnetic waves. It arises in fields like acoustics, electromagnetism, and fluid dynamics.
In mathematics and physics, Laplace's equation is a second-order partial differential equation named after Pierre-Simon Laplace, who first studied its properties. This is often written as $\nabla^2 f = 0$ or $\Delta f = 0$, where $\Delta = \nabla \cdot \nabla = \nabla^2$ is the Laplace operator, $\nabla \cdot$ is the divergence operator, $\nabla$ is the gradient operator, and $f$ is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives.
In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. Since then, the heat equation and its variants have been found to be fundamental in many parts of both pure and applied mathematics.
The calculus of variations is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers. Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.
In mathematics, a Green's function is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form

$$a_0(x) y + a_1(x) y' + a_2(x) y'' + \cdots + a_n(x) y^{(n)} = b(x),$$

where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of an unknown function $y$ of the variable $x$.
In the study of differential equations, a boundary-value problem is a differential equation subjected to constraints called boundary conditions. A solution to a boundary value problem is a solution to the differential equation which also satisfies the boundary conditions.
In mathematics, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, integral equations may thus be expressed as being of the form

$$f(x_1, x_2, x_3, \ldots, x_n;\ u(x_1, x_2, x_3, \ldots, x_n);\ I^1(u), I^2(u), I^3(u), \ldots, I^m(u)) = 0,$$

where $I^i(u)$ is an integral operator acting on $u$. Hence, integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. A direct comparison can be seen between the mathematical form of the general integral equation above and the general form of a differential equation, which may be expressed as follows:

$$f(x_1, x_2, x_3, \ldots, x_n;\ u(x_1, x_2, x_3, \ldots, x_n);\ D^1(u), D^2(u), D^3(u), \ldots, D^m(u)) = 0,$$

where $D^i(u)$ may be viewed as a differential operator of order $i$. Due to this close connection between differential and integral equations, one can often convert between the two. For example, one method of solving a boundary value problem is by converting the differential equation with its boundary conditions into an integral equation and solving the integral equation. In addition, because one can convert between the two, differential equations in physics such as Maxwell's equations often have an analog integral and differential form. See also, for example, Green's function and Fredholm theory.
In mathematics and its applications, a Sturm–Liouville problem is a second-order linear ordinary differential equation of the form

$$\frac{d}{dx}\!\left[p(x) \frac{dy}{dx}\right] + q(x)\, y = -\lambda\, w(x)\, y,$$

for given functions $p(x)$, $q(x)$ and $w(x)$, together with some boundary conditions at extreme values of $x$. The goals of a given Sturm–Liouville problem are to find the values of $\lambda$ for which there exists a non-trivial solution (the eigenvalues of the problem) and, for each eigenvalue, to find the corresponding solution $y = y(x)$ (the associated eigenfunction).
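As an illustration (for the simplest choice $p = w = 1$, $q = 0$ on $(0, \pi)$ with Dirichlet boundary conditions, which is a special case rather than the general problem described above), the eigenvalues $\lambda_k = k^2$ and eigenfunctions $\sin(kx)$ can be approximated by discretizing the operator with finite differences:

```python
import numpy as np

# Sketch: the simplest Sturm-Liouville problem, -y'' = lambda * y on (0, pi)
# with y(0) = y(pi) = 0 (p = w = 1, q = 0), discretized by central finite
# differences. The exact eigenvalues are lambda_k = k^2, eigenfunctions sin(k x).

n = 200
h = np.pi / (n + 1)
# Tridiagonal approximation of -d^2/dx^2 at the interior grid points.
L = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eigenvalues = np.sort(np.linalg.eigvalsh(L))
print("first eigenvalues:", eigenvalues[:4])   # close to 1, 4, 9, 16
```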
In the theory of partial differential equations, elliptic operators are differential operators that generalize the Laplace operator. They are defined by the condition that the coefficients of the highest-order derivatives be positive, which implies the key property that the principal symbol is invertible, or equivalently that there are no real characteristic directions.
In mathematics, a Dirichlet problem asks for a function which solves a specified partial differential equation (PDE) in the interior of a given region that takes prescribed values on the boundary of the region.
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
In mathematics, a hyperbolic partial differential equation of order $n$ is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first $n - 1$ derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is

$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}.$$

The equation has the property that, if $u$ and its first time derivative are arbitrarily specified initial data on the line $t = 0$, then there exists a solution for all time $t$.
In mathematical analysis, a C0-semigroup, also known as a strongly continuous one-parameter semigroup, is a generalization of the exponential function. Just as exponential functions provide solutions of scalar linear constant coefficient ordinary differential equations, strongly continuous semigroups provide solutions of linear constant coefficient ordinary differential equations in Banach spaces. Such differential equations in Banach spaces arise from e.g. delay differential equations and partial differential equations.
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity.
A parabolic partial differential equation is a type of partial differential equation (PDE). Parabolic PDEs are used to describe a wide variety of time-dependent phenomena in, among other fields, engineering science, quantum mechanics and financial mathematics. Examples include the heat equation, the time-dependent Schrödinger equation and the Black–Scholes equation.
In the finite element method for the numerical solution of elliptic partial differential equations, the stiffness matrix is a matrix that represents the system of linear equations that must be solved in order to ascertain an approximate solution to the differential equation.
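A minimal sketch, assuming the simplest setting of $-u'' = f$ on $(0, 1)$ with homogeneous Dirichlet boundary conditions and piecewise-linear elements on a uniform mesh (choices made for illustration, not taken from the text): the stiffness matrix is then the familiar tridiagonal matrix $(1/h)\,\mathrm{tridiag}(-1, 2, -1)$, and solving the resulting linear system gives the nodal values of the approximate solution.

```python
import numpy as np

# Sketch: assemble the stiffness matrix for -u'' = f on (0, 1) with
# u(0) = u(1) = 0, using piecewise-linear ("hat") finite elements on a
# uniform mesh. Mesh size and right-hand side are illustrative choices.

n = 5                       # number of interior nodes
h = 1.0 / (n + 1)

# For hat functions on a uniform mesh, K[i, j] = integral phi_i' phi_j' dx,
# which gives the tridiagonal matrix (1/h) * tridiag(-1, 2, -1).
K = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

f = np.ones(n) * h          # load vector: integral of f(x) = 1 against each hat is h
u = np.linalg.solve(K, f)   # nodal values of the approximate solution

x = h * np.arange(1, n + 1)
print("FEM solution:  ", u)
print("exact at nodes:", x * (1 - x) / 2)   # -u'' = 1, u(0) = u(1) = 0
```

For this one-dimensional model problem the finite element nodal values happen to coincide with the exact solution, which makes the comparison in the last two lines exact up to rounding.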
In mathematics, the Cauchy–Kovalevskaya theorem is the main local existence and uniqueness theorem for analytic partial differential equations associated with Cauchy initial value problems. A special case was proven by Augustin Cauchy, and the full result by Sofya Kovalevskaya.
In mathematics, an abstract differential equation is a differential equation in which the unknown function and its derivatives take values in some generic abstract space. Equations of this kind arise e.g. in the study of partial differential equations: if one of the variables is given a privileged position and all the others are grouped together, an ordinary "differential" equation with respect to the privileged variable is obtained. Adding boundary conditions can often be translated in terms of considering solutions in some convenient function spaces.