In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations such as the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel, who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate heated from beneath. For linear evolution equations without spatial dependence, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters for solving linear inhomogeneous ordinary differential equations. [1] It is also an indispensable tool in the study of nonlinear partial differential equations, such as the Navier–Stokes equations and the nonlinear Schrödinger equation, where one treats the nonlinearity as an inhomogeneity.
The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy u in R^n. Indicating by u_t(x, t) the time derivative of u(x, t), the initial value problem is

u_t(x,t) - \Delta u(x,t) = 0, \qquad (x,t) \in \mathbf{R}^n \times (0,\infty),
u(x,0) = g(x), \qquad x \in \mathbf{R}^n,

where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation,

u_t(x,t) - \Delta u(x,t) = f(x,t), \qquad (x,t) \in \mathbf{R}^n \times (0,\infty),
u(x,0) = 0, \qquad x \in \mathbf{R}^n,

corresponds to adding an external heat energy f(x, t) dt at each point. Intuitively, one can think of the inhomogeneous problem as a collection of homogeneous problems, each started afresh at a different time slice t = t_0. By linearity, one can add up (integrate) the resulting solutions over the times t_0 and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle.
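This slicing picture can be made explicit for the heat equation using the heat kernel (a standard representation, included here for concreteness rather than taken from the discussion above). Writing \Phi(x,t) = (4\pi t)^{-n/2} e^{-|x|^2/(4t)}, the solution with initial data g and source f is

u(x,t) = \int_{\mathbf{R}^n} \Phi(x-y,\,t)\, g(y)\, dy + \int_0^t \int_{\mathbf{R}^n} \Phi(x-y,\,t-s)\, f(y,s)\, dy\, ds,

where the second term superposes, over all earlier times s, the free evolution of the heat f(\cdot, s)\, ds deposited at time s.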
Formally, consider a linear inhomogeneous evolution equation for a function u with spatial domain D in R^n, of the form

u_t(x,t) - Lu(x,t) = f(x,t), \qquad (x,t) \in D \times (0,\infty),
u|_{\partial D} = 0,
u(x,0) = 0, \qquad x \in D,

where L is a linear differential operator that involves no time derivatives.
Duhamel's principle is, formally, that the solution to this problem is

u(x,t) = \int_0^t (P^s f)(x,t)\, ds,

where P^s f is the solution of the problem

u_t - Lu = 0, \qquad (x,t) \in D \times (s,\infty),
u|_{\partial D} = 0,
u(x,s) = f(x,s), \qquad x \in D.

The integrand is the retarded solution P^s f, evaluated at time t, representing the effect, at the later time t, of an infinitesimal force f(\cdot, s)\, ds applied at time s. (The map f \mapsto \int_0^t P^s f\, ds can thus be thought of as an inverse of the operator \partial_t - L for the Cauchy problem with initial condition u(x,0) = 0.)
Duhamel's principle also holds for linear systems (with vector-valued functions u), and this in turn furnishes a generalization to higher t derivatives, such as those appearing in the wave equation (see below). Validity of the principle depends on being able to solve the homogeneous problem in an appropriate function space and on the solution exhibiting reasonable dependence on parameters, so that the integral is well-defined. Precise analytic conditions on u and f depend on the particular application.
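As a quick illustration for a linear system, the following sketch numerically compares the Duhamel integral u(t) = \int_0^t e^{(t-s)A} f(s)\, ds with a direct integration of u' = Au + f, u(0) = 0. The matrix A and forcing f are arbitrary choices made for this example, not data from the text.

```python
# Minimal numerical sketch of Duhamel's principle for a linear system
# u'(t) = A u(t) + f(t), u(0) = 0, where the role of P^s f is played by
# exp((t - s) A) f(s).  A and f are illustrative choices.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -0.3]])                    # example generator (damped oscillator)
f = lambda t: np.array([0.0, np.sin(3.0 * t)])  # example forcing term

t_final = 5.0

# Duhamel formula: u(t) = integral_0^t exp((t - s) A) f(s) ds, via the trapezoid rule.
s = np.linspace(0.0, t_final, 2001)
values = np.array([expm((t_final - si) * A) @ f(si) for si in s])
h = s[1] - s[0]
u_duhamel = h * (values[0] / 2 + values[1:-1].sum(axis=0) + values[-1] / 2)

# Reference: integrate the inhomogeneous system directly.
sol = solve_ivp(lambda t, u: A @ u + f(t), (0.0, t_final), np.zeros(2),
                rtol=1e-10, atol=1e-12)

print(u_duhamel)        # the two results agree to several decimal places
print(sol.y[:, -1])
```

Here the matrix exponential plays the role of the homogeneous solution operator; the same comparison works for any generator whose homogeneous flow can be computed.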
The linear wave equation models the displacement u of an idealized dispersionless one-dimensional string, in terms of derivatives with respect to time t and space x:

u_{tt}(x,t) - c^2 u_{xx}(x,t) = f(x,t).
The function f(x, t), in natural units, represents an external force applied to the string at the position (x, t). In order to be a suitable physical model for nature, it should be possible to solve it for any initial state that the string is in, specified by its initial displacement and velocity:

u(x,0) = u_0(x), \qquad u_t(x,0) = v_0(x).
More generally, we should be able to solve the equation with data specified on any t = constant slice:

u(x,T) = u_T(x), \qquad u_t(x,T) = v_T(x).
To evolve a solution from any given time slice T to T + dT, the contribution of the force must be added to the solution. That contribution comes from changing the velocity of the string by f(x, T) dT. That is, to get the solution at time T + dT from the solution at time T, we must add to it a new (forward) solution U of the homogeneous (no external forces) wave equation

U_{tt} - c^2 U_{xx} = 0
with the initial conditions

U(x,T;T) = 0, \qquad U_t(x,T;T) = f(x,T)\, dT.
A solution to this equation is achieved by straightforward integration:

U(x,t;T) = \left( \frac{1}{2c} \int_{x - c(t-T)}^{x + c(t-T)} f(\xi, T)\, d\xi \right) dT.
(The expression in parentheses is just (P^T f)(x,t) in the notation of the general method above.) So a solution of the original initial value problem is obtained by starting with the solution of the problem with the same prescribed initial values but with zero external force, and adding to that (integrating) the contributions from the added force in the time intervals from T to T + dT:

u(x,t) = \frac{u_0(x+ct) + u_0(x-ct)}{2} + \frac{1}{2c} \int_{x-ct}^{x+ct} v_0(\xi)\, d\xi + \frac{1}{2c} \int_0^t \int_{x - c(t-T)}^{x + c(t-T)} f(\xi, T)\, d\xi\, dT.
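As a concrete check of this formula (an illustrative example, not part of the original derivation), take c = 1, zero initial displacement and velocity, and the forcing f(x, t) = \sin x. The Duhamel integral then evaluates in closed form:

u(x,t) = \frac{1}{2} \int_0^t \int_{x-(t-T)}^{x+(t-T)} \sin\xi\, d\xi\, dT = \int_0^t \sin x\, \sin(t-T)\, dT = (1 - \cos t)\, \sin x,

which indeed satisfies u_{tt} - u_{xx} = \sin x with u(x,0) = u_t(x,0) = 0.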
Duhamel's principle is the result that the solution to an inhomogeneous, linear, partial differential equation can be obtained by first finding the solution for a step input, and then superposing using Duhamel's integral. Suppose we have a constant-coefficient, m-th order inhomogeneous ordinary differential equation

P(\partial_t)\, u(t) = F(t), \qquad \partial_t^j u(0) = 0, \quad 0 \le j \le m-1,

where

P(\partial_t) = a_m \partial_t^m + a_{m-1} \partial_t^{m-1} + \cdots + a_1 \partial_t + a_0, \qquad a_m \neq 0.
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First let G solve

P(\partial_t)\, G = 0, \qquad \partial_t^j G(0) = 0 \text{ for } 0 \le j \le m-2, \qquad \partial_t^{m-1} G(0) = \frac{1}{a_m}.
Define H = G\, \chi_{[0,\infty)}, with \chi_{[0,\infty)} being the characteristic function of the interval [0,\infty). Then we have

P(\partial_t)\, H = \delta
in the sense of distributions, where \delta is the Dirac delta function. Therefore

u(t) = (H * F)(t) = \int_0^\infty G(\tau)\, F(t - \tau)\, d\tau = \int_{-\infty}^{t} G(t - \tau)\, F(\tau)\, d\tau
solves the ODE.
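The convolution formula can be checked numerically on a simple example. Taking P(\partial_t) = \partial_t^2 + 1 (so m = 2, a_m = 1, and G(t) = \sin t) and the forcing F(t) = t, the exact solution with zero initial data is u(t) = t - \sin t; the choices of P and F here are illustrative, not taken from the text.

```python
# Numerical check of u(t) = integral_0^t G(t - tau) F(tau) dtau for u'' + u = F(t)
# with zero initial data.  Here G solves G'' + G = 0, G(0) = 0, G'(0) = 1,
# i.e. G(t) = sin t, and F(t) = t gives the exact solution u(t) = t - sin t.
import numpy as np

G = np.sin                 # fundamental solution of the homogeneous ODE
F = lambda tau: tau        # example forcing term

def duhamel(t, n=4001):
    """Approximate the convolution integral with the trapezoid rule."""
    tau = np.linspace(0.0, t, n)
    y = G(t - tau) * F(tau)
    h = tau[1] - tau[0]
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

for t in (0.5, 1.0, 2.0, 5.0):
    print(t, duhamel(t), t - np.sin(t))   # the last two columns agree closely
```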
More generally, suppose we have a constant-coefficient inhomogeneous partial differential equation

P(\partial_t, D_x)\, u(x,t) = F(x,t),

where

D_x = \frac{1}{i} \frac{\partial}{\partial x}.
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First, taking the Fourier transform in x we have

P(\partial_t, \xi)\, \hat{u}(\xi, t) = \hat{F}(\xi, t).
Assume that P(\partial_t, \xi) is an m-th order ODE in t. Let a_m be the coefficient of the highest order term of P(\partial_t, \xi). Now for every \xi let G(\xi, t) solve

P(\partial_t, \xi)\, G(\xi, t) = 0, \qquad \partial_t^j G(\xi, 0) = 0 \text{ for } 0 \le j \le m-2, \qquad \partial_t^{m-1} G(\xi, 0) = \frac{1}{a_m}.
Define H(\xi, t) = G(\xi, t)\, \chi_{[0,\infty)}(t). We then have

P(\partial_t, \xi)\, H(\xi, t) = \delta(t)

in the sense of distributions. Therefore

\hat{u}(\xi, t) = (H(\xi, \cdot) * \hat{F}(\xi, \cdot))(t) = \int_0^\infty G(\xi, \tau)\, \hat{F}(\xi, t - \tau)\, d\tau = \int_{-\infty}^{t} G(\xi, t - \tau)\, \hat{F}(\xi, \tau)\, d\tau

solves the PDE (after transforming back to x).
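As an illustration of this recipe (a standard example, not drawn from the passage above), consider the one-dimensional heat equation u_t - u_{xx} = F, i.e. P(\partial_t, D_x) = \partial_t + D_x^2, so that P(\partial_t, \xi) = \partial_t + \xi^2 is a first-order ODE in t with a_1 = 1. The construction gives G(\xi, t) = e^{-\xi^2 t}, hence

\hat{u}(\xi, t) = \int_{-\infty}^{t} e^{-\xi^2 (t - \tau)}\, \hat{F}(\xi, \tau)\, d\tau,

which reduces to an integral from 0 to t when F vanishes for negative times; transforming back in x recovers the convolution of F against the heat kernel \Phi(x,t) = (4\pi t)^{-1/2} e^{-x^2/(4t)}, in agreement with the heat-equation formula discussed earlier.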