In mathematics, a **partial differential equation** (**PDE**) is an equation which imposes relations between the various partial derivatives of a multivariable function.


The function is often thought of as an "unknown" to be solved for, similarly to how x is thought of as an unknown number to be solved for in an algebraic equation like *x*^{2} − 3*x* + 2 = 0. However, it is usually impossible to write down explicit formulas for solutions of partial differential equations. There is, correspondingly, a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations. Among the many open questions are the existence and smoothness of solutions to the Navier–Stokes equations, named as one of the Millennium Prize Problems in 2000.

Partial differential equations are ubiquitous in mathematically oriented scientific fields, such as physics and engineering. For instance, they are foundational in the modern scientific understanding of sound, heat, diffusion, electrostatics, electrodynamics, fluid dynamics, elasticity, general relativity, and quantum mechanics. They also arise from many purely mathematical considerations, such as differential geometry and the calculus of variations; among other notable applications, they are the fundamental tool in the proof of the Poincaré conjecture from geometric topology.

Partly due to this variety of sources, there is a wide spectrum of different types of partial differential equations, and methods have been developed for dealing with many of the individual equations which arise. As such, it is usually acknowledged that there is no "general theory" of partial differential equations, with specialist knowledge being somewhat divided between several essentially distinct subfields.^{ [1] }

Ordinary differential equations form a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion. More classical topics, on which there is still much active research, include elliptic and parabolic partial differential equations, fluid mechanics, Boltzmann equations, and dispersive partial differential equations.

One says that a function *u*(*x*, *y*, *z*) of three variables is "*harmonic*" or "a solution of *the Laplace equation*" if it satisfies the condition

∂²*u*/∂*x*² + ∂²*u*/∂*y*² + ∂²*u*/∂*z*² = 0.
Such functions were widely studied in the nineteenth century due to their relevance for classical mechanics. If explicitly given a function, it is usually a matter of straightforward computation to check whether or not it is harmonic. For instance

*u*(*x*, *y*, *z*) = 1/√(*x*² − 2*x* + *y*² + *z*² + 1)

and

*u*(*x*, *y*, *z*) = 2*x*² − *y*² − *z*²

are both harmonic while

*u*(*x*, *y*, *z*) = sin(*xy*) + *z*
is not. It may be surprising that the two given examples of harmonic functions are of such a strikingly different form from one another. This is a reflection of the fact that they are *not*, in any immediate way, both special cases of a "general solution formula" of the Laplace equation. This is in striking contrast to the case of ordinary differential equations (ODEs) roughly similar to the Laplace equation, with the aim of many introductory textbooks being to find algorithms leading to general solution formulas. For the Laplace equation, as for a large number of partial differential equations, such solution formulas fail to exist.
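Harmonicity can also be checked numerically. The following sketch approximates the Laplacian with central second differences; the helper name, sample point, and test functions (a harmonic quadratic polynomial and a non-harmonic function) are illustrative choices, not part of the article:

```python
import math

def laplacian3(f, x, y, z, h=1e-3):
    """Central-difference approximation of u_xx + u_yy + u_zz at (x, y, z)."""
    d2x = (f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) / h**2
    d2y = (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) / h**2
    d2z = (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)) / h**2
    return d2x + d2y + d2z

harmonic = lambda x, y, z: 2 * x**2 - y**2 - z**2   # Laplacian = 4 - 2 - 2 = 0
not_harmonic = lambda x, y, z: math.sin(x * y) + z  # Laplacian = -(x^2 + y^2) sin(xy)

print(laplacian3(harmonic, 0.3, 0.5, 0.7))      # approximately 0
print(laplacian3(not_harmonic, 0.3, 0.5, 0.7))  # clearly nonzero
```

For a quadratic polynomial the central difference is exact up to rounding error, so the first value is essentially zero while the second is not.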

The nature of this failure can be seen more concretely in the case of the following PDE: for a function *v*(*x*, *y*) of two variables, consider the equation

∂²*v*/∂*x*∂*y* = 0.
It can be directly checked that any function v of the form *v*(*x*, *y*) = *f*(*x*) + *g*(*y*), for any single-variable functions f and g whatsoever, will satisfy this condition. This is far beyond the choices available in ODE solution formulas, which typically allow the free choice of some numbers. In the study of PDE, one generally has the free choice of functions.
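This can be confirmed numerically for any particular choice of f and g; here is a quick finite-difference check of the mixed partial derivative, with an arbitrary illustrative choice of f and g:

```python
import math

def mixed_partial(v, x, y, h=1e-3):
    """Central-difference approximation of d^2 v / (dx dy) at (x, y)."""
    return (v(x + h, y + h) - v(x + h, y - h)
            - v(x - h, y + h) + v(x - h, y - h)) / (4 * h**2)

v = lambda x, y: math.exp(x) + math.sin(y)  # f(x) = e^x, g(y) = sin y
print(mixed_partial(v, 0.4, 1.1))           # approximately 0
```

Because the cross terms cancel exactly for any v of the form f(x) + g(y), the result is zero up to rounding error.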

The nature of this choice varies from PDE to PDE. To understand it for any given equation, *existence and uniqueness theorems* are usually important organizational principles. In many introductory textbooks, the role of existence and uniqueness theorems for ODE can be somewhat opaque; the existence half is usually unnecessary, since one can directly check any proposed solution formula, while the uniqueness half is often only present in the background in order to ensure that a proposed solution formula is as general as possible. By contrast, for PDE, existence and uniqueness theorems are often the only means by which one can navigate through the plethora of different solutions at hand. For this reason, they are also fundamental when carrying out a purely numerical simulation, as one must have an understanding of what data is to be prescribed by the user and what is to be left to the computer to calculate.

To discuss such existence and uniqueness theorems, it is necessary to be precise about the domain of the "unknown function." Otherwise, speaking only in terms such as "a function of two variables," it is impossible to meaningfully formulate the results. That is, the domain of the unknown function must be regarded as part of the structure of the PDE itself.

The following provides two classic examples of such existence and uniqueness theorems. Even though the two PDE in question are so similar, there is a striking difference in behavior: for the first PDE, one has the free prescription of a single function, while for the second PDE, one has the free prescription of two functions.

- Let *B* denote the unit-radius disk around the origin in the plane. For any continuous function *U* on the unit circle, there is exactly one function *u* on *B* such that ∂²*u*/∂*x*² + ∂²*u*/∂*y*² = 0 and whose restriction to the unit circle is given by *U*.

- For any functions *f* and *g* on the real line ℝ, there is exactly one function *u* on ℝ × (−1, 1) such that ∂²*u*/∂*x*² − ∂²*u*/∂*y*² = 0 and with *u*(*x*, 0) = *f*(*x*) and ∂*u*/∂*y*(*x*, 0) = *g*(*x*) for all values of *x*.

Even more phenomena are possible. For instance, the following PDE, arising naturally in the field of differential geometry, illustrates an example where there is a simple and completely explicit solution formula, but with the free choice of only three numbers and not even one function.

- If *u* is a function on ℝ² with

∂/∂*x*( (∂*u*/∂*x*)/√(1 + (∂*u*/∂*x*)² + (∂*u*/∂*y*)²) ) + ∂/∂*y*( (∂*u*/∂*y*)/√(1 + (∂*u*/∂*x*)² + (∂*u*/∂*y*)²) ) = 0,

- then there are numbers *a*, *b*, and *c* with *u*(*x*, *y*) = *ax* + *by* + *c*.

In contrast to the earlier examples, this PDE is **nonlinear,** owing to the square roots and the squares. A **linear** PDE is one such that, if it is homogeneous, the sum of any two solutions is also a solution, and every constant multiple of a solution is also a solution.

Well-posedness refers to a common schematic package of information about a PDE. To say that a PDE is well-posed, one must have:

- an existence and uniqueness theorem, asserting that by the prescription of some freely chosen functions, one can single out one specific solution of the PDE;
- a stability requirement: by continuously changing the free choices, one continuously changes the corresponding solution.

This is, by the necessity of being applicable to several different PDE, somewhat vague. The requirement of "continuity," in particular, is ambiguous, since there are usually many inequivalent means by which it can be rigorously defined. It is, however, somewhat unusual to study a PDE without specifying a way in which it is well-posed.

In a slightly weak form, the Cauchy–Kowalevski theorem essentially states that if the terms in a partial differential equation are all made up of analytic functions, then on certain regions, there necessarily exist solutions of the PDE which are also analytic functions. Although this is a fundamental result, in many situations it is not useful since one cannot easily control the domain of the solutions produced. Furthermore, there are known examples of linear partial differential equations whose coefficients have derivatives of all orders (which are nevertheless not analytic) but which have no solutions at all: this surprising example was discovered by Hans Lewy in 1957. So the Cauchy–Kowalevski theorem is necessarily limited in its scope to analytic functions. This context precludes many phenomena of both physical and mathematical interest.

When writing PDEs, it is common to denote partial derivatives using subscripts. For example:

*u*_{x} = ∂*u*/∂*x*,  *u*_{xx} = ∂²*u*/∂*x*²,  *u*_{xy} = ∂²*u*/∂*y*∂*x* = ∂/∂*y*(∂*u*/∂*x*).
In the general situation that *u* is a function of *n* variables, *u*_{i} denotes the first partial derivative with respect to the *i*-th input, *u*_{ij} denotes the second partial derivative with respect to the *i*-th and *j*-th inputs, and so on.

The Greek letter Δ denotes the Laplace operator; if *u* is a function of *n* variables, then

Δ*u* = *u*_{11} + *u*_{22} + ⋯ + *u*_{nn}.
In the physics literature, the Laplace operator is often denoted by ∇^{2}; in the mathematics literature, ∇^{2}*u* may also denote the Hessian matrix of u.

A PDE is called **linear** if it is linear in the unknown and its derivatives. For example, for a function u of x and y, a second order linear PDE is of the form

*a*_{1}(*x*, *y*)*u*_{xx} + *a*_{2}(*x*, *y*)*u*_{xy} + *a*_{3}(*x*, *y*)*u*_{yx} + *a*_{4}(*x*, *y*)*u*_{yy} + *a*_{5}(*x*, *y*)*u*_{x} + *a*_{6}(*x*, *y*)*u*_{y} + *a*_{7}(*x*, *y*)*u* = *f*(*x*, *y*),

where *a*_{i} and *f* are functions of the independent variables *x* and *y* only. If *f* is zero everywhere, the linear PDE is **homogeneous**; otherwise it is **inhomogeneous**.

Nearest to linear PDEs are **semilinear** PDEs, where the highest order derivatives appear only as linear terms, with coefficients that are functions of the independent variables only. The lower order derivatives and the unknown function may appear arbitrarily otherwise. For example, a general second order semilinear PDE in two variables is

*a*_{1}(*x*, *y*)*u*_{xx} + *a*_{2}(*x*, *y*)*u*_{xy} + *a*_{3}(*x*, *y*)*u*_{yx} + *a*_{4}(*x*, *y*)*u*_{yy} + *F*(*u*_{x}, *u*_{y}, *u*, *x*, *y*) = 0.
In a **quasilinear** PDE the highest order derivatives likewise appear only as linear terms, but with coefficients possibly functions of the unknown and lower-order derivatives:

*a*_{1}(*u*_{x}, *u*_{y}, *u*, *x*, *y*)*u*_{xx} + *a*_{2}(*u*_{x}, *u*_{y}, *u*, *x*, *y*)*u*_{xy} + *a*_{3}(*u*_{x}, *u*_{y}, *u*, *x*, *y*)*u*_{yx} + *a*_{4}(*u*_{x}, *u*_{y}, *u*, *x*, *y*)*u*_{yy} + *F*(*u*_{x}, *u*_{y}, *u*, *x*, *y*) = 0.
Many of the fundamental PDEs in physics are quasilinear, such as the Einstein equations of general relativity and the Navier–Stokes equations describing fluid motion.

A PDE without any linearity properties is called **fully nonlinear**, and possesses nonlinearities on one or more of the highest-order derivatives. An example is the Monge–Ampère equation, which arises in differential geometry.^{ [2] }

Elliptic, parabolic, and hyperbolic partial differential equations of order two have been widely studied since the beginning of the twentieth century. However, there are many other important types of PDE, including the Korteweg–de Vries equation. There are also hybrids such as the Euler–Tricomi equation, which vary from elliptic to hyperbolic for different regions of the domain. There are also important extensions of these basic types to higher-order PDE, but such knowledge is more specialized.

The elliptic/parabolic/hyperbolic classification provides a guide to appropriate initial and boundary conditions and to the smoothness of the solutions. Assuming *u*_{xy} = *u*_{yx}, the general linear second-order PDE in two independent variables has the form

*Au*_{xx} + 2*Bu*_{xy} + *Cu*_{yy} + ⋯ (lower order terms) = 0,

where the coefficients *A*, *B*, *C* may depend upon *x* and *y*. If *A*² + *B*² + *C*² > 0 over a region of the *xy*-plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:

*Ax*² + 2*Bxy* + *Cy*² + ⋯ = 0.
More precisely, replacing ∂_{x} by X, and likewise for other variables (formally this is done by a Fourier transform), converts a constant-coefficient PDE into a polynomial of the same degree, with the terms of the highest degree (a homogeneous polynomial, here a quadratic form) being most significant for the classification.

Just as one classifies conic sections and quadratic forms into parabolic, hyperbolic, and elliptic based on the discriminant *B*^{2} − 4*AC*, the same can be done for a second-order PDE at a given point. However, the discriminant in a PDE is given by *B*^{2} − *AC* due to the convention of the xy term being 2*B* rather than B; formally, the discriminant (of the associated quadratic form) is (2*B*)^{2} − 4*AC* = 4(*B*^{2} − *AC*), with the factor of 4 dropped for simplicity.

- *B*² − *AC* < 0 (*elliptic partial differential equation*): Solutions of elliptic PDEs are as smooth as the coefficients allow, within the interior of the region where the equation and solutions are defined. For example, solutions of Laplace's equation are analytic within the domain where they are defined, but solutions may assume boundary values that are not smooth. The motion of a fluid at subsonic speeds can be approximated with elliptic PDEs, and the Euler–Tricomi equation is elliptic where *x* < 0.
- *B*² − *AC* = 0 (*parabolic partial differential equation*): Equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where *x* = 0.
- *B*² − *AC* > 0 (*hyperbolic partial differential equation*): Hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where *x* > 0.
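The rule above is mechanical and can be transcribed directly; the following sketch classifies a constant-coefficient second-order PDE written as *Au*_{xx} + 2*Bu*_{xy} + *Cu*_{yy} + ⋯ = 0 (the function name and sample coefficients are illustrative):

```python
def classify(A, B, C):
    """Classify by the sign of the discriminant B^2 - AC."""
    disc = B * B - A * C
    if disc < 0:
        return "elliptic"
    if disc == 0:
        return "parabolic"
    return "hyperbolic"

print(classify(1, 0, 1))    # Laplace equation u_xx + u_yy = 0
print(classify(1, 0, 0))    # heat-like equation: no u_yy term
print(classify(1, 0, -1))   # wave equation u_xx - u_yy = 0
# Euler-Tricomi u_xx = x u_yy has A = 1, B = 0, C = -x: its type varies with x.
print(classify(1, 0, -0.5)) # a point with x > 0
```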

If there are *n* independent variables *x*_{1}, *x*_{2}, …, *x*_{n}, a general linear partial differential equation of second order has the form

*Lu* = ∑_{i=1}^{n} ∑_{j=1}^{n} *a*_{i,j} ∂²*u*/∂*x*_{i}∂*x*_{j} + (lower-order terms) = 0.
The classification depends upon the signature of the eigenvalues of the coefficient matrix *a*_{i,j}.

- Elliptic: the eigenvalues are all positive or all negative.
- Parabolic: the eigenvalues are all positive or all negative, save one that is zero.
- Hyperbolic: there is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
- Ultrahyperbolic: there is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. There is only a limited theory for ultrahyperbolic equations (Courant and Hilbert, 1962).

The classification of partial differential equations can be extended to systems of first-order equations, where the unknown *u* is now a vector with *m* components, and the coefficient matrices *A*_{ν} are *m* by *m* matrices for *ν* = 1, 2, …, *n*. The partial differential equation takes the form

*Lu* = ∑_{ν=1}^{n} *A*_{ν} ∂*u*/∂*x*_{ν} + *B* = 0,

where the coefficient matrices *A*_{ν} and the vector *B* may depend upon *x* and *u*. If a hypersurface *S* is given in the implicit form

*φ*(*x*_{1}, *x*_{2}, …, *x*_{n}) = 0,

where *φ* has a non-zero gradient, then *S* is a **characteristic surface** for the operator *L* at a given point if the characteristic form vanishes:

*Q*(∂*φ*/∂*x*_{1}, …, ∂*φ*/∂*x*_{n}) = det[ ∑_{ν=1}^{n} *A*_{ν} ∂*φ*/∂*x*_{ν} ] = 0.
The geometric interpretation of this condition is as follows: if data for u are prescribed on the surface S, then it may be possible to determine the normal derivative of u on S from the differential equation. If the data on S and the differential equation determine the normal derivative of u on S, then S is non-characteristic. If the data on S and the differential equation *do not* determine the normal derivative of u on S, then the surface is **characteristic**, and the differential equation restricts the data on S: the differential equation is *internal* to S.

- A first-order system *Lu* = 0 is *elliptic* if no surface is characteristic for *L*: the values of *u* on *S* and the differential equation always determine the normal derivative of *u* on *S*.
- A first-order system is *hyperbolic* at a point if there is a **spacelike** surface *S* with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation *Q*(*λξ* + *η*) = 0 has *m* real roots *λ*_{1}, *λ*_{2}, …, *λ*_{m}. The system is **strictly hyperbolic** if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form *Q*(*ζ*) = 0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has *m* sheets, and the axis *ζ* = *λξ* runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.

Linear PDEs can be reduced to systems of ordinary differential equations by the important technique of separation of variables. This technique rests on a property of solutions to differential equations: if one can find any solution that solves the equation and satisfies the boundary conditions, then it is *the* solution (this also applies to ODEs). We assume as an ansatz that the dependence of a solution on space and time can be written as a product of terms that each depend on a single variable, and then see if this can be made to solve the problem.^{ [3] }

In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ordinary differential equation if in one variable – these are in turn easier to solve.

This is possible for simple PDEs, which are called separable partial differential equations, and the domain is generally a rectangle (a product of intervals). Separable PDEs correspond to diagonal matrices – thinking of "the value for fixed x" as a coordinate, each coordinate can be understood separately.
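As an illustration, the following is the standard textbook computation (not specific to this article) for the heat equation on an interval with zero boundary values:

```latex
% Separation of variables for u_t = k u_{xx} on (0, L), u(0,t) = u(L,t) = 0.
% Substituting the ansatz u(x,t) = X(x)T(t) and dividing by k X T gives
\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda,
% a constant, since the two sides depend on different variables.  This
% yields two ODEs,
X'' + \lambda X = 0, \qquad T' + \lambda k\,T = 0,
% and the boundary conditions select \lambda_n = (n\pi/L)^2, hence
u(x,t) = \sum_{n=1}^{\infty} b_n \sin\!\frac{n\pi x}{L}\; e^{-k (n\pi/L)^2 t}.
```

The coefficients *b*_{n} are then fixed by expanding the initial data in the sine eigenbasis.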

This generalizes to the method of characteristics, and is also used in integral transforms.

In special cases, one can find characteristic curves on which the equation reduces to an ODE – changing coordinates in the domain to straighten these curves allows separation of variables, and is called the method of characteristics.
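A minimal sketch of this idea for the transport equation *u*_{t} + *cu*_{x} = 0: along each characteristic line *x*(*t*) = *x*₀ + *ct* the solution is constant, so *u*(*x*, *t*) = *u*₀(*x* − *ct*). The function name and initial profile below are illustrative choices:

```python
import math

def solve_transport(u0, c, x, t):
    """Trace the characteristic through (x, t) back to the initial line t = 0."""
    return u0(x - c * t)

u0 = lambda x: math.exp(-x * x)          # illustrative initial profile
c = 2.0
print(solve_transport(u0, c, 3.0, 1.5))  # characteristic foot x0 = 0, so u = u0(0) = 1.0

# Finite-difference check that the formula satisfies u_t + c u_x = 0:
h = 1e-5
x0, t0 = 0.7, 0.3
u_t = (solve_transport(u0, c, x0, t0 + h) - solve_transport(u0, c, x0, t0 - h)) / (2 * h)
u_x = (solve_transport(u0, c, x0 + h, t0) - solve_transport(u0, c, x0 - h, t0)) / (2 * h)
print(u_t + c * u_x)                     # approximately 0
```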

More generally, one may find characteristic surfaces.

An integral transform may transform the PDE to a simpler one, in particular, a separable PDE. This corresponds to diagonalizing an operator.

An important example of this is Fourier analysis, which diagonalizes the heat equation using the eigenbasis of sinusoidal waves.

If the domain is finite or periodic, an infinite sum of solutions such as a Fourier series is appropriate, but an integral of solutions such as a Fourier integral is generally required for infinite domains. The fundamental solution of the heat equation (the solution for a point source) is an example of the use of a Fourier integral.
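The one-dimensional heat kernel Φ(*x*, *t*) = (4π*kt*)^{−1/2} e^{−*x*²/(4*kt*)} can be checked directly against the heat equation *u*_{t} = *ku*_{xx}; the following finite-difference sketch (sample point and value of *k* are illustrative) does so:

```python
import math

k = 0.5  # illustrative diffusivity

def phi(x, t):
    """Heat kernel: fundamental solution of u_t = k u_xx for t > 0."""
    return math.exp(-x * x / (4 * k * t)) / math.sqrt(4 * math.pi * k * t)

h = 1e-4
x, t = 0.3, 0.8
phi_t = (phi(x, t + h) - phi(x, t - h)) / (2 * h)
phi_xx = (phi(x + h, t) - 2 * phi(x, t) + phi(x - h, t)) / h**2

print(phi_t - k * phi_xx)  # approximately 0
```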

Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example, the Black–Scholes equation

∂*V*/∂*t* + ½*σ*²*S*² ∂²*V*/∂*S*² + *rS* ∂*V*/∂*S* − *rV* = 0

is reducible to the heat equation

∂*u*/∂*τ* = ∂²*u*/∂*x*²

by the change of variables^{[4]}

*V*(*S*, *t*) = *Kv*(*x*, *τ*), *x* = ln(*S*/*K*), *τ* = ½*σ*²(*T* − *t*), *v*(*x*, *τ*) = e^{−½(*k*−1)*x* − ¼(*k*+1)²*τ*} *u*(*x*, *τ*), where *k* = 2*r*/*σ*².
Inhomogeneous equations can often be solved (and for constant-coefficient PDEs, always solved) by finding the fundamental solution (the solution for a point source) and then taking the convolution with the boundary conditions to get the solution.

This is analogous in signal processing to understanding a filter by its impulse response.

The superposition principle applies to any linear system, including linear systems of PDEs. A common visualization of this concept is the interaction of two waves in phase being combined to result in a greater amplitude, for example sin *x* + sin *x* = 2 sin *x*. The same principle can be observed in PDEs where the solutions may be real or complex and additive. If *u*_{1} and *u*_{2} are solutions of a linear PDE in some function space *R*, then *u* = *c*_{1}*u*_{1} + *c*_{2}*u*_{2} with any constants *c*_{1} and *c*_{2} is also a solution of that PDE in the same function space.

There are no generally applicable methods to solve nonlinear PDEs. Still, existence and uniqueness results (such as the Cauchy–Kowalevski theorem) are often possible, as are proofs of important qualitative and quantitative properties of solutions (getting these results is a major part of analysis). Computational methods for nonlinear PDEs, such as the split-step method, exist for specific equations like the nonlinear Schrödinger equation.

Nevertheless, some techniques can be used for several types of equations. The h-principle is the most powerful method to solve underdetermined equations. The Riquier–Janet theory is an effective method for obtaining information about many analytic overdetermined systems.

The method of characteristics can be used in some very special cases to solve nonlinear partial differential equations.^{ [5] }

In some cases, a PDE can be solved via perturbation analysis in which the solution is considered to be a correction to an equation with a known solution. Alternatives are numerical analysis techniques from simple finite difference schemes to the more mature multigrid and finite element methods. Many interesting problems in science and engineering are solved in this way using computers, sometimes high performance supercomputers.

From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source, and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of contact transformations.

A general approach to solving PDEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear partial differential equations for generating integrable equations, to find their Lax pairs, recursion operators, and Bäcklund transforms, and finally to find exact analytic solutions to the PDE.

Symmetry methods have been applied to study differential equations arising in mathematics, physics, engineering, and many other disciplines.

The Adomian decomposition method,^{[6]} the Lyapunov artificial small parameter method, and the homotopy perturbation method are all special cases of the more general homotopy analysis method.^{[7]} These are series expansion methods and, except for the Lyapunov method, are independent of small physical parameters, in contrast to well-known perturbation theory, giving these methods greater flexibility and solution generality.

The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM), and finite difference methods (FDM), as well as meshfree methods, which were developed to solve problems where the aforementioned methods are limited. The FEM has a prominent position among these methods, especially its exceptionally efficient higher-order version, hp-FEM. Other hybrid versions of FEM and meshfree methods include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), element-free Galerkin method (EFGM), and interpolating element-free Galerkin method (IEFGM).

The finite element method (FEM) (its practical application often known as finite element analysis (FEA)) is a numerical technique for finding approximate solutions of partial differential equations (PDE) as well as of integral equations.^{ [8] }^{ [9] } The solution approach is based either on eliminating the differential equation completely (steady state problems), or rendering the PDE into an approximating system of ordinary differential equations, which are then numerically integrated using standard techniques such as Euler's method, Runge–Kutta, etc.

Finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.
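A minimal sketch of this idea (an explicit forward-time, centered-space scheme for the heat equation *u*_{t} = *ku*_{xx} on (0, 1) with zero boundary values; the grid size and parameters are illustrative, and stability requires *k*Δ*t*/Δ*x*² ≤ ½):

```python
import math

def heat_ftcs(u, k, dx, dt, steps):
    """Advance grid values of u by the explicit FTCS finite-difference scheme."""
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = u[i] + k * dt / dx**2 * (u[i-1] - 2 * u[i] + u[i+1])
        u = new
    return u

n = 50
dx = 1.0 / n
k = 1.0
dt = 0.4 * dx**2 / k   # respects the stability limit k*dt/dx^2 <= 1/2
steps = 200
u = [math.sin(math.pi * i * dx) for i in range(n + 1)]  # a single sine mode
u = heat_ftcs(u, k, dx, dt, steps)

# The exact solution decays this mode by exp(-k pi^2 t):
t = steps * dt
print(u[n // 2], math.exp(-k * math.pi**2 * t))  # close agreement
```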

Similar to the finite difference method or finite element method, values are calculated at discrete places on a meshed geometry. "Finite volume" refers to the small volume surrounding each node point on a mesh. In the finite volume method, surface integrals in a partial differential equation that contain a divergence term are converted to volume integrals, using the divergence theorem. These terms are then evaluated as fluxes at the surfaces of each finite volume. Because the flux entering a given volume is identical to that leaving the adjacent volume, these methods conserve mass by design.
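The conservation property can be seen in a few lines: in the following sketch (a first-order upwind finite-volume scheme for *u*_{t} + (*cu*)_{x} = 0 with periodic boundaries; the setup is illustrative), every face flux leaves one cell and enters its neighbour, so the total "mass" is conserved exactly:

```python
def fv_advect_step(u, c, dx, dt):
    """One finite-volume step: update each cell average by its face-flux difference."""
    n = len(u)
    # upwind face flux for c > 0: the flux at face i comes from cell i-1
    flux = [c * u[(i - 1) % n] for i in range(n)]
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

n, c = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / c                           # CFL condition
u = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]  # a square pulse
mass0 = sum(u) * dx
for _ in range(300):
    u = fv_advect_step(u, c, dx, dt)
print(sum(u) * dx, mass0)                   # identical up to roundoff
```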

The energy method is a mathematical procedure that can be used to verify well-posedness of initial-boundary-value problems.^{[10]} In the following example the energy method is used to decide where and which boundary conditions should be imposed such that the resulting IBVP is well-posed. Consider the one-dimensional hyperbolic PDE given by

∂*u*/∂*t* + *α* ∂*u*/∂*x* = 0,  *x* ∈ (*a*, *b*),  *t* > 0,

where *α* ≠ 0 is a constant and *u*(*x*, *t*) is an unknown function with initial condition *u*(*x*, 0) = *f*(*x*). Multiplying with *u* and integrating over the domain gives

∫_{a}^{b} *u* ∂*u*/∂*t* d*x* + *α* ∫_{a}^{b} *u* ∂*u*/∂*x* d*x* = 0.

Using that

∫_{a}^{b} *u* ∂*u*/∂*t* d*x* = ½ d/d*t* ‖*u*‖²  and  *α* ∫_{a}^{b} *u* ∂*u*/∂*x* d*x* = ½*α* (*u*(*b*, *t*)² − *u*(*a*, *t*)²),

where integration by parts has been used for the second relationship, we get

d/d*t* ‖*u*‖² + *α* (*u*(*b*, *t*)² − *u*(*a*, *t*)²) = 0.

Here ‖·‖ denotes the standard L² norm. For well-posedness we require that the energy of the solution is non-increasing, i.e. that d/d*t* ‖*u*‖² ≤ 0, which is achieved by specifying *u* at *x* = *a* if *α* > 0 and at *x* = *b* if *α* < 0. This corresponds to only imposing boundary conditions at the inflow. Note that well-posedness allows for growth in terms of data (initial and boundary), and thus it is sufficient to show that d/d*t* ‖*u*‖² ≤ 0 holds when all data is set to zero.
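The energy estimate for this model problem can be observed discretely. The following sketch (a first-order upwind scheme with the boundary condition imposed at the inflow and zero boundary data; the grid and parameters are illustrative) checks that the discrete L² norm never grows:

```python
import math

def step(u, alpha, dx, dt):
    """One upwind step for u_t + alpha u_x = 0 (alpha > 0); u[0] is the inflow value."""
    return [0.0] + [u[i] - alpha * dt / dx * (u[i] - u[i-1])
                    for i in range(1, len(u))]

n = 100
dx = 1.0 / n
alpha = 1.0
dt = 0.5 * dx / alpha                      # CFL condition
u = [math.sin(2 * math.pi * i * dx) for i in range(n + 1)]
u[0] = 0.0                                 # zero inflow data

norms = []
for _ in range(200):
    norms.append(math.sqrt(dx * sum(v * v for v in u)))
    u = step(u, alpha, dx, dt)

print(all(b <= a + 1e-12 for a, b in zip(norms, norms[1:])))  # True: energy non-increasing
```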

**Some common PDEs**

- Heat equation
- Wave equation
- Laplace's equation
- Helmholtz equation
- Klein–Gordon equation
- Poisson's equation
- Navier–Stokes equations
- Burgers' equation

**Types of boundary conditions**

- Dirichlet boundary condition
- Neumann boundary condition
- Robin boundary condition
- Cauchy problem

**Various topics**

- Jet bundle
- Laplace transform applied to differential equations
- List of dynamical systems and differential equations topics
- Matrix differential equation
- Numerical partial differential equations
- Partial differential algebraic equation
- Recurrence relation
- Stochastic processes and boundary value problems

1. Klainerman, Sergiu (2010). "PDE as a Unified Subject". In Alon, N.; Bourgain, J.; Connes, A.; Gromov, M.; Milman, V. (eds.). *Visions in Mathematics*. Modern Birkhäuser Classics. Basel: Birkhäuser. pp. 279–315. doi:10.1007/978-3-0346-0422-2_10. ISBN 978-3-0346-0421-5.
2. Klainerman, Sergiu (2008). "Partial Differential Equations". In Gowers, Timothy; Barrow-Green, June; Leader, Imre (eds.). *The Princeton Companion to Mathematics*. Princeton University Press. pp. 455–483.
3. Gershenfeld, Neil (2000). *The Nature of Mathematical Modeling* (reprinted with corrections ed.). Cambridge: Cambridge University Press. p. 27. ISBN 0521570956.
4. Wilmott, Paul; Howison, Sam; Dewynne, Jeff (1995). *The Mathematics of Financial Derivatives*. Cambridge University Press. pp. 76–81. ISBN 0-521-49789-2.
5. Logan, J. David (1994). "First Order Equations and Characteristics". *An Introduction to Nonlinear Partial Differential Equations*. New York: John Wiley & Sons. pp. 51–79. ISBN 0-471-59916-6.
6. Adomian, G. (1994). *Solving Frontier Problems of Physics: The Decomposition Method*. Kluwer Academic Publishers. ISBN 9789401582896.
7. Liao, S.J. (2003). *Beyond Perturbation: Introduction to the Homotopy Analysis Method*. Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-407-X.
8. Solin, P. (2005). *Partial Differential Equations and the Finite Element Method*. Hoboken, NJ: J. Wiley & Sons. ISBN 0-471-72070-4.
9. Solin, P.; Segeth, K.; Dolezel, I. (2003). *Higher-Order Finite Element Methods*. Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-438-X.
10. Gustafsson, Bertil (2008). *High Order Difference Methods for Time Dependent PDE*. Springer Series in Computational Mathematics. **38**. Springer. doi:10.1007/978-3-540-74993-6. ISBN 978-3-540-74992-9.

In mathematics, an **equation** is a statement that asserts the equality of two expressions, which are connected by the equals sign "=". The word *equation* and its cognates in other languages may have subtly different meanings; for example, in French an *équation* is defined as containing one or more variables, while in English, any equality is an equation.

In mathematics and science, a **nonlinear system** is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists because most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.

In mathematics, a **linear differential equation** is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is an equation of the form

In mathematics, **separation of variables** is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

Second-order linear partial differential equations (PDEs) are classified as either **elliptic**, hyperbolic, or parabolic. Any second-order linear PDE in two variables can be written in the form

In mathematics, the **method of characteristics** is a technique for solving partial differential equations. Typically, it applies to first-order equations, although more generally the method of characteristics is valid for any hyperbolic partial differential equation. The method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data given on a suitable hypersurface.

**Burgers' equation** or **Bateman–Burgers equation** is a fundamental partial differential equation occurring in various areas of applied mathematics, such as fluid mechanics, nonlinear acoustics, gas dynamics, and traffic flow. The equation was first introduced by Harry Bateman in 1915 and later studied by Johannes Martinus Burgers in 1948.

In mathematics, the eigenvalue problem for the Laplace operator is known as the **Helmholtz equation**. It corresponds to the linear partial differential equation:

In mathematics, a **differential equation** is an equation that relates one or more functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.

In mathematics, a **hyperbolic partial differential equation** of order is a partial differential equation (PDE) that, roughly speaking, has a well-posed initial value problem for the first derivatives. More precisely, the Cauchy problem can be locally solved for arbitrary initial data along any non-characteristic hypersurface. Many of the equations of mechanics are hyperbolic, and so the study of hyperbolic equations is of substantial contemporary interest. The model hyperbolic equation is the wave equation. In one spatial dimension, this is

In mathematics, the **convergence condition by Courant–Friedrichs–Lewy** is a necessary condition for convergence while solving certain partial differential equations numerically. It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain time in many explicit time-marching computer simulations, otherwise the simulation produces incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy who described it in their 1928 paper.

In mathematics, a **weak solution** to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions.
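As a simple illustration of the distributional notion, a locally integrable function *u* is a weak solution of the one-dimensional equation *u*′ = *f* if, for every smooth test function *φ* with compact support,

```latex
\int_{-\infty}^{\infty} u(x)\,\varphi'(x)\,dx
  = -\int_{-\infty}^{\infty} f(x)\,\varphi(x)\,dx
```

which transfers the derivative onto *φ* by integration by parts, so that *u* itself need not be differentiable.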

In mathematics, a **first-order partial differential equation** is a partial differential equation that involves only first derivatives of the unknown function of *n* variables. The equation takes the form *F*(*x*_{1}, …, *x*_{*n*}, *u*, ∂*u*/∂*x*_{1}, …, ∂*u*/∂*x*_{*n*}) = 0.

In mathematics, the **inverse scattering transform** is a method for solving some non-linear partial differential equations. The method is a non-linear analogue, and in some sense generalization, of the Fourier transform, which itself is applied to solve many linear partial differential equations. The name "inverse scattering method" comes from the key idea of recovering the time evolution of a potential from the time evolution of its scattering data: inverse scattering refers to the problem of recovering a potential from its scattering matrix, as opposed to the direct scattering problem of finding the scattering matrix from the potential.

In numerical analysis, **finite-difference methods** (**FDM**) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time interval are discretized, or broken into a finite number of steps, and the value of the solution at these discrete points is approximated by solving algebraic equations containing finite differences and values from nearby points.
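A minimal sketch of the idea, assuming the 1D heat equation *u_t* = *α u_xx* with zero boundary values (the function name and parameters are our own, not from any library):

```python
import numpy as np

def heat_ftcs(u0, alpha, dx, dt, steps):
    """March the forward-time centred-space (FTCS) approximation in time."""
    r = alpha * dt / dx**2   # mesh ratio; must satisfy r <= 0.5 for stability
    u = u0.copy()
    for _ in range(steps):
        # Replace u_xx by the centred second difference at interior points.
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

# The mode sin(pi x) decays exactly like exp(-alpha * pi^2 * t), so the
# discrete answer can be checked against the known solution.
x = np.linspace(0.0, 1.0, 51)
dx = x[1] - x[0]
dt = 0.4 * dx**2             # r = 0.4, inside the stability limit
steps = 250
u = heat_ftcs(np.sin(np.pi * x), alpha=1.0, dx=dx, dt=dt, steps=steps)
exact = np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x)
print(np.abs(u - exact).max())
```

The maximum error here is on the order of 10⁻⁴, consistent with the scheme's second-order spatial and first-order temporal accuracy.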

A **parabolic partial differential equation** is a type of partial differential equation (PDE). Parabolic PDEs are used to describe a wide variety of time-dependent phenomena, including heat conduction, particle diffusion, and pricing of derivative investment instruments.
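The canonical parabolic equation is the heat equation, which for a field *u*(*x*, *t*) with unit diffusivity reads:

```latex
\frac{\partial u}{\partial t} = \Delta u
  = \frac{\partial^{2} u}{\partial x_{1}^{2}} + \cdots + \frac{\partial^{2} u}{\partial x_{n}^{2}}
```

Solutions of parabolic equations typically smooth out as *t* increases, in contrast to the wave-like behaviour of hyperbolic equations.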

The **finite element method** (**FEM**) is a widely used method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The FEM is a general numerical method for solving partial differential equations in two or three space variables. To solve a problem, the FEM subdivides a large system into smaller, simpler parts that are called **finite elements**. This is achieved by a particular space discretization in the space dimensions, which is implemented by the construction of a mesh of the object: the numerical domain for the solution, which has a finite number of points. The finite element method formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function.
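The pipeline described above — discretize, assemble element equations into a global system, solve — can be sketched for the simplest case, the 1D Poisson problem −*u*″ = *f* with homogeneous Dirichlet data and piecewise-linear "hat" elements; all names here are illustrative:

```python
import numpy as np

def fem_poisson_1d(f, n):
    """Solve -u'' = f on (0, 1), u(0) = u(1) = 0, with n linear hat elements."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)        # interior mesh nodes
    # Stiffness matrix: the hat functions give the tridiagonal (1/h)[-1, 2, -1].
    K = (np.diag(2.0 * np.ones(n)) -
         np.diag(np.ones(n - 1), 1) -
         np.diag(np.ones(n - 1), -1)) / h
    b = f(x) * h                          # lumped (nodal quadrature) load vector
    return x, np.linalg.solve(K, b)       # nodal values of the FEM solution

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # manufactured right-hand side
x, u = fem_poisson_1d(f, n=49)
exact = np.sin(np.pi * x)                    # corresponding exact solution
print(np.abs(u - exact).max())
```

With 49 interior nodes the nodal error is a few times 10⁻³, shrinking like *h*² as the mesh is refined, which is the expected rate for linear elements.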

In mathematics, an **ordinary differential equation** (**ODE**) is a differential equation containing one or more functions of one independent variable and the derivatives of those functions. The term *ordinary* is used in contrast with the term partial differential equation which may be with respect to *more than* one independent variable.
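A simple example: with a single independent variable *t*, the linear ODE below has an explicit one-parameter family of solutions:

```latex
u'(t) = k\,u(t) \quad\Longrightarrow\quad u(t) = C\,e^{kt}
```

where the constant *C* is fixed by an initial condition such as the value *u*(0).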

The **Kansa method** is a numerical method for solving partial differential equations. Partial differential equations model phenomena such as the stresses in a car's body, the airflow around a wing, the shock wave ahead of a supersonic airplane, the quantum-mechanical behaviour of an atom, ocean waves, socio-economic processes, and digital image processing. Starting from known quantities such as pressure, temperature, air velocity, or stress, the method uses the governing physical laws to determine the remaining quantities, fitting them together like pieces of a puzzle. For example, the stresses in various parts of a car can then be computed for the moment it hits a bump at 70 miles per hour.
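Kansa's underlying idea — collocating the PDE with globally supported radial basis functions — can be sketched on a toy 1D boundary-value problem *u*″ = *f*; the node count, shape parameter *c*, and all function names below are our own illustrative choices:

```python
import numpy as np

def kansa_1d(f, g0, g1, n=21, c=0.25):
    """RBF collocation for u'' = f on [0, 1] with u(0) = g0, u(1) = g1."""
    x = np.linspace(0.0, 1.0, n)
    s = x[:, None] - x[None, :]            # pairwise node differences
    phi = np.sqrt(s**2 + c**2)             # multiquadric basis functions
    d2phi = c**2 / (s**2 + c**2) ** 1.5    # their exact second derivatives
    A = d2phi.copy()                       # interior rows: collocate u'' = f
    A[0], A[-1] = phi[0], phi[-1]          # boundary rows: collocate u itself
    rhs = f(x)
    rhs[0], rhs[-1] = g0, g1
    lam = np.linalg.solve(A, rhs)          # expansion coefficients
    return x, phi @ lam                    # approximate u at the nodes

f = lambda x: -np.pi**2 * np.sin(np.pi * x)  # manufactured right-hand side
x, u = kansa_1d(f, g0=0.0, g1=0.0)
exact = np.sin(np.pi * x)                    # corresponding exact solution
print(np.abs(u - exact).max())
```

Because the basis functions are globally supported, the collocation matrix is dense and can become ill-conditioned as the shape parameter grows, which is a well-known trade-off of the method.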

In mathematics, a **partial differential algebraic equation (PDAE)** set is an incomplete system of partial differential equations that is closed with a set of algebraic equations.

- Courant, R. & Hilbert, D. (1962), *Methods of Mathematical Physics*, **II**, New York: Wiley-Interscience, ISBN 9783527617241.
- Evans, L. C. (1998), *Partial Differential Equations*, Providence: American Mathematical Society, ISBN 0-8218-0772-2.
- Drábek, Pavel; Holubová, Gabriela (2007). *Elements of partial differential equations* (Online ed.). Berlin: de Gruyter. ISBN 9783110191240.
- Ibragimov, Nail H. (1993), *CRC Handbook of Lie Group Analysis of Differential Equations Vol. 1–3*, Providence: CRC Press, ISBN 0-8493-4488-3.
- John, F. (1982), *Partial Differential Equations* (4th ed.), New York: Springer-Verlag, ISBN 0-387-90609-6.
- Jost, J. (2002), *Partial Differential Equations*, New York: Springer-Verlag, ISBN 0-387-95428-7.
- Olver, P. J. (1995), *Equivalence, Invariants and Symmetry*, Cambridge University Press.
- Petrovskii, I. G. (1967), *Partial Differential Equations*, Philadelphia: W. B. Saunders Co.
- Pinchover, Y. & Rubinstein, J. (2005), *An Introduction to Partial Differential Equations*, New York: Cambridge University Press, ISBN 0-521-84886-5.
- Polyanin, A. D. (2002), *Handbook of Linear Partial Differential Equations for Engineers and Scientists*, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-299-9.
- Polyanin, A. D. & Zaitsev, V. F. (2004), *Handbook of Nonlinear Partial Differential Equations*, Boca Raton: Chapman & Hall/CRC Press, ISBN 1-58488-355-3.
- Polyanin, A. D.; Zaitsev, V. F. & Moussiaux, A. (2002), *Handbook of First Order Partial Differential Equations*, London: Taylor & Francis, ISBN 0-415-27267-X.
- Roubíček, T. (2013), *Nonlinear Partial Differential Equations with Applications* (PDF), International Series of Numerical Mathematics, **153** (2nd ed.), Basel, Boston, Berlin: Birkhäuser, doi:10.1007/978-3-0348-0513-1, ISBN 978-3-0348-0512-4, MR 3014456.
- Stephani, H. (1989), MacCallum, M. (ed.), *Differential Equations: Their Solution Using Symmetries*, Cambridge University Press.
- Wazwaz, Abdul-Majid (2009). *Partial Differential Equations and Solitary Waves Theory*. Higher Education Press. ISBN 978-3-642-00251-9.
- Wazwaz, Abdul-Majid (2002). *Partial Differential Equations Methods and Applications*. A. A. Balkema. ISBN 90-5809-369-7.
- Zwillinger, D. (1997), *Handbook of Differential Equations* (3rd ed.), Boston: Academic Press, ISBN 0-12-784395-7.
- Gershenfeld, N. (1999), *The Nature of Mathematical Modeling* (1st ed.), New York: Cambridge University Press, ISBN 0-521-57095-6.
- Krasil'shchik, I. S. & Vinogradov, A. M., eds. (1999), *Symmetries and Conservation Laws for Differential Equations of Mathematical Physics*, Providence: American Mathematical Society, ISBN 0-8218-0958-X.
- Krasil'shchik, I. S.; Lychagin, V. V. & Vinogradov, A. M. (1986), *Geometry of Jet Spaces and Nonlinear Partial Differential Equations*, New York, London, Paris, Montreux, Tokyo: Gordon and Breach Science Publishers, ISBN 2-88124-051-8.
- Vinogradov, A. M. (2001), *Cohomological Analysis of Partial Differential Equations and Secondary Calculus*, Providence: American Mathematical Society, ISBN 0-8218-2922-X.
- Gustafsson, Bertil (2008). *High Order Difference Methods for Time Dependent PDE*. Springer Series in Computational Mathematics. **38**. Springer. doi:10.1007/978-3-540-74993-6. ISBN 978-3-540-74992-9.

- Cajori, Florian (1928). "The Early History of Partial Differential Equations and of Partial Differentiation and Integration" (PDF). *The American Mathematical Monthly*. **35** (9): 459–467. doi:10.2307/2298771. JSTOR 2298771. Archived from the original (PDF) on 2018-11-23. Retrieved 2016-05-15.
- Nirenberg, Louis (1994). "Partial differential equations in the first half of the century." *Development of Mathematics 1900–1950* (Luxembourg, 1992), 479–515, Birkhäuser, Basel.
- Brezis, H. & Browder, F. (1998). "Partial Differential Equations in the 20th Century." *Advances in Mathematics*, 135(1), 76–144. doi:10.1006/aima.1997.1713.

- "Differential equation, partial", *Encyclopedia of Mathematics*, EMS Press, 2001 [1994]
- Partial Differential Equations: Exact Solutions at EqWorld: The World of Mathematical Equations.
- Partial Differential Equations: Index at EqWorld: The World of Mathematical Equations.
- Partial Differential Equations: Methods at EqWorld: The World of Mathematical Equations.
- Example problems with solutions at exampleproblems.com
- Partial Differential Equations at mathworld.wolfram.com
- Partial Differential Equations with Mathematica
- Partial Differential Equations in Cleve Moler: Numerical Computing with MATLAB
- Partial Differential Equations at nag.com
- Sanderson, Grant (April 21, 2019). "But what is a partial differential equation?". *3Blue1Brown*, via YouTube.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
