Constraint counting

In mathematics, constraint counting is the process of counting the number of constraints in order to compare it with the number of variables, parameters, etc., that are free to be determined; the idea is that in most cases the number of independent choices that can be made is the excess of the latter over the former.

For example, in linear algebra, if the number of constraints (independent equations) in a system of linear equations equals the number of unknowns, then precisely one solution exists; if there are fewer independent equations than unknowns, infinitely many solutions exist; and if the number of independent equations exceeds the number of unknowns, then no solutions exist.
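As a concrete illustration, here is a minimal sketch (assuming NumPy; the helper name classify_system is ours) that classifies a linear system by comparing the rank of the coefficient matrix with that of the augmented matrix:

```python
import numpy as np

def classify_system(A, b):
    """Classify the linear system A x = b by comparing the ranks of the
    coefficient matrix and the augmented matrix (Rouche-Capelli)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
    if rank_A < rank_Ab:
        return "no solution"            # the constraints contradict one another
    if rank_A == A.shape[1]:
        return "exactly one solution"   # as many independent constraints as unknowns
    return "infinitely many solutions"  # fewer independent constraints than unknowns

print(classify_system([[1, 1], [1, -1]], [2, 0]))  # exactly one solution
print(classify_system([[1, 1], [2, 2]], [2, 4]))   # infinitely many solutions
```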

In the context of partial differential equations, constraint counting is a crude but often useful way of counting the number of free functions needed to specify a solution to a partial differential equation.

Partial differential equations

Consider a second order partial differential equation in three variables, such as the two-dimensional wave equation

$$u_{tt} = u_{xx} + u_{yy}.$$

It is often profitable to think of such an equation as a rewrite rule allowing us to rewrite arbitrary partial derivatives of the function $u(t,x,y)$ using fewer partials than would be needed for an arbitrary function. For example, if $u$ satisfies the wave equation, we can rewrite

$$u_{ttx} = u_{xtt} = (u_{xx} + u_{yy})_x = u_{xxx} + u_{xyy},$$

where in the first equality, we appealed to the fact that partial derivatives commute.
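This rewrite can be checked mechanically. The sketch below (assuming SymPy; the sample solution $u = \sin(x+y)\cos(\sqrt{2}\,t)$ is our own choice, for illustration) verifies both that the sample satisfies the wave equation and that the rewritten third order partial agrees with the original:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
# A sample solution of u_tt = u_xx + u_yy (chosen for illustration).
u = sp.sin(x + y) * sp.cos(sp.sqrt(2) * t)

# u satisfies the wave equation ...
assert sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - sp.diff(u, y, 2)) == 0
# ... and the rewrite u_ttx = u_xxx + u_xyy holds.
assert sp.simplify(sp.diff(u, t, 2, x) - sp.diff(u, x, 3) - sp.diff(u, x, y, y)) == 0
```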

Linear equations

To answer this in the important special case of a linear partial differential equation, Einstein asked: how many of the partial derivatives of a solution can be linearly independent? It is convenient to record his answer using an ordinary generating function

$$s(\xi) = \sum_{k=0}^{\infty} s_k \, \xi^k,$$

where $s_k$ is a natural number counting the number of linearly independent partial derivatives (of order $k$) of an arbitrary function in the solution space of the equation in question.

Whenever a function satisfies some partial differential equation, we can use the corresponding rewrite rule to eliminate some of its partial derivatives, because further mixed partials have necessarily become linearly dependent. Specifically, the power series counting the variety of arbitrary functions of three variables (no constraints) is

$$\frac{1}{(1-\xi)^3} = 1 + 3\,\xi + 6\,\xi^2 + 10\,\xi^3 + \cdots,$$

but the power series counting those in the solution space of some second order p.d.e. is

$$\frac{1-\xi^2}{(1-\xi)^3} = 1 + 3\,\xi + 5\,\xi^2 + 7\,\xi^3 + \cdots,$$

which records that we can eliminate one second order partial $u_{tt}$, three third order partials $u_{ttt}, u_{ttx}, u_{tty}$, and so forth.
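These coefficients are easy to verify mechanically; a minimal check with SymPy (our own illustration):

```python
import sympy as sp

xi = sp.symbols('xi')
# Order-k partials of an unconstrained function of three variables: 1, 3, 6, 10, ...
free = sp.series(1 / (1 - xi)**3, xi, 0, 5).removeO()
# After one second-order equation eliminates u_tt and its derivatives: 1, 3, 5, 7, ...
constrained = sp.series((1 - xi**2) / (1 - xi)**3, xi, 0, 5).removeO()

print([free.coeff(xi, k) for k in range(5)])         # [1, 3, 6, 10, 15]
print([constrained.coeff(xi, k) for k in range(5)])  # [1, 3, 5, 7, 9]
```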

More generally, the o.g.f. for an arbitrary function of $n$ variables is

$$\frac{1}{(1-\xi)^n} = \sum_{k=0}^{\infty} \binom{n+k-1}{n-1} \, \xi^k,$$

where the coefficients of the infinite power series of the generating function are the appropriate infinite sequence of binomial coefficients, and the power series for a function required to satisfy a linear $m$-th order equation is

$$\frac{1-\xi^m}{(1-\xi)^n}.$$
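The same check generalizes: the coefficient of $\xi^k$ in $(1-\xi^m)/(1-\xi)^n$ should equal $\binom{n+k-1}{n-1} - \binom{n+k-m-1}{n-1}$, the count of order-$k$ partials minus those eliminated by the rewrite rule. A sketch (our own illustration; the helper names are hypothetical):

```python
import sympy as sp

xi = sp.symbols('xi')

def series_coeffs(n, m, kmax):
    """Coefficients of (1 - xi**m) / (1 - xi)**n up to order kmax."""
    g = sp.series((1 - xi**m) / (1 - xi)**n, xi, 0, kmax + 1).removeO()
    return [g.coeff(xi, k) for k in range(kmax + 1)]

def direct_count(n, m, kmax):
    """All order-k partials minus those the m-th order rewrite rule removes."""
    return [sp.binomial(n + k - 1, n - 1) - sp.binomial(n + k - m - 1, n - 1)
            for k in range(kmax + 1)]

assert series_coeffs(3, 2, 6) == direct_count(3, 2, 6)  # the wave equation case
assert series_coeffs(4, 3, 6) == direct_count(4, 3, 6)  # third order, four variables
```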

Next,

$$\frac{1-\xi^2}{(1-\xi)^3} = \frac{1+\xi}{(1-\xi)^2} = \frac{1}{(1-\xi)^2} + \xi \, \frac{1}{(1-\xi)^2},$$

which can be interpreted to predict that a solution to a second order linear p.d.e. in three variables is expressible by two freely chosen functions of two variables, one of which is used immediately, and the second, only after taking a first derivative, in order to express the solution. Here $1/(1-\xi)^2$ is precisely the o.g.f. for an arbitrary function of two variables, and the factor $\xi$ in the second summand records the one differentiation.

General solution of initial value problem

To verify this prediction, recall the solution of the initial value problem

$$u_{tt} = u_{xx} + u_{yy}, \qquad u(0,x,y) = p(x,y), \quad u_t(0,x,y) = q(x,y).$$

Applying the Laplace transform $u(t,x,y) \mapsto U(s,x,y)$ gives

$$s^2 \, U - s \, p - q = U_{xx} + U_{yy}.$$

Applying the Fourier transform to the two spatial variables, $U(s,x,y) \mapsto \hat{U}(s,m,n)$, gives

$$s^2 \, \hat{U} - s \, \hat{p} - \hat{q} = -(m^2+n^2) \, \hat{U}$$

or

$$\hat{U} = \frac{s}{s^2+m^2+n^2} \, \hat{p} + \frac{1}{s^2+m^2+n^2} \, \hat{q}.$$

Applying the inverse Laplace transform gives

$$\hat{u}(t,m,n) = \hat{p} \, \cos\!\left(\sqrt{m^2+n^2}\; t\right) + \hat{q} \, \frac{\sin\!\left(\sqrt{m^2+n^2}\; t\right)}{\sqrt{m^2+n^2}}.$$

Applying the inverse Fourier transform gives

$$u(t,x,y) = \partial_t P(t,x,y) + Q(t,x,y),$$

where

$$P(t,x,y) = \frac{1}{2\pi} \iint_{D} \frac{p(x',y')}{\sqrt{t^2-(x-x')^2-(y-y')^2}} \, dx' \, dy', \qquad
Q(t,x,y) = \frac{1}{2\pi} \iint_{D} \frac{q(x',y')}{\sqrt{t^2-(x-x')^2-(y-y')^2}} \, dx' \, dy',$$

and $D$ is the disk $(x-x')^2 + (y-y')^2 \le t^2$.

Here, $p, q$ are arbitrary (sufficiently smooth) functions of two variables, so (due to their modest time dependence) the integrals $P, Q$ also count as "freely chosen" functions of two variables; as promised, one of them is differentiated once before adding to the other to express the general solution of the initial value problem for the two-dimensional wave equation.
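As a sanity check on the transform-domain steps (our own verification sketch, assuming SymPy), one can confirm that $\hat{u}$ satisfies the transformed wave equation $\hat{u}_{tt} = -(m^2+n^2)\,\hat{u}$ together with the prescribed initial data:

```python
import sympy as sp

t, m, n = sp.symbols('t m n', positive=True)
p_hat, q_hat = sp.symbols('p_hat q_hat')  # transformed initial data (constant in t)
w = sp.sqrt(m**2 + n**2)

u_hat = p_hat * sp.cos(w * t) + q_hat * sp.sin(w * t) / w

# Transformed wave equation: u_hat_tt = -(m^2 + n^2) * u_hat.
assert sp.simplify(sp.diff(u_hat, t, 2) + (m**2 + n**2) * u_hat) == 0
# Initial data: u_hat(0) = p_hat and u_hat_t(0) = q_hat.
assert sp.simplify(u_hat.subs(t, 0) - p_hat) == 0
assert sp.simplify(sp.diff(u_hat, t).subs(t, 0) - q_hat) == 0
```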

Quasilinear equations

In the case of a nonlinear equation, it will only rarely be possible to obtain the general solution in closed form. However, if the equation is quasilinear (linear in the highest order derivatives), then we can still obtain approximate information similar to the above: specifying a member of the solution space will be, "modulo nonlinear quibbles", equivalent to specifying a certain number of functions in a smaller number of variables. The number of these functions is the Einstein strength of the p.d.e. In the simple example above, the strength is two, although in this case we were able to obtain more precise information.
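The count behind the strength is visible in the generating function itself, via the standard factorization

$$\frac{1-\xi^m}{(1-\xi)^n} = \left(1 + \xi + \cdots + \xi^{m-1}\right) \frac{1}{(1-\xi)^{n-1}},$$

which says that a single $m$-th order equation in $n$ variables leaves $m$ freely chosen functions of $n-1$ variables, the $j$-th of them entering only after $j-1$ differentiations; for the wave equation above, $m = 2$ and $n = 3$.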
