Laplace invariant

In differential equations, the Laplace invariants of certain differential operators are particular functions of the operators' coefficients and their derivatives. Consider a bivariate hyperbolic differential operator of the second order

    A = \partial_x \partial_y + a \partial_x + b \partial_y + c,    (1)

whose coefficients

    a = a(x, y),  b = b(x, y),  c = c(x, y),

are smooth functions of two variables. Its Laplace invariants have the form

    h = a_x + a b - c   and   k = b_y + a b - c.
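A quick way to see these formulas in action is to compute them symbolically. The following is a minimal SymPy sketch; the symbol names and the sample coefficients a = x^2, b = y, c = x^2 y are illustrative choices, not taken from the article:

    import sympy as sp

    x, y = sp.symbols('x y')
    a, b, c = [sp.Function(name)(x, y) for name in ('a', 'b', 'c')]

    # Laplace invariants of A = Dx*Dy + a*Dx + b*Dy + c
    h = sp.diff(a, x) + a*b - c   # h = a_x + a*b - c
    k = sp.diff(b, y) + a*b - c   # k = b_y + a*b - c

    # Illustrative coefficients: a = x**2, b = y, c = x**2*y
    vals = {a: x**2, b: y, c: x**2*y}
    print(sp.simplify(h.subs(vals).doit()))   # 2*x
    print(sp.simplify(k.subs(vals).doit()))   # 1

For this particular choice both invariants are nonzero, which, as discussed below, means the operator does not factor.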
Their importance is due to the classical theorem:

Theorem: Two operators of the form (1) are equivalent under gauge transformations if and only if their Laplace invariants coincide pairwise.

Here the operators

    A = \partial_x \partial_y + a \partial_x + b \partial_y + c   and   \tilde{A} = \partial_x \partial_y + \tilde{a} \partial_x + \tilde{b} \partial_y + \tilde{c}

are called equivalent if there is a gauge transformation that takes one to the other:

    \tilde{A} g = e^{-\varphi} A \left( e^{\varphi} g \right).
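The easier direction of the theorem, that gauge-equivalent operators have equal Laplace invariants, can be checked mechanically. The sketch below is a hedged SymPy verification: the formulas for the transformed coefficients \tilde{a}, \tilde{b}, \tilde{c} are assumptions obtained by expanding e^{-\varphi} A e^{\varphi}, and the script re-checks them before comparing invariants.

    import sympy as sp

    x, y = sp.symbols('x y')
    u   = sp.Function('u')(x, y)
    phi = sp.Function('phi')(x, y)
    a, b, c = [sp.Function(name)(x, y) for name in ('a', 'b', 'c')]

    def apply_A(f, a, b, c):
        # A f = f_xy + a*f_x + b*f_y + c*f
        return sp.diff(f, x, y) + a*sp.diff(f, x) + b*sp.diff(f, y) + c*f

    def invariants(a, b, c):
        # h = a_x + a*b - c,  k = b_y + a*b - c
        return sp.diff(a, x) + a*b - c, sp.diff(b, y) + a*b - c

    # Assumed coefficients of the conjugated operator exp(-phi) A exp(phi)
    at = a + sp.diff(phi, y)
    bt = b + sp.diff(phi, x)
    ct = (c + a*sp.diff(phi, x) + b*sp.diff(phi, y)
          + sp.diff(phi, x)*sp.diff(phi, y) + sp.diff(phi, x, y))

    # Confirm A(exp(phi)*u) = exp(phi) * A~(u), i.e. A~ = exp(-phi) A exp(phi)
    lhs = apply_A(sp.exp(phi)*u, a, b, c)
    rhs = sp.exp(phi) * apply_A(u, at, bt, ct)
    print(sp.expand(lhs - rhs))                      # 0

    # The Laplace invariants are unchanged by the gauge transformation
    h,  k  = invariants(a, b, c)
    ht, kt = invariants(at, bt, ct)
    print(sp.expand(ht - h), sp.expand(kt - k))      # 0 0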
Laplace invariants can be regarded as factorization "remainders" for the initial operator A:

    A = (\partial_x + b)(\partial_y + a) - h = (\partial_y + a)(\partial_x + b) - k.
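Both remainder identities can be confirmed symbolically as well; the short SymPy sketch below (function names illustrative) applies each factored form to a test function u and compares the result with A u:

    import sympy as sp

    x, y = sp.symbols('x y')
    u = sp.Function('u')(x, y)
    a, b, c = [sp.Function(name)(x, y) for name in ('a', 'b', 'c')]

    h = sp.diff(a, x) + a*b - c
    k = sp.diff(b, y) + a*b - c

    # A u = u_xy + a*u_x + b*u_y + c*u
    Au = sp.diff(u, x, y) + a*sp.diff(u, x) + b*sp.diff(u, y) + c*u

    # (Dx + b)(Dy + a) u - h*u   and   (Dy + a)(Dx + b) u - k*u
    first  = sp.diff(sp.diff(u, y) + a*u, x) + b*(sp.diff(u, y) + a*u) - h*u
    second = sp.diff(sp.diff(u, x) + b*u, y) + a*(sp.diff(u, x) + b*u) - k*u

    print(sp.expand(first - Au))    # 0
    print(sp.expand(second - Au))   # 0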
If at least one of the Laplace invariants is not equal to zero, i.e.

    h \neq 0   or   k \neq 0,

then this representation is the first step of the Laplace–Darboux transformations, which are used for solving non-factorizable bivariate linear partial differential equations (LPDEs).

If both Laplace invariants are equal to zero, i.e.

    h = 0   and   k = 0,

then the differential operator A is factorizable and the corresponding linear partial differential equation of second order is solvable.

Laplace invariants were introduced for bivariate linear partial differential operators (LPDOs) of order 2 and of hyperbolic type. They are a particular case of generalized invariants, which can be constructed for bivariate LPDOs of arbitrary order and arbitrary type; see Invariant factorization of LPDOs.

See also

Invariant factorization of LPDOs
