Invariant factorization of LPDOs


The factorization of a linear partial differential operator (LPDO) is an important issue in the theory of integrability, due to the Laplace-Darboux transformations,[1] which allow the construction of integrable LPDEs. Laplace solved the factorization problem for a bivariate hyperbolic operator of second order (see Hyperbolic partial differential equation), constructing two Laplace invariants. Each Laplace invariant is an explicit polynomial condition of factorization; the coefficients of this polynomial are explicit functions of the coefficients of the initial LPDO. The polynomial conditions of factorization are called invariants because they have the same form for equivalent (i.e. gauge-related) operators.


Beals-Kartashova factorization (also called BK-factorization) is a constructive procedure to factorize a bivariate operator of arbitrary order and arbitrary form. Correspondingly, the factorization conditions in this case also have polynomial form, are invariants, and coincide with the Laplace invariants for bivariate hyperbolic operators of second order. The factorization procedure is purely algebraic, the number of possible factorizations depending on the number of simple roots of the characteristic polynomial (also called the symbol) of the initial LPDO and of the reduced LPDOs appearing at each factorization step. Below the factorization procedure is described for a bivariate operator of arbitrary form, of orders 2 and 3. Explicit factorization formulas for an operator of general order can be found in [2]. General invariants are defined in [3], and an invariant formulation of the Beals-Kartashova factorization is given in [4].

Beals-Kartashova Factorization

Operator of order 2

Consider an operator

    $\mathcal{A}_2 = a_{20}\partial_x^2 + a_{11}\partial_x\partial_y + a_{02}\partial_y^2 + a_{10}\partial_x + a_{01}\partial_y + a_{00}$

with smooth coefficients and look for a factorization

    $\mathcal{A}_2 = (p_1\partial_x + p_2\partial_y + p_3)(p_4\partial_x + p_5\partial_y + p_6).$

Let us write down the equations on the $p_i$ explicitly, keeping in mind the rule of left composition, i.e. that

    $\partial_x (\alpha\, \partial_y) = \partial_x(\alpha)\,\partial_y + \alpha\,\partial_x\partial_y.$

Then in all cases

    $a_{20} = p_1 p_4,$
    $a_{11} = p_2 p_4 + p_1 p_5,$
    $a_{02} = p_2 p_5,$
    $a_{10} = \mathcal{L}(p_4) + p_3 p_4 + p_1 p_6,$
    $a_{01} = \mathcal{L}(p_5) + p_3 p_5 + p_2 p_6,$
    $a_{00} = \mathcal{L}(p_6) + p_3 p_6,$

where the notation $\mathcal{L} = p_1\partial_x + p_2\partial_y$ is used.
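For constant coefficients the derivative terms $\mathcal{L}(p_i)$ vanish, and left composition reduces to multiplication of the operators' symbols, so the system above can be checked mechanically. A minimal sketch (the dict representation and the function name are illustrative choices, not from the literature):

```python
# Bivariate constant-coefficient LPDOs as dicts {(j, k): coeff}, meaning
# the sum of coeff * d_x^j d_y^k.  For constant coefficients, composition
# is just multiplication of symbols (polynomials in two commuting variables).

def compose(A, B):
    """Left composition A o B for constant-coefficient operators."""
    C = {}
    for (j1, k1), c1 in A.items():
        for (j2, k2), c2 in B.items():
            key = (j1 + j2, k1 + k2)
            C[key] = C.get(key, 0) + c1 * c2
    return {k: v for k, v in C.items() if v != 0}

P = {(1, 0): 1, (0, 1): -1, (0, 0): 0}   # p1 = 1, p2 = -1, p3 = 0
Q = {(1, 0): 1, (0, 1): 2, (0, 0): 1}    # p4 = 1, p5 = 2, p6 = 1
A = compose(P, Q)
# Matches the system: a20 = p1*p4, a11 = p2*p4 + p1*p5, a02 = p2*p5,
# a10 = p3*p4 + p1*p6, a01 = p3*p5 + p2*p6, a00 = p3*p6 (L-terms are zero).
assert A == {(2, 0): 1, (1, 1): 1, (0, 2): -2, (1, 0): 1, (0, 1): -1}
```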

Without loss of generality $a_{20} \neq 0$, i.e. $p_1 \neq 0$, and it can be taken as $p_1 = 1$. Now the solution of this system of six equations on the variables

    $p_2,\ p_3,\ p_4,\ p_5,\ p_6$

can be found in three steps.

At the first step, the roots of a quadratic polynomial have to be found.

At the second step, a linear system of two algebraic equations has to be solved.

At the third step, one algebraic condition has to be checked.

Step 1. The variables

    $p_2,\ p_4,\ p_5$

can be found from the first three equations,

    $a_{20} = p_4, \qquad a_{11} = p_2 p_4 + p_5, \qquad a_{02} = p_2 p_5.$

The (possible) solutions are then functions of the roots of the quadratic polynomial

    $\mathcal{P}_2(\omega) = a_{20}\,\omega^2 + a_{11}\,\omega + a_{02}.$

Let $\omega$ be a root of the polynomial $\mathcal{P}_2$; then

    $p_1 = 1, \qquad p_2 = -\omega, \qquad p_4 = a_{20}, \qquad p_5 = a_{20}\,\omega + a_{11}.$

Step 2. Substitution of the results obtained at the first step into the next two equations

    $a_{10} = \mathcal{L}(p_4) + p_3 p_4 + p_6,$
    $a_{01} = \mathcal{L}(p_5) + p_3 p_5 + p_2 p_6,$

yields a linear system of two algebraic equations:

    $a_{10} - \mathcal{L}(a_{20}) = p_3\, a_{20} + p_6,$
    $a_{01} - \mathcal{L}(a_{20}\,\omega + a_{11}) = p_3\,(a_{20}\,\omega + a_{11}) - \omega\, p_6.$

In particular, if the root $\omega$ is simple, i.e.

    $\mathcal{P}_2(\omega) = 0, \qquad \mathcal{P}_2'(\omega) = 2 a_{20}\,\omega + a_{11} \neq 0,$

then these equations have the unique solution:

    $p_3 = \dfrac{\omega\,\bigl(a_{10} - \mathcal{L}(a_{20})\bigr) + a_{01} - \mathcal{L}(a_{20}\,\omega + a_{11})}{2 a_{20}\,\omega + a_{11}},$

    $p_6 = \dfrac{(a_{20}\,\omega + a_{11})\bigl(a_{10} - \mathcal{L}(a_{20})\bigr) - a_{20}\,\bigl(a_{01} - \mathcal{L}(a_{20}\,\omega + a_{11})\bigr)}{2 a_{20}\,\omega + a_{11}}.$

At this step, for each root of the polynomial $\mathcal{P}_2$ a corresponding set of coefficients $p_j$ is computed.

Step 3. Check the factorization condition (which is the last of the initial six equations),

    $a_{00} = \mathcal{L}(p_6) + p_3 p_6,$

written in the known variables $p_3$ and $p_6$. If

    $a_{00} = \mathcal{L}(p_6) + p_3 p_6,$

the operator $\mathcal{A}_2$ is factorizable, and the explicit form of the factorization coefficients is given above.
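The three steps above are purely algebraic and can be sketched in the special case of constant real coefficients, where all $\mathcal{L}$-terms vanish. The function name, the restriction to real simple roots, and the tolerance handling are assumptions of this sketch:

```python
import math

def bk_factor2(a20, a11, a02, a10, a01, a00, tol=1e-12):
    """Three-step BK factorization of a20 dx^2 + a11 dxdy + a02 dy^2
    + a10 dx + a01 dy + a00 with constant coefficients, a20 != 0.
    Returns a list of factor tuples (p2, p3, p4, p5, p6), with p1 = 1."""
    # Step 1: roots of the characteristic polynomial a20 t^2 + a11 t + a02.
    disc = a11 * a11 - 4 * a20 * a02
    if disc < 0:
        return []                      # only real roots in this sketch
    roots = {(-a11 + s * math.sqrt(disc)) / (2 * a20) for s in (+1, -1)}
    out = []
    for w in roots:
        d = 2 * a20 * w + a11          # P2'(w); nonzero iff w is simple
        if abs(d) < tol:
            continue
        # Step 2: the 2x2 linear system for p3, p6 (L-terms vanish here).
        p3 = (w * a10 + a01) / d
        p6 = ((a20 * w + a11) * a10 - a20 * a01) / d
        # Step 3: check the remaining equation a00 = p3 * p6.
        if abs(a00 - p3 * p6) < tol:
            out.append((-w, p3, a20, a20 * w + a11, p6))
    return out

# dx^2 + 3 dxdy + 2 dy^2 + dx + dy factors for both simple roots w = -1, -2:
print(bk_factor2(1, 3, 2, 1, 1, 0))
```

For this example both factorizations can be verified by hand, e.g. $(\partial_x + \partial_y)(\partial_x + 2\partial_y + 1) = \partial_x^2 + 3\partial_x\partial_y + 2\partial_y^2 + \partial_x + \partial_y$.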

Operator of order 3

Consider an operator

    $\mathcal{A}_3 = a_{30}\partial_x^3 + a_{21}\partial_x^2\partial_y + a_{12}\partial_x\partial_y^2 + a_{03}\partial_y^3 + a_{20}\partial_x^2 + a_{11}\partial_x\partial_y + a_{02}\partial_y^2 + a_{10}\partial_x + a_{01}\partial_y + a_{00}$

with smooth coefficients and look for a factorization

    $\mathcal{A}_3 = (p_1\partial_x + p_2\partial_y + p_3)(p_4\partial_x^2 + p_5\partial_x\partial_y + p_6\partial_y^2 + p_7\partial_x + p_8\partial_y + p_9).$

Similar to the case of the operator $\mathcal{A}_2$, the conditions of factorization are described by the following system:

    $a_{30} = p_1 p_4,$
    $a_{21} = p_2 p_4 + p_1 p_5,$
    $a_{12} = p_2 p_5 + p_1 p_6,$
    $a_{03} = p_2 p_6,$
    $a_{20} = \mathcal{L}(p_4) + p_3 p_4 + p_1 p_7,$
    $a_{11} = \mathcal{L}(p_5) + p_3 p_5 + p_2 p_7 + p_1 p_8,$
    $a_{02} = \mathcal{L}(p_6) + p_3 p_6 + p_2 p_8,$
    $a_{10} = \mathcal{L}(p_7) + p_3 p_7 + p_1 p_9,$
    $a_{01} = \mathcal{L}(p_8) + p_3 p_8 + p_2 p_9,$
    $a_{00} = \mathcal{L}(p_9) + p_3 p_9,$

with $\mathcal{L} = p_1\partial_x + p_2\partial_y$ and, again, $a_{30} \neq 0$, i.e. $p_1 = 1$. The three-step procedure yields:

At the first step, the roots of the cubic polynomial

    $\mathcal{P}_3(\omega) = a_{30}\,\omega^3 + a_{21}\,\omega^2 + a_{12}\,\omega + a_{03}$

have to be found. Again $\omega$ denotes a root, and the first four coefficients are

    $p_2 = -\omega, \qquad p_4 = a_{30}, \qquad p_5 = a_{30}\,\omega + a_{21}, \qquad p_6 = a_{30}\,\omega^2 + a_{21}\,\omega + a_{12}.$

At the second step, a linear system of three algebraic equations has to be solved:

    $a_{20} - \mathcal{L}(p_4) = p_3 p_4 + p_7,$
    $a_{11} - \mathcal{L}(p_5) = p_3 p_5 - \omega\, p_7 + p_8,$
    $a_{02} - \mathcal{L}(p_6) = p_3 p_6 - \omega\, p_8,$

after which $p_9$ is determined from $a_{10} = \mathcal{L}(p_7) + p_3 p_7 + p_9$.

At the third step, two algebraic conditions,

    $a_{01} = \mathcal{L}(p_8) + p_3 p_8 - \omega\, p_9, \qquad a_{00} = \mathcal{L}(p_9) + p_3 p_9,$

have to be checked.
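Step 1 of the third-order case can likewise be sketched for constant coefficients; the Horner-style recursion below and its name are illustrative:

```python
def bk3_step1(a30, a21, a12, a03, w, tol=1e-9):
    """Step 1 for the third-order case with constant coefficients:
    given a root w of a30 t^3 + a21 t^2 + a12 t + a03, return
    (p2, p4, p5, p6), with p1 = 1 (a Horner-style recursion)."""
    assert abs(((a30 * w + a21) * w + a12) * w + a03) < tol  # w is a root
    p4 = a30
    p5 = a30 * w + a21
    p6 = (a30 * w + a21) * w + a12
    return (-w, p4, p5, p6)

# Symbol (t + 1)(t + 2)(t + 3) = t^3 + 6 t^2 + 11 t + 6, root w = -1:
p2, p4, p5, p6 = bk3_step1(1, 6, 11, 6, -1)
# Check the four leading equations of the system above:
assert (p4, p2 * p4 + p5, p2 * p5 + p6, p2 * p6) == (1, 6, 11, 6)
```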

Operator of order $n$

The same three-step procedure applies to a bivariate operator of arbitrary order $n$; explicit factorization formulas are given in [2].

Invariant Formulation

Definition The operators $\mathcal{A}$, $\tilde{\mathcal{A}}$ are called equivalent if there is a gauge transformation that takes one to the other:

    $\tilde{\mathcal{A}}\, g = e^{-\varphi}\,\mathcal{A}\,(e^{\varphi} g).$

BK-factorization is then a purely algebraic procedure which allows one to construct explicitly a factorization of an arbitrary-order LPDO

    $\mathcal{A} = \sum_{j+k \le n} a_{jk}\,\partial_x^j \partial_y^k$

in the form

    $\mathcal{A} = \mathcal{L} \circ \sum_{j+k \le n-1} p_{jk}\,\partial_x^j \partial_y^k,$

with the first-order operator $\mathcal{L} = \partial_x - \omega\,\partial_y + p$, where $\omega$ is an arbitrary simple root of the characteristic polynomial

    $\mathcal{P}(t) = a_{n0}\,t^n + a_{n-1,1}\,t^{n-1} + \cdots + a_{0n}, \qquad \mathcal{P}(\omega) = 0.$

Factorization is possible then, for each simple root $\omega$, iff

for $n=2$: $\ l_2 = 0$;

for $n=3$: $\ l_3 = 0,\ l_{31} = 0$;

for $n=4$: $\ l_4 = 0,\ l_{41} = 0,\ l_{42} = 0$;

and so on. All functions $l_2,\ l_3,\ l_{31},\ l_4,\ l_{41},\ l_{42},\ \dots$ are known functions of the coefficients $a_{jk}$ and of $\omega$; for instance, for $n=2$ the single condition is the residual of the last factorization equation,

    $l_2 = a_{00} - \mathcal{L}(p_6) - p_3 p_6,$

with $p_3$ and $p_6$ as computed above, and so on.

Theorem All functions

    $l_2,\ l_3,\ l_{31},\ l_4,\ l_{41},\ l_{42},\ \dots$

are invariants under gauge transformations.

Definition The invariants $l_2,\ l_3,\ l_{31},\ l_4,\ l_{41},\ l_{42},\ \dots$ are called generalized invariants of a bivariate operator of arbitrary order.

In the particular case of the bivariate hyperbolic operator, its generalized invariants coincide with the Laplace invariants (see Laplace invariant).

Corollary If an operator $\mathcal{A}$ is factorizable, then all operators equivalent to it are also factorizable.

Equivalent operators are easy to compute:

    $e^{-\varphi}\,\partial_x\,e^{\varphi} = \partial_x + \varphi_x,$
    $e^{-\varphi}\,\partial_y\,e^{\varphi} = \partial_y + \varphi_y,$
    $e^{-\varphi}\,\partial_x\partial_y\,e^{\varphi} = \partial_x\partial_y + \varphi_x\,\partial_y + \varphi_y\,\partial_x + \varphi_{xy} + \varphi_x \varphi_y,$

and so on.
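For constant coefficients and a linear gauge function $\varphi = \alpha x + \beta y$, the gauge transformation simply shifts $\partial_x \to \partial_x + \alpha$ and $\partial_y \to \partial_y + \beta$, so the invariance of $l_2$ can be checked numerically. In this sketch $l_2$ is taken as the residual $a_{00} - p_3 p_6$ of the order-2 procedure (an assumption consistent with the factorization condition above; helper names are illustrative):

```python
def l2(a20, a11, a02, a10, a01, a00, w):
    """Residual l2 = a00 - p3*p6 for a simple root w of the symbol
    (constant coefficients, so the L-terms vanish)."""
    d = 2 * a20 * w + a11
    p3 = (w * a10 + a01) / d
    p6 = ((a20 * w + a11) * a10 - a20 * a01) / d
    return a00 - p3 * p6

def gauge(coeffs, al, be):
    """Coefficients of exp(-phi) A exp(phi) for phi = al*x + be*y,
    i.e. of A with dx -> dx + al, dy -> dy + be (constant coefficients)."""
    a20, a11, a02, a10, a01, a00 = coeffs
    return (a20, a11, a02,
            a10 + 2 * a20 * al + a11 * be,
            a01 + a11 * al + 2 * a02 * be,
            a00 + a20 * al**2 + a11 * al * be + a02 * be**2
                + a10 * al + a01 * be)

A = (1, 3, 2, 1, 0, 5)          # dx^2 + 3 dxdy + 2 dy^2 + dx + 5
w = -1.0                        # simple root of t^2 + 3 t + 2
print(l2(*A, w), l2(*gauge(A, 1.0, 2.0), w))  # both 7.0
```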

Transpose

Factorization of an operator is the first step on the way to solving the corresponding equation. But for solving we need right factors, while BK-factorization constructs left factors, which are easier to construct. On the other hand, the existence of a certain right factor of an LPDO is equivalent to the existence of a corresponding left factor of the transpose of that operator.

Definition The transpose of an operator $\mathcal{A} = \sum a_{\alpha}\,\partial^{\alpha}$, $\partial^{\alpha} = \partial_1^{\alpha_1}\cdots\partial_n^{\alpha_n}$, is defined as

    $\mathcal{A}^t u = \sum (-1)^{|\alpha|}\,\partial^{\alpha}(a_{\alpha} u),$

and the identity

    $\partial^{\gamma}(uv) = \sum_{\alpha \le \gamma} \binom{\gamma}{\alpha}\,\partial^{\alpha}u\;\partial^{\gamma-\alpha}v$

implies that

    $\mathcal{A}^t = \sum_{\alpha} \tilde{a}_{\alpha}\,\partial^{\alpha}.$

Now the coefficients are

    $\tilde{a}_{\alpha} = \sum_{\beta \ge \alpha} (-1)^{|\beta|}\,\binom{\beta}{\alpha}\,\partial^{\beta-\alpha} a_{\beta},$

with a standard convention for binomial coefficients in several variables (see Binomial coefficient), e.g. in two variables

    $\binom{\beta}{\alpha} = \binom{(\beta_1,\beta_2)}{(\alpha_1,\alpha_2)} = \binom{\beta_1}{\alpha_1}\binom{\beta_2}{\alpha_2}.$

In particular, for the operator $\mathcal{A}_2$ the coefficients are

    $\tilde{a}_{20} = a_{20}, \qquad \tilde{a}_{11} = a_{11}, \qquad \tilde{a}_{02} = a_{02},$
    $\tilde{a}_{10} = -a_{10} + 2\,\partial_x a_{20} + \partial_y a_{11},$
    $\tilde{a}_{01} = -a_{01} + \partial_x a_{11} + 2\,\partial_y a_{02},$
    $\tilde{a}_{00} = a_{00} - \partial_x a_{10} - \partial_y a_{01} + \partial_x^2 a_{20} + \partial_x\partial_y a_{11} + \partial_y^2 a_{02}.$

For instance, the operator

    $\partial_x\partial_y + x\,\partial_x + 1$

is factorizable as

    $\partial_x\,(\partial_y + x),$

and its transpose

    $\partial_x\partial_y - x\,\partial_x$

is factorizable then as

    $(\partial_y - x)\,\partial_x.$
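For constant coefficients the transpose simply flips the sign of every odd-order term, $\tilde{a}_{\alpha} = (-1)^{|\alpha|} a_{\alpha}$, and the duality between right factors of $\mathcal{A}$ and left factors of $\mathcal{A}^t$ follows from $(\mathcal{B}\circ\mathcal{C})^t = \mathcal{C}^t \circ \mathcal{B}^t$. A small constant-coefficient sketch (the dict representation and names are illustrative):

```python
def compose(A, B):
    """Left composition for constant-coefficient operators {(j, k): c}."""
    C = {}
    for (j1, k1), c1 in A.items():
        for (j2, k2), c2 in B.items():
            key = (j1 + j2, k1 + k2)
            C[key] = C.get(key, 0) + c1 * c2
    return {k: v for k, v in C.items() if v != 0}

def transpose(A):
    """Constant-coefficient transpose: a_{jk} -> (-1)^(j+k) a_{jk}."""
    return {(j, k): (-1) ** (j + k) * c for (j, k), c in A.items()}

B = {(1, 0): 1, (0, 1): 1}             # dx + dy
C = {(1, 0): 1, (0, 1): 2, (0, 0): 1}  # dx + 2 dy + 1
A = compose(B, C)                      # A = B o C has right factor C
# The corresponding left factor of the transpose is C^t:
assert transpose(A) == compose(transpose(C), transpose(B))
```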


Notes

  1. Weiss (1986)
  2. R. Beals, E. Kartashova. Constructively factoring linear partial differential operators in two variables. Theor. Math. Phys. 145(2), pp. 1510-1523 (2005)
  3. E. Kartashova. A Hierarchy of Generalized Invariants for Linear Partial Differential Operators. Theor. Math. Phys. 147(3), pp. 839-846 (2006)
  4. E. Kartashova, O. Rudenko. Invariant Form of BK-factorization and its Applications. Proc. GIFT-2006, pp.225-241, Eds.: J. Calmet, R. W. Tucker, Karlsruhe University Press (2006); arXiv
