Loewy decomposition

In the study of differential equations, the Loewy decomposition breaks every linear ordinary differential equation (ODE) into what are called largest completely reducible components. It was introduced by Alfred Loewy. [1]

Solving differential equations is one of the most important subfields in mathematics. Of particular interest are solutions in closed form. Breaking ODEs into largest completely reducible components reduces the process of solving the original equation to solving irreducible equations of lowest possible order. This procedure is algorithmic, so that the best possible answer for solving a reducible equation is guaranteed. A detailed discussion may be found in [2].

Loewy's results have been extended to linear partial differential equations (PDEs) in two independent variables. In this way, algorithmic methods for solving large classes of linear PDEs have become available.

Decomposing linear ordinary differential equations

Let $D \equiv \frac{d}{dx}$ denote the derivative with respect to the variable $x$. A differential operator of order $n$ is a polynomial of the form

$$L \equiv D^n + a_1 D^{n-1} + \cdots + a_{n-1} D + a_n$$

where the coefficients $a_i$, $i = 1, \ldots, n$, are from some function field, the base field of $L$. Usually it is the field of rational functions in the variable $x$, i.e. $a_i \in \mathbb{Q}(x)$. If $y$ is an indeterminate with $\frac{dy}{dx} \neq 0$, $Ly$ becomes a differential polynomial, and $Ly = 0$ is the differential equation corresponding to $L$.

An operator $L$ of order $n$ is called reducible if it may be represented as the product of two operators $L_1$ and $L_2$, both of order lower than $n$. Then one writes $L = L_1 L_2$, i.e. juxtaposition means the operator product; it is defined by the rule $Da \equiv aD + a'$. $L_1$ is called a left factor of $L$, $L_2$ a right factor. By default, the coefficient domain of the factors is assumed to be the base field of $L$, possibly extended by some algebraic numbers, i.e. $\bar{\mathbb{Q}}(x)$ is allowed. If an operator does not allow any right factor it is called irreducible.
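
The product rule can be made concrete with a small computational sketch. The following Python/SymPy fragment (an illustration added here, not part of Loewy's or Schwarz's exposition; the coefficient-list representation and the function names are ad hoc) multiplies two operators given as coefficient lists and verifies one factorization of a reducible second-order operator.

    import sympy as sp

    x = sp.symbols('x')

    def op_mul(A, B):
        # Product of differential operators given as coefficient lists
        # [c0, c1, ..., cn] meaning c0 + c1*D + ... + cn*D^n with D = d/dx.
        # The non-commutative rule D a = a D + a' enters through Leibniz:
        # D^i (b D^j) = sum_k binomial(i, k) * b^(k) * D^(i + j - k).
        n, m = len(A) - 1, len(B) - 1
        C = [sp.Integer(0)] * (n + m + 1)
        for i, a in enumerate(A):
            for j, b in enumerate(B):
                for k in range(i + 1):
                    C[i + j - k] += a * sp.binomial(i, k) * sp.diff(b, x, k)
        return [sp.simplify(c) for c in C]

    # (D - 1/x)(D + 1/x) = D^2 - 2/x^2, so D + 1/x is a right factor:
    print(op_mul([-1/x, sp.Integer(1)], [1/x, sp.Integer(1)]))   # -> [-2/x**2, 0, 1]

Multiplying the same factors in the opposite order returns [0, 0, 1], i.e. the operator $D^2$; the product is not commutative.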

For any two operators $L_1$ and $L_2$ the least common left multiple $\operatorname{Lclm}(L_1, L_2)$ is the operator of lowest order such that both $L_1$ and $L_2$ divide it from the right. The greatest common right divisor $\operatorname{Gcrd}(L_1, L_2)$ is the operator of highest order that divides both $L_1$ and $L_2$ from the right. If an operator may be represented as the $\operatorname{Lclm}$ of irreducible operators it is called completely reducible. By definition, an irreducible operator is called completely reducible.
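
For two inequivalent first-order operators the Lclm can be written down from their solutions: $D - r_i$ annihilates $y_i = \exp\bigl(\int r_i\,dx\bigr)$, and the Lclm is the monic second-order operator whose solution space is spanned by $y_1$ and $y_2$. The following sketch (again an ad-hoc illustration, not code taken from the references) reads it off from the Wronskian determinant.

    import sympy as sp

    x = sp.symbols('x')

    def lclm_first_order(r1, r2):
        # Lclm(D - r1, D - r2) for inequivalent r1, r2: the monic second-order
        # operator annihilating y1 = exp(int r1 dx) and y2 = exp(int r2 dx),
        # obtained from det([[y, y', y''], [y1, y1', y1''], [y2, y2', y2'']]) = 0.
        y1 = sp.exp(sp.integrate(r1, x))
        y2 = sp.exp(sp.integrate(r2, x))
        dk = lambda f, k: sp.diff(f, x, k)
        W = y1*dk(y2, 1) - dk(y1, 1)*y2                      # Wronskian of y1, y2
        p = -(y1*dk(y2, 2) - dk(y1, 2)*y2) / W               # coefficient of D
        q = (dk(y1, 1)*dk(y2, 2) - dk(y1, 2)*dk(y2, 1)) / W  # coefficient of 1
        return [sp.simplify(q), sp.simplify(p), sp.Integer(1)]

    # Lclm(D - 1, D - x) has the solutions exp(x) and exp(x**2/2):
    print(lclm_first_order(sp.Integer(1), x))
    # -> [(x**2 - x + 1)/(x - 1), -x**2/(x - 1), 1]   (up to equivalent rewriting)

Both $D - 1$ and $D - x$ divide the resulting second-order operator from the right, and no operator of lower order has this property; the result is therefore completely reducible.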

If an operator is not completely reducible, the $\operatorname{Lclm}$ of its irreducible right factors is divided out and the same procedure is repeated with the quotient. Due to the lowering of order in each step, this procedure terminates after a finite number of iterations and the desired decomposition is obtained. Based on these considerations, Loewy [1] obtained the following fundamental result.

Theorem 1 (Loewy 1906). Let $D = \frac{d}{dx}$ be a derivative and $a_i \in \mathbb{Q}(x)$. A differential operator

$$L \equiv D^n + a_1 D^{n-1} + \cdots + a_{n-1} D + a_n$$

of order $n$ may be written uniquely as the product of completely reducible factors $L_k^{(d_k)}$ of maximal order $d_k$ over $\mathbb{Q}(x)$ in the form

$$L = L_m^{(d_m)} L_{m-1}^{(d_{m-1})} \cdots L_1^{(d_1)}$$

with $d_1 + d_2 + \cdots + d_m = n$. The factors $L_k^{(d_k)}$ are unique. Any factor $L_k^{(d_k)}$, $k = 1, \ldots, m$, may be written as

$$L_k^{(d_k)} = \operatorname{Lclm}\bigl(l_{j_1}^{(e_1)}, l_{j_2}^{(e_2)}, \ldots, l_{j_k}^{(e_k)}\bigr)$$

with $e_1 + e_2 + \cdots + e_k = d_k$; $l_{j_i}^{(e_i)}$, for $i = 1, \ldots, k$, denotes an irreducible operator of order $e_i$ over $\mathbb{Q}(x)$.

The decomposition determined in this theorem is called the Loewy decomposition of $L$. It provides a detailed description of the function space containing the solution of a reducible linear differential equation $Ly = 0$.
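
Two small operators over $\mathbb{Q}(x)$ may serve as an illustration (they are added here and are not taken from Loewy's paper): the first has the single irreducible right factor $D$, so its decomposition is a genuine product; the second is completely reducible, being the Lclm of its two first-order right factors.

$$
\begin{aligned}
D^2 + \tfrac{1}{x}\,D &= \Bigl(D + \tfrac{1}{x}\Bigr) D, & &\text{fundamental system } 1,\ \log x;\\
D^2 - 3D + 2 &= \operatorname{Lclm}(D - 2,\, D - 1), & &\text{fundamental system } e^{x},\ e^{2x}.
\end{aligned}
$$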

For operators of fixed order the possible Loewy decompositions, differing by the number and the order of factors, may be listed explicitly; some of the factors may contain parameters. Each alternative is called a type of Loewy decomposition. The complete answer for $n = 2$ is detailed in the following corollary to the above theorem. [3]

Corollary 1. Let $L$ be a second-order operator. Its possible Loewy decompositions are denoted by $\mathcal{L}^2_0, \ldots, \mathcal{L}^2_3$; they may be described as follows, where $l^{(i)}$ and $l_j^{(i)}$ are irreducible operators of order $i$ and $C$ is a constant:

$$\mathcal{L}^2_1 : L = l_2^{(1)} l_1^{(1)}; \qquad \mathcal{L}^2_2 : L = \operatorname{Lclm}\bigl(l_2^{(1)}, l_1^{(1)}\bigr); \qquad \mathcal{L}^2_3 : L = \operatorname{Lclm}\bigl(l^{(1)}(C)\bigr).$$

The decomposition type of an operator is the decomposition $\mathcal{L}^2_i$ with the highest value of $i$. An irreducible second-order operator is defined to have decomposition type $\mathcal{L}^2_0$.

The decompositions $\mathcal{L}^2_0$, $\mathcal{L}^2_2$ and $\mathcal{L}^2_3$ are completely reducible.

If a decomposition of type $\mathcal{L}^2_1$, $\mathcal{L}^2_2$ or $\mathcal{L}^2_3$ has been obtained for a second-order equation $Ly = 0$, a fundamental system may be given explicitly.

Corollary 2. Let $L$ be a second-order differential operator, $D \equiv \frac{d}{dx}$, $y$ a differential indeterminate, and $a_i \in \mathbb{Q}(x)$. Define $\varepsilon_i(x) \equiv \exp\bigl(-\int a_i\,dx\bigr)$ for $i = 1, 2$ and $\varepsilon(x, C) \equiv \exp\bigl(-\int a(C)\,dx\bigr)$; $C$ is a parameter; the barred quantities $\bar{C}$ and $\bar{\bar{C}}$ are arbitrary numbers, $\bar{C} \neq \bar{\bar{C}}$. For the three nontrivial decompositions of Corollary 1 the following elements $y_1$ and $y_2$ of a fundamental system are obtained.

In the decomposition of type $\mathcal{L}^2_2$ it is required that $a_1$ is not equivalent to $a_2$.
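
The corresponding fundamental systems are, in the notation just introduced (the first case follows by reduction of order, the other two directly from the right factors):

$$
\begin{aligned}
\mathcal{L}^2_1:\ L &= (D + a_2)(D + a_1), & y_1 &= \varepsilon_1(x), & y_2 &= \varepsilon_1(x)\int \frac{\varepsilon_2(x)}{\varepsilon_1(x)}\,dx;\\
\mathcal{L}^2_2:\ L &= \operatorname{Lclm}(D + a_2,\, D + a_1), & y_1 &= \varepsilon_1(x), & y_2 &= \varepsilon_2(x);\\
\mathcal{L}^2_3:\ L &= \operatorname{Lclm}\bigl(D + a(\bar{\bar{C}}),\, D + a(\bar{C})\bigr), & y_1 &= \varepsilon\bigl(x, \bar{C}\bigr), & y_2 &= \varepsilon\bigl(x, \bar{\bar{C}}\bigr).
\end{aligned}
$$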

Here two rational functions $p$ and $q$ are called equivalent if there exists another rational function $r$ such that

$$p - q = \frac{r'}{r}.$$
There remains the question how to obtain a factorization for a given equation or operator. It turns out that for linear ODEs finding the factors comes down to determining rational solutions of Riccati equations or linear ODEs; both may be determined algorithmically. The two examples below show how the above corollary is applied.
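
Before turning to these, a toy computation may show where the Riccati equation comes from (this is an added illustration, not one of the article's examples; the ansatz $r = c/x$ is chosen by hand). A right factor $D - r$ of $D^2 + AD + B$ exists exactly when $y = \exp\bigl(\int r\,dx\bigr)$ solves the equation, i.e. when $r' + r^2 + Ar + B = 0$, so every rational solution of this Riccati equation yields a first-order right factor in the base field.

    import sympy as sp

    x, c = sp.symbols('x c')

    # Right factor D - r of L = D^2 + A*D + B  <=>  r' + r^2 + A*r + B = 0 (Riccati).
    A, B = sp.Integer(0), -2/x**2           # L = D^2 - 2/x^2
    r = c/x                                 # rational ansatz with a single pole at x = 0
    riccati = sp.together(sp.diff(r, x) + r**2 + A*r + B)
    print(sp.solve(sp.Eq(riccati, 0), c))   # -> [-1, 2]: right factors D + 1/x and D - 2/x

    # Each rational solution r gives the solution y = exp(int r dx) of L y = 0.
    for y in (1/x, x**2):                   # exp(-log x) and exp(2*log x)
        print(sp.simplify(sp.diff(y, x, 2) + A*sp.diff(y, x) + B*y))   # -> 0

The two rational solutions $-1/x$ and $2/x$ thus give the fundamental system $1/x$, $x^2$ of $y'' - \tfrac{2}{x^2}y = 0$.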

Example 1. Equation 2.201 from Kamke's collection [4] has the decomposition

The coefficients $a_1$ and $a_2$ are rational solutions of the corresponding Riccati equation; they yield the fundamental system

Example 2. An equation with another decomposition type is

The coefficient of the first-order factor is the rational solution of the corresponding Riccati equation. Upon integration the two elements of a fundamental system are obtained.

These results show that factorization provides an algorithmic scheme for solving reducible linear ODEs. Whenever an equation of order 2 factorizes according to one of the types defined above, the elements of a fundamental system are explicitly known, i.e. factorizing the equation is equivalent to solving it.

A similar scheme may be set up for linear ODEs of any order, although the number of alternatives grows considerably with the order; for order 3 the answer is given in full detail in [2].

If an equation is irreducible, it may occur that its Galois group is nontrivial and algebraic solutions may exist. [5] If the Galois group is trivial, it may be possible to express the solutions in terms of special functions such as Bessel or Legendre functions; see [6] or [7].

Basic facts from differential algebra

In order to generalize Loewy's result to linear PDEs it is necessary to apply the more general setting of differential algebra. Therefore, a few basic concepts that are required for this purpose are given next.

A field $K$ is called a differential field if it is equipped with a derivation operator. An operator $\delta$ on a field $K$ is called a derivation operator if $\delta(a + b) = \delta(a) + \delta(b)$ and $\delta(ab) = \delta(a)\,b + a\,\delta(b)$ for all elements $a, b \in K$. A field with a single derivation operator is called an ordinary differential field; if there is a finite set containing several commuting derivation operators the field is called a partial differential field.

Here differential operators with derivatives $\partial_x = \frac{\partial}{\partial x}$ and $\partial_y = \frac{\partial}{\partial y}$ and with coefficients from some differential field are considered. Their elements have the form $\sum_{i,j} r_{i,j}(x, y)\,\partial_x^i \partial_y^j$; almost all coefficients $r_{i,j}$ are zero. The coefficient field is called the base field. If constructive and algorithmic methods are the main issue it is $\mathbb{Q}(x, y)$. The respective ring of differential operators is denoted by $\mathcal{D} = \mathbb{Q}(x, y)[\partial_x, \partial_y]$. The ring $\mathcal{D}$ is non-commutative: $\partial_x a = a\,\partial_x + \frac{\partial a}{\partial x}$, and similarly for the other variables; $a$ is from the base field.

For an operator $L = \sum_{i+j \le n} r_{i,j}(x, y)\,\partial_x^i \partial_y^j$ of order $n$, the symbol of $L$ is the homogeneous algebraic polynomial $\operatorname{symb}(L) \equiv \sum_{i+j = n} r_{i,j}(x, y)\, X^i Y^j$, where $X$ and $Y$ are algebraic indeterminates.

Let $I$ be a left ideal which is generated by $l_i \in \mathcal{D}$, $i = 1, \ldots, p$. Then one writes $I = \langle l_1, \ldots, l_p \rangle$. Because right ideals are not considered here, $I$ is sometimes simply called an ideal.

The relation between left ideals in $\mathcal{D}$ and systems of linear PDEs is established as follows. The elements $l_i$ are applied to a single differential indeterminate $z$. In this way the ideal $I = \langle l_1, \ldots, l_p \rangle$ corresponds to the system of PDEs $l_1 z = 0, \ldots, l_p z = 0$ for the single function $z$.

The generators of an ideal are highly non-unique; its members may be transformed in infinitely many ways by taking linear combinations of them or their derivatives without changing the ideal. Therefore, M. Janet [8] introduced a normal form for systems of linear PDEs (see Janet basis). [9] They are the differential analog of Gröbner bases of commutative algebra (which were originally introduced by Bruno Buchberger); [10] therefore they are also sometimes called differential Gröbner bases.

In order to generate a Janet basis, a ranking of derivatives must be defined. It is a total ordering such that for any derivatives $\delta$, $\delta_1$ and $\delta_2$, and any derivation operator $\theta$, the relations $\delta \preceq \theta\delta$ and $\delta_1 \preceq \delta_2 \Rightarrow \theta\delta_1 \preceq \theta\delta_2$ are valid. Here graded lexicographic term orderings are applied. For partial derivatives of a single function their definition is analogous to the monomial orderings in commutative algebra. The S-pairs in commutative algebra correspond to the integrability conditions.
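
For derivatives of a single function $z(x, y)$ such a graded lexicographic ranking is easy to code; a derivative is identified with its multi-index, e.g. $z_{xxy} \mapsto (2, 1)$. The snippet below (an illustration with an ad-hoc representation) ranks by total order first and breaks ties lexicographically with $x$ ranked higher than $y$.

    # Graded lexicographic ranking of derivatives of a single function z(x, y);
    # a derivative is identified with its multi-index (i, j) for d^(i+j)z / dx^i dy^j.
    def grlex_key(multi_index):
        i, j = multi_index
        return (i + j, i, j)      # total order first, then lexicographically, x above y

    derivatives = [(0, 1), (2, 0), (1, 1), (0, 0), (1, 0), (0, 2)]
    print(sorted(derivatives, key=grlex_key))
    # -> [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
    # i.e. z < z_y < z_x < z_yy < z_xy < z_xx, from lowest to highest.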

If it is assured that the generators of an ideal form a Janet basis, the notation $I = \langle\langle l_1, \ldots, l_p \rangle\rangle$ is applied.

Example 3. Consider the ideal

in term order with . Its generators are autoreduced. If the integrability condition

is reduced with respect to , the new generator is obtained. Adding it to the generators and performing all possible reductions, the given ideal is represented as . Its generators are autoreduced and the single integrability condition is satisfied, i.e. they form a Janet basis.
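
The mechanism can be seen on the smallest possible system (a schematic illustration in the spirit of the example above, not the example itself): for $z_x = az$, $z_y = bz$ the two ways of computing the cross derivative $z_{xy}$ must agree, and their difference reduces, after substituting the system again, to the integrability condition $a_y - b_x = 0$. A new generator of exactly this kind is what the completion to a Janet basis adds.

    import sympy as sp

    x, y = sp.symbols('x y')
    a = sp.Function('a')(x, y)
    b = sp.Function('b')(x, y)
    z = sp.Function('z')(x, y)

    # System z_x = a*z, z_y = b*z: reduce the cross derivative z_xy in two ways.
    zxy_via_x = sp.diff(a*z, y).subs(sp.diff(z, y), b*z)   # differentiate z_x = a*z by y
    zxy_via_y = sp.diff(b*z, x).subs(sp.diff(z, x), a*z)   # differentiate z_y = b*z by x
    print(zxy_via_x - zxy_via_y)
    # -> z(x, y)*Derivative(a(x, y), y) - z(x, y)*Derivative(b(x, y), x),
    #    i.e. the integrability condition a_y - b_x = 0.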

Given any ideal $I$ it may occur that it is properly contained in some larger ideal $J$ with coefficients in the base field of $I$; then $J$ is called a divisor of $I$. In general, a divisor in a ring of partial differential operators need not be principal.

The greatest common right divisor (Gcrd) or sum of two ideals $I$ and $J$ is the smallest ideal with the property that both $I$ and $J$ are contained in it. If they have the representation $I = \langle f_1, \ldots, f_p \rangle$ and $J = \langle g_1, \ldots, g_q \rangle$, $f_i, g_j \in \mathcal{D}$ for all $i$ and $j$, the sum is generated by the union of the generators of $I$ and $J$. The solution space of the equations corresponding to $\operatorname{Gcrd}(I, J)$ is the intersection of the solution spaces of its arguments.

The least common left multiple (Lclm) or left intersection of two ideals $I$ and $J$ is the largest ideal with the property that it is contained both in $I$ and in $J$. The solution space of the equations corresponding to $\operatorname{Lclm}(I, J)$ is the smallest space containing the solution spaces of its arguments.
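
A trivial pair of principal ideals already shows both constructions and the behaviour of the solution spaces (a schematic illustration, not taken from the references):

$$
\begin{aligned}
\operatorname{Gcrd}\bigl(\langle \partial_x \rangle, \langle \partial_y \rangle\bigr) &= \langle \partial_x, \partial_y \rangle, & &\text{solutions: the constants} = \{F(y)\} \cap \{G(x)\};\\
\operatorname{Lclm}\bigl(\langle \partial_x \rangle, \langle \partial_y \rangle\bigr) &= \langle \partial_{xy} \rangle, & &\text{solutions: } z = F(y) + G(x).
\end{aligned}
$$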

A special kind of divisor is the so-called Laplace divisor of a given operator $L$ ([2], page 34). It is defined as follows.

Definition Let be a partial differential operator in the plane; define

and

be ordinary differential operators with respect to or ; for all i; and are natural numbers not less than 2. Assume the coefficients , are such that and form a Janet basis. If is the smallest integer with this property then is called a Laplace divisor of . Similarly, if , are such that and form a Janet basis and is minimal, then is also called a Laplace divisor of .

In order for a Laplace divisor to exist the coefficients of an operator must obey certain constraints. [3] An algorithm for determining an upper bound for a Laplace divisor is not known at present; therefore, in general the existence of a Laplace divisor may be undecidable.

Decomposing second-order linear partial differential equations in the plane

Applying the above concepts, Loewy's theory may be generalized to linear PDEs. Here it is applied to individual linear PDEs of second order in the plane with coordinates $x$ and $y$, and to the principal ideals generated by the corresponding operators.

Second-order equations have been considered extensively in the literature of the 19th century. [11] [12] Usually equations with leading derivative $\partial_{xx}$ or $\partial_{xy}$ are distinguished. Their general solutions contain not only constants but undetermined functions of varying numbers of arguments; determining them is part of the solution procedure. For equations with leading derivative $\partial_{xx}$, Loewy's results may be generalized as follows.

Theorem 2. Let the differential operator $L$ be defined by

$$L \equiv \partial_{xx} + A_1 \partial_{xy} + A_2 \partial_{yy} + A_3 \partial_x + A_4 \partial_y + A_5$$

where $A_i \in \mathbb{Q}(x, y)$ for all $i$.

Let for and , and be first-order operators with ; is an undetermined function of a single argument. Then has a Loewy decomposition according to one of the following types.

The decomposition type of an operator is the decomposition with the highest value of . If does not have any first-order factor in the base field, its decomposition type is defined to be . Decompositions , and are completely reducible.

In order to apply this result for solving any given differential equation involving the operator $L$, the question arises whether its first-order factors may be determined algorithmically. The subsequent corollary provides the answer for factors with coefficients either in the base field or a universal field extension.

Corollary 3. In general, first-order right factors of a linear PDE in the base field cannot be determined algorithmically. If the symbol polynomial is separable, any factor may be determined. If it has a double root, in general it is not possible to determine the right factors in the base field. The existence of factors in a universal field, i.e. absolute irreducibility, may always be decided.
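
The separability condition refers to the symbol defined earlier. The following sketch (a supplementary illustration with ad-hoc operators, not from the references) factors the symbols of two operators with leading derivative $\partial_{xx}$: the first symbol splits into distinct linear factors, the second has a double root.

    import sympy as sp

    X, Y = sp.symbols('X Y')

    # symb(L) keeps only the highest-order terms, with partial_x -> X, partial_y -> Y.
    # L1 = d_xx - d_yy + (lower order terms)      : symbol X^2 - Y^2
    # L2 = d_xx + 2 d_xy + d_yy + (lower order)   : symbol X^2 + 2XY + Y^2
    print(sp.factor(X**2 - Y**2))          # -> (X - Y)*(X + Y), separable
    print(sp.factor(X**2 + 2*X*Y + Y**2))  # -> (X + Y)**2, double root

In the first case the two candidate right factors begin with $\partial_x - \partial_y$ and $\partial_x + \partial_y$ and may be completed algorithmically; the second case is the situation in which a right factor in the base field need not be computable.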

The above theorem may be applied for solving reducible equations in closed form. Because only principal divisors are involved, the answer is similar to that for ordinary second-order equations.

Proposition 1

Let a reducible second-order equation

where .

Define , for ; is a rational first integral of ; and the inverse ; both and are assumed to exist. Furthermore, define

for .

A differential fundamental system has the following structure for the various decompositions into first-order components.

The are undetermined functions of a single argument; , and are rational in all arguments; is assumed to exist. In general , they are determined by the coefficients , and of the given equation.

A typical example of a linear PDE to which factorization applies is an equation that has been discussed by Forsyth ([13], vol. VI, page 16).

Example 5 (Forsyth 1906). Consider the differential equation . Upon factorization the representation

is obtained. There follows

Consequently, a differential fundamental system is

and are undetermined functions.
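
A constant-coefficient special case shows the same pattern in a form that can be checked by hand (an illustration added here, much simpler than Forsyth's equation): the ideal generated by $\partial_{xx} - \partial_{yy}$ equals $\operatorname{Lclm}\bigl(\langle\partial_x + \partial_y\rangle, \langle\partial_x - \partial_y\rangle\bigr)$, so it is completely reducible, and the two first-order factors contribute the terms $F(x + y)$ (annihilated by $\partial_x - \partial_y$) and $G(x - y)$ (annihilated by $\partial_x + \partial_y$).

    import sympy as sp

    x, y = sp.symbols('x y')
    F, G = sp.Function('F'), sp.Function('G')

    # d_xx - d_yy = (d_x + d_y)(d_x - d_y); the right factors contribute
    # F(x + y) and G(x - y), so z = F(x + y) + G(x - y) is the general solution.
    z = F(x + y) + G(x - y)
    print(sp.simplify(sp.diff(z, x, 2) - sp.diff(z, y, 2)))           # -> 0
    print(sp.simplify(sp.diff(F(x + y), x) - sp.diff(F(x + y), y)))   # -> 0
    print(sp.simplify(sp.diff(G(x - y), x) + sp.diff(G(x - y), y)))   # -> 0

Accordingly $F(x + y)$ and $G(x - y)$ form a differential fundamental system in the sense used above.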

If the only second-order derivative of an operator is $\partial_{xy}$, its possible decompositions involving only principal divisors may be described as follows.

Theorem 3. Let the differential operator $L$ be defined by

$$L \equiv \partial_{xy} + A_1 \partial_x + A_2 \partial_y + A_3$$

where $A_i \in \mathbb{Q}(x, y)$ for all $i$.

Let and are first-order operators. has Loewy decompositions involving first-order principal divisors of the following form.

The decomposition type of an operator is the decomposition with highest value of . The decomposition of type is completely reducible

In addition there are five more possible decomposition types involving non-principal Laplace divisors as shown next.

Theorem 4. Let the differential operator $L$ be defined by

$$L \equiv \partial_{xy} + A_1 \partial_x + A_2 \partial_y + A_3$$

where $A_i \in \mathbb{Q}(x, y)$ for all $i$.

and as well as and are defined above; furthermore , , . has Loewy decompositions involving Laplace divisors according to one of the following types; and obey .

If does not have a first order right factor and it may be shown that a Laplace divisor does not exist its decomposition type is defined to be . The decompositions , , and are completely reducible.

An equation that does not allow a decomposition involving principal divisors but is completely reducible with respect to non-principal Laplace divisors of type has been considered by Forsyth.

Example 6 (Forsyth 1906). Define

generating the principal ideal . A first-order factor does not exist. However, there are Laplace divisors

and

The ideal generated by has the representation , i.e. it is completely reducible; its decomposition type is . Therefore, the equation has the differential fundamental system

and

Decomposing linear PDEs of order higher than 2

It turns out that operators of higher order have more complicated decompositions and there are more alternatives, many of them in terms of non-principal divisors. The solutions of the corresponding equations become more complex. For equations of order three in the plane a fairly complete answer may be found in [2]. A typical example of a third-order equation that is also of historical interest is due to Blumberg. [14]

Example 7 (Blumberg 1912). In his dissertation Blumberg considered the third-order operator

It allows the two first-order factors and . Their intersection is not principal; defining

it may be written as . Consequently, the Loewy decomposition of Blumberg's operator is

It yields the following differential fundamental system for the differential equation .

and are undetermined functions.

Factorizations and Loewy decompositions have turned out to be an extremely useful method for determining solutions of linear differential equations in closed form, both for ordinary and partial equations. It should be possible to generalize these methods to equations of higher order, to equations in more variables and to systems of differential equations.

References

  1. Loewy, A. (1906). "Über vollständig reduzible lineare homogene Differentialgleichungen". Mathematische Annalen. 62: 89–117. doi:10.1007/bf01448417.
  2. Schwarz, F., Loewy Decomposition of Linear Differential Equations, Springer, 2012.
  3. Schwarz, F. (2013). "Loewy Decomposition of Linear Differential Equations". Bulletin of Mathematical Sciences. 3: 19–71. doi:10.1007/s13373-012-0026-7.
  4. Kamke, E., Differentialgleichungen I. Gewöhnliche Differentialgleichungen, Akademische Verlagsgesellschaft, Leipzig, 1964.
  5. van der Put, M., Singer, M., Galois Theory of Linear Differential Equations, Grundlehren der mathematischen Wissenschaften 328, Springer, 2003.
  6. Bronstein, M., Lafaille, S., "Solutions of linear ordinary differential equations in terms of special functions", Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation, T. Mora, ed., ACM, New York, 2002, pp. 23–28.
  7. Schwarz, F., Algorithmic Lie Theory for Solving Ordinary Differential Equations, CRC Press, 2007, page 39.
  8. Janet, M. (1920). "Les systèmes d'équations aux dérivées partielles". Journal de Mathématiques. 83: 65–123.
  9. "Janet Bases for Symmetry Groups", in: Gröbner Bases and Applications, London Mathematical Society Lecture Note Series 251, 1998, pages 221–234, B. Buchberger and F. Winkler, eds.
  10. Buchberger, B. (1970). "Ein algorithmisches Kriterium für die Lösbarkeit eines algebraischen Gleichungssystems". Aequationes Mathematicae. 4 (3): 374–383. doi:10.1007/bf01844169.
  11. Darboux, G., Leçons sur la théorie générale des surfaces, vol. II, Chelsea Publishing Company, New York, 1972.
  12. Goursat, É., Leçons sur l'intégration des équations aux dérivées partielles, vol. I and II, A. Hermann, Paris, 1898.
  13. Forsyth, A. R., Theory of Differential Equations, vols. I–VI, Cambridge University Press, 1906.
  14. Blumberg, H., Über algebraische Eigenschaften von linearen homogenen Differentialausdrücken, Inaugural-Dissertation, Göttingen, 1912.