Differential algebra

In mathematics, differential algebra is, broadly speaking, the area of mathematics that studies differential equations and differential operators as algebraic objects, with the aim of deriving properties of differential equations and operators without computing their solutions, much as polynomial algebras are used to study algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.

More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations. [1] [2] [3]

A natural example of a differential field is the field $\mathbb{C}(t)$ of rational functions in one variable over the complex numbers, where the derivation is differentiation with respect to $t$. More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.

History

Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. [4] His efforts led to an initial paper, Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations, and two books, Differential Equations From The Algebraic Standpoint and Differential Algebra. [5] [6] [2] Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups. [1]

Differential rings

Definition

A derivation on a ring $R$ is a function $\partial \colon R \to R$ such that

$\partial(a + b) = \partial(a) + \partial(b)$

and

$\partial(a b) = \partial(a)\, b + a\, \partial(b)$ (Leibniz product rule),

for every $a$ and $b$ in $R$.

A derivation is linear over the integers, since these identities imply $\partial(0) = \partial(1) = 0$ and $\partial(-a) = -\partial(a)$.
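
As a quick illustration (a minimal sketch using SymPy; the sample ring elements are made up), ordinary polynomial differentiation is a derivation on $\mathbb{Q}[x]$:

```python
# Minimal sketch: check the derivation axioms for d/dx acting on polynomials in x.
from sympy import symbols, diff, expand

x = symbols('x')
a = 3*x**2 + 1          # sample elements of Q[x]
b = x**3 - 2*x

# additivity: d(a + b) = d(a) + d(b)
assert expand(diff(a + b, x)) == expand(diff(a, x) + diff(b, x))

# Leibniz product rule: d(a*b) = d(a)*b + a*d(b)
assert expand(diff(a*b, x)) == expand(diff(a, x)*b + a*diff(b, x))
```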

A differential ring is a commutative ring $R$ equipped with one or more derivations that commute pairwise; that is,

$\partial_1(\partial_2(a)) = \partial_2(\partial_1(a))$

for every pair of derivations $\partial_1, \partial_2$ and every $a \in R$. [7] When there is only one derivation, one often speaks of an ordinary differential ring; otherwise, one speaks of a partial differential ring.

A differential field is a differential ring that is also a field. A differential algebra $A$ over a differential field $K$ is a differential ring that contains $K$ as a subring such that the restrictions to $K$ of the derivations of $A$ coincide with the derivations of $K$. (A more general definition is given below, which covers the case where $K$ is not a field, and is essentially equivalent when $K$ is a field.)

A Witt algebra is a differential ring that contains the field $\mathbb{Q}$ of the rational numbers. Equivalently, this is a differential algebra over $\mathbb{Q}$, since $\mathbb{Q}$ can be considered as a differential field on which every derivation is the zero function.

The constants of a differential ring are the elements $e$ such that $\partial(e) = 0$ for every derivation $\partial$. The constants of a differential ring form a subring and the constants of a differential field form a subfield. [8] This meaning of "constant" generalizes the concept of a constant function, and must not be confused with the common meaning of a constant.

Basic formulas

In the following identities, $\partial$ is a derivation of a differential ring $R$. [9]
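
For example, the following standard consequences of additivity and the Leibniz rule hold for elements $u, v$ of $R$ with $v$ invertible and $n$ a positive integer (the particular selection shown here is illustrative):

```latex
% Representative identities that follow from additivity and the Leibniz rule.
\partial(u^{n}) = n\, u^{\,n-1}\, \partial(u)                                          % power rule
\partial\!\left(v^{-1}\right) = -\,\partial(v)\, v^{-2}                                 % derivative of an inverse
\partial\!\left(\frac{u}{v}\right) = \frac{\partial(u)\, v - u\, \partial(v)}{v^{2}}    % quotient rule
```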

Higher-order derivations

A derivation operator or higher-order derivation is the composition of several derivations. As the derivations of a differential ring are supposed to commute, the order of the derivations does not matter, and a derivation operator may be written as

$\delta_1^{e_1} \cdots \delta_n^{e_n},$

where $\delta_1, \ldots, \delta_n$ are the derivations under consideration, $e_1, \ldots, e_n$ are nonnegative integers, and the exponent of each derivation denotes the number of times this derivation is composed in the operator.

The sum $e_1 + \cdots + e_n$ is called the order of the derivation operator. If the order is $1$, the derivation operator is one of the original derivations. If the order is $0$, one has the identity function, which is generally considered as the unique derivation operator of order zero. With these conventions, the derivation operators form a free commutative monoid on the set of derivations under consideration.

A derivative of an element $x$ of a differential ring is the result of applying a derivation operator to $x$; that is, with the above notation, a derivative of $x$ has the form $\delta_1^{e_1} \cdots \delta_n^{e_n}(x)$. A proper derivative is a derivative of positive order. [7]
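
Since derivation operators compose commutatively, they can be encoded by their exponent vectors alone; the following small sketch (illustrative only, with made-up names) composes operators and computes their order:

```python
# Minimal sketch: derivation operators as exponent tuples over a fixed list of derivations.
# Composition is addition of exponent vectors, i.e. a free commutative monoid.
from typing import Tuple

DerivationOp = Tuple[int, ...]   # e.g. (2, 0, 1) means d1^2 d3 for three derivations

def compose(op1: DerivationOp, op2: DerivationOp) -> DerivationOp:
    """Compose two derivation operators (exponents add; order of composition is irrelevant)."""
    return tuple(e1 + e2 for e1, e2 in zip(op1, op2))

def order(op: DerivationOp) -> int:
    """The order of a derivation operator is the sum of its exponents."""
    return sum(op)

identity = (0, 0, 0)             # the unique derivation operator of order zero
d1, d3 = (1, 0, 0), (0, 0, 1)    # two of the three basic derivations

theta = compose(compose(d1, d1), d3)          # d1^2 d3
assert theta == (2, 0, 1) and order(theta) == 3
assert compose(theta, identity) == theta
```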

Differential ideals

A differential ideal $I$ of a differential ring $R$ is an ideal of the ring $R$ that is closed (stable) under the derivations of the ring; that is, $\partial(x) \in I$ for every derivation $\partial$ and every $x \in I$. A differential ideal is said to be proper if it is not the whole ring. To avoid confusion, an ideal that is not a differential ideal is sometimes called an algebraic ideal.

The radical of a differential ideal is the same as its radical as an algebraic ideal, that is, the set of the ring elements that have a power in the ideal. The radical of a differential ideal is also a differential ideal. A radical or perfect differential ideal is a differential ideal that equals its radical. [10] A prime differential ideal is a differential ideal that is prime in the usual sense; that is, if a product belongs to the ideal, at least one of the factors belongs to the ideal. A prime differential ideal is always a radical differential ideal.

A discovery of Ritt is that, although the classical theory of algebraic ideals does not work for differential ideals, a large part of it can be extended to radical differential ideals, and this makes them fundamental in differential algebra.

The intersection of any family of differential ideals is a differential ideal, and the intersection of any family of radical differential ideals is a radical differential ideal. [11] It follows that, given a subset of a differential ring, there are three ideals generated by it, which are the intersections of, respectively, all algebraic ideals, all differential ideals, and all radical differential ideals that contain it. [11] [12]

The algebraic ideal generated by $S$ is the set of finite linear combinations of elements of $S$ with coefficients in the ring, and is commonly denoted as $(S)$ or $\langle S \rangle$.

The differential ideal generated by $S$ is the set of finite linear combinations of elements of $S$ and of the derivatives of any order of these elements; it is commonly denoted as $[S]$. When $S$ is finite, $[S]$ is generally not finitely generated as an algebraic ideal.

The radical differential ideal generated by $S$ is commonly denoted as $\{S\}$. There is no known way to characterize its elements in a similar way as for the two other cases.

Differential polynomials

A differential polynomial over a differential field $K$ is a formalization of the concept of differential equation such that the known functions appearing in the equation belong to $K$, and the indeterminates are symbols for the unknown functions.

So, let $K$ be a differential field, which is typically (but not necessarily) a field of rational fractions in variables $X_1, \ldots, X_n$ (fractions of multivariate polynomials), equipped with derivations $\partial_i$ such that $\partial_i(X_i) = 1$ and $\partial_i(X_j) = 0$ if $i \neq j$ (the usual partial derivatives).

For defining the ring $K\{Y_1, \ldots, Y_k\}$ of differential polynomials over $K$ with indeterminates $Y_1, \ldots, Y_k$ and derivations $\partial_1, \ldots, \partial_n$, one introduces infinitely many new indeterminates of the form $\Delta Y_i$, where $\Delta$ is any derivation operator of positive order. With this notation, $K\{Y_1, \ldots, Y_k\}$ is the set of polynomials in all these indeterminates, with the natural derivations (each polynomial involves only a finite number of indeterminates). In particular, with one indeterminate and one derivation $\partial$, one has $K\{Y\} = K[Y, \partial Y, \partial^2 Y, \ldots]$.
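
As an informal illustration (a minimal sketch; the helper name is made up), the indeterminates of an ordinary differential polynomial ring can be generated on demand as plain symbols standing for $Y, \partial Y, \partial^2 Y, \ldots$:

```python
# Minimal sketch: indeterminates of K{Y} for one derivation, as SymPy symbols.
from sympy import Symbol

def diff_indeterminate(name: str, order: int) -> Symbol:
    """Return the symbol standing for the order-th derivative of the indeterminate."""
    return Symbol(name if order == 0 else f"{name}_{order}")

# A differential polynomial such as  y'' y - (y')^2  is an ordinary polynomial
# in the finitely many indeterminates it involves:
y, y1, y2 = (diff_indeterminate("y", k) for k in range(3))
p = y2*y - y1**2
print(p)   # an ordinary polynomial in the symbols y, y_1, y_2
```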

Even with a single differential indeterminate and a single derivation, a ring of differential polynomials is not Noetherian. This makes the theory of this generalization of polynomial rings difficult. However, two facts allow such a generalization.

Firstly, a finite number of differential polynomials together involve only a finite number of indeterminates. It follows that every property of polynomials that involves a finite number of polynomials remains true for differential polynomials. In particular, greatest common divisors exist, and a ring of differential polynomials is a unique factorization domain.

The second fact is that, if the field $K$ contains the field of rational numbers, the rings of differential polynomials over $K$ satisfy the ascending chain condition on radical differential ideals. This theorem of Ritt is implied by its generalization, sometimes called the Ritt–Raudenbush basis theorem, which asserts that if $R$ is a Ritt algebra (that is, a differential ring containing the field of rational numbers) [13] that satisfies the ascending chain condition on radical differential ideals, then the ring of differential polynomials over $R$ satisfies the same property (one passes from the univariate to the multivariate case by applying the theorem iteratively). [14] [15]

This Noetherian property implies that, in a ring of differential polynomials, every radical differential ideal I is finitely generated as a radical differential ideal; this means that there exists a finite set S of differential polynomials such that I is the smallest radical differential ideal containing S. [16] This allows representing a radical differential ideal by such a finite set of generators, and computing with these ideals. However, some usual computations of the algebraic case cannot be extended. In particular, no algorithm is known for testing membership of an element in a radical differential ideal or the equality of two radical differential ideals.

Another consequence of the Noetherian property is that a radical differential ideal can be uniquely expressed as the intersection of a finite number of prime differential ideals, called essential prime components of the ideal. [17]

Elimination methods

Elimination methods are algorithms that preferentially eliminate a specified set of derivatives from a set of differential equations, commonly done to better understand and solve sets of differential equations.

Categories of elimination methods include characteristic set methods, differential Gröbner basis methods, and resultant-based methods. [1] [18] [19] [20] [21] [22] [23]

Common operations used in elimination algorithms include 1) ranking derivatives, polynomials, and polynomial sets, 2) identifying a polynomial's leading derivative, initial and separant, 3) polynomial reduction, and 4) creating special polynomial sets.

Ranking derivatives

The ranking of derivatives is a total order and an admissible order; that is, for all derivatives $u$, $v$ and every derivation operator $\theta$, the order satisfies $u \leq \theta u$, and $u \leq v$ implies $\theta u \leq \theta v$. [24] [25] [26]

Each derivative has an integer tuple, and a monomial order ranks the derivative by ranking the derivative's integer tuple. The integer tuple identifies the differential indeterminate and the derivative's multi-index, and may identify the derivative's order. Types of ranking include orderly rankings and elimination rankings. [27]

For example, the integer tuple can identify the differential indeterminate and the derivative's multi-index, and a lexicographic monomial order on these tuples then determines the derivative's rank, as in the sketch below. [28]
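
A minimal sketch (illustrative only; the tuple encoding is an assumption, not a standard library feature) of ranking derivatives by lexicographic comparison of such integer tuples:

```python
# Minimal sketch: rank derivatives of two indeterminates under two derivations by
# lexicographic order on tuples (indeterminate index, multi-index of the operator).
from itertools import product

derivatives = [(i, e) for i, e in product((1, 2), product(range(2), repeat=2))]

def lex_key(derivative):
    """Flatten the integer tuple so Python's tuple comparison gives a lexicographic ranking."""
    indeterminate, multi_index = derivative
    return (indeterminate, *multi_index)

for d in sorted(derivatives, key=lex_key):
    print(d)          # lowest-ranked derivative first
```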

Leading derivative, initial and separant

This is the standard polynomial form: $p = a_d\, u_p^{\,d} + a_{d-1}\, u_p^{\,d-1} + \cdots + a_0$, where the leading derivative (leader) $u_p$ is the highest ranked derivative occurring in $p$, the coefficients $a_i$ do not involve $u_p$, the initial is the leading coefficient $I_p = a_d$, and the separant is the derivative with respect to the leader, $S_p = \partial p / \partial u_p$. [24] [28]

For a polynomial set $A$, the separant set is $S_A$ (the separants of its elements), the initial set is $I_A$ (their initials), and the combined set is $H_A = S_A \cup I_A$. [29]
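
A small sketch (using SymPy; the polynomial and variable names are made up) that extracts the initial and the separant of an ordinary differential polynomial once its leading derivative is known:

```python
# Minimal sketch: initial and separant of p = y1**2 * y2**3 + y0, whose leading
# derivative (the highest-ranked derivative present) is y2.
from sympy import symbols, Poly, diff

y0, y1, y2 = symbols('y0 y1 y2')       # y, y', y'' as ordinary symbols
p = y1**2 * y2**3 + y0

leader = y2
initial = Poly(p, leader).LC()          # leading coefficient in the leader: y1**2
separant = diff(p, leader)              # derivative w.r.t. the leader: 3*y1**2*y2**2

print(initial, separant)
```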

Reduction

A polynomial $q$ is partially reduced (in partial normal form) with respect to a polynomial $p$ when both are non-ground field elements (neither belongs to the base field) and $q$ contains no proper derivative of the leading derivative $u_p$ of $p$. [30] [31] [29]

A polynomial $q$ that is partially reduced with respect to $p$ becomes reduced (in normal form) with respect to $p$ if, in addition, the degree of $u_p$ in $q$ is less than the degree of $u_p$ in $p$. [30] [31] [29]

An autoreduced polynomial set has every polynomial reduced with respect to every other polynomial of the set. Every autoreduced set is finite. An autoreduced set is triangular, meaning each polynomial element has a distinct leading derivative. [32] [30]

Ritt's reduction algorithm identifies nonnegative integers $i_{A_k}, s_{A_k}$ and transforms a differential polynomial $f$, using pseudodivision, into a lower or equally ranked remainder polynomial $f_{\mathrm{red}}$ that is reduced with respect to the autoreduced polynomial set $A$. The algorithm's first step partially reduces the input polynomial and the algorithm's second step fully reduces the polynomial. The formula for reduction is: [30]

$f_{\mathrm{red}} \equiv \prod_{k} I_{A_k}^{i_{A_k}}\, S_{A_k}^{s_{A_k}} \cdot f \pmod{[A]}.$
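
As a rough illustration of the pseudodivision step (a sketch only, for the ordinary case with a single derivation; this is not Ritt's full algorithm), SymPy's pseudo-remainder lowers the degree of the input polynomial in the reducer's leader:

```python
# Minimal sketch: one reduction step by pseudodivision with respect to the leader y1.
from sympy import symbols, prem

y0, y1 = symbols('y0 y1')           # y and y' as ordinary symbols
p = y0*y1**2 + 1                     # reducer, leader y1, initial y0
f = y1**3 + y0*y1                    # polynomial to reduce

remainder = prem(f, p, y1)           # pseudo-remainder of f by p in the variable y1
print(remainder)                     # its degree in y1 is now less than 2
```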

Ranking polynomial sets

A set $A = \{A_1, \ldots, A_m\}$ of differential polynomials is a differential chain if the ranks of the leading derivatives strictly increase, $u_{A_1} < \cdots < u_{A_m}$, and each $A_i$ is reduced with respect to the polynomials of lower rank. [33]

Autoreduced sets $A$ and $B$ each contain ranked polynomial elements. This procedure ranks two autoreduced sets by comparing pairs of identically indexed polynomials from both autoreduced sets. [34]

Polynomial sets

A characteristic set of an ideal is the lowest ranked autoreduced subset among all the ideal's autoreduced subsets whose polynomials have separants that are not members of the ideal. [35]

The delta polynomial applies to a polynomial pair $p, q$ whose leaders share a common derivative; that is, $\theta_p u_p = \theta_q u_q$ for some derivation operators $\theta_p, \theta_q$. Taking this shared derivative to be the least common derivative of the pair's leading derivatives, the delta polynomial is: [36] [37]

$\Delta(p, q) = S_q\, \theta_p(p) - S_p\, \theta_q(q).$

A coherent set is a polynomial set that reduces its delta polynomial pairs to zero. [36] [37]

Regular system and regular ideal

A regular system contains an autoreduced and coherent set of differential equations and an inequation set, with the inequation set reduced with respect to the equation set. [37]

The regular differential ideal and regular algebraic ideal are saturation ideals that arise from a regular system. [37] Lazard's lemma states that the regular differential and regular algebraic ideals are radical ideals. [38]

Rosenfeld–Gröbner algorithm

The Rosenfeld–Gröbner algorithm decomposes the radical differential ideal generated by a finite set of differential polynomials as a finite intersection of regular radical differential ideals. These regular differential radical ideals, represented by characteristic sets, are not necessarily prime ideals and the representation is not necessarily minimal. [39]

The membership problem is to determine if a differential polynomial is a member of an ideal generated from a set of differential polynomials. The Rosenfeld–Gröbner algorithm generates sets of Gröbner bases. The algorithm determines that a polynomial is a member of the ideal if and only if the partially reduced remainder polynomial is a member of the algebraic ideal generated by the Gröbner bases. [40]

The Rosenfeld–Gröbner algorithm facilitates creating Taylor series expansions of solutions to the differential equations. [41]

Examples

Differential fields

Example 1: every field of meromorphic functions, with the standard derivation, is a differential field.

Example 2: a differential field may also be formed by taking a suitable linear differential operator as the derivation.

Derivation

Define the shift operator $E^a$ by $(E^a p)(y) = p(y + a)$ for a polynomial $p$.

A shift-invariant operator $T$ commutes with the shift operator: $T E^a = E^a T$.

The Pincherle derivative of a shift-invariant operator $T$, a derivation on the ring of shift-invariant operators, is $T' = T \circ y - y \circ T$, where $y$ denotes multiplication by the variable $y$. [42]
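
A quick check of this formula (a sketch with SymPy; the operator names are made up): the Pincherle derivative of the unit shift operator $E$ is $E$ itself.

```python
# Minimal sketch: the Pincherle derivative T' = T∘y - y∘T of the shift operator
# E : p(y) -> p(y+1) equals E itself, checked on a sample polynomial.
from sympy import symbols, expand

y = symbols('y')

def E(p):
    """Unit shift operator on polynomials in y."""
    return p.subs(y, y + 1)

def pincherle(T, p):
    """Apply the Pincherle derivative of the operator T to the polynomial p."""
    return expand(T(y*p) - y*T(p))

p = y**3 - 2*y
assert pincherle(E, p) == expand(E(p))   # E' = E
```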

Constants

In the ring of integers $\mathbb{Z}$, every derivation is the zero map, and every integer is a constant.

In the field of rational numbers $\mathbb{Q}$, every derivation is the zero map, and every rational number is a constant.

Differential subring

The constants of a differential ring form the subring of constants. [43]

Differential ideal

A single element of a differential ring generates a differential ideal: the set of all linear combinations, with ring coefficients, of the element and its derivatives. [44]

Algebra over a differential ring

Any ring with identity is a $\mathbb{Z}$-algebra. [45] Thus a differential ring is a $\mathbb{Z}$-algebra.

If a ring $S$ is a subring of the center of a unital ring $R$, then $R$ is an $S$-algebra. [45] Thus, a differential ring is an algebra over its differential subring. This is the natural structure of an algebra over its subring. [30]

Special and normal polynomials

Ring has irreducible polynomials, (normal, squarefree) and (special, ideal generator).

Polynomials

Ranking

Ring has derivatives and

  • Map each derivative to an integer tuple: .
  • Rank derivatives and integer tuples: .

Leading derivative and initial

The leading derivatives, and initials are:

Separants


Autoreduced sets

  • Autoreduced sets are and . Each set is triangular with a distinct polynomial leading derivative.
  • The non-autoreduced set contains only partially reduced with respect to ; this set is non-triangular because the polynomials have the same leading derivative.

Applications

Symbolic integration

Symbolic integration uses algorithms involving polynomials and their derivatives, such as Hermite reduction, Czichowski's algorithm, the Lazard–Rioboo–Trager algorithm, the Horowitz–Ostrogradsky algorithm, squarefree factorization, and splitting factorization into special and normal polynomials. [46]
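
As a small, hedged illustration (using SymPy's general integrator rather than any single algorithm named above), the antiderivative of a rational function splits into a rational part and a logarithmic part:

```python
# Minimal sketch: symbolic integration of a rational function with SymPy.
from sympy import symbols, integrate, together

x = symbols('x')
f = (x**4 + 1) / (x**3 - x)

antiderivative = integrate(f, x)
print(antiderivative)                    # x**2/2 - log(x) + log(x - 1) + log(x + 1), up to ordering
print(together(antiderivative.diff(x)))  # check: differentiating recovers f
```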

Differential equations

Differential algebra can determine whether a set of differential polynomial equations has a solution. A total order ranking may identify algebraic constraints. An elimination ranking may determine whether the differential equations can be expressed in terms of one independent variable or a selected group of independent variables. Using triangular decomposition and an elimination order, it may be possible to solve the differential equations one differential indeterminate at a time in a step-wise method. Another approach is to create a class of differential equations with a known solution form; matching a differential equation to its class identifies the equation's solution. Methods are available to facilitate the numerical integration of a differential-algebraic system of equations. [47]

In a study of non-linear dynamical systems with chaos, researchers used differential elimination to reduce differential equations to ordinary differential equations involving a single state variable. They were successful in most cases, and this facilitated developing approximate solutions, efficiently evaluating chaos, and constructing Lyapunov functions. [48] Researchers have applied differential elimination to understanding cellular biology, compartmental biochemical models, parameter estimation and quasi-steady state approximation (QSSA) for biochemical reactions. [49] [50] Using differential Gröbner bases, researchers have investigated non-classical symmetry properties of non-linear differential equations. [51] Other applications include control theory, model theory, and algebraic geometry. [52] [16] [53] Differential algebra also applies to differential-difference equations. [54]

Algebras with derivations

Differential graded vector space

A graded vector space $V_\bullet$ is a collection of vector spaces $V_m$ with integer degree $m \in \mathbb{Z}$. A direct sum can represent this graded vector space: [55]

$V = \bigoplus_{m \in \mathbb{Z}} V_m$

A differential graded vector space or chain complex is a graded vector space $V_\bullet$ with a differential map or boundary map $d_m \colon V_m \to V_{m-1}$ satisfying $d_{m-1} \circ d_m = 0$. [56]

A cochain complex is a graded vector space $V^\bullet$ with a differential map or coboundary map $d^m \colon V^m \to V^{m+1}$ satisfying $d^{m+1} \circ d^m = 0$. [56]

Differential graded algebra

A differential graded algebra is a graded algebra $A$ with a linear derivation $d \colon A \to A$ satisfying $d \circ d = 0$ and following the graded Leibniz product rule: $d(a \cdot b) = d(a) \cdot b + (-1)^{|a|}\, a \cdot d(b)$, where $|a|$ is the degree of a homogeneous element $a$. [57]

Lie algebra

A Lie algebra is a finite-dimensional real or complex vector space $\mathfrak{g}$ with a bilinear bracket operator $[\cdot,\cdot] \colon \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ with skew symmetry and the Jacobi identity property: [58]

$[X, Y] = -[Y, X], \qquad [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0$

for all $X, Y, Z \in \mathfrak{g}$.

The adjoint operator, $\operatorname{ad}_X(Y) = [X, Y]$, is a derivation of the bracket because the adjoint's effect on the binary bracket operation is analogous to the derivation's effect on the binary product operation: $\operatorname{ad}_X([Y, Z]) = [\operatorname{ad}_X(Y), Z] + [Y, \operatorname{ad}_X(Z)]$. This is the inner derivation determined by $X$. [59] [60]
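
A quick numerical check (a sketch with NumPy, taking the matrix commutator as the bracket) that the adjoint operator obeys the derivation rule:

```python
# Minimal sketch: for matrices with bracket [A, B] = AB - BA, verify
# ad_X([Y, Z]) = [ad_X(Y), Z] + [Y, ad_X(Z)] on random samples.
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

def bracket(A, B):
    return A @ B - B @ A

def ad(A, B):                      # ad_A(B) = [A, B]
    return bracket(A, B)

lhs = ad(X, bracket(Y, Z))
rhs = bracket(ad(X, Y), Z) + bracket(Y, ad(X, Z))
assert np.allclose(lhs, rhs)
```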

The universal enveloping algebra $U(\mathfrak{g})$ of Lie algebra $\mathfrak{g}$ is a maximal associative algebra with identity, generated by the Lie algebra's elements and containing products defined by the bracket operation. Maximal means that a linear homomorphism maps the universal algebra to any other algebra that otherwise has these properties. The adjoint operator is a derivation following the Leibniz product rule: [61]

$\operatorname{ad}_X(Y \cdot Z) = \operatorname{ad}_X(Y) \cdot Z + Y \cdot \operatorname{ad}_X(Z)$

for all $X \in \mathfrak{g}$ and $Y, Z \in U(\mathfrak{g})$.

Weyl algebra

The Weyl algebra is an algebra $A_n(K)$ over a ring $K$ with a specific noncommutative product: [62]

$\partial_i\, y_i - y_i\, \partial_i = 1, \qquad i = 1, \ldots, n.$

All other indeterminate products are commutative; for $i \neq j$:

$\partial_i\, y_j = y_j\, \partial_i, \qquad y_i\, y_j = y_j\, y_i, \qquad \partial_i\, \partial_j = \partial_j\, \partial_i.$

A Weyl algebra can represent the derivations for a commutative ring's polynomials $K[Y_1, \ldots, Y_n]$. The Weyl algebra's elements are endomorphisms of $K[Y_1, \ldots, Y_n]$: the elements $\partial_i$ function as standard derivations, and compositions of these maps generate linear differential operators. The D-module is a related approach for understanding differential operators. The endomorphisms are: [62]

$y_i(p) = Y_i \cdot p, \qquad \partial_i(p) = \frac{\partial p}{\partial Y_i}.$
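
A small check (a sketch with SymPy) that multiplication by $Y$ and differentiation with respect to $Y$ satisfy the defining Weyl relation when acting on polynomials:

```python
# Minimal sketch: on polynomials p(Y), the operators D (differentiation) and
# X (multiplication by Y) satisfy D∘X - X∘D = identity, the Weyl relation.
from sympy import symbols, diff, expand

Y = symbols('Y')

def D(p):
    return diff(p, Y)

def X(p):
    return Y * p

p = Y**4 - 3*Y + 5
assert expand(D(X(p)) - X(D(p))) == expand(p)    # (DX - XD)(p) = p
```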

Pseudodifferential operator ring

The associative, possibly noncommutative ring $R$ has derivation $\delta \colon R \to R$. [63]

The pseudo-differential operator ring $R((\partial^{-1}))$ is a left $R$-module whose ring elements $L$ are formal series of the form: [63] [64] [65]

$L = \sum_{i \le n} a_i\, \partial^{\,i}, \qquad a_i \in R,$

where $n$ is the order of $L$.

The derivative operator $\partial$ satisfies $\partial \cdot a = a\, \partial + \delta(a)$ for every $a \in R$. [63]

The binomial coefficient, defined for every integer $i$ and nonnegative integer $k$, is $\binom{i}{k} = \dfrac{i (i-1) \cdots (i-k+1)}{k!}$.

Pseudo-differential operator multiplication is determined by: [63]

$\partial^{\,i} \cdot a = \sum_{k \ge 0} \binom{i}{k}\, \delta^{k}(a)\, \partial^{\,i-k}, \qquad a \in R,\ i \in \mathbb{Z}.$

Open problems

The Ritt problem asks whether there is an algorithm that determines if one prime differential ideal contains a second prime differential ideal, when characteristic sets identify both ideals. [66]

The Kolchin catenary conjecture states that, given an irreducible differential algebraic variety $V$ of dimension $d > 0$ and an arbitrary point $p \in V$, a long gap chain of irreducible differential algebraic subvarieties occurs from $p$ to $V$. [67]

The Jacobi bound conjecture concerns the upper bound for the order of a differential variety's irreducible components. The polynomials' orders determine a Jacobi number, and the conjecture is that the Jacobi number determines this bound. [68]

Citations

  1. Kolchin 1973
  2. Ritt 1950
  3. Kaplansky 1976
  4. Ritt 1932, pp. iii–iv
  5. Ritt 1930
  6. Ritt 1932
  7. Kolchin 1973, pp. 58–59
  8. Kolchin 1973, pp. 58–60
  9. Bronstein 2005, p. 76
  10. Sit 2002, pp. 3–4
  11. Kolchin 1973, pp. 61–62
  12. Buium 1994, p. 21
  13. Kaplansky 1976, p. 12
  14. Kaplansky 1976, pp. 45, 48, 56–57
  15. Kolchin 1973, pp. 126–129
  16. Marker 2000
  17. Hubert 2002, p. 8
  18. Li & Yuan 2019
  19. Boulier et al. 1995
  20. Mansfield 1991
  21. Ferro 2005
  22. Chardin 1991
  23. Wu 2005b
  24. Kolchin 1973, pp. 75–76
  25. Gao et al. 2009, p. 1141
  26. Hubert 2002, p. 10
  27. Ferro & Gerdt 2003, p. 83
  28. Wu 2005a, p. 4
  29. Boulier et al. 1995, p. 159
  30. Kolchin 1973, p. 75
  31. Ferro & Gerdt 2003, p. 84
  32. Sit 2002, p. 6
  33. Li & Yuan 2019, p. 294
  34. Kolchin 1973, p. 81
  35. Kolchin 1973, p. 82
  36. Kolchin 1973, p. 136
  37. Boulier et al. 1995, p. 160
  38. Morrison 1999
  39. Boulier et al. 1995, p. 158
  40. Boulier et al. 1995, p. 164
  41. Boulier et al. 2009b
  42. Rota, Kahaner & Odlyzko 1973, p. 694
  43. Kolchin 1973, p. 60
  44. Sit 2002, p. 4
  45. Dummit & Foote 2004, p. 343
  46. Bronstein 2005, pp. 41, 51, 53, 102, 299, 309
  47. Hubert 2002, pp. 41–47
  48. Harrington & VanGorder 2017
  49. Boulier 2007
  50. Boulier & Lemaire 2009a
  51. Clarkson & Mansfield 1994
  52. Diop 1992
  53. Buium 1994
  54. Gao et al. 2009
  55. Keller 2019, p. 48
  56. Keller 2019, pp. 50–51
  57. Keller 2019, pp. 58–59
  58. Hall 2015, p. 49
  59. Hall 2015, p. 51
  60. Jacobson 1979, p. 9
  61. Hall 2015, p. 247
  62. Lam 1991, pp. 7–8
  63. Parshin 1999, p. 268
  64. Dummit & Foote 2004, p. 337
  65. Taylor 1991
  66. Golubitsky, Kondratieva & Ovchinnikov 2009
  67. Freitag, Sánchez & Simmons 2016
  68. Lando 1970
