In mathematics, differential algebra is, broadly speaking, the area of mathematics that studies differential equations and differential operators as algebraic objects, with the aim of deriving properties of differential equations and operators without computing their solutions, much as polynomial algebras are used to study algebraic varieties, the solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.
More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations. [1] [2] [3]
A natural example of a differential field is the field ℂ(t) of rational functions in one variable over the complex numbers, where the derivation is differentiation with respect to t. More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.
Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. [4] His efforts led to an initial paper, Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations, and two books, Differential Equations From The Algebraic Standpoint and Differential Algebra. [5] [6] [2] Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups. [1]
A derivation ∂ on a ring R is a function ∂ : R → R such that ∂(a + b) = ∂a + ∂b and ∂(ab) = (∂a)b + a(∂b) (the Leibniz product rule)
for every a and b in R.
A derivation is linear over the integers, since these identities imply ∂(0) = ∂(1) = 0 and ∂(−a) = −∂a.
A differential ring is a commutative ring R equipped with one or more derivations that commute pairwise; that is, ∂₁(∂₂(a)) = ∂₂(∂₁(a)) for every pair of derivations ∂₁, ∂₂ and every a ∈ R. [7] When there is only one derivation, one often speaks of an ordinary differential ring; otherwise, one speaks of a partial differential ring.
A differential field is a differential ring that is also a field. A differential algebra A over a differential field K is a differential ring that contains K as a subring such that the restrictions to K of the derivations of A equal the derivations of K. (A more general definition is given below, which covers the case where K is not a field, and is essentially equivalent when K is a field.)
A Witt algebra is a differential ring that contains the field ℚ of the rational numbers. Equivalently, this is a differential algebra over ℚ, since ℚ can be considered as a differential field on which every derivation is the zero function.
The constants of a differential ring are the elements a such that ∂a = 0 for every derivation ∂. The constants of a differential ring form a subring, and the constants of a differential field form a subfield. [8] This meaning of "constant" generalizes the concept of a constant function, and must not be confused with the common meaning of a constant.
In the following identities, ∂ is a derivation of a differential ring R: [9] for every nonnegative integer n, ∂(aⁿ) = n aⁿ⁻¹ ∂a; and, if b is invertible in R, ∂(b⁻¹) = −b⁻² ∂b, whence ∂(a/b) = (b ∂a − a ∂b)/b².
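The derivation axioms are easy to verify concretely. The following is a minimal sketch (not from the article): a formal derivative on the polynomial ring ℤ[x], with polynomials represented as coefficient lists, checked against the derivation axioms ∂(a + b) = ∂a + ∂b and ∂(ab) = (∂a)b + a(∂b).

```python
def add(a, b):
    """Add two polynomials given as coefficient lists (index = degree)."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def mul(a, b):
    """Multiply two coefficient-list polynomials."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def deriv(a):
    """Formal derivative d/dx: sends sum(c_i x^i) to sum(i c_i x^(i-1))."""
    return [i * c for i, c in enumerate(a)][1:] or [0]

p = [1, 2, 3]      # 3x^2 + 2x + 1
q = [0, 1, 0, 4]   # 4x^3 + x

# Additivity: d(p + q) = dp + dq
assert deriv(add(p, q)) == add(deriv(p), deriv(q))
# Leibniz rule: d(pq) = (dp)q + p(dq)
assert deriv(mul(p, q)) == add(mul(deriv(p), q), mul(p, deriv(q)))
```

The same checks pass for any pair of integer polynomials, since the formal derivative is a genuine derivation of ℤ[x].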
A derivation operator or higher-order derivation is the composition of several derivations. As the derivations of a differential ring are supposed to commute, the order of the derivations does not matter, and a derivation operator may be written as ∂₁^e₁ ⋯ ∂ₙ^eₙ, where ∂₁, …, ∂ₙ are the derivations under consideration, e₁, …, eₙ are nonnegative integers, and the exponent of a derivation denotes the number of times this derivation is composed in the operator.
The sum e₁ + ⋯ + eₙ is called the order of the derivation operator. If the order is one, the derivation operator is one of the original derivations. If the order is zero, one has the identity function, which is generally considered as the unique derivation operator of order zero. With these conventions, the derivation operators form a free commutative monoid on the set of derivations under consideration.
A derivative of an element a of a differential ring is the result of applying a derivation operator to a; that is, with the above notation, ∂₁^e₁ ⋯ ∂ₙ^eₙ(a). A proper derivative is a derivative of positive order. [7]
A differential ideal I of a differential ring R is an ideal of the ring that is closed (stable) under the derivations of the ring; that is, ∂a ∈ I for every derivation ∂ and every a ∈ I. A differential ideal is said to be proper if it is not the whole ring. To avoid confusion, an ideal that is not a differential ideal is sometimes called an algebraic ideal.
The radical of a differential ideal is the same as its radical as an algebraic ideal, that is, the set of the ring elements that have a power in the ideal. The radical of a differential ideal is also a differential ideal. A radical or perfect differential ideal is a differential ideal that equals its radical. [10] A prime differential ideal is a differential ideal that is prime in the usual sense; that is, if a product belongs to the ideal, at least one of the factors belongs to the ideal. A prime differential ideal is always a radical differential ideal.
A discovery of Ritt is that, although the classical theory of algebraic ideals does not work for differential ideals, a large part of it can be extended to radical differential ideals, and this makes them fundamental in differential algebra.
The intersection of any family of differential ideals is a differential ideal, and the intersection of any family of radical differential ideals is a radical differential ideal. [11] It follows that, given a subset of a differential ring, there are three ideals generated by it, which are the intersections of, respectively, all algebraic ideals, all differential ideals, and all radical differential ideals that contain it. [11] [12]
The algebraic ideal generated by a subset S is the set of finite linear combinations of elements of S, and is commonly denoted as (S) or ⟨S⟩.
The differential ideal generated by S is the set of finite linear combinations of elements of S and of the derivatives of any order of these elements; it is commonly denoted as [S]. When S is finite, [S] is generally not finitely generated as an algebraic ideal.
The radical differential ideal generated by S is commonly denoted as {S}. There is no known way to characterize its elements in a manner similar to the two other cases.
A differential polynomial over a differential field K is a formalization of the concept of differential equation such that the known functions appearing in the equation belong to K, and the indeterminates are symbols for the unknown functions.
So, let K be a differential field, which is typically (but not necessarily) a field of rational fractions K = F(t₁, …, tₘ) (fractions of multivariate polynomials), equipped with derivations ∂ᵢ such that ∂ᵢ(tᵢ) = 1 and ∂ᵢ(tⱼ) = 0 if i ≠ j (the usual partial derivatives).
To define the ring K{y₁, …, yₙ} of differential polynomials over K with indeterminates y₁, …, yₙ and derivations ∂₁, …, ∂ₘ, one introduces an infinity of new indeterminates of the form Θyᵢ, where Θ is any derivation operator of positive order. With this notation, K{y₁, …, yₙ} is the set of polynomials in all these indeterminates, with the natural derivations (each polynomial involves only a finite number of indeterminates). In particular, if n = 1 and there is a single derivation ∂, one has K{y} = K[y, ∂y, ∂²y, …].
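This construction can be illustrated with a computer algebra sketch (sympy-based, not the article's own notation): applying the derivation d/dt to a differential polynomial in y automatically introduces higher derivatives of y as new indeterminates.

```python
# Sketch: the differential polynomial p = (y')^2 - 4y in K{y},
# with derivatives of y of every order available as fresh symbols.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')(t)

p = sp.Derivative(y, t)**2 - 4*y   # p = (y')^2 - 4y
dp = sp.diff(p, t)                 # apply the derivation to p

# d/dt gives 2 y' y'' - 4 y', which involves the new indeterminate y''
expected = 2*sp.Derivative(y, t)*sp.Derivative(y, t, 2) - 4*sp.Derivative(y, t)
assert sp.simplify(dp - expected) == 0
```

Each application of the derivation can raise the order, which is why infinitely many indeterminates are needed, and why only finitely many occur in any single polynomial.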
Even when n = 1, a ring of differential polynomials is not Noetherian. This makes the theory of this generalization of polynomial rings difficult. However, two facts allow such a generalization.
Firstly, a finite number of differential polynomials together involve only a finite number of indeterminates. It follows that every property of polynomials that involves a finite number of polynomials remains true for differential polynomials. In particular, greatest common divisors exist, and a ring of differential polynomials is a unique factorization domain.
The second fact is that, if the field K contains the field of rational numbers, the rings of differential polynomials over K satisfy the ascending chain condition on radical differential ideals. This theorem of Ritt is implied by its generalization, sometimes called the Ritt–Raudenbush basis theorem, which asserts that if R is a Ritt algebra (that is, a differential ring containing the field of rational numbers) [13] that satisfies the ascending chain condition on radical differential ideals, then the ring of differential polynomials over R satisfies the same property (one passes from the univariate to the multivariate case by applying the theorem iteratively). [14] [15]
This Noetherian property implies that, in a ring of differential polynomials, every radical differential ideal I is finitely generated as a radical differential ideal; this means that there exists a finite set S of differential polynomials such that I is the smallest radical differential ideal containing S. [16] This allows representing a radical differential ideal by such a finite set of generators, and computing with these ideals. However, some usual computations of the algebraic case cannot be extended. In particular, no algorithm is known for testing membership of an element in a radical differential ideal, or the equality of two radical differential ideals.
Another consequence of the Noetherian property is that a radical differential ideal can be uniquely expressed as the intersection of a finite number of prime differential ideals, called essential prime components of the ideal. [17]
Elimination methods are algorithms that preferentially eliminate a specified set of derivatives from a set of differential equations, commonly done to better understand and solve sets of differential equations.
Categories of elimination methods include characteristic set methods, differential Gröbner bases methods and resultant based methods. [1] [18] [19] [20] [21] [22] [23]
Common operations used in elimination algorithms include 1) ranking derivatives, polynomials, and polynomial sets, 2) identifying a polynomial's leading derivative, initial and separant, 3) polynomial reduction, and 4) creating special polynomial sets.
The ranking of derivatives is a total order that is also an admissible order: every proper derivative of a derivative ranks higher than that derivative, and the order is preserved by applying any derivation operator. [24] [25] [26]
Each derivative has an integer tuple, and a monomial order ranks the derivative by ranking the derivative's integer tuple. The integer tuple identifies the differential indeterminate and the derivative's multi-index, and may identify the derivative's order. Types of ranking include orderly rankings, which compare derivatives first by their order, and elimination rankings, which compare derivatives first by their differential indeterminate. [27]
In one common construction, the integer tuple identifies the differential indeterminate and the derivative's multi-index, and a lexicographic monomial order determines the derivative's rank. [28]
A differential polynomial p with leading derivative u_p can be written in the standard polynomial form p = a_d u_p^d + a_{d−1} u_p^{d−1} + ⋯ + a₀, where the coefficients aᵢ do not involve u_p; the initial of p is I_p = a_d, and the separant is S_p = ∂p/∂u_p. [24] [28]
For a polynomial set A, the separant set is S_A = { S_p : p ∈ A }, the initial set is I_A = { I_p : p ∈ A }, and the combined set is H_A = S_A ∪ I_A. [29]
A polynomial q is partially reduced (in partial normal form) with respect to a polynomial p when both are non-ground field elements and q contains no proper derivative of the leading derivative u_p. [30] [31] [29]
A polynomial q that is partially reduced with respect to p becomes reduced (in normal form) with respect to p if the degree of u_p in q is less than the degree of u_p in p. [30] [31] [29]
An autoreduced polynomial set has every polynomial reduced with respect to every other polynomial of the set. Every autoreduced set is finite. An autoreduced set is triangular meaning each polynomial element has a distinct leading derivative. [32] [30]
Ritt's reduction algorithm identifies nonnegative integers i_{A_k}, s_{A_k} and transforms a differential polynomial f, using pseudodivision, into a lower or equally ranked remainder polynomial f_red that is reduced with respect to the autoreduced polynomial set A. The algorithm's first step partially reduces the input polynomial, and its second step fully reduces the polynomial. The reduction satisfies [30] ∏ₖ I_{A_k}^{i_{A_k}} S_{A_k}^{s_{A_k}} · f ≡ f_red (mod [A]).
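A single pseudodivision step of this kind can be sketched as follows (a hedged illustration, not Ritt's full algorithm): the derivatives y, y′, y″ are modeled as ordinary variables y0, y1, y2, and the proper derivative y″ is eliminated by pseudodividing against the derivative of a chain polynomial.

```python
# For p = y1**2 - y0 (that is, (y')^2 = y), the separant is S_p = 2*y1
# and applying the derivation to p gives dp = 2*y1*y2 - y1, with leader y2.
import sympy as sp

y0, y1, y2 = sp.symbols('y0 y1 y2')
p = y1**2 - y0
dp = 2*y1*y2 - y1          # derivation applied to p
S_p = sp.diff(p, y1)       # separant of p with respect to its leader y1
assert S_p == 2*y1

# Partially reduce f = y2 (i.e. y'') against dp by pseudodivision in y2.
# sympy's prem returns lc(dp)^k * f - q*dp, which eliminates y2 and
# multiplies f by the separant, as in Ritt's reduction formula.
f = y2
r = sp.prem(f, dp, y2)
assert sp.expand(r - y1) == 0   # S_p * y'' is congruent to y' modulo [p]
```

The remainder y1 contains no proper derivative of the leader of p, so it is partially reduced with respect to p, matching the role of the first step of Ritt's algorithm.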
A set A = {A₁, …, Aₘ} is a differential chain if the leading derivatives satisfy u_{A₁} < ⋯ < u_{Aₘ} in rank and each Aᵢ is reduced with respect to the others. [33]
Autoreduced sets and each contain ranked polynomial elements. This procedure ranks two autoreduced sets by comparing pairs of identically indexed polynomials from both autoreduced sets. [34]
A characteristic set of an ideal I is the lowest ranked autoreduced subset among all of I's autoreduced subsets whose polynomial separants are non-members of I. [35]
The delta polynomial applies to a polynomial pair (p, q) whose leaders u_p, u_q share a common derivative. With θ_p, θ_q the least derivation operators such that θ_p(u_p) = θ_q(u_q) is this least common derivative, the delta polynomial is Δ(p, q) = S_q · θ_p(p) − S_p · θ_q(q). [36] [37]
A coherent set is a polynomial set that reduces its delta polynomial pairs to zero. [36] [37]
A regular system consists of an autoreduced and coherent set of differential equations and an inequation set, with the inequation set reduced with respect to the equation set. [37]
Regular differential ideal and regular algebraic ideal are saturation ideals that arise from a regular system. [37] Lazard's lemma states that the regular differential and regular algebraic ideals are radical ideals. [38]
The Rosenfeld–Gröbner algorithm decomposes the radical differential ideal as a finite intersection of regular radical differential ideals. These regular differential radical ideals, represented by characteristic sets, are not necessarily prime ideals and the representation is not necessarily minimal. [39]
The membership problem is to determine if a differential polynomial is a member of an ideal generated from a set of differential polynomials . The Rosenfeld–Gröbner algorithm generates sets of Gröbner bases. The algorithm determines that a polynomial is a member of the ideal if and only if the partially reduced remainder polynomial is a member of the algebraic ideal generated by the Gröbner bases. [40]
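The algebraic step of this membership test can be sketched on a hypothetical small system (a sympy illustration, not the Rosenfeld–Gröbner algorithm itself): a polynomial belongs to an algebraic ideal exactly when its remainder on division by a Gröbner basis of the ideal is zero.

```python
# Sketch: ideal membership via Groebner-basis reduction.
import sympy as sp

x, y = sp.symbols('x y')
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

f = 2*y**2 - 1              # candidate: f = (x^2+y^2-1) - (x-y)(x+y)
_, remainder = G.reduce(f)  # divide f by the basis
assert remainder == 0       # zero remainder <=> f is in the ideal
```

In the differential setting, the partial reduction described above is performed first, and only the resulting remainder is tested against the algebraic Gröbner bases.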
The Rosenfeld–Gröbner algorithm facilitates creating Taylor series expansions of solutions to the differential equations. [41]
Example 1: The field of meromorphic functions in one variable, with the single standard derivation d/dz, is a differential field.
Example 2: A differential field may also be obtained by taking the derivation to be a linear differential operator acting on each polynomial p.
Define E^a as the shift operator, (E^a p)(y) = p(y + a), for a polynomial p.
A shift-invariant operator T commutes with the shift operator: E^a T = T E^a.
The Pincherle derivative, a derivation of a shift-invariant operator T, is T′ = T x − x T, where x denotes the operator of multiplication by the variable. [42]
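As an illustrative check (a sympy sketch, not from the source), the Pincherle derivative T′ = Tx − xT of the shift operator E^a satisfies (E^a)′ = a·E^a when applied to polynomials:

```python
# Verify (E^a)' = a * E^a on a sample polynomial.
import sympy as sp

y, a = sp.symbols('y a')

def shift(p):              # (E^a p)(y) = p(y + a)
    return p.subs(y, y + a)

def x_op(p):               # multiplication-by-y operator
    return y * p

p = y**3 - 2*y
lhs = shift(x_op(p)) - x_op(shift(p))   # (T x - x T)(p)
rhs = a * shift(p)                      # (a E^a)(p)
assert sp.expand(lhs - rhs) == 0
```

The computation is immediate: shifting y·p(y) gives (y + a)·p(y + a), and subtracting y·p(y + a) leaves a·p(y + a).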
In the ring of integers ℤ and in the field of rational numbers ℚ, every element is a constant, because the identities ∂(0) = ∂(1) = 0 force ∂(n) = 0 for every integer n, and hence for every rational number. The constants form the subring (respectively, subfield) of constants. [43]
An element p generates the differential ideal [p] in a differential ring. [44]
Any ring with identity is a ℤ-algebra. [45] Thus a differential ring is a ℤ-algebra.
If a ring R is a subring of the center of a unital ring S, then S is an R-algebra. [45] Thus, a differential ring is an algebra over its differential subring. This is the natural structure of an algebra over its subring. [30]
In a differential ring of univariate polynomials, a monic irreducible polynomial p is normal if it does not divide its derivative (equivalently, gcd(p, ∂p) = 1, so p is squarefree), and special if p divides its derivative, in which case p generates a differential ideal. Each nonconstant polynomial also has a leading derivative and an initial, as defined above.
Symbolic integration uses algorithms involving polynomials and their derivatives, such as Hermite reduction, the Czichowski algorithm, the Lazard–Rioboo–Trager algorithm, the Horowitz–Ostrogradsky algorithm, squarefree factorization, and splitting factorization into special and normal polynomials. [46]
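These algorithms are implemented in computer algebra systems; as an illustration, sympy's rational-function integrator uses Hermite-style reduction together with the Lazard–Rioboo–Trager log-part algorithm (the code below only exercises the public `integrate` interface, which is a sketch of the use, not of the internals):

```python
# Integrate a rational function and verify the antiderivative.
import sympy as sp

x = sp.symbols('x')
f = 1 / (x**2 + 1)**2
F = sp.integrate(f, x)

# Verify by differentiating back: d/dx F - f must vanish identically.
assert sp.simplify(sp.diff(F, x) - f) == 0
```

The result combines a rational part (from Hermite reduction) with a logarithmic/arctangent part (from the log-part algorithm), which is exactly the split into normal and special contributions described above.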
Differential algebra can determine if a set of differential polynomial equations has a solution. A total order ranking may identify algebraic constraints. An elimination ranking may determine if one or a selected group of independent variables can express the differential equations. Using triangular decomposition and elimination order, it may be possible to solve the differential equations one differential indeterminate at a time in a step-wise method. Another approach is to create a class of differential equations with a known solution form; matching a differential equation to its class identifies the equation's solution. Methods are available to facilitate the numerical integration of a differential-algebraic system of equations. [47]
In a study of non-linear dynamical systems with chaos, researchers used differential elimination to reduce differential equations to ordinary differential equations involving a single state variable. They were successful in most cases, and this facilitated developing approximate solutions, efficiently evaluating chaos, and constructing Lyapunov functions. [48] Researchers have applied differential elimination to understanding cellular biology, compartmental biochemical models, parameter estimation and quasi-steady state approximation (QSSA) for biochemical reactions. [49] [50] Using differential Gröbner bases, researchers have investigated non-classical symmetry properties of non-linear differential equations. [51] Other applications include control theory, model theory, and algebraic geometry. [52] [16] [53] Differential algebra also applies to differential-difference equations. [54]
A graded vector space V is a collection of vector spaces V_m with integer degree m. A direct sum can represent this graded vector space: V = ⊕_m V_m. [55]
A differential graded vector space or chain complex is a graded vector space V with a differential map or boundary map d_m : V_m → V_{m−1} satisfying d_{m−1} ∘ d_m = 0. [56]
A cochain complex is a graded vector space V with a differential map or coboundary map d_m : V_m → V_{m+1} satisfying d_{m+1} ∘ d_m = 0. [56]
A differential graded algebra is a graded algebra A with a linear derivation d satisfying d ∘ d = 0 that follows the graded Leibniz product rule: d(ab) = (da)b + (−1)^|a| a(db), where |a| is the degree of a. [57]
A Lie algebra is a finite-dimensional real or complex vector space 𝔤 with a bilinear bracket operator [·, ·] : 𝔤 × 𝔤 → 𝔤 satisfying skew symmetry, [a, b] = −[b, a], and the Jacobi identity,
[a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0, for all a, b, c ∈ 𝔤. [58]
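Both properties can be checked numerically on a concrete Lie algebra (a numpy sketch, not from the source): square matrices under the commutator bracket [A, B] = AB − BA.

```python
# The commutator bracket on matrices is skew-symmetric and
# satisfies the Jacobi identity.
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

def br(X, Y):
    return X @ Y - Y @ X

assert np.allclose(br(A, B), -br(B, A))                       # skew symmetry
jacobi = br(A, br(B, C)) + br(B, br(C, A)) + br(C, br(A, B))  # Jacobi identity
assert np.allclose(jacobi, np.zeros((3, 3)))
```

The identities hold for any matrices, since they follow from associativity of matrix multiplication alone.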
The adjoint operator, ad_a(b) = [a, b], is a derivation of the bracket, because the adjoint's effect on the binary bracket operation is analogous to the derivation's effect on the binary product operation: ad_a([b, c]) = [ad_a(b), c] + [b, ad_a(c)]. This is the inner derivation determined by a. [59] [60]
The universal enveloping algebra U(𝔤) of a Lie algebra 𝔤 is a maximal associative algebra with identity, generated by the elements of 𝔤 and containing products defined by the bracket operation. Maximal means that a linear homomorphism maps the universal algebra to any other algebra that otherwise has these properties. The adjoint operator, ad_a(b) = ab − ba, is a derivation following the Leibniz product rule: ad_a(bc) = ad_a(b)·c + b·ad_a(c), for all b, c ∈ U(𝔤). [61]
The Weyl algebra is an algebra over a ring K with generators x₁, …, xₙ, y₁, …, yₙ and a specific noncommutative product: [62] yᵢ · xᵢ = xᵢ · yᵢ + 1.
All other indeterminate products are commutative: for i ≠ j, xᵢ · xⱼ = xⱼ · xᵢ, yᵢ · yⱼ = yⱼ · yᵢ, and xᵢ · yⱼ = yⱼ · xᵢ.
A Weyl algebra can represent the derivations for a commutative ring's polynomials K[x₁, …, xₙ]. The Weyl algebra's elements are endomorphisms: the elements xᵢ act as multiplication by xᵢ, the elements yᵢ act as the standard derivations ∂/∂xᵢ, and map compositions generate linear differential operators. The D-module is a related approach for understanding differential operators. [62]
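The defining relation y·x = x·y + 1 can be seen directly in this representation (a sympy sketch, not from the source): with X acting by multiplication and D by differentiation on polynomials, the commutator D∘X − X∘D is the identity operator.

```python
# Verify the Weyl-algebra relation [D, X] = 1 on a sample polynomial.
import sympy as sp

x = sp.symbols('x')

def X(p):                  # multiplication endomorphism: p -> x*p
    return x * p

def D(p):                  # differentiation endomorphism: p -> dp/dx
    return sp.diff(p, x)

p = 3*x**4 + x + 5
assert sp.expand(D(X(p)) - X(D(p)) - p) == 0   # (DX - XD)(p) = p
```

This is the product rule in disguise: D(x·p) = p + x·D(p), so subtracting X(D(p)) leaves p itself.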
Let R be an associative, possibly noncommutative ring with a derivation ∂. [63]
The pseudo-differential operator ring R((∂⁻¹)) is a left R-module whose elements L are formal sums L = Σ_{i ≤ n} aᵢ ∂ⁱ with coefficients aᵢ ∈ R. [63] [64] [65]
The derivative operator satisfies the commutation rule ∂ · a = a · ∂ + ∂(a). [63]
The binomial coefficient is generalized to negative upper index by (i choose k) = i(i−1)⋯(i−k+1)/k!.
Pseudo-differential operator multiplication extends the commutation rule to all integer powers of ∂: [63] ∂ⁱ · a = Σ_{k ≥ 0} (i choose k) ∂ᵏ(a) ∂^{i−k}.
The Ritt problem asks whether there is an algorithm that determines if one prime differential ideal contains a second prime differential ideal, when characteristic sets identify both ideals. [66]
The Kolchin catenary conjecture states that, given a d-dimensional irreducible differential algebraic variety V with d > 0 and an arbitrary point p ∈ V, a long gap chain of irreducible differential algebraic subvarieties occurs from p to V. [67]
The Jacobi bound conjecture concerns the upper bound for the order of a differential variety's irreducible components. The polynomials' orders determine a Jacobi number, and the conjecture is that the Jacobi number determines this bound. [68]