In mathematics a P-recursive equation is a linear equation of sequences where the coefficient sequences can be represented as polynomials. P-recursive equations are linear recurrence equations (or linear recurrence relations or linear difference equations) with polynomial coefficients. These equations play an important role in different areas of mathematics, specifically in combinatorics. The sequences which are solutions of these equations are called holonomic, P-recursive or D-finite.
From the late 1980s, the first algorithms were developed to find solutions for these equations. Sergei A. Abramov, Marko Petkovšek and Mark van Hoeij described algorithms to find polynomial, rational, hypergeometric and d'Alembertian solutions.
Let $\mathbb{K}$ be a field of characteristic zero (for example $\mathbb{K} = \mathbb{Q}$), $p_k(n) \in \mathbb{K}[n]$ polynomials for $k = 0, \dots, r$, $f \in \mathbb{K}^{\mathbb{N}}$ a sequence and $y \in \mathbb{K}^{\mathbb{N}}$ an unknown sequence. The equation

$$\sum_{k=0}^{r} p_k(n) \, y(n+k) = f(n)$$

is called a linear recurrence equation with polynomial coefficients (all recurrence equations in this article are of this form). If $p_0$ and $p_r$ are both nonzero, then $r$ is called the order of the equation. If $f$ is zero the equation is called homogeneous, otherwise it is called inhomogeneous.
This can also be written as $Ly = f$ where $L = \sum_{k=0}^{r} p_k(n) N^k$ is a linear recurrence operator with polynomial coefficients and $N$ is the shift operator, i.e. $N y(n) = y(n+1)$.
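For example, the factorial sequence $y(n) = n!$ satisfies the homogeneous first-order equation $y(n+1) - (n+1) \, y(n) = 0$, i.e. $Ly = 0$ with $L = N - (n+1)$.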
Let $\sum_{k=0}^{r} p_k(n) \, y(n+k) = f(n)$ or equivalently $Ly = f$ be a recurrence equation with polynomial coefficients. There exist several algorithms which compute solutions of this equation. These algorithms can compute polynomial, rational, hypergeometric and d'Alembertian solutions. The solution of a homogeneous equation is given by the kernel of the linear recurrence operator: $\ker L = \{ y \in \mathbb{K}^{\mathbb{N}} \colon Ly = 0 \}$. As a subspace of the space of sequences this kernel has a basis.[1] Let $\{ y^{(1)}, \dots, y^{(m)} \}$ be a basis of $\ker L$, then the formal sum $c_1 y^{(1)} + \dots + c_m y^{(m)}$ for arbitrary constants $c_1, \dots, c_m \in \mathbb{K}$ is called the general solution of the homogeneous problem $Ly = 0$. If $\tilde{y}$ is a particular solution of $Ly = f$, i.e. $L \tilde{y} = f$, then $\tilde{y} + c_1 y^{(1)} + \dots + c_m y^{(m)}$ is also a solution of the inhomogeneous problem and it is called the general solution of the inhomogeneous problem.
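For instance, the inhomogeneous equation $y(n+1) - y(n) = 1$ has the particular solution $\tilde{y}(n) = n$, and the kernel of $L = N - 1$ is spanned by the constant sequence $y^{(1)}(n) = 1$; hence the general solution is $y(n) = n + c_1$ with an arbitrary constant $c_1 \in \mathbb{K}$.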
In the late 1980s Sergei A. Abramov described an algorithm which finds the general polynomial solution of a recurrence equation, i.e. $y(n) \in \mathbb{K}[n]$, with a polynomial right-hand side. He (and a few years later Marko Petkovšek) gave a degree bound for polynomial solutions. This way the problem can simply be solved by considering a system of linear equations.[2][3][4] In 1995 Abramov, Bronstein and Petkovšek showed that the polynomial case can be solved more efficiently by considering a power series solution of the recurrence equation in a specific power basis (i.e. not the ordinary basis $(n^k)_{k \in \mathbb{N}}$).[5]
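The final linear-algebra step can be illustrated with a short computation, for example in Python with SymPy. The sketch below (with the illustrative helper name polynomial_solutions) is not an implementation of Abramov's or Petkovšek's algorithm; it assumes the degree bound d is already known and merely plugs a polynomial ansatz into the recurrence and solves the resulting linear system.

```python
# Sketch of the final step of the polynomial-solution method: once a degree
# bound d is known, plug a polynomial ansatz into the recurrence and solve
# the resulting linear system for the unknown coefficients.
from sympy import symbols, Poly, expand, linsolve

n = symbols('n')

def polynomial_solutions(p, f, d):
    """General polynomial solution of sum_k p[k](n) * y(n+k) = f(n)
    among polynomials y of degree at most d, or None if there is none.
    p : list of coefficient polynomials p_0, ..., p_r in n
    f : polynomial right-hand side in n
    d : degree bound (assumed to be given)"""
    c = symbols(f'c0:{d + 1}')                        # unknown coefficients c_0, ..., c_d
    y = sum(ci * n**i for i, ci in enumerate(c))      # ansatz y(n) = c_0 + c_1 n + ... + c_d n^d
    lhs = sum(pk * y.subs(n, n + k) for k, pk in enumerate(p))
    eqs = Poly(expand(lhs - f), n).all_coeffs()       # every power of n must cancel
    sol = linsolve(eqs, list(c))
    if not sol:
        return None
    return expand(y.subs(dict(zip(c, next(iter(sol))))))

# Example: y(n+1) - y(n) = 2n + 1 has the polynomial solutions y(n) = n**2 + c0;
# the call below prints c0 + n**2.
print(polynomial_solutions([-1, 1], 2*n + 1, 2))
```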
The other algorithms for finding more general solutions (e.g. rational or hypergeometric solutions) also rely on algorithms which compute polynomial solutions.
In 1989 Sergei A. Abramov showed that a general rational solution, i.e. $y(n) \in \mathbb{K}(n)$, with polynomial right-hand side $f(n) \in \mathbb{K}[n]$, can be found by using the notion of a universal denominator. A universal denominator is a polynomial $u$ such that the denominator of every rational solution divides $u$. Abramov showed how this universal denominator can be computed by using only the first and the last coefficient polynomials $p_0$ and $p_r$. Substituting this universal denominator for the unknown denominator of $y$, all rational solutions can be found by computing all polynomial solutions of a transformed equation.[6]
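For example, the homogeneous equation $(n+1) \, y(n+1) - n \, y(n) = 0$ admits $u(n) = n$ as a universal denominator: substituting $y(n) = z(n)/n$ with an unknown polynomial $z(n)$ transforms the equation into $z(n+1) - z(n) = 0$, whose polynomial solutions are the constants, so every rational solution is of the form $y(n) = c/n$.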
A sequence $y(n)$ is called hypergeometric if the ratio of two consecutive terms is a rational function in $n$, i.e. $y(n+1)/y(n) \in \mathbb{K}(n)$. This is the case if and only if the sequence is the solution of a first-order recurrence equation with polynomial coefficients. The set of hypergeometric sequences is not a subspace of the space of sequences as it is not closed under addition.
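For example, $y(n) = n!$ is hypergeometric since $y(n+1)/y(n) = n+1$. The sequences $2^n$ and the constant sequence $1$ are also hypergeometric, but their sum $2^n + 1$ is not, because the ratio $(2^{n+1}+1)/(2^n+1)$ is not a rational function of $n$.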
In 1992 Marko Petkovšek gave an algorithm to get the general hypergeometric solution of a recurrence equation where the right-hand side $f$ is the sum of hypergeometric sequences. The algorithm makes use of the Gosper–Petkovšek normal form of a rational function. With this specific representation it is again sufficient to consider polynomial solutions of a transformed equation.[3]
A different and more efficient approach is due to Mark van Hoeij. Considering the roots of the first and the last coefficient polynomials $p_0$ and $p_r$ – called singularities – one can build a solution step by step making use of the fact that every hypergeometric sequence $y(n)$ has a representation of the form

$$y(n) = c \, r(n) \, z^n \, \Gamma(n - a_1)^{e_1} \Gamma(n - a_2)^{e_2} \cdots \Gamma(n - a_s)^{e_s}$$

for some $c \in \overline{\mathbb{K}}$, $z \in \overline{\mathbb{K}}$, $r(n) \in \overline{\mathbb{K}}(n)$, $s \in \mathbb{N}$, $a_1, \dots, a_s \in \overline{\mathbb{K}}$ with $a_i - a_j \notin \mathbb{Z}$ for $i \neq j$ and $e_1, \dots, e_s \in \mathbb{Z}$. Here $\Gamma$ denotes the Gamma function and $\overline{\mathbb{K}}$ the algebraic closure of the field $\mathbb{K}$. Then the $a_i$ have to be singularities of the equation (i.e. roots of $p_0$ or $p_r$). Furthermore one can compute bounds for the exponents $e_i$. For fixed values $a_i, e_i$ it is possible to make an ansatz which gives candidates for $z$. For a specific $z$ one can again make an ansatz to get the rational function $r(n)$ by Abramov's algorithm. Considering all possibilities one gets the general solution of the recurrence equation.[7][8]
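For instance, the factorial sequence $n! = \Gamma(n+1)$ has such a representation with $c = 1$, $z = 1$, $r(n) = 1$, $s = 1$, $a_1 = -1$ and $e_1 = 1$.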
A sequence $y$ is called d'Alembertian if $y = h_1 \sum h_2 \sum \cdots \sum h_k$ for some hypergeometric sequences $h_1, \dots, h_k$, and $y = \sum x$ means that $\Delta y = x$ where $\Delta$ denotes the difference operator, i.e. $\Delta y(n) = y(n+1) - y(n)$. This is the case if and only if there are first-order linear recurrence operators $L_1, \dots, L_k$ with rational coefficients such that $L_k \cdots L_1 y = 0$.[4]
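For example, the harmonic numbers $H(n) = \sum_{k=1}^{n} 1/k$ form a d'Alembertian sequence which is not hypergeometric: $\Delta H(n) = 1/(n+1)$ is hypergeometric, and $H$ is annihilated by the product $L_2 L_1$ of the first-order operators $L_1 = N - 1$ and $L_2 = (n+2) N - (n+1)$.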
In 1994 Abramov and Petkovšek described an algorithm which computes the general d'Alembertian solution of a recurrence equation. This algorithm computes hypergeometric solutions and reduces the order of the recurrence equation recursively.[9]
The number of signed permutation matrices of size $n \times n$ can be described by the sequence $y(n)$. A signed permutation matrix is a square matrix which has exactly one nonzero entry in every row and in every column. The nonzero entries can be $\pm 1$. The sequence is determined by the linear recurrence equation with polynomial coefficients

$$y(n+2) = 2 \, y(n+1) + 4 (n+1)^2 \, y(n)$$

and the initial values $y(0) = 1$, $y(1) = 2$. Applying an algorithm to find hypergeometric solutions one can find the general hypergeometric solution

$$y(n) = c \, 2^n \, n!$$

for some constant $c$. Also considering the initial values, the sequence $y(n) = 2^n \, n!$ describes the number of signed permutation matrices.[10]
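The closed form can be checked numerically against the recurrence, for example with the following short Python sketch.

```python
# Iterate the recurrence y(n+2) = 2*y(n+1) + 4*(n+1)**2 * y(n) with
# y(0) = 1, y(1) = 2 and compare against the closed form 2**n * n!.
from math import factorial

y = [1, 2]
for n in range(6):
    y.append(2 * y[n + 1] + 4 * (n + 1) ** 2 * y[n])

assert all(y[n] == 2 ** n * factorial(n) for n in range(len(y)))
print(y)  # [1, 2, 8, 48, 384, 3840, 46080, 645120]
```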
The number of involutions $y(n)$ of a set with $n$ elements is given by the recurrence equation

$$y(n+2) = y(n+1) + (n+1) \, y(n)$$

with initial values $y(0) = 1$ and $y(1) = 1$.
Applying, for example, Petkovšek's algorithm it is possible to see that there is no polynomial, rational or hypergeometric solution for this recurrence equation.[4]
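Even though there is no hypergeometric closed form, the terms of the sequence are easily computed from the recurrence itself, for example with a few lines of Python.

```python
# First involution numbers from y(n+2) = y(n+1) + (n+1)*y(n) with y(0) = y(1) = 1.
y = [1, 1]
for n in range(8):
    y.append(y[n + 1] + (n + 1) * y[n])
print(y)  # [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620]
```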
A function $F(n,k)$ is called hypergeometric if $\frac{F(n+1,k)}{F(n,k)}, \frac{F(n,k+1)}{F(n,k)} \in \mathbb{K}(n,k)$ where $\mathbb{K}(n,k)$ denotes the rational functions in $n$ and $k$. A hypergeometric sum is a finite sum of the form $f(n) = \sum_k F(n,k)$ where $F(n,k)$ is hypergeometric. Zeilberger's creative telescoping algorithm can transform such a hypergeometric sum into a recurrence equation with polynomial coefficients. This equation can then be solved to get, for example, a linear combination of hypergeometric solutions, which is called a closed form solution of $f(n)$.[4]
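A standard example: for $F(n,k) = \binom{n}{k}$ and $f(n) = \sum_k \binom{n}{k}$, creative telescoping yields the recurrence $f(n+1) - 2 f(n) = 0$ with polynomial (here constant) coefficients; its general hypergeometric solution is $c \, 2^n$, and the initial value $f(0) = 1$ gives the closed form $f(n) = 2^n$.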