Abramov's algorithm

In mathematics, particularly in computer algebra, Abramov's algorithm computes all rational solutions of a linear recurrence equation with polynomial coefficients. The algorithm was published by Sergei A. Abramov in 1989. [1] [2]

Universal denominator

The main concept in Abramov's algorithm is a universal denominator. Let $\mathbb{K}$ be a field of characteristic zero. The dispersion $\operatorname{dis}(p,q)$ of two polynomials $p, q \in \mathbb{K}[n]$ is defined as

$$\operatorname{dis}(p,q) = \max \left( \{ k \in \mathbb{N} \,:\, \deg \gcd\big( p(n), q(n+k) \big) \geq 1 \} \cup \{-1\} \right),$$

where $\mathbb{N}$ denotes the set of non-negative integers. The dispersion is therefore the maximum $k \in \mathbb{N}$ such that the polynomial $p$ and the $k$-times shifted polynomial $q$ have a common factor; it is $-1$ if no such $k$ exists. The dispersion can be computed as the largest non-negative integer root of the resultant $\operatorname{res}_n\big( p(n), q(n+k) \big) \in \mathbb{K}[k]$. [3] [4] Let

$$\sum_{k=0}^{r} p_k(n)\, y(n+k) = f(n)$$

be a recurrence equation of order $r$ with polynomial coefficients $p_k \in \mathbb{K}[n]$, polynomial right-hand side $f \in \mathbb{K}[n]$ and rational sequence solution $y(n) \in \mathbb{K}(n)$. It is possible to write $y(n) = p(n)/q(n)$ for two relatively prime polynomials $p, q \in \mathbb{K}[n]$. Let $D = \operatorname{dis}\big( p_r(n-r), p_0(n) \big)$ and

$$u(n) = \gcd\left( [p_r(n-r)]^{\underline{D+1}},\ [p_0(n+D)]^{\underline{D+1}} \right),$$

where $[p(n)]^{\underline{k}} = p(n)\, p(n-1) \cdots p(n-k+1)$ denotes the falling factorial of a function. Then $q(n)$ divides $u(n)$. So the polynomial $u(n)$ can be used as a denominator for all rational solutions $y(n)$ and hence it is called a universal denominator. [5]
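Both quantities can be computed directly from these definitions. The following sketch uses Python with sympy; the helper names dispersion and universal_denominator are illustrative and not part of any standard library.

    # Sketch (sympy): dispersion via the resultant, universal denominator via
    # the gcd of the two falling-factorial products. Names are illustrative.
    from sympy import symbols, Poly, resultant, roots, gcd, prod, S

    n, k = symbols('n k')

    def dispersion(p, q):
        """Largest k in N with deg(gcd(p(n), q(n+k))) >= 1, or -1 if none exists."""
        res = resultant(p, q.subs(n, n + k), n)            # a polynomial in k
        cands = [r for r in roots(Poly(res, k)) if r.is_Integer and r >= 0]
        return max(cands) if cands else -1

    def universal_denominator(p_lead, p_trail, r):
        """u(n) = gcd of the (D+1)-fold falling factorials of p_r(n-r) and p_0(n+D)."""
        A = p_lead.subs(n, n - r)                          # p_r(n - r)
        B = p_trail                                        # p_0(n)
        D = dispersion(A, B)
        if D < 0:
            return S.One                                   # every rational solution is a polynomial
        left = prod(A.subs(n, n - j) for j in range(D + 1))
        right = prod(B.subs(n, n + j) for j in range(D + 1))
        return gcd(left, right)

    # First-order example from the section below: p_0(n) = n - 1, p_1(n) = -(n + 2).
    # universal_denominator(-(n + 2), n - 1, 1) gives (n - 1)*n*(n + 1) up to a constant.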

Algorithm

Let again

$$\sum_{k=0}^{r} p_k(n)\, y(n+k) = f(n)$$

be a recurrence equation with polynomial coefficients and $u(n)$ a universal denominator. After substituting $y(n) = z(n)/u(n)$ for an unknown polynomial $z(n) \in \mathbb{K}[n]$ and setting $\ell(n) = \operatorname{lcm}\big( u(n), u(n+1), \dots, u(n+r) \big)$ the recurrence equation is equivalent to

$$\sum_{k=0}^{r} p_k(n)\, \frac{z(n+k)}{u(n+k)}\, \ell(n) = f(n)\, \ell(n).$$

As the denominators $u(n+k)$ cancel against $\ell(n)$, this is a linear recurrence equation with polynomial coefficients which can be solved for an unknown polynomial solution $z(n)$. There are algorithms to find such polynomial solutions. The solutions for $z(n)$ can then be used again to compute the rational solutions $y(n) = z(n)/u(n)$. [2]

algorithm rational_solutions is
    input: Linear recurrence equation $\sum_{k=0}^{r} p_k(n)\, y(n+k) = f(n)$.
    output: The general rational solution $y(n)$ if there are any solutions, otherwise false.

    $D = \operatorname{dis}\big( p_r(n-r), p_0(n) \big)$
    $u(n) = \gcd\left( [p_r(n-r)]^{\underline{D+1}},\ [p_0(n+D)]^{\underline{D+1}} \right)$
    $\ell(n) = \operatorname{lcm}\big( u(n), u(n+1), \dots, u(n+r) \big)$
    Solve $\sum_{k=0}^{r} p_k(n)\, \frac{z(n+k)}{u(n+k)}\, \ell(n) = f(n)\, \ell(n)$ for general polynomial solution $z(n)$
    if solution $z(n)$ exists then
        return general solution $y(n) = z(n)/u(n)$
    else
        return false
    end if
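A compact version of this driver can be written on top of the helpers sketched above; the call to sympy's rsolve_poly stands in for the "solve for a general polynomial solution" step and is an assumption about available tooling, not part of Abramov's algorithm itself.

    # Sketch of the driver (sympy), reusing dispersion/universal_denominator from above.
    # rsolve_poly serves as the "find a polynomial solution" step; treat it as illustrative.
    from functools import reduce
    from sympy import lcm, cancel, together, expand
    from sympy.solvers.recurr import rsolve_poly

    def rational_solutions(coeffs, f, n):
        """coeffs = [p_0, ..., p_r] as polynomials in n; returns y(n) or None."""
        r = len(coeffs) - 1
        u = universal_denominator(coeffs[r], coeffs[0], r)
        l = reduce(lcm, [u.subs(n, n + j) for j in range(r + 1)])
        # Coefficients of the transformed equation in z(n); each u(n+j) cancels into l.
        new_coeffs = [cancel(coeffs[j] * l / u.subs(n, n + j)) for j in range(r + 1)]
        z = rsolve_poly(new_coeffs, expand(f * l), n)
        if z is None:
            return None                     # no rational solution exists
        return together(z / u)

    # Example from the next section: (n-1) y(n) - (n+2) y(n+1) = 0.
    # rational_solutions([n - 1, -(n + 2)], 0, n) should give C/((n - 1)*n*(n + 1))
    # for an arbitrary constant C (the constant's exact name depends on the solver).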

Example

The homogeneous recurrence equation of order $1$

$$(n-1)\, y(n) - (n+2)\, y(n+1) = 0$$

over $\mathbb{Q}$ has a rational solution. It can be computed by considering the dispersion

$$D = \operatorname{dis}\big( p_1(n-1), p_0(n) \big) = \operatorname{dis}\big( -(n+1),\ n-1 \big) = 2.$$

This yields the following universal denominator:

$$u(n) = \gcd\left( [p_1(n-1)]^{\underline{3}},\ [p_0(n+2)]^{\underline{3}} \right) = (n-1)\, n\, (n+1)$$

and

$$\ell(n) = \operatorname{lcm}\big( u(n), u(n+1) \big) = (n-1)\, n\, (n+1)\, (n+2).$$

Multiplying the original recurrence equation with $\ell(n)$ and substituting $y(n) = z(n)/u(n)$ leads to

$$(n-1)(n+2)\, z(n) - (n+2)(n-1)\, z(n+1) = 0.$$

This equation has the polynomial solution $z(n) = c$ for an arbitrary constant $c \in \mathbb{Q}$. Using $u(n)$, the general rational solution is

$$y(n) = \frac{z(n)}{u(n)} = \frac{c}{(n-1)\, n\, (n+1)}$$

for arbitrary $c \in \mathbb{Q}$.
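As a quick symbolic check (sympy, illustrative), substituting this solution into the left-hand side of the recurrence and simplifying yields zero:

    # Check that y(n) = c/((n-1)*n*(n+1)) solves (n-1) y(n) - (n+2) y(n+1) = 0.
    from sympy import symbols, simplify

    n, c = symbols('n c')
    y = lambda m: c / ((m - 1) * m * (m + 1))

    lhs = (n - 1) * y(n) - (n + 2) * y(n + 1)
    assert simplify(lhs) == 0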


References

  1. Abramov, Sergei A. (1989). "Rational solutions of linear differential and difference equations with polynomial coefficients". USSR Computational Mathematics and Mathematical Physics. 29 (6): 7–12. doi:10.1016/s0041-5553(89)80002-3. ISSN 0041-5553.
  2. Abramov, Sergei A. (1995). "Rational solutions of linear difference and q-difference equations with polynomial coefficients". Proceedings of the 1995 International Symposium on Symbolic and Algebraic Computation - ISSAC '95. pp. 285–289. doi:10.1145/220346.220383. ISBN 978-0897916998. S2CID 15424889.
  3. Man, Yiu-Kwong; Wright, Francis J. (1994). "Fast polynomial dispersion computation and its application to indefinite summation". Proceedings of the International Symposium on Symbolic and Algebraic Computation - ISSAC '94. pp. 175–180. doi:10.1145/190347.190413. ISBN 978-0897916387. S2CID 2192728.
  4. Gerhard, Jürgen (2005). Modular Algorithms in Symbolic Summation and Symbolic Integration. Lecture Notes in Computer Science. Vol. 3218. doi:10.1007/b104035. ISBN 978-3-540-24061-7. ISSN 0302-9743.
  5. Chen, William Y. C.; Paule, Peter; Saad, Husam L. (2007). "Converging to Gosper's Algorithm". arXiv:0711.3386 [math.CA].