In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator. [1]
The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives, [2] Taylor series expansions, inverse Z-transforms, and inverse Laplace transforms. The concept was discovered independently in 1702 by both Johann Bernoulli and Gottfried Leibniz. [3]
In symbols, the partial fraction decomposition of a rational fraction of the form $\frac{f(x)}{g(x)}$, where f and g are polynomials, is the expression of the rational fraction as
$$\frac{f(x)}{g(x)} = p(x) + \sum_j \frac{f_j(x)}{g_j(x)},$$
where p(x) is a polynomial, and, for each j, the denominator $g_j(x)$ is a power of an irreducible polynomial (i.e. not factorizable into polynomials of positive degrees), and the numerator $f_j(x)$ is a polynomial of a smaller degree than the degree of this irreducible polynomial.
When explicit computation is involved, a coarser decomposition is often preferred, which consists of replacing "irreducible polynomial" by "square-free polynomial" in the description of the outcome. This allows replacing polynomial factorization by the much easier-to-compute square-free factorization. This is sufficient for most applications, and avoids introducing irrational coefficients when the coefficients of the input polynomials are integers or rational numbers.
Let $\frac{F}{G}$ be a rational fraction, where F and G are univariate polynomials in the indeterminate x over a field. The existence of the partial fraction decomposition can be proved by applying inductively the following reduction steps.
There exist two polynomials E and F₁ such that
$$\frac{F}{G} = E + \frac{F_1}{G}$$
and
$$\deg F_1 < \deg G,$$
where $\deg P$ denotes the degree of the polynomial P.
This results immediately from the Euclidean division of F by G, which asserts the existence of E and F₁ such that $F = EG + F_1$ and $\deg F_1 < \deg G.$
This allows supposing in the next steps that $\deg F < \deg G.$
If $\deg F < \deg G$ and $G = G_1 G_2,$ where G₁ and G₂ are coprime polynomials, then there exist polynomials $F_1$ and $F_2$ such that
$$\frac{F}{G} = \frac{F_1}{G_1} + \frac{F_2}{G_2},$$
and
$$\deg F_1 < \deg G_1 \quad\text{and}\quad \deg F_2 < \deg G_2.$$
This can be proved as follows. Bézout's identity asserts the existence of polynomials C and D such that
$$CG_1 + DG_2 = 1$$
(by hypothesis, 1 is a greatest common divisor of G₁ and G₂).
Let $DF = G_1 Q + F_1$ with $\deg F_1 < \deg G_1$ be the Euclidean division of DF by $G_1.$ Setting $F_2 = CF + QG_2,$ one gets
$$\frac{F}{G} = \frac{F(CG_1 + DG_2)}{G_1 G_2} = \frac{DF}{G_1} + \frac{CF}{G_2} = \frac{F_1 + QG_1}{G_1} + \frac{CF}{G_2} = \frac{F_1}{G_1} + \frac{F_2}{G_2}.$$
It remains to show that $\deg F_2 < \deg G_2.$ By reducing the last sum of fractions to a common denominator, one gets $F = F_2 G_1 + F_1 G_2,$ and thus
$$\deg F_2 = \deg(F - F_1 G_2) - \deg G_1 \le \max(\deg F, \deg(F_1 G_2)) - \deg G_1 < \deg G_2.$$
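This reduction step can be carried out concretely with an extended Euclidean algorithm on polynomials. The sketch below (pure Python over the rationals, with helper names chosen here for illustration) splits 1/((x − 1)(x + 2)) exactly as in the proof: Bézout coefficients first, then the Euclidean division of DF by G₁.

```python
from fractions import Fraction

# Polynomials are lists of Fractions, lowest-degree coefficient first.

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def divmod_poly(a, b):
    # Euclidean division: a = q*b + r with deg r < deg b.
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(1, len(r) - len(b) + 1)
    while len(r) >= len(b) and any(r):
        shift = len(r) - len(b)
        c = r[-1] / b[-1]
        q[shift] += c
        for i, bc in enumerate(b):
            r[shift + i] -= c * bc
        trim(r)
    return trim(q), r

def ext_gcd(a, b):
    # Extended Euclid: returns (g, s, t) with s*a + t*b = g.
    r0, r1 = a, b
    s0, s1 = [Fraction(1)], [Fraction(0)]
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while any(r1):
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, add(s0, mul([Fraction(-1)], mul(q, s1)))
        t0, t1 = t1, add(t0, mul([Fraction(-1)], mul(q, t1)))
    return r0, s0, t0

# Split F/(G1*G2) = F1/G1 + F2/G2 for F = 1, G1 = x - 1, G2 = x + 2.
G1 = [Fraction(-1), Fraction(1)]
G2 = [Fraction(2), Fraction(1)]
F = [Fraction(1)]

g, C, D = ext_gcd(G1, G2)           # C*G1 + D*G2 = g, a nonzero constant
C = [c / g[0] for c in C]
D = [d / g[0] for d in D]           # now C*G1 + D*G2 = 1
Q, F1 = divmod_poly(mul(D, F), G1)  # D*F = Q*G1 + F1
F2 = add(mul(C, F), mul(Q, G2))

# 1/((x - 1)(x + 2)) = (1/3)/(x - 1) + (-1/3)/(x + 2)
assert F1 == [Fraction(1, 3)] and F2 == [Fraction(-1, 3)]
```

Using exact rational arithmetic avoids the rounding that floating-point coefficients would introduce in the Euclidean divisions.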
Using the preceding decomposition inductively one gets fractions of the form $\frac{F}{G^k}$ with $\deg F < \deg G^k = k \deg G,$ where G is an irreducible polynomial. If k > 1, one can decompose further, by using that an irreducible polynomial is a square-free polynomial, that is, 1 is a greatest common divisor of the polynomial and its derivative. If $G'$ is the derivative of G, Bézout's identity provides polynomials C and D such that $CG + DG' = 1$ and thus $F = FCG + FDG'.$ Euclidean division of $FDG'$ by G gives polynomials $H_k$ and Q such that $FDG' = QG + H_k$ and $\deg H_k < \deg G.$ Setting $F_{k-1} = FC + Q,$ one gets
$$\frac{F}{G^k} = \frac{H_k}{G^k} + \frac{F_{k-1}}{G^{k-1}},$$
with $\deg H_k < \deg G.$
Iterating this process with $\frac{F_{k-1}}{G^{k-1}}$ in place of $\frac{F}{G^k}$ leads eventually to the following theorem.
Theorem — Let f and g be nonzero polynomials over a field K. Write g as a product of powers of distinct irreducible polynomials:
$$g = \prod_{i=1}^{k} p_i^{n_i}.$$
There are (unique) polynomials b and $a_{ij}$ with $\deg a_{ij} < \deg p_i$ such that
$$\frac{f}{g} = b + \sum_{i=1}^{k} \sum_{j=1}^{n_i} \frac{a_{ij}}{p_i^j}.$$
If deg f < deg g, then b = 0.
The uniqueness can be proved as follows. Let d = max(1 + deg f, deg g). Altogether, b and the $a_{ij}$ have d coefficients. The shape of the decomposition defines a linear map from coefficient vectors to polynomials f of degree less than d. The existence proof means that this map is surjective. As the two vector spaces have the same dimension, the map is also injective, which means uniqueness of the decomposition. Incidentally, this proof induces an algorithm for computing the decomposition through linear algebra.
If K is the field of complex numbers, the fundamental theorem of algebra implies that all pi have degree one, and all numerators are constants. When K is the field of real numbers, some of the pi may be quadratic, so, in the partial fraction decomposition, quotients of linear polynomials by powers of quadratic polynomials may also occur.
In the preceding theorem, one may replace "distinct irreducible polynomials" by "pairwise coprime polynomials that are coprime with their derivative". For example, the pi may be the factors of the square-free factorization of g. When K is the field of rational numbers, as is typically the case in computer algebra, this makes it possible to replace factorization by greatest common divisor computation when computing a partial fraction decomposition.
For the purpose of symbolic integration, the preceding result may be refined into
Theorem — Let f and g be nonzero polynomials over a field K. Write g as a product of powers of pairwise coprime polynomials which have no multiple root in an algebraically closed field:
$$g = \prod_{i=1}^{k} p_i^{n_i}.$$
There are (unique) polynomials b and $c_{ij}$ with $\deg c_{ij} < \deg p_i$ such that
$$\frac{f}{g} = b + \sum_{i=1}^{k} \sum_{j=2}^{n_i} \left( \frac{c_{ij}}{p_i^{j-1}} \right)' + \sum_{i=1}^{k} \frac{c_{i1}}{p_i},$$
where $X'$ denotes the derivative of $X.$
This reduces the computation of the antiderivative of a rational function to the integration of the last sum, which is called the logarithmic part, because its antiderivative is a linear combination of logarithms.
There are various methods to compute the decomposition in the Theorem above. One simple way is called Hermite's method. First, b is immediately computed by Euclidean division of f by g, reducing to the case where deg(f) < deg(g). Next, one knows deg(cij) < deg(pi), so one may write each cij as a polynomial with unknown coefficients. Reducing the sum of fractions in the Theorem to a common denominator, and equating the coefficients of each power of x in the two numerators, one gets a system of linear equations which can be solved to obtain the desired (unique) values for the unknown coefficients.
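As a concrete illustration of solving for the unknown coefficients via a linear system, the following sketch decomposes an example fraction chosen here for illustration (not taken from the article):

```python
from fractions import Fraction

# Undetermined coefficients for the illustrative fraction
# (x + 7)/((x + 1)(x + 2)) = c1/(x + 1) + c2/(x + 2).
# Clearing denominators: x + 7 = c1*(x + 2) + c2*(x + 1).
# Equating coefficients of x and the constant term gives a linear system:
#    c1 + c2 = 1   (coefficient of x)
#  2*c1 + c2 = 7   (constant term)
A = [[Fraction(1), Fraction(1)], [Fraction(2), Fraction(1)]]
b = [Fraction(1), Fraction(7)]

# Solve the 2x2 system by Gaussian elimination.
m = A[1][0] / A[0][0]
A[1][1] -= m * A[0][1]
b[1] -= m * b[0]
c2 = b[1] / A[1][1]
c1 = (b[0] - A[0][1] * c2) / A[0][0]
assert (c1, c2) == (6, -5)

# Both sides agree away from the poles.
x = Fraction(5)
assert (x + 7) / ((x + 1) * (x + 2)) == c1 / (x + 1) + c2 / (x + 2)
```

For larger denominators the same idea yields an n × n system, one equation per power of x.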
Given two polynomials $P(x)$ and $Q(x) = (x - \alpha_1)(x - \alpha_2) \cdots (x - \alpha_n),$ where the $\alpha_i$ are distinct constants and deg P < n, explicit expressions for partial fractions can be obtained by supposing that
$$\frac{P(x)}{Q(x)} = \frac{c_1}{x - \alpha_1} + \frac{c_2}{x - \alpha_2} + \cdots + \frac{c_n}{x - \alpha_n}$$
and solving for the $c_i$ constants, by substitution, by equating the coefficients of terms involving the powers of x, or otherwise. (This is a variant of the method of undetermined coefficients. After both sides of the equation are multiplied by Q(x), one side of the equation is a specific polynomial, and the other side is a polynomial with undetermined coefficients. The equality is possible only when the coefficients of like powers of x are equal. This yields n equations in n unknowns, the $c_k.$)
A more direct computation, which is strongly related to Lagrange interpolation, consists of writing
$$\frac{P(x)}{Q(x)} = \sum_{i=1}^{n} \frac{P(\alpha_i)}{Q'(\alpha_i)} \frac{1}{x - \alpha_i},$$
where $Q'$ is the derivative of the polynomial $Q.$ The coefficients $\frac{P(\alpha_i)}{Q'(\alpha_i)}$ are called the residues of $P/Q.$
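A minimal sketch of this residue formula, applied to an illustrative fraction (not from the article) with Q(x) = (x − 1)(x + 1):

```python
from fractions import Fraction

# Residues of P/Q with distinct roots: the coefficient of 1/(x - a_i)
# is P(a_i)/Q'(a_i).  Illustrative fraction:
# P(x) = 1, Q(x) = (x - 1)(x + 1) = x**2 - 1, so Q'(x) = 2x.
def P(x):
    return Fraction(1)

def Qprime(x):
    return 2 * x

roots = [Fraction(1), Fraction(-1)]
residues = [P(a) / Qprime(a) for a in roots]
assert residues == [Fraction(1, 2), Fraction(-1, 2)]

# Check 1/(x**2 - 1) = (1/2)/(x - 1) - (1/2)/(x + 1) at a sample point.
x = Fraction(2)
assert 1 / (x ** 2 - 1) == residues[0] / (x - 1) + residues[1] / (x + 1)
```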
This approach does not account for several other cases, but can be modified accordingly:
In an example application of this procedure, (3x + 5)/(1 − 2x)² can be decomposed in the form
$$\frac{3x + 5}{(1 - 2x)^2} = \frac{A}{(1 - 2x)^2} + \frac{B}{1 - 2x}.$$
Clearing denominators shows that 3x + 5 = A + B(1 − 2x). Expanding and equating the coefficients of powers of x gives
$$5 = A + B \quad\text{and}\quad 3 = -2B.$$
Solving this system of linear equations for A and B yields A = 13/2 and B = −3/2. Hence,
$$\frac{3x + 5}{(1 - 2x)^2} = \frac{13/2}{(1 - 2x)^2} + \frac{-3/2}{1 - 2x}.$$
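The computed values A = 13/2 and B = −3/2 can be sanity-checked by evaluating both sides exactly at a few rational points:

```python
from fractions import Fraction

# Verify (3x + 5)/(1 - 2x)**2 = A/(1 - 2x)**2 + B/(1 - 2x)
# with A = 13/2, B = -3/2, away from the pole x = 1/2.
A = Fraction(13, 2)
B = Fraction(-3, 2)
for x in [Fraction(0), Fraction(1), Fraction(-2), Fraction(1, 3)]:
    lhs = (3 * x + 5) / (1 - 2 * x) ** 2
    rhs = A / (1 - 2 * x) ** 2 + B / (1 - 2 * x)
    assert lhs == rhs
```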
Over the complex numbers, suppose f(x) is a rational proper fraction that can be decomposed into
$$f(x) = \sum_i \left( \frac{a_{i1}}{x - x_i} + \frac{a_{i2}}{(x - x_i)^2} + \cdots + \frac{a_{i k_i}}{(x - x_i)^{k_i}} \right).$$
Let
$$g_{ij}(x) = (x - x_i)^{j-1} f(x);$$
then, according to the uniqueness of the Laurent series, $a_{ij}$ is the coefficient of the term $(x - x_i)^{-1}$ in the Laurent expansion of $g_{ij}(x)$ about the point $x_i,$ i.e., its residue
$$a_{ij} = \operatorname{Res}(g_{ij}, x_i).$$
This is given directly by the formula
$$a_{ij} = \frac{1}{(k_i - j)!} \lim_{x \to x_i} \frac{d^{k_i - j}}{dx^{k_i - j}} \left( (x - x_i)^{k_i} f(x) \right),$$
or, in the special case when $x_i$ is a simple root,
$$a_{i1} = \frac{P(x_i)}{Q'(x_i)},$$
when
$$f(x) = \frac{P(x)}{Q(x)}.$$
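A small worked instance of the derivative formula, on an illustrative fraction chosen here (not from the article): for f(x) = 1/((x − 1)²(x + 1)), the double root x = 1 has k₁ = 2, and (x − 1)² f(x) = 1/(x + 1).

```python
from fractions import Fraction

# Derivative formula a_ij = (1/(k_i - j)!) * lim_{x->x_i}
# d^{k_i-j}/dx^{k_i-j} [(x - x_i)^{k_i} f(x)], applied to
# f(x) = 1/((x - 1)**2 * (x + 1)).
# Double root x1 = 1 (k1 = 2): (x - 1)**2 f(x) = 1/(x + 1).
a12 = Fraction(1, 2)   # j = 2: value of 1/(x + 1) at x = 1
a11 = Fraction(-1, 4)  # j = 1: value of d/dx [1/(x + 1)] = -1/(x + 1)**2 at x = 1
a21 = Fraction(1, 4)   # simple root x2 = -1: value of 1/(x - 1)**2 there

# Check the full decomposition at a sample point.
x = Fraction(0)
assert 1 / ((x - 1) ** 2 * (x + 1)) == a11 / (x - 1) + a12 / (x - 1) ** 2 + a21 / (x + 1)
```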
Partial fractions are used in real-variable integral calculus to find real-valued antiderivatives of rational functions. Partial fraction decomposition of real rational functions is also used to find their inverse Laplace transforms. For applications of partial fraction decomposition over the reals, see the application to symbolic integration above and the examples below.
Let f(x) be any rational function over the real numbers. In other words, suppose there exist real polynomial functions p(x) and q(x) ≠ 0, such that
$$f(x) = \frac{p(x)}{q(x)}.$$
By dividing both the numerator and the denominator by the leading coefficient of q(x), we may assume without loss of generality that q(x) is monic. By the fundamental theorem of algebra, we can write
$$q(x) = (x - a_1)^{j_1} \cdots (x - a_m)^{j_m} (x^2 + b_1 x + c_1)^{k_1} \cdots (x^2 + b_n x + c_n)^{k_n},$$
where $a_1, \ldots, a_m,$ $b_1, \ldots, b_n,$ $c_1, \ldots, c_n$ are real numbers with $b_i^2 - 4c_i < 0,$ and $j_1, \ldots, j_m,$ $k_1, \ldots, k_n$ are positive integers. The terms $(x - a_i)$ are the linear factors of q(x) which correspond to real roots of q(x), and the terms $(x^2 + b_i x + c_i)$ are the irreducible quadratic factors of q(x) which correspond to pairs of complex conjugate roots of q(x).
Then the partial fraction decomposition of f(x) is the following:
$$f(x) = \frac{p(x)}{q(x)} = P(x) + \sum_{i=1}^{m} \sum_{r=1}^{j_i} \frac{A_{ir}}{(x - a_i)^r} + \sum_{i=1}^{n} \sum_{r=1}^{k_i} \frac{B_{ir} x + C_{ir}}{(x^2 + b_i x + c_i)^r}.$$
Here, P(x) is a (possibly zero) polynomial, and the Air, Bir, and Cir are real constants. There are a number of ways the constants can be found.
The most straightforward method is to multiply through by the common denominator q(x). We then obtain an equation of polynomials whose left-hand side is simply p(x) and whose right-hand side has coefficients which are linear expressions of the constants Air, Bir, and Cir. Since two polynomials are equal if and only if their corresponding coefficients are equal, we can equate the coefficients of like terms. In this way, a system of linear equations is obtained which always has a unique solution. This solution can be found using any of the standard methods of linear algebra. It can also be found with limits (see Example 5).
Here, the denominator of $f(x) = \frac{1}{x^2 + 2x - 3}$ splits into two distinct linear factors:
$$q(x) = x^2 + 2x - 3 = (x + 3)(x - 1),$$
so we have the partial fraction decomposition
$$f(x) = \frac{1}{x^2 + 2x - 3} = \frac{A}{x + 3} + \frac{B}{x - 1}.$$
Multiplying through by the denominator on the left-hand side gives us the polynomial identity
$$1 = A(x - 1) + B(x + 3).$$
Substituting x = −3 into this equation gives A = −1/4, and substituting x = 1 gives B = 1/4, so that
$$f(x) = \frac{1}{x^2 + 2x - 3} = \frac{1}{4} \left( \frac{-1}{x + 3} + \frac{1}{x - 1} \right).$$
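The result can be verified exactly at sample points; this sketch assumes the fraction being decomposed is 1/(x² + 2x − 3), consistent with the factors and coefficients above:

```python
from fractions import Fraction

# Check 1/(x**2 + 2x - 3) = (-1/4)/(x + 3) + (1/4)/(x - 1)
# away from the poles x = -3 and x = 1.
A, B = Fraction(-1, 4), Fraction(1, 4)
for x in [Fraction(0), Fraction(2), Fraction(-1)]:
    assert 1 / (x ** 2 + 2 * x - 3) == A / (x + 3) + B / (x - 1)
```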
After long division, we have
$$f(x) = \frac{x^3 + 16}{x^3 - 4x^2 + 8x} = 1 + \frac{4x^2 - 8x + 16}{x^3 - 4x^2 + 8x}.$$
The factor x² − 4x + 8 is irreducible over the reals, as its discriminant (−4)² − 4×8 = −16 is negative. Thus the partial fraction decomposition over the reals has the shape
$$\frac{4x^2 - 8x + 16}{x(x^2 - 4x + 8)} = \frac{A}{x} + \frac{Bx + C}{x^2 - 4x + 8}.$$
Multiplying through by x³ − 4x² + 8x, we have the polynomial identity
$$4x^2 - 8x + 16 = A(x^2 - 4x + 8) + (Bx + C)x.$$
Taking x = 0, we see that 16 = 8A, so A = 2. Comparing the x² coefficients, we see that 4 = A + B = 2 + B, so B = 2. Comparing linear coefficients, we see that −8 = −4A + C = −8 + C, so C = 0. Altogether,
$$f(x) = 1 + \frac{2}{x} + \frac{2x}{x^2 - 4x + 8}.$$
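The remaining fraction, whose numerator 4x² − 8x + 16 is determined by the coefficient equations above (constant 16, x² coefficient 4, linear coefficient −8), can be checked at sample points:

```python
from fractions import Fraction

# Check (4x**2 - 8x + 16)/(x(x**2 - 4x + 8)) = 2/x + 2x/(x**2 - 4x + 8),
# i.e. A = 2, B = 2, C = 0 as derived above.
for x in [Fraction(1), Fraction(2), Fraction(-1)]:
    lhs = (4 * x ** 2 - 8 * x + 16) / (x * (x ** 2 - 4 * x + 8))
    rhs = 2 / x + 2 * x / (x ** 2 - 4 * x + 8)
    assert lhs == rhs
```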
The fraction can be completely decomposed using complex numbers. According to the fundamental theorem of algebra, every complex polynomial of degree n has n (complex) roots (some of which can be repeated). The second fraction can be decomposed to:
Multiplying through by the denominator gives:
Equating the coefficients of x and the constant (with respect to x) coefficients of both sides of this equation, one gets a system of two linear equations in D and E, whose solution is
Thus we have a complete decomposition:
One may also compute directly A, D and E with the residue method (see also example 4 below).
This example illustrates almost all the "tricks" we might need to use, short of consulting a computer algebra system.
After long division and factoring the denominator, we have
The partial fraction decomposition takes the form
Multiplying through by the denominator on the left-hand side we have the polynomial identity
Now we use different values of x to compute the coefficients:
Solving this we have:
Using these values we can write:
We compare the coefficients of x6 and x5 on both sides and we have:
Therefore:
which gives us B = 0. Thus the partial fraction decomposition is given by:
Alternatively, instead of expanding, one can obtain other linear dependencies among the coefficients by computing some derivatives at x = 1 in the above polynomial identity. (To this end, recall that the derivative at x = a of (x − a)mp(x) vanishes if m > 1 and is just p(a) for m = 1.) For instance the first derivative at x = 1 gives
that is 8 = 4B + 8 so B = 0.
Thus, f(z) can be decomposed into rational functions whose denominators are z+1, z−1, z+i, z−i. Since each term is of power one, −1, 1, −i and i are simple poles.
Hence, the residues associated with each pole, given by $\frac{P(z_i)}{Q'(z_i)},$ are respectively
Limits can be used to find a partial fraction decomposition. [4] Consider the following example:
$$f(x) = \frac{1}{x^3 - 1}.$$
First, factor the denominator, which determines the decomposition:
$$\frac{1}{x^3 - 1} = \frac{1}{(x - 1)(x^2 + x + 1)} = \frac{A}{x - 1} + \frac{Bx + C}{x^2 + x + 1}.$$
Multiplying everything by $x - 1$, and taking the limit when $x \to 1$, we get
$$\lim_{x \to 1} \left( (x - 1) \left( \frac{A}{x - 1} + \frac{Bx + C}{x^2 + x + 1} \right) \right) = \lim_{x \to 1} \frac{x - 1}{x^3 - 1} = \lim_{x \to 1} \frac{1}{x^2 + x + 1} = \frac{1}{3}.$$
On the other hand,
$$\lim_{x \to 1} \left( (x - 1) \left( \frac{A}{x - 1} + \frac{Bx + C}{x^2 + x + 1} \right) \right) = A,$$
and thus:
$$A = \frac{1}{3}.$$
Multiplying by x and taking the limit when $x \to \infty$, we have
$$\lim_{x \to \infty} x \left( \frac{A}{x - 1} + \frac{Bx + C}{x^2 + x + 1} \right) = A + B,$$
and
$$\lim_{x \to \infty} \frac{x}{x^3 - 1} = 0.$$
This implies A + B = 0 and so $B = -\frac{1}{3}.$
For x = 0, we get $-1 = -A + C,$ and thus $C = -\frac{2}{3}.$
Putting everything together, we get the decomposition
$$\frac{1}{x^3 - 1} = \frac{1}{3} \left( \frac{1}{x - 1} + \frac{-x - 2}{x^2 + x + 1} \right).$$
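The decomposition obtained by the limit method can be verified exactly, and the coefficient A can be approximated numerically from its defining limit; this sketch assumes the example fraction is 1/(x³ − 1), consistent with the limits computed above:

```python
from fractions import Fraction

# Exact check of 1/(x**3 - 1) = (1/3)/(x - 1) + (-(1/3)x - 2/3)/(x**2 + x + 1).
A, B, C = Fraction(1, 3), Fraction(-1, 3), Fraction(-2, 3)
for x in [Fraction(0), Fraction(2), Fraction(-1)]:
    assert 1 / (x ** 3 - 1) == A / (x - 1) + (B * x + C) / (x ** 2 + x + 1)

# The coefficient A can also be approximated as the limit of (x - 1)*f(x)
# as x -> 1, mirroring the limit argument.
x = 1 + 1e-9
assert abs((x - 1) / (x ** 3 - 1) - 1 / 3) < 1e-6
```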
Suppose we have the indefinite integral:
Before performing decomposition, we must first perform polynomial long division and factor the denominator. Doing this results in:
We may now perform the partial fraction decomposition:
so: . Substituting x = 1 to solve for B and x = −2 to solve for A, we obtain:
Plugging all of this back into our integral allows us to find the answer:
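Each partial fraction of the form c/(x − a) integrates to c·ln|x − a|. The sketch below uses the factors (x − 1) and (x + 2) suggested by the substitutions above; the fraction 1/((x − 1)(x + 2)) is illustrative, not the example's full integrand:

```python
import math

# Since 1/((x - 1)(x + 2)) = (1/3)/(x - 1) - (1/3)/(x + 2),
# an antiderivative is (1/3)ln|x - 1| - (1/3)ln|x + 2|.
def F(x):
    return (math.log(abs(x - 1)) - math.log(abs(x + 2))) / 3

def f(x):
    return 1 / ((x - 1) * (x + 2))

# Verify F' matches f by a symmetric difference quotient away from the poles.
for x in [2.0, 3.5, -5.0]:
    h = 1e-6
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-5
```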
The partial fraction decomposition of a rational function can be related to Taylor's theorem as follows. Let
$$P(x), Q(x), A_1(x), \ldots, A_r(x)$$
be real or complex polynomials; assume that
$$Q = \prod_{j=1}^{r} (x - \lambda_j)^{\nu_j}$$
satisfies
$$\deg P < \deg Q = \sum_{j=1}^{r} \nu_j.$$
Also define
$$Q_i = \prod_{j \ne i} (x - \lambda_j)^{\nu_j} = \frac{Q}{(x - \lambda_i)^{\nu_i}}.$$
Then we have
$$\frac{P}{Q} = \sum_{i=1}^{r} \frac{A_i(x)}{(x - \lambda_i)^{\nu_i}}$$
if, and only if, each polynomial $A_i(x)$ is the Taylor polynomial of $\frac{P}{Q_i}$ of order $\nu_i - 1$ at the point $\lambda_i$:
$$A_i(x) := \sum_{k=0}^{\nu_i - 1} \frac{1}{k!} \left( \frac{P}{Q_i} \right)^{(k)}(\lambda_i)\, (x - \lambda_i)^k.$$
Taylor's theorem (in the real or complex case) then provides a proof of the existence and uniqueness of the partial fraction decomposition, and a characterization of the coefficients.
The above partial fraction decomposition implies, for each 1 ≤ i ≤ r, a polynomial expansion
$$\frac{P}{Q_i} = A_i + O((x - \lambda_i)^{\nu_i}), \qquad \text{for } x \to \lambda_i,$$
so $A_i$ is the Taylor polynomial of $\frac{P}{Q_i}$, because of the unicity of the polynomial expansion of order $\nu_i - 1$, and by assumption $\deg A_i < \nu_i.$
Conversely, if the $A_i$ are the Taylor polynomials, the above expansions at each $\lambda_i$ hold, therefore we also have
$$P - Q_i A_i = O((x - \lambda_i)^{\nu_i}), \qquad \text{for } x \to \lambda_i,$$
which implies that the polynomial $P - Q_i A_i$ is divisible by $(x - \lambda_i)^{\nu_i}.$
For $j \ne i,$ $Q_j A_j$ is also divisible by $(x - \lambda_i)^{\nu_i},$ so
$$P - \sum_{j=1}^{r} Q_j A_j$$
is divisible by $Q.$ Since
$$\deg\left( P - \sum_{j=1}^{r} Q_j A_j \right) < \deg Q,$$
we then have
$$P - \sum_{j=1}^{r} Q_j A_j = 0,$$
and we find the partial fraction decomposition dividing by $Q.$
The idea of partial fractions can be generalized to other integral domains, say the ring of integers where prime numbers take the role of irreducible denominators. For example:
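A small sketch of the integer analogue, splitting 1/18 along the coprime factors 2 and 9 = 3² via a Bézout relation (the `split` helper is hypothetical, mirroring the polynomial reduction step):

```python
from fractions import Fraction

def split(num, m, n):
    # Split num/(m*n), with m and n coprime, into a/m + b/n using a
    # Bezout relation s*m + t*n = 1.
    s = next(k for k in range(n) if (k * m) % n == 1)
    t = (1 - s * m) // n
    # num/(m*n) = num*(s*m + t*n)/(m*n) = num*t/m + num*s/n
    return Fraction(num * t, m), Fraction(num * s, n)

a, b = split(1, 2, 9)  # 18 = 2 * 3**2
assert a + b == Fraction(1, 18)

# A familiar hand decomposition over the prime powers of 18:
assert Fraction(1, 18) == Fraction(1, 2) - Fraction(1, 3) - Fraction(1, 9)
```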