Factorization of polynomials over finite fields

In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them.

All factorization algorithms, including the case of multivariate polynomials over the rational numbers, reduce the problem to this case; see polynomial factorization. Factorization over finite fields is also used in various applications of finite fields, such as coding theory (cyclic redundancy codes and BCH codes), cryptography (public-key cryptography by means of elliptic curves), and computational number theory.

As the reduction of the factorization of multivariate polynomials to that of univariate polynomials does not have any specificity in the case of coefficients in a finite field, only polynomials with one variable are considered in this article.

Background

Finite field

The theory of finite fields, whose origins can be traced back to the works of Gauss and Galois, has played a part in various branches of mathematics. Because of its applicability in other areas of mathematics and in sciences such as computer science, there has been a resurgence of interest in finite fields, due in part to important applications in coding theory, cryptography and computer algebra.

A finite field or Galois field is a field with a finite order (number of elements). The order of a finite field is always a prime or a power of a prime. For each prime power q = p^r, there exists exactly one finite field with q elements, up to isomorphism. This field is denoted GF(q) or Fq. If p is prime, GF(p) is the prime field of order p; it is the field of residue classes modulo p, and its p elements are denoted 0, 1, ..., p−1. Thus a = b in GF(p) means the same as a ≡ b (mod p).

Irreducible polynomials

Let F be a finite field. As for general fields, a non-constant polynomial f in F[x] is said to be irreducible over F if it is not the product of two polynomials of positive degree. A polynomial of positive degree that is not irreducible over F is called reducible over F.

Irreducible polynomials allow us to construct the finite fields of non-prime order. In fact, for a prime power q, let Fq be the finite field with q elements, unique up to isomorphism. A polynomial f of degree n greater than one, which is irreducible over Fq, defines a field extension of degree n which is isomorphic to the field with qn elements: the elements of this extension are the polynomials of degree lower than n; addition, subtraction and multiplication by an element of Fq are those of the polynomials; the product of two elements is the remainder of the division by f of their product as polynomials; the inverse of an element may be computed by the extended GCD algorithm (see Arithmetic of algebraic extensions).
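As an illustration, here is a minimal Python sketch of this construction (illustrative names, not a library API) in the simplest case where the base field is the prime field GF(p): elements of GF(p^n) are coefficient lists of length less than n = deg(f), the product of two elements is the remainder of the polynomial product by f, and inverses are obtained with the extended Euclidean algorithm.

    # Sketch of arithmetic in GF(p)[x]/(f), assuming p prime and f monic irreducible
    # over GF(p).  Elements are coefficient lists, lowest degree first.
    def _trim(a):
        while a and a[-1] == 0:
            a.pop()
        return a

    def _divmod(a, b, p):                 # polynomial division with remainder over GF(p)
        a = _trim(a[:])
        q = [0] * max(len(a) - len(b) + 1, 1)
        inv_lead = pow(b[-1], -1, p)
        while len(a) >= len(b):
            c, s = a[-1] * inv_lead % p, len(a) - len(b)
            q[s] = c
            for i, bi in enumerate(b):
                a[s + i] = (a[s + i] - c * bi) % p
            _trim(a)
        return _trim(q), a

    def _mulpoly(a, b, p):                # plain polynomial product over GF(p)
        if not a or not b:
            return []
        r = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                r[i + j] = (r[i + j] + ai * bj) % p
        return _trim(r)

    def mul(a, b, f, p):                  # field product: remainder of a*b by f
        return _divmod(_mulpoly(a, b, p), f, p)[1]

    def inv(a, f, p):                     # inverse of a nonzero element, extended Euclid
        r0, r1, t0, t1 = f, _trim(a[:]), [], [1]
        while r1:
            q, r = _divmod(r0, r1, p)
            qt = _mulpoly(q, t1, p)
            t = [((t0[i] if i < len(t0) else 0) - (qt[i] if i < len(qt) else 0)) % p
                 for i in range(max(len(t0), len(qt), 1))]
            r0, r1, t0, t1 = r1, r, t1, _trim(t)
        c = pow(r0[-1], -1, p)            # the gcd is a nonzero constant; normalize it to 1
        return _divmod([x * c % p for x in t0], f, p)[1]

    # Example: GF(9) = GF(3)[x]/(x^2 + 1); x * x = 2 (= -1) and x * x^(-1) = 1.
    f, p, x = [1, 0, 1], 3, [0, 1]
    print(mul(x, x, f, p), mul(x, inv(x, f, p), f, p))    # [2] [1]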

It follows that, to compute in a finite field of non-prime order, one needs to generate an irreducible polynomial. For this, the common method is to take a polynomial at random and test it for irreducibility. For the sake of efficiency of the multiplication in the field, it is usual to search for polynomials of the shape x^n + ax + b. [1]

Irreducible polynomials over finite fields are also useful for pseudorandom number generators using feedback shift registers and for the discrete logarithm over F2^n.

The number of irreducible monic polynomials of degree n over Fq is the number of aperiodic necklaces, given by Moreau's necklace-counting function Mq(n). The closely related necklace function Nq(n) counts monic polynomials of degree n which are primary (a power of an irreducible); or alternatively irreducible polynomials of all degrees d which divide n. [2]
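For illustration, a short Python sketch (hypothetical helper names) of Moreau's formula Mq(n) = (1/n) Σ_{d | n} μ(d) q^(n/d), which gives this count:

    # Number of monic irreducible polynomials of degree n over F_q,
    # computed with Moreau's necklace-counting function.
    def mobius(d):                      # Möbius function by trial division
        count, k = 0, 2
        while k * k <= d:
            if d % k == 0:
                d //= k
                if d % k == 0:
                    return 0            # square factor
                count += 1
            else:
                k += 1
        if d > 1:
            count += 1
        return -1 if count % 2 else 1

    def num_irreducible(q, n):
        total = sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0)
        return total // n

    # Over F_2: 2 monic irreducible polynomials of degree 1, then 1, 2, 3, 6, ...
    print([num_irreducible(2, n) for n in range(1, 6)])   # [2, 1, 2, 3, 6]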

Example

The polynomial P = x^4 + 1 is irreducible over Q but not over any finite field.

On any field extension of F2, P = (x + 1)^4. On every other finite field, at least one of −1, 2 and −2 is a square, because the product of two non-squares is a square, and so we have:

  1. If −1 = a^2, then P = (x^2 + a)(x^2 − a).
  2. If 2 = b^2, then P = (x^2 + bx + 1)(x^2 − bx + 1).
  3. If −2 = c^2, then P = (x^2 + cx − 1)(x^2 − cx − 1).

Complexity

Polynomial factoring algorithms use basic polynomial operations such as products, divisions, gcd, powers of one polynomial modulo another, etc. A multiplication of two polynomials of degree at most n can be done in O(n^2) operations in Fq using "classical" arithmetic, or in O(n log(n) log(log(n))) operations in Fq using "fast" arithmetic. A Euclidean division (division with remainder) can be performed within the same time bounds. The cost of a polynomial greatest common divisor between two polynomials of degree at most n can be taken as O(n^2) operations in Fq using classical methods, or as O(n log^2(n) log(log(n))) operations in Fq using fast methods. For polynomials h, g of degree at most n, the exponentiation h^q mod g can be done with O(log(q)) polynomial products, using the exponentiation by squaring method, that is O(n^2 log(q)) operations in Fq using classical methods, or O(n log(q) log(n) log(log(n))) operations in Fq using fast methods.

In the algorithms that follow, the complexities are expressed in terms of number of arithmetic operations in Fq, using classical algorithms for the arithmetic of polynomials.
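To make these operations concrete, the following plain-Python sketch (illustrative names, restricted to a prime field F_p, so q = p) implements the classical quadratic-time product, Euclidean division, monic GCD, and exponentiation modulo another polynomial by repeated squaring. The later sketches in this article reuse these helpers.

    # Classical O(n^2) polynomial arithmetic over F_p (p prime).
    # Polynomials are coefficient lists, lowest degree first; names are illustrative.
    def trim(a):
        while a and a[-1] == 0:
            a.pop()
        return a

    def polymul(a, b, p):                       # "classical" O(n^2) product
        if not a or not b:
            return []
        r = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                r[i + j] = (r[i + j] + ai * bj) % p
        return trim(r)

    def polydivmod(a, b, p):                    # Euclidean division (quotient, remainder)
        a = trim(a[:])
        q = [0] * max(len(a) - len(b) + 1, 1)
        inv_lead = pow(b[-1], -1, p)
        while len(a) >= len(b):
            c, s = a[-1] * inv_lead % p, len(a) - len(b)
            q[s] = c
            for i, bi in enumerate(b):
                a[s + i] = (a[s + i] - c * bi) % p
            trim(a)
        return trim(q), a

    def polygcd(a, b, p):                       # monic greatest common divisor
        a, b = trim(a[:]), trim(b[:])
        while b:
            a, b = b, polydivmod(a, b, p)[1]
        c = pow(a[-1], -1, p)
        return [x * c % p for x in a]

    def polypowmod(h, e, g, p):                 # h^e mod g by exponentiation by squaring
        result, base = [1], polydivmod(h, g, p)[1]
        while e:
            if e & 1:
                result = polydivmod(polymul(result, base, p), g, p)[1]
            base = polydivmod(polymul(base, base, p), g, p)[1]
            e >>= 1
        return result

    # Example over F_5: gcd(x^2 + 4, x^2 + 3x + 2) = x + 1, and x^5 mod (x^2 + 1) = x.
    print(polygcd([4, 0, 1], [2, 3, 1], 5))     # [1, 1]
    print(polypowmod([0, 1], 5, [1, 0, 1], 5))  # [0, 1]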

Factoring algorithms

Many algorithms for factoring polynomials over finite fields include the following three stages:

  1. Square-free factorization
  2. Distinct-degree factorization
  3. Equal-degree factorization

An important exception is Berlekamp's algorithm, which combines stages 2 and 3.

Berlekamp's algorithm

Berlekamp's algorithm is historically important as being the first factorization algorithm which works well in practice. However, it contains a loop on the elements of the ground field, which implies that it is practicable only over small finite fields. For a fixed ground field, its time complexity is polynomial, but, for general ground fields, the complexity is exponential in the size of the ground field.

Square-free factorization

The algorithm determines a square-free factorization for polynomials whose coefficients come from the finite field Fq of order q = p^m with p a prime. It first computes the derivative of the polynomial, then the gcd of the polynomial and its derivative. If this gcd is not one, the original polynomial is divided by it, provided that the derivative is not zero (a case that can occur for non-constant polynomials over finite fields).

This algorithm uses the fact that, if the derivative of a polynomial is zero, then it is a polynomial in x^p, which is, if the coefficients belong to Fp, the pth power of the polynomial obtained by substituting x by x^(1/p). If the coefficients do not belong to Fp, the pth root of a polynomial with zero derivative is obtained by the same substitution on x, completed by applying the inverse of the Frobenius automorphism to the coefficients.

This algorithm works also over a field of characteristic zero, with the only difference that it never enters in the blocks of instructions where pth roots are computed. However, in this case, Yun's algorithm is much more efficient because it computes the greatest common divisors of polynomials of lower degrees. A consequence is that, when factoring a polynomial over the integers, the algorithm which follows is not used: one first computes the square-free factorization over the integers, and to factor the resulting polynomials, one chooses a p such that they remain square-free modulo p.

Algorithm: SFF (Square-Free Factorization)
    Input: A monic polynomial f in Fq[x] where q = p^m
    Output: Square-free factorization of f

    R ← 1

    # Make w be the product (without multiplicity) of all factors of f
    # that have multiplicity not divisible by p
    c ← gcd(f, f′)
    w ← f/c

    # Step 1: Identify all factors in w
    i ← 1
    while w ≠ 1 do
        y ← gcd(w, c)
        fac ← w/y
        R ← R · fac^i
        w ← y; c ← c/y; i ← i + 1
    end while
    # c is now the product (with multiplicity) of the remaining factors of f

    # Step 2: Identify all remaining factors using recursion
    # Note that these are the factors of f that have multiplicity divisible by p
    if c ≠ 1 then
        c ← c^(1/p)
        R ← R · SFF(c)^p
    end if

    Output(R)

The idea is to identify the product of all irreducible factors of f with the same multiplicity. This is done in two steps. The first step uses the formal derivative of f to find all the factors with multiplicity not divisible by p. The second step identifies the remaining factors. As all of the remaining factors have multiplicity divisible by p, the remaining part of f is a pth power, so one can simply take its pth root and apply recursion.
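The following rough Python transcription of SFF covers the case q = p prime (so m = 1, and the pth root of a coefficient is the coefficient itself); it reuses the polymul, polydivmod and polygcd helpers from the sketch in the Complexity section, and the names are illustrative.

    # Square-free factorization over F_p (p prime), returning (factor, multiplicity) pairs.
    # Reuses polymul, polydivmod, polygcd from the Complexity-section sketch; f is monic.
    def derivative(f, p):
        return [i * c % p for i, c in enumerate(f)][1:]

    def pth_root(f, p):
        # f is a polynomial in x^p; substitute x^(1/p) by keeping every p-th coefficient
        # (over F_p the inverse Frobenius is the identity on the coefficients)
        return f[::p]

    def sff(f, p):
        R = []
        c = polygcd(f, derivative(f, p), p)     # gcd(f, f'); equals f when f' = 0
        w = polydivmod(f, c, p)[0]
        i = 1
        while w != [1]:                         # Step 1: multiplicities not divisible by p
            y = polygcd(w, c, p)
            fac = polydivmod(w, y, p)[0]
            if fac != [1]:
                R.append((fac, i))
            w, c, i = y, polydivmod(c, y, p)[0], i + 1
        if c != [1]:                            # Step 2: the rest of f is a p-th power
            R += [(fac, p * m) for fac, m in sff(pth_root(c, p), p)]
        return R

The worked example in the next subsection can be used as a quick check of this sketch.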

Example of a square-free factorization

Let

    f = x^11 + 2x^9 + 2x^8 + x^6 + x^5 + 2x^3 + 2x^2 + 1

be the polynomial to be factored over the field with three elements.

The algorithm computes first

    c = gcd(f, f′) = x^9 + 2x^6 + x^3 + 2.

Since the derivative is non-zero we have w = f/c = x^2 + 2 and we enter the while loop. After one loop we have y = x + 2, fac = x + 1 and R = x + 1, with updates i = 2, w = x + 2 and c = x^8 + x^7 + x^6 + x^2 + x + 1. The second time through the loop gives y = x + 2, fac = 1, R = x + 1, with updates i = 3, w = x + 2 and c = x^7 + 2x^6 + x + 2. The third time through the loop also does not change R. For the fourth time through the loop we get y = 1, fac = x + 2, R = (x + 1)(x + 2)^4, with updates i = 5, w = 1 and c = x^6 + 1. Since w = 1, we exit the while loop. Since c ≠ 1, it must be a perfect cube. The cube root of c, obtained by replacing x^3 by x, is x^2 + 1, and calling the square-free procedure recursively determines that it is square-free. Therefore, cubing it and combining it with the value of R to that point gives the square-free decomposition

    f = (x + 1)(x^2 + 1)^3 (x + 2)^4.
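As a quick check, running the sff sketch from above on this polynomial (coefficients listed from the constant term up) reproduces the same decomposition:

    # x^11 + 2x^9 + 2x^8 + x^6 + x^5 + 2x^3 + 2x^2 + 1 over F_3
    f = [1, 0, 2, 2, 0, 1, 1, 0, 2, 2, 0, 1]
    print(sff(f, 3))
    # [([1, 1], 1), ([2, 1], 4), ([1, 0, 1], 3)]   i.e. (x + 1) (x + 2)^4 (x^2 + 1)^3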

Distinct-degree factorization

This algorithm splits a square-free polynomial into a product of polynomials whose irreducible factors all have the same degree. Let f ∈ Fq[x] of degree n be the polynomial to be factored.

Algorithm Distinct-degree factorization (DDF)
    Input: A monic square-free polynomial f ∈ Fq[x]
    Output: The set of all pairs (g, d), such that
            f has an irreducible factor of degree d and
            g is the product of all monic irreducible factors of f of degree d.
    Begin
        i := 1; S := ∅; f* := f;
        while deg(f*) ≥ 2i do
            g := gcd(f*, x^(q^i) − x);
            if g ≠ 1, then
                S := S ∪ {(g, i)};
                f* := f*/g;
            end if;
            i := i + 1;
        end while;
        if f* ≠ 1, then S := S ∪ {(f*, deg(f*))};
        if S = ∅, then return {(f, 1)},
            else return S
    End

The correctness of the algorithm is based on the following:

Lemma. For i ≥ 1 the polynomial

    x^(q^i) − x

is the product of all monic irreducible polynomials in Fq[x] whose degree divides i.

At first glance, this is not efficient since it involves computing the GCD of polynomials of a degree which is exponential in the degree of the input polynomial. However,

    g := gcd(f*, x^(q^i) − x)

may be replaced by

    g := gcd(f*, (x^(q^i) mod f*) − x).

Therefore, we have to compute

    x^(q^i) mod f*;

there are two methods:

Method I. Start from the value of

    x^(q^(i−1)) mod f*

computed at the preceding step and compute its qth power modulo the new f*, using the exponentiation by squaring method. This needs

    O(log(q) deg(f)^2)

arithmetic operations in Fq at each step, and thus

    O(log(q) deg(f)^3)

arithmetic operations for the whole algorithm.

Method II. Using the fact that the qth power is a linear map over Fq, we may compute its matrix with

    O(deg(f)^2 (log(q) + deg(f)))

operations. Then at each iteration of the loop, compute the product of a matrix by a vector (with O(deg(f)^2) operations). This induces a total number of operations in Fq which is

    O(deg(f)^2 (log(q) + deg(f))).

Thus this second method is more efficient and is usually preferred. Moreover, the matrix that is computed in this method is used, by most algorithms, for equal-degree factorization (see below); thus using it for the distinct-degree factorization saves further computing time.
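The following short Python sketch of DDF works over a prime field F_p (so q = p) and follows Method I above: the polynomial h = x^(q^i) mod f* is kept from one iteration to the next and raised to the qth power by repeated squaring. It reuses the polygcd, polydivmod and polypowmod helpers from the Complexity-section sketch; names are illustrative.

    # Distinct-degree factorization of a monic square-free f over F_q, q = p prime.
    # Returns a list of pairs (g, d); reuses helpers from the Complexity-section sketch.
    def ddf(f, q):
        S, fstar, i, h = [], f[:], 1, [0, 1]        # h = x
        while len(fstar) - 1 >= 2 * i:              # while deg(f*) >= 2i
            h = polypowmod(h, q, fstar, q)          # h = x^(q^i) mod f*
            hx = h[:] + [0] * max(0, 2 - len(h))
            hx[1] = (hx[1] - 1) % q                 # hx = x^(q^i) - x mod f*
            g = polygcd(fstar, hx, q)
            if g != [1]:
                S.append((g, i))
                fstar = polydivmod(fstar, g, q)[0]  # f* := f*/g
                h = polydivmod(h, fstar, q)[1]      # keep h reduced modulo the new f*
            i += 1
        if fstar != [1]:
            S.append((fstar, len(fstar) - 1))
        return S if S else [(f, 1)]

    # Example over F_2: x^4 + x = x (x + 1) (x^2 + x + 1)
    print(ddf([0, 1, 0, 0, 1], 2))   # [([0, 1, 1], 1), ([1, 1, 1], 2)]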

Equal-degree factorization

Cantor–Zassenhaus algorithm

In this section, we consider the factorization of a monic squarefree univariate polynomial f, of degree n, over a finite field Fq, which has r ≥ 2 pairwise distinct irreducible factors each of degree d.

We first describe an algorithm by Cantor and Zassenhaus (1981) and then a variant that has a slightly better complexity. Both are probabilistic algorithms whose running time depends on random choices (Las Vegas algorithms), and have a good average running time. In the next section we describe an algorithm by Shoup (1990), which is also an equal-degree factorization algorithm, but is deterministic. All these algorithms require an odd order q for the field of coefficients. For more factorization algorithms see, for example, Knuth's book The Art of Computer Programming, volume 2.

Algorithm Cantor–Zassenhaus algorithm.
    Input: A finite field Fq of odd order q.
           A monic square-free polynomial f in Fq[x] of degree n = rd,
           which has r ≥ 2 irreducible factors each of degree d.
    Output: The set of monic irreducible factors of f.

    Factors := {f};
    while Size(Factors) < r do
        Choose h in Fq[x] with deg(h) < n at random;
        g := h^((q^d − 1)/2) − 1 mod f;
        for each u in Factors with deg(u) > d do
            if gcd(g, u) ≠ 1 and gcd(g, u) ≠ u, then
                Factors := Factors ∖ {u} ∪ {gcd(g, u), u/gcd(g, u)};
            end if
        end for
    end while
    return Factors

The correctness of this algorithm relies on the fact that the ring Fq[x]/f is a direct product of the fields Fq[x]/fi where fi runs over the irreducible factors of f. As all these fields have q^d elements, the component of g in any of these fields is zero with probability

    (q^d − 1)/(2q^d) ≈ 1/2.

This implies that the polynomial gcd(g, u) is the product of the factors of u for which the component of g is zero.

It has been shown that the average number of iterations of the while loop of the algorithm is O(log(r)), giving an average number of arithmetic operations in Fq which is O(n^2 d log(r) log(q)). [3]

In the typical case where d log(q) > n, this complexity may be reduced to

    O(n^2 (log(r) log(q) + n))

by choosing h in the kernel of the linear map

    v → v^q − v

(over Fq[x]/f) and replacing the instruction

    g := h^((q^d − 1)/2) − 1 mod f

by

    g := h^((q − 1)/2) − 1 mod f.

The proof of validity is the same as above, replacing the direct product of the fields Fq[x]/fi by the direct product of their subfields with q elements. The complexity is decomposed into O(n^2 log(r) log(q)) for the algorithm itself, O(n^2 (log(q) + n)) for the computation of the matrix of the linear map (which may already have been computed in the distinct-degree factorization) and O(n^3) for computing its kernel. It may be noted that this algorithm works also if the factors do not all have the same degree (in this case the number r of factors, needed for stopping the while loop, is found as the dimension of the kernel). Nevertheless, the complexity is slightly better if square-free factorization is done before using this algorithm (as n may decrease with square-free factorization, this reduces the complexity of the critical steps).
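The following Python sketch implements the basic Cantor–Zassenhaus loop for a prime field F_p of odd order (so q = p), reusing the helpers from the Complexity-section sketch; the function name and interface are illustrative, not a standard API.

    # Equal-degree factorization (Cantor–Zassenhaus) over F_q, q = p an odd prime.
    # f is monic, square-free, a product of r >= 2 irreducible factors of degree d.
    # Reuses polymul, polydivmod, polygcd, polypowmod from the Complexity-section sketch.
    import random

    def cantor_zassenhaus(f, d, q):
        n = len(f) - 1
        r = n // d
        e = (q ** d - 1) // 2
        factors = [f]
        while len(factors) < r:
            h = [random.randrange(q) for _ in range(n)]   # random h, deg(h) < n
            g = polypowmod(h, e, f, q)                    # h^((q^d - 1)/2) mod f
            g = (g or [0])[:]                             # ensure a constant-term slot
            g[0] = (g[0] - 1) % q                         # g := h^((q^d - 1)/2) - 1
            updated = []
            for u in factors:
                w = polygcd(g, u, q) if len(u) - 1 > d else [1]
                if w != [1] and w != u:                   # nontrivial split of u
                    updated += [w, polydivmod(u, w, q)[0]]
                else:
                    updated.append(u)
            factors = updated
        return factors

    # Example over F_3: f = (x^2 + 1)(x^2 + x + 2), two irreducible factors of degree 2
    print(sorted(cantor_zassenhaus([2, 1, 0, 1, 1], 2, 3)))
    # [[1, 0, 1], [2, 1, 1]]   i.e. x^2 + 1 and x^2 + x + 2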

Victor Shoup's algorithm

Like the algorithms of the preceding section, Victor Shoup's algorithm is an equal-degree factorization algorithm. [4] Unlike them, it is a deterministic algorithm. However, it is less efficient, in practice, than the algorithms of the preceding section. For Shoup's algorithm, the input is restricted to polynomials over prime fields Fp.

The worst-case time complexity of Shoup's algorithm has a factor √p. Although exponential, this complexity is much better than that of previous deterministic algorithms (Berlekamp's algorithm), which have p as a factor. However, there are very few polynomials for which the computing time is exponential, and the average time complexity of the algorithm is polynomial in d log(p), where d is the degree of the polynomial and p is the number of elements of the ground field.

Let g = g1 ... gk be the desired factorization, where the gi are distinct monic irreducible polynomials of degree d. Let n = deg(g) = kd. We consider the ring R = Fq[x]/g and denote also by x the image of x in R. The ring R is the direct product of the fields Ri = Fq[x]/gi, and we denote by pi the natural homomorphism from R onto Ri. The Galois group of Ri over Fq is cyclic of order d, generated by the field automorphism u → u^q. It follows that the roots of gi in Ri are

    pi(x), pi(x^q), pi(x^(q^2)), ..., pi(x^(q^(d−1))).

Like in the preceding algorithm, this algorithm uses the same subalgebra B of R as Berlekamp's algorithm, sometimes called the "Berlekamp subalgebra" and defined as

    B = { u ∈ R : u^q = u }.

A subset S of B is said to be a separating set if, for every 1 ≤ i < j ≤ k, there exists s ∈ S such that pi(s) ≠ pj(s). In the preceding algorithm, a separating set is constructed by choosing at random the elements of S. In Shoup's algorithm, the separating set is constructed in the following way. Let s in R[Y] be such that

    s(Y) = (Y − x)(Y − x^q) ⋯ (Y − x^(q^(d−1))) = s0 + s1 Y + ⋯ + s(d−1) Y^(d−1) + Y^d.

Then {s0, ..., s(d−1)} is a separating set, because pi(s(Y)) = gi(Y) for i = 1, ..., k (the two monic polynomials have the same roots). As the gi are pairwise distinct, for every pair of distinct indexes (i, j), at least one of the coefficients sh will satisfy pi(sh) ≠ pj(sh).
Having a separating set, Shoup's algorithm proceeds as the last algorithm of the preceding section, simply by replacing the instruction "choose at random h in the kernel of the linear map v → v^q − v" by "choose h + i with h in S and i in {1, ..., k−1}".

Time complexity

As described in previous sections, for the factorization over finite fields, there are randomized algorithms of polynomial time complexity (for example Cantor–Zassenhaus algorithm). There are also deterministic algorithms with a polynomial average complexity (for example Shoup's algorithm).

The existence of a deterministic algorithm with a polynomial worst-case complexity is still an open problem.

Rabin's test of irreducibility

Like the distinct-degree factorization algorithm, Rabin's algorithm [5] is based on the lemma stated above. The distinct-degree factorization algorithm tests every d not greater than half the degree of the input polynomial. Rabin's algorithm takes advantage of the fact that the factors themselves are not needed, so that fewer values of d have to be considered. Otherwise, it is similar to the distinct-degree factorization algorithm. It is based on the following fact.

Let p1, ..., pk be all the prime divisors of n, and denote ni = n/pi, for 1 ≤ i ≤ k. A polynomial f in Fq[x] of degree n is irreducible in Fq[x] if and only if gcd(f, x^(q^ni) − x) = 1 for 1 ≤ i ≤ k, and f divides x^(q^n) − x. In fact, if f has an irreducible factor of degree not dividing n, then f does not divide x^(q^n) − x; if f has an irreducible factor whose degree is a proper divisor of n, then this factor divides at least one of the x^(q^ni) − x.

Algorithm Rabin Irreducibility Test
    Input: A monic polynomial f in Fq[x] of degree n,
           p1, ..., pk all distinct prime divisors of n.
    Output: Either "f is irreducible" or "f is reducible".

    for j = 1 to k do
        nj := n/pj;
    for i = 1 to k do
        h := x^(q^ni) − x mod f;
        g := gcd(f, h);
        if g ≠ 1, then return "f is reducible" and STOP;
    end for;
    g := x^(q^n) − x mod f;
    if g = 0, then return "f is irreducible",
        else return "f is reducible"

The basic idea of this algorithm is to compute x^(q^ni) mod f, starting from the smallest of the ni, by repeated squaring or using the Frobenius automorphism, and then to take the corresponding gcd. Using elementary polynomial arithmetic, the computation of the matrix of the Frobenius automorphism needs

    O(n^2 (n + log(q)))

operations in Fq, the computation of

    x^(q^ni) − x mod f

needs O(n^3) further operations, and the algorithm itself needs O(kn^2) operations, giving a total of

    O(n^2 (n + log(q)))

operations in Fq. Using fast arithmetic (complexity O(n log(n) log(log(n))) for multiplication and division, and O(n log^2(n) log(log(n))) for GCD computation), the computation of the x^(q^ni) mod f by repeated squaring is O(n^2 log(q) log(n) log(log(n))), and the algorithm itself is O(k n log^2(n) log(log(n))), giving a total of

    O(n^2 log(q) log(n) log(log(n)) + k n log^2(n) log(log(n)))

operations in Fq.
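A Python sketch of Rabin's test for a prime field F_p (so q = p) follows; it computes x^(q^m) mod f by m successive qth powers and reuses the polygcd, polydivmod and polypowmod helpers from the Complexity-section sketch. The helper names are illustrative.

    # Rabin's irreducibility test over F_q with q = p prime.
    # Reuses polygcd, polydivmod, polypowmod from the Complexity-section sketch.
    def prime_divisors(n):
        ps, k = [], 2
        while k * k <= n:
            if n % k == 0:
                ps.append(k)
                while n % k == 0:
                    n //= k
            k += 1
        if n > 1:
            ps.append(n)
        return ps

    def x_q_power(m, f, q):
        h = [0, 1]                                   # h = x
        for _ in range(m):                           # m successive q-th powers
            h = polypowmod(h, q, f, q)
        return h                                     # x^(q^m) mod f

    def minus_x(h, q):
        h = h[:] + [0] * max(0, 2 - len(h))
        h[1] = (h[1] - 1) % q
        return h                                     # h - x

    def rabin_irreducible(f, q):
        n = len(f) - 1
        for pi in prime_divisors(n):
            h = minus_x(x_q_power(n // pi, f, q), q) # x^(q^(n/pi)) - x mod f
            if polygcd(f, h, q) != [1]:
                return False                         # a factor of small degree exists
        g = minus_x(x_q_power(n, f, q), q)           # x^(q^n) - x, reduced below
        return polydivmod(g, f, q)[1] == []          # irreducible iff f divides it

    # Example over F_2: x^2 + x + 1 is irreducible, x^2 + 1 = (x + 1)^2 is not.
    print(rabin_irreducible([1, 1, 1], 2), rabin_irreducible([1, 0, 1], 2))   # True False

Together with the count given by Moreau's formula above, such a test yields the usual way of generating the irreducible polynomial needed to construct a field of non-prime order: draw monic polynomials of degree n at random and keep the first one that passes the test (on average about n draws are needed).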

References

Notes

  1. "Reducibility over $\mathbb{Z}_2$?". Mathematics Stack Exchange. Retrieved 2023-09-10.
  2. Christophe Reutenauer, Mots circulaires et polynomes irreductibles, Ann. Sci. math Quebec, vol 12, no 2, pp. 275-285
  3. Flajolet, Philippe; Steayaert, Jean-Marc (1982), Automata, languages and programming, Lecture Notes in Comput. Sci., vol. 140, Aarhus: Springer, pp. 239–251, doi:10.1007/BFb0012773, ISBN   978-3-540-11576-2
  4. Victor Shoup, On the deterministic complexity of factoring polynomials over finite fields, Information Processing Letters 33:261-267, 1990
  5. Rabin, Michael (1980). "Probabilistic algorithms in finite fields". SIAM Journal on Computing. 9 (2): 273–280. CiteSeerX   10.1.1.17.5653 . doi:10.1137/0209024.