Polynomial identity testing

In mathematics, polynomial identity testing (PIT) is the problem of efficiently determining whether two multivariate polynomials are identical. More formally, a PIT algorithm is given an arithmetic circuit that computes a polynomial $p$ over a field, and decides whether $p$ is the zero polynomial. Determining the computational complexity required for polynomial identity testing, in particular finding deterministic algorithms for PIT, is one of the most important open problems in algebraic complexity theory.

Description

The question "Does equal " is a question about whether two polynomials are identical. As with any polynomial identity testing question, it can be trivially transformed into the question "Is a certain polynomial equal to 0?"; in this case we can ask "Does "? If we are given the polynomial as an algebraic expression (rather than as a black-box), we can confirm that the equality holds through brute-force multiplication and addition, but the time complexity of the brute-force approach grows as , where is the number of variables (here, : is the first and is the second), and is the degree of the polynomial (here, ). If and are both large, grows exponentially. [1]
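
As a concrete illustration, the brute-force check can be carried out with a computer algebra system. The following Python sketch (using the SymPy library; the variable setup is invented for this example) expands the difference of the two sides and verifies that every monomial cancels.

    import sympy

    x, y = sympy.symbols("x y")

    # Brute force: multiply everything out and collect like terms. This
    # expansion step is what costs on the order of binomial(n + d, d)
    # operations in general, for n variables and degree d.
    difference = sympy.expand((x + y) * (x - y) - (x**2 - y**2))

    print(difference)        # prints 0: every term cancels
    print(difference == 0)   # True, so the identity holds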

PIT concerns whether a polynomial is identical to the zero polynomial, rather than whether the function implemented by the polynomial always evaluates to zero on the given domain. For example, the field with two elements, GF(2), contains only the elements 0 and 1. In GF(2), $x^2 + x$ always evaluates to zero; despite this, PIT does not consider $x^2 + x$ to be equal to the zero polynomial. [2]
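
A minimal Python sketch of this distinction, using a plain coefficient list as a representation chosen only for this example: over GF(2) the polynomial $x^2 + x$ vanishes at every point of the field, yet not all of its coefficients are zero, so PIT treats it as a nonzero polynomial.

    # x^2 + x over GF(2), stored as the coefficient list [c0, c1, c2]
    # of c0 + c1*x + c2*x^2.
    coeffs = [0, 1, 1]

    def evaluate_mod2(coeffs, point):
        """Evaluate the polynomial at the given point, modulo 2."""
        return sum(c * point**i for i, c in enumerate(coeffs)) % 2

    # The *function* on GF(2) = {0, 1} is identically zero ...
    print([evaluate_mod2(coeffs, v) for v in (0, 1)])   # [0, 0]

    # ... but the *polynomial* is nonzero: it has a nonzero coefficient,
    # and that is what PIT asks about.
    print(any(c % 2 != 0 for c in coeffs))              # True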

Determining the computational complexity required for polynomial identity testing is one of the most important open problems in the mathematical subfield known as algebraic complexity theory. [1] [3] The study of PIT is a building block for many other areas of computational complexity, such as the proof that IP = PSPACE. [1] [4] In addition, PIT has applications to Tutte matrices and to primality testing, where PIT techniques led to the AKS primality test, the first deterministic (though impractical) polynomial-time algorithm for primality testing. [1]

Formal problem statement

Given an arithmetic circuit that computes a polynomial over a field, determine whether the polynomial is equal to the zero polynomial (that is, the polynomial with no nonzero terms). [1]
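
To make the input model concrete, here is a Python sketch of one possible encoding of an arithmetic circuit as a small DAG of constant, variable, addition, and multiplication gates, evaluated over GF(p). The class names and encoding are assumptions of this example rather than a standard format.

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Var:
        name: str

    @dataclass(frozen=True)
    class Const:
        value: int

    @dataclass(frozen=True)
    class Gate:
        op: str          # "+" or "*"
        left: "Node"
        right: "Node"

    Node = Union[Var, Const, Gate]

    def evaluate(node: Node, assignment: dict, p: int) -> int:
        """Evaluate the circuit at a point, with arithmetic in GF(p)."""
        if isinstance(node, Var):
            return assignment[node.name] % p
        if isinstance(node, Const):
            return node.value % p
        l = evaluate(node.left, assignment, p)
        r = evaluate(node.right, assignment, p)
        return (l + r) % p if node.op == "+" else (l * r) % p

    # The circuit for (x + y)(x - y) - (x^2 - y^2), which computes the
    # zero polynomial; subtraction is addition of a (-1)-scaled value.
    x, y = Var("x"), Var("y")
    lhs = Gate("*", Gate("+", x, y), Gate("+", x, Gate("*", Const(-1), y)))
    rhs = Gate("+", Gate("*", x, x), Gate("*", Const(-1), Gate("*", y, y)))
    circuit = Gate("+", lhs, Gate("*", Const(-1), rhs))

    print(evaluate(circuit, {"x": 5, "y": 3}, p=101))   # 0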

Solutions

In some cases, the specification of the arithmetic circuit is not given to the PIT solver, which can only feed input values into a "black box" that implements the circuit and then analyze the output. The solutions below assume that any operation (such as multiplication) in the given field takes constant time; further, all black-box algorithms below assume that the size of the field is larger than the degree of the polynomial.

The Schwartz–Zippel algorithm provides a practical probabilistic solution: evaluate the polynomial at randomly chosen inputs and check whether the output is zero. It was the first randomized polynomial-time PIT algorithm to be proven correct. [1] The larger the domain the inputs are drawn from, the less likely Schwartz–Zippel is to fail. If random bits are in short supply, the Chen–Kao algorithm (over the rationals) or the Lewin–Vadhan algorithm (over any field) requires fewer random bits at the cost of increased runtime. [2]
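
The following Python sketch shows the idea for a black-box polynomial over GF(p), with p a prime larger than the degree; the function name and calling convention are invented for this example. A nonzero polynomial of total degree d vanishes at a uniformly random point with probability at most d/p, so independent trials shrink the failure probability geometrically.

    import random

    def schwartz_zippel_is_zero(f, num_vars, degree, p, trials=20):
        """Probabilistic zero test for a black-box polynomial f over GF(p).

        f takes num_vars field elements and returns the polynomial's value
        mod p. If f is the zero polynomial the answer is always True; if
        not, each trial errs with probability at most degree / p, so the
        overall error probability is at most (degree / p) ** trials.
        """
        assert p > degree, "the field must be larger than the degree"
        for _ in range(trials):
            point = [random.randrange(p) for _ in range(num_vars)]
            if f(*point) % p != 0:
                return False   # a nonzero value certifies f is nonzero
        return True            # probably the zero polynomial

    p = 1_000_003  # a prime much larger than the degrees below

    # (x + y)^2 - x^2 - 2xy - y^2 is identically zero ...
    print(schwartz_zippel_is_zero(
        lambda x, y: ((x + y)**2 - x**2 - 2*x*y - y**2) % p, 2, 2, p))  # True

    # ... while x*y - 1 is not, and is almost surely caught immediately.
    print(schwartz_zippel_is_zero(lambda x, y: (x*y - 1) % p, 2, 2, p))  # False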

A sparse polynomial has at most $m$ nonzero monomial terms. Sparse PIT can be solved deterministically in time polynomial in the size of the circuit and the number of monomials [1] (see also [5]).
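
As a hedged illustration of why sparsity helps, consider the following Python sketch over the integers, in the style of zero tests that evaluate at powers of distinct primes (a much cruder relative of the algorithms behind [1] and [5]). A polynomial with at most $m$ terms that vanishes at the points $(p_1^k, \ldots, p_n^k)$ for $k = 0, \ldots, m - 1$ must be the zero polynomial: by unique factorization, distinct monomials take distinct values at $(p_1, \ldots, p_n)$, so the evaluations form a nonsingular Vandermonde system in the coefficients.

    def first_primes(n):
        """Return the first n primes (trial division; fine for small n)."""
        primes, candidate = [], 2
        while len(primes) < n:
            if all(candidate % q for q in primes):
                primes.append(candidate)
            candidate += 1
        return primes

    def sparse_is_zero(f, num_vars, max_terms):
        """Deterministic zero test for a black-box integer polynomial f
        that is promised to have at most max_terms monomials."""
        primes = first_primes(num_vars)
        for k in range(max_terms):
            point = [p**k for p in primes]
            if f(*point) != 0:
                return False   # witnessed a nonzero value
        return True            # zero, given the sparsity promise

    # x*y - y*x has at most 2 terms and is identically zero.
    print(sparse_is_zero(lambda x, y: x*y - y*x, 2, 2))   # True

    # x^3 - y has 2 terms and is nonzero.
    print(sparse_is_zero(lambda x, y: x**3 - y, 2, 2))    # False

The intermediate values in this sketch grow quickly, which is one reason the algorithms in the literature are considerably more involved.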

A low-degree PIT instance comes with an upper bound on the degree of the polynomial. Any low-degree PIT problem can be reduced, in time subexponential in the size of the circuit, to a PIT problem for depth-four circuits; therefore, PIT for circuits of depth four (and below) is intensely studied. [1]

Related Research Articles

In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer, and is solvable by mechanical application of mathematical steps, such as an algorithm.

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.

In computational complexity theory, a decision problem is PSPACE-complete if it can be solved using an amount of memory that is polynomial in the input length and if every other problem that can be solved in polynomial space can be transformed to it in polynomial time. The problems that are PSPACE-complete can be thought of as the hardest problems in PSPACE, the class of decision problems solvable in polynomial space, because a solution to any one such problem could easily be used to solve any other problem in PSPACE.

In computational complexity theory, randomized polynomial time (RP) is the complexity class of problems for which a probabilistic Turing machine exists with these properties: it always runs in polynomial time in the input size; if the correct answer is NO, it always returns NO; and if the correct answer is YES, it returns YES with probability at least 1/2 (otherwise, it returns NO).

In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring K[x1, ..., xn] over a field K. A Gröbner basis allows many important properties of the ideal and the associated algebraic variety to be deduced easily, such as the dimension and the number of zeros when it is finite. Gröbner basis computation is one of the main practical tools for solving systems of polynomial equations and computing the images of algebraic varieties under projections or rational maps.

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.

In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic. This connection between complexity and the logic of finite structures allows results to be transferred easily from one area to the other, facilitating new proof methods and providing additional evidence that the main complexity classes are somehow "natural" and not tied to the specific abstract machines used to define them.

In computational complexity theory, P/poly is a complexity class representing problems that can be solved by small circuits. More precisely, it is the set of formal languages that have polynomial-size circuit families. It can also be defined equivalently in terms of Turing machines with advice, extra information supplied to the Turing machine along with its input, that may depend on the input length but not on the input itself. In this formulation, P/poly is the class of decision problems that can be solved by a polynomial-time Turing machine with advice strings of length polynomial in the input size. These two different definitions make P/poly central to circuit complexity and non-uniform complexity.

In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root, or, equivalently, a common factor. In some older texts, the resultant is also called the eliminant.

In mathematics, the Schwartz–Zippel lemma is a tool commonly used in probabilistic polynomial identity testing, i.e. in the problem of determining whether a given multivariate polynomial is the 0-polynomial. It was discovered independently by Jack Schwartz, Richard Zippel, and Richard DeMillo and Richard J. Lipton, although DeMillo and Lipton's version was shown a year prior to Schwartz and Zippel's result. The finite field version of this bound was proved by Øystein Ore in 1922.

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of the Boolean circuits that compute them. A related notion is the circuit complexity of a recursive language that is decided by a uniform family of circuits.

In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified, using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false. If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT.

In computational complexity theory, arithmetic circuits are the standard model for computing polynomials. Informally, an arithmetic circuit takes as inputs either variables or numbers, and is allowed to either add or multiply two expressions it has already computed. Arithmetic circuits provide a formal way to understand the complexity of computing polynomials. The basic type of question in this line of research is "what is the most efficient way to compute a given polynomial $f$?"

Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical complexity classes.

In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation. Without placing constraints on player utilities, describing a game of $n$ players, each facing $s$ strategies, requires listing $n s^n$ utility values. Even trivial algorithms are capable of finding a Nash equilibrium in a time polynomial in the length of such a large input. A succinct game is of polynomial type if in a game represented by a string of length n the number of players, as well as the number of strategies of each player, is bounded by a polynomial in n.

In mathematics and computer algebra the factorization of a polynomial consists of decomposing it into a product of irreducible factors. This decomposition is theoretically possible and is unique for polynomials with coefficients in any field, but rather strong restrictions on the field of the coefficients are needed to allow the computation of the factorization by means of an algorithm. In practice, algorithms have been designed only for polynomials with coefficients in a finite field, in the field of rationals or in a finitely generated field extension of one of them.

References

  1. Saxena, Nitin. "Progress on Polynomial Identity Testing." Bulletin of the EATCS 99 (2009): 49–79.
  2. Shpilka, Amir, and Amir Yehudayoff. "Arithmetic Circuits: A Survey of Recent Results and Open Questions." Foundations and Trends in Theoretical Computer Science 5.3–4 (2010): 207–388.
  3. Dvir, Zeev, and Amir Shpilka. "Locally Decodable Codes with Two Queries and Polynomial Identity Testing for Depth 3 Circuits." SIAM Journal on Computing 36.5 (2007): 1404–1434.
  4. Shamir, Adi. "IP = PSPACE." Journal of the ACM 39.4 (1992): 869–877.
  5. Grigoriev, Dima, Marek Karpinski, and Michael F. Singer. "Fast Parallel Algorithms for Sparse Multivariate Polynomial Interpolation over Finite Fields." SIAM Journal on Computing 19.6 (1990): 1059–1063.