Parsimonious reduction

In computational complexity theory and game complexity, a parsimonious reduction is a transformation from one problem to another (a reduction) that preserves the number of solutions. Informally, it induces a bijection between the respective sets of solutions of the two problems. A general reduction from problem A to problem B is a transformation that guarantees that whenever A has a solution, B also has at least one solution, and vice versa. A parsimonious reduction guarantees that for every solution of A there exists a unique solution of B, and vice versa.

Parsimonious reductions are commonly used in computational complexity for proving the hardness of counting problems, for counting complexity classes such as #P. Additionally, they are used in game complexity, as a way to design hard puzzles that have a unique solution, as many types of puzzles require.

Formal definition

Let x be an instance of problem A. A parsimonious reduction from problem A to problem B is a reduction f such that the number of solutions to x is equal to the number of solutions to f(x), the corresponding instance of problem B. [1] If such a reduction exists, and if we have an oracle that counts the number of solutions to f(x), which is an instance of B, then we can design an algorithm that counts the number of solutions to x, the corresponding instance of A. Consequently, if counting the number of solutions to the instances of A is hard, then counting the number of solutions to B must be hard as well.
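
To make this argument concrete, the following is a minimal sketch in Python (an illustration, not taken from the cited sources). The names reduce_A_to_B and count_solutions_B are hypothetical placeholders standing in for a parsimonious reduction from A to B and for a counting oracle for B.

    def count_solutions_A(x, reduce_A_to_B, count_solutions_B):
        # Count the solutions of an instance x of problem A, assuming that
        # reduce_A_to_B is a parsimonious reduction from A to B and that
        # count_solutions_B is an oracle counting solutions of B-instances.
        f_x = reduce_A_to_B(x)         # transform the A-instance into a B-instance
        return count_solutions_B(f_x)  # the counts agree, so the oracle's answer is the answer for x

Because the reduction preserves the number of solutions exactly, no correction step is needed after the oracle call; it is precisely this step that fails for reductions that are not parsimonious.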

Applications

Just as many-one reductions are important for proving NP-completeness, parsimonious reductions are important for proving completeness for counting complexity classes such as #P. [1] Because parsimonious reductions preserve the property of having a unique solution, they are also used in game complexity, to show the hardness of puzzles such as sudoku where the uniqueness of the solution is an important part of the definition of the puzzle. [2]

Specific types of parsimonious reductions may be defined by the computational complexity or other properties of the transformation algorithm. For instance, a polynomial-time parsimonious reduction is one in which the transformation algorithm takes polynomial time. These are the types of reduction used to prove #P-completeness. [1] In parameterized complexity, FPT parsimonious reductions are used; these are parsimonious reductions whose transformation is a fixed-parameter tractable algorithm and that map bounded parameter values to bounded parameter values by a computable function. [3]

Polynomial-time parsimonious reductions are a special case of a more general class of reductions for counting problems, the polynomial-time counting reductions. [4]

One common technique used in proving that a reduction is parsimonious is to exhibit a bijection between the set of solutions to A and the set of solutions to B, which guarantees that the number of solutions to the two problems is the same.
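
For small instances, this count-preservation property can also be checked directly by exhaustive enumeration. The sketch below is only an illustration (not part of the cited material); solutions_A, solutions_B and reduction are hypothetical callables supplied by the user, the first two enumerating the full solution sets of instances of A and B.

    def is_parsimonious_on(instances, solutions_A, solutions_B, reduction):
        # Sanity check on a finite family of A-instances: a parsimonious
        # reduction must satisfy
        #     len(solutions_A(x)) == len(solutions_B(reduction(x)))
        # for every instance x. Failing this test rules the reduction out;
        # passing it is, of course, not a proof of parsimony in general.
        for x in instances:
            if len(solutions_A(x)) != len(solutions_B(reduction(x))):
                return False
        return True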

Examples of parsimonious reduction in proving #P-completeness

The class #P contains the counting versions of NP decision problems. Given an instance of an NP decision problem, the corresponding counting problem asks for the number of solutions to that instance. The examples of #P-completeness below rely on the fact that #SAT is #P-complete.
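
As a point of reference for what #SAT asks, here is a brute-force model counter. It is an illustrative sketch only; the DIMACS-style clause encoding is just a convenient convention and is not used in the references.

    from itertools import product

    def count_satisfying_assignments(num_vars, clauses):
        # Each clause is a list of nonzero integers: literal i stands for
        # variable i, and literal -i for its negation.
        count = 0
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                count += 1
        return count

    # Example: x1 OR (NOT x2) has three satisfying assignments over {x1, x2}.
    print(count_satisfying_assignments(2, [[1, -2]]))  # prints 3

The same counter works for the 3-CNF instances discussed below, since a 3-CNF formula is simply a CNF formula whose clauses have at most three literals.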

#3SAT

This is the counting version of 3SAT. One can show that any Boolean formula can be rewritten as a formula in 3-CNF such that each satisfying assignment of the original formula corresponds to a unique satisfying assignment of the 3-CNF formula, and vice versa. Hence this rewriting preserves the number of satisfying assignments and is a parsimonious reduction, so #SAT and #3SAT are equivalent as counting problems and #3SAT is #P-complete as well.

Planar #3SAT

This is the counting version of Planar 3SAT. The hardness reduction from 3SAT to Planar 3SAT given by Lichtenstein [5] has the additional property that for every valid assignment of an instance of 3SAT, there is a unique valid assignment of the corresponding instance of Planar 3SAT, and vice versa. Hence the reduction is parsimonious, and consequently Planar #3SAT is #P-complete.

Hamiltonian Cycle

The counting version of this problem asks for the number of Hamiltonian cycles in a given directed graph. Takahiro Seta provided a reduction [6] from 3SAT to this problem restricted to planar directed graphs of maximum degree 3. The reduction provides a bijection between the solutions to an instance of 3SAT and the solutions to the corresponding instance of Hamiltonian Cycle in such graphs. Hence the reduction is parsimonious, and Hamiltonian Cycle in planar directed graphs of maximum degree 3 is #P-complete. Consequently, the general version of the Hamiltonian cycle problem must be #P-complete as well.
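
For very small graphs, the quantity being counted here can be computed directly by brute force. The sketch below enumerates directed Hamiltonian cycles in exponential time; it is meant only to illustrate the counting problem, not the reduction.

    from itertools import permutations

    def count_hamiltonian_cycles(vertices, edges):
        # Count directed Hamiltonian cycles in a graph given by a vertex list
        # and a set of directed edges (u, v). Fixing the starting vertex means
        # each cyclic sequence of vertices is counted exactly once.
        if not vertices:
            return 0
        start, rest = vertices[0], vertices[1:]
        count = 0
        for perm in permutations(rest):
            cycle = (start,) + perm
            if all((cycle[i], cycle[(i + 1) % len(cycle)]) in edges
                   for i in range(len(cycle))):
                count += 1
        return count

    # Example: a directed triangle has exactly one Hamiltonian cycle.
    print(count_hamiltonian_cycles([1, 2, 3], {(1, 2), (2, 3), (3, 1)}))  # prints 1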

Shakashaka

Shakashaka is an example of how a parsimonious reduction can be used to show the hardness of logic puzzles. The decision version of this problem asks whether there is a solution to a given instance of the puzzle, while the counting version asks for the number of distinct solutions. The reduction from Planar 3SAT given by Demaine, Okamoto, Uehara and Uno [7] also provides a bijection between the set of solutions to an instance of Planar 3SAT and the set of solutions to the corresponding instance of Shakashaka. Hence the reduction is parsimonious, and the counting version of Shakashaka is #P-complete.

Related Research Articles

The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified can also be solved quickly.

In computational complexity theory, the complexity class #P is the set of the counting problems associated with the decision problems in the set NP. More formally, #P is the class of function problems of the form "compute f(x)", where f is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. Unlike most well-known complexity classes, it is not a class of decision problems but a class of function problems. The most difficult, representative problems of this class are #P-complete.

The #P-complete problems form a complexity class in computational complexity theory. The problems in this complexity class are defined by having the following two properties: the problem must belong to #P, and every other problem in #P must be reducible to it (for instance, by a polynomial-time Turing reduction or a polynomial-time counting reduction).

In computational complexity theory, a decision problem is PSPACE-complete if it can be solved using an amount of memory that is polynomial in the input length and if every other problem that can be solved in polynomial space can be transformed to it in polynomial time. The problems that are PSPACE-complete can be thought of as the hardest problems in PSPACE, the class of decision problems solvable in polynomial space, because a solution to any one such problem could easily be used to solve any other problem in PSPACE.

In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.

Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, such as the ones previously mentioned, exhaustive search is not tractable, and so specialized algorithms that quickly rule out large parts of the search space or approximation algorithms must be resorted to instead.

In theoretical computer science, the subgraph isomorphism problem is a computational task in which two graphs G and H are given as input, and one must determine whether G contains a subgraph that is isomorphic to H. Subgraph isomorphism is a generalization of both the maximum clique problem and the problem of testing whether a graph contains a Hamiltonian cycle, and is therefore NP-complete. However certain other cases of subgraph isomorphism may be solved in polynomial time.

In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time.

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by Downey & Fellows (1999).

In computational complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.

In computational complexity theory and computability theory, a counting problem is a type of computational problem. If R is a search problem, the associated counting problem asks, for each input, for the number of solutions to that input, that is, the number of objects y that stand in the relation R to the given input.

MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), which is a decision problem considered in complexity theory. It is defined as: given a 3-CNF formula (a conjunction of clauses, each containing at most three literals), find a truth assignment that satisfies the largest possible number of clauses.

In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem. The main characteristics of problems that lie in PLS are that the cost of a solution can be calculated in polynomial time and the neighborhood of a solution can be searched in polynomial time. Therefore it is possible to verify whether or not a solution is a local optimum in polynomial time. Furthermore, depending on the problem and the algorithm that is used for solving the problem, it might be faster to find a local optimum instead of a global optimum.

In computational complexity theory, a gadget is a subset of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. The component design technique is a method for constructing reductions by using gadgets.

In computational complexity theory, a problem is NP-complete when:

  1. it is a problem for which the correctness of each solution can be verified quickly and a brute-force search algorithm can find a solution by trying all possible solutions.
  2. the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct. In this sense, NP-complete problems are the hardest of the problems whose solutions can be verified quickly. If we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem whose solutions can be easily verified.

In computer science, the Sharp Satisfiability Problem (#SAT) is the problem of counting the number of interpretations that satisfy a given Boolean formula, introduced by Valiant in 1979. In other words, it asks in how many ways the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. For example, the formula "a OR NOT b" is satisfied by three distinct Boolean assignments of its variables, namely (a = TRUE, b = TRUE), (a = TRUE, b = FALSE), and (a = FALSE, b = FALSE).

In computational complexity theory, a gap reduction is a reduction to a particular type of decision problem, known as a c-gap problem. Such reductions provide information about the hardness of approximating solutions to optimization problems. In short, a gap problem is one in which the objective is to distinguish cases where the best solution is above one threshold from cases where it is below another threshold, such that the two thresholds have a gap in between. Gap reductions can be used to demonstrate inapproximability results: if a problem can be approximated to within a factor better than the size of the gap, then the approximation algorithm can be used to solve the corresponding gap problem.

In the computational complexity theory of counting problems, a polynomial-time counting reduction is a type of reduction used to define the notion of completeness for the complexity class #P. These reductions may also be called polynomial many-one counting reductions or weakly parsimonious reductions; they are analogous to many-one reductions for decision problems, and they generalize parsimonious reductions.

In computer science, the planar 3-satisfiability problem (Planar 3SAT) is an extension of the classical Boolean 3-satisfiability problem to formulas with a planar incidence graph. In other words, it asks whether the variables of a given Boolean formula, whose incidence graph of variables and clauses can be embedded in the plane, can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make a AND NOT b = TRUE. In contrast, "a AND NOT a" is unsatisfiable.

References

  1. Goldreich, Oded (2008), Computational Complexity: A Conceptual Perspective, Cambridge University Press, pp. 203–204, ISBN 9781139472746
  2. Yato, Takayuki; Seta, Takahiro (2003), "Complexity and Completeness of Finding Another Solution and Its Application to Puzzles", IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E86-A (5): 1052–1060
  3. Flum, J.; Grohe, M. (2006), Parameterized Complexity Theory, EATCS Texts in Theoretical Computer Science, Springer, p. 363, ISBN 9783540299530
  4. Gomes, Carla P.; Sabharwal, Ashish; Selman, Bart (2009), "Chapter 20. Model Counting", in Biere, Armin; Heule, Marijn; van Maaren, Hans; Walsh, Toby (eds.), Handbook of Satisfiability (PDF), Frontiers in Artificial Intelligence and Applications, vol. 185, IOS Press, pp. 633–654, ISBN 9781586039295. See in particular pp. 634–635.
  5. Lichtenstein, David (May 1982). "Planar Formulae and Their Uses". SIAM Journal on Computing. 11 (2): 329–343. doi:10.1137/0211025. ISSN 0097-5397.
  6. Seta, Takahiro (2001). The Complexities of Puzzles, Cross Sum, and their Another Solution Problems (ASP). CiteSeerX 10.1.1.81.7891.
  7. "JAIST Repository: Computational complexity and an integer programming model of Shakashaka". dspace.jaist.ac.jp. Retrieved 2019-05-15.