MAX-3SAT

MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), which is a decision problem considered in complexity theory. It is defined as:

Given a 3-CNF formula Φ (i.e. with at most 3 literals per clause), find an assignment that satisfies the largest number of clauses.
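A small illustrative instance (my own example, not from the article; note that clauses may also have fewer than three literals): no assignment satisfies all four clauses below, but every assignment satisfies exactly three of them, so the optimum of this instance is 3.

```latex
% Any truth assignment to x_1, x_2 falsifies exactly one of these clauses.
\Phi = (x_1 \lor x_2) \land (x_1 \lor \lnot x_2) \land (\lnot x_1 \lor x_2) \land (\lnot x_1 \lor \lnot x_2)
```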

MAX-3SAT is a canonical complete problem for the complexity class MAXSNP (shown complete in Papadimitriou, p. 314).

Approximability

The decision version of MAX-3SAT is NP-complete. Therefore, a polynomial-time exact solution can only be achieved if P = NP. An approximation within a factor of 2 can be achieved with this simple algorithm, however: output whichever of the two assignments, all variables TRUE or all variables FALSE, satisfies more clauses. Every nonempty clause contains either a positive literal, which is satisfied by the all-TRUE assignment, or a negative literal, which is satisfied by the all-FALSE assignment; hence every clause is satisfied by at least one of the two assignments, and the better of the two satisfies at least half of the clauses.
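A minimal sketch of this 2-approximation in Python (an illustration, not code from any reference; it assumes the common DIMACS-style convention in which a clause is a list of nonzero integers, a positive i standing for the literal xi and a negative −i for ¬xi):

```python
def satisfied_count(clauses, value):
    """Count clauses satisfied when every variable is set to `value`:
    a positive literal is true iff value is True, a negative literal
    is true iff value is False."""
    return sum(any((lit > 0) == value for lit in clause) for clause in clauses)

def two_approx(clauses):
    """Return the better of the all-True / all-False assignments.
    Every nonempty clause is satisfied by at least one of the two,
    so the winner satisfies at least half of all clauses."""
    return max([True, False], key=lambda v: satisfied_count(clauses, v))

# The 4-clause instance from the example above: the optimum is 3,
# and either constant assignment already satisfies 3 of the 4 clauses.
clauses = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
best = two_approx(clauses)
print(best, satisfied_count(clauses, best))  # True 3
```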

The Karloff–Zwick algorithm runs in polynomial time and, in expectation, satisfies at least 7/8 of the clauses. While this algorithm is randomized, it can be derandomized using, e.g., the techniques from [1] to yield a deterministic (polynomial-time) algorithm with the same approximation guarantees.

Theorem 1 (inapproximability)

The PCP theorem implies that there exists an ε > 0 such that (1-ε)-approximation of MAX-3SAT is NP-hard.

Proof:

By the PCP theorem, any NP-complete problem L is in PCP(O(log n), O(1)). For x ∈ L, a 3-CNF formula Ψx is constructed so that

- x ∈ L implies Ψx is satisfiable, and
- x ∉ L implies that no more than a (1 − ε)-fraction of the clauses of Ψx is satisfiable.

The Verifier V reads all required bits at once, i.e. makes non-adaptive queries. This is valid because the number of queries remains constant. Let q be that number of queries. For each random string R (of length O(log |x|), so there are only polynomially many of them), V chooses q positions i1, ..., iq and a Boolean function fR: {0,1}^q → {0,1}, and accepts if and only if fR(π(i1), ..., π(iq)) = 1, where π denotes the proof.

Next we construct a Boolean formula to simulate this. We introduce Boolean variables x1, ..., xl, where l is the length of the proof. What is needed is a correspondence between the number of satisfiable clauses and the probability that the Verifier accepts:

- For every R, add clauses representing fR(xi1, ..., xiq) using at most 2^q CNF clauses; clauses of length q are converted to clauses of length 3 by adding auxiliary variables, e.g. x2 ∨ x10 ∨ x11 ∨ x12 becomes (x2 ∨ x10 ∨ yR) ∧ (¬yR ∨ x11 ∨ x12). This requires at most q·2^q 3-SAT clauses per random string.
- If x ∈ L, there is a proof π such that V^π(x) accepts for every R; setting xi = π(i) (and the auxiliary variables accordingly) satisfies all clauses.
- If x ∉ L, then for every assignment to x1, ..., xl and the auxiliary variables, the proof defined by π(i) = xi makes the Verifier reject for at least half of all random strings R, and for each rejecting R at least one of its clauses fails. Hence at least a 1/(2q·2^q)-fraction of the clauses fails, so the theorem holds with ε = 1/(2q·2^q).

Conversely, if such a gap-producing reduction exists for every NP-complete problem, then the PCP theorem must be true.

Theorem 2

Håstad [2] demonstrates a tighter result than Theorem 1, i.e. the best possible value for ε.

He constructs a PCP Verifier for 3-SAT that reads only 3 bits from the Proof.

For every ε > 0, there is a PCP-verifier M for 3-SAT that reads a random string r of length O(log n) and computes query positions ir, jr, kr in the proof π and a bit br. It accepts if and only if π(ir) ⊕ π(jr) ⊕ π(kr) = br.

The Verifier has completeness (1 − ε) and soundness 1/2 + ε (refer to PCP (complexity)); that is, it satisfies

- for every satisfiable 3-SAT instance x there is a proof π with Pr[V^π(x) accepts] ≥ 1 − ε, and
- for every unsatisfiable x and every proof π, Pr[V^π(x) accepts] ≤ 1/2 + ε.

If the first of these two conditions required acceptance with probability exactly 1, as usual, one could find a proof π by solving a system of linear equations over GF(2) (see MAX-3LIN-EQN), implying P = NP; the imperfect completeness is therefore essential.
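As a toy illustration (my own sketch, not Håstad's construction): once the random string has been expanded into a query triple (i, j, k) and a target bit b — the expansion being the nontrivial part of the construction, omitted here — the verifier's test is a single XOR of three proof bits:

```python
def accepts(proof, i, j, k, b):
    """Håstad-style 3-query check: read three bits of the proof
    and test the single XOR equation pi(i) xor pi(j) xor pi(k) = b."""
    return (proof[i] ^ proof[j] ^ proof[k]) == b

# Toy usage with a hypothetical 8-bit proof and one query triple:
pi = [1, 0, 1, 1, 0, 0, 1, 0]
print(accepts(pi, 0, 2, 5, 0))  # 1 ^ 1 ^ 0 == 0 -> True
```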

This is enough to prove hardness of approximation within any ratio greater than 7/8. Each equation π(ir) ⊕ π(jr) ⊕ π(kr) = br can be written as four 3-SAT clauses such that all four are satisfied when the equation holds and exactly three are satisfied otherwise; an algorithm approximating MAX-3SAT within 7/8 + ε could then distinguish the completeness case from the soundness case of the Verifier.
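Written out for concreteness (a routine gadget, shown here with br = 1):

```latex
% Four 3-clauses equivalent to x \oplus y \oplus z = 1. Each clause rules
% out exactly one of the four even-parity (violating) assignments, so a
% violated equation still satisfies three of the four clauses.
(x \lor y \lor z) \land (x \lor \lnot y \lor \lnot z) \land
(\lnot x \lor y \lor \lnot z) \land (\lnot x \lor \lnot y \lor z)
```

For br = 0 the same construction is used with one of the three variables negated throughout.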

MAX-3SAT(B) is the restricted special case of MAX-3SAT where every variable occurs in at most B clauses. Before the PCP theorem was proven, Papadimitriou and Yannakakis [3] showed that for some fixed constant B, this problem is MAXSNP-hard. Consequently, with the PCP theorem, it is also APX-hard. This is useful because MAX-3SAT(B) can often be used to obtain a PTAS-preserving reduction in a way that MAX-3SAT cannot. Proofs for explicit values of B include all B ≥ 13 [4] [5] and all B ≥ 3 [6] (which is best possible).

Moreover, although the decision problem 2SAT is solvable in polynomial time, MAX-2SAT(3) is also APX-hard. [6]

The best possible approximation ratio for MAX-3SAT(B), as a function of B, is at least 7/8 + Ω(1/B) and at most 7/8 + O(1/√B), unless NP = RP. [7] Some explicit bounds on the approximability constants for certain values of B are known. [8] [9] [10] Berman, Karpinski and Scott proved that for the "critical" instances of MAX-3SAT, in which each literal occurs exactly twice and each clause has exactly three literals, the problem is hard to approximate within some constant factor. [11]

MAX-EkSAT is a parameterized version of MAX-3SAT where every clause has exactly k literals, for k ≥ 3. It can be efficiently approximated with approximation ratio 1 − 2^(−k) using ideas from coding theory.
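The coding-theoretic derandomization itself is beyond a short sketch, but the source of the ratio is elementary: a uniformly random assignment satisfies a clause with exactly k distinct literals with probability 1 − 2^(−k), and the classic method of conditional expectations (a different derandomization from the coding-theoretic one, shown here only as an illustration) achieves the same guarantee deterministically. A minimal sketch, using the same signed-integer clause convention as in the earlier example:

```python
from fractions import Fraction

def clause_sat_prob(clause, fixed):
    """Probability that the clause is satisfied when the variables in
    `fixed` have set values and all remaining variables are uniform."""
    unset = 0
    for lit in clause:
        var = abs(lit)
        if var in fixed:
            if fixed[var] == (lit > 0):  # a fixed literal already satisfies it
                return Fraction(1)
        else:
            unset += 1
    # The clause fails only if every remaining literal comes up false.
    return 1 - Fraction(1, 2 ** unset) if unset else Fraction(0)

def conditional_expectations(clauses, num_vars):
    """Fix variables one by one, never letting the expected number of
    satisfied clauses decrease; for an exactly-k-literal instance with m
    clauses the final count is therefore at least (1 - 2**-k) * m."""
    fixed = {}
    for var in range(1, num_vars + 1):
        fixed[var] = max([True, False], key=lambda v: sum(
            clause_sat_prob(c, {**fixed, var: v}) for c in clauses))
    return fixed

# Toy exactly-3-literal instance: the guarantee is ceil(7/8 * 4) = 4,
# and the greedy choices indeed end up satisfying all four clauses.
clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, -3]]
assignment = conditional_expectations(clauses, 3)
print(assignment, sum(clause_sat_prob(c, assignment) == 1 for c in clauses))
```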

It has been proved that random instances of MAX-3SAT can be approximated to within factor 9/8. [12]

Related Research Articles

Interactive proof system

In computational complexity theory, an interactive proof system is an abstract machine that models computation as the exchange of messages between two parties: a prover and a verifier. The parties interact by exchanging messages in order to ascertain whether a given string belongs to a language or not. The prover possesses unlimited computational resources but cannot be trusted, while the verifier has bounded computation power but is assumed to be always honest. Messages are sent between the verifier and prover until the verifier has an answer to the problem and has "convinced" itself that it is correct.

In computer science, 2-satisfiability, 2-SAT or just 2SAT is a computational problem of assigning values to variables, each of which has two possible values, in order to satisfy a system of constraints on pairs of variables. It is a special case of the general Boolean satisfiability problem, which can involve constraints on more than two variables, and of constraint satisfaction problems, which can allow more than two choices for the value of each variable. But in contrast to those more general problems, which are NP-complete, 2-satisfiability can be solved in polynomial time.

In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof, as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way.

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for scheduling on unrelated parallel machines.

In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.

In computational complexity theory, the class APX is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant. In simple terms, problems in this class have efficient algorithms that can find an answer within some fixed multiplicative factor of the optimal answer.

In computational complexity theory, the unique games conjecture is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time, but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.

In computational complexity theory, the PCP theorem states that every decision problem in the NP complexity class has probabilistically checkable proofs of constant query complexity and logarithmic randomness complexity.

The Karloff–Zwick algorithm, in computational complexity theory, is a randomised approximation algorithm taking an instance of the MAX-3SAT Boolean satisfiability problem as input. If the instance is satisfiable, then the expected weight of the assignment found is at least 7/8 of optimal. There is strong evidence that the algorithm achieves 7/8 of optimal even on unsatisfiable MAX-3SAT instances. Howard Karloff and Uri Zwick presented the algorithm in 1997.

In computational complexity theory, the maximum satisfiability problem (MAX-SAT) is the problem of determining the maximum number of clauses, of a given Boolean formula in conjunctive normal form, that can be made true by an assignment of truth values to the variables of the formula. It is a generalization of the Boolean satisfiability problem, which asks whether there exists a truth assignment that makes all clauses true.

In computational complexity theory, a branch of computer science, Schaefer's dichotomy theorem, proved by Thomas Jerome Schaefer, states necessary and sufficient conditions under which a finite set S of relations over the Boolean domain yields polynomial-time or NP-complete problems when the relations of S are used to constrain some of the propositional variables. It is called a dichotomy theorem because the complexity of the problem defined by S is either in P or is NP-complete, as opposed to one of the classes of intermediate complexity that is known to exist by Ladner's theorem.

In computational complexity theory, SNP is a complexity class containing a limited subset of NP based on its logical characterization in terms of graph-theoretical properties. It forms the basis for the definition of the class MaxSNP of optimization problems.

Maximum cut

In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem.

3-dimensional matching

In the mathematical discipline of graph theory, a 3-dimensional matching is a generalization of bipartite matching to 3-partite hypergraphs, which consist of hyperedges each of which contains 3 vertices.

In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence.

MAXEkSAT is a problem in computational complexity theory that is a maximization version of the Boolean satisfiability problem 3SAT. In MAXEkSAT, the input is a formula in conjunctive normal form in which each clause has exactly k literals over distinct variables; such formulas are called k-CNF formulas. The problem is to determine the maximum number of clauses that can be satisfied by a truth assignment to the variables in the clauses.

In computational complexity theory, a branch of computer science, the Max/min CSP/Ones classification theorems state necessary and sufficient conditions that determine the complexity classes of problems about satisfying a subset S of boolean relations. They are similar to Schaefer's dichotomy theorem, which classifies the complexity of satisfying finite sets of relations; however, the Max/min CSP/Ones classification theorems give information about the complexity of approximating an optimal solution to a problem defined by S.

In computational complexity theory, a gap reduction is a reduction to a particular type of decision problem, known as a c-gap problem. Such reductions provide information about the hardness of approximating solutions to optimization problems. In short, a gap problem refers to one wherein the objective is to distinguish between cases where the best solution is above one threshold from cases where the best solution is below another threshold, such that the two thresholds have a gap in between. Gap reductions can be used to demonstrate inapproximability results, as if a problem may be approximated to a better factor than the size of gap, then the approximation algorithm can be used to solve the corresponding gap problem.

In computational complexity, not-all-equal 3-satisfiability (NAE3SAT) is an NP-complete variant of the Boolean satisfiability problem, often used in proofs of NP-completeness.

In quantum information theory, the no low-energy trivial state (NLTS) conjecture is a precursor to a quantum PCP theorem (qPCP) and posits the existence of families of Hamiltonians with all low-energy states of non-trivial complexity. It was formulated by Michael Freedman and Matthew Hastings in 2013. An NLTS proof would be a consequence of one aspect of qPCP problems – the inability to certify an approximation of local Hamiltonians via NP completeness. In other words, an NLTS proof would be one consequence of the QMA complexity of qPCP problems. On a high level, if proved, NLTS would be one property of the non-Newtonian complexity of quantum computation. NLTS and qPCP conjectures posit the near-infinite complexity involved in predicting the outcome of quantum systems with many interacting states. These calculations of complexity would have implications for quantum computing such as the stability of entangled states at higher temperatures, and the occurrence of entanglement in natural systems. A proof of the NLTS conjecture has been published as a preprint.

References

  1. Sivakumar, D. (2002). "Algorithmic derandomization via complexity theory". Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, pp. 619–626. doi:10.1145/509907.509996. ISBN 1581134959.
  2. Håstad, Johan (2001). "Some optimal inapproximability results". Journal of the ACM 48 (4): 798–859. doi:10.1145/502090.502098.
  3. Papadimitriou, Christos; Yannakakis, Mihalis (1988). "Optimization, approximation, and complexity classes". Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, pp. 229–234.
  4. Rudich et al. (2004). Computational Complexity Theory. IAS/Park City Mathematics Series, p. 108. ISBN 0-8218-2872-X.
  5. Arora, Sanjeev (1994). Probabilistic Checking of Proofs and Hardness of Approximation Problems. Revised version of a dissertation submitted at the CS Division, UC Berkeley, August 1994. CS-TR-476-94. Section 7.2.
  6. Ausiello, G.; Crescenzi, P.; Gambosi, G.; Kann, V.; Marchetti Spaccamela, A.; Protasi, M. (1999). Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties. Springer-Verlag, Berlin. Section 8.4.
  7. Trevisan, Luca (2001). "Non-approximability results for optimization problems on bounded degree instances". Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing (STOC '01), pp. 453–461. doi:10.1145/380752.380839.
  8. Berman, Piotr; Karpinski, Marek (1999). "On some tighter inapproximability results". Proc. ICALP 1999, pp. 200–209.
  9. Berman, P.; Karpinski, M. (2003). "Improved Approximation Lower Bounds on Small Occurrence Optimization". ECCC TR03-008.
  10. Berman, P.; Karpinski, M.; Scott, A. D. (2003). "Approximation Hardness and Satisfiability of Bounded Occurrence Instances of SAT". ECCC TR03-022.
  11. Berman, P.; Karpinski, M.; Scott, A. D. (2003). "Approximation Hardness of Short Symmetric Instances of MAX-3SAT". ECCC TR03-049.
  12. de la Vega, W. F.; Karpinski, M. (2002). "9/8-Approximation Algorithm for Random MAX-3SAT". ECCC TR02-070; RAIRO – Operations Research 41 (2007), pp. 95–107.

Lecture notes from the University of California, Berkeley
Coding theory notes at the University at Buffalo