Unique games conjecture

Unsolved problem in computer science:
Is the Unique Games Conjecture true?

In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. [1] [2] [3] The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, is NP-hard. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, [4] then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines.

The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not. [1]

Formulations

The unique games conjecture can be stated in a number of equivalent ways.

Unique label cover

The following formulation of the unique games conjecture is often used in hardness of approximation. The conjecture postulates the NP-hardness of the following promise problem, known as label cover with unique constraints. For each edge, the colours of the two endpoints are restricted to certain ordered pairs. The constraints are unique in the sense that, for each edge, no two of the allowed ordered pairs agree on the colour of the same vertex; equivalently, each colour of one endpoint is compatible with exactly one colour of the other endpoint.

This means that an instance of label cover with unique constraints over an alphabet of size k can be represented as a directed graph G together with a collection of permutations πe: [k] → [k], one for each edge e of the graph. An assignment to a label cover instance gives each vertex of G a value in the set [k] = {1, 2, ..., k}, often called "colours".

Such instances are strongly constrained in the sense that the colour of a vertex uniquely determines the colours of its neighbours, and hence of its entire connected component. Thus, if the input instance admits a valid assignment, then such an assignment can be found efficiently by iterating over the k colours of a single vertex in each connected component. In particular, the problem of deciding whether a given instance admits a satisfying assignment can be solved in polynomial time, as in the sketch below.
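To make this concrete, here is a minimal Python sketch of the propagation argument (ours, not from the literature; the instance encoding as an edge list with permutation dictionaries is illustrative):

    # Minimal sketch (illustrative encoding): decide whether a unique label
    # cover instance is fully satisfiable by trying each colour for one
    # vertex per connected component and propagating through the constraints.

    def satisfiable(num_vertices, k, edges):
        """edges: list of (u, v, pi), where pi maps each colour of u to the
        unique colour of v allowed by the constraint on the edge (u, v)."""
        adj = {u: [] for u in range(num_vertices)}
        for u, v, pi in edges:
            inv = {b: a for a, b in pi.items()}  # pi is a bijection
            adj[u].append((v, pi))
            adj[v].append((u, inv))

        colour = {}
        for start in range(num_vertices):
            if start in colour:
                continue
            # The colour of `start` fixes its whole component, so try all k.
            for c in range(k):
                trial, stack, ok = {start: c}, [start], True
                while stack and ok:
                    u = stack.pop()
                    for v, pi in adj[u]:
                        forced = pi[trial[u]]
                        if v not in trial:
                            trial[v] = forced
                            stack.append(v)
                        elif trial[v] != forced:
                            ok = False  # contradiction: this colour fails
                            break
                if ok:
                    colour.update(trial)
                    break
            else:
                return False  # no colour of `start` works: unsatisfiable
        return True

    # Example: a 3-cycle whose constraints are all the identity is satisfiable.
    ident = {c: c for c in range(3)}
    print(satisfiable(3, 3, [(0, 1, ident), (1, 2, ident), (2, 0, ident)]))  # True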

The value of a unique label cover instance is the largest fraction of constraints that can be satisfied by any assignment. For satisfiable instances, this value is 1 and is easy to find. On the other hand, it seems to be very difficult to determine the value of an unsatisfiable game, even approximately. The unique games conjecture formalises this difficulty.

More formally, the (c, s)-gap label-cover problem with unique constraints is the following promise problem (Lyes, Lno):

Lyes = {G : some assignment satisfies at least a fraction c of the constraints of G}
Lno = {G : every assignment satisfies at most a fraction s of the constraints of G}

where G is an instance of the label cover problem with unique constraints.

The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap label-cover problem with unique constraints over an alphabet of size k is NP-hard.
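In symbols, writing val(G) for the value of an instance G, the conjecture can be rendered as follows (a standard shorthand; the notation is ours, not the article's):

    \forall \varepsilon, \delta > 0 \;\; \exists\, k = k(\varepsilon, \delta) \in \mathbb{N}:\quad
    \bigl(\{G : \mathrm{val}(G) \ge 1 - \delta\},\ \{G : \mathrm{val}(G) \le \varepsilon\}\bigr)
    \ \text{is NP-hard over the alphabet } [k].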

Instead of graphs, the label cover problem can be formulated in terms of linear equations. For example, suppose that we have a system of linear equations over the integers modulo 7:

    x1 ≡ 2·x2 (mod 7)
    ⋮

This is an instance of the label cover problem with unique constraints. For example, the first equation corresponds to the permutation π(1, 2) of the edge (1, 2), defined by π(1, 2)(x2) = 2·x2 modulo 7: it maps each colour of x2 to the unique colour of x1 consistent with the equation.
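Concretely, the following sketch (ours; colours taken as residues 0, ..., 6 rather than {1, ..., 7}) builds that permutation and checks that it is a bijection, which is exactly the uniqueness property:

    # Illustrative: the constraint x1 ≡ 2·x2 (mod 7) as a permutation constraint.
    k = 7
    pi = {x2: (2 * x2) % k for x2 in range(k)}  # colour of x2 -> forced colour of x1

    # 2 is invertible modulo 7, so the map is a bijection on the colour set:
    # each colour of x2 is compatible with exactly one colour of x1.
    assert sorted(pi.values()) == list(range(k))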

Two-prover proof systems

A unique game is a special case of a two-prover one-round (2P1R) game. A two-prover one-round game has two players (also known as provers) and a referee. The referee sends each player a question drawn from a known probability distribution, and the players each have to send an answer. The answers come from a set of fixed size. The game is specified by a predicate that depends on the questions sent to the players and the answers provided by them.

The players may decide on a strategy beforehand, although they cannot communicate with each other during the game. The players win if the predicate is satisfied by their questions and their answers.

A two-prover one-round game is called a unique game if for every question and every answer by the first player, there is exactly one answer by the second player that results in a win for the players, and vice versa. The value of a game is the maximum winning probability for the players over all strategies.
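The following brute-force sketch (ours; exponential time, so suitable only for tiny games) computes this value by enumerating deterministic strategies, which always attain the maximum since the winning probability is linear in each player's strategy:

    from itertools import product

    def game_value(rounds, answers):
        """Value of a one-round two-prover game, by brute force.

        rounds: list of (q1, q2, prob, accept) where accept(a1, a2) -> bool.
        answers: the common finite answer set.
        """
        q1s = sorted({q1 for q1, _, _, _ in rounds})
        q2s = sorted({q2 for _, q2, _, _ in rounds})
        best = 0.0
        # A deterministic strategy assigns one answer to each question.
        for s1 in product(answers, repeat=len(q1s)):
            f1 = dict(zip(q1s, s1))
            for s2 in product(answers, repeat=len(q2s)):
                f2 = dict(zip(q2s, s2))
                best = max(best, sum(p for q1, q2, p, acc in rounds
                                     if acc(f1[q1], f2[q2])))
        return best

    # A one-question unique game: the provers win iff their bits differ.
    # For every answer of one prover exactly one answer of the other wins.
    print(game_value([("u", "v", 1.0, lambda a1, a2: a1 != a2)], [0, 1]))  # 1.0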

The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the following promise problem (Lyes, Lno) is NP-hard:

Lyes = {G : the value of G is at least 1 − δ}
Lno = {G : the value of G is at most ε}

where G is a unique game whose answers come from a set of size k.

Probabilistically checkable proofs

Alternatively, the unique games conjecture postulates the existence of a certain type of probabilistically checkable proof for problems in NP.

A unique game can be viewed as a special kind of nonadaptive probabilistically checkable proof with query complexity 2, where for each pair of possible queries of the verifier and each possible answer to the first query, there is exactly one possible answer to the second query that makes the verifier accept, and vice versa.
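A minimal sketch of one round of such a verifier (ours; the proof is read directly as a claimed colouring, so the acceptance probability equals the fraction of satisfied constraints):

    import random

    def unique_verifier_round(edges, proof):
        """One round of a nonadaptive, 2-query verifier for unique label cover.

        edges: list of (u, v, pi) constraints; proof: a claimed colouring.
        """
        u, v, pi = random.choice(edges)  # the verifier's randomness
        a, b = proof[u], proof[v]        # its two queries into the proof
        return pi[a] == b                # `a` fixes the unique accepting `b`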

The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there is a constant k such that every problem in NP has a probabilistically checkable proof over an alphabet of size k, with completeness 1 − δ, soundness ε, and randomness complexity O(log n), which is a unique game.

Relevance

Approximability results assuming P ≠ NP versus the UGC

| Problem              | Poly.-time approx.       | NP hardness        | UG hardness        |
| Max 2SAT             | 0.9401 [5]               | 0.9545 + ε [6]     | 0.9401 + ε [7]     |
| Max cut              | 0.8785 [8]               | 0.9412 + ε [6]     | 0.8785 + ε [9]     |
| Min vertex cover     | 2                        | 1.4142 − ε [10]    | 2 − ε [11]         |
| Feedback arc set     | O(log n log log n) [12]  | 1.3606 − ε [13]    | All constants [14] |
| Max acyclic subgraph | 1/2 [15]                 | 65/66 + ε [16]     | 1/2 + ε [14]       |
| Betweenness          | 1/3                      | 47/48 + ε [17]     | 1/3 + ε [18]       |

"Some very natural, intrinsically interesting statements about things like voting and foams just popped out of studying the UGC.... Even if the UGC turns out to be false, it has inspired a lot of interesting math research."

— Ryan O'Donnell [1]

The unique games conjecture was introduced by Subhash Khot in 2002 in order to make progress on certain questions in the theory of hardness of approximation.

The truth of the unique games conjecture would imply the optimality of many known approximation algorithms (assuming P ≠ NP). For example, the approximation ratio achieved by the algorithm of Goemans and Williamson for approximating the maximum cut in a graph is optimal to within any additive constant assuming the unique games conjecture and P ≠ NP.
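To illustrate the algorithm's structure, here is a hedged sketch of its random-hyperplane rounding step (ours; solving the underlying semidefinite program is omitted, the unit vectors are assumed given, and the toy input hard-codes the optimal SDP vectors for a triangle):

    import numpy as np

    def hyperplane_round(vectors, edges, trials=100, seed=0):
        """Random-hyperplane rounding for max cut, given unit vectors that are
        assumed to come from the Goemans-Williamson SDP relaxation."""
        rng = np.random.default_rng(seed)
        best_cut, best_side = -1, None
        for _ in range(trials):
            r = rng.standard_normal(vectors.shape[1])  # random hyperplane normal
            side = vectors @ r >= 0                    # split vertices by side
            cut = sum(1 for u, v in edges if side[u] != side[v])
            if cut > best_cut:
                best_cut, best_side = cut, side
        return best_cut, best_side

    # Toy input: for a triangle, the optimal SDP vectors sit 120 degrees apart.
    angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    vectors = np.column_stack([np.cos(angles), np.sin(angles)])
    print(hyperplane_round(vectors, [(0, 1), (1, 2), (0, 2)])[0])  # 2 (optimal)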

A list of results that the unique games conjecture is known to imply is shown in the table above, together with the corresponding best results under the weaker assumption P ≠ NP. A constant of c + ε or c − ε means that the result holds for every constant (with respect to the problem size) strictly greater than or less than c, respectively.

Discussion and alternatives

Currently, there is no consensus regarding the truth of the unique games conjecture. Certain stronger forms of the conjecture have been disproved.

A different form of the conjecture postulates that distinguishing the case when the value of a unique game is at least 1 − δ from the case when the value is at most ε is impossible for polynomial-time algorithms (but perhaps not NP-hard). This form of the conjecture would still be useful for applications in hardness of approximation.

The constant δ > 0 in the above formulations of the conjecture is necessary unless P = NP. If the uniqueness requirement is removed, the corresponding statement is known to be true by the parallel repetition theorem, even when δ = 0.

Marek Karpinski and Warren Schudy have constructed linear-time approximation schemes for dense instances of the unique games problem. [19]

In 2008, Prasad Raghavendra showed that if the unique games conjecture is true, then for every constraint satisfaction problem the best approximation ratio is given by a certain simple semidefinite programming relaxation, which in particular can be solved in polynomial time. [20]

In 2010, Prasad Raghavendra and David Steurer defined the Gap-Small-Set Expansion problem, and conjectured that it is NP-hard. This conjecture implies the unique games conjecture. [21] It has also been used to prove strong hardness of approximation results for finding complete bipartite subgraphs. [22]

In 2010, Sanjeev Arora, Boaz Barak and David Steurer found a subexponential time approximation algorithm for the unique games problem. [23]

In 2012, it was shown that distinguishing instances with value at most 3/8 + δ from instances with value at least 1/2 is NP-hard. [24]

In 2018, after a series of papers, a weaker version of the conjecture, called the 2-2 games conjecture, was proven. In a certain sense, this proves "a half" of the original conjecture. [25] [26] This also improves the best known gap for unique label cover: it is NP-hard to distinguish instances with value at most ε from instances with value at least 1/2, for every constant ε > 0. [27]

Notes

  1. Klarreich, Erica (October 6, 2011), "Approximately Hard: The Unique Games Conjecture", Simons Foundation, retrieved 2012-10-29
  2. Lipton, Dick (May 5, 2010), "Unique Games: A Three Act Play", Gödel’s Lost Letter and P=NP, retrieved 2012-10-29
  3. Khot, Subhash (2002), "On the power of unique 2-prover 1-round games", Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, pp. 767–775, doi:10.1145/509907.510017, ISBN   1-58113-495-9, S2CID   207635974
  4. The unique games conjecture is vacuously true if P = NP, as then every problem in NP would also be NP-hard.
  5. Feige, Uriel; Goemans, Michel X. (1995), "Approximating the value of two prover proof systems, with applications to MAX 2SAT and MAX DICUT", Proc. 3rd Israel Symp. Theory of Computing and Systems, IEEE Computer Society Press, pp. 182–189
  6. Håstad, Johan (1999), "Some Optimal Inapproximability Results", Journal of the ACM, 48 (4): 798–859, doi:10.1145/502090.502098, S2CID 5120748.
  7. Brakensiek, Joshua; Huang, Neng; Zwick, Uri (2024). "Tight approximability of MAX 2-SAT and relatives, under UGC". ACM-SIAM Symposium on Discrete Algorithms. arXiv: 2310.12911 .
  8. Goemans, Michel X.; Williamson, David P. (1995), "Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming", Journal of the ACM, 42 (6): 1115–1145, doi: 10.1145/227683.227684 , S2CID   15794408
  9. Khot, Subhash; Kindler, Guy; Mossel, Elchanan; O'Donnell, Ryan (2007), "Optimal inapproximability results for MAX-CUT and other two-variable CSPs?" (PDF), SIAM Journal on Computing , 37 (1): 319–357, doi:10.1137/S0097539705447372, S2CID   2090495
  10. Khot, Subhash; Minzer, Dor; Safra, Muli (2018), "Pseudorandom Sets in Grassmann Graph Have Near-Perfect Expansion", 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pp. 592–601, doi:10.1109/FOCS.2018.00062, ISBN   978-1-5386-4230-6, S2CID   3688775
  11. Khot, Subhash; Regev, Oded (2003), "Vertex cover might be hard to approximate to within 2 − ε", IEEE Conference on Computational Complexity: 379–
  12. Even, G.; Naor, J.; Schieber, B.; Sudan, M. (1998), "Approximating minimum feedback sets and multicuts in directed graphs", Algorithmica , 20 (2): 151–174, doi:10.1007/PL00009191, MR   1484534, S2CID   2437790
  13. Dinur, Irit; Safra, Samuel (2005), "On the hardness of approximating minimum vertex cover" (PDF), Annals of Mathematics , 162 (1): 439–485, doi:10.4007/annals.2005.162.439 , retrieved 2010-03-05.
  14. Guruswami, Venkatesan; Manokaran, Rajsekar; Raghavendra, Prasad (2008), "Beating the Random Ordering is Hard: Inapproximability of Maximum Acyclic Subgraph", 49th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2008, October 25-28, 2008, Philadelphia, PA, USA, pp. 573–582, doi:10.1109/FOCS.2008.51, ISBN 978-0-7695-3436-7, S2CID 8762205
  15. Berger, Bonnie; Shor, Peter W. (1997), "Tight bounds for the maximum acyclic subgraph problem", Journal of Algorithms, 25 (1): 1–18, doi:10.1006/jagm.1997.0864, MR   1474592
  16. Newman, A. (June 2000), Approximating the maximum acyclic subgraph (Master’s thesis), Massachusetts Institute of Technology, as cited by Guruswami, Manokaran & Raghavendra (2008)
  17. Chor, Benny; Sudan, Madhu (1998), "A geometric approach to betweenness", SIAM Journal on Discrete Mathematics , 11 (4): 511–523 (electronic), doi:10.1137/S0895480195296221, MR   1640920 .
  18. Charikar, Moses; Guruswami, Venkatesan; Manokaran, Rajsekar (2009), "Every permutation CSP of arity 3 is approximation resistant", 24th Annual IEEE Conference on Computational Complexity, pp. 62–73, doi:10.1109/CCC.2009.29, ISBN   978-0-7695-3717-7, MR   2932455, S2CID   257225 .
  19. Karpinski, Marek; Schudy, Warren (2009), "Linear time approximation schemes for the Gale-Berlekamp game and related minimization problems", Proceedings of the forty-first annual ACM symposium on Theory of computing, pp. 313–322, arXiv: 0811.3244 , doi:10.1145/1536414.1536458, ISBN   9781605585062, S2CID   6117694
  20. Raghavendra, Prasad (2008), "Optimal algorithms and inapproximability results for every CSP?" (PDF), in Dwork, Cynthia (ed.), Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, Association for Computing Machinery, pp. 245–254, doi:10.1145/1374376.1374414, ISBN   978-1-60558-047-0, S2CID   15075197
  21. Raghavendra, Prasad; Steurer, David (2010), "Graph expansion and the unique games conjecture" (PDF), STOC'10—Proceedings of the 2010 ACM International Symposium on Theory of Computing, Association for Computing Machinery, pp. 755–764, doi:10.1145/1806689.1806792, ISBN   978-1-4503-0050-6, MR   2743325, S2CID   1601199
  22. Manurangsi, Pasin (2017), "Inapproximability of Maximum Edge Biclique, Maximum Balanced Biclique and Minimum k-Cut from the Small Set Expansion Hypothesis", in Chatzigiannakis, Ioannis; Indyk, Piotr; Kuhn, Fabian; Muscholl, Anca (eds.), 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), Leibniz International Proceedings in Informatics (LIPIcs), vol. 80, Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, pp. 79:1–79:14, doi: 10.4230/LIPIcs.ICALP.2017.79 , ISBN   978-3-95977-041-5
  23. Arora, Sanjeev; Barak, Boaz; Steurer, David (2015), "Subexponential algorithms for unique games and related problems", Journal of the ACM , 62 (5): Art. 42, 25, doi:10.1145/2775105, MR   3424199, S2CID   622344 . Previously announced at FOCS 2010.
  24. O'Donnell, Ryan; Wright, John (2012), "A new point of NP-hardness for unique games", Proceedings of the 2012 ACM Symposium on Theory of Computing (STOC'12), New York: ACM, pp. 289–306, doi:10.1145/2213977.2214005, ISBN   978-1-4503-1245-5, MR   2961512, S2CID   6737664 .
  25. Klarreich, Erica (April 24, 2018), "First Big Steps Toward Proving the Unique Games Conjecture", Quanta Magazine
  26. Barak, Boaz (January 10, 2018), "Unique Games Conjecture – halfway there?", Windows On Theory, retrieved 2023-03-15
  27. Khot, Subhash; Minzer, Dor; Safra, M. (October 2018), "Pseudorandom Sets in Grassmann Graph Have Near-Perfect Expansion", 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pp. 592–601, doi:10.1109/FOCS.2018.00062, ISBN   978-1-5386-4230-6, S2CID   3688775
