Quasi-polynomial time

In computational complexity theory and the analysis of algorithms, an algorithm is said to take quasi-polynomial time if its time complexity is quasi-polynomially bounded. That is, there should exist a constant c such that the worst-case running time of the algorithm, on inputs of size n, has an upper bound of the form 2^((log n)^c).
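
To make the growth rate concrete, here is a minimal Python sketch (the function name and sample values are illustrative, not from any source) comparing the quasi-polynomial bound for c = 1 and c = 2 with exponential growth:

```python
import math

def quasi_poly_bound(n: int, c: float) -> float:
    """Evaluate the quasi-polynomial bound 2^((log2 n)^c)."""
    return 2.0 ** (math.log2(n) ** c)

for n in (1 << 10, 1 << 20):
    # c = 1 recovers ordinary polynomial growth: 2^(log2 n) = n.
    poly = quasi_poly_bound(n, 1)
    # c = 2 gives n^(log2 n), which outgrows every fixed polynomial,
    # yet its exponent (log2 n)^2 stays tiny next to the exponent n of 2^n.
    qp = quasi_poly_bound(n, 2)
    print(f"n = {n}: 2^(log n) = {poly:.4g}, 2^((log n)^2) = {qp:.4g}, "
          f"2^n ~ 10^{n * math.log10(2):.0f}")
```

For n = 2^20 this prints roughly 10^6 for the polynomial bound, about 10^120 for the quasi-polynomial bound, and about 10^315653 for 2^n, which is the sense in which quasi-polynomial time sits strictly between polynomial and exponential.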

The decision problems with quasi-polynomial time algorithms are natural candidates for being NP-intermediate: they are not known to be solvable in polynomial time, but are also unlikely to be NP-hard.

Complexity class

The complexity class QP consists of all problems that have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows: [1]

QP = ⋃_{c ∈ ℕ} DTIME(2^((log n)^c))

Examples

An early example of a quasi-polynomial time algorithm was the Adleman–Pomerance–Rumely primality test. [2] However, the problem of testing whether a number is a prime number has subsequently been shown to have a polynomial time algorithm, the AKS primality test. [3]

In some cases, quasi-polynomial time bounds can be proven to be optimal under the exponential time hypothesis or a related computational hardness assumption. For instance, this is true for the following problems:

  - Several algorithmic problems on intersection graphs in hyperbolic space, which have quasi-polynomial time algorithms with matching conditional lower bounds. [4]
  - Finding the smallest induced subgraph that does not occur in a given graph. [5]
  - Finding a dominating set of minimum size in a tournament. [6]
  - Computing the Vapnik–Chervonenkis dimension of a family of sets. [7] [8]
  - Finding the Nash equilibrium of highest total payoff, even approximately. [9]

Other problems for which the best known algorithm takes quasi-polynomial time include:

  - Computational problems of monotone dualization, such as generating the minimal transversals of a hypergraph. [10]
  - Deciding which player wins a parity game. [11] [12]

Problems for which a quasi-polynomial time algorithm has been announced but not fully published include:

  - The graph isomorphism problem. [13]
  - Recognizing whether a knot diagram represents the unknot. [14]

In approximation algorithms

Quasi-polynomial time has also been used to study approximation algorithms. In particular, a quasi-polynomial-time approximation scheme (QPTAS) is a variant of a polynomial-time approximation scheme whose running time is quasi-polynomial rather than polynomial. Problems with a QPTAS include minimum-weight triangulation, [15] finding the maximum clique on the intersection graph of disks, [16] and determining the probability that a hypergraph becomes disconnected when some of its edges fail with given independent probabilities. [17]

More strongly, the problem of finding an approximate Nash equilibrium has a QPTAS, but cannot have a PTAS under the exponential time hypothesis. [18]

Related Research Articles

Clique problem: Task of computing complete subgraphs

In computer science, the clique problem is the computational problem of finding cliques in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found. Common formulations of the clique problem include finding a maximum clique, finding a maximum weight clique in a weighted graph, listing all maximal cliques, and solving the decision problem of testing whether a graph contains a clique larger than a given size.
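
As a deliberately naive illustration of the optimization formulation (a sketch only, with names of our choosing; it runs in exponential time and is meant to make the definition precise, not to be an efficient algorithm):

```python
from itertools import combinations

def is_clique(adj, vertices):
    """True if every two distinct vertices in the given set are adjacent."""
    return all(v in adj[u] for u, v in combinations(vertices, 2))

def maximum_clique(adj):
    """Brute force: adj maps each vertex to the set of its neighbors."""
    for k in range(len(adj), 0, -1):      # try subset sizes from large to small
        for cand in combinations(adj, k):
            if is_clique(adj, cand):
                return set(cand)          # the first hit has maximum size
    return set()

# Triangle {1, 2, 3} plus an isolated vertex 4.
print(maximum_clique({1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}))  # {1, 2, 3}
```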

Time complexity: Estimate of time taken for running an algorithm

In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.

Independent set (graph theory): Unrelated vertices in graphs

In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening.
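
The clique-in-the-complement characterization is easy to check directly; the following sketch (helper names are ours) verifies it on a small example:

```python
from itertools import combinations

def is_independent(adj, s):
    """No two vertices of s are adjacent in the graph described by adj."""
    return all(v not in adj[u] for u, v in combinations(s, 2))

def complement(adj):
    """Complement graph: connect exactly the non-adjacent distinct pairs."""
    verts = set(adj)
    return {u: verts - adj[u] - {u} for u in adj}

adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
s = {1, 3, 4}
co = complement(adj)
assert is_independent(adj, s)                           # independent in G
assert all(v in co[u] for u, v in combinations(s, 2))   # clique in complement of G
```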

Vertex cover: Subset of a graph's vertices, including at least one endpoint of every edge

In graph theory, a vertex cover of a graph is a set of vertices that includes at least one endpoint of every edge of the graph.

In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem.
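
A minimal sketch of that flow viewpoint in practice is the classical augmenting-path algorithm (the Ford–Fulkerson idea specialized to unit capacities); all names below are illustrative:

```python
def max_bipartite_matching(left, adj):
    """left: left-side vertices; adj[u]: iterable of right-side neighbors of u."""
    match_of_right = {}                  # right vertex -> its matched left vertex

    def try_augment(u, visited):
        # Look for an augmenting path starting at the unmatched left vertex u.
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                # Use v if it is free, or if its partner can be re-matched elsewhere.
                if v not in match_of_right or try_augment(match_of_right[v], visited):
                    match_of_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in left)

# Three left vertices, three right vertices, perfect matching of size 3.
print(max_bipartite_matching("abc", {"a": [1, 2], "b": [1], "c": [2, 3]}))  # 3
```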

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).

Dominating set: Subset of a graph's nodes such that all other nodes link to at least one

In graph theory, a dominating set for a graph G is a subset D of its vertices, such that any vertex of G is either in D or has a neighbor in D. The domination number γ(G) is the number of vertices in a smallest dominating set for G.
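
For intuition, a brute-force computation of γ(G) (exponential time; a sketch with names of our choosing) checks vertex subsets in increasing size order:

```python
from itertools import combinations

def domination_number(adj):
    """Smallest k such that some k-subset dominates every vertex of the graph."""
    verts = set(adj)
    for k in range(1, len(verts) + 1):
        for d in combinations(verts, k):
            dominated = set(d).union(*(adj[v] for v in d))
            if dominated == verts:
                return k

# In a 4-cycle, one vertex leaves its opposite undominated, so gamma = 2.
print(domination_number({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))  # 2
```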

Feedback arc set: Edges that hit all cycles in a graph

In graph theory and graph algorithms, a feedback arc set or feedback edge set in a directed graph is a subset of the edges of the graph that contains at least one edge out of every cycle in the graph. Removing these edges from the graph breaks all of the cycles, producing an acyclic subgraph of the given graph, often called a directed acyclic graph. A feedback arc set with the fewest possible edges is a minimum feedback arc set and its removal leaves a maximum acyclic subgraph; weighted versions of these optimization problems are also used. If a feedback arc set is minimal, meaning that removing any edge from it produces a subset that is not a feedback arc set, then it has an additional property: reversing all of its edges, rather than removing them, produces a directed acyclic graph.
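
The reversal property is easy to verify computationally on a small example; the following sketch (all names illustrative) checks acyclicity before and after reversing a minimal feedback arc set:

```python
def has_cycle(edges, n):
    """DFS-based cycle test for a directed graph on vertices 0..n-1."""
    succ = {u: [] for u in range(n)}
    for u, v in edges:
        succ[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = [WHITE] * n

    def dfs(u):
        color[u] = GRAY                   # u is on the current DFS path
        for v in succ[u]:
            if color[v] == GRAY or (color[v] == WHITE and dfs(v)):
                return True               # back edge found: a cycle exists
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in range(n))

# Cycles 0 -> 1 -> 2 -> 0 and 0 -> 2 -> 0; the single edge (2, 0) hits both.
edges = {(0, 1), (1, 2), (2, 0), (0, 2)}
fas = {(2, 0)}                            # a minimal feedback arc set
reversed_graph = (edges - fas) | {(v, u) for u, v in fas}
print(has_cycle(edges, 3), has_cycle(reversed_graph, 3))  # True False
```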

In computational complexity theory, a computational hardness assumption is the hypothesis that a particular problem cannot be solved efficiently. It is not known how to prove (unconditional) hardness for essentially any useful problem. Instead, computer scientists rely on reductions to formally relate the hardness of a new or complicated problem to a computational hardness assumption about a problem that is better-understood.

Triangle-free graph: Graph without triples of adjacent vertices

In the mathematical area of graph theory, a triangle-free graph is an undirected graph in which no three vertices form a triangle of edges. Triangle-free graphs may be equivalently defined as graphs with clique number ≤ 2, graphs with girth ≥ 4, graphs with no induced 3-cycle, or locally independent graphs.

In graph theory and theoretical computer science, the longest path problem is the problem of finding a simple path of maximum length in a given graph. A path is called simple if it does not have any repeated vertices; the length of a path may either be measured by its number of edges, or by the sum of the weights of its edges. In contrast to the shortest path problem, which can be solved in polynomial time in graphs without negative-weight cycles, the longest path problem is NP-hard and the decision version of the problem, which asks whether a path exists of at least some given length, is NP-complete. This means that the decision problem cannot be solved in polynomial time for arbitrary graphs unless P = NP. Stronger hardness results are also known showing that it is difficult to approximate. However, it has a linear time solution for directed acyclic graphs, which has important applications in finding the critical path in scheduling problems.
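
A minimal sketch of that linear-time method for directed acyclic graphs (using Python's standard graphlib module; the other names are ours) relaxes edges in topological order:

```python
from graphlib import TopologicalSorter

def dag_longest_path_length(succ):
    """Longest path (counted in edges) in a DAG; succ[u] lists the successors of u."""
    preds = {u: set() for u in succ}
    for u, vs in succ.items():
        for v in vs:
            preds[v].add(u)
    dist = {u: 0 for u in succ}           # longest path ending at each vertex
    for u in TopologicalSorter(preds).static_order():
        for v in succ[u]:                 # dist[u] is final: all preds of u are done
            dist[v] = max(dist[v], dist[u] + 1)
    return max(dist.values())

# The longest path 0 -> 1 -> 2 -> 3 has three edges.
print(dag_longest_path_length({0: [1, 2], 1: [2], 2: [3], 3: []}))  # 3
```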

In graph theory, the planar separator theorem is a form of isoperimetric inequality for planar graphs, that states that any planar graph can be split into smaller pieces by removing a small number of vertices. Specifically, the removal of O(√n) vertices from an n-vertex graph can partition the graph into disjoint subgraphs each of which has at most 2n/3 vertices.

Maximum cut: Problem of finding a maximum cut in a graph

In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem.

In combinatorial optimization, the matroid intersection problem is to find a largest common independent set in two matroids over the same ground set. If the elements of the matroid are assigned real weights, the weighted matroid intersection problem is to find a common independent set with the maximum possible weight. These problems generalize many problems in combinatorial optimization including finding maximum matchings and maximum weight matchings in bipartite graphs and finding arborescences in directed graphs.

In computational complexity theory, the exponential time hypothesis is an unproven computational hardness assumption that was formulated by Impagliazzo & Paturi (1999). It states that satisfiability of 3-CNF Boolean formulas cannot be solved in subexponential time, 2^(o(n)), where n is the number of variables. More precisely, the usual form of the hypothesis asserts the existence of a number s > 0 such that all algorithms that correctly solve this problem require time at least 2^(sn). The exponential time hypothesis, if true, would imply that P ≠ NP, but it is a stronger statement. It implies that many computational problems are equivalent in complexity, in the sense that if one of them has a subexponential time algorithm then they all do, and that many known algorithms for these problems have optimal or near-optimal time complexity.

In the mathematical field of graph theory, the intersection number of a graph G is the smallest number of elements in a representation of G as an intersection graph of finite sets. In such a representation, each vertex is represented as a set, and two vertices are connected by an edge whenever their sets have a common element. Equivalently, the intersection number is the smallest number of cliques needed to cover all of the edges of G.

Dense subgraph: Highly connected subgraph

In graph theory and computer science, a dense subgraph is a subgraph with many edges per vertex. This is formalized as follows: let G = (V, E) be an undirected graph and let S = (VS, ES) be a subgraph of G. Then the density of S is defined to be d(S) = |ES| / |VS|, the ratio of its number of edges to its number of vertices.

Planted clique: Complete subgraph added to a random graph

In computational complexity theory, a planted clique or hidden clique in an undirected graph is a clique formed from another graph by selecting a subset of vertices and adding edges between each pair of vertices in the subset. The planted clique problem is the algorithmic problem of distinguishing random graphs from graphs that have a planted clique. This is a variation of the clique problem; it may be solved in quasi-polynomial time but is conjectured not to be solvable in polynomial time for intermediate values of the clique size. The conjecture that no polynomial time solution exists is called the planted clique conjecture; it has been used as a computational hardness assumption.
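
The planting step itself is straightforward; this sketch (parameter names ours) builds an Erdős–Rényi random graph G(n, 1/2) and forces a chosen k-subset of vertices to form a clique:

```python
import random
from itertools import combinations

def planted_clique_graph(n, k, seed=0):
    """Return (edge set, planted vertex set) for G(n, 1/2) with a planted k-clique."""
    rng = random.Random(seed)
    # Each of the n*(n-1)/2 possible edges appears independently with probability 1/2.
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < 0.5}
    clique = set(rng.sample(range(n), k))
    # Planting: add every edge between the k chosen vertices.
    edges |= {frozenset(e) for e in combinations(clique, 2)}
    return edges, clique

edges, clique = planted_clique_graph(n=50, k=10)
assert all(frozenset(e) in edges for e in combinations(clique, 2))
```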

Matroid parity problem: Largest independent set of paired elements

In combinatorial optimization, the matroid parity problem is a problem of finding the largest independent set of paired elements in a matroid. The problem was formulated by Lawler (1976) as a common generalization of graph matching and matroid intersection. It is also known as polymatroid matching, or the matchoid problem.

The small set expansion hypothesis or small set expansion conjecture in computational complexity theory is an unproven computational hardness assumption. Under the small set expansion hypothesis it is assumed to be computationally infeasible to distinguish between a certain class of expander graphs called "small set expanders" and other graphs that are very far from being small set expanders. This assumption implies the hardness of several other computational problems, and the optimality of certain known approximation algorithms.

References

  1. Complexity Zoo: Class QP: Quasipolynomial-Time
  2. Adleman, Leonard M.; Pomerance, Carl; Rumely, Robert S. (1983), "On distinguishing prime numbers from composite numbers", Annals of Mathematics, 117 (1): 173–206, doi:10.2307/2006975, JSTOR 2006975
  3. Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin (2004), "PRIMES is in P" (PDF), Annals of Mathematics, 160 (2): 781–793, doi:10.4007/annals.2004.160.781, JSTOR 3597229
  4. Kisfaludi-Bak, Sándor (2020), "Hyperbolic intersection graphs and (quasi)-polynomial time", in Chawla, Shuchi (ed.), Proceedings of the 31st Annual ACM–SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5–8, 2020, pp. 1621–1638, arXiv:1812.03960, doi:10.1137/1.9781611975994.100, ISBN 978-1-61197-599-4
  5. Eppstein, David; Lincoln, Andrea; Williams, Virginia Vassilevska (2023), "Quasipolynomiality of the smallest missing induced subgraph", Journal of Graph Algorithms and Applications, 27 (5): 329–339, arXiv:2306.11185, doi:10.7155/jgaa.00625
  6. Megiddo, Nimrod; Vishkin, Uzi (1988), "On finding a minimum dominating set in a tournament", Theoretical Computer Science, 61 (2–3): 307–316, doi:10.1016/0304-3975(88)90131-4, MR 0980249. This paper predates the formulation of the exponential time hypothesis, but proves that a solution to the minimum dominating set in a tournament could be used to solve Boolean satisfiability for formulas whose number of variables is polylogarithmic in the number of clauses, which requires time exponential in the number of variables according to the exponential time hypothesis.
  7. Papadimitriou, Christos H.; Yannakakis, Mihalis (1996), "On limited nondeterminism and the complexity of the V-C dimension", Journal of Computer and System Sciences, 53 (2): 161–170, doi:10.1006/jcss.1996.0058, MR 1418886
  8. Manurangsi, Pasin (2023), "Improved inapproximability of VC dimension and Littlestone's dimension via (unbalanced) biclique", in Kalai, Yael Tauman (ed.), 14th Innovations in Theoretical Computer Science Conference, ITCS 2023, January 10–13, 2023, MIT, Cambridge, Massachusetts, USA, LIPIcs, vol. 251, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, pp. 85:1–85:18, arXiv:2211.01443, doi:10.4230/LIPIcs.ITCS.2023.85
  9. Hazan, Elad; Krauthgamer, Robert (2011), "How hard is it to approximate the best Nash equilibrium?", SIAM Journal on Computing, 40 (1): 79–91, CiteSeerX 10.1.1.511.4422, doi:10.1137/090766991, MR 2765712
  10. Eiter, Thomas; Makino, Kazuhisa; Gottlob, Georg (2008), "Computational aspects of monotone dualization: a brief survey", Discrete Applied Mathematics, 156 (11): 2035–2049, doi:10.1016/j.dam.2007.04.017, MR 2437000
  11. Calude, Cristian S.; Jain, Sanjay; Khoussainov, Bakhadyr; Li, Wei; Stephan, Frank (2022), "Deciding parity games in quasi-polynomial time", SIAM Journal on Computing, 51 (2): STOC17-152–STOC17-188, doi:10.1137/17M1145288, hdl:2292/31757, MR 4413072
  12. IPEC Nerode Prize, EATCS, retrieved 2023-12-03
  13. Klarreich, Erica (January 14, 2017), "Graph isomorphism vanquished — again", Quanta Magazine
  14. Marc Lackenby announces a new unknot recognition algorithm that runs in quasi-polynomial time, Mathematical Institute, University of Oxford, 2021-02-03, retrieved 2021-02-03
  15. Remy, Jan; Steger, Angelika (2009), "A quasi-polynomial time approximation scheme for minimum weight triangulation", Journal of the ACM, 56 (3), Article A15, doi:10.1145/1516512.1516517
  16. Bonnet, Édouard; Giannopoulos, Panos; Kim, Eun Jung; Rzazewski, Pawel; Sikora, Florian (2018), "QPTAS and subexponential algorithm for maximum clique on disk graphs", in Speckmann, Bettina; Tóth, Csaba D. (eds.), 34th International Symposium on Computational Geometry, SoCG 2018, June 11–14, 2018, Budapest, Hungary, LIPIcs, vol. 99, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, pp. 12:1–12:15, doi:10.4230/LIPICS.SOCG.2018.12
  17. Cen, Ruoxu; Li, Jason; Panigrahi, Debmalya (2024), "Hypergraph unreliability in quasi-polynomial time", in Mohar, Bojan; Shinkar, Igor; O'Donnell, Ryan (eds.), Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24–28, 2024, ACM, pp. 1700–1711, arXiv:2403.18781, doi:10.1145/3618260.3649753
  18. Braverman, Mark; Kun-Ko, Young; Weinstein, Omri (2015), "Approximating the best Nash equilibrium in n^(o(log n))-time breaks the Exponential Time Hypothesis", in Indyk, Piotr (ed.), Proceedings of the 26th Annual ACM–SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4–6, 2015, pp. 970–982, doi:10.1137/1.9781611973730.66