Holographic algorithm

In computer science, a holographic algorithm is an algorithm that uses a holographic reduction. A holographic reduction is a constant-time reduction that maps solution fragments many-to-many such that the sum of the solution fragments remains unchanged. These concepts were introduced by Leslie Valiant, who called them holographic because "their effect can be viewed as that of producing interference patterns among the solution fragments". [1] The algorithms are unrelated to laser holography, except metaphorically. Their power comes from the mutual cancellation of many contributions to a sum, analogous to the interference patterns in a hologram. [2]

Holographic algorithms have been used to find polynomial-time solutions to problems without such previously known solutions for special cases of satisfiability, vertex cover, and other graph problems. [3] They have received notable coverage due to speculation that they are relevant to the P versus NP problem [2] and their impact on computational complexity theory. Although some of the general problems are #P-hard problems, the special cases solved are not themselves #P-hard, and thus do not prove FP = #P.

Holographic algorithms have some similarities with quantum computation, but are completely classical. [4]

Holant problems

Holographic algorithms exist in the context of Holant problems, which generalize counting constraint satisfaction problems (#CSP). A #CSP instance is a hypergraph G=(V,E) called the constraint graph. Each hyperedge represents a variable and each vertex $v$ is assigned a constraint $f_v$. A vertex is connected to a hyperedge if the constraint on the vertex involves the variable on the hyperedge. The counting problem is to compute

$$\sum_{\sigma} \prod_{v \in V} f_v\!\left(\sigma|_{E(v)}\right), \qquad (1)$$

which is a sum, over all assignments $\sigma$ of values to the variables, of the product of every constraint, where the inputs to the constraint $f_v$ are the variables on the hyperedges $E(v)$ incident to $v$.
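As a concrete illustration of the sum in (1), the following brute-force sketch (illustrative code only, not taken from the cited papers; all names are hypothetical) evaluates this quantity for a small Boolean instance by enumerating every assignment to the variables and multiplying the constraint values at the vertices.

```python
from itertools import product

# Brute-force evaluation of the sum in (1) for a small Boolean instance.
# Variables live on (hyper)edges; each vertex contributes one constraint
# factor, read off the variables incident to it. (Illustrative sketch.)

def holant_value(num_vars, vertex_constraints):
    """vertex_constraints: list of (constraint_fn, incident_variable_indices)."""
    total = 0
    for sigma in product([0, 1], repeat=num_vars):    # all variable assignments
        term = 1
        for f, scope in vertex_constraints:
            term *= f(*(sigma[i] for i in scope))      # one factor per vertex
        total += term
    return total

# Example: constraints OR(x0, x1) and EQUAL(x1, x2) over three variables.
OR2 = lambda a, b: int(a or b)
EQ2 = lambda a, b: int(a == b)
print(holant_value(3, [(OR2, (0, 1)), (EQ2, (1, 2))]))  # 3 satisfying assignments
```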

A Holant problem is like a #CSP except the input must be a graph, not a hypergraph. Although restricting the inputs in this way looks like a specialization, Holant problems in fact generalize #CSP. Given a #CSP instance, replace each hyperedge e of size s with a vertex v of degree s with edges incident to the vertices contained in e. The constraint on v is the equality function of arity s. This identifies all of the variables on the edges incident to v, which has the same effect as the single variable on the hyperedge e.
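A small sanity check of this construction, again as an illustrative sketch with hypothetical names: a #CSP instance with two binary constraints sharing both variables is converted into a Holant instance in which each variable becomes a degree-2 equality vertex, and the two counts agree.

```python
from itertools import product
from math import prod

OR2  = lambda a, b: int(a or b)
XOR2 = lambda a, b: int(a != b)
EQ2  = lambda a, b: int(a == b)

def count(num_vars, constraints):
    """Sum over all 0/1 assignments of the product of all constraint factors."""
    return sum(prod(f(*(s[i] for i in scope)) for f, scope in constraints)
               for s in product([0, 1], repeat=num_vars))

# #CSP instance: variables x, y; constraints OR(x, y) and XOR(x, y).
csp = count(2, [(OR2, (0, 1)), (XOR2, (0, 1))])

# Holant instance after the conversion: each variable becomes an EQUAL vertex;
# the Holant variables are the four edges e1..e4 joining constraint vertices
# to equality vertices.
hol = count(4, [(OR2, (0, 1)),    # C1 reads edges e1, e2
                (XOR2, (2, 3)),   # C2 reads edges e3, e4
                (EQ2, (0, 2)),    # equality vertex for x reads e1, e3
                (EQ2, (1, 3))])   # equality vertex for y reads e2, e4

print(csp, hol)  # both are 2
```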

In the context of Holant problems, the expression in (1) is called the Holant after a related exponential sum introduced by Valiant. [5]

Holographic reduction

A standard technique in complexity theory is a many-one reduction, where an instance of one problem is reduced to an instance of another (hopefully simpler) problem. However, holographic reductions between two computational problems preserve the sum of solutions without necessarily preserving correspondences between individual solutions. [1] For instance, the two instances can have the same total number of solutions even though there is no one-to-one correspondence between their solutions. The sum can also be weighted, rather than simply counting the number of solutions, using linear basis vectors. [3]

General example

It is convenient to consider holographic reductions on bipartite graphs. A general graph can always be transformed into a bipartite graph while preserving the Holant value. This is done by replacing each edge in the graph by a path of length 2, which is also known as the 2-stretch of the graph. To keep the same Holant value, each new vertex is assigned the binary equality constraint.
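A minimal sketch of the 2-stretch construction (a hypothetical helper, for illustration only): each edge is subdivided by a new vertex, which would then be assigned the binary equality constraint.

```python
# Sketch: 2-stretch of a graph. Each edge (a, b) is replaced by a path
# a - w - b through a new vertex w, and w receives the EQUAL_2 constraint.

def two_stretch(edges):
    """Return (new_edges, equality_vertices) for the 2-stretch of a graph."""
    new_edges, eq_vertices = [], []
    for i, (a, b) in enumerate(edges):
        w = f"e{i}"               # new vertex subdividing edge (a, b)
        eq_vertices.append(w)     # w is assigned the binary equality constraint
        new_edges += [(a, w), (w, b)]
    return new_edges, eq_vertices

print(two_stretch([(0, 1), (1, 2), (2, 0)]))  # 2-stretch of a triangle
```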

Consider a bipartite graph $G=(U,V,E)$ where the constraint assigned to every vertex $u \in U$ is $f_u$ and the constraint assigned to every vertex $v \in V$ is $f_v$. Denote this counting problem by $\operatorname{Holant}(f_u \mid f_v)$. If the vertices in $U$ are viewed as one large vertex of degree $|E|$, then the constraint of this vertex is the tensor product of $f_u$ with itself $|U|$ times, which is denoted by $f_u^{\otimes |U|}$. Likewise, if the vertices in $V$ are viewed as one large vertex of degree $|E|$, then the constraint of this vertex is $f_v^{\otimes |V|}$. Let the constraint $f_u$ be represented by its weighted truth table as a row vector and the constraint $f_v$ be represented by its weighted truth table as a column vector. Then the Holant of this constraint graph is simply $f_u^{\otimes |U|} f_v^{\otimes |V|}$.

Now for any complex 2-by-2 invertible matrix $T$ (the columns of which are the linear basis vectors mentioned above), there is a holographic reduction between $\operatorname{Holant}(f_u \mid f_v)$ and $\operatorname{Holant}\!\left(f_u T^{\otimes \deg(u)} \mid (T^{-1})^{\otimes \deg(v)} f_v\right)$. To see this, insert the identity matrix $T T^{-1}$ in between $f_u^{\otimes |U|} f_v^{\otimes |V|}$ to get

$$f_u^{\otimes |U|} f_v^{\otimes |V|} = f_u^{\otimes |U|} \left(T T^{-1}\right)^{\otimes |E|} f_v^{\otimes |V|} = \left(f_u T^{\otimes \deg(u)}\right)^{\otimes |U|} \left((T^{-1})^{\otimes \deg(v)} f_v\right)^{\otimes |V|}.$$

Thus, $\operatorname{Holant}(f_u \mid f_v)$ and $\operatorname{Holant}\!\left(f_u T^{\otimes \deg(u)} \mid (T^{-1})^{\otimes \deg(v)} f_v\right)$ have exactly the same Holant value for every constraint graph. They essentially define the same counting problem.
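The identity above can be checked numerically in the simplest setting: a single vertex $u$ and a single vertex $v$ joined by parallel edges, where the Holant reduces to a dot product of the two weighted truth tables and the tensor orderings match trivially. The following sketch (illustrative only) inserts $T T^{-1}$ and confirms that the value is unchanged.

```python
import numpy as np

# One vertex u and one vertex v joined by d = 2 parallel edges, so the Holant
# is the dot product of the weighted truth tables of f_u and f_v.

rng = np.random.default_rng(0)
d = 2
f_u = rng.standard_normal(2 ** d)           # row vector (constraint on u)
f_v = rng.standard_normal(2 ** d)           # column vector (constraint on v)
T = np.array([[1.0, 2.0], [3.0, 5.0]])      # any invertible 2x2 matrix
T_inv = np.linalg.inv(T)

Td = np.kron(T, T)                          # T^{(x)2}
Td_inv = np.kron(T_inv, T_inv)              # (T^{-1})^{(x)2}

lhs = f_u @ f_v
rhs = (f_u @ Td) @ (Td_inv @ f_v)
print(np.isclose(lhs, rhs))                 # True: the Holant value is unchanged
```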

Specific examples

Vertex covers and independent sets

Let G be a graph. There is a 1-to-1 correspondence between the vertex covers of G and the independent sets of G. For any set S of vertices of G, S is a vertex cover in G if and only if the complement of S is an independent set in G. Thus, the number of vertex covers in G is exactly the same as the number of independent sets in G.
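This bijection is easy to confirm by brute force on a small graph; the following sketch (illustrative, hypothetical names) counts both quantities for a 4-cycle.

```python
# Brute-force check that a graph has exactly as many vertex covers as
# independent sets: each subset S is a vertex cover iff its complement is
# an independent set. (Illustrative sketch only.)

def covers_and_independent_sets(n, edges):
    covers = indep = 0
    for mask in range(2 ** n):
        S = {v for v in range(n) if mask >> v & 1}
        if all(a in S or b in S for a, b in edges):          # S covers every edge
            covers += 1
        if all(not (a in S and b in S) for a, b in edges):   # S spans no edge
            indep += 1
    return covers, indep

print(covers_and_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (7, 7)
```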

The equivalence of these two counting problems can also be proved using a holographic reduction. For simplicity, let G be a 3-regular graph. The 2-stretch of G gives a bipartite graph H=(U,V,E), where U corresponds to the edges in G and V corresponds to the vertices in G. The Holant problem that naturally corresponds to counting the number of vertex covers in G is $\operatorname{Holant}(\mathrm{OR}_2 \mid \mathrm{EQUAL}_3)$. The truth table of $\mathrm{OR}_2$ as a row vector is $(0,1,1,1)$. The truth table of $\mathrm{EQUAL}_3$ as a column vector is $(1,0,0,0,0,0,0,1)^{\mathrm T}$. Then under a holographic transformation by $T = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$,

$$\operatorname{Holant}(\mathrm{OR}_2 \mid \mathrm{EQUAL}_3) = \operatorname{Holant}\!\left(\mathrm{OR}_2\, T^{\otimes 2} \mid (T^{-1})^{\otimes 3}\, \mathrm{EQUAL}_3\right) = \operatorname{Holant}(\mathrm{NAND}_2 \mid \mathrm{EQUAL}_3),$$

which is the Holant problem that naturally corresponds to counting the number of independent sets in G.
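The signature calculation can be verified directly: with this $T$, the row vector of $\mathrm{OR}_2$ is mapped to that of $\mathrm{NAND}_2$, while the column vector of $\mathrm{EQUAL}_3$ is fixed. The sketch below (illustrative only) checks both facts with Kronecker products.

```python
import numpy as np

T = np.array([[0, 1], [1, 0]])
T_inv = T                                   # T is an involution: T @ T = I

OR2 = np.array([0, 1, 1, 1])                        # row vector of OR_2
EQ3 = np.array([1, 0, 0, 0, 0, 0, 0, 1])            # column vector of EQUAL_3

print(OR2 @ np.kron(T, T))                          # [1 1 1 0]  = NAND_2
print(np.kron(np.kron(T_inv, T_inv), T_inv) @ EQ3)  # [1 0 0 0 0 0 0 1] = EQUAL_3
```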

History

As with any type of reduction, a holographic reduction does not, by itself, yield a polynomial time algorithm. In order to get a polynomial time algorithm, the problem being reduced to must also have a polynomial time algorithm. Valiant's original application of holographic algorithms used a holographic reduction to a problem where every constraint is realizable by matchgates, [1] which he had just proved is tractable by a further reduction to counting the number of perfect matchings in a planar graph. [6] The latter problem is tractable by the FKT algorithm, which dates to the 1960s.

Soon after, Valiant found holographic algorithms with reductions to matchgates for $\#_7$Pl-Rtw-Mon-3CNF and $\#_7$Pl-3/2Bip-VC. [7] These problems may appear somewhat contrived, especially with respect to the modulus. Both problems were already known to be #P-hard when ignoring the modulus, and Valiant supplied proofs of #P-hardness modulo 2, which also used holographic reductions. Valiant found these two problems by a computer search that looked for problems with holographic reductions to matchgates. He called these algorithms accidental algorithms, saying "when applying the term accidental to an algorithm we intend to point out that the algorithm arises from satisfying an apparently onerous set of constraints." The "onerous" set of constraints in question are polynomial equations that, if satisfied, imply the existence of a holographic reduction to matchgate-realizable constraints.

After several years of developing (what is known as) matchgate signature theory, Jin-Yi Cai and Pinyan Lu were able to explain the existence of Valiant's two accidental algorithms. [3] These two problems are just special cases of two much larger families of problems: $\#_{2^k-1}$Pl-Rtw-Mon-kCNF and $\#_{2^k-1}$Pl-k/2Bip-VC for any positive integer k. The modulus 7 is just the third Mersenne number, and Cai and Lu showed that these types of problems with parameter k can be solved in polynomial time exactly when the modulus is the kth Mersenne number $2^k - 1$, by using holographic reductions to matchgates and the Chinese remainder theorem.

Around the same time, Jin-Yi Cai, Pinyan Lu and Mingji Xia gave the first holographic algorithm that did not reduce to a problem that is tractable by matchgates. [5] Instead, they reduced to a problem that is tractable by Fibonacci gates, which are symmetric constraints whose truth tables satisfy a recurrence relation similar to one that defines the Fibonacci numbers. They also used holographic reductions to prove that certain counting problems are #P-hard. Since then, holographic reductions have been used extensively as ingredients in both polynomial time algorithms and proofs of #P-hardness.

References

  1. Valiant, Leslie (17–19 October 2004). Holographic Algorithms (Extended Abstract). FOCS 2004. Rome, Italy: IEEE Computer Society. pp. 306–315. doi:10.1109/FOCS.2004.34. ISBN 0-7695-2228-9.
  2. Hayes, Brian (January–February 2008). "Accidental Algorithms". American Scientist.
  3. Cai, Jin-Yi; Lu, Pinyan (2011). "Holographic algorithms: From art to science". J. Comput. Syst. Sci. 77 (1): 41–61. doi:10.1016/j.jcss.2010.06.005.
  4. Cai, Jin-Yi (June 2008). "Holographic algorithms: guest column". SIGACT News. 39 (2). New York, NY, USA: ACM: 51–81. doi:10.1145/1388240.1388254. ISSN 0163-5700. S2CID 2298274.
  5. Cai, Jin-Yi; Lu, Pinyan; Xia, Mingji (2008). Holographic Algorithms by Fibonacci Gates and Holographic Reductions for Hardness. FOCS 2008. IEEE Computer Society. pp. 644–653. doi:10.1109/FOCS.2008.34. ISBN 978-0-7695-3436-7.
  6. Valiant, Leslie (2002). "Quantum Circuits That Can Be Simulated Classically in Polynomial Time". SIAM Journal on Computing. 31 (4): 1229–1254. doi:10.1137/S0097539700377025.
  7. Valiant, Leslie G. (2006). Accidental Algorithms. FOCS 2006. IEEE Computer Society. pp. 509–517. doi:10.1109/FOCS.2006.7. ISBN 0-7695-2720-5.