In computational complexity theory and combinatorics, the token reconfiguration problem is a reconfiguration problem on a graph with both an initial and desired state for tokens.
Given a graph G = (V, E), an initial state of tokens is defined by a subset S ⊆ V of the vertices of the graph; let n = |S|. Moving a token from vertex u to vertex v is valid if u and v are joined by a path in G that does not contain any other tokens; note that the distance traveled within the graph is inconsequential, and moving a token across multiple edges sequentially is considered a single move. A desired end state is defined as another subset D ⊆ V with |D| = n. The goal is to minimize the number of valid moves needed to reach the end state from the initial state. [1]
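As an illustration of the move rule, the following Python sketch checks whether a single move is valid by searching for a token-free path; the function name and the adjacency-list representation are assumptions made for this example, not part of the original formulation.

```python
from collections import deque

def is_valid_move(adj, tokens, u, v):
    """Check whether moving the token on u to the empty vertex v is one
    valid move, i.e. whether some path from u to v avoids every other token.

    adj    -- adjacency lists of the graph G (dict: vertex -> iterable of neighbours)
    tokens -- set of currently occupied vertices (must contain u)
    """
    if u not in tokens or v in tokens:
        return False
    blocked = tokens - {u}              # the moving token does not block itself
    seen, queue = {u}, deque([u])
    while queue:                        # breadth-first search avoiding other tokens
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            if y not in seen and y not in blocked:
                seen.add(y)
                queue.append(y)
    return False
```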
The problem is motivated by so-called sliding puzzles, which are in fact a variant of this problem, often restricted to rectangular grid graphs with no holes. The most famous such puzzle, the 15 puzzle, is a variant of this problem on the 4 × 4 grid graph with n = 15, so that exactly one vertex is left unoccupied. One key difference between sliding block puzzles and the token reconfiguration problem is that in the original token reconfiguration problem, the tokens are indistinguishable. As a result, if the graph is connected, the token reconfiguration problem is always solvable; this is not necessarily the case for sliding block puzzles.
Calinescu, Dumitrescu, and Pach have shown several results regarding both the optimization and approximation of this problem on various types of graphs. [2]
First, restricted to trees, there is always a solution using at most n moves, with each token moving at most once. Furthermore, an optimal solution can be found in time linear in the size of the tree. The first result clearly extends to arbitrary graphs (restrict attention to a spanning tree of each connected component); the latter does not.
A sketch of the optimal algorithm for trees is as follows. First, we obtain an algorithm that moves each token exactly once, which may not be optimal. Do this recursively: consider a leaf of the smallest subtree containing both the initial and desired sets. If this leaf is in both sets, remove it and recurse. If it is in the initial set only, find a path from it to a vertex of the desired set that does not pass through any other vertex of the desired set; record this as the last move of the solution, remove the leaf from the initial set and the reached vertex from the desired set, and recurse. The case where the leaf is in the desired set only is symmetric. To extend this to an algorithm that achieves the optimum, consider any token whose vertex is in both the initial and desired sets. If removing that vertex would split the tree into subtrees, each of which contains the same number of initial and desired vertices, then leave that token in place, remove the vertex, and recurse on the subtrees. If there is no such token, then each token must move at least once, and so the solution that moves every token exactly once is optimal.
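The optimality step just described can be summarized by the following Python sketch, which only counts the moves of an optimal solution on a tree; the names and the adjacency-list representation are illustrative, and no attempt is made to match the linear running time of the actual algorithm.

```python
def count_optimal_moves(adj, initial, desired):
    """Number of moves in an optimal solution on a tree, following the
    recursion described above (sketch only; not the linear-time algorithm).

    adj      -- adjacency lists of a tree (dict: vertex -> set of neighbours)
    initial  -- vertices holding tokens initially
    desired  -- vertices that must hold tokens at the end
    """
    def components(vertices, removed):
        """Connected components of the subtree induced on vertices, minus removed."""
        seen, comps = set(), []
        for start in vertices:
            if start in seen or start == removed:
                continue
            comp, stack = set(), [start]
            seen.add(start)
            while stack:
                x = stack.pop()
                comp.add(x)
                for y in adj[x]:
                    if y in vertices and y != removed and y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
        return comps

    def solve(vertices, ini, des):
        if not ini:
            return 0
        # A token may stay put if its vertex is in both sets and removing the
        # vertex splits the tree into components that are individually balanced.
        for v in ini & des:
            comps = components(vertices, v)
            if all(len(ini & c) == len(des & c) for c in comps):
                return sum(solve(c, ini & c, des & c) for c in comps)
        # Otherwise every remaining token moves exactly once.
        return len(ini)

    return solve(set(adj), set(initial), set(desired))


# Path a-b-c: the token on b must step aside before a's token can reach c.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(count_optimal_moves(adj, {"a", "b"}, {"b", "c"}))   # 2
```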
While the optimum on trees can be found in linear time, finding the optimum on general graphs is NP-complete. The problem is in NP: a certificate is simply a sequence of moves, and an optimal sequence contains at most linearly many moves, so it remains to show that the problem is NP-hard. This is done via a reduction from set cover.
Consider an instance of set cover: a universe U of m elements and a collection of subsets of U, from which we wish to select as few subsets as possible whose union is U. Construct a graph as follows:
Make a vertex for each of the elements in the universe and each of the subsets. Connect a subset vertex to an element vertex if the subset contains that element. Create a path of m vertices, and attach one of its ends to every subset vertex. The initial token set consists of the path vertices and the subset vertices; the desired set consists of the subset vertices and the element vertices.
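The construction can be written out as a short Python sketch; the vertex-naming scheme is purely illustrative.

```python
def build_reduction(universe, subsets):
    """Build the token reconfiguration instance for a set cover instance.

    universe -- list of elements to be covered
    subsets  -- list of sets of elements
    """
    subset_vs  = [("subset", i) for i in range(len(subsets))]
    element_vs = [("element", e) for e in universe]
    path_vs    = [("path", j) for j in range(len(universe))]   # one path vertex per element

    edges = []
    # Subset vertex -- element vertex edges encode containment.
    for i, s in enumerate(subsets):
        for e in s:
            edges.append((("subset", i), ("element", e)))
    # The long path, with one end attached to every subset vertex.
    for j in range(len(path_vs) - 1):
        edges.append((path_vs[j], path_vs[j + 1]))
    for sv in subset_vs:
        edges.append((path_vs[0], sv))

    initial = set(path_vs) | set(subset_vs)      # tokens start on the path and the subset vertices
    desired = set(element_vs) | set(subset_vs)   # and must end on the elements and the subset vertices
    return edges, initial, desired
```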
To see why this is a reduction, consider the choice of which subset-vertex tokens to move. To free a path to each element vertex, some of the subset-vertex tokens must be moved aside, and the subsets corresponding to the moved tokens must together cover every element. After doing so, each token on the long path moves exactly once, onto an element vertex. Thus the optimum cost equals the number of selected subsets plus the number of elements m, the latter of which is fixed by the instance. So we have a polynomial-time reduction from set cover, which is NP-complete, to token reconfiguration. Thus token reconfiguration is also NP-complete on general graphs.
The token reconfiguration problem is APX-complete, meaning that in some sense it is as hard to approximate as any problem that has a constant-factor approximation algorithm. The reduction is the same one as above, from set cover, except that the set cover instances are restricted to subsets of size at most 3; this restricted problem is APX-hard. [3]
Using exactly the same construction as above, we obtain an L-reduction: a solution's distance from the optimum is the same for the set cover instance and for the corresponding token reconfiguration instance, since the only change in cost is the addition of m, the number of elements in the universe. Furthermore, because every subset has size at most 3, the set cover optimum is at least m/3. Thus the constants of the L-reduction are α = 4 and β = 1.
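In symbols, writing OPT_SC and OPT_TR for the two optima and c_SC, c_TR for the costs of a set cover solution and of the corresponding token reconfiguration solution, these constants follow from:

```latex
\mathrm{OPT}_{\mathrm{TR}} = \mathrm{OPT}_{\mathrm{SC}} + m
  \le \mathrm{OPT}_{\mathrm{SC}} + 3\,\mathrm{OPT}_{\mathrm{SC}}
  = 4\,\mathrm{OPT}_{\mathrm{SC}} \qquad (\alpha = 4),
\qquad
c_{\mathrm{TR}} - \mathrm{OPT}_{\mathrm{TR}} = c_{\mathrm{SC}} - \mathrm{OPT}_{\mathrm{SC}} \qquad (\beta = 1).
```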
One can, in fact, modify the reduction to work for labeled token reconfiguration as well. To do so, attach a new pendant vertex to each subset vertex; this vertex is neither an initial nor a desired token position. Label the tokens on the long path 1 through m, and label the element vertices 1 through m as well. Now the solution consists of 'moving aside' each chosen subset-vertex token onto its pendant vertex, correctly placing the labeled tokens from the path, and returning the subset-vertex tokens to their initial locations. Since each chosen subset now contributes two moves, this is again an L-reduction, with α = 5 and β = 1.
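Continuing the illustrative sketch above, the labeled instance could be built as follows; the pendant "parking" vertices and the label dictionaries are representational assumptions made for this example.

```python
def build_labeled_reduction(universe, subsets):
    """Extend build_reduction (above) to the labeled variant: add one pendant
    'parking' vertex per subset vertex and assign labels to the tokens."""
    edges, initial, desired = build_reduction(universe, subsets)

    # Pendant vertices never hold a token initially or finally; they only give
    # a chosen subset token somewhere to step aside and back.
    for i in range(len(subsets)):
        edges.append((("subset", i), ("park", i)))

    # Path token j must end on the vertex of the j-th element; every subset
    # token must return to its own subset vertex.
    start_labels = {("path", j): j for j in range(len(universe))}
    end_labels   = {("element", e): j for j, e in enumerate(universe)}
    for i in range(len(subsets)):
        start_labels[("subset", i)] = ("S", i)
        end_labels[("subset", i)]   = ("S", i)
    return edges, initial, desired, start_labels, end_labels
```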
Calinescu, Dumitrescu, and Pach have also shown that there exists a 3-approximation for unlabeled token reconfiguration, so the problem is in APX as well and thus APX-complete. The proof is much more complicated and omitted here.
In combinatorial mathematics, the Steiner tree problem, or minimum Steiner tree problem, named after Jakob Steiner, is an umbrella term for a class of problems in combinatorial optimization. While Steiner tree problems may be formulated in a number of settings, they all require an optimal interconnect for a given set of objects and a predefined objective function. One well-known variant, which is often used synonymously with the term Steiner tree problem, is the Steiner tree problem in graphs. Given an undirected graph with non-negative edge weights and a subset of vertices, usually referred to as terminals, the Steiner tree problem in graphs requires a tree of minimum weight that contains all terminals and minimizes the total weight of its edges. Further well-known variants are the Euclidean Steiner tree problem and the rectilinear minimum Steiner tree problem.
In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening.
In graph theory, a vertex cover of a graph is a set of vertices that includes at least one endpoint of every edge of the graph.
In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem.
In graph theory, a dominating set for a graph G is a subset D of its vertices, such that any vertex of G is in D, or has a neighbor in D. The domination number γ(G) is the number of vertices in a smallest dominating set for G.
In the mathematical discipline of graph theory, a feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a graph without cycles. Equivalently, each FVS contains at least one vertex of any cycle in the graph. The feedback vertex set number of a graph is the size of a smallest feedback vertex set. The minimum feedback vertex set problem is an NP-complete problem; it was among the first problems shown to be NP-complete. It has wide applications in operating systems, database systems, and VLSI chip design.
In graph theory and graph algorithms, a feedback arc set or feedback edge set in a directed graph is a subset of the edges of the graph that contains at least one edge out of every cycle in the graph. Removing these edges from the graph breaks all of the cycles, producing an acyclic subgraph of the given graph, often called a directed acyclic graph. A feedback arc set with the fewest possible edges is a minimum feedback arc set and its removal leaves a maximum acyclic subgraph; weighted versions of these optimization problems are also used. If a feedback arc set is minimal, meaning that removing any edge from it produces a subset that is not a feedback arc set, then it has an additional property: reversing all of its edges, rather than removing them, produces a directed acyclic graph.
In computational complexity theory, the class APX is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant. In simple terms, problems in this class have efficient algorithms that can find an answer within some fixed multiplicative factor of the optimal answer.
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions.
In the mathematical field of graph theory, a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v, w a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972), who provided tight bounds on the computational complexity of constructing them.
In computer science, a kernelization is a technique for designing efficient algorithms that achieve their efficiency by a preprocessing stage in which inputs to the algorithm are replaced by a smaller input, called a "kernel". The result of solving the problem on the kernel should either be the same as on the original input, or it should be easy to transform the output on the kernel to the desired output for the original problem.
In graph theory, a clique cover or partition into cliques of a given undirected graph is a collection of cliques that cover the whole graph. A minimum clique cover is a clique cover that uses as few cliques as possible. The minimum k for which a clique cover exists is called the clique cover number of the given graph.
In graph theory, the planar separator theorem is a form of isoperimetric inequality for planar graphs, that states that any planar graph can be split into smaller pieces by removing a small number of vertices. Specifically, the removal of O(√n) vertices from an n-vertex graph can partition the graph into disjoint subgraphs each of which has at most 2n/3 vertices.
In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets S and T, such that the number of edges between S and T is as large as possible. Finding such a cut is known as the max-cut problem.
In computer science, lexicographic breadth-first search or Lex-BFS is a linear time algorithm for ordering the vertices of a graph. The algorithm is different from a breadth-first search, but it produces an ordering that is consistent with breadth-first search.
The pebble motion problems, or pebble motion on graphs, are a set of related problems in graph theory dealing with the movement of multiple objects ("pebbles") from vertex to vertex in a graph with a constraint on the number of pebbles that can occupy a vertex at any time. Pebble motion problems occur in domains such as multi-robot motion planning and network routing. The best-known example of a pebble motion problem is the famous 15 puzzle where a disordered group of fifteen tiles must be rearranged within a 4x4 grid by sliding one tile at a time.
In graph theory, the metric k-center problem or vertex k-center problem is a classical combinatorial optimization problem studied in theoretical computer science that is NP-hard. Given n cities with specified distances, one wants to build k warehouses in different cities and minimize the maximum distance of a city to a warehouse. In graph theory, this means finding a set of k vertices for which the largest distance of any point to its closest vertex in the k-set is minimum. The vertices must be in a metric space, providing a complete graph that satisfies the triangle inequality. It has application in facility location and clustering.
In network theory, the Wiener connector is a means of maximizing efficiency in connecting specified "query vertices" in a network. Given a connected, undirected graph and a set of query vertices in a graph, the minimum Wiener connector is an induced subgraph that connects the query vertices and minimizes the sum of shortest path distances among all pairs of vertices in the subgraph. In combinatorial optimization, the minimum Wiener connector problem is the problem of finding the minimum Wiener connector. It can be thought of as a version of the classic Steiner tree problem, where instead of minimizing the size of the tree, the objective is to minimize the distances in the subgraph.
In theoretical computer science, nondeterministic constraint logic is a combinatorial system in which an orientation is given to the edges of a weighted undirected graph, subject to certain constraints. One can change this orientation by steps in which a single edge is reversed, subject to the same constraints. This is a form of reversible logic in that each sequence of edge orientation changes can be undone.
In combinatorial optimization, the matroid parity problem is a problem of finding the largest independent set of paired elements in a matroid. The problem was formulated by Lawler (1976) as a common generalization of graph matching and matroid intersection. It is also known as polymatroid matching, or the matchoid problem.