Approximation-preserving reduction

In computability theory and computational complexity theory, especially the study of approximation algorithms, an approximation-preserving reduction is an algorithm for transforming one optimization problem into another problem, such that the distance of solutions from optimal is preserved to some degree. Approximation-preserving reductions are a subset of more general reductions in complexity theory; the difference is that approximation-preserving reductions usually make statements on approximation problems or optimization problems, as opposed to decision problems.

Intuitively, problem A is reducible to problem B via an approximation-preserving reduction if, given an instance of problem A and a (possibly approximate) solver for problem B, one can convert the instance of problem A into an instance of problem B, apply the solver for problem B, and recover a solution for problem A that also has some guarantee of approximation.
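
The following minimal sketch illustrates this pattern in Python; it is added here for illustration only, the function names are hypothetical placeholders, and it assumes the reduction is supplied as a pair of callables f and g together with an approximate solver for B.

    def approx_solve_A(x, f, g, approx_solve_B):
        """Approximately solve an instance x of problem A via a reduction (f, g) to problem B.

        f:              maps an instance of A to an instance of B
        g:              maps (A-instance, B-solution) back to a solution of A
        approx_solve_B: an approximation algorithm for problem B
        """
        x_prime = f(x)                     # translate the A-instance into a B-instance
        y_prime = approx_solve_B(x_prime)  # solve the B-instance, possibly only approximately
        return g(x, y_prime)               # recover an A-solution; how much of the quality
                                           # guarantee survives depends on the reduction type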

Definition

Unlike reductions on decision problems, an approximation-preserving reduction must preserve more than the truth of the problem instances when reducing from one problem to another. It must also maintain some guarantee on the relationship between the cost of a solution and the cost of the optimum in both problems. To formalize:

Let $A$ and $B$ be optimization problems.

Let $x$ be an instance of problem $A$, with optimal solution $OPT(x)$. Let $c_A(x, y)$ denote the cost of a solution $y$ to an instance $x$ of problem $A$. This is also the metric used to determine which solution is considered optimal.

An approximation-preserving reduction from $A$ to $B$ is a pair of functions $(f, g)$ (which often must be computable in polynomial time), such that:

  1. $f$ maps an instance $x$ of $A$ to an instance $x' = f(x)$ of $B$;
  2. $g$ maps a solution $y'$ of $B$ to a solution $y = g(x, y')$ of $A$;
  3. $g$ preserves some guarantee on the quality of the solution, expressed in terms of the approximation ratio $R_A(x, y) = \max\left( \frac{c_A(x, y)}{c_A(x, OPT(x))}, \frac{c_A(x, OPT(x))}{c_A(x, y)} \right)$; the exact guarantee required is what distinguishes the reduction types below.
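
As a worked example of the third condition (the numbers are illustrative and not part of the definition): for a minimization instance $x$ with optimum cost $c_A(x, OPT(x)) = 10$, a solution $y$ of cost $c_A(x, y) = 15$ has approximation ratio

    $R_A(x, y) = \max(15/10, 10/15) = 1.5$,

so the ratio is always at least 1 and equals 1 exactly when $y$ is optimal. Each reduction type below constrains how much $R_A(x, g(x, y'))$ may degrade relative to $R_B(f(x), y')$.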

Types

There are many different types of approximation-preserving reductions, each of which provides a different guarantee (the third point in the definition above). However, unlike other kinds of reductions, approximation-preserving reductions often overlap in the properties they establish for optimization problems (e.g. complexity class membership or completeness, or inapproximability). The different types instead serve as alternative proof techniques: one uses whichever applicable reduction is most easily adapted to the problem at hand.

Not all types of approximation-preserving reductions can be used to show membership in all approximability complexity classes, the most notable of which are PTAS and APX. A reduction below preserves membership in a complexity class C if, whenever a problem A reduces to a problem B via the reduction scheme and B is in C, then A is in C as well. Some reductions shown below preserve membership only in APX or only in PTAS, but not the other. Because of this, care must be taken when selecting an approximation-preserving reduction, especially when the purpose is proving completeness of a problem within a complexity class.

Crescenzi suggests that the three reduction styles best suited for both ease of use and proving power are PTAS reduction, AP reduction, and L-reduction. [1] The reduction descriptions that follow are from Crescenzi's survey of approximation-preserving reductions.

Strict reduction

Strict reduction is the simplest type of approximation-preserving reduction. In a strict reduction, the approximation ratio of a solution y' to an instance x' of a problem B must be at most as good as the approximation ratio of the corresponding solution y to instance x of problem A. In other words:

$R_A(x, y) \le R_B(x', y')$ for $y = g(x, y')$ and $x' = f(x)$.

Strict reduction is the most straightforward: if a strict reduction from problem A to problem B exists, then problem A can always be approximated to at least as good a ratio as problem B. Strict reduction preserves membership in both PTAS and APX.
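
To spell out the consequence in the notation above (a one-line derivation): if problem $B$ admits an $r$-approximation algorithm, then running it on $f(x)$ and applying $g$ gives $R_A(x, g(x, y')) \le R_B(f(x), y') \le r$, so problem $A$ admits an $r$-approximation as well; applying this for every constant $r$, or for every $r = 1 + \varepsilon$, is what preserves membership in APX and PTAS, respectively.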

There exists a similar concept of an S-reduction, for which $c_A(x, y) = c_B(x', y')$, and the optima of the two corresponding instances must have the same cost as well. S-reduction is a very special case of strict reduction, and is even more constraining. In effect, the two problems A and B must be in near-perfect correspondence with each other. The existence of an S-reduction implies not only the existence of a strict reduction but every other reduction listed here.

L-reduction

L-reductions preserve membership in PTAS as well as APX (but only for minimization problems in the case of the latter). As a result, they cannot be used in general to prove completeness results about APX, Log-APX, or Poly-APX, but nevertheless they are valued for their natural formulation and ease of use in proofs. [1]
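
The passage above does not restate the L-reduction conditions; in the standard formulation (introduced by Papadimitriou and Yannakakis), an L-reduction requires constants $\alpha, \beta > 0$ such that

    $c_B(x', OPT(x')) \le \alpha \cdot c_A(x, OPT(x))$, and
    $|c_A(x, g(x, y')) - c_A(x, OPT(x))| \le \beta \cdot |c_B(x', y') - c_B(x', OPT(x'))|$,

which, for minimization problems, yields the guarantee $R_A(x, y) \le 1 + \alpha\beta \cdot (R_B(x', y') - 1)$.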

PTAS-reduction

PTAS-reduction is another commonly used reduction scheme. Though it preserves membership in PTAS, it does not do so for APX. Nevertheless, APX-completeness is defined in terms of PTAS reductions.

PTAS-reductions are a generalization of P-reductions, shown below, with the only difference being that the function $g$ is allowed to depend on the approximation ratio $r$.

A-reduction and P-reduction

A-reduction and P-reduction are similar reduction schemes that can be used to show membership in APX and PTAS respectively. Both introduce a new function $c$, defined on numbers greater than 1, which must be computable.

In an A-reduction, we have that

$R_B(x', y') \le r$ implies $R_A(x, y) \le c(r)$.

In a P-reduction, we have that

$R_B(x', y') \le c(r)$ implies $R_A(x, y) \le r$.

The existence of a P-reduction implies the existence of a PTAS-reduction.
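
To see how a P-reduction transfers an approximation scheme (a short argument in the notation above): to approximate an instance $x$ of $A$ within ratio $r = 1 + \varepsilon$, run the approximation scheme for $B$ on $f(x)$ with target ratio $c(1 + \varepsilon)$, obtaining $y'$ with $R_B(x', y') \le c(1 + \varepsilon)$; the P-reduction guarantee then gives $R_A(x, g(x, y')) \le 1 + \varepsilon$. Since $g$ does not need to depend on $r$ in a P-reduction, the same pair of functions also satisfies the more permissive PTAS-reduction requirements.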

E-reduction

E-reduction, which is a generalization of strict reduction but implies both A-reduction and P-reduction, is an example of a less restrictive reduction style that preserves membership not only in PTAS and APX, but also the larger classes Log-APX and Poly-APX. E-reduction introduces two new parameters, a polynomial $p$ and a constant $\beta$. Its definition is as follows.

In an E-reduction, we have that for some polynomial $p$ and constant $\beta$:

  1. $c_A(x, OPT(x)) \le p(|x|) \cdot c_B(x', OPT(x'))$, where $|x|$ denotes the size of the instance $x$; and
  2. $R_A(x, y) \le 1 + \beta \cdot (R_B(x', y') - 1)$ for every solution $y'$ to $x' = f(x)$, with $y = g(x, y')$.

To obtain an A-reduction from an E-reduction, let $c(r) = 1 + \beta \cdot (r - 1)$, and to obtain a P-reduction from an E-reduction, let $c(r) = 1 + (r - 1)/\beta$.
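
Both substitutions follow directly from the second E-reduction condition (a short check in the notation above): if $R_B(x', y') \le r$, then $R_A(x, y) \le 1 + \beta \cdot (r - 1) = c(r)$, which is exactly the A-reduction guarantee; and if $R_B(x', y') \le 1 + (r - 1)/\beta$, then $R_A(x, y) \le 1 + \beta \cdot ((r - 1)/\beta) = r$, which is the P-reduction guarantee with $c(r) = 1 + (r - 1)/\beta$.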

AP-reduction

AP-reductions are used to define completeness in the classes Log-APX and Poly-APX. They are a special case of PTAS reduction, meeting the following restrictions.

In an AP-reduction, we have that for some constant $\alpha$,

$R_B(x', y') \le r$ implies $R_A(x, y) \le 1 + \alpha \cdot (r - 1)$,

with the additional generalization that the function $g$ is allowed to depend on the approximation ratio $r$, as in PTAS-reduction.

AP-reduction is also a generalization of E-reduction. An additional restriction actually needs to be imposed for AP-reduction to preserve Log-APX and Poly-APX membership, as E-reduction does: for fixed problem size, the computation time of $f$ and $g$ must be non-increasing as the approximation ratio increases.
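
As a numerical illustration (the value of $\alpha$ is hypothetical): with $\alpha = 2$, an AP-reduction turns a solution for $B$ with ratio $r = 1.05$ into a solution for $A$ with ratio at most $1 + 2 \cdot 0.05 = 1.1$. Because the loss is linear in $r - 1$, constant, logarithmic, and polynomial approximation ratios are preserved up to constant factors, which is what makes AP-reductions suitable for defining completeness in Log-APX and Poly-APX.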

Gap reduction

A gap reduction is a type of reduction that, while useful in proving some inapproximability results, does not resemble the other reductions shown here. Gap reductions deal with decision versions of optimization problems, obtained by changing the goal to distinguishing instances whose optimum meets a given threshold from instances whose optimum is some multiplicative factor worse than that threshold.
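
In the formulation commonly used for such results (a sketch of the standard setup rather than a definition taken from the text above): for a maximization problem and a factor $c > 1$, the $c$-gap problem asks, given an instance $x$ and a threshold $k$, to decide whether the optimum value is at least $k$ or strictly less than $k/c$, with arbitrary behaviour allowed in between. Any polynomial-time $c$-approximation algorithm would solve the $c$-gap problem (by checking whether the value it returns is at least $k/c$), so proving the gap problem NP-hard rules out such an approximation algorithm unless P = NP.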

References

  1. Crescenzi, Pierluigi (1997). "A Short Guide to Approximation Preserving Reductions". Proceedings of the 12th Annual IEEE Conference on Computational Complexity. Washington, D.C.: IEEE Computer Society. pp. 262–.