APX

In complexity theory, the class APX (an abbreviation of "approximable") is the set of NP optimization problems that allow polynomial-time approximation algorithms with approximation ratio bounded by a constant (or constant-factor approximation algorithms for short). In simple terms, problems in this class have efficient algorithms that can find an answer within some fixed multiplicative factor of the optimal answer.

Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm.

NP (complexity): the computational complexity class of decision problems solvable by a non-deterministic Turing machine in polynomial time

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time.

In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories depending on whether the variables are continuous or discrete. An optimization problem with discrete variables is known as a discrete optimization problem. In a discrete optimization problem, we are looking for an object such as an integer, permutation or graph from a countable set. Problems with continuous variables include constrained problems and multimodal problems.

An approximation algorithm is called an f(n)-approximation algorithm for input size n if it can be proven that the solution that the algorithm finds is at most a multiplicative factor of f(n) times worse than the optimal solution. Here, f(n) is called the approximation ratio. Problems in APX are those with algorithms for which f(n) is a constant c. The approximation ratio is conventionally stated greater than 1. In the case of minimization problems, f(n) is the found solution's score divided by the optimum solution's score, while for maximization problems the reverse is the case. For maximization problems, where an inferior solution has a smaller score, f(n) is sometimes stated as less than 1; in such cases, the reciprocal of f(n) is the ratio of the score of the found solution to the score of the optimum solution.
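To make this concrete, here is a minimal Python sketch (an illustration, not part of the original article) of the textbook maximal-matching heuristic for minimum vertex cover, a standard example of a constant-factor approximation algorithm that places a problem in APX:

```python
def vertex_cover_2approx(edges):
    """Maximal-matching heuristic: a classic 2-approximation
    for minimum vertex cover, witnessing membership in APX."""
    cover = set()
    for u, v in edges:
        # Take both endpoints of any edge not yet covered.
        if u not in cover and v not in cover:
            cover.update((u, v))
    # The picked edges are vertex-disjoint, and any cover needs at
    # least one endpoint of each, so |cover| <= 2 * OPT.
    return cover

# Example: a 4-cycle. The optimum cover has 2 vertices; the
# heuristic returns 4 here, within the guaranteed factor of 2.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))
```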

A problem is said to have a polynomial-time approximation scheme (PTAS) if for every multiplicative factor of the optimum worse than 1 there is a polynomial-time algorithm to solve the problem to within that factor. Unless P = NP there exist problems that are in APX but without a PTAS, so the class of problems with a PTAS is strictly contained in APX. One such problem is the bin packing problem.

In computer science, a polynomial-time approximation scheme (PTAS) is a type of approximation algorithm for optimization problems.

In the bin packing problem, items of different volumes must be packed into a finite number of bins or containers each of volume V in a way that minimizes the number of bins used. In computational complexity theory, it is a combinatorial NP-hard problem. The decision problem is NP-complete.
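As a rough illustration of a constant-factor guarantee for this problem, the First-Fit heuristic sketched below (assuming unit-capacity bins and item sizes in (0, 1]) never uses more than about twice the optimal number of bins, because at most one of its bins can end up at most half full:

```python
def first_fit(sizes):
    """First-Fit bin packing: put each item into the first bin with
    room, opening a new bin if none fits.  At most one bin is ever
    left at most half full, so the bin count is at most ~2 * OPT."""
    bins = []  # remaining free capacity of each open bin
    for s in sizes:
        for i, free in enumerate(bins):
            if s <= free:
                bins[i] -= s
                break
        else:
            bins.append(1.0 - s)  # open a new unit-capacity bin
    return len(bins)

print(first_fit([0.4, 0.8, 0.5, 0.1, 0.7, 0.3, 0.2]))  # -> 3 (optimal here)
```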

APX-hardness and APX-completeness

A problem is said to be APX-hard if there is a PTAS reduction from every problem in APX to that problem, and to be APX-complete if the problem is APX-hard and also in APX. Since P ≠ NP implies PTAS ≠ APX, no APX-hard problem has a PTAS unless P = NP. In practice, reducing one problem to another to demonstrate APX-completeness is often done using other reduction schemes, such as L-reductions, which imply PTAS reductions.

In computational complexity theory, a PTAS reduction is an approximation-preserving reduction that is often used to perform reductions between solutions to optimization problems. It preserves the property that a problem has a polynomial time approximation scheme (PTAS) and is used to define completeness for certain classes of optimization problems such as APX. Notationally, if there is a PTAS reduction from a problem A to a problem B, we write A ≤PTAS B.

In computer science, particularly the study of approximation algorithms, an L-reduction is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems.

Examples

One of the simplest APX-complete problems is MAX-3SAT-3, a variation of the boolean satisfiability problem. In this problem, we have a boolean formula in conjunctive normal form where each variable appears at most 3 times, and we wish to know the maximum number of clauses that can be simultaneously satisfied by a single assignment of true/false values to the variables.

MAX-3SAT is a problem in the computational complexity subfield of computer science. It generalises the Boolean satisfiability problem (SAT), which is a decision problem considered in complexity theory. It is defined as: given a Boolean formula in conjunctive normal form in which each clause contains at most three literals, find an assignment of the variables that satisfies the largest number of clauses.

In computer science, the Boolean satisfiability problem is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.

In Boolean logic, a formula is in conjunctive normal form (CNF) or clausal normal form if it is a conjunction of one or more clauses, where a clause is a disjunction of literals; otherwise put, it is an AND of ORs. As a canonical normal form, it is useful in automated theorem proving and circuit theory.
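To see what a constant approximation ratio looks like for such a problem, the sketch below (illustrative, using a DIMACS-style clause encoding) relies on the folklore observation that every clause contains a positive or a negative literal, so either the all-true or the all-false assignment satisfies it; the better of the two therefore satisfies at least half of the clauses, a 2-approximation for MAX-SAT:

```python
def count_satisfied(clauses, value):
    """Count clauses satisfied when every variable is set to `value`.
    Clauses are lists of nonzero ints: +i means x_i, -i means NOT x_i."""
    return sum(any((lit > 0) == value for lit in clause)
               for clause in clauses)

def max_sat_half(clauses):
    """Every clause is satisfied by all-true or by all-false, so the
    better of the two satisfies >= m/2 >= OPT/2 of the m clauses."""
    return max(count_satisfied(clauses, True),
               count_satisfied(clauses, False))

# (x1 OR NOT x2) AND (NOT x1 OR x3) AND (NOT x3): all-false
# satisfies all three clauses, which is optimal here.
print(max_sat_half([[1, -2], [-1, 3], [-3]]))  # -> 3
```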

Other APX-complete problems include the travelling salesman problem when the distances between points form a metric, and the token reconfiguration problem:

Metric (mathematics): a mathematical function that defines a distance between elements of a set

In mathematics, a metric or distance function is a function that defines a distance between each pair of elements of a set. A set with a metric is called a metric space. A metric induces a topology on a set, but not all topologies can be generated by a metric. A topological space whose topology can be described by a metric is called metrizable.
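The metric is precisely what makes a constant-factor approximation of the travelling salesman problem possible. Below is a minimal sketch (assuming Euclidean points, which satisfy the triangle inequality) of the classic double-tree heuristic: build a minimum spanning tree and shortcut a preorder walk of it, giving a tour of length at most twice the optimum:

```python
import math
from collections import defaultdict

def tsp_double_tree(points):
    """Double-tree heuristic for metric TSP: the shortcut preorder
    walk of an MST costs at most 2 * MST weight <= 2 * OPT."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm on the complete graph over the points.
    parent = {}
    best = {v: (dist(0, v), 0) for v in range(1, n)}
    while best:
        v = min(best, key=lambda u: best[u][0])
        parent[v] = best.pop(v)[1]
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)
    children = defaultdict(list)
    for v, p in parent.items():
        children[p].append(v)
    # Preorder walk of the tree; the triangle inequality lets us
    # skip repeated vertices without increasing the length.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [0]

print(tsp_double_tree([(0, 0), (0, 1), (1, 1), (1, 0)]))  # [0, 1, 2, 3, 0]
```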

In computational complexity theory and combinatorics, the token reconfiguration problem is an optimization problem on a graph with both an initial and desired state for tokens.

PTAS

PTAS (polynomial-time approximation scheme) consists of problems that can be approximated to within any constant factor greater than 1 in time that is polynomial in the input size, though the degree of the polynomial may depend on that factor. This class is a subset of APX.
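For concreteness, here is a sketch in the style of the textbook (CLRS) trimming scheme for the subset sum problem; it is in fact a fully polynomial-time approximation scheme, and in particular a PTAS, since the returned sum is within a factor 1 + ε of the optimum and the running time is polynomial in both the input size and 1/ε:

```python
def approx_subset_sum(items, target, eps):
    """Trimming FPTAS for subset sum (CLRS-style): keep a short list
    of achievable sums, merging values within a (1 + delta) factor.
    Returns z with OPT / z <= 1 + eps, where OPT is the largest
    achievable sum not exceeding `target`."""
    sums = [0]
    delta = eps / (2 * len(items))
    for x in items:
        merged = sorted(set(sums + [s + x for s in sums]))
        sums, last = [], -1.0
        for s in merged:
            if s <= target and s > last * (1 + delta):
                sums.append(s)  # keep only well-separated sums
                last = s
    return max(sums)

# CLRS's example instance: the optimum is 307 (= 104 + 102 + 101);
# with eps = 0.4 the scheme returns 302, within the promised ratio.
print(approx_subset_sum([104, 102, 201, 101], 308, 0.4))  # -> 302
```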

APX-intermediate

Unless P = NP, there exist problems in APX that are neither in PTAS nor APX-complete. Such problems can be thought of as having a hardness between PTAS problems and APX-complete problems, and may be called APX-intermediate. The bin packing problem is thought to be APX-intermediate. Despite not having a known PTAS, the bin packing problem has several "asymptotic PTAS" algorithms, which behave like a PTAS when the optimum solution is large, so intuitively it may be easier than problems that are APX-hard.

Another example of a potentially APX-intermediate problem is min edge coloring.

f(n)-APX

One can also define a family of complexity classes f(n)-APX, where f(n)-APX contains problems with a polynomial time approximation algorithm with an O(f(n)) approximation ratio. One can analogously define f(n)-APX-complete classes; some such classes contain well-known optimization problems. Log-APX-completeness and Poly-APX-completeness are defined in terms of AP-reductions rather than PTAS-reductions; this is because PTAS-reductions are not strong enough to preserve membership in Log-APX and Poly-APX, even though they suffice for APX.

Log-APX-complete, consisting of the hardest problems that can be approximated efficiently to within a factor logarithmic in the input size, includes min dominating set when degree is unbounded.
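For approximation purposes, min dominating set is essentially the set cover problem (described further below), and the standard witness for such logarithmic ratios is the greedy algorithm, sketched here with illustrative names:

```python
def greedy_set_cover(universe, subsets):
    """Greedy set cover: repeatedly take the set covering the most
    still-uncovered elements.  The ratio is H_n <= ln n + 1, which
    matches the logarithmic guarantee discussed above."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("the subsets do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

print(greedy_set_cover({1, 2, 3, 4, 5},
                       [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]))
```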

Poly-APX-complete, consisting of the hardest problems that can be approximated efficiently to within a factor polynomial in the input size, includes max independent set in the general case.

There also exist problems that are exp-APX-complete, where the approximation ratio is exponential in the input size. This may occur when the approximation is dependent on the value of numbers within the problem instance; these numbers may be expressed in space logarithmic in their value, hence the exponential factor.

Related Research Articles

NP-hardness: complexity class

NP-hardness, in computational complexity theory, is the defining property of a class of problems that are informally "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem.

Combinatorial optimization: subset of mathematical optimization

In operations research, applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects. In many such problems, exhaustive search is not tractable. It operates on the domain of those optimization problems in which the set of feasible solutions is discrete or can be reduced to discrete, and in which the goal is to find the best solution. Some common problems involving combinatorial optimization are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem.

Vertex cover: a set of vertices that includes at least one endpoint of every edge in a graph

In the mathematical discipline of graph theory, a vertex cover of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. The problem of finding a minimum vertex cover is a classical optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory.

In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to NP-hard optimization problems with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there are also many approximation algorithms that provide an additive guarantee on the quality of the returned solution. A notable example of an approximation algorithm that provides both is the classic approximation algorithm of Lenstra, Shmoys and Tardos for Scheduling on Unrelated Parallel Machines.

Reduction (complexity): the transformation of one computational problem to another, used to show that the second problem is as difficult as the first

In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A sufficiently efficient reduction from one problem to another may be used to show that the second problem is at least as difficult as the first.

The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory. It is one of Karp's 21 NP-complete problems shown to be NP-complete in 1972.

In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem. The main characteristics of problems that lie in PLS are that the cost of a solution can be calculated in polynomial time and the neighborhood of a solution can be searched in polynomial time. Therefore it is possible to verify whether or not a solution is a local optimum in polynomial time. Furthermore, depending on the problem and the algorithm that is used for solving the problem, it might be faster to find a local optimum instead of a global optimum.

Interval scheduling is a class of problems in computer science, particularly in the area of algorithm design. The problems consider a set of tasks. Each task is represented by an interval describing the time in which it needs to be executed. For instance, task A might run from 2:00 to 5:00, task B might run from 4:00 to 10:00 and task C might run from 9:00 to 11:00. A subset of intervals is compatible if no two intervals overlap. For example, the subset {A,C} is compatible, as is the subset {B}; but neither {A,B} nor {B,C} are compatible subsets, because the corresponding intervals within each subset overlap.
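Unlike the approximation problems above, unweighted interval scheduling is solved exactly by the classic earliest-finish-time greedy; a minimal sketch using the tasks from this paragraph (times written as plain numbers):

```python
def max_compatible(tasks):
    """Exact greedy for interval scheduling: repeatedly take the
    compatible task that finishes earliest."""
    chosen, last_end = [], float("-inf")
    for name, start, end in sorted(tasks, key=lambda t: t[2]):
        if start >= last_end:  # does not overlap the last chosen task
            chosen.append(name)
            last_end = end
    return chosen

print(max_compatible([("A", 2, 5), ("B", 4, 10), ("C", 9, 11)]))  # ['A', 'C']
```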

In computational complexity theory, SNP is a complexity class containing a limited subset of NP based on its logical characterization in terms of graph-theoretical properties. It forms the basis for the definition of the class MaxSNP of optimization problems.

Maximum cut

For a graph, a maximum cut is a cut whose size is at least the size of any other cut. The problem of finding a maximum cut in a graph is known as the Max-Cut Problem.
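Max cut is APX-complete, and its simplest constant-factor guarantee comes from local search: flip any vertex that increases the cut until none does, at which point at least half of all edges are cut. A minimal sketch with illustrative names:

```python
def local_search_max_cut(n, edges):
    """Local search for max cut on vertices 0..n-1: at a local
    optimum every vertex has at least half its incident edges cut,
    so the cut has >= |E| / 2 edges, a 2-approximation."""
    side = [False] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # Uncut incident edges become cut by flipping v (+1),
            # cut ones become uncut (-1).
            gain = sum(1 if side[u] == side[w] else -1
                       for u, w in edges if v in (u, w))
            if gain > 0:
                side[v] = not side[v]
                improved = True
    return side

# Triangle plus a pendant edge: the local optimum found cuts 3 of 4.
print(local_search_max_cut(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))
```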

NP-completeness: complexity class

In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly, such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. The complexity class of problems of this form is called NP, an abbreviation for "nondeterministic polynomial time". A problem is said to be NP-hard if everything in NP can be transformed in polynomial time into it, and a problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP. If any NP-complete problem has a polynomial time algorithm, all problems in NP do. The set of NP-complete problems is often denoted by NP-C or NPC.

In computational complexity theory, a branch of computer science, the Max/min CSP/Ones classification theorems state necessary and sufficient conditions that determine the complexity classes of problems about satisfying a subset S of boolean relations. They are similar to Schaefer's dichotomy theorem, which classifies the complexity of satisfying finite sets of relations; however, the Max/min CSP/Ones classification theorems give information about the complexity of approximating an optimal solution to a problem defined by S.

In computability theory and computational complexity theory, especially the study of approximation algorithms, an approximation-preserving reduction is an algorithm for transforming one optimization problem into another problem, such that the distance of solutions from optimal is preserved to some degree. Approximation-preserving reductions are a subset of more general reductions in complexity theory; the difference is that approximation-preserving reductions usually make statements on approximation problems or optimization problems, as opposed to decision problems.

In mathematics applied to the study of networks, the Wiener connector, named in honor of chemist Harry Wiener who first introduced the Wiener Index, is a means of maximizing efficiency in connecting specified "query vertices" in a network. Given a connected, undirected graph and a set of query vertices in a graph, the minimum Wiener connector is an induced subgraph that connects the query vertices and minimizes the sum of shortest path distances among all pairs of vertices in the subgraph. In combinatorial optimization, the minimum Wiener connector problem is the problem of finding the minimum Wiener connector. It can be thought of as a version of the classic Steiner tree problem, where instead of minimizing the size of the tree, the objective is to minimize the distances in the subgraph.

In computer science and operations research, exact algorithms are algorithms that always solve an optimization problem to optimality.
