WikiMili The Free Encyclopedia

In computer science, a **polynomial-time approximation scheme** (**PTAS**) is a type of approximation algorithm for optimization problems (most often, NP-hard optimization problems).

A PTAS is an algorithm which takes an instance of an optimization problem and a parameter ε > 0 and, in polynomial time, produces a solution that is within a factor 1 + ε of being optimal (or 1 − ε for maximization problems). For example, for the Euclidean traveling salesman problem, a PTAS would produce a tour with length at most (1 + ε)*L*, with *L* being the length of the shortest tour.^{ [1] } A PTAS also exists for the class of all dense constraint satisfaction problems (CSPs).^{ [2] }
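Stated as code, the guarantee is just a predicate on the returned objective value (an illustrative helper; the function and parameter names are ours, not standard):

```python
def within_ptas_bound(returned, optimal, eps, maximize=False):
    """True if `returned` meets the PTAS guarantee relative to `optimal`:
    at most (1 + eps) * OPT for minimization problems,
    at least (1 - eps) * OPT for maximization problems."""
    if maximize:
        return returned >= (1 - eps) * optimal
    return returned <= (1 + eps) * optimal
```

For the Euclidean TSP example, a tour of length 104 against a shortest tour of length 100 satisfies the bound for ε = 0.05 but not for ε = 0.02.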

The running time of a PTAS is required to be polynomial in the problem size *n* for every fixed ε, but can be different for different ε. Thus an algorithm running in time *O*(*n*^{1/ε}) or even *O*(*n*^{exp(1/ε)}) counts as a PTAS.

A practical problem with PTAS algorithms is that the exponent of the polynomial could increase dramatically as ε shrinks, for example if the runtime is *O*(*n*^{(1/ε)!}). One way of addressing this is to define the **efficient polynomial-time approximation scheme** or **EPTAS**, in which the running time is required to be *O*(*n*^{c}) for a constant *c* independent of ε. This ensures that an increase in problem size has the same relative effect on runtime regardless of what ε is being used; however, the constant under the big-O can still depend on ε arbitrarily. Even more restrictive, and useful in practice, is the **fully polynomial-time approximation scheme** or **FPTAS**, which requires the algorithm to be polynomial in both the problem size *n* and 1/ε. All problems in FPTAS are fixed-parameter tractable. The knapsack problem admits an FPTAS, and the bin packing problem admits an asymptotic FPTAS.^{ [3] }
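As an illustration of the knapsack FPTAS mentioned above, the classic value-scaling scheme can be sketched as follows (a minimal sketch, assuming positive item values; the dictionary-based DP is chosen for clarity, not performance):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Value-scaling FPTAS for 0/1 knapsack: returns (total value, items)
    where the total value is at least (1 - eps) times optimal, in time
    polynomial in the number of items and 1/eps."""
    n = len(values)
    # Scale profits down so the exact DP over total (scaled) value
    # has only about n^2 / eps distinct value levels.
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    # states: scaled value -> (min weight achieving it, chosen item indices)
    states = {0: (0, [])}
    for i in range(n):
        for sv, (w, items) in list(states.items()):
            nw, nv = w + weights[i], sv + scaled[i]
            if nw <= capacity and (nv not in states or states[nv][0] > nw):
                states[nv] = (nw, items + [i])
    chosen = states[max(states)][1]  # best feasible scaled value
    return sum(values[i] for i in chosen), chosen
```

The exact DP over scaled values is polynomial because each scaled value is at most n/ε; rounding values down by the factor K loses at most εK·n/ε = ε·max(values) ≤ ε·OPT in total, which is where the (1 − ε) guarantee comes from.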

Any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have an FPTAS unless P = NP.^{ [4] } However, the converse fails: for example, if P ≠ NP, then knapsack with two constraints is not strongly NP-hard but has no FPTAS, even when the optimal objective is polynomially bounded.^{ [5] }

Unless P = NP, it holds that FPTAS ⊊ PTAS ⊊ APX.^{ [6] } Consequently, under this assumption, APX-hard problems do not have PTASs.

Another deterministic variant of the PTAS is the **quasi-polynomial-time approximation scheme** or **QPTAS**. A QPTAS has time complexity *n*^{polylog(*n*)} for each fixed ε > 0.

Some problems which do not have a PTAS may admit a randomized algorithm with similar properties, a **polynomial-time randomized approximation scheme** or **PRAS**. A PRAS is an algorithm which takes an instance of an optimization or counting problem and a parameter ε > 0 and, in polynomial time, produces a solution that has a *high probability* of being within a factor 1 + ε of optimal. Conventionally, "high probability" means probability greater than 3/4, though as with most probabilistic complexity classes the definition is robust to variations in this exact value (the bare minimum requirement is generally anything greater than 1/2). Like a PTAS, a PRAS must have running time polynomial in *n*, but not necessarily in 1/ε; with further restrictions on the running time in 1/ε, one can define an **efficient polynomial-time randomized approximation scheme** or **EPRAS** similar to the EPTAS, and a **fully polynomial-time randomized approximation scheme** or **FPRAS** similar to the FPTAS.^{ [4] }
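The claim that the exact success threshold is unimportant rests on standard probability amplification: run the algorithm independently many times and return the median. A toy sketch (the base "algorithm" here is simulated, and the 0.75 success rate and run count are illustrative assumptions):

```python
import random
import statistics

def base_pras(opt, eps, rng):
    """Simulated PRAS run: within a (1 +/- eps) factor of opt with
    probability 0.75, and an arbitrary wrong answer otherwise."""
    if rng.random() < 0.75:
        return opt * rng.uniform(1 - eps, 1 + eps)
    return opt * rng.uniform(0, 10)

def amplified_pras(opt, eps, runs, rng):
    """Median of independent runs: any base success probability above 1/2
    is driven toward 1 exponentially fast in the number of runs."""
    return statistics.median(base_pras(opt, eps, rng) for _ in range(runs))
```

The median lands in the good interval whenever a majority of runs do, and by a Chernoff bound a majority fails only with probability exponentially small in the number of runs.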

The term PTAS may also be used to refer to the class of optimization problems that have a PTAS. PTAS is a subset of APX, and unless P = NP, it is a strict subset.^{ [6] }

Membership in PTAS can be shown using a PTAS reduction, L-reduction, or P-reduction, all of which preserve PTAS membership, and these may also be used to demonstrate PTAS-completeness. On the other hand, showing non-membership in PTAS (namely, the nonexistence of a PTAS) may be done by showing that the problem is APX-hard, since then the existence of a PTAS would imply P = NP. APX-hardness is commonly shown via PTAS reduction or AP-reduction.

- ↑ Sanjeev Arora (1998), "Polynomial-time Approximation Schemes for Euclidean TSP and Other Geometric Problems", *Journal of the ACM*, **45**(5): 753–782.
- ↑ Arora, S.; Karger, D.; Karpinski, M. (1999), "Polynomial Time Approximation Schemes for Dense Instances of NP-Hard Problems", *Journal of Computer and System Sciences*, **58**(1): 193–210, doi:10.1006/jcss.1998.1605.
- ↑ Vazirani, Vijay (2001). *Approximation Algorithms*. Berlin: Springer. pp. 74–83. ISBN 3540653678. OCLC 47097680.
- ↑ Vazirani, Vijay V. (2003). *Approximation Algorithms*. Berlin: Springer. pp. 294–295. ISBN 3-540-65367-8.
- ↑ Kellerer, H.; Pferschy, U.; Pisinger, D. (2004). *Knapsack Problems*. Springer.
- ↑ Jansen, Thomas (1998), "Introduction to the Theory of Complexity and Approximation Algorithms", in Mayr, Ernst W.; Prömel, Hans Jürgen; Steger, Angelika (eds.), *Lectures on Proof Verification and Approximation Algorithms*, Springer, pp. 5–28, doi:10.1007/BFb0053011, ISBN 9783540642015. See the discussion following Definition 1.30 on p. 20.

- Complexity Zoo: PTAS, EPTAS, FPTAS
- Pierluigi Crescenzi, Viggo Kann, Magnús Halldórsson, Marek Karpinski, and Gerhard Woeginger, *A compendium of NP optimization problems* – a list of which NP optimization problems have a PTAS.

This page is based on this Wikipedia article

Text is available under the CC BY-SA 4.0 license; additional terms may apply.

Images, videos and audio are available under their respective licenses.
