FL (complexity)

In computational complexity theory, the complexity class FL is the set of function problems which can be solved by a deterministic Turing machine in a logarithmic amount of memory space. [1] As in the definition of L, the machine reads its input from a read-only tape and writes its output to a write-only tape; the logarithmic space restriction applies only to the read/write working tape.
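
A small Python sketch may make the model concrete: computing the number of 1s in the input and writing it in binary is a function problem solvable with only a counter and an input-head position, each of O(log n) bits. The helpers read_input and write_output below are illustrative stand-ins for the read-only input tape and write-only output tape, not part of any standard API.

    # Illustrative sketch of a function problem in FL: count the 1s in the input
    # and write the count in binary.  Working storage is only a counter and an
    # index, both O(log n) bits for an input of length n.
    def count_ones_logspace(read_input, write_output):
        """read_input(i) returns the i-th input symbol, or None past the end;
        write_output(sym) appends one symbol to the write-only output."""
        count = 0                       # O(log n) bits
        i = 0                           # input-head position, O(log n) bits
        while (sym := read_input(i)) is not None:
            if sym == '1':
                count += 1
            i += 1
        for bit in format(count, 'b'):  # emit the answer; the output is never re-read
            write_output(bit)

    tape = "1011001"
    out = []
    count_ones_logspace(lambda i: tape[i] if i < len(tape) else None, out.append)
    print(''.join(out))                 # prints '100' (four 1s)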

Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.

In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input.

In computational complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem. For function problems, the output is not simply 'yes' or 'no'.

Loosely speaking, a function problem takes a complicated input and produces a (perhaps equally) complicated output, whereas a decision problem produces only a yes-or-no answer. FL thus corresponds to the set L of decision problems which can be solved in deterministic logspace. FL is a subset of FP, the set of function problems which can be solved in deterministic polynomial time. [1]

Decision problem

In computability theory and computational complexity theory, a decision problem is a problem that can be posed as a yes-no question of the input values. An example of a decision problem is deciding whether a given natural number is prime. Another is the problem "given two numbers x and y, does x evenly divide y?". The answer is either 'yes' or 'no' depending upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called decidable.
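
As a concrete illustration, the following Python sketch implements such a decision procedure by repeated subtraction, which mirrors the long-division idea: the answer is 'yes' exactly when the remainder is zero. The function name divides is chosen only for this example.

    def divides(x: int, y: int) -> bool:
        """Decide the question 'does x evenly divide y?'."""
        if x == 0:
            return y == 0              # only 0 is evenly divided by 0
        x, remainder = abs(x), abs(y)
        while remainder >= x:          # repeatedly subtract x, as in long division
            remainder -= x
        return remainder == 0          # 'yes' exactly when nothing is left over

    print(divides(3, 12))  # True:  3 evenly divides 12
    print(divides(5, 12))  # False: remainder 2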

In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
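
For instance, testing whether the input string is a palindrome can be done in this style: the sketch below (an illustrative example, not drawn from the article) keeps only two indices into the read-only input, i.e. O(log n) bits of working memory, and never copies the input.

    def is_palindrome_logspace(read_input, n: int) -> bool:
        """read_input(i) returns the i-th symbol of a read-only input of length n."""
        left, right = 0, n - 1         # two pointers into the input: O(log n) bits each
        while left < right:
            if read_input(left) != read_input(right):
                return False
            left += 1
            right -= 1
        return True

    tape = "abcba"
    print(is_palindrome_logspace(lambda i: tape[i], len(tape)))  # True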

In computational complexity theory, the complexity class FP is the set of function problems which can be solved by a deterministic Turing machine in polynomial time; it is the function problem version of the decision problem class P. Roughly speaking, it is the class of functions that can be efficiently computed on classical computers without randomization.

FL is known to contain several natural problems, including arithmetic on numbers. Addition, subtraction and multiplication of two numbers are fairly straightforward to place in FL, but showing that division can be done in logarithmic space was a far deeper problem that remained open for decades. [2] [3]
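
The following sketch suggests why addition, for example, is in FL: each output bit of a + b can be recomputed from scratch using only a few indices and a one-bit carry, so nothing larger than O(log n) bits is ever stored. Bits are given least-significant first, and the helper names are illustrative rather than taken from the cited papers.

    def carry_into(a, b, i):
        """Carry entering bit position i, found by scanning the lower positions."""
        for k in range(i - 1, -1, -1):
            if a[k] == 1 and b[k] == 1:    # a carry is generated here
                return 1
            if a[k] == 0 and b[k] == 0:    # any incoming carry is absorbed here
                return 0
            # a[k] + b[k] == 1: the carry propagates, keep scanning downward
        return 0                           # no lower position generates a carry

    def add_logspace(a, b):
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))         # pad to a common length
        b = b + [0] * (n - len(b))
        out = []
        for i in range(n):                 # each bit needs only indices and one flag
            out.append(a[i] ^ b[i] ^ carry_into(a, b, i))
        out.append(carry_into(a, b, n))    # possible final carry-out
        return out

    # 6 + 7 = 13: [0,1,1] is 6, [1,1,1] is 7, [1,0,1,1] is 13 (least significant bit first)
    print(add_logspace([0, 1, 1], [1, 1, 1]))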

Similarly one may define FNL, which has the same relation with NL as FNP has with NP. [1]

In computational complexity theory, NL is the complexity class containing decision problems which can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.

In computational complexity theory, the complexity class FNP is the function problem extension of the decision problem class NP. The name is somewhat of a misnomer, since technically it is a class of binary relations rather than functions: a binary relation P(x, y), where y is at most polynomially longer than x, is in FNP if and only if there is a deterministic polynomial-time algorithm that can determine whether P(x, y) holds, given both x and y.

In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time.
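
A small Python sketch can illustrate the "verifiable proof" idea for the Boolean satisfiability problem: a certificate for a "yes" instance is a satisfying assignment, and checking it takes time linear in the size of the formula. The clause encoding (lists of signed integers, negative meaning a negated variable) is just a convention chosen for this example.

    def verify_sat(cnf, assignment):
        """cnf: list of clauses, each a list of nonzero ints (negative = negated).
        assignment: dict mapping variable number -> bool."""
        for clause in cnf:
            if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
                return False           # this clause has no true literal
        return True                    # every clause is satisfied

    cnf = [[1, -2], [2, 3]]            # (x1 or not x2) and (x2 or x3)
    print(verify_sat(cnf, {1: True, 2: False, 3: True}))    # True: a valid certificate
    print(verify_sat(cnf, {1: False, 2: False, 3: False}))  # False: not a certificate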

Related Research Articles

In theoretical computer science, a non-deterministic Turing machine is a theoretical model of computation. Such machines are used in thought experiments to examine the abilities and limitations of computers. One of the most important open problems in theoretical computer science is the P vs. NP problem, which concerns the question of how difficult it is to simulate non-deterministic computation with a deterministic computer.

In complexity theory and computability theory, an oracle machine is an abstract machine used to study decision problems. It can be visualized as a Turing machine with a black box, called an oracle, which is able to solve certain decision problems in a single operation. The problem can be of any complexity class. Even undecidable problems, such as the halting problem, can be used.

In computability theory, a probabilistic Turing machine is a non-deterministic Turing machine which chooses between the available transitions at each point according to some probability distribution.

In computational complexity theory, DSPACE or SPACE is the computational resource of memory space for a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm.

In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.

In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the problem of determining whether a Boolean formula is satisfiable.

In computational complexity theory, the complexity class NEXPTIME is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^(n^O(1)).

In computational complexity theory, an advice string is an extra input to a Turing machine that is allowed to depend on the length n of the input, but not on the input itself. A decision problem is in the complexity class P/f(n) if there is a polynomial time Turing machine M with the following property: for any n, there is an advice string A of length f(n) such that, for any input x of length n, the machine M correctly decides the problem on the input x, given x and A.

In computational complexity theory, SL is the complexity class of problems log-space reducible to USTCON, which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice.
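
To make the USTCON problem itself concrete, the union-find sketch below decides whether two vertices lie in the same connected component. It runs in near-linear time and space; it is not a logarithmic-space algorithm, only an illustration of the problem being classified.

    def same_component(n, edges, s, t):
        """Undirected reachability: are s and t in the same component of an n-vertex graph?"""
        parent = list(range(n))
        def find(v):                           # representative of v's component
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        for u, v in edges:                     # merge the endpoints of every edge
            parent[find(u)] = find(v)
        return find(s) == find(t)

    edges = [(0, 1), (1, 2), (3, 4)]
    print(same_component(5, edges, 0, 2))  # True:  0 and 2 are joined through 1
    print(same_component(5, edges, 0, 3))  # False: 3 lies in a different component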

Randomized Logarithmic-space (RL), sometimes called RLP, is the complexity class of problems solvable in logarithmic space and polynomial time by probabilistic Turing machines with one-sided error. It is named in analogy with RP, which is similar but has no logarithmic space restriction.

In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument. The result solved the second LBA problem.

In computational complexity theory, NL-complete is a complexity class containing the languages that are complete for NL, the class of decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. The NL-complete languages are the most "difficult" or "expressive" problems in NL. If a method exists for solving any one of the NL-complete problems in logarithmic memory space, then NL = L.

In computational complexity theory, L/poly is the complexity class of logarithmic space machines with a polynomial amount of advice. L/poly is a non-uniform logarithmic space class, analogous to the non-uniform polynomial time class P/poly.

References

  1. Àlvarez, Carme; Balcázar, José L.; Jenner, Birgit (1991), "Functional oracle queries as a measure of parallel time", STACS 91, Lecture Notes in Computer Science, 480, Springer, pp. 422–433, doi:10.1007/BFb0020817.
  2. Chiu, A.; Davida, G.; Litow, B. (2001), "Division in logspace-uniform NC¹", RAIRO Theoretical Informatics and Applications, 35: 259–276.
  3. Allender, Eric (2004), "The division breakthroughs" (PDF), Current Trends in Theoretical Computer Science, World Scientific, pp. 147–164.