In computational complexity theory, SC (Steve's Class, named after Stephen Cook) [1] is the complexity class of problems solvable by a deterministic Turing machine in polynomial time (class P) and polylogarithmic space (class PolyL), that is, O((log n)^k) space for some constant k. It may also be called DTISP(poly, polylog), where DTISP stands for deterministic time and space. Note that the definition of SC differs from P∩PolyL, since for the former, it is required that a single algorithm runs in both polynomial time and polylogarithmic space; while for the latter, two separate algorithms suffice: one that runs in polynomial time, and another that runs in polylogarithmic space. (It is unknown whether SC and P∩PolyL are equal.)
DCFL, the strict subset of context-free languages recognized by deterministic pushdown automata, is contained in SC, as shown by Cook in 1979. [2] It is open whether all context-free languages can be recognized in SC, although they are known to be in P∩PolyL. [3]
It is open whether directed st-connectivity is in SC, although it is known to be in P∩PolyL (it is in P by a depth-first search and in PolyL by Savitch's theorem). Since directed st-connectivity is NL-complete, this question is equivalent to asking whether NL ⊆ SC.
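As an illustration of the polynomial-time half of that claim (the graph encoding and function below are an assumption for this sketch, not from the source), a minimal depth-first search decides directed st-connectivity in linear time; its visited list may use linear space, though, so it does not by itself witness membership in SC.

```python
def st_connected(n, edges, s, t):
    """Decide directed st-connectivity: is there a path from s to t?

    Polynomial (in fact linear) time, but the visited list may use linear
    space, so this sketch only shows the 'P' half of P∩PolyL.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    visited = [False] * n
    stack = [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        if visited[u]:
            continue
        visited[u] = True
        stack.extend(adj[u])
    return False

# A path 0 -> 1 -> 2 exists, but there is no path from 2 back to 0.
print(st_connected(3, [(0, 1), (1, 2)], 0, 2))  # True
print(st_connected(3, [(0, 1), (1, 2)], 2, 0))  # False
```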
RL and BPL are the classes of problems solvable by probabilistic Turing machines in logarithmic space and polynomial time. In 1992, Noam Nisan showed the weak derandomization result that both are contained in SC. [4] In other words, given polylogarithmic space, a deterministic machine can simulate logarithmic-space probabilistic algorithms.
In computational complexity theory, a branch of computer science, bounded-error probabilistic polynomial time (BPP) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability bounded by 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine.
In computational complexity theory, NP is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine.
In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can, unlike a deterministic Turing machine, have stochastic results; that is, on a given input, it may have different run times from one execution to the next, or it may not halt at all; furthermore, it may accept an input in one execution and reject the same input in another execution.
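A toy Python sketch of that behavior (purely illustrative; nothing below comes from the source): the answer on a fixed input depends on internal coin flips, so repeated executions may accept and reject the same input.

```python
import random

def coin_majority(x: int) -> bool:
    """Flip x fair coins and accept iff heads outnumber tails.

    The same input can be accepted in one run and rejected in another,
    mimicking the stochastic behavior of a probabilistic Turing machine.
    """
    heads = sum(random.randint(0, 1) for _ in range(x))
    return heads > x - heads

# Repeated runs on the same input need not agree.
print([coin_majority(9) for _ in range(5)])
```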
The space complexity of an algorithm or a computer program is the amount of memory space required to solve an instance of the computational problem as a function of characteristics of the input. It is the memory required by an algorithm until it executes completely. This includes the memory space used by its inputs, called input space, and any other (auxiliary) memory it uses during execution, which is called auxiliary space.
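A small made-up example (not from the source) of the input/auxiliary distinction: summing a list reads O(n) input space but keeps only a constant amount of auxiliary space.

```python
def total(values):
    """Sum a list of numbers.

    The input occupies O(n) space, but the algorithm itself maintains only
    one running accumulator, so its auxiliary space is O(1).
    """
    acc = 0              # auxiliary space: a single number
    for x in values:     # input space: the list being read
        acc += x
    return acc

print(total([3, 1, 4, 1, 5]))  # 14
```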
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
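As an assumed illustrative sketch (not from the source), counting comparisons as the elementary operation in a linear search makes that constant-factor relationship concrete: the operation count, and hence the running time, grows linearly with the input size.

```python
def linear_search(items, target):
    """Return (index, comparisons), counting comparisons as the elementary step."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

# Worst case: the target is absent, so exactly len(items) comparisons occur,
# i.e. the running time is O(n) in the length of the list.
print(linear_search([2, 4, 6, 8], 7))  # (-1, 4)
```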
A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, the output, or both are random variables.
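One classic example (chosen here for illustration; the source does not name it) is Freivalds' algorithm, which checks a claimed matrix product using random vectors; its output is a random variable with one-sided error.

```python
import random

def freivalds(a, b, c, trials=20):
    """Randomized check of the claim a*b == c for n x n matrices.

    If a*b == c the test always accepts; otherwise each trial detects the
    mismatch with probability at least 1/2, so after `trials` rounds a wrong
    product is accepted with probability at most 2**-trials.
    """
    n = len(a)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False   # definitely a*b != c
    return True            # probably a*b == c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(freivalds(a, b, [[19, 22], [43, 50]]))  # True  (correct product)
print(freivalds(a, b, [[19, 22], [43, 51]]))  # almost certainly False
```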
In computational complexity theory, a complexity class is a set of computational problems "of related resource-based complexity". The two most commonly analyzed resources are time and memory.
In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof, as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way.
In computational complexity theory, P, also known as PTIME or DTIME(n^O(1)), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time.
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
In computational complexity theory, the complexity class NEXPTIME is the set of decision problems that can be solved by a non-deterministic Turing machine using time 2^(n^k) for some constant k.
In computational complexity theory, NL is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space.
In computational complexity theory, L is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be read as well as written. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of boolean flags, and many basic logspace algorithms use the memory in this way.
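A small Python sketch of that memory discipline (an assumed example, not from the source): checking whether a string is a palindrome touches the read-only input only through two indices, each needing O(log n) bits, much as a logspace machine keeps a constant number of pointers into its input tape.

```python
def is_palindrome(s: str) -> bool:
    """Check whether s reads the same forwards and backwards.

    The input is only read, never written; the working memory is just the
    two indices i and j, each of which fits in O(log n) bits.
    """
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1
        j -= 1
    return True

print(is_palindrome("racecar"))  # True
print(is_palindrome("turing"))   # False
```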
In computational complexity theory, SL is the complexity class of problems log-space reducible to USTCON, which is the problem of determining whether there exists a path between two vertices in an undirected graph, otherwise described as the problem of determining whether two vertices are in the same connected component. This problem is also called the undirected reachability problem. It does not matter whether many-one reducibility or Turing reducibility is used. Although originally described in terms of symmetric Turing machines, that equivalent formulation is very complex, and the reducibility definition is what is used in practice.
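To make the USTCON decision problem concrete, here is an illustrative union-find sketch (not from the source); it uses far more than logarithmic space, so it only shows what is being decided, not how it is solved within the class itself.

```python
def same_component(n, edges, u, v):
    """Decide USTCON: are u and v in the same connected component of an
    undirected graph with vertices 0..n-1?"""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)          # union the two endpoints
    return find(u) == find(v)

print(same_component(4, [(0, 1), (2, 3)], 0, 1))  # True
print(same_component(4, [(0, 1), (2, 3)], 0, 3))  # False
```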
Randomized Logarithmic-space (RL), sometimes called RLP, is the complexity class, in computational complexity theory, of problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with one-sided error. It is named in analogy with RP, which is similar but has no logarithmic space restriction.
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
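A toy Python sketch of the interface only (the hash-and-counter construction below is a common heuristic assumed for illustration, not taken from the source, and carries no proof that it fools any particular class of statistical tests): a deterministic procedure that stretches a short seed into a longer string.

```python
import hashlib

def expand(seed: bytes, out_len: int) -> bytes:
    """Deterministically stretch a short seed into out_len bytes by hashing
    the seed together with a counter.  This shows only the PRG interface
    (short seed in, longer string out), with no distinguishing guarantee.
    """
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:out_len]

print(expand(b"short seed", 48).hex())
```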
In computational complexity theory, L/poly is the complexity class of logarithmic space machines with a polynomial amount of advice. L/poly is a non-uniform logarithmic space class, analogous to the non-uniform polynomial time class P/poly.
In formal language theory, deterministic context-free languages (DCFL) are a proper subset of context-free languages. They are the context-free languages that can be accepted by a deterministic pushdown automaton. DCFLs are always unambiguous, meaning that they admit an unambiguous grammar. There are non-deterministic unambiguous CFLs, so DCFLs form a proper subset of unambiguous CFLs.
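For a concrete instance (illustrative, not from the source), the language { a^n b^n : n ≥ 0 } is a DCFL; a single deterministic pass with one stack recognizes it.

```python
def accepts_anbn(s: str) -> bool:
    """Deterministic pushdown-style recognizer for { a^n b^n : n >= 0 }:
    push for each 'a', pop for each 'b', accept iff the stack empties."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False   # an 'a' after a 'b' cannot occur in a^n b^n
            stack.append('A')
        elif ch == 'b':
            seen_b = True
            if not stack:
                return False   # more b's than a's
            stack.pop()
        else:
            return False       # alphabet is {a, b}
    return not stack

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```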
In computational complexity theory, BPL, sometimes called BPLP, is the complexity class of problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with two-sided error. It is named in analogy with BPP, which is similar but has no logarithmic space restriction.
In computational complexity theory, a problem is NP-complete when it is in NP and every problem in NP can be reduced to it in polynomial time.