Symmetric Turing machine

A symmetric Turing machine is a Turing machine which has a configuration graph that is undirected (that is, configuration i yields configuration j if and only if j yields i).

Definition of symmetric Turing machines

Formally, we define a variant of Turing machines with a set of transitions of the form (p, ab, D, cd, q), where p, q are states, ab, cd are pairs of tape symbols, and D is a direction. If D is left, then the head of a machine in state p above a tape symbol b preceded by a symbol a can be moved left, changing the state to q and replacing the symbols a, b by c, d. The opposite transition, which undoes this move, can always be applied. If D is right, the transition is analogous. The ability to peek at two symbols and change both at a time is non-essential, but makes the definition easier.
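
For concreteness, here is a small Python sketch (an illustration, not part of Lewis and Papadimitriou's formalism) that models such transitions on explicit configurations and checks that every move is undone by its symmetric counterpart, which is what makes the configuration graph undirected:

    # A configuration is (state, tape, head) with tape a tuple of symbols.
    # A transition is t = (p, ab, D, cd, q) as in the definition above.

    def apply(config, t):
        """Apply transition t if it is enabled in `config`; return the
        successor configuration, or None if t does not apply."""
        state, tape, head = config
        p, (a, b), D, (c, d), q = t
        if state != p:
            return None
        if D == 'L':
            # head reads b at `head`, preceded by a at `head - 1`
            if head == 0 or tape[head - 1] != a or tape[head] != b:
                return None
            new = list(tape)
            new[head - 1], new[head] = c, d
            return (q, tuple(new), head - 1)
        else:  # D == 'R': head reads a, followed by b
            if head + 1 >= len(tape) or tape[head] != a or tape[head + 1] != b:
                return None
            new = list(tape)
            new[head], new[head + 1] = c, d
            return (q, tuple(new), head + 1)

    def inverse(t):
        """The symmetric (reverse) transition: a left move is undone by a
        right move that restores the rewritten pair, and vice versa."""
        p, ab, D, cd, q = t
        return (q, cd, 'R' if D == 'L' else 'L', ab, p)

    # Applying a transition and then its inverse returns to the starting
    # configuration, so the edges of the configuration graph are undirected.
    t = ('p', ('a', 'b'), 'L', ('c', 'd'), 'q')
    c0 = ('p', ('a', 'b', '_'), 1)
    c1 = apply(c0, t)
    assert c1 == ('q', ('c', 'd', '_'), 0)
    assert apply(c1, inverse(t)) == c0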

Such machines were first defined in 1982 by Harry R. Lewis and Christos Papadimitriou,[1][2] who were looking for a class in which to place USTCON, the problem of deciding whether there is a path between two given vertices s, t in an undirected graph. Until then, USTCON could be placed only in NL, despite seeming not to require nondeterminism (the directed variant STCON was known to be complete for NL). Symmetric Turing machines are a kind of Turing machine with limited nondeterministic power, shown to be at least as powerful as deterministic Turing machines, giving an interesting intermediate case.

STIME(T(n)) is the class of the languages accepted by a symmetric Turing machine running in time T(n). It can easily be proved that STIME(T(n)) = NTIME(T(n)), by limiting the nondeterminism of any machine in NTIME(T(n)) to an initial stage where a string of symbols is nondeterministically written, followed by a deterministic computation.
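
As an illustration of that normal form (a minimal sketch; `verify` is a hypothetical deterministic predicate standing in for the deterministic stage, and exhaustive enumeration stands in for the nondeterministic write):

    from itertools import product

    # Sketch of the normal form: all nondeterminism is pushed into an
    # initial stage that writes a guess string; everything afterwards is
    # deterministic. In the simulation the guess length would be bounded
    # by the machine's running time T(|x|); here it is a fixed parameter.
    def nondeterministic_accept(x, verify, alphabet='01', guess_len=8):
        # `verify` is a hypothetical deterministic predicate (assumption).
        return any(verify(x, ''.join(g))
                   for g in product(alphabet, repeat=guess_len))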

SL=L

SSPACE(S(n)) is the class of the languages accepted by a symmetric Turing machine running in space O(S(n)), and SL = SSPACE(log n).

SL can equivalently be defined as the class of problems logspace reducible to USTCON. Lewis and Papadimitriou showed this equivalence by constructing a nondeterministic machine for USTCON whose properties, as they proved, suffice for the construction of an equivalent symmetric Turing machine. They then observed that every language in SL is logspace reducible to USTCON: by the symmetry of the computation, the configurations can be viewed as the vertices of an undirected graph, with the symmetric transitions as its edges, so acceptance reduces to undirected reachability between the initial and accepting configurations.
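
The following sketch shows that graph view (illustrative only: the real reduction works in logarithmic space, whereas this materializes the reachable part of the configuration graph; `apply` and `inverse` are the helpers from the earlier sketch):

    from collections import deque

    def accepts(start, accepting_states, transitions, apply, inverse):
        """USTCON on the configuration graph: is a configuration whose
        state is accepting reachable from `start`?"""
        seen, queue = {start}, deque([start])
        while queue:
            cfg = queue.popleft()
            if cfg[0] in accepting_states:
                return True
            for t in transitions:
                # symmetry: each edge can be followed in both directions
                for nxt in (apply(cfg, t), apply(cfg, inverse(t))):
                    if nxt is not None and nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
        return False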

In 2004, Omer Reingold proved that SL=L by exhibiting a deterministic algorithm for USTCON that runs in logarithmic space,[3] for which he received the 2005 Grace Murray Hopper Award and (together with Avi Wigderson and Salil Vadhan) the 2009 Gödel Prize. The proof uses the zig-zag product to efficiently construct expander graphs.
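
For contrast, the straightforward way to decide USTCON on an explicit graph is a sketch like the following, which uses union-find and therefore linear space, while Reingold's algorithm gets by with O(log n) space:

    # USTCON on an explicit undirected graph via union-find
    # (illustrative; linear space, unlike Reingold's O(log n) algorithm).
    def ustcon(n, edges, s, t):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        for u, v in edges:
            parent[find(u)] = find(v)
        return find(s) == find(t)

    # Example: 0 and 1 are connected, 0 and 3 are not.
    assert ustcon(4, [(0, 1), (2, 3)], 0, 1)
    assert not ustcon(4, [(0, 1), (2, 3)], 0, 3)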

Notes

  1. Jesper Jansson. Deterministic Space-Bounded Graph Connectivity Algorithms. Manuscript, 1998.
  2. Harry R. Lewis and Christos H. Papadimitriou. Symmetric space-bounded computation. Theoretical Computer Science, pp. 161–187, 1982.
  3. Omer Reingold. Undirected connectivity in log-space. Journal of the ACM, 55 (4): 1–24, 2008. doi:10.1145/1391289.1391291.
