Pointer machine

In theoretical computer science, a pointer machine is an atomistic abstract computational machine whose storage structure is a graph. A pointer algorithm is an algorithm restricted to the pointer machine model. [1]

Some particular types of pointer machines are called a linking automaton, a KU-machine, an SMM, an atomistic LISP machine, a tree-pointer machine, etc. [2]

Pointer machines do not have arithmetic instructions. Computation proceeds only by reading input symbols, modifying and performing various tests on the storage structure (the pattern of nodes and pointers), and emitting output symbols based on those tests. In this sense the model is similar to the Turing machine.

Types of "pointer machines"

Both Gurevich and Ben-Amram list a number of very similar "atomistic" models of "abstract machines"; [3] [2] Ben-Amram believes that the "atomistic models" must be distinguished from "high-level" models. The following atomistic models are presented below:

Schönhage's storage modification machine (SMM)

the Kolmogorov–Uspenskii machine (KUM or KU-machine)

Ben-Amram also presents several further varieties of pointer machines that are not discussed in this article.

Schönhage's storage modification machine (SMM) model

The following presentation follows van Emde Boas. [6]

The machine consists of a fixed alphabet of input symbols, a fixed program, and a mutable directed graph with its arrows labelled by alphabet symbols. The graph is the machine's storage. Each node of the graph has exactly one outgoing arrow labelled with each symbol, although some of these may loop back into the original node. One fixed node of the graph is identified as the start or "active" node.

Each word of symbols in the alphabet can then be translated to a pathway through the storage graph; for example, 10011 would translate to taking edge 1 from the start node, then edge 0 from the resulting node, then edge 0, then edge 1, then edge 1. A word thus identifies a node (the final node of its path), but this identification changes as the graph changes during the computation.
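
As a concrete illustration, consider the following minimal Python sketch. The dictionary-based node representation is an assumption made for illustration only, not part of the model: each node stores its outgoing edges in a dictionary keyed by symbol, and a word addresses a node by following edges symbol by symbol.

    # Sketch: addressing a node by a word over the alphabet {'0', '1'}.
    # Each node is a dict that maps a symbol to the node its edge points to.

    def address(start, word):
        """Return the node reached by following the path spelled by `word`
        from `start`; the empty word addresses `start` itself."""
        node = start
        for symbol in word:
            node = node[symbol]
        return node

    # A two-node storage graph: from `center`, edge '1' goes to `other` and
    # edge '0' loops back; from `other`, edge '1' loops and edge '0' returns
    # to `center`.
    center, other = {}, {}
    center.update({'0': center, '1': other})
    other.update({'0': center, '1': other})

    assert address(center, "") is center     # the empty word addresses the start node
    assert address(center, "10") is center   # edge 1 to other, then edge 0 back to center
    assert address(center, "11") is other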

The machine can receive instructions which change the layout of the graph. The basic instructions are:

(1) new w instruction, which creates a new node at the end of the path w, with all its edges directed to the next-to-last node in w.

(2) set w to v instruction, which (re)directs an edge to a different node. Here w and v represent words; the instruction changes the destination of the last edge in the path w to the node addressed by v.

Some steps in the execution of a 2-symbol {0,1} machine with instructions: (1) new ε; (2) new 1; (3) new 11. Instruction #1 initializes the storage graph as a single node, node 1, in the storage graph.

(3) if v = w then instruction z: a conditional that compares the two paths represented by the words v and w to see whether they end at the same node; if so, jump to instruction z, otherwise continue. This instruction serves the same purpose as the if statement in any imperative programming language.

Evolution of the storage graph in a 2-symbol {0,1} machine with instructions: (1) new ε; (2) new 1; (3) new 11; (4) new 10; (5) set 111 to 10. At this time, if the machine were to do the if 10=111 then xxx, then the test would be successful and the machine would indeed jump to xxx.

(4) read and write instructions for input/output, accessing a read-only input tape and a write-only output tape, both containing symbols of the alphabet.
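
To make the instruction set concrete, the following Python sketch simulates the new, set, and conditional instructions on a dictionary-based storage graph over the two-symbol alphabet {0, 1}. The representation and the helper names (make_node, address, set_to, same) are assumptions made for illustration, not part of Schönhage's formulation; in particular, new ε is treated here as (re)initializing the storage to a single self-looping node, following the figure captions above, and the read/write I/O instructions are omitted.

    # Sketch of the SMM instructions over the alphabet {'0', '1'}.
    # The node/edge representation is an assumption for illustration only.

    ALPHABET = "01"

    def make_node(target=None):
        """Create a node; every outgoing edge points to `target`, or loops
        back to the new node itself if no target is given."""
        node = {}
        for s in ALPHABET:
            node[s] = node if target is None else target
        return node

    def address(start, word):
        """Node reached by following the path spelled by `word` from `start`."""
        node = start
        for symbol in word:
            node = node[symbol]
        return node

    def new(start, w):
        """(1) new w: create a node at the end of path w; its edges point to
        the next-to-last node of w, and the last edge of w is redirected to
        the new node.  new with the empty word is taken here to (re)initialize
        the storage as a single node, as in the figure captions above."""
        if w == "":
            return make_node()
        parent = address(start, w[:-1])        # next-to-last node of the path
        fresh = make_node(target=parent)
        parent[w[-1]] = fresh                  # path w now ends at the new node
        return start

    def set_to(start, w, v):
        """(2) set w to v: redirect the last edge of path w to the node addressed by v."""
        address(start, w[:-1])[w[-1]] = address(start, v)

    def same(start, w, v):
        """(3) if v = w: do the two paths end at the same node?"""
        return address(start, w) is address(start, v)

    # The five steps from the second figure; the read/write I/O
    # instructions are omitted from this sketch.
    g = new(None, "")              # (1) new ε
    g = new(g, "1")                # (2) new 1
    g = new(g, "11")               # (3) new 11
    g = new(g, "10")               # (4) new 10
    set_to(g, "111", "10")         # (5) set 111 to 10
    assert same(g, "10", "111")    # the conditional test from the caption succeeds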

Knuth noted that the SMM model coincides with a type of "linking automaton" briefly explained in volume one of The Art of Computer Programming. [4]

Kolmogorov–Uspenskii machine (KU-machine) model

The KUM differs from the SMM in allowing only invertible pointers: for every pointer from a node x to a node y, an inverse pointer from y to x must be present, labelled by the same symbol. In other words, the storage graph is undirected. Since outgoing pointers must be labelled by distinct symbols of the alphabet, both KUM and SMM graphs have O(1) out-degree; however, the invertibility of KUM pointers restricts the in-degree to O(1) as well. This addresses some concerns about physical (as opposed to purely informational) realism.
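
The invariant can be sketched as follows (an illustration only; the dictionary representation and the link helper are assumptions, not part of the KUM definition): setting an edge labelled s from x to y also sets the inverse edge labelled s from y back to x.

    # Sketch of the KUM pointer invariant: every edge from x to y has an
    # inverse edge from y back to x carrying the same label, so the
    # storage graph is effectively undirected.

    def link(x, y, symbol):
        """Create the edge x --symbol--> y together with its inverse."""
        x[symbol] = y
        y[symbol] = x

    a, b = {}, {}
    link(a, b, '1')
    # Following the same label from either endpoint crosses the same
    # undirected edge, so in-degree stays bounded like out-degree.
    assert a['1'] is b and b['1'] is a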

There are other, minor differences between the models, such as the form of the program: a state table instead of a list of instructions.

Considerations regarding the pointer-machine model

Use of the model in complexity theory: van Emde Boas (1990) expresses concern that this form of abstract model is:

"an interesting theoretical model, but ... its attractiveness as a fundamental model for complexity theory is questionable. Its time measure is based on uniform time in a context where this measure is known to underestimate the true time complexity. The same observation holds for the space measure for the machine" (van Emde Boas (1990) p. 35)

Gurevich also expresses concern:

"Pragmatically speaking, the Schönhage model provides a good measure of time complexity at the current state of the art (though I would prefer something along the lines of the random access computers of Angluin and Valiant)". [7]

Schönhage demonstrates the real-time equivalence of two types of random-access machine with the SMM. [4]

Algorithms in the SMM model: Schönhage demonstrates that the SMM can perform integer multiplication in linear time. [4]

Potential uses for the model: Gurevich wonders whether a parallel KU-machine "resembles somewhat the human brain". [8]

Parallel computing: All the models mentioned above are sequential. A parallel (atomistic) pointer machine model has been proposed by Cook and Dymond; [9] a high-level (non-atomistic) parallel pointer machine model has also been used. [10]

See also

Register machine—generic register-based abstract machine computational model

Turing machine—generic tape-based abstract machine computational model

Further reading

Most references and a bibliography are to be found at the article Register machine. The following are particular to this article:

Jan van Leeuwen, ed., Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity. The MIT Press/Elsevier, 1990. ISBN 0-444-88071-2 (volume A).
van Emde Boas's treatment of SMMs appears on pp. 32–35. It clarifies Schönhage 1980, closely following but slightly expanding the Schönhage treatment. Both references may be needed for effective understanding.


References

  1. Cloteaux, Brian; Ranjan, Desh (2006). "Some Separation Results Between Classes of Pointer Algorithms".
  2. Amir Ben-Amram (1995). "What is a 'Pointer Machine'?", SIGACT News (ACM Special Interest Group on Automata and Computability Theory), vol. 26, 1995.
  3. Yuri Gurevich (2000). "Sequential Abstract State Machines Capture Sequential Algorithms", ACM Transactions on Computational Logic, vol. 1, no. 1 (July 2000), pp. 77–111.
  4. Arnold Schönhage (1980). "Storage Modification Machines", SIAM Journal on Computing, vol. 9, no. 3, August 1980.
  5. Andrey Kolmogorov and V. Uspenskii (1958). "On the definition of an algorithm", Uspekhi Mat. Nauk 13, pp. 3–28. English translation in American Mathematical Society Translations, Series II, vol. 29 (1963), pp. 217–245.
  6. Peter van Emde Boas, "Machine Models and Simulations", pp. 3–66 in: Jan van Leeuwen, ed., Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity. The MIT Press/Elsevier, 1990. ISBN 0-444-88071-2 (volume A).
  7. Gurevich (1988), p. 6, with reference to Angluin D. and Valiant L. G., "Fast Probabilistic Algorithms for Hamiltonian Circuits and Matchings", Journal of Computer and System Sciences 18 (1979), pp. 155–193.
  8. Yuri Gurevich (1988). "On Kolmogorov Machines and Related Issues", the column on "Logic in Computer Science", Bulletin of the European Association for Theoretical Computer Science, no. 35, June 1988, pp. 71–82.
  9. Cook, Stephen A.; Dymond, Patrick W. (March 1993). "Parallel pointer machines". Computational Complexity. 3: 19–30. doi:10.1007/BF01200405.
  10. Goodrich, M. T.; Kosaraju, S. R. (1996). "Sorting on a parallel pointer machine with applications to set expression evaluation". Journal of the ACM. 43 (2): 331–361. doi:10.1145/226643.226670.