Unbounded nondeterminism

In computer science, unbounded nondeterminism or unbounded indeterminacy is a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources, while still guaranteeing that the request will eventually be serviced. Unbounded nondeterminism became an important issue in the development of the denotational semantics of concurrency, and later became part of research into the theoretical concept of hypercomputation.[1]

Fairness

Discussion of unbounded nondeterminism tends to get involved with discussions of fairness. The basic concept is that all computation paths must be "fair" in the sense that if the machine enters a state infinitely often, it must take every possible transition from that state. This amounts to requiring that the machine be guaranteed to service a request if it can, since an infinite sequence of states will only be allowed if there is no transition that leads to the request being serviced. Equivalently, every possible transition must occur eventually in an infinite computation, although it may take an unbounded amount of time for the transition to occur. This concept is to be distinguished from the local fairness of flipping a "fair" coin, by which it is understood that it is possible for the outcome to always be heads for any finite number of steps, although as the number of steps increases, this will almost surely not happen.
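
The distinction can be made concrete with a small sketch. In the hypothetical program below, a fair scheduler guarantees termination, yet no bound can be placed in advance on the result. The choose() primitive is an assumption of this illustration, modeled here by a coin flip, which (as noted above) is fair only almost surely rather than in the strict scheduling sense:

```python
import random

def choose() -> bool:
    """Stand-in for a nondeterministic binary choice; modeled by a coin flip."""
    return random.random() < 0.5

def unbounded_counter() -> int:
    """Under a fair scheduler this loop must terminate: the 'stop' branch
    is enabled at every step, and fairness forbids ignoring it forever.
    Yet no finite bound on the result exists: for every n, some fair
    execution returns n."""
    n = 0
    while choose():   # 'continue' branch: increment and loop again
        n += 1
    return n          # 'stop' branch: terminate with the current value

print(unbounded_counter())
```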

An example of the role of fair or unbounded nondeterminism in the merging of strings was given by William D. Clinger in his 1981 thesis. He defined a "fair merge" of two strings to be a third string in which each character of each string must occur eventually. He then considered the set merge(S, T) of all fair merges of two strings S and T, and supposed, for the sake of contradiction, that merge is a monotone function. By monotonicity, merge(⊥, 1ω) ⊆ merge(0, 1ω), where ⊥ is the empty stream and 1ω denotes the infinite stream of 1s. Now merge(⊥, 1ω) = {1ω}, so 1ω must be an element of merge(0, 1ω); but 1ω never contains the character 0, so it is not a fair merge of 0 and 1ω, a contradiction. He concluded that:

It appears that a fair merge cannot be written as a nondeterministic data flow program operating on streams. [2]
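
Spelled out, the argument is a short monotonicity calculation. The following reconstruction assumes the prefix ordering ⊑ on streams, under which the empty stream ⊥ approximates every stream:

```latex
\begin{align*}
\bot \sqsubseteq 0 \;&\Longrightarrow\; \operatorname{merge}(\bot, 1^{\omega}) \subseteq \operatorname{merge}(0, 1^{\omega})
  && \text{(monotonicity)}\\
\operatorname{merge}(\bot, 1^{\omega}) = \{1^{\omega}\} \;&\Longrightarrow\; 1^{\omega} \in \operatorname{merge}(0, 1^{\omega})
  && \text{(by the inclusion above)}
\end{align*}
```

Since 1ω contains no 0, it is not a fair merge of 0 and 1ω, so no monotone set-valued stream function can be a fair merge.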

On the possibility of implementing unbounded nondeterminism

Edsger Dijkstra argued that it is impossible to implement systems with unbounded nondeterminism.[3] For this reason, Tony Hoare suggested that "an efficient implementation should try to be reasonably fair."[4]

Nondeterministic automata

Nondeterministic Turing machines have only bounded nondeterminism. Likewise, sequential programs containing guarded commands as the only sources of nondeterminism have only bounded nondeterminism.[3] Briefly, choice nondeterminism is bounded. Gordon Plotkin gave a proof in his original paper on powerdomains:

Now the set of initial segments of execution sequences of a given nondeterministic program P, starting from a given state, will form a tree. The branching points will correspond to the choice points in the program. Since there are always only finitely many alternatives at each choice point, the branching factor of the tree is always finite. That is, the tree is finitary. Now Kőnig's lemma says that if every branch of a finitary tree is finite, then so is the tree itself. In the present case this means that if every execution sequence of P terminates, then there are only finitely many execution sequences. So if an output set of P is infinite, it must contain [a nonterminating computation]. [5]
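
A minimal sketch of the enumeration Plotkin describes, assuming the program is given as a step function that returns the finitely many possible successor states (an encoding invented for this illustration):

```python
def outputs(step, initial):
    """Collect the halt states reachable from 'initial'. Each call to
    'step' returns the finitely many successor states, or [] when the
    program has halted, so the execution tree is finitary. If every
    execution sequence terminates, Kőnig's lemma says the tree is
    finite, so this search halts and the result set is finite."""
    results = set()
    stack = [initial]
    while stack:
        state = stack.pop()
        succs = step(state)
        if not succs:            # no successors: a halt state
            results.add(state)
        else:
            stack.extend(succs)  # finitely many branches per node
    return results

# Example: a binary choice at each of 3 steps; the output set is the
# 8 bit-strings of length 3.
print(outputs(lambda s: [s + "0", s + "1"] if len(s) < 3 else [], ""))
```

Conversely, if some execution sequence fails to terminate, this search need not halt, which is exactly Plotkin's conclusion: an infinite output set forces a nonterminating computation.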

Indeterminacy versus nondeterministic automata

William Clinger provided the following analysis of the above proof by Plotkin:

This proof depends upon the premise that if every node x of a certain infinite branch can be reached by some computation c, then there exists a computation c that visits every node x on the branch. ... Clearly this premise follows not from logic but rather from the interpretation given to choice points. This premise fails for arrival nondeterminism [in the arrival of messages in the Actor model] because of finite delay [in the arrival of messages]. Though each node on an infinite branch must lie on a branch with a limit, the infinite branch need not itself have a limit. Thus the existence of an infinite branch does not necessarily imply a nonterminating computation. [2]

Unbounded nondeterminism and noncomputability

Spaan et al. have argued that it is possible for an unboundedly nondeterministic program to solve the halting problem; their algorithm consists of two parts, defined as follows:[6]

The first part of the program requests a natural number from the second part; after receiving it, it will iterate the desired Turing machine for that many steps, and accept or reject according to whether the machine has yet halted.

The second part of the program nondeterministically chooses a natural number on request. The number is stored in a variable which is initialized to 0; then the program repeatedly chooses whether to increment the variable, or service the request. The fairness constraint requires that the request eventually be serviced, for otherwise there is an infinite loop in which only the "increment the variable" branch is ever taken.

Clearly, if the machine does halt, this algorithm has a path which accepts (one in which the number chosen by the second part is at least the number of steps the machine needs to halt). If the machine does not halt, this algorithm will always reject, no matter what number the second part of the program returns.
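
As a sketch, the two parts can be written down directly. This is only an illustration under stated assumptions: the nondeterministic choice is again modeled by a coin flip, and the step function stands in for the Turing machine's transition function; nothing here is an executable decision procedure for halting.

```python
import random

def choose_natural() -> int:
    """Part two: starting from 0, repeatedly choose between incrementing
    the variable and servicing the request. Fairness requires the
    'service' branch to be taken eventually, so the loop terminates,
    yet every natural number is a possible outcome."""
    n = 0
    while random.random() < 0.5:  # 'increment the variable' branch
        n += 1
    return n                      # 'service the request' branch

def accepts(step, state) -> bool:
    """Part one: request a number n from part two, iterate the machine's
    transition function n times, and accept iff it has halted by then.
    'step' returns the next state, or None once the machine halts."""
    for _ in range(choose_natural()):
        if state is None:
            return True           # machine halted within the n steps
        state = step(state)
    return state is None

# Toy machine that halts after reaching state 5: fair executions in
# which the chosen number is large enough accept.
print(accepts(lambda k: None if k >= 5 else k + 1, 0))
```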

Arguments for dealing with unbounded nondeterminism

Clinger and Carl Hewitt[citation needed] have developed a model of concurrent computation, known as the Actor model, with unbounded nondeterminism built in (Clinger 1981);[7][8][9] this allows computations that cannot be implemented by Turing machines, as seen above. However, these researchers emphasize that their model of concurrent computation cannot implement any functions outside the class of recursive functions[citation needed] defined by Church, Kleene, Turing, etc. (See Indeterminacy in concurrent computation.)

Hewitt justified his use of unbounded nondeterminism by arguing that there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability in electronics). Arbiters are used in computers to deal with the circumstance that computer clocks operate asynchronously with respect to input from outside, e.g., keyboard input, disk access, network input, etc. So it could take an unbounded time for a message sent to a computer to be received, and in the meantime the computer could traverse an unbounded number of states.

He further argued that electronic mail enables unbounded nondeterminism, since mail can be stored on servers indefinitely before being delivered, and that data links to servers on the Internet can likewise be out of service indefinitely. This gave rise to the unbounded nondeterminism controversy.[10]

Hewitt's analysis of fairness

Hewitt argued that issues in fairness derive in part from the global state point of view. The oldest models of computation (e.g., Turing machines, Post productions, the lambda calculus) are based on mathematics that makes use of a global state to represent a computational step: each step takes the computation from one global state to the next. The global state approach was continued in automata theory for finite-state machines and pushdown automata, including their nondeterministic versions. All of these models have the property of bounded nondeterminism: if a machine always halts when started in its initial state, then there is a bound on the number of states in which it can halt.

Hewitt argued that there is a fundamental difference between choices in global state nondeterminism and the arrival order indeterminacy (nondeterminism) of his Actor model. In global state nondeterminism, a "choice" is made for the "next" global state. In arrival order indeterminacy, arbitration locally decides each arrival order in an unbounded amount of time. While a local arbitration is proceeding, unbounded activity can take place elsewhere. There is no global state and consequently no "choice" to be made as to the "next" global state.


References

  1. Ord, Toby (2002). "Hypercomputation: computing more than the Turing machine". arXiv:math/0209332.
  2. Clinger, William D. (May 1, 1981). Foundations of Actor Semantics (AI Technical Report). Massachusetts Institute of Technology. hdl:1721.1/6935.
  3. Dijkstra, Edsger (1976). A Discipline of Programming. Prentice-Hall Series in Automatic Computation. Prentice-Hall. ISBN 9780613924115.
  4. Hoare, C. A. R. (August 1978). "Communicating Sequential Processes". Communications of the ACM. 21 (8): 666–677. doi:10.1145/359576.359585. S2CID 849342.
  5. Plotkin, Gordon (September 1976). "A powerdomain construction". SIAM Journal on Computing. 5 (3): 452–487. doi:10.1137/0205035.
  6. Spaan, Edith; Torenvliet, Leen; van Emde Boas, Peter (February 1989). "Nondeterminism, Fairness and a Fundamental Analogy". Bulletin of the EATCS. 37: 186–193.
  7. Hewitt, Carl (April 1985). "The Challenge of Open Systems". BYTE. McGraw Hill. pp. 223–242. ISSN 0360-5280. Reprinted as Hewitt, Carl (April 1990). "The Challenge of Open Systems". In Partridge, Derek; Wilks, Yorick (eds.). The Foundations of Artificial Intelligence: A Sourcebook. Cambridge University Press. pp. 383–395. ISBN 9780521359443.
  8. Hewitt, Carl; Agha, Gul (1988). "Guarded Horn clause languages: are they deductive and logical?". Proceedings of the International Conference on Fifth Generation Computer Systems. FGCS 1988. Tokyo: OHMSHA and Springer-Verlag. pp. 650–657. ISBN 3540195580. Also as Hewitt, Carl; Agha, Gul (June 1991). "Guarded Horn clause languages: are they deductive and logical?". In Winston, Patrick Henry; Shellard, Sarah Alexandra (eds.). Artificial Intelligence at MIT: Expanding Frontiers. MIT Press. pp. 582–593. ISBN 9780262231503.
  9. Hewitt, Carl (May 2006). "What Is Commitment? Physical, Organizational, and Social". Coordination, Organizations, Institutions, and Norms in Agent Systems II. AAMAS 2006 International Workshop, COIN. Hakodate, Japan: Springer Berlin Heidelberg. pp. 293–307. doi:10.1007/978-3-540-74459-7_19.
  10. Hewitt, Carl (March 2006). "The Repeated Demise of Logic Programming and Why It Will Be Reincarnated". What Went Wrong and Why: Lessons from AI Research and Applications. 2006 AAAI Spring Symposium (Technical Report). Stanford, California: AAAI. pp. 2–9. SS-06-08. Retrieved March 10, 2022.