Process calculus

In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation). Leading examples of process calculi include CSP, CCS, ACP, and LOTOS. [1] More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus.

Essential features

While the variety of existing process calculi is very large (including variants that incorporate stochastic behaviour, timing information, and specializations for studying molecular interactions), there are several features that all process calculi have in common: [2]

  1. Representing interactions between independent processes as communication (message-passing), rather than as the modification of shared variables.
  2. Describing processes and systems using a small collection of primitives, and operators for combining those primitives.
  3. Defining algebraic laws for the process operators, which allow process expressions to be manipulated using equational reasoning.

Mathematics of processes

To define a process calculus, one starts with a set of names (or channels) whose purpose is to provide means of communication. In many implementations, channels have rich internal structure to improve efficiency, but this is abstracted away in most theoretic models. In addition to names, one needs a means to form new processes from old ones. The basic operators, always present in some form or other, allow: [3]

  1. parallel composition of processes
  2. specification of which channels to use for sending and receiving data
  3. sequentialization of interactions
  4. hiding of interaction points
  5. recursion or process replication

Parallel composition

Parallel composition of two processes P and Q, usually written P | Q, is the key primitive distinguishing the process calculi from sequential models of computation. Parallel composition allows computation in P and Q to proceed simultaneously and independently. But it also allows interaction, that is synchronisation and flow of information from P to Q (or vice versa) on a channel shared by both. Crucially, an agent or process can be connected to more than one channel at a time.

Channels may be synchronous or asynchronous. In the case of a synchronous channel, the agent sending a message waits until another agent has received the message. Asynchronous channels do not require any such synchronization. In some process calculi (notably the π-calculus) channels themselves can be sent in messages through (other) channels, allowing the topology of process interconnections to change. Some process calculi also allow channels to be created during the execution of a computation.
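
As an informal illustration (not part of any calculus itself), Go's goroutines and channels mirror these ideas quite directly: an unbuffered channel gives a synchronous rendezvous, a buffered channel gives asynchronous sending, and channels are first-class values that can themselves be communicated. The sketch below is only an analogy; the channel names x and y are arbitrary.

    package main

    import "fmt"

    func main() {
        // An unbuffered channel is synchronous: a send blocks until a matching
        // receive is ready (a rendezvous), as in CSP-style synchronisation.
        x := make(chan int)

        // A buffered channel is (boundedly) asynchronous: a send completes
        // without waiting for a receiver, up to the buffer capacity.
        y := make(chan int, 8)

        // Parallel composition: the two anonymous processes below run
        // independently and interact with main only on the channels they share.
        go func() { x <- 42 }() // sends on x, waits for the rendezvous
        go func() { y <- 7 }()  // sends on y without waiting

        // The receiving process is connected to more than one channel at a time.
        fmt.Println(<-x, <-y)

        // As in the π-calculus, Go channels are first-class values and can be
        // sent over other channels, so the topology of interconnections can
        // change while the program runs.
        _ = make(chan chan int)
    }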

Communication

Interaction can be (but isn't always) a directed flow of information. That is, input and output can be distinguished as dual interaction primitives. Process calculi that make such distinctions typically define an input operator (e.g. x(v)) and an output operator (e.g. x⟨y⟩), both of which name an interaction point (here x) that is used to synchronise with a dual interaction primitive.

Should information be exchanged, it will flow from the outputting to the inputting process. The output primitive will specify the data to be sent. In x⟨y⟩, this data is y. Similarly, if an input expects to receive data, one or more bound variables will act as place-holders to be substituted by data, when it arrives. In x(v), v plays that role. The choice of the kind of data that can be exchanged in an interaction is one of the key features that distinguishes different process calculi.
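
Continuing the informal Go analogy, the sketch below pairs an output with the input that binds its data, and hints at how the element type of a channel constrains what may be exchanged; the identifiers x and v are illustrative only.

    package main

    import "fmt"

    func main() {
        x := make(chan string)

        // Output primitive, roughly x⟨"data"⟩: names the interaction point x
        // and specifies the data to be sent.
        go func() { x <- "data" }()

        // Input primitive, roughly x(v): names the same interaction point and
        // binds the received data to the place-holder v.
        v := <-x
        fmt.Println(v) // after the interaction, v stands for the data sent

        // What may be exchanged is a distinguishing feature of each calculus.
        // Rough Go analogues:
        _ = make(chan int)      // plain values (value-passing CCS, CSP)
        _ = make(chan chan int) // channel names (π-calculus)
        _ = make(chan func())   // processes themselves (higher-order π-calculus)
    }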

Sequential composition

Sometimes interactions must be temporally ordered. For example, it might be desirable to specify algorithms such as: first receive some data on x and then send that data on y. Sequential composition can be used for such purposes. It is well known from other models of computation. In process calculi, the sequentialisation operator is usually integrated with input or output, or both. For example, the process x(v)·P will wait for an input on x. Only when this input has occurred will the process P be activated, with the received data through x substituted for the identifier v.
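
A rough Go rendering of the example "receive on x, then send on y" is sketched below; the sequencing of the input before the output is expressed simply by statement order inside the goroutine.

    package main

    import "fmt"

    func main() {
        x := make(chan int)
        y := make(chan int)

        // Roughly x(v)·y⟨v⟩·0: wait for an input on x, and only then send the
        // received value on y.
        go func() {
            v := <-x // the rest of the process is not activated until this succeeds
            y <- v
        }()

        go func() { x <- 5 }() // some other process supplying data on x
        fmt.Println(<-y)       // prints 5
    }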

Reduction semantics

The key operational reduction rule, containing the computational essence of process calculi, can be given solely in terms of parallel composition, sequentialization, input, and output. The details of this reduction vary among the calculi, but the essence remains roughly the same. The reduction rule is:

    x⟨y⟩·P | x(v)·Q   →   P | Q[y/v]

The interpretation of this reduction rule is:

  1. The process x⟨y⟩·P sends a message, here y, along the channel x. Dually, the process x(v)·Q receives that message on channel x.
  2. Once the message has been sent, x⟨y⟩·P becomes the process P, while x(v)·Q becomes the process Q[y/v], which is Q with the place-holder v substituted by y, the data received on x.

The class of processes that P is allowed to range over as the continuation of the output operation substantially influences the properties of the calculus.
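
The following is a minimal, illustrative Go sketch (not a standard tool or library) that represents the fragment of syntax used above as data and applies the reduction rule once at top level. The type and function names (Proc, Out, In, Par, subst, step) are invented for this example.

    package main

    import "fmt"

    // A minimal syntax for the fragment needed by the reduction rule:
    // the null process, output x⟨y⟩·P, input x(v)·Q, and parallel composition.
    type Proc interface{}

    type Nil struct{}

    type Out struct { // Ch⟨Msg⟩·Cont
        Ch, Msg string
        Cont    Proc
    }

    type In struct { // Ch(Var)·Cont
        Ch, Var string
        Cont    Proc
    }

    type Par struct { // Left | Right
        Left, Right Proc
    }

    // subst replaces free occurrences of the place-holder v by the name y.
    func subst(p Proc, v, y string) Proc {
        switch q := p.(type) {
        case Out:
            ch, msg := q.Ch, q.Msg
            if ch == v {
                ch = y
            }
            if msg == v {
                msg = y
            }
            return Out{ch, msg, subst(q.Cont, v, y)}
        case In:
            if q.Var == v { // v is rebound here; do not substitute under the binder
                return q
            }
            ch := q.Ch
            if ch == v {
                ch = y
            }
            return In{ch, q.Var, subst(q.Cont, v, y)}
        case Par:
            return Par{subst(q.Left, v, y), subst(q.Right, v, y)}
        default: // Nil
            return p
        }
    }

    // step applies x⟨y⟩·P | x(v)·Q → P | Q[y/v] once, at the top level only.
    func step(p Proc) (Proc, bool) {
        par, ok := p.(Par)
        if !ok {
            return p, false
        }
        out, okOut := par.Left.(Out)
        in, okIn := par.Right.(In)
        if okOut && okIn && out.Ch == in.Ch {
            return Par{out.Cont, subst(in.Cont, in.Var, out.Msg)}, true
        }
        return p, false
    }

    func main() {
        // x⟨y⟩·0 | x(v)·v⟨z⟩·0   reduces to   0 | y⟨z⟩·0
        p := Par{Out{"x", "y", Nil{}}, In{"x", "v", Out{"v", "z", Nil{}}}}
        q, _ := step(p)
        fmt.Printf("%#v\n", q)
    }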

Hiding

Processes do not limit the number of connections that can be made at a given interaction point. But interaction points allow interference (i.e. interaction). For the synthesis of compact, minimal and compositional systems, the ability to restrict interference is crucial. Hiding operations allow control of the connections made between interaction points when composing agents in parallel. Hiding can be denoted in a variety of ways. For example, in the π-calculus the hiding of a name x in P can be expressed as (νx)P, while in CSP it might be written as P \ {x}.
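
As an informal Go analogy, hiding corresponds to keeping a channel local to a composed system so that no external process can interact on it. In the hypothetical counter below, only the channel out is exposed, while internal is restricted to the two composed processes.

    package main

    import "fmt"

    // counter wires up a small system around a private channel. Because the
    // channel named internal is created locally and never returned, no outside
    // process can interfere on it — a rough analogue of restriction (νx)P in
    // the π-calculus or hiding P \ {x} in CSP.
    func counter() <-chan int {
        internal := make(chan int) // hidden interaction point
        out := make(chan int)      // the only visible interaction point

        go func() { // producer process
            for n := 0; ; n++ {
                internal <- n
            }
        }()
        go func() { // forwarder process
            for v := range internal {
                out <- v
            }
        }()
        return out
    }

    func main() {
        c := counter()
        fmt.Println(<-c, <-c, <-c) // 0 1 2
    }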

Recursion and replication

The operations presented so far describe only finite interaction and are consequently insufficient for full computability, which includes non-terminating behaviour. Recursion and replication are operations that allow finite descriptions of infinite behaviour. Recursion is well known from the sequential world. Replication !P can be understood as abbreviating the parallel composition of a countably infinite number of copies of P:

    !P  =  P | !P
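
Operationally, replication of an input-prefixed process can be mimicked by a server loop that spawns a fresh copy of the continuation for every message received. The Go sketch below is such an analogy, not a construct of any particular calculus.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        x := make(chan int)
        var wg sync.WaitGroup

        // Replication !x(v)·P behaves like an unbounded supply of copies of
        // x(v)·P composed in parallel: every input on x activates a fresh copy
        // of the continuation P, and the replicated process itself persists.
        go func() {
            for v := range x {
                go func(v int) { // one fresh copy of P per interaction
                    defer wg.Done()
                    fmt.Println("handled", v)
                }(v)
            }
        }()

        for i := 0; i < 3; i++ {
            wg.Add(1)
            x <- i
        }
        close(x)
        wg.Wait() // wait for all spawned copies to finish
    }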

Null process

Process calculi generally also include a null process (variously denoted as nil, 0, STOP, δ, or some other appropriate symbol) which has no interaction points. It is utterly inactive and its sole purpose is to act as the inductive anchor on top of which more interesting processes can be generated.

Discrete and continuous process algebra

Process algebra has been studied for discrete time and continuous time (real time or dense time). [4]

History

In the first half of the 20th century, various formalisms were proposed to capture the informal concept of a computable function, with μ-recursive functions, Turing machines and the lambda calculus possibly being the best-known examples today. The surprising fact that they are essentially equivalent, in the sense that they are all encodable into each other, supports the Church-Turing thesis. Another shared feature is more rarely commented on: they all are most readily understood as models of sequential computation. The subsequent consolidation of computer science required a more subtle formulation of the notion of computation, in particular explicit representations of concurrency and communication. Models of concurrency such as the process calculi, Petri nets in 1962, and the actor model in 1973 emerged from this line of inquiry.

Research on process calculi began in earnest with Robin Milner's seminal work on the Calculus of Communicating Systems (CCS) during the period from 1973 to 1980. C.A.R. Hoare's Communicating Sequential Processes (CSP) first appeared in 1978, and was subsequently developed into a full-fledged process calculus during the early 1980s. There was much cross-fertilization of ideas between CCS and CSP as they developed. In 1982 Jan Bergstra and Jan Willem Klop began work on what came to be known as the Algebra of Communicating Processes (ACP), and introduced the term process algebra to describe their work. [1] CCS, CSP, and ACP constitute the three major branches of the process calculi family: the majority of the other process calculi can trace their roots to one of these three calculi.

Current research

Various process calculi have been studied and not all of them fit the paradigm sketched here. The most prominent example may be the ambient calculus. This is to be expected, as process calculi are an active field of study. Current research on process calculi focuses on problems such as developing new calculi for better modelling of computational phenomena, finding well-behaved subcalculi of a given calculus, behavioural theory (when should two processes be considered the same, and how can this be decided?), logics for reasoning about properties of processes, the relative expressiveness of different calculi, and the use of process calculi to model biological systems.

Software implementations

The ideas behind process algebra have given rise to several tools, including the CADP toolbox, the Concurrency Workbench, and the mCRL2 toolset.

Relationship to other models of concurrency

The history monoid is the free object that is generically able to represent the histories of individual communicating processes. A process calculus is then a formal language imposed on a history monoid in a consistent fashion. [6] That is, a history monoid can only record a sequence of events, with synchronization, but does not specify the allowed state transitions. Thus, a process calculus is to a history monoid what a formal language is to a free monoid (a formal language is a subset of the set of all possible finite-length strings of an alphabet generated by the Kleene star).

The use of channels for communication is one of the features distinguishing the process calculi from other models of concurrency, such as Petri nets and the actor model (see Actor model and process calculi). One of the fundamental motivations for including channels in the process calculi was to enable certain algebraic techniques, thereby making it easier to reason about processes algebraically.


References

  1. Baeten, J.C.M. (2004). "A brief history of process algebra" (PDF). Rapport CSR 04-02. Vakgroep Informatica, Technische Universiteit Eindhoven.
  2. Pierce, Benjamin (1996-12-21). "Foundational Calculi for Programming Languages". The Computer Science and Engineering Handbook. CRC Press. pp. 2190–2207. ISBN 0-8493-2909-4.
  3. Baeten, J.C.M.; Bravetti, M. (August 2005). "A Generic Process Algebra". Algebraic Process Calculi: The First Twenty Five Years and Beyond (BRICS Notes Series NS-05-3). Bertinoro, Forlì, Italy: BRICS, Department of Computer Science, University of Aarhus. Retrieved 2007-12-29.
  4. Baeten, J.C.M.; Middelburg, C.A. (2000). "Process algebra with timing: Real time and discrete time". pp. 627–684. CiteSeerX 10.1.1.42.729.
  5. Sangiorgi, Davide (1993). "From π-calculus to higher-order π-calculus — and back". In Gaudel, M.-C.; Jouannaud, J.-P. (eds.). TAPSOFT'93: Theory and Practice of Software Development. Lecture Notes in Computer Science. Vol. 668. Springer Berlin Heidelberg. pp. 151–166. doi:10.1007/3-540-56610-4_62. ISBN 9783540475989.
  6. Mazurkiewicz, Antoni (1995). "Introduction to Trace Theory". In Diekert, V.; Rozenberg, G. (eds.). The Book of Traces. Singapore: World Scientific. pp. 3–41. ISBN 981-02-2058-8. Archived from the original (PostScript) on 2011-06-13. Retrieved 2009-04-29.

Further reading