In computer science, communicating sequential processes (CSP) is a formal language for describing patterns of interaction in concurrent systems. [1] It is a member of the family of mathematical theories of concurrency known as process algebras, or process calculi, based on message passing via channels. CSP was highly influential in the design of the occam programming language [1] [2] and also influenced the design of programming languages such as Limbo, [3] RaftLib, Erlang, [4] Go, [5] [3] Crystal, and Clojure's core.async. [6]
CSP was first described in a 1978 article by Tony Hoare, [7] but has since evolved substantially. [8] CSP has been practically applied in industry as a tool for specifying and verifying the concurrent aspects of a variety of different systems, such as the T9000 Transputer, [9] as well as a secure e-commerce system. [10] The theory of CSP itself is also still the subject of active research, including work to increase its range of practical applicability (e.g., increasing the scale of the systems that can be tractably analyzed). [11]
The version of CSP presented in Hoare's original 1978 article was essentially a concurrent programming language rather than a process calculus. It had a substantially different syntax than later versions of CSP, did not possess mathematically defined semantics, [12] and was unable to represent unbounded nondeterminism. [13] Programs in the original CSP were written as a parallel composition of a fixed number of sequential processes communicating with each other strictly through synchronous message-passing. In contrast to later versions of CSP, each process was assigned an explicit name, and the source or destination of a message was defined by specifying the name of the intended sending or receiving process. For example, the process
COPY = *[c:character; west?c → east!c]
repeatedly receives a character from the process named west and sends that character to the process named east. The parallel composition
[west::DISASSEMBLE || X::COPY || east::ASSEMBLE]
assigns the name west to the DISASSEMBLE process, X to the COPY process, and east to the ASSEMBLE process, and executes these three processes concurrently. [7]
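For comparison with a modern CSP-influenced language, the following is a rough Go rendering of the same pipeline. It is an illustrative sketch only, not part of Hoare's formalism: Go connects goroutines through shared channel values rather than by naming the peer process, so DISASSEMBLE and ASSEMBLE are stubbed here as a simple producer and consumer.

package main

import "fmt"

// COPY repeatedly receives a character on west and forwards it to east,
// a Go rendering of Hoare's COPY = *[c:character; west?c → east!c].
func COPY(west <-chan rune, east chan<- rune) {
	for c := range west {
		east <- c
	}
	close(east)
}

func main() {
	west := make(chan rune) // unbuffered: each send synchronizes with a receive
	east := make(chan rune)

	// Stand-in for DISASSEMBLE: produce a stream of characters.
	go func() {
		for _, c := range "csp" {
			west <- c
		}
		close(west)
	}()

	go COPY(west, east)

	// Stand-in for ASSEMBLE: consume the stream.
	for c := range east {
		fmt.Printf("%c", c)
	}
	fmt.Println()
}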
Following the publication of the original version of CSP, Hoare, Stephen Brookes, and A. W. Roscoe developed and refined the theory of CSP into its modern, process algebraic form. The approach taken in developing CSP into a process algebra was influenced by Robin Milner's work on the Calculus of Communicating Systems (CCS), and vice versa. The theoretical version of CSP was initially presented in a 1984 article by Brookes, Hoare, and Roscoe, [14] and later in Hoare's book Communicating Sequential Processes, [12] which was published in 1985. In September 2006, that book was still the third-most-cited computer science reference of all time according to Citeseer (albeit an unreliable source due to the nature of its sampling). The theory of CSP has undergone a few minor changes since the publication of Hoare's book. Most of these changes were motivated by the advent of automated tools for CSP process analysis and verification. Roscoe's The Theory and Practice of Concurrency [1] describes this newer version of CSP.
An early and important application of CSP was its use for specification and verification of elements of the INMOS T9000 Transputer, a complex superscalar pipelined processor designed to support large-scale multiprocessing. CSP was employed in verifying the correctness of both the processor pipeline and the Virtual Channel Processor, which managed off-chip communications for the processor. [9]
Industrial application of CSP to software design has usually focused on dependable and safety-critical systems. For example, the Bremen Institute for Safe Systems and Daimler-Benz Aerospace used CSP to model a fault-management system and avionics interface (consisting of about 23,000 lines of code) intended for use on the International Space Station, and analyzed the model to confirm that their design was free of deadlock and livelock. [15] [16] The modeling and analysis process was able to uncover a number of errors that would have been difficult to detect using testing alone. Similarly, Praxis High Integrity Systems applied CSP modeling and analysis during the development of software (approximately 100,000 lines of code) for a secure smart-card certification authority to verify that their design was secure and free of deadlock. Praxis claims that the system has a much lower defect rate than comparable systems. [10]
Since CSP is well-suited to modeling and analyzing systems that incorporate complex message exchanges, it has also been applied to the verification of communications and security protocols. A prominent example of this sort of application is Lowe's use of CSP and the FDR refinement-checker to discover a previously unknown attack on the Needham–Schroeder public-key authentication protocol, and then to develop a corrected protocol able to defeat the attack. [17]
As its name suggests, CSP allows the description of systems in terms of component processes that operate independently, and interact with each other solely through message-passing communication. However, the "Sequential" part of the CSP name is now something of a misnomer, since modern CSP allows component processes to be defined both as sequential processes, and as the parallel composition of more primitive processes. The relationships between different processes, and the way each process communicates with its environment, are described using various process algebraic operators. Using this algebraic approach, quite complex process descriptions can be easily constructed from a few primitive elements.
CSP provides two classes of primitives in its process algebra: events, which represent communications or interactions and are assumed to be indivisible and instantaneous (for example, the events coin and choc used below), and primitive processes, which represent fundamental behaviours such as STOP (the process that communicates nothing, also called deadlock) and SKIP (which represents successful termination).
CSP has a wide range of algebraic operators. The principal ones are: prefix (e → P), which combines an event and a process to produce a new process; deterministic (external) choice (P □ Q); nondeterministic (internal) choice (P ⊓ Q); interleaving (P ||| Q); interface parallel (P |[{X}]| Q), which requires synchronization on the events in X; and hiding (P \ X), which abstracts away the events in X.
One of the archetypal CSP examples is an abstract representation of a chocolate vending machine and its interactions with a person wishing to buy some chocolate. This vending machine might be able to carry out two different events, “coin” and “choc”, which represent the insertion of payment and the delivery of a chocolate respectively. A machine which demands payment (only in cash) before offering a chocolate can be written as:

VMC = coin → (choc → STOP)
A person who might choose to use a coin or card to make payments could be modelled as:

P = (coin → STOP) □ (card → STOP)
These two processes can be put in parallel, so that they can interact with each other. The behaviour of the composite process depends on the events that the two component processes must synchronise on. Thus,

VMC |[{coin, card}]| P = coin → (choc → STOP)
whereas if synchronization were only required on “coin”, we would obtain

VMC |[{coin}]| P = (coin → (choc → STOP)) □ (card → STOP)
If we abstract this latter composite process by hiding the “coin” and “card” events, i.e.

(VMC |[{coin}]| P) \ {coin, card}
we get the nondeterministic process

(choc → STOP) ⊓ STOP
This is a process which either offers a “choc” event and then stops, or just stops. In other words, if we treat the abstraction as an external view of the system (e.g., someone who does not see the decision reached by the person), nondeterminism has been introduced.
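The externally visible nondeterminism can be loosely reproduced in Go (again an illustrative analogue only; CSP hiding and Go's select are distinct mechanisms), with unbuffered channel operations standing in for synchronized events:

package main

import "fmt"

// A rough Go analogue of the vending-machine composition that synchronizes
// only on "coin". Sends on unbuffered channels stand in for synchronized
// CSP events; the buffered "card" channel is an event no other process
// engages in.
func main() {
	coin := make(chan struct{})
	choc := make(chan struct{})
	card := make(chan struct{}, 1) // the machine never synchronizes on card

	// VMC = coin → (choc → STOP)
	go func() {
		<-coin
		choc <- struct{}{}
	}()

	// P = (coin → STOP) □ (card → STOP): select resolves the choice, and
	// an observer who cannot see coin or card (they are "hidden") cannot
	// predict which branch was taken.
	go func() {
		select {
		case coin <- struct{}{}:
		case card <- struct{}{}:
		}
	}()

	// The external view: either choc is offered, or the system just stops.
	select {
	case <-choc:
		fmt.Println("choc")
	case <-card:
		fmt.Println("STOP") // person paid by card; the machine waits forever
	}
}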
The syntax of CSP defines the “legal” ways in which processes and events may be combined. Let e be an event, and X be a set of events. Then the basic syntax of CSP can be defined as:

Proc ::= STOP
      |  SKIP
      |  e → Proc                  (prefixing)
      |  Proc □ Proc               (external choice)
      |  Proc ⊓ Proc               (nondeterministic choice)
      |  Proc ||| Proc             (interleaving)
      |  Proc |[{X}]| Proc         (interface parallel)
      |  Proc \ X                  (hiding)
      |  Proc ; Proc               (sequential composition)
      |  if b then Proc else Proc  (boolean conditional)
      |  Proc ▷ Proc               (timeout)
      |  Proc △ Proc               (interrupt)
Note that, in the interests of brevity, the syntax presented above omits the div process, which represents divergence, as well as various operators such as alphabetized parallel, piping, and indexed choices.
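As a rough illustration, the grammar can be transcribed into an algebraic datatype. The Go sketch below is hypothetical (the constructor names are invented for this example and are not any tool's actual representation) and mirrors the productions listed above:

// Package csp sketches the core CSP grammar as a Go datatype; the
// operators omitted from the grammar above are omitted here as well.
package csp

type Event string

// Proc mirrors the production "Proc ::= ..." from the grammar.
type Proc interface{ isProc() }

type Stop struct{}                                  // STOP
type Skip struct{}                                  // SKIP
type Prefix struct{ E Event; P Proc }               // e → Proc
type ExtChoice struct{ P, Q Proc }                  // Proc □ Proc
type IntChoice struct{ P, Q Proc }                  // Proc ⊓ Proc
type Interleave struct{ P, Q Proc }                 // Proc ||| Proc
type Parallel struct{ X map[Event]bool; P, Q Proc } // Proc |[{X}]| Proc
type Hide struct{ X map[Event]bool; P Proc }        // Proc \ X
type Seq struct{ P, Q Proc }                        // Proc ; Proc

func (Stop) isProc()       {}
func (Skip) isProc()       {}
func (Prefix) isProc()     {}
func (ExtChoice) isProc()  {}
func (IntChoice) isProc()  {}
func (Interleave) isProc() {}
func (Parallel) isProc()   {}
func (Hide) isProc()       {}
func (Seq) isProc()        {}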
CSP has been imbued with several different formal semantics, which define the meaning of syntactically correct CSP expressions. The theory of CSP includes mutually consistent denotational semantics, algebraic semantics, and operational semantics.
The three major denotational models of CSP are the traces model, the stable failures model, and the failures/divergences model. Semantic mappings from process expressions to each of these three models provide the denotational semantics for CSP. [1]
The traces model defines the meaning of a process expression as the set of sequences of events (traces) that the process can be observed to perform. For example, coin → (choc → STOP) can perform no events, can perform coin and then nothing more, or can perform coin followed by choc, so

traces(coin → (choc → STOP)) = {⟨⟩, ⟨coin⟩, ⟨coin, choc⟩}
More formally, the meaning of a process P in the traces model is defined as traces(P) ⊆ Σ* such that:

⟨⟩ ∈ traces(P) (i.e., traces(P) contains the empty sequence)
s₁ ⌢ s₂ ∈ traces(P) ⟹ s₁ ∈ traces(P) (i.e., traces(P) is prefix-closed)

where Σ* is the set of all possible finite sequences of events.
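To make the definition concrete, a minimal traces calculator for a small fragment of CSP (STOP, prefixing, and external choice) can be written directly from these two conditions. The Go code below is an illustrative sketch, not any established tool:

package main

import "fmt"

// Proc is any process for which a (finite) trace set can be enumerated.
type Proc interface{ traces() [][]string }

type Stop struct{}

type Prefix struct {
	E string
	P Proc
}

type ExtChoice struct{ P, Q Proc }

// STOP performs no events: its only trace is the empty sequence.
func (Stop) traces() [][]string { return [][]string{{}} }

func (p Prefix) traces() [][]string {
	ts := [][]string{{}} // condition 1: the empty trace is always a trace
	for _, t := range p.P.traces() {
		ts = append(ts, append([]string{p.E}, t...)) // prefix-closure is preserved
	}
	return ts
}

func (c ExtChoice) traces() [][]string {
	ts := c.P.traces()
	for _, t := range c.Q.traces() {
		if len(t) > 0 { // both branches contribute ⟨⟩; keep one copy
			ts = append(ts, t)
		}
	}
	return ts
}

func main() {
	vmc := Prefix{"coin", Prefix{"choc", Stop{}}}
	fmt.Println(vmc.traces()) // [[] [coin] [coin choc]]
}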
The stable failures model extends the traces model with refusal sets, which are sets of events that a process can refuse to perform. A failure is a pair (s, X), consisting of a trace s and a refusal set X which identifies the events that a process may refuse once it has executed the trace s. The observed behavior of a process in the stable failures model is described by the pair (traces(P), failures(P)). For example, (a → STOP) □ (b → STOP) and (a → STOP) ⊓ (b → STOP) have the same set of traces, {⟨⟩, ⟨a⟩, ⟨b⟩}, but different failures: the internal choice may initially refuse a or refuse b, giving it the failures (⟨⟩, {a}) and (⟨⟩, {b}), whereas the external choice can initially refuse neither event. Failures therefore distinguish the two processes.
The failures/divergences model further extends the failures model to handle divergence. The semantics of a process in the failures/divergences model is a pair (failures⊥(P), divergences(P)), where divergences(P) is defined as the set of all traces that can lead to divergent behavior and failures⊥(P) = failures(P) ∪ {(s, X) : s ∈ divergences(P)}.
Over the years, a number of tools for analyzing and understanding systems described using CSP have been produced. Early tool implementations used a variety of machine-readable syntaxes for CSP, making input files written for different tools incompatible. However, most CSP tools have now standardized on the machine-readable dialect of CSP devised by Bryan Scattergood, sometimes referred to as CSPM. [18] The CSPM dialect of CSP possesses a formally defined operational semantics, which includes an embedded functional programming language.
The best-known CSP tool is probably Failures/Divergence Refinement 2 (FDR2), a commercial product developed by Formal Systems (Europe) Ltd. FDR2 is often described as a model checker, but is technically a refinement checker, in that it converts two CSP process expressions into labelled transition systems (LTSs) and then determines whether one of the processes is a refinement of the other within some specified semantic model (traces, failures, or failures/divergence). [19] FDR2 applies various state-space compression algorithms to the process LTSs in order to reduce the size of the state space that must be explored during a refinement check. FDR2 has been succeeded by FDR3, a completely rewritten version incorporating, among other things, parallel execution and an integrated type checker. It is released by the University of Oxford, which also released FDR2 in the period 2008–12. [20]
The Adelaide Refinement Checker (ARC) [21] is a CSP refinement checker developed by the Formal Modelling and Verification Group at The University of Adelaide. ARC differs from FDR2 in that it internally represents CSP processes as Ordered Binary Decision Diagrams (OBDDs), which alleviates the state explosion problem of explicit LTS representations without requiring the use of state-space compression algorithms such as those used in FDR2.
The ProB project, [22] which is hosted by the Institut für Informatik, Heinrich-Heine-Universität Düsseldorf, was originally created to support analysis of specifications constructed in the B method. However, it also includes support for analysis of CSP processes both through refinement checking, and LTL model-checking. ProB can also be used to verify properties of combined CSP and B specifications. A ProBE CSP Animator is integrated in FDR3.
The Process Analysis Toolkit (PAT) [23] [24] is a CSP analysis tool developed in the School of Computing at the National University of Singapore. PAT is able to perform refinement checking, LTL model-checking, and simulation of CSP and Timed CSP processes. The PAT process language extends CSP with support for mutable shared variables, asynchronous message passing, and a variety of fairness and quantitative time-related process constructs such as deadline and waituntil. The underlying design principle of the PAT process language is to combine a high-level specification language with procedural programs (e.g., an event in PAT may be a sequential program or even an external C# library call) for greater expressiveness. Mutable shared variables and asynchronous channels provide convenient syntactic sugar for well-known process modelling patterns used in standard CSP. The PAT syntax is similar, but not identical, to CSPM. [25] The principal differences between the PAT syntax and standard CSPM are the use of semicolons to terminate process expressions, the inclusion of syntactic sugar for variables and assignments, and the use of slightly different syntax for internal choice and parallel composition.
VisualNets [26] produces animated visualisations of CSP systems from specifications, and supports timed CSP.
CSPsim [27] is a lazy simulator. It does not model check CSP, but is useful for exploring very large (potentially infinite) systems.
SyncStitch is a CSP refinement checker with an interactive modelling and analysis environment, including a graphical state-transition-diagram editor. The user can model the behaviour of processes not only as CSP expressions but also as state-transition diagrams. The results of checks are likewise reported graphically, as computation trees, and can be analyzed interactively with peripheral inspection tools. In addition to refinement checks, it can perform deadlock and livelock checks.
Several other specification languages and formalisms have been derived from, or inspired by, the classic untimed CSP, including:
Inasmuch as it is concerned with concurrent processes that exchange messages, the actor model is broadly similar to CSP. However, the two models make some fundamentally different choices with regard to the primitives they provide: processes in CSP are anonymous, while actors have identities; CSP uses explicit channels for message passing, whereas actor systems transmit messages to named destination actors; and CSP message-passing fundamentally involves a rendezvous between the sender and the receiver (the sender cannot transmit a message until the receiver is ready to accept it), whereas message-passing in actor systems is fundamentally asynchronous (senders may transmit messages before receivers are ready to accept them).
Note that the aforementioned properties do not necessarily refer to the original CSP paper by Hoare, but rather the modern incarnation of the idea as seen in implementations such as Go and Clojure's core.async. In the original paper, channels were not a central part of the specification, and the sender and receiver processes actually identify each other by name.
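A small Go sketch (illustrative only; the request/reply structure is invented for this example) shows the channel-based style: the communicating parties share a channel value and never learn each other's identity, and because channels are first-class, a reply channel can itself be carried inside a message:

package main

import "fmt"

// request carries its own reply channel, so the server needs no
// knowledge of who is asking; sender and receiver stay anonymous
// to each other, in contrast to the west?c / east!c naming of the
// original paper.
type request struct {
	payload string
	reply   chan string
}

func server(reqs <-chan request) {
	for r := range reqs {
		r.reply <- "handled " + r.payload // answer whoever supplied the channel
	}
}

func main() {
	reqs := make(chan request)
	go server(reqs)

	reply := make(chan string)
	reqs <- request{"job1", reply} // rendezvous: the send blocks until received
	fmt.Println(<-reply)           // handled job1
	close(reqs)
}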
In 1990, “A Queen’s Award for Technological Achievement has been conferred ... on [Oxford University] Computing Laboratory. The award recognises a successful collaboration between the laboratory and Inmos Ltd. … Inmos’ flagship product is the ‘transputer’, a microprocessor with many of the parts that would normally be needed in addition built into the same single component.” [29] According to Tony Hoare, [30] “The INMOS Transputer was an embodiment of the ideas … of building microprocessors that could communicate with each other along wires that would stretch between their terminals. The founder had the vision that the CSP ideas were ripe for industrial exploitation, and he made that the basis of the language for programming Transputers, which was called Occam. … The company estimated it enabled them to deliver the hardware one year earlier than would otherwise have happened. They applied for and won a Queen’s award for technological achievement, in conjunction with Oxford University Computing Laboratory.”