Concurrency (computer science)

The "Dining Philosophers", a classic problem involving concurrency and shared resources An illustration of the dining philosophers problem.png
The "Dining Philosophers", a classic problem involving concurrency and shared resources

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order, without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially ordered components or units of computation. [1]
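
As a minimal sketch of this idea in Go (the function names and inputs below are illustrative, not taken from the source), two units of computation that share no data can run in either order, or concurrently, and the final result is the same:

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares and sumCubes are order-independent units of computation:
// neither one reads anything the other writes, so they may run
// sequentially, in either order, or concurrently without changing the outcome.
func sumSquares(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i * i
	}
	return total
}

func sumCubes(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i * i * i
	}
	return total
}

func main() {
	var squares, cubes int
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); squares = sumSquares(10) }()
	go func() { defer wg.Done(); cubes = sumCubes(10) }()
	wg.Wait()
	// The printed values are the same however the two goroutines interleave.
	fmt.Println(squares, cubes)
}
```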


According to Rob Pike, concurrency is the composition of independently executing computations, [2] and concurrency is not parallelism: concurrency is about dealing with lots of things at once, while parallelism is about doing lots of things at once. Concurrency is about structure; parallelism is about execution. Concurrency provides a way to structure a solution to solve a problem that may (but need not) be parallelizable. [3]
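
A hedged Go sketch of this distinction (the worker tasks are illustrative): the program below is structured as four concurrent workers; whether they actually execute in parallel depends on how many operating-system threads the runtime may use, controlled here with runtime.GOMAXPROCS.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Concurrency is about structure: the program is organized as several
	// independently executing workers. Parallelism is about execution: with
	// GOMAXPROCS(1) the workers are still concurrent but never run
	// simultaneously; a larger value allows genuine parallel execution.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Println("worker", id, "finished")
		}(i)
	}
	wg.Wait()
}
```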

A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi, the parallel random-access machine model, the actor model, and the Reo Coordination Language.

History

As Leslie Lamport (2015) notes, "While concurrent program execution had been considered for years, the computer science of concurrency began with Edsger Dijkstra's seminal 1965 paper that introduced the mutual exclusion problem. ... The ensuing decades have seen a huge growth of interest in concurrency—particularly in distributed systems. Looking back at the origins of the field, what stands out is the fundamental role played by Edsger Dijkstra". [4]

Issues

Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy, leading to issues such as deadlocks and resource starvation. [5]
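
A small Go sketch of this kind of indeterminacy (illustrative, not from the cited survey): many goroutines update a shared counter, and without synchronization the final value would depend on how their read-modify-write steps interleave (a data race); serializing access with a mutex restores a determinate outcome.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 1000
	counter := 0
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Without the lock, counter++ would be an unsynchronized
			// read-modify-write of a shared resource, and the final value
			// would be indeterminate.
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // Always prints n when access is serialized.
}
```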

Design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput. [6]

Theory

Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.

Models

A number of formalisms for modeling and understanding concurrent systems have been developed, [7] including Petri nets, process calculi, the parallel random-access machine model, [8] the actor model, and the Reo Coordination Language.

Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems. Some of these are based on message passing, while others have different mechanisms for concurrency.

The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency, [9] while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models. [10]

The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g., process calculi, can be modeled in the actor model using a two-phase commit protocol. [11] ) The mathematical denotation of a closed system S is constructed from increasingly better approximations to an initial behavior ⊥S, using a behavior-approximating function progressionS, to build a denotation (meaning) for S as follows: [12]

$\mathsf{Denote}_S \equiv \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)$

In this way, S can be mathematically characterized in terms of all its possible behaviors.
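
Concretely (a sketch in standard domain-theoretic notation, assuming progressionS is monotone, as in Clinger's construction), the denotation is the least upper bound of a chain of ever-better approximations:

$\bot_S \;\sqsubseteq\; \mathrm{progression}_S(\bot_S) \;\sqsubseteq\; \mathrm{progression}_S^{2}(\bot_S) \;\sqsubseteq\; \cdots \qquad \mathsf{Denote}_S = \bigsqcup_{i \in \omega} \mathrm{progression}_S^{\,i}(\bot_S)$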

Logics

Various types of temporal logic [13] can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computational tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of actions (changes in state). The principal application of these logics is in writing specifications for concurrent systems. [5]
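
For illustration (hedged examples in standard LTL notation, not drawn from the cited sources), two typical specifications for a mutual-exclusion system are a safety property, "the two processes are never in their critical sections at the same time", and a liveness property, "every request is eventually granted":

$\mathbf{G}\,\neg(\mathit{cs}_1 \land \mathit{cs}_2) \qquad \mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})$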

Practice

Concurrent programming encompasses programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured pattern of communication. The base goals of concurrent programming include correctness, performance, and robustness. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, and not terminate unexpectedly (see Concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.

Because they use shared resources, concurrent systems in general require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes an explosion in the state space and can even cause models to have an infinite number of states.
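
A hedged Go sketch of a software arbiter (the request and resource types are illustrative): a single goroutine owns the shared resource and serializes the requests arriving on a channel, so the order in which competing requests are served is decided by the arbiter rather than by the program text.

```go
package main

import (
	"fmt"
	"sync"
)

// request asks the arbiter to apply an update and reports the resulting
// value of the resource back on reply.
type request struct {
	delta int
	reply chan int
}

func main() {
	requests := make(chan request)

	// The arbiter goroutine is the sole owner of the shared resource, so all
	// access is serialized; which competing request it serves first is a
	// source of nondeterminism.
	go func() {
		resource := 0
		for req := range requests {
			resource += req.delta
			req.reply <- resource
		}
	}()

	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(d int) {
			defer wg.Done()
			reply := make(chan int)
			requests <- request{delta: d, reply: reply}
			fmt.Println("observed value:", <-reply)
		}(i)
	}
	wg.Wait()
	close(requests)
}
```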

Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
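
A minimal Go sketch of this cooperative style (illustrative; goroutines are not coroutines in the strict sense): two threads of control pass a token back and forth over an unbuffered channel, so each one runs only until it explicitly yields to the other, and the interleaving is fixed by the program rather than by the scheduler.

```go
package main

import "fmt"

func main() {
	// The unbuffered channel acts as an explicit yield point: each side runs
	// until it hands the token to the other, making the interleaving
	// deterministic.
	token := make(chan int)
	done := make(chan struct{})

	go func() {
		for i := 0; i < 3; i++ {
			n := <-token // wait until control is yielded to us
			fmt.Println("worker step", n)
			token <- n + 1 // yield control back
		}
		close(done)
	}()

	for i := 0; i < 3; i++ {
		token <- i // yield control to the worker
		fmt.Println("main step", <-token)
	}
	<-done
}
```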


Related Research Articles

In computer science, denotational semantics is an approach to formalizing the meanings of programming languages by constructing mathematical objects that describe the meanings of expressions from the languages. Other approaches providing formal semantics of programming languages include axiomatic semantics and operational semantics.

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Computer science is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. One well known subject classification system for computer science is the ACM Computing Classification System devised by the Association for Computing Machinery.

Leslie B. Lamport is an American computer scientist and mathematician. Lamport is best known for his seminal work in distributed systems, and as the initial developer of the document preparation system LaTeX and the author of its first manual.

In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification. This is typically associated with hardware or software systems, where the specification contains liveness requirements as well as safety requirements.

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.

In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages. Semantics assigns computational meaning to valid strings in a programming language syntax.

The actor model in computer science is a mathematical model of concurrent computation that treats an actor as the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging.

In theoretical computer science, Actor model theory concerns theoretical issues for the Actor model.

In computer science, the Actor model and process calculi are two closely related approaches to the modelling of concurrent digital computation. See Actor model and process calculi history.

In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and its performance: how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a sequential language, as an extension to an existing language, or as an entirely new language.

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

In computer science, the Actor model, first published in 1973, is a mathematical model of concurrent computation.

In computer science, unbounded nondeterminism or unbounded indeterminacy is a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Unbounded nondeterminism became an important issue in the development of the denotational semantics of concurrency, and later became part of research into the theoretical concept of hypercomputation.

Indeterminacy in concurrent computation is concerned with the effects of indeterminacy in concurrent computation. Computation is an area in which indeterminacy is becoming increasingly important because of the massive increase in concurrency due to networking and the advent of many-core computer architectures. These computer systems make use of arbiters, which give rise to indeterminacy.

The actor model and process calculi share an interesting history and co-evolution.

Uzi Vishkin is a computer scientist at the University of Maryland, College Park, where he is Professor of Electrical and Computer Engineering at the University of Maryland Institute for Advanced Computer Studies (UMIACS). Uzi Vishkin is known for his work in the field of parallel computing. In 1996, he was inducted as a Fellow of the Association for Computing Machinery, with the following citation: "One of the pioneers of parallel algorithms research, Dr. Vishkin's seminal contributions played a leading role in forming and shaping what thinking in parallel has come to mean in the fundamental theory of Computer Science."

Explicit Multi-Threading (XMT) is a computer science paradigm for building and programming parallel computers designed around the parallel random-access machine (PRAM) parallel computational model. A more direct explanation of XMT starts with the rudimentary abstraction that made serial computing simple: that any single instruction available for execution in a serial program executes immediately. A consequence of this abstraction is a step-by-step (inductive) explication of the instruction available next for execution. The rudimentary parallel abstraction behind XMT, dubbed Immediate Concurrent Execution (ICE) in Vishkin (2011), is that indefinitely many instructions available for concurrent execution execute immediately. A consequence of ICE is a step-by-step (inductive) explication of the instructions available next for concurrent execution. Moving beyond the serial von Neumann computer, the aspiration of XMT is that computer science will again be able to augment mathematical induction with a simple one-line computing abstraction.

Jayadev Misra is an Indian-born computer scientist who has spent most of his professional career in the United States. He is the Schlumberger Centennial Chair Emeritus in computer science and a University Distinguished Teaching Professor Emeritus at the University of Texas at Austin. Professionally he is known for his contributions to the formal aspects of concurrent programming and for jointly spearheading, with Sir Tony Hoare, the project on Verified Software Initiative (VSI).

References

  1. Lamport, Leslie (July 1978). "Time, Clocks, and the Ordering of Events in a Distributed System" (PDF). Communications of the ACM. 21 (7): 558–565. doi:10.1145/359545.359563. S2CID 215822405. Retrieved 4 February 2016.
  2. "Go Concurrency Patterns". talks.golang.org. Retrieved 2021-04-08.
  3. "Concurrency is not Parallelism". talks.golang.org. Retrieved 2021-04-08.
  4. Lamport, Leslie (June 2015). "Turing Lecture: The Computer Science of Concurrency: The Early Years". Communications of the ACM. 58 (6). ACM. Retrieved 22 March 2017.
  5. Cleaveland, Rance; Smolka, Scott (December 1996). "Strategic Directions in Concurrency Research". ACM Computing Surveys. 28 (4): 607. doi:10.1145/242223.242252. S2CID 13264261.
  6. Campbell, Colin; Johnson, Ralph; Miller, Ade; Toub, Stephen (August 2010). Parallel Programming with Microsoft .NET. Microsoft Press. ISBN 978-0-7356-5159-3.
  7. Filman, Robert; Friedman, Daniel (1984). Coordinated Computing: Tools and Techniques for Distributed Software. McGraw-Hill. ISBN 978-0-07-022439-1.
  8. Keller, Jörg; Keßler, Christoph; Träff, Jesper (2001). Practical PRAM Programming. John Wiley and Sons.
  9. Lee, Edward; Sangiovanni-Vincentelli, Alberto (December 1998). "A Framework for Comparing Models of Computation" (PDF). IEEE Transactions on CAD. 17 (12): 1217–1229. doi:10.1109/43.736561.
  10. Nielsen, Mogens; Sassone, Vladimiro; Winskel, Glynn (1993). "Relationships Between Models of Concurrency". REX School/Symposium.
  11. Knabe, Frederick (1992). "A Distributed Protocol for Channel-Based Communication with Choice". PARLE 1992.
  12. Clinger, William (June 1981). Foundations of Actor Semantics (Mathematics Doctoral Dissertation). MIT. hdl:1721.1/6935.
  13. Stirling, Colin (2001). Modal and Temporal Properties of Processes. Springer. ISBN 978-0-387-98717-0.

Further reading