List of concurrent and parallel programming languages

This article lists concurrent and parallel programming languages, categorizing them by a defining paradigm. Concurrent and parallel programming languages involve multiple timelines of execution, and they provide synchronization constructs whose behavior is defined by a parallel execution model. A concurrent programming language is one that uses the concept of simultaneously executing processes or threads of execution as a means of structuring a program. A parallel language is one able to express programs that are executable on more than one processor. Both types are listed, since concurrency is a useful tool for expressing parallelism, but it is not necessary for it. In both cases, the features must be part of the language syntax and not an extension such as a library: libraries such as the POSIX threads (pthreads) library implement a parallel execution model but lack the syntax and grammar required to be a programming language.
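As an illustration of this distinction, consider Go (used here purely as an example of one such language): its go statement and channel operators are part of the language grammar itself, not library calls, which is what qualifies it for a list like this. A minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan string)

	// The go statement is part of the language grammar: it starts a new
	// concurrently executing thread of control (a goroutine).
	go func() {
		ch <- "hello from a second timeline" // channel send is also syntax
	}()

	fmt.Println(<-ch) // the <- receive operator is likewise syntax
}
```

By contrast, a pthreads program expresses the same structure entirely through ordinary library function calls in a sequential language.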

Contents

The following categories aim to capture the main, defining feature of the languages contained, but they are not necessarily orthogonal.

Coordination languages

Dataflow programming

Distributed computing

Event-driven and hardware description

Functional programming

Logic programming

Monitor-based

Multi-threaded

Object-oriented programming

Partitioned global address space (PGAS)

Message passing

Actor model

CSP-based

APIs/frameworks

These application programming interfaces support parallelism in host languages.

Related Research Articles

In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a declarative programming paradigm in which function definitions are trees of expressions that map values to other values, rather than a sequence of imperative statements which update the running state of the program.
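A minimal sketch of the idea in Go (an illustrative choice; the compose helper is hypothetical, not a standard function): the program is built by applying and composing functions that map values to values, with no statement updating running state.

```go
package main

import "fmt"

// compose is a hypothetical helper returning the composition of f and g:
// apply f first, then g.
func compose[A, B, C any](f func(A) B, g func(B) C) func(A) C {
	return func(x A) C { return g(f(x)) }
}

func main() {
	double := func(x int) int { return x * 2 }
	describe := func(x int) string { return fmt.Sprintf("result: %d", x) }

	// A tree of composed functions mapping values to other values.
	fmt.Println(compose(double, describe)(21)) // result: 42
}
```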

This is a "genealogy" of programming languages. Languages are categorized under the ancestor language with the strongest influence. Those ancestor languages are listed in alphabetic order. Any such categorization has a large arbitrary element, since programming languages often incorporate major ideas from multiple sources.

Programming languages can be grouped by the number and types of paradigms supported.

History of programming languages

The history of programming languages spans from documentation of early mechanical computers to modern tools for software development. Early programming languages were highly specialized, relying on mathematical notation and similarly obscure syntax. Throughout the 20th century, research in compiler theory led to the creation of high-level programming languages, which use a more accessible syntax to communicate instructions.

Concurrency (computer science): ability to execute a task in a non-serial manner

In computer science, concurrency is the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve overall speed of the execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially-ordered components or units of computation.
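A small Go sketch of this decomposability (illustrative only): the two partial sums below are order-independent units of computation, so they can run concurrently, or in either order, without affecting the outcome.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	data := []int{1, 2, 3, 4, 5, 6, 7, 8}
	var left, right int
	var wg sync.WaitGroup
	wg.Add(2)

	// Each half-sum is an independent unit: no ordering between the two
	// goroutines changes the result.
	go func() {
		defer wg.Done()
		for _, v := range data[:4] {
			left += v
		}
	}()
	go func() {
		defer wg.Done()
		for _, v := range data[4:] {
			right += v
		}
	}()

	wg.Wait()
	fmt.Println("total:", left+right) // always 36, whatever the schedule
}
```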

In computer science, message passing is a technique for invoking behavior on a computer. The invoking program sends a message to a process and relies on that process and its supporting infrastructure to then select and run some appropriate code. Message passing differs from conventional programming where a process, subroutine, or function is directly invoked by name. Message passing is key to some models of concurrency and object-oriented programming.
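A Go sketch of the idea (the request type and operation names are invented for illustration): the sender supplies only a message, and the receiving process, not the caller, selects which code to run.

```go
package main

import "fmt"

// request is a hypothetical message type; the receiver decides what to
// do with it.
type request struct {
	op   string
	args []int
}

func main() {
	inbox := make(chan request)
	done := make(chan int)

	// The receiving process inspects each message and selects the
	// appropriate code; no function is invoked by name by the sender.
	go func() {
		for msg := range inbox {
			switch msg.op {
			case "sum":
				total := 0
				for _, v := range msg.args {
					total += v
				}
				done <- total
			case "max":
				m := msg.args[0]
				for _, v := range msg.args[1:] {
					if v > m {
						m = v
					}
				}
				done <- m
			}
		}
	}()

	inbox <- request{op: "sum", args: []int{1, 2, 3}}
	fmt.Println("sum:", <-done)
	inbox <- request{op: "max", args: []int{1, 7, 3}}
	fmt.Println("max:", <-done)
}
```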

In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.
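Go has no built-in future type, but a channel can serve as one. In this illustrative sketch the hypothetical helper future returns a proxy for a result whose computation is not yet complete; the caller blocks only at the moment the value is actually needed.

```go
package main

import (
	"fmt"
	"time"
)

// future starts the computation immediately and returns a proxy (a
// receive-only channel) for the eventually available result.
func future(compute func() int) <-chan int {
	ch := make(chan int, 1)
	go func() { ch <- compute() }()
	return ch
}

func main() {
	f := future(func() int {
		time.Sleep(100 * time.Millisecond) // simulate slow work
		return 42
	})

	fmt.Println("doing other work while the result is computed...")
	fmt.Println("result:", <-f) // synchronize only when the value is needed
}
```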

In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and its performance: how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a programming language, as an extension to an existing language, or as an entirely new language.

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

Higher-order programming is a style of computer programming that uses software components, like functions, modules or objects, as values. It is usually instantiated with, or borrowed from, models of computation such as lambda calculus which make heavy use of higher-order functions. A programming language can be considered higher-order if components, such as procedures or labels, can be used just like data. For example, these elements could be used in the same way as arguments or values.
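An illustrative Go sketch: functions are used just like data, stored in a map, passed around, and selected at run time.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Functions stored as ordinary map values, exactly like data.
	transforms := map[string]func(string) string{
		"upper": strings.ToUpper,
		"reverse": func(s string) string {
			r := []rune(s)
			for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
				r[i], r[j] = r[j], r[i]
			}
			return string(r)
		},
	}

	// Select and apply a function value at run time.
	for _, name := range []string{"upper", "reverse"} {
		fmt.Println(name+":", transforms[name]("higher-order"))
	}
}
```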

Programming languages are used for controlling the behavior of a machine. Like natural languages, programming languages follow rules for syntax and semantics.

A foreign function interface (FFI) is a mechanism by which a program written in one programming language can call routines or make use of services written or compiled in another one. An FFI is often used in contexts where calls are made into a binary dynamic-link library.
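As an example, cgo is Go's FFI to C. The sketch below (which assumes a C toolchain is available at build time) calls the C standard library's sqrt directly from Go.

```go
package main

/*
#cgo LDFLAGS: -lm
#include <math.h>
*/
import "C"

import "fmt"

func main() {
	// C.sqrt is the C standard library routine, invoked through cgo.
	fmt.Println("sqrt(2) =", float64(C.sqrt(C.double(2))))
}
```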

In computer programming, a green thread is a thread that is scheduled by a runtime library or virtual machine (VM) instead of natively by the underlying operating system (OS). Green threads emulate multithreaded environments without relying on any native OS abilities, and they are managed in user space instead of kernel space, enabling them to work in environments that do not have native thread support.
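Goroutines are a well-known example of threads scheduled by a language runtime rather than created one-per-OS-thread (an M:N scheme close in spirit to green threads). An illustrative sketch:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// 100,000 goroutines are multiplexed onto a handful of OS threads by
	// the Go runtime; the same number of kernel threads would exhaust
	// most systems.
	const n = 100000
	results := make(chan int, n)
	var wg sync.WaitGroup
	for i := 1; i <= n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			results <- v
		}(i)
	}
	wg.Wait()
	close(results)

	sum := 0
	for v := range results {
		sum += v
	}
	fmt.Println("sum:", sum) // 5000050000 (assumes 64-bit int)
}
```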

This comparison of programming languages compares the features of language syntax (format) for over 50 computer programming languages.

Task parallelism is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data. A common type of task parallelism is pipelining, which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.
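A minimal Go sketch of pipelining (illustrative only): three stages run as separate concurrent tasks, each consuming the previous stage's output stream, so different items are in different stages at the same time.

```go
package main

import "fmt"

func main() {
	// Stage 1: generate numbers.
	nums := make(chan int)
	go func() {
		for i := 1; i <= 5; i++ {
			nums <- i
		}
		close(nums)
	}()

	// Stage 2: square each number; runs concurrently with stage 1.
	squares := make(chan int)
	go func() {
		for n := range nums {
			squares <- n * n
		}
		close(squares)
	}()

	// Stage 3: consume. Each stage is a distinct task over the stream,
	// i.e. pipelined task parallelism.
	for s := range squares {
		fmt.Println(s)
	}
}
```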

Join-patterns provide a way to write concurrent, parallel and distributed computer programs by message passing. Compared to the use of threads and locks, this is a high-level programming model that uses communication constructs to abstract away the complexity of the concurrent environment and to allow scalability. Its focus is on the execution of a chord between messages atomically consumed from a group of channels.
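Go has no join patterns; the sketch below (the joinPattern helper is hypothetical) merely emulates a two-channel chord by consuming one message from each channel before the body fires. Real join-pattern implementations, such as JoCaml, match chords atomically across many competing patterns.

```go
package main

import "fmt"

// joinPattern emulates a two-channel chord: the body fires only once a
// message has been consumed from both channels. This is a simplification
// of true join patterns, which resolve competing chords atomically.
func joinPattern(a <-chan int, b <-chan string, body func(int, string)) {
	go func() {
		for {
			x := <-a // wait for a message on the first channel
			y := <-b // then for one on the second
			body(x, y)
		}
	}()
}

func main() {
	amounts := make(chan int)
	labels := make(chan string)
	done := make(chan struct{})

	joinPattern(amounts, labels, func(n int, s string) {
		fmt.Printf("chord fired: %s = %d\n", s, n)
		done <- struct{}{}
	})

	amounts <- 42
	labels <- "answer"
	<-done
}
```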
