Channel (programming)

In computing, a channel is a model for interprocess communication and synchronization via message passing. A message may be sent over a channel, and another process or thread is able to receive messages sent over a channel it has a reference to, as a stream. Different implementations of channels may be buffered or not, and either synchronous or asynchronous.
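
For illustration (a minimal sketch written in Go, chosen only because Go appears among the examples below; the variable names sync and async are illustrative), an unbuffered channel gives synchronous, rendezvous-style message passing, while a buffered channel lets sends complete asynchronously until its capacity is reached:

package main

import "fmt"

func main() {
	// Unbuffered channel: a send is synchronous and blocks until a receiver is ready.
	sync := make(chan string)
	go func() { sync <- "hello" }() // blocks inside the goroutine until main receives
	fmt.Println(<-sync)

	// Buffered channel: sends are asynchronous until the buffer (capacity 2) is full.
	async := make(chan int, 2)
	async <- 1 // does not block
	async <- 2 // does not block
	fmt.Println(<-async, <-async)
}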

libthread channels

The multithreading library, libthread, which was first created for the operating system Plan 9, offers inter-thread communication based on fixed-size channels.

OCaml events

The OCaml event module offers typed channels for synchronization. When the module's send and receive functions are called, they create corresponding send and receive events which can be synchronized.

Examples

Lua Love2D

The Love2D framework, which uses the Lua programming language, implements channels with push and pop operations similar to stacks. The pop operation does not block: if no data is resident on the channel it simply returns nothing. The demand operation is equivalent to pop, except that it blocks until there is data on the channel and then returns it.

-- A string containing code which will be interpreted by a function such as loadstring(),
-- but on the C side to start a native thread.
local threadCode = [[
    love.thread.getChannel("test"):push("Hello world!")
]]

function love.load()
    -- Start the thread.
    thread = love.thread.newThread(threadCode)
    thread:start()

    -- The thread will block until "Hello world!" is popped off channel test's stack.
    -- Because the channel can be popped from before the thread first executes, there may not be
    -- data on the stack. In that case use :demand() instead of :pop(), because :demand() will
    -- block until there is data on the stack and then return the data.
    print(love.thread.getChannel("test"):demand())
    -- The thread can now finish.
end

XMOS XC

The XMOS programming language XC provides a primitive type "chan" and two operators "<:" and ":>" for sending and receiving data from a channel. [1]

In this example, two hardware threads are started on an XMOS device, each running one of the two statements in the "par" block. The first thread transmits the number 42 through the channel while the second waits until the value is received and assigns it to x. The XC language also allows asynchronous receiving on channels through a select statement.

chan c;
int x;
par {
    c <: 42;
    c :> x;
}

Go

This snippet of Go code behaves similarly to the XC code. First the channel c is created, then a goroutine is spawned that sends 42 through the channel. The receive from c blocks until the value has been sent, at which point x is set to 42. Go also allows channels to be buffered, as well as non-blocking receiving through the use of a select statement. [2]

c := make(chan int)
go func() {
    c <- 42
}()
x := <-c
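
The snippet above shows neither buffering nor non-blocking receiving. As a hedged illustration of those two features (the channel name c and the printed messages are illustrative only), the following sketch uses a buffered channel and a select statement with a default case, which runs when no value is ready instead of blocking:

package main

import "fmt"

func main() {
	c := make(chan int, 1) // buffered channel with capacity 1

	// Non-blocking receive: nothing has been sent yet, so the default branch runs.
	select {
	case v := <-c:
		fmt.Println("received", v)
	default:
		fmt.Println("no value ready")
	}

	c <- 42 // does not block, because the buffer has room

	// This time a value is ready, so the receive case runs immediately.
	select {
	case v := <-c:
		fmt.Println("received", v)
	default:
		fmt.Println("no value ready")
	}
}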

Rust

Rust provides asynchronous channels for communication between threads. Channels allow a unidirectional flow of information between two end-points: the Sender and the Receiver. [3]

use std::sync::mpsc;
use std::thread;

fn main() {
    // Create an asynchronous channel, obtaining the Sender and Receiver endpoints.
    let (tx, rx) = mpsc::channel();

    // The spawned thread takes ownership of the Sender and sends a value through it.
    thread::spawn(move || {
        tx.send(123).unwrap();
    });

    // recv() blocks until a message is available, returning a Result.
    let result = rx.recv();
    println!("{:?}", result);
}

Applications

In addition to their fundamental use for interprocess communication, channels can be used as a primitive to implement various other concurrent programming constructs which can be realized as streams. For example, channels can be used to construct futures and promises, where a future is a one-element channel, and a promise is a process that sends to the channel, fulfilling the future. [4] Similarly, iterators can be constructed directly from channels. [5]
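
As a hedged sketch of both patterns, written in Go in the spirit of the cited Go Language Patterns but not taken from them (the helper names future and ints are invented here for illustration), a future can be modeled as a one-element buffered channel that a separate goroutine fulfils, and an iterator as a goroutine that sends successive values on a channel and then closes it:

package main

import "fmt"

// future returns a one-element channel that will eventually hold the result;
// the spawned goroutine plays the role of the promise that fulfils it.
func future(compute func() int) <-chan int {
	f := make(chan int, 1)
	go func() {
		f <- compute()
	}()
	return f
}

// ints returns a channel-backed iterator producing the values 0..n-1.
func ints(n int) <-chan int {
	out := make(chan int)
	go func() {
		for i := 0; i < n; i++ {
			out <- i
		}
		close(out)
	}()
	return out
}

func main() {
	f := future(func() int { return 6 * 7 })
	for v := range ints(3) {
		fmt.Println("iterated", v)
	}
	fmt.Println("future resolved to", <-f) // blocks until the future is fulfilled
}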

List of implementations

Related Research Articles

<span class="mw-page-title-main">Thread (computing)</span> Smallest sequence of programmed instructions that can be managed independently by a scheduler

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process.

In computer science, threaded code is a programming technique where the code has a form that essentially consists entirely of calls to subroutines. It is often used in compilers, which may generate code in that form or be implemented in that form themselves. The code may be processed by an interpreter or it may simply be a sequence of machine code call instructions.

<span class="mw-page-title-main">Inter-process communication</span> How computer operating systems enable data sharing

In computer science, inter-process communication (IPC), also spelled interprocess communication, refers to the mechanisms provided by an operating system for processes to manage shared data. Typically, applications using IPC are categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing.

Message Passing Interface (MPI) is a standardized and portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines that are useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several open-source MPI implementations, which fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.

Coroutines are computer program components that allow execution to be suspended and resumed, generalizing subroutines for cooperative multitasking. Coroutines are well-suited for implementing familiar program components such as cooperative tasks, exceptions, event loops, iterators, infinite lists and pipes.

In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread; for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress. "Non-blocking" was used as a synonym for "lock-free" in the literature until the introduction of obstruction-freedom in 2003.

In computer science, message passing is a technique for invoking behavior on a computer. The invoking program sends a message to a process and relies on that process and its supporting infrastructure to then select and run some appropriate code. Message passing differs from conventional programming where a process, subroutine, or function is directly invoked by name. Message passing is key to some models of concurrency and object-oriented programming.

Stackless Python, or Stackless, is a Python programming language interpreter, so named because it avoids depending on the C call stack for its own stack. In practice, Stackless Python uses the C stack, but the stack is cleared between function calls. The most prominent feature of Stackless is microthreads, which avoid much of the overhead associated with usual operating system threads. In addition to Python features, Stackless also adds support for coroutines, communication channels, and task serialization.

Concurrent ML (CML) is a multi-paradigm, general-purpose, high-level functional programming language. It is a concurrent extension of the Standard ML dialect of ML, characterized by its ability to allow the creation of composable communication abstractions that are first-class rather than built into the language. The design of CML and its primitive operations have been adopted in several other programming languages, such as GNU Guile, Racket, and Manticore.

In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.

Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts.

Handel-C is a high-level hardware description language aimed at low-level hardware and is most commonly used in programming FPGAs. Handel-C is to hardware design what the first high-level programming languages were to programming CPUs. It is a Turing-complete, rich subset of the C programming language, with an emphasis on parallel computing.

In computer science, synchronization is the task of coordinating multiple processes to join up or handshake at a certain point, in order to reach an agreement or commit to a certain sequence of actions.

Thrift is an interface definition language and binary communication protocol used for defining and creating services for programming languages. It was developed by Facebook. Since 2020, it is an open source project in the Apache Software Foundation.

Joins is an asynchronous concurrent computing API (Join-pattern) from Microsoft Research for the .NET Framework. It is based on join calculus and makes the concurrency constructs of the Cω language available as a CLI assembly that any CLI compliant language can use.

In computing, a memory model describes the interactions of threads through memory and their shared use of data.

<span class="mw-page-title-main">Go (programming language)</span> Programming language

Go is a statically typed, compiled high-level programming language designed at Google by Robert Griesemer, Rob Pike, and Ken Thompson. It is syntactically similar to C, but also has memory safety, garbage collection, structural typing, and CSP-style concurrency. It is often referred to as Golang because of its former domain name, golang.org, but its proper name is Go.

Synchronous Interprocess Messaging Project for LINUX (SIMPL) is a free and open-source project that allows QNX-style synchronous message passing by adding a Linux library that uses user-space techniques such as shared memory and Unix pipes to implement SendMssg/ReceiveMssg/ReplyMssg inter-process messaging mechanisms.

Join-patterns provide a way to write concurrent, parallel and distributed computer programs by message passing. Compared to the use of threads and locks, this is a high-level programming model that uses communication constructs to abstract the complexity of the concurrent environment and to allow scalability. Its focus is on the execution of a chord between messages atomically consumed from a group of channels.

In computer programming, several language mechanisms exist for exception handling. The term exception is typically used to denote a data structure storing information about an exceptional condition. One mechanism to transfer control, or raise an exception, is known as a throw; the exception is said to be thrown. Execution is transferred to a catch.

References

  1. "XMOS Programming Guide | XMOS". Archived from the original on 2016-03-04. Retrieved 2015-05-10.
  2. "Effective Go - the Go Programming Language".
  3. "Channels - Rust By Example". doc.rust-lang.org. Retrieved 28 November 2020.
  4. "Futures Archived 2020-12-04 at the Wayback Machine ", Go Language Patterns Archived 2020-11-11 at the Wayback Machine
  5. "Iterators Archived 2020-10-15 at the Wayback Machine ", Go Language Patterns Archived 2020-11-11 at the Wayback Machine
  6. Sufrin, Bernard (2021-07-13), ThreadCSO (PDF), retrieved 2023-02-17
  7. Sufrin, Bernard (2021-07-13), ThreadCSO , retrieved 2023-02-17
  8. "stlab is the ongoing work of what was Adobe's Software Technology Lab. The Adobe Source Libraries (ASL), Platform Libraries, and new stlab libraries are hosted on github". 2021-01-31.