Reactor pattern

The reactor software design pattern is an event handling strategy that can respond to many potential service requests concurrently. The pattern's key component is an event loop, running in a single thread or process, which demultiplexes incoming requests and dispatches them to the correct request handler. [1]

By relying on event-based mechanisms rather than blocking I/O or multi-threading, a reactor can handle many concurrent I/O bound requests with minimal delay. [2] A reactor also allows for easily modifying or expanding specific request handler routines, though the pattern does have some drawbacks and limitations. [1]

With its balance of simplicity and scalability, the reactor has become a central architectural element in several server applications and software frameworks for networking. Derivations such as the multireactor and proactor also exist for special cases where even greater throughput, performance, or request complexity are necessary. [1] [2] [3] [4]

Overview

Practical considerations for the client–server model in large networks, such as the C10k problem for web servers, were the original motivation for the reactor pattern. [5]

A naive approach to handle service requests from many potential endpoints, such as network sockets or file descriptors, is to listen for new requests from within an event loop, then immediately read the earliest request. Once the entire request has been read, it can be processed and forwarded on by directly calling the appropriate handler. An entirely "iterative" server like this, which handles one request from start-to-finish per iteration of the event loop, is logically valid. However, it will fall behind once it receives multiple requests in quick succession. The iterative approach cannot scale because reading the request blocks the server's only thread until the full request is received, and I/O operations are typically much slower than other computations. [2]
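
A minimal sketch of such an iterative server, written in Python for illustration (the echo behaviour and port number are assumptions, not part of the pattern), shows where the blocking occurs:

```python
import socket

# A purely iterative server: one request is read, processed, and answered
# per pass through the loop, so a single slow client stalls everyone else.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8000))
server.listen()

while True:                       # the event loop
    conn, addr = server.accept()  # blocks until a client connects
    with conn:
        data = conn.recv(4096)    # blocks until the request arrives
        conn.sendall(data)        # "process" the request (echo it back)
```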

One strategy to overcome this limitation is multi-threading: by immediately splitting off each new request into its own worker thread, the first request no longer blocks the event loop, which can immediately iterate and handle another request. This "thread per connection" design scales better than a purely iterative one, but it still contains multiple inefficiencies and will struggle past a certain point. From the standpoint of underlying system resources, each new thread or process imposes overhead costs in memory and processing time (due to context switching). Nor does it resolve the fundamental inefficiency of each thread sitting idle while it waits for I/O to finish. [1] [2]
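
A hedged sketch of the "thread per connection" variant, again assuming an illustrative echo service, hands each accepted connection to its own worker thread:

```python
import socket
import threading

def handle(conn: socket.socket) -> None:
    # Worker thread: it still blocks on I/O, but now blocks only itself.
    with conn:
        data = conn.recv(4096)
        conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8000))
server.listen()

while True:
    conn, _addr = server.accept()
    # One thread per connection: the accept loop never waits on a request,
    # but each thread costs memory and context-switching time.
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```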

From a design standpoint, both approaches also tightly couple the general demultiplexer with the specific request handlers, making the server code brittle and tedious to modify. These considerations suggest a few major design decisions:

  1. Retain a single-threaded event handler; multi-threading introduces overhead and complexity without resolving the real issue of blocking I/O
  2. Use an event notification mechanism to demultiplex requests only after I/O is complete (so I/O is effectively non-blocking)
  3. Register request handlers as callbacks with the event handler for better separation of concerns

Combining these insights leads to the reactor pattern, which balances the advantages of single-threading with high throughput and scalability. [1] [2]

Usage

The reactor pattern can be a good starting point for any concurrent, event-handling problem. The pattern is not restricted to network sockets either; hardware I/O, file system or database access, inter-process communication, and even abstract message-passing systems are all possible use cases. [citation needed]

However, the reactor pattern does have limitations, a major one being the use of callbacks, which make program analysis and debugging more difficult, a problem common to designs with inverted control. [1] The simpler thread-per-connection and fully iterative approaches avoid this and can be valid solutions if scalability or high throughput is not required. [note 1] [citation needed]

Single-threading can also become a drawback in use-cases that require maximum throughput, or when requests involve significant processing. Different multi-threaded designs can overcome these limitations, and in fact, some still use the reactor pattern as a sub-component for handling events and I/O. [1]

Applications

The reactor pattern (or a variant of it) has found a place in many web servers, application servers, and networking frameworks, including NGINX, Node.js, Twisted, the Perl Object Environment (POE), the POCO C++ Libraries, and the Spring Framework's reactive stack. [4] [7] [8]

Structure

A reactive application consists of several moving parts and will rely on some support mechanisms: [1]

Handle
An identifier and interface to a specific request, providing access to its I/O and data. This will often take the form of a socket, file descriptor, or similar mechanism, which most modern operating systems provide.
Demultiplexer
An event notifier that can efficiently monitor the status of a handle, then notify other subsystems of a relevant status change (typically an I/O handle becoming "ready to read"). Traditionally this role was filled by the select() system call, but more contemporary examples include epoll, kqueue, and IOCP.
Dispatcher
The actual event loop of the reactive application. This component maintains the registry of valid event handlers, then invokes the appropriate handler when an event is raised.
Event Handler
Also known as a request handler, this is the specific logic for processing one type of service request. The reactor pattern suggests registering these dynamically with the dispatcher as callbacks for greater flexibility. By default, a reactor does not use multi-threading but invokes a request handler within the same thread as the dispatcher.
Event Handler Interface
An abstract interface class representing the general properties and methods of an event handler. Each specific handler must implement this interface, and the dispatcher operates on the event handlers through it.
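
The following minimal sketch shows how these parts can map onto code. It uses Python's selectors module as the demultiplexer (it wraps select, epoll, or kqueue depending on the platform); the echo logic, class names, and port number are illustrative assumptions rather than part of the pattern:

```python
import selectors
import socket

selector = selectors.DefaultSelector()        # Demultiplexer (select/epoll/kqueue)

class EchoHandler:
    """Event handler: application-specific logic for one connection."""
    def __init__(self, conn: socket.socket):
        self.conn = conn                      # Handle (the client socket)
        conn.setblocking(False)
        selector.register(conn, selectors.EVENT_READ, self)   # register as callback

    def handle_event(self) -> None:           # common event handler interface
        data = self.conn.recv(4096)
        if data:
            # A complete reactor would also register for EVENT_WRITE before
            # large writes; a short echo is enough for this sketch.
            self.conn.send(data)
        else:                                  # client closed the connection
            selector.unregister(self.conn)
            self.conn.close()

class AcceptHandler:
    """Event handler for the listening socket: creates per-connection handlers."""
    def __init__(self, sock: socket.socket):
        self.sock = sock
        sock.setblocking(False)
        selector.register(sock, selectors.EVENT_READ, self)

    def handle_event(self) -> None:
        conn, _addr = self.sock.accept()
        EchoHandler(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 8000))
server.listen()
AcceptHandler(server)

while True:                                    # Dispatcher: the event loop
    for key, _mask in selector.select():       # wait until some handle is ready
        key.data.handle_event()                # invoke the registered handler
```

Here the selector plays the demultiplexer, the while loop is the dispatcher, the sockets are the handles, and the handler classes implement a shared handle_event() interface.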

Variants

The standard reactor pattern is sufficient for many applications, but for particularly demanding ones, tweaks can provide even more power at the price of extra complexity.

One basic modification is to invoke event handlers in their own threads for more concurrency. Running the handlers in a thread pool, rather than spinning up new threads as needed, will further simplify the multi-threading and minimize overhead. This makes the thread pool a natural complement to the reactor pattern in many use-cases. [2]
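
As a hedged illustration of this combination, building on the dispatcher loop sketched under Structure (the pool size of eight is an arbitrary assumption), the dispatcher can submit each ready handler to a pool instead of running it inline:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=8)       # created once; threads are reused

while True:                                    # dispatcher / event loop
    for key, _mask in selector.select():
        # Hand the ready event to a pool thread so slow request processing
        # does not stall the event loop. A complete implementation would also
        # suppress further events for this handle (e.g. unregister it) until
        # the pool task finishes, to avoid dispatching the same handle twice.
        pool.submit(key.data.handle_event)
```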

Another way to maximize throughput is to partly reintroduce the approach of the "thread per connection" server, with replicated dispatchers (event loops) running concurrently. However, the number of dispatchers is chosen to match the available CPU cores of the underlying hardware rather than the number of connections.

Known as a multireactor, this variant ensures a dedicated server is fully using the hardware's processing power. Because the distinct threads are long-running event loops, the overhead of creating and destroying threads is limited to server startup and shutdown. With requests distributed across independent dispatchers, a multireactor also provides better availability and robustness; should an error occur and a single dispatcher fail, it will only interrupt requests allocated to that event loop. [3] [4]
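
One possible sketch of a multireactor, assuming a Linux- or BSD-style SO_REUSEPORT socket option so that each worker process can accept on the same port (other designs share a single listening socket instead), forks one long-running event loop per CPU core:

```python
import multiprocessing
import os
import socket

def run_reactor() -> None:
    # Each worker process owns a long-lived dispatcher / event loop.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_REUSEPORT (Linux/BSD) lets every worker bind the same port; the
    # kernel then distributes incoming connections across the workers.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(("0.0.0.0", 8000))
    sock.listen()
    # ... register an accept handler and run the selector loop exactly as in
    # the single-reactor sketch under Structure ...

if __name__ == "__main__":
    # One event loop per CPU core; process creation and destruction is
    # limited to server startup and shutdown.
    workers = [multiprocessing.Process(target=run_reactor)
               for _ in range(os.cpu_count() or 1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```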

For particularly complex services, where synchronous and asynchronous demands must be combined, one other alternative is the proactor pattern. This pattern is more intricate than a reactor, with its own engineering details, but it still makes use of a reactor subcomponent to solve the problem of blocking I/O. [3]

Notes

  1. That said, a rule-of-thumb in software design is that if application demands can potentially increase past an assumed limit, one should expect that someday they will.

References

  1. Schmidt, Douglas C. (1995). "Chapter 29: Reactor: An Object Behavioral Pattern for Demultiplexing and Dispatching Handles for Synchronous Events" (PDF). In Coplien, James O. (ed.). Pattern Languages of Program Design. Vol. 1 (1st ed.). Addison-Wesley. ISBN 9780201607345.
  2. Devresse, Adrien (20 June 2014). "Efficient parallel I/O on multi-core architectures" (PDF). 2nd Thematic CERN School of Computing. CERN. Archived (PDF) from the original on 8 August 2022. Retrieved 14 September 2023.
  3. Escoffier, Clement; Finnegan, Ken (November 2021). "Chapter 4. Design Principles of Reactive Systems". Reactive Systems in Java. O'Reilly Media. ISBN 9781492091721.
  4. Garrett, Owen (10 June 2015). "Inside NGINX: How We Designed for Performance & Scale". NGINX. F5, Inc. Archived from the original on 20 August 2023. Retrieved 10 September 2023.
  5. Kegel, Dan (5 February 2014). "The C10k problem". Dan Kegel's Web Hostel. Archived from the original on 6 September 2023. Retrieved 10 September 2023.
  6. Bonér, Jonas (15 June 2022). "The Reactive Patterns: 3. Isolate Mutations". The Reactive Principles. Retrieved 20 September 2023.
  7. "Network Programming: Writing network and internet applications" (PDF). POCO Project. Applied Informatics Software Engineering GmbH. 2010. pp. 21–22. Retrieved 20 September 2023.
  8. Stoyanchev, Rossen (9 February 2016). "Reactive Spring". Spring.io. Retrieved 20 September 2023.
