Reentrancy (computing)

Reentrancy is a programming concept whereby a function or subroutine can be interrupted and then safely resumed before it finishes executing, meaning it can be called again before its previous execution completes. Reentrant code is designed to behave safely and predictably when multiple instances of the same function run simultaneously or in quick succession. A computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on multiple processors, or if, on a single-processor system, its execution can be interrupted and a new execution of it can be safely started (it can be "re-entered"). The interruption could be caused by an internal action such as a jump or call, or by an external action such as an interrupt or signal, unlike recursion, where new invocations can only be caused by an internal call.

This definition originates from multiprogramming environments, where multiple processes may be active concurrently and where the flow of control could be interrupted by an interrupt and transferred to an interrupt service routine (ISR) or "handler" subroutine. Any subroutine used by the handler that could potentially have been executing when the interrupt was triggered should be reentrant. Similarly, code shared by two processors accessing shared data should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory.

Reentrancy is neither necessary nor sufficient for thread-safety in multi-threaded environments. In other words, a reentrant subroutine can be thread-safe,[1] but is not guaranteed to be. Conversely, thread-safe code need not be reentrant (see below for examples).

Other terms used for reentrant programs include "sharable code".[2] Reentrant subroutines are sometimes marked in reference material as being "signal safe".[3] Reentrant programs are often[note 1] "pure procedures".

Background

Reentrancy is not the same thing as idempotence, in which the function may be called more than once yet generate exactly the same output as if it had only been called once. Generally speaking, a function produces output data based on some input data (though both are optional, in general). Shared data could be accessed by any function at any time. If data can be changed by any function (and none keeps track of those changes), there is no guarantee for those sharing a datum that it still holds the same value as at any time before.

Data has a characteristic called scope, which describes where in a program the data may be used. Data scope is either global (outside the scope of any function and with an indefinite extent) or local (created each time a function is called and destroyed upon exit).

Local data is not shared by any routines, re-entering or not; therefore, it does not affect re-entrance. Global data is defined outside functions and can be accessed by more than one function, either in the form of global variables (data shared between all functions) or as static variables (data shared by all invocations of the same function). In object-oriented programming, global data is defined in the scope of a class and can be private, making it accessible only to functions of that class. There is also the concept of instance variables, where a class variable is bound to a class instance. For these reasons, in object-oriented programming, the term "global" is usually reserved for the data accessible outside of the class (public), and for the data independent of class instances (static).
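
To make these scopes concrete, here is a minimal C sketch (the variable and function names are illustrative only, not from any particular codebase):

int g_shared;                 /* global: one copy, visible to every function */

void counter(void)
{
    static int s_calls;       /* static: one copy, shared by all invocations of counter() */
    int n_local = 0;          /* local: a fresh copy for each invocation */

    g_shared++;               /* affects every function that reads g_shared */
    s_calls++;                /* persists across calls to counter() */
    n_local++;                /* discarded when counter() returns */
    (void)n_local;            /* silence unused-variable warnings */
}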

Reentrancy is distinct from, but closely related to, thread-safety. A function can be thread-safe and still not reentrant. For example, a function could be wrapped in a mutex (which avoids problems in multithreading environments), but if that function were used in an interrupt service routine, it could deadlock waiting for the first execution to release the mutex. The key to avoiding confusion is that reentrancy refers to only one thread executing; it is a concept from the time when no multitasking operating systems existed.

Rules for reentrancy

Reentrant code may not hold any static or global non-constant data without synchronization.
Reentrant functions can work with global data. For example, a reentrant interrupt service routine could grab a piece of hardware status to work with (e.g., a serial port read buffer), which is not only global but volatile. Still, typical use of static variables and global data is not advised: except in sections of code that are synchronized, only atomic read-modify-write instructions should be used on these variables (it should not be possible for an interrupt or signal to arrive during the execution of such an instruction). Note that in C, even a read or write is not guaranteed to be atomic; it may be split into several reads or writes.[4] The C standard and SUSv3 provide sig_atomic_t for this purpose, although with guarantees only for simple reads and writes, not for incrementing or decrementing.[5] More complex atomic operations are available in C11, which provides stdatomic.h, as sketched below.
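
A minimal sketch of the difference, assuming a hypothetical signal handler (the names are illustrative): sig_atomic_t covers only simple loads and stores, such as setting a flag, while a C11 atomic from stdatomic.h also makes read-modify-write operations such as incrementing safe:

#include <signal.h>
#include <stdatomic.h>

volatile sig_atomic_t done;       /* safe only for simple reads and writes */
atomic_int event_count;           /* C11 atomic: read-modify-write is also safe */

void handler(int sig)
{
    (void)sig;
    done = 1;                          /* plain store: within sig_atomic_t's guarantees */
    atomic_fetch_add(&event_count, 1); /* increment: requires a C11 atomic */
}
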
Reentrant code may not modify itself without synchronization.
The operating system might allow a process to modify its code. There are various reasons for this (e.g., blitting graphics quickly), but this generally requires synchronization to avoid problems with reentrancy.

It may, however, modify itself if it resides in its own unique memory. That is, if each new invocation uses a different physical machine code location where a copy of the original code is made, it will not affect other invocations even if it modifies itself during execution of that particular invocation (thread).

Reentrant code may not call non-reentrant computer programs or routines without synchronization.
Multiple levels of user, object, or process priority or multiprocessing usually complicate the control of reentrant code. It is important to keep track of any access or side effects that are done inside a routine designed to be reentrant.
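
For example, most of the C standard I/O library is not reentrant, so a signal handler that calls printf() violates this rule; a minimal sketch of the anti-pattern (the handler name is illustrative):

#include <signal.h>
#include <stdio.h>

/* Anti-pattern: printf() is not reentrant, because it operates on
   shared stdio state. If this handler interrupts another printf()
   in progress, output may be corrupted or the program may deadlock. */
void handler(int sig)
{
    printf("caught signal %d\n", sig);  /* unsafe in a handler */
}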

Reentrancy of a subroutine that operates on operating-system resources or non-local data depends on the atomicity of the respective operations. For example, if the subroutine modifies a 64-bit global variable on a 32-bit machine, the operation may be split into two 32-bit operations, and thus, if the subroutine is interrupted while executing, and called again from the interrupt handler, the global variable may be in a state where only 32 bits have been updated. The programming language might provide atomicity guarantees for interruption caused by an internal action such as a jump or call. Then the function f in an expression like (global:=1) + (f()), where the order of evaluation of the subexpressions might be arbitrary in a programming language, would see the global variable either set to 1 or to its previous value, but not in an intermediate state where only part has been updated. (The latter can happen in C, because the expression has no sequence point.) The operating system might provide atomicity guarantees for signals, such as a system call interrupted by a signal not having a partial effect. The processor hardware might provide atomicity guarantees for interrupts, such as interrupted processor instructions not having partial effects.
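
A hedged sketch of the 64-bit case described above (the names are illustrative):

#include <stdint.h>

uint64_t shared_value;   /* illustrative 64-bit global */

void update(void)
{
    /* On a 32-bit machine this single assignment typically compiles to
       two 32-bit stores; an interrupt arriving between them can observe
       a value in which only half of the bits have been updated. */
    shared_value = 0x0000000100000001ULL;
}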

Examples

To illustrate reentrancy, this article uses as an example a C utility function, swap(), that takes two pointers and transposes their values, and an interrupt-handling routine that also calls the swap function.

Neither reentrant nor thread-safe

This is an example swap function that fails to be reentrant or thread-safe. Since the tmp variable is globally shared, without synchronization, among all concurrent instances of the function, one instance may interfere with the data relied upon by another. As such, it should not be used in the interrupt service routine isr():

int tmp;

void swap(int *x, int *y)
{
    tmp = *x;
    *x = *y;
    /* Hardware interrupt might invoke isr() here. */
    *y = tmp;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Thread-safe but not reentrant

The function swap() in the preceding example can be made thread-safe by making tmp thread-local. It still fails to be reentrant, and this will continue to cause problems if isr() is called in the same context as a thread already executing swap():

_Thread_local int tmp;

void swap(int *x, int *y)
{
    tmp = *x;
    *x = *y;
    /* Hardware interrupt might invoke isr() here. */
    *y = tmp;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant and thread-safe

An implementation of swap() that allocates tmp on the stack instead of globally, and that is called only with unshared variables as parameters,[note 2] is both thread-safe and reentrant. It is thread-safe because the stack is local to a thread, and a function acting only on local data will always produce the expected result; there is no access to shared data and therefore no data race.

void swap(int *x, int *y)
{
    int tmp;
    tmp = *x;
    *x = *y;
    *y = tmp;
    /* Hardware interrupt might invoke isr() here. */
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant interrupt handler

A reentrant interrupt handler is an interrupt handler that re-enables interrupts early in the interrupt handler. This may reduce interrupt latency.[6] In general, while programming interrupt service routines, it is recommended to re-enable interrupts as soon as possible in the interrupt handler. This practice helps to avoid losing interrupts.[7]
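
A minimal sketch of the idea, assuming hypothetical platform primitives (the function names below are placeholders, not a real API; actual names depend on the hardware and operating system):

/* Hypothetical primitives standing in for platform-specific operations. */
void acknowledge_interrupt(void);
void enable_interrupts(void);
void handle_event(void);   /* must itself be reentrant */

void isr(void)
{
    acknowledge_interrupt();  /* clear the source so the line can assert again */
    enable_interrupts();      /* re-enable early: this handler may now nest */
    handle_event();           /* runs with interrupts enabled */
}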

Further examples

In the following code, neither function f nor g is reentrant.

int v = 1;

int f()
{
    v += 2;
    return v;
}

int g()
{
    return f() + 2;
}

In the above, f() depends on a non-constant global variable v; thus, if f() is interrupted during execution by an ISR which modifies v, then reentry into f() will return the wrong value of v. The value of v and, therefore, the return value of f, cannot be predicted with confidence: they will vary depending on whether an interrupt modified v during f's execution. Hence, f is not reentrant. Neither is g, because it calls f, which is not reentrant.

These slightly altered versions are reentrant:

int f(int i)
{
    return i + 2;
}

int g(int i)
{
    return f(i) + 2;
}

In the following, the function is thread-safe, but not (necessarily) reentrant:

int function()
{
    mutex_lock();
    // ...
    // function body
    // ...
    mutex_unlock();
}

In the above, function() can be called by different threads without any problem. But if the function is used in a reentrant interrupt handler and a second interrupt arises inside the function, the second routine will hang forever: the first invocation holds the mutex and, having been preempted, can never run to release it. And because interrupt servicing can disable other interrupts, the whole system could suffer.

Notes

  1. A program that serializes self-modification may be reentrant, and a pure procedure that updates global data without proper serialization may fail to be reentrant.
  2. If isr() called swap() with one or two global variables as parameters, then swap() would not be reentrant.

See also

Thread safety
Closure (computer programming)
Inline expansion
Parameter (computer programming)
Compare-and-swap
Interrupt handler
Signal (IPC)
Cilk
Linearizability
Nested function
Monitor (synchronization)
Thread-local storage
Call stack
Fetch-and-add
Evaluation strategy
Procedural parameter
Escape analysis
Pure function
Interrupts in 65xx processors
Function (computer programming)

References

  1. Kerrisk 2010, p. 657.
  2. Ralston 2000, pp. 1514–1515.
  3. "pthread_cond_init()--Initialize Condition Variable". IBM Knowledge Center. Retrieved 2019-10-05.
  4. Preshing, Jeff (2013-06-18). "Atomic vs. Non-Atomic Operations". Preshing on Programming. Archived from the original on 2014-12-03. Retrieved 2018-04-24.
  5. Kerrisk 2010, p. 428.
  6. Sloss et al. 2004, p. 342.
  7. Regehr, John (2006). "Safe and Structured Use of Interrupts in Real-time and Embedded Software" (PDF). Handbook of Real-Time and Embedded Systems. CRC Press. Archived (PDF) from the original on 2007-08-24, via the author's website at the University of Utah School of Computing.

Works cited
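
Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press.
Ralston, Anthony; Reilly, Edwin D.; Hemmendinger, David, eds. (2000). Encyclopedia of Computer Science (4th ed.). Nature Publishing Group.
Sloss, Andrew N.; Symes, Dominic; Wright, Chris (2004). ARM System Developer's Guide: Designing and Optimizing System Software. Morgan Kaufmann.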
