Reentrant mutex

In computer science, a reentrant mutex (also known as a recursive mutex or recursive lock) is a particular type of mutual exclusion (mutex) device that may be locked multiple times by the same process or thread without causing a deadlock.

While any attempt to perform the "lock" operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it.
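
For instance, Java's java.util.concurrent.locks.ReentrantLock implements exactly this counting behavior and exposes the count through getHoldCount(). A minimal sketch (the class name HoldCountDemo is illustrative, not part of any API):

    import java.util.concurrent.locks.ReentrantLock;

    public class HoldCountDemo {
        public static void main(String[] args) {
            ReentrantLock lock = new ReentrantLock();
            lock.lock();                              // first acquisition: hold count 1
            lock.lock();                              // same thread re-acquires: hold count 2
            System.out.println(lock.getHoldCount());  // prints 2
            lock.unlock();                            // hold count back to 1; lock still held
            lock.unlock();                            // hold count 0; other threads may now lock
        }
    }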

Motivation

Recursive mutexes solve the problem of non-reentrancy with regular mutexes: if a function that takes a lock and executes a callback is itself called by the callback, deadlock ensues. [1] In pseudocode, that is the following situation:

    var m : Mutex  // A non-recursive mutex, initially unlocked.

    function lock_and_call(i : Integer)
        m.lock()
        callback(i)
        m.unlock()

    function callback(i : Integer)
        if i > 0
            lock_and_call(i - 1)

    lock_and_call(1)  // Invoking the function

Given these definitions, the function call lock_and_call(1) will cause the following sequence of events:

  1. m.lock() (mutex becomes locked)
  2. callback(1)
  3. lock_and_call(0), because i > 0
  4. m.lock(): deadlock, because m is already locked, so the executing thread blocks, waiting for itself

Replacing the mutex with a recursive one solves the problem, because the final m.lock() will succeed without blocking.
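
The same fix can be seen in Java, where java.util.concurrent.locks.ReentrantLock is recursive by design. The following translation of the pseudocode (class and method names are illustrative) completes normally; with a non-recursive mutex, the inner lock() would block forever:

    import java.util.concurrent.locks.ReentrantLock;

    public class LockAndCall {
        private static final ReentrantLock m = new ReentrantLock();

        static void lockAndCall(int i) {
            m.lock();
            try {
                callback(i);
            } finally {
                m.unlock();
            }
        }

        static void callback(int i) {
            if (i > 0) {
                lockAndCall(i - 1);  // re-enters lockAndCall while m is still held
            }
        }

        public static void main(String[] args) {
            lockAndCall(1);  // completes: the inner m.lock() succeeds without blocking
        }
    }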

Practical use

W. Richard Stevens notes that recursive locks are "tricky" to use correctly, and recommends their use for adapting single-threaded code without changing APIs, but "only when no other solution is possible". [2]

The Java language's native synchronization mechanism, the monitor, uses recursive locks. Syntactically, a lock is a block of code introduced by the 'synchronized' keyword, followed in parentheses by any Object reference to be used as the mutex. Inside the synchronized block, the given object can be used as a condition variable by calling wait(), notify(), or notifyAll() on it. Thus all Objects are both recursive mutexes and condition variables. [3] A brief sketch follows.
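
The hypothetical Mailbox class below illustrates both properties: put() re-enters the object's own monitor through a private helper without deadlocking, and the same object serves as the condition variable for wait()/notifyAll():

    public class Mailbox {
        private String message = null;

        public synchronized void put(String m) {
            message = m;
            log();        // re-enters this object's monitor: no self-deadlock
            notifyAll();  // the object doubles as a condition variable
        }

        private synchronized void log() {
            System.out.println("stored: " + message);
        }

        public synchronized String take() throws InterruptedException {
            while (message == null) {
                wait();   // releases the monitor while waiting
            }
            String m = message;
            message = null;
            return m;
        }
    }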

Example

  1. Thread A calls function F which acquires a reentrant lock for itself before proceeding
  2. Thread B calls function F and attempts to acquire the same lock, but cannot because Thread A already holds it; Thread B either blocks (it waits) or times out, if a timeout was requested
  3. Thread A's F calls itself recursively. It already owns the lock, so it will not block itself (no deadlock). This is the central idea of a reentrant mutex, and is what makes it different from a regular lock.
  4. Thread B's F is still waiting, or has caught the timeout and worked around it
  5. Thread A's F finishes and releases its lock(s)
  6. Thread B's F can now acquire the reentrant lock and proceed, if it was still waiting (a Java sketch of this scenario follows this list)
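
A minimal sketch of this scenario, assuming Java's ReentrantLock and its timed tryLock() method; the class name FDemo, the recursion depth, and the timing constants are illustrative:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class FDemo {
        private static final ReentrantLock lock = new ReentrantLock();

        static void f(int depth) throws InterruptedException {
            // Steps 1-2: the owner re-acquires immediately; a non-owner waits up to the timeout.
            if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    Thread.sleep(100);  // simulate work while holding the lock
                    if (depth > 0) {
                        f(depth - 1);   // step 3: recursive re-acquisition, no self-deadlock
                    }
                } finally {
                    lock.unlock();      // step 5: one unlock per lock
                }
            } else {
                // Step 4: caught the timeout and worked around it.
                System.out.println(Thread.currentThread().getName() + " timed out");
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> { try { f(3); } catch (InterruptedException e) { } }, "A");
            Thread b = new Thread(() -> { try { f(3); } catch (InterruptedException e) { } }, "B");
            a.start();
            b.start();
            a.join();
            b.join();
        }
    }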

Software emulation

Software emulation can be accomplished using the following structure, in which a "control condition" (a plain lock paired with a condition variable) protects the owner and acquisition-count fields; a Java sketch of the scheme follows the two step lists.

Acquisition

  1. Acquire the control condition.
  2. If the owner is set and not the current thread, wait for the control condition to be notified (this also releases the condition).
  3. Set the owner to the current thread. The owner identifier should have already been cleared at this point unless the acquirer is already the owner.
  4. Increment the acquisition count (should always result in 1 for new owners).
  5. Release the control condition.

Release

  1. Acquire the control condition, asserting that the owner is the releaser.
  2. Decrement the acquisition count, asserting that the resulting count is greater than or equal to zero.
  3. If the acquisition count is zero, clear the owner information and notify the control condition.
  4. Release the control condition.
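
A minimal Java sketch of this scheme, using an object's built-in monitor as the control condition; the class name EmulatedReentrantLock and its methods are illustrative, not a standard API:

    public final class EmulatedReentrantLock {
        private Thread owner = null;  // current owner, or null when unowned
        private int count = 0;        // acquisition count held by the owner

        public synchronized void lock() throws InterruptedException {
            // Acquisition step 2: wait while another thread owns the lock
            // (wait() releases the control condition while blocked).
            while (owner != null && owner != Thread.currentThread()) {
                wait();
            }
            // Steps 3-4: record ownership and increment the acquisition count;
            // a new owner's first acquisition always yields a count of 1.
            owner = Thread.currentThread();
            count++;
        }

        public synchronized void unlock() {
            // Release step 1: assert that the releaser is the owner.
            if (owner != Thread.currentThread()) {
                throw new IllegalMonitorStateException("not the owner");
            }
            // Steps 2-3: decrement; when the last hold is released,
            // clear the owner and notify the control condition.
            if (--count == 0) {
                owner = null;
                notifyAll();
            }
        }
    }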


References

  1. Buschmann, Frank; Henney, Kevlin; Schmidt, Douglas C. (2007). Pattern-Oriented Software Architecture: A Pattern Language for Distributed Computing. John Wiley & Sons. p. 374. ISBN 9780470065303.
  2. Stevens, W. Richard; Rago, Stephen A. (2013). Advanced Programming in the UNIX Environment. Addison-Wesley. p. 434.
  3. Hovemeyer, David. "Lecture 17: Java Threads, Synchronization". CS 365: Parallel and Distributed Computing, lecture notes, York College of Pennsylvania. Retrieved 4 June 2015.