Dekker's algorithm

Dekker's algorithm is the first known correct solution to the mutual exclusion problem in concurrent programming, where processes communicate only via shared memory. The solution is attributed to Dutch mathematician Th. J. Dekker by Edsger W. Dijkstra in an unpublished paper on sequential process descriptions[1] and in his manuscript on cooperating sequential processes.[2] It allows two threads to share a single-use resource without conflict, using only shared memory for communication.

It avoids the strict alternation of a naïve turn-taking algorithm, and was one of the first mutual exclusion algorithms to be invented.

Overview

If two processes attempt to enter a critical section at the same time, the algorithm will allow only one process in, based on whose turn it is. If one process is already in the critical section, the other process will busy wait for the first process to exit. This is done by the use of two flags, wants_to_enter[0] and wants_to_enter[1], which indicate an intention to enter the critical section on the part of processes 0 and 1, respectively, and a variable turn that indicates who has priority between the two processes.

Dekker's algorithm can be expressed in pseudocode, as follows. [3]

    variables
        wants_to_enter : array of 2 booleans
        turn : integer

    wants_to_enter[0] ← false
    wants_to_enter[1] ← false
    turn ← 0   // or 1

p0:
    wants_to_enter[0] ← true
    while wants_to_enter[1] {
        if turn ≠ 0 {
            wants_to_enter[0] ← false
            while turn ≠ 0 {
                // busy wait
            }
            wants_to_enter[0] ← true
        }
    }

    // critical section
    ...
    turn ← 1
    wants_to_enter[0] ← false
    // remainder section

p1:
    wants_to_enter[1] ← true
    while wants_to_enter[0] {
        if turn ≠ 1 {
            wants_to_enter[1] ← false
            while turn ≠ 1 {
                // busy wait
            }
            wants_to_enter[1] ← true
        }
    }

    // critical section
    ...
    turn ← 0
    wants_to_enter[1] ← false
    // remainder section

Each process indicates its intention to enter the critical section by setting its flag; the other process's flag is then tested by the outer while loop. If the other process has not flagged intent, the critical section can be entered safely regardless of the current turn. Mutual exclusion is still guaranteed, as neither process can become critical before setting its flag (implying that at least one process will enter the while loop). This also guarantees progress, since no process waits on one that has withdrawn its intent to become critical. If, on the other hand, the other process's flag is set, the while loop is entered and the turn variable establishes which process is permitted to become critical. The process without priority withdraws its intention to enter the critical section until it is given priority again (the inner while loop); the process with priority breaks out of the while loop and enters its critical section.
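
As a concrete illustration, the pseudocode translates fairly directly into C++. The following is a minimal sketch, not a canonical implementation: it assumes C++11 std::atomic variables with their default sequentially consistent ordering (the reasons plain shared variables are not sufficient are discussed under Notes below), and the names dekker_lock, dekker_unlock, worker and counter are illustrative only.

    #include <atomic>
    #include <iostream>
    #include <thread>

    // Shared state, mirroring the pseudocode. std::atomic with the default
    // (sequentially consistent) ordering stands in for the plain shared
    // variables; see the Notes section for why this is needed.
    std::atomic<bool> wants_to_enter[2] = {{false}, {false}};
    std::atomic<int>  turn{0};

    long counter = 0;   // shared data protected by the lock

    void dekker_lock(int self) {
        const int other = 1 - self;
        wants_to_enter[self] = true;
        while (wants_to_enter[other]) {        // outer loop: the other side also wants in
            if (turn != self) {                // not our turn: back off
                wants_to_enter[self] = false;
                while (turn != self) {
                    // busy wait until the other process hands over the turn
                }
                wants_to_enter[self] = true;
            }
        }
    }

    void dekker_unlock(int self) {
        turn = 1 - self;                       // hand priority to the other process
        wants_to_enter[self] = false;
    }

    void worker(int self) {
        for (int i = 0; i < 100000; ++i) {
            dekker_lock(self);
            ++counter;                         // critical section
            dekker_unlock(self);
        }
    }

    int main() {
        std::thread t0(worker, 0), t1(worker, 1);
        t0.join();
        t1.join();
        std::cout << counter << '\n';          // prints 200000 if mutual exclusion holds
    }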

Dekker's algorithm guarantees mutual exclusion, freedom from deadlock, and freedom from starvation. Let us see why the last property holds. Suppose p0 is stuck inside the while wants_to_enter[1] loop forever. There is freedom from deadlock, so eventually p1 will proceed to its critical section and set turn = 0 (and the value of turn will remain unchanged as long as p0 doesn't progress). Eventually p0 will break out of the inner while turn ≠ 0 loop (if it was ever stuck on it). After that it will set wants_to_enter[0] to true and settle down to waiting for wants_to_enter[1] to become false (since turn = 0, it will never do the actions in the while loop). The next time p1 tries to enter its critical section, it will be forced to execute the actions in its while wants_to_enter[0] loop. In particular, it will eventually set wants_to_enter[1] to false and get stuck in the while turn ≠ 1 loop (since turn remains 0). The next time control passes to p0, it will exit the while wants_to_enter[1] loop and enter its critical section.

If the algorithm were modified by performing the actions in the while wants_to_enter[1] loop without checking if turn = 0, then there is a possibility of starvation. Thus all the steps in the algorithm are necessary.

Notes

One advantage of this algorithm is that it doesn't require special test-and-set (atomic read/modify/write) instructions and is therefore highly portable between languages and machine architectures. One disadvantage is that it is limited to two processes and makes use of busy waiting instead of process suspension. (The use of busy waiting suggests that processes should spend a minimum amount of time inside the critical section.)

Modern operating systems provide mutual exclusion primitives that are more general and flexible than Dekker's algorithm. However, in the absence of actual contention between the two processes, the entry and exit from critical section is extremely efficient when Dekker's algorithm is used.

Many modern CPUs execute their instructions in an out-of-order fashion; even memory accesses can be reordered (see memory ordering). This algorithm won't work on SMP machines equipped with these CPUs without the use of memory barriers.

Additionally, many optimizing compilers can perform transformations that will cause this algorithm to fail regardless of the platform. In many languages, it is legal for a compiler to detect that the flag variables wants_to_enter[0] and wants_to_enter[1] are never accessed in the loop. It can then remove the writes to those variables from the loop, using a process called loop-invariant code motion. It would also be possible for many compilers to detect that the turn variable is never modified by the inner loop, and perform a similar transformation, resulting in a potential infinite loop. If either of these transformations is performed, the algorithm will fail, regardless of architecture.
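
To make the hazard concrete, the sketch below (a deliberately incorrect illustration, not taken from any real implementation) writes process 0's entry protocol with plain, non-atomic variables. Nothing inside the loops changes wants_to_enter[1] or turn from this thread's point of view, so an optimizer is free to hoist both loads out of their loops, turning the busy waits into infinite loops.

    // Deliberately broken: plain variables, no atomics, no volatile.
    bool wants_to_enter[2] = {false, false};
    int  turn = 0;

    void p0_entry_broken() {
        wants_to_enter[0] = true;
        // The compiler may read wants_to_enter[1] once and reuse the value
        // (loop-invariant code motion), since this thread never writes it.
        while (wants_to_enter[1]) {
            if (turn != 0) {
                wants_to_enter[0] = false;
                // Likewise, turn may be read once, making this loop spin forever.
                while (turn != 0) {
                    // busy wait
                }
                wants_to_enter[0] = true;
            }
        }
        // critical section would follow here
    }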

To alleviate this problem, variables shared between the processes must be marked as potentially modified outside the scope of the currently executing context; for example, in C, C++, C# or Java, one would annotate these variables as volatile. Note, however, that the C/C++ volatile attribute only guarantees that the compiler generates code with the proper ordering; it does not include the memory barriers necessary to guarantee in-order execution of that code by the hardware. C++11 atomic variables can be used to guarantee the appropriate ordering requirements: by default, operations on atomic variables are sequentially consistent, so if the wants_to_enter and turn variables are atomic, a naive implementation will "just work". Alternatively, ordering can be guaranteed by the explicit use of separate fences, with the load and store operations themselves using a relaxed ordering.
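
As a sketch of the fence-based variant just described (an illustration under the stated assumptions, not a reference implementation): process 0's entry and exit protocol can use relaxed atomic loads and stores, with sequentially consistent fences supplying the store-to-load ordering that Dekker's algorithm depends on. Process 1's code is symmetric, with the indices and the value written to turn swapped; the function names p0_enter and p0_exit are illustrative only.

    #include <atomic>

    std::atomic<bool> wants_to_enter[2] = {{false}, {false}};
    std::atomic<int>  turn{0};

    void p0_enter() {
        wants_to_enter[0].store(true, std::memory_order_relaxed);
        // Full fence: the store above must become visible before the load below.
        std::atomic_thread_fence(std::memory_order_seq_cst);
        while (wants_to_enter[1].load(std::memory_order_relaxed)) {
            if (turn.load(std::memory_order_relaxed) != 0) {
                wants_to_enter[0].store(false, std::memory_order_relaxed);
                while (turn.load(std::memory_order_relaxed) != 0) {
                    // busy wait
                }
                wants_to_enter[0].store(true, std::memory_order_relaxed);
                std::atomic_thread_fence(std::memory_order_seq_cst);
            }
        }
        // Keep the critical section's accesses from moving above the entry protocol.
        std::atomic_thread_fence(std::memory_order_acquire);
    }

    void p0_exit() {
        // Keep the critical section's accesses from moving below the exit protocol.
        std::atomic_thread_fence(std::memory_order_release);
        turn.store(1, std::memory_order_relaxed);
        wants_to_enter[0].store(false, std::memory_order_relaxed);
    }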

See also

  - Edsger W. Dijkstra
  - Mutual exclusion
  - Critical section
  - Semaphore (programming)
  - Lock (computer science)
  - Monitor (synchronization)
  - Peterson's algorithm
  - Lamport's bakery algorithm
  - Szymański's algorithm
  - Ticket lock
  - Test-and-set
  - Fetch-and-add
  - Non-blocking algorithm
  - Linearizability
  - Release consistency
  - Synchronization (computer science)
  - Producer–consumer problem
  - Dining philosophers problem
  - Sleeping barber problem
  - Interference freedom

References

  1. Dijkstra, Edsger W. Over de sequentialiteit van procesbeschrijvingen (EWD-35). E.W. Dijkstra Archive, Center for American History, University of Texas at Austin (undated, 1962 or 1963). English translation: "About the sequentiality of process descriptions".
  2. Dijkstra, Edsger W. Cooperating sequential processes (EWD-123). E.W. Dijkstra Archive, Center for American History, University of Texas at Austin (September 1965).
  3. Alagarsamy, K. (2003). "Some Myths About Famous Mutual Exclusion Algorithms". ACM SIGACT News. 34 (3): 94–103. doi:10.1145/945526.945527. S2CID 7545330.