Semaphore (programming)

[Figure: a railway semaphore signal. Caption: A semaphore (Dutch: seinpaal, the term used in Dijkstra's original description).]

In computer science, a semaphore is a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system. Semaphores are a type of synchronization primitive. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions.


A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available.

Semaphores are a useful tool in the prevention of race conditions; however, their use is not a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.

The semaphore concept was invented by Dutch computer scientist Edsger Dijkstra in 1962 or 1963, [2] when Dijkstra and his team were developing an operating system for the Electrologica X8. That system eventually became known as the THE multiprogramming system.

Library analogy

Suppose a physical library has 10 identical study rooms, to be used by one student at a time. Students must request a room from the front desk if they wish to use a study room. If no rooms are free, students wait at the desk until someone relinquishes a room. When a student has finished using a room, the student must return to the desk and indicate that one room has become free.

In the simplest implementation, the clerk at the front desk knows only the number of free rooms; this count is accurate only if every student actually uses a room while signed up for it and returns it when finished. When a student requests a room, the clerk decreases this number; when a student releases a room, the clerk increases it. A room can be used for as long as desired, so it is not possible to book rooms ahead of time.

In this scenario the front-desk count-holder represents a counting semaphore, the rooms are the resource, and the students represent processes/threads. The value of the semaphore in this scenario is initially 10, with all rooms empty. When a student requests a room, they are granted access, and the value of the semaphore is changed to 9. After the next student comes, it drops to 8, then 7, and so on. If someone requests a room and the current value of the semaphore is 0, [3] they are forced to wait until a room is freed (when the count is increased from 0). If a room is released while several students are waiting, any policy can be used to select the one who will occupy it (such as FIFO or picking one at random). And of course, a student should inform the clerk about releasing their room only after actually leaving it; otherwise, an awkward situation can arise in which the student is still packing up their textbooks while another student enters the room.
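As a rough illustrative sketch (not part of the original description), the analogy maps directly onto a POSIX counting semaphore: the count is initialized to 10, and each student performs a wait (P) before entering a room and a post (V) after leaving. The thread function, student count, and timing below are invented for the example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_ROOMS 10

static sem_t rooms;                     /* counting semaphore: free rooms */

static void *student(void *arg)
{
    long id = (long)arg;

    sem_wait(&rooms);                   /* P: wait for a free room and take it */
    printf("student %ld entered a room\n", id);
    sleep(1);                           /* study for a while */
    printf("student %ld left the room\n", id);
    sem_post(&rooms);                   /* V: hand the room back to the desk */
    return NULL;
}

int main(void)
{
    pthread_t t[20];

    sem_init(&rooms, 0, NUM_ROOMS);     /* ten rooms initially free */
    for (long i = 0; i < 20; i++)
        pthread_create(&t[i], NULL, student, (void *)i);
    for (int i = 0; i < 20; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&rooms);
    return 0;
}

Because twenty students compete for ten rooms, at most ten threads are in a "room" at any moment; the rest block in sem_wait until a room is returned. (Link with -pthread.)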

Important observations

When used to control access to a pool of resources, a semaphore tracks only how many resources are free; it does not keep track of which of the resources are free. Some other mechanism (possibly involving more semaphores) may be required to select a particular free resource.

The paradigm is especially powerful because the semaphore count may serve as a useful trigger for a number of different actions. The librarian above may turn the lights off in the study hall when there are no students remaining, or may place a sign that says the rooms are very busy when most of the rooms are occupied.

The success of the protocol requires applications to follow it correctly. Fairness and safety are likely to be compromised (which practically means a program may behave slowly, act erratically, hang or crash) if even a single process acts incorrectly. This includes:

  1. requesting a resource and forgetting to release it;
  2. releasing a resource that was never requested;
  3. holding a resource for a long time without needing it;
  4. using a resource without requesting it first (or after releasing it).

Even if all processes follow these rules, multi-resource deadlock may still occur when there are different resources managed by different semaphores and when processes need to use more than one resource at a time, as illustrated by the dining philosophers problem.

Semantics and implementation

Counting semaphores are equipped with two operations, historically denoted as P and V (see § Operation names for alternative names). Operation V increments the semaphore S, and operation P decrements it.

The value of the semaphore S is the number of units of the resource that are currently available. The P operation busy-waits or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed. The V operation is the inverse: it makes a resource available again after the process has finished using it. One important property of the semaphore S is that its value cannot be changed except by using the V and P operations.

A simple way to understand wait (P) and signal (V) operations is:

wait: If the value of the semaphore variable is greater than zero, decrement it by one and continue, having claimed a unit of the resource. Otherwise, the process is blocked (i.e., added to the semaphore's queue) until a unit becomes available.

signal: If processes are blocked on the semaphore, unblock one of them; otherwise, increment the value of the semaphore variable by one, recording that a unit of the resource has been returned.

Many operating systems provide efficient semaphore primitives that unblock a waiting process when the semaphore is incremented. This means that processes do not waste time checking the semaphore value unnecessarily.
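For illustration, a counting semaphore with these blocking semantics can be sketched on top of a POSIX mutex and condition variable; a waiting thread sleeps inside pthread_cond_wait instead of repeatedly checking the count. The csem_t type and function names are invented for this sketch and are not a standard API.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    unsigned int    count;              /* number of available units */
} csem_t;

void csem_init(csem_t *s, unsigned int initial)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = initial;
}

void csem_wait(csem_t *s)               /* P */
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)               /* sleep rather than busy-wait */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;                         /* claim one unit */
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s)             /* V */
{
    pthread_mutex_lock(&s->lock);
    s->count++;                         /* return one unit */
    pthread_cond_signal(&s->nonzero);   /* wake one waiter, if any */
    pthread_mutex_unlock(&s->lock);
}

Operating-system semaphores additionally maintain the queue of blocked processes described below and can give stronger guarantees about which waiter is woken first.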

The counting semaphore concept can be extended with the ability to claim or return more than one "unit" from the semaphore, a technique implemented in Unix. The modified V and P operations are as follows, using square brackets to indicate atomic operations, i.e., operations which appear indivisible from the perspective of other processes:

function V(semaphore S, integer I):
    [S ← S + I]

function P(semaphore S, integer I):
    repeat:
        [if S ≥ I:
            S ← S − I
            break]
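In the same illustrative spirit, the hypothetical csem_t sketch above can be extended so that P and V claim or return I units at a time. Because different waiters may need different numbers of units, the release broadcasts so that every waiter re-checks the count.

void csem_wait_n(csem_t *s, unsigned int n)     /* P(S, I) */
{
    pthread_mutex_lock(&s->lock);
    while (s->count < n)                        /* wait until n units exist */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count -= n;                              /* claim them all at once */
    pthread_mutex_unlock(&s->lock);
}

void csem_signal_n(csem_t *s, unsigned int n)   /* V(S, I) */
{
    pthread_mutex_lock(&s->lock);
    s->count += n;
    pthread_cond_broadcast(&s->nonzero);        /* waiters need differing amounts */
    pthread_mutex_unlock(&s->lock);
}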

However, the remainder of this section refers to semaphores with unary V and P operations, unless otherwise specified.

To avoid starvation, a semaphore has an associated queue of processes (usually with FIFO semantics). If a process performs a P operation on a semaphore that has the value zero, the process is added to the semaphore's queue and its execution is suspended. When another process increments the semaphore by performing a V operation, and there are processes on the queue, one of them is removed from the queue and resumes execution. When processes have different priorities the queue may be ordered by priority, so that the highest priority process is taken from the queue first.

If the implementation does not ensure atomicity of the increment, decrement and comparison operations, then there is a risk of increments or decrements being forgotten, or of the semaphore value becoming negative. Atomicity may be achieved by using a machine instruction that is able to read, modify and write the semaphore in a single operation. In the absence of such a hardware instruction, an atomic operation may be synthesized through the use of a software mutual exclusion algorithm. On uniprocessor systems, atomic operations can be ensured by temporarily suspending preemption or disabling hardware interrupts. This approach does not work on multiprocessor systems where it is possible for two programs sharing a semaphore to run on different processors at the same time. To solve this problem in a multiprocessor system a locking variable can be used to control access to the semaphore. The locking variable is manipulated using a test-and-set-lock command.
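The multiprocessor technique described above can be sketched with C11 atomics: a locking variable manipulated with an atomic test-and-set (atomic_flag) protects the semaphore's count. The spinsem_t type and function names are invented for the example, and this P operation simply retries instead of queueing blocked processes.

#include <stdatomic.h>

typedef struct {
    atomic_flag lock;       /* the locking variable, driven by test-and-set */
    int         count;
} spinsem_t;                /* initialize with: spinsem_t s = { ATOMIC_FLAG_INIT, n }; */

static void spin_acquire(atomic_flag *f)
{
    while (atomic_flag_test_and_set(f))   /* spin until the lock is obtained */
        ;
}

static void spin_release(atomic_flag *f)
{
    atomic_flag_clear(f);
}

void spinsem_P(spinsem_t *s)
{
    for (;;) {
        spin_acquire(&s->lock);
        if (s->count > 0) {               /* a unit is available: claim it */
            s->count--;
            spin_release(&s->lock);
            return;
        }
        spin_release(&s->lock);           /* nothing available: release and retry */
    }
}

void spinsem_V(spinsem_t *s)
{
    spin_acquire(&s->lock);
    s->count++;
    spin_release(&s->lock);
}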

Examples

Trivial example

Consider a variable A and a boolean variable S. A is only accessed when S is marked true. Thus, S is a semaphore for A.

One can imagine a stoplight signal (S) just before a train station (A). In this case, if the signal is green, then one can enter the train station. If it is yellow or red (or any other color), the train station cannot be accessed.
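A minimal sketch of this trivial example, assuming C11 atomics: S is a boolean flag that is set to true only once A is ready, and a reader waits for the flag before touching A. The names mirror the text; the acquire/release ordering is an added detail needed for correctness on real hardware.

#include <stdatomic.h>
#include <stdbool.h>

static int A;                               /* the protected data */
static atomic_bool S;                       /* the "semaphore" for A, initially false */

void writer(void)
{
    A = 42;                                 /* prepare A */
    atomic_store_explicit(&S, true,
                          memory_order_release);   /* mark A as accessible */
}

int reader(void)
{
    while (!atomic_load_explicit(&S, memory_order_acquire))
        ;                                   /* wait until S is marked true */
    return A;                               /* now safe to access A */
}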

Login queue

Consider a system that can only support ten users (S=10). Whenever a user logs in, P is called, decrementing the semaphore S by 1. Whenever a user logs out, V is called, incrementing S by 1 representing a login slot that has become available. When S is 0, any users wishing to log in must wait until S increases; the login request is enqueued onto a FIFO queue until a slot is freed. Mutual exclusion is used to ensure that requests are enqueued in order. Whenever S increases (login slots available), a login request is dequeued, and the user owning the request is allowed to log in. If S is already greater than 0, then login requests are immediately dequeued.

Producer–consumer problem

In the producer–consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N and are subject to the following conditions:

  1. the consumer must wait for the producer to produce something if the queue is empty;
  2. the producer must wait for the consumer to consume something if the queue is full.

The semaphore solution to the producer–consumer problem tracks the state of the queue with two semaphores: emptyCount, the number of empty places in the queue, and fullCount, the number of elements in the queue. To maintain integrity, emptyCount may be lower (but never higher) than the actual number of empty places in the queue, and fullCount may be lower (but never higher) than the actual number of items in the queue. Empty places and items represent two kinds of resources, empty boxes and full boxes, and the semaphores emptyCount and fullCount maintain control over these resources.

The binary semaphore useQueue ensures that the integrity of the state of the queue itself is not compromised, for example, by two producers attempting to add items to an empty queue simultaneously, thereby corrupting its internal state. Alternatively a mutex could be used in place of the binary semaphore.

The emptyCount is initially N, fullCount is initially 0, and useQueue is initially 1.

The producer does the following repeatedly:

produce:
    P(emptyCount)
    P(useQueue)
    putItemIntoQueue(item)
    V(useQueue)
    V(fullCount)

The consumer does the following repeatedly:

consume:
    P(fullCount)
    P(useQueue)
    item ← getItemFromQueue()
    V(useQueue)
    V(emptyCount)

Below is a substantive example:

  1. A single consumer enters its critical section. Since fullCount is 0, the consumer blocks.
  2. Several producers enter the producer critical section. No more than N producers may enter their critical section due to emptyCount constraining their entry.
  3. The producers, one at a time, gain access to the queue through useQueue and deposit items in the queue.
  4. Once the first producer exits its critical section, fullCount is incremented, allowing one consumer to enter its critical section.

Note that emptyCount may be much lower than the actual number of empty places in the queue, for example, in the case where many producers have decremented it but are waiting their turn on useQueue before filling empty places. Note that emptyCount + fullCount ≤ N always holds, with equality if and only if no producers or consumers are executing their critical sections.
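An illustrative translation of this solution into C with POSIX semaphores uses one counting semaphore for each of emptyCount and fullCount and a binary semaphore for useQueue. The ring buffer, buffer size, and item counts are made up for the example.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4                     /* maximum queue size */

static int queue[N];
static int head, tail;

static sem_t emptyCount;        /* free places in the queue, initially N */
static sem_t fullCount;         /* items in the queue, initially 0 */
static sem_t useQueue;          /* binary semaphore protecting the queue */

static void *producer(void *arg)
{
    for (int item = 0; item < 20; item++) {
        sem_wait(&emptyCount);              /* P(emptyCount) */
        sem_wait(&useQueue);                /* P(useQueue) */
        queue[tail] = item;                 /* putItemIntoQueue(item) */
        tail = (tail + 1) % N;
        sem_post(&useQueue);                /* V(useQueue) */
        sem_post(&fullCount);               /* V(fullCount) */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        sem_wait(&fullCount);               /* P(fullCount) */
        sem_wait(&useQueue);                /* P(useQueue) */
        int item = queue[head];             /* item <- getItemFromQueue() */
        head = (head + 1) % N;
        sem_post(&useQueue);                /* V(useQueue) */
        sem_post(&emptyCount);              /* V(emptyCount) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;

    sem_init(&emptyCount, 0, N);
    sem_init(&fullCount, 0, 0);
    sem_init(&useQueue, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

The same code works with several producer and consumer threads, because useQueue serializes every queue manipulation. (Link with -pthread.)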

Passing the baton pattern

The "Passing the baton" pattern [4] [5] [6] proposed by Gregory R. Andrews is a generic scheme to solve many complex concurrent programming problems in which multiple processes compete for the same resource with complex access conditions (such as satisfying specific priority criteria or avoiding starvation). Given a shared resource, the pattern requires a private "priv" semaphore (initialized to zero) for each process (or class of processes) involved and a single mutual exclusion "mutex" semaphore (initialized to one). The pseudo-code for each process is:

void process(int proc_id, int res_id) {
    resource_acquire(proc_id, res_id);
    <use the resource res_id>;
    resource_release(proc_id, res_id);
}

The pseudo-code for the resource acquisition and release primitives is:

void resource_acquire(int proc_id, int res_id) {
    P(mutex);
    if (<the condition to access res_id is not verified for proc_id>) {
        <indicate that proc_id is suspended for res_id>;
        V(mutex);
        P(priv[proc_id]);
        <indicate that proc_id is not suspended for res_id anymore>;
    }
    <indicate that proc_id is accessing the resource>;
    pass_the_baton(res_id);   // See below
}

void resource_release(int proc_id, int res_id) {
    P(mutex);
    <indicate that proc_id is not accessing the resource res_id anymore>;
    pass_the_baton(res_id);   // See below
}

Both primitives in turn use the "pass_the_baton" method, whose pseudo-code is:

void pass_the_baton(int res_id) {
    if (<the condition to access res_id is true for at least one suspended process>) {
        int p = <choose the process to wake>;
        V(priv[p]);
    } else {
        V(mutex);
    }
}

Remarks

The pattern is called "passing the baton" because a process that releases the resource, as well as a freshly reactivated process, will activate at most one suspended process; that is, it "passes the baton" to that process. The mutex is released only when a process is about to suspend itself (resource_acquire), or when pass_the_baton is unable to reactivate another suspended process.

Operation names

The canonical names V and P come from the initials of Dutch words. V is generally explained as verhogen ("increase"). Several explanations have been offered for P, including proberen ("to test" or "to try"), [7] passeren ("pass"), and pakken ("grab"). Dijkstra's earliest paper on the subject [2] gives passering ("passing") as the meaning for P, and vrijgave ("release") as the meaning for V. It also mentions that the terminology is taken from that used in railroad signals. Dijkstra subsequently wrote that he intended P to stand for prolaag, [8] short for probeer te verlagen, literally "try to reduce", or to parallel the terms used in the other case, "try to decrease". [9] [10] [11]

In ALGOL 68, the Linux kernel, [12] and in some English textbooks, the V and P operations are called, respectively, up and down. In software engineering practice, they are often called signal and wait, [13] release and acquire [13] (standard Java library), [14] or post and pend. Some texts call them vacate and procure to match the original Dutch initials. [15] [16]

Semaphores vs. mutexes

A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. The differences between them are in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:

  1. Priority inversion: If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that task whenever a higher-priority task starts waiting on the mutex.
  2. Premature task termination: Mutexes may also provide deletion safety, where the task holding the mutex cannot be accidentally deleted. [ citation needed ]
  3. Termination deadlock: If a mutex-holding task terminates for any reason, the OS can release the mutex and signal waiting tasks of this condition.
  4. Recursion deadlock: a task is allowed to lock a reentrant mutex multiple times, provided it unlocks it an equal number of times.
  5. Accidental release: An error is raised on the release of the mutex if the releasing task is not its owner.
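The last two points can be illustrated with POSIX mutex attributes, one common way (though not the only one) of providing these guarantees: an error-checking mutex rejects a release by a task that does not own it, and a recursive (reentrant) mutex may be relocked by its owner provided it is unlocked the same number of times.

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t checked, recursive;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&checked, &attr);

    /* Accidental release: unlocking a mutex this thread does not hold fails. */
    if (pthread_mutex_unlock(&checked) == EPERM)
        printf("accidental release detected\n");

    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&recursive, &attr);

    /* Recursion deadlock avoided: the owner may relock a recursive mutex. */
    pthread_mutex_lock(&recursive);
    pthread_mutex_lock(&recursive);
    pthread_mutex_unlock(&recursive);
    pthread_mutex_unlock(&recursive);

    return 0;
}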

See also

Edsger W. Dijkstra
Real-time operating system
Dekker's algorithm
Mutual exclusion
Dijkstra's algorithm
Lock (computer science)
Dining philosophers problem
Sleeping barber problem
Non-blocking algorithm
Critical section
Futex
Monitor (synchronization)
THE multiprogramming system
Reentrant mutex
Readers–writer lock
Readers–writers problem
Synchronization (computer science)
Banker's algorithm
Producer–consumer problem
Interference freedom

References

  1. Dijkstra, Edsger W. Over seinpalen (EWD-74) (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin. ( transcription )
  2. Dijkstra, Edsger W. Over de sequentialiteit van procesbeschrijvingen (EWD-35) (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin. ( transcription ) (undated, 1962 or 1963)
  3. The Little Book of Semaphores Allen B. Downey
  4. Andrews, Gregory R. (1999). Foundations of Multithreaded, Parallel, and Distributed Programming. Addison-Wesley.
  5. Carver, Richard H.; Thai, Kuo-Chung (2005). Modern Multithreading: Implementing, Testing, and Debugging Multithreaded Java and C++/Pthreads/Win32 Programs. Wiley.
  6. Maurer, Christian (2021). Nonsequential and Distributed Programming with Go. Springer.
  7. Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2008), Operating System Concepts (8th ed.), John Wiley & Sons, Inc., p. 234, ISBN 978-0-470-12872-5
  8. Dijkstra, Edsger W. EWD-74 (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin. ( transcription )
  9. Dijkstra, Edsger W. MULTIPROGRAMMERING EN DE X8 (EWD-51) (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin. ( transcription ) (in Dutch)
  10. Dijkstra's own translation reads "try-and-decrease", although that phrase might be confusing for those unaware of the colloquial "try-and..."
  11. (PATCH 1/19) MUTEX: Introduce simple mutex implementation Linux Kernel Mailing List, 19 December 2005
  12. Linux Kernel hacking HOWTO Archived 2010-05-28 at the Wayback Machine LinuxGrill.com
  13. Mullender, Sape; Cox, Russ (2008). Semaphores in Plan 9 (PDF). 3rd International Workshop on Plan 9.
  14. java.util.concurrent.Semaphore
  15. "exec.library/Procure". amigadev.elowar.com. Retrieved 2016-09-19.
  16. "exec.library/Vacate". amigadev.elowar.com. Retrieved 2016-09-19.
