Lock (computer science)

In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive that prevents state from being modified or accessed by multiple threads of execution at once. Locks enforce mutual exclusion concurrency control policies, and a variety of implementations exist for different applications.

Types

Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.

The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.
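
As an illustration of shared versus exclusive access, the following is a minimal sketch using the POSIX readers–writer lock (pthread_rwlock_t); the shared_value variable and the read/write helper functions are illustrative, not part of any particular system.

#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value = 0;            /* data protected by the lock */

/* Any number of readers may hold the lock in shared mode at once. */
int read_value(void)
{
    pthread_rwlock_rdlock(&rw);         /* shared (read) mode */
    int v = shared_value;
    pthread_rwlock_unlock(&rw);
    return v;
}

/* A writer needs exclusive access and excludes both readers and writers. */
void write_value(int v)
{
    pthread_rwlock_wrlock(&rw);         /* exclusive (write) mode */
    shared_value = v;
    pthread_rwlock_unlock(&rw);
}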

Another way to classify locks is by what happens when the lock strategy prevents the progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process re-scheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.
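
A minimal spinlock can be sketched with C11's atomic_flag, whose test-and-set operation is atomic. This is only an illustrative sketch, assuming a C11 compiler; it omits back-off, fairness, and other refinements a production lock would need.

#include <stdatomic.h>

typedef struct { atomic_flag locked; } spinlock_t;
/* initialize with: spinlock_t s = { ATOMIC_FLAG_INIT }; */

void spin_lock(spinlock_t *s)
{
    /* Atomically set the flag; if it was already set, another thread
       holds the lock, so keep spinning until it is cleared. */
    while (atomic_flag_test_and_set_explicit(&s->locked, memory_order_acquire))
        ;                               /* busy-wait ("spin") */
}

void spin_unlock(spinlock_t *s)
{
    atomic_flag_clear_explicit(&s->locked, memory_order_release);
}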

Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.

Uniprocessor architectures have the option of using uninterruptible sequences of instructions—using special instructions or instruction prefixes to disable interrupts temporarily—but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.

The reason an atomic operation is required is because of concurrency, where more than one task executes the same logic. For example, consider the following C code:

if (lock == 0) {
    // lock free, set it
    lock = myPID;
}

The above example does not guarantee that the task has the lock, since more than one task can be testing the lock at the same time. Since both tasks will detect that the lock is free, both tasks will attempt to set the lock, not knowing that the other task is also setting the lock. Dekker's or Peterson's algorithm are possible substitutes if atomic locking operations are not available.
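
By contrast, the test and the set can be combined into a single atomic step with a compare-and-swap. The following sketch uses C11's atomic compare-and-exchange; lock and myPID correspond to the variables in the example above, and the acquire/release helpers are illustrative.

#include <stdatomic.h>

extern _Atomic int lock;                /* 0 means free; otherwise the owner's PID */
extern int myPID;

void acquire(void)
{
    int expected = 0;
    /* Atomically: if lock == 0 then lock = myPID, otherwise retry.
       No other task can run between the test and the set. */
    while (!atomic_compare_exchange_weak(&lock, &expected, myPID))
        expected = 0;                   /* CAS stored the current owner in 'expected' */
}

void release(void)
{
    atomic_store(&lock, 0);
}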

Careless use of locks can result in deadlock or livelock. A number of strategies can be used to avoid or recover from deadlocks or livelocks, both at design-time and at run-time. (The most common strategy is to standardize the lock acquisition sequences so that combinations of inter-dependent locks are always acquired in a specifically defined "cascade" order.)

Some languages do support locks syntactically. An example in C# follows:

public class Account // This is a monitor of an account
{
    private decimal _balance = 0;
    private object _balanceLock = new object();

    public void Deposit(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance += amount;
        }
    }

    public void Withdraw(decimal amount)
    {
        // Only one thread at a time may execute this statement.
        lock (_balanceLock)
        {
            _balance -= amount;
        }
    }
}

The code lock(this) can lead to problems if the instance can be accessed publicly. [1]

Similar to Java, C# can also synchronize entire methods, by using the MethodImplOptions.Synchronized attribute. [2] [3]

[MethodImpl(MethodImplOptions.Synchronized)]
public void SomeMethod()
{
    // do stuff
}

Granularity

Before being introduced to lock granularity, one needs to understand three concepts about locks: lock overhead (the extra resources consumed in order to use locks, such as the memory allocated for them and the processor time spent acquiring and releasing them), lock contention (which arises whenever one process or thread attempts to acquire a lock already held by another), and deadlock (the situation in which two or more tasks each hold a lock that the other needs, so that neither can proceed).

There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.

An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The coarser the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.[citation needed]
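
As a sketch of this tradeoff, consider a hash table protected either by a single table-wide mutex (coarse granularity) or by one mutex per bucket (fine granularity). The layout below is illustrative, assuming POSIX mutexes and a fixed bucket count; it is not drawn from any particular library.

#include <pthread.h>

#define NBUCKETS 64

struct entry;                           /* hash-table entries, details omitted */
struct bucket { struct entry *head; };

/* Coarse granularity: one lock protects the whole table.
   Low lock overhead, but every access contends for the same lock. */
struct coarse_table {
    pthread_mutex_t lock;
    struct bucket buckets[NBUCKETS];
};

/* Fine granularity: one lock per bucket.  Threads touching different
   buckets no longer contend, at the cost of many more locks to manage. */
struct fine_table {
    struct {
        pthread_mutex_t lock;
        struct bucket   bucket;
    } slots[NBUCKETS];
};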

In a database management system, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.

Database locks

Database locks can be used as a means of ensuring transaction synchronicity, i.e., when making transaction processing concurrent (interleaving transactions), using two-phase locking ensures that the concurrent execution of the transactions is equivalent to some serial ordering of the transactions. However, deadlocks are an unfortunate side-effect of locking in databases. Deadlocks are either prevented by pre-determining the locking order between transactions or are detected using waits-for graphs. An alternative to locking for database synchronicity while avoiding deadlocks involves the use of totally ordered global timestamps.
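
The following is a minimal sketch of strict two-phase locking for a transfer between two in-memory records, assuming POSIX mutexes; the record type and the id-based global lock order are illustrative, not the API of any particular database engine.

#include <pthread.h>

struct record {
    long id;                            /* defines a global lock order */
    long value;
    pthread_mutex_t lock;
};

void transfer(struct record *a, struct record *b, long amount)
{
    /* Growing phase: acquire every lock that will be needed, in a fixed
       global order (by id) so that concurrent transfers cannot deadlock. */
    struct record *first  = (a->id < b->id) ? a : b;
    struct record *second = (a->id < b->id) ? b : a;
    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);

    a->value -= amount;
    b->value += amount;

    /* Shrinking phase: once a lock is released, no new lock is acquired,
       so the interleaved executions remain serializable. */
    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}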

There are mechanisms employed to manage the actions of multiple concurrent users on a database—the purpose is to prevent lost updates and dirty reads. The two types of locking are pessimistic locking and optimistic locking:

Where to use pessimistic locking: this is mainly used in environments where data contention (the degree of demand that users place on the database system at any one time) is heavy, and where the cost of protecting data through locks is less than the cost of rolling back transactions if concurrency conflicts occur. Pessimistic concurrency is best implemented when lock times will be short, as in programmatic processing of records. Pessimistic concurrency requires a persistent connection to the database and is not a scalable option when users are interacting with data, because records might be locked for relatively long periods of time. It is therefore generally not appropriate for use in web application development.
Where to use optimistic locking: this is appropriate in environments where there is low contention for data, or where read-only access to data is required. Optimistic concurrency is used extensively in .NET to address the needs of mobile and disconnected applications, [4] where locking data rows for prolonged periods of time would be infeasible. Also, maintaining record locks requires a persistent connection to the database server, which is not possible in disconnected applications. (A minimal sketch of version-based optimistic concurrency follows this list.)
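
A minimal sketch of version-based optimistic concurrency for a single in-memory record follows, assuming POSIX mutexes; the record layout and retry policy are illustrative. The lock is held only for the brief snapshot and validate-and-commit steps, not while the work is performed.

#include <pthread.h>

struct record {
    pthread_mutex_t commit_lock;        /* held only briefly, at snapshot and commit */
    long version;                       /* incremented on every successful update */
    long balance;
};

static struct record acct = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

/* Optimistically add 'delta': snapshot the record, compute the new state
   without holding the lock, then validate the version and commit.
   On conflict, discard the work ("roll back") and retry. */
void optimistic_add(struct record *r, long delta)
{
    for (;;) {
        pthread_mutex_lock(&r->commit_lock);
        long seen_version = r->version;             /* snapshot */
        long seen_balance = r->balance;
        pthread_mutex_unlock(&r->commit_lock);

        long new_balance = seen_balance + delta;    /* work done without the lock */

        pthread_mutex_lock(&r->commit_lock);
        if (r->version == seen_version) {           /* no conflicting update occurred */
            r->balance = new_balance;
            r->version++;
            pthread_mutex_unlock(&r->commit_lock);
            return;                                 /* committed */
        }
        pthread_mutex_unlock(&r->commit_lock);      /* conflict: retry */
    }
}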

Lock compatibility table

Several variations and refinements of these major lock types exist, with respective variations of blocking behavior. If a first lock blocks another lock, the two locks are called incompatible; otherwise the locks are compatible. Often, the blocking interactions between lock types are presented in the technical literature by a lock compatibility table. The following is an example with the common, major lock types:

Lock compatibility table
Lock type     read-lock     write-lock
read-lock     compatible    X
write-lock    X             X

Comment: In some publications, the table entries are simply marked "compatible" or "incompatible", or respectively "yes" or "no". [5]

Disadvantages

Lock-based resource protection and thread/process synchronization have many disadvantages:

Contention: some threads or processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls, blocks, or enters an infinite loop, other threads waiting for the lock may wait indefinitely.
Overhead: using locks adds overhead for each access to a resource, even when the chances of collision are very rare.
Debugging: bugs associated with locks are time dependent and can be very subtle and extremely hard to replicate, such as deadlocks.
Instability: the optimal balance between lock overhead and lock contention can be unique to the problem domain and sensitive to design, implementation, and even low-level system changes.
Composability: locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete item X from table A and insert X into table B) with relatively elaborate software support and strict adherence by applications to rigorous conventions.
Priority inversion: a low-priority thread or process holding a common lock can prevent high-priority threads or processes from proceeding; priority inheritance can be used to reduce the duration of the inversion.
Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.

Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.
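
As a small illustration of a non-blocking alternative, a shared counter can be maintained with an atomic fetch-and-add instead of a lock. This sketch assumes C11 atomics and covers only a trivially simple shared object; general lock-free data structures are considerably harder to build.

#include <stdatomic.h>

static _Atomic long hits = 0;

/* Lock-free increment: every call completes in a bounded number of steps
   regardless of what other threads are doing; no thread ever blocks. */
void record_hit(void)
{
    atomic_fetch_add_explicit(&hits, 1, memory_order_relaxed);
}

long read_hits(void)
{
    return atomic_load_explicit(&hits, memory_order_relaxed);
}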

In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of graciously recognizing the reporting of impossible demands.[ citation needed ]

Lack of composability

One of lock-based programming's biggest problems is that "locks don't compose": it is hard to combine small, correct lock-based modules into equally correct larger programs without modifying the modules or at least knowing about their internals. Simon Peyton Jones (an advocate of software transactional memory) gives the following example of a banking application: [6] design a class Account that allows multiple concurrent clients to deposit or withdraw money to an account; and give an algorithm to transfer money from one account to another. The lock-based solution to the first part of the problem is:

class Account:
    member balance: Integer
    member mutex: Lock

    method deposit(n: Integer)
        mutex.lock()
        balance ← balance + n
        mutex.unlock()

    method withdraw(n: Integer)
        deposit(−n)

The second part of the problem is much more complicated. A transfer routine that is correct for sequential programs would be

function transfer(from: Account, to: Account, amount: Integer)
    from.withdraw(amount)
    to.deposit(amount)

In a concurrent program, this algorithm is incorrect because when one thread is halfway through transfer, another might observe a state where amount has been withdrawn from the first account, but not yet deposited into the other account: money has gone missing from the system. This problem can only be fixed completely by taking locks on both accounts prior to changing either of them, but then the locks have to be taken according to some arbitrary, global ordering to prevent deadlock:

function transfer(from: Account, to: Account, amount: Integer)
    if from < to    // arbitrary ordering on the locks
        from.lock()
        to.lock()
    else
        to.lock()
        from.lock()
    from.withdraw(amount)
    to.deposit(amount)
    from.unlock()
    to.unlock()

This solution gets more complicated when more locks are involved, and the transfer function needs to know about all of the locks, so they cannot be hidden.

Language support

Programming languages vary in their support for synchronization:

Ada provides protected objects that have visible protected subprograms or entries. [7] [8]
The C language has no built-in lock support, but the POSIX pthreads API provides mutexes, [9] and C11 added an optional standard threads library.
C# provides the lock keyword to ensure a thread's exclusive access to a resource.
Visual Basic (.NET) provides a SyncLock keyword similar to C#'s lock keyword. [10]
Java provides the keyword synchronized to lock code blocks, methods, or objects, [11] as well as libraries featuring concurrency-safe data structures.
Objective-C provides the keyword @synchronized to put locks on blocks of code [12] and also provides the classes NSLock, [13] NSRecursiveLock, [14] and NSConditionLock [15] along with the NSLocking protocol. [16]
PHP provides file-based locking [17] as well as a Mutex class in the pthreads extension. [18]
Python provides a low-level mutex mechanism with a Lock class from the threading module. [19]
Fortran provides locks as part of the Fortran 2008 standard. [20]
Ruby provides a low-level Mutex object. [21]
Rust provides the Mutex<T> struct. [22] [23]
Haskell implements locking via a mutable data structure called an MVar, [24] and also offers software transactional memory. [25]
Go provides a low-level Mutex object in its standard library sync package. [26]

Mutexes vs. semaphores

A mutex is a locking mechanism that sometimes uses the same basic implementation as the binary semaphore. The differences between them are in how they are used. While a binary semaphore may be colloquially referred to as a mutex, a true mutex has a more specific use-case and definition, in that only the task that locked the mutex is supposed to unlock it. This constraint aims to handle some potential problems of using semaphores:

  1. Priority inversion: If the mutex knows who locked it and is supposed to unlock it, it is possible to promote the priority of that task whenever a higher-priority task starts waiting on the mutex.
  2. Premature task termination: Mutexes may also provide deletion safety, where the task holding the mutex cannot be accidentally deleted. [ citation needed ]
  3. Termination deadlock: If a mutex-holding task terminates for any reason, the OS can release the mutex and signal waiting tasks of this condition.
  4. Recursion deadlock: a task is allowed to lock a reentrant mutex multiple times, provided that it unlocks it an equal number of times.
  5. Accidental release: An error is raised on the release of the mutex if the releasing task is not its owner. (A minimal pthreads sketch of points 4 and 5 follows this list.)
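
As a sketch of points 4 and 5, POSIX threads allow a mutex to be created with a recursive or an error-checking type. The attribute and mutex calls below are standard pthreads; the surrounding functions are illustrative.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t recursive_m;
static pthread_mutex_t errorcheck_m;

void init_mutexes(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);

    /* Recursive mutex: the owning task may lock it again, and must
       unlock it an equal number of times (point 4). */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&recursive_m, &attr);

    /* Error-checking mutex: unlocking it from a task that does not own it
       fails with an error instead of silently corrupting state (point 5). */
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&errorcheck_m, &attr);

    pthread_mutexattr_destroy(&attr);
}

void demo(void)
{
    pthread_mutex_lock(&recursive_m);
    pthread_mutex_lock(&recursive_m);   /* same owner: allowed, lock count = 2 */
    pthread_mutex_unlock(&recursive_m);
    pthread_mutex_unlock(&recursive_m); /* count back to 0: mutex released */

    if (pthread_mutex_unlock(&errorcheck_m) != 0)   /* caller is not the owner */
        fprintf(stderr, "unlock refused: calling thread does not own the mutex\n");
}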

See also

Real-time operating system
Mutual exclusion
Thread (computing)
Thread safety
Semaphore (programming)
Concurrency control
Spinlock
Read-copy-update
Priority inversion
Non-blocking algorithm
Critical section
Monitor (synchronization)
Transactional memory
Readers–writer lock
Synchronization (computer science)
Java concurrency
Threading Building Blocks
Giant lock
Concurrent data structure
Deadlock prevention algorithms

References

  1. "lock Statement (C# Reference)".
  2. "ThreadPoolPriority, and MethodImplAttribute". MSDN. p. ??. Retrieved 2011-11-22.
  3. "C# From a Java Developer's Perspective". Archived from the original on 2013-01-02. Retrieved 2011-11-22.
  4. "Designing Data Tier Components and Passing Data Through Tiers". Microsoft. August 2002. Archived from the original on 2008-05-08. Retrieved 2008-05-30.
  5. "Lock Based Concurrency Control Protocol in DBMS". GeeksforGeeks. 2018-03-07. Retrieved 2023-12-28.
  6. Peyton Jones, Simon (2007). "Beautiful concurrency" (PDF). In Wilson, Greg; Oram, Andy (eds.). Beautiful Code: Leading Programmers Explain How They Think. O'Reilly.
  7. ISO/IEC 8652:2007. "Protected Units and Protected Objects". Ada 2005 Reference Manual. Retrieved 2010-02-27. A protected object provides coordinated access to shared data, through calls on its visible protected operations, which can be protected subprograms or protected entries.
  8. ISO/IEC 8652:2007. "Example of Tasking and Synchronization". Ada 2005 Reference Manual. Retrieved 2010-02-27.
  9. Marshall, Dave (March 1999). "Mutual Exclusion Locks" . Retrieved 2008-05-30.
  10. "Synchronize". msdn.microsoft.com. Retrieved 2008-05-30.
  11. "Synchronization". Sun Microsystems . Retrieved 2008-05-30.
  12. "Apple Threading Reference". Apple, inc. Retrieved 2009-10-17.
  13. "NSLock Reference". Apple, inc. Retrieved 2009-10-17.
  14. "NSRecursiveLock Reference". Apple, inc. Retrieved 2009-10-17.
  15. "NSConditionLock Reference". Apple, inc. Retrieved 2009-10-17.
  16. "NSLocking Protocol Reference". Apple, inc. Retrieved 2009-10-17.
  17. "flock".
  18. "The Mutex class". Archived from the original on 2017-07-04. Retrieved 2016-12-29.
  19. Lundh, Fredrik (July 2007). "Thread Synchronization Mechanisms in Python". Archived from the original on 2020-11-01. Retrieved 2008-05-30.
  20. John Reid (2010). "Coarrays in the next Fortran Standard" (PDF). Retrieved 2020-02-17.
  21. "Programming Ruby: Threads and Processes". 2001. Retrieved 2008-05-30.
  22. "std::sync::Mutex - Rust". doc.rust-lang.org. Retrieved 3 November 2020.
  23. "Shared-State Concurrency - The Rust Programming Language". doc.rust-lang.org. Retrieved 3 November 2020.
  24. Marlow, Simon (August 2013). "Basic concurrency: threads and MVars". Parallel and Concurrent Programming in Haskell. O’Reilly Media. ISBN   9781449335946.
  25. Marlow, Simon (August 2013). "Software transactional memory". Parallel and Concurrent Programming in Haskell. O’Reilly Media. ISBN   9781449335946.
  26. "sync package - sync - pkg.go.dev". pkg.go.dev. Retrieved 2021-11-23.