Dining philosophers problem

[Figure: Illustration of the dining philosophers problem. Each philosopher has a bowl of spaghetti and can reach two of the forks.]

In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them.


It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals. Soon after, Tony Hoare gave the problem its present form.[1][2][3][4]

Problem statement

Five philosophers dine together at the same table. Each philosopher has his own plate at the table, and there is a fork between each pair of adjacent plates. The dish served is a kind of spaghetti which has to be eaten with two forks. Each philosopher alternates between thinking and eating. Moreover, a philosopher can only eat his spaghetti when he has both a left and a right fork; thus two forks will only be available when his two nearest neighbours are thinking, not eating. After an individual philosopher finishes eating, he puts down both forks. The problem is how to design a regimen (a concurrent algorithm) such that no philosopher will starve; i.e., each can forever continue to alternate between eating and thinking, assuming that no philosopher can know when others may want to eat or think (an issue of incomplete information).

Problems

The problem was designed to illustrate the challenges of avoiding deadlock, a system state in which no progress is possible. To see that a proper solution to this problem is not obvious, consider a proposal in which each philosopher is instructed to behave as follows:

  1. think until the left fork is available; when it is, pick it up;
  2. think until the right fork is available; when it is, pick it up;
  3. when both forks are held, eat for a fixed amount of time;
  4. put the left fork down;
  5. put the right fork down;
  6. repeat from the beginning.

With these instructions, the situation may arise where each philosopher holds the fork to their left; in that situation, they will all be stuck forever, waiting for the other fork to be available: it is a deadlock.

Resource starvation, mutual exclusion and livelock are other kinds of sequencing and access problems.

Solutions

Dijkstra's solution

Dijkstra's solution uses one mutex, one semaphore per philosopher and one state variable per philosopher. This solution is more complex than the resource hierarchy solution.[5][4] This is a C++20 version of Dijkstra's solution with Tanenbaum's changes:

#include <chrono>
#include <ctime>
#include <iostream>
#include <mutex>
#include <random>
#include <semaphore>
#include <thread>

constexpr const size_t N = 5;  // number of philosophers (and forks)

enum class State {
    THINKING = 0,  // philosopher is THINKING
    HUNGRY = 1,    // philosopher is trying to get forks
    EATING = 2,    // philosopher is EATING
};

size_t inline left(size_t i) {
    // number of the left neighbour of philosopher i, for whom both forks are available
    return (i - 1 + N) % N;  // N is added for the case when i - 1 is negative
}

size_t inline right(size_t i) {
    // number of the right neighbour of philosopher i, for whom both forks are available
    return (i + 1) % N;
}

State state[N];  // array to keep track of everyone's both_forks_available state

std::mutex critical_region_mtx;  // mutual exclusion for critical regions
                                 // (picking up and putting down the forks)
std::mutex output_mtx;  // for synchronized cout (printing THINKING/HUNGRY/EATING status)

// array of binary semaphores, one semaphore per philosopher.
// Acquired semaphore means philosopher i has acquired (blocked) two forks
std::binary_semaphore both_forks_available[N]{
    std::binary_semaphore{0}, std::binary_semaphore{0},
    std::binary_semaphore{0}, std::binary_semaphore{0},
    std::binary_semaphore{0}};

size_t my_rand(size_t min, size_t max) {
    static std::mt19937 rnd(std::time(nullptr));
    return std::uniform_int_distribution<>(min, max)(rnd);
}

void test(size_t i) {
    // if philosopher i is hungry and both neighbours are not eating, then eat
    // i: philosopher number, from 0 to N-1
    if (state[i] == State::HUNGRY &&
        state[left(i)] != State::EATING &&
        state[right(i)] != State::EATING) {
        state[i] = State::EATING;
        both_forks_available[i].release();  // forks are no longer needed for this eat session
    }
}

void think(size_t i) {
    size_t duration = my_rand(400, 800);
    {
        std::lock_guard<std::mutex> lk(output_mtx);  // critical section for uninterrupted print
        std::cout << i << " is thinking " << duration << "ms\n";
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(duration));
}

void take_forks(size_t i) {
    {
        std::lock_guard<std::mutex> lk{critical_region_mtx};  // enter critical region
        state[i] = State::HUNGRY;  // record fact that philosopher i is State::HUNGRY
        {
            std::lock_guard<std::mutex> lk(output_mtx);  // critical section for uninterrupted print
            std::cout << "\t\t" << i << " is State::HUNGRY\n";
        }
        test(i);  // try to acquire (a permit for) 2 forks
    }  // exit critical region
    both_forks_available[i].acquire();  // block if forks were not acquired
}

void eat(size_t i) {
    size_t duration = my_rand(400, 800);
    {
        std::lock_guard<std::mutex> lk(output_mtx);  // critical section for uninterrupted print
        std::cout << "\t\t\t\t" << i << " is eating " << duration << "ms\n";
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(duration));
}

void put_forks(size_t i) {
    std::lock_guard<std::mutex> lk{critical_region_mtx};  // enter critical region
    state[i] = State::THINKING;  // philosopher has finished State::EATING
    test(left(i));   // see if left neighbour can now eat
    test(right(i));  // see if right neighbour can now eat
    // exit critical region by exiting the function
}

void philosopher(size_t i) {
    while (true) {      // repeat forever
        think(i);       // philosopher is State::THINKING
        take_forks(i);  // acquire two forks or block
        eat(i);         // yum-yum, spaghetti
        put_forks(i);   // put both forks back on table and check if neighbours can eat
    }
}

int main() {
    std::cout << "dp_14\n";
    std::jthread t0([&] { philosopher(0); });  // [&] means every variable outside the ensuing
    std::jthread t1([&] { philosopher(1); });  // lambda is captured by reference
    std::jthread t2([&] { philosopher(2); });
    std::jthread t3([&] { philosopher(3); });
    std::jthread t4([&] { philosopher(4); });
}

The function test() and its use in take_forks() and put_forks() make the Dijkstra solution deadlock-free.

Resource hierarchy solution

This solution assigns a partial order to the resources (the forks, in this case), and establishes the convention that all resources will be requested in order, and that no two resources unrelated by order will ever be used by a single unit of work at the same time. Here, the resources (forks) will be numbered 1 through 5 and each unit of work (philosopher) will always pick up the lower-numbered fork first, and then the higher-numbered fork, from among the two forks he plans to use. The order in which each philosopher puts down the forks does not matter. In this case, if four of the five philosophers simultaneously pick up their lower-numbered forks, only the highest-numbered fork will remain on the table, so the fifth philosopher will not be able to pick up any fork. Moreover, only one philosopher will have access to that highest-numbered fork, so he will be able to eat using two forks. This can intuitively be thought of as having one "left-handed" philosopher at the table, who – unlike all the other philosophers – takes his fork from the left first.

While the resource hierarchy solution avoids deadlocks, it is not always practical, especially when the list of required resources is not completely known in advance. For example, if a unit of work holds resources 3 and 5 and then determines it needs resource 2, it must release 5, then 3 before acquiring 2, and then it must re-acquire 3 and 5 in that order. Computer programs that access large numbers of database records would not run efficiently if they were required to release all higher-numbered records before accessing a new record, making the method impractical for that purpose. [2]

The resource hierarchy solution is not fair. If philosopher 1 is slow to take a fork, and philosopher 2 is quick to think and pick his forks back up again, then philosopher 1 will never get to pick up both forks. A fair solution must guarantee that each philosopher will eventually eat, no matter how slowly that philosopher moves relative to the others.

The following source code is a C++11 implementation of the resource hierarchy solution for five philosophers. The sleep_for() function simulates the time normally spent with business logic. [6]

For GCC: compile with

g++ src.cpp -std=c++11 -lpthread
#include <iostream>
#include <chrono>
#include <mutex>
#include <thread>
#include <random>
#include <ctime>

using namespace std;

int myrand(int min, int max) {
    static mt19937 rnd(time(nullptr));
    return uniform_int_distribution<>(min, max)(rnd);
}

void philosopher(int ph, mutex& ma, mutex& mb, mutex& mo) {
    for (;;) {  // prevent thread from termination
        int duration = myrand(200, 800);
        {
            // Block { } limits scope of lock
            lock_guard<mutex> gmo(mo);
            cout << ph << " thinks " << duration << "ms\n";
        }
        this_thread::sleep_for(chrono::milliseconds(duration));
        {
            lock_guard<mutex> gmo(mo);
            cout << "\t\t" << ph << " is hungry\n";
        }
        lock_guard<mutex> gma(ma);
        // A sleep_for() delay before seeking the second fork can be added here,
        // but should not be required.
        lock_guard<mutex> gmb(mb);
        duration = myrand(200, 800);
        {
            lock_guard<mutex> gmo(mo);
            cout << "\t\t\t\t" << ph << " eats " << duration << "ms\n";
        }
        this_thread::sleep_for(chrono::milliseconds(duration));
    }
}

int main() {
    cout << "dining Philosophers C++11 with Resource hierarchy\n";
    mutex m1, m2, m3, m4, m5;  // 5 forks are 5 mutexes
    mutex mo;                  // for proper output
    // 5 philosophers are 5 threads
    thread t1([&] { philosopher(1, m1, m2, mo); });
    thread t2([&] { philosopher(2, m2, m3, mo); });
    thread t3([&] { philosopher(3, m3, m4, mo); });
    thread t4([&] { philosopher(4, m4, m5, mo); });
    thread t5([&] { philosopher(5, m1, m5, mo); });  // force a resource hierarchy
    t1.join();  // prevent threads from termination
    t2.join();
    t3.join();
    t4.join();
    t5.join();
}

Arbitrator solution

Another approach is to guarantee that a philosopher can only pick up both forks or none by introducing an arbitrator, e.g., a waiter. In order to pick up the forks, a philosopher must ask permission of the waiter. The waiter gives permission to only one philosopher at a time until the philosopher has picked up both of his forks. Putting down a fork is always allowed. The waiter can be implemented as a mutex. In addition to introducing a new central entity (the waiter), this approach can result in reduced parallelism: if a philosopher is eating and one of his neighbors is requesting the forks, all other philosophers must wait until this request has been fulfilled even if forks for them are still available.

Concurrent algorithm design

Limiting the number of diners at the table

A solution presented by William Stallings [7] is to allow a maximum of n-1 philosophers to sit down at any time. The last philosopher would have to wait (for example, using a semaphore) for someone to finish dining before he "sits down" and requests access to any fork. This guarantees at least one philosopher may always acquire both forks, allowing the system to make progress.

Chandy/Misra solution

In 1984, K. Mani Chandy and J. Misra [8] proposed a different solution to the dining philosophers problem to allow for arbitrary agents (numbered P1, ..., Pn) to contend for an arbitrary number of resources, unlike Dijkstra's solution. It is also completely distributed and requires no central authority after initialization. However, it violates the requirement that "the philosophers do not speak to each other" (due to the request messages).

  1. For every pair of philosophers contending for a resource, create a fork and give it to the philosopher with the lower ID (n for agent Pn). Each fork can either be dirty or clean. Initially, all forks are dirty.
  2. When a philosopher wants to use a set of resources (i.e., eat), said philosopher must obtain the forks from his contending neighbors. For all such forks the philosopher does not have, he sends a request message.
  3. When a philosopher with a fork receives a request message, he keeps the fork if it is clean, but gives it up if it is dirty. If the philosopher sends the fork over, he cleans the fork before doing so.
  4. After a philosopher is done eating, all his forks become dirty. If another philosopher had previously requested one of the forks, the philosopher that has just finished eating cleans the fork and sends it.

This solution also allows for a large degree of concurrency, and will solve an arbitrarily large problem.

It also solves the starvation problem. The clean/dirty labels act as a way of giving preference to the most "starved" processes, and a disadvantage to processes that have just "eaten". One could compare their solution to one where philosophers are not allowed to eat twice in a row without letting others use the forks in between. Chandy and Misra's solution is more flexible than that, but has an element tending in that direction.

In their analysis, they derive a system of preference levels from the distribution of the forks and their clean/dirty states. They show that this system may describe a directed acyclic graph, and if so, the operations in their protocol cannot turn that graph into a cyclic one. This guarantees that deadlock cannot occur. However, if the system is initialized to a perfectly symmetric state, like all philosophers holding their left side forks, then the graph is cyclic at the outset, and their solution cannot prevent a deadlock. Initializing the system so that philosophers with lower IDs have dirty forks ensures the graph is initially acyclic.

References

  1. Dijkstra, Edsger W. EWD-1000 (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin.
  2. Díaz, J.; Ramos, I. (1981). Formalization of Programming Concepts: International Colloquium, Peniscola, Spain, April 19–25, 1981, Proceedings. Birkhäuser. pp. 323, 326. ISBN 978-3-540-10699-9.
  3. Hoare, C. A. R. (2004) [originally published in 1985 by Prentice Hall International]. "Communicating Sequential Processes" (PDF). usingcsp.com.
  4. Tanenbaum, Andrew S. (2006). Operating Systems: Design and Implementation (3rd ed.) [Section 2.3.1: The Dining Philosophers Problem]. Pearson Education.
  5. Dijkstra, Edsger W. EWD-310 (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin.
  6. Tanenbaum, Andrew S. (2006). Operating Systems: Design and Implementation (3rd ed.) [Section 3.3.5: Deadlock Prevention]. Pearson Education.
  7. Stallings, William (2018). Operating Systems: Internals and Design Principles (9th ed.). Harlow, Essex, England: Pearson. p. 310. ISBN 978-1-292-21429-0. OCLC 1009868379.
  8. Chandy, K. M.; Misra, J. (1984). "The Drinking Philosophers Problem". ACM Transactions on Programming Languages and Systems.
