Barrier (computer science)

In parallel computing, a barrier is a type of synchronization method. A barrier for a group of threads or processes in the source code means that any thread/process must stop at this point and cannot proceed until all other threads/processes reach this barrier. [1]

Many collective routines and directive-based parallel languages impose implicit barriers. For example, a parallel do loop in Fortran with OpenMP will not be allowed to continue on any thread until the last iteration is completed. This is in case the program relies on the result of the loop immediately after its completion. In message passing, any global communication (such as reduction or scatter) may imply a barrier.

In concurrent computing, a barrier may be in a raised or lowered state. The term latch is sometimes used to refer to a barrier that starts in the raised state and cannot be re-raised once it is in the lowered state. The term count-down latch is sometimes used to refer to a latch that is automatically lowered once a pre-determined number of threads/processes have arrived.

Implementation

The basic barrier has mainly two variables: one records the pass/stop state of the barrier, and the other keeps the total number of threads that have entered the barrier. The barrier state is initialized to "stop" by the first thread coming into the barrier. Whenever a thread enters, based on the number of threads already in the barrier, only if it is the last one does the thread set the barrier state to "pass", so that all the threads can get out of the barrier. When the incoming thread is not the last one, it is trapped in the barrier and keeps testing whether the barrier state has changed from "stop" to "pass"; it gets out only when the barrier state changes to "pass". The following C++ code demonstrates this procedure. [2] [3]

struct barrier_type
{
    // how many processors have entered the barrier
    // initialize to 0
    int arrive_counter;
    // how many processors have exited the barrier
    // initialize to p
    int leave_counter;
    int flag;
    std::mutex lock;
};

// barrier for p processors
void barrier(barrier_type* b, int p)
{
    b->lock.lock();
    if (b->arrive_counter == 0)
    {
        if (b->leave_counter == p) // no other threads in barrier
        {
            b->flag = 0;           // first arriver clears flag
        }
        else
        {
            b->lock.unlock();
            while (b->leave_counter != p); // wait for all to leave before clearing
            b->lock.lock();
            b->flag = 0;           // first arriver clears flag
        }
    }
    int arrived = ++(b->arrive_counter);
    b->lock.unlock();

    if (arrived == p) // last arriver sets flag
    {
        b->arrive_counter = 0;
        b->leave_counter = 1;
        b->flag = 1;
    }
    else
    {
        while (b->flag == 0); // wait for flag
        b->lock.lock();
        b->leave_counter++;
        b->lock.unlock();
    }
}

The potential problem is that, because all the threads repeatedly access the same global variable for the pass/stop state, the communication traffic is rather high, which decreases scalability.

This problem can be resolved by regrouping the threads and using a multi-level barrier, e.g. the Combining Tree Barrier. Hardware implementations may also have the advantage of higher scalability.

Sense-Reversal Centralized Barrier

Instead of using the same value to represent pass/stop, sequential barriers use opposite values for the pass/stop state. For example, if barrier 1 uses 0 to stop the threads, barrier 2 will use 1 to stop threads, and barrier 3 will use 0 to stop threads again, and so on. [4] The following C++ code demonstrates this. [2] [5] [3]

struct barrier_type
{
    int counter; // initialize to 0
    int flag;    // initialize to 0
    std::mutex lock;
};

int local_sense = 0; // private per processor

// barrier for p processors
void barrier(barrier_type* b, int p)
{
    local_sense = 1 - local_sense;
    b->lock.lock();
    b->counter++;
    int arrived = b->counter;
    if (arrived == p) // last arriver sets flag
    {
        b->lock.unlock();
        b->counter = 0;
        // memory fence to ensure that the change to counter
        // is seen before the change to flag
        b->flag = local_sense;
    }
    else
    {
        b->lock.unlock();
        while (b->flag != local_sense); // wait for flag
    }
}

Combining Tree Barrier

A Combining Tree Barrier is a hierarchical way of implementing a barrier that resolves the scalability problem by avoiding the case where all threads spin at the same location. [4]

In a k-Tree Barrier, all threads are equally divided into subgroups of k threads, and first-round synchronizations are done within these subgroups. Once all subgroups have finished their synchronization, the first thread in each subgroup enters the second level for further synchronization. At the second level, as at the first, the threads form new subgroups of k threads and synchronize within groups, sending one thread from each subgroup up to the next level, and so on. Eventually, at the final level there is only one subgroup left to synchronize. After the final-level synchronization, the releasing signal is transmitted back down through the levels, and all threads get past the barrier. [5] [6]

Hardware Barrier Implementation

The hardware barrier uses hardware to implement the above basic barrier model. [2]

The simplest hardware implementation uses dedicated wires to transmit the barrier signal. These dedicated wires perform an OR/AND operation to act as the pass/block flag and thread counter. For small systems such a model works, and communication speed is not a major concern. In large multiprocessor systems, however, this hardware design can give the barrier implementation high latency. Connecting the processors through a network is one implementation that lowers the latency, analogous to the Combining Tree Barrier. [7]

Thread barrier

The POSIX Threads standard supports thread barriers, which can be used to block the specified threads or the whole process at the barrier until other threads reach that barrier. [1] The 3 main APIs provided by POSIX to implement thread barriers are:

  - pthread_barrier_init() [8]
  - pthread_barrier_destroy() [8]
  - pthread_barrier_wait() [9]

The following example (implemented in C with the pthread API) uses a thread barrier to block all the threads of the main process and therefore blocks the whole process:

#include <stdio.h>
#include <pthread.h>

#define TOTAL_THREADS           2
#define THREAD_BARRIERS_NUMBER  3
#define PTHREAD_BARRIER_ATTR    NULL // pthread barrier attribute

pthread_barrier_t barrier;

void *thread_func(void *ptr)
{
    printf("Waiting at the barrier as not enough %d threads are running ...\n", THREAD_BARRIERS_NUMBER);
    pthread_barrier_wait(&barrier);
    printf("The barrier is lifted, thread id %ld is running now\n", pthread_self());
    return NULL;
}

int main()
{
    pthread_t thread_id[TOTAL_THREADS];
    pthread_barrier_init(&barrier, PTHREAD_BARRIER_ATTR, THREAD_BARRIERS_NUMBER);
    for (int i = 0; i < TOTAL_THREADS; i++) {
        pthread_create(&thread_id[i], NULL, thread_func, NULL);
    }
    // As pthread_join() blocks the process until all the threads it specifies
    // are finished, and there are not enough threads waiting at the barrier,
    // the process is blocked here
    for (int i = 0; i < TOTAL_THREADS; i++) {
        pthread_join(thread_id[i], NULL);
    }
    pthread_barrier_destroy(&barrier);
    printf("Thread barrier is lifted\n"); // never reached, as TOTAL_THREADS < THREAD_BARRIERS_NUMBER
}

The result of that source code is:

Waiting at the barrier as not enough 3 threads are running ...
Waiting at the barrier as not enough 3 threads are running ...
// (main process is blocked as there are not 3 threads waiting)
// Line printf("Thread barrier is lifted\n") won't be reached

As can be seen from the source code, only two threads are created. Both threads use thread_func() as their thread function handler, which calls pthread_barrier_wait(&barrier), while the thread barrier expects 3 threads to call pthread_barrier_wait() (THREAD_BARRIERS_NUMBER = 3) in order to be lifted. Change TOTAL_THREADS to 3 and the thread barrier is lifted:

Waiting at the barrier as not enough 3 threads are running ...
Waiting at the barrier as not enough 3 threads are running ...
Waiting at the barrier as not enough 3 threads are running ...
The barrier is lifted, thread id 140643372406528 is running now
The barrier is lifted, thread id 140643380799232 is running now
The barrier is lifted, thread id 140643389191936 is running now
Thread barrier is lifted

As main() is treated as a thread, i.e. the "main" thread of the process, [10] calling pthread_barrier_wait() inside main() will block the whole process until the other threads reach the barrier. The following example uses a thread barrier, with pthread_barrier_wait() inside main(), to block the process/main thread for 5 seconds while waiting for the 2 newly created threads to reach the thread barrier:

#include <stdio.h>
#include <pthread.h>
#include <unistd.h> // for sleep()

#define TOTAL_THREADS           2
#define THREAD_BARRIERS_NUMBER  3
#define PTHREAD_BARRIER_ATTR    NULL // pthread barrier attribute

pthread_barrier_t barrier;

void *thread_func(void *ptr)
{
    printf("Waiting at the barrier as not enough %d threads are running ...\n", THREAD_BARRIERS_NUMBER);
    sleep(5);
    pthread_barrier_wait(&barrier);
    printf("The barrier is lifted, thread id %ld is running now\n", pthread_self());
    return NULL;
}

int main()
{
    pthread_t thread_id[TOTAL_THREADS];
    pthread_barrier_init(&barrier, PTHREAD_BARRIER_ATTR, THREAD_BARRIERS_NUMBER);
    for (int i = 0; i < TOTAL_THREADS; i++) {
        pthread_create(&thread_id[i], NULL, thread_func, NULL);
    }
    pthread_barrier_wait(&barrier); // the main thread is the third waiter
    printf("Thread barrier is lifted\n"); // reached once all 3 waiters have arrived
    pthread_barrier_destroy(&barrier);
}

This example doesn't use pthread_join() to wait for the 2 newly created threads to complete. Instead, it calls pthread_barrier_wait() inside main() to block the main thread, so that the process is blocked until the two threads finish their work after the 5-second wait (the sleep(5) call).

See also

  - Thread safety
  - Semaphore (programming)
  - Double-checked locking
  - Spinlock
  - OpenMP
  - RTLinux
  - POSIX Threads
  - Callback (computer programming)
  - Busy waiting
  - Ticket lock
  - sigaction
  - Fetch-and-add
  - Object pool pattern
  - Hooking
  - Readers–writer lock
  - Synchronization (computer science)
  - Producer–consumer problem
  - Wrapper library
  - Join-pattern
  - Array-Based Queuing Lock

References

  1. GNU Operating System. "Implementation of pthread_barrier". gnu.org. Retrieved 2024-03-02.
  2. Solihin, Yan (2015-01-01). Fundamentals of Parallel Multicore Architecture (1st ed.). Chapman & Hall/CRC. ISBN 978-1482211184.
  3. "Implementing Barriers". Carnegie Mellon University.
  4. Culler, David (1998). Parallel Computer Architecture, A Hardware/Software Approach. Gulf Professional. ISBN 978-1558603431.
  5. Nanjegowda, Ramachandra; Hernandez, Oscar; Chapman, Barbara; Jin, Haoqiang H. (2009-06-03). Müller, Matthias S.; Supinski, Bronis R. de; Chapman, Barbara M. (eds.). Evolving OpenMP in an Age of Extreme Parallelism. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 42–52. doi:10.1007/978-3-642-02303-3_4. ISBN 9783642022845.
  6. Nikolopoulos, Dimitrios S.; Papatheodorou, Theodore S. (1999-01-01). "A quantitative architectural evaluation of synchronization algorithms and disciplines on ccNUMA systems". Proceedings of the 13th international conference on Supercomputing. ICS '99. New York, NY, USA: ACM. pp. 319–328. doi:10.1145/305138.305209. ISBN 978-1581131642. S2CID 6097544. Archived from the original on 2017-07-25. Retrieved 2019-01-18.
  7. N.R. Adiga, et al. An Overview of the BlueGene/L Supercomputer. Proceedings of the Conference on High Performance Networking and Computing, 2002.
  8. "pthread_barrier_init(), pthread_barrier_destroy()". Linux man page. Retrieved 2024-03-16.
  9. "pthread_barrier_wait()". Linux man page. Retrieved 2024-03-16.
  10. "How to get number of processes and threads in a C program?". stackoverflow. Retrieved 2024-03-16.

"Parallel Programming with Barrier Synchronization". sourceallies.com. March 2012.