OpenMP

Original author(s): OpenMP Architecture Review Board [1]
Developer(s): OpenMP Architecture Review Board [1]
Stable release: 6.0 / November 2024
Operating system: Cross-platform
Platform: Cross-platform
Type: Extension to C, C++, and Fortran; API
License: Various [2]
Website: openmp.org

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, [3] on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior. [2] [4] [5]

OpenMP is managed by the nonprofit technology consortium OpenMP Architecture Review Board (or OpenMP ARB), jointly defined by a broad swath of leading computer hardware and software vendors, including Arm, AMD, IBM, Intel, Cray, HP, Fujitsu, Nvidia, NEC, Red Hat, Texas Instruments, and Oracle Corporation. [1]

OpenMP uses a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the standard desktop computer to the supercomputer.

An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and Message Passing Interface (MPI), such that OpenMP is used for parallelism within a (multi-core) node while MPI is used for parallelism between nodes. There have also been efforts to run OpenMP on software distributed shared memory systems, [6] to translate OpenMP into MPI [7] [8] and to extend OpenMP for non-shared memory systems. [9]

Design

An illustration of multithreading where the primary thread forks off a number of threads which execute blocks of code in parallel

OpenMP is an implementation of multithreading, a method of parallelizing whereby a primary thread (a series of instructions executed consecutively) forks a specified number of sub-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors.

The section of code that is meant to run in parallel is marked accordingly, with a compiler directive that will cause the threads to form before the section is executed. [3] Each thread has an ID attached to it which can be obtained using a function (called omp_get_thread_num()). The thread ID is an integer, and the primary thread has an ID of 0. After the execution of the parallelized code, the threads join back into the primary thread, which continues onward to the end of the program.
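
As a minimal sketch of the fork–join behaviour described above (not taken from the specification; the message text is arbitrary), the following C program prints each thread's ID inside a parallel region and then continues on the primary thread alone:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    // Fork: every thread in the team executes this block.
    #pragma omp parallel
    {
        int id = omp_get_thread_num();   // 0 for the primary thread
        printf("Thread %d reporting\n", id);
    }
    // Join: only the primary thread continues from here.
    printf("Back on the primary thread\n");
    return 0;
}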

By default, each thread executes the parallelized section of code independently. Work-sharing constructs can be used to divide a task among the threads so that each thread executes its allocated part of the code. Both task parallelism and data parallelism can be achieved using OpenMP in this way.

The runtime environment allocates threads to processors depending on usage, machine load and other factors. The runtime environment can assign the number of threads based on environment variables, or the code can do so using functions. The OpenMP functions are included in a header file labelled omp.h in C/C++.
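
For example, the thread count can be requested from code with omp_set_num_threads() and inspected with omp_get_max_threads() and omp_get_num_threads(); the sketch below is illustrative only, and the request for four threads is an arbitrary choice:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    omp_set_num_threads(4);   // a request, not a guarantee
    printf("Up to %d threads may be used\n", omp_get_max_threads());

    #pragma omp parallel
    {
        // omp_get_num_threads() reports the size of the current team.
        if (omp_get_thread_num() == 0)
            printf("Team size is %d\n", omp_get_num_threads());
    }
    return 0;
}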

History

The OpenMP Architecture Review Board (ARB) published its first API specification, OpenMP for Fortran 1.0, in October 1997. In October of the following year it released the C/C++ standard. 2000 saw version 2.0 of the Fortran specification, with version 2.0 of the C/C++ specification released in 2002. Version 2.5 is a combined C/C++/Fortran specification, released in 2005.[citation needed]

Up to version 2.0, OpenMP primarily specified ways to parallelize highly regular loops, as they occur in matrix-oriented numerical programming, where the number of iterations of the loop is known at entry time. This was recognized as a limitation, and various task parallel extensions were added to implementations. In 2005, an effort to standardize task parallelism was formed, which published a proposal in 2007, taking inspiration from task parallelism features in Cilk, X10 and Chapel. [10]

Version 3.0 was released in May 2008. Included in the new features in 3.0 is the concept of tasks and the task construct, [11] significantly broadening the scope of OpenMP beyond the parallel loop constructs that made up most of OpenMP 2.0. [12]

Version 4.0 of the specification was released in July 2013. [13] It adds or improves the following features: support for accelerators; atomics; error handling; thread affinity; tasking extensions; user-defined reductions; SIMD support; Fortran 2003 support. [14][full citation needed]

Version 5.2 was released in November 2021. [15]

The current version, 6.0, was released in November 2024. [16]

Note that not all compilers (and operating systems) support the full feature set of the latest version(s).

Core elements

Chart of OpenMP constructs

The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data-environment management, thread synchronization, user-level runtime routines and environment variables.

In C/C++, OpenMP uses #pragmas. The OpenMP-specific pragmas are listed below.

Thread creation

The pragma omp parallel is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted the master thread, with thread ID 0.

Example (C program): Display "Hello, world." using multiple threads.

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}

Use the flag -fopenmp to compile using GCC:

$ gcc -fopenmp hello.c -o hello -ldl

Output on a computer with two cores, and thus two threads:

Hello, world.
Hello, world.

However, the output may also be garbled because of a race condition caused by the two threads sharing the standard output.

Hello, wHello, woorld.
rld.

Whether printf is atomic depends on the underlying implementation, [17] unlike C++11's std::cout, which is thread-safe by default. [18]
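
One way to avoid such interleaving, shown here only as a sketch and not part of the original example, is to wrap the output statement in an OpenMP critical construct, which lets only one thread at a time execute it:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        // Only one thread at a time may enter the critical section,
        // so the lines are no longer interleaved (their order still varies).
        #pragma omp critical
        printf("Hello, world.\n");
    }
    return 0;
}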

Work-sharing constructs

Used to specify how to assign independent work to one or all of the threads.

Example: initialize the value of a large array in parallel, using each thread to do part of the work

int main(int argc, char **argv)
{
    int a[100000];

    #pragma omp parallel for
    for (int i = 0; i < 100000; i++) {
        a[i] = 2 * i;
    }
    return 0;
}

This example is embarrassingly parallel, since each iteration depends only on the value of i. The OpenMP parallel for directive tells the OpenMP system to split this task among its worker threads. Each thread receives a unique, private version of the loop variable. [19] For instance, with two worker threads, one thread might be handed a version of i that runs from 0 to 49999 while the second gets a version running from 50000 to 99999.
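
How the iterations are divided can also be controlled explicitly with the schedule clause; the following sketch (the chunk size of 1000 is an arbitrary choice) hands out fixed-size blocks of iterations to the threads in round-robin fashion:

#define N 100000

int main(void)
{
    int a[N];

    // static schedule with chunk size 1000: iterations are dealt out
    // to the threads in blocks of 1000, round-robin.
    #pragma omp parallel for schedule(static, 1000)
    for (int i = 0; i < N; i++) {
        a[i] = 2 * i;
    }
    return 0;
}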

Variant directives

Variant directives are one of the major features introduced in the OpenMP 5.0 specification to help programmers improve performance portability. They enable adaptation of OpenMP pragmas and user code at compile time. The specification defines traits to describe active OpenMP constructs, execution devices, and functionality provided by an implementation; context selectors based on the traits and user-defined conditions; and the metadirective and declare variant directives for users to program the same code region with variant directives.

The mechanism provided by the two variant directives for selecting variants is more convenient to use than C/C++ preprocessing, since it directly supports variant selection in OpenMP and allows an OpenMP compiler to analyze and determine the final directive from the variants and context.

// code adaptation using preprocessing directives

int v1[N], v2[N], v3[N];
#if defined(nvptx)
  #pragma omp target teams distribute parallel for map(to:v1,v2) map(from:v3)
  for (int i = 0; i < N; i++)
      v3[i] = v1[i] * v2[i];
#else
  #pragma omp target parallel for map(to:v1,v2) map(from:v3)
  for (int i = 0; i < N; i++)
      v3[i] = v1[i] * v2[i];
#endif

// code adaptation using metadirective in OpenMP 5.0

int v1[N], v2[N], v3[N];
#pragma omp target map(to:v1,v2) map(from:v3)
#pragma omp metadirective \
    when(device={arch(nvptx)}: target teams distribute parallel for) \
    default(target parallel for)
for (int i = 0; i < N; i++)
    v3[i] = v1[i] * v2[i];

Clauses

Since OpenMP is a shared-memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and values need to be passed between the sequential part and the parallel region (the code block executed in parallel), so data-environment management is introduced via data-sharing-attribute clauses appended to the OpenMP directive. The different types of clauses are listed below (a short example follows the list):

Data sharing attribute clauses
Synchronization clauses
Scheduling clauses
IF control
Initialization
Data copying
Reduction
Others
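
A brief sketch of how some of these clauses combine in practice (the variable names are illustrative only): shared keeps a single copy visible to all threads, private gives each thread its own uninitialized copy, and reduction combines the per-thread partial results into one value.

#include <stdio.h>
#include <omp.h>

#define N 1000

int main(void)
{
    double x[N], sum = 0.0, tmp;

    for (int i = 0; i < N; i++)
        x[i] = 1.0;

    // x is shared by all threads, tmp is private to each thread,
    // and sum is accumulated safely via a reduction.
    #pragma omp parallel for shared(x) private(tmp) reduction(+:sum)
    for (int i = 0; i < N; i++) {
        tmp = 2.0 * x[i];
        sum += tmp;
    }

    printf("sum = %f\n", sum);   // expected: 2000.000000
    return 0;
}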

User-level runtime routines

These routines are used to modify and check the number of threads, detect whether the execution context is in a parallel region, query how many processors are in the current system, set and unset locks, provide timing functions, and so on.
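
A small sketch exercising a few of these routines (the program structure is illustrative only, not prescribed by the specification):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    printf("Processors available: %d\n", omp_get_num_procs());
    printf("Inside a parallel region? %d\n", omp_in_parallel());   // 0 here

    omp_lock_t lock;
    omp_init_lock(&lock);

    double start = omp_get_wtime();   // wall-clock timer
    #pragma omp parallel
    {
        omp_set_lock(&lock);          // set/unset a lock around the shared output
        printf("Thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
        omp_unset_lock(&lock);
    }
    printf("Elapsed: %f s\n", omp_get_wtime() - start);

    omp_destroy_lock(&lock);
    return 0;
}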

Environment variables

A method to alter the execution features of OpenMP applications, used to control loop-iteration scheduling, the default number of threads, and so on. For example, OMP_NUM_THREADS specifies the number of threads for an application.
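
For example, assuming the hello binary built earlier, the environment can request a thread count and a loop schedule before the program starts (OMP_SCHEDULE only affects loops that use the schedule(runtime) clause):

$ OMP_NUM_THREADS=4 ./hello
$ export OMP_SCHEDULE="dynamic,1000"
$ ./hello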

Implementations

OpenMP has been implemented in many commercial compilers. For instance, Visual C++ 2005, 2008, 2010, 2012 and 2013 support it (OpenMP 2.0, in Professional, Team System, Premium and Ultimate editions [20] [21] [22] ), as well as Intel Parallel Studio for various processors. [23] Oracle Solaris Studio compilers and tools support the latest OpenMP specifications with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and Linux platforms. The Fortran, C and C++ compilers from The Portland Group also support OpenMP 2.5. GCC has also supported OpenMP since version 4.2.

Compilers with an implementation of OpenMP 3.0:

Several compilers support OpenMP 3.1:

Compilers supporting OpenMP 4.0:

Several Compilers supporting OpenMP 4.5:

Partial support for OpenMP 5.0:

Auto-parallelizing compilers that generate source code annotated with OpenMP directives:

Several profilers and debuggers expressly support OpenMP:

Pros and cons

Pros:

Cons:

Performance expectations

One might expect to get an N-times speedup when running a program parallelized using OpenMP on an N-processor platform. However, this seldom occurs, for these reasons:

Thread affinity

Some vendors recommend setting the processor affinity on OpenMP threads to associate them with particular processor cores. [45] [46] [47] This minimizes thread migration and context-switching cost among cores. It also improves the data locality and reduces the cache-coherency traffic among the cores (or processors).
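
Since OpenMP 4.0, affinity can typically be requested portably through the OMP_PROC_BIND and OMP_PLACES environment variables; the sketch below shows one possible policy (the choice of values and the program name ./app are illustrative, not a recommendation from the sources above):

$ export OMP_PROC_BIND=close   # keep threads near the primary thread's place
$ export OMP_PLACES=cores      # one place per physical core
$ ./app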

Benchmarks

A variety of benchmarks has been developed to demonstrate the use of OpenMP, test its performance and evaluate correctness.

Simple examples

Performance benchmarks include:

Correctness benchmarks include:

See also

Related Research Articles

An optimizing compiler is a compiler designed to generate code that is optimized in aspects such as minimizing program execution time, memory use, storage size, and power consumption. Optimization is generally implemented as a sequence of optimizing transformations, algorithms that transform code to produce semantically equivalent code optimized for some aspect.

<span class="mw-page-title-main">Single instruction, multiple data</span> Type of parallel processing

Single instruction, multiple data (SIMD) is a type of parallel processing in Flynn's taxonomy. SIMD can be internal and it can be directly accessible through an instruction set architecture (ISA), but it should not be confused with an ISA. SIMD describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.

The Message Passing Interface (MPI) is a standardized and portable message-passing standard designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines that are useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several open-source MPI implementations, which fostered the development of a parallel software industry, and encouraged development of portable and scalable large-scale parallel applications.

The C preprocessor is the macro preprocessor for several computer programming languages, such as C, Objective-C, C++, and a variety of Fortran languages. The preprocessor provides inclusion of header files, macro expansions, conditional compilation, and line control.

Cilk, Cilk++, Cilk Plus and OpenCilk are general-purpose programming languages designed for multithreaded parallel computing. They are based on the C and C++ programming languages, which they extend with constructs to express parallel loops and the fork–join idiom.

In computing, single program, multiple data (SPMD) is a term that has been used to refer to computational models for exploiting parallelism whereby multiple processors cooperate in the execution of a program in order to obtain results faster.

In compiler theory, loop optimization is the process of increasing execution speed and reducing the overheads associated with loops. It plays an important role in improving cache performance and making effective use of parallel processing capabilities. Most execution time of a scientific program is spent on loops; as such, many compiler optimization techniques have been developed to make them faster.

Automatic parallelization, also auto parallelization, or autoparallelization refers to converting sequential code into multi-threaded and/or vectorized code in order to use multiple processors simultaneously in a shared-memory multiprocessor (SMP) machine. Fully automatic parallelization of sequential programs is a challenge because it requires complex program analysis and the best approach may depend upon parameter values that are not known at compilation time.

<span class="mw-page-title-main">Binary Modular Dataflow Machine</span>

Binary Modular Dataflow Machine (BMDFM) is a software package that enables running an application in parallel on shared memory symmetric multiprocessing (SMP) computers using the multiple processors to speed up the execution of single applications. BMDFM automatically identifies and exploits parallelism due to the static and mainly dynamic scheduling of the dataflow instruction sequences derived from the formerly sequential program.

<span class="mw-page-title-main">Data parallelism</span> Parallelization across multiple processors in parallel computing environments

Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied on regular data structures like arrays and matrices by working on each element in parallel. It contrasts to task parallelism as another form of parallelism.

Task parallelism is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. In contrast to data parallelism which involves running the same task on different components of data, task parallelism is distinguished by running many different tasks at the same time on the same data. A common type of task parallelism is pipelining, which consists of moving a single set of data through a series of separate tasks where each task can execute independently of the others.

The Sieve C++ Parallel Programming System is a C++ compiler and parallel runtime designed and released by Codeplay that aims to simplify the parallelization of code so that it may run efficiently on multi-processor or multi-core systems. It is an alternative to other well-known parallelisation methods such as OpenMP, the RapidMind Development Platform and Threading Building Blocks (TBB).

Intel Fortran Compiler, as part of Intel OneAPI HPC toolkit, is a group of Fortran compilers from Intel for Windows, macOS, and Linux.

Oracle Developer Studio, formerly named Oracle Solaris Studio, Sun Studio, Sun WorkShop, Forte Developer, and SunPro Compilers, is the Oracle Corporation's flagship software development product for the Solaris and Linux operating systems. It includes optimizing C, C++, and Fortran compilers, libraries, and performance analysis and debugging tools, for Solaris on SPARC and x86 platforms, and Linux on x86/x64 platforms, including multi-core systems.

Loop-level parallelism is a form of parallelism in software programming that is concerned with extracting parallel tasks from loops. The opportunity for loop-level parallelism often arises in computing programs where data is stored in random access data structures. Where a sequential program will iterate over the data structure and operate on indices one at a time, a program exploiting loop-level parallelism will use multiple threads or processes which operate on some or all of the indices at the same time. Such parallelism provides a speedup to overall execution time of the program, typically in line with Amdahl's law.

OpenHMPP is a programming standard for heterogeneous computing. Based on a set of compiler directives, the standard is a programming model designed to handle hardware accelerators without the complexity associated with GPU programming. This directive-based approach has been implemented because directives enable a loose relationship between an application code and the use of a hardware accelerator (HWA).

For several years parallel hardware was available only for distributed computing, but recently it has become available for low-end computers as well. Hence it has become inevitable for software programmers to start writing parallel applications. Programmers naturally think sequentially and are less acquainted with writing multi-threaded or parallel-processing applications. Parallel programming requires handling issues such as synchronization and deadlock avoidance, so writing such applications demands expertise beyond the application domain. Programmers therefore prefer to write sequential code, which most popular programming languages support, allowing them to concentrate on the application itself. There is thus a need to convert such sequential applications into parallel applications with the help of automated tools. The need is also non-trivial because a large amount of legacy code written over the past few decades needs to be reused and parallelized.

OpenACC is a programming standard for parallel computing developed by Cray, CAPS, Nvidia and PGI. The standard is designed to simplify parallel programming of heterogeneous CPU/GPU systems.

Intel Advisor is a design-assistance and analysis tool for SIMD vectorization, threading, memory use, and GPU offload optimization. The tool supports C, C++, Data Parallel C++ (DPC++), Fortran and Python languages. It is available on Windows and Linux operating systems in the form of a standalone GUI tool, a Microsoft Visual Studio plug-in, or a command-line interface. It supports OpenMP. The Intel Advisor user interface is also available on macOS.

Privatization is a technique used in shared-memory programming to enable parallelism, by removing dependencies that occur across different threads in a parallel program. Dependencies between threads arise from two or more threads reading or writing a variable at the same time. Privatization gives each thread a private copy, so it can read and write it independently and thus, simultaneously.

References

  1. 1 2 3 "About the OpenMP ARB and". OpenMP.org. 2013-07-11. Archived from the original on 2013-08-09. Retrieved 2013-08-14.
  2. 1 2 "OpenMP Compilers & Tools". OpenMP.org. November 2019. Retrieved 2020-03-05.
  3. 1 2 Gagne, Abraham Silberschatz, Peter Baer Galvin, Greg (2012-12-17). Operating system concepts (9th ed.). Hoboken, N.J.: Wiley. pp. 181–182. ISBN   978-1-118-06333-0.{{cite book}}: CS1 maint: multiple names: authors list (link)
  4. OpenMP Tutorial at Supercomputing 2008
  5. Using OpenMP – Portable Shared Memory Parallel Programming – Download Book Examples and Discuss
  6. Costa, J.J.; et al. (May 2006). "Running OpenMP applications efficiently on an everything-shared SDSM". Journal of Parallel and Distributed Computing. 66 (5): 647–658. doi:10.1016/j.jpdc.2005.06.018. hdl: 2117/370260 .
  7. Basumallik, Ayon; Min, Seung-Jai; Eigenmann, Rudolf (2007). "Programming Distributed Memory Systems Using OpenMP". 2007 IEEE International Parallel and Distributed Processing Symposium. New York: IEEE Press. pp. 1–8. CiteSeerX   10.1.1.421.8570 . doi:10.1109/IPDPS.2007.370397. ISBN   978-1-4244-0909-9. S2CID   14237507. A preprint is available on Chen Ding's home page; see especially Section 3 on Translation of OpenMP to MPI.
  8. Wang, Jue; Hu, ChangJun; Zhang, JiLin; Li, JianJiang (May 2010). "OpenMP compiler for distributed memory architectures". Science China Information Sciences. 53 (5): 932–944. doi: 10.1007/s11432-010-0074-0 . (As of 2016 the KLCoMP software described in this paper does not appear to be publicly available)
  9. Cluster OpenMP (a product that used to be available for Intel C++ Compiler versions 9.1 to 11.1 but was dropped in 13.0)
  10. Ayguade, Eduard; Copty, Nawal; Duran, Alejandro; Hoeflinger, Jay; Lin, Yuan; Massaioli, Federico; Su, Ernesto; Unnikrishnan, Priya; Zhang, Guansong (2007). A proposal for task parallelism in OpenMP (PDF). Proc. Int'l Workshop on OpenMP.
  11. "OpenMP Application Program Interface, Version 3.0" (PDF). openmp.org. May 2008. Retrieved 2014-02-06.
  12. LaGrone, James; Aribuki, Ayodunni; Addison, Cody; Chapman, Barbara (2011). A Runtime Implementation of OpenMP Tasks. Proc. Int'l Workshop on OpenMP. pp. 165–178. CiteSeerX   10.1.1.221.2775 . doi:10.1007/978-3-642-21487-5_13.
  13. "OpenMP 4.0 API Released". OpenMP.org. 2013-07-26. Archived from the original on 2013-11-09. Retrieved 2013-08-14.
  14. "OpenMP Application Program Interface, Version 4.0" (PDF). openmp.org. July 2013. Retrieved 2014-02-06.
  15. "OpenMP 5.2 Specification".
  16. "OpenMP ARB Releases OpenMP 6.0 for Easier Programming".
  17. "C - How to use printf() in multiple threads".
  18. "std::cout, std::wcout - cppreference.com".
  19. "Tutorial – Parallel for Loops with OpenMP". 2009-07-14.
  20. Visual C++ Editions, Visual Studio 2005
  21. Visual C++ Editions, Visual Studio 2008
  22. Visual C++ Editions, Visual Studio 2010
  23. David Worthington, "Intel addresses development life cycle with Parallel Studio" Archived 2012-02-15 at the Wayback Machine , SDTimes, 26 May 2009 (accessed 28 May 2009)
  24. "XL C/C++ for Linux Features", (accessed 9 June 2009)
  25. "Oracle Technology Network for Java Developers | Oracle Technology Network | Oracle". Developers.sun.com. Retrieved 2013-08-14.
  26. 1 2 "openmp – GCC Wiki". Gcc.gnu.org. 2013-07-30. Retrieved 2013-08-14.
  27. Kennedy, Patrick (2011-09-06). "Intel® C++ and Fortran Compilers now support the OpenMP* 3.1 Specification | Intel® Developer Zone". Software.intel.com. Retrieved 2013-08-14.
  28. 1 2 "IBM XL C/C++ compilers features". IBM . 13 December 2018.
  29. 1 2 "IBM XL Fortran compilers features". 13 December 2018.
  30. 1 2 "Clang 3.7 Release Notes". llvm.org. Retrieved 2015-10-10.
  31. "Absoft Home Page" . Retrieved 2019-02-12.
  32. "GCC 4.9 Release Series – Changes". www.gnu.org.
  33. "OpenMP* 4.0 Features in Intel Compiler 15.0". Software.intel.com. 2014-08-13. Archived from the original on 2018-11-16. Retrieved 2014-11-10.
  34. "GCC 6 Release Series - Changes". www.gnu.org.
  35. "OpenMP Compilers & Tools". openmp.org. www.openmp.org. Retrieved 29 October 2019.
  36. 1 2 "OpenMP Support — Clang 12 documentation". clang.llvm.org. Retrieved 2020-10-23.
  37. "GOMP — An OpenMP implementation for GCC - GNU Project - Free Software Foundation (FSF)". gcc.gnu.org. Archived from the original on 2021-02-27. Retrieved 2020-10-23.
  38. "OpenMP* Support". Intel. Retrieved 2020-10-23.
  39. Amritkar, Amit; Tafti, Danesh; Liu, Rui; Kufrin, Rick; Chapman, Barbara (2012). "OpenMP parallelism for fluid and fluid-particulate systems". Parallel Computing. 38 (9): 501. doi:10.1016/j.parco.2012.05.005.
  40. Amritkar, Amit; Deb, Surya; Tafti, Danesh (2014). "Efficient parallel CFD-DEM simulations using OpenMP". Journal of Computational Physics. 256: 501. Bibcode:2014JCoPh.256..501A. doi:10.1016/j.jcp.2013.09.007.
  41. OpenMP Accelerator Support for GPUs
  42. Detecting and Avoiding OpenMP Race Conditions in C++
  43. "Alexey Kolosov, Evgeniy Ryzhkov, Andrey Karpov 32 OpenMP traps for C++ developers". Archived from the original on 2017-07-07. Retrieved 2009-04-15.
  44. Stephen Blair-Chappell, Intel Corporation, Becoming a Parallel Programming Expert in Nine Minutes, presentation on ACCU 2010 conference
  45. Chen, Yurong (2007-11-15). "Multi-Core Software". Intel Technology Journal. 11 (4). doi:10.1535/itj.1104.08.
  46. "OMPM2001 Result". SPEC. 2008-01-28.
  47. "OMPM2001 Result". SPEC. 2003-04-01. Archived from the original on 2021-02-25. Retrieved 2008-03-28.

Further reading