PM2


Parallel Multithreaded Machine (PM2) is a software environment for parallel programming on networks of computers.


PM2 is an open-source distributed multithreaded programming environment designed to efficiently support distributed programs with highly irregular behavior (e.g. branch-and-bound search, computation on sparse matrices) on distributed architectures. It is distributed under the GPL.

PM2 adheres to the SPMD (Single Program, Multiple Data) programming model, in a way very similar to the PVM and MPI communication libraries. The user writes a single program text, a copy of which is launched by a specific load command on each processing node of the current configuration. It is up to the programmer to include branching that differentiates between the processing nodes. In this way, a network of heterogeneous machines can be used as a single distributed parallel processor.
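
A minimal sketch of this SPMD branching is shown below in C. It uses MPI's launch and rank calls (the paragraph above compares PM2 to MPI) rather than PM2's own load command and node-identification interface, which this article does not detail; the shape is the point: one program, with behavior that forks on the node's identity.

```c
/* SPMD sketch: every node runs this same program; branching on
 * the rank differentiates the roles. Uses MPI, not PM2's API.
 * Build and run (example): mpicc spmd.c -o spmd && mpirun -np 4 ./spmd */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many nodes?  */

    if (rank == 0) {
        /* one branch: node 0 acts as coordinator */
        printf("coordinator, 1 of %d nodes\n", size);
    } else {
        /* the other branch: everyone else is a worker */
        printf("worker %d of %d\n", rank, size);
    }

    MPI_Finalize();
    return 0;
}
```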

It is primarily designed for medium-size clusters of commodity processing nodes interconnected by high-performance networks, but nothing prevents its use on massively parallel MIMD machines at one end of the spectrum, or as a support for metacomputing over the Internet at the other. It supports heterogeneous networking configurations, such as sets of interconnected clusters.

Distinguishing features of PM2 include its priority-driven scheduling policy, its thread migration mechanisms, and its ability to ease the development of various load balancing policies. It can manage several hundred threads on each available physical processor, and the PM2 interface provides facilities for managing this high degree of parallelism and for dynamic load balancing.
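
As a rough analogue of running hundreds of threads per node, here is a sketch using POSIX threads. Marcel, PM2's own user-level thread package, has its own lighter-weight API that is not shown in this article, so pthreads stands in for it here.

```c
/* Rough analogue of "several hundred threads per processor",
 * written with POSIX threads rather than PM2's Marcel package.
 * Build (example): gcc -pthread threads.c -o threads */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 256            /* hundreds of threads on one node */

static long results[NTHREADS];

static void *work(void *arg)
{
    long id = (long)arg;
    /* stand-in for one piece of an irregular workload,
       e.g. one subtree of a branch-and-bound search */
    results[id] = id * id;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, work, (void *)i);
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("joined %d threads, results[%d] = %ld\n",
           NTHREADS, NTHREADS - 1, results[NTHREADS - 1]);
    return 0;
}
```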

The thread management subsystem of PM2 is called Marcel (after Marcel Proust) and its communication subsystem Madeleine (after the French sweet that famously triggers the narrator's memories in Proust's In Search of Lost Time). An additional layer, DSM-PM2, provides threads with uniform access to data, whatever their physical location.

PM2 runs on most Unix platforms.

Development history

PM2 is developed at LaBRI (Laboratoire Bordelais de Recherche en Informatique), a research laboratory located in Bordeaux, France, jointly supported by INRIA, the CNRS, and the University of Bordeaux. Before that, PM2 was developed at LIP (Laboratoire de l'Informatique du Parallélisme), a research laboratory at the ENS Lyon (École Normale Supérieure de Lyon), France, jointly supported by INRIA, the CNRS, and Claude Bernard University Lyon 1. PM2 was originally designed by Raymond Namyst and Jean-François Méhaut at LIFL, University of Lille, France.


Related Research Articles

Thread (computing)

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process.

Beowulf cluster

A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster from inexpensive personal computer hardware.

Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

Message Passing Interface (MPI) is a standardized, portable message-passing interface designed to function on parallel computing architectures. The MPI standard defines the syntax and semantics of library routines useful to a wide range of users writing portable message-passing programs in C, C++, and Fortran. There are several open-source MPI implementations, which fostered the development of a parallel software industry and encouraged the development of portable and scalable large-scale parallel applications.
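
A minimal point-to-point exchange in the standard's C binding looks like the sketch below; it is generic MPI, not tied to any particular implementation.

```c
/* Minimal MPI point-to-point message: rank 0 sends one integer to
 * rank 1. Run with at least two processes, e.g. mpirun -np 2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```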

Parallel Virtual Machine (PVM) is a software tool for parallel networking of computers. It is designed to allow a network of heterogeneous Unix and/or Windows machines to be used as a single distributed parallel processor. Thus large computational problems can be solved more cost effectively by using the aggregate power and memory of many computers. The software is very portable; the source code, available free through netlib, has been compiled on everything from laptops to Crays.
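
For comparison with the MPI sketch above, here is a sketch of PVM's C interface in which a single binary plays both parent and child roles; pvm_parent() tells a task whether it was spawned. The executable name "pvmhello" passed to pvm_spawn is an assumption about how the binary is installed.

```c
/* PVM sketch: the parent spawns one copy of itself and waits for a
 * reply; the spawned child packs an integer and sends it back.
 * "pvmhello" is an assumed executable name. */
#include <pvm3.h>
#include <stdio.h>

int main(void)
{
    int parent = pvm_parent();

    if (parent == PvmNoParent) {            /* parent path */
        int child, answer;
        pvm_spawn("pvmhello", NULL, PvmTaskDefault, "", 1, &child);
        pvm_recv(-1, 1);                    /* any sender, tag 1 */
        pvm_upkint(&answer, 1, 1);
        printf("child replied %d\n", answer);
    } else {                                /* spawned child path */
        int answer = 42;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&answer, 1, 1);
        pvm_send(parent, 1);
    }

    pvm_exit();
    return 0;
}
```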

The National Institute for Research in Digital Science and Technology (Inria) is a French national research institution focusing on computer science and applied mathematics. It was created under the name French Institute for Research in Computer Science and Automation (IRIA) in 1967 at Rocquencourt near Paris, part of Plan Calcul. Its first site was the historical premises of SHAPE, which is still used as Inria's main headquarters. In 1980, IRIA became INRIA. Since 2011, it has been styled Inria.

OpenMP

OpenMP is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
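
All three ingredients named above appear in the short C example below: a compiler directive parallelizes a loop, a library routine reports the thread count, and the number of threads can be set at run time through the OMP_NUM_THREADS environment variable.

```c
/* OpenMP sketch: directive + library routine + environment variable.
 * Build (example): gcc -fopenmp harmonic.c -o harmonic
 * Run (example):   OMP_NUM_THREADS=4 ./harmonic */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    double sum = 0.0;

    /* directive: split the loop across threads, reduce into sum */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++)
        sum += 1.0 / i;

    printf("harmonic(1e6) = %f, max threads = %d\n",
           sum, omp_get_max_threads());
    return 0;
}
```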

In computing, single program, multiple data (SPMD) is a term that has been used to refer to computational models for exploiting parallelism whereby multiple processors cooperate in the execution of a program in order to obtain results faster.

In computer science, gang scheduling is a scheduling algorithm for parallel systems that schedules related threads or processes to run simultaneously on different processors. Usually these will be threads all belonging to the same process, but they may also be from different processes, where the processes could have a producer-consumer relationship or come from the same MPI program.

Multi-core processor

A multi-core processor is a microprocessor on a single integrated circuit with two or more separate processing units, called cores, each of which reads and executes program instructions. The instructions are ordinary CPU instructions but the single processor can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single integrated circuit die or onto multiple dies in a single chip package. The microprocessors currently used in almost all personal computers are multi-core.

The Parallel Virtual File System (PVFS) is an open-source parallel file system. A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. PVFS was designed for use in large-scale cluster computing and focuses on high-performance access to large data sets. It consists of a server process and a client library, both written entirely as user-level code. A Linux kernel module and pvfs-client process allow the file system to be mounted and used with standard utilities. The client library provides for high-performance access via the Message Passing Interface (MPI). PVFS is jointly developed by the Parallel Architecture Research Laboratory at Clemson University, the Mathematics and Computer Science Division at Argonne National Laboratory, and the Ohio Supercomputer Center. PVFS development has been funded by NASA Goddard Space Flight Center, the DOE Office of Science Advanced Scientific Computing Research program, the NSF PACI and HECURA programs, and other government and private agencies. PVFS is now known as OrangeFS in its newest development branch.

Cray XMT

Cray XMT is a scalable multithreaded shared-memory supercomputer architecture by Cray, based on the third generation of the Tera MTA architecture and targeted at large graph problems. Presented in 2005, it supersedes the earlier, unsuccessful Cray MTA-2. It uses Threadstorm3 CPUs inside Cray XT3 blades. Designed to make use of commodity parts and existing subsystems from other commercial systems, it alleviated the Cray MTA-2's high cost of fully custom manufacture and support. It brought various substantial improvements over the Cray MTA-2, most notably nearly tripling the peak performance, and vastly increased the maximum CPU count to 8,192 and maximum memory to 128 TB, with a data TLB covering up to 512 TB.

The Laboratoire d'Informatique de Grenoble is the largest informatics research laboratory in Grenoble, France. It was created on 1 January 2007 as the result of a merger of the 24 research teams of the former IMAG Institute and of INRIA Rhône-Alpes.


Computer cluster

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

OCARI

OCARI is a low-rate wireless personal area network (LR-WPAN) communication protocol derived from the IEEE 802.15.4 standard. It was developed by a consortium during the OCARI project, which was funded by the French National Research Agency (ANR).

Node.js

Node.js is a cross-platform, open-source JavaScript runtime environment that can run on Windows, Linux, Unix, macOS, and more. Node.js runs on the V8 JavaScript engine, and executes JavaScript code outside a web browser.

DIET

DIET is middleware for grid computing: it sits between the operating system and the application software. DIET was created in 2000 and designed for high-performance computing. It is currently developed by INRIA, École Normale Supérieure de Lyon, CNRS, Claude Bernard University Lyon 1, and SysFera. It is open-source software released under the CeCILL license.

Message passing in computer clusters

Message passing is an inherent element of all computer clusters. All computer clusters, ranging from homemade Beowulfs to some of the fastest supercomputers in the world, rely on message passing to coordinate the activities of the many nodes they encompass. Message passing in computer clusters built with commodity servers and switches is used by virtually every internet service.

Heterogeneous computing refers to systems that use more than one kind of processor or core. These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors, usually incorporating specialized processing capabilities to handle particular tasks.
