Open Source Cluster Application Resources

Open Source Cluster Application Resources (OSCAR) is a Linux-based software stack for high-performance cluster computing. OSCAR allows users to install a Beowulf-type high-performance computing cluster.

Related Research Articles

Grid computing is the use of widely distributed computer resources to reach a common goal. A computing grid can be thought of as a distributed system with non-interactive workloads that involve many files. Grid computing is distinguished from conventional high-performance computing systems such as cluster computing in that grid computers have each node set to perform a different task/application. Grid computers also tend to be more heterogeneous and geographically dispersed than cluster computers. Although a single grid can be dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids are often constructed with general-purpose grid middleware software libraries. Grid sizes can be quite large.

<span class="mw-page-title-main">Beowulf cluster</span> Type of computing cluster

A Beowulf cluster is a computer cluster of what are normally identical, commodity-grade computers networked into a small local area network with libraries and programs installed which allow processing to be shared among them. The result is a high-performance parallel computing cluster from inexpensive personal computer hardware.
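
The sketch below is not part of any specific cluster distribution; it simply illustrates the kind of shared processing such libraries enable, assuming an MPI implementation such as MPICH or Open MPI is installed on the nodes. Each process reports its rank and the node it runs on:

```c
/* Hypothetical example: compile with mpicc hello.c -o hello and
 * launch across cluster nodes with e.g. mpirun -np 8 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes */
    MPI_Get_processor_name(name, &name_len);  /* node this process runs on */

    printf("process %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();                           /* shut down cleanly */
    return 0;
}
```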

<span class="mw-page-title-main">Parallel computing</span> Programming paradigm in which many processes are executed simultaneously

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
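
As a small, hypothetical illustration of data parallelism (one of the forms mentioned above), the loop below uses OpenMP, assuming a compiler with OpenMP support (for example, gcc -fopenmp), to split independent iterations across threads:

```c
/* Illustrative data-parallel loop: each element update is independent,
 * so the iterations can be distributed across threads. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];

    /* OpenMP divides the iterations among the available threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 2.0 * i;
    }

    printf("computed %d elements using up to %d threads\n",
           N, omp_get_max_threads());
    return 0;
}
```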

<span class="mw-page-title-main">Server farm</span> Collection of computer servers

A server farm or server cluster is a collection of computer servers, usually maintained by an organization to supply server functionality far beyond the capability of a single machine. Server farms often consist of thousands of computers which require a large amount of power to run and to keep cool. Even at optimal performance, a server farm has enormous financial and environmental costs. Server farms often include backup servers that can take over the functions of primary servers that may fail. They are typically collocated with the network switches and/or routers that enable communication between different parts of the cluster and the cluster's users. Server "farmers" typically mount computers, routers, power supplies and related electronics on 19-inch racks in a server room or data center.

<span class="mw-page-title-main">Xilinx</span> American technology company

Xilinx, Inc. was an American technology and semiconductor company that primarily supplied programmable logic devices. The company is known for inventing the first commercially viable field-programmable gate array (FPGA) and for pioneering the fabless manufacturing model.

<span class="mw-page-title-main">High-performance computing</span> Computing with supercomputers and clusters

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

Commodity computing involves the use of large numbers of already-available computing components for parallel computing, to get the greatest amount of useful computation at low cost. It is computing done on commodity computers, as opposed to high-cost superminicomputers or boutique computers. Commodity computers are computer systems, manufactured by multiple vendors, that incorporate components based on open standards.

Maui Cluster Scheduler is a job scheduler for use on clusters and supercomputers initially developed by Cluster Resources, Inc. Maui is capable of supporting multiple scheduling policies, dynamic priorities, reservations, and fairshare capabilities.

<span class="mw-page-title-main">Platform Computing</span> Software company in Canada

Platform Computing was a privately held software company primarily known for its job scheduling product, Load Sharing Facility (LSF). It was founded in 1992 in Toronto, Ontario, Canada and headquartered in Markham, Ontario with 11 branch offices across the United States, Europe and Asia.

The Terascale Open-source Resource and Queue Manager (TORQUE) is a distributed resource manager designed to oversee batch jobs and distributed compute nodes. It offers control and management capabilities for clusters, aiding in utilization, scheduling, and administration tasks.

The Parallel Virtual File System (PVFS) is an open-source parallel file system. A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. PVFS was designed for use in large-scale cluster computing and focuses on high-performance access to large data sets. It consists of a server process and a client library, both written entirely in user-level code. A Linux kernel module and pvfs-client process allow the file system to be mounted and used with standard utilities. The client library provides for high-performance access via the Message Passing Interface (MPI). PVFS is jointly developed by the Parallel Architecture Research Laboratory at Clemson University, the Mathematics and Computer Science Division at Argonne National Laboratory, and the Ohio Supercomputer Center. PVFS development has been funded by NASA Goddard Space Flight Center, the DOE Office of Science Advanced Scientific Computing Research program, the NSF PACI and HECURA programs, and other government and private agencies. PVFS is now known as OrangeFS in its newest development branch.
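
As a rough sketch of the MPI-based access path mentioned above, the hypothetical example below uses MPI-IO to have every process write its own block of a shared file. It assumes an MPI implementation with parallel-I/O support and a parallel file system reachable at an assumed mount point; the path used here is illustrative, not a fixed PVFS location:

```c
/* Hypothetical MPI-IO example: every rank writes one block of a shared
 * file; on a parallel file system such as PVFS the blocks can be
 * striped across multiple I/O servers. */
#include <mpi.h>
#include <string.h>

#define BLOCK 1024

int main(int argc, char **argv)
{
    int rank;
    char buf[BLOCK];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    memset(buf, 'a' + (rank % 26), BLOCK);   /* fill with a per-rank pattern */

    /* "/mnt/pvfs2/demo.dat" is an assumed mount point for illustration. */
    MPI_File_open(MPI_COMM_WORLD, "/mnt/pvfs2/demo.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset, so writes proceed concurrently. */
    MPI_File_write_at(fh, (MPI_Offset)rank * BLOCK, buf, BLOCK,
                      MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```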

Gravity Pipe (GRAPE) is a project which uses hardware acceleration to perform gravitational computations. Integrated with Beowulf-style commodity computers, the GRAPE system calculates the force of gravity that a given mass, such as a star, exerts on others. The project is based at the University of Tokyo.

In computing, performance per watt is a measure of the energy efficiency of a particular computer architecture or computer hardware: the rate of computation that can be delivered by a computer for every watt of power consumed. For example, a system sustaining 2 petaFLOPS while drawing 1 MW of power delivers 2 gigaFLOPS per watt. When comparing computing systems, this rate is typically measured by performance on the LINPACK benchmark; the Green500 list of supercomputers is one example of this approach. Performance per watt has been suggested to be a more sustainable measure of computing than Moore's Law.

<span class="mw-page-title-main">Computer cluster</span> Set of computers configured in a distributed computing system

A computer cluster is a set of computers that work together so that they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing.

Windows HPC Server 2008, released by Microsoft on 22 September 2008, is the successor product to Windows Compute Cluster Server 2003. Like WCCS, Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server software is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, an MPI library based on open-source MPICH2, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).

<span class="mw-page-title-main">Slurm Workload Manager</span> Free and open-source job scheduler for Linux and similar computers

The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management (SLURM), or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.

A cloud database is a database that typically runs on a cloud computing platform, with access to the database provided as a service. There are two common deployment models: users can run databases on the cloud independently, using a virtual machine image, or they can purchase access to a database service maintained by a cloud database provider. Of the databases available on the cloud, some are SQL-based and some use a NoSQL data model.

<span class="mw-page-title-main">Massachusetts Green High Performance Computing Center</span> Intercollegiate computer research center

Massachusetts Green High Performance Computing Center (MGHPCC) is an intercollegiate high-performance computing facility located in Holyoke, Massachusetts, connected to that city's municipal fiber grid and powered by Holyoke Gas and Electric via the Holyoke Canal System and Dam. MGHPCC is a joint venture of Boston University, Harvard, MIT, Northeastern, and the University of Massachusetts system; the facility has capacity for hundreds of thousands of cores in clusters provided by its affiliates. For example, as of 2016 one cluster used by UMass contained a network of 14,376 cores, both Intel and AMD, and more than 1.1 petabytes of on-site storage on an FDR InfiniBand network. The facility maintains capacity for regular expansion, with key partners investing in capability upgrades in the current building and more than 4 acres of additional undeveloped space.

<span class="mw-page-title-main">Message passing in computer clusters</span> Aspect of computer clusters

Message passing is an inherent element of all computer clusters. All computer clusters, ranging from homemade Beowulfs to some of the fastest supercomputers in the world, rely on message passing to coordinate the activities of the many nodes they encompass. Message passing in computer clusters built with commodity servers and switches is used by virtually every internet service.
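
A minimal sketch of the point-to-point message passing such clusters rely on, again assuming a standard MPI implementation: rank 0 sends an integer that rank 1 receives.

```c
/* Illustrative point-to-point exchange between two processes
 * (run with at least two ranks, e.g. mpirun -np 2 ./ping). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* to rank 1, tag 0 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* from rank 0 */
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```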

The Holland Computing Center, often abbreviated HCC, is the high-performance computing core for the University of Nebraska System. HCC has locations in both the University of Nebraska-Lincoln's June and Paul Schorr III Center for Computer Science & Engineering and the University of Nebraska Omaha's Peter Kiewit Institute. The center was named after Omaha businessman Richard Holland, who donated considerably to the university for the project.