Red Hat Cluster Suite

The Red Hat Cluster Suite (RHCS) includes software to create high-availability and load-balancing clusters. Both can be used on the same system, although this use case is unlikely. Both products, the High Availability Add-On and the Load Balancer Add-On, are based on open-source community projects, and Red Hat Cluster developers contribute code upstream to the community. Computational clustering is not part of the cluster suite but is instead provided by Red Hat MRG.

High Availability Add-On

The High Availability Add-On is Red Hat's implementation of Linux-HA. It attempts to ensure service availability by monitoring the other nodes of the cluster. All nodes of the cluster must agree on their configuration and the state of shared services before the cluster is considered to have quorum and services can be started.

The primary means of communicating node status is a network device (commonly Ethernet); in the case of a possible network failure, quorum can be decided through secondary methods such as shared storage or multicast.
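
The arithmetic behind quorum is simple majority voting over the votes each node contributes. The sketch below is a hypothetical illustration of that rule (including a quorum-disk style tiebreaker for two-node clusters), not the cluster suite's actual implementation; the node names and vote counts are invented.

    # Hypothetical illustration of majority-based quorum; not the RHCS code.
    def has_quorum(votes_present: int, expected_votes: int) -> bool:
        """A cluster is quorate when more than half of the expected votes are visible."""
        return votes_present > expected_votes // 2

    # A three-node cluster with one vote per node stays quorate if one node is lost.
    online_nodes = ["node1", "node2"]              # node3 is unreachable
    print(has_quorum(len(online_nodes), 3))        # True: 2 > 1, services may run

    # A two-node cluster cannot form a majority when split, so a tiebreaker such
    # as a shared quorum disk supplies an extra vote.
    print(has_quorum(1, 2))                        # False on its own
    print(has_quorum(1 + 1, 2 + 1))                # True once the quorum-disk vote counts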

Software services, filesystems and network status can be monitored and controlled by the cluster suite, and services and resources can be failed over to other cluster nodes in case of failure.
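
As a rough sketch of that monitor-and-relocate behaviour (the function names and status table below are invented for illustration and are not the cluster suite's resource-agent interface):

    # Illustrative monitor-and-failover logic; names are hypothetical.
    def monitor(node: str, service: str, status: dict) -> bool:
        """Health check: here, just a lookup in a pretend status table."""
        return status.get((node, service), False)

    def fail_over(service: str, current: str, nodes: list[str], status: dict):
        """If the service fails on its current node, start it on the next online node."""
        if monitor(current, service, status):
            return current                          # still healthy, nothing to do
        for node in nodes:
            if node != current and status.get((node, "online"), False):
                status[(node, service)] = True      # "start" the service there
                return node
        return None                                 # no eligible node remains

    status = {("node1", "httpd"): False,            # httpd has crashed on node1
              ("node2", "online"): True}
    print(fail_over("httpd", "node1", ["node1", "node2"], status))   # -> node2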

The cluster forcibly terminates a failed node's access to services or resources via fencing, to ensure the node and its data are in a known state. The node is fenced by removing its power (an approach known as STONITH) or its access to the shared storage. More recent versions of the cluster suite use a distributed lock manager, which allows fine-grained locking with no single point of failure. Earlier versions relied on the "grand unified lock manager" (GULM), which could be clustered but still presented a point of failure if the nodes acting as GULM servers failed. GULM was last available in Red Hat Cluster Suite 4.
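
The essential property fencing provides is ordering: the failed node must be cut off (powered down or blocked from storage) before any surviving node touches its resources, otherwise two nodes could write the same data at once. The following is a minimal sketch of that ordering only, with a stand-in for a real fence agent.

    # Illustrative only: fencing must succeed before recovery begins.
    def power_off(node: str) -> bool:
        """Stand-in for a STONITH action; a real fence agent would drive a power device."""
        print(f"fencing {node} by removing power")
        return True

    def recover(node: str, services: list[str]) -> list[str]:
        """Restart a fenced node's services elsewhere, but only after a successful fence."""
        if not power_off(node):
            raise RuntimeError(f"could not fence {node}; refusing to touch its resources")
        return [f"{svc} restarted on a surviving node" for svc in services]

    print(recover("node3", ["httpd", "shared-fs"]))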

Technical details

Load Balancing Add-On

Red Hat adapted the Piranha load-balancing software to allow for transparent load balancing and failover between servers. The application being balanced does not require special configuration; instead, a Red Hat Enterprise Linux server with the load balancer configured intercepts and routes traffic based on the metrics and rules set on the load balancer.
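
Piranha acts as a front end for Linux Virtual Server style scheduling, where the director picks a back-end server for each connection according to a rule such as weighted round-robin. The sketch below illustrates that kind of rule only; the server names and weights are made up, and this is not Piranha's implementation.

    # Hypothetical weighted round-robin selection, in the spirit of an LVS-style rule.
    from itertools import cycle

    def weighted_round_robin(servers: dict[str, int]):
        """Yield back-end names in proportion to their configured weights."""
        pool = [name for name, weight in servers.items() for _ in range(weight)]
        return cycle(pool)

    backends = {"real1": 2, "real2": 1}             # real1 receives twice the traffic
    picker = weighted_round_robin(backends)
    print([next(picker) for _ in range(6)])         # ['real1', 'real1', 'real2', ...]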

Support and Product Life-Cycle

Red Hat Cluster Suite support is tied to a matching version of Red Hat Enterprise Linux and follows the same maintenance policy. The product has no activation, time limit or remote kill switch, and it will continue to work after the support life cycle has ended. It is partially supported when running in a VMware virtual machine.[1]

History

The cluster suite is available in:

See also

Related Research Articles

<span class="mw-page-title-main">Red Hat</span> Computing services company

Red Hat, Inc. is an American software company that provides open source software products to enterprises and is a subsidiary of IBM. Founded in 1993, Red Hat has its corporate headquarters in Raleigh, North Carolina, with other offices worldwide.

Sistina Software was a US company that focused on storage solutions designed around a Linux platform. It originated in the University of Minnesota.

In computing, the Global File System 2 or GFS2 is a shared-disk file system for Linux computer clusters. GFS2 allows all members of a cluster to have direct concurrent access to the same shared block storage, in contrast to distributed file systems which distribute data throughout the cluster. GFS2 can also be used as a local file system on a single computer.

Google File System is a proprietary distributed file system developed by Google to provide efficient, reliable access to data using large clusters of commodity hardware. Google File System was replaced by Colossus in 2010.

The Linux-HA project provides a high-availability (clustering) solution for Linux, FreeBSD, OpenBSD, Solaris and Mac OS X which promotes reliability, availability, and serviceability (RAS).

High-availability clusters are groups of computers that support server applications that can be reliably utilized with a minimum amount of down-time. They operate by using high availability software to harness redundant computers in groups or clusters that provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware/software faults, and immediately restarting the application on another system without requiring administrative intervention, a process known as failover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate file systems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well.
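
The takeover sequence described above (prepare storage and network, then start the application) can be pictured as an ordered checklist; the step names below are purely illustrative.

    # Illustrative ordering of a failover: prepare the node, then start the application.
    def take_over(node: str, app: str) -> None:
        steps = [
            f"import and mount the required filesystems on {node}",
            f"configure the floating IP address on {node}",
            f"start the supporting services {app} depends on",
            f"start {app} on {node}",
        ]
        for step in steps:                          # each step must succeed before the next
            print("->", step)

    take_over("node2", "database")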

GPFS is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the Top 500 List. For example, it is the filesystem of the Summit at Oak Ridge National Laboratory which was the #1 fastest supercomputer in the world in the November 2019 Top 500 List. Summit is a 200 Petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. The storage filesystem is called Alpine.

<span class="mw-page-title-main">DRBD</span> Distributed replicated storage system for Linux

Distributed Replicated Block Device (DRBD) is a distributed replicated storage system for the Linux platform. It mirrors block devices between multiple hosts, functioning transparently to applications on the host systems. This replication can involve any type of block device, such as hard drives, partitions, RAID setups, or logical volumes.

Gluster Inc. was a software company that provided an open source platform for scale-out public and private cloud storage. The company was privately funded and headquartered in Sunnyvale, California, with an engineering center in Bangalore, India. Gluster was funded by Nexus Venture Partners and Index Ventures. Gluster was acquired by Red Hat on October 7, 2011.

A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system. Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance.

Ceph is a free and open-source software-defined storage platform that provides object storage, block storage, and file storage built on a common distributed cluster foundation. Ceph provides distributed operation without a single point of failure and scalability to the exabyte level. Since version 12 (Luminous), Ceph does not rely on any other conventional filesystem and directly manages HDDs and SSDs with its own storage backend BlueStore and can expose a POSIX filesystem.

StorNext File System (SNFS), colloquially referred to as StorNext, is a shared-disk file system made by Quantum Corporation. StorNext enables multiple Windows, Linux and Apple workstations to access shared block storage over a Fibre Channel network. With the StorNext file system installed, these computers can read and write to the same storage volume at the same time, enabling what is known as a "file-locking SAN". StorNext is used in environments where large files must be shared and accessed simultaneously by users without network delays, or where a file must be available for access by multiple readers starting at different times. Common use cases include multiple video editor environments in feature film, television and general video post-production.

oVirt is a free, open-source virtualization management platform. It was founded by Red Hat as a community project on which Red Hat Virtualization is based. It allows centralized management of virtual machines, compute, storage and networking resources from an easy-to-use web-based front end with platform-independent access. KVM on the x86-64, PowerPC64 and s390x architectures is the only supported hypervisor, but there is an ongoing effort to support the ARM architecture in a future release.

<span class="mw-page-title-main">Linoma Software</span>

Linoma Software was a developer of secure managed file transfer and IBM i software solutions. The company was acquired by HelpSystems in June 2016. Mid-sized companies, large enterprises and government entities use Linoma's software products to protect sensitive data and comply with data security regulations such as PCI DSS, HIPAA/HITECH, SOX, GLBA and state privacy laws. Linoma's software runs on a variety of platforms including Windows, Linux, UNIX, IBM i, AIX, Solaris, HP-UX and Mac OS X.

BeeGFS is a parallel file system developed for high-performance computing. BeeGFS includes a distributed metadata architecture for scalability and flexibility reasons. It specializes in data throughput.

Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Originally designed by Google, the project is now maintained by a worldwide community of contributors, and the trademark is held by the Cloud Native Computing Foundation.

<span class="mw-page-title-main">Proxmox Virtual Environment</span> Linux distribution for server virtualization

Proxmox Virtual Environment is a virtualization platform designed for the provisioning of hyper-converged infrastructure.

ONTAP, Data ONTAP, Clustered Data ONTAP (cDOT), or Data ONTAP 7-Mode is NetApp's proprietary operating system used in storage disk arrays such as NetApp FAS and AFF, ONTAP Select, and Cloud Volumes ONTAP. With the release of version 9.0, NetApp simplified the name by dropping the word "Data" and removing the 7-Mode image; ONTAP 9 is therefore the successor of Clustered Data ONTAP 8.

References

  1. "Can I run Red Hat Enterprise Linux 5 AP Cluster Suite on VMWare?". July 17, 2009. Archived from the original on July 31, 2012. Retrieved 2010-02-02.
  2. "Chapter 1. GFS Overview". Red Hat Enterprise Linux 5. Red Hat. Retrieved 2012-04-02.
  3. "Chapter 1. GFS2 Overview". Red Hat Enterprise Linux 6. Red Hat. Retrieved 2012-04-02.