In a distributed computing system, a failure detector is an application or subsystem that is responsible for detecting node failures or crashes. [1] Failure detectors were first introduced in 1996 by Chandra and Toueg in their paper Unreliable Failure Detectors for Reliable Distributed Systems. The paper presents the failure detector as a tool for achieving consensus (agreement among processes on a common value despite failures) and atomic broadcast (delivery of the same sequence of messages to all processes) in a distributed system. In other words, failure detectors look for crashed processes so that the system can maintain a level of reliability. In practice, once a failure detector suspects that a process has crashed, the system can exclude that process to prevent further serious crashes or errors. [2] [3]
In the 21st century, failure detectors are widely used in distributed computing systems to detect application errors, such as a software application that has stopped functioning properly. As distributed computing projects (see List of distributed computing projects) become more and more popular, the use of failure detectors also becomes important and critical. [4] [5]
Chandra and Toueg, the authors of the paper Unreliable Failure Detectors for Reliable Distributed Systems (1996), approached the problem of detecting faulty nodes by introducing the unreliable failure detector. [6] They describe the behavior of an unreliable failure detector in a distributed computing system as follows: each process in the system is equipped with a local failure detector component, and each local component monitors a subset of the processes in the system. [5] In addition, each local component maintains a list of the processes that it currently suspects to have crashed. [5]
Chandra and Toueg claimed that an unreliable failure detector can still be useful for detecting failures in the system. [6] They generalize from unreliable failure detectors to all forms of failure detectors because unreliable failure detectors and failure detectors share the same properties. Furthermore, Chandra and Toueg point out that a failure detector does not prevent any crashes in the system, even if the crashed process was suspected beforehand. Constructing a failure detector is an essential but very difficult problem that arises in the development of the fault-tolerant components of a distributed computer system. As a result, the failure detector was invented to meet the need for detecting errors in the massive information transactions of distributed computing systems. [1] [3] [5]
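As a rough sketch of this structure, the following Python fragment models the local failure detector module that each process holds: it watches a subset of the processes and exposes an unreliable, revisable set of suspects. The names (LocalFailureDetector, monitored, suspects) are illustrative and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class LocalFailureDetector:
    """Local failure detector module attached to one process (illustrative sketch).

    It monitors a subset of the other processes and exposes the set of process
    ids it currently suspects to have crashed. The suspicion list is allowed to
    be wrong: suspicions may be added and later revoked.
    """
    monitored: set[int]                       # processes this module watches
    suspects: set[int] = field(default_factory=set)

    def suspect(self, pid: int) -> None:
        if pid in self.monitored:
            self.suspects.add(pid)            # possibly a false suspicion

    def trust_again(self, pid: int) -> None:
        self.suspects.discard(pid)            # unreliable: suspicions can be revoked

    def suspected(self) -> set[int]:
        return set(self.suspects)
```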
The classes of failure detectors are distinguished by two important properties: completeness and accuracy. Completeness means that the failure detector eventually suspects every process that actually crashes, whereas accuracy restricts the mistakes (false suspicions) that the failure detector can make. [5]
The degree of completeness depends on how many of the crashed processes are suspected by a failure detector within a certain period. [5]
The degree of accuracy depends on the number of mistakes that a failure detector makes within a certain period. [5]
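A common way to realize these properties in practice is a timeout-based heartbeat detector, which also makes the completeness/accuracy trade-off concrete: a short timeout suspects crashed processes quickly but makes more mistakes about slow processes, while a long timeout makes fewer mistakes but detects crashes later. The sketch below is an assumed illustration; HeartbeatDetector and its parameters are hypothetical and not taken from the cited sources.

```python
import time

class HeartbeatDetector:
    """Timeout-based failure detector for one observing process (illustrative sketch).

    A longer timeout makes fewer mistakes (better accuracy) but takes longer to
    suspect a crashed peer; a shorter one reacts faster (better completeness in
    a given period) but may suspect correct processes that are merely slow.
    """
    def __init__(self, peers, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_seen = {p: time.monotonic() for p in peers}

    def on_heartbeat(self, peer) -> None:
        # A message from `peer` proves it was alive very recently.
        self.last_seen[peer] = time.monotonic()

    def suspects(self):
        now = time.monotonic()
        return {p for p, t in self.last_seen.items()
                if now - t > self.timeout_s}
```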
Failure detectors can be categorized into the following eight types: [1] [7]
The properties of these failure detectors are described below: [1]
| Completeness \ Accuracy | Strong Perpetual Accuracy | Weak Perpetual Accuracy | Strong Eventual Accuracy | Weak Eventual Accuracy |
|---|---|---|---|---|
| Strong Completeness | P | S | ♦P | ♦S |
| Weak Completeness | Q | W | ♦Q | ♦W |
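The table can be restated directly in code: each of the eight classes is simply a combination of one completeness degree with one accuracy degree. The sketch below is only my own restatement of the table; the parenthesized names are the shorthand commonly used in the literature for P, S, W and their eventual counterparts.

```python
from enum import Enum

class Completeness(Enum):
    STRONG = "strong"
    WEAK = "weak"

class Accuracy(Enum):
    STRONG_PERPETUAL = "strong perpetual"
    WEAK_PERPETUAL = "weak perpetual"
    STRONG_EVENTUAL = "strong eventual"
    WEAK_EVENTUAL = "weak eventual"

# The eight classes from the table, keyed by (completeness, accuracy).
CLASSES = {
    (Completeness.STRONG, Accuracy.STRONG_PERPETUAL): "P  (Perfect)",
    (Completeness.STRONG, Accuracy.WEAK_PERPETUAL):   "S  (Strong)",
    (Completeness.STRONG, Accuracy.STRONG_EVENTUAL):  "♦P (Eventually Perfect)",
    (Completeness.STRONG, Accuracy.WEAK_EVENTUAL):    "♦S (Eventually Strong)",
    (Completeness.WEAK,   Accuracy.STRONG_PERPETUAL): "Q",
    (Completeness.WEAK,   Accuracy.WEAK_PERPETUAL):   "W  (Weak)",
    (Completeness.WEAK,   Accuracy.STRONG_EVENTUAL):  "♦Q",
    (Completeness.WEAK,   Accuracy.WEAK_EVENTUAL):    "♦W (Eventually Weak)",
}
```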
In a nutshell, the properties of a failure detector depend on how fast it detects actual failures and how well it avoids false detection. A perfect failure detector finds all failures without making any mistakes, whereas a weak failure detector may fail to suspect some crashed processes and may make many mistakes. [3] [8]
Different types of failure detectors can be obtained by changing the properties of failure detectors. [3] [6] The first example below shows how to increase the completeness of a failure detector, and the second example shows how to transform one type of failure detector into another.
The following is an example abstracted from the Department of Computer Science at Yale University. It functions by boosting the completeness of a failure detector. [6]
    initially suspects = ∅
    do forever:
        for each process p:
            if my weak-detector suspects p, then
                send p to all processes
    upon receiving p from some process q:
        suspects := suspects + p - q
In the example above, if p crashes, then the weak detector will eventually suspect it. Because every process keeps rebroadcasting its suspicions, every failure detector in the system will eventually suspect p as well. This example shows that a failure detector with only weak completeness can be made to suspect every crashed process eventually. [6] In this sense, the eventual detection of crashed processes does not depend on which degree of completeness the underlying detector provides. [5]
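For concreteness, here is a minimal Python simulation of the boosting rule above. It is a sketch under simplifying assumptions rather than the source's code: the broadcast is replaced by a shared message list, each process's weakly complete detector output is given as a plain set, and crashed processes simply send nothing. On receiving a suspicion of p from q, a process adds p to its suspects and removes q, since a message from q shows that q is alive.

```python
def boost_completeness(n, weak_suspects, rounds=3):
    """Simulate the reduction from weak to strong completeness (illustrative).

    n              -- number of processes, with ids 0..n-1
    weak_suspects  -- weak_suspects[i] is the set of ids that process i's
                      local weakly complete detector currently suspects
    Returns the boosted suspect set of every process.
    """
    suspects = [set() for _ in range(n)]
    for _ in range(rounds):
        # Each process broadcasts every id its weak detector suspects.
        messages = [(q, p) for q in range(n) for p in weak_suspects[q]]
        for receiver in range(n):
            for sender, p in messages:
                # suspects := suspects + p - q  (from the pseudocode above)
                suspects[receiver].add(p)
                suspects[receiver].discard(sender)
    return suspects

# Example: process 2 has crashed and only process 0's weak detector notices;
# after the exchange, every process's suspect set contains process 2.
print(boost_completeness(3, [{2}, set(), set()]))
```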
The following are the correctness arguments that the algorithm transforming a failure detector W into a failure detector S must satisfy. [1] The failure detector W is weak in completeness and the failure detector S is strong in completeness; both are weak in accuracy. [6]
If all of the arguments above are satisfied, the reduction from the weak failure detector W to the strong failure detector S is correct within the distributed computing system. [1]
In information theory and coding theory with applications in computer science and telecommunication, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.
In machine learning, boosting is an ensemble meta-algorithm for primarily reducing bias, and also variance in supervised learning, and a family of machine learning algorithms that convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant: "Can a set of weak learners create a single strong learner?" A weak learner is defined to be a classifier that is only slightly correlated with the true classification. In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification.
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works.
Self-stabilization is a concept of fault-tolerance in distributed systems. Given any initial state, a self-stabilizing distributed system will end up in a correct state in a finite number of execution steps.
A Byzantine fault is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable.
Data corruption refers to errors in computer data that occur during writing, reading, storage, transmission, or processing, which introduce unintended changes to the original data. Computer, transmission, and storage systems use a number of measures to provide end-to-end data integrity, or lack of errors.
In distributed computing, the bully algorithm is a method for dynamically electing a coordinator or leader from a group of distributed computer processes. The process with the highest process ID number from amongst the non-failed processes is selected as the coordinator.
Fault tolerance is the ability of a system to maintain proper operation in the event of failures or faults in one or more of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can lead to total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability of maintaining functionality when portions of a system break down is referred to as graceful degradation.
Reliability, availability and serviceability (RAS), also known as reliability, availability, and maintainability (RAM), is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was originally used by International Business Machines (IBM) as a term to describe the robustness of their mainframe computers.
A distributed algorithm is an algorithm designed to run on computer hardware constructed from interconnected processors. Distributed algorithms are used in different application areas of distributed computing, such as telecommunications, scientific computing, distributed information processing, and real-time process control. Standard problems solved by distributed algorithms include leader election, consensus, distributed search, spanning tree generation, mutual exclusion, and resource allocation.
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, or agree on some data value that is needed during computation. Example applications of consensus include agreeing on what transactions to commit to a database in which order, state machine replication, and atomic broadcasts. Real-world applications often requiring consensus include cloud computing, clock synchronization, PageRank, opinion formation, smart power grids, state estimation, control of UAVs, load balancing, blockchain, and others.
In computer science, state machine replication (SMR) or state machine approach is a general method for implementing a fault-tolerant service by replicating servers and coordinating client interactions with server replicas. The approach also provides a framework for understanding and designing replication management protocols.
Paxos is a family of protocols for solving consensus in a network of unreliable or fallible processors. Consensus is the process of agreeing on one result among a group of participants. This problem becomes difficult when the participants or their communications may experience failures.
In fault-tolerant distributed computing, an atomic broadcast or total order broadcast is a broadcast where all correct processes in a system of multiple processes receive the same set of messages in the same order; that is, the same sequence of messages. The broadcast is termed "atomic" because it either eventually completes correctly at all participants, or all participants abort without side effects. Atomic broadcasts are an important distributed computing primitive.
The principal curvature-based region detector, also called PCBR is a feature detector used in the fields of computer vision and image analysis. Specifically the PCBR detector is designed for object recognition applications.
The Brooks–Iyengar algorithm or FuseCPA Algorithm or Brooks–Iyengar hybrid algorithm is a distributed algorithm that improves both the precision and accuracy of the interval measurements taken by a distributed sensor network, even in the presence of faulty sensors. The sensor network does this by exchanging the measured value and accuracy value at every node with every other node, and computes the accuracy range and a measured value for the whole network from all of the values collected. Even if some of the data from some of the sensors is faulty, the sensor network will not malfunction. The algorithm is fault-tolerant and distributed. It could also be used as a sensor fusion method. The precision and accuracy bound of this algorithm have been proved in 2016.
The Chandra–Toueg consensus algorithm, published by Tushar Deepak Chandra and Sam Toueg in 1996, is an algorithm for solving consensus in a network of unreliable processes equipped with an eventually strong failure detector. The failure detector is an abstract version of timeouts; it signals to each process when other processes may have crashed. An eventually strong failure detector is one that never identifies some specific non-faulty process as having failed after some initial period of confusion, and, at the same time, eventually identifies all faulty processes as failed. The Chandra–Toueg consensus algorithm assumes that the number of faulty processes, denoted by f, is less than n/2, i.e. it assumes f < n/2, where n is the total number of processes.
Avalanche is a decentralized, open-source proof of stake blockchain with smart contract functionality. AVAX is the native cryptocurrency of the platform.
The Scalable Weakly Consistent Infection-style Process Group Membership (SWIM) Protocol is a group membership protocol based on "outsourced heartbeats" used in distributed systems, first introduced by Indranil Gupta in 2001. It is a hybrid algorithm which combines failure detection with group membership dissemination.
Bernadette Charron-Bost is a French computer scientist specializing in distributed computing. She is a director of research for the French National Centre for Scientific Research (CNRS). Formerly affiliated with the École polytechnique, she recently moved to the École normale supérieure (Paris).