Fault detection, isolation, and recovery (FDIR) is a subfield of control engineering concerned with monitoring a system, identifying when a fault has occurred, and pinpointing the type of fault and its location. Two approaches can be distinguished: direct pattern recognition of sensor readings that indicate a fault, and analysis of the discrepancy between the sensor readings and the values expected from some model. In the latter case, a fault is typically said to be detected when the discrepancy, or residual, exceeds a certain threshold. It is then the task of fault isolation to categorize the type of fault and its location in the machinery. Fault detection and isolation (FDI) techniques can be broadly classified into two categories: model-based FDI and signal processing based FDI.
In model-based FDI techniques, some model of the system is used to decide whether a fault has occurred. The system model may be mathematical or knowledge based. Model-based FDI techniques include [2] the observer-based approach, the parity-space approach, and parameter-identification-based methods. Another strand of model-based FDI schemes comprises the set-membership methods. These methods guarantee detection of a fault under certain conditions. The main difference is that instead of finding the most likely model, these techniques discard the models that are not compatible with the data. [3] [4]
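The observer-based idea can be sketched in a few lines. The following is a minimal illustration, not an implementation from the cited references: a hypothetical first-order plant is tracked by a Luenberger observer, the residual is the gap between the measured and predicted output, and a fault is flagged whenever the residual exceeds a threshold. The system parameters, observer gain, and injected sensor fault are all assumed for the example.

```python
import numpy as np

# Hypothetical first-order plant: x[k+1] = a*x[k] + b*u[k], y[k] = x[k]
a, b = 0.9, 1.0
L = 0.5          # observer gain (assumed; a - L must keep the observer stable)

def detect_fault(u, y, threshold=0.5):
    """Observer-based residual check: flag samples where the discrepancy
    between measured and predicted output exceeds the threshold."""
    x_hat = 0.0
    flags = []
    for k in range(len(y)):
        y_hat = x_hat                      # predicted output
        r = y[k] - y_hat                   # residual
        flags.append(abs(r) > threshold)
        # observer update: correct the state estimate with the residual
        x_hat = a * x_hat + b * u[k] + L * r
    return flags

# Simulate the nominal plant, then inject an additive sensor fault from k = 50
np.random.seed(0)
u = np.ones(100)
x = 0.0
y = []
for k in range(100):
    y.append(x + (2.0 if k >= 50 else 0.0) + 0.01 * np.random.randn())
    x = a * x + b * u[k]

flags = detect_fault(u, y)
print(sum(flags[:50]), any(flags[50:60]))  # no flags before the fault, flags after
```

Note that after a few steps the observer partially "absorbs" the faulty measurement and the residual shrinks again, which is one reason practical schemes use more elaborate residual evaluation than a fixed threshold.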
The example shown in the figure on the right illustrates a model-based FDI technique for an aircraft elevator reactive controller through the use of a truth table and a state chart. The truth table defines how the controller reacts to detected faults, and the state chart defines how the controller switches between the different modes of operation (passive, active, standby, off, and isolated) of each actuator. For example, if a fault is detected in hydraulic system 1, then the truth table sends an event to the state chart that the left inner actuator should be turned off. One of the benefits of this model-based FDI technique is that this reactive controller can also be connected to a continuous-time model of the actuator hydraulics, allowing the study of switching transients. [5]
In signal processing based FDI, some mathematical or statistical operations are performed on the measurements, or a neural network is trained on the measurements, to extract information about the fault. [6] [7] [8] [9]
A good example of signal processing based FDI is time domain reflectometry, where a signal is sent down a cable or electrical line and the reflected signal is compared mathematically to the original signal to identify faults. Spread spectrum time domain reflectometry, for instance, involves sending a spread-spectrum signal down a wire to detect wire faults. [10] Several clustering methods have also been proposed to identify novel faults and segment a given signal into normal and faulty segments. [11]
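The time domain reflectometry principle can be sketched with a cross-correlation: the received reflection is a delayed, attenuated copy of the probe pulse, and the lag of the correlation peak gives the round-trip delay, which maps to the distance of the fault. The sample rate, propagation speed, attenuation, and delay below are all assumed for illustration.

```python
import numpy as np

# Hypothetical TDR sketch: a probe pulse reflects at a fault and returns
# after a round-trip delay; cross-correlation recovers that delay.
fs = 1e9                     # sample rate: 1 GS/s (assumed)
v = 2e8                      # propagation speed in the cable, m/s (assumed)

probe = np.zeros(1000)
probe[10:20] = 1.0           # short rectangular probe pulse

true_delay = 400             # round-trip delay in samples (assumed fault)
received = 0.5 * np.roll(probe, true_delay)   # attenuated, delayed echo

# Cross-correlate; the lag of the peak estimates the round-trip delay
corr = np.correlate(received, probe, mode="full")
lag = corr.argmax() - (len(probe) - 1)

distance = lag / fs * v / 2  # one-way distance to the fault, metres
print(lag, distance)
```

In a real spread-spectrum TDR system the probe is a pseudo-noise sequence rather than a pulse, precisely because its sharp autocorrelation makes the peak easy to locate in noise.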
Machine fault diagnosis is a field of mechanical engineering concerned with finding faults arising in machines. A particularly well developed part of it applies specifically to rotating machinery, one of the most common types encountered. To identify the most probable faults leading to failure, many methods are used for data collection, including vibration monitoring, thermal imaging, oil particle analysis, etc. These data are then processed using methods like spectral analysis, wavelet analysis, wavelet transform, short-time Fourier transform, Gabor expansion, Wigner-Ville distribution (WVD), cepstrum, bispectrum, correlation method, high resolution spectral analysis, waveform analysis (in the time domain, because spectral analysis usually concerns only frequency distribution and not phase information) and others. The results of this analysis are used in a root cause failure analysis in order to determine the original cause of the fault.

For example, if a bearing fault is diagnosed, it is likely that the bearing was not itself damaged at installation, but rather damaged as the consequence of another installation error (e.g., misalignment) which then led to bearing damage. Diagnosing the bearing's damaged state is not enough for precision maintenance purposes. The root cause needs to be identified and remedied. If this is not done, the replacement bearing will soon wear out for the same reason and the machine will suffer more damage, remaining dangerous. Of course, the cause may also be visible as a result of the spectral analysis undertaken at the data-collection stage, but this may not always be the case.
The most common technique for detecting faults is the time-frequency analysis technique. For a rotating machine, the rotational speed of the machine (often known as the RPM) is not constant, especially during the start-up and shutdown stages of the machine. Even if the machine is running in the steady state, the rotational speed will vary around a steady-state mean value, and this variation depends on load and other factors. Since sound and vibration signals obtained from a rotating machine are strongly related to its rotational speed, they are time-variant signals in nature. These time-variant features carry the machine fault signatures. Consequently, how these features are extracted and interpreted is important for both research and industrial applications.
The most common method used in signal analysis is the fast Fourier transform (FFT). The Fourier transform and its inverse counterpart offer two perspectives to study a signal: via the time domain or via the frequency domain. The FFT-based spectrum of a time signal shows us the existence of its frequency contents. By studying these and their magnitude or phase relations, we can obtain various types of information, such as harmonics, sidebands, beat frequency, bearing fault frequency and so on. However, the FFT is only suitable for signals whose frequency contents do not change over time; as mentioned above, the frequency contents of the sound and vibration signals obtained from a rotating machine are very much time-dependent. For this reason, FFT-based spectra are unable to show how the frequency contents develop over time. To be more specific, if the RPM of a machine is increasing or decreasing during its startup or shutdown period, the bandwidth of each component in the FFT spectrum will become much wider than it would be for the steady state alone. Hence, in such a case, the harmonics are not so distinguishable in the spectrum.
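For a steady-state signal, the FFT view works well. The sketch below, with assumed frequencies and amplitudes, builds a vibration-like signal from a 50 Hz fundamental plus a weaker 150 Hz harmonic and shows that both stand out in the magnitude spectrum.

```python
import numpy as np

# Sketch: steady-state "vibration" with a 50 Hz fundamental and a 150 Hz
# harmonic (values assumed). The FFT magnitude spectrum exposes both.
fs = 1000                                    # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)                # 1 s of data -> 1 Hz bin spacing
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal) * 2   # amplitude scaling
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

peaks = freqs[spectrum > 0.1]                # bins with significant energy
print(peaks)                                 # fundamental and harmonic
```

If the same signal were a run-up chirp instead of fixed tones, its energy would smear across many bins of this spectrum, which is exactly the limitation the text describes.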
The time-frequency approach for machine fault diagnosis can be divided into two broad categories: linear methods and quadratic methods. The difference is that linear transforms can be inverted to reconstruct the time signal, and are thus more suitable for signal processing tasks such as noise reduction and time-varying filtering. The quadratic methods describe the energy distribution of a signal in the joint time-frequency domain, which is useful for analysis, classification, and detection of signal features; however, phase information is lost in the quadratic time-frequency representation, and the time histories cannot be reconstructed with these methods.
The short-time Fourier transform (STFT) and the Gabor transform are two algorithms commonly used as linear time-frequency methods. If we consider linear time-frequency analysis to be the evolution of the conventional FFT, then quadratic time-frequency analysis would be the power spectrum counterpart. Quadratic algorithms include the Gabor spectrogram, Cohen's class and the adaptive spectrogram. The main advantage of time-frequency analysis is discovering the patterns of frequency changes, which usually represent the nature of the signal. Once this pattern is identified, the machine fault associated with it can be identified. Another important use of time-frequency analysis is the ability to filter out a particular frequency component using a time-varying filter.
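A minimal STFT can be written directly with windowed FFT frames. The sketch below (all parameters assumed) analyses a run-up-like chirp whose frequency rises from 20 Hz to 120 Hz; tracking the dominant bin per frame recovers the frequency trajectory that a single FFT spectrum would smear out.

```python
import numpy as np

# Run-up-like chirp: instantaneous frequency rises 20 Hz -> 120 Hz (assumed)
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
f_inst = 20 + 50 * t                         # instantaneous frequency, Hz
signal = np.sin(2 * np.pi * np.cumsum(f_inst) / fs)

# STFT: slide a Hann window along the signal, FFT each frame
win, hop = 256, 128
frames = [signal[i:i + win] * np.hanning(win)
          for i in range(0, len(signal) - win, hop)]
stft = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per time frame

freqs = np.fft.rfftfreq(win, 1 / fs)
dominant = freqs[stft.argmax(axis=1)]        # tracked frequency per frame
print(dominant[0], dominant[-1])             # low at the start, high at the end
```

The window length trades time resolution against frequency resolution, which is the classic STFT compromise that quadratic methods like the Wigner-Ville distribution attempt to sidestep.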
In practice, model uncertainties and measurement noise can complicate fault detection and isolation. [12]
As a result, applying fault diagnostics to meet industrial needs cost-effectively, reducing maintenance costs without requiring more investment than the cost of what is to be avoided in the first place, requires an effective scheme for applying these techniques. This is the subject of maintenance, repair and operations; the different strategies include:
In fault detection and diagnosis, mathematical classification models, which belong to the supervised learning methods, are trained on a labeled dataset to accurately identify redundancies, faults and anomalous samples. Over the past decades, various classification and preprocessing models have been developed and proposed in this research area. [13] The k-nearest-neighbors algorithm (kNN) is one of the oldest techniques used to solve fault detection and diagnosis problems. [14] Despite the simple logic of this instance-based algorithm, it suffers from high dimensionality and long processing times when used on large datasets. [15] Since kNN is not able to automatically extract features to overcome the curse of dimensionality, it is often accompanied by data preprocessing techniques like principal component analysis (PCA), linear discriminant analysis (LDA) or canonical correlation analysis (CCA) to reach better performance. [16] In many industrial cases, the effectiveness of kNN has been compared with that of other methods, especially more complex classification models such as support vector machines (SVMs), which are widely used in this field. Thanks to their nonlinear mapping using kernel methods, SVMs achieve impressive generalization performance, even with small training data. [17] However, general SVMs do not perform automatic feature extraction themselves and, just like kNN, are often coupled with a data preprocessing technique. [18] Another drawback of SVMs is that their performance is highly sensitive to the initial parameters, particularly the kernel parameters, [19] so for each signal dataset a parameter tuning process has to be conducted first. The resulting slow training phase is therefore a limitation of SVMs in fault detection and diagnosis applications. [20]
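The PCA-plus-kNN pipeline described above can be sketched with plain numpy. The data here are synthetic "normal" versus "faulty" feature vectors with an assumed separation, not a real industrial dataset; the point is only the shape of the pipeline: center the data, project onto the top principal components, then classify by majority vote among nearest neighbours.

```python
import numpy as np

# Synthetic labeled dataset (assumed): 10-dimensional feature vectors,
# class 0 = normal, class 1 = faulty, with well-separated means.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 10)),
               rng.normal(3.0, 1.0, size=(50, 10))])
y = np.array([0] * 50 + [1] * 50)

# PCA: project the centred data onto its top-2 principal components
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:2].T

def knn_predict(query, Z, y, k=5):
    """Classify a PCA-projected query by majority vote of its k nearest
    training points in the reduced space."""
    d = np.linalg.norm(Z - query, axis=1)
    votes = y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

# Classify a new, clearly faulty sample, projected the same way
q = (rng.normal(3.0, 1.0, size=10) - mean) @ Vt[:2].T
print(knn_predict(q, Z, y))
```

In practice the number of retained components and the neighbourhood size k are tuned on validation data, mirroring the parameter-tuning burden the text notes for SVMs.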
Artificial neural networks (ANNs) are among the most mature and widely used mathematical classification algorithms in fault detection and diagnosis. ANNs are well known for efficiently learning the complex relations that generally exist inherently in fault detection and diagnosis problems, and they are easy to operate. [18] Another advantage of ANNs is that they perform automatic feature extraction by allocating negligible weights to irrelevant features, sparing the system a separate feature extractor. [21] However, ANNs tend to over-fit the training set, which results in poor accuracy on the validation set. Hence, regularization terms and prior knowledge are often added to the ANN model to avoid over-fitting and achieve higher performance. Moreover, properly determining the size of the hidden layer requires exhaustive parameter tuning to avoid poor approximation and generalization capabilities. [20] In general, various SVM and ANN models (e.g., back-propagation neural networks and multi-layer perceptrons) have performed successfully in fault detection and diagnosis for applications such as gearboxes, [22] machinery parts (e.g., mechanical bearings [23] ), compressors, [24] wind and gas turbines [25] [26] and steel plates. [27]
With the research advances in ANNs and the advent of deep learning algorithms using deep and complex layers, novel classification models have been developed to cope with fault detection and diagnosis. [28] Most shallow learning models extract only a few feature values from signals, causing a dimensionality reduction from the original signal. Using convolutional neural networks, the continuous wavelet transform scalogram can be classified directly into normal and faulty classes. Such a technique avoids discarding important fault information and results in better fault detection and diagnosis performance. [29] In addition, by transforming signals into image representations, 2D convolutional neural networks can be used to identify faulty signals from vibration image features. [30]
Deep belief networks, [31] restricted Boltzmann machines [32] and autoencoders [33] are other deep neural network architectures which have been successfully used in this field of research. Compared with traditional machine learning, deep learning models are able to learn more complex structures from datasets thanks to their deep architecture; however, they need larger datasets and longer processing time to achieve higher accuracy. [18]
Fault Recovery in FDIR is the action taken after a failure has been detected and isolated to return the system to a stable state. Some examples of fault recoveries are:
Infrared spectroscopy is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm−1. Units of IR wavelength are commonly given in micrometers, symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below.
In mathematics, the Morlet wavelet is a wavelet composed of a complex exponential (carrier) multiplied by a Gaussian window (envelope). This wavelet is closely related to human perception, both hearing and vision.
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.
Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data. Other frameworks in the spectrum of supervisions include weak- or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.
A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays.
A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. The primary use is to measure the power of the spectrum of known and unknown signals. The input signal that most common spectrum analyzers measure is electrical; however, spectral compositions of other signals, such as acoustic pressure waves and optical light waves, can be considered through the use of an appropriate transducer. Spectrum analyzers for other types of signals also exist, such as optical spectrum analyzers which use direct optical techniques such as a monochromator to make measurements.
In mathematics, a time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.
Bioacoustics is a cross-disciplinary science that combines biology and acoustics. Usually it refers to the investigation of sound production, dispersion and reception in animals. This involves neurophysiological and anatomical basis of sound production and detection, and relation of acoustic signals to the medium they disperse through. The findings provide clues about the evolution of acoustic mechanisms, and from that, the evolution of animals that employ them.
Prognostics is an engineering discipline focused on predicting the time at which a system or a component will no longer perform its intended function. This lack of performance is most often a failure beyond which the system can no longer be used to meet desired performance. The predicted time then becomes the remaining useful life (RUL), which is an important concept in decision making for contingency mitigation. Prognostics predicts the future performance of a component by assessing the extent of deviation or degradation of a system from its expected normal operating conditions. The science of prognostics is based on the analysis of failure modes, detection of early signs of wear and aging, and fault conditions. An effective prognostics solution is implemented when there is sound knowledge of the failure mechanisms that are likely to cause the degradations leading to eventual failures in the system. It is therefore necessary to have initial information on the possible failures in a product. Such knowledge is important to identify the system parameters that are to be monitored. A potential use for prognostics is in condition-based maintenance. The discipline that links studies of failure mechanisms to system lifecycle management is often referred to as prognostics and health management (PHM), sometimes also system health management (SHM) or—in transportation applications—vehicle health management (VHM) or engine health management (EHM). Technical approaches to building models in prognostics can be categorized broadly into data-driven approaches, model-based approaches, and hybrid approaches.
A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both.
Wavelet packet decomposition, originally known as optimal subband tree structuring (SB-TS), is a wavelet transform in which the discrete-time (sampled) signal is passed through more filters than in the discrete wavelet transform (DWT).
Condition monitoring is the process of monitoring a parameter of condition in machinery, in order to identify a significant change which is indicative of a developing fault. It is a major component of predictive maintenance. The use of condition monitoring allows maintenance to be scheduled, or other actions to be taken, to prevent consequential damage. Condition monitoring has a unique benefit in that conditions that would shorten normal lifespan can be addressed before they develop into a major failure. Condition monitoring techniques are normally used on rotating equipment, auxiliary systems and other machinery like belt-driven equipment, while periodic inspection using non-destructive testing (NDT) techniques and fit for service (FFS) evaluation are used for static plant equipment such as steam boilers, piping and heat exchangers.
Predictive maintenance techniques are designed to help determine the condition of in-service equipment in order to estimate when maintenance should be performed. This approach promises cost savings over routine or time-based preventive maintenance, because tasks are performed only when warranted. Thus, it is regarded as condition-based maintenance carried out as suggested by estimations of the degradation state of an item.
In mathematics, a wavelet series is a representation of a square-integrable function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.
Computer-aided detection (CADe), also called computer-aided diagnosis (CADx), are systems that assist doctors in the interpretation of medical images. Imaging techniques in X-ray, MRI, endoscopy, and ultrasound diagnostics yield a great deal of information that the radiologist or other medical professional has to analyze and evaluate comprehensively in a short time. CAD systems process digital images or videos for typical appearances and to highlight conspicuous sections, such as possible diseases, in order to offer input to support a decision taken by the professional.
The Advanced Learning and Research Institute (ALaRI), a faculty of informatics, was established in 1999 at the University of Lugano to promote research and education in embedded systems. Within very few years, the Faculty of Informatics has become one of Switzerland's major destinations for teaching and research, ranking third after the two Federal Institutes of Technology, Zurich and Lausanne.
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool.
3D sound localization refers to an acoustic technology that is used to locate the source of a sound in a three-dimensional space. The source location is usually determined by the direction of the incoming sound waves and the distance between the source and sensors. It involves the structure arrangement design of the sensors and signal processing techniques.
EEG analysis exploits mathematical signal analysis methods and computer technology to extract information from electroencephalography (EEG) signals. The goals of EEG analysis are to help researchers gain a better understanding of the brain, assist physicians in diagnosis and treatment choices, and boost brain-computer interface (BCI) technology. There are many ways to roughly categorize EEG analysis methods. If a mathematical model is used to fit the sampled EEG signals, the method can be categorized as parametric; otherwise, it is a non-parametric method. Traditionally, most EEG analysis methods fall into four categories: time domain, frequency domain, time-frequency domain, and nonlinear methods. There are also later methods including deep neural networks (DNNs).
Applications of machine learning in earth sciences include geological mapping, gas leakage detection and geological features identification. Machine learning (ML) is a type of artificial intelligence (AI) that enables computer systems to classify, cluster, identify and analyze vast and complex sets of data while eliminating the need for explicit instructions and programming. Earth science is the study of the origin, evolution, and future of the planet Earth. The Earth system can be subdivided into four major components including the solid earth, atmosphere, hydrosphere and biosphere.