Prognostics

Prognostics is an engineering discipline focused on predicting the time at which a system or a component will no longer perform its intended function. [1] This lack of performance is most often a failure beyond which the system can no longer be used to meet desired performance. The predicted time then becomes the remaining useful life (RUL), an important quantity in decision making for contingency mitigation. Prognostics predicts the future performance of a component by assessing the extent of deviation or degradation of a system from its expected normal operating conditions. [2] The science of prognostics is based on the analysis of failure modes, detection of early signs of wear and aging, and fault conditions. An effective prognostics solution requires sound knowledge of the failure mechanisms that are likely to cause the degradations leading to eventual failures in the system. It is therefore necessary to have initial information on the possible failures (including the site, mode, cause, and mechanism) in a product; such knowledge is important for identifying the system parameters to be monitored. A principal application of prognostics is condition-based maintenance. The discipline that links studies of failure mechanisms to system lifecycle management is often referred to as prognostics and health management (PHM), sometimes also system health management (SHM) or, in transportation applications, vehicle health management (VHM) or engine health management (EHM). Technical approaches to building prognostic models can be categorized broadly into data-driven approaches, model-based approaches, and hybrid approaches.

Data-driven prognostics

Data-driven prognostics typically uses pattern recognition and machine learning techniques to detect changes in system states. [3] Classical data-driven methods for nonlinear system prediction include stochastic models such as the autoregressive (AR) model, the threshold AR model, the bilinear model, projection pursuit, multivariate adaptive regression splines, and the Volterra series expansion. Over the last decade, interest in data-driven system state forecasting has increasingly focused on flexible models such as various types of neural networks (NNs) and neuro-fuzzy (NF) systems. Data-driven approaches are appropriate when the understanding of the first principles of system operation is not comprehensive, or when the system is sufficiently complex that developing an accurate model is prohibitively expensive. The principal advantages of data-driven approaches are that they can often be deployed more quickly and cheaply than other approaches, and that they can provide system-wide coverage (cf. physics-based models, which can be quite narrow in scope). The main disadvantages are that data-driven approaches may have wider confidence intervals than other approaches and that they require a substantial amount of data for training. Data-driven approaches can be further subcategorized into fleet-based statistics and sensor-based conditioning. In addition, data-driven techniques subsume cycle-counting techniques that may include domain knowledge.
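As a minimal illustration of the classical stochastic-model family mentioned above, the sketch below fits a simple autoregressive (AR) model by least squares and forecasts a condition indicator a few steps ahead. The data, model order, and function names are illustrative assumptions, not drawn from any specific reference.

```python
import numpy as np

def fit_ar(series, order):
    """Fit AR coefficients by least squares: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack(
        [series[order - 1 - k:len(series) - 1 - k] for k in range(order)]
    )
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(series, coeffs, steps):
    """Iteratively roll the AR model forward from the last observations."""
    history = list(series)
    order = len(coeffs)
    for _ in range(steps):
        past = history[-1:-order - 1:-1]  # most recent value first
        history.append(float(np.dot(coeffs, past)))
    return history[len(series):]

# Hypothetical condition indicator: a slowly rising trend plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
signal = 0.01 * t + 0.05 * rng.standard_normal(200)
a = fit_ar(signal, order=3)
future = forecast(signal, a, steps=5)
```

In practice an AR model of this kind would be one building block inside a larger forecasting pipeline, with the model order selected by a criterion such as AIC rather than fixed by hand.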

The two basic data-driven strategies are (1) modeling cumulative damage (or, equivalently, health) and then extrapolating to a damage (or health) threshold, and (2) learning the remaining useful life directly from data. [4] [5] As mentioned, a principal bottleneck is the difficulty of obtaining run-to-failure data, in particular for new systems, since running systems to failure can be a lengthy and rather costly process. When future usage differs from past usage (as with most non-stationary systems), collecting data that covers all possible future usages (both load and environmental conditions) often becomes nearly impossible. Even where data exist, the efficacy of data-driven approaches depends not only on the quantity but also on the quality of system operational data. These data sources may include temperature, pressure, oil debris, currents, voltages, power, vibration and acoustic signals, and spectrometric data, as well as calibration and calorimetric data. The data often need to be pre-processed before use; typically two procedures are performed: (i) denoising and (ii) feature extraction. Denoising refers to reducing or eliminating the influence of noise on the data. Feature extraction is important because the large volumes of sensor data collected today often cannot be used directly; domain knowledge and statistical signal processing are therefore applied to extract informative features from (more often than not) noisy, high-dimensional data. [6]
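A minimal sketch of strategy (1), on hypothetical data: denoise a noisy health index with a moving average, fit a linear trend, and extrapolate that trend to a failure threshold to obtain a RUL estimate. The threshold, trend model, and data are assumptions for illustration only.

```python
import numpy as np

def denoise(signal, window=5):
    """Simple moving-average denoising (one of many possible filters)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

def rul_by_extrapolation(time, health, threshold):
    """Fit a linear trend to a health index and extrapolate it to the
    failure threshold; the RUL is the time remaining until the crossing."""
    slope, intercept = np.polyfit(time, health, 1)
    if slope >= 0:  # not degrading: RUL undefined under this simple model
        return np.inf
    t_fail = (threshold - intercept) / slope
    return max(t_fail - time[-1], 0.0)

# Hypothetical run: health starts near 1.0 and degrades toward 0.3.
rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
health = 1.0 - 0.005 * t + 0.01 * rng.standard_normal(100)
smooth = denoise(health)
rul = rul_by_extrapolation(t[:len(smooth)], smooth, threshold=0.3)
```

Real health indices rarely degrade linearly; exponential or data-learned trend models are common substitutes, but the extrapolate-to-threshold logic is the same.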

Physics-based prognostics

Physics-based prognostics (sometimes called model-based prognostics) attempts to incorporate physical understanding (physical models) of the system into the estimation of remaining useful life (RUL). Modeling physics can be accomplished at different levels, for example, micro and macro levels. At the micro level (also called the material level), physical models are embodied by a series of dynamic equations that define relationships, at a given time or load cycle, between the damage (or degradation) of a system/component and the environmental and operational conditions under which the system/component is operated. Micro-level models are often referred to as damage propagation models. Examples include Yu and Harris's fatigue life model for ball bearings, which relates the fatigue life of a bearing to the induced stress, [7] Paris and Erdogan's crack growth model, [8] and stochastic defect-propagation models. [9] Since measurements of critical damage properties (such as stress or strain of a mechanical component) are rarely available, sensed system parameters have to be used to infer the stress/strain values. Micro-level models need to account for their underlying assumptions and simplifications in the uncertainty management, which may pose significant limitations on the approach.
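As a concrete micro-level example, the Paris–Erdogan crack growth law da/dN = C(ΔK)^m, with stress intensity range ΔK = Δσ√(πa), can be numerically integrated to estimate the cycles remaining before a crack reaches a critical size. The material constants and crack sizes below are hypothetical, order-of-magnitude values, not data for any specific alloy.

```python
import math

def cycles_to_critical(a0, a_crit, delta_sigma, C, m, dN=1000):
    """Integrate Paris' law da/dN = C * (dK)^m, with dK = d_sigma * sqrt(pi*a),
    in blocks of dN cycles until the crack reaches the critical size."""
    a, n = a0, 0
    while a < a_crit:
        delta_K = delta_sigma * math.sqrt(math.pi * a)
        a += C * delta_K**m * dN
        n += dN
        if n > 10**9:  # safety cap for effectively non-propagating cracks
            return None
    return n

# Hypothetical parameters (stress in MPa, crack length in metres).
N = cycles_to_critical(a0=1e-3, a_crit=1e-2, delta_sigma=100.0,
                       C=1e-11, m=3.0)
```

Note how most of the life is consumed while the crack is small: growth accelerates sharply as ΔK rises with crack length, which is why early detection matters for this failure mode.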

Macro-level models are mathematical models at the system level that define the relationships among system input variables, system state variables, and system measured variables/outputs, where the model is often a somewhat simplified representation of the system, for example a lumped-parameter model. The trade-off is increased coverage at the possible cost of accuracy for a particular degradation mode. Where this trade-off is permissible, faster prototyping may result. However, where systems are complex (e.g., a gas turbine engine), developing even a macro-level model may be a rather time-consuming and labor-intensive process. As a result, macro-level models may not be available in detail for all subsystems. The resulting simplifications need to be accounted for by the uncertainty management.
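To make the lumped-parameter idea concrete, the sketch below uses a hypothetical first-order thermal model, C·dT/dt = P − (T − T_amb)/R, in which a rising thermal resistance R (e.g., from fouling of a heat path) appears as an elevated steady-state temperature. All parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_lumped_thermal(power, R, C, T_amb=25.0, dt=1.0):
    """Euler simulation of a first-order lumped-parameter thermal model:
    C * dT/dt = P - (T - T_amb) / R.  Degradation (higher R) raises the
    steady-state temperature for the same input power."""
    T = np.empty(len(power))
    T[0] = T_amb
    for k in range(1, len(power)):
        dT = (power[k - 1] - (T[k - 1] - T_amb) / R) / C
        T[k] = T[k - 1] + dt * dT
    return T

P = np.full(500, 10.0)  # constant 10 W input, hypothetical duty cycle
healthy = simulate_lumped_thermal(P, R=2.0, C=50.0)
degraded = simulate_lumped_thermal(P, R=3.0, C=50.0)  # fouled heat path
```

Comparing the two trajectories shows how a macro-level model lets an observable output (temperature) stand in for an unmeasurable degradation parameter (thermal resistance), at the cost of lumping away spatial detail.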

Hybrid approaches

Hybrid approaches attempt to leverage the strengths of both data-driven and model-based approaches. [10] [11] In reality, it is rare that fielded approaches are purely data-driven or purely model-based. More often than not, model-based approaches include some aspects of data-driven approaches, and data-driven approaches glean available information from models. An example of the former is tuning model parameters using field data; an example of the latter is deriving the set-point, bias, or normalization factor for a data-driven approach from models. Hybrid approaches can be categorized broadly into two categories: (1) pre-estimate fusion and (2) post-estimate fusion.
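The first case above, tuning the parameters of an assumed physical form against field data, can be sketched as follows. Here the physics is assumed to dictate an exponential degradation law y = a·exp(b·t), and the two parameters are calibrated by log-linear least squares; the data and the specific model form are hypothetical.

```python
import numpy as np

def tune_degradation_model(t, observed):
    """Calibrate the parameters of an assumed physics-based form
    y = a * exp(b * t) against field data via log-linear least squares,
    a simple stand-in for more elaborate parameter-estimation schemes."""
    b, log_a = np.polyfit(t, np.log(observed), 1)
    return np.exp(log_a), b

# Hypothetical field measurements of a growing fault indicator.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 50)
field_data = 0.5 * np.exp(0.2 * t) * np.exp(0.02 * rng.standard_normal(50))
a_hat, b_hat = tune_degradation_model(t, field_data)
```

The calibrated model keeps the interpretability of the physical form while letting the fleet's actual behavior set the constants, which is the essential hybrid trade.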

Pre-estimate fusion of models and data

The motivation for pre-estimate aggregation may be that no ground-truth data are available. This can occur when diagnostics does a good job of detecting faults that are resolved (through maintenance) before system failure occurs, so that hardly any run-to-failure data exist. However, there is an incentive to know better when a system will fail, so as to better exploit the remaining useful life while avoiding unscheduled maintenance (which is typically more costly than scheduled maintenance and results in system downtime). Garga et al. describe conceptually a pre-estimate aggregation hybrid approach in which domain knowledge is used to change the structure of a neural network, resulting in a more parsimonious representation of the network.[ citation needed ] Another way to accomplish pre-estimate aggregation is through a combined off-line and on-line process: in the off-line mode, a physics-based simulation model is used to understand the relationship of the sensor response to the fault state; in the on-line mode, data are used to identify the current damage state, track the data to characterize damage propagation, and finally apply an individualized data-driven propagation model for remaining-life prediction. For example, Khorasgani et al. [12] modeled the physics of failure in electrolytic capacitors, then used a particle filter approach to derive the dynamic form of the degradation model and estimate the current state of capacitor health. This model was then used to obtain a more accurate estimate of the remaining useful life (RUL) of the capacitors as they were subjected to thermal stress conditions.
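The particle-filter idea referenced above can be sketched in a heavily simplified form: a bootstrap filter tracking a scalar degradation state with a linear drift model. This is an illustration of the generic technique only, not the actual capacitor model of Khorasgani et al.; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter_step(particles, weights, measurement, drift=0.01,
                         process_std=0.005, meas_std=0.02):
    """One predict/update/resample cycle of a bootstrap particle filter
    tracking a scalar degradation state with linear drift."""
    # Predict: propagate each particle through the degradation model.
    particles = (particles + drift
                 + process_std * rng.standard_normal(len(particles)))
    # Update: weight particles by Gaussian measurement likelihood.
    likelihood = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a hypothetical true state drifting from 0 toward a failure threshold.
n = 500
particles = rng.normal(0.0, 0.05, n)
weights = np.full(n, 1.0 / n)
true_state = 0.0
for _ in range(50):
    true_state += 0.01
    z = true_state + 0.02 * rng.standard_normal()
    particles, weights = particle_filter_step(particles, weights, z)
estimate = particles.mean()
```

Once the filter tracks the current damage state, RUL follows by propagating the particles forward (without measurement updates) until each crosses the failure threshold, yielding a RUL distribution rather than a point value.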

Post-estimate fusion of model-based approaches with data-driven approaches

The motivation for post-estimate fusion is often uncertainty management: post-estimate fusion helps to narrow the uncertainty intervals of data-driven or model-based approaches while also improving accuracy. The underlying notion is that multiple information sources can help to improve the performance of an estimator. This principle has been applied successfully in classifier fusion, where the outputs of multiple classifiers are combined to arrive at a better result than any single classifier provides alone. Within the context of prognostics, fusion can be accomplished by employing quality assessments that are assigned to the individual estimators based on a variety of inputs, for example heuristics, a priori known performance, prediction horizon, or robustness of the prediction.[ citation needed ]
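One common concrete form of this fusion, shown here as a hedged sketch, is inverse-variance weighting of independent RUL estimates: estimators with tighter uncertainty get more weight, and the fused interval is never wider than the best individual one. The numbers below are hypothetical.

```python
def fuse_rul_estimates(estimates, variances):
    """Inverse-variance weighted fusion of independent RUL estimates.
    Under the independence assumption the fused variance is
    1 / sum(1/var_i), which never exceeds the smallest input variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused_mean = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused_mean, fused_var

# Hypothetical: a data-driven estimate (wide interval) and a model-based one.
mean, var = fuse_rul_estimates([120.0, 100.0], [400.0, 100.0])
```

Here the fused mean (104) sits much closer to the tighter model-based estimate, and the fused variance (80) is below either input, illustrating the narrowing effect described above.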

Prognostic performance evaluation

Prognostic performance evaluation is of key importance for successful PHM system deployment. The early lack of standardized methods for performance evaluation and of benchmark data sets resulted in reliance on conventional performance metrics borrowed from statistics. Those metrics were primarily accuracy- and precision-based, with performance evaluated against the end of life, typically known a priori in an offline setting. More recently, efforts towards maturing prognostics technology have put significant focus on standardizing prognostic methods, including performance assessment. A key capability missing from the conventional metrics is tracking performance over time. This matters because prognostics is a dynamic process in which predictions are updated at an appropriate frequency as more observation data become available from an operational system; the performance of the predictions likewise changes with time and must be tracked and quantified. Another aspect that distinguishes this process in a PHM context is the time value of a RUL prediction: as a system approaches failure, the time window for corrective action shrinks, and the accuracy of predictions becomes more critical for decision making. Finally, randomness and noise in the process, the measurements, and the prediction models are unavoidable, so prognostics inevitably involves uncertainty in its estimates. A robust prognostics performance evaluation must incorporate the effects of this uncertainty.
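One way to track performance over time, sketched below in the spirit of the α-λ family of prognostic metrics, is to check at each prediction instant whether the predicted RUL falls within a ±α band around the true RUL. The run-to-failure data and the α value are hypothetical.

```python
def alpha_lambda_pass(rul_predictions, true_ruls, alpha=0.2):
    """alpha-lambda style check: at each prediction time, is the predicted
    RUL within +/- alpha (relative) of the true RUL?  The per-time record
    lets performance be tracked as the system approaches end of life."""
    record = []
    for pred, true in zip(rul_predictions, true_ruls):
        lower, upper = (1 - alpha) * true, (1 + alpha) * true
        record.append(lower <= pred <= upper)
    return record

# Hypothetical run: true RUL counts down; predictions improve with time.
true_ruls = [100, 80, 60, 40, 20]
predictions = [130, 95, 65, 42, 21]
passes = alpha_lambda_pass(predictions, true_ruls)
```

Because the band is relative, it tightens in absolute terms as failure nears, matching the observation above that accuracy matters most when the window for corrective action is shortest.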

Several prognostic performance metrics have evolved with these issues in mind.

A visual representation of these metrics can be used to depict prognostic performance over a long time horizon.

Uncertainty in prognostics

There are many uncertainty parameters that can influence prediction accuracy; these can be grouped into several categories. [13]

Examples of uncertainty quantification can be found in the literature. [14] [15] [16] [17] [18]
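As a small illustration of uncertainty quantification in prognostics, the sketch below propagates an uncertain degradation rate through a linear health model by Monte Carlo sampling, producing a RUL distribution rather than a point estimate. All values are hypothetical.

```python
import numpy as np

def rul_distribution(slope_mean, slope_std, current_health, threshold,
                     n=10000, seed=4):
    """Monte Carlo uncertainty propagation: sample the uncertain
    degradation rate, map each sample to a time-to-threshold, and
    return the resulting RUL samples."""
    rng = np.random.default_rng(seed)
    slopes = rng.normal(slope_mean, slope_std, n)
    slopes = slopes[slopes < 0]  # keep only degrading samples
    return (threshold - current_health) / slopes

ruls = rul_distribution(slope_mean=-0.01, slope_std=0.002,
                        current_health=0.8, threshold=0.3)
median_rul = float(np.median(ruls))
p10, p90 = np.percentile(ruls, [10, 90])
```

Reporting percentiles (e.g., p10/p90) instead of a single number lets maintenance decisions be made against a chosen risk level, which is the practical payoff of quantifying prognostic uncertainty.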

Commercial hardware and software platforms

For most industrial PHM applications, commercial off-the-shelf data acquisition hardware and sensors are normally the most practical and common choice. Example commercial vendors of data acquisition hardware include National Instruments [19] and Advantech WebAccess; [20] however, for certain applications the hardware can be customized or ruggedized as needed. Common sensor types for PHM applications include accelerometers, temperature and pressure sensors, rotational-speed measurements using encoders or tachometers, electrical measurements of voltage and current, acoustic emission sensors, load cells for force measurements, and displacement or position measurements. There are numerous sensor vendors for these measurement types, some with specific product lines better suited to condition monitoring and PHM applications.

The data analysis algorithms and pattern recognition technology are now being offered in some commercial software platforms or as part of a packaged software solution. National Instruments currently has a trial version (with a commercial release in the upcoming year) of the Watchdog Agent prognostic toolkit, a collection of data-driven PHM algorithms developed by the Center for Intelligent Maintenance Systems. [21] This collection of over 20 tools allows one to configure and customize the algorithms for signature extraction, anomaly detection, health assessment, failure diagnosis, and failure prediction for a given application as needed. Customized predictive-monitoring commercial solutions using the Watchdog Agent toolkit are offered by Predictronics Corporation, [22] a start-up whose founders were instrumental in the development and application of this PHM technology at the Center for Intelligent Maintenance Systems. Another example is MATLAB and its Predictive Maintenance Toolbox, [23] which provides functions and an interactive app for exploring, extracting, and ranking features using data-based and model-based techniques, including statistical, spectral, and time-series analysis. The toolbox also includes reference examples for motors, gearboxes, batteries, and other machines that can be reused for developing custom predictive maintenance and condition monitoring algorithms. Other commercial software offerings focus on a few tools for anomaly detection and fault diagnosis and are typically offered as a packaged solution rather than a toolkit. One example is Smart Signal's anomaly detection method, based on auto-associative models (similarity-based modeling) that look for changes in the nominal correlation relationships among signals, calculate residuals between expected and actual performance, and then perform hypothesis testing on the residual signals (sequential probability ratio test). [24] Similar analysis methods are offered by Expert Microsystems, which uses a comparable auto-associative kernel method for calculating residuals and has additional modules for diagnosis and prediction. [25]

System-level prognostics

While most prognostics approaches focus on accurate computation of the degradation rate and the remaining useful life (RUL) of individual components, it is the rate at which the performance of subsystems and systems degrades that is of greater interest to the operators and maintenance personnel of these systems. [26]
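Rolling component-level RUL estimates up to the system level depends on the system's reliability structure. As a crude, hedged sketch: in a series (non-redundant) structure the system fails with its first component, so the system RUL is the minimum of the component RULs, while a fully parallel (redundant) structure survives until its last component. The component names and values are hypothetical.

```python
def system_rul(component_ruls, structure="series"):
    """Roll component RUL estimates up to system level under a simple
    reliability-structure assumption: minimum for series (non-redundant),
    maximum for fully parallel (redundant).  Real systems mix both and
    need full reliability block diagrams."""
    if structure == "series":
        return min(component_ruls.values())
    if structure == "parallel":
        return max(component_ruls.values())
    raise ValueError("unknown structure")

# Hypothetical component-level estimates (hours).
ruls = {"pump": 120.0, "bearing": 75.0, "seal": 200.0}
```

For example, `system_rul(ruls)` is governed entirely by the bearing here, which is exactly why operators care about the system-level view: the shortest component RUL, not the average, drives the maintenance decision.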

Notes

  1. Vachtsevanos; Lewis; Roemer; Hess; Wu (2006). Intelligent Fault Diagnosis and Prognosis for Engineering Systems. Wiley. ISBN 978-0-471-72999-0.
  2. Pecht, Michael G. (2008). Prognostics and Health Management of Electronics. Wiley. ISBN   978-0-470-27802-4.
  3. Liu, Jie; Wang, Golnaraghi (2009). "A multi-step predictor with a variable input pattern for system state forecasting". Mechanical Systems and Signal Processing. 23 (5): 1586–1599. Bibcode:2009MSSP...23.1586L. doi:10.1016/j.ymssp.2008.09.006.
  4. Mosallam, A.; Medjaher, K; Zerhouni, N. (2014). "Data-driven prognostic method based on Bayesian approaches for direct remaining useful life prediction" (PDF). Journal of Intelligent Manufacturing. 27 (5): 1037–1048. doi:10.1007/s10845-014-0933-4. S2CID   1978502.
  5. Mosallam, A.; Medjaher, K.; Zerhouni, N. (2015). "Component based data-driven prognostics for complex systems: Methodology and applications". 2015 First International Conference on Reliability Systems Engineering (ICRSE) (PDF). pp. 1–7. doi:10.1109/ICRSE.2015.7366504. ISBN   978-1-4673-8557-2. S2CID   2931906.
  6. Mosallam, A.; Medjaher, K; Zerhouni, N. (2013). "Nonparametric time series modelling for industrial prognostics and health management". The International Journal of Advanced Manufacturing Technology. 69 (5): 1685–1699. doi:10.1007/s00170-013-5065-z. S2CID   3182648.
  7. Yu, Wei Kuo; Harris (2001). "A new stress-based fatigue life model for ball bearings". Tribology Transactions. 44 (1): 11–18. doi:10.1080/10402000108982420. S2CID 110685617.
  8. Paris, P.C.; F. Erdogan (1963). "Closure to "Discussions of 'A Critical Analysis of Crack Propagation Laws'" (1963, ASME J. Basic Eng., 85, pp. 533–534)". Journal of Basic Engineering. 85 (4): 528–534. doi: 10.1115/1.3656903 .
  9. Li, Y.; Kurfess, T.R.; Liang, S.Y. (2000). "Stochastic Prognostics for Rolling Element Bearings". Mechanical Systems and Signal Processing. 14 (5): 747–762. Bibcode:2000MSSP...14..747L. doi:10.1006/mssp.2000.1301. ISSN   0888-3270.
  10. Pecht, Michael; Jaai (2010). "A prognostics and health management roadmap for information and electronics-rich systems". Microelectronics Reliability. 50 (3): 317–323. Bibcode:2010ESSFR...3.4.25P. doi:10.1016/j.microrel.2010.01.006.
  11. Liu, Jie; Wang, Ma; Yang, Yang (2012). "A data-model-fusion prognostic framework for dynamic system state forecasting". Engineering Applications of Artificial Intelligence. 25 (4): 814–823. doi:10.1016/j.engappai.2012.02.015. S2CID   33838225.
  12. researchgate.net
  13. "Prognostics and Health Management for Maintenance Practitioners - Review, Implementation and Tools Evaluation". PHM Society. 2017-12-11. Retrieved 2020-06-13.
  14. Sankararaman, Shankar (2015). "Significance, interpretation, and quantification of uncertainty in prognostics and remaining useful life prediction". Mechanical Systems and Signal Processing. 52–53. Elsevier BV: 228–247. Bibcode:2015MSSP...52..228S. doi:10.1016/j.ymssp.2014.05.029. ISSN   0888-3270.
  15. Sun, Jianzhong; Zuo, Hongfu; Wang, Wenbin; Pecht, Michael G. (2014). "Prognostics uncertainty reduction by fusing on-line monitoring data based on a state-space-based degradation model". Mechanical Systems and Signal Processing. 45 (2). Elsevier BV: 396–407. Bibcode:2014MSSP...45..396S. doi:10.1016/j.ymssp.2013.08.022. ISSN   0888-3270.
  16. Duong, Pham L.T.; Raghavan, Nagarajan (2017). "Uncertainty quantification in prognostics: A data driven polynomial chaos approach". 2017 IEEE International Conference on Prognostics and Health Management (ICPHM). IEEE. pp. 135–142. doi:10.1109/icphm.2017.7998318. ISBN   978-1-5090-5710-8.
  17. Datong Liu; Yue Luo; Yu Peng (2012). "Uncertainty processing in prognostics and health management: An overview". Proceedings of the IEEE 2012 Prognostics and System Health Management Conference (PHM-2012 Beijing). IEEE. pp. 1–6. doi:10.1109/phm.2012.6228860. ISBN   978-1-4577-1911-0.
  18. Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo (2018). "On-line Bayesian model updating for structural health monitoring". Mechanical Systems and Signal Processing. 103. Elsevier BV: 174–195. Bibcode:2018MSSP..103..174R. doi:10.1016/j.ymssp.2017.10.015. ISSN   0888-3270.
  19. National Instruments. "Condition Monitoring".
  20. Advantech. "Webaccess".
  21. National Instruments. "Watchdog Agent® Toolkit".
  22. Predictronics. "Predictronics".
  23. "Predictive Maintenance Toolbox". www.mathworks.com. Retrieved 2019-07-11.
  24. Wegerich, S. (2005). "Similarity-based Modeling of Vibration Features for Fault Detection and Identification". Sensor Review. 25 (2): 114–122. doi:10.1108/02602280510585691.
  25. Clarkson, S.A.; Bickford, R.L. (2013). "Path Classification and Remaining Life Estimation for Systems Having Complex Modes of Failure". MFPT Conference.
  26. Rodrigues, L. R.; Gomes, J. P. P.; Ferri, F. A. S.; Medeiros, I. P.; Galvão, R. K. H.; Júnior, C. L. Nascimento (December 2015). "Use of PHM Information and System Architecture for Optimized Aircraft Maintenance Planning". IEEE Systems Journal. 9 (4): 1197–1207. Bibcode:2015ISysJ...9.1197R. doi:10.1109/jsyst.2014.2343752. ISSN   1932-8184. S2CID   22285080.
