First-order reliability method

The first-order reliability method (FORM) is a semi-probabilistic reliability analysis method devised to evaluate the reliability of a system. Its accuracy can be improved by averaging over many samples, a technique known as line sampling. [1] [2]

The method is also known as the Hasofer-Lind reliability index method, after Professor Abraham Michael Hasofer and Professor Niels Lind, who developed the reliability index in 1974. [3] The index has been recognized as an important step towards the development of contemporary methods to effectively and accurately estimate structural safety. [4] [5]

The method depends on locating the "most probable point" (also called the design point) on the limit-state surface: the point on the failure boundary that lies closest to the origin in standard normal space. [6] The reliability index β is the distance from the origin to this point, and the first-order estimate of the failure probability is Φ(−β), where Φ is the standard normal cumulative distribution function.
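The search for the most probable point is commonly carried out with the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration. The following is a minimal sketch for a hypothetical linear limit state g(x1, x2) = x1 − x2 with independent normal variables; the distributions, tolerances, and helper names are assumptions chosen for illustration, not part of any standard implementation. For this linear case the exact answers are β = 2 and Pf = Φ(−2) ≈ 0.0228.

```python
import math

# Hypothetical example: limit state g(x1, x2) = x1 - x2 (capacity minus demand),
# with independent normals x1 ~ N(5, 0.8^2) and x2 ~ N(3, 0.6^2).
mu = [5.0, 3.0]
sigma = [0.8, 0.6]

def g(x):
    return x[0] - x[1]

def G(u):
    # Limit state mapped to standard normal space via x_i = mu_i + sigma_i * u_i.
    return g([m + s * ui for m, s, ui in zip(mu, sigma, u)])

def grad_G(u, h=1e-6):
    # Central finite-difference gradient of G.
    out = []
    for i in range(len(u)):
        up, um = list(u), list(u)
        up[i] += h
        um[i] -= h
        out.append((G(up) - G(um)) / (2 * h))
    return out

def form_hlrf(n=2, iters=50, tol=1e-8):
    # HL-RF iteration: repeatedly project onto the linearized limit state
    # until the design point u* stops moving.
    u = [0.0] * n
    for _ in range(iters):
        grad = grad_G(u)
        norm2 = sum(gi * gi for gi in grad)
        scale = (sum(gi * ui for gi, ui in zip(grad, u)) - G(u)) / norm2
        u_new = [scale * gi for gi in grad]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            u = u_new
            break
        u = u_new
    beta = math.sqrt(sum(ui * ui for ui in u))        # distance to origin
    pf = 0.5 * (1 + math.erf(-beta / math.sqrt(2)))   # Phi(-beta)
    return beta, pf

beta, pf = form_hlrf()
print(round(beta, 4), round(pf, 5))  # 2.0 0.02275
```

Because the example limit state is linear in normal variables, the iteration converges in a single step and the first-order estimate is exact; for non-linear limit states FORM remains an approximation.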

See also

Related Research Articles

Psychological statistics

Psychological statistics is the application of statistical formulas, theorems, and laws to psychology. Statistical methods for psychology include the development and application of statistical theory and methods for modeling psychological data. These methods include psychometrics, factor analysis, experimental design, and Bayesian statistics.

Psychometrics: theory and technique of psychological measurement

Psychometrics is a field of study within psychology concerned with the theory and technique of measurement. Psychometrics generally refers to specialized fields within psychology and education devoted to testing, measurement, assessment, and related activities. Psychometrics is concerned with the objective measurement of latent constructs that cannot be directly observed. Examples of latent constructs include intelligence, introversion, mental disorders, and educational achievement. The levels of individuals on nonobservable latent variables are inferred through mathematical modeling based on what is observed from individuals' responses to items on tests and scales.

Safety engineering

Safety engineering is an engineering discipline which assures that engineered systems provide acceptable levels of safety. It is strongly related to industrial engineering/systems engineering, and the subset system safety engineering. Safety engineering assures that a life-critical system behaves as needed, even when components fail.

Fault tree analysis: failure analysis method used in safety engineering and reliability engineering

Fault tree analysis (FTA) is a type of failure analysis in which an undesired state of a system is examined. This method is used mainly in safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk, and to determine the rates of accidents or of particular system-level (functional) failures. FTA is used in the aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical, and other high-hazard industries, but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to the cause-elimination technique used to detect bugs.

Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and Multidisciplinary Design Analysis and Optimization (MDAO).

Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system can be divided and allocated to different sources of uncertainty in its inputs. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.

Failure mode and effects analysis is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study.

Human reliability is related to the field of human factors and ergonomics, and refers to the reliability of humans in fields including manufacturing, medicine and nuclear power. Human performance can be affected by many factors such as age, state of mind, physical health, attitude, emotions, propensity for certain common mistakes, errors and cognitive biases, etc.

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability describes the ability of a system or component to function under stated conditions for a specified period of time. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.

Structural equation modeling: form of causal modeling that fits networks of constructs to data

Structural equation modeling (SEM) is a label for a diverse set of methods used by scientists in both experimental and observational research across the sciences, business, and other fields. It is used most in the social and behavioral sciences.

A hazard analysis is the first step in a process used to assess risk. The result of a hazard analysis is the identification of different types of hazards. A hazard is a potential condition that may or may not be present; alone or in combination with other hazards and conditions, it may become an actual functional failure or accident (mishap). The particular sequence in which this happens is called a scenario, and each scenario has a probability of occurrence. A system often has many potential failure scenarios. Each hazard is also assigned a classification based on the worst-case severity of the end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis. The validation, more precise prediction (verification), and acceptance of risk are determined in the risk assessment. The main goal of both is to provide the best selection of means of controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, chemical process safety, safety engineering, reliability engineering, and food safety.

Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.

The Life Quality Index (LQI) is a calibrated compound social indicator of human welfare that reflects the expected length of life in good health and enhancement of the quality of life through access to income. The Life Quality Index combines two primary social indicators: the expectancy of healthy life at birth, E, and the real gross domestic product per person, G, corrected for purchasing power parity as appropriate. Both are widely available and accurate statistics.

Henrik Overgaard Madsen is a businessperson and engineer, a member of the board of Aker Solutions, and was chief executive officer of DNV GL between 2006 and 2015.

Probability box

A probability box is a characterization of uncertain numbers consisting of both aleatoric and epistemic uncertainties that is often used in risk analysis or quantitative uncertainty modeling where numerical calculations must be performed. Probability bounds analysis is used to make arithmetic and logical calculations with p-boxes.

Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions.

P-boxes and probability bounds analysis have been used in many applications spanning many disciplines in engineering and environmental science.

Roy Billinton

Roy Billinton is a Canadian scholar and a Distinguished Emeritus Professor at the University of Saskatchewan, Saskatoon, Saskatchewan, Canada. In 2008, Billinton won the IEEE Canada Electric Power Medal for his research and application of reliability concepts in electric power systems. In 2007, Billinton was elected a Foreign Associate of the United States National Academy of Engineering for "contributions to teaching, research and application of reliability engineering in electric power generation, transmission, and distribution systems."

Line sampling is a method used in reliability engineering to compute small failure probabilities encountered in engineering systems. The method is particularly suitable for high-dimensional reliability problems in which the performance function exhibits moderate non-linearity with respect to the uncertain parameters. The method is suitable for analyzing black-box systems and, unlike the importance sampling method of variance reduction, does not require detailed knowledge of the system.
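The idea can be sketched as follows, with the limit state, important direction, and sample count below chosen purely as illustrative assumptions: each random sample is projected onto the hyperplane orthogonal to an "important direction" pointing toward the failure region, the distance c to the failure boundary along that direction is found by root-finding, and the per-line probabilities Φ(−c) are averaged.

```python
import math
import random

# Hypothetical linear limit state in standard normal space:
# G(u) = 2 + 0.8*u1 - 0.6*u2, with failure defined by G(u) < 0.
def G(u):
    return 2.0 + 0.8 * u[0] - 0.6 * u[1]

def Phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Unit "important direction" pointing toward the failure region
# (here the negative gradient of G, which has unit length).
alpha = [-0.8, 0.6]

def line_sampling(n_lines=200, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_lines):
        z = [rng.gauss(0, 1), rng.gauss(0, 1)]
        # Project the sample onto the hyperplane orthogonal to alpha.
        dot = sum(zi * ai for zi, ai in zip(z, alpha))
        u_perp = [zi - dot * ai for zi, ai in zip(z, alpha)]
        # Bisection for the distance c along alpha where G crosses zero.
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            u = [up + mid * ai for up, ai in zip(u_perp, alpha)]
            if G(u) > 0:
                lo = mid
            else:
                hi = mid
        total += Phi(-0.5 * (lo + hi))  # partial failure probability of this line
    return total / n_lines

print(round(line_sampling(), 5))  # ≈ 0.02275
```

For a linear limit state every line crosses the failure boundary at the same distance, so the estimator has zero variance and reproduces Φ(−β) exactly; the variance-reduction benefit appears for moderately non-linear limit states.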

Abraham Michael Hasofer (1927-2010) was an Australian statistician. Professor Hasofer held the Chair of Statistics within the Mathematics Department at the University of New South Wales in Sydney from 1969 to 1991. He subsequently held a position at La Trobe University in Melbourne. He authored a number of publications in the fields of applied mathematics and civil engineering, including his formulation of the Hasofer-Lind reliability index.

References

  1. Verderaime, V. (1994) "Illustrated Structural Application of Universal First-Order Reliability Method", NASA Technical Paper 3501.
  2. Cizelj, L.; Mavko, B.; Riesch-Oppermann, H. (1994) "Application of first and second order reliability methods in the safety assessment of cracked steam generator tubing", Nuclear Engineering and Design, 147.
  3. Huang, Jinsong, and D. V. Griffiths. "Observations on FORM in a simple geomechanics example." Structural Safety 33, no. 1 (2011): 115-119.
  4. Dudzik, A., and U. Radoń. "The reliability assessment for steel industrial building." Advances in Mechanics: Theorectical, Computational and Interdisciplinary Issues (2016): 163-166.
  5. Choi, Chan Kyu, and Hong Hee Yoo. "Uncertainty analysis of nonlinear systems employing the first-order reliability method." Journal of Mechanical Science and Technology 26, no. 1 (2012): 39-44.
  6. C. Annis, "How FORM/SORM is Supposed to Work".