Probabilistic design

Statistical interference of the distributions of applied load and material strength. The applied load and fracture stress are assumed to be normally distributed, and the failure probability is the overlap shaded in grey.

Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability on the performance of an engineering system during the design phase. Typically, the effects studied and optimized are those related to quality and reliability. It differs from the classical approach to design in that it assumes a small probability of failure rather than relying on a safety factor. [2] [3] Probabilistic design is used in a variety of applications to assess the likelihood of failure. Disciplines that make extensive use of probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (where it is particularly useful in limit state design) and manufacturing.


Objective and motivations

When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system [4] .
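As a minimal illustration of this viewpoint, the Python sketch below (all numbers are hypothetical) treats an applied force and a cross-sectional area as normal random variables and propagates them through the elementary relation stress = force / area by sampling, so that the output is itself a distribution rather than a single value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative input distributions (values are hypothetical):
# axial force F in newtons and cross-sectional area A in mm^2,
# each modeled as a normal random variable rather than a single number.
F = rng.normal(loc=10_000.0, scale=500.0, size=n)   # mean 10 kN, sd 0.5 kN
A = rng.normal(loc=100.0, scale=2.0, size=n)        # mean 100 mm^2, sd 2 mm^2

# The deterministic relationship stress = F / A now maps two input
# distributions to an output distribution ("flow of variability").
stress = F / A   # MPa

print(f"mean stress   = {stress.mean():.1f} MPa")
print(f"std of stress = {stress.std():.1f} MPa")
```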

Because there are so many sources of random and systematic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. With such a model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages, and at a much reduced cost [4] [5].

Typically, the goal of probabilistic design is to identify the design that exhibits the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits the influence of uncontrollable factors while also allowing a much more precise determination of the failure probability. The chosen design could be the one option, out of several, found to be the most robust. Alternatively, it could be the only design option available, operated with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma [4].

Sources of variability

Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships [6] .

The first source of variability is statistical, and arises from the limitation of having only a finite sample size with which to estimate parameters such as yield stress, Young's modulus, and true strain [7]. Of the three sources, this measurement uncertainty is the most easily minimized, since the variance of an estimate is inversely proportional to the sample size.

We can represent the variance due to measurement uncertainty with a corrective factor $B$, which is multiplied by the true mean $\mu$ to yield the measured mean $\bar{x}$. Equivalently, $\bar{x} = B\mu$.

This yields the result $B = \bar{x}/\mu$, and the variance of the corrective factor is given as:

$$\operatorname{Var}(B) = \frac{\operatorname{Var}(\bar{x})}{\mu^{2}} = \frac{\sigma^{2}}{n\mu^{2}}$$

where $B$ is the correction factor, $\mu$ is the true mean, $\bar{x}$ is the measured mean, $\sigma^{2}$ is the variance of a single measurement, and $n$ is the number of measurements made [6].
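The following Python sketch (with hypothetical values for the true mean and measurement scatter) illustrates the relation reconstructed above: it repeatedly forms the correction factor $B$ from $n$ measurements and shows that the empirical variance of $B$ shrinks roughly as $\sigma^{2}/(n\mu^{2})$.

```python
import numpy as np

rng = np.random.default_rng(1)

mu_true = 250.0      # hypothetical "true" yield stress, MPa
sigma = 10.0         # hypothetical measurement scatter, MPa

def correction_factor(n_measurements: int) -> float:
    """One realisation of B = measured mean / true mean from n measurements."""
    samples = rng.normal(mu_true, sigma, size=n_measurements)
    return samples.mean() / mu_true

# Empirical variance of B over many repeated experiments, for two sample sizes:
for n in (5, 50):
    bs = np.array([correction_factor(n) for _ in range(20_000)])
    predicted = sigma**2 / (n * mu_true**2)
    print(f"n = {n:3d}:  Var(B) ≈ {bs.var():.2e}   (sigma^2/(n*mu^2) = {predicted:.2e})")
```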

The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models used to understand loading and its associated effects in materials. The uncertainty introduced by the model of a physical measurable can be determined when both theoretical values according to the model and experimental results are available.

The measured value $x_{\text{measured}}$ is equivalent to the theoretical model prediction $x_{\text{model}}$ multiplied by a model error $B_{\text{model}}$, plus the experimental error $\varepsilon$ [8]. Equivalently,

$$x_{\text{measured}} = B_{\text{model}}\, x_{\text{model}} + \varepsilon$$

and the model error takes the general form:

$$B_{\text{model}} = b_{0} + b_{1}X_{1} + b_{2}X_{2} + \cdots + b_{k}X_{k}$$

where $b_{0}, b_{1}, \ldots, b_{k}$ are coefficients of regression determined from experimental data and $X_{1}, \ldots, X_{k}$ are the relevant physical variables [8].
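A minimal sketch of this idea, assuming the regression form reconstructed above and using synthetic data in place of real experiments, is given below; the coefficients b0 and b1 are recovered by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "experimental" data set (all values hypothetical): theoretical
# model predictions, measured values, and one physical variable X1 (e.g. a
# slenderness ratio) suspected of explaining part of the model error.
x_model = rng.uniform(100.0, 300.0, size=200)
X1 = rng.uniform(0.5, 2.0, size=200)
x_measured = (0.95 + 0.04 * X1) * x_model + rng.normal(0.0, 5.0, size=200)

# Observed model-error ratio B ≈ x_measured / x_model for each test, then
# fit the regression B = b0 + b1 * X1 by ordinary least squares.
B = x_measured / x_model
design = np.column_stack([np.ones_like(X1), X1])
(b0, b1), *_ = np.linalg.lstsq(design, B, rcond=None)

print(f"estimated b0 = {b0:.3f}, b1 = {b1:.3f}")
```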

Finally, the last source of variability is the intrinsic variability of any physical measurable. There is a fundamental random uncertainty associated with all physical phenomena, and this variability is comparatively the most difficult to minimize. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and an associated variability.

Comparison to classical design principles

Consider the classical approach to tensile testing of materials. The stress experienced by a material is given as a single value, namely the applied force divided by the cross-sectional area perpendicular to the loading axis. The yield stress, the stress at which the material begins to deform plastically, is likewise given as a single value. Under this approach, there is a 0% chance of material failure below the yield stress and a 100% chance of failure above it. However, these assumptions break down in practice.

The classical stress-strain model for a metal. The material is presumed to fail if the stress exceeds the yield stress.

The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty, and therefore a probability distribution, associated with the known value [6] [8]. Let the probability density function of the yield strength be given as $f_Y(x)$.

Similarly, the applied or predicted load can also only be known to a certain precision, so the stress that the material will actually undergo is uncertain as well. Let this probability density function be given as $f_L(x)$.

The probability of failure corresponds to the region where these two distributions interfere (the shaded overlap in the figure above); mathematically:

$$P_f = P(Y < L) = \int_{-\infty}^{\infty} F_Y(x)\, f_L(x)\, dx$$

where $F_Y$ is the cumulative distribution function of the yield strength. Equivalently, if we let the difference between the yield stress and the applied load equal a third random variable $Z = Y - L$, and assume both distributions are normal and independent (as in the figure above), then:

$$P_f = P(Z < 0) = \Phi\!\left(-\frac{\mu_Z}{\sigma_Z}\right), \qquad \mu_Z = \mu_Y - \mu_L,$$

where $\Phi$ is the standard normal cumulative distribution function and the variance of the difference is given by $\sigma_Z^{2} = \sigma_Y^{2} + \sigma_L^{2}$.
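The sketch below evaluates this stress-strength calculation numerically (Python with NumPy and SciPy; the distribution parameters are purely illustrative), once from the closed-form expression for independent normal variables and once from the interference integral directly.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical normal distributions (illustrative values only):
mu_Y, sigma_Y = 250.0, 20.0   # yield strength, MPa
mu_L, sigma_L = 180.0, 25.0   # applied stress, MPa

# Closed form for independent normal variables, Z = Y - L:
mu_Z = mu_Y - mu_L
sigma_Z = np.hypot(sigma_Y, sigma_L)            # sqrt(sigma_Y^2 + sigma_L^2)
p_fail_closed = norm.cdf(-mu_Z / sigma_Z)

# The same probability from the interference integral  P_f = ∫ F_Y(x) f_L(x) dx:
x = np.linspace(0.0, 500.0, 20_001)
dx = x[1] - x[0]
p_fail_integral = np.sum(norm.cdf(x, mu_Y, sigma_Y) * norm.pdf(x, mu_L, sigma_L)) * dx

print(f"P_f (closed form)           = {p_fail_closed:.4e}")
print(f"P_f (numerical integration) = {p_fail_integral:.4e}")
```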

Probabilistic design principles thus allow a precise determination of the failure probability, whereas the classical model assumes that no failure whatsoever occurs before the yield strength is reached [9]. The classical applied-load versus yield-stress model clearly has limitations, so modeling these variables with probability distributions and calculating the resulting failure probability is a more precise approach. The probabilistic design approach allows material failure to be assessed under all loading conditions, associating a quantitative probability with failure in place of a definitive yes or no.

Methods used to determine variability

Finite element analysis (pictured here, of a dogbone specimen under uniaxial stress) is a primary method used to provide theoretical values for stress and failure in probabilistic design.

In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to predict and calculate the variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include the probabilistic finite element method (pictured above).


Additionally, many statistical methods are used to quantify and predict the random variability in the desired measurable and, in turn, the random variability of an output.
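One widely used example is the first-order second-moment (FOSM) approximation, sketched below with hypothetical inputs: the output is linearized about the input means, so its variance follows from the input variances and the partial derivatives of the relationship.

```python
import numpy as np

def fosm(g, means, sds, step_frac=1e-6):
    """First-order second-moment estimate of the mean and standard deviation
    of g(x1, ..., xk) for independent inputs with given means and std devs."""
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    g0 = g(*means)
    var = 0.0
    for i in range(means.size):
        step = np.zeros_like(means)
        step[i] = step_frac * max(abs(means[i]), 1.0)
        dg_dxi = (g(*(means + step)) - g(*(means - step))) / (2.0 * step[i])
        var += (dg_dxi * sds[i]) ** 2
    return g0, np.sqrt(var)

# Hypothetical example: bending stress of a rectangular beam,
# sigma = 6*M / (b*h^2), with uncertain moment and cross-section dimensions.
stress = lambda M, b, h: 6.0 * M / (b * h**2)
mean_out, sd_out = fosm(stress, means=[2.0e6, 50.0, 100.0], sds=[2.0e5, 1.0, 2.0])
print(f"stress ≈ {mean_out:.1f} ± {sd_out:.1f} N/mm^2 (first-order estimate)")
```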

See also

Footnotes

  1. Sundarth, S.; Woeste, Frank E.; Galligan, William (1978). Differential Reliability: Probabilistic Engineering Applied to Wood Members in Bending-Tension (PDF). Res. Pap. FPL-RP-302. US Forest Products Laboratory. Retrieved 21 January 2015.
  2. Sundararajan, S. (1995). Probabilistic Structural Mechanics Handbook. Springer. ISBN 978-0412054815.
  3. Long, M. W.; Narcico, J. D. (June 1999). Design Methodology for Composite Aircraft Structures. DOT/FAA/AR-99/2. FAA. Archived from the original on 3 March 2016. Retrieved 24 January 2015.
  4. Ang, Alfredo H-S.; Tang, Wilson H. (2006). Probability Concepts in Engineering: Emphasis on Applications to Civil and Environmental Engineering (2nd ed.). John Wiley & Sons. ISBN 978-0471720645.
  5. Doorn, Neelke; Hansson, Sven Ove (2011). "Should Probabilistic Design Replace Safety Factors?". Philosophy & Technology. 24 (2): 151–168. doi:10.1007/s13347-010-0003-6. ISSN 2210-5441.
  6. Soares, C. Guedes (1997). "Quantification of Model Uncertainty in Structural Reliability". In Soares, C. Guedes (ed.), Probabilistic Methods for Structural Design. Solid Mechanics and Its Applications. Dordrecht: Springer Netherlands. pp. 17–37. doi:10.1007/978-94-011-5614-1_2. ISBN 978-94-011-5614-1.
  7. Soares, C. Guedes, ed. (1997). Probabilistic Methods for Structural Design. Solid Mechanics and Its Applications. doi:10.1007/978-94-011-5614-1. ISSN 0925-0042.
  8. Ditlevsen, Ove (1982). "Model uncertainty in structural reliability". Structural Safety. 1 (1): 73–86. doi:10.1016/0167-4730(82)90016-9. ISSN 0167-4730.
  9. Haugen, Edward B. (1980). Probabilistic Mechanical Design. New York: Wiley. ISBN 978-0-471-05847-2.
  10. Benaroya, H.; Rehak, M. (1988). "Finite Element Methods in Probabilistic Structural Analysis: A Selective Review". Applied Mechanics Reviews. 41 (5): 201–213.
  11. Liu, W. K.; Belytschko, T.; Lua, Y. J. (1995). "Probabilistic Finite Element Method". In Sundararajan, C. (ed.), Probabilistic Structural Mechanics Handbook: Theory and Industrial Applications. Boston, MA: Springer US. pp. 70–105. doi:10.1007/978-1-4615-1771-9_5. ISBN 978-1-4615-1771-9.
  12. Kong, Depeng; Lu, Shouxiang; Frantzich, Hakan; Lo, S. M. (2013). "A method for linking safety factor to the target probability of failure in fire safety engineering". Journal of Civil Engineering and Management. 19 (S1): S212–S212. doi:10.3846/13923730.2013.802718.
