First-order second-moment method

In probability theory, the first-order second-moment (FOSM) method, also referred to as the mean value first-order second-moment (MVFOSM) method, is a probabilistic method to determine the stochastic moments of a function with random input variables. The name is based on the derivation, which uses a first-order Taylor series and the first and second moments of the input variables. [1]

Approximation

Consider the objective function $g(x)$, where the input vector $x$ is a realization of the random vector $X$ with probability density function $f_X(x)$. Because $X$ is randomly distributed, $g(x)$ is also randomly distributed. Following the FOSM method, the mean value of $g$ is approximated by

$$\mu_g \approx g(\mu)$$

The variance of $g$ is approximated by

$$\sigma_g^2 \approx \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial g(\mu)}{\partial x_i}\, \frac{\partial g(\mu)}{\partial x_j}\, \operatorname{cov}(X_i, X_j)$$

where $n$ is the length/dimension of $x$ and $\frac{\partial g(\mu)}{\partial x_i}$ is the partial derivative of $g$ at the mean vector $\mu$ with respect to the $i$-th entry of $x$. More accurate, second-order second-moment approximations are also available. [2]
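As a concrete illustration, the following Python sketch evaluates these first-order approximations for a small test problem. It is a minimal example, not a reference implementation; the quadratic function `g`, its gradient, and the input statistics are assumed here purely for demonstration.

```python
import numpy as np

def fosm(g, grad_g, mu, cov):
    """First-order second-moment approximation of the mean and variance of g(X).

    g      : callable, objective function g(x)
    grad_g : callable, gradient of g at a point
    mu     : mean vector of the random input vector X
    cov    : covariance matrix of X
    """
    mean_g = g(mu)             # mu_g ≈ g(mu)
    grad = grad_g(mu)          # partial derivatives at the mean vector
    var_g = grad @ cov @ grad  # sum_i sum_j dg/dx_i dg/dx_j cov(X_i, X_j)
    return mean_g, var_g

# Hypothetical test function g(x) = x1*x2 + x1^2 with assumed input statistics.
g = lambda x: x[0] * x[1] + x[0] ** 2
grad_g = lambda x: np.array([x[1] + 2 * x[0], x[0]])
mu = np.array([2.0, 3.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

mean_g, var_g = fosm(g, grad_g, mu, cov)
print(mean_g, var_g)  # approximate mean and variance of g(X)
```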

Derivation

The objective function is approximated by a Taylor series at the mean vector $\mu$:

$$g(x) \approx g(\mu) + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial^2 g(\mu)}{\partial x_i\,\partial x_j}(x_i - \mu_i)(x_j - \mu_j) + \cdots$$

The mean value of $g$ is given by the integral

$$\mu_g = E[g(x)] = \int_{-\infty}^{\infty} g(x)\, f_X(x)\, dx$$

Inserting the first-order Taylor series yields

$$\mu_g \approx \int_{-\infty}^{\infty} \left[ g(\mu) + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) \right] f_X(x)\, dx = g(\mu)\underbrace{\int_{-\infty}^{\infty} f_X(x)\, dx}_{=1} + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}\underbrace{\int_{-\infty}^{\infty} (x_i - \mu_i)\, f_X(x)\, dx}_{=0} = g(\mu)$$

The variance of $g$ is given by the integral

$$\sigma_g^2 = E\left[(g(x) - \mu_g)^2\right] = \int_{-\infty}^{\infty} (g(x) - \mu_g)^2\, f_X(x)\, dx$$

According to the computational formula for the variance, this can be written as

$$\sigma_g^2 = E\left[g(x)^2\right] - \mu_g^2 = \int_{-\infty}^{\infty} g(x)^2\, f_X(x)\, dx - \mu_g^2$$

Inserting the first-order Taylor series and the approximation $\mu_g \approx g(\mu)$ yields

$$\sigma_g^2 \approx \int_{-\infty}^{\infty} \left[ g(\mu) + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) \right]^2 f_X(x)\, dx - g(\mu)^2 = \sum_{i=1}^{n}\sum_{j=1}^{n} \frac{\partial g(\mu)}{\partial x_i}\, \frac{\partial g(\mu)}{\partial x_j}\, \operatorname{cov}(X_i, X_j)$$
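To make the derivation tangible, a small numerical check (with hypothetical input data and jointly normal inputs assumed for sampling) compares the first-order approximations with a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same hypothetical test function and input statistics as above.
g = lambda x: x[..., 0] * x[..., 1] + x[..., 0] ** 2
mu = np.array([2.0, 3.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# FOSM (first-order) approximations
grad = np.array([mu[1] + 2 * mu[0], mu[0]])
mu_g_fosm = g(mu)
var_g_fosm = grad @ cov @ grad

# Monte Carlo reference, assuming jointly normal inputs
samples = rng.multivariate_normal(mu, cov, size=200_000)
values = g(samples)
print(mu_g_fosm, values.mean())  # MC mean slightly exceeds g(mu): g is nonlinear
print(var_g_fosm, values.var())
```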

Higher-order approaches

The following abbreviations are introduced:

$$g_{,i} = \frac{\partial g(\mu)}{\partial x_i}, \qquad g_{,ij} = \frac{\partial^2 g(\mu)}{\partial x_i\,\partial x_j}, \qquad \mu_{i,(k)} = E\left[(X_i - \mu_i)^k\right]$$

In the following, the entries of the random vector $X$ are assumed to be independent. Considering also the second-order terms of the Taylor expansion, the approximation of the mean value is given by

$$\mu_g \approx g(\mu) + \frac{1}{2}\sum_{i=1}^{n} g_{,ii}\, \mu_{i,(2)}$$

The second-order approximation of the variance is given by

$$\sigma_g^2 \approx \sum_{i=1}^{n} g_{,i}^2\, \mu_{i,(2)} + \sum_{i=1}^{n} g_{,i}\, g_{,ii}\, \mu_{i,(3)} + \frac{1}{4}\sum_{i=1}^{n} g_{,ii}^2 \left( \mu_{i,(4)} - \mu_{i,(2)}^2 \right) + \frac{1}{2}\sum_{i=1}^{n} \sum_{\substack{j=1 \\ j\neq i}}^{n} g_{,ij}^2\, \mu_{i,(2)}\, \mu_{j,(2)}$$

The skewness of $g$ can be determined from the third central moment $E\left[(g - \mu_g)^3\right]$. When considering only linear terms of the Taylor series, but higher-order moments, the third central moment is approximated by

$$E\left[(g - \mu_g)^3\right] \approx \sum_{i=1}^{n} g_{,i}^3\, \mu_{i,(3)}$$

For the second-order approximations of the third central moment, as well as for the derivation of all higher-order approximations, see Appendix D of Ref. [3] Taking into account the quadratic terms of the Taylor series and the third moments of the input variables is referred to as the second-order third-moment method. [4] However, the full second-order approach to the variance (given above) also includes fourth-order moments of the input parameters, [5] the full second-order approach to the skewness includes moments up to sixth order, [3] [6] and the full second-order approach to the kurtosis includes moments up to eighth order. [6]
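The second-order formulas above translate directly into code. The sketch below is an illustrative implementation under the stated independence assumption; the test function, its derivatives, and the input moments (here those of normal variables) are assumptions of this example.

```python
import numpy as np

def second_order_moments(g_mu, grad, hess, mu2, mu3, mu4):
    """Second-order mean/variance and first-order third central moment of g(X)
    for independent input variables.

    g_mu : g evaluated at the mean vector
    grad : first derivatives g_,i at the mean vector
    hess : second derivatives g_,ij at the mean vector
    mu2, mu3, mu4 : central moments E[(X_i - mu_i)^k] for k = 2, 3, 4
    """
    d = np.diag(hess)                                # g_,ii
    mean_g = g_mu + 0.5 * np.sum(d * mu2)
    var_g = (np.sum(grad**2 * mu2)
             + np.sum(grad * d * mu3)
             + 0.25 * np.sum(d**2 * (mu4 - mu2**2))
             + 0.5 * (np.sum(np.outer(mu2, mu2) * hess**2)
                      - np.sum(mu2**2 * d**2)))      # mixed terms with i != j
    third_central = np.sum(grad**3 * mu3)            # linear Taylor terms only
    return mean_g, var_g, third_central

# Hypothetical example: g(x) = x1*x2 + x1^2 with independent normal inputs.
mu = np.array([2.0, 3.0])
sigma2 = np.array([0.04, 0.09])
g_mu = mu[0] * mu[1] + mu[0]**2
grad = np.array([mu[1] + 2 * mu[0], mu[0]])
hess = np.array([[2.0, 1.0],
                 [1.0, 0.0]])
mu3 = np.zeros(2)        # normal inputs: zero third central moment
mu4 = 3 * sigma2**2      # normal inputs: mu_(4) = 3 sigma^4

print(second_order_moments(g_mu, grad, hess, sigma2, mu3, mu4))
```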

Practical application

There are several examples in the literature where the FOSM method is employed to estimate the stochastic distribution of the buckling load of axially compressed structures (see e.g. Refs. [7] [8] [9] [10]). For structures which are very sensitive to deviations from the ideal structure (like cylindrical shells), it has been proposed to use the FOSM method as a design approach. Often the applicability is checked by comparison with a Monte Carlo simulation. Two comprehensive application examples of the full second-order method, specifically oriented towards fatigue crack growth in a metal railway axle, are discussed and checked by comparison with a Monte Carlo simulation in Refs. [5] [6]

In engineering practice, the objective function is often not given as an analytic expression, but for instance as the result of a finite-element simulation. Then the derivatives of the objective function need to be estimated by the central differences method. The number of evaluations of the objective function then equals $2n + 1$. Depending on the number of random variables, this can still mean a significantly smaller number of evaluations than performing a Monte Carlo simulation. However, when using the FOSM method as a design procedure, a lower bound has to be estimated, which is actually not given by the FOSM approach. Therefore, a type of distribution needs to be assumed for the distribution of the objective function, taking into account the approximated mean value and standard deviation.
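When the objective function is only available as a black box, this workflow can be sketched as follows. The hypothetical `run_simulation` stands in for an expensive model (e.g. a finite-element analysis), and a normal distribution of the output is assumed here purely for illustration of the lower-bound estimate.

```python
import numpy as np
from scipy.stats import norm

def fosm_central_differences(g, mu, cov, h=1e-3):
    """FOSM with a central-difference gradient: 2n + 1 evaluations of g."""
    n = len(mu)
    mean_g = g(mu)                   # 1 evaluation at the mean vector
    grad = np.zeros(n)
    for i in range(n):
        step = np.zeros(n)
        step[i] = h
        grad[i] = (g(mu + step) - g(mu - step)) / (2 * h)  # 2 evaluations per variable
    var_g = grad @ cov @ grad
    return mean_g, var_g

# Hypothetical stand-in for e.g. a finite-element buckling analysis.
def run_simulation(x):
    return x[0] * x[1] + x[0] ** 2

mu = np.array([2.0, 3.0])
cov = np.diag([0.04, 0.09])
mean_g, var_g = fosm_central_differences(run_simulation, mu, cov)

# Design value: assuming (for illustration) a normal distribution of g,
# a lower bound at the 99% confidence level would be estimated as
lower_bound = norm.ppf(0.01, loc=mean_g, scale=np.sqrt(var_g))
print(mean_g, var_g, lower_bound)
```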

Related Research Articles

<span class="mw-page-title-main">Feynman diagram</span> Pictorial representation of the behavior of subatomic particles

In theoretical physics, a Feynman diagram is a pictorial representation of the mathematical expressions describing the behavior and interaction of subatomic particles. The scheme is named after American physicist Richard Feynman, who introduced the diagrams in 1948. The interaction of subatomic particles can be complex and difficult to understand; Feynman diagrams give a simple visualization of what would otherwise be an arcane and abstract formula. According to David Kaiser, "Since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations. Feynman diagrams have revolutionized nearly every aspect of theoretical physics." While the diagrams are applied primarily to quantum field theory, they can also be used in other areas of physics, such as solid-state theory. Frank Wilczek wrote that the calculations that won him the 2004 Nobel Prize in Physics "would have been literally unthinkable without Feynman diagrams, as would [Wilczek's] calculations that established a route to production and observation of the Higgs particle."

<span class="mw-page-title-main">Variance</span> Statistical measure of how far values spread from their average

In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation (SD) is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by , , , , or .

<span class="mw-page-title-main">Central limit theorem</span> Fundamental theorem in probability theory and statistics

In probability theory, the central limit theorem (CLT) states that, under appropriate conditions, the distribution of a normalized version of the sample mean converges to a standard normal distribution. This holds even if the original variables themselves are not normally distributed. There are several versions of the CLT, each applying in the context of different conditions.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

<span class="mw-page-title-main">Beta distribution</span> Probability distribution

In probability theory and statistics, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1] or in terms of two positive parameters, denoted by alpha (α) and beta (β), that appear as exponents of the variable and its complement to 1, respectively, and control the shape of the distribution.

<span class="mw-page-title-main">Jensen's inequality</span> Theorem of convex functions

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation.

In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis.

In mathematics, separation of variables is any of several methods for solving ordinary and partial differential equations, in which algebra allows one to rewrite an equation so that each of two variables occurs on a different side of the equation.

In mathematics, a Dirichlet series is any series of the form where s is complex, and is a complex sequence. It is a special case of general Dirichlet series.

<span class="mw-page-title-main">Cramér–Rao bound</span> Lower bound on variance of an estimator

In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic parameter. The result is named in honor of Harald Cramér and C. R. Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It is also known as Fréchet-Cramér–Rao or Fréchet-Darmois-Cramér-Rao lower bound. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance.

In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information.

Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods — and its applications to portfolio choice theory. The theorem gives a formula for the covariance of one random variable with the value of a function of another, when the two random variables are jointly normally distributed.

In image processing, computer vision and related fields, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation.

In probability theory, calculation of the sum of normally distributed random variables is an instance of the arithmetic of random variables.

Differential entropy is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.

In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers R0, but in distribution functions.

In mathematics, the secondary measure associated with a measure of positive density ρ when there is one, is a measure of positive density μ, turning the secondary polynomials associated with the orthogonal polynomials for ρ into an orthogonal system.

<span class="mw-page-title-main">Gauss–Hermite quadrature</span> Form of Gaussian quadrature

In numerical analysis, Gauss–Hermite quadrature is a form of Gaussian quadrature for approximating the value of integrals of the following kind:

In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point, in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals.

In stochastic processes, Kramers–Moyal expansion refers to a Taylor series expansion of the master equation, named after Hans Kramers and José Enrique Moyal. In many textbooks, the expansion is used only to derive the Fokker–Planck equation, and never used again. In general, continuous stochastic processes are essentially all Markovian, and so Fokker–Planck equations are sufficient for studying them. The higher-order Kramers–Moyal expansion only come into play when the process is jumpy. This usually means it is a Poisson-like process.

References

  1. A. Haldar and S. Mahadevan, Probability, Reliability, and Statistical Methods in Engineering Design. John Wiley & Sons New York/Chichester, UK, 2000.
  2. Crespo, L. G.; Kenny, S. P. (2005). "A first and second order moment approach to probabilistic control synthesis". AIAA Guidance, Navigation and Control Conference. hdl:2060/20050232742.
  3. B. Kriegesmann, "Probabilistic Design of Thin-Walled Fiber Composite Structures", Mitteilungen des Instituts für Statik und Dynamik der Leibniz Universität Hannover 15/2012, ISSN 1862-4650, Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Germany, 2012, PDF; 10.2 MB.
  4. Y. J. Hong, J. Xing, and J. B. Wang, "A Second-Order Third-Moment Method for Calculating the Reliability of Fatigue", Int. J. Press. Vessels Pip., 76 (8), pp 567–570, 1999.
  5. Mallor C, Calvo S, Núñez JL, Rodríguez-Barrachina R, Landaberea A. "Full second-order approach for expected value and variance prediction of probabilistic fatigue crack growth life." International Journal of Fatigue 2020;133:105454. https://doi.org/10.1016/j.ijfatigue.2019.105454.
  6. Mallor C, Calvo S, Núñez JL, Rodríguez-Barrachina R, Landaberea A. "Uncertainty propagation using the full second-order approach for probabilistic fatigue crack growth life." International Journal of Numerical Methods for Calculation and Design in Engineering (RIMNI) 2020:11. https://doi.org/10.23967/j.rimni.2020.07.004.
  7. I. Elishakoff, S. van Manen, P. G. Vermeulen, and J. Arbocz, "First-Order Second-Moment Analysis of the Buckling of Shells with Random Imperfections", AIAA J., 25 (8), pp 1113–1117, 1987.
  8. I. Elishakoff, "Uncertain Buckling: Its Past, Present and Future", Int. J. Solids Struct., 37 (46–47), pp 6869–6889, Nov. 2000.
  9. J. Arbocz and M. W. Hilburger, "Toward a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells", AIAA J., 43 (8), pp 1823–1827, 2005.
  10. B. Kriegesmann, R. Rolfes, C. Hühne, and A. Kling, "Fast Probabilistic Design Procedure for Axially Compressed Composite Cylinders", Compos. Struct., 93, pp 3140–3149, 2011.