Subset simulation

Subset simulation [1] is a method used in reliability engineering to compute small (i.e., rare event) failure probabilities encountered in engineering systems. The basic idea is to express a small failure probability as a product of larger conditional probabilities by introducing intermediate failure events. This conceptually converts the original rare event problem into a series of frequent event problems that are easier to solve. In the actual implementation, samples conditional on intermediate failure events are adaptively generated so that they gradually populate the sample space from the frequent-event region towards the rare-event region. These 'conditional samples' provide information for estimating the complementary cumulative distribution function (CCDF) of the quantity of interest (which governs failure), covering the high as well as the low probability regions. They can also be used for investigating the cause and consequence of failure events. The generation of conditional samples is not trivial but can be performed efficiently using Markov chain Monte Carlo (MCMC).

Subset simulation takes the relationship between the (input) random variables and the (output) response quantity of interest as a 'black box'. This can be attractive for complex systems where it is difficult to use other variance reduction or rare event sampling techniques that require prior information about the system behaviour. For problems where it is possible to incorporate prior information into the reliability algorithm, it is often more efficient to use other variance reduction techniques such as importance sampling. It has been shown that subset simulation is more efficient than traditional Monte Carlo simulation, but less efficient than line sampling, when applied to a fracture mechanics test problem. [2]

Basic idea

Let X be a vector of random variables and Y = h(X) be a scalar (output) response quantity of interest, and let failure be defined as the event F = {Y > b} for a given threshold b, so that the failure probability is P(F) = P(Y > b). Each evaluation of h(·) is assumed to be expensive, so the number of evaluations should be kept as small as possible. Using direct Monte Carlo methods one can generate i.i.d. (independent and identically distributed) samples of X and then estimate P(F) simply as the fraction of samples with Y > b. However, this is not efficient when P(F) is small, because most samples will not fail (i.e., they have Y ≤ b) and in many cases an estimate of 0 results. As a rule of thumb, for small P(F) one requires about 10 failed samples to estimate P(F) with a coefficient of variation of 30% (a moderate requirement). For example, 10000 i.i.d. samples, and hence evaluations of h(·), would be required for such an estimate if P(F) = 0.001.
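As a minimal illustration of this direct approach, the sketch below (in Python, assuming NumPy is available) estimates P(F) = P(Y > b) as the fraction of failed samples; the performance function, input sampler and threshold in the usage example are hypothetical placeholders rather than any particular reliability problem.

```python
import numpy as np

def direct_monte_carlo(h, sample_X, b, N=10_000, rng=None):
    """Estimate P(Y > b) with Y = h(X) by direct Monte Carlo.

    h        -- performance function mapping an input vector to a scalar
    sample_X -- function drawing one i.i.d. realisation of the input vector X
    b        -- failure threshold
    N        -- number of samples, and hence of evaluations of h
    """
    rng = np.random.default_rng() if rng is None else rng
    failures = 0
    for _ in range(N):
        if h(sample_X(rng)) > b:   # failure event {Y > b}
            failures += 1
    # The coefficient of variation of this estimator is sqrt((1 - p)/(N p)),
    # roughly 1/sqrt(N p) for small p, which is why about 10/P(F) samples are
    # needed to reach a coefficient of variation of around 30%.
    return failures / N

# Hypothetical example: X standard normal in 2 dimensions, Y the sum of its components.
if __name__ == "__main__":
    p_hat = direct_monte_carlo(h=lambda x: x.sum(),
                               sample_X=lambda rng: rng.standard_normal(2),
                               b=4.0, N=100_000)
    print("Direct Monte Carlo estimate:", p_hat)
```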

Subset simulation attempts to convert a rare event problem into a sequence of more frequent ones. Let b1 < b2 < ⋯ < bm = b be an increasing sequence of intermediate threshold levels. From the basic property of conditional probability,

P(F) = P(Y > b) = P(Y > b1) P(Y > b2 | Y > b1) ⋯ P(Y > bm | Y > bm−1).
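As a purely illustrative numerical instance (the numbers are arbitrary and not taken from the references), suppose m = 3 and the thresholds are chosen so that each factor equals 0.1; the decomposition then reads

```latex
P(F) = P(Y > b_1)\, P(Y > b_2 \mid Y > b_1)\, P(Y > b_3 \mid Y > b_2)
     = 0.1 \times 0.1 \times 0.1 = 10^{-3},
```

so a failure probability of order 10^−3 is obtained by estimating three probabilities of order 0.1, each of which requires far fewer samples than estimating 10^−3 directly.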

The 'raw idea' of subset simulation is to estimate P(F) by estimating P(Y > b1) and the conditional probabilities P(Y > bi | Y > bi−1) for i = 2, ..., m, anticipating an efficiency gain when these probabilities are not small. To implement this idea there are two basic issues:

  1. Estimating the conditional probabilities by means of simulation requires the efficient generation of samples of X conditional on the intermediate failure events, i.e., the conditional samples. This is generally non-trivial.
  2. The intermediate threshold levels should be chosen so that the intermediate probabilities are not too small (otherwise one ends up with a rare event problem again) but not too large (otherwise too many levels are needed to reach the target event). However, choosing the levels requires information about the CCDF, which is precisely the quantity to be estimated.

In the standard algorithm of subset simulation the first issue is resolved by using Markov chain Monte Carlo. [3] More generic and flexible versions of the algorithm that are not based on Markov chain Monte Carlo have recently been developed. [4] The second issue is resolved by choosing the intermediate threshold levels {bi} adaptively using the samples from the last simulation level. As a result, subset simulation in fact produces a set of estimates of b that correspond to different fixed values of p = P(Y > b), rather than estimates of probabilities for fixed threshold values.
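The following minimal sketch (Python with NumPy and SciPy) illustrates this standard adaptive scheme under the simplifying assumption that the inputs have already been transformed to independent standard normal variables; the level probability p0 = 0.1 and the component-wise random-walk proposal are common choices, but the function names, parameter values and usage example are illustrative assumptions, not a reference implementation of the cited algorithms.

```python
import numpy as np
from scipy.stats import norm

def subset_simulation(h, dim, b, N=1000, p0=0.1, spread=1.0, rng=None):
    """Sketch of standard subset simulation in independent standard normal space.

    h      -- performance function; failure is the event {h(X) > b}
    dim    -- number of input random variables
    b      -- target failure threshold
    N      -- number of samples per level
    p0     -- conditional probability per level (level probability)
    spread -- standard deviation of the component-wise random-walk proposal
    """
    rng = np.random.default_rng() if rng is None else rng
    n_seeds = int(p0 * N)              # seeds (and Markov chains) per level
    n_steps = N // n_seeds             # conditional samples generated per chain

    X = rng.standard_normal((N, dim))  # level 0: direct Monte Carlo
    Y = np.array([h(x) for x in X])
    P_F = 1.0

    for _ in range(50):                # safety cap on the number of levels
        order = np.argsort(Y)[::-1]            # sort responses in descending order
        b_i = Y[order[n_seeds - 1]]            # adaptive intermediate threshold
        if b_i >= b:                           # target threshold reached
            return P_F * np.mean(Y > b)
        P_F *= p0
        seeds, seed_Y = X[order[:n_seeds]], Y[order[:n_seeds]]
        X_new, Y_new = [], []
        for x, y in zip(seeds, seed_Y):        # one Markov chain per seed
            for _ in range(n_steps):
                # Component-wise Metropolis step targeting the standard normal
                # distribution restricted to the intermediate failure event {Y > b_i}.
                cand = x.copy()
                for j in range(dim):
                    xi = x[j] + spread * rng.standard_normal()
                    if rng.random() < min(1.0, norm.pdf(xi) / norm.pdf(x[j])):
                        cand[j] = xi
                y_cand = h(cand)
                if y_cand > b_i:               # accept only if the candidate stays
                    x, y = cand, y_cand        # inside the intermediate failure event
                X_new.append(x.copy())
                Y_new.append(y)
        X, Y = np.array(X_new), np.array(Y_new)
    return P_F * np.mean(Y > b)

# Hypothetical linear example with exact answer Phi(-b / sqrt(dim)).
if __name__ == "__main__":
    est = subset_simulation(h=lambda x: x.sum(), dim=2, b=4.0)
    print("Subset simulation estimate:", est,
          "exact:", norm.cdf(-4.0 / np.sqrt(2)))
```

With these defaults each level uses N = 1000 evaluations of h(·), so a probability of order 10^−3 is typically reached after about three levels; the advantage over direct Monte Carlo grows as the target probability becomes smaller.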

There are a number of variations of subset simulation used in different contexts in applied probability and stochastic operations research. [5] [6] For example, in some variations the simulation effort to estimate each conditional probability P(Y > bi | Y > bi−1) (i = 2, ..., m) may not be fixed prior to the simulation, but may be random, similar to the splitting method in rare-event probability estimation. [7] These versions of subset simulation can also be used to approximately sample from the distribution of X given the failure of the system (that is, conditional on the event {Y > b}). In that case, the relative variance of the (random) number of particles in the final level can be used to bound the sampling error as measured by the total variation distance of probability measures. [8]
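As a hedged illustration of these splitting-style variations (the fixed thresholds, splitting factor and function names below are illustrative assumptions, not the specific algorithms of the cited references), the sketch below fixes the intermediate levels in advance, splits every particle that reaches a level into s copies moved by one conditional MCMC step, and lets the number of particles per level be random; the final estimate divides the (random) number of particles reaching the last level by N·s^(m−1).

```python
import numpy as np
from scipy.stats import norm

def fixed_splitting(h, dim, levels, N=1000, s=10, spread=1.0, rng=None):
    """Sketch of a splitting-style variant with fixed thresholds and random effort.

    h      -- performance function; failure is {h(X) > levels[-1]}
    levels -- fixed intermediate thresholds b_1 < ... < b_m (b_m is the target)
    N      -- number of initial direct Monte Carlo samples
    s      -- splitting factor: copies spawned per surviving particle and level
    """
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((N, dim))
    Y = np.array([h(x) for x in X])
    particles = list(zip(X[Y > levels[0]], Y[Y > levels[0]]))  # random survivor count

    for b_prev, b_next in zip(levels[:-1], levels[1:]):
        offspring = []
        for x, y in particles:
            for _ in range(s):
                # One component-wise Metropolis step targeting the standard normal
                # distribution restricted to {h > b_prev}, started from the parent.
                cand = x.copy()
                for j in range(dim):
                    xi = x[j] + spread * rng.standard_normal()
                    if rng.random() < min(1.0, norm.pdf(xi) / norm.pdf(x[j])):
                        cand[j] = xi
                y_cand = h(cand)
                if y_cand <= b_prev:           # stay at the parent state if the move
                    cand, y_cand = x, y        # leaves the intermediate failure event
                if y_cand > b_next:            # only copies reaching the next level survive
                    offspring.append((cand.copy(), y_cand))
        particles = offspring                  # random number of particles per level
    return len(particles) / (N * s ** (len(levels) - 1))
```

With suitably chosen levels the expected number of particles stays roughly constant from level to level while the realised number fluctuates; the relative variance of the final count is the quantity used in [8] to bound the sampling error of the conditional samples.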

Related Research Articles

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.

Metropolis–Hastings algorithm

In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. This sequence can be used to approximate the distribution or to compute an integral. Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm.

A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, the output, or both are random variables.

In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations which are approximated from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution; to approximate the marginal distribution of one of the variables, or some subset of the variables; or to compute an integral. Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled.

Importance sampling is a Monte Carlo method for evaluating properties of a particular distribution, while only having samples generated from a different distribution than the distribution of interest. Its introduction in statistics is generally attributed to a paper by Teun Kloek and Herman K. van Dijk in 1978, but its precursors can be found in statistical physics as early as 1949. Importance sampling is also related to umbrella sampling in computational physics. Depending on the application, the term may refer to the process of sampling from this alternative distribution, the process of inference, or both.

Monte Carlo integration

In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses points at which the integrand is evaluated. This method is particularly useful for higher-dimensional integrals.

Monte Carlo methods are used in corporate finance and mathematical finance to value and analyze (complex) instruments, portfolios and investments by simulating the various sources of uncertainty affecting their value, and then determining the distribution of their value over the range of resultant outcomes. This is usually done by help of stochastic asset models. The advantage of Monte Carlo methods over other techniques increases as the dimensions of the problem increase.

Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to solve filtering problems arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Del Moral about mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s. The term "Sequential Monte Carlo" was coined by Liu and Chen in 1998.

Variance reduction

In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates obtained for a given simulation or computational effort. Every output random variable from the simulation is associated with a variance which limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain a greater precision and smaller confidence intervals for the output random variable of interest, variance reduction techniques can be used. The main ones are common random numbers, antithetic variates, control variates, importance sampling, stratified sampling, moment matching, conditional Monte Carlo and quasi random variables. For simulation with black-box models subset simulation and line sampling can also be used. Under these headings are a variety of specialized techniques; for example, particle transport simulations make extensive use of "weight windows" and "splitting/Russian roulette" techniques, which are a form of importance sampling.

The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective.

Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.

A stochastic simulation is a simulation of a system that has variables that can change stochastically (randomly) with individual probabilities.

Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions.

Rare event sampling is an umbrella term for a group of computer simulation methods intended to selectively sample 'special' regions of the dynamic space of systems which are unlikely to visit those special regions through brute-force simulation. A familiar example of a rare event in this context would be nucleation of a raindrop from over-saturated water vapour: although raindrops form every day, relative to the length and time scales defined by the motion of water molecules in the vapour phase, the formation of a liquid droplet is extremely rare.

Mean-field particle methods are a broad class of interacting type Monte Carlo algorithms for simulating from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability measures can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states. A natural way to simulate these sophisticated nonlinear Markov processes is to sample a large number of copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and Markov chain Monte Carlo methods, these mean-field particle techniques rely on sequential interacting samples. The terminology mean-field reflects the fact that each of the samples interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes. In other words, starting with a chaotic configuration based on independent copies of the initial state of the nonlinear Markov chain model, the chaos propagates at any time horizon as the size of the system tends to infinity; that is, finite blocks of particles reduce to independent copies of the nonlinear Markov process. This result is called the propagation of chaos property. The terminology "propagation of chaos" originated with the work of Mark Kac in 1976 on a colliding mean-field kinetic gas model.

Line sampling is a method used in reliability engineering to compute small failure probabilities encountered in engineering systems. The method is particularly suitable for high-dimensional reliability problems, in which the performance function exhibits moderate non-linearity with respect to the uncertain parameters. The method is suitable for analyzing black box systems and, unlike the importance sampling method of variance reduction, does not require detailed knowledge of the system.

Dirk Kroese

Dirk Pieter Kroese is a Dutch-Australian mathematician and statistician, and Professor at the University of Queensland. He is known for several contributions to applied probability, kernel density estimation, Monte Carlo methods and rare event simulation. He is, with Reuven Rubinstein, a pioneer of the Cross-Entropy (CE) method.

In the mathematical theory of probability, a generalized renewal process (GRP) or G-renewal process is a stochastic point process used to model the failure/repair behavior of repairable systems in reliability engineering. The Poisson point process is a particular case of the GRP.

References

  1. Au, S.K.; Beck, James L. (October 2001). "Estimation of small failure probabilities in high dimensions by subset simulation". Probabilistic Engineering Mechanics. 16 (4): 263–277. CiteSeerX 10.1.1.131.1941. doi:10.1016/S0266-8920(01)00019-4.
  2. Zio, E.; Pedroni, N. (2009). "Subset simulation and line sampling for advanced Monte Carlo reliability analysis". Reliability, Risk, and Safety. doi:10.1201/9780203859759.ch94. ISBN 978-0-415-55509-8. S2CID 9845287.
  3. Au, Siu-Kui (2016). "On MCMC algorithm for Subset Simulation". Probabilistic Engineering Mechanics. 43: 117–120. doi:10.1016/j.probengmech.2015.12.003.
  4. Au, Siu-Kui; Patelli, Edoardo (2016). "Rare event simulation in finite-infinite dimensional space". Reliability Engineering & System Safety. 148: 67–77. doi:10.1016/j.ress.2015.11.012.
  5. Villén-Altamirano, Manuel; Villén-Altamirano, José (1994). "Restart: a straightforward method for fast simulation of rare events". Proceedings of the 26th Winter Simulation Conference (WSC '94). Orlando, Florida: Society for Computer Simulation International. pp. 282–289. ISBN 0-7803-2109-X.
  6. Botev, Z. I.; Kroese, D. P. (2008). "An Efficient Algorithm for Rare-event Probability Estimation, Combinatorial Optimization, and Counting". Methodology and Computing in Applied Probability. 10 (4): 471–505. CiteSeerX 10.1.1.399.7912. doi:10.1007/s11009-008-9073-7. S2CID 1147040.
  7. Botev, Z. I.; Kroese, D. P. (2012). "Efficient Monte Carlo simulation via the generalized splitting method". Statistics and Computing. 22 (1): 1–16. doi:10.1007/s11222-010-9201-4. S2CID 14970946.
  8. Botev, Z. I.; L’Ecuyer, P. (2020). "Sampling Conditionally on a Rare Event via Generalized Splitting". INFORMS Journal on Computing. arXiv:1909.03566. doi:10.1287/ijoc.2019.0936. S2CID 202540190.
  9. Au, S.K.; Wang, Y. (2014). Engineering Risk Assessment with Subset Simulation. Singapore: John Wiley & Sons. ISBN 978-1-118-39804-3.
  10. Schuëller, G.I.; Pradlwarter, H.J. (2007). "Benchmark study on reliability estimation in higher dimensions of structural systems – An overview". Structural Safety. 29 (3): 167–182. doi:10.1016/j.strusafe.2006.07.010.
  11. Phoon, K.K. (2008). Reliability-Based Design in Geotechnical Engineering: Computations and Applications. Singapore: Taylor & Francis. ISBN 978-0-415-39630-1.
  12. Zio, E.; Pedroni, N. (2011). "How to effectively compute the reliability of a thermal–hydraulic nuclear passive system". Nuclear Engineering and Design. 241: 310–327. CiteSeerX 10.1.1.636.2126. doi:10.1016/j.nucengdes.2010.10.029. S2CID 53677748.