Metamodel-based importance sampling for structural reliability analysis


Structural reliability methods aim at computing the probability of failure of systems with respect to prescribed performance functions. In modern engineering such functions usually require running an expensive-to-evaluate computational model (e.g. a finite element model). Simulation methods, which may require $10^3$ to $10^6$ runs, therefore cannot be used directly. Surrogate models such as quadratic response surfaces, polynomial chaos expansions or kriging (built from a limited number of runs of the original model) are then introduced as substitutes for the original model to cope with the computational cost. In practice, though, it is almost impossible to quantify the error introduced by this substitution. In this paper we propose to use a kriging surrogate of the performance function to build a quasi-optimal importance sampling density. The probability of failure is eventually obtained as the product of an augmented probability, computed by substituting the metamodel for the original performance function, and a correction term which ensures that the estimation is unbiased even if the metamodel is not fully accurate. The approach is applied to analytical and finite element reliability problems and proves efficient for up to 100 random variables.


💡 Research Summary

The paper addresses the long‑standing challenge of estimating very small failure probabilities for engineering systems whose limit‑state functions are expensive to evaluate, such as finite‑element models. Classical Monte Carlo simulation becomes infeasible when failures are rare, because it would require millions of costly model runs. The authors propose a hybrid method that combines kriging (Gaussian‑process) surrogate modeling with importance sampling (IS) to obtain an unbiased estimator of the failure probability while drastically reducing the number of expensive model evaluations.
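The cost argument can be made concrete with the textbook coefficient‑of‑variation formula for a crude Monte Carlo estimator (a standard result, not specific to this paper): δ² = (1 − p_f)/(N·p_f), so the required sample size N grows inversely with p_f.

```python
# Required crude Monte Carlo sample size N for a target coefficient of
# variation delta of the failure-probability estimator:
#   delta^2 = (1 - p_f) / (N * p_f)   =>   N = (1 - p_f) / (p_f * delta^2)
def mc_sample_size(p_f: float, delta: float) -> float:
    return (1.0 - p_f) / (p_f * delta ** 2)

# A failure probability of 1e-4 at 10% accuracy already needs ~1e6 model runs.
print(round(mc_sample_size(1e-4, 0.10)))  # -> 999900
```

With a finite‑element model costing minutes per run, this is exactly the regime where direct simulation is hopeless and a surrogate becomes necessary.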

First, a kriging surrogate of the limit‑state function g(x) is built from a limited design of experiments (DOE). Kriging provides not only a mean prediction µ_G(x) but also a prediction variance σ_G²(x), which quantifies the epistemic uncertainty due to the finite DOE. From these quantities the authors define a probabilistic classification function π(x)=Φ(−µ_G(x)/σ_G(x)), i.e., the probability, under the Gaussian kriging predictor, that the performance function is negative at a given point x. This function is a smooth surrogate for the indicator 1_{g≤0}(x) that incorporates the surrogate’s own uncertainty.
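As a sketch of this classification function, the following builds a minimal simple‑kriging interpolator on a toy one‑dimensional limit state and evaluates π(x)=Φ(−µ_G(x)/σ_G(x)). The RBF kernel, its length scale, the jitter, and the toy g are illustrative assumptions, not the paper's actual kriging setup.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def rbf(a, b, ell=1.0):
    """Squared-exponential correlation between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Toy limit state (stand-in for an expensive model); failure region: g <= 0.
g = lambda x: 2.0 - x

# Small design of experiments and simple-kriging system (zero prior mean,
# unit process variance -- illustrative choices only).
x_doe = np.linspace(-4.0, 4.0, 8)
y_doe = g(x_doe)
K = rbf(x_doe, x_doe) + 1e-8 * np.eye(x_doe.size)   # jitter for stability
weights = np.linalg.solve(K, y_doe)

def kriging(x):
    """Return mean mu_G(x) and std sigma_G(x) of the kriging predictor."""
    k = rbf(np.atleast_1d(np.asarray(x, float)), x_doe)
    mu = k @ weights
    var = 1.0 - np.einsum("ij,ji->i", k, np.linalg.solve(K, k.T))
    return mu, np.sqrt(np.maximum(var, 1e-24))

def pi(x):
    """Probabilistic classification pi(x) = Phi(-mu_G(x)/sigma_G(x))."""
    mu, sigma = kriging(x)
    return np.array([norm_cdf(-m / s) for m, s in zip(mu, sigma)])

print(pi([-4.0, 2.0, 4.0]))  # ~0 in the safe region, intermediate near g=0, ~1 in failure
```

Note how π varies smoothly from 0 to 1 across the limit state x = 2, unlike the hard indicator 1_{g≤0}: this smoothness is what makes it usable as an importance sampling weight.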

The optimal IS density would be h*(x)=1_{g≤0}(x)f_X(x)/p_f, where f_X is the input PDF and p_f the true failure probability. Since h* depends on the unknown indicator, it cannot be used directly. The authors replace the indicator by π(x) and obtain a quasi‑optimal density ĥ(x)=π(x)f_X(x)/p̂_f, where p̂_f=E_{f_X}[π(X)] is the so‑called augmented failure probability, i.e., the normalizing constant of ĥ. The failure probability is then recovered as the product p_f=α·p̂_f, where the correction factor α=E_ĥ[1_{g≤0}(X)/π(X)] is estimated by sampling from ĥ and evaluating the original performance function only on those samples. This correction guarantees an unbiased estimate even when the surrogate is not fully accurate.
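Under toy assumptions (a 1‑D standard Gaussian input and a hypothetical, deliberately biased surrogate standing in for the kriging mean and variance), the whole estimator chain — augmented probability, sampling from ĥ, and correction factor — can be sketched as follows. The point of the biased surrogate mean is to show that the correction removes the resulting bias.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(0)

# True limit state (failure: g <= 0); with X ~ N(0,1) the exact failure
# probability is p_f = Phi(-2) ~ 2.28e-2.
g = lambda x: 2.0 - x

# Hypothetical surrogate: a deliberately biased mean and a constant
# prediction std, standing in for the kriging outputs mu_G and sigma_G.
mu_G = lambda x: 1.8 - x
sigma_G = 0.3
def pi(x):
    return np.array([norm_cdf(-mu_G(xi) / sigma_G) for xi in x])

M = 400_000
x = rng.standard_normal(M)       # proposals drawn from f_X
p = pi(x)

# Step 1: augmented probability p_hat = E_f[pi(X)] -- surrogate calls only.
p_hat = p.mean()

# Step 2: rejection sampling from h_hat(x) = pi(x) f_X(x) / p_hat.
# Since pi <= 1, accepting x ~ f_X with probability pi(x) is exact.
accepted = x[rng.random(M) < p]

# Step 3: correction alpha = E_h[1_{g<=0}(X) / pi(X)]; only these few
# accepted samples would require runs of the expensive true model.
alpha = np.mean((g(accepted) <= 0.0) / pi(accepted))

p_f = alpha * p_hat              # unbiased despite the biased surrogate
print(f"p_f ~ {p_f:.4f} (exact {norm_cdf(-2.0):.4f})")
```

Rejection sampling is used here only because π ≤ 1 makes it trivially exact in one dimension; the paper's own sampler for ĥ may differ (e.g. MCMC-based), so treat this step as an illustrative choice.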

