Propagation of Uncertainty in Risk Analysis and Safety Integrity Level Composition
In many risk analyses the results are given only as mean values, and often the input data are mean values as well. However, the required accuracy of the result is often an interval of values, e.g. for the derivation of a Safety Integrity Level (SIL). In this paper we examine how accurate the input data of a risk analysis must be if a particular certainty of the result is demanded. The other side of the coin, SIL composition, is also discussed. The results show that common methods for risk analysis are faulty and that SIL allocation by a kind of SIL calculus seems infeasible without additional requirements on the composed components. A justification of a common practice for parameter scaling in well-constructed semi-quantitative risk analysis is also provided.
💡 Research Summary
The paper critically examines the way risk analyses and Safety Integrity Level (SIL) allocations are performed in industrial safety engineering, focusing on the propagation of uncertainty from input data to the final safety metrics. It begins by pointing out that most risk assessments report only point estimates (means) for both the results and the underlying parameters, while regulatory and engineering requirements often demand that the result lie within a specified confidence interval (an “error band”). To bridge this gap, the authors adopt a formal uncertainty-propagation framework. Assuming independent, normally distributed input variables $X_i \sim N(\mu_i,\sigma_i^2)$ and a risk function $R = f(X_1,\dots,X_n)$, they linearise the function using a first-order Taylor expansion. This yields the familiar expressions $\mu_R \approx f(\mu_1,\dots,\mu_n)$ and $\sigma_R^2 \approx \sum_i (\partial f/\partial X_i)^2 \sigma_i^2$. Consequently, the variance of the final risk metric is a weighted sum of the variances of the inputs. The authors demonstrate, with a chemical-plant case study, that ignoring input variances can lead to substantial under- or over-estimation of risk, whereas incorporating them produces a realistic confidence band around the risk estimate.
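The first-order propagation step above can be sketched numerically. The helper below is a minimal illustration, not the authors' tooling: it evaluates the partial derivatives by central finite differences at the mean point and applies the variance formula. The risk model `R = frequency * consequence` and all numeric values are hypothetical.

```python
import math

def propagate_uncertainty(f, means, sigmas, h=1e-6):
    """First-order (Taylor) uncertainty propagation for independent,
    normally distributed inputs:
        mu_R    ~= f(mu_1, ..., mu_n)
        sigma_R^2 ~= sum_i (df/dX_i)^2 * sigma_i^2
    """
    mu_R = f(*means)
    var_R = 0.0
    for i, (mu_i, sigma_i) in enumerate(zip(means, sigmas)):
        # central finite difference for df/dX_i at the mean point
        up = list(means); up[i] = mu_i + h * max(abs(mu_i), 1.0)
        lo = list(means); lo[i] = mu_i - h * max(abs(mu_i), 1.0)
        dfdx = (f(*up) - f(*lo)) / (up[i] - lo[i])
        var_R += dfdx ** 2 * sigma_i ** 2
    return mu_R, math.sqrt(var_R)

# Hypothetical risk model: R = event frequency * consequence severity
risk = lambda freq, cons: freq * cons

mu_R, sigma_R = propagate_uncertainty(risk,
                                      means=[1e-3, 100.0],
                                      sigmas=[5e-4, 20.0])
# 95% band (roughly mu +/- 1.96 sigma) around the point estimate
band = (mu_R - 1.96 * sigma_R, mu_R + 1.96 * sigma_R)
```

Note how the band is driven almost entirely by the frequency term here: its relative uncertainty (50%) dominates the consequence term's (20%), which is exactly the kind of insight a point estimate hides.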
The second major contribution concerns the composition of SILs. SILs, defined in standards such as IEC 61508, correspond to ranges of the probability of failure on demand (PFD). In practice, engineers often apply heuristic rules like “two SIL-2 components together achieve SIL-3”. The paper shows that such additive or averaging rules are mathematically unsound. For two components A and B with PFDs $p_A$ and $p_B$, the system PFD is $p_{sys}=1-(1-p_A)(1-p_B)\approx p_A+p_B$ for small probabilities, and the associated variance follows $\sigma_{sys}^2\approx\sigma_A^2+\sigma_B^2+2\,\mathrm{Cov}(p_A,p_B)$, where the covariance term vanishes for independent estimates. Thus, the overall SIL cannot be inferred without accounting for both the nominal PFD values and their uncertainties. The authors propose two additional constraints for a valid SIL calculus: (1) all constituent components must satisfy a maximum allowable uncertainty $\sigma_{\max}$; (2) the design must explicitly state a target PFD and its confidence interval for the desired SIL, ensuring that each component’s PFD and variance stay within those limits. Only under these conditions does a systematic SIL composition become defensible.
The third part of the paper justifies the widespread use of parameter scaling in semi-quantitative risk analysis. Practitioners often bucket risk factors into logarithmic steps (e.g., “low”, “medium”, “high” corresponding to factors of 10 or 100). The authors argue that many risk-related variables follow log-normal distributions; applying a logarithmic transformation stabilises variance and renders the uncertainty-propagation equations approximately linear. They derive that the logarithm of the risk metric, $\ln R$, is normally distributed with mean $\mu_{\ln R}$ and variance $\sigma_{\ln R}^2$. Transforming back to the original scale yields confidence intervals that are exponential functions of the log-space intervals, preserving the interpretability of the scaled categories while keeping the underlying uncertainty tractable.
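The back-transformation step can be sketched as follows. This is an illustrative computation under the stated log-normal assumption, with hypothetical numbers: a symmetric interval on the log scale becomes a multiplicative (factor) interval around the median on the original scale.

```python
import math

def lognormal_ci(mu_ln, sigma_ln, z=1.96):
    """If ln R ~ N(mu_ln, sigma_ln^2), a symmetric (1 - alpha) interval on
    the log scale maps to a multiplicative interval on the original scale:
        [exp(mu_ln - z * sigma_ln), exp(mu_ln + z * sigma_ln)]
    """
    return math.exp(mu_ln - z * sigma_ln), math.exp(mu_ln + z * sigma_ln)

# Hypothetical: median risk 1e-4; half an order of magnitude as one
# log-scale standard deviation (sigma_ln = ln(10) / 2), so one sigma
# corresponds to a factor of sqrt(10) on the original scale.
low, high = lognormal_ci(mu_ln=math.log(1e-4), sigma_ln=math.log(10) / 2)
# The interval is symmetric as a *factor* around the median exp(mu_ln):
# high / median == median / low.
```

This is why decadic bucketing works: one category step corresponds to a fixed additive step in log space, where the propagation equations are approximately linear.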
In conclusion, the paper asserts that current risk‑analysis practices, which rely heavily on point estimates, are insufficient for the rigorous safety arguments required by modern standards. By explicitly modelling and propagating uncertainties, engineers can determine the required precision of input data, set realistic confidence bounds on risk results, and perform SIL composition in a mathematically sound manner. The presented framework offers a pathway for future revisions of safety standards and the development of analysis tools that embed uncertainty handling at their core.