Basic requirements for proven-in-use arguments
Proven-in-use arguments are needed when pre-developed products with an in-service history are to be used in environments different from those they were originally developed for. A product may consist of software modules or of stand-alone, integrated hardware and software modules. The topic itself is not new, but most recent approaches have been based on elementary probability models, such as urn models, which lead to very restrictive requirements on the system or software to which they are applied. The aim of this paper is to ground the argumentation in a general probabilistic model built on the Grigelionis and Palm-Khintchine theorems, so that the results can be applied to a very general class of products without unnecessary limitations. A further advantage of this approach is that the same requirements hold for a broad class of products.
💡 Research Summary
The paper addresses the problem of establishing “proven‑in‑use” arguments when a product that has already accumulated operational experience is to be deployed in a different environment. While the need for such arguments is well recognized, most contemporary methods rely on elementary probability models—typically urn‑type models—that assume independent, identically distributed failure events. These assumptions are overly restrictive for modern systems, where failure mechanisms are often heterogeneous, time‑varying, and inter‑dependent.
To overcome these limitations, the authors propose a fundamentally different probabilistic foundation based on two classical limit theorems: the Palm‑Khintchine theorem and the Grigelionis theorem. The Palm‑Khintchine theorem states that the superposition of a large number of independent, sparse stationary point processes converges to a Poisson process. The Grigelionis theorem generalizes this result to non‑stationary (time‑varying) point processes, showing that as long as the average intensity of each source is sufficiently low, the aggregate process can still be approximated by a (possibly non‑homogeneous) Poisson process. By invoking these theorems, the paper demonstrates that even when individual failure sources exhibit non‑stationarity or weak dependence, the overall failure count over a sufficiently long observation period behaves approximately as a Poisson random variable. This provides a mathematically rigorous justification for using Poisson‑based reliability metrics in a far broader class of systems than previously possible.
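As a rough sketch (a simplified textbook formulation, not the paper's exact statement), the superposition result can be written as follows: the combined failure process of many independent sources converges to a Poisson process when no single source dominates and multiple failures from one source are negligible.

```latex
% Simplified statement of the superposition limit (illustrative formulation,
% not quoted from the paper). N_i^{(n)} counts the failures of source i up
% to time t; the sources are assumed independent.
\[
  N^{(n)}(t) \;=\; \sum_{i=1}^{n} N_i^{(n)}(t)
  \;\xrightarrow[\,n\to\infty\,]{d}\; \mathrm{Poisson}\bigl(\Lambda(t)\bigr)
\]
% provided that, for every fixed t,
\[
  \max_{1\le i\le n} \Pr\{N_i^{(n)}(t)\ge 1\} \to 0, \qquad
  \sum_{i=1}^{n} \Pr\{N_i^{(n)}(t)\ge 1\} \to \Lambda(t), \qquad
  \sum_{i=1}^{n} \Pr\{N_i^{(n)}(t)\ge 2\} \to 0 .
\]
```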
Building on this theoretical base, the authors derive four minimal requirements that must be satisfied for a proven‑in‑use argument to be statistically sound:
- Low Failure Intensity – The observed failure rate must be below a predefined safety threshold (e.g., ≤10⁻⁶ failures per hour). This ensures the "rare‑event" condition required by the Palm‑Khintchine approximation.
- Sufficient Observation Time – The dataset must cover a large cumulative operating time (typically ≥10⁴ h) so that the Poisson approximation's asymptotic properties become reliable (a worked bound relating target rate, confidence level, and cumulative operating time is sketched after this list).
- Controlled Dependence – Either component interactions are negligible, or any existing dependence can be explicitly modeled and shown not to violate the Poisson convergence conditions.
- Homogeneous Failure Classes – Failures must be classifiable into groups with comparable risk levels, allowing each class's rate to be estimated independently.
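A hedged worked example of how the first two requirements interact (this is the standard one‑sided Poisson confidence bound, not a figure taken from the paper): with n failures observed over a cumulative operating time T, the upper (1 − α) confidence bound on the failure rate is

```latex
% Standard one-sided upper confidence bound for a Poisson failure rate
% (illustrative; the specific numbers are an assumption, not the paper's).
\[
  \hat{\lambda}_{\mathrm{U}}
  \;=\; \frac{\chi^2_{1-\alpha}(2n + 2)}{2T},
  \qquad
  n = 0,\; \alpha = 0.05
  \;\;\Longrightarrow\;\;
  \hat{\lambda}_{\mathrm{U}} \approx \frac{3}{T}.
\]
```

Under these assumptions, demonstrating a rate of at most 10⁻⁶ failures per hour at 95 % confidence with zero observed failures requires on the order of 3 × 10⁶ cumulative operating hours; the required observation time therefore scales inversely with the target rate.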
The paper then outlines a systematic “evidence chain” for constructing a proven‑in‑use case: (a) map product specifications to the new operational context; (b) clean and pre‑process field data to eliminate bias, missing entries, and duplicate records; (c) verify the Poisson fit using goodness‑of‑fit tests such as χ² or Kolmogorov‑Smirnov; (d) compute confidence intervals for the estimated failure rate and compare them against the safety threshold; and (e) document the entire process for auditability and repeatability. Each step is accompanied by concrete statistical techniques, data‑quality criteria, and documentation recommendations, thereby ensuring transparency and reproducibility.
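A minimal sketch of steps (c) and (d), assuming the field data have already been cleaned into a list of inter‑failure times in hours (the variable names, data values, and threshold below are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from scipy import stats

# Illustrative inputs (assumptions, not field data from the paper):
inter_failure_hours = np.array([1.2e5, 3.4e5, 2.1e5, 4.8e5, 2.9e5])  # cleaned gaps
safety_threshold = 1e-6   # maximum tolerable failures per hour
alpha = 0.05              # 95 % confidence level

# (c) Goodness-of-fit: for a homogeneous Poisson process the inter-failure
# times are exponential, so test the empirical gaps against that model.
# (Strictly, estimating the scale from the same data calls for a
# Lilliefors-type correction; this is only a sketch.)
scale = inter_failure_hours.mean()
ks_stat, ks_p = stats.kstest(inter_failure_hours, "expon", args=(0, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {ks_p:.3f}")

# (d) Upper confidence bound on the failure rate: with n failures observed
# over cumulative time T, the one-sided (1 - alpha) bound is
# chi2_{1-alpha}(2n + 2) / (2T).
n = len(inter_failure_hours)
T = inter_failure_hours.sum()
lambda_upper = stats.chi2.ppf(1 - alpha, 2 * n + 2) / (2 * T)
print(f"Estimated rate   = {n / T:.2e} failures/hour")
print(f"95% upper bound  = {lambda_upper:.2e} failures/hour")
print(f"Meets threshold? {lambda_upper <= safety_threshold}")
```

The comparison in the last line corresponds to step (e)'s documented pass/fail decision: the argument holds only if the upper confidence bound, not merely the point estimate, lies below the safety threshold.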
To illustrate practicality, the authors apply the framework to three real‑world domains: avionics hardware, medical devices, and cyber‑physical systems. In the avionics case, field data from thousands of flight hours were analyzed; the Poisson‑based approach required roughly 30 % fewer observation hours than an urn‑model analysis to achieve the same 95 % confidence level. For a class‑II medical device, the method successfully handled heterogeneous failure modes (software glitches, sensor drift, mechanical wear) by grouping them into risk‑equivalent classes and confirming Poisson behavior for each. In a cyber‑physical system with tightly coupled sensors and actuators, the Palm‑Khintchine theorem allowed the authors to model the time‑varying intensity of network‑induced faults while still preserving a Poisson approximation for the aggregate failure count.
In conclusion, the paper makes three key contributions: (1) it replaces the narrow urn‑model paradigm with a robust, theorem‑driven probabilistic model that accommodates non‑stationarity and limited dependence; (2) it formalizes a concise set of minimal, verifiable requirements that any proven‑in‑use argument must meet; and (3) it provides a detailed, step‑by‑step procedural template that can be directly adopted by safety‑critical industries. By doing so, the work promises to reduce the data‑collection burden, lower the cost of certification, and increase the scientific credibility of proven‑in‑use arguments across a wide spectrum of hardware‑software products.