Optimal Uncertainty Quantification
We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call *Optimal Uncertainty Quantification* (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop *Optimal Concentration Inequalities* (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ.
💡 Research Summary
The paper introduces a rigorous framework called Optimal Uncertainty Quantification (OUQ) that places the assumptions and available information about a problem at the forefront of the uncertainty analysis. Rather than relying on implicit, possibly inappropriate modeling assumptions, OUQ formulates the quantification of uncertainty as a well-defined optimization problem: over the set of admissible scenarios (probability measures and response functions) that satisfy the known constraints (moments, bounds, independence, etc.), one seeks the extremal values of a performance metric such as the probability of failure or of deviation. The authors prove that, under very general conditions, this infinite-dimensional optimization reduces to a finite-dimensional one, because extremizers can be sought among discrete measures supported on a small number of points, i.e., finite convex combinations of Dirac masses. This reduction makes the otherwise intractable problem amenable to numerical optimization, and in special cases related moment problems can be treated with convex tools such as semidefinite programming.
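As a minimal formal sketch of this formulation (the notation here is assumed for illustration): write A for the admissible set of pairs (f, mu) of response functions and input distributions compatible with the given information, and take "failure" to mean that the response exceeds a threshold a. The optimal bounds on the probability of failure are then

```latex
% Canonical OUQ problem (sketch): greatest lower and least upper bounds on
% the probability of failure over all admissible scenarios (f, mu) in A.
\mathcal{L}(\mathcal{A}) := \inf_{(f,\mu) \in \mathcal{A}} \mu\bigl[f(X) \ge a\bigr],
\qquad
\mathcal{U}(\mathcal{A}) := \sup_{(f,\mu) \in \mathcal{A}} \mu\bigl[f(X) \ge a\bigr].
```

By construction, the true probability of failure lies in the interval [L(A), U(A)], and no tighter interval is consistent with the stated assumptions and information.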
A major theoretical contribution is the derivation of Optimal Concentration Inequalities (OCI), the tightest possible Hoeffding- and McDiarmid-type bounds consistent with the given information. Classical concentration inequalities are valid under their independence and boundedness assumptions but are generally not sharp for the information actually available; OCI close this gap by accommodating partial knowledge of the input distributions, while the broader OUQ framework also admits dependent inputs and uncertainty in the transfer functions themselves. An especially striking finding is that uncertainty in input parameters may fail to propagate to the output when the transfer function is imperfectly known; in hierarchical models this can lead to a complete "non-propagation" of uncertainty across scales, potentially reducing the conservatism of safety assessments.
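For context, here is the textbook McDiarmid (bounded-differences) inequality that the OCI sharpen; this is a standard statement included for reference, not quoted from the paper. If X_1, ..., X_n are independent and changing the i-th argument of f alone moves its value by at most D_i, then

```latex
% Classical McDiarmid inequality; the corresponding OCI replaces the
% sub-Gaussian right-hand side with the exact extremal value of the
% associated OUQ problem under the same diameter information.
\mathbb{P}\bigl[f(X_1,\dots,X_n) - \mathbb{E}[f] \ge a\bigr]
  \le \exp\!\left(-\frac{2a^2}{\sum_{i=1}^{n} D_i^2}\right),
\qquad a \ge 0.
```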
On the algorithmic side, the paper outlines a practical OUQ workflow (a hypothetical sketch of the optimization step appears after this list):

1. Encode all available information as constraints on the admissible set of probability measures.
2. Formulate the objective (e.g., the probability of failure to be maximized) as a linear functional of the measure.
3. Apply the finite-dimensional reduction theorem to obtain a low-dimensional parametrization in terms of support points and weights.
4. Solve the resulting finite-dimensional (generally non-convex) optimization problem numerically.
5. Interpret the extremal measure to obtain rigorous, information-consistent bounds.

The authors demonstrate the approach on two challenging engineering problems. The first is the Caltech surrogate model for hypervelocity impact, where OUQ yields significantly tighter bounds on impact outcomes than traditional Monte Carlo or worst-case analyses while using comparable computational effort. The second concerns the seismic safety of truss structures; there, OUQ reveals that, because of hierarchical dependencies, uncertainties at the material level may be "filtered out" by the structural response, leading to less conservative design criteria without sacrificing reliability.
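The following Python sketch illustrates steps (3) and (4) under strong simplifying assumptions: a scalar input on [0, 1], a placeholder response function g, knowledge of the mean alone, and a reduction to three support points. All names and numbers here are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the reduced OUQ problem: after the reduction
# theorem, it suffices to search over discrete measures with K support
# points. The function g, threshold A, mean constraint MEAN, and K are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import differential_evolution

def g(x):
    """Placeholder response function on [0, 1] (assumed for illustration)."""
    return np.sin(5.0 * x)

A = 0.9      # failure threshold: "failure" means g(X) >= A
MEAN = 0.5   # the only distributional information: E[X] = MEAN
K = 3        # number of support points permitted by the reduction

def neg_failure_prob(theta):
    # theta packs K support points followed by K unnormalized weights.
    x, raw_w = theta[:K], theta[K:]
    w = raw_w / raw_w.sum()                      # probability weights
    failure_prob = np.dot(w, (g(x) >= A).astype(float))
    mean_violation = abs(np.dot(w, x) - MEAN)    # moment-constraint residual
    # Maximize the failure probability subject to the mean constraint,
    # enforced here by a crude penalty; a global solver handles the
    # resulting non-convex landscape.
    return -failure_prob + 100.0 * mean_violation

bounds = [(0.0, 1.0)] * K + [(1e-6, 1.0)] * K
result = differential_evolution(neg_failure_prob, bounds, seed=0, tol=1e-9)
x_opt = result.x[:K]
w_opt = result.x[K:] / result.x[K:].sum()
print("extremal support points:", x_opt)
print("extremal weights:       ", w_opt)
print("upper bound on P[g(X) >= A]:",
      float(np.dot(w_opt, (g(x_opt) >= A).astype(float))))
```

Because the reduced landscape is non-convex, a global search is used here; any feasible discrete measure found this way gives a lower estimate of the optimal upper bound, which becomes exact when the search attains the global maximum.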
The numerical experiments confirm that OUQ can handle realistic, high‑dimensional models and that the finite‑dimensional reductions are not merely theoretical curiosities but practical tools. The paper also discusses limitations: the size of the reduced problem can still grow with the number of constraints, and extending the framework to fully dynamic, non‑convex systems remains an open challenge. Future research directions include integration with Bayesian updating, real‑time decision making, and the development of specialized algorithms that exploit problem structure for even larger scale applications.
In summary, the authors provide a compelling argument that uncertainty quantification should be reframed as an optimization problem over admissible probability measures. By doing so, they obtain the tightest possible, information‑consistent bounds, avoid hidden modeling assumptions, and open a pathway to systematic, scalable, and theoretically sound UQ for complex scientific and engineering systems.