Approximated Computation of Belief Functions for Robust Design Optimization


This paper presents some ideas to reduce the computational cost of evidence-based robust design optimization. Evidence Theory captures both the aleatory and epistemic uncertainties in the design parameters, providing two quantitative measures, Belief and Plausibility, of the credibility of the computed value of the design budgets. The paper proposes some techniques to compute an approximation of Belief and Plausibility at a fraction of the cost required for an accurate calculation of the two values. Some simple test cases show how the proposed techniques scale with the dimension of the problem. Finally, a simple example of spacecraft system design is presented.


💡 Research Summary

The paper tackles the notorious computational bottleneck that arises when applying Evidence Theory (also known as Dempster‑Shafer Theory) to robust design optimization. In this framework, uncertainties in design variables are split into aleatory (inherent variability) and epistemic (lack of knowledge) components, each represented by a Basic Probability Assignment (BPA). From the collection of BPAs one can derive two complementary credibility measures for any performance target: Belief (the lower bound, i.e., the total weight of all BPA combinations that guarantee the target) and Plausibility (the upper bound, i.e., the weight of all combinations that do not rule the target out). Exact evaluation of these measures requires an exhaustive enumeration of all possible BPA combinations, a task whose complexity grows exponentially with the number of uncertain variables and quickly becomes infeasible for realistic engineering problems.
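In standard Dempster–Shafer notation, writing $m(E)$ for the BPA weight of a focal element $E$ and $A$ for the set of designs that meet the performance target, the two measures summarized above are

```latex
\mathrm{Bel}(A) \;=\; \sum_{E \subseteq A} m(E),
\qquad
\mathrm{Pl}(A) \;=\; \sum_{E \cap A \neq \emptyset} m(E),
```

so that $\mathrm{Bel}(A) \le \mathrm{Pl}(A)$ and $\mathrm{Pl}(A) = 1 - \mathrm{Bel}(\bar{A})$. The exhaustive sums over all focal elements are precisely what the approximation techniques described next seek to avoid.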

To alleviate this, the authors propose a suite of approximation techniques that together reduce the computational load by one to two orders of magnitude while keeping the error on Belief and Plausibility within a few percent. The three main ideas are:

  1. Sampling‑Based Approximation – The high‑dimensional uncertainty space is explored by a limited set of points generated through random sampling, Latin Hypercube Sampling, or other space‑filling designs. Each sample is checked against the performance target, and the corresponding BPA weights are accumulated. By controlling the sample size the method trades accuracy for speed; the authors discuss bias‑mitigation strategies such as importance sampling aligned with the target’s boundary.

  2. Partitioning (Region‑Based) Approximation – The uncertain space is recursively divided into hyper‑rectangles (or simplices), and the total BPA weight of each sub‑region is pre‑computed. If a region lies entirely inside the target set, its BPA weight is added directly to Belief; if it lies completely outside, its weight is excluded from Plausibility (it counts only toward the complement). Regions intersecting the target boundary are further subdivided until a prescribed resolution is reached. This hierarchical partitioning concentrates effort on the most influential regions and dramatically cuts the number of required BPA evaluations.

  3. Hierarchical Search – A coarse global scan identifies “critical” regions where the target’s credibility is most sensitive. Those regions are then examined with finer sampling or deeper partitioning, while low‑impact regions are left at the coarse level. A priority queue ordered by BPA weight ensures that the most significant contributions are processed first, providing an anytime algorithm that can be stopped when a desired accuracy is achieved.
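As a concrete (and deliberately simplified) illustration of the first idea, the sketch below estimates Belief and Plausibility by sampling inside each focal hyper‑rectangle. The interface and names here are this summary’s own illustrative assumptions, not the paper’s code:

```python
import random

def approx_bel_pl(focal_elements, satisfies, n_samples=200, seed=0):
    """Monte Carlo estimate of Belief and Plausibility for a target.

    focal_elements: list of (weight, bounds), where bounds is a list of
        (lo, hi) intervals, one per uncertain variable (a hyper-rectangle).
    satisfies: predicate returning True when a design point meets the target.

    A focal element counts toward Belief only if every sampled point in it
    satisfies the target, and toward Plausibility if at least one does.
    """
    rng = random.Random(seed)
    bel = pl = 0.0
    for weight, bounds in focal_elements:
        hits = 0
        for _ in range(n_samples):
            point = [rng.uniform(lo, hi) for lo, hi in bounds]
            if satisfies(point):
                hits += 1
        if hits == n_samples:   # element appears fully inside the target set
            bel += weight
        if hits > 0:            # element at least intersects the target set
            pl += weight
    return bel, pl
```

With a small sample budget this can misclassify elements near the target boundary (overestimating Belief, underestimating Plausibility), which is exactly the bias the boundary‑aligned importance sampling mentioned above is meant to mitigate.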
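The second and third ideas can likewise be sketched together: recursively bisect hyper‑rectangles, largest weight first, banking regions proven inside the target toward Belief and discounting regions proven outside from Plausibility. Everything specific here — a uniform weight (volume) standing in for a precomputed BPA, and a linear target set {x : sum(x) ≤ t} whose containment tests reduce to checking corner sums — is an illustrative assumption, not the paper’s formulation:

```python
import heapq

def bel_pl_bounds(root, threshold, max_pops=2000):
    """Bracket Belief and Plausibility of {x : sum(x) <= threshold} by
    recursively bisecting hyper-rectangles, largest weight first.

    root: tuple of (lo, hi) intervals. The "BPA weight" of a box is taken
    to be its volume (a uniform assignment) purely for illustration.
    Returns (bel, pl) normalized to the root weight; the gap between them
    shrinks as more boundary boxes are resolved.
    """
    def vol(box):
        v = 1.0
        for lo, hi in box:
            v *= hi - lo
        return v

    total = vol(root)
    bel = 0.0       # weight of boxes proven inside the target set
    outside = 0.0   # weight of boxes proven outside it
    heap = [(-total, root)]          # max-heap on weight via negation
    for _ in range(max_pops):
        if not heap:
            break
        neg_v, box = heapq.heappop(heap)
        if sum(hi for lo, hi in box) <= threshold:
            bel += -neg_v            # worst corner satisfies: box is inside
        elif sum(lo for lo, hi in box) > threshold:
            outside += -neg_v        # best corner fails: box is outside
        else:                        # straddles the boundary: bisect longest edge
            d = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
            lo, hi = box[d]
            mid = 0.5 * (lo + hi)
            for half in ((lo, mid), (mid, hi)):
                child = box[:d] + (half,) + box[d + 1:]
                heapq.heappush(heap, (-vol(child), child))
    pl = total - outside
    return bel / total, pl / total
```

Because the queue is ordered by weight, the loop can be stopped at any budget and still returns valid lower/upper brackets on Belief and Plausibility, mirroring the anytime behaviour described above.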

The authors validate the approach on two fronts. First, synthetic benchmark problems of 5, 10, and 15 dimensions are used to compare runtime and error against exact enumeration. The approximations achieve speed‑ups ranging from 10× to 50×, with average absolute errors of 2–5 % for both Belief and Plausibility. Second, a realistic spacecraft power‑system design case is presented. The design space comprises eight continuous variables (solar‑panel area, battery capacity, load variability, etc.) and several discrete options. The goal is to ensure that the total power budget stays within a 95 % confidence interval. Using the proposed methods, the full Belief/Plausibility analysis that would have taken tens of hours with exact methods is completed in under ten minutes, and the resulting credibility bounds are sufficiently tight to support engineering trade‑off decisions.

Key contributions of the paper are: (i) a practical, modular framework that makes Evidence‑Theory‑based robust optimization tractable for high‑dimensional problems; (ii) a clear demonstration of how sampling, region partitioning, and hierarchical refinement can be combined to balance computational cost against approximation fidelity; and (iii) empirical evidence that the approach scales well from academic test functions to a real‑world aerospace application.

The discussion concludes with several avenues for future work: extending the methodology to dynamic, time‑varying design problems; integrating machine‑learning‑driven surrogate models to guide sampling toward the most informative regions; and embedding the approximated Belief/Plausibility measures directly into multi‑objective optimization loops. Such extensions would broaden the applicability of evidence‑theoretic robustness analysis to fields such as automotive design, renewable‑energy system sizing, and complex manufacturing processes, where both aleatory and epistemic uncertainties play a decisive role.