Definition of evidence fusion rules on the basis of Referee Functions
This chapter defines a new concept and framework for constructing fusion rules for evidence. This framework is based on a referee function, which performs a decisional arbitration conditionally on the basic decisions provided by the several sources of information. A simple sampling method is derived from this framework. The purpose of this sampling approach is to avoid the combinatorial complexity inherent in the definition of evidence fusion rules. Defining a fusion rule by means of a sampling process makes it possible to construct rules from an algorithmic implementation of the referee function instead of a mathematical formulation. Incidentally, it is a versatile and intuitive way of defining rules. The framework is implemented for various well-known evidence fusion rules. On the basis of this framework, new rules for combining evidence are proposed which take into account a consensual evaluation of the sources of information.
💡 Research Summary
The paper introduces a novel framework for evidence fusion that replaces traditional algebraic combination rules with a “referee function” and a sampling‑based implementation. In classical Dempster‑Shafer theory, combining multiple basic probability assignments (BPAs) quickly becomes computationally prohibitive because the number of intersecting focal elements grows exponentially, and conflict handling often requires ad‑hoc normalization. To address these issues, the authors define a referee function as an abstract decision‑making entity that receives the BPAs from all sources and, according to a set of arbitration criteria (conflict level, source reliability, mutual consistency, etc.), decides probabilistically which pieces of evidence to keep, discard, or re‑weight.
The core of the framework is a Monte‑Carlo‑style sampling process. First, each source’s BPA is sampled to generate a “virtual event” set. The referee function then evaluates each virtual event, assigning a selection probability and a weight based on the arbitration policy. Selected events are aggregated by a weighted average, producing a new BPA that represents the fused evidence. Repeating this procedure many times yields an empirical distribution that converges to the expected value of the underlying combination rule. The authors prove that, under mild regularity conditions, the estimator is unbiased and its variance diminishes as the number of samples increases, offering a controllable trade‑off between accuracy and computational load.
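The sampling loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: BPAs are assumed to be represented as dictionaries mapping `frozenset` focal elements to masses, and `referee` is any callable that returns an arbitrated focal element, or `None` to reject the current sample.

```python
import random
from collections import defaultdict

def sample_focal(bpa, rng):
    """Draw one focal element from a BPA given as {frozenset: mass}."""
    r = rng.random()
    acc = 0.0
    for focal, mass in bpa.items():
        acc += mass
        if r < acc:
            return focal
    return focal  # guard against floating-point round-off

def fuse_by_sampling(bpas, referee, n_samples=10_000, seed=0):
    """Estimate the fused BPA: sample one focal element per source,
    let the referee arbitrate, and average the accepted verdicts."""
    rng = random.Random(seed)  # fixed seed for reproducible estimates
    counts = defaultdict(float)
    for _ in range(n_samples):
        events = [sample_focal(bpa, rng) for bpa in bpas]
        verdict = referee(events)
        if verdict is not None:       # None = sample rejected by the referee
            counts[verdict] += 1.0
    total = sum(counts.values())
    if total == 0:
        return {}                     # every sample was rejected
    return {focal: c / total for focal, c in counts.items()}
```

Rejected samples simply do not contribute to the empirical average, so rejection-and-renormalization falls out of the loop for free; the number of samples directly controls the accuracy/cost trade-off mentioned above.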
To demonstrate the generality of the approach, the paper shows how several well‑known fusion rules can be recovered simply by configuring the referee function appropriately. Dempster’s rule corresponds to a referee that keeps the intersection of the selected focal elements and rejects totally conflicting samples, the rejection playing the role of the normalization step; Yager’s rule is obtained by redirecting conflict mass to a designated “ignorance” focal element; Dubois‑Prade’s rule emerges when the referee transfers conflicting samples to the union of the conflicting focal elements. In each case, the same sampling engine is reused, confirming that the referee‑function paradigm unifies disparate fusion strategies under a single algorithmic roof.
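Under the same assumed representation (focal elements as `frozenset`s), the rule-specific referees reduce to small arbitration policies. The function names and the three-element frame below are illustrative, not taken from the paper:

```python
FRAME = frozenset({"a", "b", "c"})  # hypothetical frame of discernment

def dempster_referee(events):
    """Conjunctive arbitration: keep the intersection, reject total conflict.
    Rejection-and-resampling stands in for Dempster's normalization."""
    inter = frozenset.intersection(*events)
    return inter if inter else None

def yager_referee(events):
    """Same conjunction, but conflicting samples are sent to total
    ignorance (the whole frame) instead of being rejected."""
    inter = frozenset.intersection(*events)
    return inter if inter else FRAME

def dubois_prade_referee(events):
    """On conflict, fall back to the union of the conflicting focal
    elements, as in the classical Dubois-Prade redistribution."""
    inter = frozenset.intersection(*events)
    return inter if inter else frozenset.union(*events)
```

Each of these can be passed unchanged to the same sampling engine, which is the point of the unification claim.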
Beyond reproducing existing rules, the authors propose a new “consensus‑based” fusion rule. This rule dynamically estimates each source’s reliability by feeding back the outcomes of previous sampling rounds through a Bayesian update. The referee then gives higher weight to focal elements supported by a majority of reliable sources, while still allowing minority contributions when they are consistent with the overall evidence. Experimental evaluation on synthetic datasets with varying degrees of conflict shows that the consensus‑based rule achieves lower mean‑square error and higher robustness compared to the classical rules, especially when the conflict ratio exceeds 30 %.
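The paper's exact Bayesian reliability update is not reproduced here; the following purely illustrative sketch substitutes a simple exponential-moving-average update for it, just to show the shape of such a feedback loop (all names are made up):

```python
from collections import defaultdict

def consensus_verdict(events, reliability):
    """Pick the focal element backed by the largest total source reliability."""
    support = defaultdict(float)
    for src, focal in enumerate(events):
        support[focal] += reliability[src]
    return max(support, key=support.get)

def update_reliability(reliability, events, verdict, lr=0.05):
    """Feedback step: move a source's reliability toward 1 when its sample is
    consistent with the verdict (one set contains the other), toward 0 otherwise."""
    for src, focal in enumerate(events):
        consistent = verdict <= focal or focal <= verdict
        reliability[src] += lr * ((1.0 if consistent else 0.0) - reliability[src])
    return reliability
```

Run inside the sampling loop, this lets majority-supported focal elements dominate while still admitting minority contributions that remain consistent with the emerging verdict, mirroring the behavior described above.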
The paper also discusses practical aspects of the framework. The computational complexity of the sampling process scales linearly with the number of sources and the number of samples, a dramatic improvement over the exponential blow‑up of exact Dempster‑Shafer combination. The algorithm is straightforward to implement in real‑time systems, as it requires only random sampling, simple arithmetic, and a user‑defined referee policy. However, the stochastic nature introduces variance in the fused result; the authors recommend using a sufficient number of samples (typically 10⁴–10⁵) and, when possible, seeding the random generator for reproducibility. They also note that designing an effective referee function may require domain expertise, suggesting future work on learning referee policies from data.
In conclusion, the referee‑function and sampling framework offers a flexible, computationally efficient alternative to traditional evidence fusion. It unifies existing rules, enables the creation of new, application‑specific fusion strategies, and opens avenues for further research, including automatic referee learning, integration with other uncertainty formalisms (possibility theory, Bayesian networks), and deployment in large‑scale streaming environments.