Qualitative Belief Conditioning Rules (QBCR)


📝 Abstract

In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow revising beliefs directly with words and linguistic labels, and thus avoid the introduction of ad hoc translations of qualitative beliefs into quantitative ones for solving the problem.

📄 Content

In this paper, we propose a simple arithmetic of linguistic labels which allows a direct extension of the quantitative Belief Conditioning Rules (BCR) proposed in the DSmT framework [3,4] to their qualitative counterpart. Qualitative belief assignments are well adapted to manipulating information expressed in natural language and usually reported by human experts or AI-based expert systems. A new method for computing directly with words (CW) for combining and conditioning qualitative information is presented. CW, more precisely computing with linguistic labels, is usually vaguer and less precise than computing with numbers, but it is expected to offer better robustness and flexibility for combining uncertain and conflicting human reports than computing with numbers, because in most cases human experts are less able to provide (and to justify) precise quantitative beliefs than qualitative ones.

Before extending the quantitative DSmT-based conditioning rules to their qualitative counterparts, it is necessary to define a few new important operators on linguistic labels and to specify what a qualitative belief assignment is. Then we will show through simple examples how the combination of qualitative beliefs can be obtained in the DSmT framework.

Since one wants to compute directly with words (CW) instead of numbers, we define without loss of generality a finite set of linguistic labels L = {L1, L2, ..., Ln}, where n ≥ 2 is an integer. L is endowed with a total order relationship ≺, so that L1 ≺ L2 ≺ ... ≺ Ln. To work on a linguistic set closed under the linguistic addition and multiplication operators, one extends L with two extreme values L0 and Ln+1, where L0 corresponds to the minimal qualitative value and Ln+1 corresponds to the maximal qualitative value, in such a way that L0 ≺ L1 ≺ L2 ≺ ... ≺ Ln ≺ Ln+1, where ≺ means inferior to, or less, or smaller (in quality) than, etc. Therefore, one works on the extended ordered set L̃ of qualitative values L̃ = {L0, L1, L2, ..., Ln, Ln+1}. The qualitative addition and multiplication of linguistic labels, which are commutative, associative, and unitary operators, are defined in Chapter 10 of [4], to which we refer for details and examples.
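The label arithmetic can be sketched in code by working on the label indices rather than the labels themselves. The sketch below assumes the saturated addition Li + Lj = Lmin(i+j, n+1) and a min-based multiplication, operator choices that appear in parts of the DSmT literature; the exact definitions used here are the ones given in Chapter 10 of [4], which should be taken as authoritative.

```python
# Sketch of a label arithmetic on the extended ordered set
# {L0, L1, ..., Ln, L(n+1)}, representing each label Li by its index i.
# ASSUMPTION: saturated addition and min-based multiplication; the exact
# operator definitions are those of Chapter 10 in [4].

N = 4  # n = 4, so labels run L0 .. L5, with L5 = L(n+1) the maximal value

def q_add(i: int, j: int) -> int:
    """Qualitative addition: Li + Lj = L(i+j), saturated at L(n+1)."""
    return min(i + j, N + 1)

def q_mul(i: int, j: int) -> int:
    """Qualitative multiplication: Li x Lj = L(min(i, j)) (one common choice)."""
    return min(i, j)

# Example: L2 + L4 saturates at the maximal label L5, while L2 x L4 = L2.
print(q_add(2, 4))  # 5
print(q_mul(2, 4))  # 2
```

Saturation at L(n+1) is what keeps the extended set closed under addition, which is exactly why the two extreme labels L0 and L(n+1) were introduced above.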

Let’s consider a finite and discrete frame of discernment Θ = {θ1, ..., θn} for the given problem under consideration, in which the true solution must lie; its model M(Θ), defined by the set of integrity constraints on the elements of Θ (i.e. free-DSm model, hybrid model, or Shafer’s model); and its corresponding hyper-power set, denoted D^Θ, that is, Dedekind’s lattice on Θ [3], which is nothing but the space of propositions generated with the ∩ and ∪ operators and the elements of Θ, taking into account the integrity constraints (if any) of the model. A qualitative basic belief assignment (qbba), also called a qualitative belief mass, is a mapping qm(.) : D^Θ → L̃. In the sequel, all qualitative masses not explicitly specified in the examples are by default (and for notation convenience) assumed to take the minimal linguistic value L0.
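A qbba can be represented in code as a mapping from propositions to label indices, with unspecified propositions defaulting to L0 as stated above. This is a minimal sketch with a hypothetical two-element frame; note that encoding propositions as sets of singletons captures only the union-generated part of D^Θ, not the intersections that a full hyper-power set contains.

```python
from collections import defaultdict

# Hypothetical frame Theta = {th1, th2}; propositions encoded as frozensets.
# (Simplification: the full hyper-power set D^Theta also contains
# intersections, which this set-of-singletons encoding does not capture.)
N = 4  # labels L0 .. L5

qm = defaultdict(int)               # default index 0, i.e. the minimal label L0
qm[frozenset({"th1"})] = 2          # qm(th1) = L2
qm[frozenset({"th1", "th2"})] = 3   # qm(th1 U th2) = L3

print(qm[frozenset({"th2"})])  # 0 -> unspecified masses default to L0
```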

There is no way to define a normalized qm(.), but a qualitative quasi-normalization [4] is nevertheless possible if needed, as follows:

a) If the previously defined labels L0, L1, L2, ..., Ln, Ln+1 from the set L̃ are equidistant, i.e. the (linguistic) distance between any two consecutive labels Lj and Lj+1 is the same for any j ∈ {0, 1, 2, ..., n}, then one can make an isomorphism between L̃ and a set of sub-unitary numbers from the interval [0, 1] in the following way: Li = i/(n + 1) for all i ∈ {0, 1, 2, ..., n + 1}, and therefore the interval [0, 1] is divided into n + 1 equal parts. Hence, a qualitative mass qm(X) = Li is equivalent to a quantitative mass m(X) = i/(n + 1), which is normalized if

Σ_{X ∈ D^Θ} m(X) = Σ_X i_X/(n + 1) = 1,

but this one is equivalent to

Σ_{X ∈ D^Θ} qm(X) = Σ_X L_{i_X} = L_{n+1}.

In this case we have a qualitative normalization, similar to the (classical) numerical normalization.
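Under the equidistant assumption of case a), checking (quasi-)normalization reduces to integer arithmetic on the label indices; a small illustrative sketch, with an arbitrary n chosen for the example:

```python
# Under the equidistant isomorphism Li = i/(n+1), a qbba is (quasi-)normalized
# exactly when its label indices sum to n+1, i.e. when the corresponding
# quantitative masses sum to 1. The labels below are illustrative only.
N = 4  # labels L0 .. L5, with L5 = L(n+1)

def is_quasi_normalized(indices):
    """qm is quasi-normalized when the label indices sum to n + 1."""
    return sum(indices) == N + 1

qm_indices = [2, 2, 1]  # qm values L2, L2, L1 on three focal propositions
print(is_quasi_normalized(qm_indices))       # True: 2 + 2 + 1 = 5 = n + 1
print(sum(i / (N + 1) for i in qm_indices))  # ~1.0 -> quantitative normalization
```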

b) But if the previously defined labels L0, L1, L2, ..., Ln, Ln+1 from the set L̃ are not equidistant, so that the interval [0, 1] cannot be split into equal parts according to the distribution of the labels, then it still makes sense to consider a qualitative quasi-normalization, i.e. an approximation of the (classical) numerical normalization, for the qualitative masses in the same way:

Σ_{X ∈ D^Θ} qm(X) = L_{n+1}.

In general, if we don’t know whether the labels are equidistant or not, we say that a qualitative mass is quasi-normalized when the above summation holds. So, let’s suppose one has a prior basic belief assignment (bba) m(.) defined on the hyper-power set D^Θ, and one finds out (or one assumes) that the truth is in a given element A ∈ D^Θ, i.e. A has really occurred or is supposed to have occurred. The problem of belief conditioning is how to properly revise the prior bba m(.) with the knowledge of the occurrence of A. Simply stated: how to compute m(.|A) from the knowledge available, that is, from any prior bba m(.) and A?
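For orientation, the classical quantitative answer to this question can be sketched as follows. This is an illustration of Dempster-style conditioning on an ordinary power set (propositions as frozensets over a hypothetical frame), not one of the BCR/QBCR rules, which redistribute the masses differently.

```python
from collections import defaultdict

def dempster_conditioning(m, A):
    """Classical (Shafer/Dempster-style) conditioning m(.|A) on a power-set bba:
    transfer each mass m(X) to X ∩ A, then renormalize away the mass that
    falls on the empty set. A quantitative sketch only, not a BCR/QBCR rule."""
    raw = defaultdict(float)
    for X, weight in m.items():
        raw[X & A] += weight
    conflict = raw.pop(frozenset(), 0.0)  # mass falling outside A
    k = 1.0 - conflict
    return {B: w / k for B, w in raw.items()}

# Hypothetical bba on the frame {a, b, c}, conditioned on A = {a, b}
m = {frozenset("ab"): 0.5, frozenset("c"): 0.3, frozenset("abc"): 0.2}
A = frozenset("ab")
print(dempster_conditioning(m, A))
# The 0.3 on {c} conflicts with A and is renormalized away;
# all remaining mass concentrates on {a, b}.
```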

Until very recently, the most commonly used conditioning rule for belief revision was the one proposed by Shafer, based on Dempster’s rule of combination.

This content is AI-processed based on ArXiv data.
