A new probabilistic transformation of belief mass assignment



Jean Dezert, ONERA, The French Aerospace Lab, 29 Av. Division Leclerc, 92320 Châtillon, France. Email: jean.dezert@onera.fr
Florentin Smarandache, Chair of Math. & Sciences Dept., University of New Mexico, 200 College Road, Gallup, NM 87301, U.S.A. Email: smarand@unm.edu

Abstract. In this paper we propose, within the Dezert-Smarandache Theory (DSmT) framework, a new probabilistic transformation, called DSmP, in order to build a subjective probability measure from any basic belief assignment defined on any model of the frame of discernment. Several examples are given to show how the DSmP transformation works, and we compare it to the main transformations proposed in the literature so far. We show the advantages of DSmP over classical transformations in terms of Probabilistic Information Content (PIC). The direct extension of this transformation for dealing with qualitative belief assignments is also presented.

Keywords: DSmT, subjective probability, Probabilistic Information Content, qualitative belief.

I. INTRODUCTION AND MOTIVATION

In the theories of belief functions, Dempster-Shafer Theory (DST) [4], the Transferable Belief Model (TBM) [11] and DSmT [6], [7], the mapping from the belief domain to the probability domain is a controversial issue. The original purpose of such mappings was to make (hard) decisions but, contrary to a widespread erroneous claim, this is not their only use nowadays. Probabilistic transformations of belief mass assignments are actually very useful in modern multitarget multisensor tracking systems (and in other systems) where one deals with soft decisions, i.e. where all possible solutions are kept for state estimation together with their likelihoods.
For example, in a Multiple Hypotheses Tracker using both kinematical and attribute data, one needs to compute all probability values for deriving the likelihoods of the data association hypotheses, and then to mix them together to estimate the states of the targets. It is therefore very relevant to use a mapping that provides a high probabilistic information content (PIC), since better performance can then be expected. This justifies the theoretical work proposed in this paper. A classical transformation is the so-called pignistic probability [10], denoted BetP, which offers a good compromise between the maximum of credibility Bel and the maximum of plausibility Pl for decision support. Unfortunately, BetP does not provide the highest PIC in general, as pointed out by Sudano [12]-[14]. We propose hereafter a new generalized pignistic transformation, denoted DSmP, which is justified by the maximization of the PIC criterion. An extension of this transformation to the qualitative domain is also presented.

II. PIGNISTIC PROBABILITIES

The basic idea of the pignistic transformation [9], [10] consists in transferring the positive mass of belief of each non-specific element onto the singletons involved in that element, split by the cardinality of the proposition, when working with normalized basic belief assignments (bba's). The (classical) pignistic probability in the TBM framework is given by BetP(∅) = 0 and, for all X ∈ 2^Θ \ {∅}, by

  BetP(X) = \sum_{Y ∈ 2^Θ, Y ≠ ∅} \frac{|X ∩ Y|}{|Y|} \frac{m(Y)}{1 − m(∅)}   (1)

where 2^Θ is the power set of the finite and discrete frame Θ under Shafer's model, i.e. all elements of Θ are assumed truly exclusive. (We assume, of course, that m(.) is a non-degenerate bba, i.e. m(∅) ≠ 1.)
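As an illustrative sketch of ours (not part of the paper), formula (1) under Shafer's model can be coded as follows; encoding subsets as Python frozensets is our own convention:

```python
def betp(m, frame):
    """Pignistic transformation (1) under Shafer's model, evaluated on singletons.
    m: dict mapping frozenset subsets of `frame` to masses (may include m(emptyset))."""
    k = 1.0 - m.get(frozenset(), 0.0)            # normalization 1 - m(emptyset)
    p = {}
    for x in frame:
        # each focal set Y containing x gives x the share m(Y)/|Y|
        p[x] = sum(mass / len(y) for y, mass in m.items() if y and x in y) / k
    return p

# Example 1 of the paper: m(A)=0.3, m(B)=0.1, m(A∪B)=0.6
m = {frozenset({'A'}): 0.3, frozenset({'B'}): 0.1, frozenset({'A', 'B'}): 0.6}
print(betp(m, ['A', 'B']))   # BetP(A)=0.6, BetP(B)=0.4, as in Table II
```

The singleton A receives its own mass 0.3 plus half of the ignorance mass 0.6, which recovers the BetP row of Table II.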
In Shafer's approach, m(∅) = 0 and formula (1) can be rewritten for any singleton θ_i ∈ Θ as

  BetP(θ_i) = \sum_{Y ∈ 2^Θ, θ_i ⊆ Y} \frac{1}{|Y|} m(Y) = m(θ_i) + \sum_{Y ∈ 2^Θ, θ_i ⊂ Y} \frac{1}{|Y|} m(Y)   (2)

This transformation has been generalized in DSmT to any regular bba m(.) : G^Θ → [0, 1] (i.e. such that m(∅) = 0 and \sum_{X ∈ G^Θ} m(X) = 1) and to any model of the frame (free DSm model, hybrid DSm model, and Shafer's model as well) [6]. It is given by BetP(∅) = 0 and, for all X ∈ G^Θ \ {∅}, by

  BetP(X) = \sum_{Y ∈ G^Θ} \frac{C_M(X ∩ Y)}{C_M(Y)} m(Y)   (3)

where G^Θ corresponds to the hyper-power set including all the integrity constraints of the model (if any): G^Θ = 2^Θ if one adopts Shafer's model for Θ, and G^Θ = D^Θ (Dedekind's lattice) if one adopts the free DSm model for Θ [6]. C_M(Y) denotes the DSm cardinal of the set Y, i.e. the number of parts of Y in the Venn diagram of the model M of the frame Θ under consideration [6] (Chap. 7). Formula (3) reduces to (1) when G^Θ reduces to the classical power set 2^Θ, i.e. when one adopts Shafer's model.

III. SUDANO'S PROBABILITIES

Recently, Sudano has proposed interesting alternatives to BetP, denoted PrPl, PrNPl, PraPl, PrBel and PrHyb, all defined in the DST framework [15]. Sudano uses different kinds of mappings: proportional to the plausibility, to the normalized plausibility, to all plausibilities, to the belief, or hybrid. For notational convenience and simplicity, we use a different but equivalent notation from the one in [15]. PrPl and PrBel are defined for all ∅ ≠ X ∈ 2^Θ by:

  PrPl(X) = Pl(X) · \sum_{Y ∈ 2^Θ, X ⊆ Y} \frac{m(Y)}{CS[Pl(Y)]}   (4)

  PrBel(X) = Bel(X) · \sum_{Y ∈ 2^Θ, X ⊆ Y} \frac{m(Y)}{CS[Bel(Y)]}   (5)

where the compound-to-sum of singletons (CS) operator of any function f(.) (f(.) must be replaced by Pl(.) in (4) and by Bel(.) in (5)) is defined by [12]:

  CS[f(Y)] ≜ \sum_{Y_i ∈ 2^Θ, |Y_i| = 1, ∪_i Y_i = Y} f(Y_i)

PrNPl, PraPl and PrHyb are given by [12], [15]:

• a mapping proportional to the normalized plausibility:

  PrNPl(X) = \frac{1}{Δ} \sum_{Y ∈ 2^Θ, Y ∩ X ≠ ∅} m(Y) = \frac{1}{Δ} Pl(X)   (6)

where Δ is a normalization factor;

• a mapping proportional to all plausibilities:

  PraPl(X) = Bel(X) + ε · Pl(X)   (7)

with ε ≜ (1 − \sum_{Y ∈ 2^Θ} Bel(Y)) / \sum_{Y ∈ 2^Θ} Pl(Y);

• a hybrid transformation:

  PrHyb(X) = PraPl(X) · \sum_{Y ∈ 2^Θ, X ⊆ Y} \frac{m(Y)}{CS[PraPl(Y)]}   (8)

IV. CUZZOLIN'S INTERSECTION PROBABILITY

In 2007, a new transformation was proposed in [1] by Cuzzolin in the framework of DST. From a geometric interpretation of Dempster's rule, an Intersection Probability measure was proposed from the proportional repartition of the Total Non-Specific Mass (TNSM), i.e. the mass committed to partial and total ignorances (disjunctions of elements of the frame), by each contribution of the non-specific masses involved in it. For notational convenience, we denote it CuzzP in the sequel. CuzzP(.) is defined on any finite and discrete frame Θ = {θ_1, ..., θ_n}, n ≥ 2, satisfying Shafer's model, by

  CuzzP(θ_i) = m(θ_i) + \frac{Δ(θ_i)}{\sum_{j=1}^{n} Δ(θ_j)} × TNSM   (9)

with Δ(θ_i) ≜ Pl(θ_i) − m(θ_i) and

  TNSM = 1 − \sum_{j=1}^{n} m(θ_j) = \sum_{A ∈ 2^Θ, |A| > 1} m(A)   (10)

CuzzP is, however, not appealing, for the following reasons:

1) Although (9) does not explicitly include Dempster's rule, its geometrical justification [1], [2] is strongly conditioned by the acceptance of Dempster's rule as the fusion operator for belief functions.
This is a dogmatic point of view we disagree with, since it has been recognized for many years by different experts of the AI community that other fusion rules can offer better performance, especially in cases where highly conflicting sources are involved.

2) Some parts of the masses of a partial ignorance, say A, involved in the TNSM are also transferred to singletons, say θ_i ∈ Θ, which are not included in A (i.e. such that {θ_i} ∩ A = ∅). Such a transfer does not make sense in our point of view. To be clearer, let us take Θ = {A, B, C} and m(.) defined on its power set with all masses strictly positive. In that case, m(A ∪ B) > 0 counts in the TNSM, and thus a bit of it is redistributed back to C with the ratio Δ(C)/(Δ(A) + Δ(B) + Δ(C)) through TNSM > 0. There is no solid reason for committing m(A ∪ B) partially to C, since only A and B are involved in that partial ignorance. A similar remark holds for the partial redistribution of m(A ∪ C) > 0.

3) CuzzP is not defined when m(.) is a probabilistic mass, because one gets a 0/0 indetermination. This remark is important only from the mathematical point of view.

V. A NEW GENERALIZED PIGNISTIC TRANSFORMATION

Our new mapping, denoted DSmP, is straightforward, and different from Sudano's and Cuzzolin's mappings, which are more refined but, in our opinion, less interesting than what we present here. The basic idea of DSmP consists in a new way of proportionalizing the mass of each partial ignorance, such as A_1 ∪ A_2 or A_1 ∪ (A_2 ∩ A_3) or (A_1 ∩ A_2) ∪ (A_3 ∩ A_4), etc., and the mass of the total ignorance A_1 ∪ A_2 ∪ ... ∪ A_n, to the elements involved in those ignorances. This new transformation takes into account both the values of the masses and the cardinality of the elements in the proportional redistribution process.
We first present the general formula for this new transformation; numerical examples and comparisons with the other transformations are given in the next sections.

A. The DSmP formula

Consider a discrete frame Θ with a given model (free DSm model, hybrid DSm model or Shafer's model). The DSmP mapping is defined by DSmP_ε(∅) = 0 and, for all X ∈ G^Θ \ {∅}, by

  DSmP_ε(X) = \sum_{Y ∈ G^Θ} \frac{\sum_{Z ⊆ X ∩ Y, C(Z)=1} m(Z) + ε · C(X ∩ Y)}{\sum_{Z ⊆ Y, C(Z)=1} m(Z) + ε · C(Y)} m(Y)   (11)

where ε ≥ 0 is a tuning parameter and G^Θ corresponds to the hyper-power set including all the integrity constraints (if any) of the model M; C(X ∩ Y) and C(Y) denote the DSm cardinals of the sets X ∩ Y and Y respectively (we omit the index of the model M for notational convenience; the formulation of (11) for the case of singletons θ_i of Θ is given in [8]). The parameter ε allows one to reach the maximum PIC value of the approximation of m(.) by a subjective probability measure: the smaller ε, the bigger the PIC value. In some particular degenerate cases, however, the DSmP_{ε=0} values cannot be derived; the DSmP_{ε>0} values can nevertheless always be derived by choosing ε as a very small positive number, say ε = 1/1000, in order to be as close as we want to the maximum of the PIC (see the next sections for details and examples). When ε = 1 and the masses of all elements Z having C(Z) = 1 are zero, (11) reduces to (3), i.e. DSmP_{ε=1} = BetP. The passage from a free DSm model to Shafer's model involves the passage from one structure to another, and the cardinals in formula (11) change as well.

B. Advantages of DSmP

DSmP works for all models (free, hybrid and Shafer's). In order to apply the classical BetP, CuzzP or Sudano's mappings, we first need to refine the frame (in the cases where this is possible!) in order to work with Shafer's model, and then to apply their formulas.
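To make formula (11) concrete, here is a sketch of ours for the special case of Shafer's model, where subsets are ordinary sets and C(.) is the ordinary cardinality (the frozenset encoding is our own convention):

```python
def dsmp(m, frame, eps):
    """DSmP transformation (11) under Shafer's model, evaluated on singletons.
    m: dict mapping frozenset subsets of `frame` to masses; eps: tuning parameter."""
    p = {}
    for x in frame:
        total = 0.0
        for y, mass in m.items():
            inter = frozenset({x}) & y            # X ∩ Y for the singleton X = {x}
            # numerator / denominator of (11): singleton masses plus eps * cardinality
            num = sum(m.get(frozenset({z}), 0.0) for z in inter) + eps * len(inter)
            den = sum(m.get(frozenset({z}), 0.0) for z in y) + eps * len(y)
            if den > 0:                           # den = 0 is the degenerate eps = 0 case
                total += num / den * mass
        p[x] = total
    return p

# Example 1 of the paper: m(A)=0.3, m(B)=0.1, m(A∪B)=0.6
m = {frozenset({'A'}): 0.3, frozenset({'B'}): 0.1, frozenset({'A', 'B'}): 0.6}
print(dsmp(m, ['A', 'B'], 0.0))     # DSmP_0:     A -> 0.75,   B -> 0.25   (Table II)
print(dsmp(m, ['A', 'B'], 0.001))   # DSmP_0.001: A ~ 0.7492,  B ~ 0.2508  (Table II)
```

With ε = 0 the ignorance mass 0.6 is split proportionally to the singleton masses (0.3 and 0.1), giving A an extra 0.45, which matches the DSmP_{ε=0} row of Table II.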
In the cases where refinement makes sense, one can then apply the other subjective probabilities on the refined frame. DSmP works on the refined frame as well, and gives the same result as it does on the non-refined frame. Thus DSmP with ε > 0 works on any model and so is very general and appealing. It is a combination of PrBel and BetP. PrBel performs a redistribution of an ignorance mass to the singletons involved in that ignorance proportionally to the singleton masses, while BetP redistributes an ignorance mass to the singletons involved in that ignorance proportionally to the singleton cardinals. PrBel does not work when the masses of all singletons involved in an ignorance are null, since this gives the indetermination 0/0; moreover, when at least one singleton mass involved in an ignorance is zero, that singleton does not receive any mass from the redistribution even though it was involved in the ignorance, which is not fair/good. DSmP solves this problem of PrBel by redistributing the ignorance mass with respect to both the singleton masses and the singleton cardinals at the same time. Now, if all the masses of the singletons involved in all the ignorances are different from zero, then we can take ε = 0, and DSmP coincides with PrBel; both of them then give the best result, i.e. the best PIC value.

PrNPl is not satisfactory, since it yields an abnormal behavior. Indeed, in any model, when a bba m(.) is transformed into a probability, the masses of the ignorances are logically transferred to the masses of the elements of cardinal 1 (in Shafer's model, these elements are the singletons). Thus the resulting probability of an element whose cardinal is 1 should be greater than or equal to the mass of that element: if A ∈ G^Θ and C(A) = 1, then P(A) ≥ m(A) for any probability transformation P(.). This legitimate property is not satisfied by PrNPl: for example, if we consider Θ = {A, B, C} with m(A) = 0.2, m(B) = m(C) = 0 and m(B ∪ C) = 0.8, one obtains PrNPl(A) = 0.1112 < m(A) = 0.2. It is abnormal that the singleton A loses mass when m(.) is transformed into a subjective probability.

In summary, DSmP is an improvement of all of Sudano's and Cuzzolin's formulas and of BetP, in the sense that DSmP makes a mathematically more accurate redistribution of the ignorance masses to the singletons involved in the ignorances. DSmP and BetP work in both theories, DST (i.e. Shafer's model) and DSmT (i.e. free or hybrid models). In order to use Sudano's and Cuzzolin's mappings in DSmT models, we have to refine the frame (see Example 5).

VI. THE PROBABILISTIC INFORMATION CONTENT (PIC)

Following Sudano's approach [12], [13], [15], we adopt the Probabilistic Information Content (PIC) criterion as a metric depicting the strength of a critical decision made from a specific probability distribution. It is an essential measure in any threshold-driven automated decision system. The PIC is the dual of the normalized Shannon entropy. A PIC value of one indicates total knowledge for making a correct decision (one hypothesis has a probability value of one and the others have zero). A PIC value of zero indicates that the knowledge for making a correct decision does not exist (all the hypotheses have equal probability values), i.e. one has maximal entropy. The PIC is used in our analysis to sort the performances of the different pignistic transformations through several numerical examples.
We first recall what Shannon entropy and the PIC measure are, and their tight relationship.

A. Shannon entropy

The Shannon entropy, usually expressed in bits (binary digits), of a probability measure P{.} over a discrete finite set Θ = {θ_1, ..., θ_n} is defined by [5]:

  H(P) ≜ − \sum_{i=1}^{n} P{θ_i} \log_2(P{θ_i})   (12)

(with the common convention 0 log_2 0 = 0). H(P) is maximal for the uniform probability distribution over Θ, i.e. when P{θ_i} = 1/n for i = 1, 2, ..., n; in that case one gets H(P) = H_max = −\sum_{i=1}^{n} (1/n) \log_2(1/n) = \log_2(n). H(P) is minimal for a totally deterministic probability, i.e. for any P{.} such that P{θ_i} = 1 for some i ∈ {1, 2, ..., n} and P{θ_j} = 0 for j ≠ i. H(P) measures the randomness carried by any discrete probability P{.}.

B. The PIC metric

The Probabilistic Information Content (PIC) of a probability measure P{.} associated with a probabilistic source over a discrete finite set Θ = {θ_1, ..., θ_n} is defined by [13]:

  PIC(P) = 1 + \frac{1}{H_max} \sum_{i=1}^{n} P{θ_i} \log_2(P{θ_i})   (13)

The PIC is nothing but the dual of the normalized Shannon entropy and is thus actually unitless. PIC(P) takes its values in [0, 1]. PIC(P) is maximal, i.e. PIC_max = 1, for any deterministic probability, and it is minimal, i.e. PIC_min = 0, for the uniform probability over the frame Θ. The simple relationships between H(P) and PIC(P) are PIC(P) = 1 − H(P)/H_max and H(P) = H_max · (1 − PIC(P)).

VII. EXAMPLES AND COMPARISONS ON A 2D FRAME

Due to the space limitation constraint, all details of the derivations are voluntarily omitted here; they will appear in [8]. In this section we work with the 2D frame Θ = {A, B}.

A. Example 1 (Shafer's model and a general source)

Since one assumes Shafer's model, G^Θ = 2^Θ = {∅, A, B, A ∪ B}.
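Before looking at the tables, definitions (12) and (13) can be checked numerically; this is a sketch of ours, not from the paper:

```python
from math import log2

def pic(p):
    """Probabilistic Information Content (13) of a probability vector p."""
    h_max = log2(len(p))                          # entropy of the uniform distribution
    h = -sum(x * log2(x) for x in p if x > 0)     # Shannon entropy (12), with 0*log2(0) = 0
    return 1 - h / h_max                          # PIC = 1 - H/H_max

print(pic([1.0, 0.0]))                # deterministic probability: PIC = 1
print(pic([0.5, 0.5]))                # uniform probability: PIC = 0
print(round(pic([0.75, 0.25]), 4))    # 0.1887, the PIC of PrBel and DSmP_0 in Table II
```

The value 0.1887 for the vector (0.75, 0.25) is exactly the PIC reported for PrBel and DSmP_{ε=0} in Table II below.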
The non-Bayesian quantitative belief mass is given in Table I. Table II presents the results of the different mappings and their PIC values, sorted in increasing order. One sees that DSmP_{ε→0} provides the same result as PrBel, and that PIC(DSmP_{ε→0}) is greater than the PIC values obtained with PrNPl, BetP, CuzzP, PrPl and PraPl.

         A     B     A ∪ B
  m(.)   0.3   0.1   0.6

  TABLE I. Quantitative inputs for Example 1.

                       A        B        PIC(.)
  PrNPl(.)             0.5625   0.4375   0.0113
  BetP(.)              0.6000   0.4000   0.0291
  CuzzP(.)             0.6000   0.4000   0.0291
  PrPl(.)              0.6375   0.3625   0.0553
  PraPl(.)             0.6375   0.3625   0.0553
  PrHyb(.)             0.6825   0.3175   0.0984
  DSmP_{ε=0.001}(.)    0.7492   0.2508   0.1875
  PrBel(.)             0.7500   0.2500   0.1887
  DSmP_{ε=0}(.)        0.7500   0.2500   0.1887

  TABLE II. Results for Example 1.

B. Example 2 (Shafer's model and the totally ignorant source)

Let us assume Shafer's model and the vacuous bba characterizing the totally ignorant source, i.e. m(A ∪ B) = 1. It can be verified that all the mappings coincide with the uniform probability measure over the singletons of Θ, except PrBel, which is mathematically not defined in that case. This result can easily be proved for any size of the frame Θ with |Θ| > 2.

C. Example 3 (Shafer's model and a probabilistic source)

Let us assume Shafer's model and see what happens when applying all the transformations to a probabilistic source which commits a belief mass only to the singletons of 2^Θ, i.e. a Bayesian mass [4]. (This has obviously no practical interest, since the source already provides a probability measure; nevertheless, it is very interesting to see the theoretical behavior of the transformations in such a case.) It is intuitively expected that all the transformations are idempotent when dealing with probabilistic sources, since there is actually no reason or need to modify m(.) (the input mass) to obtain a new subjective probability measure: Bel(.) associated with m(.) is already a probability measure. So, if we consider for example the uniform Bayesian mass defined by m_u(A) = m_u(B) = 1/2, it is very easy to verify that in this case almost all the transformations coincide with the (probabilistic) input mass, as expected, so that the idempotency property is satisfied. Only Cuzzolin's transformation fails to satisfy this property, because in the CuzzP(.) formula (9) one gets a 0/0 indeterminacy, since all Δ(.) = 0 in (9). This remark is valid whatever the dimension of the frame Θ is, and for any Bayesian mass (not only for the uniform belief mass).

D. Example 4 (Shafer's model and a non-Bayesian mass)

Let us assume Shafer's model and the non-Bayesian mass (more precisely, the simple support mass) given in Table III. We summarize in Table IV the results obtained with all the transformations. One sees that PIC(DSmP_{ε→0}) is maximal among all the PIC values. PrBel(.) does not work correctly, since it runs into a division by zero; we use the acronym NaN, standing for Not a Number. Even overcoming this, PrBel does not make a fair redistribution of the ignorance mass m(A ∪ B) = 0.6, because B does not receive anything from the mass 0.6, although B is involved in the ignorance A ∪ B. All of m(A ∪ B) = 0.6 was unfairly redistributed to A only.

         A     B   A ∪ B
  m(.)   0.4   0   0.6

  TABLE III. Quantitative inputs for Example 4.

                       A        B        PIC(.)
  PrBel(.)             1        NaN      NaN
  PrNPl(.)             0.6250   0.3750   0.0455
  BetP(.)              0.7000   0.3000   0.1187
  CuzzP(.)             0.7000   0.3000   0.1187
  PrPl(.)              0.7750   0.2250   0.2308
  PraPl(.)             0.7750   0.2250   0.2308
  PrHyb(.)             0.8650   0.1350   0.4291
  DSmP_{ε=0.001}(.)    0.9985   0.0015   0.9838
  DSmP_{ε=0}(.)        1        0        1

  TABLE IV. Results for Example 4.
The best result here is an adequate probability, not the biggest PIC. This is because P(B) deserves to receive some mass from m(A ∪ B), so the most correct result is given by DSmP_{ε=0.001} in Table IV (of course, any other very small positive value of ε could be chosen). Whenever a singleton has zero mass but is involved in an ignorance whose mass is not zero, ε (in the DSmP formula (11)) should be taken different from zero. (Concerning the NaN entries: we could also use the standard "N/A", standing for "does not apply". Since the direct derivation of PrBel(B) cannot be done from formula (5) because of the undefined form 0/0, we could however force PrBel(B) = 0, since PrBel(B) = 1 − PrBel(A) = 1 − 1 = 0, and consequently indirectly take PIC(PrBel) = 1.)

E. Example 5 (Free DSm model)

Let us assume the free DSm model (i.e. A ∩ B ≠ ∅) and the generalized mass given in Table V. In the case of free-DSm (or hybrid-DSm) models, the pignistic probability and DSmP can be derived directly from m(.) without refining the frame Θ, whereas Sudano's and Cuzzolin's probabilities cannot be derived directly from their formulas (4)-(9) for such models. However, they can be obtained indirectly after a refinement of the frame Θ into a frame Θ_ref satisfying Shafer's model. More precisely, instead of working directly on the 2D frame Θ = {A, B} with m(.) given in Table V, we need to work on the 3D frame Θ_ref = {A', B', C'}, with A' ≜ A \ (A ∩ B), B' ≜ B \ (A ∩ B) and C' ≜ A ∩ B, satisfying Shafer's model, with the equivalent bba m(.) defined as in Table VI. The results are then given in Table VII. One sees that PIC(DSmP_{ε→0}) is the maximal value. PrBel does not work correctly, because it cannot be directly evaluated for A and B, since the underlying PrBel(A') and PrBel(B') are mathematically undefined in this case.
If one works on the refined frame Θ_ref and applies the DSmP mapping to the bba m(.) defined in Table VI, one naturally obtains the same results for DSmP as those given in Table VII. Of course, the results of BetP in Table VII obtained directly with formula (3) are the same as those obtained with (1) on Θ_ref. The verification is left to the reader.

         A ∩ B   A     B     A ∪ B
  m(.)   0.4     0.2   0.1   0.3

  TABLE V. Quantitative inputs for Example 5.

         C'    A' ∪ C'   B' ∪ C'   A' ∪ B' ∪ C'
  m(.)   0.4   0.2       0.1       0.3

  TABLE VI. Quantitative inputs on the refined frame Θ_ref.

                       A        B        A ∩ B    PIC(.)
  PrNPl(.)             0.7895   0.7368   0.5263   0.0741
  CuzzP(.)             0.8400   0.8000   0.6400   0.1801
  BetP(.)              0.8500   0.8000   0.6500   0.1931
  PraPl(.)             0.8736   0.8421   0.7157   0.2789
  PrPl(.)              0.9083   0.8544   0.7627   0.3570
  PrHyb(.)             0.9471   0.9165   0.8636   0.5544
  DSmP_{ε=0.001}(.)    0.9990   0.9988   0.9978   0.9842
  PrBel(.)             NaN      NaN      1        1
  DSmP_{ε=0}(.)        1        1        1        1

  TABLE VII. Results for Example 5.

VIII. EXAMPLES ON A 3D FRAME

We work hereafter on the 3D frame Θ = {A, B, C}.

A. Example 6 (Shafer's model and a non-Bayesian mass)

This example is drawn from [15]. Let us assume Shafer's model and the non-Bayesian belief mass given by m(A) = 0.35, m(B) = 0.25, m(C) = 0.02, m(A ∪ B) = 0.20, m(A ∪ C) = 0.07, m(B ∪ C) = 0.05 and m(A ∪ B ∪ C) = 0.06. The results of the mappings are given in Table VIII. One sees that DSmP_{ε→0} provides the same result as PrBel, which here corresponds to the best result in terms of the PIC metric.

                       A        B        C        PIC(.)
  PrNPl(.)             0.4722   0.3889   0.1389   0.0936
  CuzzP(.)             0.5029   0.3937   0.1034   0.1377
  BetP(.)              0.5050   0.3950   0.1000   0.1424
  PraPl(.)             0.5294   0.3978   0.0728   0.1861
  PrPl(.)              0.5421   0.4005   0.0574   0.2149
  PrHyb(.)             0.5575   0.4019   0.0406   0.2517
  DSmP_{ε=0.001}(.)    0.5665   0.4037   0.0298   0.2783
  PrBel(.)             0.5668   0.4038   0.0294   0.2793
  DSmP_{ε=0}(.)        0.5668   0.4038   0.0294   0.2793

  TABLE VIII. Results for Example 6.

B. Example 7 (Shafer's model and a non-Bayesian mass)

Let us assume Shafer's model and change the non-Bayesian input mass a bit, by taking m(A) = 0.10, m(B) = 0, m(C) = 0.20, m(A ∪ B) = 0.30, m(A ∪ C) = 0.10, m(B ∪ C) = 0 and m(A ∪ B ∪ C) = 0.30. The results of the mappings are given in Table IX. One sees that DSmP_{ε→0} provides a better PIC value than all the other mappings, since PrBel is mathematically undefined. If one artificially takes PrBel(B) = 0, one gets the same result as with DSmP_{ε→0}.

                       A        B        C        PIC(.)
  PrBel(.)             0.5333   NaN      0.4667   NaN
  PrNPl(.)             0.4000   0.3000   0.3000   0.0088
  CuzzP(.)             0.3880   0.2470   0.3650   0.0163
  BetP(.)              0.4000   0.2500   0.3500   0.0164
  PraPl(.)             0.3800   0.2100   0.4100   0.0342
  PrPl(.)              0.4486   0.2186   0.3328   0.0368
  PrHyb(.)             0.4553   0.1698   0.3749   0.0650
  DSmP_{ε=0.001}(.)    0.5305   0.0039   0.4656   0.3500

  TABLE IX. Results for Example 7.

C. Example 8 (Hybrid DSm model)

We consider the hybrid DSm model in which all the intersections of elements of Θ are empty except A ∩ B. In this case, G^Θ reduces to the 9 elements {∅, A ∩ B, A, B, C, A ∪ B, A ∪ C, B ∪ C, A ∪ B ∪ C}. The input masses of the focal elements are given by m(A ∩ B) = 0.20, m(A) = 0.10, m(C) = 0.20, m(A ∪ B) = 0.30, m(A ∪ C) = 0.10 and m(A ∪ B ∪ C) = 0.10. In order to apply Sudano's and Cuzzolin's mappings, we need to work on the refined frame Θ_ref with Shafer's model, as depicted in Figure 1, with the masses given in Table X. The refined frame is defined, according to Figure 1, as Θ_ref = {A', B', C', D'} with A' ≜ A \ (A ∩ B), B' ≜ B \ (A ∩ B), C' ≜ C and D' ≜ A ∩ B.

         D'    A' ∪ D'   C'    A' ∪ B' ∪ D'   A' ∪ C' ∪ D'   A' ∪ B' ∪ C' ∪ D'
  m(.)   0.2   0.1       0.2   0.3            0.1            0.1

  TABLE X. Quantitative inputs on the refined frame for Example 8.

One sees from Table XI that DSmP_{ε→0} provides the best results in terms of the PIC metric.

                       A'       B'       C'       D'       PIC(.)
  PrBel(.)             NaN      NaN      0.3000   0.7000   NaN
  PrNPl(.)             0.2728   0.1818   0.1818   0.3636   0.0318
  CuzzP(.)             0.2000   0.1333   0.2667   0.4000   0.0553
  BetP(.)              0.2084   0.1250   0.2583   0.4083   0.0607
  PraPl(.)             0.1636   0.1091   0.3091   0.4182   0.0872
  PrPl(.)              0.2035   0.0848   0.2404   0.4713   0.1124
  PrHyb(.)             0.1339   0.0583   0.2656   0.5422   0.1928
  DSmP_{ε=0.001}(.)    0.0025   0.0017   0.2996   0.6962   0.5390

  TABLE XI. Results for Example 8.

[Fig. 1. Venn diagram of the refined 3D frame for Example 8 (regions A', B', C', D').]

D. Example 9 (free DSm model)

We consider the free DSm model depicted in Figure 2, with the input masses given in Table XII. To apply Sudano's and Cuzzolin's mappings, one works on the refined frame Θ_ref = {A', B', C', D', E', F', G'}, where the elements of Θ_ref are exclusive (assuming such a refinement makes physical sense), according to Figure 2. This refinement step is not necessary when using DSmP, since it works directly on the free DSm model. The PIC values obtained with the different mappings are given in Table XIII. One sees that DSmP_{ε→0} provides here again the best results in terms of PIC.

[Fig. 2. Venn diagram of the free DSm model for a 3D frame for Example 9 (regions A' to G').]

         A ∩ B ∩ C   A ∩ B   A     A ∪ B   A ∪ B ∪ C
  m(.)   0.1         0.2     0.3   0.1     0.3

  TABLE XII. Quantitative inputs for Example 9.

  Transformation       PIC(.)
  PrBel(.)             NaN
  PrNPl(.)             0.0414
  CuzzP(.)             0.0621
  PraPl(.)             0.0693
  BetP(.)              0.1176
  PrPl(.)              0.1940
  PrHyb(.)             0.2375
  DSmP_{ε=0.001}(.)    0.8986

  TABLE XIII. Results for Example 9.

IX. EXTENSION OF DSMP FOR QUALITATIVE BELIEF

A. Qualitative belief assignment qm(.)

In order to compute directly with words (linguistic labels), Smarandache and Dezert have defined in [7] a qualitative basic belief assignment qm(.) as a mapping function from G^Θ into a set of linguistic labels L = {L_0, L̃, L_{n+1}}, where L̃ = {L_1, ..., L_n} is a finite set of linguistic labels and n ≥ 2 is an integer. For example, L_1 can take the linguistic value "poor", L_2 the linguistic value "good", etc. L̃ is endowed with a total order relationship ≺, so that L_1 ≺ L_2 ≺ ... ≺ L_n. To work on a true closed linguistic set L under linguistic operators, L̃ is extended with the two extreme values L_0 = L_min and L_{n+1} = L_max, where L_0 corresponds to the minimal qualitative value and L_{n+1} to the maximal qualitative value, in such a way that L_0 ≺ L_1 ≺ L_2 ≺ ... ≺ L_n ≺ L_{n+1}, where ≺ means inferior to, less (in quality) than, smaller than, etc.

B. Operators on qualitative labels

From the extension of the isomorphism between a set of equidistant linguistic labels and a set of numbers in the interval [0, 1], one can build exact operators on linguistic labels, which makes possible the extension of all the quantitative fusion rules and probabilistic transformations into their qualitative counterparts [3]. We briefly recall the main qualitative operators (q-operators for short) on linguistic labels:

• q-addition:

  L_i + L_j = L_{i+j} if i + j < n + 1, and L_i + L_j = L_{n+1} = L_max if i + j ≥ n + 1.   (14)

The q-addition is an extension of the addition operator on equidistant labels, for which L_i + L_j = i/(n+1) + j/(n+1) = (i+j)/(n+1) = L_{i+j}.
• q-subtraction:

  L_i − L_j = L_{i−j} if i ≥ j, and L_i − L_j = −L_{j−i} if i < j,   (15)

where −L = {−L_1, −L_2, ..., −L_n, −L_{n+1}}. The q-subtraction is justified since, when i ≥ j, one has with equidistant labels L_i − L_j = i/(n+1) − j/(n+1) = (i−j)/(n+1).

• q-multiplication:

  L_i · L_j = L_{[(i·j)/(n+1)]},   (16)

where [x] means the closest integer to x (with [n + 0.5] = n + 1 for all n ∈ N). This operator is justified by the approximation of the product of equidistant labels, L_i · L_j = (i/(n+1)) · (j/(n+1)) = ((i·j)/(n+1))/(n+1). (The q-multiplication of two linguistic labels defined here extends directly to the multiplication of more than two linguistic labels; for example, the product of three linguistic labels is defined as L_i · L_j · L_k = L_{[(i·j·k)/(n+1)^2]}, etc.)

• Scalar multiplication of a linguistic label: let a be a real number. The multiplication of a linguistic label by a scalar is defined by:

  a · L_i = a·i/(n+1) ≈ L_{[a·i]} if [a·i] ≥ 0, and −L_{−[a·i]} otherwise.   (17)

• Division of linguistic labels:

a) q-division as an internal operator: let j ≠ 0; then

  L_i / L_j = L_{[(i/j)·(n+1)]} if [(i/j)·(n+1)] < n + 1, and L_{n+1} otherwise.   (18)

The first equality in (18) is well justified because, with equidistant labels, one gets L_i / L_j = (i/(n+1))/(j/(n+1)) = ((i/j)·(n+1))/(n+1) ≈ L_{[(i/j)·(n+1)]}.

b) Division as an external operator ⊘: let j ≠ 0. We define:

  L_i ⊘ L_j = i/j,   (19)

since for equidistant labels L_i ⊘ L_j = (i/(n+1))/(j/(n+1)) = i/j.

Remark: when working with labels, no matter how many operations are involved, the best (most accurate) result is obtained if only one approximation is made, and that one at the very end.

C. More operations with labels

On the interval [0, 1] we consider the labels L_i, 0 ≤ i ≤ n + 1, n ≥ 0, such that L_i = i/(n+1).
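The q-operators above can be sketched in code; this is our own illustration, assuming equidistant labels L_i = i/(n+1) and covering only (14), (16) and the non-negative case of (17):

```python
from math import floor

def closest(x):
    """[x]: closest integer to x, with [k + 0.5] = k + 1 as in the paper."""
    return floor(x + 0.5)

class Label:
    """Linguistic label L_i on the scale L_0, ..., L_{n+1}."""
    def __init__(self, i, n):
        self.i, self.n = i, n
    def __add__(self, other):            # q-addition (14), saturating at L_max
        return Label(min(self.i + other.i, self.n + 1), self.n)
    def __mul__(self, other):            # q-multiplication (16)
        return Label(closest(self.i * other.i / (self.n + 1)), self.n)
    def scale(self, a):                  # scalar multiplication (17), case [a*i] >= 0
        return Label(closest(a * self.i), self.n)
    def __repr__(self):
        return f"L{self.i}"

n = 4                                    # scale L_0, ..., L_5
print(Label(2, n) + Label(3, n))         # L5 (saturation: 2 + 3 >= n + 1)
print(Label(2, n) * Label(3, n))         # L1 ([(2*3)/5] = [1.2] = 1)
print(Label(3, n).scale(0.5))            # L2 ([0.5*3] = [1.5] = 2)
```

Note that `closest` implements the paper's round-half-up convention explicitly, since Python's built-in `round` rounds halves to even integers.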
We extend this closed interval to the right and to the left in order to be able to perform all needed label operations in any fusion calculation. Therefore $L_{n+2} = \frac{n+2}{n+1}$, $L_{n+3} = \frac{n+3}{n+1}$, ..., and respectively $L_{-i} = -L_i = -\frac{i}{n+1}$, so we get $L_{-1}, L_{-2}, \ldots$. In general, $L_i = i/(n+1)$ for any $i \in \mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$, where $\mathbb{Z}$ is the set of all integers. We now define four more operators involving labels.

1) Addition of labels with real scalars: if $r \in \mathbb{R}$ (the set of real numbers) and $i \in \mathbb{Z}$, then
\[
L_i + r = r + L_i = L_{[i + r(n+1)]}, \tag{20}
\]
where $[x]$ denotes the integer closest to $x$. This operator is justified because $L_i + r = \frac{i}{n+1} + r = \frac{i + r(n+1)}{n+1} \approx L_{[i + r(n+1)]}$, and it is needed in the qualitative extension of the DSmP formula.

2) Subtraction between labels and real scalars:
\[
L_i - r = L_{[i - r(n+1)]}, \tag{21}
\]
because $L_i - r = \frac{i}{n+1} - r = \frac{i - r(n+1)}{n+1} \approx L_{[i - r(n+1)]}$; similarly, $r - L_i = L_{[r(n+1) - i]}$ because $r - L_i = r - \frac{i}{n+1} = \frac{r(n+1) - i}{n+1} \approx L_{[r(n+1) - i]}$.

3) & 4) Powers and roots of labels:
\[
(L_i)^k = L_{[i^k/(n+1)^{k-1}]} \tag{22}
\]
for $k \in \mathbb{R}$, because $(L_i)^k = \left(\frac{i}{n+1}\right)^k = \frac{i^k/(n+1)^{k-1}}{n+1} \approx L_{[i^k/(n+1)^{k-1}]}$. If $k \in \mathbb{Q}$, the set of fractions (rational numbers), we get the radical operation on labels; therefore
\[
\sqrt[p]{L_i} = L_{[\sqrt[p]{i \cdot (n+1)^{p-1}}]}, \tag{23}
\]
because we replace $k = 1/p$ in formula (22).

D. Quasi-normalization of qm(.)

There is no way to define a normalized $qm(.)$, but a qualitative quasi-normalization [7] is nevertheless possible when considering equidistant linguistic labels, because in that case $qm(X_i) = L_i$ is equivalent to a quantitative mass $m(X_i) = i/(n+1)$, which is normalized if
\[
\sum_{X \in G^\Theta} m(X) = \sum_k i_k/(n+1) = 1,
\]
and this is equivalent to
\[
\sum_{X \in G^\Theta} qm(X) = \sum_k L_{i_k} = L_{n+1}.
\]
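The label/scalar operators (20)-(23) can be sketched in the same index-based style; the helper names (`add_scalar`, `q_pow`, etc.) are ours, not from the paper, and indices may now be any integer since the label line has been extended beyond $[0,1]$:

```python
import math

def closest(x: float) -> int:
    """[x]: the integer closest to x, with the convention [k + 0.5] = k + 1."""
    return math.floor(x + 0.5)

def add_scalar(i, r, n):
    # Eq. (20): L_i + r = L_{[i + r(n+1)]}
    return closest(i + r * (n + 1))

def sub_scalar(i, r, n):
    # Eq. (21): L_i - r = L_{[i - r(n+1)]}
    return closest(i - r * (n + 1))

def q_pow(i, k, n):
    # Eq. (22): (L_i)^k = L_{[i^k / (n+1)^(k-1)]}; k = 1/p gives the roots (23)
    return closest(i ** k / (n + 1) ** (k - 1))

# With n = 4 (labels L_0..L_5, L_i = i/5):
print(add_scalar(2, 0.6, 4))  # L_2 + 0.6 = L_{[2 + 3]} = L_5
print(q_pow(2, 2, 4))         # (L_2)^2 = L_{[4/5]} = L_1
print(q_pow(2, 0.5, 4))       # sqrt(L_2) = L_{[sqrt(10)]} = L_3
```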
In this case we have a qualitative normalization, similar to the (classical) numerical normalization. But if the labels $L_0, L_1, L_2, \ldots, L_n, L_{n+1}$ are not equidistant, so that the interval $[0,1]$ cannot be split into equal parts according to the distribution of the labels, then it makes sense to consider a qualitative quasi-normalization, i.e. an approximation of the (classical) numerical normalization for the qualitative masses in the same way:
\[
\sum_{X \in G^\Theta} qm(X) = L_{n+1}.
\]
In general, if we do not know whether the labels are equidistant or not, we say that a qualitative mass is quasi-normalized when the above summation holds.

E. Qualitative extension of DSmP

The qualitative extension of (11), denoted $qDSmP(.)$, is given by $qDSmP_\epsilon(\emptyset) = 0$ and, for all $X \in G^\Theta \setminus \{\emptyset\}$, by
\[
qDSmP_\epsilon(X) = \sum_{Y \in G^\Theta}
\frac{\displaystyle\sum_{\substack{Z \subseteq X \cap Y\\ \mathcal{C}(Z)=1}} qm(Z) + \epsilon \cdot \mathcal{C}(X \cap Y)}
     {\displaystyle\sum_{\substack{Z \subseteq Y\\ \mathcal{C}(Z)=1}} qm(Z) + \epsilon \cdot \mathcal{C}(Y)}\; qm(Y), \tag{24}
\]
where all operations in (24) are performed on labels, that is, with the q-operators on linguistic labels defined in IX-B, and not with classical operators on numbers. In the same manner, thanks to our construction of labels and qualitative operators, any quantitative fusion rule (or arithmetic expression) can be transformed into a qualitative fusion rule (or qualitative expression).

F. Derivation of PIC from qDSmP

We propose here the derivation of PIC from qualitative DSmP. Let us consider a finite space of discrete exclusive events $\Theta = \{\theta_1, \theta_2, \ldots, \theta_M\}$ and a subjective qualitative-like probability measure $qP(.) : \Theta \mapsto L = \{L_0, L_1, \ldots, L_n, L_{n+1}\}$. One then defines the entropy and PIC metrics from $qP(.)$ as
\[
H(qP) \triangleq -\sum_{i=1}^{M} qP\{\theta_i\} \log_2(qP\{\theta_i\}), \tag{25}
\]
\[
PIC(qP) = 1 + \frac{1}{H_{\max}} \cdot \sum_{i=1}^{M} qP\{\theta_i\} \log_2(qP\{\theta_i\}), \tag{26}
\]
where $H_{\max} = \log_2(M)$ and, in order to compute the logarithms, one uses the isomorphism $L_i = i/(n+1)$.
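Formulas (25)-(26) can be evaluated numerically through the same isomorphism; a minimal Python sketch (the function name `pic` is ours, not from the paper):

```python
import math

# Numeric sketch of Eqs. (25)-(26): entropy and PIC of a qualitative
# probability measure qP given by its label indices, using the isomorphism
# L_i = i/(n+1) to evaluate the logarithms.

def pic(indices, n):
    """indices: the label index i of qP{theta_i} = L_i, one per event."""
    M = len(indices)
    probs = [i / (n + 1) for i in indices]
    h = -sum(p * math.log2(p) for p in probs if p > 0)   # Eq. (25), 0*log2(0) = 0
    return 1 - h / math.log2(M)                          # Eq. (26), H_max = log2(M)

# A deterministic qP = (L_5, L_0, L_0) with n = 4 reaches the maximal PIC = 1:
print(pic([5, 0, 0], 4))   # -> 1.0
```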
X. EXAMPLE FOR QUALITATIVE DSMP

Let us consider the frame $\Theta = \{A, B, C\}$ with Shafer's model and the following set of linguistic labels $L = \{L_0, L_1, L_2, L_3, L_4, L_5\}$, with $L_0 = L_{\min}$ and $L_5 = L_{\max}$. Consider the qualitative belief assignment $qm(A) = L_1$, $qm(B \cup C) = L_4$, and $qm(X) = L_0$ for all $X \in 2^\Theta \setminus \{A, B \cup C\}$. $qm(.)$ is quasi-normalized since $\sum_{X \in 2^\Theta} qm(X) = L_5 = L_{\max}$. In this example, $qm(B \cup C) = L_4$ is redistributed by $qDSmP_\epsilon(.)$ to $B$ and $C$ only, since only $B$ and $C$ are involved in the ignorance, proportionally with respect to their cardinals (since their masses are $L_0 \equiv 0$). Applying the $qDSmP_\epsilon(.)$ formula (24), one gets for this example:
\[
qDSmP_\epsilon(A) = L_1,
\]
\[
qDSmP_\epsilon(B) = \frac{qm(B) + \epsilon \cdot \mathcal{C}(B)}{qm(B) + qm(C) + \epsilon \cdot \mathcal{C}(B \cup C)} \cdot qm(B \cup C)
= \frac{L_0 + \epsilon \cdot 1}{L_0 + L_0 + \epsilon \cdot 2} \cdot L_4
= \frac{L_{[0 + (\epsilon \cdot 1) \cdot 5]}}{L_{[0 + 0 + (\epsilon \cdot 2) \cdot 5]}} \cdot L_4
= \frac{L_{[\epsilon \cdot 5]}}{L_{[\epsilon \cdot 10]}} \cdot L_4
= L_{[\frac{5\epsilon}{10\epsilon} \cdot 5]} \cdot L_4
= L_{[2.5]} \cdot L_4 = L_{[2.5 \cdot 4/5]} = L_{[10/5]} = L_2.
\]
Similarly, one gets
\[
qDSmP_\epsilon(C) = \frac{qm(C) + \epsilon \cdot \mathcal{C}(C)}{qm(B) + qm(C) + \epsilon \cdot \mathcal{C}(B \cup C)} \cdot qm(B \cup C)
= \frac{L_0 + \epsilon \cdot 1}{L_0 + L_0 + \epsilon \cdot 2} \cdot L_4 = L_2,
\]
where the index in $[\cdot]$ has been computed at the very end for the best accuracy. Thanks to the isomorphism between labels and numbers, all the properties of operations with numbers carry over to operations with labels. $qDSmP_\epsilon(.)$ is quasi-normalized since $qDSmP_\epsilon(A) + qDSmP_\epsilon(B) + qDSmP_\epsilon(C) = L_1 + L_2 + L_2 = L_5 = L_{\max}$. Applying the PIC formula (26), one obtains (here $M = |\Theta| = 3$):
\[
PIC(qDSmP_\epsilon) = 1 + \frac{1}{\log_2 3}\left(L_1 \log_2(L_1) + L_2 \log_2(L_2) + L_2 \log_2(L_2)\right) \approx 0.04,
\]
where, in order to compute the qualitative logarithms, one uses the isomorphism $L_i = \frac{i}{n+1}$.
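As a numeric cross-check of this example, the sketch below (helper names `closest` and `dsmp` are ours, not from the paper) applies the quantitative DSmP redistribution through the isomorphism $L_i = i/(n+1)$ and converts back to labels only at the very end, as the Remark of Section IX-B recommends:

```python
import math

# Numeric cross-check of the Section X example via the isomorphism
# L_i = i/(n+1), with n = 4 so that L_i = i/5.

def closest(x: float) -> int:
    """[x]: the integer closest to x, with the convention [k + 0.5] = k + 1."""
    return math.floor(x + 0.5)

def dsmp(m, eps):
    """Quantitative DSmP: m maps frozensets of singletons to numeric masses."""
    singletons = sorted(set().union(*m))
    p = {}
    for s in singletons:
        x = frozenset([s])
        total = 0.0
        for y, my in m.items():
            if my == 0.0:
                continue
            inter = x & y
            num = sum(m.get(frozenset([z]), 0.0) for z in inter) + eps * len(inter)
            den = sum(m.get(frozenset([z]), 0.0) for z in y) + eps * len(y)
            total += my * num / den
        p[s] = total
    return p

n = 4                                    # labels L_0..L_5
m = {frozenset(['A']): 1 / 5,            # qm(A) = L_1
     frozenset(['B', 'C']): 4 / 5}       # qm(B u C) = L_4
p = dsmp(m, eps=0.001)
labels = {s: closest(v * (n + 1)) for s, v in p.items()}
print(labels)   # {'A': 1, 'B': 2, 'C': 2}, i.e. L_1, L_2, L_2

h = -sum(v * math.log2(v) for v in p.values())   # entropy, Eq. (25)
print(round(1 - h / math.log2(3), 4))            # PIC, Eq. (26) -> 0.0398
```

Rounding only at the end recovers $qDSmP_\epsilon(A) = L_1$ and $qDSmP_\epsilon(B) = qDSmP_\epsilon(C) = L_2$, matching the label computation above.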
XI. CONCLUSIONS

Motivated by the need for a better (more informational) probabilistic approximation of a belief assignment $m(.)$ in applications involving soft decisions, we have developed a new probabilistic transformation, called DSmP, for approximating $m(.)$ by a subjective probability measure. DSmP provides the maximum Probabilistic Information Content (PIC) of the source because it is based on the proportional redistribution of partial and total uncertainty masses to the elements of cardinal 1, with respect to their corresponding masses and cardinalities. DSmP works directly for any model (Shafer's, hybrid, or free DSm model) of the frame of the problem, and the result can be obtained at any level of precision through a tuning parameter $\epsilon > 0$. $DSmP_{\epsilon=0}$ coincides with Sudano's PrBel transformation in the cases where all masses of the singletons involved in ignorances are nonzero. The PrBel formula is restricted to Shafer's model only, while $DSmP_{\epsilon>0}$ is always defined, for any model. We have clearly shown through simple examples that the classical BetP and Cuzzolin's transformations do not perform well in terms of the PIC criterion. It has also been shown how DSmP can be extended to the qualitative domain to approximate qualitative belief assignments provided by human sources in natural language.

REFERENCES

[1] F. Cuzzolin, "On the properties of the intersection probability", submitted to the Annals of Mathematics and AI, Feb. 2007.
[2] F. Cuzzolin, "A geometric approach to the theory of evidence", IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2008 (to appear). http://perception.inrialpes.fr/people/Cuzzolin/pubs.html
[3] X. Li, X. Huang, J. Dezert and F. Smarandache, "Enrichment of qualitative belief for reasoning under uncertainty", Proc. of Fusion 2007, Québec, July 2007.
[4] G. Shafer, "A mathematical theory of evidence", Princeton University Press, 1976.
[5] C.E. Shannon, "A mathematical theory of communication", Bell Syst. Tech. J., 27, pp. 379-423 and 623-656, 1948.
[6] F. Smarandache and J. Dezert (Editors), "Applications and Advances of DSmT for Information Fusion", American Research Press, 2004. http://www.gallup.unm.edu/~smarandache/DSmT-book1.pdf
[7] F. Smarandache and J. Dezert (Editors), "Applications and Advances of DSmT for Information Fusion", Vol. 2, American Research Press, 2006. http://www.gallup.unm.edu/~smarandache/DSmT-book2.pdf
[8] F. Smarandache and J. Dezert (Editors), "Applications and Advances of DSmT for Information Fusion", Vol. 3 (in preparation), 2008.
[9] Ph. Smets, "Constructing the pignistic probability function in a context of uncertainty", Uncertainty in AI, vol. 5, pp. 29-39, 1990.
[10] Ph. Smets, "Decision making in the TBM: the necessity of the pignistic transformation", Int. Jour. Approx. Reasoning, vol. 38, 2005.
[11] Ph. Smets, "The combination of evidence in the Transferable Belief Model", IEEE Trans. on PAMI, vol. 12, no. 5, pp. 447-458, 1990.
[12] J. Sudano, "Pignistic probability transforms for mixes of low- and high-probability events", Proc. of Fusion 2001, Montreal, August 2001.
[13] J. Sudano, "The system probability information content (PIC) ...", Proc. of Fusion 2002, Annapolis, July 2002.
[14] J. Sudano, "Equivalence between belief theories and naive Bayesian fusion for systems with independent evidential data - Part I, The theory", Proc. of Fusion 2003, Cairns, July 2003.
[15] J. Sudano, "Yet Another Paradigm Illustrating Evidence Fusion (YAPIEF)", Proc. of Fusion 2006, Florence, July 2006.
