Evidential-EM Algorithm Applied to Progressively Censored Observations


Authors: Kuang Zhou (IRISA), Arnaud Martin (IRISA), Quan Pan

Kuang Zhou$^{1,2}$, Arnaud Martin$^{2}$, and Quan Pan$^{1}$

$^{1}$ School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, PR China
$^{2}$ IRISA, University of Rennes 1, Rue E. Branly, 22300 Lannion, France
kzhoumath@163.com, Arnaud.Martin@univ-rennes1.fr, quanpan@nwpu.edu.cn

Abstract. The Evidential-EM (E2M) algorithm is an effective approach for computing maximum likelihood estimates under finite mixture models, especially when there is uncertain information about the data. In this paper we present an extension of the E2M method to a particular case of incomplete data, where the loss of information is due both to the mixture model and to censored observations. The prior uncertain information is expressed by belief functions, while the pseudo-likelihood function is derived from the imprecise observations and the prior knowledge. The E2M method is then invoked to maximize the generalized likelihood function and obtain the optimal parameter estimates. Numerical examples show that the proposed method can effectively integrate uncertain prior information with the imprecise knowledge conveyed by the observed data.

Keywords: Belief function theory; Evidential-EM; Mixed distribution; Uncertainty; Reliability analysis

1 Introduction

In life-testing experiments, the data are often censored. A datum $T_i$ is said to be right-censored if the event occurs at a time after a right bound, but we do not know exactly when; the only information we have is this right bound. The two most common right-censoring schemes are Type-I and Type-II censoring. Experiments using these schemes have the drawback that they do not allow removal of samples at time points other than the terminal point of the experiment.
The progressive censoring scheme, which possesses this advantage, has become very popular in life tests in recent years [1]. The censored data provide a kind of imprecise information for reliability analysis.

It is interesting to evaluate the reliability performance of items with mixture distributions. When the population is composed of several subpopulations, an instance in the data set is expected to have a label which represents its origin, that is, the subpopulation from which the datum is observed. In real-world data, observed labels may carry only partial information about the origins of samples. Thus there are concurrent imprecision and uncertainty in censored data from mixture distributions. The Evidential-EM (E2M) method, proposed by Denœux [4,3], is an effective approach for computing maximum likelihood estimates for the mixture problem, especially when there is both imprecise and uncertain knowledge about the data. However, it has not been used for reliability analysis and censored life tests.

This paper considers a special kind of incomplete data in life tests, where the loss of information is due simultaneously to the mixture problem and to censored observations. The data set analysed in this paper is merged from samples of different classes. Some uncertain information about the class values of these unlabelled data is expressed by belief functions. The pseudo-likelihood function is obtained from the imprecise observations and the uncertain prior information, and the E2M method is then invoked to maximize the generalized likelihood function. Simulation studies show that the proposed method can take advantage of the partial labels, and thus incorporates more information than traditional EM algorithms.
2 Theoretical analysis

The progressive censoring scheme has attracted considerable attention in recent years, since it offers the flexibility of allowing removal of units at points other than the terminal point of the experiment [1]. The theory of belief functions was first described by Dempster [2] in his study of upper and lower probabilities, and was later extended by Shafer [6]. This section gives a brief description of these two concepts.

2.1 The Type-II progressively censoring scheme

The model of the Type-II progressively censoring scheme (PCS) is described as follows [1]. Suppose $n$ independent identical items are placed on a life test, with the corresponding lifetimes $X_1, X_2, \dots, X_n$ being identically distributed. We assume that the $X_i$ ($i = 1, 2, \dots, n$) are i.i.d. with probability density function (pdf) $f(x;\theta)$ and cumulative distribution function (cdf) $F(x;\theta)$. The integer $J < n$ is fixed at the beginning of the experiment, and the values $R_1, R_2, \dots, R_J$ are $J$ pre-fixed integers satisfying $R_1 + R_2 + \dots + R_J + J = n$. During the experiment, immediately after the $j$-th failure is observed, $R_j$ functioning items are randomly removed from the test. We denote the time of the $j$-th failure by $X_{j:J:n}$, where $J$ and $n$ describe the censoring scheme used in the experiment: there are $n$ test units and the experiment stops after $J$ failures are observed. Therefore, under a Type-II progressively censoring scheme, we have the observations $\{X_{1:J:n}, \dots, X_{J:J:n}\}$. The likelihood function is given by

$$L(\theta; x_{1:J:n}, \dots, x_{J:J:n}) = C \prod_{i=1}^{J} f(x_{i:J:n};\theta)\,\big[1 - F(x_{i:J:n};\theta)\big]^{R_i}, \tag{1}$$

where $C = n(n-1-R_1)(n-2-R_1-R_2)\cdots(n-J+1-R_1-R_2-\dots-R_{J-1})$.
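As an illustration, the likelihood of Eq. (1) can be evaluated numerically. The sketch below is a minimal implementation (function and variable names are ours, not from the paper) that works in log space for stability and takes generic `pdf`/`cdf` callables:

```python
import numpy as np

def pcs_log_likelihood(x, R, pdf, cdf, theta):
    """Log-likelihood of Type-II progressively censored data, Eq. (1).

    x : the J observed failure times x_{1:J:n}, ..., x_{J:J:n}
    R : the J removal numbers R_1, ..., R_J
    pdf, cdf : callables f(x; theta) and F(x; theta)
    """
    x, R = np.asarray(x, float), np.asarray(R, int)
    n = len(x) + R.sum()                      # from R_1 + ... + R_J + J = n
    # log C = log n + log(n-1-R_1) + log(n-2-R_1-R_2) + ...
    remaining = n - np.arange(1, len(x)) - np.cumsum(R)[:-1]
    log_C = np.log(n) + np.log(remaining).sum()
    return log_C + np.sum(np.log(pdf(x, theta))
                          + R * np.log(1.0 - cdf(x, theta)))

# Rayleigh model of Eq. (24): f(x; xi) = xi^2 x exp(-xi^2 x^2 / 2)
ray_pdf = lambda x, xi: xi**2 * x * np.exp(-0.5 * xi**2 * x**2)
ray_cdf = lambda x, xi: 1.0 - np.exp(-0.5 * xi**2 * x**2)
ll = pcs_log_likelihood([0.4, 0.9, 1.3], [0, 0, 2], ray_pdf, ray_cdf, 1.0)
```

With all $R_i = 0$ and $J = n$ this reduces to the ordinary complete-sample log-likelihood, which is a quick sanity check on the constant $C$.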
2.2 Theory of belief functions

Let $\Theta = \{\theta_1, \theta_2, \dots, \theta_N\}$ be the finite domain of $X$, called the discernment frame. The mass function is defined on the power set $2^\Theta = \{A : A \subseteq \Theta\}$. The function $m : 2^\Theta \to [0,1]$ is said to be a basic belief assignment (bba) on $2^\Theta$ if it satisfies

$$\sum_{A \subseteq \Theta} m(A) = 1. \tag{2}$$

Every $A \in 2^\Theta$ such that $m(A) > 0$ is called a focal element. The credibility and plausibility functions are defined in Eq. (3) and Eq. (4):

$$Bel(A) = \sum_{\emptyset \neq B \subseteq A} m(B), \quad \forall A \subseteq \Theta, \tag{3}$$

$$Pl(A) = \sum_{B \cap A \neq \emptyset} m(B), \quad \forall A \subseteq \Theta. \tag{4}$$

The quantity $Bel(A)$ denotes the degree to which the evidence supports $A$, while $Pl(A)$ can be interpreted as an upper bound on the degree of support that could be assigned to $A$ if more specific information became available [7]. The function $pl : \Theta \to [0,1]$ such that $pl(\theta) = Pl(\{\theta\})$ is called the contour function associated with $m$.

If $m$ has a single focal element $A$, it is said to be categorical and is denoted $m_A$. If all focal elements of $m$ are singletons, then $m$ is said to be Bayesian; Bayesian mass functions are equivalent to probability distributions.

If there are two distinct pieces of evidence (bbas) on the same frame, they can be combined using Dempster's rule [6] to form a new bba:

$$m_{1 \oplus 2}(C) = \frac{\sum_{A_i \cap B_j = C} m_1(A_i)\,m_2(B_j)}{1 - k}, \quad \forall C \subseteq \Theta,\ C \neq \emptyset, \tag{5}$$

where $k = \sum_{A_i \cap B_j = \emptyset} m_1(A_i)\,m_2(B_j)$ is the degree of conflict. Suppose $m_1$ is a Bayesian mass function with contour function $p_1$, and let $m_2$ be an arbitrary mass function with contour function $pl_2$. The combination of $m_1$ and $m_2$ then yields a Bayesian mass function $m_1 \oplus m_2$ with contour function $p_1 \oplus pl_2$ defined by

$$(p_1 \oplus pl_2)(\omega) = \frac{p_1(\omega)\,pl_2(\omega)}{\sum_{\omega' \in \Omega} p_1(\omega')\,pl_2(\omega')}. \tag{6}$$

The conflict between $p_1$ and $pl_2$ is $k = 1 - \sum_{\omega' \in \Omega} p_1(\omega')\,pl_2(\omega')$; it equals one minus the expectation of $pl_2$ with respect to $p_1$.
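As a small illustration of Eq. (6), combining a Bayesian contour function $p_1$ with an arbitrary contour function $pl_2$ is a pointwise product followed by normalization (a minimal sketch; the function name is ours):

```python
import numpy as np

def combine_contours(p1, pl2):
    """Dempster combination of a Bayesian mass (probabilities p1) with an
    arbitrary mass represented by its contour function pl2, Eq. (6)."""
    p1, pl2 = np.asarray(p1, float), np.asarray(pl2, float)
    num = p1 * pl2
    total = num.sum()          # equals 1 - k, with k the degree of conflict
    if total == 0.0:
        raise ValueError("total conflict: combination undefined")
    return num / total

# p1*pl2 = [0.5, 0.15, 0.0]; normalized -> [10/13, 3/13, 0]
p = combine_contours([0.5, 0.3, 0.2], [1.0, 0.5, 0.0])
```

Note that a vacuous contour function ($pl_2 \equiv 1$) leaves $p_1$ unchanged, so Eq. (6) only reweights the probabilities where the second source is informative.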
3 The E2M algorithm for Type-II PCS

3.1 The generalized likelihood function and the E2M algorithm

The E2M algorithm, similar to the EM method, is an iterative optimization technique for finding the maximum of the observed likelihood function [4,3]. However, the data fed to the E2M model can be both imprecise and uncertain. The imprecision may be caused by missing information or hidden variables, and this problem can be solved by the EM approach. The uncertainty may be due to unreliable sensors, to errors caused by the measuring or estimation methods, and so on. In the E2M model, the uncertainty is represented by belief functions.

Let $X$ be a discrete variable defined on $\Omega_X$ with probability mass function $p_X(\cdot;\theta)$. If $x$ is an observed sample of $X$, the likelihood function is

$$L(\theta; x) = p_X(x;\theta). \tag{7}$$

If $x$ is not completely observed, and all we know is that $x \in A$, $A \subseteq \Omega_X$, then the likelihood function becomes

$$L(\theta; A) = \sum_{x \in A} p_X(x;\theta). \tag{8}$$

If there is some uncertain information about $x$, for example if experts give their beliefs about $x$ in the form of a mass function $m(A_i)$, $i = 1, 2, \dots, r$, $A_i \subseteq \Omega_X$, then the likelihood becomes

$$L(\theta; m) = \sum_{i=1}^{r} m(A_i)\,L(\theta; A_i) = \sum_{x \in \Omega_X} p_X(x;\theta)\,pl(x). \tag{9}$$

It can be seen from Eq. (9) that the likelihood $L(\theta; m)$ depends on $m$ only through its associated contour function $pl$; thus we may write indifferently $L(\theta; m)$ or $L(\theta; pl)$.

Let $W = (X, Z)$ be the complete variable set, where $X$ is the observable data and $Z$ is unobservable but comes with some uncertain knowledge in the form of $pl_Z$. The log-likelihood based on the complete sample is $\log L(\theta; W)$; in E2M, the observed-data log-likelihood is $\log L(\theta; X, pl_Z)$.
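For a finite frame, Eq. (9) reduces to a dot product between the model probabilities and the contour function. A minimal sketch (names ours), whose special cases recover Eqs. (7) and (8):

```python
import numpy as np

def evidential_likelihood(p_x, pl):
    """Likelihood given uncertain data, Eq. (9): L(theta; m) = sum_x p(x) pl(x).

    p_x : probabilities p_X(x; theta) over the finite frame Omega_X
    pl  : contour function pl(x) on the same frame
    """
    return float(np.dot(np.asarray(p_x, float), np.asarray(pl, float)))

# A precise observation x = 0 makes pl an indicator of {0}: L = p(0), Eq. (7).
# A set-valued observation A = {0, 1} gives pl = [1, 1, 0]: L = p(0) + p(1), Eq. (8).
L = evidential_likelihood([0.2, 0.5, 0.3], [1.0, 1.0, 0.0])  # = p(0) + p(1)
```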
In the E-step of the E2M algorithm, the pseudo-likelihood function is computed as

$$Q(\theta, \theta^k) = E_{\theta^k}[\log L(\theta; W) \mid X, pl_Z; \theta^k], \tag{10}$$

where $pl_Z$ is the contour function describing our uncertainty on $Z$, and $\theta^k$ is the parameter vector obtained at the $k$-th step. $E_{\theta^k}$ represents the expectation with respect to the density

$$\gamma'(Z = j \mid X, pl_Z; \theta^k) \triangleq \gamma(Z = j \mid X; \theta^k) \oplus pl_Z. \tag{11}$$

The function $\gamma'$ can be regarded as a combination of the conditional probability density $\gamma(Z = j \mid X; \theta^k) = p_Z(Z = j \mid X; \theta^k)$ and the contour function $pl_Z$. It describes the current information based on the observation $X$ and the prior uncertain information on $Z$; the combination is thus similar to Bayes' rule. According to Dempster's combination rule and Eq. (9), we get

$$\gamma'(Z = j \mid X, pl_Z; \theta^k) = \frac{\gamma(Z = j \mid X; \theta^k)\,pl_Z(Z = j)}{\sum_{j} \gamma(Z = j \mid X; \theta^k)\,pl_Z(Z = j)}. \tag{12}$$

Therefore, the pseudo-likelihood is

$$Q(\theta, \theta^k) = \sum_{j} \frac{\gamma(Z = j \mid X; \theta^k)\,pl_Z(Z = j)}{L(\theta^k; X, pl_Z)}\,\log L(\theta; W). \tag{13}$$

The M-step is the same as in EM and requires the maximization of $Q(\theta, \theta^k)$ with respect to $\theta$. The E2M algorithm alternately repeats the E- and M-steps above until the increase of the generalized observed-data likelihood becomes smaller than a given threshold.

3.2 Mixed-distributed progressively censored data

Here we consider a special type of incomplete data, where the imperfection of information is due both to the mixed distribution and to some censored observations. Let $Y$ denote the lifetime of the test samples. The $n$ test samples can be divided into two parts, $Y_1$ and $Y_2$, where $Y_1$ is the set of observed data and $Y_2$ is the censored data set. Let $Z$ be the class labels and $W = (Y, Z)$ the complete data. Assume that $Y$ follows a mixed distribution with p.d.f.
$$f_Y(y;\theta) = \sum_{z=1}^{p} \lambda_z f(y;\xi_z), \tag{14}$$

where $\theta = (\lambda_1, \dots, \lambda_p, \xi_1, \dots, \xi_p)$. The complete-data distribution of $W$ is given by $P(Z = z) = \lambda_z$ and $P(Y \mid Z = z) = f(y;\xi_z)$. The variable $Z$ is hidden, but we may have some prior knowledge about it. This kind of prior uncertain information on $Z$ can be described in the form of belief functions:

$$pl_Z(Z = j) = pl_j, \quad j = 1, 2, \dots, p. \tag{15}$$

The likelihood of the complete data is

$$L_c(\theta; Y, Z) = \prod_{j=1}^{n} f(y_j, z_j;\theta), \tag{16}$$

and the pseudo-likelihood function is

$$Q(\theta, \theta^k) = E_{\theta^k}[\log L_c(\theta; Y, Z) \mid Y^*, pl_Z; \theta^k], \tag{17}$$

where $E_{\theta^k}[\,\cdot \mid Y^*, pl_Z; \theta^k]$ denotes the expectation with respect to the conditional distribution of $W$ given the observations $Y^*$ and the uncertain information $pl_Z$.

Theorem 1. Let $y^*_j$ be the $j$-th observation. If the $j$-th sample is completely observed, $y_j = y^*_j$; otherwise $y_j \geq y^*_j$. When $(y_j, z_j)$ is complete or censored, $f_{YZ}(y_j, z_j \mid y^*_j; \theta^k)$ can be calculated according to Eq. (18) or Eq. (19), respectively:

$$f^1_{YZ}(y_j, z_j \mid y^*_j; \theta^k) = I_{\{y_j = y^*_j\}}\,P^{k1}_{jz}, \tag{18}$$

$$f^2_{YZ}(y_j, z_j \mid y^*_j; \theta^k) = I_{\{y_j > y^*_j\}}\,P^{k2}_{jz}\,\frac{f(y_j;\xi^k_z)}{\bar F(y^*_j;\xi^k_z)}, \tag{19}$$

where $\bar F(\cdot;\xi) = 1 - F(\cdot;\xi)$ denotes the survival function, and $P^{k1}_{jz}$ and $P^{k2}_{jz}$ are given in Eq. (20):

$$P^k_{jz}(z_j = z \mid Y^*;\theta) = \begin{cases} P^{k1}_{jz}(z_j = z \mid y^*_j; \theta^k) & \text{for the completely observed data,} \\ P^{k2}_{jz}(z_j = z \mid y^*_j; \theta^k) & \text{for the censored data,} \end{cases} \tag{20}$$

where

$$P^{k1}_{jz}(z_j = z \mid y^*_j; \theta^k) = \frac{f(y^*_j;\xi^k_z)\,\lambda^k_z}{\sum_{z} f(y^*_j;\xi^k_z)\,\lambda^k_z}, \tag{21}$$

$$P^{k2}_{jz}(z_j = z \mid y^*_j; \theta^k) = \frac{\bar F(y^*_j;\xi^k_z)\,\lambda^k_z}{\sum_{z} \bar F(y^*_j;\xi^k_z)\,\lambda^k_z}. \tag{22}$$

Proof. If $(y_j, z_j)$ is completely observed, then

$$f^1_{YZ}(y_j, z_j \mid y^*_j; \theta^k) = P^{k1}_{jz}\,f(y_j \mid y^*_j = y_j, Z_j = z; \theta^k),$$

and we obtain Eq. (18).
If $(y_j, z_j)$ is censored, then

$$f^2_{YZ}(y_j, z_j \mid y^*_j; \theta^k) = P^{k2}_{jz}\,f(y_j \mid y_j > y^*_j, Z_j = z; \theta^k).$$

From the theorem in [5],

$$f(y_j \mid y_j > y^*_j, Z_j = z; \theta^k) = \frac{f(y_j;\xi^k_z)}{\bar F(y^*_j;\xi^k_z)}\,I_{\{y_j > y^*_j\}},$$

and we obtain Eq. (19). This completes the proof.

From the above theorem, the pseudo-likelihood function can be written as

$$Q(\theta, \theta^k) = E_{\theta^k}[\log f_c(Y, Z) \mid Y^*, pl_Z; \theta^k] = \sum_{j=1}^{n} E_{\theta^k}[\log \lambda_z + \log f(y_j \mid \xi_z) \mid Y^*, pl_Z; \theta^k]$$

$$= \sum_{y_j \in Y_1} \sum_{z} P'^{k1}_{jz} \log \lambda_z + \sum_{y_j \in Y_2} \sum_{z} P'^{k2}_{jz} \log \lambda_z + \sum_{y_j \in Y_1} \sum_{z} P'^{k1}_{jz} \log f(y^*_j \mid \xi_z) + \sum_{y_j \in Y_2} \sum_{z} P'^{k2}_{jz} \int_{y^*_j}^{+\infty} \log f(x \mid \xi_z)\,\frac{f(x \mid \xi^k_z)}{\bar F(y^*_j;\xi^k_z)}\,dx, \tag{23}$$

where

$$P'^{ki}_{jz}(z_j = z \mid y^*_j, pl_{Z_j}; \theta^k) = P^{ki}_{jz}(z_j = z \mid y^*_j; \theta^k) \oplus pl_{Z_j}, \quad i = 1, 2.$$

It can be seen that $P'^{ki}_{jz}(z_j = z \mid y^*_j, pl_{Z_j}; \theta^k)$ is a Dempster combination of the prior and the observed information.

Assume, without loss of generality, that the data come from a mixed Rayleigh distribution, with the p.d.f. shown in Eq. (24):

$$f_X(x;\lambda,\xi) = \sum_{j=1}^{p} \lambda_j g_X(x;\xi_j) = \sum_{j=1}^{p} \lambda_j\,\xi_j^2\,x\,\exp\Big\{-\frac{1}{2}\xi_j^2 x^2\Big\}. \tag{24}$$

After the $k$-th iteration, once $\theta^k = (\lambda^k, \xi^k)$ is obtained, the $(k+1)$-th step of the E2M algorithm proceeds as follows:

1. E-step: For $j = 1, 2, \dots, n$ and $z = 1, 2, \dots, p$, use Eq. (23) to obtain the conditional expectation of $\log L_c(\theta; W)$ based on the observed data, the prior uncertain information, and the current parameters.
2. M-step: Maximize $Q(\theta \mid \theta^k)$ and update the parameters:

$$\lambda^{k+1}_z = \frac{1}{n}\Big(\sum_{y_j \in Y_1} P'^{k1}_{jz} + \sum_{y_j \in Y_2} P'^{k2}_{jz}\Big), \tag{25}$$

$$(\xi^{k+1}_z)^2 = \frac{2\Big(\sum_{y_j \in Y_1} P'^{k1}_{jz} + \sum_{y_j \in Y_2} P'^{k2}_{jz}\Big)}{\sum_{y_j \in Y_1} P'^{k1}_{jz}\,y^{*2}_j + \sum_{y_j \in Y_2} P'^{k2}_{jz}\,\big(y^{*2}_j + 2/(\xi^k_z)^2\big)}. \tag{26}$$
It should be pointed out that the maximization of $Q(\theta, \theta^k)$ is subject to the constraint $\sum_{i=1}^{p} \lambda_i = 1$. By the method of Lagrange multipliers, we obtain the new objective function

$$Q(\theta, \theta^k) - \alpha\Big(\sum_{i=1}^{p} \lambda_i - 1\Big).$$

4 Numerical results

In this section we use the Monte-Carlo method to test the proposed approach. The simulated data set is drawn from the mixed Rayleigh distribution of Eq. (24) with $p = 3$, $\lambda = (1/3, 1/3, 1/3)$ and $\xi = (4, 0.5, 0.8)$. The test scheme is $n = 500$, $m = 0.6\,n$, $R = (0, 0, \dots, n - m)_{1 \times m}$. The initial values are $\lambda^0 = (1/3, 1/3, 1/3)$ and $\xi^0 = (4, 0.5, 0.8) - 0.01$.

As mentioned before, there is usually no information about the subclass labels of the data, which is the case of unsupervised learning. But in real life we may have some prior uncertain knowledge from experts or from experience; this partial information is assumed here to be in the form of belief functions. To simulate the uncertainty on the labels of the data, the originally generated data sets are corrupted as follows. For each datum $j$, an error probability $q_j$ is drawn randomly from a beta distribution with mean $\rho$ and standard deviation 0.2. The value $q_j$ expresses the doubt of the experts on the class of sample $j$.

Fig. 1. Average RABias values (plus and minus one standard deviation) for 20 repeated experiments, as a function of the error probability $\rho$ for the simulated labels: a. estimation of $\xi_1$; b. estimation of $\xi_2$; c. estimation of $\xi_3$ (curves: noisy labels, without labels, uncertain labels).
With probability $q_j$, the label of sample $j$ is changed to one of the three classes (denoted by $z^*_j$) with equal probabilities. The plausibilities are then determined as

$$pl_{Z_j}(z_j) = \begin{cases} \dfrac{q_j}{3} & \text{if } z_j \neq z^*_j, \\[4pt] \dfrac{q_j}{3} + 1 - q_j & \text{if } z_j = z^*_j. \end{cases} \tag{27}$$

The results of our approach with uncertain labels are compared with the cases of noisy labels and of no information on labels. The former case (noisy labels) corresponds to supervised learning, while the latter is the traditional EM algorithm applied to progressively censored data. In each case, the E2M (or EM) algorithm is run 20 times. The parameter estimates are compared to their real values using the absolute relative bias (RABias); we recall that this commonly used measure equals 0 for the exactly correct estimate $\hat\theta = \theta$.

Fig. 2. Average RABias values (plus and minus one standard deviation) for 20 repeated experiments, as a function of the sample number $n$: a. estimation of $\xi_1$; b. estimation of $\xi_2$; c. estimation of $\xi_3$ (curves: noisy labels, without labels, uncertain labels).

The results are shown graphically in Figure 1. As expected, a degradation of the estimation performance is observed when the error probability $\rho$ increases, with both noisy and uncertain labels. But our solution based on soft labels does not suffer as much as the one using noisy labels, and it clearly outperforms supervised learning with noisy labels. The estimates of $\xi_1$ and $\xi_3$ obtained by our approach (uncertain labels) are also better than those of unsupervised learning with unknown labels.
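Putting Sections 3.2 and 4 together, the sketch below implements the E-step of Eqs. (21)-(22) combined with the label plausibilities of Eq. (27) by Dempster's rule, and the M-step of Eqs. (25)-(26), for the Rayleigh mixture of Eq. (24). All function names are ours, and the Beta shape parameters used to draw $q_j$ are a simplified stand-in (mean $\rho$ only) for the paper's mean-$\rho$, standard-deviation-0.2 specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_pdf(y, xi):
    """Rayleigh density of Eq. (24): f(y; xi) = xi^2 y exp(-xi^2 y^2 / 2)."""
    return xi**2 * y * np.exp(-0.5 * xi**2 * y**2)

def rayleigh_sf(y, xi):
    """Survival function 1 - F(y; xi)."""
    return np.exp(-0.5 * xi**2 * y**2)

def plausibilities(z, rho, p):
    """Corrupt the true labels z and build pl_{Z_j} as in Eq. (27).
    Beta(5*rho, 5*(1-rho)) is an assumed shape with mean rho."""
    n = len(z)
    q = rng.beta(5 * rho, 5 * (1 - rho), size=n)
    flip = rng.random(n) < q
    z_star = np.where(flip, rng.integers(0, p, size=n), z)
    pl = np.tile((q / p)[:, None], (1, p))
    pl[np.arange(n), z_star] += 1.0 - q
    return pl

def e2m(y_star, censored, pl, lam, xi, n_iter=100):
    """E2M iteration of Section 3.2: E-step via Eqs. (21)-(22), combined
    with pl by Dempster's rule; M-step via Eqs. (25)-(26)."""
    y = np.asarray(y_star, float)[:, None]
    cens = np.asarray(censored, bool)[:, None]
    lam, xi = np.array(lam, float), np.array(xi, float)
    for _ in range(n_iter):
        w = np.where(cens, rayleigh_sf(y, xi), rayleigh_pdf(y, xi)) * lam
        w *= pl                               # Dempster combination, Eq. (12)
        w /= w.sum(axis=1, keepdims=True)     # P'^k_{jz}
        lam = w.mean(axis=0)                  # Eq. (25)
        # censored samples contribute E[Y^2 | Y > y*] = y*^2 + 2/(xi^k)^2
        y2 = y**2 + np.where(cens, 2.0 / xi**2, 0.0)
        xi = np.sqrt(2.0 * w.sum(axis=0) / (w * y2).sum(axis=0))  # Eq. (26)
    return lam, xi
```

With `pl` identically equal to one (a vacuous prior), the weights reduce to the ordinary EM posteriors, so the same code covers the "without labels" baseline of the experiments.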
Although the estimate of $\xi_2$ with uncertain labels seems no better than that of the traditional EM algorithm when $\rho$ is large, the results still indicate that our approach is able to exploit additional information on data uncertainty when such information is available, as is the case when $\rho$ is small.

In the following experiment, we test the algorithm with different sample numbers $n$. In order to illustrate the behaviour of the approach with respect to $n$, we consider a fixed censoring scheme with 60% of the samples censored. For a given $n$, the test scheme is $m = 0.6\,n$, $R = (0, 0, \dots, n - m)_{1 \times m}$, and the error probability is $\rho = 0.1$. We again compare our method using uncertain labels with the results obtained with noisy labels and without using any label information. The RABias values for the different methods are shown in Figure 2. We can draw similar conclusions as before: the uncertainty on class labels appears to be successfully exploited by the proposed approach. Moreover, as $n$ increases, the RABias decreases, which illustrates the large-sample properties of the maximum-likelihood estimates.

5 Conclusion

In this paper we investigate how to apply the E2M algorithm to progressively censored data analysis. The numerical results show that the proposed method based on the E2M algorithm behaves better in terms of the RABias of the parameter estimates, as it can take advantage of the available data uncertainty. The theory of belief functions is thus an effective tool for representing and handling uncertain information in reliability evaluation. The Monte-Carlo simulations also show that the RABias decreases as $n$ increases in all cases: the method does improve for large sample sizes.

Mixture distributions are widely used in reliability engineering.
Engineers find that tubes and other devices often fail at an early stage, while the failure rate later remains stable or continues to rise over time. From the statistical point of view, such products should be regarded as coming from mixed distributions. Besides, when the reliability evaluation of these complex products is performed, there is often not enough prior information. Therefore, the application of the proposed method is of practical interest in this case.

References

1. Balakrishnan, N.: Progressive censoring methodology: an appraisal. TEST 16, 211-259 (2007)
2. Dempster, A.P.: Upper and lower probabilities induced by a multivalued mapping. Annals of Mathematical Statistics 38, 325-328 (1967)
3. Denœux, T.: Maximum likelihood estimation from uncertain data in the belief function framework. IEEE Transactions on Knowledge and Data Engineering 25(1), 119-130 (2013)
4. Denœux, T.: Maximum likelihood from evidential data: an extension of the EM algorithm. In: Combining Soft Computing and Statistical Methods in Data Analysis, Advances in Intelligent and Soft Computing, vol. 77, pp. 181-188. Springer Berlin Heidelberg (2010)
5. Ng, H., Chan, P., Balakrishnan, N.: Estimation of parameters from progressively censored data using EM algorithm. Computational Statistics and Data Analysis 39(4), 371-386 (2002)
6. Shafer, G.: A mathematical theory of evidence. Princeton University Press (1976)
7. Smets, P., Kennes, R.: The transferable belief model. Artificial Intelligence 66(2), 191-234 (1994)
