Intrinsic posterior regret gamma-minimax estimation for the exponential family of distributions
Mohammad Jafari Jozani$^{a,1}$ and Nahid Jafari Tabrizi$^{b}$

$^{a}$Department of Statistics, University of Manitoba, Winnipeg, MB, Canada, R3T 2N2.
$^{b}$Department of Mathematics, Islamic Azad University-Karaj Branch, Karaj, Iran.

ABSTRACT: In practice, it is desired to have estimates that are invariant under reparameterization. The invariance property of the estimators helps to formulate a unified solution to the underlying estimation problem. In robust Bayesian analysis, a frequent criticism is that the optimal estimators are not invariant under smooth reparameterizations. This paper considers the problem of posterior regret gamma-minimax (PRGM) estimation of the natural parameter of the exponential family of distributions under intrinsic loss functions. We show that, under the class of Jeffrey's Conjugate Prior (JCP) distributions, PRGM estimators are invariant to smooth one-to-one reparameterizations. We apply our results to several distributions and different classes of JCPs, as well as the usual conjugate prior distributions. We observe that, in many cases, invariant PRGM estimators in the class of JCP distributions can be obtained by some modifications of PRGM estimators in the usual class of conjugate priors. Moreover, when the class of priors is convex or depends on a hyper-parameter belonging to a connected set, we show that the PRGM estimator under the intrinsic loss function can be Bayes with respect to a prior distribution in the original prior class. Theoretical results are supplemented with several examples and illustrations.

Keywords: Intrinsic loss function; Bayes estimator; Robust Bayesian analysis; Posterior risk; Posterior regret gamma-minimax.
1 Introduction

Suppose $x$ is a realization of a random sample $X$ with a sampling model given by a family of densities $\{f(\cdot\,|\,\theta): \theta \in \Theta\}$ with respect to a $\sigma$-finite measure $\nu$ on a sample space $\chi$, where $\theta \in \Theta$ is the unknown parameter of interest. Let $\pi(\cdot)$ be a prior distribution on $\Theta$ and let $\pi(\cdot\,|\,x)$ denote the posterior distribution of $\theta$ given $x$. In standard Bayesian analysis, one needs to specify the true prior distribution $\pi(\cdot)$. However, in practice, elicitation of the true prior distribution can never be done without error. Hence, we usually need to consider a class $\Gamma$ of prior distributions which reflects (approximately) true prior beliefs; that is, the true prior distribution $\pi(\cdot)$ is an unknown element of $\Gamma$. Robust Bayesian analysis is designed to acknowledge such prior uncertainty by considering the class $\Gamma$ of plausible prior distributions instead of a single prior distribution $\pi$ and studying the corresponding range of Bayesian solutions. See Berger (1994) and Rios Insua and Ruggeri (2000) for more details. One may also attempt to determine an optimal estimator $\delta$ by minimizing some measure of robustness. Several criteria have been proposed for the selection of procedures in robust Bayesian studies. In this paper, we study the maximal posterior regret method (e.g., Rios Insua and Ruggeri, 2000; Rios Insua et al., 1995) to obtain the posterior regret gamma-minimax (PRGM) estimator of the unknown parameter for the one-parameter exponential family of distributions. The PRGM criterion has recently been used by many people from both theoretical and practical points of view.

$^1$Corresponding author: m_jafari_jozani@umanitoba.ca
For example, Gómez-Déniz (2009) investigated the use of PRGM for credibility premium estimation in actuarial science, Boratyńska (2002, 2006) in insurance for collective risk model analysis, and Jafari Jozani and Parsian (2008) in statistical inference based on record data.

For an observed value $x$, a prior distribution $\pi$ and the corresponding posterior distribution $\pi(\cdot|x)$, we denote the posterior risk of an estimate $\delta(x)$ of the unknown parameter $\theta$ under $L(\theta,\delta)$ by $r(x,\delta) = E[L(\theta, \delta(x))\,|\,x]$. The Bayes estimator of $\theta$ under the loss function $L(\theta,\delta)$ is then given by a $\delta_\pi(X)$ such that $r(x, \delta_\pi) = \inf_\delta r(x,\delta)$.

Definition 1 The PRGM estimator of $\theta$ under the loss function $L(\theta,\delta)$ and a class $\Gamma$ of prior distributions is defined as an estimator $\delta_{PR}$ such that
$$\sup_{\pi \in \Gamma} \rho(\delta_\pi(x), \delta_{PR}(x)) = \inf_\delta \sup_{\pi \in \Gamma} \rho(\delta_\pi(x), \delta(x)), \qquad (1)$$
where $\rho(\delta_\pi, \delta) = r(x,\delta) - r(x,\delta_\pi)$ is the posterior regret, measuring the loss entailed in choosing the action $\delta(x)$ instead of the optimal Bayes action $\delta_\pi(x)$ (under prior $\pi$ and loss $L$).

In this paper, we study the construction of PRGM estimators under the so-called intrinsic loss functions. These loss functions shift attention from the distance between the estimator $\delta$ and the true parameter value $\theta$ to the more relevant distance between the statistical models they label. More specifically, the intrinsic loss of using $\delta$ as a proxy for $\theta$ is the intrinsic distance between the true model $f(x|\theta)$ and the model $f(x|\delta)$ when $\theta \neq \delta$, that is,
$$L(\theta,\delta) = d(f(x|\theta), f(x|\delta)), \qquad (2)$$
where $d(\cdot,\cdot)$ is a suitable distance measure. In practice, intrinsic loss functions can be used as benchmark losses when the utility function related to the underlying statistical problem cannot be obtained by practitioners.
A desired property of intrinsic loss functions is that they are invariant under one-to-one smooth reparameterizations. The invariance property of intrinsic loss functions provides a very convenient tool for statistical applications. We show that, under suitable conditions, intrinsic loss functions can be used to formulate a unified set of solutions to the problem of PRGM estimation of the unknown parameter of the exponential family of distributions which is consistent under reparameterization, a rather obvious requirement which, unfortunately, many statistical methods fail to satisfy.

In Section 2, we obtain the PRGM estimator of the natural parameter $\theta$ of the exponential family of distributions under the intrinsic loss function (2) when $d(\cdot,\cdot)$ is chosen to be the Kullback-Leibler distance. We consider different classes of conjugate priors on the natural parameter $\theta$ and show how to obtain the PRGM estimator of $\theta$ in each class. The results are very general and provide an automated and unified solution to the PRGM estimation of the unknown parameter of the exponential family of distributions under different loss functions, including, but not limited to, the quadratic, LINEX, entropy and Stein loss functions. In Bayesian statistical analysis, as pointed out by Gelman (2004), transformations of the parameter typically suggest new families of prior distributions. Therefore, the usual robust Bayesian inferences are not invariant under reparameterizations. For example, if $\delta_{PR}(X)$ is the PRGM estimator of $\theta$, then it is not necessarily true that $h(\delta_{PR}(X))$ is the PRGM estimator of $\eta = h(\theta)$ when $h$ is a one-to-one smooth function. A solution to this problem is proposed in Section 3. To this end, we obtain invariant PRGM estimators of $\theta$ under the intrinsic loss function and different classes of Jeffrey's Conjugate Prior (JCP) distributions.
We show that the resulting PRGM estimates are invariant under one-to-one smooth transformations of $\theta$. Theoretical results are augmented with several examples and illustrations. In Section 4, we provide some general results showing that, under general conditions, PRGM and intrinsic PRGM estimators are Bayes with respect to prior distributions in the underlying class of priors. We study two cases: convex classes of prior distributions, and classes of priors depending on a hyper-parameter belonging to a connected set. We provide a sufficient condition under which the PRGM and intrinsic PRGM estimators are Bayes with respect to data-independent prior distributions within the underlying class of priors. Finally, in Section 5, we give some concluding remarks.

2 PRGM estimation under intrinsic loss functions

Suppose $X$ is a random variable whose distribution belongs to the one-parameter exponential family of distributions $\mathcal{F} = \{f(x|\theta): x \in \chi \subseteq \mathbb{R},\ \theta \in \Theta \subseteq \mathbb{R}\}$, with probability density function (pdf)
$$f(x|\theta) = \beta(\theta)\, t(x)\, e^{-\theta r(x)}, \qquad (3)$$
where $r(x) > 0$, $\beta(\theta) t(x) > 0$ and $\theta$ is the unknown real-valued natural parameter of the model. The density is considered with respect to the Lebesgue measure for continuous and the counting measure for discrete distributions. Suppose $\delta$ is an estimate of $\theta$ with both $\theta, \delta \in \Theta$. We define the intrinsic loss function (2), using the Kullback-Leibler measure between $f(x|\theta)$ and $f(x|\delta)$, as follows:
$$L(\theta,\delta) = E_\theta\left[\log \frac{f(X|\theta)}{f(X|\delta)}\right] = \int_\chi \log \frac{f(x|\theta)}{f(x|\delta)}\, f(x|\theta)\, d\nu(x). \qquad (4)$$
Loss function (4) can be interpreted as the expected log-likelihood ratio in favour of the true model.
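As a quick numerical sanity check (not part of the paper), the Kullback-Leibler loss (4) can be integrated directly; for the $N(\theta, 1)$ model it agrees with the quadratic form $\frac{1}{2}(\delta-\theta)^2$ derived for that model in Example 1. A Python sketch with illustrative values:

```python
import math

def npdf(x, m):
    # density of N(m, 1) at x
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

def kl_normal(theta, delta, lo=-12.0, hi=12.0, n=20001):
    # numerically integrate log(f(x|theta)/f(x|delta)) f(x|theta) dx over a grid
    h = (hi - lo) / (n - 1)
    s = 0.0
    for i in range(n):
        x = lo + i * h
        fx = npdf(x, theta)
        s += fx * (math.log(npdf(x, theta)) - math.log(npdf(x, delta))) * h
    return s

theta, delta = 0.3, 1.1
# (4) matches the closed form (delta - theta)^2 / 2 of Example 1
assert abs(kl_normal(theta, delta) - 0.5 * (delta - theta) ** 2) < 1e-5
```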
Thus, the intrinsic loss function (4) not only has the desired invariance property, but it is also related to the relevant measure of evidence in the Neyman-Pearson lemma. Note that the intrinsic loss function (4) is invariant under reparameterization, since the parameters affect the loss function only via the probability distributions they label, which are independent of the particular parameterization. For a general reference on intrinsic losses and additional details, we refer to Robert (1996) and Bernardo (2011). First, we give a lemma which identifies the intrinsic loss function for the exponential family of distributions.

Lemma 1 For the exponential family of distributions (3), the intrinsic loss function (4) reduces to
$$L(\theta,\delta) = \log \frac{\beta(\theta)}{\beta(\delta)} + (\delta - \theta)\,\frac{\beta'(\theta)}{\beta(\theta)}, \qquad (5)$$
where $\beta'(\theta) = \frac{d}{d\theta}\beta(\theta)$.

Let $H(t) := \beta'(t)/\beta(t)$. A straightforward calculation shows that the posterior risk associated with $\delta$, under the loss function (5), is
$$r(x,\delta) = E[\log \beta(\theta)\,|\,x] - \log \beta(\delta(x)) + \delta(x)\, E[H(\theta)\,|\,x] - E[\theta H(\theta)\,|\,x]. \qquad (6)$$
The Bayes estimator of $\theta$ can therefore be obtained by minimizing (6) in $\delta$ as follows:
$$\delta_\pi(X) = H^{-1}\{E[H(\theta)\,|\,X]\}. \qquad (7)$$
For the densities $f(x|\theta)$ in (3) we have $E_\theta[r(X)] = H(\theta)$, and hence $H'(\theta) = -\mathrm{Var}_\theta(r(X)) < 0$, so $H(\cdot)$ is a decreasing function. Therefore, the Bayes estimator $\delta_\pi(X)$ is unique. Furthermore, the posterior regret for estimating $\theta$ using $\delta$ instead of the optimal estimator $\delta_\pi$ is
$$\rho(\delta_\pi, \delta) = \log \frac{\beta(\delta_\pi)}{\beta(\delta)} + (\delta - \delta_\pi)\, H(\delta_\pi). \qquad (8)$$
Note that $\rho(\delta_\pi, \delta)$, as a function of $\delta_\pi$, first decreases and then increases, with a unique minimum at $\delta_\pi = \delta$. The main result of this section is given in the following theorem, which obtains the PRGM estimator of $\theta$ under the intrinsic loss function (5).
Theorem 1 Let $\underline{\delta}(x) = \inf_{\pi\in\Gamma}\delta_\pi(x)$ and $\overline{\delta}(x) = \sup_{\pi\in\Gamma}\delta_\pi(x)$, and suppose that $\underline{\delta}(x)$ and $\overline{\delta}(x)$ are finite almost everywhere. The PRGM estimator of $\theta$ in the exponential family (3) under the loss function (5) and in the class of prior distributions $\Gamma$ is given by
$$\delta_{PR}(X) = \frac{\underline{\delta}(X)\, H(\underline{\delta}(X)) - \overline{\delta}(X)\, H(\overline{\delta}(X)) - \log \frac{\beta(\underline{\delta}(X))}{\beta(\overline{\delta}(X))}}{H(\underline{\delta}(X)) - H(\overline{\delta}(X))}. \qquad (9)$$

Proof: First, note that
$$\inf_\delta \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \min\left\{\inf_{\delta \le \underline{\delta}}\, \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta),\ \inf_{\underline{\delta} < \delta < \overline{\delta}}\, \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta),\ \inf_{\delta \ge \overline{\delta}}\, \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta)\right\}.$$
So, we consider the following three cases.

Case 1. When $\delta \le \underline{\delta}$, we have $\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\overline{\delta},\delta)$. Let $f_1(\delta) = \rho(\overline{\delta},\delta) = \log\frac{\beta(\overline{\delta})}{\beta(\delta)} + (\delta-\overline{\delta})H(\overline{\delta})$, with $f_1'(\delta) = H(\overline{\delta}) - H(\delta) < 0$, following the decreasing property of $H(\cdot)$. Hence, $f_1(\delta)$ is a decreasing function of $\delta$ for $\delta \le \underline{\delta}$ and $\inf_{\delta\le\underline{\delta}} f_1(\delta) = f_1(\underline{\delta})$. Therefore, $\inf_{\delta\le\underline{\delta}}\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\overline{\delta},\underline{\delta})$.

Case 2. For $\delta \ge \overline{\delta}$, we have $\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\underline{\delta},\delta)$. Let $f_2(\delta) = \rho(\underline{\delta},\delta) = \log\frac{\beta(\underline{\delta})}{\beta(\delta)} + (\delta-\underline{\delta})H(\underline{\delta})$, with $f_2'(\delta) = H(\underline{\delta}) - H(\delta) > 0$. Hence, $f_2(\delta)$ is an increasing function of $\delta$ for $\delta \ge \overline{\delta}$ and $\inf_{\delta\ge\overline{\delta}} f_2(\delta) = f_2(\overline{\delta})$. Therefore, $\inf_{\delta\ge\overline{\delta}}\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\underline{\delta},\overline{\delta})$.

Case 3. If $\underline{\delta} < \delta < \overline{\delta}$, then $\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \max\{\rho(\underline{\delta},\delta),\ \rho(\overline{\delta},\delta)\}$. Let $f_3(\delta) = f_1(\delta) - f_2(\delta)$, where $f_3'(\delta) = H(\overline{\delta}) - H(\underline{\delta}) < 0$. Since $f_3(\delta)$ is a decreasing function of $\delta$ with $f_3(\underline{\delta}) > 0$ and $f_3(\overline{\delta}) < 0$, there exists a unique $\delta^* \in (\underline{\delta}, \overline{\delta})$ (the root of $f_3(\delta) = 0$) such that $\rho(\overline{\delta},\delta^*) = \rho(\underline{\delta},\delta^*)$. Hence, for $\underline{\delta} < \delta < \delta^*$, $\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\overline{\delta},\delta)$ and, for $\delta^* < \delta < \overline{\delta}$, $\sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\underline{\delta},\delta)$.
Note that, for $\underline{\delta} < \delta < \delta^*$, $\rho(\overline{\delta},\delta)$ is a decreasing function of $\delta$, with $\inf_{\underline{\delta} < \delta < \delta^*} \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\overline{\delta},\delta^*)$, and, for $\delta^* < \delta < \overline{\delta}$, $\rho(\underline{\delta},\delta)$ is an increasing function of $\delta$, with $\inf_{\delta^* < \delta < \overline{\delta}} \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\underline{\delta},\delta^*)$. Therefore,
$$\inf_{\underline{\delta} < \delta < \overline{\delta}}\, \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\overline{\delta},\delta^*) = \rho(\underline{\delta},\delta^*).$$
Following the above cases, we conclude that
$$\inf_{\delta}\, \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \inf_{\underline{\delta} < \delta < \overline{\delta}}\, \sup_{\pi\in\Gamma}\rho(\delta_\pi,\delta) = \rho(\overline{\delta},\delta^*) = \rho(\underline{\delta},\delta^*).$$
That is, the PRGM estimator of $\theta$ is given by $\delta_{PR} = \delta^* \in (\underline{\delta}, \overline{\delta})$, the solution in $\delta_{PR}$ of
$$\log\frac{\beta(\overline{\delta})}{\beta(\underline{\delta})} + \delta_{PR}\left(H(\overline{\delta}) - H(\underline{\delta})\right) + \underline{\delta}\, H(\underline{\delta}) - \overline{\delta}\, H(\overline{\delta}) = 0,$$
which results in the estimator (9). ✷

We give some applications of Theorem 1.

Example 1 (Normal distribution). Suppose $X \sim N(\mu, 1)$ is a normally distributed random variable with unknown parameter $\mu \in \mathbb{R}$ and pdf
$$f(x|\mu) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-\mu)^2}, \quad -\infty < x < \infty.$$
The pdf $f(x|\mu)$ belongs to the exponential family (3) with $\theta = \mu$ and $\beta(\theta) = e^{-\theta^2/2}$. Also, $H(\theta) = -\theta$, and the intrinsic loss function (5) reduces to $L(\theta,\delta) = \frac{1}{2}(\delta-\theta)^2$, which is essentially the usual squared error loss function. Let $\underline{\delta}$ and $\overline{\delta}$ be defined as in Theorem 1. Using (9), subject to the existence of $\underline{\delta}$ and $\overline{\delta}$, the PRGM estimator of $\theta$ in the class $\Gamma$ of prior distributions is given by
$$\delta_{PR}(X) = \frac{1}{2}\left(\underline{\delta}(X) + \overline{\delta}(X)\right),$$
which is also obtained in Rios Insua et al. (1995) as well as Berger (1994).

Example 2 (Exponential distribution). Suppose $X \sim Exp(\sigma)$ is an exponential random variable with pdf $f(x|\sigma) = \frac{1}{\sigma}\, e^{-x/\sigma}$, $x > 0$, where $\sigma > 0$ is the unknown parameter. The pdf $f(x|\sigma)$ belongs to the exponential family (3) with $\theta = \frac{1}{\sigma}$ and $\beta(\theta) = \theta$. In this case, $H(\theta) = \theta^{-1}$, and the intrinsic loss function (5) reduces to the Stein loss $L(\theta,\delta) = \frac{\delta}{\theta} - \log\frac{\delta}{\theta} - 1$.
Using (9), subject to the existence of $\underline{\delta}$ and $\overline{\delta}$, the PRGM estimator of $\theta$ under the Stein loss function is given by
$$\delta_{PR}(X) = \frac{\log\frac{1}{\underline{\delta}(X)} - \log\frac{1}{\overline{\delta}(X)}}{\frac{1}{\underline{\delta}(X)} - \frac{1}{\overline{\delta}(X)}}.$$
The PRGM estimator of $\sigma$ is obtained in Example 5.

Example 3 (Binomial distribution). Suppose $X \sim Bin(n, p)$ is a binomial random variable with probability mass function (pmf) $f(x|p) = \binom{n}{x} p^x (1-p)^{n-x}$, where $n$ is known, $x = 0, 1, \ldots, n$, and $p \in [0,1]$ is the unknown parameter. The pmf $f(x|p)$ is a member of the exponential family (3) with $\theta = \log\left(\frac{1-p}{p}\right)$ and $\beta(\theta) = (1+e^{-\theta})^{-n}$. We also have $H(\theta) = \frac{n}{1+e^{\theta}}$, which results in the intrinsic loss function
$$L(\theta,\delta) = n\left[\log\left(\frac{e^{\theta}}{e^{\delta}} \cdot \frac{1+e^{\delta}}{1+e^{\theta}}\right) + \frac{\delta - \theta}{1+e^{\theta}}\right]. \qquad (10)$$
Using (9), subject to the existence of $\underline{\delta}$ and $\overline{\delta}$, the PRGM estimator of $\theta$ is given by
$$\delta_{PR}(X) = \frac{\dfrac{\underline{\delta}(X)}{1+e^{\underline{\delta}(X)}} - \dfrac{\overline{\delta}(X)}{1+e^{\overline{\delta}(X)}} - \log\left\{\dfrac{e^{\underline{\delta}(X)}}{e^{\overline{\delta}(X)}} \cdot \dfrac{1+e^{\overline{\delta}(X)}}{1+e^{\underline{\delta}(X)}}\right\}}{\dfrac{1}{1+e^{\underline{\delta}(X)}} - \dfrac{1}{1+e^{\overline{\delta}(X)}}}. \qquad (11)$$
In Example 7, we obtain the PRGM estimator of $p$.

We now consider the PRGM estimation of $\theta$ under conjugate classes of prior distributions. For the exponential family (3) and a conjugate prior distribution
$$\pi_{\alpha,\lambda}(\theta) \propto \{\beta(\theta)\}^{\alpha}\, e^{-\theta\lambda}, \qquad (12)$$
the posterior distribution is given by $\pi(\theta|x) \propto \{\beta(\theta)\}^{1+\alpha}\, e^{-(\lambda + r(x))\theta}$; that is, $\pi(\theta|x) = \pi_{\alpha+1,\, \lambda+r(x)}(\theta)$. Also, as established by Diaconis and Ylvisaker (1979), $E[H(\theta)|x] = \frac{\lambda + r(x)}{\alpha+1}$. Now, the Bayes estimator of $\theta$ under the intrinsic loss function (5) is obtained as (e.g., Bernardo and Smith (1994), Robert (1996) and Gutierrez-Pena (1992))
$$\delta_\pi(X) = H^{-1}\left(\frac{\lambda + r(X)}{\alpha+1}\right). \qquad (13)$$
Furthermore, the posterior regret for estimating $\theta$ with $\delta(x)$ is
$$\rho(\delta_\pi, \delta) = \log\frac{\beta(\delta_\pi(x))}{\beta(\delta(x))} + (\delta(x) - \delta_\pi(x))\,\frac{\lambda + r(x)}{\alpha+1}.$$
Now, suppose that the prior distribution $\pi_{\alpha,\lambda}$ belongs to the following class of conjugate prior distributions:
$$\Gamma = \{\pi_{\alpha,\lambda}(\theta): \alpha \in [\alpha_1, \alpha_2],\ \lambda \in [\lambda_1, \lambda_2]\},$$
with suitable choices of $\alpha_1 < \alpha_2$ and $\lambda_1 < \lambda_2$ leading to proper posterior distributions for $\theta$. A straightforward calculation shows that $H(\underline{\delta}(x)) = \frac{\lambda_2 + r(x)}{\alpha_1+1}$ and $H(\overline{\delta}(x)) = \frac{\lambda_1 + r(x)}{\alpha_2+1}$. Hence, we can state the following result.

Lemma 2 Suppose $U(t) = H^{-1}(t)$ with $H(t) = \beta'(t)/\beta(t)$. The PRGM estimate of $\theta$ for the exponential family (3) under the intrinsic loss function (5) and in the class $\Gamma$ of prior distributions is given by
$$\delta^{\Gamma}_{PR}(x) = \frac{\frac{\lambda_1+r(x)}{\alpha_2+1}\, U\!\left(\frac{\lambda_1+r(x)}{\alpha_2+1}\right) - \frac{\lambda_2+r(x)}{\alpha_1+1}\, U\!\left(\frac{\lambda_2+r(x)}{\alpha_1+1}\right) - \log\dfrac{\beta\left(U\left(\frac{\lambda_1+r(x)}{\alpha_2+1}\right)\right)}{\beta\left(U\left(\frac{\lambda_2+r(x)}{\alpha_1+1}\right)\right)}}{\frac{\lambda_1+r(x)}{\alpha_2+1} - \frac{\lambda_2+r(x)}{\alpha_1+1}}. \qquad (14)$$

Remark 1 One can also consider other classes of conjugate priors, such as $\Gamma_1 = \{\pi_{\alpha,\lambda_0}(\theta): \alpha \in [\alpha_1,\alpha_2],\ \lambda_0\ \text{fixed}\}$ or $\Gamma_2 = \{\pi_{\alpha_0,\lambda}(\theta): \alpha = \alpha_0\ \text{fixed},\ \lambda \in [\lambda_1,\lambda_2]\}$. The PRGM estimator of $\theta$ in $\Gamma_1$ or $\Gamma_2$ can be obtained from (14) by letting $\lambda_1 = \lambda_2 = \lambda_0$ or $\alpha_1 = \alpha_2 = \alpha_0$, respectively.

Example 4 In Example 2, let $\pi_{\alpha,\lambda}(\theta) \propto \theta^{\alpha}\, e^{-\theta\lambda}$, with posterior distribution $\pi(\theta|x) = \pi_{\alpha+1,\,\lambda+x}(\theta)$ and $\delta_\pi(x) = \frac{\alpha+1}{\lambda+x}$. Using (14), the PRGM estimator of $\theta$ under the Stein loss function $L(\theta,\delta) = \frac{\delta}{\theta} - \log\frac{\delta}{\theta} - 1$ in $\Gamma = \{\pi_{\alpha,\lambda}(\theta): \alpha \in [\alpha_1,\alpha_2],\ \lambda \in [\lambda_1,\lambda_2]\}$, with $0 < \alpha_1 < \alpha_2$ and $0 < \lambda_1 < \lambda_2$, is given by
$$\delta^{\Gamma}_{PR}(X) = \frac{\log\left(\frac{\alpha_1+1}{\alpha_2+1} \cdot \frac{\lambda_1+X}{\lambda_2+X}\right)}{\frac{\lambda_1+X}{\alpha_2+1} - \frac{\lambda_2+X}{\alpha_1+1}}.$$
In $\Gamma_1$, as defined in Remark 1, we have
$$\delta^{\Gamma_1}_{PR}(X) = \frac{(\alpha_1+1)(\alpha_2+1)}{\alpha_1 - \alpha_2}\, \log\left(\frac{\alpha_1+1}{\alpha_2+1}\right) \frac{1}{\lambda_0 + X}.$$
Similarly, in $\Gamma_2$, we have
$$\delta^{\Gamma_2}_{PR}(X) = \frac{\alpha_0+1}{\lambda_2 - \lambda_1}\, \log\left(\frac{\lambda_2+X}{\lambda_1+X}\right).$$
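The closed form (14) is easy to evaluate numerically. The following Python sketch (not part of the paper; the hyper-parameter values are illustrative assumptions) evaluates (14) for the exponential model of Example 2 and checks it against the closed form given in Example 4:

```python
import math

def prgm_conjugate(x, r, beta, H, U, a1, a2, l1, l2):
    """PRGM estimate of the natural parameter under the intrinsic
    (Kullback-Leibler) loss for the conjugate class
    Gamma = {pi_{alpha,lambda}: alpha in [a1,a2], lambda in [l1,l2]},
    via the closed form of Lemma 2 (equation (14))."""
    a = (l1 + r(x)) / (a2 + 1.0)   # H(delta_bar): smallest posterior expectation of H
    b = (l2 + r(x)) / (a1 + 1.0)   # H(delta_underbar): largest posterior expectation of H
    return (a * U(a) - b * U(b) - math.log(beta(U(a)) / beta(U(b)))) / (a - b)

# Exponential model of Examples 2 and 4: beta(t) = t, H(t) = 1/t, U = H^{-1} = 1/t.
x = 2.5
a1, a2, l1, l2 = 1.0, 3.0, 0.5, 1.5
d = prgm_conjugate(x, r=lambda x: x, beta=lambda t: t,
                   H=lambda t: 1.0 / t, U=lambda t: 1.0 / t,
                   a1=a1, a2=a2, l1=l1, l2=l2)

# Closed form from Example 4 for the same class:
d_ex4 = (math.log(((a1 + 1) / (a2 + 1)) * ((l1 + x) / (l2 + x)))
         / ((l1 + x) / (a2 + 1) - (l2 + x) / (a1 + 1)))
assert abs(d - d_ex4) < 1e-9
```

As a sanity check, the result also lies strictly between the extreme Bayes estimates $(\alpha_1+1)/(\lambda_2+x)$ and $(\alpha_2+1)/(\lambda_1+x)$.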
3 Intrinsic PRGM estimation

In Section 2, we obtained the PRGM estimator of the natural parameter $\theta$ of the exponential family under the intrinsic loss function. In some applications, there may be interest in finding the PRGM estimator of the original parameter of the underlying model rather than the natural parameter $\theta$. Unfortunately, like many other methods, PRGM estimators are not necessarily invariant under reparameterization. Although results of this nature, that are not invariant under reparameterization, can sometimes be interesting in theory, they tend to be less useful in practice. Indeed, it is difficult to sell to a practitioner that the PRGM estimator of $h(\theta)$ is not necessarily $h(\delta_{PR})$. In this section, we obtain PRGM estimators that are invariant under one-to-one smooth reparameterizations, hence the name intrinsic PRGM estimators.

For the exponential family (3), as opposed to the well-known and commonly used conjugate prior (12), consider the following conjugate prior distribution for $\theta$:
$$\pi^{J}_{\alpha,\lambda}(\theta) \propto \{\beta(\theta)\}^{\alpha}\, e^{-\lambda\theta}\, \sqrt{I_\theta(\theta)}, \qquad (15)$$
where $I_\theta(\theta)$ is the Fisher information for $\theta$. Druilhet and Pommeret (2012) introduced (15) and referred to it as the Jeffrey's Conjugate Prior (JCP). It is easy to see that the JCP is invariant under smooth reparameterizations, and the necessary conditions on $\alpha$ and $\lambda$ in (15), leading to proper posterior distributions, do not depend on the choice of the reparameterization. The invariance property of the JCP under any smooth one-to-one reparameterization $\eta = h(\theta)$ can be shown by the following relationship:
$$I_\eta(\eta) = I_\theta(h^{-1}(\eta)) \times \left(\frac{dh^{-1}(\eta)}{d\eta}\right)^{2}.$$

Remark 2 For the exponential family (3), since $I_\theta(\theta) = -H'(\theta)$, the JCP is given by $\pi^{J}_{\alpha,\lambda}(\theta) \propto \{\beta(\theta)\}^{\alpha}\, e^{-\lambda\theta}\, \sqrt{-H'(\theta)}$.

First, we give the following result.
Lemma 3 Suppose $\delta^{J}_\pi$ is the Bayes estimator of the natural parameter $\theta$ of the exponential family (3) under the intrinsic loss function (4) with respect to the JCP distribution (15). For every one-to-one smooth transformation $h(\theta)$, the Bayes estimator of $h(\theta)$ is $h(\delta^{J}_\pi)$.

Proof: The proof is similar to the proof of Lemma 6.2 of Robert (1996) and hence omitted.

Now, we state the main result of this section, which can easily be proved using the invariance property of both the class of JCP distributions and the intrinsic loss functions under smooth reparameterization of $\theta$.

Theorem 2 Suppose $\delta^{\Gamma^J}_{IPR}(X)$ is the PRGM estimator of the unknown parameter $\theta$ for the exponential family (3) under the intrinsic loss function (5) with respect to a class $\Gamma^{J}$ of JCP distributions for $\theta$. Then, for any one-to-one smooth transformation $h(\theta)$, the PRGM estimator of $h(\theta)$ is $h(\delta^{\Gamma^J}_{IPR}(X))$.

Proof: By definition, the PRGM estimator of $h(\theta)$ in the class $\Gamma^{J}$ of JCP distributions is given by the solution of
$$\inf_\delta\, \sup_{\pi\in\Gamma^{J}} \rho(\delta^{h}_\pi, \delta) = \inf_\delta\, \sup_{\pi\in\Gamma^{J}} \left[\log\frac{\beta(\delta^{h}_\pi)}{\beta(\delta)} + (\delta - \delta^{h}_\pi)\, H(\delta^{h}_\pi)\right],$$
where $\delta^{h}_\pi$ is the Bayes estimator of $h(\theta)$. Note that $\rho(\delta^{h}_\pi, \delta) = L(\delta^{h}_\pi, \delta)$, where $L$ is defined in (5). Now, using the invariance property of $L$ and Lemma 3, since $\delta^{h}_\pi = h(\delta_\pi)$, with $\delta_\pi$ being the Bayes estimator of $\theta$, we have
$$\inf_\delta\, \sup_{\pi\in\Gamma^{J}} \rho(\delta^{h}_\pi, \delta) = \inf_\delta\, \sup_{\pi\in\Gamma^{J}} \rho(h(\delta_\pi), \delta) = \inf_{t:\, h(t)=\delta}\, \sup_{\pi\in\Gamma^{J}} \rho(h(\delta_\pi), h(t)) = \inf_{t:\, h(t)=\delta}\, \sup_{\pi\in\Gamma^{J}} \rho(\delta_\pi, t).$$
Therefore, if $\delta^{\Gamma^J}_{IPR}(X)$ is the PRGM estimator of $\theta$, i.e., $\delta^{\Gamma^J}_{IPR}$ minimizes (in $t$) $\sup_{\pi\in\Gamma^{J}}\rho(\delta_\pi, t)$, then $h(\delta^{\Gamma^J}_{IPR}(X))$ is the PRGM estimator of $h(\theta)$; that is, $h(\delta^{\Gamma^J}_{IPR})$ minimizes (in $\delta$) $\sup_{\pi\in\Gamma^{J}}\rho(\delta^{h}_\pi, \delta)$, and this completes the proof.
✷

Example 5 Suppose $X \sim Exp(\sigma)$ with $\sigma, x > 0$. In Example 2, we showed that the intrinsic loss for estimating $\theta = \sigma^{-1}$ by $\delta$ reduces to the Stein loss function $L(\theta,\delta) = \frac{\delta}{\theta} - \log\frac{\delta}{\theta} - 1$. Under the JCP distribution $\pi^{J}_{\alpha,\lambda}(\theta) \propto \theta^{\alpha-1}\, e^{-\theta\lambda}$, $\alpha > 0$, the posterior distribution is a $Gamma(\alpha+1, \frac{1}{\lambda+x})$, with $\pi^{J}(\theta|x) \propto \theta^{\alpha}\, e^{-(\lambda+x)\theta}$, which results in the Bayes estimator of $\theta$ as $\delta_\pi(X) = \frac{\alpha}{\lambda+X}$. Also, the intrinsic PRGM estimator of $\theta$ under $L(\theta,\delta)$ is given by
$$\delta^{\Gamma^J}_{IPR}(X) = \frac{\log\frac{1}{\underline{\delta}(X)} - \log\frac{1}{\overline{\delta}(X)}}{\frac{1}{\underline{\delta}(X)} - \frac{1}{\overline{\delta}(X)}}.$$
Now, for the estimation of $\eta = \sigma = \frac{1}{\theta}$ using $\tilde{\delta}$, it is easy to see that the Bayes estimator of $\eta$ under the entropy loss function $L(\eta, \tilde{\delta}) = \frac{\eta}{\tilde{\delta}} - \log\frac{\eta}{\tilde{\delta}} - 1$ is given by $\tilde{\delta}_\pi(X) = \frac{\lambda+X}{\alpha} = \frac{1}{\delta_\pi(X)}$. To see this, note that $\pi^{J}(\eta) \propto \eta^{-(\alpha+1)}\, e^{-\lambda/\eta}$, with $\pi^{J}(\eta|x) \propto \eta^{-(\alpha+2)}\, e^{-\frac{\lambda+x}{\eta}}$ and $\tilde{\delta}_\pi(x) = E[\eta|x]$. Also, the intrinsic PRGM estimator of $\eta$ is given by
$$\tilde{\delta}^{\Gamma^J}_{IPR}(X) = \frac{\overline{\tilde{\delta}}(X) - \underline{\tilde{\delta}}(X)}{\log\overline{\tilde{\delta}}(X) - \log\underline{\tilde{\delta}}(X)} = \frac{\frac{1}{\underline{\delta}(X)} - \frac{1}{\overline{\delta}(X)}}{\log\frac{1}{\underline{\delta}(X)} - \log\frac{1}{\overline{\delta}(X)}} = \frac{1}{\delta^{\Gamma^J}_{IPR}(X)}.$$
For the PRGM estimation of $\theta$ under the entropy loss function and its application to record data analysis, we refer to Jafari Jozani and Parsian (2008). Similarly, if $\eta^* = -\frac{1}{a}\log\theta$, $a \neq 0$, then the intrinsic PRGM estimator of $\eta^*$ under the LINEX loss function $L(\eta^*, \delta^*) = e^{a(\eta^* - \delta^*)} - a(\eta^* - \delta^*) - 1$ is given by
$$\delta^{*\,\Gamma^J}_{IPR}(X) = \underline{\delta}^{*}(X) + \frac{1}{a}\log\left(\frac{e^{a(\overline{\delta}^{*}(X) - \underline{\delta}^{*}(X))} - 1}{a(\overline{\delta}^{*}(X) - \underline{\delta}^{*}(X))}\right) = -\frac{1}{a}\log\delta^{\Gamma^J}_{IPR}(X),$$
which is the PRGM estimator obtained in Boratyńska (2006).

For the exponential family (3), suppose that the prior distribution belongs to the following class of JCP distributions:
$$\Gamma^{J} = \{\pi^{J}_{\alpha,\lambda}(\theta): \alpha \in [\alpha_1,\alpha_2],\ \lambda \in [\lambda_1,\lambda_2]\}, \qquad (16)$$
for suitable choices of $\alpha_1 < \alpha_2$ and $\lambda_1 < \lambda_2$.
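The reciprocal relation between the intrinsic PRGM estimators of $\theta$ and $\eta = 1/\theta$ in Example 5 can be checked numerically. A minimal Python sketch (not from the paper; the observed value and hyper-parameter ranges are illustrative assumptions):

```python
import math

# Exponential model, JCP class of Example 5.  Given the extreme Bayes
# estimates d_lo = underline{delta}(x) and d_hi = overline{delta}(x) of
# theta = 1/sigma, the intrinsic PRGM estimate of theta under the Stein
# loss and the intrinsic PRGM estimate of eta = 1/theta under the
# entropy loss should be reciprocals of one another (Theorem 2).

def iprgm_theta(d_lo, d_hi):
    # (log(1/d_lo) - log(1/d_hi)) / (1/d_lo - 1/d_hi)
    return (math.log(1.0 / d_lo) - math.log(1.0 / d_hi)) / (1.0 / d_lo - 1.0 / d_hi)

def iprgm_eta(e_lo, e_hi):
    # (e_hi - e_lo) / (log e_hi - log e_lo)
    return (e_hi - e_lo) / (math.log(e_hi) - math.log(e_lo))

# Bayes estimates of theta under the JCP are alpha/(lambda + x); take the corners.
x, a1, a2, l1, l2 = 2.0, 1.0, 3.0, 0.5, 1.5
d_lo = a1 / (l2 + x)                  # smallest Bayes estimate of theta
d_hi = a2 / (l1 + x)                  # largest Bayes estimate of theta
e_lo, e_hi = 1.0 / d_hi, 1.0 / d_lo   # corresponding Bayes estimates of eta

# Invariance: the intrinsic PRGM estimators commute with h(theta) = 1/theta.
assert abs(iprgm_eta(e_lo, e_hi) - 1.0 / iprgm_theta(d_lo, d_hi)) < 1e-12
```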
We continue with some applications of Theorem 2 under the above class of priors. Similar results can be obtained in other classes of JCP distributions (see Remark 1), which we do not present here. In view of Theorem 2, to obtain an intrinsic PRGM estimator, the critical condition is that the elements of the underlying class of prior distributions are of the form (15) and that the underlying loss function is intrinsic. We observe that, in many cases (see Examples 6 and 7), intrinsic PRGM estimators under $\Gamma^{J}$ can be obtained using the PRGM estimators under the usual class $\Gamma$ of conjugate priors with modified values of the $\alpha_i$'s and $\lambda_i$'s in $\Gamma$, $i = 1, 2$. One can easily check that this will happen whenever the mean-value parameter is conjugate for the natural parameter in the sense of Gutierrez-Pena and Smith (1995). In the one-parameter case, a sufficient condition for this is that the exponential family have a quadratic variance function (see Section 3.3 of Gutierrez-Pena and Smith (1995)).

Example 6 In Example 5, we showed that $\pi^{J}_{\alpha,\lambda}(\theta) \propto \theta^{\alpha-1}\, e^{-\theta\lambda}$ and $\delta_\pi(x) = \frac{\alpha}{\lambda+x}$. Since $\pi^{J}_{\alpha,\lambda}(\theta|x)$ is equal to $\pi(\theta|x)$, the posterior distribution of $\theta$ given the usual conjugate prior $\pi_{\alpha-1,\lambda}(\theta)$, the intrinsic PRGM estimator of $\theta$ under the Stein loss function and the class of JCP distributions can be obtained using the PRGM estimator of $\theta$ under the usual class of conjugate priors (as in Example 4) by replacing $\alpha_i$ with $\alpha_i - 1$, $i = 1, 2$. For example, the intrinsic PRGM estimator of $\theta$ in $\Gamma^{J}$, with $0 < \alpha_1 < \alpha_2$ and $0 < \lambda_1 < \lambda_2$, is given by
$$\delta^{\Gamma^J}_{IPR}(X) = \frac{\log\left(\frac{\alpha_1}{\alpha_2} \cdot \frac{\lambda_1+X}{\lambda_2+X}\right)}{\frac{\lambda_1+X}{\alpha_2} - \frac{\lambda_2+X}{\alpha_1}}.$$
Let $\Gamma^{J}_1 = \{\pi^{J}_{\alpha,\lambda}(\theta): \alpha \in [\alpha_1,\alpha_2]\ \text{and}\ \lambda = \lambda_0\}$. Then, the intrinsic PRGM estimator of $\theta$ in $\Gamma^{J}_1$, with $0 < \alpha_1 < \alpha_2$ and $\lambda_0 > 0$, is given by
$$\delta^{\Gamma^J_1}_{IPR}(X) = \frac{\alpha_1 \alpha_2}{\alpha_1 - \alpha_2}\, \log\left(\frac{\alpha_1}{\alpha_2}\right) \frac{1}{\lambda_0+X}.$$
Similarly, in $\Gamma^{J}_2 = \{\pi^{J}_{\alpha,\lambda}(\theta): \alpha = \alpha_0\ \text{fixed and}\ \lambda \in [\lambda_1,\lambda_2]\}$, the intrinsic PRGM estimator of $\theta$, with $\alpha_0 > 0$ and $\lambda_1, \lambda_2 > 0$, is given by
$$\delta^{\Gamma^J_2}_{IPR}(X) = \frac{\alpha_0}{\lambda_2 - \lambda_1}\, \log\left(\frac{\lambda_2+X}{\lambda_1+X}\right).$$
Similar results can be obtained for estimating any smooth one-to-one function of $\theta$ under the corresponding class of JCP distributions.

Example 7 (Binomial distribution). In Example 3, we showed that the pmf of $X$ can be written as $f(x|\theta) = \binom{n}{x}\left(\frac{e^{\theta}}{1+e^{\theta}}\right)^{n} e^{-x\theta}$ with $\theta = \log\left(\frac{1-p}{p}\right)$. Here $I_\theta(\theta) \propto \frac{e^{\theta}}{(1+e^{\theta})^{2}}$, and the JCP for $\theta$ is obtained as
$$\pi^{J}_{\alpha,\lambda}(\theta) \propto \left(\frac{e^{\theta}}{1+e^{\theta}}\right)^{\alpha} e^{-\lambda\theta}\, \frac{e^{\theta/2}}{1+e^{\theta}}.$$
This results in the posterior distribution
$$\pi^{J}(\theta|x) \propto \left(\frac{e^{\theta}}{1+e^{\theta}}\right)^{\alpha+n+1} e^{-(x+\lambda+\frac{1}{2})\theta}.$$
Since $\pi^{J}(\theta|x)$ is equal to $\pi(\theta|x)$, the posterior distribution of $\theta$ given the usual conjugate prior $\pi_{\alpha+1,\,\lambda+\frac{1}{2}}(\theta)$, the intrinsic PRGM estimator $\delta^{\Gamma^J}_{IPR}(X)$ of $\theta$ in $\Gamma^{J}$ can be obtained from (11), with $\underline{\delta}$ and $\overline{\delta}$ computed by replacing $\alpha_i$ and $\lambda_i$ with $\alpha_i + 1$ and $\lambda_i + \frac{1}{2}$, $i = 1, 2$, respectively, in the corresponding Bayes estimates under the usual conjugate priors. Also, the intrinsic PRGM estimator of $p = \frac{1}{1+e^{\theta}}$ under the loss function
$$L(p, \tilde{\delta}) = p\log\frac{p}{\tilde{\delta}} + (1-p)\log\frac{1-p}{1-\tilde{\delta}},$$
is given by $\tilde{\delta}^{\Gamma^J}_{IPR}(X) = \{1 + e^{\delta^{\Gamma^J}_{IPR}(X)}\}^{-1}$.

4 PRGM, intrinsic PRGM and Bayes estimators

In this section, we provide some general results concerning the Bayesianity of the PRGM and intrinsic PRGM estimators of $\theta$ for the exponential family distribution (3) under the intrinsic loss function (5) with respect to priors in the underlying class of prior distributions. The results are only presented for PRGM estimators of $\theta$, but they can also be used for intrinsic PRGM estimators with simple modifications. Our framework in this section closely resembles the one introduced by Rios Insua et al. (1995), who considered a similar problem for the quadratic loss function.
Results of this nature are also obtained by Zen and DasGupta (1993) under the quadratic loss function for the binomial distribution. Several of the following preliminary results and detailed proofs are reported here for the sake of completeness. The idea is to check the continuity of the underlying Bayes estimator with respect to the prior. Similar to Rios Insua et al. (1995), we study two cases: (a) the class of prior distributions is convex, or (b) the underlying class of prior distributions depends on a hyper-parameter belonging to a connected set.

First, consider the situation where the class $\Gamma$ of priors is convex. That is, if $\pi_0, \pi_1 \in \Gamma$, then $\pi_t = t\pi_0 + (1-t)\pi_1$ belongs to $\Gamma$ for any $t \in [0,1]$. Suppose that $X$ is a random variable whose density belongs to the family of distributions (3). Let $\psi(t) = H(\delta_{\pi_t}(x))$, which is a decreasing function of $\delta_{\pi_t}$ for any $t \in [0,1]$. In the next lemma, we show that $\psi(t)$ is a continuous function on its domain $t \in [0,1]$.

Lemma 4 Suppose $\psi(t)$, the posterior expectation of $H(\theta) = \frac{\beta'(\theta)}{\beta(\theta)}$ when $\pi_t = t\pi_0 + (1-t)\pi_1$, $t \in [0,1]$, is finite. Then $\psi(t)$ is continuous in $t$, $t \in [0,1]$.

Proof. Let $a_i = \int_\Theta H(\theta)\,\pi_i(\theta)\, f(x|\theta)\, d\theta$ and $m_i(x) = \int_\Theta \pi_i(\theta)\, f(x|\theta)\, d\theta$ for $i = 0, 1$, and suppose that the $a_i$'s and $m_i$'s exist and are finite. Then
$$\psi(t) = E_{\pi_t}[H(\theta)|x] = \frac{t\int_\Theta H(\theta) f(x|\theta)\pi_0(\theta)\, d\theta + (1-t)\int_\Theta H(\theta) f(x|\theta)\pi_1(\theta)\, d\theta}{t\int_\Theta f(x|\theta)\pi_0(\theta)\, d\theta + (1-t)\int_\Theta f(x|\theta)\pi_1(\theta)\, d\theta} = \frac{t a_0 + (1-t) a_1}{t m_0(x) + (1-t) m_1(x)},$$
which is a continuous function of $t$, $t \in [0,1]$. ✷

Now, we use the continuity of $\psi(t)$ to prove that, under the conditions of Lemma 4, the PRGM estimator $\delta_{PR}$ is Bayes if the class of priors is convex.
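Lemma 4 and the intermediate-value argument it supports can be illustrated for the normal model of Example 1, where every ingredient has a closed form. The following Python sketch is illustrative only (the two conjugate priors, their hyper-parameters, and the observed value are assumptions, not from the paper): for a mixture of two normal priors, $\psi(t)$ is a continuous ratio of affine functions of $t$, so bisection locates a mixing weight $t^*$ at which the mixture prior reproduces $H(\delta_{PR})$, as Theorem 3 asserts.

```python
import math

# Normal model N(mu, 1) with conjugate priors pi_0 = N(m0, v0), pi_1 = N(m1, v1)
# and the convex mixture pi_t = t*pi_0 + (1-t)*pi_1.  Here H(theta) = -theta.

def marginal(x, m, v):
    # marginal density of x when mu ~ N(m, v): N(x; m, v + 1)
    s2 = v + 1.0
    return math.exp(-(x - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

def post_mean(x, m, v):
    return (v * x + m) / (v + 1.0)

def psi(t, x, p0, p1):
    # posterior expectation of H(theta) = -theta under the mixture prior pi_t
    a0 = -post_mean(x, *p0) * marginal(x, *p0)
    a1 = -post_mean(x, *p1) * marginal(x, *p1)
    m0, m1 = marginal(x, *p0), marginal(x, *p1)
    return (t * a0 + (1 - t) * a1) / (t * m0 + (1 - t) * m1)

x, p0, p1 = 1.0, (0.0, 1.0), (3.0, 2.0)
d0, d1 = post_mean(x, *p0), post_mean(x, *p1)   # the two Bayes estimates
target = -(d0 + d1) / 2.0                       # H(delta_PR): midpoint rule of Example 1

# psi is continuous in t, so a sign-change bisection finds t* with psi(t*) = target.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if (psi(lo, x, p0, p1) - target) * (psi(mid, x, p0, p1) - target) <= 0:
        hi = mid
    else:
        lo = mid
t_star = (lo + hi) / 2.0
assert abs(psi(t_star, x, p0, p1) - target) < 1e-9
```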
Theorem 3 Suppose $\Gamma$ is a convex class of prior distributions on the unknown parameter $\theta$ of the exponential family of distributions (3). Then there exists a prior distribution $\pi \in \Gamma$ such that $\delta_{PR} = \delta_\pi$, where $\delta_{PR}$ is defined in (9).

Proof. Following the definition of $\underline{\delta}$ and $\overline{\delta}$, consider a small enough $\varepsilon > 0$ and two prior distributions $\pi_0, \pi_1 \in \Gamma$ such that
$$\delta_{\pi_0} < \underline{\delta} + \varepsilon < \delta_{PR} < \overline{\delta} - \varepsilon < \delta_{\pi_1}.$$
Since $H(\cdot)$ is a decreasing function,
$$H(\delta_{\pi_1}) < H(\overline{\delta} - \varepsilon) < H(\delta_{PR}) < H(\underline{\delta} + \varepsilon) < H(\delta_{\pi_0}).$$
Let $\pi_t = t\pi_0 + (1-t)\pi_1$, $t \in [0,1]$, and define $\psi(t) = H(\delta_{\pi_t})$. Note that $\psi(0) = H(\delta_{\pi_1})$ and $\psi(1) = H(\delta_{\pi_0})$. Now, from Lemma 4, the continuity of $\psi(t)$ in $t$ shows that there exists a $t^* \in [0,1]$ such that $\psi(t^*) = H(\delta_{\pi_{t^*}}) = H(\delta_{PR})$, which completes the proof. ✷

A shortcoming of the result in Theorem 3 is that it is not applicable to cases where the class of prior distributions depends on a hyper-parameter whose range is connected. For this case, we prove Lemma 5 and Theorem 4, which are simple extensions of Lemma 3.2 and Proposition 3.2 of Rios Insua et al. (1995). The proof of Lemma 5 is essentially similar to the proof of Lemma 3.2 of Rios Insua et al. (1995); the same is true of Theorem 4. Nonetheless, we provide the proofs in the Appendix for the sake of completeness. Let
$$\psi(\pi) = \frac{\int_\Theta H(\theta) f(x|\theta)\pi(\theta)\, d\theta}{\int_\Theta f(x|\theta)\pi(\theta)\, d\theta} = \frac{r(\pi)}{s(\pi)}. \qquad (17)$$
Consider $d(\pi, \pi') = \sup_\Theta |\pi(\theta) - \pi'(\theta)|$, the usual $l_\infty$ distance between the prior densities $\pi$ and $\pi'$, where $H(t) = \beta'(t)/\beta(t)$ is defined as before.

Lemma 5 Suppose that $\int_\Theta H(\theta) f(x|\theta)\, d\theta$ exists and is finite. Then $\psi(\pi)$ is continuous in $\pi$, in the topology generated by the $l_\infty$ distance.

Theorem 4 Let $\Gamma = \{\pi_\alpha: \alpha \in \Lambda\}$, where $\Lambda$ is a connected set and the $\pi_\alpha$'s are densities.
Under the conditions of Lemma 5 and the assumption that $\alpha_n \to \alpha$ implies $d(\pi_{\alpha_n}, \pi_\alpha) \to 0$, there exists a prior distribution $\pi \in \Gamma$ such that $\delta_{PR} = \delta_\pi$; that is, the PRGM estimator (9) is Bayes.

In the following lemma, we provide a sufficient condition under which the PRGM (or intrinsic PRGM) estimator is Bayes with respect to the same prior in the underlying class of prior distributions, regardless of the observed value of $x$.

Lemma 6. Let $\Gamma = \{\pi_\alpha : \alpha \in [\alpha_1, \alpha_2]\}$ be the class of prior distributions. Suppose the Bayes estimator $\Psi(\alpha, x) = H^{-1}\{E[H(\theta)\mid x]\}$ is a differentiable function of the hyper-parameter $\alpha$ and the observed value $x$. Assume that we are under the conditions of Theorem 4. If
\[
\frac{\partial}{\partial x}\Psi(\alpha, x) = \frac{\partial}{\partial x}\left[\frac{\Psi(\alpha_1,x) H(\Psi(\alpha_1,x)) - \Psi(\alpha_2,x) H(\Psi(\alpha_2,x)) - \log\frac{\beta(\Psi(\alpha_1,x))}{\beta(\Psi(\alpha_2,x))}}{H(\Psi(\alpha_1,x)) - H(\Psi(\alpha_2,x))}\right] \tag{18}
\]
has a constant solution in $\alpha$, then there is a data-independent prior $\pi_\alpha \in \Gamma$ resulting in the PRGM estimate as the Bayes estimate of the natural parameter $\theta$ of the exponential family (3) under the intrinsic loss function (4).

Proof. Under the conditions of Theorem 4, there exists a solution $\alpha(x)$ such that the PRGM estimator (9) is Bayes with respect to the prior $\pi_{\alpha(x)} \in \Gamma$ under the intrinsic loss function (4). That is,
\[
\Psi(\alpha(x), x) = \frac{\Psi(\alpha_1,x) H(\Psi(\alpha_1,x)) - \Psi(\alpha_2,x) H(\Psi(\alpha_2,x)) - \log\frac{\beta(\Psi(\alpha_1,x))}{\beta(\Psi(\alpha_2,x))}}{H(\Psi(\alpha_1,x)) - H(\Psi(\alpha_2,x))}.
\]
Differentiating this equation with respect to $x$ leads to
\[
\frac{\partial}{\partial \alpha}\Psi(\alpha(x), x)\,\frac{d\alpha(x)}{dx} + \frac{\partial}{\partial x}\Psi(\alpha(x), x) = \frac{\partial}{\partial x}\left[\frac{\Psi(\alpha_1,x) H(\Psi(\alpha_1,x)) - \Psi(\alpha_2,x) H(\Psi(\alpha_2,x)) - \log\frac{\beta(\Psi(\alpha_1,x))}{\beta(\Psi(\alpha_2,x))}}{H(\Psi(\alpha_1,x)) - H(\Psi(\alpha_2,x))}\right].
\]
If $\alpha(x)$ is data independent, i.e., $\alpha(x) = \alpha$, then $\frac{d\alpha(x)}{dx} = 0$.
Now, the desired value of $\alpha$ is the constant solution to equation (18), which yields a data-independent prior for which the PRGM estimator is Bayes. $\Box$

Example 8. In Example 1, condition (18) reduces to condition (5) in Proposition 3.3 of Rios Insua et al. (1995), namely
\[
2\,\frac{\partial}{\partial x}\Psi(\alpha, x) = \frac{\partial}{\partial x}\Psi(\alpha_1, x) + \frac{\partial}{\partial x}\Psi(\alpha_2, x).
\]
Now, consider the class $\Gamma = \{\pi_{\alpha,\lambda_0} : \alpha \in [\alpha_1, \alpha_2],\ \lambda_0 \text{ fixed}\}$ of conjugate priors, where $\pi_{\alpha,\lambda_0}$ is given by (12) with $\theta = \mu$ and $\beta(\theta) = e^{-\theta^2/2}$. Here, the Bayes estimator of $\theta$ is $\Psi(\alpha, X) = \delta_{\pi_{\alpha,\lambda_0}}(X) = \frac{X - \lambda_0}{\alpha + 1}$. It is easy to see that the PRGM estimator of $\theta$, given by
\[
\delta_{PR}(X) = \frac{1}{2}\left(\frac{X - \lambda_0}{\alpha_1 + 1} + \frac{X - \lambda_0}{\alpha_2 + 1}\right),
\]
is Bayes with respect to the data-independent prior $\pi_{\alpha^*,\lambda_0} \in \Gamma$, where $\alpha^*$ is the solution of
\[
\frac{2}{\alpha^* + 1} = \frac{1}{\alpha_1 + 1} + \frac{1}{\alpha_2 + 1}.
\]
That is, $\alpha^* = \frac{\alpha_1 + \alpha_2 + 2\alpha_1\alpha_2}{\alpha_1 + \alpha_2 + 2} \in [\alpha_1, \alpha_2]$ and $\delta_{PR}(X) = \frac{X - \lambda_0}{\alpha^* + 1} = \delta_{\pi_{\alpha^*,\lambda_0}}(X)$.

Example 9. In Example 4, condition (18) reduces to
\[
\frac{\partial}{\partial x}\Psi(\alpha, x) = \frac{\partial}{\partial x}\left[\frac{\log\frac{1}{\Psi(\alpha_1,x)} - \log\frac{1}{\Psi(\alpha_2,x)}}{\frac{1}{\Psi(\alpha_1,x)} - \frac{1}{\Psi(\alpha_2,x)}}\right].
\]
Now, consider the class $\Gamma_1 = \{\pi_{\alpha,\lambda_0}(\theta) : \alpha \in [\alpha_1, \alpha_2],\ \lambda_0 \text{ fixed}\}$ of conjugate priors on $\theta$. Here, the Bayes estimator of $\theta$ with respect to the prior $\pi_{\alpha,\lambda_0}(\theta)$ is $\Psi(\alpha, X) = \delta_{\pi_{\alpha,\lambda_0}}(X) = \frac{\alpha + 1}{\lambda_0 + X}$. The PRGM estimator of $\theta$ is then Bayes with respect to a data-independent prior $\pi_{\alpha^*,\lambda_0}(\theta) \in \Gamma_1$ if there exists a data-independent solution $\alpha^*$ to the equation
\[
-\frac{\alpha^* + 1}{(\lambda_0 + X)^2} = -\log\left(\frac{\alpha_1 + 1}{\alpha_2 + 1}\right)\frac{(\alpha_1 + 1)(\alpha_2 + 1)}{\alpha_1 - \alpha_2}\,\frac{1}{(\lambda_0 + X)^2}.
\]
A straightforward calculation shows that
\[
\alpha^* = \frac{(\alpha_1 + 1)(\alpha_2 + 1)}{\alpha_1 - \alpha_2}\log\left(\frac{\alpha_1 + 1}{\alpha_2 + 1}\right) - 1 \in [\alpha_1, \alpha_2].
\]
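The closed-form hyper-parameters $\alpha^*$ in Examples 8 and 9 are simple to verify numerically. The following sketch uses arbitrary illustrative values of $\alpha_1$, $\alpha_2$, $\lambda_0$, and $x$ (none taken from the paper) to confirm that each $\alpha^*$ satisfies its defining equation, lies in $[\alpha_1, \alpha_2]$, and reproduces the PRGM estimate.

```python
# Numerical check of the data-independent hyper-parameters alpha* in
# Examples 8 and 9.  The values of alpha_1, alpha_2, lambda_0, x below
# are arbitrary illustrative choices, not taken from the paper.
import math

a1, a2, lam0, x = 0.5, 4.0, 1.0, 3.0

# Example 8 (normal case): Psi(alpha, x) = (x - lam0)/(alpha + 1) and
# alpha* solves 2/(alpha* + 1) = 1/(a1 + 1) + 1/(a2 + 1), i.e.
# alpha* + 1 is the harmonic mean of a1 + 1 and a2 + 1.
astar8 = (a1 + a2 + 2 * a1 * a2) / (a1 + a2 + 2)
assert abs(2 / (astar8 + 1) - (1 / (a1 + 1) + 1 / (a2 + 1))) < 1e-12
assert min(a1, a2) <= astar8 <= max(a1, a2)
prgm8 = 0.5 * ((x - lam0) / (a1 + 1) + (x - lam0) / (a2 + 1))
assert abs(prgm8 - (x - lam0) / (astar8 + 1)) < 1e-12   # PRGM = Bayes

# Example 9 (Stein loss): Psi(alpha, x) = (alpha + 1)/(lam0 + x) and
# 1/(alpha* + 1) is the logarithmic mean of 1/(a1 + 1) and 1/(a2 + 1).
astar9 = (a1 + 1) * (a2 + 1) / (a1 - a2) * math.log((a1 + 1) / (a2 + 1)) - 1
assert min(a1, a2) <= astar9 <= max(a1, a2)
p, q = 1 / (a1 + 1), 1 / (a2 + 1)
log_mean = (p - q) / (math.log(p) - math.log(q))
assert abs(1 / (astar9 + 1) - log_mean) < 1e-12
```

Both checks mirror the structure of the solutions: a harmonic mean of $\alpha_i + 1$ in Example 8, and a logarithmic mean of the reciprocals in Example 9.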
Therefore, the PRGM estimator of $\theta$ under the Stein loss function can be obtained as the Bayes estimator of $\theta$ with respect to the prior distribution $\pi_{\alpha^*,\lambda_0}(\theta) \in \Gamma_1$ as follows:
\[
\delta^{\Gamma_1}_{PR}(X) = \frac{(\alpha_1 + 1)(\alpha_2 + 1)}{\alpha_1 - \alpha_2}\log\left(\frac{\alpha_1 + 1}{\alpha_2 + 1}\right)\frac{1}{\lambda_0 + X} = \frac{\alpha^* + 1}{\lambda_0 + X} = \delta_{\pi_{\alpha^*,\lambda_0}}(X).
\]
Similarly, in Example 6, one can easily show that the intrinsic PRGM estimator $\delta^{\Gamma^J_1}_{IPR}(X)$ is the Bayes estimator of $\theta$ under the Stein loss function with respect to the prior distribution $\pi^J_{\alpha^{**},\lambda_0} \in \Gamma^J_1$, where
\[
\alpha^{**} = \frac{\alpha_1\alpha_2}{\alpha_1 - \alpha_2}\log\left(\frac{\alpha_1}{\alpha_2}\right).
\]
Note that $1/\alpha^{**}$ is the logarithmic mean of $1/\alpha_1$ and $1/\alpha_2$, and $\alpha^{**} \in [\alpha_1, \alpha_2]$.

5 Concluding Remarks

Invariant estimators are often desired in practice. In this paper, we have provided general results concerning the PRGM estimation of the natural parameter of the one-parameter exponential family of distributions under intrinsic loss functions. The PRGM estimators are shown to be invariant to one-to-one smooth reparameterizations under intrinsic loss functions and the class of Jeffrey's conjugate prior distributions. Moreover, when the class of priors is convex or depends on a hyper-parameter belonging to a connected set, we show that the obtained PRGM estimators can be Bayes with respect to prior distributions in the underlying class of priors. Several examples are provided to clarify the results.

Acknowledgements

Mohammad Jafari Jozani gratefully acknowledges the partial support of the Natural Sciences and Engineering Research Council of Canada. This work was done during the second author's visit to the Department of Statistics, University of Manitoba.

6 Appendix

6.1 Proof of Lemma 5

Suppose that $d(\pi, \pi') < \epsilon$. Then, for all $\theta \in \Theta$, $\pi(\theta) - \epsilon \le \pi'(\theta) \le \pi(\theta) + \epsilon$, and so
\[
f(x\mid\theta)\pi(\theta) - \epsilon f(x\mid\theta) \le f(x\mid\theta)\pi'(\theta) \le f(x\mid\theta)\pi(\theta) + \epsilon f(x\mid\theta).
\]
(19)

Integrating (19) over $\theta$, we get
\[
s(\pi) - \epsilon\int_\Theta f(x\mid\theta)\,d\theta \le s(\pi') \le s(\pi) + \epsilon\int_\Theta f(x\mid\theta)\,d\theta.
\]
Let $\theta_0 \in \Theta$ be such that $H(\theta) = \frac{\beta'(\theta)}{\beta(\theta)} > 0$ for all $\theta < \theta_0$ and $H(\theta) \le 0$ for all $\theta \ge \theta_0$. For $\theta < \theta_0$, multiplying (19) by $H(\theta) \ge 0$ results in
\[
H(\theta) f(x\mid\theta)\pi(\theta) - \epsilon H(\theta) f(x\mid\theta) \le H(\theta) f(x\mid\theta)\pi'(\theta) \le H(\theta) f(x\mid\theta)\pi(\theta) + \epsilon H(\theta) f(x\mid\theta), \tag{20}
\]
while for $\theta \ge \theta_0$ we have
\[
H(\theta) f(x\mid\theta)\pi(\theta) + \epsilon H(\theta) f(x\mid\theta) \le H(\theta) f(x\mid\theta)\pi'(\theta) \le H(\theta) f(x\mid\theta)\pi(\theta) - \epsilon H(\theta) f(x\mid\theta). \tag{21}
\]
Using (20) and (21) and integrating over $\theta$ leads to
\[
r(\pi) - \epsilon\int_\Theta |H(\theta)| f(x\mid\theta)\,d\theta \le r(\pi') \le r(\pi) + \epsilon\int_\Theta |H(\theta)| f(x\mid\theta)\,d\theta.
\]
Since we assumed that $\int_\Theta |H(\theta)| f(x\mid\theta)\,d\theta = K_1 < \infty$ and $\int_\Theta f(x\mid\theta)\,d\theta = K_2 < \infty$, we obtain
\[
\frac{r(\pi) - \epsilon K_1}{s(\pi) + \epsilon K_2} \le \frac{r(\pi')}{s(\pi')} \le \frac{r(\pi) + \epsilon K_1}{s(\pi) - \epsilon K_2}.
\]
Letting $\epsilon \to 0$ gives $\frac{r(\pi')}{s(\pi')} \to \frac{r(\pi)}{s(\pi)}$; therefore, $\psi(\pi') \to \psi(\pi)$. $\Box$

6.2 Proof of Theorem 4

Consider $\pi_0 = \pi_{\alpha_0}$ and $\pi_1 = \pi_{\alpha_1}$. By the connectedness of $\Lambda$, there is a continuous path $g(t) \in \Lambda$, $t \in [0,1]$, such that $g(0) = \alpha_0$ and $g(1) = \alpha_1$. Let $\psi(t) = \psi(\pi_{g(t)}) = H(\delta_{\pi_{g(t)}})$, $t \in [0,1]$. Since $\psi(t)$ is a continuous function of $t$, there is a $t^* \in [0,1]$ such that $\psi(t^*) = H(\delta_{PR})$, so that $\pi_{g(t^*)} \in \Gamma$ is the prior distribution we were looking for. $\Box$

References

Berger, J.O. (1994). An overview of robust Bayesian analysis. Test, 3, 5–124.

Bernardo, J.M. (2011). Integrated objective Bayesian estimation and hypothesis testing (with discussion). In Bayesian Statistics 9 (eds. J.M. Bernardo, M.J. Bayarri, J.O. Berger, A.P. Dawid, D. Heckerman, A.F.M. Smith and M. West). Oxford University Press, 1–68.

Bernardo, J.M. and Smith, A.F.M. (1994). Bayesian Theory. Chichester: Wiley.

Boratyńska, A. (2002).
Posterior regret gamma-minimax estimation in a normal model with asymmetric loss function. Acta Mathematicae, 29, 7–13.

Boratyńska, A. (2006). Robust Bayesian prediction with asymmetric loss function in Poisson model of insurance risk. Acta Universitatis Lodziensis, Folia Oeconomica, 196, 123–138.

Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Annals of Statistics, 7, 269–281.

Druilhet, P. and Pommeret, D. (2012). Invariant conjugate analysis for exponential families. Bayesian Analysis, 7, 235–248.

Gelman, A. (2004). Parameterization and Bayesian modelling. Journal of the American Statistical Association, 99, 537–545.

Gómez-Déniz, E. (2009). Some Bayesian credibility premiums obtained by using posterior regret gamma-minimax methodology. Bayesian Analysis, 4, 223–242.

Gutiérrez-Peña, E. (1992). Expected logarithmic divergence for exponential families. In Bayesian Statistics 4 (eds. J.M. Bernardo, J.O. Berger, A.P. Dawid and A.F.M. Smith). Oxford University Press, 669–674.

Gutiérrez-Peña, E. and Smith, A.F.M. (1995). Conjugate parametrizations for natural exponential families. Journal of the American Statistical Association, 90, 1347–1356.

Jafari Jozani, M. and Parsian, A. (2008). Posterior regret Γ-minimax estimation and prediction based on k-record data under entropy loss function. Communications in Statistics: Theory and Methods, 37(14), 2202–2212.

Robert, C.P. (1996). Intrinsic loss functions. Theory and Decision, 40, 192–214.

Rios Insua, D. and Ruggeri, F. (2000). Robust Bayesian Analysis. Lecture Notes in Statistics 152, Springer-Verlag, New York.

Rios Insua, D., Ruggeri, F., and Vidakovic, B. (1995). Some results on posterior regret Γ-minimax estimation. Statistics & Decisions, 13, 315–351.

Zen, M., and DasGupta, A. (1993).
Estimating a binomial parameter: Is robust Bayes real Bayes? Statistics & Decisions, 11, 37–60.