Secure Lossless Compression with Side Information

Deniz Gündüz*†, Elza Erkip*‡, H. Vincent Poor*
*Dept. of Electrical Engineering, Princeton University, Princeton, NJ, 08544
†Dept. of Electrical Engineering, Stanford University, Stanford, CA, 94305
‡Dept. of Electrical and Computer Engineering, Polytechnic University, Brooklyn, NY, 11201
Email: *{dgunduz, poor}@princeton.edu, ‡elza@poly.edu

Abstract—Secure data compression in the presence of side information at both a legitimate receiver and an eavesdropper is explored. A noise-free, limited rate link between the source and the receiver, whose output can be perfectly observed by the eavesdropper, is assumed. As opposed to the wiretap channel model, in which secure communication can be established by exploiting the noise in the channel, here the existence of side information at the receiver is used. Both coded and uncoded side information are considered. In the coded side information scenario, inner and outer bounds on the compression-equivocation rate region are given. In the uncoded side information scenario, the availability of the legitimate receiver's and the eavesdropper's side information at the encoder is considered, and the compression-equivocation rate region is characterized for these cases. It is shown that side information at the encoder can increase the equivocation rate at the eavesdropper. Hence, side information at the encoder is shown to be useful in terms of security; this is in contrast with the pure lossless data compression case, where side information at the encoder would not help.

I. INTRODUCTION

Consider a sensor network in which multiple sensors observe an underlying phenomenon that needs to be reconstructed at an access point.
While some sensors might have secure (possibly wired) connections to the access point, others might be transmitting over the wireless medium, which can be accessed by an adversary trying to obtain information about the underlying phenomenon. Furthermore, this adversary might have its own observation of the main source. Our goal is to explore the security issues in this sensor network scenario.

Our model is a simplified version of the general problem, in which we assume a single sensor (Alice) having direct access to the underlying source that needs to be transmitted to the access point (Bob) reliably and securely. Furthermore, we assume an idealized noise-free channel whose output can also be observed by the eavesdropper (Eve). If no side information is available to Bob, then we cannot achieve any level of security. However, if we assume the existence of a nearby sensor (Charlie) having access to correlated side information about Alice's source and a secure limited-rate link to Bob, this sensor might enable secure transmission of Alice's source using its own secure link (see Fig. 1). Our goal is to characterize the capacities of error-free communication links from Alice and Charlie to Bob such that Alice's information can be reliably transmitted to Bob, while keeping Eve's information about the source limited.

This research was supported in part by the US National Science Foundation under Grants CCF-04-30885, CCF-06-35177, CCF-07-28208, and CNS-06-25637.

[Fig. 1. Side information of Bob is provided by Charlie, who has access to his own correlated side information.]

Secure communication over noisy channels in the presence of a wiretapper has recently attracted considerable interest.
Information theoretic security in this context is defined through the equivocation rate at the wiretapper, which can be roughly defined as the uncertainty of the wiretapper about the message after observing the channel output. In his pioneering work [1], Wyner introduced the wire-tap channel and showed that it is possible to transmit at a positive rate with perfect secrecy, assuming the wiretapper's channel is physically degraded with respect to the receiver's. Wyner's analysis was later extended to more general broadcast channels in [2], which characterizes the capacity-equivocation rate region. Various extensions of the wiretap channel model to multiuser scenarios and fading channels have recently been investigated [3], [4], [5].

In the wiretap channel model, the potential for secure communication arises from the fact that the intended receiver has a better quality communication channel than the wiretapper [2]. In our model, since the communication channels are not noisy, the techniques of [2] do not apply; however, it is still possible to achieve security when Bob has higher quality side information than Eve, as in [6], [7]. In [6], Merhav proved a source-channel separation theorem for the wiretap channel assuming both the channel and the side information of the wiretapper are physically degraded. Recently, Prabhakaran and Ramchandran [7] considered the arbitrarily correlated side information case, focusing only on the leakage rate to the eavesdropper. They find the minimum leakage rate and, through an example, argue that the availability of Bob's side information to Alice might increase Eve's uncertainty about Alice's source. Secure compression of two correlated sources is considered in [10], where the eavesdropper has access to only one of the compressed bit streams.
Our work is also closely related to the secret key capacity model of [8], [9], where correlated sources are used for secure key generation. However, our goal here is not to generate a secret key between Alice and Bob. Instead, we wish to communicate Alice's source to Bob securely.

In this paper, we first consider the case in which the side information of Bob is provided by Charlie over a noise-free secure channel. After giving inner and outer bounds for the set of achievable compression-equivocation rates for this setup, we focus on the case in which the Charlie-Bob link has enough capacity for Bob to obtain Charlie's side information losslessly. For this scenario, which also corresponds to uncoded side information, we consider cases in which either or both of Bob's and Eve's side information may be available to Alice. We show that, in the secure compression model, as opposed to the usual lossless compression where side information at the encoder does not improve the performance, the availability of side information to Alice has the potential of improving the secrecy performance. We generalize the characterization of the achievable compression and equivocation rates to all the side information cases and provide illustrative examples.

II. SYSTEM MODEL

We assume that Alice has access to an N-length source sequence A^N, which she wants to transmit to Bob reliably over a noise-free, finite capacity channel. Alice's transmission will also be perfectly received by an eavesdropper called Eve. We assume that Eve has her own correlated side information E^N. On the other hand, a helper, called Charlie, has access to correlated side information C^N and a limited rate secure channel to Bob (see Fig. 1). We model A^N, C^N, and E^N as being generated independent and identically distributed (i.i.d.) according to the joint probability distribution p_{A,C,E}(a,c,e) over the finite alphabet A × C × E. While Alice wants to transmit her source reliably to Bob, she also wants to maximize the equivocation at Eve, which represents the uncertainty of Eve about A^N after receiving Alice's transmission and combining it with her (Eve's) own side information E^N.

An (R_A, R_C, N) code for secure source compression in this setup is composed of an encoding function at Alice^1, f_A: A^N → {1, 2, ..., 2^{N R_A}}, an encoding function at Charlie, f_C: C^N → {1, 2, ..., 2^{N R_C}}, and a decoding function at Bob, g_N: {1, 2, ..., 2^{N R_A}} × {1, 2, ..., 2^{N R_C}} → A^N. The equivocation rate of this code is defined as

  (1/N) H(A^N | f_A(A^N), E^N),   (1)

and the error probability of the code has the usual definition:

  P_e^N = P( g(f_A(A^N), f_C(C^N)) ≠ A^N ).   (2)

^1 To keep the presentation simple, here we assume deterministic coding; but, similar to [8], randomized coding can be considered by assuming that Alice, Bob and Charlie initially generate independent random variables and keep the rest of the coding scheme deterministic. Proofs would follow similarly.

Definition 2.1: We say that (R_A, R_C, ∆) is achievable if, for any ε > 0, there exists an (R_A, R_C, N) code such that H(A^N | f_A(A^N), E^N) ≥ N∆ and P_e^N < ε.

III. CODED AND UNCODED SIDE INFORMATION AT BOB

In this section, we give inner and outer bounds on the set of all achievable (R_A, R_C, ∆) triplets. In general, these bounds do not match.

Theorem 3.1: For the setup above, (R_A, R_C, ∆) is achievable if

  R_A ≥ H(A|V),   (3)
  R_C ≥ I(C;V),   (4)
  ∆ ≤ max { I(A;V|U) − I(A;E|U) }, and   (5)
  R_A + ∆ ≥ H(A|E),   (6)

where we maximize over auxiliary random variables V and U that come from the joint distribution p(a,c,e,u,v) = p(a,c,e) p(u|a) p(v|c) with |U| ≤ |A| + 1 and |V| ≤ |C| + 2. Conversely, if (R_A, R_C, ∆) is achievable, then (3)-(6) hold for some auxiliary random variables V and U for which V − C − (A,E) and U − A − (C,E) form Markov chains.

Proof: The proof is given in Appendix I.

We can consider this problem to be a generalization of source coding with coded side information [11], where we have the security constraint in addition to lossless compression. In the achievability of the inner bound given in Appendix I, Alice's encoder, instead of directly binning its observation with respect to the coded side information at Bob, uses an auxiliary codebook generated by U to send her observation and create higher equivocation at Eve. This auxiliary codebook generation resembles lossy source coding with coded side information [12], for which the single letter characterization of the rate region remains an open problem. Similar to the inner and outer bounds for that problem [13], our inner and outer bounds differ in the joint distribution of the auxiliary random variables.

A special case of the above theorem is obtained when we assume that R_C ≥ H(C), that is, the side information C^N of Charlie can be recovered by Bob with an arbitrarily small probability of error. In this scenario, in order to keep the presentation simple, we can assume that a side information sequence B^N is available directly to Bob, where B^N = C^N with high probability (see Fig. 2 with both switches open).
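In this special case, Alice's rate constraint reduces to the Slepian-Wolf rate H(A|B), achieved by random binning of the source sequences. As a rough numerical illustration only (our own toy sketch, not the paper's construction), the following Python simulation bins short binary sequences with a fixed hash standing in for random binning, and lets a decoder with erased side information search its bin for the unique consistent sequence; the parameters (N = 14, p_B = 0.2, 128 bins, i.e., rate 0.5 bit/symbol > H(A|B) = p_B) are arbitrary choices for the sketch.

```python
import itertools
import random

random.seed(0)
N, p_B, n_bins = 14, 0.2, 2 ** 7  # rate 7/14 = 0.5 bit/symbol > H(A|B) = 0.2

def bin_index(seq):
    # "Random" binning, realized here as a fixed hash into n_bins bins.
    return hash(seq) % n_bins

def decode(idx, b):
    # Enumerate every source sequence consistent with the erased side
    # information b; declare success only if exactly one lands in bin idx.
    erased = [i for i, x in enumerate(b) if x == 'e']
    hits = []
    for fill in itertools.product((0, 1), repeat=len(erased)):
        cand = list(b)
        for i, v in zip(erased, fill):
            cand[i] = v
        cand = tuple(cand)
        if bin_index(cand) == idx:
            hits.append(cand)
    return hits[0] if len(hits) == 1 else None

trials, ok = 200, 0
for _ in range(trials):
    a = tuple(random.randint(0, 1) for _ in range(N))           # A^N
    b = tuple(x if random.random() > p_B else 'e' for x in a)   # B^N
    ok += decode(bin_index(a), b) == a
success_rate = ok / trials  # close to 1 when the rate exceeds H(A|B)
```

At this blocklength the decoder still fails occasionally (two candidates colliding in one bin); the vanishing error probability of the theorem only emerges as N grows with the rate held above H(A|B).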
For this uncoded side information case, the decoding function at Bob is replaced by g_N: {1, 2, ..., 2^{N R_A}} × B^N → A^N. Achievability is now defined similarly, for an (R_A, ∆) pair. We have the following corollary, which follows from Theorem 3.1. The proof of this special case (assuming no rate limitations between Alice and Bob) is also given in [7].

Corollary 3.2: For uncoded side information B^N at Bob, (R_A, ∆) is an achievable rate-equivocation pair if and only if

  R_A ≥ H(A|B), and   (7)
  ∆ ≤ max { I(A;B|U) − I(A;E|U) },   (8)

where we maximize over auxiliary random variables U such that U − A − (B,E) form a Markov chain and |U| ≤ |A| + 1.

[Fig. 2. Uncoded side information at Bob. The states of switches S_B and S_E model different scenarios in terms of the side information at the encoder.]

While Corollary 3.2 requires an auxiliary codebook generated by U in the general case to conceal the source from the eavesdropper, it is sometimes possible that ordinary Slepian-Wolf binning achieves the highest possible security in terms of equivocation, i.e., (8) is maximized by a constant U. Some definitions are in order.

Definition 3.1: We say that the side information B is less noisy than the side information E if

  I(U;E) ≤ I(U;B)   (9)

for every probability distribution of the form p(a,b,e,u) = p(a,b,e) p(u|a).

Definition 3.2: Side information E is said to be physically degraded with respect to B if A − B − E form a Markov chain. We say E is stochastically degraded with respect to B if there exists a joint probability distribution p_{A B̃ Ẽ} such that p_{A B̃} = p_{AB}, p_{A Ẽ} = p_{AE}, and A − B̃ − Ẽ is a Markov chain.

The less noisy condition is strictly weaker than the stochastically degraded condition [14].
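Definition 3.2 can be made concrete with the erasure side information used in the example of this section: when p_E > p_B, passing Bob's observation through one further erasure channel with erasure probability q = (p_E − p_B)/(1 − p_B) reproduces Eve's marginal p_{AE}, exhibiting the Markov chain A − B̃ − Ẽ. The construction of q below is our own illustration, not stated in the text; the sketch just checks the composed transition probabilities.

```python
# Stochastic degradedness check for erased side information (p_E > p_B).
# Bob: B = A w.p. 1 - p_B, else 'e'.  Degrading channel B -> E':
# pass a symbol through w.p. 1 - q, erase w.p. q (an erased B stays erased).
p_B, p_E = 0.2, 0.6
q = (p_E - p_B) / (1 - p_B)  # assumed extra erasure probability

# Composed channel A -> E': matches A only if neither stage erased.
prob_match = (1 - p_B) * (1 - q)   # should equal 1 - p_E
prob_erase = p_B + (1 - p_B) * q   # should equal p_E

assert abs(prob_match - (1 - p_E)) < 1e-12
assert abs(prob_erase - p_E) < 1e-12
```

Since p(E' = a | A = a) = 1 − p_E and p(E' = e | A = a) = p_E for every a, the composed pair (A, E') has exactly Eve's joint distribution, as Definition 3.2 requires.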
Furthermore, the compression-equivocation rate region depends on the joint distribution p_{ABE} only via its marginals p_{AB} and p_{AE}. Hence, physical degradation and stochastic degradation are equivalent in this scenario.

Corollary 3.3: For uncoded side information at Bob, if Bob has less noisy side information than Eve, then an (R_A, ∆) pair is achievable if and only if

  R_A ≥ H(A|B), and   (10)
  ∆ ≤ I(A;B) − I(A;E).   (11)

Proof: Achievability follows simply by letting U be constant in Corollary 3.2. For the converse, consider any U with the joint distribution p(u,a,b,e) = p(a,b,e) p(u|a). We have

  [I(A;B) − I(A;E)] − [I(A;B|U) − I(A;E|U)]
    = [I(A;B) − I(A;E)] − [I(A,U;B) − I(B;U) − I(A,U;E) + I(E;U)]   (12)
    = I(B;U|E) − I(E;U|B)   (13)
    = I(B;U) − I(E;U) ≥ 0,   (14)

where the last inequality is due to the less noisy assumption.

Corollary 3.3 for the special case of physically degraded side information at Eve is given in [6] as well. The following corollary, which we state without proof, gives a condition under which no positive equivocation can be achieved.

Corollary 3.4: If Bob's side information is a stochastically degraded version of Eve's side information, then no positive equivocation rate is achievable, and ∆ = 0.

We use the following simple example (suggested in [7]) to illustrate some of our results. Let the original source sequence A^N = (A_1, ..., A_N) available to Alice be an i.i.d. binary sequence of A_i ~ Bernoulli(1/2) random variables. The observation of Bob, B^N = (B_1, ..., B_N), is generated by independently erasing each element of the A^N sequence with probability p_B; that is, B_i = A_i with probability 1 − p_B, and B_i = e with probability p_B. Similarly, the observation E^N = (E_1, ..., E_N) of the eavesdropper Eve is an independently erased version of A^N: E_i = A_i with probability 1 − p_E, and E_i = e with probability p_E.

For p_E > p_B, the side information of Eve is a stochastically degraded version of the side information of Bob. Using Corollary 3.3, we know that a constant U is optimal. Then, the optimal equivocation is ∆ = I(A;B) − I(A;E) = (1 − p_B) − (1 − p_E) = p_E − p_B. When p_B ≥ p_E, B^N is a stochastically degraded version of E^N, and from Corollary 3.4 we get ∆ = 0.

IV. SIDE INFORMATION AVAILABLE TO ALICE

In this section, we consider various cases in which Alice also has access to the side information available to Bob and/or Eve. We know from Slepian-Wolf source coding that the availability of Bob's side information at Alice does not help in terms of compression rates. However, as shown in [7] via a simple example, in the secure compression setup the availability of B^N at Alice potentially enables higher equivocation rates at the eavesdropper. In the following theorem, we characterize the compression-equivocation rate regions for various side information scenarios at Alice.

Theorem 4.1: Consider secure source compression for uncoded side information at Bob as illustrated in Fig. 2. An (R_A, ∆) pair is achievable if and only if

  R_A ≥ H(A|B), and   (15)
  ∆ ≤ max { I(A;B|U) − I(A;E|U) },   (16)

where we maximize over auxiliary random variables U such that the joint distribution p(u,a,b,e) is given in the following table, depending on which switches are closed:

  Closed switches | p(u,a,b,e)
  S_B             | p(a,b,e) p(u|a,b)
  S_E             | p(a,b,e) p(u|a,e)
  S_B and S_E     | p(a,b,e) p(u|a,b,e)

In the case when only the switch S_E is closed, the rate region can be given explicitly as

  R_A ≥ H(A|B) and ∆ ≤ I(A;B|E).   (17)

Proof: The proof resembles that of Theorem 3.1, and is not included due to space limitations.

Note that the availability of either or both of the side information sequences at the transmitter enlarges the space of auxiliary random variables U and potentially results in a higher equivocation rate at the eavesdropper. To illustrate this, consider the random erasure side information example in Section III. Suppose that the observation of Bob, B^N, is available to Alice as well. Alice can transmit only the erased bits of Bob, hence leaking the least amount of information to Eve. As stated in [7], it is possible to show that the optimal auxiliary random variable U satisfies U = A when there is an erasure at Bob, and U is constant otherwise. The optimal equivocation rate in this case^2 is ∆ = p_E (1 − p_B). Note that this equivocation is strictly larger than the one without side information. Furthermore, even if Bob's side information is a stochastically degraded version of Eve's, i.e., p_B > p_E, we are still able to achieve a non-zero equivocation rate if this side information can be provided to Alice as well.

When only the observation of Eve, E^N, is available to Alice, from (17) the optimal equivocation rate is given by I(A;B|E). In the erasure example, the optimal equivocation rate is found to be ∆ = p_E (1 − p_B), which is the same as in the case when only switch S_B is closed. We observe that, for this specific example of erased observations at Bob and Eve, the benefit of having either Bob's or Eve's side information at Alice is the same. For this example, it is also possible to show that, even when both observation sequences are available to Alice, the optimal equivocation rate is still ∆ = p_E (1 − p_B).
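The closed-form equivocations in the erasure example can be checked numerically. The sketch below (helper names are our own) computes I(A;B) and I(A;E) directly from the joint distributions, recovering ∆ = p_E − p_B without encoder side information (Corollary 3.3) and ∆ = p_E(1 − p_B) when Alice sees B^N or E^N.

```python
from math import log2

def mutual_info(joint):
    """I(X;Y) in bits from a dict {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

def erasure_joint(p_erase):
    """A ~ Bernoulli(1/2); observation equals A w.p. 1 - p_erase, else 'e'."""
    j = {}
    for a in (0, 1):
        j[(a, a)] = 0.5 * (1 - p_erase)
        j[(a, 'e')] = 0.5 * p_erase
    return j

p_B, p_E = 0.2, 0.5                     # example values with p_E > p_B
IAB = mutual_info(erasure_joint(p_B))   # = 1 - p_B
IAE = mutual_info(erasure_joint(p_E))   # = 1 - p_E
delta_no_help = IAB - IAE               # = p_E - p_B   (Corollary 3.3)
delta_encoder = p_E * (1 - p_B)         # closed form with B^N or E^N at Alice
```

With p_B = 0.2 and p_E = 0.5 this gives ∆ = 0.3 without encoder side information versus ∆ = 0.4 with it, matching the strict improvement claimed above.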
While there is no difference between physically and stochastically degraded observations when both switches are open, this is no longer true when we consider side information at Alice. In the following corollary, we show that for a physically degraded observation at Eve, the availability of E^N to Alice does not help. This is in contrast to stochastically degraded side information E^N, whose availability at Alice would potentially increase the equivocation rate, as seen in the example above.

Corollary 4.2: If the observation of Eve is a physically degraded version of Bob's side information, i.e., A − B − E form a Markov chain, then providing this observation to Alice would not improve the equivocation rate.

V. CONCLUSION

We have considered secure lossless compression in the presence of an eavesdropper with correlated side information. We have shown that secure communication can be enabled by another agent who has its own correlated side information and a secure link to the legitimate receiver. We have studied scenarios under which secure compression codebooks are identical to Slepian-Wolf codebooks. We have also characterized the compression-equivocation rate regions considering availability of side information at the encoder. We have shown that, while it is useless in the pure lossless compression setup, side information at the encoder may help to increase the equivocation rate in the secure compression model.

^2 There is a typo in the leakage rate of 1 − p_Y p_Z reported in [7]. It should have been 1 − p_Z − p_Y p_Z.

APPENDIX I
PROOF OF THEOREM 3.1

Inner bound: We fix p(u|a) and p(v|c) satisfying the conditions in the theorem. Then we generate 2^{N(I(A;U)+ε_1)} independent codewords of length N, U^N(w_1), w_1 ∈ {1, ..., 2^{N(I(A;U)+ε_1)}}, with distribution ∏_{i=1}^N p(u_i).
We randomly bin all U^N(w_1) sequences into 2^{N(I(A;U|V)+ε_2)} bins, calling them the auxiliary bins. For each codeword U^N(w_1), we denote the corresponding auxiliary bin index as a(w_1). On the other hand, we randomly bin all A^N sequences into 2^{N(H(A|V,U)+ε_3)} bins, calling them the source bins, and denote the corresponding bin index as s(A^N). We also generate 2^{N(I(C;V)+ε_4)} independent codewords V^N(w_2) of length N, w_2 ∈ {1, ..., 2^{N(I(C;V)+ε_4)}}, with distribution ∏_{i=1}^N p(v_i).

For each typical outcome of A^N, Alice finds a jointly typical U^N(w_1). Then she reveals a(w_1), the auxiliary bin index of U^N(w_1), and s(A^N), the source bin index of A^N, to both Bob and Eve; that is, the encoding function f_A of Alice is composed of the pair (a(w_1), s(A^N)). Using standard techniques, it is possible to show that such a unique index pair exists with high probability. The helper, Charlie, observes the outcome of his source C^N, finds a jointly typical V^N, and sends the index w_2 of V^N over the private channel to Bob. With high probability C^N will be a typical outcome, and there will be a unique V^N(w_2) that is jointly typical with C^N. Bob, having access to V^N(w_2) and the auxiliary bin index a(w_1), can find the jointly typical U^N(w_1) correctly with high probability. Then, using V^N(w_2), U^N(w_1) and the source bin index s(A^N), Bob can reliably decode the source sequence A^N. Letting ε_i → 0 for i = 1, 2, 3 and 4, we can make the total communication rate of Alice arbitrarily close to I(A;U|V) + H(A|U,V) = H(A|V), while having an error probability less than ε for sufficiently large N.
The equivocation rate for this scheme can be found as

  (1/N) H(A^N | a(w_1), s(A^N), E^N)
    = (1/N) [ H(A^N) − I(A^N; a(w_1), s(A^N), E^N) ]
    = (1/N) [ H(A^N) − I(A^N; a(w_1), E^N) − I(A^N; s(A^N) | E^N, a(w_1)) ]
    ≥ (1/N) [ H(A^N) − I(A^N; U^N, E^N) − H(s(A^N)) ]   (18)
    = H(A|U,E) − H(A|V,U) − ε_3   (19)
    = I(A;V|U) − I(A;E|U) − ε_3,

where (18) follows from the data processing inequality, and (19) follows from the fact that s(A^N) is a random variable over a set of size 2^{N(H(A|V,U)+ε_3)}. Finally, we also have

  (1/N) H(A^N | a(w_1), s(A^N), E^N)
    = (1/N) [ H(A^N | E^N) − I(A^N; a(w_1), s(A^N) | E^N) ]
    ≥ H(A|E) − (1/N) H(a(w_1), s(A^N))   (20)
    ≥ H(A|E) − R_A.   (21)

Outer bound: Let J ≜ f_A(A^N) and K ≜ f_C(C^N). From Fano's inequality, we have H(A^N | J, K) ≤ N δ(P_e^N), where δ(x) is a non-negative function with lim_{x→0} δ(x) = 0. Define U_i ≜ (J, A^{i−1}, E^{i−1}) and V_i ≜ (K, C^{i−1}). Note that both U_i − A_i − (B_i, E_i) and V_i − C_i − (A_i, E_i) form Markov chains. Then, we have the following chain of inequalities:

  N R_C ≥ H(K) ≥ I(C^N; K) = Σ_{i=1}^N I(C_i; K, C^{i−1})   (22)
        = Σ_{i=1}^N I(C_i; V_i),

where (22) follows from the chain rule of mutual information and the memoryless assumption on C_i. We also have

  N R_A ≥ H(J) ≥ H(J|K) = H(A^N, J | K) − H(A^N | J, K)
    ≥ H(A^N | K) − Nε   (23)
    = Σ_{i=1}^N H(A_i | K, A^{i−1}) − Nε
    ≥ Σ_{i=1}^N H(A_i | K, A^{i−1}, C^{i−1}) − Nε   (24)
    = Σ_{i=1}^N H(A_i | K, C^{i−1}) − Nε   (25)
    = Σ_{i=1}^N H(A_i | V_i) − Nε,

where (23) follows from Fano's inequality and the non-negativity of entropy; (24) follows as A_i − (K, A^{i−1}) − C^{i−1} form a Markov chain; and (25) follows as A_i − (K, C^{i−1}) − A^{i−1} form a Markov chain.
Finally, we can also obtain

  H(A^N | J, E^N)
    = H(A^N | J) − I(A^N; E^N | J)
    = H(A^N | J, K) + I(A^N; K | J) − I(A^N; E^N | J)
    = Σ_{i=1}^N [ I(A_i; K | J, A^{i−1}) − H(E_i | J, E^{i−1}) ] + H(E^N | A^N, J) + Nε   (26)
    ≤ Σ_{i=1}^N [ I(A_i; K | J, A^{i−1}, E^{i−1}) − H(E_i | J, E^{i−1}, A^{i−1}) ] + H(E^N | A^N) + Nε   (27)
    ≤ Σ_{i=1}^N [ I(A_i; K, C^{i−1} | J, A^{i−1}, E^{i−1}) − H(E_i | J, E^{i−1}, A^{i−1}) + H(E_i | A_i) ] + Nε   (28)
    = Σ_{i=1}^N [ I(A_i; V_i | U_i) − H(E_i | U_i) + H(E_i | A_i) ] + Nε   (29)
    = Σ_{i=1}^N [ I(A_i; V_i | U_i) − I(A_i; E_i | U_i) ] + Nε,   (30)

where (26) follows from Fano's inequality and the chain rule of mutual information; (27) follows from the memoryless property of the source and the side information sequences, and the fact that conditioning reduces entropy; (28) follows from the chain rule and non-negativity of mutual information; (29) follows from the definitions of V_i and U_i given above and the fact that conditioning reduces entropy; and (30) follows since U_i − A_i − E_i form a Markov chain.

We define an independent random variable Q uniformly distributed over the set {1, 2, ..., N}, and A = A_Q, E = E_Q, V = (V_Q, Q), and U = (U_Q, Q). Then, from the usual techniques, (3)-(5) follow while V − C − (A,E) and U − A − (C,E) are Markov chains. Finally, we also have

  (1/N) H(A^N | E^N) ≤ (1/N) H(A^N, J | E^N) = (1/N) [ H(J | E^N) + H(A^N | E^N, J) ] ≤ H(J)/N + ∆ ≤ R_A + ∆.

REFERENCES

[1] A. D. Wyner, "The wire-tap channel," Bell Syst. Tech. J., vol. 54, no. 8, pp. 1355-1387, Oct. 1975.
[2] I. Csiszár and J. Körner, "Broadcast channels with confidential messages," IEEE Trans. Inf. Theory, vol. 24, pp. 339-348, May 1978.
[3] E. Tekin and A. Yener, "The Gaussian multiple-access wire-tap channel," submitted to IEEE Trans. Inf. Theory, May 2006.
[4] Y. Liang, H. V. Poor and S. Shamai, "Secure communication over fading channels," IEEE Trans. Inf. Theory, vol. 54, no. 6, June 2008, to appear.
[5] R. Liu, I. Maric, P. Spasojevic and R. Yates, "Discrete memoryless interference and broadcast channels with confidential messages: Secrecy capacity regions," submitted to IEEE Trans. Inf. Theory.
[6] N. Merhav, "Shannon's secrecy system with informed receivers and its application to systematic coding for wiretapped channels," submitted to IEEE Trans. Inf. Theory, 2007.
[7] V. Prabhakaran and K. Ramchandran, "On secure distributed source coding," Proc. IEEE Inf. Theory Workshop, Lake Tahoe, CA, Sept. 2007.
[8] R. Ahlswede and I. Csiszár, "Common randomness in information theory and cryptography. Part I: Secret sharing," IEEE Trans. Inf. Theory, vol. 39, no. 4, pp. 1121-1132, July 1993.
[9] U. Maurer, "Secret key agreement by public discussion from common information," IEEE Trans. Inf. Theory, vol. 39, no. 3, pp. 733-742, May 1993.
[10] W. Luh and D. Kundur, "Separate enciphering of correlated messages for confidentiality in distributed networks," Proc. IEEE Global Commun. Conf., Washington, D.C., Nov. 2007.
[11] A. Wyner, "On source coding with side information at the decoder," IEEE Trans. Inf. Theory, vol. 21, no. 3, pp. 294-300, May 1975.
[12] T. Berger et al., "An upper bound on the rate distortion function for source coding with partial side information at the decoder," IEEE Trans. Inf. Theory, vol. 25, no. 6, pp. 664-666, Nov. 1979.
[13] S. Tung, Multiterminal Source Coding, Ph.D. thesis, Cornell Univ., 1978.
[14] J. Körner and K. Marton, "A source network problem involving the comparison of two channels," Trans. Colloq. Inf. Theory, Keszthely, Hungary, Aug. 1975.