Secrecy via Sources and Channels
Vinod M. Prabhakaran, Krishnan Eswaran, and Kannan Ramchandran
Email: vinodmp@tifr.res.in, krish.eecs@gmail.com, kannanr@eecs.berkeley.edu

Abstract: Alice and Bob want to share a secret key and to communicate an independent message, both of which they desire to be kept secret from an eavesdropper Eve. We study this problem of secret communication and secret key generation when two resources are available: correlated sources at Alice, Bob, and Eve, and a noisy broadcast channel from Alice to Bob and Eve which is independent of the sources. We are interested in characterizing the fundamental trade-off between the rates of the secret message and secret key. We present an achievable solution and prove its optimality for the parallel channels and sources case when each sub-channel and source component satisfies a degradation order (either in favor of the legitimate receiver or the eavesdropper). This includes the case of jointly Gaussian sources and an additive Gaussian channel, for which the secrecy region is evaluated.

I. INTRODUCTION

Alice has a secret message she wants to send to Bob, but unfortunately, she must do so in the presence of Eve, an eavesdropper. This paper explores a new dimension of this familiar problem: how can Alice efficiently utilize two disparate resources to keep this message secret from Eve? The first resource is a one-way noisy broadcast channel from Alice to Bob and Eve, and the second is the presence of correlated source observations at Alice, Bob, and Eve. Specifically, we are interested in understanding how to design strategies that "fuse" these resources optimally in order to support secure communication between Alice and Bob.

V. M. Prabhakaran is with the School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai 400005, India. K. Eswaran was with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley CA 94720, USA; he is now with Google Inc., USA. K. Ramchandran is with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley CA 94720, USA.

October 27, 2018 DRAFT

There already exists a body of literature for cases in which only one of these resources is available. Wyner's seminal work, "The Wire-tap Channel" [1], considered secure communication over degraded broadcast channels [2] and was later generalized by Csiszár and Körner [3] to cover all broadcast channels. Analogously, Ahlswede and Csiszár [4] and Maurer [5] recognized that dependent source observations available at the terminals can be used as a resource for generating a secret key, i.e., a uniform random variable shared by Alice and Bob of which Eve is oblivious, if the terminals can communicate over a noiseless public channel (which delivers all its input faithfully to all the terminals, including the eavesdropper). In [4], the secret-key capacity of dependent sources was characterized when a one-way noiseless public channel from Alice to Bob and Eve of unconstrained capacity is available. The characterization for the case when there is a constraint on the capacity of the public channel was later found by Csiszár and Narayan [6] as a special case of their results on a class of common randomness generation problems using a helper. As in the channel setting, one can also exploit distributed sources for sending a secret message.

The present investigation is motivated by wireless sensor networks, in which sensors have access to both a wireless channel and their correlated sensor readings. Note that in such situations, fading can cause the channel characteristics to be more or less favorable to secrecy at different points in time.
Thus, when the channel characteristics are favorable, it can be advantageous for Alice and Bob, instead of (or in addition to) sending a specific secret message, to simply agree on a sequence of private common random bits (a secret key) to be used later when the characteristics are unfavorable. See Khalil et al. [7] for an example of how this can enable a form of secure communication with delay constraints under fading channels. Not surprisingly, it turns out that in some settings, one can achieve higher rates for the secret key than for the more restrictive secret message. The general problem we consider abstracts this issue into a tradeoff between transmitting a uniform source privately (a secret message) and generating private common randomness (a secret key).

A related model for the secret message case was studied by Chen and Vinck [8], who consider a channel with non-causal channel state information at Alice, where the channel is degraded in favor of Bob, as in Wyner's wiretap channel. A more recent work by Khisti, Diggavi and Wornell [9] also examines secret key agreement when non-causal channel state information is available at Alice; the achievability result there coincides with the results in this paper when ours are specialized to that setting. In work independent of and concurrent with ours, Khisti, Diggavi and Wornell [10] also investigate secrecy in a similar setting, but their focus is solely on the question of secret key generation and is limited to the case where only Alice and Bob have source observations. The achievability results and an optimality result in their work coincide with the results in this paper under the specialized setting above. Additionally, a general upper bound on the secret key capacity is provided in [10].
In contradistinction to these other works, our main contributions are (i) an achievable trade-off between secret-key and secret-message rates when both dependent sources and a one-way broadcast channel are available, (ii) a proof of optimality of this trade-off for parallel channels and sources when each sub-channel and source component satisfies a degradation order either in favor of Bob or in favor of Eve, and (iii) evaluation of this optimal trade-off in the Gaussian case.

Section II gives a formal description of the problem setup, and Section III describes the main results presented in this work. Section IV gives an interpretation of the achievability part of the coding theorem. The paper concludes with a discussion and directions for future work in Section V.

Fig. 1. Problem setup: Alice and Bob want to share a key K and an independent message M, both of which they want to be kept secret from Eve. Alice has a memoryless broadcast channel p_{Y,Z|X} to Bob and Eve. Additionally, Alice, Bob, and Eve make correlated memoryless source observations S_A^n, S_B^n, S_E^n.

II. PROBLEM SETUP

Notation: We denote random variables by upper-case letters (e.g., X), their realizations by lower-case letters (e.g., x), and the alphabets over which they take values by calligraphic letters (e.g., X). A vector (X_k, X_{k+1}, ..., X_n) will be denoted by X_k^n. When k = 1, the subscript will be dropped, as in X^n = (X_1, X_2, ..., X_n).

We consider the following model. Alice, Bob and Eve observe, respectively, the dependent memoryless processes (sources) S_{A,k}, S_{B,k}, S_{E,k}, where k = 1, 2, ... is the time index. They have a joint distribution p_{S_A,S_B,S_E} over the alphabet S_A × S_B × S_E.
Independent of these sources, there is a memoryless broadcast channel from Alice to Bob and Eve given by p_{Y,Z|X}, where X_k is the input to the channel, Y_k is Bob's output, and Z_k is Eve's. We will also allow Alice access to a private random variable Φ_A which is not available to Bob and Eve and which is independent of all other random variables. Alice may use this private random variable for purposes of randomization.

For ε > 0, a random variable U is defined to be ε-recoverable from another random variable V if there is a function f such that Pr(U ≠ f(V)) ≤ ε. Suppose the parties make n observations of their sources, and Alice sends an n-length input X^n to the channel. The input is a function of the observation S_A^n, the secret message M, which is uniformly distributed over its alphabet M and independent of the sources and channel, and the private random variable Φ_A available only to Alice. We say that K = g(S_A^n, Φ_A), for some g, is an ε-secret-key if (i) it is ε-recoverable from (S_B^n, Y^n), (ii) it satisfies the secrecy condition¹

  (1/n) I(M, K; Z^n, S_E^n) ≤ ε,   (1)

and (iii) it satisfies the uniformity condition

  (1/n) H(K) ≥ (1/n) log |K| − ε,

where K is the alphabet over which K takes its values. We define (R_{SK,ε}, R_{SM,ε}) to be an ε-achievable rate pair if there is an ε-secret-key K^{(n)} such that (1/n) H(K^{(n)}) = R_{SK,ε}, the secret message M is ε-recoverable from (Y^n, S_B^n), and (1/n) log |M| = R_{SM,ε}. A rate pair (R_SK, R_SM) is said to be achievable if there is a sequence ε_n such that (R_{SK,ε_n}, R_{SM,ε_n}) are ε_n-achievable rate pairs, and as n → ∞, ε_n → 0, R_{SK,ε_n} → R_SK, and R_{SM,ε_n} → R_SM. We define the rate region C to be the set of all achievable rate pairs.

III. RESULTS

In order to state our main results, we will consider a more general setup than the one described above.
Consider a memoryless broadcast channel p_{Y,Z|X,S} with non-causal state information S^n available at the encoder Alice, whose input to the channel is X^n; Bob and Eve receive, respectively, Y^n and Z^n. The state sequence S^n is independent and identically distributed with probability mass function p_S. Note that the setting in Section II is a special case with S_k = S_{A,k}, Y_k = (Y_k, S_{B,k}), and Z_k = (Z_k, S_{E,k}).

¹ A stronger form of secrecy can be achieved by directly invoking the ideas in [11], as will be briefly discussed in Section V.

Let P_joint be the set of all joint distributions p of random variables V, U, X, S, Y, Z such that (i) the Markov chain V − U − (X, S) − (Y, Z) holds, (ii) V is independent of S, and (iii) the joint conditional distribution of (Y, Z) given (X, S) and the marginal distribution of S are consistent with the given channel and source, respectively. Let C_joint denote the set of all achievable rate pairs for this channel. For p ∈ P_joint, let R_joint(p) be the set of all non-negative pairs (R_SK, R_SM) which satisfy the following two inequalities:

  R_SM ≤ I(U; Y) − I(U; S),   (2)
  R_SK + R_SM ≤ I(U; Y | V) − I(U; Z | V).   (3)

We prove the following theorem in Appendix A.

Theorem 1. C_joint ⊇ ∪_{p ∈ P_joint} R_joint(p).   (4)

We obtain our main achievability result as a corollary of the above theorem.
Let P be the set of all joint distributions p of random variables U_1, V_1, V_2, X, Y, Z, S_A, S_B, S_E such that (i) (U_1, S_A, S_B, S_E) and (V_1, V_2, X, Y, Z) are independent, (ii) the two Markov chains U_1 − S_A − (S_B, S_E) and V_2 − V_1 − X − (Y, Z) hold, (iii) the joint distribution of (S_A, S_B, S_E) and the joint conditional distribution of (Y, Z) given X are consistent with the given source and channel, respectively, and (iv) the following inequality holds:

  I(V_1; Y) ≥ I(U_1; S_A) − I(U_1; S_B).   (5)

For p ∈ P, let R(p) be the set of all non-negative pairs (R_SK, R_SM) which satisfy the following two inequalities:

  R_SM ≤ I(V_1; Y) − (I(U_1; S_A) − I(U_1; S_B)),   (6)
  R_SK + R_SM ≤ [I(V_1; Y | V_2) − I(V_1; Z | V_2)]^+ + [I(U_1; S_B) − I(U_1; S_E)]^+,   (7)

where [x]^+ := max(0, x). The next theorem states that all pairs of rates belonging to R(p) are achievable. An interpretation of the result is presented in Section IV.

Theorem 2. C ⊇ ∪_{p ∈ P} R(p).

Remark: It can be shown that in taking the union above, it suffices to consider auxiliary random variables with sufficiently large, but finite, cardinality. In particular, we may restrict the sizes of the alphabets U_1, V_1, V_2 of the auxiliary random variables U_1, V_1, V_2, respectively, to |U_1| = |S_A| + 2, |V_1| = (|X| + 3)(|X| + 1), and |V_2| = |X| + 3. This can be shown using a strengthened form of the Fenchel-Eggleston-Carathéodory theorem [12, pg. 310] (see, for example, [3] for a similar calculation).

Proof of Theorem 2. Set V = V_2 and U = (U_1, V_1) in Theorem 1.
Then we have the following:

  I(U; Y) − I(U; S) = I(V_1; Y) + I(U_1; S_B) − I(U_1; S_A),
  I(U; Y | V) − I(U; Z | V) = I(V_1; Y | V_2) − I(V_1; Z | V_2) + I(U_1; S_B) − I(U_1; S_E).

Note that if I(U_1; S_B) − I(U_1; S_E) ≤ 0, we can enlarge the achievable region by making U_1 independent of S_A. Likewise, if I(V_1; Y | V_2) − I(V_1; Z | V_2) < 0, we can enlarge the region by making V_1 = V_2. Thus, we have established the rate region in Theorem 2 as a special case.

The next theorem states that the above inner bound is tight for the case of parallel channels and sources where each sub-channel and source component satisfies a degradation order (either in favor of the legitimate receiver or in favor of the eavesdropper).

Theorem 3. Consider the following: (i) The channel has two independent components² denoted by F and R: X = (X_F, X_R), Y = (Y_F, Y_R), and Z = (Z_F, Z_R) such that p_{Y_F,Y_R,Z_F,Z_R|X_F,X_R} = p_{Y_F,Z_F|X_F} p_{Y_R,Z_R|X_R}. Moreover, the first sub-channel F is degraded in favor of Bob, which we call forwardly degraded, and the second sub-channel R is degraded in favor of Eve, which we call reversely degraded; i.e., X_F − Y_F − Z_F and X_R − Z_R − Y_R are Markov chains. (ii) The sources also have two independent components, again denoted by F and R: S_A = (S_{A,F}, S_{A,R}), S_B = (S_{B,F}, S_{B,R}), and S_E = (S_{E,F}, S_{E,R}) with p_{S_A,S_B,S_E} = p_{S_{A,F},S_{B,F},S_{E,F}} p_{S_{A,R},S_{B,R},S_{E,R}}. The first component is degraded in favor of Bob and the second in favor of Eve; i.e., S_{A,F} − S_{B,F} − S_{E,F} and S_{A,R} − S_{E,R} − S_{B,R} are Markov chains.

² We denote the channel inputs, outputs, and the sources using bold letters to make this explicit.
Fig. 2. Theorem 3 states that the inner bound to the rate region C established in Theorem 2 is tight if the sources and channels can be decomposed to satisfy a degradation order, either in favor of Bob or of Eve. (The figure depicts the forwardly degraded sub-channel X_F − Y_F − Z_F, the reversely degraded sub-channel X_R − Z_R − Y_R, and the correspondingly degraded source components.)

In this case,

  C = ∪_{p ∈ P̃} R̃(p),

where P̃ is the set of joint distributions of the form p_{V_{2,F},X_F} p_{Y_F,Z_F|X_F} p_{X_R} p_{Y_R,Z_R|X_R} p_{U_{1,F}|S_{A,F}} p_{S_{A,F},S_{B,F},S_{E,F}} p_{S_{A,R},S_{B,R},S_{E,R}}, and R̃(p) is the set of non-negative pairs (R_SK, R_SM) satisfying

  R_SM ≤ I(X_F; Y_F) + I(X_R; Y_R) − (I(U_{1,F}; S_{A,F}) − I(U_{1,F}; S_{B,F})),   (8)
  R_SK + R_SM ≤ I(X_F; Y_F | V_{2,F}) − I(X_F; Z_F | V_{2,F}) + I(U_{1,F}; S_{B,F}) − I(U_{1,F}; S_{E,F}).   (9)

We prove this theorem in Appendix B, where we also show that, as one would expect, the result holds even if we only have stochastic degradation instead of physical degradation. It turns out the result is more general than the form presented above, but these extensions are omitted so that the result can be stated cleanly. The extensions are discussed in greater detail in Section V.

Fig. 3. The scalar Gaussian case with no source observations at Eve: S_B = S_A + N_source, Y = X + N_Bob, and Z = X + N_Eve.

A. The scalar Gaussian case

Let us consider a scalar Gaussian example (Figure 3). Suppose the observations of Alice and Bob are jointly Gaussian. Then, without loss of generality, we can model them as S_B = S_A + N_source, where S_A and N_source are independent zero-mean Gaussians.
Let N_source have unit variance, and let the variance of S_A be SNR_src. Let Eve have no source observation. Suppose that the broadcast channel has additive Gaussian noise with a power constraint on X of SNR_Bob. Let Y = X + N_Bob and Z = X + N_Eve, where N_Bob and N_Eve are Gaussians independent of X, such that N_Bob has unit variance and N_Eve has variance SNR_Bob / SNR_Eve. We have the following proposition, which is plotted in Figure 4 and proved in Appendix C.

Fig. 4. The figure plots the optimal tradeoff between the secret-key rate R_SK (b/sample) and the secret-message rate R_SM (b/sample) from Proposition 4, which is the special case in which there is no source at Eve, for SNR_source = SNR_Bob = 25 dB and SNR_Eve ∈ {−20 dB, 0 dB, 5 dB, 15 dB, 25 dB}; the capacity of the channel to Bob is also marked. The tradeoff curves reveal distinguishing features between the secret-key and secret-message rates. For instance, the largest possible secret-key rate is greater than the largest possible secret-message rate, and the tradeoff between key and message is governed by a curve that is not simply linear.

Proposition 4. The rate region C for this problem is the set of all non-negative (R_SK, R_SM) pairs satisfying

  R_SM ≤ (1/2) log [ (1 + SNR_src)(1 + SNR_Bob) / (1 + SNR_src + min(SNR_Bob, SNR_Eve)) ],
  R_SK ≤ (1/2) log [ ( (1 + SNR_src)(1 + SNR_Bob) exp(−2 R_SM) − SNR_src ) / (1 + min(SNR_Bob, SNR_Eve)) ].

Remark: When Eve also has a source observation jointly Gaussian with the observations of Alice and Bob, the problem is covered by the cases in Theorem 3. However, unlike in the proposition above, we were unable to show that a Gaussian choice of the auxiliary random variables is optimal.
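The boundary in Proposition 4 is easy to evaluate numerically. The following is an illustrative sketch of ours (not from the paper); it assumes the logarithms in the proposition are natural, to match the exp(−2 R_SM) term, so the rates come out in nats/sample:

```python
import math

def prop4_region(snr_src, snr_bob, snr_eve, r_sm):
    """Evaluate the two bounds of Proposition 4 at message rate r_sm.

    Returns (largest feasible R_SM, largest R_SK at r_sm); the second entry
    is None when r_sm exceeds the message-rate bound. Rates in nats/sample.
    """
    m = min(snr_bob, snr_eve)
    r_sm_max = 0.5 * math.log((1 + snr_src) * (1 + snr_bob) / (1 + snr_src + m))
    if r_sm > r_sm_max:
        return r_sm_max, None
    num = (1 + snr_src) * (1 + snr_bob) * math.exp(-2 * r_sm) - snr_src
    return r_sm_max, 0.5 * math.log(num / (1 + m))

# 25 dB source and Bob SNRs, 0 dB Eve SNR (one of the curves in Fig. 4).
s, b, e = 10**2.5, 10**2.5, 1.0
r_sm_max, r_sk_at_zero = prop4_region(s, b, e, 0.0)
_, r_sk_at_max = prop4_region(s, b, e, r_sm_max)
# At zero message rate the key-rate bound is largest; at the maximum
# message rate the key-rate bound collapses to zero, since the numerator
# then equals 1 + min(SNR_Bob, SNR_Eve).
assert r_sk_at_zero > 0 and abs(r_sk_at_max) < 1e-9
```

Multiplying the rates by 1/ln 2 converts them to the b/sample units used in Figure 4.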
Indeed, even for the secret-key problem under jointly Gaussian sources and only a public bit-pipe channel from Alice to Bob and Eve, the optimality of Gaussian auxiliary random variables remains open to the best of our knowledge.

IV. INTUITION BEHIND THEOREM 2: A SEPARATION STRATEGY

In this section, we informally sketch the intuition behind the achievable scheme of Theorem 2. We will briefly describe three examples before proceeding. Examples 1 and 2 highlight well-known achievable strategies in the secrecy literature. The key idea is shown in Example 3; namely, that the strategies in Examples 1 and 2 can be used as building blocks to construct a strategy for Example 3, much in the same way a source code and a channel code can be used as building blocks to construct an achievable strategy in a joint source-channel context. This is what we mean by a separation strategy, which establishes the basic intuition for Theorem 2. The remainder of the section extends this to the more general problem setup of the paper.

Example 1. Suppose Alice has a three-bit noiseless channel (x_1, x_2, x_3) to Bob. Eve can observe only two of the three bits sent by Alice (i.e., (x_1, x_2, *), (x_1, *, x_3), or (*, x_2, x_3)), but not all of them. Alice can use this advantage Bob has over Eve to send a one-bit secret message m ∈ {0, 1} to Bob such that Eve will consider both outcomes equally likely. To do this, Alice may make use of two fair coin tosses (c_1, c_2), each denoted 0 or 1. Alice then chooses her channel inputs (x_1, x_2, x_3) as follows:

  (x_1, x_2, x_3) = (c_1, c_2, c_1 ⊕ c_2 ⊕ m),

where ⊕ denotes XOR. Bob can then decipher m from his three channel outputs simply by XORing all his observations together.
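The scheme of Example 1 is small enough to check exhaustively. The following brute-force sketch (our own illustration, not from the paper) verifies both that Bob decodes m and that every two-bit view Eve might see is consistent with both messages equally often:

```python
from itertools import product

def encode(m, c1, c2):
    # Alice's channel input in Example 1: two fair coins and the message bit.
    return (c1, c2, c1 ^ c2 ^ m)

# Bob sees all three bits and XORs them to recover m.
for m, c1, c2 in product((0, 1), repeat=3):
    x1, x2, x3 = encode(m, c1, c2)
    assert x1 ^ x2 ^ x3 == m

# Eve sees some two of the three positions. For each choice of positions
# and each observed pair of values, count the coin-toss pairs producing it
# under each message: equal counts mean perfect equivocation.
for keep in ((0, 1), (0, 2), (1, 2)):
    counts = {}  # (observed pair, m) -> number of consistent coin tosses
    for m, c1, c2 in product((0, 1), repeat=3):
        view = tuple(encode(m, c1, c2)[i] for i in keep)
        counts[(view, m)] = counts.get((view, m), 0) + 1
    for view in {v for (v, _) in counts}:
        assert counts[(view, 0)] == counts[(view, 1)]
print("Bob decodes m; Eve's two-bit view reveals nothing about m")
```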
Eve, on the other hand, will have perfect equivocation on the value of m regardless of which two bits she sees, since all possible values are equiprobable regardless of the value of m.

The next example highlights how secrecy can be attained in the source setting.

Example 2. (a) Consider the setting in which Alice is allowed to transmit one bit x across a noiseless public channel to Bob and Eve. Furthermore, Alice observes a two-bit string (s_1, s_2) uniformly distributed over the set of all 2-bit strings. Bob observes either the first bit (s_1, *) or the second bit (*, s_2) of Alice's string, but not both, and Alice does not learn which of the two bits Bob observed. Eve observes nothing. Then, Alice and Bob can agree to make the secret key the first bit, and Alice's input to the channel can simply be the XOR of her two bits: x = s_1 ⊕ s_2. Bob then has enough information to determine the secret key, but Eve has perfect equivocation since she is equally likely to see 0 or 1 regardless of the value of the secret key.

(b) Suppose that Alice is now allowed to transmit two bits (x_1, x_2) across the noiseless public channel to Bob and Eve, and the source observations are the same as in part (a). Instead of transmitting a secret key, Alice is given a secret message m ∈ {0, 1} to communicate. Then, Alice can simply transmit

  (x_1, x_2) = (s_1 ⊕ s_2, s_1 ⊕ m),

which in effect uses the first channel symbol to construct a secret key as in part (a), and the second to use it as a one-time pad on the message. Since Bob can decode the secret key as before, Bob discovers the secret message. Eve, on the other hand, has perfect equivocation about m since, regardless of the message, all four values of (x_1, x_2) are equiprobable.

We now provide an example to illustrate how the above strategies can be combined.

Example 3.
Suppose Alice has a three-bit noiseless channel to Bob, and Eve can observe only two of the three bits, as in Example 1. Additionally, Alice and Bob have source observations as in Example 2, where Eve observes no source. The key idea is to combine the strategies used above, except to replace Alice's coin tosses (c_1, c_2) in Example 1 with the input to the public channel from Example 2(b). Since Eve can learn the values of the coin tosses if she observes the first two channel inputs, under this strategy those values function as a public bit pipe. This leads to the following channel inputs:

  (c_1, c_2) = (s_1 ⊕ s_2, s_1 ⊕ m_2),
  (x_1, x_2, x_3) = (c_1, c_2, c_1 ⊕ c_2 ⊕ m_1) = (s_1 ⊕ s_2, s_1 ⊕ m_2, s_2 ⊕ m_1 ⊕ m_2).

With this combined strategy, Alice can send a two-bit secret message (m_1, m_2) to Bob, who can decode m_1 as in Example 1 and m_2 as in Example 2(b). Eve, on the other hand, has perfect equivocation about (m_1, m_2) since, regardless of their values, all possible values are equally likely for whichever channel symbols she can observe.

Example 3 provides the essence of our separation approach for transmitting a secret message and matches the diagram shown in Figure 6:
1) Distill the channel into a public bit-pipe (c_1, c_2) in addition to the private bit-pipe x_3 over which the secret message is sent.
2) Use part of the public bit-pipe to distill the sources and generate a secret key.
3) Use the remainder of the public bit-pipe to send a secret message by using the secret key just generated as a one-time pad.

In the simple example above, we could exploit both the source and the channel to the fullest, as seen by comparing with Examples 1 and 2(b).
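The three-step separation strategy of Example 3 can also be checked exhaustively. This is an illustrative sketch of ours (not from the paper); the helper names are our own:

```python
from itertools import product

def encode(m1, m2, s1, s2):
    # Example 3: the public-pipe symbols (c1, c2) come from Example 2(b),
    # and x3 carries m1 one-time padded as in Example 1.
    c1, c2 = s1 ^ s2, s1 ^ m2
    return (c1, c2, c1 ^ c2 ^ m1)

def bob_decode(x, side_info):
    # side_info = (index, value): which source bit Bob observed and its value.
    x1, x2, x3 = x
    m1 = x1 ^ x2 ^ x3                 # Example 1 decoding
    i, v = side_info
    s1 = v if i == 0 else x1 ^ v      # recover s1 via x1 = s1 xor s2
    return m1, x2 ^ s1                # undo the one-time pad on m2

# Bob decodes (m1, m2) whichever source bit he happened to observe.
for m1, m2, s1, s2 in product((0, 1), repeat=4):
    x = encode(m1, m2, s1, s2)
    for side_info in ((0, s1), (1, s2)):
        assert bob_decode(x, side_info) == (m1, m2)

# Eve sees two of the three channel bits and no source bit: every such view
# is consistent with all four messages (m1, m2) equally often.
for keep in ((0, 1), (0, 2), (1, 2)):
    counts = {}
    for m1, m2, s1, s2 in product((0, 1), repeat=4):
        view = tuple(encode(m1, m2, s1, s2)[i] for i in keep)
        counts[(view, m1, m2)] = counts.get((view, m1, m2), 0) + 1
    for v in {v for (v, _, _) in counts}:
        vals = [counts.get((v, a, b), 0) for a, b in product((0, 1), repeat=2)]
        assert len(set(vals)) == 1 and vals[0] > 0
```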
However, in general, it may not be desirable or even possible for Alice to attempt to convey her source to Bob (for instance, if the conditional entropy of the source at Alice conditioned on that at Bob is larger than the capacity of the channel). In the sequel, we describe how the above strategy maps to this more general setting. The sketch of the strategy follows the spirit of Examples 1, 2, and 3 and, as described above, provides an interpretation of the result as a separation strategy.

1) Case of no sources: secrecy via the channel: Consider the case in which there is a noisy broadcast channel from Alice to Bob and Eve, but there are no sources. Note that this resembles the case studied in Example 1, with the added wrinkle that the channel to Bob may also be noisy. Recall that in Example 1, given sufficiently many fair coin tosses, Alice uses the channel to send a message secretly to Bob. The work of Csiszár and Körner [3] generalizes this approach as a means of providing secrecy for all noisy broadcast channels. They also consider a common message in addition to the private message and characterize the set of all rate pairs such that the common message can be reliably recovered by both Bob and Eve while the private message is recovered reliably by Bob but remains secret from Eve³.

We may consider a slight twist to this setting. We again consider two independent messages, one private as in [3] and the other what we call public, both uniformly distributed over their alphabets, both of which need to be delivered reliably to Bob with the former remaining a secret from Eve. The only difference from the setting of Csiszár and Körner is that we do not require the public message to be reliably recovered by Eve. The following proposition can be proved directly following Csiszár and Körner (also see [13]).

³ In fact, they consider the equivocation rate of Eve as a third parameter and characterize the rate triples, but this is not relevant to our discussion.

Proposition 5.
For any given joint distribution of random variables V_1, V_2, X, Y, Z such that V_2 − V_1 − X − (Y, Z) is a Markov chain and the joint conditional distribution of (Y, Z) given X is consistent with the given channel, the rate pair (R_private, R_public) is achievable for the setting described above, where

  R_private = [I(V_1; Y | V_2) − I(V_1; Z | V_2)]^+,  and
  R_public = I(V_1; Y) − R_private.

The proof is a straightforward adaptation of the achievable strategy in [3]⁴.

2) Case of two noiseless bit pipes: private and public: Now consider the setting in which the channel is deterministic. In particular, the channel is made up of two bit-pipes: (1) a private bit-pipe of rate R_private which delivers its input bits from Alice faithfully and only to Bob, and (2) a public bit-pipe of rate R_public which delivers its input bits from Alice faithfully to both Bob and Eve.

Fig. 5. Consider the case in which Alice and Bob share correlated source observations, and there is both a private bit-pipe from Alice to Bob and a public bit-pipe from Alice to Bob and Eve. This generalizes the problem considered in Example 2, and the strategy considered in that setting generalizes naturally as well.

a) Secret-key only; no source observation at Eve: Consider the goal of generating the largest secret-key rate possible when there is no source observation at Eve⁵. This is reminiscent of Example 2(a), but with two added dimensions not present in that setting. First, there may not be enough rate on the public bit-pipe for Bob to determine Alice's source observation perfectly to generate a secret key.
A modified form of Wyner-Ziv's source coding strategy can be employed to handle this, which simply involves quantizing Alice's source and using that to generate the secret key. Second, in addition to the public bit-pipe, there is also a private bit-pipe. Note that any component sent on the private bit-pipe is automatically a secret key as well. Thus, if part of the bin index is sent on the private bit-pipe, it is also secret from Eve. Then, given an auxiliary random variable U which satisfies the Markov chain U − S_A − S_B, a secret-key rate of (I(U; S_B) + R) + (R_private − R) is achievable if

  I(U; S_A) − I(U; S_B) ≤ R_public + R,  and  R ≤ R_private.

⁴ Roughly, the random coding argument runs as follows: a V_2 codebook of rate I(V_2; Y) is formed, and a conditional V_1 codebook of rate I(V_1; Y | V_2) is formed for each V_2 codeword. The conditional codebooks are binned so that the rate of each bin is I(V_1; Z | V_2). At Alice, a part of the public message bits worth rate I(V_2; Y) selects the V_2 codeword, the private message selects the bin of the corresponding conditional V_1 codebook, and the rest of the public message selects the V_1 codeword within the bin. Bob performs joint typical decoding. The reliability and secrecy of the scheme can be shown along the lines of [3].

⁵ When Eve has a dependent source observation S_E, a further binning of the codebook described in this section can be used to get a secret-key rate of R_SK = I(U; S_B) − I(U; S_E) + R_private, where we restrict U to those which satisfy I(U; S_A) − I(U; S_B) < R_public + R_private and the Markov chain U − S_A − (S_B, S_E).
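To make the rate expressions concrete, here is a small numeric sketch of ours (the numbers are arbitrary, not from the paper) for a toy binary source with the auxiliary variable chosen as U = S_A, i.e., no quantization:

```python
import math

def h2(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Toy source: S_A uniform on {0,1}, S_B = S_A xor noise, noise ~ Bern(p).
# With U = S_A:  I(U; S_A) = 1 bit  and  I(U; S_B) = 1 - h2(p).
p = 0.1
i_u_sa, i_u_sb = 1.0, 1.0 - h2(p)

r_public, r_private = 0.5, 0.2
# Smallest private-pipe contribution R satisfying the two conditions
#   I(U;S_A) - I(U;S_B) <= R_public + R   and   R <= R_private.
r = max(0.0, (i_u_sa - i_u_sb) - r_public)
assert r <= r_private, "infeasible bit-pipe rates for this choice of U"
# Achievable secret-key rate: (I(U;S_B) + R) + (R_private - R).
r_sk = i_u_sb + r_private
print(f"h2(p) = {h2(p):.3f} bits/sample, achievable R_SK = {r_sk:.3f} bits/sample")
```

Here h2(0.1) is about 0.469, so the Wyner-Ziv index fits on the public pipe (R = 0) and the whole private pipe adds directly to the key rate.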
Again, the work of Ahlswede and Csiszár [4] can be used to show that this has the required secrecy and uniformity properties, and thus the resulting secret-key rate is

  R_SK = (I(U; S_B) + R) + (R_private − R) = I(U; S_B) + R_private,

where we restrict U to those which satisfy I(U; S_A) − I(U; S_B) < R_public + R_private and the Markov chain U − S_A − S_B.

b) Secret message only; no source observation at Eve: Consider the case in which Alice desires to communicate a message secretly at the largest possible rate when there is no source observation at Eve⁶. This scenario, depicted in Figure 5, is a straightforward generalization of Example 2(b) from the introduction. In that example, Alice achieves secrecy across a public bit-pipe by binning her source observation based on Bob's side information to generate a shared secret key. On the rest of the public bit-pipe, Alice uses this key as a one-time pad to send the secret message.

As earlier, there are two added dimensions in the current setting that are not present in Example 2(b). First, there may not be enough rate on the public bit-pipe for Bob to determine Alice's source observation perfectly and thus generate a secret key. Again, Alice simply quantizes the source observation and applies the binning strategy as before, which corresponds to Wyner-Ziv's source coding scheme. Second, in addition to the public bit-pipe, there is also a private bit-pipe.

⁶ When Eve has a correlated source observation, a further binning of the codebook described in this section can be used to get a secret-message rate of R_SM = R_private + [I(U; S_B) − I(U; S_E)]^+, where U satisfies the Markov chain U − S_A − (S_B, S_E) and the condition R_public > I(U; S_A).
Because there is now a secret message, we split the message into two parts: the private bit-pipe is used fully to send part of the secret message (at rate $R_{\text{private}}$), and the public bit-pipe is used as before to communicate the remaining bits secretly, with the correlated sources being exploited to provide the secrecy. However, since we now have to agree on specific random bits instead of any common random bits, we have two additional restrictions, which can cause the rate of the secret message to be lower than in the secret-key case above. First, we have to reserve part of the public bit-pipe for sending the one-time-padded secret message, which constrains the part of the public bit-pipe rate $R_{\text{public}}$ available for generating the secret key from the sources. Second, sending part of the Wyner-Ziv bin index on the private bit-pipe would cost rate that could be used for sending a private message. Thus, it is better to reserve the private bit-pipe for sending the secret message; this sacrifices up to $R_{\text{private}}$ bits that could otherwise have been used for generating the secret key, which means the rate of the secret key available as a one-time pad, and thus the effectiveness of the public bit-pipe, is limited compared with the secret-key-only case. The work of Ahlswede-Csiszár [4] can be adapted to show that this approach satisfies the required secrecy and uniformity properties. The secret key is then used as a one-time pad to encrypt some extra message bits. Using this approach, given any joint distribution satisfying $U - S_A - S_B$, a secret key of rate $I(U;S_B)$ can be generated by consuming $I(U;S_A) - I(U;S_B)$ bits from the public bit-pipe. This secret key can then be used as a one-time pad on another $I(U;S_B)$ bits of the public bit-pipe to send a secret message of that rate.
Hence, we must choose the auxiliary random variable $U$ such that
$$R_{\text{public}} > (I(U;S_A) - I(U;S_B)) + I(U;S_B) = I(U;S_A),$$
and the total secret message rate obtained is $R_{SM} = R_{\text{private}} + I(U;S_B)$. Unlike in the work of Csiszár-Narayan [6], in which Alice and Bob only need to agree on any common random bits to construct a secret key, for a secret message we have the added constraint that they must agree on specific random bits. Thus, the rates achievable for a secret message are less than those achievable for a secret key.

c) Secret message – secret-key tradeoff; no source observation at Eve: A secret-message–secret-key tradeoff-optimal strategy here turns out to be a natural combination of the above two. If (1) $R_{SM} \le R_{\text{private}}$, the secret message is sent entirely over the private bit-pipe, and the leftover rate $(R_{\text{private}} - R_{SM})$ of the private bit-pipe, along with the public bit-pipe, is used for agreeing on a secret key from the correlated sources. This secret-key step is essentially the secret-key-only case discussed above. Otherwise, i.e., if (2) $R_{SM} \ge R_{\text{private}}$, all of the private bit-pipe is used to carry a part of the secret message. For communicating the rest of the secret message, at a rate of $R_{SM} - R_{\text{private}}$, and for agreeing on a secret key, the public bit-pipe and the sources are made use of. The way the public bit-pipe is used is essentially the same as in the secret-message-only case above. The only difference is that instead of utilizing all of the secret key generated from the sources as a one-time pad to secure communication of a message over the public bit-pipe, here only a part of the secret key is used for this purpose. The rate of the unused part of the secret key is $R_{SK}$.
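The gap between the key-only and message-only cases can be made concrete with a small sketch under the same assumed binary toy model as before (BSC parameters $q$, $t$ are assumptions, not from the paper): key generation charges only the Wyner-Ziv bin rate $I(U;S_A) - I(U;S_B)$ to the public pipe, whereas a secret message additionally consumes $I(U;S_B)$ public bits for the one-time-padded transmission, for a total of $I(U;S_A)$.

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    """Binary convolution: crossover of two cascaded BSCs."""
    return a * (1 - b) + b * (1 - a)

# Toy model: S_B is a BSC(q) view of S_A, and U a BSC(t) quantization of S_A.
q, t = 0.1, 0.05
I_U_SA = 1 - h(t)
I_U_SB = 1 - h(conv(t, q))

# Secret-key generation only charges the Wyner-Ziv bin rate to the public pipe...
public_needed_key = I_U_SA - I_U_SB
# ...while a secret message also consumes I(U;S_B) public bits for the
# one-time-padded transmission, for a total of I(U;S_A).
public_needed_msg = (I_U_SA - I_U_SB) + I_U_SB

print(round(public_needed_key, 3), round(public_needed_msg, 3))
```

For these parameters the public-rate requirement grows from about 0.30 to about 0.71 bits per symbol when moving from key generation to message transmission.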
The resulting tradeoff is given by
$$R_{SM} \le R_{\text{public}} + R_{\text{private}} - (I(U;S_A) - I(U;S_B)),$$
and
$$R_{SM} + R_{SK} \le I(U;S_B) + R_{\text{private}},$$
where $U$ satisfies the Markov chain $U - S_A - S_B$ and the condition $I(U;S_A) - I(U;S_B) \le R_{\text{public}} + R_{\text{private}}$.

3) The general case: Now let us turn to the general case with sources, in which the channel is not necessarily deterministic. This resembles Example 3, and as in that case, we can apply a combination of the strategies in Sections IV-1 and IV-2. Indeed, by treating the random coin tosses Alice uses in Proposition 5 as a public bit-pipe, we can construct a public and a private bit-pipe from the channel and can leverage the source strategy from Section IV-2. This approach enables us to obtain the rates in (6) and (7).

Footnote 7: When Eve has a correlated source observation $S_E$, the tradeoff becomes $R_{SM} \le R_{\text{public}} + R_{\text{private}} - (I(U;S_A) - I(U;S_B))$ and $R_{SM} + R_{SK} \le [I(U;S_B) - I(U;S_E)]^+ + R_{\text{private}}$, where $U$ satisfies the Markov chain $U - S_A - S_B$ and the condition $I(U;S_A) - I(U;S_B) \le R_{\text{public}} + R_{\text{private}}$.

Fig. 6. The intuition behind Theorem 2. In this approach, the channel is distilled into a public bit pipe and a private one. The sources take advantage of part of the rate from each of these channels to generate a secret key. This key is divided into the final secret key and a one-time pad, the latter of which is used to secure the remainder of the public bit pipe for sending part of the secret message. The remainder of the private bit pipe is used to send the remainder of the secret message.
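The two bounds above cut out a simple polytope in the $(R_{SM}, R_{SK})$ plane for each admissible $U$. A sketch of its corner points, under the same assumed binary toy model and assumed pipe rates:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    """Binary convolution: crossover of two cascaded BSCs."""
    return a * (1 - b) + b * (1 - a)

# Toy model: S_B a BSC(q) view of S_A, U a BSC(t) quantization of S_A.
q, t = 0.1, 0.05
R_public, R_private = 0.5, 0.2
I_U_SB = 1 - h(conv(t, q))
gap = (1 - h(t)) - I_U_SB           # I(U;S_A) - I(U;S_B)
assert gap <= R_public + R_private  # this U is admissible

R_SM_max = R_public + R_private - gap   # first bound: message rate alone
sum_max = I_U_SB + R_private            # second bound: R_SM + R_SK
# Corner points for this U: all-key operation, and maximum-message operation.
corners = [(0.0, sum_max), (min(R_SM_max, sum_max), max(sum_max - R_SM_max, 0.0))]
print(corners)
```

Here the region is a trapezoid: trading one bit of message against one bit of key is free until the message-only bound $R_{SM} \le R_{SM,\max}$ becomes active.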
However, we should note that neither the independence requirement nor the uniformity requirement in Proposition 5 holds for the messages sent over the bit-pipes in IV-2, though they may hold approximately. Hence, this discussion does not constitute a proof of Theorem 2. Formalizing the above is an alternative approach to proving Theorem 2, but we do not pursue it here. A schematic interpretation of the discussion in this section is shown in Figure 6.

V. DISCUSSION

A. Extensions and Additional Results

a) Stochastically degraded sources and channels: There are several additional results related to the present work that we wish to note. For instance, it turns out that the result presented in Theorem 3 holds more generally than under the degradedness conditions outlined. First, the degradedness conditions can be relaxed to stochastic degradedness for both the sources and the channels. This simply involves a slightly more cumbersome argument in our converse proof; no changes to the achievable strategy are necessary. For completeness, the converse argument is given in Appendix B.

b) Bandwidth mismatch: We only considered the case of matched bandwidths, i.e., one source symbol per channel symbol. Our results can be readily extended when there is a bandwidth mismatch of, say, $m_S$ source symbols for every $m_C$ channel symbols. By considering a vector source with $m_S$ symbols and a vector channel with $m_C$ symbols, we can directly invoke Theorem 2. Further, by restricting the auxiliary random variables to be i.i.d. across the vector components, we can arrive at the following achievable region.
Let $\mathcal{P}_{\text{mismatch}}$ be the set of all joint distributions $p$ of random variables $U_1, V_1, V_2, X, Y, Z, S_A, S_B, S_E$ which satisfy the same conditions as in the definition of $\mathcal{P}$, except for (5) being replaced by
$$m_C\, I(V_1;Y) \ge m_S\, (I(U_1;S_A) - I(U_1;S_B)).$$
Let $\mathcal{R}_{\text{mismatch}}(p)$, for $p \in \mathcal{P}_{\text{mismatch}}$, be the set of all rate pairs $(R_{SK}, R_{SM})$ which satisfy
$$R_{SM} \le I(V_1;Y) - \frac{m_S}{m_C}\,(I(U_1;S_A) - I(U_1;S_B)),$$
$$R_{SK} + R_{SM} \le [I(V_1;Y\mid V_2) - I(V_1;Z\mid V_2)]^+ + \frac{m_S}{m_C}\,[I(U_1;S_B) - I(U_1;S_E)]^+.$$
Then the set $\mathcal{C}_{\text{mismatch}}$ of all achievable rate pairs, where rates are measured per channel use, satisfies
$$\mathcal{C}_{\text{mismatch}} \supseteq \bigcup_{p \in \mathcal{P}_{\text{mismatch}}} \mathcal{R}_{\text{mismatch}}(p).$$
For the degraded case considered in Theorem 3, we can also show the optimality of the above achievable region under bandwidth mismatch. Appendix B discusses the modifications needed in the converse. A consequence of this is that the optimality of Gaussian signalling shown in Proposition 4 continues to hold even under bandwidth mismatch.

c) Strong secrecy: All the secrecy results in this paper can be directly strengthened by dropping the $1/n$ factor in (1) without any penalty on the rates achieved. This follows directly from the work of Maurer and Wolf [11] on privacy amplification using extractors. Maurer and Wolf demonstrate this for the problems of secret-key agreement of Ahlswede and Csiszár, and secure message transmission of Csiszár and Körner. The key idea is to perform several (independent) repetitions of the scheme, which produces weakly secure keys and achieves weakly secure data transmission. A privacy amplification step using an extractor can be employed on the weakly secure keys to generate a strongly secure key.
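Returning to the bandwidth-mismatch region above, the only effect of the mismatch is that every source term is scaled by $m_S/m_C$. A numeric sketch with assumed toy values throughout (the channel terms are chosen by hand, and the source terms come from a binary model with $S_B$ a BSC(0.1) view of $S_A$ and $S_E$ absent; none of these numbers are from the paper):

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def conv(a, b):
    """Binary convolution: crossover of two cascaded BSCs."""
    return a * (1 - b) + b * (1 - a)

# Assumed toy values (not from the paper):
I_V1_Y = 1.0         # I(V1;Y)
chan_secrecy = 0.4   # [I(V1;Y|V2) - I(V1;Z|V2)]^+
q, t = 0.1, 0.05
src_gap = h(conv(t, q)) - h(t)   # I(U1;S_A) - I(U1;S_B)
src_key = 1 - h(conv(t, q))      # [I(U1;S_B) - I(U1;S_E)]^+ with S_E absent

def region(m_S, m_C):
    """Bounds on R_SM and R_SM + R_SK per channel use under an m_S : m_C mismatch."""
    rho = m_S / m_C
    assert m_C * I_V1_Y >= m_S * src_gap   # the modified condition (5)
    return I_V1_Y - rho * src_gap, chan_secrecy + rho * src_key

print(region(1, 1))   # matched bandwidths
print(region(2, 1))   # two source symbols per channel symbol
```

With two source symbols per channel use, the source contribution to the sum-rate bound doubles while the Wyner-Ziv charge against the channel also doubles, illustrating the trade built into the scaled region.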
The privacy amplification step involves Alice using a small (polylogarithmic in the block length) purely random key which she needs to share with Bob over a public channel. Alice will need to use the broadcast channel to do this, but the overhead involved is negligible and does not affect the rates achieved. To send a strongly secure message in addition to generating a strongly secure key, Alice will first invert the extractor operation to produce the equivalent weakly secure messages that, when passed through the extractor, would produce the strongly secure message she intends to transmit. Then she proceeds to transmit these equivalent weakly secure messages using the scheme in this paper. The small key is also sent separately using a channel code. At the end of all the transmissions, Bob, who will have recovered all the weakly secure keys and weakly secure messages as well as the small key, can invoke the extractor to recover the strongly secure key and the strongly secure message.

d) Two extensions of Theorem 3: Two other extensions of the results in Theorem 3 were shown in [14]. First, given only the sources and a public bit-pipe from Alice to Bob and Eve, the condition under which Alice and Bob cannot generate a positive-rate secret key is in fact weaker than the sources being degraded in favor of Eve (Footnote 8). Under this weaker condition, it was shown in [14] that the optimal strategy involves ignoring the sources and utilizing only the channel. In particular, $\mathcal{R}(p)$ is now the set of all non-negative rate pairs satisfying the condition
$$R_{SK} + R_{SM} = [I(V_1;Y) - I(V_1;Z)]^+,$$
where $V_1 - X - (Y,Z)$ is a Markov chain. Thus the optimal strategy in this case reduces to that of Csiszár and Körner [3], and there is essentially no distinction between sending a secret message and generating a secret key.
Second, a channel degraded in favor of Eve is a condition under which the channel resource by itself cannot provide any secrecy, but the condition under which the channel resource cannot provide any secrecy is in fact looser than this type of degradation: it is when the channel to Eve is 'less noisy' than the channel to Bob [3, Corollary 3, pg. 341]. Under this looser condition, but when the source component degraded in favor of Eve is absent, the optimality of turning the channel into a public bit-pipe was shown in [14] for secret-key generation. In the special case where Eve has no source observation, this optimality was shown for secret communication as well.

e) Secure source-channel coding: Note that sending a secret message is equivalent to the case in which Alice must send a discrete uniform source losslessly to Bob while keeping it secret from Eve. A straightforward extension of our result for the secret-message case, shown in [15], demonstrates that optimality continues to hold if one is interested in reconstructing any discrete memoryless source, both for the lossless and lossy cases. In this situation, an additional layer of separation between the private bit pipes and the compression of the source can be shown to establish the result.

B. Open problems

The above extensions do not close the door on this problem, and there are several considerations that currently warrant further research. Indeed, the general rate region and structure of optimal strategies are still open problems. One avenue is to consider extensions of the result beyond the degraded case and beyond some of the extensions discussed above.

Footnote 8: This condition, which can be inferred from [4], is that for every $\tilde U_1, \tilde U_2$ satisfying the Markov chain $\tilde U_2 - \tilde U_1 - S_A - (S_B, S_E)$, we have $I(\tilde U_1; S_B \mid \tilde U_2) \le I(\tilde U_1; S_E \mid \tilde U_2)$.
Another interesting avenue to consider is the setting in which the sources and channel are correlated. Note that in such a setting, there may not be a clean distinction between a source observation and a channel output at either Bob or Eve, which resembles the setup for Theorem 1. Indeed, the strategy and proof presented for Theorem 1 continue to hold if the sources and channels are correlated. Furthermore, the setting of the strategy presented in Theorem 1 coincides with a problem studied by Chen and Vinck [8], in which Alice must send Bob a secret message (i.e., $R_{SK} = 0$) and Alice has non-causal state information about the channel. Chen and Vinck make the additional assumption that Eve observes degraded versions of Bob's channel outputs, but Theorem 1 holds even without this assumption. In fact, when Chen and Vinck's scheme is considered in the context of Theorem 3 (i.e., independent sources and channels), but with the degradedness condition of Chen and Han Vinck (i.e., there is no reversely degraded channel component), we already know from Theorem 3 that the secrecy capacity is given by
$$C_{SM} = \max \min\{ I(X_F;Y_F) - I(U;S_A\mid S_B),\ I(X_F;Y_F\mid V) - I(X_F;Z_F\mid V) + I(U;S_B\mid S_E) \}, \qquad (10)$$
where the maximization is over joint distributions of the form $p_{V,X_F}\, p_{U,S_A}$. Chen and Han Vinck's achievable secrecy rate is
$$R = \max \min\{ I(W;Y) - I(W;S_A),\ I(W;Y) - I(W;Z) \} = \max \min\{ I(W;Y_F,S_B) - I(W;S_A),\ I(W;Y_F,S_B) - I(W;Z_F,S_E) \}, \qquad (11)$$
where the maximization is over $p_{W,X_F\mid S_A}$. Whenever the maximizer of (10) is such that $V$ is a constant, we may choose $W = (X_F, U)$ in (11) to match the capacity. For instance, in the Gaussian example of Section III-A, it is indeed the case that the optimal joint distribution involves a constant $V$.
Note that the Gaussian case of Chen and Vinck's scheme was first considered by Mitrpant, Vinck and Luo [16]. But in general, it does not appear to be the case that (11) equals the secrecy capacity in (10). While Chen and Vinck present an upper bound on the secret message rate, it does not coincide with either their achievable strategy or Theorem 1. The work in [17] provides a marginal improvement to the upper bound presented by Chen and Vinck, but the problems of characterizing the rate region and optimal strategies remain open. Indeed, this region may also be tightened by improving upon the achievable strategy in Theorem 1. Progress on any of these fronts could lead to new insights on how strategies may optimally combine source and channel resources for secrecy, as well as on the interplay between secret keys and messages.

APPENDIX A
PROOF OF THEOREM 1

Since a secret message automatically satisfies the constraints of a secret key, it is enough to prove that the following $(R_{SK}, R_{SM})$ pair is achievable:
$$R_{SM} = \min( I(U;Y) - I(U;S),\ I(U;Y\mid V) - I(U;Z\mid V) ),$$
and
$$R_{SK} = [ I(U;Y\mid V) - I(U;Z\mid V) - (I(U;Y) - I(U;S)) ]^+ = [ I(U;S) - I(V;Y) - I(U;Z\mid V) ]^+.$$
We divide the proof into two cases. In each case, we use a random coding argument to show the existence of a codebook for which the probability of an encoding error at Alice, a decoding error at Bob, and a decoding error at Eve given additional side information are all small. We then show that such a code satisfies the secrecy and uniformity conditions.

A. Case 1: $I(U;S) \le I(V;Y) + I(U;Z\mid V)$

In this case, we need only prove that the pair $R_{SM} = I(U;Y\mid V) - I(U;Z\mid V)$ and $R_{SK} = 0$ is achievable.
Random Coding Argument

a) Codebook generation: We create a codebook of blocklength $n$ with $2^{n(I(U;Y)-3\delta)}$ elements, composed of two parts. We create a blocklength-$n$ $V$-codebook of size $2^{n(I(V;Y)-\delta)}$ by drawing the codewords uniformly from the $\epsilon$-strongly typical set [12, Chapter 1.2] $A^{*(n)}_\epsilon$ of $n$-length $V$ sequences. Let us index these codewords by $i \in \{1, \dots, 2^{n(I(V;Y)-\delta)}\}$. For each such codeword $v^n(i)$, a conditional $U$-codebook of size $2^{n(I(U;Y\mid V)-2\delta)}$ is created by drawing the codewords uniformly from the set of all $n$-length $U$ sequences which are conditionally $\epsilon$-strongly typical conditioned on the $V$ sequence $v^n$. For each such conditional codebook, we distribute these sequences into $2^{n(R_{SM}-\delta)}$ bins such that each bin contains $2^{n(I(U;Z\mid V)-\delta)}$ codewords, indexing each bin by $m \in \{1, \dots, 2^{n(R_{SM}-\delta)}\}$. Let the codewords in each bin be indexed by $j \in \{1, \dots, 2^{n(I(U;Z\mid V)-\delta)}\}$.

Note that there is a direct correspondence between the bins and the private bit-pipe, with the codewords in each bin and the $V$-codebook corresponding to the public bit-pipe of the separation strategy. Furthermore, as will be seen in the encoding, the $U$-codewords are simply quantization points for the source $S^n$. A schematic of this codebook is depicted in Figure 7.

Fig. 7. The codebook used for Case 1 of the achievable strategy consists of a $V$-codebook, each codeword of which indexes a conditional codebook. The bins in the conditional codebook correspond directly to the private bit-pipe, and the $V$-codebook and codewords in each bin to the public bit-pipe. Analogously, the codewords in each conditional codebook correspond to quantization points for the source $S^n$.
In this separation context, Case 1 refers to the scenario in which there is insufficient randomness from the source $S^n$ alone to determine the input to the public bit-pipe. Thus, we further divide the set of all $U$-codewords into $2^{n(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta)}$ buckets (Footnote 9) as follows. If (i) $I(U;S) \ge I(V;Y)$, that is, there is sufficient randomness in the source to determine the $V$-codeword completely, we divide up the codewords in each bin of every conditional codebook among the buckets such that each bucket has the same number of codewords. Thus, in each bin of each of the conditional codebooks there are
$$2^{n(I(U;Z\mid V)-\delta - (I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta))} = 2^{n(I(U;S)-I(V;Y)+2\delta)}$$
codewords which belong to a given bucket. If (ii) $I(U;S) < I(V;Y)$, then the $U$-codewords are divided among the buckets such that every bucket has no more than one codeword belonging to the same bin of a conditional codebook. In this case, for a given bucket, there are
$$2^{n(I(U;Y)-3\delta-(R_{SM}-\delta)-(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta))} = 2^{n(I(U;S)+\delta)}$$
codewords, each belonging to a different conditional codebook and holding the same bin index. The buckets are indexed by $k \in \{1, \dots, 2^{n(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta)}\}$. For a $U$-codeword, we will explicitly indicate its bucket index along with the conditional codebook it belongs to, its bin index, and its index within the bin, as $u^n(i,m,j,k)$.

Footnote 9: For there to be at least one bucket, we require that $3\delta < I(V;Y) + I(U;Z\mid V) - I(U;S)$. However, this is not an issue since we will take $\delta \to 0$.

b) Encoding: Let $m \in \{1, \dots, 2^{n(R_{SM}-\delta)}\}$ index the secret message.
To send $m$, using her private random string $\Phi_A$, Alice obtains a $\Phi_{\text{bucket}}$ which is uniformly distributed over the set $\{1, \dots, 2^{n(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta)}\}$, assigns $k = \Phi_{\text{bucket}}$, and looks in bin $m$ (of all the conditional codebooks) for a pair $V^n(i), U^n(i,m,j,k)$ such that $(V^n(i), U^n(i,m,j,k), S^n)$ are jointly typical. Thus, the $U$-codeword is selected such that it belongs to bin $m$ and bucket $k = \Phi_{\text{bucket}}$, and such that it is jointly typical with the source observation $S^n$. If more than one choice is found, Alice chooses one of them arbitrarily. If none are found, Alice declares an error. A test channel $p_{X\mid U,S}$ stochastically generates the channel input $X^n$.

The probability of encoding failure can be bounded as follows. In case (i),
$$P_e \le \Pr(S^n \notin A^{*(n)}_\epsilon) + \sum_{s^n \in A^{*(n)}_\epsilon} p_S(s^n) \sum_k p_{\Phi_{\text{bucket}}}(k) \sum_{v^n \in A^{*(n)}_\epsilon} \Pr(V^n = v^n) \Big[ 1 - \Pr\big((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\big) \Big]^{2^{n(I(U;S)-I(V;Y)+2\delta)}\, 2^{n(I(V;Y)-\delta)}},$$
where the term $\Pr(V^n = v^n)$ is evaluated with the distribution of $V^n$ given by the uniform distribution over all $\epsilon$-strongly typical $v^n$ sequences, and $\Pr((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n)$ is evaluated with the distribution of $U^n$ given by the uniform distribution over all $u^n$ sequences which are conditionally $\epsilon$-strongly typical with $v^n$. Since $V$ and $S$ are independent, for $s^n \in A^{*(n)}_\epsilon$ and $v^n \in A^{*(n)}_\epsilon$, this probability satisfies
$$\Pr\big((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\big) \ge 2^{-n(I(U;S\mid V)+\epsilon_1)}. \qquad (12)$$
Here $\epsilon_1 \to 0$ as $\epsilon \to 0$; this will also be the case for any future subscripted $\epsilon_\#$ in the sequel.
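As a sanity check on the exponent bookkeeping in this bound (a numeric sketch with randomly drawn per-symbol rates, not part of the proof): the number of candidate codewords Alice searches over in case (i) is $2^{n(I(U;S)-I(V;Y)+2\delta)} \cdot 2^{n(I(V;Y)-\delta)} = 2^{n(I(U;S)+\delta)}$, and the per-bucket counts of both cases reduce as claimed, using the chain rule $I(U;Y) = I(V;Y) + I(U;Y\mid V)$, which holds because $V - U - Y$ is a Markov chain.

```python
import random

# Verify the bucket-construction exponent identities with random per-symbol
# rates (the blocklength n is factored out of every exponent).
random.seed(0)
for _ in range(100):
    I_V_Y, I_U_Y_given_V, I_U_Z_given_V, I_U_S, d = (random.uniform(0.01, 1.0) for _ in range(5))
    I_U_Y = I_V_Y + I_U_Y_given_V                     # chain rule under V - U - Y
    R_SM = I_U_Y_given_V - I_U_Z_given_V              # Case 1 message rate
    bucket_rate = I_V_Y + I_U_Z_given_V - I_U_S - 3*d # bucket-index rate

    # Case (i): per-bucket codewords inside one bin of one conditional codebook.
    lhs_i = (I_U_Z_given_V - d) - bucket_rate
    assert abs(lhs_i - (I_U_S - I_V_Y + 2*d)) < 1e-9

    # Case (ii): per-bucket codewords across conditional codebooks.
    lhs_ii = (I_U_Y - 3*d) - (R_SM - d) - bucket_rate
    assert abs(lhs_ii - (I_U_S + d)) < 1e-9

    # Candidate codewords Alice searches over in the case (i) encoding bound.
    candidates = (I_U_S - I_V_Y + 2*d) + (I_V_Y - d)
    assert abs(candidates - (I_U_S + d)) < 1e-9
print("exponent identities verified")
```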
Using this in the term within the braces in the upper bound for $P_e$ and simplifying,
$$\sum_{v^n \in A^{*(n)}_\epsilon} \Pr(v^n) \Big[1 - \Pr\big((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\big)\Big]^{2^{n(I(U;S)-I(V;Y)+2\delta)}\, 2^{n(I(V;Y)-\delta)}} \le \Big[1 - 2^{-n(I(U;S\mid V)+\epsilon_1)}\Big]^{2^{n(I(U;S)+\delta)}} \stackrel{(a)}{\le} e^{-2^{n(I(U;S)+\delta)}\, 2^{-n(I(U;S\mid V)+\epsilon_1)}} \stackrel{(b)}{=} e^{-2^{n(\delta-\epsilon_1)}},$$
where (a) follows from $(1-x)^n \le e^{-nx}$, and (b) from the fact that $I(U;S) = I(V,U;S) = I(U;S\mid V)$, which in turn is a consequence of the Markov chain $V - U - S$ and the independence of $V$ and $S$. Substituting this into the upper bound for $P_e$,
$$P_e \le \Pr(S^n \notin A^{*(n)}_\epsilon) + \sum_{s^n \in A^{*(n)}_\epsilon} p_S(s^n) \sum_k p_{\Phi_{\text{bucket}}}(k)\, e^{-2^{n(\delta-\epsilon_1)}} = \Pr(S^n \notin A^{*(n)}_\epsilon) + \big(1 - \Pr(S^n \notin A^{*(n)}_\epsilon)\big)\, e^{-2^{n(\delta-\epsilon_1)}}.$$
Thus, we can make $P_e$ as small as desired by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_1$) and sufficiently large $n$.

Under case (ii), the probability of encoding failure can be similarly bounded. Now, we have
$$P_e \le \Pr(S^n \notin A^{*(n)}_\epsilon) + \sum_{s^n \in A^{*(n)}_\epsilon} p_S(s^n) \sum_k p_{\Phi_{\text{bucket}}}(k) \sum_{v^n \in A^{*(n)}_\epsilon} \Pr(v^n) \Big[1 - \Pr\big((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\big)\Big]^{2^{n(I(U;S)+\delta)}},$$
where the term $\Pr((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n)$ can be evaluated as in (12). Substituting this above and following similar steps, by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_1$) and sufficiently large $n$, we can make $P_e$ as small as desired.

c) Decoding at Bob: Bob receives $Y^n$ and searches for a unique $(V^n, U^n)$ pair such that $(V^n, U^n, Y^n)$ are $\epsilon$-strongly jointly typical. If no such pair exists, Bob declares an error. Otherwise, Bob identifies the corresponding bin index $\hat m$ and declares this the secret message.
Hence, conditioned on encoding being successful, a decoding error results only if there is an $\hat m \ne m$ such that there are $\hat i, \hat j, \hat k$ with $(v^n(\hat i), u^n(\hat i, \hat m, \hat j, \hat k), Y^n)$ $\epsilon$-strongly jointly typical. Using the union bound, we can upper-bound the probability of this by
$$\sum_{\hat i}\sum_{\hat m \ne m}\sum_{\hat j} \Pr\big((V^n(\hat i), U^n(\hat i, \hat m, \hat j), Y^n) \in A^{*(n)}_\epsilon\big) = \sum_{\hat i \ne i}\sum_{\hat m \ne m}\sum_{\hat j} \Pr\big((V^n(\hat i), U^n(\hat i, \hat m, \hat j), Y^n) \in A^{*(n)}_\epsilon\big) + \sum_{\hat m \ne m}\sum_{\hat j} \Pr\big((V^n(i), U^n(i, \hat m, \hat j), Y^n) \in A^{*(n)}_\epsilon\big) \le 2^{n(I(U;Y)-3\delta)}\, 2^{-n(I(U;Y)-\epsilon_2)} + 2^{n(R_{SM}-\delta)}\, 2^{n(I(U;Z\mid V)-\delta)}\, 2^{-n(I(U;Y\mid V)-\epsilon_3)},$$
which can be made as small as desired by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_2, \epsilon_3$) and sufficiently large $n$.

d) Decoding at Eve with side information: Consider Eve, who has access to $M, V^n$. Then the bin in which a potential $U^n$ exists is known, and its size is at most $2^{n(I(U;Z\mid V)-\delta)}$. We may upper-bound the probability of decoding error as we did above. Consider the jointly typical decoder for $U^n$ given $Z^n$ in this bin. There are two error events: $E_1$ is the event that no sequence in the bin is jointly typical with $Z^n$, and $E_2$ is the event that a false sequence in the bin is jointly typical with $Z^n$. We have $\Pr(E_1) \to 0$ as $n \to \infty$, and the probability that a false sequence is jointly typical with $Z^n$ is $2^{-n(I(U;Z\mid V)-\epsilon_4)}$. By a union bound, we can make the probability of error as small as desired by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_4$) and sufficiently large $n$.
By the usual random coding arguments, we may now conclude that for any $\delta > 0$, for sufficiently large $n$, there exists a codebook (with rates as in the codebook construction above) such that (i) Bob can recover the secret message with probability of error not larger than $\delta$, and (ii) Eve, when provided with the message and the $V$-codeword, can recover the $U$-codeword with probability of error not larger than $\delta$. We now simply have to verify that this codebook also has the property that Eve's information about the message (given $Z^n$) is small, i.e., the secrecy condition.

Proof of Secrecy Condition. First observe that
$$H(M\mid Z^n) \ge H(M\mid Z^n, V^n) = H(M, Z^n\mid V^n) - H(Z^n\mid V^n) = H(M, U^n, Z^n\mid V^n) - H(U^n\mid M, Z^n, V^n) - H(Z^n\mid V^n) \stackrel{(a)}{\ge} H(U^n, Z^n\mid V^n) - H(U^n\mid M, Z^n, V^n) - H(Z^n\mid V^n) = H(U^n\mid V^n) + H(Z^n\mid U^n, V^n) - H(U^n\mid M, Z^n, V^n) - H(Z^n\mid V^n), \qquad (13)$$
where (a) follows from the non-negativity of conditional entropy. We now bound each of these terms. Over the collection of $u^n(i,m,j,k)$ codewords, let us define
$$E = \{ s^n : \exists\, (i,m,j,k) \text{ such that } (v^n(i), u^n(i,m,j,k), s^n) \in A^{*(n)}_\epsilon \}.$$
Recall that for all $\alpha > 0$, there exists $n$ sufficiently large such that decoding (and hence encoding) succeeds with probability greater than $1-\alpha$, i.e., $\Pr(S^n \in E) \ge 1-\alpha$.
Furthermore, the probability
$$\Pr\big((V^n, U^n) = (v^n(i), u^n(i,m,j,k)),\ S^n \in E\big) = \Pr\big(M = m,\ \Phi_{\text{bucket}} = k,\ (V^n, U^n) = (v^n(i), u^n(i,m,j,k)),\ S^n \in E\big) \le \Pr(M = m) \cdot 2^{-n(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta)} \cdot \sum_{s^n : (v^n(i), u^n(i,m,j,k), s^n) \in A^{*(n)}_\epsilon} \Pr(S^n = s^n) \le 2^{-n(R_{SM}-\delta)} \cdot 2^{-n(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta)} \cdot 2^{nH(S\mid U)+n\epsilon} \cdot 2^{-nH(S)+n\epsilon} = 2^{-n(R_{SM}-\delta)} \cdot 2^{-n(I(V;Y)+I(U;Z\mid V)-I(U;S)-3\delta)} \cdot 2^{-nI(S;U)+2n\epsilon} = 2^{-n(R_{SM}-\delta)} \cdot 2^{-n(I(V;Y)+I(U;Z\mid V)-3\delta)+2n\epsilon},$$
which, along with the lower bound on $\Pr(S^n \in E)$ above, implies that
$$\Pr\big((V^n, U^n) = (v^n(i), u^n(i,m,j,k)) \mid S^n \in E\big) \le 2^{-nR_{SM}} \cdot 2^{-n(I(V;Y)+I(U;Z\mid V))+n\epsilon_5} = 2^{-nI(U;Y)+n\epsilon_5}.$$
Also, we know that the size of the codebook in which $(V^n, U^n)$ takes values is less than $2^{nI(U;Y)}$, which implies that
$$H(U^n, V^n \mid S^n \in E) \ge nI(U;Y) - n\epsilon_5.$$
Using this we can bound the first term in (13):
$$H(U^n\mid V^n) = H(U^n, V^n) - H(V^n) \stackrel{(a)}{\ge} H(U^n, V^n) - nI(V;Y) \stackrel{(b)}{\ge} H(U^n, V^n \mid S^n \in E) \cdot \Pr(S^n \in E) - nI(V;Y) = nI(U;Y) - nI(V;Y) - n\epsilon_6 = nI(U;Y\mid V) - n\epsilon_6,$$
where (a) follows from the fact that $V^n$ takes values in a codebook whose size is smaller than $2^{nI(V;Y)}$, and (b) follows from the fact that conditioning reduces entropy.
We bound the second term in (13) as follows:
$$H(Z^n\mid U^n, V^n) = H(Z^n\mid U^n) = \sum_{u^n} \Pr(U^n = u^n)\, H(Z^n\mid U^n = u^n) \stackrel{(a)}{=} \sum_{u^n} \Pr(U^n = u^n) \sum_{\mu \in \mathcal{U}} N(\mu\mid u^n)\, H(Z\mid U = \mu) \stackrel{(b)}{\ge} \sum_{u^n} \Pr(U^n = u^n) \sum_{\mu \in \mathcal{U}} n\big(\Pr(U = \mu) - \epsilon\big) H(Z\mid U = \mu) = \sum_{u^n} \Pr(U^n = u^n)\big(nH(Z\mid U) - n\epsilon_7\big) = nH(Z\mid U) - n\epsilon_7,$$
where (a) follows from the memoryless nature of the virtual channel from $U$ to $Z$, with $N(\mu\mid u^n)$ counting the number of times $\mu$ appears in the codeword $u^n$, and (b) follows from the fact that all the $u^n$ codewords belong to $A^{*(n)}_\epsilon$. Note that from (a) onwards, we use $U, Z$ to denote a pair of random variables distributed according to the joint distribution $p_{U,Z}$.

The third term can be bounded using Fano's inequality and the fact that Eve can recover the $U^n$ codeword with a probability of error $\epsilon$ when she has access to $M$ and $V^n$ in addition to her observation $Z^n$:
$$H(U^n\mid M, Z^n, V^n) \le 1 + n \cdot \epsilon \cdot I(U;Z\mid V) = n\epsilon_8.$$
Finally, to bound the fourth term, let $T$ be an indicator random variable which takes the value 1 when $(V^n, Z^n) \in A^{*(n)}_\epsilon$ and 0 otherwise. Then
$$H(Z^n\mid V^n) \le H(Z^n, T\mid V^n) \le 1 + H(Z^n\mid V^n, T = 1)\Pr(T = 1) + n \log|\mathcal{Z}|\, \Pr(T = 0). \qquad (14)$$
But $\Pr(T = 0) = \Pr\big((V^n, Z^n) \notin A^{*(n)}_\epsilon\big) \le \epsilon_9$. Furthermore, we have
$$H(Z^n\mid V^n, T = 1) = \sum_{v^n} \Pr(V^n = v^n \mid T = 1)\, H(Z^n\mid V^n = v^n, T = 1) \stackrel{(a)}{\le} \sum_{v^n} \Pr(V^n = v^n \mid T = 1) \log\big|A^{*(n)}_\epsilon(p_{Z\mid V}\mid v^n)\big| \le \sum_{v^n} \Pr(V^n = v^n \mid T = 1)\big(nH(Z\mid V) + n\epsilon\big) = nH(Z\mid V) + n\epsilon,$$
where in (a) we used $|A^{*(n)}_\epsilon(p_{Z\mid V}\mid v^n)|$ to denote the size of the set of all $z^n$ such that $(z^n, v^n) \in A^{*(n)}_\epsilon$. Thus, (14) becomes $H(Z^n\mid V^n) \le nH(Z\mid V) + n\epsilon_{10}$.
Hence, we may conclude from (13) that
$$\frac{1}{n} H(M\mid Z^n) \ge I(U;Y\mid V) + H(Z\mid U) - H(Z\mid V) - \epsilon_{11} = I(U;Y\mid V) - I(U;Z\mid V) - \epsilon_{11} = R_{SM} - \epsilon_{11}.$$
Thus we have shown the secrecy condition.

B. Case 2: $I(U;S) > I(V;Y) + I(U;Z\mid V)$

In this case, we only need to show the achievability of $R_{SM} = I(U;Y) - I(U;S)$ and $R_{SK} = I(U;S) - I(V;Y) - I(U;Z\mid V)$. We proceed as in Case 1. Note that below we assume $R_{SM} > 0$. If $R_{SM} = 0$, the only modification needed is to avoid the binning step associated with the secret message.

Random Coding Argument

e) Codebook generation: We generate a codebook of blocklength $n$ with $2^{n(I(U;Y)-2\delta)}$ elements, composed of two parts. The first part is a blocklength-$n$ $V$-codebook of size $2^{n(I(V;Y)-\delta)}$, obtained by drawing the codewords uniformly from the $\epsilon$-strongly typical set $A^{*(n)}_\epsilon$ of $n$-length $V$ sequences, indexing each by $i \in \{1, \dots, 2^{n(I(V;Y)-\delta)}\}$. For each codeword $v^n$, a conditional $U$-codebook of size $2^{n(I(U;Y\mid V)-\delta)}$ is created by drawing the codewords uniformly from the set of $n$-length $U$ sequences which are conditionally $\epsilon$-strongly typical conditioned on the $V$ sequence $v^n$. For each conditional codebook, we distribute these sequences among $2^{n(R_{SM}-3\delta)}$ bins such that each bin contains $2^{n(I(U;S)-I(V;Y)+2\delta)}$ codewords. We index the bins by $m \in \{1, \dots, 2^{n(R_{SM}-3\delta)}\}$. The sequences in each bin are assigned to $2^{n(R_{SK}+3\delta)}$ subbins so that each subbin contains $2^{n(I(U;Z\mid V)-\delta)}$ codewords, indexing each subbin by $k \in \{1, \dots, 2^{n(R_{SK}+3\delta)}\}$. We index each of the elements in a subbin by $\ell \in \{1, \dots, 2^{n(I(U;Z\mid V)-\delta)}\}$, and denote the specific index by $\Phi_{\text{sub-index}}$. For a $U$-codeword, we will explicitly indicate its index as $u^n(i, m, k, \ell)$.
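A quick bookkeeping check on the Case 2 codebook sizes (a numeric sketch with randomly drawn per-symbol rates, not part of the proof): bins times subbins times elements per subbin should recover the conditional codebook size $2^{n(I(U;Y\mid V)-\delta)}$, given $R_{SM} = I(U;Y) - I(U;S)$, $R_{SK} = I(U;S) - I(V;Y) - I(U;Z\mid V)$, and the chain rule $I(U;Y) = I(V;Y) + I(U;Y\mid V)$.

```python
import random

# Verify the Case 2 codebook-size identities with random per-symbol rates
# (the blocklength n is factored out of every exponent).
random.seed(1)
for _ in range(100):
    I_V_Y, I_U_Y_given_V, I_U_Z_given_V, I_U_S, d = (random.uniform(0.01, 1.0) for _ in range(5))
    I_U_Y = I_V_Y + I_U_Y_given_V          # chain rule under V - U - Y
    R_SM = I_U_Y - I_U_S                   # Case 2 message rate
    R_SK = I_U_S - I_V_Y - I_U_Z_given_V   # Case 2 key rate

    bins, subbins, elems = R_SM - 3*d, R_SK + 3*d, I_U_Z_given_V - d
    # Subbins x elements recover the stated bin size...
    assert abs((subbins + elems) - (I_U_S - I_V_Y + 2*d)) < 1e-9
    # ...and bins x subbins x elements recover the conditional codebook size.
    assert abs((bins + subbins + elems) - (I_U_Y_given_V - d)) < 1e-9
print("codebook bookkeeping verified")
```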
f) Encoding: Let $m \in \{1, \ldots, 2^{n(R_{SM} - 3\delta)}\}$ index the secret message. For this fixed $m$, Alice selects a pair $V^n(i), U^n(i, m, k, \ell)$ such that $(V^n(i), U^n(i, m, k, \ell), S^n)$ are jointly typical. If none is found, Alice declares an error. A test channel $p_{X|U,S}$ stochastically encodes the channel input $X^n$. The subbin index $k$ is set as the secret key; note that the secret key is determined automatically by the $U^n(i, m, k, \ell)$ selected. For a fixed $m$, the probability of an encoding failure is given by
\[
P_e \leq \Pr(S^n \notin A^{*(n)}_\epsilon) + \sum_{s^n \in A^{*(n)}_\epsilon} p_S(s^n) \sum_{v^n \in A^{*(n)}_\epsilon} \Pr(V^n = v^n) \left[1 - \Pr\left((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\right)\right]^{2^{n(I(U;S) - I(V;Y) + 2\delta)}\, 2^{n(I(V;Y) - \delta)}},
\]
where $\Pr(V^n = v^n)$ is evaluated with $V^n$ uniformly distributed over all $\epsilon$-strongly typical $v^n$ sequences, and $\Pr((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n)$ is evaluated with $U^n$ uniformly distributed over the set $A^{*(n)}_\epsilon(p_{U|V} \mid V^n = v^n)$ of all $u^n$ sequences which are conditionally $\epsilon$-strongly typical with $v^n$. As in Case 1, since $V$ and $S$ are independent, for $s^n \in A^{*(n)}_\epsilon$ and $v^n \in A^{*(n)}_\epsilon$,
\[
\Pr\left((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\right) \geq 2^{-n(I(U;S|V) + \epsilon_1)} = 2^{-n(I(U;S) + \epsilon_1)}.
\]
Using this in the term within the brackets in the upper bound for $P_e$ and simplifying,
\[
\begin{aligned}
&\sum_{v^n \in A^{*(n)}_\epsilon} |A^{*(n)}_\epsilon(p_V)|^{-1} \left[1 - \Pr\left((v^n, U^n, s^n) \in A^{*(n)}_\epsilon \mid V^n = v^n\right)\right]^{2^{n(I(U;S) - I(V;Y) + 2\delta)}\, 2^{n(I(V;Y) - \delta)}} \\
&\quad\leq \sum_{v^n \in A^{*(n)}_\epsilon} |A^{*(n)}_\epsilon(p_V)|^{-1} \left[1 - 2^{-n(I(U;S) + \epsilon_1)}\right]^{2^{n(I(U;S) - I(V;Y) + 2\delta)}\, 2^{n(I(V;Y) - \delta)}} \\
&\quad= \left[1 - 2^{-n(I(U;S) + \epsilon_1)}\right]^{2^{n(I(U;S) - I(V;Y) + 2\delta)}\, 2^{n(I(V;Y) - \delta)}} \overset{(a)}{\leq} e^{-2^{-n(I(U;S) + \epsilon_1)} \cdot 2^{n(I(U;S) + \delta)}},
\end{aligned}
\]
where (a) follows from the inequality $1 - x \leq e^{-x}$. Substituting this into the upper bound for $P_e$, as in Case 1, we can make $P_e$ as small as desired by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_1$) and sufficiently large $n$.

g) Decoding at Bob: Bob receives $Y^n$ and searches for a unique pair $(V^n(\hat{i}), U^n(\hat{i}, \hat{m}, \hat{k}, \hat{\ell}))$ such that $(V^n(\hat{i}), U^n(\hat{i}, \hat{m}, \hat{k}, \hat{\ell}), Y^n) \in A^{*(n)}_\epsilon$. If no such pair exists, Bob declares an error. Otherwise, Bob declares $\hat{m}$ to be the secret message and $\hat{k}$ to be the secret key. Conditioned on encoding being successful, an error results only if there is a pair $(\hat{m}, \hat{k}) \neq (m, k)$ such that $(V^n(\hat{i}), U^n(\hat{i}, \hat{m}, \hat{k}, \hat{\ell}), Y^n) \in A^{*(n)}_\epsilon$ for some $\hat{i}$ and $\hat{\ell}$. We can upper bound the probability of this event by
\[
\begin{aligned}
&\sum_{\hat{i}} \sum_{(\hat{m}, \hat{k}) \neq (m, k)} \sum_{\hat{\ell}} \Pr\left((V^n(\hat{i}), U^n(\hat{i}, \hat{m}, \hat{k}, \hat{\ell}), Y^n) \in A^{*(n)}_\epsilon\right) \\
&\quad= \sum_{\hat{i} \neq i} \sum_{(\hat{m}, \hat{k}) \neq (m, k)} \sum_{\hat{\ell}} \Pr\left((V^n(\hat{i}), U^n(\hat{i}, \hat{m}, \hat{k}, \hat{\ell}), Y^n) \in A^{*(n)}_\epsilon\right) + \sum_{(\hat{m}, \hat{k}) \neq (m, k)} \sum_{\hat{\ell}} \Pr\left((V^n(i), U^n(i, \hat{m}, \hat{k}, \hat{\ell}), Y^n) \in A^{*(n)}_\epsilon\right) \\
&\quad\leq 2^{n(I(U;Y) - 2\delta)}\, 2^{-n(I(U;Y) - \epsilon_2)} + 2^{n(I(U;Y|V) - \delta)}\, 2^{-n(I(U;Y|V) - \epsilon_3)},
\end{aligned}
\]
which can be made as small as desired by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_2, \epsilon_3$) and sufficiently large $n$.
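The doubly exponential decay behind step (a) is easy to see numerically: with success probability $x = 2^{-n(I(U;S)+\epsilon_1)}$ and $N = 2^{n(I(U;S)+\delta)}$ candidate codewords, $(1-x)^N \leq e^{-xN} = \exp(-2^{n(\delta - \epsilon_1)})$, which vanishes super-exponentially once $\delta > \epsilon_1$. A small sketch with hypothetical rate values:

```python
import math

# Hypothetical values; the text requires delta > eps1.
I_US, delta, eps1 = 1.0, 0.1, 0.05

def failure_bound(n):
    # (1 - x)^N <= exp(-x * N) with x = 2^{-n(I_US + eps1)},
    # N = 2^{n(I_US + delta)}, so x * N = 2^{n(delta - eps1)}.
    x_times_N = 2.0 ** (n * (delta - eps1))
    return math.exp(-x_times_N)

bounds = [failure_bound(n) for n in (50, 100, 200)]
# The bound decreases in n and is already astronomically small
# for moderate blocklengths.
assert bounds[0] > bounds[1] > bounds[2]
assert bounds[2] < 1e-100
```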
h) Decoding at Eve with side information: Consider an Eve who has access to $M, K, V^n$. The subbin in which the transmitted $U^n$ lies is then known to her, and it contains at most $2^{n(I(U;Z|V) - \delta)}$ codewords. The probability of error of the jointly typical decoder for $U^n$ in this subbin, given $Z^n$, can be bounded as above. There are two error events: $E_1$ is the event that no sequence in the subbin is jointly typical with $Z^n$, and $E_2$ is the event that a false sequence in the subbin is jointly typical with $Z^n$. We have $P(E_1) \to 0$ as $n \to \infty$, and the probability that a false sequence is jointly typical with $Z^n$ is $2^{-n(I(U;Z|V) - \epsilon_4)}$. By a union bound, we can make the probability of error as small as desired by choosing sufficiently small $\delta, \epsilon$ (with $\delta > \epsilon_4$) and sufficiently large $n$.

By the usual random coding arguments, as in Case 1, we may now conclude that for any $\delta > 0$ and sufficiently large $n$, there exists a codebook with the rates as set above such that (i) Bob can recover the secret message and the secret key with probability of error not larger than $\delta$, and (ii) Eve, when provided with the message and the $V$ codeword, can recover the $U$ codeword with probability of error not larger than $\delta$. We now have to verify that this implies that (1) Eve's information about the message (given $Z^n$) goes to zero (secrecy condition), and (2) the secret key is approximately uniformly distributed over its alphabet (uniformity condition).

Proof of Secrecy Condition. First we observe that
\[
\begin{aligned}
H(M, K \mid Z^n) &\geq H(M, K \mid Z^n, V^n) = H(M, K, Z^n \mid V^n) - H(Z^n \mid V^n) \\
&= H(M, K, U^n, Z^n \mid V^n) - H(U^n \mid M, K, Z^n, V^n) - H(Z^n \mid V^n) \\
&\overset{(a)}{\geq} H(U^n, Z^n \mid V^n) - H(U^n \mid M, K, Z^n, V^n) - H(Z^n \mid V^n) \\
&= H(U^n \mid V^n) + H(Z^n \mid U^n, V^n) - H(U^n \mid M, K, Z^n, V^n) - H(Z^n \mid V^n),
\end{aligned} \tag{15}
\]
where (a) follows from $H(M, K \mid U^n, Z^n, V^n) \geq 0$.
We now bound each of these terms. For the codebook, define
\[
E = \{s^n : \exists\, (i, m, k, \ell) \text{ such that } (v^n(i), u^n(i, m, k, \ell), s^n) \in A^{*(n)}_\epsilon\}. \tag{16}
\]
Recall that for all $\alpha > 0$, there exists $n$ sufficiently large such that decoding (and hence encoding) succeeds with probability greater than $1 - \alpha$, i.e., $\Pr(S^n \in E) \geq 1 - \alpha$. Furthermore,
\[
\begin{aligned}
\Pr\left((V^n, U^n) = (v^n(i), u^n(i, m, k, \ell)), S^n \in E\right) &= \Pr\left(M = m, K = k, (V^n, U^n) = (v^n(i), u^n(i, m, k, \ell)), S^n \in E\right) \\
&= \Pr(M = m) \cdot \sum_{s^n : (v^n(i), u^n(i, m, k, \ell), s^n) \in A^{*(n)}_\epsilon} \Pr(S^n = s^n) \\
&\leq 2^{-n(I(U;Y) - I(U;S) - 3\delta)} \cdot 2^{nH(S|U) + n\epsilon}\, 2^{-nH(S) + n\epsilon} = 2^{-nI(U;Y) + 3n\delta + 2n\epsilon},
\end{aligned}
\]
which, along with the lower bound on $\Pr(S^n \in E)$ above, implies that
\[
\Pr\left((V^n, U^n) = (v^n(i), u^n(i, m, k, \ell)) \mid S^n \in E\right) \leq 2^{-nI(U;Y) + n\epsilon_{12}}. \tag{17}
\]
Also, we know that the codebook in which $(V^n, U^n)$ takes values has fewer than $2^{nI(U;Y)}$ elements, which implies that $H(U^n, V^n \mid S^n \in E) \geq nI(U;Y) - n\epsilon_{12}$. Using this, we can bound the first term in (15):
\[
\begin{aligned}
H(U^n \mid V^n) &= H(U^n, V^n) - H(V^n) \overset{(a)}{\geq} H(U^n, V^n) - nI(V;Y) \\
&\overset{(b)}{\geq} H(U^n, V^n \mid S^n \in E) \cdot \Pr(S^n \in E) - nI(V;Y) = nI(U;Y) - nI(V;Y) - n\epsilon_{13},
\end{aligned}
\]
where (a) follows from the fact that $V^n$ takes values in a codebook whose size is smaller than $2^{nI(V;Y)}$, and (b) follows from the fact that conditioning cannot increase entropy.
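The entropy lower bound extracted from (17) is an instance of the general fact $H(X) \geq -\log \max_x p(x)$: every term $\log(1/p(x))$ in the entropy sum is at least $\log(1/\max_x p(x))$. A quick numerical check on an arbitrary distribution:

```python
import math
import random

random.seed(0)
# An arbitrary distribution over 64 atoms.
w = [random.random() for _ in range(64)]
p = [x / sum(w) for x in w]

H = -sum(q * math.log2(q) for q in p)   # Shannon entropy in bits
min_entropy = -math.log2(max(p))        # -log2 of the largest atom

# H(X) >= -log2 max p(x): every summand log2(1/p(x)) >= log2(1/max p).
assert H >= min_entropy
```

In the proof, the maximum atom probability is $2^{-nI(U;Y) + n\epsilon_{12}}$, so the conditional entropy is at least $nI(U;Y) - n\epsilon_{12}$.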
We bound the second term in (15) as follows:
\[
\begin{aligned}
H(Z^n \mid U^n, V^n) &= H(Z^n \mid U^n) = \sum_{u^n} \Pr(U^n = u^n)\, H(Z^n \mid U^n = u^n) \\
&\overset{(a)}{=} \sum_{u^n} \Pr(U^n = u^n) \sum_{\mu \in \mathcal{U}} N(\mu \mid u^n)\, H(Z \mid U = \mu) \\
&\overset{(b)}{\geq} \sum_{u^n} \Pr(U^n = u^n) \sum_{\mu \in \mathcal{U}} n\left(\Pr(U = \mu) - \epsilon\right) H(Z \mid U = \mu) \\
&= \sum_{u^n} \Pr(U^n = u^n)\left(nH(Z \mid U) - n\epsilon_{14}\right) = nH(Z \mid U) - n\epsilon_{14},
\end{aligned}
\]
where (a) follows from the memoryless nature of the virtual channel from $U$ to $Z$, with $N(\mu \mid u^n)$ counting the number of times $\mu$ appears in the codeword $u^n$, and (b) follows from the fact that all the $u^n$ codewords belong to $A^{*(n)}_\epsilon$.

The third term can be bounded using Fano's inequality and the fact that Eve can recover the $U^n$ codeword with probability of error $\epsilon$ when she has access to $M$ and $V^n$ in addition to her observation $Z^n$:
\[
H(U^n \mid M, K, Z^n, V^n) \leq 1 + n \cdot \epsilon \cdot I(U; Z \mid V) = n\epsilon_{15}.
\]
Finally, to bound the fourth term, let $T$ be an indicator random variable which takes the value 1 when $(V^n, Z^n) \in A^{*(n)}_\epsilon$ and 0 otherwise. Then
\[
H(Z^n \mid V^n) \leq H(Z^n, T \mid V^n) \leq 1 + H(Z^n \mid V^n, T = 1) \Pr(T = 1) + n \log |\mathcal{Z}| \Pr(T = 0). \tag{18}
\]
But $\Pr(T = 0) = \Pr((V^n, Z^n) \notin A^{*(n)}_\epsilon) \leq \epsilon_{16}$. Furthermore, we have
\[
\begin{aligned}
H(Z^n \mid V^n, T = 1) &= \sum_{v^n} \Pr(V^n = v^n \mid T = 1)\, H(Z^n \mid V^n = v^n, T = 1) \\
&\overset{(a)}{\leq} \sum_{v^n} \Pr(V^n = v^n \mid T = 1) \log |A^{*(n)}_\epsilon(p_{Z|V} \mid v^n)| \\
&\leq \sum_{v^n} \Pr(V^n = v^n \mid T = 1)\left(nH(Z \mid V) + n\epsilon\right) = nH(Z \mid V) + n\epsilon,
\end{aligned}
\]
where in (a) we use $|A^{*(n)}_\epsilon(p_{Z|V} \mid v^n)|$ to denote the size of the set of all $z^n$ such that $(z^n, v^n) \in A^{*(n)}_\epsilon$. Thus, (18) becomes $H(Z^n \mid V^n) \leq nH(Z \mid V) + n\epsilon_{17}$. Hence, we may conclude from (15) that
\[
\frac{1}{n} H(M, K \mid Z^n) \geq I(U; Y \mid V) + H(Z \mid U) - H(Z \mid V) - \epsilon_{18} = I(U; Y \mid V) - I(U; Z \mid V) - \epsilon_{18} = R_{SM} + R_{SK} - \epsilon_{18}.
\]
Thus we have shown the secrecy condition.

Proof of Uniformity Condition.
Note that from (17), we have
\[
\begin{aligned}
H(K) &= H(V^n, M, K, \Phi_{\text{sub-index}}) - H(V^n, M, \Phi_{\text{sub-index}} \mid K) \\
&\overset{(a)}{\geq} H(V^n, M, K, \Phi_{\text{sub-index}}) - n\left(I(V;Y) + R_{SM} + I(U;Z|V)\right) \\
&\overset{(b)}{\geq} H(V^n, M, K, \Phi_{\text{sub-index}} \mid S^n \in E) \cdot \Pr(S^n \in E) - n\left(I(V;Y) + R_{SM} + I(U;Z|V)\right) \\
&\overset{(c)}{\geq} nI(U;Y) - n\left(I(V;Y) + R_{SM} + I(U;Z|V)\right) - n\epsilon_{13} = nR_{SK} - n\epsilon_{13},
\end{aligned}
\]
where (a) follows since $V^n$ is drawn from a codebook with no more than $2^{nI(V;Y)}$ elements, $M$ has fewer than $2^{nR_{SM}}$ values, and $\Phi_{\text{sub-index}}$ has no more than $2^{nI(U;Z|V)}$ values; (b) since conditional entropy is less than or equal to entropy; and (c) from the entropy lower bound implied by (17). Since $\epsilon_{13} \to 0$ as $\epsilon \to 0$, we satisfy the uniformity condition.

APPENDIX B: PROOF OF THEOREM 3

The achievability follows directly from Theorem 2 by setting the auxiliary random variables as follows:
\[
V_1 = (X_F, X_R), \quad V_2 = (V_{2,F}, X_R), \quad U_1 = U_{1,F}.
\]
It is easy to see that this satisfies the Markov conditions on the auxiliary random variables. Substituting these into the expression in Theorem 2 shows the achievability. The interpretation is that we ignore the reversely degraded source component, and the reversely degraded channel is used purely as a channel for public communication.

To show the converse, let $J$ and $J'$ be independent random variables, both uniformly distributed over $\{1, 2, \ldots, n\}$ and independent of all other random variables.
To get the first condition (ignoring $o(n)$ terms),
\[
\begin{aligned}
n\left(I(X_{F,J}; Y_{F,J}) + I(X_{R,J}; Y_{R,J})\right) &\geq nI(X_J; Y_J) \geq nI(X_J; Y_J \mid J) \geq I(X^n; Y^n) \\
&\overset{(a)}{=} I(X^n; Y^n, Z_F^n) = I(M, K, S_A^n, X^n; Y^n, Z_F^n) \\
&\geq I(M, K, S_A^n; Y^n, Z_F^n) \geq I(M, K, S_A^n; Y^n, Z_F^n) - I(S_B^n, S_E^n; Y^n, Z_F^n) \\
&\overset{(b)}{=} I(M, K, S_A^n; Y^n, Z_F^n \mid S_B^n, S_E^n) \\
&= I(M; Y^n, Z_F^n \mid S_B^n, S_E^n) + I(K, S_A^n; Y^n, Z_F^n \mid S_B^n, S_E^n, M) \\
&\overset{(c)}{=} H(M \mid S_B^n, S_E^n) + I(K, S_A^n; Y^n, Z_F^n \mid S_B^n, S_E^n, M) \\
&= H(M) + I(K, S_A^n; Y^n, Z_F^n \mid S_B^n, S_E^n, M) \\
&= nR_{SM} + I(K, S_A^n; Y^n, Z_F^n \mid S_B^n, S_E^n, M),
\end{aligned}
\]
where (a) is due to the sub-channel $F$ to Eve being degraded with respect to the channel to Bob, (b) is because $(S_B^n, S_E^n) - S_A^n - (M, K, Y^n, Z_F^n)$ is a Markov chain, and (c) follows from Fano's inequality, which gives $H(M \mid Y^n, S_B^n) = o(n)$. Now, to bound the second term, we write
\[
\begin{aligned}
I(K, S_A^n; Y^n, Z_F^n \mid S_B^n, S_E^n, M) &= H(Y^n, Z_F^n \mid S_B^n, S_E^n, M) - H(Y^n, Z_F^n \mid K, M, S_A^n, S_B^n, S_E^n) \\
&\geq H(Y^n, Z_F^n \mid S_B^n, S_E^n, M) - H(K, Y^n, Z_F^n \mid S_A^n, S_B^n, S_E^n, M) \\
&\overset{(a)}{=} H(K, Y^n, Z_F^n \mid S_B^n, S_E^n, M) - H(K, Y^n, Z_F^n \mid S_A^n, S_B^n, S_E^n, M) \\
&= I(K, Y^n, Z_F^n; S_A^n \mid S_B^n, S_E^n, M) \overset{(b)}{=} I(M, K, Y^n, Z_F^n; S_A^n \mid S_B^n, S_E^n) \\
&\geq I(M, K, Y^n, Z_F^n; S_{A,F}^n \mid S_{A,R}^n, S_B^n, S_E^n) = I(M, K, Y^n, Z_F^n; S_{A,F}^n \mid S_{A,R}^n, S_{B,F}^n, S_{E,F}^n) \\
&= \sum_{i=1}^n I(M, K, Y^n, Z_F^n; S_{A,F,i} \mid S_{A,F}^{i-1}, S_{A,R}^n, S_{B,F}^n, S_{E,F}^n) \\
&\geq \sum_{i=1}^n I(M, K, Y^n, Z_F^n; S_{A,F,i} \mid S_{A,R}^n, S_{B,F}^n, S_{E,F}^n) \\
&= \sum_{i=1}^n I(M, K, Y^n, Z_F^n, S_{B,F,\tilde{i}}, S_{E,F,\tilde{i}}, S_{A,R}^n; S_{A,F,i} \mid S_{B,F,i}, S_{E,F,i}) \\
&= nI(M, K, Y^n, Z_F^n, S_{B,F,\tilde{J}'}, S_{E,F,\tilde{J}'}, S_{A,R}^n; S_{A,F,J'} \mid S_{B,F,J'}, S_{E,F,J'}, J') \\
&\overset{(c)}{=} nI(M, K, Y^n, Z_F^n, S_{B,F,\tilde{J}'}, S_{E,F,\tilde{J}'}, S_{A,R}^n, J'; S_{A,F,J'} \mid S_{B,F,J'}, S_{E,F,J'}) \\
&= nI(U_{1,F}; S_{A,F,J'} \mid S_{B,F,J'}, S_{E,F,J'}) \overset{(d)}{=} nI(U_{1,F}; S_{A,F,J'} \mid S_{B,F,J'}) \\
&\overset{(e)}{=} n\left(I(U_{1,F}; S_{A,F,J'}) - I(U_{1,F}; S_{B,F,J'})\right),
\end{aligned}
\]
where we define $S_{B,F,\tilde{i}} \overset{\text{def}}{=} (S_{B,F}^{i-1}, S_{B,F,i+1}^n)$, $S_{E,F,\tilde{i}} \overset{\text{def}}{=} (S_{E,F}^{i-1}, S_{E,F,i+1}^n)$, and $U_{1,F} \overset{\text{def}}{=} (M, K, Y^n, Z_F^n, S_{B,F,\tilde{J}'}, S_{E,F,\tilde{J}'}, S_{A,R}^n, J')$. Note that (a) follows from Fano's inequality, which implies that $H(K \mid Y^n, S_B^n) = o(n)$, and (b) follows from the independence of $M$ from $(S_A^n, S_B^n, S_E^n)$. To see (c), note that $(S_{A,J'}, S_{B,J'}, S_{E,J'})$ has the same joint distribution as $(S_A, S_B, S_E)$. This equivalence of joint distributions, together with the fact that $U_{1,F}$ does indeed satisfy the Markov condition $U_{1,F} - S_{A,J'} - (S_{B,J'}, S_{E,J'})$, implies that $U_{1,F} - S_{A,F,J'} - S_{B,F,J'} - S_{E,F,J'}$ is a Markov chain, which gives us (d) and (e).
To get condition 2,
\[
\begin{aligned}
n(R_{SK} + R_{SM}) &\leq I(M, K; Y^n, Z_F^n, S_B^n, S_E^n) \\
&\overset{(a)}{=} I(M, K; Y^n, Z_F^n, S_B^n, S_E^n) - I(M, K; Z^n, S_E^n) \\
&\overset{(b)}{=} I(M, K; Y^n, Z_F^n, S_B^n, S_E^n) - I(M, K; Z^n, Y_R^n, S_E^n) \\
&\leq I(M, K; Y^n, Z_F^n, S_B^n, S_E^n) - I(M, K; Z_F^n, Y_R^n, S_E^n) \\
&\overset{(c)}{=} I(M, K; Y_F^n, S_{B,F}^n \mid Y_R^n, Z_F^n, S_E^n) \\
&= I(M, K; Y_F^n \mid Y_R^n, Z_F^n, S_E^n) + I(M, K; S_{B,F}^n \mid Y^n, Z_F^n, S_E^n) \\
&\leq I(M, K, S_E^n, Y_R^n, X_F^n; Y_F^n \mid Z_F^n) + I(M, K; S_{B,F}^n \mid Y^n, Z_F^n, S_E^n) \\
&= I(X_F^n; Y_F^n \mid Z_F^n) + \sum_{i=1}^n I(M, K; S_{B,F,i} \mid Y^n, Z_F^n, S_{B,F}^{i-1}, S_E^n) \\
&= H(Y_F^n \mid Z_F^n) - \sum_{i=1}^n H(Y_{F,i} \mid X_{F,i}, Z_{F,i}) + \sum_{i=1}^n I(M, K; S_{B,F,i} \mid Y^n, Z_F^n, S_{B,F}^{i-1}, S_E^n) \\
&\leq \sum_{i=1}^n H(Y_{F,i} \mid Z_{F,i}) - \sum_{i=1}^n H(Y_{F,i} \mid X_{F,i}, Z_{F,i}) + \sum_{i=1}^n I(M, K, Y^n, Z_F^n, S_{B,F,\tilde{i}}, S_{E,F,\tilde{i}}, S_{A,R}^n; S_{B,F,i} \mid S_{E,F,i}) \\
&= nI(X_{F,J}; Y_{F,J} \mid Z_{F,J}, J) + nI(M, K, Y^n, Z_F^n, S_{B,F,\tilde{J}'}, S_{E,F,\tilde{J}'}, S_{A,R}^n; S_{B,F,J'} \mid S_{E,F,J'}, J') \\
&\leq nI(X_{F,J}; Y_{F,J} \mid Z_{F,J}, J) + nI(U_{1,F}; S_{B,F,J'} \mid S_{E,F,J'}) \\
&\overset{(d)}{=} n\left(I(X_{F,J}; Y_{F,J} \mid V_{2,F}) - I(X_{F,J}; Z_{F,J} \mid V_{2,F})\right) + n\left(I(U_{1,F}; S_{B,F,J'}) - I(U_{1,F}; S_{E,F,J'})\right),
\end{aligned}
\]
where $V_{2,F} \overset{\text{def}}{=} J$; (a) follows from the hypothesis $I(M, K; Z^n, S_E^n) = o(n)$; (b) from the fact that $I(M, K; Y_R^n \mid S_E^n, Z^n) = 0$, which we show below; (c) from the Markov chain $(M, K, Y^n, Z_F^n, S_A^n) - S_{E,R}^n - S_{B,R}^n$; and (d) from the degradation of the source component $F$ and the sub-channel $F$. Indeed,
\[
0 = I(S_A^n, M, K; Y_R^n \mid Z^n) \overset{(a)}{=} I(S_E^n, S_A^n, M, K; Y_R^n \mid Z^n) \geq I(M, K; Y_R^n \mid S_E^n, Z^n),
\]
where (a) follows from the Markov chain $S_E^n - (S_A^n, M, K) - Z^n - Y_R^n$.
By the non-negativity of mutual information, $I(M, K; Y_R^n \mid S_E^n, Z^n) = 0$, as claimed above. Thus, we have shown that if $(R_1, R_2) \in \mathcal{C}$, then there must exist random variables $U_{1,F}$ and $V_{2,F}$ jointly distributed with $X_F, Y_F, Z_F, X_R, Y_R, Z_R, S_{A,F}, S_{B,F}, S_{E,F}, S_{A,R}, S_{B,R}, S_{E,R}$ such that their joint distribution is of the form
\[
p_{S_{A,F}, S_{B,F}, S_{E,F}}\, p_{S_{A,R}, S_{B,R}, S_{E,R}}\, p_{U_{1,F} \mid S_{A,F}, S_{A,R}}\, p_{V_{2,F}, X_F}\, p_{Y_F, Z_F \mid X_F}\, p_{X_R}\, p_{Y_R, Z_R \mid X_R},
\]
and
\[
R_{SM} \leq I(X_F; Y_F) + I(X_R; Y_R) - \left(I(U_{1,F}; S_{A,F}) - I(U_{1,F}; S_{B,F})\right), \tag{19}
\]
\[
R_{SK} + R_{SM} \leq I(X_F; Y_F \mid V_{2,F}) - I(X_F; Z_F \mid V_{2,F}) + I(U_{1,F}; S_{B,F}) - I(U_{1,F}; S_{E,F}). \tag{20}
\]
The form of the right-hand sides above further allows us to assert that the $U_{1,F}$ above may be taken independent of $S_{A,R}$, i.e., it is enough to consider joint distributions of the form
\[
p_{S_{A,F}, S_{B,F}, S_{E,F}}\, p_{S_{A,R}, S_{B,R}, S_{E,R}}\, p_{U_{1,F} \mid S_{A,F}}\, p_{V_{2,F}, X_F}\, p_{Y_F, Z_F \mid X_F}\, p_{X_R}\, p_{Y_R, Z_R \mid X_R}. \tag{21}
\]
This completes the proof.

i) Bandwidth mismatch: Suppose there is a bandwidth mismatch of $m_S$ source symbols for every $m_C$ channel symbols. Then, for a blocklength of $n$ channel symbols, we have $n_S \overset{\text{def}}{=} \lfloor n m_S / m_C \rfloor$ source symbols. The only modification we need to make to the converse is to set $J'$ to be uniformly distributed over $\{1, 2, \ldots, n_S\}$. It is straightforward to verify that the arguments carry over to the bandwidth mismatch setting.

j) Stochastically degraded case: Theorem 3 also holds when the channels and sources are only stochastically degraded. Achievability follows directly from Theorem 2. For the converse, let us recall the definition of stochastic degradation.
For the source component $F$ made up of $S_{A,F}, S_{B,F}, S_{E,F}$, stochastic degradation means that there is a conditional distribution $p_{\tilde{S}_{E,F} \mid S_{A,F}, S_{B,F}}$ such that we may define a random variable $\tilde{S}_{E,F}$ jointly distributed with $S_{A,F}, S_{B,F}$ which satisfies (i) $S_{A,F} - S_{B,F} - \tilde{S}_{E,F}$ is a Markov chain, and (ii) $(S_{A,F}, \tilde{S}_{E,F})$ has the same joint distribution as $(S_{A,F}, S_{E,F})$. Without loss of generality, we may assume that the joint distribution of $S_{A,F}, S_{B,F}, S_{E,F}, \tilde{S}_{E,F}$ follows $p_{S_{A,F}, S_{B,F}}\, p_{S_{E,F} \mid S_{A,F}, S_{B,F}}\, p_{\tilde{S}_{E,F} \mid S_{A,F}, S_{B,F}}$. Similarly, for the sub-channel $F$ given by $p_{Y_F, Z_F \mid X_F}$, there is a conditional distribution $p_{Y_F, \tilde{Z}_F \mid X_F}$ such that $X_F - Y_F - \tilde{Z}_F$ is a Markov chain and $p_{\tilde{Z}_F \mid X_F}$ is the same as $p_{Z_F \mid X_F}$. Again, without loss of generality, the conditional distribution of $Y_F, Z_F, \tilde{Z}_F$ given $X_F$ may be assumed to follow $p_{Y_F \mid X_F}\, p_{Z_F \mid X_F, Y_F}\, p_{\tilde{Z}_F \mid X_F, Y_F}$. Similarly, we have $\tilde{S}_{B,R}$ and $\tilde{Y}_R$. Let $\tilde{S}_E = (\tilde{S}_{E,F}, S_{E,R})$, $\tilde{S}_B = (S_{B,F}, \tilde{S}_{B,R})$, $\tilde{Y} = (Y_F, \tilde{Y}_R)$, and $\tilde{Z} = (\tilde{Z}_F, Z_R)$.

Notice, first of all, that for any coding scheme, $I(M, K; S_E^n, Z^n)$ depends only on the joint distribution of the random variables available at Alice and Eve. By the definition of stochastic degradation, this implies that $I(M, K; \tilde{S}_E^n, \tilde{Z}^n) = I(M, K; S_E^n, Z^n)$. Hence, the secrecy condition also applies for a dummy Eve who receives $(\tilde{S}_E^n, \tilde{Z}^n)$ instead of the actual observations of Eve, i.e., $I(M, K; \tilde{S}_E^n, \tilde{Z}^n) = o(n)$. Similarly, the probability of decoding error for Bob is only a function of the joint distribution of the random variables at Alice and Bob, which is again preserved if we consider a dummy Bob who receives $(\tilde{S}_B^n, \tilde{Y}^n)$.
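The definition above can be made concrete with a small example: a channel to Eve is stochastically degraded with respect to Bob's if some conditional distribution from $Y$ to a dummy $\tilde{Z}$ composes with $p_{Y|X}$ to reproduce Eve's marginal $p_{Z|X}$. For binary symmetric channels (a hypothetical choice purely for illustration; nothing in the proof is restricted to them), a BSC($q$) is stochastically degraded with respect to a BSC($p$) whenever $p \leq q \leq 1/2$:

```python
# Hypothetical example: X -> Y is BSC(p), X -> Z is BSC(q), p <= q <= 1/2.
# Cascading a BSC(r) after BSC(p) gives BSC(p + r - 2pr), so choosing
# r = (q - p) / (1 - 2p) builds a Markov chain X - Y - Z~ whose marginal
# p_{Z~|X} equals p_{Z|X}.
p, q = 0.1, 0.2
r = (q - p) / (1 - 2 * p)

def bsc(flip):
    # 2x2 transition matrix [input][output] of a binary symmetric channel
    return [[1 - flip, flip], [flip, 1 - flip]]

def compose(a, b):
    # (a then b)[x][z] = sum_y a[x][y] * b[y][z]
    return [[sum(a[x][y] * b[y][z] for y in range(2)) for z in range(2)]
            for x in range(2)]

p_Z_given_X = bsc(q)
p_Ztilde_given_X = compose(bsc(p), bsc(r))  # X - Y - Z~ Markov chain

for x in range(2):
    for z in range(2):
        assert abs(p_Ztilde_given_X[x][z] - p_Z_given_X[x][z]) < 1e-12
```

This is exactly the move the proof makes for the sources and sub-channels: the dummy variables are built through an intermediate conditional so that the marginals seen by Eve (or Bob) are unchanged.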
Hence, we may now repeat our converse arguments for the setup with the dummy Bob and dummy Eve, who have physically degraded sources and channels. We can verify that if $(R_1, R_2) \in \mathcal{C}$, our converse proof in fact implies the existence of $U_{1,F}$ and $V_{2,F}$ which satisfy
\[
R_{SM} \leq I(X_F; Y_F) + I(X_R; \tilde{Y}_R) - \left(I(U_{1,F}; S_{A,F}) - I(U_{1,F}; S_{B,F})\right), \tag{22}
\]
\[
R_{SK} + R_{SM} \leq I(X_F; Y_F \mid V_{2,F}) - I(X_F; \tilde{Z}_F \mid V_{2,F}) + I(U_{1,F}; S_{B,F}) - I(U_{1,F}; \tilde{S}_{E,F}), \tag{23}
\]
with joint distributions of the form
\[
p_{S_{A,F}, S_{B,F}}\, p_{S_{E,F} \mid S_{A,F}, S_{B,F}}\, p_{\tilde{S}_{E,F} \mid S_{A,F}, S_{B,F}}\, p_{S_{A,R}, S_{B,R}, \tilde{S}_{B,R}, S_{E,R}}\, p_{U_{1,F} \mid S_{A,F}}\, p_{V_{2,F}, X_F}\, p_{Y_F, Z_F, \tilde{Z}_F \mid X_F}\, p_{X_R}\, p_{Y_R, \tilde{Y}_R, Z_R \mid X_R}. \tag{24}
\]
Using the fact that $p_{\tilde{Y}_R \mid X_R} = p_{Y_R \mid X_R}$, we can replace $\tilde{Y}_R$ in (22) by $Y_R$. Similarly, using the fact that $p_{\tilde{Z}_F \mid X_F} = p_{Z_F \mid X_F}$ (which by (24) implies that $p_{V_{2,F}, X_F, \tilde{Z}_F} = p_{V_{2,F}, X_F, Z_F}$), we may replace $\tilde{Z}_F$ in (23) by $Z_F$. By a similar argument, we may also replace $\tilde{S}_{E,F}$ in (23) by $S_{E,F}$. Finally, by marginalizing away the dummy variables in (24), we have the result for the stochastically degraded case as well.

APPENDIX C: PROOF OF PROPOSITION 4

While we stated Theorems 2 and 3 only for finite alphabets, the results can be extended to continuous alphabets. We note that the scalar Gaussian problem satisfies the conditions of Theorem 3 (along with Remark 1 following it). Observe that in the notation of Theorem 2, $S_{A,F} = S_A$ and $S_{B,F} = S_B$. Further, $S_{A,R}$, $S_{B,R}$, $S_{E,F}$, and $S_{E,R}$ are absent (assumed to be constants). When $\mathrm{SNR}_{\mathrm{Eve}} \geq \mathrm{SNR}_{\mathrm{Bob}}$, we have $X_R = X$, $Y_R = Y$, and $Z_R = Z$, and the forwardly degraded sub-channel is absent (again, we may take the random variables of this sub-channel to be constants). When $\mathrm{SNR}_{\mathrm{Bob}} \geq \mathrm{SNR}_{\mathrm{Eve}}$, we have $X_F = X$, $Y_F = Y$, and $Z_F = Z$, and the reversely degraded sub-channel is absent.
Hence, from Theorem 2, $\mathcal{C}$ is given by the union of $\tilde{\mathcal{R}}(p)$ over all joint distributions $p$, where $\tilde{\mathcal{R}}(p)$ is described by
\[
R_{SM} \leq I(X_F; Y_F) + I(X_R; Y_R) - I(U_1; S_A \mid S_B), \tag{25}
\]
\[
R_{SK} + R_{SM} \leq I(X_F; Y_F \mid V_2) - I(X_F; Z_F \mid V_2) + I(U_1; S_B). \tag{26}
\]
When specialized to the Gaussian case above, it is easy to see that $I(X_F; Y_F) + I(X_R; Y_R) \leq C_Y$ and $I(X_F; Y_F \mid V_2) - I(X_F; Z_F \mid V_2) \leq [C_Y - C_Z]^+$, where $C_Y = \frac{1}{2} \log(1 + \mathrm{SNR}_{\mathrm{Bob}})$ and $C_Z = \frac{1}{2} \log(1 + \mathrm{SNR}_{\mathrm{Eve}})$. These bounds are simultaneously achieved when $p$ is such that $V_2$ is a constant and $X$ is Gaussian of variance $\mathrm{SNR}_{\mathrm{Bob}}$. Hence, we may rewrite the conditions above as
\[
R_{SM} \leq C_Y - I(U_1; S_A) + I(U_1; S_B), \tag{27}
\]
\[
R_{SK} + R_{SM} \leq [C_Y - C_Z]^+ + I(U_1; S_B). \tag{28}
\]
Now we show outer bounds on $\tilde{\mathcal{R}}(p)$ which match the two conditions in Proposition 4. It will also become clear that a jointly Gaussian choice for $p$ in fact achieves these outer bounds, thus completing the proof. We first derive an upper bound on $R_{SM}$ which matches the first condition in Proposition 4. From the two inequalities (27) and (28) above, we have
\[
R_{SM} \leq C_Y - I(U_1; S_A) + I(U_1; S_B), \tag{29}
\]
\[
R_{SM} \leq [C_Y - C_Z]^+ + I(U_1; S_B). \tag{30}
\]
By the entropy power inequality,
\[
\exp(2h(S_B \mid U_1)) \geq \exp(2h(S_A \mid U_1)) + \exp(2h(N_{\mathrm{source}})).
\]
Using this in (29), we may write
\[
\begin{aligned}
\exp(2R_{SM}) &\leq \exp\left(2(C_Y + I(U_1; S_B) - h(S_A))\right)\left(\exp\left(2(h(S_B) - I(U_1; S_B))\right) - \exp(2h(N_{\mathrm{source}}))\right) \\
&= \exp\left(2(C_Y - h(S_A) + h(S_B))\right) - \exp\left(2(C_Y - h(S_A) + h(N_{\mathrm{source}}))\right) \exp\left(2I(U_1; S_B)\right) \\
&\overset{(a)}{\leq} \exp\left(2(C_Y - h(S_A) + h(S_B))\right) - \exp(2R_{SM}) \exp\left(2(C_Y - [C_Y - C_Z]^+ - h(S_A) + h(N_{\mathrm{source}}))\right),
\end{aligned}
\]
where (a) results from (30).
Rearranging, we have
\[
\exp(2R_{SM}) \leq \frac{\exp\left(2(C_Y - h(S_A) + h(S_B))\right)}{1 + \exp\left(2(C_Y - [C_Y - C_Z]^+ - h(S_A) + h(N_{\mathrm{source}}))\right)} = \frac{(1 + \mathrm{SNR}_{\mathrm{Bob}})(1 + \mathrm{SNR}_{\mathrm{src}})}{1 + \mathrm{SNR}_{\mathrm{src}} + \min(\mathrm{SNR}_{\mathrm{Bob}}, \mathrm{SNR}_{\mathrm{Eve}})},
\]
which is the first condition in Proposition 4. Now let us fix an $R_{SM}$ satisfying this condition, and rewrite (27) as
\[
h(S_A \mid U_1) \geq \left(R_{SM} - C_Y + h(S_A) - h(S_B)\right) + h(S_B \mid U_1).
\]
The entropy power inequality then implies that
\[
\exp(2h(S_B \mid U_1)) \geq \exp(2h(S_A \mid U_1)) + \exp(2h(N_{\mathrm{source}})) \geq \exp\left(2(R_{SM} - C_Y + h(S_A) - h(S_B))\right) \exp(2h(S_B \mid U_1)) + 1.
\]
Since
\[
R_{SM} \leq \frac{1}{2} \log \frac{(1 + \mathrm{SNR}_{\mathrm{Bob}})(1 + \mathrm{SNR}_{\mathrm{src}})}{1 + \mathrm{SNR}_{\mathrm{src}} + \min(\mathrm{SNR}_{\mathrm{Bob}}, \mathrm{SNR}_{\mathrm{Eve}})} \leq \frac{1}{2} \log \frac{(1 + \mathrm{SNR}_{\mathrm{Bob}})(1 + \mathrm{SNR}_{\mathrm{src}})}{\mathrm{SNR}_{\mathrm{src}}} = C_Y - h(S_A) + h(S_B),
\]
we have
\[
\exp(2h(S_B \mid U_1)) \geq \frac{1}{1 - \exp\left(2(R_{SM} - C_Y + h(S_A) - h(S_B))\right)}.
\]
From (28),
\[
\begin{aligned}
\exp(2R_{SK}) &\leq \exp\left(2([C_Y - C_Z]^+ + h(S_B) - h(S_B \mid U_1) - R_{SM})\right) \\
&\leq \exp\left(2([C_Y - C_Z]^+ + h(S_B) - R_{SM})\right)\left(1 - \exp\left(2(R_{SM} - C_Y + h(S_A) - h(S_B))\right)\right) \\
&\leq \exp\left(2([C_Y - C_Z]^+ - C_Y)\right)\left(\exp\left(2(C_Y + h(S_B) - R_{SM})\right) - \exp(2h(S_A))\right),
\end{aligned}
\]
which evaluates to the second condition required. The inequalities used above are tight under a jointly Gaussian choice for the auxiliary random variable, which proves the achievability.
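The closed-form bound derived above is straightforward to evaluate. A numerical sketch (the SNR values are hypothetical, and rates are expressed in bits, i.e., logs base 2, so $\exp(2R_{SM}) \leq$ ratio becomes $R_{SM} \leq \frac{1}{2}\log_2$ ratio):

```python
import math

def secrecy_rate_bound(snr_bob, snr_eve, snr_src):
    # First condition of Proposition 4:
    # exp(2 R_SM) <= (1+SNR_Bob)(1+SNR_src) / (1+SNR_src+min(SNR_Bob,SNR_Eve))
    ratio = ((1 + snr_bob) * (1 + snr_src)
             / (1 + snr_src + min(snr_bob, snr_eve)))
    return 0.5 * math.log2(ratio)  # bits per channel use

C_Y = lambda snr: 0.5 * math.log2(1 + snr)  # Bob's channel capacity

r = secrecy_rate_bound(snr_bob=15.0, snr_eve=3.0, snr_src=10.0)
# The secret-message rate can never exceed Bob's channel capacity C_Y ...
assert r <= C_Y(15.0)
# ... and a stronger eavesdropper (larger SNR_Eve) only shrinks the bound.
assert secrecy_rate_bound(15.0, 8.0, 10.0) <= r
```

The two assertions reflect the qualitative behavior one expects from the region: the bound is capped by $C_Y$ and is monotonically non-increasing in Eve's SNR.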