State Amplification Subject To Masking Constraints
Authors: O. Ozan Koyluoglu, Rajiv Soundararajan, Sriram Vishwanath
Abstract: This paper considers a state-dependent broadcast channel with one transmitter, Alice, and two receivers, Bob and Eve. The problem is to effectively convey ("amplify") the channel state sequence to Bob while "masking" it from Eve. The extent to which the state sequence cannot be masked from Eve is referred to as leakage. This can be viewed as a secrecy problem, where we desire that the channel state itself be minimally leaked to Eve while being communicated to Bob. The paper is aimed at characterizing the trade-off region between the amplification and leakage rates for such a system. An achievable coding scheme is presented, wherein the transmitter sends partial state information over the channel to facilitate the amplification process. For the case when Bob observes a stronger signal than Eve, the achievable coding scheme is enhanced with secure refinement. Outer bounds on the trade-off region are also derived and used in characterizing some special-case results. In particular, the optimal amplification-leakage rate difference, called the differential amplification capacity, is characterized for the reversely degraded discrete memoryless channel, the degraded binary channel, and the degraded Gaussian channel. In addition, for the degraded Gaussian model, the extremal corner points of the trade-off region are characterized, and the gap between the outer bound and the achievable rate regions is shown to be less than half a bit for a wide set of channel parameters.

I. INTRODUCTION

A. Problem Statement

In this paper, we consider a state-dependent broadcast channel model with two users, and investigate the question of to what extent the state of the channel can be amplified at one receiver (Bob) and masked from the other receiver (referred to as Eve).
The entire channel state sequence is presumed to be known non-causally to the transmitter (Alice). The only way to affect the state information at Bob and Eve is through the encoding scheme used at the transmitter. For such a system, we aim to characterize the trade-off between the "amplification" rate R_a (at which the legitimate pair can operate) and the "leakage" rate R_l (to the eavesdropper). Formally, consider a discrete memoryless channel given by p(y, z | x, s), where x ∈ X is the channel input, s ∈ S is the channel state, and (y, z) ∈ Y × Z is the channel output. Here, y corresponds to the received channel output at the legitimate receiver (Bob), and z is the output at the eavesdropper (Eve). The channel is memoryless in the sense that

p(Y^n = y^n, Z^n = z^n | X^n = x^n, S^n = s^n) = ∏_{i=1}^n p(y_i, z_i | x_i, s_i),

and the state sequence S^n is independent and identically distributed (i.i.d.) according to a probability distribution denoted by p(s). It is assumed that the channel state sequence is non-causally known at the transmitter.

O. Ozan Koyluoglu is with the Department of Electrical and Computer Engineering, The University of Arizona, Tucson, AZ. Email: ozan@email.arizona.edu. Rajiv Soundararajan is with Qualcomm Research India, Bangalore, India. Email: rajivs@utexas.edu. Sriram Vishwanath is with the Department of Electrical and Computer Engineering, The University of Texas, Austin, TX. Email: sriram@austin.utexas.edu. The material in this paper was presented in part at the Forty-Ninth Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, in September 2011.

November 19, 2018 DRAFT

Fig. 1. The system model for the amplification subject to masking problem: the encoder maps the state S^n ~ ∏_{i=1}^n p(s_i) to the channel input X^n(S^n); the channel p(y, z | x, s) yields Y^n at Bob, with R_a ≤ (1/n) I(S^n; Y^n), and Z^n at Eve, with R_l ≥ (1/n) I(S^n; Z^n).
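As an illustration of this system model, the following sketch simulates one block. The binary alphabets, the noise level, and the trivially simple encoder (the all-zeros input, so that Bob's output reveals the state) are illustrative assumptions, not constructions from the paper.

```python
import random

random.seed(0)
n = 8  # block length (small, for illustration)

def sample_state():
    # State S_i i.i.d. ~ p(s); a uniform binary state is assumed here.
    return random.randint(0, 1)

def encoder(s_seq):
    # Enc: S^n -> X^n, non-causal (it sees the whole s^n). Here x^n = all zeros,
    # a choice that lets Bob's output track the state in the toy channel below.
    return [0] * len(s_seq)

def channel(x, s):
    # Memoryless p(y, z | x, s): Bob sees x xor s cleanly; Eve sees a noisy copy.
    y = x ^ s
    z = y ^ (1 if random.random() < 0.3 else 0)
    return y, z

s_seq = [sample_state() for _ in range(n)]
x_seq = encoder(s_seq)
y_seq, z_seq = zip(*(channel(x, s) for x, s in zip(x_seq, s_seq)))
# With this encoder, y_i = s_i exactly: full "amplification" at Bob in the toy model,
# while Eve only observes the state through a noisy channel.
```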
The system model is given in Fig. 1. The task of the encoder is to "amplify" the state sequence at Bob (channel output Y^n) and to "mask" the state sequence from Eve (Z^n). Formally, the encoder Enc(S^n, n) is a mapping of the channel state S^n to the channel input X^n, i.e., Enc: S^n → X^n, which can be characterized by a conditional probability distribution p(x^n | s^n). A pair (R_a, R_l) is said to be achievable if, for any given ε > 0, there exists a sequence of encoders {Enc(S^n, n)} such that

(1/n) I(S^n; Y^n) ≥ R_a − ε,   (1)
(1/n) I(S^n; Z^n) ≤ R_l + ε,   (2)

for sufficiently large n, where the mutual information terms are with respect to the joint distribution

p(s^n, x^n, y^n, z^n) = p(x^n | s^n) ∏_{i=1}^n p(s_i) p(y_i, z_i | s_i, x_i).

The problem is to characterize all achievable (R_a, R_l) pairs, which we denote by the trade-off region C. The performance of the encoder is also quantified by measuring the difference between the achievable amplification and leakage rates. The differential amplification rate R_d is said to be achievable if R_d = R_a − R_l for some (R_a, R_l) ∈ C. The maximum value of the differential amplification rate is called the differential amplification capacity, denoted by C_d, where

C_d = sup_{(R_a, R_l) ∈ C} R_a − R_l.

Note that this quantity is a property of a given trade-off region C and can play a role in some applications, as this difference measures the gap between the two receivers' knowledge of the channel state. (This quantity, C_d, is discussed further in later parts of the sequel.) In general, we are interested not only in the knowledge difference, but also in the entire rate trade-off region, which constitutes the main focus of this paper.
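For intuition, the per-letter quantities in (1)-(2) can be computed in closed form for a toy uncoded scheme; the binary channels and crossover probabilities below are illustrative assumptions (and, since the scheme is i.i.d., the per-letter mutual informations equal the n-letter rates).

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(joint):
    """I(A;B) in bits from a joint pmf given as {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Toy "uncoded" scheme: S ~ Bern(1/2), X = S, Y = X xor N_b, Z = X xor N_e.
p_b, p_e = 0.1, 0.3  # Bob's channel is assumed less noisy than Eve's

joint_sy = {(s, s ^ nz): 0.5 * (p_b if nz else 1 - p_b) for s in (0, 1) for nz in (0, 1)}
joint_sz = {(s, s ^ nz): 0.5 * (p_e if nz else 1 - p_e) for s in (0, 1) for nz in (0, 1)}

Ra = mutual_information(joint_sy)  # amplification rate achieved: 1 - h2(p_b)
Rl = mutual_information(joint_sz)  # leakage incurred: 1 - h2(p_e)
```

Here R_a ≈ 0.53 bits and R_l ≈ 0.12 bits, so even this naive encoder yields a positive differential rate R_a − R_l; the schemes in the paper improve on this baseline.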
A cost constraint may also be imposed on the channel input:

(1/n) ∑_{i=1}^n E{c(X_i)} ≤ C,   (3)

where c: X → R^+ defines the cost per input letter and the expectation is over the distribution of the channel input. In this scenario, we say (R_a, R_l) is achievable under the cost function c(·) and at expected cost C if there exists a sequence of encoders satisfying (1), (2), and (3) in the limit of large n. (We use this constraint for the Gaussian channel, where the cost is the average transmitted power.) Finally, we note that the equivocation rate (denoted by Δ_l) can be used in this formulation as well. In particular, Δ_l is said to be achievable if there exists a sequence of encoders satisfying

Δ_l ≤ (1/n) H(S^n | Z^n) + ε

for sufficiently large n. Accordingly, the achievable (R_a, Δ_l) pairs can be defined, and the problem can be re-formulated in terms of the equivocation rate, where we seek to characterize all achievable (R_a, Δ_l) pairs in the limit of large n. Since S^n is i.i.d., (1/n) H(S^n | Z^n) = H(S) − (1/n) I(S^n; Z^n), so the equivocation and leakage rate notions characterize the same trade-off (via Δ_l = H(S) − R_l) and can be used interchangeably.

B. Related Works and Applications

The problem of communication over state-dependent channels was studied by Gel'fand and Pinsker [1], where a message has to be reliably transmitted over the channel with non-causal state knowledge at the transmitter. The Gaussian version of the problem is solved in [2] through the famous dirty paper coding scheme. The wiretap channel is introduced and solved in [3], and these results are extended to a broadcast setting in [4]. The problem of sending secure messages over state-dependent wiretap channels is studied in [5], [6]. On the other hand, the problems of state amplification and state masking are individually solved in [7], [8], [9] for point-to-point channels.
Both [7], [8] and [9] consider the problem of reliable transmission of messages in addition to state amplification and state masking, respectively. In this paper, we consider the problem of amplifying the state to a desired receiver while trying to minimize the leakage to (or mask the state from) the eavesdropper. We note that, if we set R_a = 0 in our problem definition, the problem reduces to the state masking problem studied in [9]. In other words,

R_a = 0,   R_l = min_{p(x|s) s.t. E{c(X)} ≤ C} I(S; Z)

can be shown to be achievable [9]. Also, when R_l ≥ H(S), the problem reduces to a state amplification problem [8], and one can achieve the following rate pair:

R_a = min{ H(S), max_{p(x|s)} I(X, S; Y) },   R_l ≥ H(S).

These represent two extremes of the trade-off region between the amplification and masking rates. The main focus of this paper is to characterize the entire trade-off region of amplification and leakage rates. In the following, we list some applications of the proposed model.

1) Broadcast channels: Communication over state-dependent channels has a wide set of applications in broadcast channels, especially for MIMO systems [10], [11], [12]. In such multi-user channels, an information-carrying signal for one user can be modeled as a state sequence for another. As the transmitter knows the information-carrying signal for the first user, it can be treated as a known state for the transmission to the second user. In addition, the second signal (intended for the second user) can be considered as an overlay to the first one, especially in multicasting scenarios. (Here, the codeword of the first signal may follow an i.i.d. process, or it can be uncoded information [9].) In such multi-user settings, the security of the communication arises as an important problem due to the broadcast nature of the medium.
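The two extreme points of the trade-off noted above (pure masking with R_a = 0, and pure amplification with R_l ≥ H(S)) can be evaluated numerically for a toy binary channel; the channel law, the absence of a cost constraint, and the grid search over p(x|s) below are all illustrative assumptions.

```python
import math

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mi(joint):
    """I(A;B) in bits from a joint pmf {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

# Assumed toy channel: Y = X xor S xor N_b, Z = X xor S xor N_e,
# with N_b ~ Bern(0.10), N_e ~ Bern(0.25), and S ~ Bern(1/2).
PB, PE = 0.10, 0.25

def rates(a, b):
    """For p(X=1|S=0) = a and p(X=1|S=1) = b, return (I(S;Z), I(X,S;Y))."""
    sz, xsy = {}, {}
    for s in (0, 1):
        for x in (0, 1):
            p_x = (a if x else 1 - a) if s == 0 else (b if x else 1 - b)
            p_sx = 0.5 * p_x
            w = x ^ s
            for noise, p_n in ((0, 1 - PB), (1, PB)):     # Bob's output
                key = ((x, s), w ^ noise)
                xsy[key] = xsy.get(key, 0.0) + p_sx * p_n
            for noise, p_n in ((0, 1 - PE), (1, PE)):     # Eve's output
                key = (s, w ^ noise)
                sz[key] = sz.get(key, 0.0) + p_sx * p_n
    return mi(sz), mi(xsy)

grid = [i / 50 for i in range(51)]
masking_corner = min(rates(a, b)[0] for a in grid for b in grid)        # min I(S;Z)
amplification_corner = max(rates(a, b)[1] for a in grid for b in grid)  # max I(X,S;Y)
Ra_extreme = min(1.0, amplification_corner)  # R_a = min{H(S), max I(X,S;Y)}; H(S) = 1 bit
```

In this toy model the masking corner is R_l = 0 (choose X = S, so Eve's output is pure noise), while the amplification corner gives R_a = 1 − h2(0.10) ≈ 0.53 bits.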
Utilizing the framework given in this paper, the amplification and leakage of such signals can be analyzed.

2) Cognitive radio and relay channels: Another relevant setting where our results are of significant interest is cognitive radio systems [13], [14], [15], [16], [17], [18], [19], [20]. For example, in an overlay cognitive radio scenario [16], the cognitive encoder (Alice) can facilitate the secure communication of the primary signal (the state sequence of the channel) by amplifying the signal at the primary receiver (Bob) while masking it from the eavesdropper (Eve). In such an application, the codebook information of the primary signal may be absent at the cognitive radio, and the formulation we have in this paper would be relevant. More generally, this setting belongs to the user cooperation and relaying architectures that can increase the security of communication systems [21], [22], [23], [24]. In another cognitive radio setup, two cognitive users (Alice and Bob) can utilize the primary signal (the interfering sequence S^n) to share a secret key with each other in the presence of an eavesdropper. (Applications to key sharing from channel states are detailed in the following.)

3) Secret key generation from channel states: In recent years, a bridge between cryptography and information-theoretic security has emerged in the form of channel-state-dependent key generation [25], [26], [27], [28]. In this line of work, a state-dependent channel (such as the wireless channel) is considered, and a function of this state is intended to be a "shared secret" between the legitimate transmitter (Alice) and the legitimate receiver (Bob), while aiming to keep the eavesdropper (Eve) as much in the dark as possible. In the best case, the states seen by Bob and Eve will be completely different (independent). However, the states of the channels are usually dependent.
Moreover, for some models (e.g., the dirty paper coding setup [2]), there is a single channel state defining the channel for both Bob and Eve. (For example, Y = X + S + N_Bob at Bob and Z = X + S + N_Eve at Eve, as considered in [29]. See also the discussion above on cognitive radio channels.) For such scenarios, as long as there is a non-trivial difference between the amplification and leakage rates, the state knowledge can be used to develop shared keys and enable cryptographic algorithms. For instance, utilizing the coding schemes proposed here, Bob can have nR_a bits of information regarding the channel state, of which at most nR_l bits are leaked to Eve. Then, privacy amplification [30] can be utilized to distill secret keys. That is, the methods provided in this paper can be utilized for the "advantage distillation" phase of a key agreement protocol [30]. In this form, the problem at hand is closely related to the generation of secrecy using sources and channels. Here, not only is the source in the model the channel state (different from the models studied in [31], [32]), but it is also non-trivially combined with the encoded signals of Alice (via the state-dependent channel) to produce the observations at Bob and Eve. Hence, our formulation can be closely associated with the problem of secret key agreement over state-dependent channels [33], [29]. In such problems, one is interested in the design of coding strategies that allow an agreement of secure bits between the legitimate users utilizing the state-dependent channel. For example, sending secure bits over the channel will increase the secret key rate [29], [5], [6]. On the other hand, the problem studied in this work, when specialized to the differential amplification measure, is an analysis of the channel state knowledge difference.
This analysis is closely related to the question of how many secret bits can be extracted from a given channel state ([25], [26], [27], [28]). We intend to provide information-theoretic guarantees (achievable rates and upper bounds) on the extent to which such a shared secret can be realized for our system model.^1 Once obtained, this shared secret can be employed to seed numerous symmetric-key cryptosystems [34], [35].

4) Watermarking and channel estimation: Consider a host image S utilized in a watermarking scenario, where the encoder modifies the image in order to amplify it at receiver Y and mask it from receiver Z. This model is similar to channel estimation scenarios in wireless communications. For example, a pilot signal can be constructed at the transmitter such that it not only facilitates the fading gain estimation at the receiver but also hides the channel state from the eavesdroppers as much as possible. For instance, while communicating to a base station for channel state estimation, a mobile user may want to hide this information from external nodes in order to preserve the privacy of her location.

C. Summary of Results and Organization

In this paper, we aim at developing an understanding of the trade-off between amplification and masking rates in a state-dependent broadcast channel through achievable regions and outer bounds, and at characterizing special cases where they match. The main results of this paper can be summarized as follows.

1) Achievable Regions: The main achievability argument of the paper utilizes the transmission of a state-dependent message over the state-dependent channel.
To facilitate this, we construct a (Gel'fand-Pinsker) codebook, wherein the corresponding codeword (denoted by U^n) carries this refinement information in such a way that reliable communication can be achieved over the state-dependent channel. Subsequently, we utilize this refinement information within a Wyner-Ziv coding scheme to derive expressions for the achievable amplification rates. In particular, we show that, even though the side information is not generated in an i.i.d. fashion, the Wyner-Ziv approach can be used to facilitate the amplification process. The leakage rates are then determined by deriving single-letter bounds on (1/n) I(S^n; Z^n), and the achievable regions are established over the input probability distributions p(u, x | s). The bounds show that the rate of the refinement message increases not only the amplification rate R_a but also the leakage rate R_l, thereby establishing a trade-off for implementation.

2) Secure Refinement: We show that it is possible to extend the proposed region by transmitting secure refinement information when Bob observes a "stronger" channel than Eve. In precise terms, this corresponds to instances of p(u, x | s) satisfying I(U; Y) ≥ I(U; Z). Note that a channel is said to be less noisy if I(U; Y) ≥ I(U; Z) for all input probability distributions [36], [37].

^1 We note that the differential amplification capacity is related to the secret key rate that can be achieved utilizing the corresponding channel state. For example, consider Y = S_1, Z = S_2, where S_1 and S_2 are uniform bits. Here, C_d = 0 when the state is defined as S = [S_1, S_2], and C_d = 1 when the state is taken as S_1, implying that 1 bit of secret key rate can be supported.
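The footnote's toy example can be checked directly: when the channel outputs ignore the input (Y = S_1, Z = S_2 with S_1, S_2 independent uniform bits), the per-letter mutual informations, and hence C_d, are fixed by the choice of what counts as the state. The small numerical check below evaluates exactly those single-letter quantities.

```python
import math

def mi(joint):
    """I(A;B) in bits from a joint pmf {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

pairs = [(s1, s2) for s1 in (0, 1) for s2 in (0, 1)]  # (S1, S2) uniform bits

# State defined as S = (S1, S2): each receiver learns one of its bits.
I_SY = mi({((s1, s2), s1): 0.25 for s1, s2 in pairs})  # Y = S1 reveals 1 bit of S
I_SZ = mi({((s1, s2), s2): 0.25 for s1, s2 in pairs})  # Z = S2 reveals 1 bit of S
Cd_joint_state = I_SY - I_SZ  # 0: no knowledge advantage for Bob

# State taken as S = S1 only: Eve's output is independent of it.
I_S1Y = mi({(s1, s1): 0.5 for s1 in (0, 1)})      # 1 bit
I_S1Z = mi({(s1, s2): 0.25 for s1, s2 in pairs})  # 0 bits
Cd_s1_state = I_S1Y - I_S1Z  # 1: one secret bit per channel use can be supported
```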
We find that utilizing the notion of secure refinement is critical for such channels, and we show that the leakage due to the transmission of the message can be minimized by securing the message. In the process of establishing these results, we also develop an alternate proof of secure message transmission over state-dependent wiretap channels.

3) Special Classes of Channels: Our outer bound arguments are based on upper bounding (1/n) I(S^n; Y^n) and lower bounding (1/n) I(S^n; Z^n). The achievable schemes and outer bounds presented are used to establish optimality results for a class of channels. In particular, we show that the proposed scheme achieves the optimal differential amplification capacity (i.e., the maximum value of R_a − R_l over the set of achievable (R_a, R_l) pairs) for the reversely degraded discrete memoryless channel (DMC), the degraded binary channel, and the degraded Gaussian channel.

4) Gaussian Channels: We characterize the corner points of the region for the degraded Gaussian channel. In this scenario, we further bound the gap between the achievable and converse regions, and show the following. Let us denote the message capacity of Bob's channel as C_b = (1/2) log(1 + SNR_b) and that of Eve's channel as C_e = (1/2) log(1 + SNR_e), where SNR_b (SNR_e) is the signal-to-noise ratio of Bob (respectively, Eve). Then, for any given leakage rate R_l, we show that the gap between the upper and lower bounds on the amplification rate R_a is bounded by C_b. Similarly, for any given amplification rate R_a, we show that the achievable leakage rate is within C_e of the lower bound on R_l. In particular, the corresponding gaps are within half a bit when SNR_b ≤ 1 and SNR_e ≤ 1, respectively.

The rest of the paper is organized as follows. Section II details the proposed coding schemes and presents the corresponding achievable regions.
Section III provides the outer bounds on the trade-off region. Section IV includes optimality discussions and numerical results for special classes of DMCs, including the reversely degraded channel, the modulo-additive binary channel model, and the memory with defective cells model. The Gaussian channel model is considered in Section V along with the corresponding optimality results. Finally, we conclude the paper in Section VI. Some of the proofs are collected in appendices to improve the flow of the paper.

II. ACHIEVABLE REGIONS

We consider transmitting state-dependent information over the state-dependent channel. This refinement information can be utilized at Bob in resolving some ambiguity regarding the channel state. We divide the discussion into two parts, each providing a different scheme for the transmission of this refinement information.

A. State-Enhanced Messaging: Refinement

In order to facilitate the transmission of information regarding the state sequence, we consider an encoder that transmits a state-dependent message over the state-dependent channel. Here, by employing Gel'fand-Pinsker coding [1], a rate of I(U; Y) − I(U; S) can be reliably communicated over the state-dependent channel. Utilizing this communication rate, Alice can provide refinement information to Bob. In particular, we consider a Wyner-Ziv coding [38] approach to provide such refinement information (as discussed in [8]). Here, the side information at the receiver is (U^n, Y^n), consisting of the Gel'fand-Pinsker codeword U^n and the observed sequence from the channel Y^n. For this side information scenario, we utilize the Gel'fand-Pinsker information rate as the bin index of a covering codeword V^n. We note that, in the original setup of source coding with side information (a.k.a. the Wyner-Ziv model), the side information is generated i.i.d.
with the source sequence. For the scenario considered here, the side information is not in this form. In the following, we show that this issue can be resolved, and that the Wyner-Ziv coding scheme can be utilized to increase the amplification rate. On the other hand, this transmission scheme may leak some state information to Eve. Accordingly, we obtain appropriate bounds on the leakage rate that depend on the Gel'fand-Pinsker coding rate used for the refinement. The corresponding achievable region is given by the following.

Theorem 1 (Refinement): Let R_1 be the closure of the union of all (R_a, R_l) pairs satisfying

R_a ≤ I(S; Y, U) + R_r,
R_l ≥ min{ I(U, S; Z), I(S; Z, U) + R_r },
R_r ≤ min{ I(U; Y) − I(U; S), H(S | Y, U) },

over all distributions p(u, x | s) satisfying I(U; Y) ≥ I(U; S). Then, R_1 ⊆ C.

1) Proof of Theorem 1: We provide the main steps of the coding argument here and relegate some of the details to the appendices.

Codebook generation: Fix a p(u | s) and a p(v | s). (The requirements on these distributions will be specified later.) Randomly and independently generate 2^{nR_v} sequences V^n(l), l ∈ [1 : 2^{nR_v}], each according to ∏_{i=1}^n p_V(v_i). Partition the indices l into equal-size subsets referred to as bins, B(b) = [(b − 1) 2^{n(R_v − R_r)} + 1 : b 2^{n(R_v − R_r)}], b ∈ [1 : 2^{nR_r}]. (This is the codebook used for Wyner-Ziv coding [38], [39].) For each m ∈ [1 : 2^{nR_m}], generate 2^{n(R_u − R_m)} sequences U^n randomly and independently according to ∏_{i=1}^n p_U(u_i). Index these sequences as U^n(m, k) with k ∈ [1 : 2^{n(R_u − R_m)}]. (This is the codebook used for Gel'fand-Pinsker coding [1], [39].)

Encoding: Here, the encoder sets R_m = R_r and m = b, and transmits the message b of rate R_r with Gel'fand-Pinsker coding in order to perform refinement via Wyner-Ziv coding.
Given s^n, the encoder finds an index l such that (s^n, v^n(l)) ∈ T_ε'^(n). If there exists more than one such index, it selects one uniformly at random among these. If there exists no such index, it selects one uniformly at random from [1 : 2^{nR_v}]. From l, the encoder determines the bin index b in the V^n codebook such that l ∈ B(b). Then, it sets m = b and finds an index k such that (s^n, u^n(m, k)) ∈ T_ε'^(n). If no such (covering) index k exists, or if there is more than one, the encoder picks one uniformly at random. The encoder then transmits x^n generated i.i.d. according to ∏_{i=1}^n p_{X|U,S}(x_i | u_i(m, k), s_i).

Amplification rate analysis: In the following, we show that the decoder can obtain the U^n codeword using Gel'fand-Pinsker decoding, and then the V^n codeword by utilizing the side information (U^n, Y^n). (This is the state amplification mechanism discussed in [8], which we detail here.) We then provide a derivation of the state amplification rate utilizing this decoding mechanism.

Let ε > ε'' > ε'. Upon receiving y^n, the decoder declares that m̂ ∈ [1 : 2^{nR_r}] and k̂ ∈ [1 : 2^{n(R_u − R_r)}] were chosen at the encoder if these are the unique indices such that (u^n(m̂, k̂), y^n) ∈ T_ε^(n); otherwise it declares an error. (We remark that, compared to the Gel'fand-Pinsker setup [1], [39], where only the message m has to be uniquely decoded, we require the decoder to obtain the codeword u^n chosen at the encoder. This will be utilized in the decoding error analysis.) Then, the decoder finds the unique index l̂ ∈ B(m̂) such that (v^n(l̂), u^n(m̂, k̂), y^n) ∈ T_ε^(n); otherwise, it declares an error.
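The index bookkeeping of the two codebooks above (covering index l, bin/message index b = m, and the decoder's within-bin search for l̂) can be sketched as pure arithmetic; the numeric rates below are assumptions chosen so that the exponents are integers.

```python
# Assumed toy parameters: n*Rv, n*Rr, and n*(Rv - Rr) are integers here.
n, Rv, Rr = 10, 0.8, 0.3
num_l = 2 ** round(n * Rv)            # 2^{nRv} covering codewords V^n(l)
bin_size = 2 ** round(n * (Rv - Rr))  # |B(b)| = 2^{n(Rv - Rr)} indices per bin
num_bins = 2 ** round(n * Rr)         # 2^{nRr} bins = refinement messages

def bin_index(l):
    """b such that l lies in B(b) = [(b-1)*bin_size + 1 : b*bin_size] (1-indexed)."""
    return (l - 1) // bin_size + 1

def bin_members(b):
    """The decoder searches for l-hat only inside B(b-hat)."""
    return range((b - 1) * bin_size + 1, b * bin_size + 1)

# Every covering index belongs to exactly one of the 2^{nRr} bins.
assert num_bins * bin_size == num_l
assert all(l in bin_members(bin_index(l)) for l in range(1, num_l + 1))
```

Sending only the bin index b (rate R_r) instead of l (rate R_v) is exactly the Wyner-Ziv saving: the side information (U^n, Y^n) lets the decoder disambiguate within the bin.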
Consider the error event E = {(M̂, K̂, L̂) ≠ (M, K, L)}, where (K, L, M) are the indices chosen at the transmitter and (K̂, L̂, M̂) are the indices decoded at the receiver. The decoder makes an error only if one or more of the following events occur:

E_1 = {(U^n(M, k), S^n) ∉ T_ε'^(n) for all k},
E_2 = {(U^n(M, K), Y^n) ∉ T_ε^(n)},
E_3 = {(U^n(m, k), Y^n) ∈ T_ε^(n) for some m ≠ M},
E_4 = {(U^n(M, k), Y^n) ∈ T_ε^(n) for some k ≠ K},
E_5 = {(V^n(l), S^n) ∉ T_ε'^(n) for all l},
E_6 = {(V^n(L), S^n, U^n(M, K), Y^n) ∉ T_ε^(n)},
E_7 = {(V^n(l), U^n(M, K), Y^n) ∈ T_ε^(n) for some l ≠ L, l ∈ B(M)}.

Here, E ⊆ ∪_{j=1}^7 E_j. By the union of events bound, we have

Pr{E} ≤ Pr{E_1} + Pr{E_1^c ∩ E_2} + Pr{E_3} + Pr{E_3^c ∩ E_4} + Pr{E_5} + Pr{E_1^c ∩ E_5^c ∩ E_6} + Pr{E_7}.

We bound each term above in the following.

a) By the covering lemma [39], Pr{E_1} → 0 as n → ∞ if

R_u − R_m > I(U; S) + δ(ε').   (4)

b) E_1^c implies that (U^n(M, K), S^n) ∈ T_ε'^(n), which implies that (U^n(M, K), S^n, X^n) ∈ T_ε''^(n) for some ε'' > ε', due to the i.i.d. generation of x_i from (s_i, u_i) and the conditional typicality lemma [39]. Similarly, as Y^n is generated i.i.d. from (s_i, u_i, x_i) through (s_i, x_i), we have Pr{E_1^c ∩ E_2} → 0 as n → ∞ for some ε > ε'', again by the conditional typicality lemma [39].

c) Each U^n(m, k) with m ≠ M is distributed i.i.d. according to ∏_{i=1}^n p_U(u_i) and is independent of Y^n. By the packing lemma [39], we have Pr{E_3} → 0 as n → ∞ if

R_u < I(U; Y) − δ(ε).   (5)

The three error analyses above are the same arguments used for decoding the message index in Gel'fand-Pinsker coding [1], [39].

d) The analysis showing that Pr{E_3^c ∩ E_4} → 0 as n → ∞ is detailed in Appendix A.
We use the arguments given in [40] in order to show this. The original problem studied in [40] does not involve a state-dependent channel, but its coding scheme constructs channel inputs via x(u, s), which can be viewed as a state-dependent channel. (The argument we have here - that the Gel'fand-Pinsker codeword decoded at the decoder is the same as the one chosen at the encoder - also appears in [41], which states that this observation follows from the arguments in [40], [42].)

e) By the covering lemma [39], Pr{E_5} → 0 as n → ∞ if

R_v > I(S; V) + δ(ε').   (6)

f) For analyzing Pr{E_1^c ∩ E_5^c ∩ E_6}, we note that one may not utilize the conditional typicality lemma as done in the proof of Wyner-Ziv coding (see, e.g., [39, Section 11.3.1]), because the side information here, (U^n, Y^n), is not generated i.i.d. through p_{U,Y|S}(u_i, y_i | s_i). Therefore, one cannot go from the joint typicality of (V^n(L), S^n) to the joint typicality of (V^n(L), S^n, U^n(M, K), Y^n). Instead, we consider the following approach. E_5^c implies that

(V^n(L), S^n) ∈ T_ε'^(n),   (7)

and E_1^c implies that (U^n(M, K), S^n) ∈ T_ε'^(n). Now, consider a pair (v^n, s^n) ∈ T_ε'^(n). In addition, we have Pr{U^n(M, K) = u^n | S^n = s^n, V^n(L) = v^n} = Pr{U^n(M, K) = u^n | S^n = s^n}, such that the pmf p(u^n | s^n) satisfies the following two conditions. The first condition is that

lim_{n→∞} Pr{(s^n, U^n(M, K)) ∈ T_ε'^(n)(S, U)} = 1,   (8)

which is due to the fact that the probability of the event E_1 vanishes. The second condition is that, for every u^n ∈ T_ε'^(n)(U | s^n) and for sufficiently large n,

2^{−n(H(U|S) + δ(ε'))} ≤ p(u^n | s^n) ≤ 2^{−n(H(U|S) − δ(ε'))}   (9)

for some δ(ε') → 0 as ε' → 0. This second assertion, i.e., (9), follows from [39, Lemma 12.3].
Now, from (7), (8), (9), and the fact that V → S → U forms a Markov chain, we obtain from the Markov lemma [39] that, for some ε'' > ε',

lim_{n→∞} Pr{(V^n(L), S^n, U^n(M, K)) ∈ T_ε''^(n) | V^n(L) = v^n, S^n = s^n} = 1.

Denoting Ẽ_6 = {(V^n(L), S^n, U^n(M, K)) ∉ T_ε''^(n)}, the analysis above implies that Pr{E_1^c ∩ E_5^c ∩ Ẽ_6} → 0 as n → ∞. (We note that an analysis similar to the one above is given for the Berger-Tung inner bound in [39]. The difference here is that the bin index of the covering codeword V^n determines the index M of U^n. Still, given a realization of v^n, and hence of m, we have covering sequences, i.e., the set {U^n(m, k)}_{k=1}^{2^{n(R_u − R_r)}}, for s^n, from which the argument follows.) Then, by the conditional typicality lemma [39], for some ε > ε'', we have (V^n(L), S^n, U^n(M, K), Y^n) ∈ T_ε^(n), as (V^n(L), S^n, U^n(M, K)) ∈ T_ε''^(n) (due to having E_1^c ∩ E_5^c ∩ Ẽ_6^c with high probability), and Y^n | (V^n(L) = v^n, S^n = s^n, U^n(M, K) = u^n) ∼ ∏_{i=1}^n p_{Y|U,S}(y_i | u_i, s_i). This implies that Pr{E_1^c ∩ E_5^c ∩ E_6} → 0 as n → ∞.

g) Finally, the analysis of Pr{E_7} follows as in the proof of Wyner-Ziv coding (see, e.g., [39, Section 11.3.1]). In particular, considering the event Ẽ_7 = {(V^n(l), U^n(M, K), Y^n) ∈ T_ε^(n) for some l ∈ B(1), M ≠ 1}, [39, Lemma 11.1] shows that Pr{E_7} ≤ Pr{Ẽ_7}. Then, as each V^n(l) with l ∈ B(1) is generated according to ∏_{i=1}^n p_V(v_i) and is independent of (U^n(M, K), Y^n), the packing lemma [39] gives Pr{Ẽ_7} → 0 as n → ∞ if

R_v − R_r < I(V; U, Y) − δ(ε).   (10)

Therefore, Pr{E_7} → 0 as n → ∞ if (10) holds. The analysis above implies that Pr{E} → 0 as n → ∞ if (4), (5), (6), and (10) hold.
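As a numerical sanity check of constraints (4), (5), (6), and (10), one can evaluate the relevant information quantities for an assumed binary test channel; the distributions, the noiseless Bob channel, and the slack delta below are all illustrative choices, not from the paper (in particular, the slack is split so that (10) holds strictly).

```python
import math

def mi(joint):
    """I(A;B) in bits from a joint pmf {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

# Assumed test channels: S ~ Bern(1/2), U = S xor A with A ~ Bern(0.2),
# V = S xor B with B ~ Bern(0.1), X = U, and a noiseless Bob channel Y = X.
PA, PB = 0.2, 0.1
us, uy, vs, vuy = {}, {}, {}, {}
for s in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            p = 0.5 * (PA if a else 1 - PA) * (PB if b else 1 - PB)
            u, v = s ^ a, s ^ b
            y = u  # toy channel: Bob observes X = U noiselessly
            for d, key in ((us, (u, s)), (uy, (u, y)), (vs, (v, s)), (vuy, (v, (u, y)))):
                d[key] = d.get(key, 0.0) + p

I_US, I_UY, I_VS, I_VUY = mi(us), mi(uy), mi(vs), mi(vuy)
delta = 1e-3
Rv = I_VS + delta        # satisfies (6): Rv > I(S;V)
Rr = Rv - I_VUY + delta  # so Rv - Rr = I(V;U,Y) - delta, keeping (10) strict
Ru = Rr + I_US + delta   # satisfies (4) with Rm = Rr: Ru - Rm > I(U;S)
assert Ru < I_UY         # (5): the refined message load fits Bob's decoding rate
assert Rv - Rr < I_VUY   # (10): within-bin disambiguation succeeds
assert Rr < I_UY - I_US  # refinement rate fits the Gel'fand-Pinsker rate
```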
Here, we set R_m = R_r, R_u − R_r = I(U; S) + δ_1, and R_v = I(V; S) + δ_2, for some arbitrarily small δ_1 and δ_2. This satisfies (4) and (6). Furthermore, we set

R_r = I(V; S) − I(V; U, Y) + δ_2 = I(V; S | U, Y) + δ_2,

which is the rate required to describe V^n to the decoder. This satisfies (10). Finally, for a given p(u | s), we choose some p(v | s) to support the transmission of the refinement message b = m with rate R_r = R_m < I(U; Y) − I(U; S) − δ_1. This satisfies (5), as R_u = R_r + (I(U; S) + δ_1) < I(U; Y). We note that R_r ≤ H(S | U, Y) + δ_2, as I(V; S | U, Y) ≤ H(S | U, Y) above. Therefore, for a given p(u | s), the proposed coding scheme supports any state refinement rate R_r satisfying

R_r ≤ min{ I(U; Y) − I(U; S), H(S | U, Y) }.

Utilizing the analysis above, we now detail the derivation of the amplification rate. We start by defining an error indicator random variable E. We set E = 1 if a decoding error (E) occurs or the state sequence observed at the encoder (S^n) turns out to be non-typical, and E = 0 otherwise. We continue with the following set of relations. (We drop the indices of the codewords.)
$$\begin{aligned}
\frac{1}{n} I(S^n;Y^n) &\overset{(a)}{\geq} \frac{1}{n} I(S^n;Y^n \mid E) - \frac{1}{n} \\
&= \frac{1}{n} I(S^n;Y^n \mid E=0)\Pr\{E=0\} + \frac{1}{n} I(S^n;Y^n \mid E=1)\Pr\{E=1\} - \frac{1}{n} \\
&\overset{(b)}{\geq} \frac{1}{n} I(S^n;Y^n \mid E=0)\Pr\{E=0\} - \frac{1}{n} \\
&\overset{(c)}{=} \frac{1}{n} I(S^n; Y^n, U^n, V^n \mid E=0)(1 - \Pr\{E=1\}) - \frac{1}{n} \\
&\overset{(d)}{\geq} \frac{1}{n} I(S^n; Y^n, U^n, V^n \mid E=0) - (H(S)+\epsilon_1)\Pr\{E=1\} - \frac{1}{n} \\
&\overset{(e)}{\geq} (H(S) - \epsilon_1) - \frac{1}{n} H(S^n \mid Y^n, U^n, V^n, E=0) - \epsilon_2 \\
&\overset{(f)}{\geq} (H(S) - \epsilon_1) - H(S \mid Y,U,V) - \epsilon_2 \\
&\overset{(g)}{=} I(S;Y,U) + R_r - (\epsilon_1 + \epsilon_2 + \delta_2),
\end{aligned}$$
where (a) follows by the indicator event conditioning lemma given in Appendix B; (b) follows as $I(S^n;Y^n \mid E=1) \geq 0$; (c) is due to the definition of $E$, wherein $E=0$ implies $\mathcal{E}^c$, i.e., the decodability of the codewords $(U^n(M,K), V^n(L))$ from $Y^n$; (d) is by $\frac{1}{n} I(S^n; Y^n, U^n, V^n \mid E=0) \leq \frac{1}{n} H(S^n \mid E=0) \leq H(S) + \epsilon_1$ for some arbitrarily small $\epsilon_1$, as $E=0$ implies that $S^n$ is typical (and $S^n$ is generated i.i.d. here); (e) is by taking $\epsilon_2 = (H(S)+\epsilon_1)\Pr\{E=1\} + \frac{1}{n}$, which can be made arbitrarily small as $n \to \infty$ (as $\Pr\{E=1\} \to 0$ as $n \to \infty$), and lower bounding $H(S^n \mid E=0) \geq n(H(S) - \epsilon_1)$, which again follows as $E=0$ implies that $S^n$ is typical (and $S^n$ is generated i.i.d. here); (f) follows as $H(S^n \mid Y^n, U^n, V^n, E=0) = \sum_{i=1}^n H(S_i \mid S_1^{i-1}, Y^n, U^n, V^n, E=0) \leq \sum_{i=1}^n H(S_i \mid Y_i, U_i, V_i) = nH(S|Y,U,V)$; and (g) is by $H(S) - H(S|Y,U,V) = I(S;Y,U,V) = I(S;Y,U) + I(S;V|Y,U) = I(S;Y,U) + R_r - \delta_2$ (this follows as we set $R_r = I(S;V|Y,U) + \delta_2$ in the coding scheme). From the last expression, we identify that any $R_a \leq I(S;Y,U) + R_r$ is achievable.

Leakage rate analysis: We first show the achievability of $R_l \geq I(U,S;Z)$ with the following.
$$\begin{aligned}
\frac{1}{n} I(S^n;Z^n) &\leq \frac{1}{n} I(U^n, S^n; Z^n) = \frac{1}{n} H(Z^n) - \frac{1}{n} H(Z^n \mid U^n, S^n) \\
&= \frac{1}{n}\sum_{i=1}^n H(Z_i \mid Z_1^{i-1}) - \frac{1}{n}\sum_{i=1}^n H(Z_i \mid Z_1^{i-1}, U^n, S^n) \\
&\overset{(a)}{\leq} \frac{1}{n}\sum_{i=1}^n H(Z_i) - \frac{1}{n}\sum_{i=1}^n H(Z_i \mid U_i, S_i) \overset{(b)}{=} I(U,S;Z),
\end{aligned}$$
where (a) follows as $H(Z_i \mid Z_1^{i-1}) \leq H(Z_i)$, due to the fact that conditioning does not increase entropy, and $H(Z_i \mid Z_1^{i-1}, U^n, S^n) = H(Z_i \mid U_i, S_i)$, as $(Z_1^{i-1}, U_1^{i-1}, S_1^{i-1}, U_{i+1}^n, S_{i+1}^n) \to (U_i, S_i) \to Z_i$ forms a Markov chain (this is due to the i.i.d. generation of $x_i$ from $(u_i, s_i)$ and the i.i.d. generation of $z_i$ from $(x_i, s_i)$); and (b) follows as $\frac{1}{n}\sum_{i=1}^n I(U_i, S_i; Z_i) = I(U,S;Z)$. (We note that the single-letterization argument for $H(Z^n \mid U^n, S^n)$ above is also given in (26) of [9].)

Next, we focus on the achievability of $R_l \geq I(S;Z,U) + R_r$. We have the following.
$$\begin{aligned}
\frac{1}{n} I(S^n;Z^n) &\leq \frac{1}{n} I(U^n(M,K), S^n; Z^n) \\
&\overset{(a)}{=} \frac{1}{n} I(U^n(M,K), M; Z^n) + \frac{1}{n} I(S^n; Z^n \mid U^n(M,K)) \\
&\overset{(b)}{\leq} \frac{1}{n} H(M) + \frac{1}{n} H(U^n(M,K) \mid M) + \frac{1}{n} I(S^n; Z^n \mid U^n(M,K)) \\
&= \frac{1}{n} H(M) + \frac{1}{n} H(U^n(M,K) \mid M) + \frac{1}{n} H(Z^n \mid U^n(M,K)) - \frac{1}{n} H(Z^n \mid U^n(M,K), S^n) \\
&\overset{(c)}{\leq} R_r + I(U;S) + \delta_1 + H(Z|U) - H(Z|U,S) = I(S;Z,U) + R_r + \delta_1,
\end{aligned}$$
where (a) is by adding $I(M;Z^n \mid U^n(M,K)) = 0$, as $M$ can be determined from $U^n(M,K)$ for a given codebook; (b) follows as $I(U^n(M,K), M; Z^n) \leq H(U^n(M,K), M) = H(M) + H(U^n(M,K) \mid M)$; and (c) is due to having $H(M) \leq nR_r$ (a random variable with $2^{nR_r}$ values has entropy at most $nR_r$), $H(U^n(M,K) \mid M) \leq n(R_u - R_r) = n(I(U;S) + \delta_1)$ (here, given $M$, there are $2^{n(R_u - R_r)}$ $U^n$ codewords, and the entropy of this set is maximized if the index $K$ has the uniform distribution), $H(Z^n \mid U^n) = \sum_{i=1}^n H(Z_i \mid Z_1^{i-1}, U^n) \leq \sum_{i=1}^n H(Z_i \mid U_i) = nH(Z|U)$, and $H(Z^n \mid U^n, S^n) = \sum_{i=1}^n H(Z_i \mid Z_1^{i-1}, U^n, S^n) = \sum_{i=1}^n H(Z_i \mid U_i, S_i) = nH(Z|U,S)$, where the equality $H(Z_i \mid Z_1^{i-1}, U^n, S^n) = H(Z_i \mid U_i, S_i)$ holds as $(Z_1^{i-1}, U_1^{i-1}, S_1^{i-1}, U_{i+1}^n, S_{i+1}^n) \to (U_i, S_i) \to Z_i$ forms a Markov chain (as detailed in the previous paragraph). (We note that the single-letterization arguments for $H(Z^n \mid U^n)$ and $H(Z^n \mid U^n, S^n)$ above are also given in (24) and (26) of [9].) This concludes the proof of Theorem 1, where we take the union of the achievable pairs over all $p(u|s) \times p(x|u,s) = p(u,x|s)$ distributions.

2) Message transmission: We note that it is possible to allocate a part of the Gel'fand-Pinsker coding rate in the scheme proposed above in order to transmit messages. In particular, if $(R_a, R_l, R_r)$ satisfies the inequalities given in Theorem 1, then $(R_a, R_l, R_m)$ is achievable with a message rate of $R_m = I(U;Y) - I(U;S) - R_r$. That is, the Gel'fand-Pinsker coding rate given by $I(U;Y) - I(U;S)$ can be divided into the refinement rate $R_r$ and the message rate $R_m$.

3) State sequence covering: In the region given by Theorem 1, increasing $R_r$ will not only increase the amplification rate but will also increase the leakage rate. Therefore, for some scenarios, implementing only a covering scheme might be advantageous. By choosing an arbitrarily small refinement rate $R_r$ in Theorem 1, the following region can be achieved.

Corollary 2: [Covering] Let $\mathcal{R}_2$ be the closure of the union of all $(R_a, R_l)$ pairs satisfying
$$\begin{aligned} R_a &\leq I(S;Y,U) \\ R_l &\geq \min\{ I(U,S;Z),\ I(S;Z,U) \} \end{aligned}$$
over all distributions $p(u,x|s)$ satisfying $I(U;Y) > I(U;S)$. Then, $\mathcal{R}_2 \subseteq \mathcal{C}$.
We note that the refinement rate, $R_r = I(V;S \mid U,Y) + \delta_2$, is set to an arbitrarily small value here, and the codeword $V^n$ serves only as a covering of the state sequence. Further implications of this covering scheme on the amplification-leakage region are discussed in Section IV-A. We note that, by transmission of a covering of the state, the leakage rate is shown to satisfy $R_l \geq \min\{ I(U,S;Z),\ I(S;Z,U) \}$. Remarkably, one can guarantee such a bound even if some state dependent information is transmitted over the channel. In particular, if the channel seen by Bob is stronger than the one seen by Eve, one can send the refinement information securely over the state dependent channel. This approach is detailed in the next section.

B. Secure refinement

Consider all input distributions $p(u,x|s)$ satisfying $I(U;Y) \geq I(U;Z)$. For such distributions, it is possible to send refinement information securely over the channel. This way, the leakage increase due to the refinement index is reduced, as securing the index lowers the corresponding leakage rate at Eve compared to the non-secured case. In the following, we first focus on transmission of secure messages over the state dependent channel, and then detail the proposed secure refinement approach.

1) Secure message transmission over state dependent channels: Consider a transmitter that wants to send a secure message $M$ over the state dependent channel in the presence of an eavesdropper. This problem is studied in [5], and we revisit it here. In particular, we give a codebook construction and provide a lemma that upper bounds $I(M;Z^n)$, the leakage to the eavesdropper.² This result is then utilized in the following part, in showing the proposed secure refinement approach.
² The codebook we provide here is a special case of the one proposed in [5], which considers an extended version for an equivocation rate analysis.

Codebook Generation: We divide the codebook construction into two parts, depending on whether $I(U;Z) > I(U;S)$ or not. If $I(U;Z) > I(U;S)$, generate $2^{nR_u}$ codewords $U^n(M,T,K)$, where $M \in [1:2^{nR_m}]$, $K \in [1:2^{n(I(U;S)+\delta)}]$, and $T \in [1:2^{n(R_u - R_m - I(U;S) - \delta)}]$. Here, $T$ is randomly selected. We set $R_m = I(U;Y) - I(U;Z)$ and $R_u = I(U;Y) - \delta$, which imply $H(T) = n(I(U;Z) - I(U;S) - 2\delta)$. If $I(U;Z) \leq I(U;S)$, generate $2^{nR_u}$ codewords $U^n(M,K)$, where $M \in [1:2^{nR_m}]$ and $K \in [1:2^{n(I(U;S)+\delta)}]$. We set $R_m = I(U;Y) - I(U;S) - 2\delta$ and $R_u = I(U;Y) - \delta$. In both cases, $M$ is the secure message index, and $K$ is used as a covering index (similar to the previous section). The above codebook construction is the same as that of the Gel'fand-Pinsker codebook (described in the previous section), with the only difference being that, for probability distributions satisfying $I(U;Z) > I(U;S)$, we select a part of the message at random (represented as $T$ above). This enables us to argue that the remaining part of the Gel'fand-Pinsker message (represented as $M$ above) is secure against the eavesdropper (in terms of the leakage rate). We have the following.

Lemma 3: For the codebook generation given above, for some $\epsilon \to 0$ as $n \to \infty$,
$$I(M;Z^n) \leq nI(S;Z,U) + H(S^n \mid U^n, Z^n) - H(S^n \mid M, T) + n\epsilon,$$
where $T$ is a random variable uniformly distributed on the set $[1:2^{n(I(U;Z) - I(U;S) - 2\delta)}]$ for $I(U;Z) > I(U;S)$, and $T = \emptyset$ for $I(U;Z) \leq I(U;S)$.

Proof: See Appendix C.
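As a quick sanity check on the rate bookkeeping above, the following sketch plugs hypothetical numeric values (illustrative assumptions of ours, not values from the paper) into the rate assignments and verifies that the secure message rate, the randomization rate of $T$, and the covering rate together exhaust the codebook rate $R_u$:

```python
# Hypothetical values for the mutual-information quantities (illustrative
# assumptions only), chosen so that I(U;Z) > I(U;S), the randomized regime.
IUY, IUZ, IUS, delta = 1.0, 0.6, 0.3, 0.01

R_m = IUY - IUZ                # secure message rate R_m = I(U;Y) - I(U;Z)
R_u = IUY - delta              # total codebook rate R_u = I(U;Y) - delta
R_t = R_u - R_m - IUS - delta  # rate of the randomized index T

# T carries rate I(U;Z) - I(U;S) - 2*delta, as stated in the construction.
assert abs(R_t - (IUZ - IUS - 2 * delta)) < 1e-12
# Message + randomization + covering rates exhaust R_u.
assert abs(R_m + R_t + (IUS + delta) - R_u) < 1e-12
```

Setting $T = \emptyset$ recovers the other regime, where the message rate itself is $I(U;Y) - I(U;S) - 2\delta$.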
This result is utilized in the next section for the secure refinement approach. Here, we note the following corollary, which gives an alternate proof of security in state dependent wiretap channels.

Corollary 4: For the codebook generation given above, if $M$ is independent of $S^n$, then $\frac{1}{n} I(M;Z^n) \leq \epsilon$, for some $\epsilon \to 0$ as $n \to \infty$.

Proof: See Appendix D.

2) Secure refinement via secure and state dependent message transmission: In the previous section, a refinement approach is proposed where a Gel'fand-Pinsker coded message is utilized to resolve some ambiguity regarding the channel state at Bob. In such an approach, the message rate is utilized as the bin index of a covering ($V^n$) codebook. (See the proof of Theorem 1.) Here, we consider transmitting this refinement message securely to Bob by utilizing the codebook construction given above. As the message rate is modified from $I(U;Y) - I(U;S)$ to the secure message rate $I(U;Y) - \max\{I(U;S), I(U;Z)\}$, this modification results in a refinement rate of
$$R_r \leq \min\{ I(U;Y) - \max\{I(U;S), I(U;Z)\},\ H(S|Y,U) \}.$$
However, the leakage rate due to the refinement is now independent of $R_r$. The corresponding region is given by the following.

Theorem 5: Let $\mathcal{R}_3$ be the closure of the union of all $(R_a, R_l)$ pairs satisfying
$$\begin{aligned}
R_a &\leq I(S;Y,U) + R_r \\
R_l &\geq \min\{ I(U,S;Z),\ I(S;Z,U) \} \\
R_r &\leq \min\{ I(U;Y) - \max\{I(U;S), I(U;Z)\},\ H(S|Y,U) \}
\end{aligned}$$
over all distributions $p(u,x|s)$ satisfying $I(U;Y) \geq I(U;S)$. Then, $\mathcal{R}_3 \subseteq \mathcal{C}$.

Note that the amplification rate that can be obtained with such an approach is lower than in the previous case (Theorem 1), as the message rate satisfies $R_r \leq I(U;Y) - I(U;Z) < I(U;Y) - I(U;S)$ if $I(U;Z) > I(U;S)$.
Therefore, the improvement in the leakage expression compared to Theorem 1 is obtained at the cost of a degradation in the amplification rate.

Proof: We use the codebook generation given above. The only difference compared to the codebook construction utilized in the proof of Theorem 1 is that part of the message (i.e., $T$) is selected at random when $I(U;Z) > I(U;S)$. Therefore, the amplification rate analysis is the same as that given in the proof of Theorem 1, with $R_r$ bounded by $R_r \leq I(U;Y) - \max\{I(U;S), I(U;Z)\}$ instead of $R_r \leq I(U;Y) - I(U;S)$. The proof of $R_l \geq I(U,S;Z)$ follows from the same steps given in the proof of Theorem 1. Here, we show that $R_l \geq I(S;Z,U)$ holds as well. Consider the following.
$$\begin{aligned}
I(S^n;Z^n) &\leq I(M, S^n; Z^n) = I(M;Z^n) + I(S^n;Z^n \mid M) \\
&\overset{(a)}{\leq} [\, nI(S;Z,U) + H(S^n \mid U^n, Z^n) - H(S^n \mid M, T) + n\epsilon \,] + H(S^n \mid M) - H(S^n \mid M, Z^n) \\
&= nI(S;Z,U) + n\epsilon + H(S^n \mid U^n, Z^n) - H(S^n \mid M, Z^n) + H(S^n \mid M) - H(S^n \mid M, T) \\
&\overset{(b)}{\leq} nI(S;Z,U) + n\epsilon + I(S^n; T \mid M) \\
&\overset{(c)}{=} nI(S;Z,U) + n\epsilon,
\end{aligned}$$
where (a) follows from Lemma 3; (b) follows as $H(S^n \mid U^n, Z^n) = H(S^n \mid U^n, M, Z^n) \leq H(S^n \mid M, Z^n)$ and $H(S^n \mid M) - H(S^n \mid M, T) = I(S^n; T \mid M)$; and (c) follows by observing $I(S^n; T \mid M) = 0$: we have $T = \emptyset$ for $I(U;Z) \leq I(U;S)$, and $T$ is an independently generated random variable for $I(U;Z) > I(U;S)$, implying that $H(T|M) = H(T \mid M, S^n) = H(T)$. As $\epsilon$ can be made arbitrarily small, we conclude from the last expression that any $R_l \geq I(S;Z,U)$ is achievable.

III. OUTER BOUNDS

In this section, we provide outer bounds on the achievable amplification-leakage rate region. In particular, we derive regions, denoted by $\mathcal{R}_o$, to which any achievable $(R_a, R_l)$ must belong.
Proposition 6: If $(R_a, R_l)$ is achievable, then $(R_a, R_l) \in \mathcal{R}_o^1$, where $\mathcal{R}_o^1$ is the closure of the union of all $(R_a, R_l)$ pairs satisfying
$$\begin{aligned} R_a &\leq \min\{ H(S),\ I(X,S;Y) \} \\ R_l &\geq I(S;Z,U) \end{aligned}$$
over all $p(u,x|s)$ distributions satisfying $I(U;Z) \geq I(U;S)$.

Proof: See Appendix E.

If the channel is degraded, wherein $p(y,z|x,s) = p(y|x,s)p(z|y)$, the following outer bound can be obtained.

Proposition 7: If the channel satisfies $p(y,z|x,s) = p(y|x,s)p(z|y)$ and $(R_a, R_l)$ is achievable, then $(R_a, R_l) \in \mathcal{R}_o^2$, where $\mathcal{R}_o^2$ is the closure of the union of all $(R_a, R_l)$ pairs satisfying
$$\begin{aligned} R_a &\leq \min\{ H(S),\ I(X,S;Y) \} \\ R_l &\geq I(S;Z,U) \\ R_a - R_l &\leq I(X,S;Y \mid Z) \end{aligned}$$
over all $p(u,x|s)$ distributions satisfying $I(U;Y) \geq I(U;S)$.

Proof: See Appendix F.

These outer bound regions are used in the following to establish special case results.

IV. SPECIAL DISCRETE MEMORYLESS CHANNEL MODELS

A. Reversely degraded DMCs

We say that the channel is reversely degraded if $(X,S) \to Z \to Y$ forms a Markov chain. Note that this corresponds to Eve seeing a stronger channel than Bob. Therefore, reversely degraded scenarios imply $C_d \leq 0$, meaning that the state knowledge at Bob is no higher than that at Eve. We have the following result for this set of channels.

Theorem 8: The optimal differential amplification rate for reversely degraded DMCs is given by
$$C_d = \max_{p(x|s)} I(S;Y) - I(S;Z).$$

Proof: Achievability of the stated difference follows from Theorem 1 by substituting $U = \emptyset$. We provide the converse in Appendix G. Note that coding cannot improve this difference, as the channel is reversely degraded.
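To illustrate numerically why coding cannot make the difference positive, the sketch below checks the data-processing consequence of reverse degradedness, $I(S;Y) \leq I(S;Z)$, on a small binary example; the channel parameters and the input choice $X = 0$ are illustrative assumptions of ours:

```python
import itertools
import math

# Reversely degraded binary example: Z = X xor S xor N1 (Eve), Y = Z xor N2
# (Bob), so (X,S) -> Z -> Y. All parameter values are illustrative.
ps, p1, p2 = 0.3, 0.1, 0.2  # Pr{S=1}, crossover of N1, crossover of N2

def mi(joint):
    """Mutual information (bits) of a joint pmf given as {(a, b): prob}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

jsy, jsz = {}, {}
for s, n1, n2 in itertools.product((0, 1), repeat=3):
    p = ((ps if s else 1 - ps) * (p1 if n1 else 1 - p1)
         * (p2 if n2 else 1 - p2))
    x = 0              # one particular (uncoded) input choice
    z = x ^ s ^ n1     # Eve's observation
    y = z ^ n2         # Bob's further-degraded observation
    jsz[(s, z)] = jsz.get((s, z), 0.0) + p
    jsy[(s, y)] = jsy.get((s, y), 0.0) + p

# Data processing over S -> Z -> Y: Bob never learns more about S than Eve.
assert mi(jsy) <= mi(jsz) + 1e-12
```

The same inequality holds for every $p(x|s)$, which is the content of the converse direction of Theorem 8.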
Thus, coding might help to increase $R_a$ at the expense of possibly decreasing $R_a - R_l$ in the reversely degraded scenario. This is also the case for the covering scheme given by Corollary 2. That is, $R_a$ vs. $R_a - R_l$ can be traded off using different input distributions. (The $U = \emptyset$ case corresponds to the maximum $R_a - R_l$, and achieves $C_d$.)

B. Modulo additive binary model

Consider the channels given by
$$Y_i = X_i \oplus S_i \oplus N_i, \qquad Z_i = X_i \oplus S_i \oplus \tilde{N}_i,$$
where the state and noise sequences are generated i.i.d. as $S_i \sim \mathrm{Bern}(p_s)$, $N_i \sim \mathrm{Bern}(p_n)$, $\tilde{N}_i \sim \mathrm{Bern}(p_{n_z})$. (All $p_k$ satisfy $p_k \in [0, 0.5]$ for $k \in \{s, n, n_z\}$.) In this section, we use the following notation for the binary convolution: $p \otimes q \triangleq p(1-q) + q(1-p)$.

1) State cancellation scheme: To cancel the state from the channel, we send $X_i = U_i \oplus S_i$, where $U_i \sim \mathrm{Bern}(p_u)$ and the codewords $U^n$ carry a description of the state sequence $S^n$. This way, we achieve the following inner bound.

Corollary 9: The state cancellation scheme, which sends a $\mathrm{Bern}(p_u)$ distributed signal XORed with the state sequence at each time instant, achieves the set of $(R_a, R_l)$ pairs denoted by the region $\mathcal{R}_{SC}$, where
$$\mathcal{R}_{SC} = \mathrm{Convex\ Hull}\Big( \bigcup_{p_u \in [0,0.5],\ p_u \otimes p_s \leq 0.5} (R_a(p_u), R_l(p_u)) \Big) \subseteq \mathcal{C},$$
with
$$\begin{aligned} R_a(p_u) &\leq \min\{ H(p_s),\ H(p_u \otimes p_n) - H(p_n) \} \\ R_l(p_u) &\geq H(p_u \otimes p_{n_z}) - H(p_{n_z}). \end{aligned}$$

Proof: Achievability follows from Corollary 2.

2) Optimal differential amplification rate:

Corollary 10: If $p_n \leq p_{n_z}$ and $H(p_s) \geq 1 - H(p_n)$ for the binary model, the optimal amplification and leakage rate difference is given by $C_d = H(p_{n_z}) - H(p_n)$.

Proof: From Proposition 7, we obtain the following. If $p_n \leq p_{n_z}$, any given $(R_a, R_l) \in \mathcal{C}$ satisfies
$$R_a - R_l \leq H(p_{n_z}) - H(p_n) + \max_{p(x|s)} \{ H(X \oplus S \oplus N) - H(X \oplus S \oplus N_z) \}.$$
Note that this upper bound can be evaluated by observing
$$\begin{aligned}
\max_{p(x|s)} \{ H(X \oplus S \oplus N) - H(X \oplus S \oplus N_z) \} &= \max_{p(x|s)} \{ H(X \oplus S \oplus N) - H(X \oplus S \oplus N \oplus N_z^*) \} \\
&\leq \max_{p(x|s)} \{ H(X \oplus S \oplus N) - H(X \oplus S \oplus N \oplus N_z^* \mid N_z^*) \} = 0,
\end{aligned}$$
where the equality is due to the channel degradedness condition with an appropriate noise term $N_z^*$ independent of $N$ such that $N \oplus N_z^* = N_z$, and the inequality is due to the fact that conditioning does not increase entropy. Using this, we observe that the outer bound evaluates to $R_a - R_l \leq H(p_{n_z}) - H(p_n)$. This expression is achieved by Corollary 9 when we choose $p_u = 0.5$, provided that $H(p_s) \geq 1 - H(p_n)$.

Fig. 2. Channel model of memory with defective cells. $p = \Pr\{S=0\}$ (probability of being stuck at 0), $q = \Pr\{S=1\}$ (probability of being stuck at 1), $r = \Pr\{S=2\}$ (probability of being in a noiseless state), and $N \sim \mathrm{Bern}(n)$, where $n \in [0, 0.5]$ is the crossover probability of the BSC from $Y$ to $Z$.

C. Memory with defective cells model

We consider the model of information transmission over a write-once memory device with stuck-at defective cells [43], [44], [45]. In this channel model, each memory cell corresponds to a channel state instance with cardinality $|\mathcal{S}| = 3$, where the binary channel output is determined from the binary channel input and the channel state as
$$p(y=0 \mid x, s=0) = 1, \qquad p(y=1 \mid x, s=1) = 1, \qquad p(y=x \mid x, s=2) = 1,$$
where $\Pr\{S=0\} = p$ is the probability that the channel is stuck at 0, $\Pr\{S=1\} = q$ is the probability that the channel is stuck at 1, and $\Pr\{S=2\} = r$ is the probability of having a good channel where $y = x$, with $p + q + r = 1$.
We consider a binary symmetric channel (BSC) from $Y$ to $Z$, where $Z_i = Y_i \oplus N_i$, with $N_i \sim \mathrm{Bern}(n)$ for some $n \in [0, 0.5]$. This corresponds to a degraded DMC model. (See Fig. 2.) We present numerical results for this channel model with three regions: an uncoded region, a coded region, and an outer-bound region. The uncoded region is obtained by setting $U = \emptyset$ in Theorem 1, where we have the set of $(R_a, R_l)$ pairs satisfying
$$R_a \leq I(S;Y), \qquad R_l \geq I(S;Z)$$
over all possible $p(x|s)$. For the coded region, we simulate a sub-region of the one given in Theorem 1, where we set $U = Y$ and achieve the set of $(R_a, R_l)$ pairs satisfying
$$R_a \leq \min\{ H(S),\ H(Y) \}, \qquad R_l \geq I(Y,S;Z) = H(Z) - H(N)$$
over all possible $p(x|s)$. For converse arguments, we consider the outer-bound region given by the set of $(R_a, R_l)$ pairs satisfying
$$R_a \leq \min\{ H(S),\ I(X,S;Y) \} = \min\{ H(S),\ H(Y) \}, \qquad R_l \geq I(S;Z)$$
over all possible $p(x|s)$. This outer-bound region follows from Proposition 6. We evaluate the regions above in terms of the channel parameters as follows. Let $\Pr\{X=1\} = \alpha$. Then,
$$\begin{aligned}
H(S) &= H(p, q, r), & H(Y \mid S) &= rH(\alpha), \\
H(Y) &= H(q + r\alpha), & H(Z \mid S) &= (p+q)H(n) + rH(\alpha \otimes n), \\
H(Z) &= H((q + r\alpha) \otimes n), &&
\end{aligned}$$
where $H(\cdot,\cdot,\cdot)$ is the ternary entropy function, $H(\cdot)$ is the binary entropy function, and $\otimes$ is the binary convolution given by $p \otimes q = p(1-q) + q(1-p)$. The numerical results are given in Fig. 3. The regions are truncated at $R_l \leq H(S)$, as any $R_l > H(S)$ is trivially achievable. We note that the coded region is potentially larger than its uncoded counterpart, even though we only compute a subset of the coded achievable region. This shows the gains that can be leveraged by the proposed scheme, i.e., sending a refinement of the state sequence over the channel.
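The evaluation above can be checked numerically. In the sketch below (helper names are our own), the input is drawn independently of the state with $\Pr\{X=1\} = \alpha$, consistent with the evaluation above, and the boundary expressions of the three regions are computed for one parameter choice from Fig. 3:

```python
import math

def H2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def H3(p, q, r):
    """Ternary entropy in bits."""
    return sum(-v*math.log2(v) for v in (p, q, r) if v > 0)

def conv(p, q):
    """Binary convolution p (1-q) + q (1-p)."""
    return p*(1-q) + q*(1-p)

def region_points(p, q, r, n, alpha):
    HS = H3(p, q, r)                              # H(S)
    HY = H2(q + r*alpha)                          # H(Y)
    HY_S = r * H2(alpha)                          # H(Y|S)
    HZ = H2(conv(q + r*alpha, n))                 # H(Z)
    HZ_S = (p + q)*H2(n) + r*H2(conv(alpha, n))   # H(Z|S)
    uncoded = (HY - HY_S, HZ - HZ_S)     # (Ra, Rl) bounds with U = empty
    coded = (min(HS, HY), HZ - H2(n))    # U = Y sub-region of Theorem 1
    outer = (min(HS, HY), HZ - HZ_S)     # Proposition 6 outer bound
    return uncoded, coded, outer

uncoded, coded, outer = region_points(p=1/3, q=1/3, r=1/3, n=0.1, alpha=0.5)
# The coded scheme allows a larger amplification rate than the uncoded one,
# while the outer bound shares the uncoded scheme's leakage expression.
assert uncoded[0] <= coded[0] and abs(uncoded[1] - outer[1]) < 1e-12
```

Sweeping $\alpha$ over $[0, 1]$ and taking the upper-left frontier of each set of points reproduces the qualitative shape of the regions in Fig. 3.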
V. GAUSSIAN SCENARIO

Consider the channels given by
$$Y_i = X_i + S_i + N_i, \qquad Z_i = X_i + S_i + \tilde{N}_i,$$
where the state and noise sequences are generated i.i.d. as $S_i \sim \mathcal{N}(0, \sigma_s^2)$, $N_i \sim \mathcal{N}(0, \sigma_n^2)$, $\tilde{N}_i \sim \mathcal{N}(0, \sigma_{n_z}^2)$, and the cost constraint on the channel input is given by $c(x) = x^2$ and $C = P$, i.e., $\frac{1}{n}\sum_{i=1}^n E\{X_i^2\} \leq P$. (See Fig. 4.)

Fig. 3. Simulation results for the memory with defective cells model (left panel: $p = q = r = 1/3$, $n = 0.1$; right panel: $p = q = 1/12$, $r = 5/6$, $n = 0.1$; curves: uncoded scheme, coded scheme, converse region).

Fig. 4. The channel model for the Gaussian setting. $S_i \sim \mathcal{N}(0, \sigma_s^2)$, $N_i \sim \mathcal{N}(0, \sigma_n^2)$, and $\tilde{N}_i \sim \mathcal{N}(0, \sigma_{n_z}^2)$.

A. An inner bound for $\mathcal{C}$ using an uncoded scheme

The inner bound is based on sending an amplified version of $S$ together with some additional Gaussian noise. This uncoded signal is constructed as follows:
$$X_i = \rho \frac{\sigma_x}{\sigma_s} S_i + \sqrt{1 - \rho^2}\, \sigma_x T_i, \qquad (11)$$
where $T_i \sim \mathcal{N}(0, 1)$ independent of $S_i$, $\rho \in [-1, 1]$, and $\sigma_x^2 \leq P$. Here, $\rho^2$ is the fraction of the power allocated to $S_i$. This scheme achieves the following region.

Theorem 11: The uncoded scheme, which forwards $S_i$ at each time step together with some i.i.d. Gaussian noise as given in (11), achieves the set of $(R_a, R_l)$ pairs denoted by the region $\mathcal{R}_{uncoded}$, where
$$\mathcal{R}_{uncoded} = \mathrm{Convex\ Hull}\Big( \bigcup_{\rho \in [-1,1],\ \sigma_x^2 \in [0,P]} (R_a(\rho, \sigma_x), R_l(\rho, \sigma_x)) \Big) \subseteq \mathcal{C},$$
with
$$\begin{aligned}
R_a(\rho, \sigma_x) &= \frac{1}{2} \log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \rho^2\sigma_x^2}{\sigma_n^2 + (1-\rho^2)\sigma_x^2}\right) \qquad (12) \\
R_l(\rho, \sigma_x) &= \frac{1}{2} \log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \rho^2\sigma_x^2}{\sigma_{n_z}^2 + (1-\rho^2)\sigma_x^2}\right). \qquad (13)
\end{aligned}$$
The expressions above are obtained by evaluating $R_a = I(S;Y)$ and $R_l = I(S;Z)$ for the uncoded transmission in (11).
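The rates (12)-(13) are straightforward to evaluate numerically; the sketch below (the function name is our own) also reproduces the two trivial points discussed in the examples that follow:

```python
import math

def uncoded_rates(rho, sigma_x2, sigma_s2, sigma_n2, sigma_nz2):
    """Evaluate (12)-(13) for the uncoded input in (11), in bits."""
    sigma_x, sigma_s = math.sqrt(sigma_x2), math.sqrt(sigma_s2)
    num = sigma_s2 + 2*rho*sigma_s*sigma_x + rho**2 * sigma_x2
    Ra = 0.5 * math.log2(1 + num / (sigma_n2 + (1 - rho**2) * sigma_x2))
    Rl = 0.5 * math.log2(1 + num / (sigma_nz2 + (1 - rho**2) * sigma_x2))
    return Ra, Rl

# X = -S (rho = -1, sigma_x^2 = sigma_s^2, feasible when P >= sigma_s^2):
# both rates vanish, since the state is cancelled.
ra, rl = uncoded_rates(-1.0, 4.0, 4.0, 1.0, 5.0)
assert abs(ra) < 1e-12 and abs(rl) < 1e-12

# X = 0: Ra = 0.5*log2(1 + sigma_s^2/sigma_n^2), and likewise for Rl
# with sigma_nz^2 in the denominator.
Ra0, Rl0 = uncoded_rates(0.0, 0.0, 4.0, 1.0, 5.0)
assert abs(Ra0 - 0.5 * math.log2(5.0)) < 1e-12
assert abs(Rl0 - 0.5 * math.log2(1.8)) < 1e-9
```

Sweeping $\rho$ and $\sigma_x^2$ and taking the convex hull of the resulting $(R_a, R_l)$ points yields the region of Theorem 11.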
Examples:
• If $P \geq \sigma_s^2$, one can set $X = -S$ and achieve the pair $(R_a = 0, R_l = 0)$.
• Another trivial point is obtained by setting $X = 0$, which achieves $R_a = \frac{1}{2}\log\big(1 + \frac{\sigma_s^2}{\sigma_n^2}\big)$, $R_l = \frac{1}{2}\log\big(1 + \frac{\sigma_s^2}{\sigma_{n_z}^2}\big)$.

B. Outer bounds on $\mathcal{C}$

Corollary 12: Let $\rho$ denote the correlation coefficient between $X$ and $S$. All achievable rate pairs $(R_a, R_l)$ satisfy
$$\begin{aligned}
R_a &\leq \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_n^2}\right) \\
R_l &\geq \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \rho^2\sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\right)
\end{aligned}$$
for some $-1 \leq \rho \leq 1$ and $\sigma_x^2 \leq P$.

Proof: Using Proposition 7, we have
$$R_a \leq I(X,S;Y) = h(Y) - h(Y \mid X, S) = h(Y) - h(N) \leq \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2}{\sigma_n^2}\right). \qquad (14)$$
Using Proposition 7, the linear estimate $\hat{S}(Z) = \frac{E[SZ]}{E[Z^2]} Z$, and the fact that conditioning does not increase entropy, we get
$$R_l \geq I(S;Z,U) \geq I(S;Z) = h(S) - h(S \mid Z) = h(S) - h(S - \hat{S}(Z) \mid Z) \geq h(S) - h(S - \hat{S}(Z)).$$
Since the entropy maximizing distribution for a given second moment is Gaussian, we have
$$h(S - \hat{S}(Z)) \leq \frac{1}{2}\log\left( 2\pi e\, \frac{\sigma_s^2}{1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \rho^2\sigma_x^2}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}} \right),$$
leading to
$$R_l \geq \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \rho^2\sigma_x^2}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\right).$$

Corollary 13: Let $\rho$ denote the correlation coefficient between $X$ and $S$. If $\sigma_n^2 \leq \sigma_{n_z}^2$, then all achievable rate pairs $(R_a, R_l)$ satisfy
$$R_a - R_l \leq \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2}{\sigma_n^2}\right) - \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2}{\sigma_{n_z}^2}\right), \qquad (15)$$
for some $-1 \leq \rho \leq 1$ and $\sigma_x^2 \leq P$.

Proof: By Proposition 7, we have $R_a - R_l \leq I(X,S;Y \mid Z)$. Without loss of generality, we consider $\tilde{N} = N + N'$ with $\sigma_{n_z}^2 = \sigma_n^2 + \sigma_{n'}^2$, where $N'$ is independent of $N$. Noting that $I(X,S;Y \mid Z) = h(Y \mid Z) - h(Y \mid X, S, Z) = h(Y \mid Z) - h(N \mid \tilde{N})$, we upper bound $h(Y \mid Z)$ using the following.
Consider two zero-mean correlated random variables $A$ and $B$:
$$h(A \mid B) \overset{(a)}{=} h(A - \hat{A}(B) \mid B) \leq h(A - \hat{A}(B)) \overset{(b)}{\leq} \frac{1}{2}\log(2\pi e\, \sigma_e^2),$$
where in (a) we use $\hat{A}(B)$ as the estimate of $A$ given $B$, and (b) follows by defining the estimation error variance $\sigma_e^2 \triangleq E\big[(A - \hat{A}(B))^2\big]$ and the fact that the Gaussian distribution maximizes entropy for a given variance. We then upper bound the optimal estimator error variance by the linear MMSE variance. Therefore,
$$h(A \mid B) \leq \frac{1}{2}\log\left(2\pi e\left( \mathrm{var}(A) - \frac{E(AB)^2}{\mathrm{var}(B)} \right)\right).$$
Using the above, we obtain
$$\begin{aligned}
R_a - R_l &\leq \frac{1}{2}\log\left(2\pi e\left( \sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2 + \sigma_n^2 - \frac{(\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2 + \sigma_n^2)^2}{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2 + \sigma_n^2 + \sigma_{n'}^2} \right)\right) - \frac{1}{2}\log\left(2\pi e\left( \sigma_n^2 - \frac{(\sigma_n^2)^2}{\sigma_n^2 + \sigma_{n'}^2} \right)\right) \\
&= \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2}{\sigma_n^2}\right) - \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + 2\rho\sigma_s\sigma_x + \sigma_x^2}{\sigma_n^2 + \sigma_{n'}^2}\right). \qquad (16)
\end{aligned}$$
This completes the proof.

C. Comparison of inner and outer bounds for the degraded Gaussian channel

We now compare the uncoded scheme and the outer bound presented above. In particular, we show that the uncoded transmission scheme achieves certain corner points of the amplification-masking region, and that the gap between the inner and outer bounds on the region is within $1/2$ bit for a wide set of channel parameters. We also show that the uncoded scheme achieves the optimal difference $R_a - R_l$.

1) Characterization of the gap between achievable and converse regions: We show that, given any point $(R_a, R_l)$ in the converse region corresponding to a given $(\rho, \sigma_x)$, uncoded transmission achieves within $1/2$ bit of the converse region under certain conditions on the channel parameters. In particular, for any given $R_a$, uncoded transmission achieves that $R_a$ and comes within $1/2$ bit of the bound on $R_l$ if $\frac{P}{\sigma_{n_z}^2} \leq 1$.
Similarly, for any given $R_l$, uncoded transmission achieves the given $R_l$ and comes within $1/2$ bit of the bound on $R_a$ if $\frac{P}{\sigma_n^2} \leq 1$. We prove these as follows. Using Corollary 12, any point in the outer bound region is described as
$$R_a = \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_n^2}\right), \qquad R_l = \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \rho^2\sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\right) \qquad (17)$$
for $-1 \leq \rho \leq 1$ and $\sigma_x^2 \leq P$. Let us first show that uncoded transmission achieves any $R_l$ in the region above, with a gap from the $R_a$ above that is within $1/2$ bit. Let the uncoded scheme be designed such that $X_i = \frac{\sigma_x}{\sigma_s}\rho S_i + T_i$, where $T_i \sim \mathcal{N}(0, \sigma_x^2(1-\rho^2))$ independent of $S_i$. Now, by (13) and (12), this input achieves the leakage $I(S;Z) = \frac{1}{2}\log\big(1 + \frac{\sigma_s^2 + \rho^2\sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\big)$ and an $R_a$ given by $I(S;Y) = \frac{1}{2}\log\big(1 + \frac{\sigma_s^2 + \rho^2\sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_n^2 + \sigma_x^2(1-\rho^2)}\big)$, which implies that the gap is given by
$$I(X,S;Y) - I(S;Y) = I(X;Y \mid S) = \frac{1}{2}\log\left(1 + \frac{\sigma_x^2(1-\rho^2)}{\sigma_n^2}\right) \leq \frac{1}{2}, \qquad (18)$$
for $\frac{P}{\sigma_n^2} \leq 1$. Now, in order to prove the other claim, that the uncoded scheme achieves any given $R_a$ with a gap from $R_l$ within $1/2$ bit, we proceed as follows. Given $R_a = \frac{1}{2}\log\big(1 + \frac{\sigma_s^2 + \sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_n^2}\big)$, we achieve this by choosing an uncoded scheme such that $X_i = \frac{\sigma_{x'}}{\sigma_s} S_i$ if $\rho \geq 0$, or $X_i = -\frac{\sigma_{x'}}{\sigma_s} S_i$ if $\rho < 0$. We choose $\sigma_{x'}$ such that
$$\sigma_{x'}^2 + 2\tfrac{\rho}{|\rho|}\sigma_s\sigma_{x'} = \sigma_x^2 + 2\rho\sigma_s\sigma_x \ \text{ if } \rho \neq 0, \qquad \sigma_{x'}^2 + 2\sigma_s\sigma_{x'} = \sigma_x^2 \ \text{ if } \rho = 0,$$
where $0 \leq \sigma_{x'} \leq \sqrt{P}$. By the intermediate value theorem for continuous functions, it is clear that there exists $\sigma_{x'} \leq \sqrt{P}$ such that the conditions above are satisfied. Further, the uncoded scheme achieves the desired $R_a$. For $\rho \neq 0$, the scheme achieves an $R_l$ given by $\frac{1}{2}\log\big(1 + \frac{\sigma_s^2 + \sigma_{x'}^2 + 2\frac{\rho}{|\rho|}\sigma_s\sigma_{x'}}{\sigma_{n_z}^2}\big)$, leading to a gap
$$\begin{aligned}
&\frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \sigma_{x'}^2 + 2\tfrac{\rho}{|\rho|}\sigma_s\sigma_{x'}}{\sigma_{n_z}^2}\right) - \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \rho^2\sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\right) \\
&= \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2}\right) - \frac{1}{2}\log\left(1 + \frac{\sigma_s^2 + \rho^2\sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\right) \\
&= \frac{1}{2}\log\left(\frac{\sigma_{n_z}^2 + \sigma_s^2 + \sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2}\right) - \frac{1}{2}\log\left(\frac{\sigma_{n_z}^2 + \sigma_s^2 + \sigma_x^2 + 2\rho\sigma_s\sigma_x}{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}\right) \\
&= \frac{1}{2}\log\left(\frac{\sigma_{n_z}^2 + \sigma_x^2(1-\rho^2)}{\sigma_{n_z}^2}\right) = \frac{1}{2}\log\left(1 + \frac{\sigma_x^2(1-\rho^2)}{\sigma_{n_z}^2}\right) \leq \frac{1}{2}\log 2 = \frac{1}{2}, \qquad (19)
\end{aligned}$$
when $\frac{P}{\sigma_{n_z}^2} \leq 1$. Following the same steps, the $\rho = 0$ case can be shown as well, implying the characterization of the trade-off region within $1/2$ bit for $\frac{P}{\sigma_{n_z}^2} \leq 1$ and $\frac{P}{\sigma_n^2} \leq 1$.

Fig. 5. Simulation results for the Gaussian scenario (left panel: $P=2$, $\sigma_s^2=10$, $\sigma_n^2=1$, $\sigma_{n_z}^2=5$; right panel: $P=4$, $\sigma_s^2=5$, $\sigma_n^2=1$, $\sigma_{n_z}^2=0.5$; curves: uncoded scheme, converse region).

2) Differential amplification capacity: Note that uncoded transmission achieves the maximum $R_a - R_l$. The upper bound on $R_a - R_l$ in (15) is maximized for $\sigma_x^2 = P$ and $\rho = 1$. This maximum difference between $R_a$ and $R_l$ is achieved by the uncoded transmission corresponding to $X = \frac{\sqrt{P}}{\sigma_s} S$ in Theorem 11, and is given by
$$C_d = \frac{1}{2}\log\left(1 + \frac{(\sigma_s + \sqrt{P})^2}{\sigma_n^2}\right) - \frac{1}{2}\log\left(1 + \frac{(\sigma_s + \sqrt{P})^2}{\sigma_{n_z}^2}\right).$$

3) Corner points of the trade-off region: Consider the corner points of the amplification-masking region. Inspecting (14), we observe that the point in the outer bound region corresponding to maximum amplification is given by $\rho = 1$. Clearly, from (18) and (19), we see that the gap is zero for $\rho = 1$. Similarly, consider the point corresponding to minimum leakage $R_l$ in the weak and moderate interference regimes as in [9]. These points correspond to $\rho = -1$, where we have $I(X,S;Y) = I(S;Y)$ and $I(X,S;Z) = I(S;Z)$, leading to the gap being zero. This is also verified by setting $\rho = -1$ in (18) and (19).
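The Gaussian bounds above admit quick numerical verification. The sketch below (parameter values are illustrative assumptions of ours) checks the algebraic simplification in (16), the half-bit gap bound of (18)-(19), and the positivity of $C_d$ for a degraded example; the $2\pi e$ factors cancel in the entropy differences and are omitted:

```python
import math

sigma_s2, sigma_x2, rho = 4.0, 2.0, 0.3
sigma_n2, sigma_np2 = 1.0, 0.5
sigma_nz2 = sigma_n2 + sigma_np2   # degradedness: N~ = N + N'
sig = sigma_s2 + 2*rho*math.sqrt(sigma_s2*sigma_x2) + sigma_x2  # var(X+S)

# (16): LMMSE entropy bound applied to h(Y|Z) minus h(N|N~), with
# var(Y) = sig + sigma_n2, var(Z) = sig + sigma_nz2, E[YZ] = sig + sigma_n2.
lhs = (0.5*math.log2((sig + sigma_n2) - (sig + sigma_n2)**2/(sig + sigma_nz2))
       - 0.5*math.log2(sigma_n2 - sigma_n2**2/sigma_nz2))
rhs = 0.5*math.log2(1 + sig/sigma_n2) - 0.5*math.log2(1 + sig/sigma_nz2)
assert abs(lhs - rhs) < 1e-9

# (18)-(19): the gap term is at most 1/2 bit when sigma_x^2 <= noise variance;
# the worst case is rho = 0 with sigma_x^2 = P.
P = 1.0
gap = 0.5*math.log2(1 + P*(1 - 0.0**2)/sigma_n2)
assert gap <= 0.5 + 1e-12

# C_d for the degraded model, achieved by X = (sqrt(P)/sigma_s)*S.
s = (math.sqrt(sigma_s2) + math.sqrt(P))**2
C_d = 0.5*math.log2(1 + s/sigma_n2) - 0.5*math.log2(1 + s/sigma_nz2)
assert C_d > 0
```

The first check confirms that the two forms of (16) agree term by term; the last mirrors the monotonicity underlying Fig. 6: since $\sigma_n^2 < \sigma_{n_z}^2$, the difference is strictly positive and saturates as $P$ grows.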
4) Numerical results: We compare the uncoded region with the outer-bound region (given in Corollary 12) in Fig. 5. The first case corresponds to a degraded scenario, where the gap between the regions is fairly small, as expected from the analysis given above. However, for the reversely degraded scenario with a larger power constraint $P$ compared to the state power $\sigma_s^2$, the gap is larger. In Fig. 6, we plot the differential amplification capacity for a degraded channel ($\sigma_n^2 = 1$, $\sigma_{n_z}^2 = 5$) for a range of power constraints $P$ and different values of $\sigma_s^2$. Note that the differential amplification capacity saturates in the high SNR regime, and the effect of the encoder in increasing $C_d$ diminishes as the power of the additive state increases.

Fig. 6. Differential amplification capacity $C_d$ vs. power $P$ (dB) for $\sigma_n^2 = 1$, $\sigma_{n_z}^2 = 5$, with curves for $\sigma_s^2 \in \{-20, -10, 1, 10, 20\}$ dB. The dashed curve with diamond markers corresponds to the same scenario given in the degraded setting of Fig. 5 ($\sigma_s^2 = 10$) for different power levels.

VI. CONCLUSION

We study the problem of state amplification under masking constraints, where the encoder (with knowledge of the non-causal state $S^n$) facilitates the amplification rate ($\frac{1}{n} I(S^n;Y^n)$) at Bob, which observes $Y^n$, while minimizing the leakage rate ($\frac{1}{n} I(S^n;Z^n)$) as much as possible at Eve, which observes $Z^n$. Our coding schemes are based on transmission of state dependent messages over the state dependent channel to Bob. The achievable region corresponding to this refinement strategy is derived by calculating bounds on the amplification and masking rates.
We also show that, for input distributions under which Bob is a "stronger" receiver than Eve, the refinement information can be sent securely over the channel. This secure-refinement approach is shown to lead to non-trivial achievable regions. We also provided outer bounds, using which we showed that the scheme without secure refinement achieves the optimal $R_a - R_l$ difference for reversely degraded DMCs, degraded binary channels, and degraded Gaussian channels. For the degraded Gaussian model, we also characterized the optimal corner points and bounded the gap between the outer bound and the achievable region.

Several interesting problems can be considered as future directions. First, a channel cost may be introduced for the DMC model as well; the cost may depend on the state sequence or vary according to a stochastic model. Second, causal channel state knowledge can be considered. Third, in addition to the tasks of state amplification and masking, transmission of messages to the receivers can be considered; towards this end, signaling techniques for (secure) broadcast channel models can be utilized. Another extension direction is the coded state sequence setting [45], a scenario more relevant to broadcast and cognitive radio systems, where the coded signal (which carries a message from a codebook) corresponds to the channel state sequence that is non-causally known at the encoder. Finally, a source coding extension of the current model can be studied; in such a problem setup, the trade-off between the distortions achieved at the two users can be analyzed. A relevant work for such an extension is [46], where a source coding setup is considered with two sources (one to be amplified and the other to be masked) and a single receiver.
We remark that the amplification and leakage rates analyzed in this paper can be utilized to provide lower bounds on the distortions achievable at Bob and Eve, respectively, by evaluating the corresponding distortion-rate functions.

ACKNOWLEDGEMENT

The authors are thankful to Yanling Chen of Ruhr-University Bochum, Yeow-Khiang Chia of the Institute for Infocomm Research, Vincent Y. F. Tan of the National University of Singapore, and the anonymous reviewers for their valuable feedback.

APPENDIX A
$\Pr\{\mathcal{E}_3^c \cap \mathcal{E}_4\} \to 0$ AS $n \to \infty$

We use the arguments given in [40] to show this. Similar to the joint source-channel coding scenario studied in [40], we have a codeword $U^n$ with a covering index $K$, which makes the codebook and the covering index $K$ dependent on each other. In other words, the realization of the sequence $S^n$, together with the codebook, determines the index $K$, from which $(S^n, U^n)$ generates $Y^n$ through an i.i.d. generation process. Under such a scenario, as reported in [40], decoding the index $K$ from $Y^n$ at the receiver is successful if the number of $K$ indices is less than $2^{nI(U;Y)}$. We now provide this analysis here for completeness. Using the steps given in [40] for the scenario at hand, we have the following: for a given $M$, there are $2^{nR_k}$ codewords $U^n(M, k)$, $k \in [1 : 2^{nR_k}]$, and the bound below.
$$
\begin{aligned}
\Pr\{\mathcal{E}_3^c \cap \mathcal{E}_4\}
&= \Pr\{(U^n(M,k), Y^n) \in T_\epsilon^{(n)} \text{ for some } k \neq K\}\\
&\overset{(a)}{\le} \sum_{k=1}^{2^{nR_k}} \Pr\{(U^n(M,k), Y^n) \in T_\epsilon^{(n)}, K \neq k\}\\
&= \sum_{k=1}^{2^{nR_k}} \sum_{s^n} p(s^n) \Pr\{(U^n(M,k), Y^n) \in T_\epsilon^{(n)}, K \neq k \mid S^n = s^n\}\\
&\overset{(b)}{=} 2^{nR_k} \sum_{s^n} p(s^n) \Pr\{(U^n(M,1), Y^n) \in T_\epsilon^{(n)}, K \neq 1 \mid S^n = s^n\}\\
&\le 2^{nR_k} \sum_{s^n} p(s^n) \Pr\{(U^n(M,1), Y^n) \in T_\epsilon^{(n)} \mid K \neq 1, S^n = s^n\}\\
&= 2^{nR_k} \sum_{s^n} p(s^n) \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} \Pr\{U^n(M,1) = u^n, Y^n = y^n \mid K \neq 1, S^n = s^n\}\\
&\overset{(c)}{=} 2^{nR_k} \sum_{s^n} p(s^n) \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} \sum_{\bar{C}} \Pr\{U^n(M,1) = u^n, Y^n = y^n \mid K \neq 1, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}\\
&\qquad \times \Pr\{\bar{\mathcal{C}} = \bar{C} \mid K \neq 1, S^n = s^n\}\\
&\overset{(d)}{=} 2^{nR_k} \sum_{s^n} p(s^n) \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} \sum_{\bar{C}} \Pr\{U^n(M,1) = u^n \mid K \neq 1, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}\\
&\qquad \times \Pr\{Y^n = y^n \mid K \neq 1, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\} \Pr\{\bar{\mathcal{C}} = \bar{C} \mid K \neq 1, S^n = s^n\}\\
&\overset{(e)}{\le} 2^{nR_k} \sum_{s^n} p(s^n) \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} \sum_{\bar{C}} 2 \Pr\{U^n(M,1) = u^n\}\\
&\qquad \times \Pr\{Y^n = y^n \mid K \neq 1, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\} \Pr\{\bar{\mathcal{C}} = \bar{C} \mid K \neq 1, S^n = s^n\}\\
&= 2^{nR_k} \sum_{s^n} p(s^n) \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} 2 \Pr\{U^n(M,1) = u^n\} \Pr\{Y^n = y^n \mid K \neq 1, S^n = s^n\}\\
&\overset{(f)}{\le} 2^{nR_k} \sum_{s^n} p(s^n) \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} 4 \Pr\{U^n(M,1) = u^n\} \Pr\{Y^n = y^n \mid S^n = s^n\}\\
&\overset{(g)}{=} 2^{nR_k + 2} \sum_{(u^n, y^n) \in T_\epsilon^{(n)}} \prod_{i=1}^{n} p_U(u_i) \Pr\{Y^n = y^n\}\\
&\overset{(h)}{\le} 2^{n(R_k - I(U;Y) + \delta)},
\end{aligned}
$$
where (a) is due to the union bound, (b) follows by the symmetry of the codebook generation and encoding, (c) is by defining $\bar{\mathcal{C}} = \{U^n(M,k), k \neq 1\}$, (d) is due to the fact that, given $K \neq 1$, $U^n(M,1) \to (\bar{\mathcal{C}}, S^n) \to Y^n$ forms a Markov chain, (e) is due to Lemma 14 given at the end of this section, (f) is due to Lemma 15 given at the end of this section, (g) is due to having i.i.d.
generation for $U^n$, i.e., $\Pr\{U^n(M,1) = u^n\} = \prod_{i=1}^{n} p_U(u_i)$, and (h) is due to the joint typicality lemma [39].

From the last expression, we obtain that $\Pr\{\mathcal{E}_3^c \cap \mathcal{E}_4\} \to 0$ as $n \to \infty$ if $R_k < I(U;Y) - \delta$. This implies the existence of a sequence of codes achieving the desired result, as we set $R_k = I(U;S) + \delta < I(U;Y) - \delta$.

Lemma 14 (Lemma 1 in [40]): For sufficiently large $n$,
$$
\Pr\{U^n(M,1) = u^n \mid K \neq 1, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\} \le 2 \Pr\{U^n(M,1) = u^n\}.
$$

Proof: We have
$$
\begin{aligned}
\Pr\{U^n(M,1) = u^n \mid K \neq 1, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}
&= \Pr\{U^n(M,1) = u^n \mid S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}\\
&\qquad \times \frac{\Pr\{K \neq 1 \mid U^n(M,1) = u^n, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}}{\Pr\{K \neq 1 \mid S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}}\\
&\overset{(a)}{\le} \frac{\Pr\{U^n(M,1) = u^n\}}{\Pr\{K \neq 1 \mid S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}}\\
&\overset{(b)}{\le} 2 \Pr\{U^n(M,1) = u^n\},
\end{aligned}
$$
where (a) is due to the independence of $U^n(M,1)$ and $(S^n, \bar{\mathcal{C}})$, together with bounding $\Pr\{K \neq 1 \mid U^n(M,1) = u^n, S^n = s^n, \bar{\mathcal{C}} = \bar{C}\} \le 1$, and (b) follows from $\Pr\{K \neq 1 \mid S^n = s^n, \bar{\mathcal{C}} = \bar{C}\} \ge \frac{1}{2}$, as shown below.

Consider $t = t(\bar{C}, s^n) = |\{u^n(M,k) \in \bar{C} : (u^n(M,k), s^n) \in T_{\epsilon'}^{(n)}\}|$. If $t \ge 1$, then, by the symmetry of the codebook generation and encoding,
$$
\Pr\{K = 1 \mid S^n = s^n, \bar{\mathcal{C}} = \bar{C}\} = \frac{\Pr\{(U^n(M,1), s^n) \in T_{\epsilon'}^{(n)}\}}{t+1} \le \frac{1}{t+1} \le \frac{1}{2},
$$
where we upper bound the probability by $1$ and use $t \ge 1$. On the other hand, if $t = 0$, then, for sufficiently large $n$, and due to the symmetry of the codebook generation and encoding, we have
$$
\begin{aligned}
\Pr\{K = 1 \mid S^n = s^n, \bar{\mathcal{C}} = \bar{C}\}
&\le \Pr\{(U^n(M,1), s^n) \in T_{\epsilon'}^{(n)}\} + \frac{\Pr\{(U^n(M,1), s^n) \notin T_{\epsilon'}^{(n)}\}}{2^{nR_k}}\\
&\le \Pr\{(U^n(M,1), s^n) \in T_{\epsilon'}^{(n)}\} + \frac{1}{2^{nR_k}}
\le 2^{-n(I(U;S) - \delta(\epsilon'))} + \frac{1}{2^{nR_k}}
\le \frac{1}{2},
\end{aligned}
$$
where we bound the probability by $1$ and utilize the joint typicality lemma [39].
The last inequality holds in the limit of large $n$, as $R_k > 0$ and $I(U;S) > \delta(\epsilon')$, where $\delta(\epsilon') \to 0$ as $\epsilon' \to 0$.

Lemma 15 (Lemma 2 in [40]): For sufficiently large $n$,
$$
\Pr\{Y^n = y^n \mid K \neq 1, S^n = s^n\} \le 2 \Pr\{Y^n = y^n \mid S^n = s^n\}.
$$

Proof: We have
$$
\begin{aligned}
\Pr\{Y^n = y^n \mid K \neq 1, S^n = s^n\}
&= \Pr\{Y^n = y^n \mid S^n = s^n\} \frac{\Pr\{K \neq 1 \mid S^n = s^n, Y^n = y^n\}}{\Pr\{K \neq 1 \mid S^n = s^n\}}\\
&\overset{(a)}{\le} \frac{\Pr\{Y^n = y^n \mid S^n = s^n\}}{\Pr\{K \neq 1 \mid S^n = s^n\}}
\overset{(b)}{\le} 2 \Pr\{Y^n = y^n \mid S^n = s^n\},
\end{aligned}
$$
where (a) follows as $\Pr\{K \neq 1 \mid S^n = s^n, Y^n = y^n\} \le 1$, and (b) is due to having $\Pr\{K \neq 1 \mid S^n = s^n\} \ge 1/2$ for sufficiently large $n$, by the symmetry of the codebook generation and encoding.

APPENDIX B
INDICATOR EVENT CONDITIONING LEMMA

Lemma 16: Consider an indicator random variable $E$ for an event $\mathcal{E}$, where $E = 1$ on $\mathcal{E}$ and $E = 0$ on $\mathcal{E}^c$. Then, for any $A, B$,
$$
\begin{aligned}
H(A \mid B) &\le H(A \mid B, E) + 1\\
I(A; B) &\ge I(A; B \mid E) - 1\\
I(A; B) &\le I(A; B \mid E) + 1.
\end{aligned}
$$

Proof:
$$
H(A \mid B) \overset{(a)}{\le} H(A \mid B) + H(E \mid B, A) = H(E \mid B) + H(A \mid B, E) \overset{(b)}{\le} H(A \mid B, E) + 1 \qquad (20)
$$
$$
I(A; B) \overset{(c)}{\ge} H(A \mid E) - H(A \mid B) \overset{(d)}{\ge} I(A; B \mid E) - 1
$$
$$
I(A; B) \overset{(c)}{\le} H(A) - H(A \mid B, E) \overset{(e)}{\le} I(A; B \mid E) + 1,
$$
where (a) is due to $H(E \mid B, A) \ge 0$, (b) is due to the upper bound on the entropy of a binary random variable, (c) follows as conditioning does not increase entropy, (d) follows by (20), and (e) follows by taking $B = \emptyset$ in (20) to upper bound $H(A)$.

APPENDIX C
PROOF OF LEMMA 3

Proof: We first consider the case $I(U;Z) > I(U;S)$, for which the codewords are represented by $U^n(M, T, K)$.
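The one-bit penalties in Lemma 16 are easy to check numerically on a toy joint distribution; the sketch below (with a hypothetical pmf and the indicator $E = 1\{A = B\}$, both chosen purely for illustration) evaluates the three inequalities directly.

```python
import math

# Toy joint pmf p(a, b) (hypothetical numbers) and the indicator E = 1{A = B}
# of the event being conditioned on in Lemma 16.
p = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def H(f):
    # Entropy (in bits) of the random variable f(A, B) under p.
    dist = {}
    for (a, b), q in p.items():
        dist[f(a, b)] = dist.get(f(a, b), 0.0) + q
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

A = lambda a, b: a
B = lambda a, b: b
E = lambda a, b: int(a == b)
pair = lambda f, g: (lambda a, b: (f(a, b), g(a, b)))

H_A_given_B  = H(pair(A, B)) - H(B)                    # H(A|B)
H_A_given_BE = H(pair(pair(A, B), E)) - H(pair(B, E))  # H(A|B,E)
I_AB   = H(A) + H(B) - H(pair(A, B))                   # I(A;B)
I_AB_E = (H(pair(A, E)) - H(E)) - H_A_given_BE         # I(A;B|E)

assert H_A_given_B <= H_A_given_BE + 1 + 1e-12   # H(A|B) <= H(A|B,E) + 1
assert I_AB >= I_AB_E - 1 - 1e-12                # I(A;B) >= I(A;B|E) - 1
assert I_AB <= I_AB_E + 1 + 1e-12                # I(A;B) <= I(A;B|E) + 1
```

Since $E$ carries at most one bit, conditioning on it can shift entropies and mutual informations by at most one bit, which is exactly what the assertions confirm.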
$$
\begin{aligned}
I(M; Z^n) &= H(M) - H(M \mid Z^n)\\
&= H(M) - H(T, K, Z^n, M) + H(Z^n) + H(T, K \mid Z^n, M)\\
&= H(Z^n) + H(T, K \mid Z^n, M) - H(T, K \mid M) - H(Z^n \mid M, T, K)\\
&\overset{(a)}{\le} n(H(Z) + \epsilon_1) - H(T \mid M) - H(U^n \mid M, T) - H(Z^n \mid M, T, U^n)\\
&\overset{(b)}{=} n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(U^n, Z^n \mid M, T)\\
&= n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(U^n, Z^n \mid M, T, S^n) - I(S^n; U^n, Z^n \mid M, T)\\
&\overset{(c)}{\le} n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(Z^n \mid M, T, S^n, U^n) - H(S^n \mid M, T) + H(S^n \mid U^n, Z^n)\\
&\overset{(d)}{\le} n(I(S; Z, U) + \epsilon_2) - H(S^n \mid M, T) + H(S^n \mid U^n, Z^n),
\end{aligned}
$$
where (a) follows by $H(Z^n) = \sum_{i=1}^{n} H(Z_i \mid Z_1^{i-1}) \le \sum_{i=1}^{n} H(Z_i) = nH(Z)$; by $H(T, K \mid Z^n, M) \le n\epsilon_1$ for some $\epsilon_1 \to 0$ as $n \to \infty$ (this is the decoding of $(T, K)$ at the eavesdropper having $(Z^n, M)$, following from steps similar to the proof of Theorem 1 with Bob replaced by Eve, as the number of $(T, K)$ indices is $2^{n(I(U;Z) - \delta)}$; see also Appendix A); by $H(K \mid M, T) = H(U^n \mid M, T)$; and by having $H(Z^n \mid M, T, K) = H(Z^n \mid M, T, U^n)$, as $(M, T, K)$ is a one-to-one function of $(M, T, U^n(M, T, K))$. (b) follows by $H(T \mid M) = H(T) = n(I(U;Z) - I(U;S) - 2\delta)$, as $T$ is independent of $M$ and uniformly distributed, and by taking $\epsilon_2 = \epsilon_1 + 2\delta$. (c) follows as $H(U^n \mid M, T, S^n) \ge 0$ and $H(S^n \mid M, T, U^n, Z^n) = H(S^n \mid U^n, Z^n)$, as $(M, T)$ is uniquely determined given $U^n(M, T, K)$. (d) is by having
$$
H(Z^n \mid M, T, S^n, U^n) = \sum_{i=1}^{n} H(Z_i \mid Z_1^{i-1}, M, T, S^n, U^n) = \sum_{i=1}^{n} H(Z_i \mid S_i, U_i) = nH(Z \mid S, U),
$$
which is due to the Markov chain $(Z_1^{i-1}, M, T, S_1^{i-1}, S_{i+1}^n, U_1^{i-1}, U_{i+1}^n) \to (S_i, U_i) \to Z_i$, as $(U_i, S_i)$ generates $(X_i, S_i)$, which generates $Z_i$ i.i.d. due to the memoryless channel.
Secondly, consider the case $I(U;Z) \le I(U;S)$, for which the codewords are represented by $U^n(M, K)$. We consider $K = [K_1, K_2]$ with $K_1 \in [1 : 2^{n(I(U;S) - I(U;Z) + 2\delta)}]$ and $K_2 \in [1 : 2^{n(I(U;Z) - \delta)}]$, which together represent the covering index $K$. (This can be obtained via random binning of the $2^{n(I(U;S) + \delta)}$ codewords $U^n(M, k)$, $k \in [1 : 2^{n(I(U;S) + \delta)}]$, into bins indexed by $k_1$, with the codeword index within each bin given by $k_2$.) We continue as follows.
$$
\begin{aligned}
I(M; Z^n) &= H(M) - H(M \mid Z^n)\\
&= H(M) - H(M, K_1, K_2, Z^n) + H(Z^n) + H(K_1, K_2 \mid Z^n, M)\\
&= H(Z^n) + H(K_1 \mid Z^n, M) + H(K_2 \mid Z^n, M, K_1) - H(K_1, K_2 \mid M) - H(Z^n \mid M, K_1, K_2)\\
&\overset{(a)}{\le} n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(U^n \mid M) - H(Z^n \mid M, U^n)\\
&= n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(U^n, Z^n \mid M)\\
&= n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(U^n, Z^n \mid M, S^n) - I(S^n; U^n, Z^n \mid M)\\
&\overset{(b)}{\le} n(H(Z \mid U) + I(U; S) + \epsilon_2) - H(Z^n \mid M, S^n, U^n) - H(S^n \mid M) + H(S^n \mid U^n, Z^n)\\
&\overset{(c)}{\le} n(I(S; Z, U) + \epsilon_2) - H(S^n \mid M) + H(S^n \mid U^n, Z^n),
\end{aligned}
$$
where (a) follows by having $H(Z^n) \le nH(Z)$ as described above; by $H(K_1 \mid Z^n, M) \le H(K_1) \le n(I(U;S) - I(U;Z) + 2\delta)$; by $H(K_2 \mid Z^n, M, K_1) \le n\epsilon_1$ for some $\epsilon_1 \to 0$ as $n \to \infty$ (this is the decoding of $K_2$ at the eavesdropper having $(Z^n, M, K_1)$, following from steps similar to the proof of Theorem 1 with Bob replaced by Eve, as the number of $K_2$ indices is $2^{n(I(U;Z) - \delta)}$; see also Appendix A); by $H(K_1, K_2 \mid M) = H(U^n \mid M)$; by having $H(Z^n \mid M, K_1, K_2) = H(Z^n \mid M, K_1, K_2, U^n)$, as $U^n(M, K_1, K_2)$ is determined by $(M, K_1, K_2)$; and by defining $\epsilon_2 = 2\delta + \epsilon_1$. (b) is by $H(U^n \mid M, S^n) \ge 0$, and (c) is the same as step (d) of the previous case.
APPENDIX D
PROOF OF COROLLARY 4

From Lemma 3, we have
$$
I(M; Z^n) \le nI(S; Z, U) + H(S^n \mid U^n, Z^n) - H(S^n \mid M, T) + n\epsilon.
$$
Here, $H(S^n \mid U^n, Z^n) = \sum_{i=1}^{n} H(S_i \mid S_1^{i-1}, U^n, Z^n) \le \sum_{i=1}^{n} H(S_i \mid U_i, Z_i) = nH(S \mid U, Z)$. In addition, as $S^n$ is generated i.i.d. and is independent of $(M, T)$, we have $H(S^n \mid M, T) = H(S^n) \ge n(H(S) - \epsilon_1)$ for some $\epsilon_1 \to 0$ as $n \to \infty$. (The latter expression follows as we can bound $H(S^n) \ge H(S^n \mid E)$, where $E$ is an indicator random variable with $E = 1$ if $S^n$ is typical. Then,
$$
H(S^n \mid E) = \Pr\{E = 0\} H(S^n \mid E = 0) + \Pr\{E = 1\} H(S^n \mid E = 1) \ge \Pr\{E = 1\} H(S^n \mid E = 1) \ge (1 - \epsilon_0)\, n (H(S) - \epsilon_0)
$$
for some arbitrarily small $\epsilon_0$, from which the assertion follows by taking $\epsilon_1 = \epsilon_0 (1 + H(S) - \epsilon_0)$.) Then, using these two observations in the equation above, we obtain
$$
I(M; Z^n) \le nI(S; Z, U) + nH(S \mid U, Z) - nH(S) + n\epsilon_1 + n\epsilon = n(\epsilon_1 + \epsilon),
$$
which concludes the proof.

APPENDIX E
PROOF OF PROPOSITION 6

Proof: Define a random variable $Q$, uniform over $\{1, \cdots, n\}$ and independent of everything else. (The standard technique of using $Q$ as a time-sharing parameter will be utilized in the following.) Also define $U_i = (S_1^{i-1}, Z_{i+1}^n)$. We have the following bounds:
$$
I(S^n; Y^n) \le I(X^n, S^n; Y^n) \overset{(a)}{\le} \sum_{i=1}^{n} I(X_i, S_i; Y_i) \overset{(b)}{=} nI(X_Q, S_Q; Y_Q \mid Q) \overset{(c)}{\le} nI(U_Q, Q, X_Q, S_Q; Y_Q), \qquad (21)
$$
where (a) is due to the memoryless channel $p(y_i \mid x_i, s_i)$ and the fact that conditioning does not increase entropy, (b) follows from the distribution of $Q$, and (c) follows as the added term $I(Q; Y_Q) + I(U_Q; Y_Q \mid Q, X_Q, S_Q) \ge 0$.
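The typicality bound invoked in the parenthetical step above rests on the AEP; the following sketch (a hypothetical Bernoulli state and illustrative variable names, not from the paper) checks empirically that $-\frac{1}{n}\log_2 p(S^n)$ concentrates around $H(S)$ for an i.i.d. source.

```python
import math
import random

# AEP sketch for an i.i.d. state sequence S^n (hypothetical parameters):
# -(1/n) log2 p(S^n) concentrates around H(S), which is what drives the
# bound H(S^n | E = 1) >= n (H(S) - eps0) for typical sequences.
random.seed(0)
p = [0.7, 0.3]                        # Bernoulli(0.3) state distribution
H_S = -sum(q * math.log2(q) for q in p)
n, trials, eps0 = 2000, 200, 0.05
typical = 0
for _ in range(trials):
    s = [0 if random.random() < p[0] else 1 for _ in range(n)]
    emp = -sum(math.log2(p[x]) for x in s) / n   # -(1/n) log2 p(s^n)
    if abs(emp - H_S) < eps0:
        typical += 1
assert typical / trials > 0.9         # almost all sequences are typical
```

With high probability the sample falls in the typical set, so conditioning on $E = 1$ costs only the arbitrarily small $\epsilon_0$ slack used in the proof.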
In addition,
$$
I(S^n; Y^n) \le H(S^n) = \sum_{i=1}^{n} H(S_i \mid S_1^{i-1}) \overset{(a)}{=} \sum_{i=1}^{n} H(S_i) = nH(S_Q \mid Q), \qquad (22)
$$
where (a) holds as $S^n$ has an i.i.d. distribution. Moreover,
$$
\begin{aligned}
I(S^n; Z^n) &= H(S^n) - H(S^n \mid Z^n) = \sum_{i=1}^{n} H(S_i) - H(S_i \mid S_1^{i-1}, Z^n)\\
&\overset{(a)}{\ge} \sum_{i=1}^{n} H(S_i) - H(S_i \mid Z_i, S_1^{i-1}, Z_{i+1}^n)
\overset{(b)}{=} \sum_{i=1}^{n} H(S_i) - H(S_i \mid Z_i, U_i)
\overset{(c)}{=} nI(S_Q; Z_Q, U_Q \mid Q),
\end{aligned} \qquad (23)
$$
where (a) is due to the fact that conditioning does not increase entropy, (b) follows from the definition of $U_i$, and (c) is the standard time-sharing argument. Finally, we have
$$
\begin{aligned}
0 &\le \sum_{i=1}^{n} I(Z_{i+1}^n; Z_i) = \sum_{i=1}^{n} I(S_1^{i-1}, Z_{i+1}^n; Z_i) - I(S_1^{i-1}; Z_i \mid Z_{i+1}^n)\\
&\overset{(a)}{=} \sum_{i=1}^{n} I(S_1^{i-1}, Z_{i+1}^n; Z_i) - I(Z_{i+1}^n; S_i \mid S_1^{i-1})
\overset{(b)}{=} \sum_{i=1}^{n} I(S_1^{i-1}, Z_{i+1}^n; Z_i) - I(S_1^{i-1}, Z_{i+1}^n; S_i)\\
&\overset{(c)}{=} \sum_{i=1}^{n} I(U_i; Z_i) - I(U_i; S_i)
\overset{(d)}{=} n(I(U_Q; Z_Q \mid Q) - I(U_Q; S_Q \mid Q)),
\end{aligned} \qquad (24)
$$
where (a) follows by Csiszár's sum lemma [39] (note that this is similar to the converse result of the Gel'fand-Pinsker problem given in [44], with the indices of $S$ and $Z$ reversed), (b) is due to $I(S_1^{i-1}; S_i) = 0$, as $S^n$ has an i.i.d. distribution, (c) follows from the definition of $U_i$, and (d) is the standard time-sharing argument.

Now, define $U = (U_Q, Q)$, $S = S_Q$ (which is independent of $Q$; note that this argument is also used in [9] in the corresponding single-letterization arguments), $X = X_Q$, $Y = Y_Q$, and $Z = Z_Q$. Then, (21) implies
$$
I(S^n; Y^n) \le nI(U_Q, Q, X_Q, S_Q; Y_Q) \le nI(U, X, S; Y) = nI(S, X; Y),
$$
as $U \to (X, S) \to Y$ due to the memoryless channel $p(y \mid x, s)$. (22) reduces to $I(S^n; Y^n) \le nH(S_Q \mid Q) = nH(S)$, as $S_Q = S$ is independent of $Q$.
(23) implies $I(S^n; Z^n) \ge nI(S_Q; Z_Q, U_Q, Q) = nI(S; Z, U)$, since
$$
I(S_Q; Z_Q, U_Q \mid Q) = H(S_Q \mid Q) - H(S_Q \mid Z_Q, U_Q, Q) = H(S_Q) - H(S_Q \mid Z_Q, U_Q, Q) = I(S_Q; Z_Q, U_Q, Q),
$$
due to the independence of $S_Q = S$ and $Q$. (24) reads $0 \le n(I(U_Q; Z_Q \mid Q) - I(U_Q; S_Q \mid Q))$. Consider adding $n(I(Q; Z_Q) - I(Q; S_Q))$ to the right-hand side of this inequality. As $S_Q = S$ and $Q$ are independent, $I(Q; S_Q) = 0$, and hence $n(I(Q; Z_Q) - I(Q; S_Q)) \ge 0$. Then,
$$
0 \le n(I(U_Q; Z_Q \mid Q) - I(U_Q; S_Q \mid Q)) \le n(I(U_Q, Q; Z_Q) - I(U_Q, Q; S_Q)) = n(I(U; Z) - I(U; S)),
$$
which implies $0 \le I(U; Z) - I(U; S)$ as a necessary condition.

This concludes the proof, as combining the bounds above with the fact that any achievable $(R_a, R_l)$ for the given channel $p(y, z \mid x, s)$ and state distribution $p(s)$ must satisfy $\frac{1}{n} I(S^n; Y^n) \ge R_a - \epsilon$ and $\frac{1}{n} I(S^n; Z^n) \le R_l + \epsilon$ yields the inequalities stated in the proposition.

APPENDIX F
PROOF OF PROPOSITION 7

Proof: Let $\mathcal{P}_1$ denote the set of $p(u, x \mid s)$ satisfying $I(U; Y) \ge I(U; S)$, and let $\mathcal{P}_2$ denote the set of $p(u, x \mid s)$ satisfying $I(U; Z) \ge I(U; S)$. For the channel $p(y, z \mid x, s) = p(y \mid x, s)\, p(z \mid y)$, any $p \in \mathcal{P}_2$ also satisfies $p \in \mathcal{P}_1$. Therefore, using Proposition 6, we have that if $(R_a, R_l)$ is achievable, then $(R_a, R_l) \in \mathcal{R}_o^3$, where
$$
\mathcal{R}_o^3 = \bigcup_{p(u, x \mid s)} \left\{ (R_a, R_l) :\;
\begin{aligned}
R_a &\le \min\{H(S),\, I(X, S; Y)\}\\
R_l &\ge I(S; Z, U)\\
0 &\le I(U; Y) - I(U; S)
\end{aligned} \right\}.
$$
It remains to show the $R_a - R_l$ bound. We add the following bound to the ones stated above. (Note that we use the same $U_i$ and the same $Q$-dependent definitions stated in Appendix E, and hence the following bound can be added to the ones stated there.)
$$
\begin{aligned}
n(R_a - R_l) &= I(S^n; Y^n) - I(S^n; Z^n)\\
&= I(X^n, S^n; Y^n) - I(X^n, S^n; Z^n) - \left( I(X^n; Y^n \mid S^n) - I(X^n; Z^n \mid S^n) \right)\\
&\overset{(a)}{\le} I(X^n, S^n; Y^n) - I(X^n, S^n; Z^n)\\
&\overset{(b)}{=} I(X^n, S^n; Y^n \mid Z^n)\\
&= \sum_{i=1}^{n} H(Y_i \mid Y_1^{i-1}, Z^n) - \sum_{i=1}^{n} H(Y_i \mid Y_1^{i-1}, Z^n, X^n, S^n)\\
&\overset{(c)}{\le} \sum_{i=1}^{n} H(Y_i \mid Z_i) - \sum_{i=1}^{n} H(Y_i \mid Z_i, X_i, S_i) = \sum_{i=1}^{n} I(X_i, S_i; Y_i \mid Z_i)\\
&\overset{(d)}{=} nI(X_Q, S_Q; Y_Q \mid Z_Q, Q)
\overset{(e)}{\le} nI(U_Q, Q, X_Q, S_Q; Y_Q \mid Z_Q)
\overset{(f)}{=} nI(U, X, S; Y \mid Z)
\overset{(g)}{=} nI(X, S; Y \mid Z),
\end{aligned}
$$
where (a) and (b) are due to the degradedness condition; (c) is due to the fact that conditioning does not increase entropy, together with the Markov chain $(Y_1^{i-1}, Z_1^{i-1}, Z_{i+1}^n, S_1^{i-1}, S_{i+1}^n, X_1^{i-1}, X_{i+1}^n) \to (X_i, S_i, Z_i) \to Y_i$, as $(X_i, S_i)$ generates $Y_i$; (d) follows from the distribution of $Q$; (e) is by adding the non-negative term $I(Q; Y_Q \mid Z_Q) + I(U_Q; Y_Q \mid Z_Q, Q, X_Q, S_Q)$; (f) is due to the definitions given in Appendix E; and (g) is due to the Markov chain $U \to (X, S) \to (Y, Z)$, as the outputs $(y_i, z_i)$ are generated i.i.d. from $(x_i, s_i)$ over the memoryless channel $p(y, z \mid x, s) = p(y \mid x, s)\, p(z \mid y)$. The result then follows by taking the union over all joint distributions $p(u, x \mid s)$.

APPENDIX G
PROOF OF THE CONVERSE FOR THEOREM 8

Proof: We bound the rate difference as follows.
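Step (b) above is the standard degradedness identity $I(X,S;Y) - I(X,S;Z) = I(X,S;Y \mid Z)$ for the Markov chain $(X,S) \to Y \to Z$; the sketch below checks it on a small toy channel (all pmfs are hypothetical), with $V$ standing for the pair $(X,S)$.

```python
import math
from itertools import product

# Toy check of the degradedness identity: for V -> Y -> Z,
# I(V;Y) - I(V;Z) = I(V;Y|Z) >= 0. All distributions below are hypothetical.
pV = [0.25] * 4                      # V = (X, S), four joint values
pY_V = [[0.8, 0.1, 0.05, 0.05],      # p(y | v), one row per v
        [0.1, 0.8, 0.05, 0.05],
        [0.05, 0.05, 0.8, 0.1],
        [0.05, 0.05, 0.1, 0.8]]
pZ_Y = [[0.9, 0.1, 0.0, 0.0],        # p(z | y): Z is a noisier copy of Y
        [0.1, 0.9, 0.0, 0.0],
        [0.0, 0.0, 0.9, 0.1],
        [0.0, 0.0, 0.1, 0.9]]

joint = {(v, y, z): pV[v] * pY_V[v][y] * pZ_Y[y][z]
         for v, y, z in product(range(4), repeat=3)}

def H(idx):
    # Joint entropy (bits) of the coordinates in idx (0 = V, 1 = Y, 2 = Z).
    dist = {}
    for k, q in joint.items():
        key = tuple(k[i] for i in idx)
        dist[key] = dist.get(key, 0.0) + q
    return -sum(q * math.log2(q) for q in dist.values() if q > 0)

I_VY = H((0,)) + H((1,)) - H((0, 1))
I_VZ = H((0,)) + H((2,)) - H((0, 2))
I_VY_given_Z = H((0, 2)) + H((1, 2)) - H((0, 1, 2)) - H((2,))

assert I_VY - I_VZ >= -1e-12                          # difference is non-negative
assert abs((I_VY - I_VZ) - I_VY_given_Z) < 1e-9       # equals I(V;Y|Z)
```

The equality relies only on $I(V; Z \mid Y) = 0$ for the physically degraded chain, which holds here by construction of the joint pmf.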
$$
\begin{aligned}
n(R_a - R_l) &\le I(S^n; Y^n) - I(S^n; Z^n)
= \sum_{i=1}^{n} I(S^n; Y_i \mid Y_1^{i-1}) - I(S^n; Z_i \mid Z_{i+1}^n)\\
&\overset{(a)}{=} \sum_{i=1}^{n} I(S^n; Y_i \mid Y_1^{i-1}, Z_{i+1}^n) + I(Z_{i+1}^n; Y_i \mid Y_1^{i-1}) - I(Z_{i+1}^n; Y_i \mid Y_1^{i-1}, S^n)\\
&\qquad - I(S^n; Z_i \mid Y_1^{i-1}, Z_{i+1}^n) - I(Y_1^{i-1}; Z_i \mid Z_{i+1}^n) + I(Y_1^{i-1}; Z_i \mid Z_{i+1}^n, S^n)\\
&\overset{(b)}{=} \sum_{i=1}^{n} I(S^n; Y_i \mid Y_1^{i-1}, Z_{i+1}^n) - I(S^n; Z_i \mid Y_1^{i-1}, Z_{i+1}^n)\\
&= \sum_{i=1}^{n} I(S_i; Y_i \mid Y_1^{i-1}, Z_{i+1}^n, S_1^{i-1}, S_{i+1}^n) + I(S_1^{i-1}, S_{i+1}^n; Y_i \mid Y_1^{i-1}, Z_{i+1}^n)\\
&\qquad - I(S_i; Z_i \mid Y_1^{i-1}, Z_{i+1}^n, S_1^{i-1}, S_{i+1}^n) - I(S_1^{i-1}, S_{i+1}^n; Z_i \mid Y_1^{i-1}, Z_{i+1}^n)\\
&\overset{(c)}{\le} \sum_{i=1}^{n} I(S_i; Y_i \mid U_i) - I(S_i; Z_i \mid U_i)
\overset{(d)}{=} n \left[ I(S; Y \mid U) - I(S; Z \mid U) \right],
\end{aligned}
$$
where, in (a), we used the equalities
$$
I(S^n; Y_i \mid Y_1^{i-1}) + I(Z_{i+1}^n; Y_i \mid Y_1^{i-1}, S^n) = I(Z_{i+1}^n; Y_i \mid Y_1^{i-1}) + I(S^n; Y_i \mid Y_1^{i-1}, Z_{i+1}^n)
$$
and
$$
I(S^n; Z_i \mid Z_{i+1}^n) + I(Y_1^{i-1}; Z_i \mid Z_{i+1}^n, S^n) = I(Y_1^{i-1}; Z_i \mid Z_{i+1}^n) + I(S^n; Z_i \mid Z_{i+1}^n, Y_1^{i-1});
$$
in (b), we used Csiszár's sum lemma [4] (and a conditional form of it) to obtain the equalities
$$
\sum_{i=1}^{n} I(Z_{i+1}^n; Y_i \mid Y_1^{i-1}) = \sum_{i=1}^{n} I(Y_1^{i-1}; Z_i \mid Z_{i+1}^n)
\quad \text{and} \quad
\sum_{i=1}^{n} I(Z_{i+1}^n; Y_i \mid Y_1^{i-1}, S^n) = \sum_{i=1}^{n} I(Y_1^{i-1}; Z_i \mid Z_{i+1}^n, S^n);
$$
in (c), we define $U_i \triangleq (Y_1^{i-1}, Z_{i+1}^n, S_1^{i-1}, S_{i+1}^n)$ and use the reverse degradedness of the channel, which yields $I(S_1^{i-1}, S_{i+1}^n; Z_i \mid Y_1^{i-1}, Z_{i+1}^n) \ge I(S_1^{i-1}, S_{i+1}^n; Y_i \mid Y_1^{i-1}, Z_{i+1}^n)$; and, in (d), we obtain the single-letter expression (by defining $U = (U_Q, Q)$, etc.; see, e.g., [47]). Note that, with the definition of $U_i$ in (c), $X_i$ is generated from $S^n$, and hence from $(U_i, S_i)$.
In addition, given $(X_i, S_i)$, $Z_i$ is independent of $(U_i, S_i)$. Thus, $(U, S) \to (X, S) \to Z$ forms a Markov chain, and the upper bound is given by
$$
R_a - R_l \le \max_{p(u, x \mid s)\ \text{s.t.}\ (U,S) \to (X,S) \to Z} I(S; Y \mid U) - I(S; Z \mid U)
= \max_{p(x \mid u^*, s),\ u^* \in \mathcal{U}} I(S; Y \mid U = u^*) - I(S; Z \mid U = u^*)
= \max_{p(x \mid s)} I(S; Y) - I(S; Z),
$$
where the equalities follow due to the following: First, the conditional mutual information expression is maximized by a particular input $u^*$ (as randomizing over different $u$ values will not increase the sum $\sum_u (I(S; Y \mid U = u) - I(S; Z \mid U = u)) \Pr\{U = u\}$) together with a probability distribution $p^*(x \mid u^*, s)$. Second, the optimal $p^*(x \mid u^*, s)$ corresponds to some $p(x \mid s)$. Thus, the converse result can be stated over input distributions of the form $p(x \mid s)$, matching the achievability result.

REFERENCES

[1] S. Gel'fand and M. Pinsker, "Coding for channels with random parameters," Probl. Contr. and Inform. Theory, vol. 9, no. 1, pp. 19–31, 1980.
[2] M. H. M. Costa, "Writing on dirty paper," IEEE Trans. Inf. Theory, vol. 29, no. 3, pp. 439–441, May 1983.
[3] A. Wyner, "The wire-tap channel," The Bell System Technical Journal, vol. 54, no. 8, pp. 1355–1387, Oct. 1975.
[4] I. Csiszár and J. Körner, "Broadcast channels with confidential messages," IEEE Trans. Inf. Theory, vol. 24, no. 3, pp. 339–348, May 1978.
[5] Y. Chen and A. J. H. Vinck, "Wiretap channel with side information," IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 395–402, Jan. 2008.
[6] C. Mitrpant, A. J. H. Vinck, and Y. Luo, "An achievable region for the Gaussian wiretap channel with side information," IEEE Trans. Inf. Theory, vol. 52, no. 5, pp. 2181–2190, May 2006.
[7] A. Sutivong, M. Chiang, T. M. Cover, and Y.-H. Kim, "Channel capacity and state estimation for state-dependent Gaussian channels," IEEE Trans. Inf.
Theory, vol. 51, no. 4, pp. 1486–1495, Apr. 2005.
[8] Y.-H. Kim, A. Sutivong, and T. M. Cover, "State amplification," IEEE Trans. Inf. Theory, vol. 54, no. 5, pp. 1850–1859, May 2008.
[9] N. Merhav and S. Shamai, "Information rates subject to state masking," IEEE Trans. Inf. Theory, vol. 53, no. 6, pp. 2254–2261, Jun. 2007.
[10] G. Caire and S. Shamai, "On the achievable throughput of a multiantenna Gaussian broadcast channel," IEEE Trans. Inf. Theory, vol. 49, no. 7, pp. 1691–1706, July 2003.
[11] G. Caire, S. Shamai (Shitz), Y. Steinberg, and H. Weingarten, "On information-theoretic aspects of MIMO broadcast channels," in Space Time Wireless Systems, From Array Processing to MIMO Communications, H. Bölcskei, D. Gesbert, C. B. Papadias, and A.-J. van der Veen, Eds. London, UK: Cambridge Univ. Press, 2008.
[12] S. A. Jafar, G. J. Foschini, and A. J. Goldsmith, "PhantomNet: Exploring optimal multicellular multiple antenna systems," EURASIP Journal on Advances in Signal Processing, no. 5, pp. 591–605, May 2004.
[13] J. Mitola III and G. Q. Maguire Jr., "Cognitive radio: Making software radios more personal," IEEE Personal Commun. Mag., vol. 6, no. 4, pp. 13–18, Aug. 1999.
[14] J. Mitola III, "Cognitive radio: An integrated agent architecture for software defined radio," Ph.D. dissertation, Computer Communication System Laboratory, Department of Teleinformatics, Royal Institute of Technology (KTH), Stockholm, Sweden, May 2000.
[15] N. Devroye, P. Mitran, and V. Tarokh, "Achievable rates in cognitive radio channels," IEEE Trans. Inf. Theory, vol. 52, no. 5, pp. 1813–1827, May 2006.
[16] A. Goldsmith, S. Jafar, I. Maric, and S. Srinivasa, "Breaking spectrum gridlock with cognitive radios: An information theoretic perspective," Proceedings of the IEEE, vol. 97, no. 5, pp. 894–914, May 2009.
[17] Y. Liang, A. Somekh-Baruch, H. V. Poor, S.
Shamai, and S. Verdu, "Capacity of cognitive interference channels with and without secrecy," IEEE Trans. Inf. Theory, vol. 55, no. 2, pp. 604–619, Feb. 2009.
[18] O. Simeone and A. Yener, "The cognitive multiple access wire-tap channel," in Proc. 43rd Annual Conference on Information Sciences and Systems (CISS 2009), Baltimore, MD, Mar. 2009.
[19] L. Toher, O. O. Koyluoglu, and H. El Gamal, "Secrecy games over the cognitive channel," in Proc. 2010 IEEE International Symposium on Information Theory (ISIT 2010), Austin, TX, Jun. 2010.
[20] J. Zhang and M. Gursoy, "Secure relay beamforming over cognitive radio channels," in Proc. 45th Annual Conference on Information Sciences and Systems (CISS 2011), Baltimore, MD, Mar. 2011.
[21] L. Lai and H. El Gamal, "The relay-eavesdropper channel: Cooperation for secrecy," IEEE Trans. Inf. Theory, vol. 54, no. 9, pp. 4005–4019, Sep. 2008.
[22] R. Tandon, S. Ulukus, and K. Ramchandran, "Secure source coding with a helper," in Proc. 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2009), 2009.
[23] O. O. Koyluoglu and H. El Gamal, "Cooperative encoding for secrecy in interference channels," IEEE Trans. Inf. Theory, vol. 57, no. 9, pp. 5682–5694, Sep. 2011.
[24] M. Yuksel, X. Liu, and E. Erkip, "A secure communication game with a relay helping the eavesdropper," IEEE Trans. Inf. Forensics Security, vol. 6, no. 3, pp. 818–830, Sep. 2011.
[25] B. Azimi-Sadjadi, A. Kiayias, A. Mercado, and B. Yener, "Robust key generation from signal envelopes in wireless networks," in Proc. 14th ACM Conference on Computer and Communications Security (CCS 2007), Alexandria, VA, Oct. 2007.
[26] S. Mathur, W. Trappe, N. Mandayam, C. Ye, and A. Reznik, "Radio-telepathy: Extracting a secret key from an unauthenticated wireless channel," in Proc.
14th ACM International Conference on Mobile Computing and Networking (MobiCom 2008), San Francisco, CA, Sep. 2008.
[27] S. Jana, S. N. Premnath, M. Clark, S. K. Kasera, N. Patwari, and S. V. Krishnamurthy, "On the effectiveness of secret key extraction from wireless signal strength in real environments," in Proc. 15th Annual International Conference on Mobile Computing and Networking (MobiCom 2009), Beijing, China, Sep. 2009.
[28] S. Mathur, A. Reznik, Y. Shah, W. Trappe, and N. B. Mandayam, "Information-theoretically secret key generation for fading wireless channels," IEEE Trans. Inf. Forensics Security, vol. 5, no. 2, pp. 240–254, Jun. 2010.
[29] A. Khisti, S. Diggavi, and G. Wornell, "Secret-key agreement with channel state information at the transmitter," IEEE Trans. Inf. Forensics Security, vol. 6, no. 3, pp. 672–681, Sep. 2011.
[30] C. H. Bennett, G. Brassard, C. Crepeau, and U. M. Maurer, "Generalized privacy amplification," IEEE Trans. Inf. Theory, vol. 41, no. 6, pp. 1915–1923, Nov. 1995.
[31] A. Khisti, S. N. Diggavi, and G. Wornell, "Secret-key generation with correlated sources and noisy channels," in Proc. 2008 IEEE International Symposium on Information Theory (ISIT'08), Jul. 2008.
[32] V. Prabhakaran, K. Eswaran, and K. Ramchandran, "Secrecy via sources and channels: A secret key - secret message rate tradeoff region," in Proc. 2008 IEEE International Symposium on Information Theory (ISIT'08), Jul. 2008.
[33] A. Khisti, "Secret key agreement on wiretap channels with transmitter side information," in Proc. 16th European Wireless Conference (EW 2010), Lucca, Italy, Apr. 2010.
[34] H. Delfs and H. Knebl, Introduction to Cryptography: Principles and Applications, 2nd ed. Springer, 2007.
[35] O. Goldreich, Foundations of Cryptography: Volume II, Basic Applications. Cambridge University Press, 2004.
[36] J. Körner and K.
Marton, "A source network problem involving the comparison of two channels II," Trans. Colloquium Inform. Theory, Keszthely, Hungary, Aug. 1975.
[37] ——, "Comparison of two noisy channels," Topics in Information Theory, Coll. Math. Soc. J. Bolyai No. 16, P. Elias and I. Csiszár, Eds., North Holland, pp. 411–423, 1977.
[38] A. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Trans. Inf. Theory, vol. 22, no. 1, pp. 1–10, Jan. 1976.
[39] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, 2011.
[40] S. Lim, P. Minero, and Y.-H. Kim, "Lossy communication of correlated sources over multiple access channels," in Proc. 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2010), Allerton, IL, Sep. 2010.
[41] C. Choudhuri and U. Mitra, "On non-causal side information at the encoder," in Proc. 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2012), Monticello, IL, Oct. 2012.
[42] A. Lapidoth and S. Tinguely, "Sending a bivariate Gaussian over a Gaussian MAC," IEEE Trans. Inf. Theory, vol. 56, no. 6, pp. 2714–2752, Jun. 2010.
[43] A. V. Kuznetsov and B. S. Tsybakov, "Coding in a memory with defective cells," Probl. Peredachi Inf., vol. 10, no. 2, pp. 52–60, 1974.
[44] C. Heegard, "Capacity and coding for computer memory with defects," Ph.D. dissertation, Stanford University, Stanford, CA, Nov. 1981.
[45] C. Heegard and A. El Gamal, "On the capacity of computer memory with defects," IEEE Trans. Inf. Theory, vol. 29, no. 5, pp. 731–739, Sep. 1983.
[46] T. Courtade, "Information masking and amplification: The source coding setting," in Proc. 2012 IEEE International Symposium on Information Theory (ISIT 2012), Cambridge, MA, Jul. 2012.
[47] T. Cover and J. Thomas, Elements of Information Theory.
John Wiley and Sons, Inc., 1991.