The Gaussian MAC with Conferencing Encoders

We derive the capacity region of the Gaussian version of Willems's two-user MAC with conferencing encoders. This setting differs from the classical MAC in that, prior to each transmission block, the two transmitters can communicate with each other over noise-free bit-pipes of given capacities. The derivation requires a new technique for proving the optimality of Gaussian input distributions in certain mutual information maximizations under a Markov constraint. We also consider a Costa-type extension of the Gaussian MAC with conferencing encoders. In this extension, the channel can be described as a two-user MAC with Gaussian noise and Gaussian interference, where the interference is known non-causally to the encoders but not to the decoder. We show that, as in Costa's setting, the interference sequence can be perfectly canceled, i.e., that the capacity region without interference can be achieved.

Authors: Shraga I. Bross, Amos Lapidoth, Michèle A. Wigger

I. Introduction

We consider a communication scenario known as the MAC with conferencing encoders, where two transmitters wish to transmit independent messages to a single receiver. Prior to each transmission block, the two encoders are allowed to hold a conference, i.e., they can communicate with each other over noise-free bit-pipes of given capacities. Special cases are the classical multiple-access setting, where the encoders are ignorant of each other's messages (both bit-pipes of zero capacity); the fully-cooperative setting (both bit-pipes of infinite capacity); and the asymmetric message sets setting, where one of the encoders is fully cognizant of the message the other encoder intends to send (the pipe from the cognizant transmitter to the non-cognizant transmitter of zero capacity and the other pipe of infinite capacity).

The MAC with conferencing encoders was introduced by Willems in [1], who also derived the capacity region for the discrete memoryless setting. Here we derive the capacity region for the Gaussian setting under average power constraints. The achievability part is very similar to the one in [1]. The converse, however, requires a novel tool, first derived in [4], for proving that Gaussian distributions maximize certain mutual information expressions under a Markovity constraint. For such maximization problems the traditional approach of proving the optimality of Gaussian distributions by employing the Max-Entropy Theorem [3, Theorem 12.1.1] or a conditional version thereof [5] fails. The reason is that replacing a non-Gaussian vector satisfying the Markovity condition by a Gaussian vector of the same covariance matrix may result in a Gaussian vector that violates the Markovity condition.

We also consider an additional scenario where the received signal is corrupted by an additive Gaussian interference sequence that is non-causally known to both encoders but not to the decoder.
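To make this failure mode concrete, here is a small numeric sketch (our own illustration, not from the paper): a discrete Markov triple whose zero-mean Gaussian surrogate of equal covariance violates Markovity. It uses the covariance criterion of Lemma 5 below; the particular triple is a hypothetical choice.

```python
# Numeric sketch (ours, not from the paper): a non-Gaussian Markov triple
# whose Gaussian surrogate of equal covariance is NOT Markov. Criterion of
# Lemma 5 below: a Gaussian triple (X1, U, X2) with Var(U) != 0 is Markov
# iff Cov(X1, X2) * Var(U) == Cov(X1, U) * Cov(U, X2).
import numpy as np

# Hypothetical triple: U uniform on {0, 1, 2} and X1 = X2 = U^2, so that
# X1 - U - X2 is trivially Markov (deterministic functions of U).
u = np.array([0.0, 1.0, 2.0])   # support of U
p = np.full(3, 1.0 / 3.0)       # uniform probabilities

def cov(a, b):
    """Covariance of two functions of U (given as arrays over the support)."""
    return float(np.sum(p * a * b) - np.sum(p * a) * np.sum(p * b))

x = u ** 2
lhs = cov(x, x) * cov(u, u)     # Cov(X1, X2) * Var(U)   ~= 1.926
rhs = cov(x, u) * cov(u, x)     # Cov(X1, U) * Cov(U, X2) ~= 1.778
print(lhs, rhs, np.isclose(lhs, rhs))  # unequal: the Gaussian copy is not Markov
```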
We show that, even though the decoder is non-informed, the interference can be perfectly canceled in the sense that the capacity region of the setting without interference is achievable also in this setting with interference. We next describe the channel models more precisely and then proceed to state our results.

The goal of the transmission is that Transmitters 1 and 2 convey their messages $M_1$ and $M_2$ to the receiver. The messages $M_1$ and $M_2$ are assumed to be independent and uniformly distributed over the sets $\mathcal{M}_1 = \{1, \ldots, \lfloor e^{nR_1} \rfloor\}$ and $\mathcal{M}_2 = \{1, \ldots, \lfloor e^{nR_2} \rfloor\}$. Here $n$ denotes the block-length, and $R_1$ and $R_2$ denote the rates of transmission in nats per channel use.

Prior to each block of $n$ channel uses, the two encoders hold a conference, i.e., they exchange information over $k$ uses of the pipes. The pipes are assumed to be

- perfect, in the sense that any input symbol to a pipe is available immediately and error-free at the output of the pipe; and
- of limited throughputs $C_{12}$ and $C_{21}$, in the sense that when the $k$ inputs to the pipe from Transmitter 1 to Transmitter 2 take values in the sets $\mathcal{V}_{1,1}, \ldots, \mathcal{V}_{1,k}$ and the $k$ inputs to the pipe from Transmitter 2 to Transmitter 1 take values in the sets $\mathcal{V}_{2,1}, \ldots, \mathcal{V}_{2,k}$, then
$$\sum_{\ell=1}^{k} \log |\mathcal{V}_{1,\ell}| \le nC_{12} \quad \text{and} \quad \sum_{\ell=1}^{k} \log |\mathcal{V}_{2,\ell}| \le nC_{21}. \tag{1}$$

Here and throughout, all logarithms are natural logarithms. Note that the communication over the pipes is assumed to be held in a conferencing way, so that the $\ell$-th inputs $V_{1,\ell} \in \mathcal{V}_{1,\ell}$ and $V_{2,\ell} \in \mathcal{V}_{2,\ell}$ can depend on the respective messages as well as on the past observed pipe-outputs:
$$V_{1,\ell} = f_{1,\ell}(M_1, V_{2,1}, \ldots, V_{2,\ell-1}), \tag{2}$$
$$V_{2,\ell} = f_{2,\ell}(M_2, V_{1,1}, \ldots, V_{1,\ell-1}), \tag{3}$$
for some given sequences of encoding functions $\{f_{1,\ell}\}_{\ell=1}^{k}$ and $\{f_{2,\ell}\}_{\ell=1}^{k}$, where
$$f_{1,\ell} : \mathcal{M}_1 \times \mathcal{V}_{2,1} \times \cdots \times \mathcal{V}_{2,\ell-1} \to \mathcal{V}_{1,\ell}, \tag{4}$$
$$f_{2,\ell} : \mathcal{M}_2 \times \mathcal{V}_{1,1} \times \cdots \times \mathcal{V}_{1,\ell-1} \to \mathcal{V}_{2,\ell}. \tag{5}$$

We define an $(n, C_{12}, C_{21})$-conference to be the collection of an integer $k$, two sets of input alphabets $\{\mathcal{V}_{1,1}, \ldots, \mathcal{V}_{1,k}\}$ and $\{\mathcal{V}_{2,1}, \ldots, \mathcal{V}_{2,k}\}$, and two sets of encoding functions $\{f_{1,1}, \ldots, f_{1,k}\}$ and $\{f_{2,1}, \ldots, f_{2,k}\}$ as in (4) and (5), where $n$, $C_{12}$, $C_{21}$, $k$, and the sets $\{\mathcal{V}_{1,1}, \ldots, \mathcal{V}_{1,k}\}$ and $\{\mathcal{V}_{2,1}, \ldots, \mathcal{V}_{2,k}\}$ satisfy (1).

After the conference, Transmitter 1 is cognizant of the sequence $V_2 = (V_{2,1}, \ldots, V_{2,k})$ and Transmitter 2 is cognizant of the sequence $V_1 = (V_{1,1}, \ldots, V_{1,k})$. The channel input sequences $X_1 = (X_{1,1}, \ldots, X_{1,n})$ and $X_2 = (X_{2,1}, \ldots, X_{2,n})$ can then be described with encoding functions $\varphi_1^{(n)}$ and $\varphi_2^{(n)}$ as
$$X_1 = \varphi_1^{(n)}(M_1, V_2), \tag{6}$$
$$X_2 = \varphi_2^{(n)}(M_2, V_1), \tag{7}$$
where
$$\varphi_1^{(n)} : \mathcal{M}_1 \times \mathcal{V}_{2,1} \times \cdots \times \mathcal{V}_{2,k} \to \mathbb{R}^n, \tag{8}$$
$$\varphi_2^{(n)} : \mathcal{M}_2 \times \mathcal{V}_{1,1} \times \cdots \times \mathcal{V}_{1,k} \to \mathbb{R}^n. \tag{9}$$

Additionally, we impose an average block power constraint on both channel input sequences:
$$\frac{1}{n} \mathbb{E}\left[\sum_{t=1}^{n} X_{\nu,t}^2\right] \le P_\nu, \qquad \nu \in \{1, 2\}. \tag{10}$$
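As an aside, the interactive structure (2)-(5) is easy to mirror in code. The following is a minimal sketch under toy assumptions of our own (bit alphabets, a three-round conference); `hold_conference` and the round encoders are hypothetical names, not from the paper.

```python
# Minimal sketch (toy instantiation of ours) of the conferencing structure
# (2)-(5): in round l, each transmitter's pipe input depends on its own
# message and on the other transmitter's past pipe inputs. The alphabet
# sizes must satisfy the throughput constraint (1).
import math

def hold_conference(m1, m2, f1, f2, k):
    """Run a k-round conference; f1[l] and f2[l] are the round-l encoders."""
    v1, v2 = [], []
    for l in range(k):
        s1 = f1[l](m1, tuple(v2))   # (2): depends only on V_{2,1..l-1}
        s2 = f2[l](m2, tuple(v1))   # (3): depends only on V_{1,1..l-1}
        v1.append(s1)
        v2.append(s2)
    return v1, v2

def throughput_ok(alphabet_sizes, n, cap):
    """Constraint (1): sum of log alphabet sizes (in nats) at most n * cap."""
    return sum(math.log(s) for s in alphabet_sizes) <= n * cap

# Toy encoders: in round l each transmitter reveals bit l of its message.
n, k = 1000, 3
f1 = [lambda m, past, l=l: (m >> l) & 1 for l in range(k)]
f2 = [lambda m, past, l=l: (m >> l) & 1 for l in range(k)]
print(hold_conference(5, 2, f1, f2, k))    # ([1, 0, 1], [0, 1, 0])
print(throughput_ok([2] * k, n, cap=0.1))  # True: 3*log(2) <= 100 nats
```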
The multiple-access channel is described as follows. For given discrete time $t$ and channel inputs $x_{1,t}, x_{2,t} \in \mathbb{R}$, the time-$t$ channel output $Y_t$ is
$$Y_t = x_{1,t} + x_{2,t} + Z_t, \tag{11}$$
where $\{Z_t\}$ models the noise corrupting the channel and is given by a sequence of independent and identically distributed (IID) zero-mean Gaussian random variables of variance $\sigma^2 > 0$.

Based on the output sequence $Y = (Y_1, \ldots, Y_n)$, the decoder applies a decoding function $\phi^{(n)}$,
$$\phi^{(n)} : \mathbb{R}^n \to \mathcal{M}_1 \times \mathcal{M}_2, \tag{12}$$
to produce the message estimates $\hat{M}_1$ and $\hat{M}_2$, i.e.,
$$(\hat{M}_1, \hat{M}_2) = \phi^{(n)}(Y). \tag{13}$$
An error occurs whenever $(M_1, M_2) \ne (\hat{M}_1, \hat{M}_2)$.

A rate pair $(R_1, R_2)$ is said to be achievable over the Gaussian MAC with conferencing encoders if there exist a sequence of $(n, C_{12}, C_{21})$-conferences, two sequences of encoding functions $\{\varphi_1^{(n)}, \varphi_2^{(n)}\}$ as in (8) and (9), and a sequence of decoding functions $\{\phi^{(n)}\}$ as in (12) such that the probability of error tends to 0 as the block-length $n$ tends to infinity, i.e.,
$$\lim_{n \to \infty} \Pr\left[(M_1, M_2) \ne (\hat{M}_1, \hat{M}_2)\right] = 0. \tag{14}$$
The capacity region $\mathcal{C}$ is defined as the closure of the set of all achievable rate pairs.

We also consider an extension of the setting at hand in the sense of Costa's "Writing on Dirty Paper" channel [2]. Thus, we assume an additional additive Gaussian interference sequence that is non-causally known to both transmitters but not to the receiver. There are two different scenarios one could envision: a scenario where the transmitters learn the interference sequence before the conference, so that the inputs to the bit-pipes can also depend on the interference; and a scenario where the transmitters learn the interference only after the conference. It turns out that the presented results do not depend on which of the two scenarios is considered: the converse we present holds also for the setting where the transmitters know the interference already before the conference, and the encoding scheme applies also to the setting where the transmitters know the interference only after the conference. In the following we focus on the setting where the transmitters learn the interference sequence after the conference.

For the setting with interference we need to modify the definition of the channel in (11) and of the encoding functions in (8) and (9); the decoding function remains as in (12). For given inputs $x_{1,t}$ and $x_{2,t}$ the channel output is given by
$$Y_t = x_{1,t} + x_{2,t} + S_t + Z_t, \tag{15}$$
where the noise sequence $\{Z_t\}$ is defined as before, and where the interference sequence $\{S_t\}$ is an IID zero-mean Gaussian sequence of variance $Q$, independent of the noise sequence and of the messages. Denoting the interference sequence by $S = (S_1, \ldots, S_n)$, the channel input sequences $X_1$ and $X_2$ are described as
$$X_1 = \varphi_{1,\mathrm{IF}}^{(n)}(M_1, V_2, S), \qquad X_2 = \varphi_{2,\mathrm{IF}}^{(n)}(M_2, V_1, S),$$
for some encoding functions $\varphi_{1,\mathrm{IF}}^{(n)}, \varphi_{2,\mathrm{IF}}^{(n)}$ of the form
$$\varphi_{1,\mathrm{IF}}^{(n)} : \mathcal{M}_1 \times \mathcal{V}_{2,1} \times \cdots \times \mathcal{V}_{2,k} \times \mathbb{R}^n \to \mathbb{R}^n,$$
$$\varphi_{2,\mathrm{IF}}^{(n)} : \mathcal{M}_2 \times \mathcal{V}_{1,1} \times \cdots \times \mathcal{V}_{1,k} \times \mathbb{R}^n \to \mathbb{R}^n.$$
The input sequences are subject to the power constraints (10). The probability of error, achievable rate pairs, and capacity region $\mathcal{C}_{\mathrm{IF}}$ for this new setting are defined as before.
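For concreteness, the channel laws (11) and (15) and the power constraint (10) can be simulated directly. The snippet below is a toy illustration with assumed parameter values; the Gaussian inputs merely stand in for actual codewords.

```python
# Quick illustration (assumed toy parameters) of the channel laws (11) and
# (15) and of the average power constraint (10).
import numpy as np

rng = np.random.default_rng(0)
n, P1, P2, sigma2, Q = 10_000, 10.0, 5.0, 1.0, 4.0

x1 = rng.normal(0.0, np.sqrt(P1), n)      # stand-ins for encoder outputs
x2 = rng.normal(0.0, np.sqrt(P2), n)
z = rng.normal(0.0, np.sqrt(sigma2), n)   # IID zero-mean Gaussian noise
s = rng.normal(0.0, np.sqrt(Q), n)        # interference, known to encoders only

y_plain = x1 + x2 + z                     # channel (11), no interference
y_dirty = x1 + x2 + s + z                 # channel (15), with interference
print(np.mean(x1**2) <= 1.05 * P1, np.mean(x2**2) <= 1.05 * P2)  # (10), roughly
```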
II. Main Results

Definition 1: Define the region
$$\mathcal{C}_{\mathcal{G}} \triangleq \bigcup_{0 \le \beta_1, \beta_2 \le 1} \left\{ (R_1, R_2) : \begin{aligned} R_1 &\le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_1 P_1}{\sigma^2}\right) + C_{12}, \\ R_2 &\le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_2 P_2}{\sigma^2}\right) + C_{21}, \\ R_1 + R_2 &\le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_1 P_1 + \beta_2 P_2}{\sigma^2}\right) + C_{12} + C_{21}, \\ R_1 + R_2 &\le \tfrac{1}{2}\log\left(1 + \tfrac{P_1 + P_2 + 2\sqrt{P_1 P_2 \bar{\beta}_1 \bar{\beta}_2}}{\sigma^2}\right) \end{aligned} \right\}, \tag{16}$$
where $\bar{\beta}_1 = 1 - \beta_1$ and $\bar{\beta}_2 = 1 - \beta_2$.

Theorem 1: The capacity region $\mathcal{C}$ of the Gaussian MAC with conferencing encoders is equal to $\mathcal{C}_{\mathcal{G}}$,
$$\mathcal{C} = \mathcal{C}_{\mathcal{G}}. \tag{17}$$

Remark 1: The main step in the proof (see Section III-A) is to show that under the Markov condition $X_1 - U - X_2$ the region $\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2)$ (Definition 2) is maximized by choosing jointly Gaussian distributions on $(X_1, U, X_2)$. A subset of the mutual information expressions that characterize the region $\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2)$ can be found in the characterization of an achievable region for the MAC with perfect feedback proposed by Cover and Leung [6] and in the characterization of the capacity region of the MAC with common messages derived by Slepian and Wolf [7]. Again, in both characterizations the triple $X_1 - U - X_2$ is required to be Markov, and the same tools as in Section III-A can be used to prove that for Gaussian channels the Cover-Leung region and the Slepian-Wolf region are also maximized by jointly Gaussian distributions on $X_1 - U - X_2$ [4].

Theorem 2: The capacity region $\mathcal{C}_{\mathrm{IF}}$ of the Gaussian MAC with conferencing encoders and additive Gaussian interference sequence non-causally known at both encoders equals the capacity region $\mathcal{C}$ of the setting without interference,
$$\mathcal{C}_{\mathrm{IF}} = \mathcal{C}. \tag{18}$$
Note that Costa's result [2] on "Writing on Dirty Paper" and Gel'fand and Pinsker's result [8] on "Multi-access Writing on Dirty Paper" are special cases of Theorem 2.

III. Proof of Theorem 1

The achievability of $\mathcal{C}_{\mathcal{G}}$, i.e.,
$$\mathcal{C}_{\mathcal{G}} \subseteq \mathcal{C}, \tag{19}$$
follows by applying the scheme described in [1] with a Gaussian input distribution. The details are omitted.

A. Converse

To prove the converse, i.e.,
$$\mathcal{C} \subseteq \mathcal{C}_{\mathcal{G}}, \tag{20}$$
we first outer bound $\mathcal{C}$ by $\mathcal{C}_{\mathrm{Out}}$ (Lemma 1). The converse is then established by showing $\mathcal{C}_{\mathrm{Out}} = \mathcal{C}_{\mathcal{G}}$. To this end, in Lemma 2 we express the region $\mathcal{C}_{\mathcal{G}}$ in a form similar to $\mathcal{C}_{\mathrm{Out}}$, i.e., as a union of regions where the union is taken over certain distributions satisfying a Markov condition and power constraints. We then notice that $\mathcal{C}_{\mathrm{Out}}$ and $\mathcal{C}_{\mathcal{G}}$ differ only with respect to the set of distributions over which the unions are taken (see (21) and (23)): for $\mathcal{C}_{\mathrm{Out}}$ the union is taken over all distributions satisfying the Markov condition and the power constraints, and for $\mathcal{C}_{\mathcal{G}}$ the union is only over those that are Gaussian. We conclude the proof by showing that for $\mathcal{C}_{\mathrm{Out}}$ it suffices to take the union only over Gaussian distributions (Lemma 3).

Definition 2: For a given distribution $p_{X_1 U X_2}(\cdot, \cdot, \cdot)$ on the random triple $(X_1, U, X_2)$, define
$$\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2) \triangleq \left\{ (R_1, R_2) : \begin{aligned} R_1 &\le I(X_1; Y \mid X_2, U) + C_{12}, \\ R_2 &\le I(X_2; Y \mid X_1, U) + C_{21}, \\ R_1 + R_2 &\le I(X_1, X_2; Y \mid U) + C_{12} + C_{21}, \\ R_1 + R_2 &\le I(X_1, X_2; Y) \end{aligned} \right\},$$
where the mutual informations are computed with respect to the law
$$p_{U X_1 X_2 Y}(u, x_1, x_2, y) = p_{X_1 U X_2}(x_1, u, x_2)\, p(y \mid x_1, x_2),$$
where $p(y \mid x_1, x_2)$ denotes the channel law.
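Definition 1 lends itself to a direct numerical test: a rate pair belongs to $\mathcal{C}_{\mathcal{G}}$ if and only if some power split $(\beta_1, \beta_2)$ satisfies all four constraints of (16). The sketch below grids over $(\beta_1, \beta_2)$; the function name and all parameter values are our own illustrative choices.

```python
# Sketch (ours, with illustrative parameter values): test whether a rate
# pair lies in the region C_G of (16) by gridding over the power-split
# parameters (beta1, beta2). Rates are in nats per channel use.
import numpy as np

def in_CG(R1, R2, P1, P2, sigma2, C12, C21, grid=201):
    """True if (R1, R2) satisfies all four constraints of (16) for some
    0 <= beta1, beta2 <= 1 on the grid."""
    C = lambda snr: 0.5 * np.log1p(snr)  # Gaussian capacity, natural log
    for b1 in np.linspace(0.0, 1.0, grid):
        for b2 in np.linspace(0.0, 1.0, grid):
            coh = 2.0 * np.sqrt(P1 * P2 * (1.0 - b1) * (1.0 - b2))
            if (R1 <= C(b1 * P1 / sigma2) + C12
                    and R2 <= C(b2 * P2 / sigma2) + C21
                    and R1 + R2 <= C((b1 * P1 + b2 * P2) / sigma2) + C12 + C21
                    and R1 + R2 <= C((P1 + P2 + coh) / sigma2)):
                return True
    return False

# With C12 = C21 = 0 this collapses to the classical Gaussian MAC region;
# growing C12, C21 approach the fully-cooperative setting (cf. Section I).
print(in_CG(0.5, 0.5, P1=10.0, P2=10.0, sigma2=1.0, C12=0.1, C21=0.1))  # True
```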
Definition 3: Define the region
$$\mathcal{C}_{\mathrm{Out}} \triangleq \bigcup_{\substack{X_1 - U - X_2 \\ \mathbb{E}[X_1^2] \le P_1,\; \mathbb{E}[X_2^2] \le P_2}} \mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2), \tag{21}$$
where the union is over all joint distributions (not necessarily Gaussian) for which $X_1 - U - X_2$ is Markov and for which $\mathbb{E}[X_1^2] \le P_1$ and $\mathbb{E}[X_2^2] \le P_2$.

Lemma 1: The region $\mathcal{C}_{\mathrm{Out}}$ is an outer bound on the capacity region of the Gaussian MAC with conferencing encoders,
$$\mathcal{C} \subseteq \mathcal{C}_{\mathrm{Out}}. \tag{22}$$
Proof: Requires only a slight modification of the outer bound in [1] to account for the power constraints.

Lemma 2: The region $\mathcal{C}_{\mathcal{G}}$ in (16) can be expressed as
$$\mathcal{C}_{\mathcal{G}} = \bigcup_{\substack{X_1^G - U^G - X_2^G \\ \mathbb{E}[(X_1^G)^2] \le P_1,\; \mathbb{E}[(X_2^G)^2] \le P_2}} \mathcal{R}_{\mathrm{Conf}}(X_1^G, U^G, X_2^G), \tag{23}$$
where the superscript $G$ is used to indicate that the union is taken only over Gaussian Markov triples satisfying the second-moment constraints.
Proof: Follows by evaluating the various mutual information terms in the definition of $\mathcal{R}_{\mathrm{Conf}}$ for Gaussian distributions on $X_1 - U - X_2$.

The right-hand sides of (21) and (23) differ only with respect to the set of distributions over which the unions are taken. Therefore, in order to conclude the proof of the converse (20), by Lemma 1 and Equations (21) and (23), it suffices to show that there is no loss in optimality if the union in (21) is taken only over Gaussian Markov triples fulfilling the second-moment constraints. This is established by the following Lemma 3.

Lemma 3: For any Markov triple $X_1 - U - X_2$ fulfilling $\mathbb{E}[X_1^2] \le P_1$ and $\mathbb{E}[X_2^2] \le P_2$, there exists a Gaussian Markov triple $X_1^{G*} - V^{G*} - X_2^{G*}$ fulfilling the power constraints $\mathbb{E}[(X_1^{G*})^2] \le P_1$ and $\mathbb{E}[(X_2^{G*})^2] \le P_2$ such that
$$\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2) \subseteq \mathcal{R}_{\mathrm{Conf}}(X_1^{G*}, V^{G*}, X_2^{G*}). \tag{24}$$
We postpone the proof of Lemma 3 and first state a sequence of definitions and lemmas.

Lemma 4: For any (not necessarily Markov) random triple $(X_1, U, X_2)$ of finite second moments,
$$\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2) \subseteq \mathcal{R}_{\mathrm{Conf}}(X_1^G, U^G, X_2^G),$$
where $(X_1^G, U^G, X_2^G)$ is a centered Gaussian vector whose covariance matrix is equal to that of $(X_1, U, X_2)$.
Proof: Follows by a conditional version of the Max-Entropy Theorem [3, Theorem 12.1.1]; see also [5].

Definition 4: Define $\mathcal{K}_G$ as the set of $3 \times 3$ positive semi-definite matrices
$$K = \begin{pmatrix} k_{11} & k_{12} & k_{13} \\ k_{12} & k_{22} & k_{23} \\ k_{13} & k_{23} & k_{33} \end{pmatrix} \tag{25}$$
satisfying one of the two conditions:
1) $k_{22} \ne 0$ and $k_{13} k_{22} = k_{12} k_{23}$;
2) $k_{22} = k_{12} = k_{13} = k_{23} = 0$.

Lemma 5: A Gaussian triple is Markov if, and only if, its covariance matrix is in $\mathcal{K}_G$.
Proof: The "if" direction follows because the law of a Gaussian triple is fully characterized by its mean and its covariance matrix, and by noting that for any covariance matrix $K \in \mathcal{K}_G$ we can construct a Gaussian Markov triple of covariance matrix $K$. The proof of the latter is omitted. For the proof of the "only if" direction we assume a triple $A - B - C$ which forms a Markov chain in this order. We distinguish between two cases: $\mathrm{Var}(B) = 0$ and $\mathrm{Var}(B) \ne 0$. If $\mathrm{Var}(B) = 0$, then $B$ is deterministic and the Markov chain $A - B - C$ implies that $A$ and $C$ are independent. The covariance matrix of $(A, B, C)$ is then diagonal, and Condition 2) is satisfied.
If $\mathrm{Var}(B) \ne 0$, then define $A_0 \triangleq A - \mathbb{E}[A]$, $B_0 \triangleq B - \mathbb{E}[B]$, and $C_0 \triangleq C - \mathbb{E}[C]$, and compute
$$\mathbb{E}[A_0 C_0] = \mathbb{E}\big[\mathbb{E}[A_0 C_0 \mid B_0]\big] = \mathbb{E}\big[\mathbb{E}[A_0 \mid B_0]\,\mathbb{E}[C_0 \mid B_0]\big] = \mathbb{E}\left[\frac{\mathbb{E}[A_0 B_0]}{\mathrm{Var}(B_0)} B_0 \cdot \frac{\mathbb{E}[B_0 C_0]}{\mathrm{Var}(B_0)} B_0\right] = \frac{\mathbb{E}[A_0 B_0]\,\mathbb{E}[B_0 C_0]}{\mathbb{E}[B_0^2]}. \tag{26}$$
Here, the second equality follows by the Markovity and the third equality by the Gaussianity. Multiplying (26) by $\mathbb{E}[B_0^2] = \mathrm{Var}(B)$ yields Condition 1).

Lemma 6: Consider a Markov triple $X_1 - U - X_2$ with $X_1$ and $X_2$ of finite second moments. Let
$$V = \mathbb{E}[X_1 \mid U] - \mathbb{E}[X_1]. \tag{27}$$
Then the covariance matrix of the triple $(X_1, V, X_2)$ is in $\mathcal{K}_G$, and
$$\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2) \subseteq \mathcal{R}_{\mathrm{Conf}}(X_1, V, X_2). \tag{28}$$
Proof: The inclusion (28) follows from the following two observations. Replacing $U$ by a deterministic function of $U$ can only increase the mutual information expressions in $\mathcal{R}_{\mathrm{Conf}}$ that are conditioned on $U$. And changing $U$ does not change the joint distribution of $(X_1, X_2)$, and hence the unconditional mutual information expression remains the same. That the covariance matrix of the triple $(X_1, V, X_2)$ is in $\mathcal{K}_G$ follows because (27) and the Markov condition $X_1 - U - X_2$ imply that $\mathrm{Cov}[V, X_2] = \mathrm{Cov}[X_1, X_2]$ and $\mathrm{Cov}[V, X_1] = \mathrm{Var}(V)$.

Proof of Lemma 3: Let
$$V \triangleq \mathbb{E}[X_1 \mid U] - \mathbb{E}[X_1], \tag{29}$$
and define the triple $(X_1^{G*}, V^{G*}, X_2^{G*})$ to be zero-mean jointly Gaussian with the same covariance matrix as the triple $(X_1, V, X_2)$. To conclude the proof we shall show that $X_1^{G*} - V^{G*} - X_2^{G*}$ forms a Markov chain and that Condition (24) is satisfied.

That the triple $X_1^{G*} - V^{G*} - X_2^{G*}$ forms a Markov chain follows from the Gaussianity of $(X_1^{G*}, V^{G*}, X_2^{G*})$ and Lemma 5, because by Lemma 6 the covariance matrix of $(X_1, V, X_2)$ is in $\mathcal{K}_G$ and thus, by construction, so is the covariance matrix of $(X_1^{G*}, V^{G*}, X_2^{G*})$. Note that the triple $(X_1, V, X_2)$, even though its covariance matrix is in $\mathcal{K}_G$, does not necessarily form a Markov chain in this order, because it is not restricted to be Gaussian.

That Condition (24) is satisfied follows from the sequence of inclusions
$$\mathcal{R}_{\mathrm{Conf}}(X_1, U, X_2) \subseteq \mathcal{R}_{\mathrm{Conf}}(X_1, V, X_2) \subseteq \mathcal{R}_{\mathrm{Conf}}(X_1^{G*}, V^{G*}, X_2^{G*}), \tag{30}$$
where the first inclusion follows by Lemma 6 and the second by Lemma 4.

IV. Proof of Theorem 2

The "converse"
$$\mathcal{C}_{\mathrm{IF}} \subseteq \mathcal{C} \tag{31}$$
follows because $\mathcal{C}$ outer bounds the capacity region of the channel with interference even when the interference is also known at the receiver. It remains to prove the "direct part"
$$\mathcal{C} \subseteq \mathcal{C}_{\mathrm{IF}}, \tag{32}$$
i.e., that every rate pair in $\mathcal{C}$ is achievable in the presence of interference. This follows by Lemmas 7 and 8 ahead.

Definition 5: Define the region $\mathcal{R}_{\mathrm{Ach}}$ as the union over $0 \le \beta_1, \beta_2 \le 1$ of all rate pairs $(R_1, R_2)$ satisfying
$$R_1 \le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_1 P_1}{\sigma^2}\right) + C_{12}, \tag{33}$$
$$R_1 \le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_1 P_1}{\sigma^2}\right) + \tfrac{1}{2}\log\left(1 + \tfrac{\left(\sqrt{\bar{\beta}_1 P_1} + \sqrt{\bar{\beta}_2 P_2}\right)^2}{\beta_1 P_1 + \beta_2 P_2 + \sigma^2}\right), \tag{34}$$
$$R_2 \le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_2 P_2}{\sigma^2}\right) + C_{21}, \tag{35}$$
$$R_2 \le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_2 P_2}{\sigma^2}\right) + \tfrac{1}{2}\log\left(1 + \tfrac{\left(\sqrt{\bar{\beta}_1 P_1} + \sqrt{\bar{\beta}_2 P_2}\right)^2}{\beta_1 P_1 + \beta_2 P_2 + \sigma^2}\right), \tag{36}$$
$$R_1 + R_2 \le \tfrac{1}{2}\log\left(1 + \tfrac{\beta_1 P_1 + \beta_2 P_2}{\sigma^2}\right) + C_{12} + C_{21}, \tag{37}$$
$$R_1 + R_2 \le \tfrac{1}{2}\log\left(1 + \tfrac{P_1 + P_2 + 2\sqrt{P_1 P_2 \bar{\beta}_1 \bar{\beta}_2}}{\sigma^2}\right), \tag{38}$$
where $\bar{\beta}_1 = 1 - \beta_1$ and $\bar{\beta}_2 = 1 - \beta_2$.

Lemma 7: The capacity region $\mathcal{C}_{\mathrm{IF}}$ includes $\mathcal{R}_{\mathrm{Ach}}$:
$$\mathcal{C}_{\mathrm{IF}} \supseteq \mathcal{R}_{\mathrm{Ach}}. \tag{39}$$
Proof: See Section IV-A.
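Definition 5 adds the dirty-paper constraints (34) and (36) to the constraints appearing in (16). A numeric comparison of the two unions, sketched below with illustrative parameters of our own, is consistent with Lemma 8 ahead, which asserts that these extra constraints do not shrink the region.

```python
# Numeric sanity sketch (illustrative parameters of ours) comparing R_Ach of
# (33)-(38) with C_G of (16). Per Lemma 8 ahead, the extra constraints (34)
# and (36) should not shrink the union over (beta1, beta2).
import numpy as np

P1, P2, sigma2, C12, C21 = 10.0, 5.0, 1.0, 0.2, 0.3
C = lambda snr: 0.5 * np.log1p(snr)   # Gaussian capacity in nats
betas = np.linspace(0.0, 1.0, 101)

def in_region(R1, R2, with_dpc):
    """Grid test of (16), optionally adding constraints (34) and (36)."""
    for b1 in betas:
        for b2 in betas:
            coh2 = (np.sqrt((1 - b1) * P1) + np.sqrt((1 - b2) * P2)) ** 2
            dpc = C(coh2 / (b1 * P1 + b2 * P2 + sigma2))  # 2nd term of (34), (36)
            ok = (R1 <= C(b1 * P1 / sigma2) + C12
                  and R2 <= C(b2 * P2 / sigma2) + C21
                  and R1 + R2 <= C((b1 * P1 + b2 * P2) / sigma2) + C12 + C21
                  and R1 + R2 <= C((b1 * P1 + b2 * P2 + coh2) / sigma2))
            if with_dpc:
                ok = (ok and R1 <= C(b1 * P1 / sigma2) + dpc
                         and R2 <= C(b2 * P2 / sigma2) + dpc)
            if ok:
                return True
    return False

lost = [(round(R1, 2), round(R2, 2))
        for R1 in np.linspace(0.0, 1.5, 7) for R2 in np.linspace(0.0, 1.5, 7)
        if in_region(R1, R2, False) and not in_region(R1, R2, True)]
print("pairs in C_G lost under (34), (36):", lost)  # expect [] by Lemma 8
```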
Lemma 8: The achievable region $\mathcal{R}_{\mathrm{Ach}}$ equals the capacity region $\mathcal{C}$ of the Gaussian MAC with conferencing encoders,
$$\mathcal{R}_{\mathrm{Ach}} = \mathcal{C}.$$

A. Coding Technique Achieving $\mathcal{R}_{\mathrm{Ach}}$

In this section we sketch a coding technique that achieves the region $\mathcal{R}_{\mathrm{Ach}}$. The analysis is omitted.

The two transmitters first create a common message by communicating over the pipes as in [1]. Thus, after the conference, Transmitter 1 is cognizant of the common message and of an independent private message. It allocates power $(1 - \beta_1) P_1$ to the common message and power $\beta_1 P_1$ to the private message. Similarly for Transmitter 2.

The coding technique involves time-sharing between two schemes. Both schemes apply successive decoding at the receiver, where the receiver first decodes the common message followed by the private messages. But they differ in the decoding order of the private messages. We describe the scheme where the decoding of the common message is followed by the decoding of the private message of Transmitter 1 and only thereafter by the decoding of the private message of Transmitter 2. The other scheme, where the decoding of the common message is followed by the private message of Transmitter 2, is analogous.

We first describe the encoding of the common message. Before transmission begins, the transmitters agree on a (single-user) dirty-paper code for power
$$P_0 \triangleq \left(\sqrt{(1-\beta_1) P_1} + \sqrt{(1-\beta_2) P_2}\right)^2,$$
noise variance $(\beta_1 P_1 + \beta_2 P_2 + \sigma^2)$, and interference $S$. Transmitter 1 encodes the common message using this dirty-paper code and scales the resulting sequence by $\sqrt{(1-\beta_1) P_1}/\sqrt{P_0}$. Transmitter 2 encodes the common message with the same code, but scales the resulting sequence by $\sqrt{(1-\beta_2) P_2}/\sqrt{P_0}$. (The channel will coherently combine the two sequences.)

Independently of the common message, the transmitters encode the private messages. Transmitter 1 encodes its private message using a dirty-paper code for power $\beta_1 P_1$, noise variance $\sigma^2 + \beta_2 P_2$, and interference
$$S_1 \triangleq \left(1 - \frac{\left(\sqrt{(1-\beta_1) P_1} + \sqrt{(1-\beta_2) P_2}\right)^2}{P_1 + P_2 + 2\sqrt{(1-\beta_1)(1-\beta_2) P_1 P_2} + \sigma^2}\right) S.$$
Transmitter 2 encodes its private message with a dirty-paper code for power $\beta_2 P_2$, noise variance $\sigma^2$, and interference
$$S_2 \triangleq \left(1 - \frac{\beta_1 P_1}{\beta_1 P_1 + \beta_2 P_2 + \sigma^2}\right) S_1.$$
Each transmitter sends the sum of the two sequences produced for the common message and for its private message.

The receiver performs successive decoding. It starts by decoding the common message based on nearest-neighbor decoding as in [9], while treating the sequences that the transmitters produced for the private messages as additional noise. Then the receiver subtracts (or "strips off") the decoded common-message codeword from the channel outputs and proceeds to decode the private message of Transmitter 1. (Here, the "common-message codeword" is not the resulting sequence of the dirty-paper code, but the codeword in the bin of the common message which was selected during the encoding procedure of the dirty-paper code.) To decode the private message of Transmitter 1, the receiver again uses nearest-neighbor decoding and treats the sequence that Transmitter 2 produced for its private message as additional noise. Finally, it subtracts the decoded Transmitter-1 private-message codeword and decodes the private message of Transmitter 2.
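The per-stage parameters of this scheme can be tabulated directly. The sketch below, with assumed powers and splits of our own, computes the common-message power $P_0$, the effective noise at each stage, the residual-interference scalings that define $S_1$ and $S_2$, and the three dirty-paper rates.

```python
# Bookkeeping sketch (assumed parameters) for the successive-decoding scheme
# of Section IV-A: per-stage powers, effective noises, residual-interference
# scalings (Remark 2), and the resulting dirty-paper rates.
import numpy as np

P1, P2, sigma2 = 10.0, 5.0, 1.0
beta1, beta2 = 0.4, 0.6                      # private-message power splits

# Stage 1: common message, sent coherently by both transmitters.
P0 = (np.sqrt((1 - beta1) * P1) + np.sqrt((1 - beta2) * P2)) ** 2
N0 = beta1 * P1 + beta2 * P2 + sigma2        # private signals act as noise
R_common = 0.5 * np.log(1.0 + P0 / N0)       # dirty-paper coding removes S

# Residual interference after stripping the common codeword; this matches
# the paper's S1 coefficient, since its denominator equals P0 + N0.
s1_scale = 1.0 - P0 / (P0 + N0)              # S1 = s1_scale * S

# Stage 2: private message of Transmitter 1 (noise sigma^2 + beta2*P2).
R_priv1 = 0.5 * np.log(1.0 + beta1 * P1 / (beta2 * P2 + sigma2))
s2_scale = s1_scale * (1.0 - beta1 * P1 / (beta1 * P1 + beta2 * P2 + sigma2))

# Stage 3: private message of Transmitter 2 (interference S2 = s2_scale * S).
R_priv2 = 0.5 * np.log(1.0 + beta2 * P2 / sigma2)

print(f"R_common={R_common:.3f}, R_priv1={R_priv1:.3f}, "
      f"R_priv2={R_priv2:.3f} nats/channel use")
```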
Remark 2: To encode the different messages the transmitters use dirty-paper codes for scaled versions of the interference $S$. The reason for this is that the codewords that the receiver subtracts depend on the interference sequence $S$, and the resulting channel seen in subsequent decoding phases is interfered by a scaled version of $S$.

Remark 3: Our coding scheme differs from Willems's scheme [1] (for the setting without interference). In his scheme the transmitters apply superposition coding and the receiver applies joint decoding. Our approach has two advantages: it simplifies the analysis, and it achieves the same result also if the noise sequence $\{Z_t\}$ is not IID Gaussian but any arbitrary ergodic process of second moment $\sigma^2$. However, our approach leads to the additional constraints (34) and (36). Fortunately, by Lemma 8, these additional constraints do not shrink the resulting region.

Acknowledgment

We would like to thank Stephan Tinguely, İ. Emre Telatar, and Tsachy Weissman for helpful discussions. The research of Michèle A. Wigger was supported by the Swiss National Science Foundation under Grant 200021-111863/1.

References

[1] F. M. J. Willems, "The discrete memoryless multiple access channel with partially cooperating encoders," IEEE Trans. on Inform. Theory, vol. 29, no. 3, pp. 441-445, May 1983.
[2] M. H. M. Costa, "Writing on dirty paper," IEEE Trans. on Inform. Theory, vol. 29, no. 3, pp. 439-441, May 1983.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, 2nd edition, Wiley, 2006.
[4] V. Venkatesan, "Optimality of Gaussian inputs for a multi-access achievable rate region," Semester Thesis, ETH Zurich, Switzerland, June 2007.
[5] J. Thomas, "Feedback can at most double Gaussian multiple access channel capacity," IEEE Trans. on Inform. Theory, vol. 33, no. 5, Sep. 1987.
[6] T. M. Cover and C. S. K. Leung, "An achievable rate region for the multiple-access channel with feedback," IEEE Trans. on Inform. Theory, vol. 27, no. 3, pp. 292-298, May 1981.
[7] D. Slepian and J. K. Wolf, "A coding theorem for multiple access channels with correlated sources," Bell System Tech. J., vol. 52, pp. 1037-1076, Sept. 1973.
[8] S. Gel'fand and M. Pinsker, "On Gaussian channels with random parameters," in Proc. ISIT 1984, Tashkent, U.S.S.R., Sep. 1984, pp. 247-250.
[9] A. S. Cohen and A. Lapidoth, "The Gaussian watermarking game," IEEE Trans. on Inform. Theory, vol. 48, no. 6, June 2002.
