Communicating the Difference of Correlated Gaussian Sources Over a MAC


Authors: Rajiv Soundararajan, Sriram Vishwanath

Department of Electrical and Computer Engineering, The University of Texas at Austin
1 University Station C0803, Austin, TX 78712, USA
Email: rajiv.s@mail.utexas.edu and sriram@ece.utexas.edu

Abstract

This paper considers the problem of transmitting the difference of two jointly Gaussian sources over a two-user additive Gaussian noise multiple access channel (MAC). The goal is to recover this difference within an average mean squared error distortion criterion. Each transmitter has access to only one of the two Gaussian sources and is limited by an average power constraint. In this work, a lattice coding scheme that achieves a distortion within a constant factor of a distortion lower bound is presented for the case where the signal to noise ratio (SNR) is greater than a threshold. Further, uncoded transmission is shown to be inferior in performance to lattice coding methods. An alternative lattice coding scheme is presented that can potentially improve on the performance of uncoded transmission.

I. INTRODUCTION

In this paper, we consider the joint source-channel coding problem of transmitting the difference of two positively correlated Gaussians in a distributed fashion over an additive Gaussian noise multiple access channel (MAC). Each transmitter in the MAC has, as its message, one component of the bivariate Gaussian source, and its codebook is constrained by a second moment (average power) requirement. We estimate the difference between the two correlated sources while incurring the lowest possible mean squared error at the receiver. The distortion suffered by the difference between the two sources is a function of the power constraints at the two transmitters as well as the channel statistics.
In general, there is no separation between source and channel coding over MACs, and a joint coding scheme is desired. There has been significant related work on both the source and channel aspects of this problem. In [1], the authors consider the problem of communicating a bivariate Gaussian source over a Gaussian MAC to recover both components subject to individual distortion constraints. In [2], the problem of recovering a single Gaussian source through a Gaussian sensor network is considered. Subsequently, the authors also address the problem of communicating the sum of independent Gaussian sources over a Gaussian MAC in [3]. In the domain of source coding, [4] considers and solves the two-terminal Gaussian source coding problem, while a distributed lattice-based coding scheme for reconstructing a linear function of jointly Gaussian sources is developed in [5]. In [6], an outer bound on the rate region for the distributed compression of linear functions of two Gaussian sources for certain correlations is presented. This bound indicates that existing achievable schemes are suboptimal.

In this work, we present a lattice coding scheme for the distributed transmission of the difference of Gaussians over the MAC. Note that, for a different setting, lattice codes have been previously considered for joint source-channel coding in [7]. The key contributions of this paper are as follows:

1) We present a lower bound on the distortion incurred while estimating the difference between the sources over a Gaussian MAC. This lower bound is based on augmenting the receiver with a random variable that induces conditional independence between the two sources and considering a statistically equivalent system of two parallel channels from each of the transmitters to the same receiver.
This genie-aided bound is based on the work in [4] and [6], where the authors determine a lower bound on distortion in a source coding setting.

2) We develop a lattice coding scheme for communicating the difference of the two sources over this channel. The scheme we present for the MAC is similar in spirit to the scheme in [3] and is an extension of [3] to correlated sources.

3) We show that our scheme performs "close" to the lower bound by showing that the logarithm of the ratio of the distortion achieved to the distortion lower bound is 1 bit if the signal to noise ratio (SNR) is greater than a threshold. We show that the lattice-based transmission scheme provides an improvement in distortion over uncoded transmission.

4) Finally, we propose a common dither based lattice coding scheme in which the channel inputs of the two users are correlated (by using the same dither). This correlation can potentially reduce the distortion and can therefore come closer to the lower bound in terms of performance.

The rest of the paper is organized as follows. We develop the system model and notation in Section II. We present a lower bound on achievable distortion in Section III. In Section IV, we characterize the distortion achieved using an uncoded transmission scheme. In Sections V and VI, we describe the scaled lattice and common dither based lattice coding schemes and analyze their performance. Finally, we conclude the paper with Section VII.

II. SYSTEM MODEL AND NOTATION

We briefly explain the notation used in this paper before presenting the system model. We use capitals to denote random variables and boldface capitals to denote matrices. $E$ is used for the expectation of a random variable, while we refer to an $n$-length vector as $x^n$.
Throughout the paper, logarithms are with respect to base 2, and the square of the 2-norm of an $n$-length vector $x^n$ is denoted as

$\|x^n\|_2^2 = \sum_{i=1}^{n} (x(i))^2.$

The system model is depicted in Fig. 1.

Fig. 1. System model: Encoders 1 and 2 map $S_1^n$ and $S_2^n$ to channel inputs $X_1^n$ and $X_2^n$; the decoder forms $\hat{S}_3^n$ from $Y^n = X_1^n + X_2^n + Z^n$.

Consider independent and identically distributed (i.i.d.) $n$-length sequences of Gaussian random variables, $\{S_1(i)\}_{i=1}^{n}$ and $\{S_2(i)\}_{i=1}^{n}$. The covariance matrix of $(S_1(i), S_2(i))$ is given by

$\Sigma = \begin{pmatrix} \sigma^2 & \rho\sigma^2 \\ \rho\sigma^2 & \sigma^2 \end{pmatrix}$

for all $i = 1, 2, \ldots, n$. Without loss of generality, we assume $\rho > 0$ for the purposes of this paper. Transmitter $k$ in the MAC has a realization of $S_k^n$ for $k = 1, 2$. Also, the number of source samples observed is equal to the number of channel uses available; thus, the system has a bandwidth expansion factor of 1. The channel input sequence at each user is a function of the observed source sequence such that a power constraint is satisfied. Mathematically, the channel input $\{X_k(i)\}_{i=1}^{n} = f_k(\{S_k(i)\}_{i=1}^{n})$ for $k = 1, 2$. The power constraint is expressed as

$\frac{1}{n} \sum_{i=1}^{n} E[(X_k(i))^2] \le P.$

The noise $\{Z(i)\}_{i=1}^{n}$ is a sequence of i.i.d. Gaussian random variables with zero mean and variance $N$. The received signal at time instant $i$ is given by $Y(i) = X_1(i) + X_2(i) + Z(i)$. We wish to estimate the sequence of the difference $\{S_1(i) - S_2(i)\}_{i=1}^{n}$ at the receiver, given the received sequence $\{Y(i)\}_{i=1}^{n}$, within a distortion. The distortion metric considered is the time-average mean squared error. Let $S_3(i) = S_1(i) - S_2(i)$ and let the estimated sequence be $\{\hat{S}_3(i)\}_{i=1}^{n}$. The distortion $D$ is defined as

$D = \frac{1}{n} \sum_{i=1}^{n} E[(S_3(i) - \hat{S}_3(i))^2].$

Next, we present a lower bound on $D$.
III. LOWER BOUND ON DISTORTION WHEN DETERMINING THE DIFFERENCE OF JOINTLY GAUSSIAN SOURCES

We now present a lower bound on the distortion incurred in the distributed transmission of the difference of correlated sources. One of the ideas used in the proof is augmenting the receiver with a random variable that induces conditional independence between $S_1$ and $S_2$, as presented in [6]. We consider the following representation for the Gaussian sources $(S_1, S_2)$:

$S_1 = \sqrt{\rho}\, S + V_1$
$S_2 = \sqrt{\rho}\, S + V_2,$

where $S$, $V_1$ and $V_2$ are independent Gaussian random variables with mean zero and variances $\sigma^2$, $\sigma^2(1-\rho)$ and $\sigma^2(1-\rho)$, respectively. Note that, by supplying the receiver with the sequence $S^n$, the distortion incurred can only decrease.

Fig. 2. Parallel channels: each encoder output $X_k^n$ reaches the decoder over its own channel as $Y_k^n = X_k^n + Z_k^n$.

Further, we lower bound the distortion by considering a modified channel setting, as shown in Fig. 2. This modified channel is a memoryless Gaussian channel which at time $i$ is represented mathematically as

$Y_1(i) = X_1(i) + Z_1(i)$
$Y_2(i) = X_2(i) + Z_2(i),$

where $Z_1(i)$ and $Z_2(i)$ are Gaussian random variables with mean zero and variance $N/2$, independent of each other and of $X_1(i)$ and $X_2(i)$. The receiver obtains an estimate of the difference based on the observations of the vector $(Y_1^n, Y_2^n)$. The distortion incurred on this channel is a lower bound on the distortion resulting from the original channel, since the output of the original channel, $X_1^n + X_2^n + Z^n$, is a function of the output of the modified channel, which is the vector $(X_1^n + Z_1^n, X_2^n + Z_2^n)$. Note that the output of the original channel (in Fig. 1) and the sum of the outputs of the modified channel (in Fig. 2) are statistically equivalent.
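As a numerical sanity check (our own illustration; the parameter values are assumed), sampling $S$, $V_1$, $V_2$ with the stated variances reproduces the covariance matrix $\Sigma$ of Section II:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, rho, n = 4.0, 0.9, 1_000_000  # illustrative values, not from the paper

# S ~ N(0, sigma^2); V1, V2 ~ N(0, sigma^2 (1 - rho)); all mutually independent
S = rng.normal(0.0, np.sqrt(sigma2), n)
V1 = rng.normal(0.0, np.sqrt(sigma2 * (1 - rho)), n)
V2 = rng.normal(0.0, np.sqrt(sigma2 * (1 - rho)), n)

S1 = np.sqrt(rho) * S + V1
S2 = np.sqrt(rho) * S + V2

# Empirical covariance should match [[sigma^2, rho sigma^2], [rho sigma^2, sigma^2]]
Sigma_hat = np.cov(S1, S2)
print(Sigma_hat)
```

Indeed, $\mathrm{Var}(S_k) = \rho\sigma^2 + \sigma^2(1-\rho) = \sigma^2$ and $\mathrm{Cov}(S_1, S_2) = \rho\sigma^2$, as the representation requires.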
The distortion incurred in the modified channel with side information $S^n$ at the receiver satisfies

$D \ge \frac{1}{n} \sum_{i=1}^{n} E\big[(S_1(i) - S_2(i) - E[S_1(i) \mid S^n, Y_1(i), Y_2(i)] + E[S_2(i) \mid S^n, Y_1(i), Y_2(i)])^2\big]$
$= \frac{1}{n} \sum_{i=1}^{n} \Big( E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i), Y_2(i)])^2\big] + E\big[(S_2(i) - E[S_2(i) \mid S^n, Y_1(i), Y_2(i)])^2\big] - 2\, E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i), Y_2(i)])(S_2(i) - E[S_2(i) \mid S^n, Y_1(i), Y_2(i)])\big] \Big).$

The Markov condition

$Y_1^n \leftrightarrow X_1^n \leftrightarrow S_1^n \leftrightarrow S^n \leftrightarrow S_2^n \leftrightarrow X_2^n \leftrightarrow Y_2^n \quad (1)$

implies that

$E[S_1(i) \mid S^n, Y_1(i), Y_2(i)] = E[S_1(i) \mid S^n, Y_1(i)],$
$E[S_2(i) \mid S^n, Y_1(i), Y_2(i)] = E[S_2(i) \mid S^n, Y_2(i)].$

Therefore,

$D \ge \frac{1}{n} \sum_{i=1}^{n} \Big( E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)])^2\big] + E\big[(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)])^2\big] - 2\, E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)])(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)])\big] \Big). \quad (2)$

We observe that

$\frac{1}{n} \sum_{i=1}^{n} E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)])^2\big] = \frac{\sigma^2(1-\rho)}{1 + 2P/N},$
$\frac{1}{n} \sum_{i=1}^{n} E\big[(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)])^2\big] = \frac{\sigma^2(1-\rho)}{1 + 2P/N}, \quad (3)$

since these are the average squared error distortions in $S_1^n$ and $S_2^n$ when each is transmitted over a point-to-point Gaussian channel with noise variance $N/2$, power constraint $P$, and conditional variances $\mathrm{Var}(S_1 \mid S) = \mathrm{Var}(V_1) = \sigma^2(1-\rho)$ and $\mathrm{Var}(S_2 \mid S) = \mathrm{Var}(V_2) = \sigma^2(1-\rho)$. Now, the conditional cross-correlation factors as

$E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)])(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)]) \mid S^n\big] = E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)]) \mid S^n\big] \, E\big[(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)]) \mid S^n\big],$

due to the Markov condition stated in (1).
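Each term in (3) can be checked by simulation: conditioned on $S$, estimating $S_1$ over its point-to-point channel is equivalent to estimating $V_1$, and uncoded transmission of $V_1$ over a Gaussian channel with noise variance $N/2$ attains $\sigma^2(1-\rho)/(1 + 2P/N)$. A minimal sketch with assumed parameter values (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, rho, P, N, n = 4.0, 0.9, 2.0, 1.0, 1_000_000  # assumed values

var_v = sigma2 * (1 - rho)              # Var(S1 | S) = Var(V1)
V1 = rng.normal(0.0, np.sqrt(var_v), n)
Z1 = rng.normal(0.0, np.sqrt(N / 2), n)

X = np.sqrt(P / var_v) * V1             # uncoded scaling meets power constraint P
Y = X + Z1

# MMSE coefficient E[V1 Y] / E[Y^2] and the resulting distortion
beta = np.sqrt(P / var_v) * var_v / (P + N / 2)
D_mc = np.mean((V1 - beta * Y) ** 2)
D_formula = var_v / (1 + 2 * P / N)     # right-hand side of (3)
print(D_mc, D_formula)
```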
But, by the tower rule for expectations, we have $E[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)]) \mid S^n] = 0$ a.s., for all $i = 1, 2, \ldots, n$. Thus

$E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)])(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)]) \mid S^n\big] = 0 \;\text{a.s.}$
$\Rightarrow E\big[(S_1(i) - E[S_1(i) \mid S^n, Y_1(i)])(S_2(i) - E[S_2(i) \mid S^n, Y_2(i)])\big] = 0. \quad (4)$

By combining (2), (3) and (4), we get

$D_{\mathrm{bound}} = \frac{2\sigma^2(1-\rho)}{1 + 2P/N}.$

In the following sections, we discuss the performance of various achievable schemes relative to this distortion bound.

IV. UNCODED TRANSMISSION

In this section, we compute the distortion resulting from uncoded transmission used to communicate the difference $\{S_1(i) - S_2(i)\}_{i=1}^{n}$. In this setting, Transmitter 1 sends the scaled source sample $\sqrt{P/\sigma^2}\, S_1(i)$ at time instant $i$, and Transmitter 2 sends $-\sqrt{P/\sigma^2}\, S_2(i)$ at time instant $i$. The scaling is chosen such that both users satisfy their respective power constraints. The received sequence is given by

$Y(i) = \sqrt{P/\sigma^2}\, (S_1(i) - S_2(i)) + Z(i).$

The receiver determines the minimum mean squared error (MMSE) estimate of the difference $S_1(i) - S_2(i)$ based on the received signal $Y(i)$. The distortion resulting from this process can be calculated as

$D = \frac{1}{n} E\big[\|S_3^n - E[S_3^n \mid Y^n]\|_2^2\big] = 2\sigma^2(1-\rho) - \frac{\big(2\sigma\sqrt{P}\,(1-\rho)\big)^2}{2P(1-\rho) + N} = \frac{2\sigma^2(1-\rho)}{1 + 2P(1-\rho)/N}.$

Note that the distortion resulting from uncoded transmission does not meet the lower bound for any $\rho > 0$. In the next two sections, we describe lattice-based coding schemes which perform better than uncoded transmission (thus resulting in a lower distortion).

V. LATTICE CODING SCHEME

We now describe a scaling-based lattice scheme to communicate the difference of the two sources.
An implicit assumption that we make in the study and design of lattice schemes is that $P \le \sigma^2$. In effect, for $P > \sigma^2$, the lattice quantization scheme presented below reduces to uncoded transmission. We briefly review some features of lattice codes and quantizers before we present the scheme. A lattice of dimension $n$ is defined as the set $\Lambda = \{x = zG : z \in \mathbb{Z}^n\}$, where $G \in \mathbb{Z}^{n \times n}$ is known as the generator matrix and $\mathbb{Z}$ is the set of all integers. The quantized value of $x \in \mathbb{R}^n$ is $Q(x) = \arg\min_{r \in \Lambda} \|x - r\|$. The fundamental Voronoi region of $\Lambda$ is defined as $\nu_0 = \{x \in \mathbb{R}^n : Q(x) = 0\}$, and the (per-dimension) second moment of the lattice is defined as

$\sigma^2(\Lambda) = \frac{1}{n} \, \frac{\int_{\nu_0} \|x\|^2 \, dx}{\int_{\nu_0} dx}.$

Further, we use the notation $x \bmod \Lambda = x - Q(x)$.

The lattice coding scheme described below is similar in nature to the lattice coding scheme used in [3] for joint source-channel coding of the sum of independent Gaussian sources. Consider $\Lambda$, a lattice of dimension $n$ with second moment $\sigma^2(\Lambda) = P$. We choose the same lattice $\Lambda$ at both users, such that it is good for both source and channel coding. The proof of existence of such a lattice and its construction are detailed in [8]. Let $U_1^n$ and $U_2^n$ be independent dithers (independent of each other and of the sources) which are uniformly distributed over the fundamental Voronoi region $\nu_0$ and known at the receiver. The $n$-length channel input at each transmitter is

$X_1^n = (\gamma S_1^n - U_1^n) \bmod \Lambda$
$X_2^n = (-\gamma S_2^n - U_2^n) \bmod \Lambda,$

where $\gamma$ is a scalar which is chosen later. The signal at the receiver is given by $Y^n = X_1^n + X_2^n + Z^n$. The decoder performs the following operations to estimate the difference.
$Y_1^n = [\alpha Y^n + U_1^n + U_2^n] \bmod \Lambda$
$= [\alpha (X_1^n + X_2^n + Z^n) + U_1^n + U_2^n] \bmod \Lambda$
$= [\gamma (S_1^n - S_2^n) + (\alpha - 1)\big((\gamma S_1^n - U_1^n) \bmod \Lambda + (-\gamma S_2^n - U_2^n) \bmod \Lambda\big) + \alpha Z^n] \bmod \Lambda$
$= [\gamma (S_1^n - S_2^n) + Z_1^n] \bmod \Lambda,$

where $Z_1^n = (\alpha - 1)\big((\gamma S_1^n - U_1^n) \bmod \Lambda + (-\gamma S_2^n - U_2^n) \bmod \Lambda\big) + \alpha Z^n$ is the effective noise. Note that each term in the effective noise is independent of the source, since each dither is chosen uniformly in the fundamental Voronoi region and independent of the sources [9], and the original noise $Z^n$ is also independent of the sources. By choosing $\alpha = \frac{2P}{2P+N}$, the MMSE coefficient, we reduce the variance of the effective noise to $\frac{2PN}{2P+N}$. Since $\Lambda$ is chosen to be a good channel lattice, if

$\gamma^2 \cdot 2\sigma^2(1-\rho) + \frac{2PN}{2P+N} \le P, \quad (5)$

we know from [10] that we can decode correctly, i.e., $[\gamma (S_1^n - S_2^n) + Z_1^n] \bmod \Lambda = \gamma (S_1^n - S_2^n) + Z_1^n$. Therefore, if $P/N > 1/2$, we choose $\gamma$ satisfying (5) with equality. Mathematically, $\gamma$ satisfies

$\gamma^2 \cdot 2\sigma^2(1-\rho) + \frac{2PN}{2P+N} = P \;\Rightarrow\; \gamma^2 \cdot 2\sigma^2(1-\rho)\,\frac{2P+N}{2PN} + 1 = \frac{2P+N}{2N}. \quad (6)$

Under the assumption of correct decoding, we have $Y_1^n = \gamma (S_1^n - S_2^n) + Z_1^n$. We now multiply the received signal by $\frac{1-K}{\gamma}$, where

$K = \frac{2PN}{2PN + 2\sigma^2(1-\rho)\gamma^2(2P+N)},$

to obtain

$\hat{S}_3^n = \frac{1-K}{\gamma}\,(\gamma S_3^n + Z_1^n) = S_3^n - K S_3^n + \frac{1-K}{\gamma} Z_1^n.$

The average distortion that is achieved is simply the time average of the expectation of the squared 2-norm of $\frac{1-K}{\gamma} Z_1^n - K S_3^n$, which can be calculated as

$D_{\mathrm{lattice}} = \frac{1}{n} E\left[\left\|\frac{1-K}{\gamma} Z_1^n - K S_3^n\right\|_2^2\right] = \frac{2\sigma^2(1-\rho)}{1 + \frac{2\sigma^2(1-\rho)\gamma^2(2P+N)}{2PN}} = \frac{2\sigma^2(1-\rho)}{\frac{P}{N} + \frac{1}{2}},$

where the last equality follows from (6).
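The decoder chain above repeatedly uses the fact that lattice points vanish under the mod operation, i.e., $(x \bmod \Lambda + y) \bmod \Lambda = (x + y) \bmod \Lambda$, and that a dithered input is uniform over the fundamental Voronoi region with power equal to the lattice second moment. Both facts are easy to check numerically in one dimension; a minimal sketch, assuming the scaled integer lattice $\Lambda = q\mathbb{Z}$ (our illustrative choice; the high-dimensional lattices of [8] are not constructed here):

```python
import numpy as np

q = 4.0  # one-dimensional lattice Lambda = q*Z; Voronoi region is [-q/2, q/2)

def mod_lattice(x):
    """x mod Lambda = x - Q(x), with Q(x) the nearest point of q*Z."""
    return x - q * np.round(x / q)

rng = np.random.default_rng(2)
x, y = rng.normal(0.0, 3.0, 100_000), rng.normal(0.0, 3.0, 100_000)

# Lattice points vanish under mod: (x mod L + y) mod L == (x + y) mod L
lhs = mod_lattice(mod_lattice(x) + y)
rhs = mod_lattice(x + y)

# Crypto lemma: (s - U) mod L is uniform over the Voronoi region when the
# dither U is, so its power equals the second moment q^2/12 regardless of s
U = rng.uniform(-q / 2, q / 2, 1_000_000)
X = mod_lattice(1.7 - U)
print(np.max(np.abs(lhs - rhs)), np.mean(X**2), q**2 / 12)
```

The second moment $q^2/12$ here is the one-dimensional instance of $\sigma^2(\Lambda)$, which the scheme sets equal to $P$.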
The lattice-based coding scheme developed above is close to the distortion bound presented in Section III in the sense that the logarithm of the ratio of the distortion achieved by the lattice scheme to the distortion bound is one bit for any SNR $> 1/2$. This is because

$\log \frac{D_{\mathrm{lattice}}}{D_{\mathrm{bound}}} = \log \frac{\;\frac{2\sigma^2(1-\rho)}{P/N + 1/2}\;}{\;\frac{2\sigma^2(1-\rho)}{1 + 2P/N}\;} = \log 2 = 1.$

The SNR condition is necessary for the existence of the above lattice scheme, as discussed earlier.

VI. COMMON DITHER BASED LATTICE CODING SCHEME

We now propose an alternative lattice coding scheme based on using a common dither at both terminals. Let $U^n$ be the common dither at both terminals; the rest of the parameters of the lattice code are the same as in the previous section. The channel input at each user is given by

$X_1^n = (S_1^n - U^n) \bmod \Lambda$
$X_2^n = -(S_2^n - U^n) \bmod \Lambda.$

We know that $X_k^n$ is independent of $S_k^n$ for $k = 1, 2$ and is uniformly distributed over the fundamental Voronoi region of the lattice $\Lambda$ [9]. However, $X_1^n$ and $X_2^n$ are no longer independent. Let $\rho'$ denote the correlation coefficient between $X_1^n$ and $X_2^n$. In this scheme, we perform the same sequence of operations at the receiver as in the previous lattice-based scheme. Thus we obtain

$Y_1^n = [S_1^n - S_2^n + Z_1^n] \bmod \Lambda,$

where $Z_1^n = (\alpha - 1)\big((S_1^n - U^n) \bmod \Lambda - (S_2^n - U^n) \bmod \Lambda\big) + \alpha Z^n$ is the effective noise. By choosing $\alpha = \frac{2P(1+\rho')}{2P(1+\rho') + N}$, the variance of $Z_1^n$ can be reduced to $\frac{2P(1+\rho')N}{2P(1+\rho') + N}$. Again, as before, the effective noise term is independent of $S_1^n - S_2^n$. Moreover, $[S_1^n - S_2^n + Z_1^n] \bmod \Lambda = S_1^n - S_2^n + Z_1^n$ if

$2\sigma^2(1-\rho) + \frac{2P(1+\rho')N}{2P(1+\rho') + N} \le P.$

Thus we will be able to decode correctly for all $P$, $N$, $\rho$ and $\rho'$ satisfying the above condition.
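Putting the pieces together numerically: solving (5) with equality for $\gamma^2$, forming $K$, and comparing against $D_{\mathrm{bound}}$ and the uncoded distortion of Section IV reproduces both the one-bit gap and, at these parameter values, the improvement over uncoded transmission. A sketch with assumed values satisfying $P \le \sigma^2$ and $P/N > 1/2$ (our own illustration):

```python
import numpy as np

sigma2, rho, P, N = 4.0, 0.9, 2.0, 1.0   # assumed values; P <= sigma^2, P/N > 1/2
var_s3 = 2 * sigma2 * (1 - rho)           # Var(S1 - S2)

# gamma^2 from (5) with equality; positive exactly when P/N > 1/2
noise_eff = 2 * P * N / (2 * P + N)       # effective noise variance after MMSE alpha
gamma2 = (P - noise_eff) / var_s3

# Distortion of the scaled lattice scheme: K form and closed form
K = 2 * P * N / (2 * P * N + var_s3 * gamma2 * (2 * P + N))
D_lattice = K * var_s3
D_closed = var_s3 / (P / N + 0.5)

D_bound = var_s3 / (1 + 2 * P / N)
D_uncoded = var_s3 / (1 + 2 * P * (1 - rho) / N)
gap_bits = np.log2(D_lattice / D_bound)
print(D_lattice, D_closed, D_uncoded, gap_bits)
```

Here $D_{\mathrm{lattice}} = 0.32$ against $D_{\mathrm{uncoded}} \approx 0.571$, and the gap to the bound is exactly one bit.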
Multiplying the signal $Y_1^n$ by $1 - K$, where

$K = \frac{2P(1+\rho')N}{2P(1+\rho')N + 2\sigma^2(1-\rho)\big(2P(1+\rho') + N\big)},$

the net distortion can be calculated similarly as

$D = \frac{1}{n} E\big[\|(1-K) Z_1^n - K S_3^n\|_2^2\big] = \frac{2\sigma^2(1-\rho)}{1 + \frac{2\sigma^2(1-\rho)\left(2P(1+\rho') + N\right)}{2P(1+\rho')N}}.$

In general, the distortion resulting from the common dither based scheme is better than that resulting from the independent dither based scheme. This improvement depends on $\rho'$, the correlation between the channel inputs. Characterizing $\rho'$ is in general a non-trivial task, as it depends on both source and channel parameters, and it is therefore left uncharacterized in this paper.

VII. CONCLUSION

We present two lattice coding schemes for the distributed source-channel communication of the difference of two jointly Gaussian sources. In the scaling-based lattice coding scheme, we show that we can find the scaling parameter $\gamma$ to achieve a distortion very close to the lower bound on the distortion if SNR $> 1/2$. Future work includes exploring lattice-based schemes to compute more general linear functions of correlated Gaussian sources over a MAC.

VIII. ACKNOWLEDGMENT

The authors thank Aaron Wagner and Ram Zamir for their helpful comments.

REFERENCES

[1] A. Lapidoth and S. Tinguely, "Sending a bi-variate Gaussian source over a Gaussian MAC," in Proc. IEEE Int. Symp. Inform. Theory, Seattle, WA, 2006.
[2] M. Gastpar, "Uncoded transmission is exactly optimal for a simple Gaussian sensor network," in Proc. 2007 ITA Workshop, San Diego, CA, 2007.
[3] B. Nazer and M. Gastpar, "Structured random codes and sensor network coding theorems," in Proc. 20th Biennial International Zurich Seminar on Communications (IZS 2008), Zurich, Switzerland, 2008.
[4] A. Wagner, S. Tavildar, and P. Viswanath, "Rate region of the quadratic Gaussian two-encoder source-coding problem," IEEE Trans. Inf.
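Although $\rho'$ is left uncharacterized, the distortion expression above can be tabulated against hypothetical values of $\rho'$, checking the decodability condition in each case (a sketch with assumed parameter values; $\rho'$ is treated here as an input to the formula, not a derived quantity):

```python
sigma2, rho, P, N = 4.0, 0.9, 2.0, 1.0   # assumed values
var_s3 = 2 * sigma2 * (1 - rho)           # Var(S1 - S2)

def d_common(rho_p):
    """Common-dither distortion for a given channel-input correlation rho'."""
    var_z1 = 2 * P * (1 + rho_p) * N / (2 * P * (1 + rho_p) + N)
    return var_s3 / (1 + var_s3 / var_z1)

for rho_p in (-0.5, 0.0, 0.5):
    # Decoding succeeds when 2 sigma^2 (1 - rho) + Var(Z1) <= P
    decodable = var_s3 + 2 * P * (1 + rho_p) * N / (2 * P * (1 + rho_p) + N) <= P
    print(rho_p, round(d_common(rho_p), 4), decodable)
```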
Theory, 2008, submitted for publication. Preprint available at http://arxiv.org/abs/cs/0510095.
[5] D. Krithivasan and S. Pradhan, "Lattices for distributed source coding: Jointly Gaussian sources and reconstruction of a linear function," IEEE Trans. Inf. Theory, 2007, submitted for publication. Preprint available at http://arxiv.org/abs/0707.3461.
[6] A. Wagner, "An outer bound for distributed compression of linear functions," in Proc. 42nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, 2008.
[7] Y. Kochman and R. Zamir, "Joint Wyner-Ziv/dirty-paper coding by modulo-lattice modulation," IEEE Trans. Inf. Theory, 2008, submitted for publication. Preprint available at http://www.eng.tau.ac.il/~zamir/publications.html.
[8] U. Erez, S. Litsyn, and R. Zamir, "Lattices which are good for (almost) everything," IEEE Trans. Inf. Theory, vol. 51, pp. 3401-3416, Oct. 2005.
[9] U. Erez and R. Zamir, "Achieving 1/2 log(1 + SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Inf. Theory, vol. 50, pp. 2293-2314, Oct. 2004.
[10] G. Poltyrev, "On coding without restrictions for the AWGN channel," IEEE Trans. Inf. Theory, vol. 40, pp. 409-417, Mar. 1994.
