Joint Wyner-Ziv/Dirty Paper coding by modulo-lattice modulation


Joint Wyner-Ziv/Dirty-Paper Coding by Modulo-Lattice Modulation†

Yuval Kochman and Ram Zamir
Dept. Electrical Engineering - Systems, Tel Aviv University

Abstract: The combination of source coding with decoder side information (the Wyner-Ziv problem) and channel coding with encoder side information (the Gel'fand-Pinsker problem) can be optimally solved using the separation principle. In this work we show an alternative scheme for the quadratic-Gaussian case, which merges source and channel coding. This scheme achieves the optimal performance by applying modulo-lattice modulation to the analog source. Thus it saves the complexity of quantization and channel decoding, and leaves only the task of "shaping". Furthermore, for high signal-to-noise ratio (SNR), the scheme approaches the optimal performance using an SNR-independent encoder, and is therefore robust to unknown SNR at the encoder.

Keywords: joint source/channel coding, analog transmission, Wyner-Ziv problem, writing on dirty paper, modulo-lattice modulation, MMSE estimation, unknown SNR, broadcast channel.

I. INTRODUCTION

Consider the quadratic-Gaussian joint source/channel coding problem for the Wyner-Ziv (WZ) source [1] and Gel'fand-Pinsker channel [2], as depicted in Figure 1. In the Wyner-Ziv setup, the source is jointly distributed with some side information (SI) known at the decoder. In the Gaussian case, the WZ-source sequence S_k is given by:

S_k = Q_k + J_k ,   (1)

where the unknown source part Q_k is Gaussian i.i.d. with variance σ²_Q, while J_k is an arbitrary SI sequence known at the decoder. In the Gel'fand-Pinsker setup, the channel transition distribution depends

† Parts of this work were presented at ISIT 2006, Seattle, WA, July 2006. This work was supported by the Israeli Science Foundation (ISF) under grant # 1259/07, and by the Advanced Communication Center (ACC).
The first author was also supported by a fellowship of the Yitzhak and Chaya Weinstein Research Institute for Signal Processing at Tel Aviv University.

Figure 1: The Wyner-Ziv / dirty-paper coding problem. (Block diagram: source S = Q + J; encoder forms X from S and the interference I; channel output Y = X + Z + I; decoder forms the reconstruction Ŝ from Y and the side information J.)

on a state that serves as encoder SI. In the Gaussian case, known as the dirty-paper channel (DPC) [3], the DPC output Y_k is given by:

Y_k = X_k + Z_k + I_k ,   (2)

where X_k is the channel input, the unknown channel noise Z_k is Gaussian i.i.d. with variance N, and I_k is an arbitrary interference known at the encoder. When referring to I_k and J_k, we use the terms interference and SI interchangeably, since they may be seen either as external components added to the source and to the channel noise, or as known parts of these entities. From here onward we use bold notation to denote K-dimensional vectors, i.e. X = [X_1, ..., X_k, ..., X_K]. The sequences Q, J, Z and I are all mutually independent; hence the channel noise Z is independent of the channel input sequence X. The encoder is some function of the source vector that may depend on the channel SI vector as well:

X = f(S, I) ,   (3)

and must obey the power constraint

(1/K) E{‖X‖²} ≤ P ,   (4)

where ‖·‖ denotes the Euclidean norm. The decoder is some function of the channel output vector that may depend on the source SI vector as well:

Ŝ = g(Y, J) ,   (5)

and the reconstruction performance criterion is the mean-squared error (MSE):

D = (1/K) E{‖Ŝ − S‖²} .   (6)

The setup of Figure 1 described above is a special case of the joint WZ-source and Gel'fand-Pinsker channel setting. Thus, by Merhav and Shamai [4], Shannon's separation principle holds.
Hence a combination of optimal source and channel codes can approach the optimum distortion D_opt, which satisfies:

R_WZ(D_opt) = C_DPC ,   (7)

where R_WZ(D) is the WZ-source rate-distortion function and C_DPC is the dirty-paper channel capacity. However, the optimality of "digital" separation-based schemes comes at the price of large delay and complexity. Moreover, they suffer from a lack of robustness: if the channel signal-to-noise ratio (SNR) turns out to be lower than expected, the resulting distortion may be very large, while if the SNR is higher than expected, there is no improvement in the distortion [6], [7].

In the special case of a white Gaussian source and channel without side information (I = J = 0), it is well known that analog transmission is optimal [8]. In that case, the encoding and decoding functions

X_k = β S_k ,   Ŝ_k = (α/β) Y_k ,   (8)

are mere scalar factors, where β is a "zoom-in" factor chosen to satisfy the channel power constraint and α is the channel MMSE (Wiener) coefficient. This scheme achieves the optimal distortion (7) while having low complexity (two multiplications per sample), zero delay and full robustness: only the receiver needs to know the channel SNR, while the transmitter is completely ignorant of it. Such a perfect matching of the source to the channel, which allows single-letter coding, only occurs under very special conditions [9]. In the quadratic-Gaussian setting in the presence of side information, these conditions do not hold [4]. It is interesting to note that in this case, R_WZ(D) is just the Gaussian rate-distortion function of the unknown source part Q [5], while C_DPC is just the AWGN capacity for the channel noise Z [3]; i.e., the SI components I and J are "eliminated", as would be done had they been known to both the encoder and the decoder. We see, then, that this perfect interference cancelation is not achievable by single-letter coding.
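As a concrete illustration of the analog scheme (8), the following sketch simulates scalar transmission over an AWGN channel without SI and compares the empirical distortion to the optimum. The numerical values of σ²_S, P and N are arbitrary choices for the demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 400_000                      # number of source samples
sigma2_S, P, N = 1.0, 4.0, 1.0   # source variance, power constraint, noise variance (assumed)

S = rng.normal(0.0, np.sqrt(sigma2_S), K)   # white Gaussian source
Z = rng.normal(0.0, np.sqrt(N), K)          # channel noise

beta = np.sqrt(P / sigma2_S)     # "zoom-in" factor: beta^2 * sigma2_S = P meets the power constraint
alpha = P / (P + N)              # MMSE (Wiener) coefficient, applied at the receiver only

Y = beta * S + Z                 # X_k = beta * S_k sent over the AWGN channel
S_hat = (alpha / beta) * Y       # scalar MMSE reconstruction

D_emp = np.mean((S_hat - S) ** 2)
D_opt = sigma2_S * N / (P + N)   # optimum distortion in this no-SI case
print(D_emp, D_opt)              # empirical distortion matches D_opt up to Monte-Carlo error
```

Note that the encoder uses only β, which does not depend on N; this is the SNR-robustness of analog transmission discussed above.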
In this work we propose a scheme for the joint Wyner-Ziv/dirty-paper problem that takes a middle path: a "semi-analog" solution which partially retains the complexity and robustness advantages of analog transmission. It can be made optimal (in the sense of (7)) for any fixed SNR, with reduced complexity. Moreover, it allows a good compromise between the performance at different SNRs, and becomes SNR-independent in the limit of high SNR.

The scheme we present subtracts the channel interference I at the encoder modulo-lattice, and then subtracts the source known part J, again in conjunction with modulo-lattice arithmetic, at the decoder. Thus it achieves an equivalent single-letter channel with I = J = 0. Since the processing is applied to the analog signal, without using any information-bearing code, we call this approach modulo-lattice modulation (MLM). Modulo-lattice codes were suggested as a tool for side-information source and channel problems; see [10], [11], where a lattice is used for shaping of a digital code (which may itself have a lattice structure as well, yielding a nested-lattice structure). Modulo-lattice transmission of an analog signal in the WZ setting was first introduced in [12], in the context of joint source/channel coding with bandwidth expansion, i.e. when there are several channel uses per source sample. Here we generalize and formalize this approach, and apply it to SI problems. In a preliminary version of this work [13], we used the MLM scheme as a building block in Analog Matching of colored sources to colored channels. Later, Wilson et al. [14], [15] used transmission of an analog signal modulo a random code to arrive at similar results. Recently, MLM was used in network settings for computation over the Gaussian MAC [16] and for coding over the colored Gaussian relay network [17].
The rest of the paper is organized as follows: In Section II we bring preliminaries about multi-dimensional lattices, and discuss the existence of lattices that are asymptotically suitable for joint WZ/DPC coding. In Section III we present the joint WZ/DPC scheme and prove its optimality. In Section IV we examine the scheme in an unknown-SNR setting and show its asymptotic robustness. Finally, Section V discusses complexity-reduction issues.

II. BACKGROUND: GOOD SHAPING LATTICES FOR ANALOG TRANSMISSION

Before we present the scheme, we need some definitions and results concerning multi-dimensional lattices. Let Λ be a K-dimensional lattice, defined by the generator matrix G ∈ ℝ^(K×K). The lattice includes all points {l = G·i : i ∈ ℤ^K}, where ℤ = {0, ±1, ±2, ...}. The nearest-neighbor quantizer associated with Λ is defined by

Q(x) = arg min_{l∈Λ} ‖x − l‖ ,

where ‖·‖ denotes the Euclidean norm and ties are broken in a systematic manner. Let the basic Voronoi cell of Λ be V₀ = {x : Q(x) = 0}. The second moment of a lattice is given by the per-element variance of a uniform distribution over the basic Voronoi cell:

σ²(Λ) = (1 / (K V(Λ))) ∫_{V₀} ‖x‖² dx ,   (9)

where V(Λ) denotes the volume of V₀. The modulo-lattice operation is defined by

x mod Λ = x − Q(x) .

By definition, this operation satisfies the "distributive law":

[x mod Λ + y] mod Λ = [x + y] mod Λ .   (10)

The covering radius of a lattice is given by

r(Λ) = max_{x∈V₀} ‖x‖ .   (11)

For a dither vector d, the dithered modulo-lattice operation is y = [x + d] mod Λ. If the dither vector D is independent of x and uniformly distributed over the basic Voronoi cell V₀, then Y = [x + D] mod Λ is uniformly distributed over V₀ as well, and independent of x [18]. Consequently, the second moment of Y per element is σ²(Λ). The loss factor L(Λ, p_e) of a lattice w.r.t.
Gaussian noise at error probability p_e is defined as follows. Let Z be a Gaussian i.i.d. vector with element variance equal to the lattice second moment σ²(Λ). Then

L(Λ, p_e) = min { l : Pr{ Z/√l ∉ V₀ } ≤ p_e } .   (12)

For small enough p_e this factor is at least one. By [19, Theorem 5], there exists a sequence of lattices which possesses a vanishing loss in the limit of high dimension¹, i.e.:

lim_{p_e→0} lim_{K→∞} L(Λ_K, p_e) = 1 .   (13)

Moreover, there exists a sequence of such lattices that is also good for covering, i.e., defining

L̃(Λ) = r²(Λ) / (K σ²(Λ)) ,   (14)

where r(Λ) was defined in (11), the sequence also satisfies²: lim_{K→∞} L̃(Λ_K) = 1. However, for this work we need a slightly modified result, which allows us to replace the Gaussian noise by a combination of Gaussian and "self-noise" components. To that end, we define for any 0 ≤ α ≤ 1 the α-mixture noise as:

Z_α = √(1 − (1−α)²) W − (1−α) D ,

where W is Gaussian i.i.d. with element variance σ²(Λ), and D is uniform over V₀ and independent of W. Note that since (1/K) E{‖D‖²} = σ²(Λ), the resulting mixture also has average per-element variance σ²(Λ). We re-define the loss factor w.r.t. this mixture noise as

L(Λ, p_e, α) = min { l : Pr{ Z_α/√l ∉ V₀ } ≤ p_e } .   (15)

Note that this definition reduces to (12) for α = 1. Using this definition, we have the following, which is a direct consequence of [20].

Proposition 1 (Existence of good lattices): For any error probability p_e > 0 and any 0 ≤ α ≤ 1, there exists a sequence of K-dimensional lattices Λ_K satisfying:

lim_{p_e→0} lim_{K→∞} L(Λ_K, p_e, α) = 1 ,   (16)

and

lim_{K→∞} L̃(Λ_K) = 1 .

¹ These lattices are simultaneously good for source and channel coding; see more on this in Appendix I.
² Note that by definition, L̃(Λ_K) ≥ 1 always.
(17)

Note that, since by definition L(Λ_K, p_e, α) is non-increasing in p_e, it follows that for any p_e > 0 this sequence of lattices satisfies:

lim sup_{K→∞} L(Λ_K, p_e, α) ≤ 1 .   (18)

In Appendix I we elaborate more on the significance of this result, and on its connection to more commonly used measures of goodness of lattices.

III. MODULO-LATTICE WZ/DPC CODING

We now present the joint source/channel scheme for the SI problem of Figure 1. As explained in the Introduction, the quadratic-Gaussian rate-distortion function (RDF) of the WZ source (1) is equal to the RDF of the source Q_k (without the known part J_k), given by:

R_WZ(D) = (1/2) log(σ²_Q / D) .   (19)

Figure 2: Analog Wyner-Ziv / dirty-paper coding scheme. S = source, Ŝ = reconstruction, Z = channel noise, I = interference known at the encoder, J = source component known at the decoder, D = dither.

Similarly, the capacity of the Gaussian DPC (2) is equal to the AWGN capacity (without the interference I_k):

C_DPC = (1/2) log(1 + P/N) .   (20)

Recalling that the separation principle holds for this problem [4], the optimum distortion (7) is thus given by:

D_opt = (N / (P + N)) σ²_Q .   (21)

We show how to approach D_opt using the joint source/channel coding scheme depicted in Figure 2. In this scheme, the K-dimensional encoding and decoding functions (3), (5) are given by:

X = [β S + D − α_C I] mod Λ ,   (22a)
Ŝ = (α_S/β) { [α_C Y − D − β J] mod Λ } + J ,   (22b)

respectively, where the second moment (9) of the lattice is σ²(Λ) = P, and the dither vector D is uniformly distributed over V₀ and independent of the source and of the channel. The channel power constraint is satisfied automatically by the properties of dithered lattice quantization discussed in Section II.
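To make the operation of (22) concrete, the following sketch runs the scheme end-to-end with a one-dimensional (cubic) lattice. Such a toy lattice has a large loss factor, so β is backed off well below its optimal value in order to leave a noise margin; the resulting distortion then follows the D_correct expression derived below for Theorem 1, rather than D_opt. All numerical values are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 200_000
P, N, sigma2_Q = 1.0, 0.01, 1.0                  # power, noise and source variances (assumed)

Delta = np.sqrt(12 * P)                          # scalar lattice Delta*Z with second moment P
def mod_lattice(x):
    return x - Delta * np.round(x / Delta)       # x mod Lambda: fold into [-Delta/2, Delta/2]

Q = rng.normal(0, np.sqrt(sigma2_Q), K)          # unknown source part
J = 5.0 * rng.standard_normal(K)                 # source SI, known only at the decoder
I = 7.0 * rng.standard_normal(K)                 # channel interference, known only at the encoder
Z = rng.normal(0, np.sqrt(N), K)                 # channel noise
Dth = rng.uniform(-Delta / 2, Delta / 2, K)      # common dither, uniform over the basic cell
S = Q + J

alpha_C = P / (P + N)                            # channel Wiener coefficient
beta = np.sqrt(0.1)                              # conservative zoom factor (assumed noise margin)
sig2_eq = alpha_C**2 * N + (1 - alpha_C)**2 * P  # equivalent-noise variance, cf. (26)
alpha_S = beta**2 * sigma2_Q / (beta**2 * sigma2_Q + sig2_eq)   # source Wiener coefficient

X = mod_lattice(beta * S + Dth - alpha_C * I)    # encoder (22a); |X| <= Delta/2, mean power ~ P
Y = X + Z + I                                    # dirty-paper channel (2)
M = mod_lattice(alpha_C * Y - Dth - beta * J)    # decoder modulo output
S_hat = (alpha_S / beta) * M + J                 # reconstruction (22b)

D_emp = np.mean((S_hat - S) ** 2)
D_pred = sigma2_Q * sig2_eq / (beta**2 * sigma2_Q + sig2_eq)    # D_correct of Theorem 1
print(D_emp, D_pred)                             # close, despite the strong I and J
```

With this scalar lattice and margin the distortion is roughly 0.09, far from D_opt ≈ 0.01; good high-dimensional lattices close the gap, as Theorem 2 below states. Note that both I and J are fully canceled even though each is known at only one end.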
The factors α_S, α_C and β will be chosen in the sequel. For optimum performance, β, which is used at the encoder, will depend upon the variance of the source unknown part, while α_C, used at the decoder, will depend upon the channel SNR. It is assumed, then, that both the encoder and the decoder have full knowledge of the source and channel statistics; we will break with this assumption in the next section. The following theorem gives the performance of the scheme, in terms of the lattice parameters L(·,·,·) of (15) and L̃(·) of (14), and the quantities:

α₀ ≜ P / (P + N) ,   (23a)
α̃ ≜ max{ α₀ − (L(Λ, p_e, α₀) − 1) / L(Λ, p_e, α₀) , 0 } .   (23b)

We will also use these quantities in the sequel to specify the choice of the factors α_S, α_C and β.

Theorem 1 (Performance of the MLM scheme with any lattice): For any lattice Λ and any error probability p_e > 0, there exists a choice of factors α_C, α_S, β such that the system of (22) (depicted in Figure 2) satisfies:

D ≤ L(Λ, p_e, α₀) D_opt + p_e D_max ,

where the optimum distortion D_opt was defined in (21), and

D_max = 4 σ²_Q (1 + L̃(Λ)/α̃) .   (24)

We prove this theorem in the sequel. As a direct corollary, taking p_e to be an arbitrarily small probability and using the properties of good lattices (17) and (18), we have the following asymptotic optimality result³.

Theorem 2 (Optimality of the MLM scheme): Let D(Λ_K) be the distortion achievable by the system of (22) with a lattice from a sequence {Λ_K} that is simultaneously good for source and channel coding in the sense of Proposition 1. Then for any ε > 0, there exists a choice of factors α_C, α_S and β such that

lim sup_{K→∞} D(Λ_K) ≤ D_opt + ε .

For proving Theorem 1 we start with a lemma, showing equivalence in probability to a real-additive-noise channel (see Figure 3(b)).
The equivalent additive noise is:

Z_eq = α_C Z − (1 − α_C) X ,   (25)

where Z and X are the physical channel AWGN and input, respectively. By the properties of the dithered modulo-lattice operation, the physical channel input X is uniformly distributed over V₀ and independent of the source. Thus Z_eq is indeed additive, and has per-element variance:

σ²_eq = α²_C N + (1 − α_C)² P .   (26)

³ The explicit derivation of D_max is not necessary for proving Theorem 2; see Appendix II-B.

Figure 3: Equivalent channels for the WZ/DPC coding scheme. (a) Equivalent modulo-lattice channel. (b) Equivalent real-additive-noise channel, holding with probability (1 − p_e).

Lemma 1 (Equivalent additive-noise channel): Fix some p_e > 0. In the system defined by (1), (2) and (22), the decoder modulo output M (see Figure 2) satisfies:

M = β Q + Z_eq   w.p. (1 − p_e) ,   (27)

provided that

β² σ²_Q + σ²_eq ≤ P / L(Λ, p_e, α_C) ,   (28)

where Z_eq, defined in (25), is independent of Q and J and has per-element variance σ²_eq (26), and L(·,·,·) was defined in (15). Consequently, as long as (28) holds, the whole system is equivalent with probability (1 − p_e) to the channel depicted in Figure 3(b):

Ŝ = J + (α_S/β) Z_eq + α_S Q = S + (α_S/β) Z_eq − (1 − α_S) Q .   (29)

Proof: We will first prove equivalence to the channel of Figure 3(a):

M = [β Q + Z_eq] mod Λ ,   (30)

where Z_eq was defined in (25). To that end, let T = α_C Y − D − β J denote the input of the decoder modulo operation (see (22b) and Figure 2). Combine (2) and (22a) to assert:

T = α_C (X + Z + I) − D − β J = [β S + D − α_C I] mod Λ + Z_eq + α_C I − D − β J .

Now, using (1) and the "distributive law" (10), T mod Λ = [β Q + Z_eq] mod Λ, and since M = T mod Λ, we establish (30).
Now we note that

β Q + Z_eq = β Q + α_C Z − (1 − α_C) X ≜ √(1 − (1−α_C)²) W − (1 − α_C) X ,

where W is Gaussian i.i.d., X is uniform over the basic cell V₀ of the lattice Λ, and the total variance (per element) is given by the l.h.s. of (28). By the definition of L(·,·,·), we have that

β Q + Z_eq ∈ V₀   (31)

w.p. at least (1 − p_e). Substituting this in (30), we get (27). ∎

This channel equivalence holds for any choice of dimension K, lattice Λ and factors α_C, α_S and β, as long as (28) holds. For the proof of Theorem 1 we make the following choice (using the parameters of (23)):

α_C = α₀ ,   (32a)
β² = α̃ P / σ²_Q ,   (32b)
α_S = α̃P / (α̃P + α₀N) .   (32c)

It will become evident in the sequel that α_C and α_S are the MMSE (Wiener) coefficients for estimating X from X + Z and Q from Q + Z_eq/β, respectively, while β is the maximum zooming factor that allows satisfying (28) with equality, whenever possible.

Proof of Theorem 1: For calculating the achievable distortion, first note that by the properties of MMSE estimation, σ²_eq = α_C N = α₀ N. Using this, it can be verified that our choice of β satisfies (28), thus (29) holds with probability (1 − p_e). Denoting by D_correct and D_incorrect the distortions when (29) holds or does not hold, respectively, we have:

D = (1 − p_e) D_correct + p_e D_incorrect ≤ D_correct + p_e D_incorrect .   (33)

We shall now bound both conditional distortions. For the first one, we have:

D_correct = (1/K) E{ ‖ (α_S/β) Z_eq − (1 − α_S) Q ‖² }
  (a)= α_S σ²_eq / β²
  = σ²_Q σ²_eq / (β² σ²_Q + σ²_eq)
  = D_opt / (1 − α₀ + α̃)
  = min{ L(Λ, p_e, α_C) D_opt , σ²_Q }
  ≤ L(Λ, p_e, α_C) D_opt ,

where (a) stems from the properties of MMSE estimation. It remains to show that D_incorrect ≤ D_max, which is established in Appendix II-A.
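For completeness, the identity σ²_eq = α_C N = α₀ N used at the start of this proof follows from (26) by direct substitution of α_C = α₀ = P/(P+N):

```latex
\sigma^2_{\mathrm{eq}}
  = \alpha_0^2 N + (1-\alpha_0)^2 P
  = \frac{P^2 N}{(P+N)^2} + \frac{N^2 P}{(P+N)^2}
  = \frac{PN(P+N)}{(P+N)^2}
  = \frac{PN}{P+N}
  = \alpha_0 N .
```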
∎

As mentioned in the Introduction, a recent work [15] derives a similar asymptotic result, replacing the shaping lattice of our scheme by a random shaping code. Such a choice is less restrictive, since it is not tied to the properties of good Euclidean lattices, though it leads to higher complexity due to the lack of structure. The use of lattices also allows analysis in finite dimension, as in Theorem 1 and in Section V. Furthermore, structure is essential in network joint source/channel settings; see e.g. [16]. Lastly, the dithered-lattice formulation allows treating arbitrary interference signals; see Remark 2 in the sequel.

We conclude this section with the following remarks, intended to shed more light on the significance of the results above.

1. Optimal decoding. The decoder we described is not the MMSE estimator of S from Y. This is for two reasons: First, the decoder ignores the probability of incorrect lattice decoding. Second, since Z_eq is not Gaussian, the modulo-lattice operation w.r.t. the lattice Voronoi cells is not equivalent to maximum-likelihood estimation of the lattice point (see [20] for a similar discussion in the context of channel coding). Consequently, for any finite dimension the decoder can be improved. We shall further discuss the issue of working with finite-dimension lattices in Section V.

2. Universality w.r.t. I and J. None of the scheme parameters depends upon the nature of the channel interference I or the source known part J. Consequently, the scheme is adequate for arbitrary (individual) sequences. This has no effect on the asymptotic performance of Theorem 2, but for finite-dimensional lattices the scheme may be improved, e.g. if the interference signals are known to be Gaussian with low enough variance. A similar argument also holds when the source or channel statistics are not perfectly known; see Section IV in the sequel.

3.
Non-Gaussian setting. If the source unknown part Q or the channel noise Z is not Gaussian, the optimum quadratic-Gaussian distortion D_opt may still be approached using the MLM scheme, though it is no longer the optimum performance for the given source and channel.

4. Asymptotic choice of parameters. In the limiting case where L(Λ, p_e, α₀) → 1, we have that α_S = α̃ = α₀ in (32), i.e. the choice of parameters approaches:

α_C = α_S = P / (P + N) = α₀ ,   (34a)
β² = α₀ P / σ²_Q .   (34b)

5. Properties of the equivalent additive-noise channel. With high probability, we have the equivalent real-additive-noise channel of (29) and Figure 3(b). This differs from the modulo-additivity of the lattice strategies of [20], [21]: closeness of points under modulo arithmetic does not mean closeness under a difference distortion measure. The condition (28) forms an output-power constraint: no matter what the noise level of the channel is, its output must have a power of no more than P; this replaces the input-power constraint of the physical channel. Furthermore, by the lattice quantization noise properties [18], the "self-noise" component (1 − α_C)X in (25) is asymptotically Gaussian i.i.d., and consequently so is the equivalent noise Z_eq. Thus the additive equivalent channel (29) is asymptotically an output-power-constrained AWGN channel.

6. Noise margin. The additivity in (29) is achieved through leaving a "noise margin". The condition (28) means that the sum of the (scaled) unknown source part and the equivalent noise should "fit into" the lattice cell (see (31)). Consequently, the unknown source part Q is inflated to a power strictly smaller than the lattice power P. In the limit of infinite dimension, when the choice of parameters becomes (34), this power becomes β² σ²_Q = α₀ P.
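The noise-margin accounting of Remark 6 can be checked numerically: with the asymptotic choice (34), the inflated source power β²σ²_Q = α₀P plus the equivalent-noise power σ²_eq = α₀N sum exactly to the lattice power P, i.e. (28) is met with equality as L → 1. The numerical values below are arbitrary.

```python
P, N, sigma2_Q = 1.0, 0.25, 2.0                 # illustrative values (assumed)
alpha0 = P / (P + N)                            # 0.8
beta2 = alpha0 * P / sigma2_Q                   # asymptotic choice (34b)
sig2_eq = alpha0**2 * N + (1 - alpha0)**2 * P   # (26) with alpha_C = alpha0

# source power alpha0*P = 0.8, equivalent noise alpha0*N = 0.2, total = P = 1.0
print(beta2 * sigma2_Q, sig2_eq, beta2 * sigma2_Q + sig2_eq)
```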
In comparison, it is shown in [21] that in a lattice solution to a digital SI problem, if the information-bearing code (fine lattice) occupies a portion of power γP with any α₀ ≤ γ ≤ 1, capacity is achieved⁴. This freedom, however, has to do with the modulo-additivity of the equivalent channel; in our joint source/channel setting, necessarily γ = α₀.

7. Comparison with analog transmission. Lastly, consider the similarity between our asymptotic AWGN channel and the optimal analog transmission scheme without SI (8): since we have "eliminated from the picture" the SI components I and J, we are left with the transmission of the source unknown component through an equivalent additive-noise channel. As mentioned above, the unknown source part Q is only adjusted to power α₀P (in the limit of high dimension), while in (8) the source S is adjusted to power P; but since the equivalent noise Z_eq has variance α₀N, the equivalent channel has a signal-to-noise ratio of P/N, just as the physical channel.

⁴ In [22] a similar observation is made, and a code of power α₀P is presented as a preferred choice, since it allows easy iterative decoding between the information-bearing code and the coarse lattice.

IV. TRANSMISSION UNDER UNCERTAINTY CONDITIONS

We now turn to the case where either the variance of the channel noise N or the variance of the source unknown part σ²_Q is unknown at the encoder⁵. In Section IV-A we assume that σ²_Q is known at both sides, but the channel SNR is unknown at the encoder. We show that in the limit of high SNR, optimality can still be approached. In Section IV-B we address the general SNR case, as well as the case of unknown σ²_Q; for that, we adopt an alternative broadcast-channel point of view.
For convenience, we present our results in terms of the channel signal-to-noise ratio

SNR ≜ P/N   (35)

and the achieved signal-to-distortion ratio

SDR ≜ σ²_Q / D .   (36)

Denoting the theoretically optimal SDR by SDR_opt, (21) becomes:

SDR_opt = 1 + SNR .   (37)

Our achievability results in this section are based upon application of the MLM scheme, generally with a sub-optimal choice of parameters due to the uncertainty. We only bring asymptotic results, using high-dimensional "good" lattices. We present, then, the following lemma, using the definition:

β²₀ = P / σ²_Q .   (38)

Lemma 2: Let SDR(Λ_K) be the signal-to-distortion ratio achievable by the system of (22) with a lattice from a sequence {Λ_K} that is good in the sense of Proposition 1. For any choice of factors α_C, α_S and β,

lim inf_{K→∞} SDR(Λ_K) ≥ β² / { (1 − α_S)² β² + α²_S [ α²_C/SNR + (1 − α_C)² ] β²₀ } ,   (39)

provided that

β²/β²₀ + α²_C/SNR + (1 − α_C)² < 1 .   (40)

Proof: This is a direct application of Lemma 1 and of (18). First we fix some p_e > 0, and note that (40) is equivalent to (28). The SDR of the equivalent channel (29), in the limit L(Λ_K, p_e, α_C) → 1, is then given by (39). Then for p_e → 0 the effect of decoding errors vanishes, as shown in Appendix II-B. ∎

Note that by substituting the asymptotically optimal choice of parameters (34) in (39), the limit becomes SDR_opt.

A. Asymptotic Robustness for Unknown SNR

Imagine that we know that SNR ≥ SNR₀ for some specific SNR₀, and that σ²_Q is known. Suppose that we set the scheme parameters such that the correct-decoding condition (40) holds for SNR₀.

⁵ We do not treat uncertainty at the decoder, since N can be learnt, while the major insight into the matter of unknown σ²_Q is gained already by assuming uncertainty at the encoder.
Since the variance of the equivalent noise can only decrease with the SNR, correct lattice decoding will hold for any SNR ≥ SNR₀, and we are left with the equivalent additive-noise channel, where the resulting distortion is a strictly decreasing function of the SNR. We use this observation to derive an asymptotic result, showing that for high SNR a single encoder can approach optimality simultaneously for all actual SNRs. To that end, we replace the choice given in (32), which leads to optimality at one SNR, by the high-SNR choice α_C = α_S = 1, where β is chosen to ensure correct decoding even at the minimal SNR₀.

Theorem 3 (Robustness at high SNR): Let the source and channel be given by (1) and (2), respectively. Then for any ε > 0, there exists an SNR-independent sequence of encoding-decoding schemes (each one achieving SDR_K) that satisfies:

lim inf_{K→∞} SDR_K ≥ (1 − ε) SDR_opt   (41)

for all sufficiently large (but finite) SNR; i.e., (41) holds for all SNR ≥ SNR₀(ε), where SNR₀(ε) is finite for all ε > 0.

A limit of a sequence of schemes is needed in the theorem, rather than a single scheme, since for any single scheme we have p_e > 0, and thus the effect of incorrect decoding cannot be neglected in the limit SNR → ∞ (meaning that the convergence in Lemma 2 is not uniform). If we restricted our attention to SNRs bounded by some arbitrarily high value, a single scheme would be sufficient.

Figure 4: A broadcast presentation of the uncertainty problem. A single encoder, knowing the interference I, transmits X over a broadcast channel with noises Z₁, Z₂ (at SNR₁, SNR₂) to two decoders with source SI components J₁, J₂, achieving SDR₁, SDR₂.

Proof: We use a sequence of MLM schemes with good lattices in the sense of Proposition 1. If α_C = 1, then any

β² < ((SNR₀ − 1)/SNR₀) β²₀

satisfies the condition (40) for SNR₀, and thus for any SNR ≥ SNR₀. Here we assume SNR₀ > 1 w.l.o.g.,
since we can always choose SNR₀(ε) of the theorem accordingly. With this choice, and with α_S = 1, we have by Lemma 2 that the SDR may approach (for any SNR ≥ SNR₀):

(β²/β²₀) SNR = ((SNR₀ − 1)/SNR₀) SNR = ((SNR₀ − 1)/SNR₀) · (SNR/(SNR + 1)) · SDR_opt ≥ ((SNR₀ − 1)/(SNR₀ + 1)) SDR_opt .

Now take ε = 1 − (SNR₀ − 1)/(SNR₀ + 1). Since lim_{SNR₀→∞} ε = 0, one may find SNR₀ for any ε > 0 as required. ∎

Note that we have here also a fixed decoder; if we are only interested in a fixed encoder, we can adjust α_S at the decoder and reduce the margin from optimality.

B. Joint Source/Channel Broadcasting

Abandoning the high-SNR assumption, we can no longer simultaneously approach the optimal performance (37) for multiple SNRs. However, in many cases we can still do better than a separation-based scheme. In order to demonstrate that, we shift our view to a broadcast scenario, where the same source needs to be transmitted to multiple decoders, each one with different conditions, yet all the decoders share the same channel interference I; see Figure 4. The variation of the source SI component J between decoders means that the source has two decompositions:

S = Q₁ + J₁ = Q₂ + J₂ ,   (42)

and we define the per-element variances of the unknown parts as σ²₁ and σ²₂, respectively.

Figure 5: Broadcast performance for (a) SNR₁ = 2, SNR₂ = 10 and (b) SNR₁ = 10, SNR₂ = 50. Solid line: achievable by separation for arbitrary I and J. Dash-dotted line: achievable by MLM for arbitrary I and J. Dashed line: achievable by MLM for arbitrary J, with I = 0. Dotted line: outer bound of ideal matching to both SNRs (achievable by analog transmission when I = J = 0).

Note that this variation does not imply any uncertainty from the point of view of the MLM encoder, as long as
σ²₁ = σ²₂; see [23] for a similar observation in the context of source coding. We denote the signal-to-noise ratios at the decoders by SNR₁ ≤ SNR₂, and find achievable corresponding signal-to-distortion-ratio pairs {SDR₁, SDR₂}. It will become evident from the exposition that this approach is also good for a continuum of possible SNRs. We start from the case σ²₁ = σ²₂, for which we have the following.

Theorem 4: In the broadcast WZ/DPC channel of Figure 4 with σ²₁ = σ²₂, the signal-to-distortion pair

{ 1 + α SNR₁ / (α²_C + (1 − α_C)² SNR₁) , 1 + α SNR₂ / (α²_C + (1 − α_C)² SNR₂) } ,

where

α = α_C ( 2 − ((SNR₁ + 1)/SNR₁) α_C ) ,   (43)

can be approached for any 0 < α_C ≤ min{ 1, 2 SNR₁/(1 + SNR₁) }. In addition, if there is no channel interference (I = 0), then the pair { 1 + SNR₁ , 1 + SNR₁(1 + SNR₂)/(1 + SNR₁) } can be approached as well.

Proof: As in the proof of Theorem 3, we use Lemma 2 with a choice of β which allows correct decoding at the lower SNR. For the first part of the theorem, fix any α_C according to the theorem conditions, and choose any β² < αP/σ²_Q, where α was defined in (43), in order to satisfy (40). In each decoder, optimize α_S in (39) to approach the desired distortion. For the second part of the theorem, if there is no channel interference, the encoder is α_C-independent, and thus each decoder may work with a different α_C value. We can therefore make the encoder and the first decoder optimal for SNR₁, while the second decoder only suffers from the choice of β at the encoder. Again we substitute in (39) to arrive at the desired result. ∎

By standard time-sharing arguments, the achievable SDR regions include the convex hull (in the distortions plane) defined by these points and the trivial points {1 + SNR₁, 1} and {1, 1 + SNR₂}.
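The tradeoff of Theorem 4 is easy to evaluate numerically. The sketch below computes the SDR pair using (43) and confirms that picking α_C = SNR₁/(SNR₁ + 1), the Wiener coefficient of the weaker channel, makes decoder 1 optimal (SDR₁ = 1 + SNR₁); the SNR values are arbitrary.

```python
def broadcast_sdr_pair(snr1, snr2, alpha_C):
    # SDR pair of Theorem 4 for sigma_1^2 = sigma_2^2 and arbitrary I, J
    alpha = alpha_C * (2 - (snr1 + 1) / snr1 * alpha_C)      # (43)
    sdr = lambda snr: 1 + alpha * snr / (alpha_C**2 + (1 - alpha_C)**2 * snr)
    return sdr(snr1), sdr(snr2)

snr1, snr2 = 2.0, 10.0
sdr1, sdr2 = broadcast_sdr_pair(snr1, snr2, snr1 / (snr1 + 1))
print(sdr1, sdr2)    # sdr1 = 1 + snr1 = 3; sdr2 ~ 5.29, below the ideal 1 + snr2 = 11
```

Sweeping α_C over its admissible range traces the achievable tradeoff curve between the two decoders.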
Figure 0 demonstrates these regions, compared to the ideal (unachievable) region of simultaneous optimality for both SNRs, and to the separation-based region achieved by the concatenation of a successive-refinement source code (see e.g. [24]) with a broadcast channel code [25] (regarding the sub-optimality of this combination without SI, see e.g. [26]). It is evident that in most cases the use of the MLM scheme significantly improves the SDR tradeoff over the performance offered by the separation principle, and that the scheme approaches simultaneous optimality where both SNRs are high, as promised by Theorem 3. Note that, unlike the separation-based approach, the MLM approach also offers reasonable SDRs for intermediate SNRs. Moreover, note that this region is achievable when no assumption is made about the statistics of I and J. If these interferences are not very strong compared to P and σ_Q², respectively, then one may further extend the achievable region by allowing some residual interference.

To conclude, we briefly discuss the case where σ₁² ≠ σ₂². We define the SDR of each decoder relative to its own variance, and ask what SDRs are achievable for a pair of SNRs, which may be equal or different. Assume here the simple case where there is no channel interference, i.e. I = 0. In this case the encoder only needs to agree upon β with the decoders, thus (by Lemma 2) we may approach, for n = 1, 2:

SDR_n = 1 + (β²/β_opt,n²)·SNR_n,   (44)

where β_opt,n is the optimum choice of β for SNR_n according to (34). It follows that if the two decoders require the same value of β, they may both approach the theoretically optimal distortion. This translates to the optimality condition:

σ₁²·(1 + SNR₁)/SNR₁ = σ₂²·(1 + SNR₂)/SNR₂.
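This condition can be read as a rule for trading source variance against channel quality; a minimal sketch (the helper name is ours):

```python
# Given decoder 1's unknown-source variance and both SNRs, return the
# sigma_2^2 for which both decoders want the same zooming factor beta,
# per the condition sigma1^2 (1+SNR1)/SNR1 = sigma2^2 (1+SNR2)/SNR2.

def matched_sigma2_sq(sigma1_sq: float, snr1: float, snr2: float) -> float:
    return sigma1_sq * ((1.0 + snr1) / snr1) * (snr2 / (1.0 + snr2))

# The decoder with the better channel (SNR 10 vs. 2) can afford a
# larger unknown-source variance:
print(matched_sigma2_sq(1.0, 2.0, 10.0))  # 15/11 ≈ 1.36
```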
This scenario was presented in [27], where simultaneous optimality using hybrid digital/analog schemes was proven under a different condition: σ₁²/SNR₁ = σ₂²/SNR₂. Both conditions reflect the fact that better source conditions (lower σ_Q²) can compensate for worse channel conditions (lower SNR). It follows from the difference between the conditions that for some parameter values the MLM scheme outperforms the approach of [27], thus extending the achievable SDR region.

V. DISCUSSION: DELAY AND COMPLEXITY

We have presented the joint source/channel MLM scheme, proven its optimality for the joint WZ/DPC setting with known SNR, and shown its improved robustness over a separation-based scheme. We now discuss the potential complexity and delay advantages of our approach relative to separation-based schemes, first considering the complexity at high dimension and then suggesting a scalar variant.

Consider a separation-based solution, with source and channel encoder/decoder pairs. An optimal channel coding scheme typically consists of two codes: an information-bearing code and a shaping code, both of which require a nearest-neighbor search at the decoder. An optimal source coding scheme also consists of both a quantization code and a shaping code in order to achieve the full vector quantization gain (see e.g. [28]), thus two nearest-neighbor searches are needed at the encoder. The MLM approach omits the information-bearing channel code and the quantization code, and merges the channel and source shaping codes into one. It is convenient to compare this approach with the nested-lattices approach to channel and source coding with SI [10], since in that approach both the channel and source information-bearing/shaping code pairs are materialized by nested lattices.
In comparison, our scheme requires only a single lattice (parallel to the coarse lattice of nested schemes), and in addition the source and channel lattices collapse into a single one. There is a price to pay, however: for the WZ problem the coarse lattice should be good for channel coding, while for the WDP problem the coarse lattice should be good for source coding [10]. The lattice used for MLM needs to be simultaneously good for source and channel coding (see Appendix I). While the existence of such lattices in the high-dimension limit is assured by [19], in finite dimension the lattice that is best in one sense is not necessarily best in the other sense [29], resulting in a larger implementation loss. Quantitatively, whereas for source coding the lattice should have a low normalized second moment, and for channel coding it should have a low volume-to-noise ratio, for joint source/channel coding the product L(Λ, p_e) (12) should be low⁶ (see Appendix I). The study of such lattices is currently under research. Exact comparison of schemes in high dimension will involve studying the achieved joint source/channel excess-distortion exponent (see [30] for a recent work about this exponent in the Gaussian setting).

Figure 0: Scalar MLM/companding scheme for joint source/channel coding over a high-SNR dirty-paper channel: S = source, Ŝ = reconstruction, Z = channel noise, I = interference known at the encoder, g(·) = companding function.

From the practical point of view, the question of a low-dimensional scheme is very important, since it implies both low complexity and low delay. One may ask, then, what can be achieved using low-dimensional lattices, e.g. a scalar lattice?
The difficulty, however, is that in low dimensions a low probability of incorrect decoding p_e implies a high loss factor L(Λ, p_e), thus the distortion promised by Theorem 1 grows. Some improvement may be achieved by using an optimal decoder rather than the one described in this work (see Remark 1 at the end of Section III), an issue which is left for further research. A recent work [31] suggests an alternative, for the case of channel interference only (J = 0), by also changing the encoder: the scalar zooming factor β of the MLM scheme is replaced by non-linear companding of the signal; see Figure 0. At high SNR, the distortion loss of such a scalar MLM scheme with optimal companding, compared to (7), is shown to be

D_companding/D_opt = √3·π/2 ≅ 4.3 dB.

In comparison, the loss of a separation-based scalar scheme, consisting of a scalar quantizer and a scalar (uncoded) channel constellation, is unbounded in the limit SNR → ∞. This is since in a separation-based scheme the mapping of quantized source values to channel inputs is arbitrary; consequently, keeping the loss bounded implies that the error probability must go to zero in the high-SNR limit, and the gap of a scalar constellation from capacity grows.

⁶ In Theorem 1 we show that the figure of merit is L(Λ, p_e, α) (15), but for reasonably high SNR it seems that the effect of self noise should not be too dominant, so we can set α = 1.

ACKNOWLEDGEMENT

We thank Uri Erez for helping to make some of the connections which led to this work.

APPENDIX I
MEASURES OF GOODNESS OF LATTICES

In this appendix we discuss measures of goodness of lattices for source and channel coding, and their connection with the loss factor relevant to our joint source/channel scheme.
When a lattice is used as a quantization codebook in the quadratic-Gaussian setting, the figure of merit is the lattice normalized second moment:

G(Λ) ≜ σ²(Λ)/V(Λ)^{2/K},   (45)

where the cell volume is V(Λ) = ∫_{V₀} dx. By the isoperimetric inequality, G(Λ) ≥ G*_K, where G*_K is the normalized second moment of a ball of the same dimension K as the lattice. This quantity satisfies G*_K ≥ 1/(2πe), with asymptotic equality in the limit of large dimension. A sequence of K-dimensional lattices is said to be good for MSE quantization if

lim_{K→∞} G(Λ_K) = 1/(2πe),   (46)

thus it asymptotically achieves the minimum possible lattice second moment for a given volume.

When a lattice is used as an AWGN channel codebook, the figure of merit is the lattice volume-to-noise ratio at a given error probability 1 > p_e > 0 (see e.g. [32], [20]):

μ(Λ, p_e) ≜ V(Λ)^{2/K}/σ_Z²,   (47)

where σ_Z² is the maximum variance (per element) of a white Gaussian vector Z having an error probability Pr{Z ∉ V₀} ≤ p_e. For any lattice, μ(Λ, p_e) ≥ μ*_K(p_e), where μ*_K(p_e) is the volume-to-noise ratio of a ball of the same dimension K as the lattice. For any 1 > p_e > 0, μ*_K(p_e) ≥ 2πe, with asymptotic equality in the limit of large dimension. A sequence of K-dimensional lattices is good for AWGN channel coding if

lim_{p_e→0} lim_{K→∞} μ(Λ_K, p_e) = 2πe,   (48)

thus it possesses the property of having a minimum possible cell volume such that the probability of an i.i.d. Gaussian vector of a given power falling outside the cell vanishes.

Combining the definitions (45) and (47), we see that the loss factor L(Λ, p_e) (12) satisfies:

L(Λ, p_e) = G(Λ)·μ(Λ, p_e).

Furthermore, the existence of a good sequence of lattices in the sense of (13) is assured by the existence of a sequence that simultaneously satisfies (46) and (48), which was shown in [19, Theorem 5].
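To illustrate the finite-dimension loss, consider the scalar lattice (K = 1): its normalized second moment is G = 1/12 (that of a uniform cell), against the asymptotic optimum 1/(2πe) — the familiar ≈1.53 dB shaping gap. A quick numeric check:

```python
import math

# Normalized second moment of the integer lattice (K = 1): a cell of
# width Delta has variance Delta^2/12 and volume Delta, so G = 1/12
# independently of Delta.
G_scalar = 1.0 / 12.0
G_opt = 1.0 / (2.0 * math.pi * math.e)  # large-K optimum 1/(2*pi*e)

gap_db = 10.0 * math.log10(G_scalar / G_opt)
print(f"gap = {gap_db:.2f} dB")  # ≈ 1.53 dB
```

An analogous gap appears on the channel side between μ(Λ, p_e) and 2πe, and the two gaps multiply in the loss factor L(Λ, p_e) = G(Λ)·μ(Λ, p_e).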
Proposition 1 is implicit in the proof of [20, Theorem 5]. It is based upon the existence of lattices that are simultaneously good for AWGN channel coding and for covering [19], where goodness for covering also implies goodness for MSE quantization; for such lattices, it is shown that the mixture noise cannot be much worse than a Gaussian noise of the same variance. Later, it was shown in [33] that, for such lattices, for small enough error probability p_e, the introduction of self noise actually reduces the loss factor, i.e. L(Λ, p_e, α) ≤ L(Λ, p_e, 1).

APPENDIX II
THE EFFECT OF DECODING FAILURE ON THE DISTORTION

With probability p_e, correct lattice decoding fails, i.e. (31) does not hold. These events contribute to the total distortion a portion of

D̃ ≜ p_e·D_incorrect,   (49)

where D_incorrect is the distortion given a decoding failure, as in the proof of Theorem 1. In this appendix we quantify this effect: in the first part we show that D_max of (24) is a (rather loose) bound on D_incorrect, thus completing the proof of Theorem 1. In the second part, we show directly that D̃ must vanish in the limit of small p_e, without resorting to an explicit bound on D_incorrect. In both parts we use the observation that

Ŝ − S = Q̂ − Q,   (50)

where Q̂ ≜ (α_S/β)·[βQ + Z_eq] mod Λ; see also Figure 1b. We note that although Q is unbounded, we always have that

Q̂ ∈ (α_S/β)·V₀.   (51)

A. A Bound on the Conditional Distortion for Any Lattice

In order to complete the proof of Theorem 1, we now bound D_incorrect of (33):

D_incorrect = (1/K)·E{‖Ŝ − S‖² | βQ + Z_eq ∉ V₀}
            = (1/K)·E{‖Q̂ − Q‖² | βQ + Z_eq ∉ V₀}
            ≤ (2/K)·( E{‖Q̂‖² | βQ + Z_eq ∉ V₀} + E{‖Q‖² | βQ + Z_eq ∉ V₀} ),   (52)

where the inequality follows from assuming a maximizing (−1) correlation coefficient and then applying the Cauchy-Schwarz inequality.
We shall now bound these two terms. For the first one, recalling the definition of the covering radius (11), we bound the conditional expectation by the maximum possible value:

E{‖Q̂‖² | βQ + Z_eq ∉ V₀} ≤ max(‖Q̂‖²) = α_S²·r²(Λ)/β² ≤ r²(Λ)/β².   (53)

For the second term, we have:

E{‖Q‖² | βQ + Z_eq ∉ V₀} ≤ E{‖Q‖² | βQ ∉ V₀} ≤ E{‖Q‖² | βQ ∉ B₀},

where B₀ is the circumsphere of V₀, of radius r(Λ). It follows that

E{‖Q‖² | βQ + Z_eq ∉ V₀} ≤ K·σ_Q²·E{V | V > v₀},

where V ∼ (1/K)·χ²_K and v₀ ≜ r²(Λ)/(K·β²·σ_Q²). This conditional expectation is given by:

E{V | V > v₀} = Q(K/2 + 1, v₀K/2) / Q(K/2, v₀K/2) ≤ v₀ + 2,

where Q(·,·) is the regularized incomplete Gamma function, and the inequality can be shown by means of calculus. This gives the bound on the second term:

E{‖Q‖² | βQ + Z_eq ∉ V₀} ≤ r²(Λ)/β² + 2K·σ_Q².

Substituting this and (53) in (52), we have that:

D_incorrect ≤ 4·( r²(Λ)/(K·β²) + σ_Q² ).

Recalling the choice of β in (32b) and the definition of L̃(·,·) in (14), the bound follows.

B. Asymptotic Effect of Decoding Failures

In this part we follow the claims used by Wyner in the source coding context to establish [5, (5.2)], to see that lim_{p_e→0} D̃ = 0, where D̃ was defined in (49), without using the explicit bound derived in Appendix II-A. This serves as a simpler proof of Theorem 2; moreover, it also applies to a non-optimal choice of parameters, thus it serves in the analysis of performance under uncertainty conditions. Denoting the decoding-failure event by ε and its indicator by I_ε, and recalling (50), we re-write the contribution to the distortion as:

D̃ = E{I_ε·(Q̂ − Q)²}.

For any value of the source unknown part Q, the distortion is bounded by:

d(Q) ≜ sup_{Q̂} (Q̂ − Q)².
The expectation E{d(Q)} is finite, since Q is Gaussian and Q̂ is bounded (see (51)). We now have that

D̃ ≤ E{I_ε·d(Q)}.

Using a simple lemma of probability theory [5, Lemma 5.1], since E{d(Q)} is finite, this expectation approaches zero as p(ε) = p_e → 0.

REFERENCES

[1] A. Wyner and J. Ziv, "The rate-distortion function for source coding with side information at the decoder," IEEE Trans. Info. Theory, vol. IT-22, pp. 1–10, Jan. 1976.
[2] S. Gel'fand and M. S. Pinsker, "Coding for channel with random parameters," Problemy Pered. Inform. (Problems of Inform. Trans.), vol. 9, no. 1, pp. 19–31, 1980.
[3] M. Costa, "Writing on dirty paper," IEEE Trans. Info. Theory, vol. IT-29, pp. 439–441, May 1983.
[4] N. Merhav and S. Shamai, "On joint source-channel coding for the Wyner-Ziv source and the Gel'fand-Pinsker channel," IEEE Trans. Info. Theory, vol. IT-40, pp. 2844–2855, Nov. 2003.
[5] A. Wyner, "The rate-distortion function for source coding with side information at the decoder - II: General sources," Information and Control, vol. 38, pp. 60–80, 1978.
[6] J. Ziv, "The behavior of analog communication systems," IEEE Trans. Info. Theory, vol. IT-16, pp. 587–594, 1970.
[7] M. D. Trott, "Unequal error protection codes: Theory and practice," in Proc. Info. Theory Workshop, Haifa, Israel, June 1996, p. 11.
[8] T. Goblick, "Theoretical limitations on the transmission of data from analog sources," IEEE Trans. Info. Theory, vol. IT-11, pp. 558–567, 1965.
[9] M. Gastpar, B. Rimoldi, and M. Vetterli, "To code or not to code: Lossy source-channel communication revisited," IEEE Trans. Info. Theory, vol. IT-49, pp. 1147–1158, May 2003.
[10] R. Zamir, S. Shamai, and U. Erez, "Nested linear/lattice codes for structured multiterminal binning," IEEE Trans. Info. Theory, vol. IT-48, pp. 1250–1276, June 2002.
[11] R. Barron, B. Chen, and G. W.
Wornell, "The duality between information embedding and source coding with side information and some applications," IEEE Trans. Info. Theory, vol. IT-49, pp. 1159–1180, 2003.
[12] Z. Reznic, M. Feder, and R. Zamir, "Distortion bounds for broadcasting with bandwidth expansion," IEEE Trans. Info. Theory, vol. IT-52, pp. 3778–3788, Aug. 2006.
[13] Y. Kochman and R. Zamir, "Analog matching of colored sources to colored channels," in ISIT-2006, Seattle, WA, 2006, pp. 1539–1543.
[14] M. Wilson, K. Narayanan, and G. Caire, "Joint source channel coding with side information using hybrid digital analog codes," in Proceedings of the Information Theory Workshop, Lake Tahoe, CA, Sep. 2007, pp. 299–308.
[15] ——, "Joint source channel coding with side information using hybrid digital analog codes," IEEE Trans. Info. Theory, submitted. Electronically available at http://arxiv.org/abs/0802.3851
[16] B. Nazer and M. Gastpar, "Computation over multiple-access channels," IEEE Trans. Info. Theory, vol. IT-53, pp. 3498–3516, Oct. 2007.
[17] Y. Kochman, A. Khina, U. Erez, and R. Zamir, "Rematch and forward for parallel relay networks," in ISIT-2008, Toronto, ON, 2008, pp. 767–771.
[18] R. Zamir and M. Feder, "On lattice quantization noise," IEEE Trans. Info. Theory, pp. 1152–1159, July 1996.
[19] U. Erez, S. Litsyn, and R. Zamir, "Lattices which are good for (almost) everything," IEEE Trans. Info. Theory, vol. IT-51, pp. 3401–3416, Oct. 2005.
[20] U. Erez and R. Zamir, "Achieving 1/2 log(1+SNR) on the AWGN channel with lattice encoding and decoding," IEEE Trans. Info. Theory, vol. IT-50, pp. 2293–2314, Oct. 2004.
[21] U. Erez, S. Shamai, and R. Zamir, "Capacity and lattice strategies for cancelling known interference," IEEE Trans. Info. Theory, vol. IT-51, pp. 3820–3833, Nov. 2005.
[22] A. Bennatan, D. Burshtein, G. Caire, and S.
Shamai, "Superposition coding for side-information channels," IEEE Trans. Info. Theory, vol. IT-52, pp. 1872–1889, May 2006.
[23] J. K. Wolf, "Source coding for a noiseless broadcast channel," in Conf. Information Science and Systems, Princeton, NJ, Mar. 2004, pp. 666–671.
[24] W. H. R. Equitz and T. M. Cover, "Successive refinement of information," IEEE Trans. Info. Theory, vol. IT-37, pp. 851–857, Nov. 1991.
[25] T. M. Cover, "Broadcast channels," IEEE Trans. Info. Theory, vol. IT-18, pp. 2–14, 1972.
[26] B. Chen and G. Wornell, "Analog error-correcting codes based on chaotic dynamical systems," IEEE Trans. Communications, vol. 46, pp. 881–890, July 1998.
[27] D. Gunduz, J. Nayak, and E. Tuncel, "Wyner-Ziv coding over broadcast channels using hybrid digital/analog transmission," in ISIT-2008, Toronto, ON, 2008, pp. 1543–1547.
[28] T. Lookabaugh and R. M. Gray, "High resolution quantization theory and the vector quantizer advantage," IEEE Trans. Info. Theory, vol. IT-35, pp. 1020–1033, Sept. 1989.
[29] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices and Groups. New York, NY: Springer-Verlag, 1988.
[30] Y. Zhong, F. Alajaji, and L. Campbell, "On the excess distortion exponent for memoryless Gaussian source-channel pairs," in ISIT-2006, Seattle, WA, 2006.
[31] I. Leibowitz, "The Ziv-Zakai bound at high fidelity, analog matching, and companding," Master's thesis, Tel Aviv University, Nov. 2007.
[32] G. D. Forney Jr., M. D. Trott, and S.-Y. Chung, "Sphere-bound-achieving coset codes and multilevel coset codes," IEEE Trans. Info. Theory, vol. IT-46, pp. 820–850, May 2000.
[33] T. Liu, P. Moulin, and R. Koetter, "On error exponents of modulo lattice additive noise channels," IEEE Trans. Info. Theory, vol. 52, pp. 454–471, Feb. 2006.
