Joint Source Channel Coding with Side Information Using Hybrid Digital Analog Codes


Authors: Makesh Pravin Wilson, Krishna Narayanan, Giuseppe Caire

Makesh Pravin Wilson, Dept. of Electrical Engineering, Texas A&M University, College Station, TX 77843, makesh@ece.tamu.edu; Krishna Narayanan, Dept. of Electrical Engineering, Texas A&M University, College Station, TX 77843, krn@ece.tamu.edu; Giuseppe Caire, Dept. of Electrical Engineering, University of Southern California, Los Angeles, CA 90089, caire@usc.edu

Abstract — We study the joint source channel coding problem of transmitting an analog source over a Gaussian channel in two cases: (i) in the presence of interference known only to the transmitter and (ii) in the presence of side information known only to the receiver. We introduce hybrid digital analog forms of the Costa and Wyner-Ziv coding schemes. Our schemes are based on random coding arguments and are different from the nested lattice schemes by Kochman and Zamir that use dithered quantization. We also discuss superimposed digital and analog schemes for the above problems, which show that there are infinitely many schemes achieving the optimal distortion for these problems. This provides an extension of the schemes by Bross et al. to the interference/side information case. We then discuss applications of the hybrid digital analog schemes for transmitting under a channel signal-to-noise ratio mismatch and for broadcasting a Gaussian source with bandwidth compression.

I. INTRODUCTION AND PROBLEM STATEMENT

Consider the classical problem of transmitting $K$ samples of a discrete-time independent identically distributed (i.i.d.) real Gaussian source $\mathbf{v}$ in $N$ uses of an additive white Gaussian noise (AWGN) channel such that the mean-squared error distortion is minimized. Let the source be encoded into the sequence $\mathbf{x}$, which satisfies the power constraint $E[\mathbf{x}\mathbf{x}^T] \le NP$.
Let us first consider the case of $K = N$ and let the output of the AWGN channel be $\mathbf{y} = \mathbf{x} + \mathbf{w}$, where $\mathbf{w}$ is a noise vector of i.i.d. Gaussian random variables with zero mean and variance $\sigma^2$. If the source variance is $\sigma_v^2$, then the optimal mean-squared error distortion that can be achieved is $D_{\mathrm{opt}} = \frac{\sigma_v^2}{1 + P/\sigma^2}$. This optimal performance can be achieved by two very simple schemes. The first one is separate source and channel coding, where the source is first quantized and the quantization index is transmitted using an optimal code for the AWGN channel. The second scheme is uncoded (analog) transmission with power scaling [3, 4], where the source is not explicitly quantized. Recently, it was shown by Bross, Lapidoth and Tinguely [6] that there is a family of infinitely many optimal schemes, which contains the separation-based scheme and uncoded (analog) transmission as special cases. (This work was supported by the National Science Foundation under grant CCR 0515296.)

In this paper, we consider the problem of transmitting $K$ samples of an i.i.d. Gaussian source through $N$ uses of an AWGN channel. We will refer to the ratio $N/K$ as the bandwidth efficiency $\lambda$. We first consider the case of $\lambda = 1$ in the presence of an interference known only to the transmitter and/or side information available only at the receiver. We derive hybrid digital analog (HDA) coding schemes for these cases, where the source is not explicitly quantized, and show that they attain the optimal distortion. These can be viewed as the equivalent of uncoded transmission, but in the presence of an interference or side information. Then, we show that there is a family of infinitely many schemes that are optimal for this problem, which contains pure separation-based schemes and HDA schemes as special cases. This can be viewed as the extension of Bross, Lapidoth and Tinguely's [6] result to the presence of interference/side information.
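For concreteness, the second of these simple schemes can be sketched in a few lines. The following Monte Carlo illustration (not from the paper; all parameter values are arbitrary) scales the source to meet the power constraint, adds Gaussian noise, and applies the receiver-side MMSE estimate, recovering $D_{\mathrm{opt}} = \sigma_v^2/(1 + P/\sigma^2)$:

```python
import math
import random

# Monte Carlo sketch (illustration only): uncoded transmission with power
# scaling x = a*v, a = sqrt(P/sigma_v^2), followed by an MMSE estimate at
# the receiver, achieves D_opt = sigma_v^2 / (1 + P/sigma^2).
P, sigma2, sigma_v2, N = 1.0, 0.5, 2.0, 200_000
rng = random.Random(0)

a = math.sqrt(P / sigma_v2)                     # scaling meets E[x^2] = P
g = a * sigma_v2 / (a * a * sigma_v2 + sigma2)  # MMSE coefficient E[VY]/E[Y^2]

err = 0.0
for _ in range(N):
    v = rng.gauss(0.0, math.sqrt(sigma_v2))
    y = a * v + rng.gauss(0.0, math.sqrt(sigma2))   # AWGN channel output
    err += (v - g * y) ** 2
D_emp = err / N
D_opt = sigma_v2 / (1 + P / sigma2)
print(D_emp, D_opt)   # empirical distortion matches D_opt (~ 0.667 here)
```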
An interesting aspect of the hybrid digital analog coding schemes proposed here is that, unlike their separation-based counterparts, they do not require binning. The HDA scheme proposed here for the case of interference known at the transmitter is closely related to the scheme considered by Kochman and Zamir in [7], although it was developed independently. The difference is that the proposed scheme is based only on random coding arguments and does not use nested lattices as in [7]. As a result, the relationship between the auxiliary random variable and the source is made more explicit. We also consider several applications of the HDA schemes that are not considered in [7]. Unlike [7], we consider the non-asymptotic SNR case. Further, the performance of this scheme in the presence of SNR mismatch is analyzed. Finally, the use of an HDA Costa based scheme for broadcasting a Gaussian source to two users with bandwidth compression, where $\lambda < 1$, is discussed. In the case of side information available only at the receiver, the proposed scheme is similar to the scheme in [2] and again uses random coding arguments instead of nested lattices.

The paper is organized as follows. First, in Section II, we discuss the problem of transmitting an i.i.d. Gaussian source in the presence of a Gaussian interference known only to the transmitter. We introduce a hybrid digital analog (HDA) Costa coding scheme where the source is not explicitly quantized and show that it is optimal. We then discuss a generalized HDA Costa coding scheme and show that there are infinitely many schemes that are optimal. In Section III, we discuss similar schemes and results for the case of side information available only at the receiver (the Wyner-Ziv problem), and in Section IV we briefly consider the situation with interference known only to the transmitter and side information available only at the receiver.
In [11], Merhav and Shamai have shown that separate Wyner-Ziv coding followed by Gelfand-Pinsker coding is optimal for this problem. However, we show that there is a joint source-channel coding scheme for the case of Gaussian source, interference and side information. This result in Section IV is a fairly straightforward extension of the results in Sections II and III, but is included for completeness and to make the exposition clear. In Section V, we study the performance of these schemes when the SNR of the channel is different from the design SNR, and in Section VI the distortion exponents of these schemes are analyzed. In Section VII, we consider the problem of transmitting a Gaussian source in the absence of an interference, but when the channel bandwidth is smaller than the source bandwidth, and show how the HDA Costa coding scheme is useful. Finally, in Section VIII, we consider the problem of broadcasting a Gaussian source to two users through AWGN channels and propose a joint source channel coding scheme based on HDA Costa coding.

We use the following notation in this paper. Vectors are denoted by boldface letters such as $\mathbf{x}$. Upper case letters are used to denote scalar random variables. When considering a sequence of i.i.d. random variables, a single upper case letter is used to denote each component of the random vector.

II. TRANSMISSION OF A GAUSSIAN SOURCE OVER A GAUSSIAN CHANNEL WITH INTERFERENCE KNOWN ONLY AT THE TRANSMITTER

Fig. 1. Block diagram of the joint source channel coding problem with interference known only at the transmitter.
We first consider the problem of transmitting $N$ samples of a real analog source $\mathbf{v} \in \mathbb{R}^N$ (this corresponds to $K = N$), with components $V$ that are independent Gaussian random variables with $V \sim \mathcal{N}(0, \sigma_v^2)$, in $N$ uses of an AWGN channel with noise variance $\sigma^2$, in the presence of an interference $\mathbf{s} \in \mathbb{R}^N$ which is known to the transmitter but unknown to the receiver. Further, let us assume that the $S$'s are a sequence of real i.i.d. Gaussian random variables with zero mean and variance $Q$, and let the input power to the channel $E[X^2]$ be constrained to be $P$. The problem setup is shown schematically in Fig. 1. The received signal $\mathbf{y}$ is given by

$\mathbf{y} = \mathbf{x} + \mathbf{s} + \mathbf{w}$  (1)

where $\mathbf{s}$ is the interference and $\mathbf{w}$ is the AWGN. The optimal distortion of $\frac{\sigma_v^2}{1 + P/\sigma^2}$ can be obtained even in the presence of the interference by using the following (obvious) separate source and channel coding scheme.

A. Separation based scheme with Costa coding (Digital Costa Coding)

We first quantize the source using an optimal quantizer to produce an index $m \in \{1, 2, \ldots, 2^{NR}\}$, where $R = \frac{1}{2}\log\left(1 + \frac{P}{\sigma^2}\right) - \epsilon$. Then, the index is transmitted using Costa's writing on dirty paper coding scheme [9]. Since the quantizer output is digital information, we refer to this scheme as digital Costa coding. We briefly review it here to make it easier to describe our proposed techniques later on. Let $U$ be an auxiliary random variable given by

$U = X + \alpha S$  (2)

where $X \sim \mathcal{N}(0, P)$ is independent of $S$ and $\alpha = \frac{P}{P + \sigma^2}$. We first create an $N$-length i.i.d. Gaussian codebook $\mathcal{U}$ with $2^{N(I(U;Y) - \delta)}$ codewords, where each component of each codeword is Gaussian with zero mean and variance $P + \alpha^2 Q$. Then we evenly (randomly) distribute these over $2^{NR}$ bins. For each $\mathbf{u}$, let $i(\mathbf{u})$ be the index of the bin containing $\mathbf{u}$. For a given $m$, we look for a $\mathbf{u}$ such that $i(\mathbf{u}) = m$ and $(\mathbf{u}, \mathbf{s})$ are jointly typical. Then, we transmit $\mathbf{x} = \mathbf{u} - \alpha\mathbf{s}$.
Note that since $(\mathbf{u}, \mathbf{s})$ are jointly typical, from (2) we can see that $\mathbf{x} \perp \mathbf{s}$ and that $\mathbf{x}$ satisfies the power constraint. The received sequence $\mathbf{y}$ is given by

$\mathbf{y} = \mathbf{x} + \mathbf{s} + \mathbf{w}$  (3)

At the decoder, we look for a $\mathbf{u}$ that is jointly typical with $\mathbf{y}$ and declare $i(\mathbf{u})$ to be the decoded message. Since $R = \frac{1}{2}\log\left(1 + \frac{P}{\sigma^2}\right) - \epsilon$, the distortion in $\mathbf{v}$ is given by $D(R)$, where $D(\cdot)$ is the distortion-rate function. For a Gaussian source and mean squared error distortion, $D(R) = \sigma_v^2 2^{-2R}$ and, hence, the overall distortion can be made arbitrarily close to $\frac{\sigma_v^2}{1 + P/\sigma^2}$ by a proper choice of $\epsilon$ and $\delta$.

While the above scheme is straightforward, in the following three sections we show that there are several other joint source channel coding schemes that are also optimal; in fact, there are infinitely many. Although these schemes are all optimal when the channel SNR is known at the transmitter, their performance is in general different when there is an SNR mismatch. The joint source channel coding schemes discussed in the next sections have advantages over the separation-based scheme in such a situation.

B. Hybrid Digital Analog Costa Coding

Let us now describe a joint source-channel coding scheme where the source $\mathbf{v}$ is not explicitly quantized. We refer to this scheme as hybrid digital analog (HDA) Costa coding, for which the code construction, encoding and decoding procedures are as follows. We first define an auxiliary random variable $U$ given by

$U = X + \alpha S + \kappa V$  (4)

where $X \sim \mathcal{N}(0, P)$ and $X$, $S$ and $V$ are pairwise independent.

1) Codebook generation: Generate a random i.i.d. codebook $\mathcal{U}$ with $2^{NR_1}$ sequences, where each component of each codeword is Gaussian with zero mean and variance $P + \alpha^2 Q + \kappa^2\sigma_v^2$.

2) Encoding: Given $\mathbf{s}$ and $\mathbf{v}$, find a $\mathbf{u}$ such that $(\mathbf{u}, \mathbf{s}, \mathbf{v})$ are jointly typical with respect to the distribution obtained from the model in (4), and transmit $\mathbf{x} = \mathbf{u} - \alpha\mathbf{s} - \kappa\mathbf{v}$.
If such a $\mathbf{u}$ cannot be found, we declare an encoder failure. Let $P_{e1}$ be the probability of an encoder failure. From standard arguments on typicality and its extensions to the infinite alphabet case [15], it follows that $P_{e1} \to 0$ as $N \to \infty$ provided

$R_1 > I(U; S, V)$  (5)
$\quad = h(U) - h(U \mid S, V)$  (6)
$\quad = h(U) - h(X \mid S, V)$  (7)
$\quad = h(U) - h(X)$  (8)
$\quad = \frac{1}{2}\log\frac{P + \alpha^2 Q + \kappa^2\sigma_v^2}{P}$  (9)

where the steps follow because $X = U - \alpha S - \kappa V$ and $X \perp (S, V)$. Notice that when a $\mathbf{u}$ that is jointly typical with $\mathbf{s}$ and $\mathbf{v}$ is found, $\mathbf{x}$ satisfies the power constraint.

3) Decoding: The received signal is $\mathbf{y} = \mathbf{x} + \mathbf{s} + \mathbf{w}$. At the decoder, we look for a $\mathbf{u}$ that is jointly typical with $\mathbf{y}$. If such a unique $\mathbf{u}$ can be found, we declare $\mathbf{u}$ to be the decoder output; else, we declare a decoder failure. Let $P_{e2}$ be the probability of the event that the decoder output is not equal to the encoded $\mathbf{u}$ (this includes the probability of decoder failure as well as the probability of a decoder error). In order to analyze $P_{e2}$, consider the equivalent communication channel between $U$ and $Y$. Notice that we have in effect transmitted a codeword $\mathbf{u}$ from a random i.i.d. codebook for $U$ with $2^{NR_1}$ codewords through the equivalent channel whose output is $\mathbf{y}$.
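A quick numerical aside on this equivalent channel (an illustration, not part of the proof): with $\alpha = P/(P+\sigma^2)$, the choice made below, the effective noise $\kappa V + (1-\alpha)X - \alpha W$ is uncorrelated with $Y = X + S + W$. The sketch below checks this both analytically and by Monte Carlo, under the model of (4) with independent $X$, $S$, $V$, $W$ and the $\epsilon \to 0$ limit of $\kappa$; parameter values are arbitrary:

```python
import math
import random

# Check: with alpha = P/(P+sigma^2), the residual kappa*V + (1-alpha)*X - alpha*W
# is uncorrelated with Y = X + S + W (so conditioning on Y removes nothing).
P, sigma2, Q, sigma_v2 = 1.0, 1.0, 2.0, 1.0
alpha = P / (P + sigma2)
kappa = math.sqrt(P * P / ((P + sigma2) * sigma_v2))  # epsilon -> 0 limit

# Analytically: E[(kappa V + (1-alpha) X - alpha W) Y] = (1-alpha)P - alpha*sigma^2.
analytic = (1 - alpha) * P - alpha * sigma2
print(analytic)   # exactly 0 for this choice of alpha

rng = random.Random(1)
N, acc = 200_000, 0.0
for _ in range(N):
    x = rng.gauss(0.0, math.sqrt(P))
    s = rng.gauss(0.0, math.sqrt(Q))
    v = rng.gauss(0.0, math.sqrt(sigma_v2))
    w = rng.gauss(0.0, math.sqrt(sigma2))
    acc += (kappa * v + (1 - alpha) * x - alpha * w) * (x + s + w)
print(acc / N)    # empirical correlation, close to 0
```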
Again, from the extension of joint typicality to the infinite alphabet case, $P_{e2} \to 0$ as $N \to \infty$ provided that $I(U; Y) > R_1$, where

$I(U; Y) = h(U) - h(U \mid Y)$
$\quad = h(U) - h(U - \alpha Y \mid Y)$
$\quad = h(U) - h(X + \alpha S + \kappa V - \alpha X - \alpha S - \alpha W \mid Y)$
$\quad = h(U) - h(\kappa V + (1-\alpha)X - \alpha W \mid Y)$  (10)

Now, let us choose

$\alpha = \frac{P}{P + \sigma^2}$  (11)

$\kappa^2 = \frac{P^2}{(P + \sigma^2)\,\sigma_v^2} - \frac{\epsilon}{\sigma_v^2}$  (12)

For the above choice of $\alpha$, it can be seen that $E[(\kappa V + (1-\alpha)X - \alpha W)Y] = 0$ and, hence, (10) reduces to

$I(U; Y) = h(U) - h(\kappa V + (1-\alpha)X - \alpha W) = \frac{1}{2}\log\frac{P + \alpha^2 Q + \kappa^2\sigma_v^2}{P - \epsilon}$  (13)

Hence, $P_{e2}$ can be made arbitrarily small as long as

$R_1 < \frac{1}{2}\log\frac{P + \alpha^2 Q + \kappa^2\sigma_v^2}{P - \epsilon}$  (14)

Combining this with the condition for encoder failure, $P_{e1}$ and $P_{e2}$ can both be made arbitrarily small provided

$\frac{1}{2}\log\frac{P + \alpha^2 Q + \kappa^2\sigma_v^2}{P} < R_1 < \frac{1}{2}\log\frac{P + \alpha^2 Q + \kappa^2\sigma_v^2}{P - \epsilon}$  (15)

Therefore, by choosing an $\epsilon_1$ with $0 < \epsilon_1 < \epsilon$ and $R_1 = \frac{1}{2}\log\frac{P + \alpha^2 Q + \kappa^2\sigma_v^2}{P - \epsilon_1}$, we can satisfy (15) and make $P_{e1} \to 0$ and $P_{e2} \to 0$ as $N \to \infty$.

4) Estimation: If there is no decoding failure, we form the final estimate of $\mathbf{v}$ as an MMSE estimate of $\mathbf{v}$ from $[\mathbf{y}\ \mathbf{u}]$. After some algebra, this is given by

$\hat{\mathbf{v}} = \frac{\kappa\sigma_v^2}{P - \epsilon}(\mathbf{u} - \alpha\mathbf{y})$  (16)

The distortion is then given by

$E[(V - \hat{V})^2] = \frac{\sigma_v^2}{1 + P/\sigma^2}\cdot\frac{P}{P - \epsilon} \le \frac{\sigma_v^2}{1 + P/\sigma^2} + \delta(\epsilon)$  (17)

where $\delta(\epsilon)$ vanishes for arbitrarily small $\epsilon$. If an encoder or decoder failure was declared, we set the estimate of $\mathbf{v}$ to be the zero vector. However, as shown above, the probability of these events can be made arbitrarily small and, hence, they do not contribute to the overall distortion, which can therefore be made arbitrarily close to the optimal distortion achievable in the absence of the interference. We have thus presented a joint source channel coding scheme in the presence of an interference known only to the transmitter. The use of the term hybrid digital analog Costa coding needs some explanation.
The scheme is not entirely analog in that the auxiliary random variable is drawn from a discrete codebook. However, in contrast to digital Costa coding, the source is not explicitly quantized and is embedded into the transmitted signal $\mathbf{x}$ in an analog fashion. This is the reason for calling it HDA Costa coding, and it has some interesting consequences that are discussed in the following section. Another feature of the HDA Costa coding scheme is that it does not make use of binning; rather, it needs a single quantizer codebook that is also a good channel code. In practice, this may have some impact on the design, since there are ensembles of codes that are provably good quantizers and channel codes [13]. In the Gaussian case, lattices that are good both for coding and for quantization are known. The binning approach, however, requires a nesting condition: the fine code must be a good channel code, but it must contain a subcode (or coarse code) whose cosets are good quantizers. This may be a more difficult condition to obtain in practice.

C. Superimposed digital and HDA Costa coding scheme

Fig. 2. Encoder model for superimposed coding.

Recently, in [6], Bross, Lapidoth and Tinguely considered the problem of transmitting $N$ samples of a Gaussian source in $N$ uses of an AWGN channel, in the absence of the interferer. They showed that there are infinitely many superposition-based schemes, which contain the pure separation-based scheme and uncoded transmission as special cases. In this section, we show that the same is true in the presence of an interference as well, using the scheme shown in Fig. 2. The transmitted signal is a superposition of two signals $\mathbf{x}_c$ and $\mathbf{x}_{hc}$, which are the outputs of a digital Costa encoder and an HDA Costa encoder, respectively.
The source is first quantized at a rate $R < C$ using an optimal source code; let the quantization error be $\mathbf{e} = \mathbf{v} - \mathbf{v}^*$, where $\mathbf{v}^*$ is the reconstruction. The quantization error $\mathbf{e}$ has variance $\sigma_e^2 = \sigma_v^2 2^{-2R}$. The first stream in Fig. 2 is a digital Costa encoder that encodes the quantization index by treating $\mathbf{s}$ as interference and produces the signal $\mathbf{x}_c$, which has power $P_C$. The second stream is an HDA Costa encoder of rate $R$ that treats $\mathbf{s}$ and $\mathbf{x}_c$ as interference and produces $\mathbf{x}_{hc}$, which has power $P_{HC} = P - P_C$. The transmitted signal is the superposition (sum) of $\mathbf{x}_c$ and $\mathbf{x}_{hc}$.

In the digital Costa encoder in the first stream, the auxiliary random variable is given by $\mathbf{u}_c = \mathbf{x}_c + \alpha_c\mathbf{s}$ with $\mathbf{x}_c \perp \mathbf{s}$. A power of $P_C = (P + \sigma^2)(1 - 2^{-2R})$ is used in the first stream and $\alpha_c$ is chosen as $\frac{P_C}{P_C + P_{HC} + \sigma^2}$. Note that this corresponds to treating $\mathbf{x}_{hc}$ as noise in addition to the channel noise. In the second stream, the quantization error $\mathbf{e}$ is encoded using an HDA Costa coding scheme with a power of $P_{HC} = P - P_C = (P + \sigma^2)2^{-2R} - \sigma^2$. Note that since $R < C$, the power $P_{HC}$ is always positive. The auxiliary random variable is chosen as $\mathbf{u}_{hc} = \mathbf{x}_{hc} + \alpha_{hc}(\mathbf{x}_c + \mathbf{s}) + \kappa\mathbf{e}$, where $\mathbf{x}_c + \mathbf{s}$ acts as the net interference. Hence, $\mathbf{x}_{hc}$ is chosen to be independent of $\mathbf{x}_c$, $\mathbf{s}$ and $\mathbf{e}$, and $\alpha_{hc}$ is chosen to be $\frac{P_{HC}}{P_{HC} + \sigma^2}$. $\kappa$ is chosen as in (12), which gives $\kappa^2 = \frac{P_{HC}^2}{(P_{HC} + \sigma^2)\,\sigma_v^2 2^{-2R}} - \frac{\epsilon}{\sigma_v^2 2^{-2R}}$. At the decoder, the quantization index from the first stream is decoded first and the reconstruction $\mathbf{v}^*$ is obtained. Then, an estimate of the quantization error $\mathbf{e}$ is obtained from the second stream using the HDA Costa decoder. The overall distortion is the distortion in estimating $\mathbf{e}$.
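The HDA Costa building block used in this second stream can be sanity-checked numerically. The following sketch (not from the paper) simulates the statistical model $U = X + \alpha S + \kappa V$ of (4) with independent components, applies the estimate (16) in the $\epsilon \to 0$ limit, and confirms that the distortion is $\sigma_v^2/(1 + P/\sigma^2)$; parameter values are arbitrary:

```python
import math
import random

# Monte Carlo check of the HDA Costa estimate (16), modeling the jointly
# typical codeword as U = X + alpha*S + kappa*V with X, S, V, W independent.
P, sigma2, Q, sigma_v2 = 1.0, 1.0, 2.0, 1.0
alpha = P / (P + sigma2)
kappa = math.sqrt(P * P / ((P + sigma2) * sigma_v2))  # epsilon -> 0 limit of (12)

rng = random.Random(2)
N, err = 200_000, 0.0
for _ in range(N):
    x = rng.gauss(0.0, math.sqrt(P))
    s = rng.gauss(0.0, math.sqrt(Q))
    v = rng.gauss(0.0, math.sqrt(sigma_v2))
    w = rng.gauss(0.0, math.sqrt(sigma2))
    u = x + alpha * s + kappa * v                      # auxiliary variable, eq. (4)
    y = x + s + w                                      # channel output
    v_hat = (kappa * sigma_v2 / P) * (u - alpha * y)   # estimate, eq. (16)
    err += (v - v_hat) ** 2
D_hda = err / N
print(D_hda, sigma_v2 / (1 + P / sigma2))   # both ~ 0.5 for these values
```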
Using the analysis of the HDA Costa scheme in Section II-B, this can be seen to be

$D = \frac{\sigma_e^2}{1 + \frac{(P + \sigma^2)2^{-2R} - \sigma^2}{\sigma^2}} + \delta(\epsilon) = \frac{\sigma_v^2}{1 + P/\sigma^2} + \delta(\epsilon)$  (18)

By choosing $\epsilon$ arbitrarily small we can make $\delta(\epsilon) \to 0$ and achieve a distortion of $D = \frac{\sigma_v^2}{1 + P/\sigma^2}$, which is the optimal distortion. Note that for any source coding rate $R$ chosen in the first stream, the resulting distortion is optimal. By varying $R$, we obtain an infinite family of optimal joint source channel coding schemes.

D. Generalized Hybrid Costa coding

In the previous section, we described a superposition technique. In this section we show a scheme that does not explicitly perform superposition. Moreover, this introduces an interesting scheme that is intermediate between HDA Costa coding, which has no bins, and digital Costa coding, whose number of bins corresponds to the capacity of the channel. Once again we quantize the source $\mathbf{v}$ to $\mathbf{v}^*$ at a rate $R$ strictly less than the channel capacity, using an optimal vector quantizer. Let $\mathbf{e} = \mathbf{v} - \mathbf{v}^*$ be the quantization error vector. Note that for an optimal quantizer, as the rate-distortion limit is approached, the quantization error $\mathbf{e}$ becomes Gaussian. We next define an auxiliary random variable $U$ given by

$U = X + \alpha S + \kappa_1 E$  (19)

where $X \sim \mathcal{N}(0, P)$, $E \sim \mathcal{N}(0, \sigma_v^2 2^{-2R})$, and $X$, $S$ and $E$ are independent of each other. $\alpha$ and $\kappa_1$ are constants whose choice is discussed below.

1) Codebook generation: Generate a random i.i.d. codebook $\mathcal{U}$ with $2^{NI(U;Y)}$ sequences, where each component of each codeword is Gaussian with zero mean and variance $P + \alpha^2 Q + \kappa_1^2\sigma_v^2 2^{-2R}$. These codewords are uniformly distributed over $2^{NR}$ bins, and this assignment is shared between the encoder and the decoder.

2) Encoding: Let $m$ be the quantization index corresponding to the quantized source $\mathbf{v}^*$. Let $i(\mathbf{u})$ denote the index of the bin that contains $\mathbf{u}$.
For a given $m$, find a $\mathbf{u}$ such that $i(\mathbf{u}) = m$ and $(\mathbf{u}, \mathbf{s}, \mathbf{e})$ are jointly typical with respect to the distribution of the model (19). We then transmit the vector $\mathbf{x} = \mathbf{u} - \alpha\mathbf{s} - \kappa_1\mathbf{e}$. Note that since $(\mathbf{u}, \mathbf{s}, \mathbf{e})$ are jointly typical, from (19) we can see that $\mathbf{x} \perp (\mathbf{s}, \mathbf{e})$ and that $\mathbf{x}$ satisfies the power constraint.

3) Decoding: The received signal is $\mathbf{y} = \mathbf{x} + \mathbf{s} + \mathbf{w}$. At the decoder, we look for a $\mathbf{u}$ that is jointly typical with $\mathbf{y}$. If such a unique $\mathbf{u}$ can be found, we declare $\mathbf{u}$ to be the decoder output; else, we declare a decoder failure. Next we make an estimate of $\mathbf{e}$ from $\mathbf{u}$ and $\mathbf{y}$. By the usual Gelfand-Pinsker coding arguments, we require $R < I(U; Y) - I(U; S, E)$. Note that

$I(U; Y) - I(U; S, E) = h(U \mid S, E) - h(U \mid Y)$
$\quad = h(X) - h(U - \alpha Y \mid Y)$
$\quad = h(X) - h(\kappa_1 E + (1-\alpha)X - \alpha W \mid Y)$
$\quad \overset{(a)}{=} h(X) - h(\kappa_1 E + (1-\alpha)X - \alpha W)$
$\quad = \frac{1}{2}\log\left(\frac{P}{\kappa_1^2\sigma_v^2 2^{-2R} + (1-\alpha)^2 P + \alpha^2\sigma^2}\right)$  (20)
$\quad \overset{(b)}{>} R$

In (20) we choose $\alpha = \frac{P}{P + \sigma^2}$ and $\kappa_1^2 = \frac{P}{P + \sigma^2}\cdot\frac{(P + \sigma^2) - \sigma^2 2^{2R}}{\sigma_v^2} - \epsilon\,\frac{(P + \sigma^2) - \sigma^2 2^{2R}}{P\sigma_v^2}$. The choice of $\alpha$ ensures that $(1-\alpha)X - \alpha W$ is orthogonal to $Y$, which gives the equality in (a). $\kappa_1$ is chosen as above to satisfy the inequality in (b). This shows that we can decode the codeword $\mathbf{u}$ with very high probability and recover the message $m = i(\mathbf{u})$ and $\mathbf{v}^*$.

4) Estimation: If there is no decoding failure, we form the final estimate of $\mathbf{v}$ as an MMSE estimate of $\mathbf{v}$ from $[\mathbf{v}^*\ \mathbf{u}\ \mathbf{y}]$. The estimate can be obtained as follows. Define $\sigma_e^2 = \sigma_v^2 2^{-2R}$. Let $\Lambda$ be the covariance matrix of $[V^*\ U\ Y]^T$ and let $\Gamma$ be the correlation vector between $V$ and $[V^*\ U\ Y]^T$. Then $\Lambda$ and $\Gamma$ are given by

$\Lambda = \begin{pmatrix} \sigma_v^2 - \sigma_e^2 & 0 & 0 \\ 0 & P + \kappa_1^2\sigma_e^2 + \alpha^2 Q & P + \alpha Q \\ 0 & P + \alpha Q & P + Q + \sigma^2 \end{pmatrix}$ and $\Gamma = \begin{pmatrix} \sigma_v^2 - \sigma_e^2 & \kappa_1\sigma_e^2 & 0 \end{pmatrix}^T$.

The coefficients of the linear MMSE estimate are given by $\Lambda^{-1}\Gamma$, and the minimum mean-squared error is $D = \sigma_v^2 - \Gamma^T\Lambda^{-1}\Gamma = \frac{\sigma_v^2}{1 + P/\sigma^2} + \delta(\epsilon)$,
where $\delta(\epsilon) \to 0$ as $\epsilon \to 0$. Thus, in the limit $\epsilon \to 0$, $D = \frac{\sigma_v^2}{1 + P/\sigma^2}$. It must be noted that this scheme is intermediate between the digital Costa coding scheme, whose maximum possible number of bins corresponds to the capacity of the channel, and the HDA Costa coding scheme, which has no bins. Thus we obtain a family of schemes with a varying number of bins for the Gaussian channel. The generalized hybrid Costa coding scheme appears to be closely related to the superimposed digital and HDA Costa coding schemes. The subtle difference, however, is that in the generalized hybrid Costa coding scheme the transmitted signal $X$ is not a superposition of two streams.

III. TRANSMISSION OF A GAUSSIAN SOURCE THROUGH A CHANNEL WITH SIDE INFORMATION AVAILABLE ONLY AT THE RECEIVER

In this section we consider the problem of transmitting a discrete-time analog source over a Gaussian noise channel when the receiver has some side information about the source. This problem is a dual of the problem considered in the previous section and is considered here for the sake of completeness. The system model is as follows. Let $\mathbf{v} \in \mathbb{R}^N$ be the discrete-time analog source, where the $V$'s are independent Gaussian random variables with $V \sim \mathcal{N}(0, \sigma_v^2)$. Let $\mathbf{v}_0 \in \mathbb{R}^N$ be the side information that is known only at the receiver. The correlation between the source and the side information is modeled as

$V = V_0 + Z$  (21)

where $Z \sim \mathcal{N}(0, \sigma_z^2)$ and $V_0$ is i.i.d. Gaussian; $V_0$ and $Z$ are mutually independent. The source $\mathbf{v}$ must be encoded into $\mathbf{x}$ and transmitted over an AWGN channel; the received signal is

$\mathbf{y} = \mathbf{x} + \mathbf{w}$  (22)

where $\mathbf{x}$ satisfies the power constraint $P$ and $\mathbf{w}$ is AWGN with noise variance $\sigma^2$. The following schemes can be shown to be optimal for this case.
A. Separation Based Scheme with Wyner-Ziv Coding (Digital Wyner-Ziv Coding)

One strategy is to use a separation scheme with an optimal Wyner-Ziv code of rate $R$ followed by a channel code. We refer to this scheme as the digital Wyner-Ziv scheme. We briefly explain it and then establish the information theoretic model for the HDA Wyner-Ziv coding scheme.

Fig. 3. Block diagram of the joint source channel coding problem with side information known only at the receiver.

If the side information were available both at the encoder and at the receiver, the best possible distortion would be $D = \frac{\sigma_z^2}{1 + P/\sigma^2}$. The same distortion can be achieved using the following scheme, a direct consequence of Wyner and Ziv's result [16]. Let $U$ be an auxiliary random variable given by

$U = \sqrt{\alpha}\,V + B$  (23)

where $\alpha = 1 - \frac{D}{\sigma_z^2} = \frac{P}{P + \sigma^2}$ and $B \sim \mathcal{N}(0, D)$. We create an $N$-length i.i.d. Gaussian codebook $\mathcal{U}$ with $2^{NI(U;V)}$ codewords, where each component of each codeword is Gaussian with zero mean and variance $\alpha\sigma_v^2 + D$, and evenly distribute them over $2^{NR}$ bins. Let $i(\mathbf{u})$ be the index of the bin containing $\mathbf{u}$. For each $\mathbf{v}$, find a $\mathbf{u}$ such that $(\mathbf{u}, \mathbf{v})$ are jointly typical. The index $i(\mathbf{u})$ is the Wyner-Ziv source coded index. It is encoded using an optimal channel code of rate arbitrarily close to $\frac{1}{2}\log(1 + \frac{P}{\sigma^2})$ and transmitted over the channel. At the receiver, decoding of the index $i(\mathbf{u})$ is possible with high probability since an optimal codebook for the channel is used. Next, for the decoded $i(\mathbf{u})$, we look for a $\mathbf{u}$ in the bin with index $i(\mathbf{u})$ such that $(\mathbf{u}, \mathbf{v}_0)$ are jointly typical. From $\mathbf{v}_0$ and the decoded $\mathbf{u}$, we estimate the source $\mathbf{v}$ as

$\hat{\mathbf{v}} = \mathbf{v}_0 + \sqrt{\alpha}(\mathbf{u} - \sqrt{\alpha}\,\mathbf{v}_0)$  (24)

This yields the optimal distortion $D$.
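The estimate (24) can be sanity-checked numerically. The sketch below (not from the paper) models the jointly typical codeword as $U = \sqrt{\alpha}\,V + B$ with $B \sim \mathcal{N}(0, D)$ independent of $V$, and confirms that the distortion equals $D = \sigma_z^2/(1 + P/\sigma^2)$; parameter values are arbitrary:

```python
import math
import random

# Monte Carlo check of the Wyner-Ziv estimate (24), under the statistical
# model U = sqrt(alpha)*V + B with B ~ N(0, D) independent of V.
P, sigma2, sigma_v2, sigma_z2 = 1.0, 1.0, 2.0, 0.5
D = sigma_z2 / (1 + P / sigma2)     # target distortion
alpha = 1 - D / sigma_z2            # equals P/(P+sigma^2)

rng = random.Random(3)
N, err = 200_000, 0.0
for _ in range(N):
    v0 = rng.gauss(0.0, math.sqrt(sigma_v2 - sigma_z2))      # side information
    v = v0 + rng.gauss(0.0, math.sqrt(sigma_z2))             # source, eq. (21)
    u = math.sqrt(alpha) * v + rng.gauss(0.0, math.sqrt(D))  # codeword, eq. (23)
    v_hat = v0 + math.sqrt(alpha) * (u - math.sqrt(alpha) * v0)  # eq. (24)
    err += (v - v_hat) ** 2
D_emp = err / N
print(D_emp, D)   # both ~ 0.25 for these values
```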
B. Hybrid Digital Analog Wyner-Ziv Coding

In this section, we discuss a different joint source channel coding scheme that does not involve quantizing the source explicitly. This scheme is quite similar to the modulo lattice modulation scheme in [2], the difference being that a nested lattice is not used. The auxiliary random variable $U$ is generated as

$U = X + \kappa V$  (25)

where $\kappa^2 = \frac{P^2}{(P + \sigma^2)\,\sigma_z^2} - \frac{\epsilon}{\sigma_z^2}$ and $X \sim \mathcal{N}(0, P)$.

1) Codebook generation: Generate a random i.i.d. codebook $\mathcal{U}$ with $2^{NR_1}$ sequences, where each component of each codeword is Gaussian with zero mean and variance $P + \kappa^2\sigma_v^2$. This codebook is shared between the encoder and the decoder.

2) Encoding: For a given $\mathbf{v}$, find a $\mathbf{u}$ such that $(\mathbf{u}, \mathbf{v})$ are jointly typical and transmit $\mathbf{x} = \mathbf{u} - \kappa\mathbf{v}$. This is possible with arbitrarily high probability if $R_1 > I(U; V)$.

3) Decoding: The received signal is $\mathbf{y} = \mathbf{x} + \mathbf{w}$. Find a $\mathbf{u}$ such that $(\mathbf{v}_0, \mathbf{y}, \mathbf{u})$ are jointly typical. A unique such $\mathbf{u}$ can be found with arbitrarily high probability if $R_1 < I(U; V_0, Y)$. We next show that we can choose an $R_1$ satisfying $I(U; V) < R_1 < I(U; V_0, Y)$. This requires $I(U; V) < I(U; V_0, Y)$, which can be shown as follows:

$I(U; V_0, Y) = h(U) - h(U \mid V_0, Y)$
$\quad = h(U) - h(U - \kappa V_0 - \alpha Y \mid V_0, Y)$
$\quad = h(U) - h(\kappa Z + (1-\alpha)X - \alpha W \mid V_0, Y)$
$\quad \overset{(a)}{=} h(U) - h(\kappa Z + (1-\alpha)X - \alpha W)$
$\quad = \frac{1}{2}\log\left(\frac{P + \kappa^2\sigma_v^2}{\kappa^2\sigma_z^2 + (1-\alpha)^2 P + \alpha^2\sigma^2}\right)$
$\quad \overset{(b)}{=} \frac{1}{2}\log\left(\frac{P + \kappa^2\sigma_v^2}{P}\right) + \delta(\epsilon)$
$\quad = h(U) - h(U \mid V) + \delta(\epsilon) = I(U; V) + \delta(\epsilon)$  (26)

In (26), (a) follows because $\kappa Z + (1-\alpha)X - \alpha W$ is independent of $Y$ and $V_0$, and (b) follows because we can always find a $\delta(\epsilon) > 0$ for the choice of $\kappa^2 = \frac{P^2}{(P + \sigma^2)\,\sigma_z^2} - \frac{\epsilon}{\sigma_z^2}$. Hence, knowing $\mathbf{v}_0$, $\mathbf{u}$ and $\mathbf{y}$, we can estimate $\mathbf{v}$. Since all random variables are Gaussian, the optimal estimate is a linear MMSE estimate, which can be computed as follows.
Let $\Lambda$ be the covariance matrix of $[V_0\ U\ Y]^T$ and let $\Gamma$ be the correlation vector between $V$ and $[V_0\ U\ Y]^T$. $\Lambda$ and $\Gamma$ are given by

$\Lambda = \begin{pmatrix} \sigma_v^2 - \sigma_z^2 & \kappa(\sigma_v^2 - \sigma_z^2) & 0 \\ \kappa(\sigma_v^2 - \sigma_z^2) & P + \kappa^2\sigma_v^2 & P \\ 0 & P & P + \sigma^2 \end{pmatrix}$ and $\Gamma = \begin{pmatrix} \sigma_v^2 - \sigma_z^2 & \kappa\sigma_v^2 & 0 \end{pmatrix}^T$.

The coefficients of the linear MMSE estimate are given by $\Lambda^{-1}\Gamma$, which yields the optimal MMSE estimate

$\hat{\mathbf{v}} = \mathbf{v}_0 + \frac{\kappa\sigma_z^2}{P}(\mathbf{u} - \kappa\mathbf{v}_0 - \alpha\mathbf{y})$  (27)

The distortion $D$ is given by

$D = E[(V - \hat{V})^2] = E\left[\left(V - V_0 - \frac{\kappa\sigma_z^2}{P}(U - \kappa V_0 - \alpha Y)\right)^2\right]$
$\quad = E\left[\left(Z - \frac{\kappa\sigma_z^2}{P}(\kappa Z + X - \alpha Y)\right)^2\right]$
$\quad = E\left[\left(\left(1 - \frac{\kappa^2\sigma_z^2}{P}\right)Z - \frac{\kappa\sigma_z^2}{P}\big((1-\alpha)X - \alpha W\big)\right)^2\right]$
$\quad \overset{(a)}{=} \frac{\sigma_z^2}{1 + P/\sigma^2} + \delta(\epsilon)$  (28)

Here, (a) follows by using the appropriate values of $\kappa$ and $\alpha$. We once again obtain the optimal distortion by making $\epsilon$ arbitrarily small so that $\delta(\epsilon) \to 0$.

It is instructive to compare the performance of this scheme with that of the following naive scheme, which would be optimal in the absence of side information at the receiver. In the naive scheme, $\mathbf{v}$ is transmitted directly (analog transmission). At the receiver, an MMSE estimate of $\mathbf{v}$ is formed from the received signal $\mathbf{y}$ and the available side information $\mathbf{v}_0$. The distortion of this naive scheme can be seen to be $D_{\mathrm{naive}} = \sigma_z^2/(1 + (P/\sigma^2)\sigma_z^2)$. Notice that $\frac{\partial D_{\mathrm{naive}}}{\partial\sigma_z^2}\big|_{\sigma_z^2 = 0} = 1$, whereas for the Wyner-Ziv scheme $\frac{\partial D}{\partial\sigma_z^2}\big|_{\sigma_z^2 = 0} = \frac{1}{1 + P/\sigma^2} < 1$. At $\sigma_z^2 = 0$, both $D_{\mathrm{naive}}$ and $D$ are zero; i.e., the optimal scheme and the naive scheme approach zero distortion with different slopes.

C. Superimposed digital and HDA Wyner-Ziv scheme

The above results can also be extended to a form of superimposed digital and analog coding, similar to the superimposed digital and HDA Costa coding case discussed in Section II-C. We once again have two streams, as shown in Fig. 4.
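Before detailing the two streams, the HDA Wyner-Ziv estimate (27) and the distortion (28) can be sanity-checked by a quick Monte Carlo (a sketch under the Gaussian model of (25) with independent $X$, $V$, $W$ and the $\epsilon \to 0$ limit of $\kappa$; not from the paper, parameter values arbitrary):

```python
import math
import random

# Monte Carlo check of the HDA Wyner-Ziv estimate (27): under U = X + kappa*V
# with y = x + w, the distortion collapses to sigma_z^2 / (1 + P/sigma^2).
P, sigma2, sigma_v2, sigma_z2 = 1.0, 1.0, 2.0, 0.5
alpha = P / (P + sigma2)
kappa = math.sqrt(P * P / ((P + sigma2) * sigma_z2))  # epsilon -> 0 limit

rng = random.Random(4)
N, err = 200_000, 0.0
for _ in range(N):
    v0 = rng.gauss(0.0, math.sqrt(sigma_v2 - sigma_z2))  # side information
    v = v0 + rng.gauss(0.0, math.sqrt(sigma_z2))         # source, eq. (21)
    x = rng.gauss(0.0, math.sqrt(P))
    w = rng.gauss(0.0, math.sqrt(sigma2))
    u = x + kappa * v                                    # eq. (25)
    y = x + w                                            # eq. (22)
    v_hat = v0 + (kappa * sigma_z2 / P) * (u - kappa * v0 - alpha * y)  # eq. (27)
    err += (v - v_hat) ** 2
D_emp = err / N
print(D_emp, sigma_z2 / (1 + P / sigma2))   # both ~ 0.25 for these values
```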
The first stream uses a rate-$R$ Wyner-Ziv code to quantize the source, assuming the side information $\mathbf{v}_0$ is known at the receiver, and the discrete index is encoded using an optimal channel code to produce the codeword $\mathbf{x}_1$. The power allocated to this stream is $P_{WZ} = (P + \sigma^2)(1 - 2^{-2R})$. The second stream uses the HDA Wyner-Ziv scheme and produces the output $\mathbf{x}_2$. The auxiliary random variable of the HDA scheme is given by

$U = \kappa_1 V + X_2$  (29)

with $X_2 \sim \mathcal{N}(0, P_{HWZ})$, where $P_{HWZ} = (P + \sigma^2)2^{-2R} - \sigma^2$ and $X_2$ and $V$ are independent. We also choose $\kappa_1^2 = \frac{P_{HWZ}^2}{(P_{HWZ} + \sigma^2)\,\sigma_e^2} - \frac{\epsilon}{\sigma_e^2}$, where $\sigma_e^2 = \sigma_z^2 2^{-2R}$. The two streams ($\mathbf{x}_1$ and $\mathbf{x}_2$) are superimposed and transmitted through the channel. The received signal is $\mathbf{y} = \mathbf{x}_1 + \mathbf{x}_2 + \mathbf{w}$. At the receiver, $\mathbf{x}_1$ is decoded treating $\mathbf{x}_2 + \mathbf{w}$ as independent noise, and this gives the Wyner-Ziv encoded bits (index). This, along with the side information $\mathbf{v}_0$, can be used to make an estimate of the source $\mathbf{v}$, which we call $\tilde{\mathbf{v}}$. The random variables corresponding to $\mathbf{v}$ and $\tilde{\mathbf{v}}$ are related as

$V = \tilde{V} + \tilde{Z}$  (30)

with $\tilde{Z}$ having variance $\sigma_z^2 2^{-2R}$. Once the digital part is decoded and canceled from the received signal, we obtain an equivalent channel for the HDA Wyner-Ziv scheme with power constraint $P_{HWZ}$ and channel noise variance $\sigma^2$. We then make a final estimate of $\mathbf{v}$ using an HDA Wyner-Ziv decoder from the new side information $\tilde{\mathbf{v}}$, the equivalent channel observation $\mathbf{y} - \mathbf{x}_1$, and the decoded $\mathbf{u}$. Notice that since the choice of $\kappa_1^2 = \frac{P_{HWZ}^2}{(P_{HWZ} + \sigma^2)\,\sigma_e^2} - \frac{\epsilon}{\sigma_e^2}$ with $\sigma_e^2 = \sigma_z^2 2^{-2R}$ is designed for the side information $\tilde{\mathbf{v}}$, decoding of $\mathbf{u}$ succeeds with arbitrarily high probability. The achievable distortion is then given as follows.

Fig. 4. Block diagram of the encoder of the superimposed digital and HDA Wyner-Ziv scheme.
\[
D \overset{(a)}{=} \frac{\sigma_e^2\,\sigma^2}{P_{HWZ}+\sigma^2} + \delta(\epsilon) = \frac{\sigma_z^2\, 2^{-2R}\,\sigma^2}{P_{HWZ}+\sigma^2} + \delta(\epsilon) \overset{(b)}{=} \sigma_z^2\,\frac{P_{HWZ}+\sigma^2}{P+\sigma^2}\cdot\frac{\sigma^2}{P_{HWZ}+\sigma^2} + \delta(\epsilon) = \frac{\sigma_z^2}{1+\frac{P}{\sigma^2}} + \delta(\epsilon) \quad (31)
\]
Here in (31), (a) follows since we assume that the first stream is decoded with high probability and apply the results of HDA Wyner-Ziv decoding with the side information $\tilde{v}$; (b) follows since $P_{HWZ} = (P+\sigma^2)2^{-2R} - \sigma^2$, which gives $2^{-2R} = (P_{HWZ}+\sigma^2)/(P+\sigma^2)$. The optimal distortion $\frac{\sigma_z^2}{1+P/\sigma^2}$ is obtained by making $\epsilon$ arbitrarily small, so that $\delta(\epsilon)\to 0$. Notice that for any rate $R$, $0 \le R < C$, where $C$ is the capacity of the AWGN channel, there is a corresponding power allocation $P_{HWZ} = (P+\sigma^2)2^{-2R} - \sigma^2$ for which the overall scheme is optimal. Thus, there are infinitely many optimal schemes, with digital Wyner-Ziv corresponding to $P_{HWZ} = 0$ and HDA Wyner-Ziv corresponding to $P_{HWZ} = P$ and $R = 0$. Further, we would like to mention that there is another way to get a family of optimal schemes using the HDA Wyner-Ziv scheme. Here, the source $v$ is encoded using an HDA Wyner-Ziv encoder into the sequence $x$. The auxiliary random variable $U$ is given by
\[ U = \kappa V + X \quad (32) \]
where $\kappa^2 = \frac{P^2}{(P+\sigma^2)\sigma_z^2} - \frac{\epsilon}{\sigma_z^2}$. The sequence $x$ can be treated as an i.i.d Gaussian source and, hence, the family of schemes proposed by Bross, Lapidoth and Tinguely [6] can be applied to $x$. The scheme proposed in [6] quantizes the analog source, which in this case is $x$, to a quantization index that is sent over the Gaussian channel along with the uncoded analog source (here $x$) with the appropriate power scaling. At the receiver, we can obtain an optimal estimate of $x$ by first decoding the quantized index and then forming an estimate of the analog source. Notice that the HDA Wyner-Ziv receiver only requires an optimal MMSE estimate of $x$, which can be obtained using the family of schemes in [6]. Hence the resulting distortion in $v$ is still optimal.
To establish this claim, we need to show that $u$ can be decoded with arbitrarily high probability and that an optimal estimate of $v$ can be made using $u$ and the MMSE estimate $\hat{x}$. We show below that $I(U;V) < I(U;V_0,\hat{X})$. Hence, we can choose a codebook for $u$ with $2^{nR_1}$ codewords such that $I(U;V) < R_1 < I(U;V_0,\hat{X})$. Since $R_1 > I(U;V)$, we can find a $u$ that is jointly typical with $v$ with probability close to 1, and since $R_1 < I(U;V_0,\hat{X})$, $u$ can be decoded with high probability from $(v_0,\hat{x})$.
\[
\begin{aligned}
I(U;V_0,\hat{X}) &= h(U) - h(U\,|\,V_0,\hat{X}) = h(U) - h(U - \kappa V_0 - \hat{X}\,|\,V_0,\hat{X}) = h(U) - h(\kappa Z + X - \hat{X}\,|\,\hat{X},V_0) \\
&\overset{(a)}{=} h(U) - h(\kappa Z + X - \hat{X}) \overset{(b)}{=} \frac{1}{2}\log\left(\frac{P+\kappa^2\sigma_v^2}{\kappa^2\sigma_z^2+\alpha\sigma^2}\right) = \frac{1}{2}\log\left(\frac{P+\kappa^2\sigma_v^2}{P}\right) + \delta(\epsilon) \\
&= h(U) - h(U|V) + \delta(\epsilon) = I(U;V) + \delta(\epsilon) \quad (33)
\end{aligned}
\]
In (33), (a) follows because $(X-\hat{X})$ is orthogonal to $\hat{X}$ and hence $(\kappa Z + X - \hat{X})$ is independent of $\hat{X}$ and $V_0$; (b) follows because $X-\hat{X}$ is Gaussian with variance $\alpha\sigma^2$ and is orthogonal to $Z$. The estimate of $v$ is then given by
\[ \hat{v} = v_0 + \frac{\kappa\sigma_z^2}{P}\,(u - \kappa v_0 - \hat{x}) \quad (34) \]
The resulting distortion can be obtained by following steps similar to those in (28) and can be shown to be optimal.

IV. TRANSMISSION OF A GAUSSIAN SOURCE WITH INTERFERENCE AT THE TRANSMITTER AND SIDE INFORMATION AT THE RECEIVER

In this section, we consider the problem of transmitting a Gaussian source $v$ through an AWGN channel with channel noise variance $\sigma^2$ in the presence of an interference $s$ known only at the transmitter and side information $v_0$ known only at the receiver. The side information $v_0$ is assumed to be related to the source $v$ according to $V = V_0 + Z$, where $Z \sim \mathcal{N}(0,\sigma_z^2)$ is independent of $V_0$.
A similar model has been considered by Merhav and Shamai [11] for a more general setup where the source and side information are not assumed to be Gaussian. They show that a separation-based approach of Wyner-Ziv coding followed by Gelfand-Pinsker coding is optimal. Here, we propose a joint source channel coding scheme for the case when the source and channel noise are Gaussian. The proposed scheme is easily obtained by combining the results from the previous two sections. It must be noted that a similar joint source channel coding scheme using nested lattices and dither has been shown in [7]; our scheme, however, is based only on random codebooks. To establish our scheme, we combine the results from the previous two sections as follows. Choose the auxiliary random variable $U$ such that
\[ U = X + \alpha S + \kappa V \quad (35) \]
with $\kappa^2 = \frac{P^2}{(P+\sigma^2)\sigma_z^2} - \frac{\epsilon}{\sigma_z^2}$ and $\alpha = \frac{P}{P+\sigma^2}$. Further, let $X \sim \mathcal{N}(0,P)$, $S \sim \mathcal{N}(0,Q)$ and $V \sim \mathcal{N}(0,\sigma_v^2)$, and let $X$, $S$ and $V$ be pairwise independent. A codebook $\mathcal{U}$ is obtained by generating $2^{nR_1}$ code sequences for $u$, and this is shared between the encoder and decoder. At the encoder, the source $v$ is encoded by choosing an $x$ that is jointly typical with $u$, $v$ and $s$. Such a $u$ exists with high probability if we have chosen $R_1 > I(U;S,V)$. Now $x$ is transmitted over the channel. The received signal vector $y$ is given by $y = x + s + w$. At the decoder, $u$ is decoded by looking for a $u$ that is jointly typical with $y$ and the side information $v_0$. Using standard arguments on joint typicality, it can be seen that a unique such $u$ exists with high probability if $R_1 < I(U;Y,V_0)$. We now show that $I(U;S,V) < I(U;Y,V_0)$. This implies that there exists an $R_1$ such that $I(U;S,V) < R_1 < I(U;Y,V_0)$, which satisfies the requirements at both the encoder and the decoder.
\[
\begin{aligned}
I(U;Y,V_0) &= h(U) - h(U\,|\,Y,V_0) = h(U) - h(U - \alpha Y - \kappa V_0\,|\,Y,V_0) = h(U) - h(\kappa Z + (1-\alpha)X - \alpha W\,|\,Y,V_0) \\
&\overset{(a)}{=} h(U) - h(\kappa Z + (1-\alpha)X - \alpha W) = \frac{1}{2}\log\left(\frac{P+\alpha^2 Q+\kappa^2\sigma_v^2}{P}\right) + \delta(\epsilon) = I(U;S,V) + \delta(\epsilon) \quad (36)
\end{aligned}
\]
where (a) follows since $\kappa Z + (1-\alpha)X - \alpha W$ is orthogonal to $Y$ and $V_0$. Then an optimal linear MMSE estimate of $v$ is formed from the side information $v_0$, the received vector $y$ and the vector $u$. By using the same argument as in Section III-B, the MMSE estimate is given by
\[ \hat{v} = v_0 + \frac{\kappa\sigma_z^2}{P}\,(u - \kappa v_0 - \alpha y) \quad (37) \]
The resulting distortion can be obtained by following steps similar to (28) and can be seen to be $D = \frac{\sigma_z^2}{1+\frac{P}{\sigma^2}}$, which is the optimal distortion.

V. ANALYSIS OF THE SCHEMES FOR SNR MISMATCH

In this section, we consider the performance of the above JSCC schemes for the case of SNR mismatch, where the scheme is designed to be optimal for a channel noise variance of $\sigma^2$ but the actual noise variance is $\sigma_a^2$. Separation-based digital schemes suffer from a pronounced threshold effect: when the channel SNR is worse than the designed SNR, the index cannot be decoded, and when the channel SNR is better than the designed SNR, the distortion is limited by the quantization and does not improve. However, the hybrid digital analog schemes considered here offer better performance in this situation. Let us consider the joint source channel coding setup with side information at both the transmitter and receiver and $\sigma_a^2 < \sigma^2$. We can decode $u$ at the receiver when the SNR is better than the designed SNR and make an estimate of the source from the various observations at the receiver, as shown below.

Fig. 5.
Performance of the different Costa coding schemes for the joint source channel coding problem ($-10\log_{10}$ distortion versus $10\log_{10}(1/\sigma_a^2)$; curves for HDA Costa, digital Costa, and generalized HDA Costa, each for the source and for the interference).
\[ U = X + \alpha S + \kappa_w V \quad (38) \]
\[ V = V_0 + Z \quad (39) \]
\[ Y = X + S + W_a \quad (40) \]
where $\kappa_w = \sqrt{\frac{P^2}{(P+\sigma^2)\sigma_z^2}}$, $\alpha = \frac{P}{P+\sigma^2}$, $S \sim \mathcal{N}(0,Q)$ and $Z \sim \mathcal{N}(0,\sigma_z^2)$. From now on, we drop the $\epsilon$'s in $\kappa_w$ to improve clarity. Note that $\alpha$ depends only on the assumed noise variance $\sigma^2$ and not on $\sigma_a^2$. From the observations $[V_0, U, Y]$, an optimal linear MMSE estimate of $V$ is obtained. Similarly to the definition in Section III-B, let $\Lambda$ be the covariance of $[V_0, U, Y]^T$ and $\Gamma$ the correlation between $V$ and $[V_0, U, Y]^T$. Hence
\[
\Lambda = \begin{bmatrix} \sigma_v^2-\sigma_z^2 & \kappa_w(\sigma_v^2-\sigma_z^2) & 0 \\ \kappa_w(\sigma_v^2-\sigma_z^2) & P+\alpha^2 Q+\kappa_w^2\sigma_v^2 & P+\alpha Q \\ 0 & P+\alpha Q & P+Q+\sigma_a^2 \end{bmatrix} \quad \text{and} \quad \Gamma = \begin{bmatrix} \sigma_v^2-\sigma_z^2 & \kappa_w\sigma_v^2 & 0 \end{bmatrix}^T.
\]
Then the distortion (in the presence of mismatch) is given by
\[ D_a = \sigma_v^2 - \Gamma^T\Lambda^{-1}\Gamma \quad (41) \]
This, on further simplification, yields
\[ D_a = \frac{\left(Q\sigma^4 + (P(P+Q)+2P\sigma^2+\sigma^4)\sigma_a^2\right)\sigma_z^2}{P^2(P+Q)+P(P+Q)\sigma^2+Q\sigma^4+(P(2P+Q)+3P\sigma^2+\sigma^4)\sigma_a^2}. \quad (42) \]
Let us now look at a few special cases.

A. Hybrid Digital Analog Costa Coding

In this setup there is side information only at the transmitter. The distortion achievable under SNR mismatch, with the actual SNR greater than the designed SNR, is obtained by setting $\sigma_v = \sigma_z$ in (42) and is given by
\[ D_{va} = \frac{\left(Q\sigma^4 + (P(P+Q)+2P\sigma^2+\sigma^4)\sigma_a^2\right)\sigma_v^2}{P^2(P+Q)+P(P+Q)\sigma^2+Q\sigma^4+(P(2P+Q)+3P\sigma^2+\sigma^4)\sigma_a^2} \quad (43) \]
The distortion in the source $v$ is shown in Fig. 5 for a designed SNR of 10 dB, as the actual channel SNR ($10\log_{10}(1/\sigma_a^2)$) varies, when the source and interference both have unit variance. It can be seen that the distortion in the source is smaller with the HDA Costa scheme than with the digital Costa scheme.
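The closed form (42) can be spot-checked numerically. The following sketch is ours (function and variable names are our own): it verifies that at the design point $\sigma_a^2 = \sigma^2$ the mismatch distortion collapses to the optimal $\sigma_z^2/(1+P/\sigma^2)$, that better channels only help, and that the improvement saturates at a floor determined by the interference power $Q$.

```python
def D_a(P, Q, s2, sz2, sa2):
    """Mismatch distortion of the combined HDA scheme, equation (42)."""
    num = (Q * s2**2 + (P * (P + Q) + 2 * P * s2 + s2**2) * sa2) * sz2
    den = (P**2 * (P + Q) + P * (P + Q) * s2 + Q * s2**2
           + (P * (2 * P + Q) + 3 * P * s2 + s2**2) * sa2)
    return num / den

P, Q, s2, sz2 = 1.0, 1.0, 1.0, 0.5
# at the design point (sa2 == s2), (42) reduces to the optimal sz2/(1 + P/s2)
assert abs(D_a(P, Q, s2, sz2, s2) - sz2 / (1 + P / s2)) < 1e-12
# a better channel (sa2 < s2) reduces the distortion...
assert D_a(P, Q, s2, sz2, 0.5) < D_a(P, Q, s2, sz2, 1.0)
# ...but the gain saturates: a strictly positive floor remains as sa2 -> 0
residual = D_a(P, Q, s2, sz2, 0.0)
assert residual > 0
print("design-point D =", D_a(P, Q, s2, sz2, s2), "floor =", residual)
```

Setting $\sigma_z^2 = \sigma_v^2$ in the same function gives (43).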
In some cases, the distortion in estimating the interference at the receiver may also be of interest; it can be obtained by estimating $S$ from (38) and (40). The distortion is given by
\[ D_{sa} = \frac{Q(P+\sigma^2)\left(P^2+(2P+\sigma^2)\sigma_a^2\right)}{P^2(P+Q)+P(P+Q)\sigma^2+Q\sigma^4+(P(2P+Q)+3P\sigma^2+\sigma^4)\sigma_a^2} \quad (44) \]
It can be seen from Fig. 5 that the distortion in estimating the interference is better for the digital scheme than for the HDA Costa scheme. In [14], Sutivong et al. have studied a somewhat related problem. They consider the transmission of a digital source in the presence of an interference known at the transmitter, with a fixed channel SNR, and study the optimal tradeoff between the achievable rate and the error in estimating the interference at the designed SNR. The main result is that a better estimate of the interference can be obtained by transmitting the digital source at a rate less than the channel capacity. There are important differences between our work and that in [14]. First, we consider transmission of an analog source instead of a digital source. Second, we consider mismatch in the channel: our schemes are designed to be optimal at the designed SNR and, as we move away from the designed SNR, we study the tradeoff between the error in estimating the interference and the distortion in the reconstruction of the analog source. This tradeoff is discussed below.

B. Generalized HDA Costa Coding under channel mismatch

Next, we analyze the performance of generalized HDA Costa coding under channel mismatch. This case leads to some interesting analysis: by changing the source coding rate $R$ of the digital part, we can trade off the distortion between the source and the interference in the presence of mismatch. The different random variables and their relations are given below.
\[ U = X + \alpha S + \kappa_1 E \quad (45) \]
\[ Y = X + S + W_a \quad (46) \]
\[ V = V^* + E \quad (47) \]
In the above equations, $\kappa_1 = \sqrt{\frac{P}{P+\sigma^2}\cdot\frac{(P+\sigma^2)-\sigma^2 2^{2R}}{\sigma_v^2}}$ (again, we have dropped the $\epsilon$ in the expression for $\kappa_1$). From the above equations, estimates of $S$ and $V$ are obtained by linear MMSE estimation, since all the random variables are Gaussian. The resulting estimation errors $D_{sa}(R)$ and $D_{va}(R)$ are given by
\[ D_{va}(R) = \frac{\left(\sigma_a^2(\sigma^2+P)^2 + (\sigma^4+\sigma_a^2 P)Q\right)\sigma_v^2}{(\sigma^2+P)^2(\sigma_a^2+P+Q) - 2^{2R}(\sigma^2-\sigma_a^2)P(\sigma^2+P+Q)} \quad (48) \]
\[ D_{sa}(R) = \frac{(\sigma^2+P)\left(2^{2R}(\sigma^2-\sigma_a^2)P - (\sigma^2+P)(\sigma_a^2+P)\right)Q}{2^{2R}(\sigma^2-\sigma_a^2)P(\sigma^2+P+Q) - (\sigma^2+P)^2(\sigma_a^2+P+Q)} \quad (49) \]
The performance of the generalized HDA Costa and HDA Costa schemes relative to the digital scheme is shown in Fig. 5. With separation using digital Costa coding, for example, there is no improvement in the estimate of the analog source, but we get a better estimate of the interference, as shown in Fig. 5. On the contrary, for the HDA Costa scheme there is only a small improvement in the estimate of the interference but a substantial improvement in the estimate of the analog source. The generalized HDA scheme likewise yields different estimates of the source and the interference for different rates $R$; it performs as digital Costa for the choice $R = C$ and as HDA Costa for the choice $R = 0$. In effect, when there is a channel mismatch, we can trade off the estimation error in the interference against that in the source by choosing different values of $R$.

C. Hybrid Digital Analog Wyner-Ziv

In this case, the distortion can be obtained by setting $Q = 0$ in (42). The actual distortion is given by

Fig. 6. Performance of the different Wyner-Ziv schemes for the joint source channel coding problem ($-10\log_{10}$ distortion versus $10\log_{10}(1/\sigma_a^2)$; curves for HDA Wyner-Ziv, digital Wyner-Ziv, and the outer bound).
\[ D_a = \frac{(P+\sigma^2)\sigma_a^2\sigma_z^2}{P^2+(2P+\sigma^2)\sigma_a^2} \quad (50) \]
This is clearly better than $\frac{\sigma_z^2\sigma^2}{P+\sigma^2}$, which is what is achievable with a separation-based approach. However, we do not know whether this is the optimal distortion achievable in the presence of channel mismatch. A simple lower bound on the achievable distortion in the presence of mismatch is obtained by assuming that the transmitter knows the channel SNR. Based on this, we can analyze the gap in dB between the distortion of the HDA Wyner-Ziv scheme and the lower bound as follows. The lower bound on $D$ is given by
\[ D_{\text{lower}} = \frac{\sigma_z^2}{1+P/\sigma_a^2} \quad (51) \]
Now the gap between the HDA Wyner-Ziv scheme and the bound at high SNR can be easily calculated as $\lim_{\sigma_a\to 0} D_{\text{lower}}/D_a$. The gap in dB, $G$, is hence given by
\[ G = 10\log_{10}\left(\frac{P}{P+\sigma^2}\right) \quad (52) \]
For example, if the designed SNR is 10 dB, then at high SNR we lose at most $|G| = 0.41$ dB, which is fairly close to the outer bound, as shown in Fig. 6.

VI. DISTORTION EXPONENT FOR HDA COSTA AND WYNER-ZIV SCHEMES

In this section, we consider the performance of the HDA joint source-channel coding schemes for transmitting a Gaussian source through a Gaussian channel when the actual channel noise variance $\sigma_a^2$ is not known, but is known to always be smaller than $\sigma^2$. Since we are interested in the performance of a single encoding scheme over a wide range of noise variances, a useful measure of performance is the rate of decay of the distortion as a function of the actual noise variance in the limit $\sigma_a^2\to 0$. More precisely, we define a distortion exponent as
\[ \zeta = \lim_{\sigma_a^2\to 0} \frac{\log(D(\sigma_a^2))}{\log(\sigma_a^2)} \]
where $D(\sigma_a^2)$ is the distortion when the noise variance is $\sigma_a^2$. Notice that this exponent is quite different from the distortion signal-to-noise ratio (SNR) exponent considered in [17], [18] for the case of slow fading channels.
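Both the high-SNR gap (52) of the HDA Wyner-Ziv scheme and the exponent just defined can be checked numerically from the closed form (50). The sketch below is our own (names and parameter values are ours, not from the paper): it confirms the 0.41 dB figure for a 10 dB design SNR and shows the ratio $\log D/\log\sigma_a^2$ approaching 1.

```python
import math

def D_hda_wz(P, s2, sz2, sa2):
    # HDA Wyner-Ziv distortion under mismatch, equation (50)
    return (P + s2) * sa2 * sz2 / (P**2 + (2 * P + s2) * sa2)

P, s2, sz2 = 1.0, 0.1, 1.0          # designed SNR = 10*log10(P/s2) = 10 dB

# gap (52): ratio of the genie bound (51) to (50), in dB, as sa2 -> 0
G = 10 * math.log10(P / (P + s2))
sa2 = 1e-8
ratio_db = 10 * math.log10((sz2 / (1 + P / sa2)) / D_hda_wz(P, s2, sz2, sa2))
assert abs(ratio_db - G) < 1e-3 and abs(G + 0.414) < 1e-3  # lose ~0.41 dB

# distortion exponent: log D / log sa2 tends to 1 as sa2 -> 0
zeta = math.log(D_hda_wz(P, s2, sz2, 1e-10)) / math.log(1e-10)
assert abs(zeta - 1) < 0.01
print("gap = %.3f dB, zeta ~ %.3f" % (G, zeta))
```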
In [17] and [18], the rate of decay of distortion with average SNR is studied by allowing a family of coding schemes, one for each average SNR. In contrast, here we fix the encoding scheme and consider the rate of decay with the actual channel SNR (i.e., there is no fading). An upper bound on the achievable $\zeta$ can be obtained by assuming that a genie informs the transmitter of $\sigma_a$ and the transmitter chooses an optimal encoding scheme for a noise variance of $\sigma_a^2$. Let us consider the general case where there is an interference $s$ ($S \sim \mathcal{N}(0,Q)$) known at the transmitter and some side information $v_0$, related to $v$ according to $V = V_0 + Z$ with $Z \sim \mathcal{N}(0,\sigma_z^2)$, known at the receiver. Then the distortion for the genie-aided scheme is $\frac{\sigma_z^2}{1+\frac{P}{\sigma_a^2}}$, since an optimal Wyner-Ziv encoder followed by an optimal Costa encoder can be chosen. In this case, the distortion exponent is 1. Notice that in the absence of any side information, the distortion for the genie-aided scheme is $\frac{\sigma_v^2}{1+\frac{P}{\sigma_a^2}}$, which also results in an exponent of 1. In the absence of any interference as well, the achievable distortion is $\frac{\sigma_z^2}{1+\frac{P}{\sigma_a^2}}$ and the exponent is 1. Thus, for any single encoding scheme, $\zeta \le 1$ both in the presence and absence of interference and/or side information. We will now consider the performance of the HDA schemes considered in Section IV. If a joint source channel coding scheme is designed to be optimal when the noise variance is $\sigma^2$, then the distortion when the noise variance is $\sigma_a^2$ is given by (from (42) in Section V)
\[ D(\sigma_a^2) = \frac{\left(Q\sigma^4 + (P(P+Q)+2P\sigma^2+\sigma^4)\sigma_a^2\right)\sigma_z^2}{P^2(P+Q)+P(P+Q)\sigma^2+Q\sigma^4+(P(2P+Q)+3P\sigma^2+\sigma^4)\sigma_a^2}. \quad (53) \]
We now consider two cases.
A. Absence of Interference

When there is no interference at the transmitter, $Q = 0$ and, hence, from (53), we can see that the optimal distortion is obtained at the design noise variance $\sigma^2$ and that the optimal distortion exponent $\zeta = 1$ is achieved. Thus, this scheme performs as well as the genie-aided receiver in the distortion exponent sense.

B. Presence of Interference

In the presence of an interference, $Q \ne 0$ and, from (53), $\zeta$ can be seen to be zero. That is, some residual interference is always present and, hence, in the high SNR limit ($\sigma_a^2\to 0$), the performance is dominated by this residual interference. However, if optimal performance is not required at the noise variance $\sigma^2$, then the optimal exponent $\zeta = 1$ can be obtained using a minor modification of the scheme discussed in Section IV. In the modified scheme, the auxiliary random variable $U$ is generated as
\[ U = X + S + \kappa_e V \quad (54) \]
Note that $\alpha$ is chosen to be 1, which is clearly not optimal for a noise variance of $\sigma^2$. The side information satisfies
\[ V = V_0 + Z \quad (55) \]
and the received signal is
\[ Y = X + S + W \quad (56) \]
Using arguments similar to those in Section IV, $\kappa_e$ is chosen so as to satisfy $I(U;Y,V_0) > I(U;S,V)$. The required condition on $\kappa_e$ can be obtained as follows:
\[ I(U;Y,V_0) > I(U;S,V) \;\Rightarrow\; h(U) - h(U|Y,V_0) > h(U) - h(U|S,V) \;\Rightarrow\; h(U|S,V) > h(U|Y,V_0) \quad (57) \]
Note that $h(U|S,V) = h(X)$ and $h(U|Y,V_0) = h(U - \eta Y - \kappa_e V_0\,|\,Y,V_0)$, where $\eta = \frac{E[UY]}{E[Y^2]}$. For this choice of $\eta$, $(U - \eta Y - \kappa_e V_0) \perp (Y, V_0)$ and, hence, $h(U|Y,V_0) = h(U - \eta Y - \kappa_e V_0)$. Hence, we get the relation
\[ h(X) > h(U - \eta Y - \kappa_e V_0) \;\Rightarrow\; P > E[(U - \eta Y - \kappa_e V_0)^2] \;\Rightarrow\; \kappa_e < \sqrt{\frac{P^2+PQ-Q\sigma^2}{(P+Q+\sigma^2)\sigma_z^2}} \]
Hence, $\kappa_e$ can be chosen arbitrarily close to $\sqrt{\frac{P^2+PQ-Q\sigma^2}{(P+Q+\sigma^2)\sigma_z^2}}$. Now $x$ is transmitted and $y$ is received.
The optimal distortion is obtained as an MMSE estimate of $v$ from $[y, u, v_0]$. The final distortion is given by
\[ D(\sigma_a^2) = \frac{(P+Q)\sigma_a^2\sigma_z^2}{(P+Q)\sigma_a^2 + \kappa_e^2(P+Q+\sigma_a^2)\sigma_z^2} \quad (58) \]
It can be seen that as $\sigma_a^2\to 0$, $D(\sigma_a^2) \propto \sigma_a^2$ and, hence, $\zeta = 1$.

VII. APPLICATIONS TO TRANSMITTING A GAUSSIAN SOURCE WITH BANDWIDTH COMPRESSION

We now consider the problem of transmitting $K$ samples of the i.i.d Gaussian source to a single user in $N = \lambda K$ ($\lambda < 1$) uses of an AWGN channel with noise variance $\sigma^2$, where the transmit power is constrained to 1. There is no interference in the channel, but since $\lambda < 1$, the techniques described in the previous sections turn out to be useful for this problem. There are at least three ways to achieve the optimal distortion in this case: a conventional separation-based approach, superposition coding, and Costa coding. Although all three are optimal for the single-user case, they perform differently when there is a mismatch in the channel SNR; hence, the last two approaches are briefly described here.

a) Superposition Coding: Here we split the source into two parts. We take $N$ samples of the source $v$, namely $v_1^N$, and scale them by $\sqrt{a}$, creating the systematic signal $x_s = \sqrt{a}\,v_1^N$. We take the other $K-N$ source samples $v_{N+1}^K$ and use a conventional source encoder followed by a capacity-achieving channel code, resulting in the $N$-dimensional vector $x_c = \mathcal{C}(\mathcal{Q}(v_{N+1}^K))$, where $\mathcal{C}$ denotes a channel encoding operation and $\mathcal{Q}$ denotes a source encoding operation. Then $x_c$ is normalized so that its average power is $1-a$. The overall transmitted signal is $x = x_s + x_c$ and the received signal is $y = x + w$. At the receiver, the digital part is first decoded treating the systematic (analog) part as noise, and then $x_c$ is subtracted from $y$. Then an MMSE estimate of $v_1^N$ is formed.
For the optimal choice of $a$, the optimal overall distortion can be obtained, given by
\[ a^*_{\text{sup}} = \sigma^2\left[\left(1+\frac{1}{\sigma^2}\right)^\lambda - 1\right] \quad \text{and} \quad D^*_{\text{sup}} = \frac{1}{\left(1+\frac{1}{\sigma^2}\right)^\lambda} \quad (59) \]
which is the optimal distortion.

b) Digital Costa Coding: We split the source exactly as in the previous case, and one stream is formed as $x_s = \sqrt{a}\,v_1^N$. However, here the digital part treats $x_s$ as interference and uses Costa coding to produce $x_c$ with power $1-a$, as shown in Fig. 7. In Costa coding, we define an auxiliary random variable $u = x_c + \alpha_1 x_s$, where $\alpha_1 = \frac{1-a}{1-a+\sigma^2}$ is the optimum scaling coefficient. At the receiver, the digital part is decoded, which means that $u$ can be obtained. In spite of knowing $u$ exactly, the optimal estimate of $v_1^N$ is obtained by simply treating $x_c$ as noise, since for the optimal choice of $\alpha_1$, $x_c = u - \alpha_1 x_s$ and $v_1^N$ are uncorrelated. Therefore, an MMSE estimate of $v_1^N$ is formed treating $x_c$ as noise. Hence, the overall distortion becomes
\[ D = \frac{\lambda}{1+\frac{a}{1-a+\sigma^2}} + \frac{1-\lambda}{\left(1+\frac{1-a}{\sigma^2}\right)^{\lambda/(1-\lambda)}} \quad (60) \]
Again, minimizing $D$ with respect to $a$ gives
\[ a^*_{\text{Costa}} = (1+\sigma^2)\left[1 - \frac{1}{\left(1+\frac{1}{\sigma^2}\right)^\lambda}\right] \quad \text{and} \quad D^*_{\text{Costa}} = \frac{1}{\left(1+\frac{1}{\sigma^2}\right)^\lambda} \quad (61) \]
which is the best possible distortion.

Fig. 7. Encoder model using Costa coding for a single user.

c) Hybrid Digital Analog Costa Coding: For the case of $\lambda = 0.5$, the digital Costa coding part can be replaced by hybrid digital analog (HDA) Costa coding; we refer to such a scheme as HDA Costa coding. The optimal power allocation, however, remains the same and, hence, we can simply use $a^*_{\text{Costa}}$ without the need to differentiate between digital and HDA Costa coding. It is quite straightforward to show that $a^*_{\text{Costa}} > a^*_{\text{sup}}$ for $\lambda < 1$. Hence, the Costa coding approach allocates more power to the systematic part than the superposition approach does, since the systematic part is treated as interference.
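The allocations in (59) and (61) can be verified by direct substitution. The following check is our own sketch (variable names are ours): it plugs $a^*_{\text{sup}}$ into the superposition distortion and $a^*_{\text{Costa}}$ into (60), confirms both equal $D^* = (1+1/\sigma^2)^{-\lambda}$, and confirms the power-allocation ordering $a^*_{\text{Costa}} > a^*_{\text{sup}}$.

```python
lam, s2 = 0.5, 0.1                 # bandwidth ratio and designed noise variance
Dstar = (1 + 1 / s2) ** (-lam)     # optimal distortion in (59) and (61)

a_sup = s2 * ((1 + 1 / s2) ** lam - 1)               # (59)
a_costa = (1 + s2) * (1 - (1 + 1 / s2) ** (-lam))    # (61)

# superposition: analog MMSE term + digital term
D_sup = (lam / (1 + a_sup / s2)
         + (1 - lam) / (1 + (1 - a_sup) / (a_sup + s2)) ** (lam / (1 - lam)))
# Costa: equation (60) evaluated at a_costa
D_costa = (lam / (1 + a_costa / (1 - a_costa + s2))
           + (1 - lam) / (1 + (1 - a_costa) / s2) ** (lam / (1 - lam)))

assert abs(D_sup - Dstar) < 1e-12 and abs(D_costa - Dstar) < 1e-12
assert a_costa > a_sup             # Costa gives the systematic part more power
print("D* = %.5f, a_sup = %.4f, a_costa = %.4f" % (Dstar, a_sup, a_costa))
```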
A. Performance in the presence of SNR mismatch

Now, we consider the same setup as above, but when the actual channel noise variance is $\sigma_a^2$, whereas the designed noise variance is $\sigma^2$.

Case 1: $\sigma_a^2 > \sigma^2$. The distortion for the superposition code can be computed as the sum of the distortions in the systematic part and the digital part. When $\sigma_a^2 > \sigma^2$, the digital part cannot be decoded and, hence, we take the distortion in the digital part to be the variance of the source, 1:
\[ D_{\text{sup}} = \frac{\lambda}{1+\frac{a^*_{\text{sup}}}{1-a^*_{\text{sup}}+\sigma_a^2}} + (1-\lambda)\cdot 1 \quad (62) \]
Both the digital and HDA Costa coding schemes perform identically when $\sigma_a^2 > \sigma^2$, and the distortion for the Costa code can be computed to be
\[ D_{\text{dig Costa}} = D_{\text{HDA Costa}} = \frac{\lambda}{1+\frac{a^*_{\text{Costa}}}{1-a^*_{\text{Costa}}+\sigma_a^2}} + (1-\lambda)\cdot 1 \quad (63) \]
Case 2: $\sigma_a^2 < \sigma^2$. In this case, the digital part can be decoded exactly and, hence, the distortion for superposition coding is
\[ D_{\text{sup}} = \frac{\lambda}{1+\frac{a^*_{\text{sup}}}{\sigma_a^2}} + \frac{1-\lambda}{\left(1+\frac{1-a^*_{\text{sup}}}{a^*_{\text{sup}}+\sigma^2}\right)^{\lambda/(1-\lambda)}} \quad (64) \]
For digital Costa coding, the decoder first decodes the digital part, after which the auxiliary random variable $u$ is perfectly known. When $\sigma_a^2 \ne \sigma^2$, the receiver must form the MMSE estimate of $v_1^N$ from the channel observation $y$ and $u$.
Therefore, the overall distortion is
\[
D_{\text{dig Costa}} = \lambda\left(1 - \begin{bmatrix}\sqrt{a^*} & \alpha_1\sqrt{a^*}\end{bmatrix} \begin{bmatrix} 1+\sigma_a^2 & 1-a^*+\alpha_1 a^* \\ 1-a^*+\alpha_1 a^* & 1-a^*+\alpha_1^2 a^* \end{bmatrix}^{-1} \begin{bmatrix}\sqrt{a^*} \\ \alpha_1\sqrt{a^*}\end{bmatrix}\right) + \frac{1-\lambda}{\left(1+\frac{1-a^*}{a^*+\sigma^2}\right)^{\lambda/(1-\lambda)}} \quad (65)
\]
where $a^* = a^*_{\text{Costa}}$. For HDA Costa coding, we can decode $u$ and form MMSE estimates of $v_1^N$ and $v_{N+1}^K$ separately and, hence, the overall distortion is given by
\[
D_{\text{HDA Costa}} = \lambda\left(1 - \begin{bmatrix}\sqrt{a^*} & \alpha_1\sqrt{a^*}\end{bmatrix} M^{-1} \begin{bmatrix}\sqrt{a^*} \\ \alpha_1\sqrt{a^*}\end{bmatrix}\right) + (1-\lambda)\left(1 - \begin{bmatrix}0 & \kappa\end{bmatrix} M^{-1} \begin{bmatrix}0 \\ \kappa\end{bmatrix}\right), \quad M = \begin{bmatrix} 1+\sigma_a^2 & 1-a^*+\alpha_1 a^* \\ 1-a^*+\alpha_1 a^* & 1-a^*+\alpha_1^2 a^*+\kappa^2 \end{bmatrix} \quad (66)
\]
The performance of the superposition, digital Costa, and HDA Costa schemes is shown for an example with $\lambda = 0.5$ in Fig. 8. The designed SNR is defined as $10\log_{10}\frac{1}{\sigma^2}$, whereas the actual SNR is defined as $10\log_{10}\frac{1}{\sigma_a^2}$. In the example, the designed SNR is fixed at 10 dB and the actual SNR is varied from 0 dB to 20 dB. It can be seen that the Costa coding approach is better than superposition coding when $\sigma_a^2 > \sigma^2$ and worse in the other case. The HDA Costa coding scheme performs best over the entire range of SNRs.

Fig. 8. Performance of the different schemes for the source splitting approach for the bandwidth compression problem with SNR mismatch.

VIII. APPLICATIONS TO BROADCASTING WITH BANDWIDTH COMPRESSION

We now consider the problem of transmitting $K = 2N$ samples of a unit-variance Gaussian source $v$ in $N$ uses of the channel to two users through AWGN channels with noise variances $\sigma_1^2$ (weak user) and $\sigma_2^2$ (strong user), with $\sigma_1 > \sigma_2$. The channel has the power constraint $P = 1$.
We are interested in joint source channel coding schemes that provide a good region of distortion pairs that are simultaneously achievable at the two users. This problem was considered in [1], [5], [8], and the best known region to date is given by the schemes therein. Notice that when we design a source channel coding scheme to be optimal for the weak user, the strong user operates under the SNR mismatch situation explained in Section VII-A with $\sigma_2^2 = \sigma_a^2 < \sigma^2 = \sigma_1^2$. Similarly, when the system is designed to be optimal for the strong user, for the weak user $\sigma_1^2 = \sigma_a^2 > \sigma^2 = \sigma_2^2$. Motivated by the fact that for $\lambda = 0.5$ the HDA Costa coding scheme performs best, we propose the scheme shown in Fig. 9. There are three layers in the proposed coding scheme. The first layer is the systematic part, where $N$ out of the $K$ samples of the source are scaled by $\sqrt{a}$; let us call this $x_s = \sqrt{a}\,v_1^N$. The other $K-N$ samples of the Gaussian source are encoded using hybrid digital analog Costa coding, treating $x_s$ as the interference, and transmitted as the signal $x_1$ with power $b$ in the second layer. So $x_1 = u_1 - \alpha_1 x_s - \kappa_c v_{N+1}^K$, where $\alpha_1$ and $\kappa_c$ are the optimal scaling coefficients used in the hybrid digital analog Costa coding process and $u_1$ is the auxiliary variable. This layer is meant to be decoded by the weak user and, hence, the scaling factor is set to $\alpha_1 = b/(b+c+\sigma_1^2)$; that is, this layer sees the third layer as independent noise as well.

Fig. 9. Encoder model using Costa coding (systematic layer $x_s$; HDA Costa coding layer $x_1$ treating $x_s$ as interference; Wyner-Ziv encoder at rate $R_2$ followed by Costa coding treating $x_s$ and $x_1$ as interference, producing $x_2$).

The third layer is first Wyner-Ziv coded at a rate $R_2$, using the estimate of $v_{N+1}^K$ at the receiver as side information. The Wyner-Ziv index is then encoded using digital Costa coding, treating $x_s$ and $x_1$ as interference, with power $c = 1-a-b$.
Therefore, $x_2 = u_2 - \alpha_2(x_s + x_1)$. This layer is meant for the strong user and, hence, the scaling factor is $\alpha_2 = c/(c+\sigma_2^2)$. We then transmit $x = x_s + x_1 + x_2$. At the receiver, an estimate of $v_{N+1}^K$ is obtained from the second layer. This estimate acts as side information that is used to refine the estimate of $v_{N+1}^K$ for the strong user using the decoded Wyner-Ziv bits, which are decoded from the third layer by the Costa decoding procedure. The users estimate the systematic part $v_1^N$ and the non-systematic part $v_{N+1}^K$ by MMSE estimation from the received $y$ and the decoded $u_1$ and $u_2$. The overall distortion seen at the weak user is
\[ D_1 = \frac{1}{2}\cdot\frac{1}{1+\frac{a}{b+c+\sigma_1^2}} + \frac{1}{2}\cdot\frac{1}{1+\frac{b}{c+\sigma_1^2}} \]
The distortion for the strong user is given by
\[
D_2 = \frac{1}{2}\left(1 - \begin{bmatrix}\sqrt{a} & \alpha_1\sqrt{a}\end{bmatrix} M_1^{-1} \begin{bmatrix}\sqrt{a} \\ \alpha_1\sqrt{a}\end{bmatrix}\right) + \frac{1/2}{1+\frac{c}{\sigma_2^2}}\left(1 - \begin{bmatrix}0 & \kappa_c\end{bmatrix} M_1^{-1} \begin{bmatrix}0 \\ \kappa_c\end{bmatrix}\right), \quad M_1 = \begin{bmatrix} 1+\sigma_2^2 & b+\alpha_1 a \\ b+\alpha_1 a & b+\alpha_1^2 a+\kappa_c^2 \end{bmatrix} \quad (67)
\]
The corner points of the distortion region, corresponding to being optimal for the strong and weak user, can be obtained by setting $c = 0$ and $b = 0$, respectively. The distortion region of this scheme for the case of $1/\sigma_1^2 = 0$ dB and $1/\sigma_2^2 = 5$ dB is shown in Fig. 10. The distortion regions of three other schemes are also shown: the scheme proposed by Mittal and Phamdo in [1], a different broadcasting scheme proposed in [8] which uses digital Costa coding in both layers (details can be found there), and the broadcast scheme with one layer of superposition coding and one layer of digital Costa coding considered in [5], [8]. This last scheme currently appears to be the best known scheme. Notice that in the third layer, instead of using a separate Wyner-Ziv encoder followed by a Costa code, we could have used the HDA scheme discussed in Section IV with identical results. The proposed broadcast scheme in Fig.
9 significantly outperforms the scheme of Mittal and Phamdo and the digital Costa based broadcast scheme for this example. The corner points of this scheme also coincide with those of the best known schemes reported in [5], [19].

Fig. 10. Distortion regions ($10\log_{10}(D_1)$ versus $10\log_{10}(D_2)$) of the different schemes for broadcasting with bandwidth compression (curves: superposition + Costa, HDA Costa, Mittal and Phamdo, and digital Costa).

IX. CONCLUSION AND FUTURE WORK

We discussed hybrid digital analog versions of Costa coding and Wyner-Ziv coding for transmitting an analog Gaussian source through an AWGN channel in the presence of an interference known only to the transmitter and side information available only to the receiver, respectively. These schemes are closely related to the schemes by Reznic and Zamir [2] and [7], but make the auxiliary random variable model more explicit. We also showed that there are infinitely many schemes that are optimal for this problem, extending the work of Bross, Lapidoth and Tinguely [6] to the side information case. The HDA coding schemes have advantages over strictly digital schemes when there is a mismatch in the channel SNR. This also makes them useful for broadcasting a Gaussian source to two users with different SNRs.

REFERENCES

[1] U. Mittal and N. Phamdo, "Hybrid digital-analog (HDA) joint source-channel codes for broadcasting and robust communications," IEEE Trans. Inf. Theory, vol. 48, no. 5, pp. 1082-1102, May 2002.
[2] Z. Reznic, M. Feder, and R. Zamir, "Distortion bounds for broadcasting with bandwidth expansion," IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3778-3788, August 2006.
[3] T. J. Goblick Jr., "Theoretical limitations on the transmission of data from analog sources," IEEE Trans. Inf. Theory, vol. 11, no. 10, pp. 558-567, Oct 1965.
[4] M. Gastpar, B. Rimoldi and M.
Vetterli, "To code, or not to code: lossy source-channel communication revisited," IEEE Trans. Info. Theory, vol. 49, no. 5, pp. 1147–1158, May 2003.
[5] V. M. Prabhakaran, R. Puri, and K. Ramachandran, "Hybrid Analog-Digital Strategies for Source-Channel Broadcast," 43rd Allerton Conference on Communication, Control and Computing, Allerton, IL, September 2005.
[6] S. Bross, A. Lapidoth, and S. Tinguely, "Superimposed Coded and Uncoded Transmissions of a Gaussian Source over the Gaussian Channel," Proceedings of the IEEE International Symposium on Information Theory (ISIT), Seattle, USA, 2006.
[7] Y. Kochman and R. Zamir, "Analog Matching of Colored Sources to Colored Channels," ISIT 2006, Seattle, USA, July 2006.
[8] K. R. Narayanan, M. P. Wilson and G. Caire, "Hybrid Digital and Analog Costa Coding for Broadcasting with Bandwidth Compression," Wireless Communications Lab Technical Report TR-06-107, Texas A&M University, http://wcl3.tamu.edu/research.html, August 25, 2006.
[9] M. Costa, "Writing on Dirty Paper," IEEE Trans. Info. Theory, vol. 29, no. 3, pp. 439–441, May 1983.
[10] S. Shamai, S. Verdu, and R. Zamir, "Systematic lossy source/channel coding," IEEE Trans. Info. Theory, vol. 44, pp. 564–579, March 1998.
[11] N. Merhav and S. Shamai, "On joint source-channel coding for the Wyner-Ziv source and the Gel'fand-Pinsker channel," IEEE Trans. Info. Theory, vol. 49, pp. 2844–2855, November 2003.
[12] R. Zamir, S. Shamai, and U. Erez, "Nested linear/lattice codes for structured multiterminal binning," IEEE Trans. Info. Theory, vol. 48, pp. 1250–1276, June 2002.
[13] U. Erez, S. Litsyn, and R. Zamir, "Lattices which are good for (almost) everything," IEEE Trans. Info. Theory, vol. 51, pp. 3401–3416, October 2005.
[14] A. Sutivong, M. Chiang, T. M. Cover, and Y. Kim, "Channel Capacity and State Estimation for State-Dependent Gaussian Channels," IEEE Trans. Info. Theory, vol. 51, no. 4, pp. 1486–1495, April 2005.
[15] T. Cover and J. Thomas, Elements of Information Theory, Wiley, 2006.
[16] A. D. Wyner and J. Ziv, "The rate distortion function for source coding with side information at the decoder," IEEE Trans. Info. Theory, vol. IT-22, no. 1, pp. 1–10, Jan. 1976.
[17] J. N. Laneman, E. Martinian, G. W. Wornell, and J. G. Apostolopoulos, "Source channel diversity for parallel channels," IEEE Trans. Info. Theory, vol. IT-51, no. 10, pp. 3518–3539, Oct. 2005.
[18] T. Holliday and A. J. Goldsmith, "Joint source and channel coding for MIMO systems," Allerton Conf. Commun. Control and Computing, pp. 1302–1311, Monticello, IL, USA, Oct. 2004.
[19] K. R. Narayanan, G. Caire and M. P. Wilson, "Duality Between Broadcasting with Bandwidth Expansion and Bandwidth Compression," Proc. Intl. Symp. Info. Theory, Nice, France, 2007.
