On the Separability of Parallel Gaussian Interference Channels

The separability of parallel Gaussian interference channels (PGICs) is studied in this paper. We generalize the separability results for one-sided PGICs (OPGICs) by Sung \emph{et al.} to two-sided PGICs (TPGICs). Specifically, for strong and mixed TPGICs, we show necessary and sufficient conditions for the separability. For this, we show that diagonal covariance matrices are sum-rate optimal for strong and mixed TPGICs.

Authors: Sang Won Choi and Sae-Young Chung, School of EECS, KAIST, Daejeon, Korea (ace1905@kaist.ac.kr, sychung@ee.kaist.ac.kr)

I. INTRODUCTION

Interference is a fundamental problem in wireless communications. Studying the interference channel (IC) can give us insight into how to deal with the problem. Specifically, the 2-user single-input single-output (SISO) Gaussian IC (GIC) [1]-[6] has been studied extensively. The capacity regions of the 2-user GICs have been characterized for the very strong [1] and strong [2], [3] interference cases. Recently, the sum capacity for very weak interference has been discovered [4]-[6], where a properly chosen noisy genie is used to obtain a tight upper bound on the sum capacity.

So far, most of the research on the IC has focused on the GIC itself. Recently, the parallel GIC (PGIC) has been studied with interest. The PGIC has several independent GICs as sub-channels. In the PGIC, joint coding and independent coding can be considered, where joint coding means coding over multiple sub-channels and independent coding refers to coding over each sub-channel separately. In 2-user PGICs, there have been investigations into whether independent coding suffices to achieve the (sum) capacity, i.e., whether separability holds or not.
In [7], separability was considered for the one-sided PGIC (OPGIC), which has been further studied in the ergodic sense in [8]. Recently, independent coding has been shown to achieve the sum capacity in the noisy-interference regime [9], where treating interference as noise in each sub-channel is optimal in the sense of the sum capacity. The main contribution of this paper is in considering the separability of a class of strong, mixed, and weak PGICs.

This paper is organized as follows. In Section II, the channel model is described. In Section III, we show our separability results in the sense of the sum capacity. Then we conclude the paper in Section IV.

II. CHANNEL MODEL

We consider PGICs described as

Y_1 = H_{11} X_1 + H_{21} X_2 + N_1   (1)

and

Y_2 = H_{12} X_1 + H_{22} X_2 + N_2,   (2)

where Y_k = [y_{k1} y_{k2} ... y_{kM}]^T ∈ R^M and X_k = [x_{k1} x_{k2} ... x_{kM}]^T ∈ R^M for k = 1, 2. Here M is the number of sub-channels, y_{km} (x_{km}) is the received (transmitted) signal at the k-th receiver (transmitter) in the m-th sub-channel for m = 1, 2, ..., M, and

H_{kl} = diag(h_{kl,1}, h_{kl,2}, ..., h_{kl,M})

is a diagonal channel matrix whose (m, m)-th component h_{kl,m} is the non-zero channel coefficient from the k-th transmitter to the l-th receiver in the m-th sub-channel. The noise vectors N_1 and N_2 are additive white Gaussian with zero mean and covariance matrix I_M, where I_M denotes the M × M identity matrix.

At the k-th transmitter, a message M_k uniformly distributed over the message index set {1, 2, ..., 2^{nR_k}} is mapped to the transmitted codeword [X_{k,1}, X_{k,2}, ..., X_{k,n}] of length n, where X_{k,i} = [x_{k1,i} x_{k2,i} ... x_{kM,i}]^T for i = 1, 2, ..., n. The codeword is subject to an average power constraint per sub-channel, i.e.,

(1/n) sum_{i=1}^{n} |x_{km,i}|^2 ≤ P_{km}.   (3)

At the k-th receiver, the block of n received signal vectors [Y_{k,1}, Y_{k,2}, ..., Y_{k,n}] is used to decode the message, and an error happens when the output of the decoder satisfies \hat{M}_k ≠ M_k. The error probability for the k-th user is given by

λ_{k,n} = Pr(\hat{M}_k ≠ M_k),   (4)

assuming a uniform distribution for messages. A rate pair (R_1, R_2) is said to be achievable if there is a sequence of encoding and decoding functions such that λ_n → 0 as n → ∞, where λ_n = max(λ_{1,n}, λ_{2,n}). The capacity region of the PGIC is defined as the closure of the set of all achievable rate pairs. Note that the PGIC of (1) and (2) is a special form of the general multiple-input multiple-output (MIMO) GIC [10] with diagonal H_{kl}'s for k, l = 1, 2.

III. SUM CAPACITY

In this section, we analyze the sum capacity under joint coding and under independent coding. Then we investigate whether separability holds or not for some classes of TPGICs. We start by stating the following lemma.

Lemma 1: Let w = [1 w_2 ... w_K]^T be a weight vector whose k-th element is the weight for the k-th user's rate, with w_k ≥ 0. A covariance matrix for the k-th user is denoted as S_k with the constraint diag(S_k) ≤ P_k, where diag(S_k) and P_k are diagonal matrices of size M × M whose (m, m)-th components are the (m, m)-th element of S_k and P_{km}, respectively. Then the weighted sum capacity of the MIMO IC, denoted f(w, P_1, P_2, ..., P_K), is concave in (P_1, P_2, ..., P_K).

Proof: It follows directly from [9]. Specifically, the set of all achievable schemes with diag(S_k) ≤ P_k always includes TDM/FDM between any two achievable schemes with diag(S_k) ≤ P'_k and diag(S_k) ≤ P''_k, where P_k = λ P'_k + (1 − λ) P''_k for 0 ≤ λ ≤ 1.
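Before proceeding, the channel model of (1)-(2) can be sketched numerically. The following Python snippet (a minimal illustration; the number of sub-channels, the channel gains, and the power constraints are arbitrary assumed values, not taken from the paper) generates one channel use of an M sub-channel PGIC with diagonal channel matrices and unit-variance AWGN:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4  # number of sub-channels (assumed value for illustration)

# Diagonal channel matrices H_kl: the (m, m) entry is the gain from
# transmitter k to receiver l on sub-channel m (arbitrary assumed gains).
H11 = np.diag(rng.uniform(0.5, 2.0, M))
H21 = np.diag(rng.uniform(0.1, 1.0, M))
H12 = np.diag(rng.uniform(0.1, 1.0, M))
H22 = np.diag(rng.uniform(0.5, 2.0, M))

P1 = np.ones(M)  # per-sub-channel power constraints P_{1m}
P2 = np.ones(M)  # per-sub-channel power constraints P_{2m}

# One channel use: Gaussian inputs meeting the per-sub-channel powers,
# unit-variance AWGN at each receiver, as in (1) and (2).
X1 = rng.normal(0.0, np.sqrt(P1))
X2 = rng.normal(0.0, np.sqrt(P2))
N1 = rng.normal(0.0, 1.0, M)
N2 = rng.normal(0.0, 1.0, M)

Y1 = H11 @ X1 + H21 @ X2 + N1  # received vector at receiver 1, eq. (1)
Y2 = H12 @ X1 + H22 @ X2 + N2  # received vector at receiver 2, eq. (2)

print(Y1.shape, Y2.shape)
```

Because the H_kl's are diagonal, each sub-channel is a scalar 2-user GIC, which is what makes the joint-versus-independent coding question meaningful.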
A. Capacity Region for PGICs

Notation 1: For full-rank square matrices S and T, S ≥ T (S > T) means that S − T is positive semidefinite (definite). Let H[m] denote (h_{11,m}, h_{12,m}, h_{21,m}, h_{22,m}) for m = 1, 2, ..., M. For notational convenience, we use the following definitions for m = 1, 2, ..., M:

A_m = (1/2) log_2(1 + |h_{11,m}|^2 P_{1m}),   (5)
B_m = (1/2) log_2(1 + |h_{12,m}|^2 P_{1m}),   (6)
C_m = (1/2) log_2(1 + |h_{21,m}|^2 P_{2m}),   (7)
D_m = (1/2) log_2(1 + |h_{22,m}|^2 P_{2m}),   (8)
E_m = (1/2) log_2(1 + |h_{11,m}|^2 P_{1m} + |h_{21,m}|^2 P_{2m}),   (9)
F_m = (1/2) log_2(1 + |h_{12,m}|^2 P_{1m} + |h_{22,m}|^2 P_{2m}),   (10)
G_m = (1/2) log_2(1 + |h_{11,m}|^2 P_{1m} / (1 + |h_{21,m}|^2 P_{2m})),   (11)
H_m = (1/2) log_2(1 + |h_{22,m}|^2 P_{2m} / (1 + |h_{12,m}|^2 P_{1m})),   (12)
I_m = (1/2) log_2((1 + |h_{21,m}|^2 P_{2m} + |h_{11,m}|^2 P_{1m}) / (1 + |h_{12,m}|^2 P_{1m})),   (13)
J_m = (1/2) log_2((1 + |h_{12,m}|^2 P_{1m} + |h_{22,m}|^2 P_{2m}) / (1 + |h_{21,m}|^2 P_{2m})).   (14)

1) Strong TPGIC:

Lemma 2: For the strong TPGIC, i.e., H^2_{12} ≥ H^2_{11} and H^2_{21} ≥ H^2_{22}, the capacity region is given by

∪_{diag(S_k) ≤ P_k, k=1,2} { (R_1, R_2) :
0 ≤ R_1 ≤ (1/2) log_2 |I_M + H_{11} S_1 H_{11}^T|,
0 ≤ R_2 ≤ (1/2) log_2 |I_M + H_{22} S_2 H_{22}^T|,
0 ≤ R_1 + R_2 ≤ (1/2) log_2 |I_M + H_{11} S_1 H_{11}^T + H_{21} S_2 H_{21}^T|,
0 ≤ R_1 + R_2 ≤ (1/2) log_2 |I_M + H_{12} S_1 H_{12}^T + H_{22} S_2 H_{22}^T| },   (15)

where S_k is the covariance matrix of X_k for k = 1, 2. The corresponding sum capacity is given by

min( sum_{m=1}^{M} (A_m + D_m), sum_{m=1}^{M} E_m, sum_{m=1}^{M} F_m ).   (16)

Proof: First, the capacity region (15) follows from [11]. Second, using Hadamard's inequality [12], we see that diagonal matrices S_1 and S_2 suffice to achieve all the rate pairs in the capacity region (15), from which we get (16).
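The gap between joint and independent coding in Lemma 2 can be probed numerically. The sketch below (assumed channel gains and powers, chosen only so that the strong condition |h_{12,m}| ≥ |h_{11,m}|, |h_{21,m}| ≥ |h_{22,m}| holds) evaluates the joint-coding sum capacity (16) and the independent-coding sum rate obtained by applying the per-sub-channel minimum instead:

```python
import numpy as np

def rates(h11, h12, h21, h22, P1, P2):
    """Per-sub-channel quantities (5), (8), (9), (10) for gain/power vectors."""
    A = 0.5 * np.log2(1 + h11**2 * P1)
    D = 0.5 * np.log2(1 + h22**2 * P2)
    E = 0.5 * np.log2(1 + h11**2 * P1 + h21**2 * P2)
    F = 0.5 * np.log2(1 + h12**2 * P1 + h22**2 * P2)
    return A, D, E, F

# Arbitrary strong-TPGIC example: |h12| >= |h11| and |h21| >= |h22| per sub-channel.
h11 = np.array([1.0, 0.8]); h12 = np.array([1.5, 1.2])
h22 = np.array([1.0, 0.9]); h21 = np.array([1.4, 1.1])
P1 = np.array([1.0, 2.0]); P2 = np.array([2.0, 1.0])

A, D, E, F = rates(h11, h12, h21, h22, P1, P2)

# Sum capacity under joint coding, eq. (16): min of the three SUMS ...
C_joint = min((A + D).sum(), E.sum(), F.sum())
# ... versus independent coding: SUM of the per-sub-channel minima.
C_indep = np.minimum(np.minimum(A + D, E), F).sum()

print(C_joint >= C_indep - 1e-12)  # joint coding can never do worse
```

The min-of-sums versus sum-of-mins structure is exactly where separability can fail: the two coincide only under the uniform-ordering conditions characterized in Theorem 1 below.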
Corollary 1: When each sub-channel is used independently in the strong TPGIC, the sum capacity is

sum_{m=1}^{M} min(A_m + D_m, E_m, F_m).   (17)

Proof: It follows from Lemma 2.

Lemma 3: In the low signal-to-noise ratio (SNR) regime, the sum capacity for the strong TPGIC under joint coding coincides asymptotically with that under independent coding, i.e.,

lim_{max_{k,m} P_{km} → 0} min( sum_{m=1}^{M} (A_m + D_m), sum_{m=1}^{M} E_m, sum_{m=1}^{M} F_m ) / sum_{m=1}^{M} min(A_m + D_m, E_m, F_m) = 1.   (18)

Proof: We refer readers to [16].

2) Mixed TPGIC:

Lemma 4: For the mixed TPGIC with H^2_{12} ≥ H^2_{11} and H^2_{21} ≤ H^2_{22}, the sum capacity is given by

min( sum_{m=1}^{M} F_m, sum_{m=1}^{M} (D_m + G_m) ).   (19)

Proof: The sum capacity for the mixed TPGIC is given by

max_{diag(S_k) ≤ P_k, k=1,2} min( (1/2) log_2 |I_M + H_{12} S_1 H_{12}^T + H_{22} S_2 H_{22}^T|, (1/2) log_2 |I_M + H_{11} S_1 H_{11}^T (I_M + H_{21} S_2 H_{21}^T)^{-1}| + (1/2) log_2 |I_M + H_{22} S_2 H_{22}^T| ),   (20)

which follows from [11]. Note that the condition

|I_M + H^2_{22} S_2| / |I_M + H^2_{21} S_2| ≤ |I_M + H^2_{22} P_2| / |I_M + H^2_{21} P_2|   (21)

for any S_2 with P_2 = diag(S_2) is sufficient to show that diagonal covariance matrices P_1 and P_2 are optimal for the sum capacity. Since (21) is satisfied whenever H^2_{21} ≤ H^2_{22} (we omit the proof of this due to space limitations and refer readers to [16]), (20) becomes (19), which completes the proof.

Remark 1: Maximizing power in each sub-channel is optimal for (20).

Corollary 2: For mixed TPGICs with H^2_{12} ≥ H^2_{11} and H^2_{21} ≤ H^2_{22}, the sum capacity under independent coding is given by

sum_{m=1}^{M} min(F_m, D_m + G_m).   (22)

Proof: It follows from Lemma 4.
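Lemma 3 can be illustrated numerically: as all powers are scaled toward zero, the ratio in (18) approaches 1. The sketch below (with the same kind of arbitrary assumed strong-TPGIC gains as before; not a proof, only an illustration) evaluates the ratio for shrinking power levels:

```python
import numpy as np

def strong_ratio(h11, h12, h21, h22, P1, P2):
    """Ratio in (18): joint-coding sum capacity (16) over
    independent-coding sum capacity (17) for a strong TPGIC."""
    A = 0.5 * np.log2(1 + h11**2 * P1)
    D = 0.5 * np.log2(1 + h22**2 * P2)
    E = 0.5 * np.log2(1 + h11**2 * P1 + h21**2 * P2)
    F = 0.5 * np.log2(1 + h12**2 * P1 + h22**2 * P2)
    joint = min((A + D).sum(), E.sum(), F.sum())
    indep = np.minimum(np.minimum(A + D, E), F).sum()
    return joint / indep

# Assumed strong-TPGIC gains: |h12| >= |h11|, |h21| >= |h22|.
h11 = np.array([1.0, 0.8]); h12 = np.array([1.5, 1.2])
h22 = np.array([1.0, 0.9]); h21 = np.array([1.4, 1.1])

# Scale all powers toward zero and watch the ratio approach 1.
ratios = []
for scale in (1.0, 1e-2, 1e-4):
    P1 = scale * np.array([1.0, 2.0]); P2 = scale * np.array([2.0, 1.0])
    ratios.append(strong_ratio(h11, h12, h21, h22, P1, P2))

print([round(r, 6) for r in ratios])
```

Intuitively, at low SNR each log term is approximately linear in the powers, and in the strong regime A_m + D_m becomes the smallest term on every sub-channel simultaneously, so the min-of-sums and sum-of-mins coincide.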
Lemma 5: In the low SNR regime, for mixed TPGICs with H^2_{12} ≥ H^2_{11} and H^2_{21} ≤ H^2_{22}, the sum capacity under joint coding coincides asymptotically with that under independent coding, i.e.,

lim_{max_{k,m} P_{km} → 0} min( sum_{m=1}^{M} F_m, sum_{m=1}^{M} (D_m + G_m) ) / sum_{m=1}^{M} min(F_m, D_m + G_m) = 1.   (23)

Proof: We refer readers to [16].

3) Noisy-Interference TPGIC:

Lemma 6: There exist TPGICs that are separable when the channel realization H[m] at the m-th sub-channel is included in

N_m = { H[m] : |h_{21,m}|/|h_{22,m}| + |h_{12,m}|/|h_{11,m}| ≤ 1 }

for all m = 1, 2, ..., M.

Proof: It follows from [9], where the separability is proven when the power constraints satisfy a certain condition.

Remark 2: For the TPGIC where the channel realization H[m] at the m-th sub-channel satisfies |h_{21,m}|/|h_{22,m}| + |h_{12,m}|/|h_{11,m}| > 1, |h_{21,m}|/|h_{22,m}| ≤ 1, and |h_{12,m}|/|h_{11,m}| ≤ 1 for all m = 1, 2, ..., M, the sum capacity is not known even for the TPGIC under independent coding.

B. Separability

1) Strong TPGIC:

Theorem 1: The strong TPGIC is separable iff

H[m] ∈ S^1_m for all m = 1, 2, ..., M,   (24)
H[m] ∈ S^2_m for all m = 1, 2, ..., M,   (25)

or

H[m] ∈ S^3_m for all m = 1, 2, ..., M,   (26)

where

S^1_m = { H[m] : 1 + |h_{11,m}|^2 P_{1m} ≤ |h_{21,m}|^2/|h_{22,m}|^2, 1 + |h_{22,m}|^2 P_{2m} ≤ |h_{12,m}|^2/|h_{11,m}|^2 },

S^2_m = { H[m] : 1 + |h_{11,m}|^2 P_{1m} > |h_{21,m}|^2/|h_{22,m}|^2 ≥ 1, |h_{12,m}|^2/|h_{11,m}|^2 ≥ (|h_{22,m}|^2 P_{2m} / (|h_{11,m}|^2 P_{1m})) (|h_{21,m}|^2/|h_{22,m}|^2 − 1) + 1 },

and

S^3_m = { H[m] : 1 + |h_{22,m}|^2 P_{2m} > |h_{12,m}|^2/|h_{11,m}|^2 ≥ 1, |h_{12,m}|^2/|h_{11,m}|^2 < (|h_{22,m}|^2 P_{2m} / (|h_{11,m}|^2 P_{1m})) (|h_{21,m}|^2/|h_{22,m}|^2 − 1) + 1 }.
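The conditions of Theorem 1 and Lemma 6 are pointwise tests on (H[m], P_{1m}, P_{2m}) and transcribe directly into code. The sketch below is a direct transcription (the helper names `in_S1`, `in_S2`, `in_S3`, `in_N` and the example channel values are my own, chosen for illustration):

```python
def in_S1(h11, h12, h21, h22, P1, P2):
    """Membership test for the set S^1_m of Theorem 1."""
    return (1 + h11**2 * P1 <= h21**2 / h22**2) and \
           (1 + h22**2 * P2 <= h12**2 / h11**2)

def in_S2(h11, h12, h21, h22, P1, P2):
    """Membership test for S^2_m."""
    t = (h22**2 * P2) / (h11**2 * P1) * (h21**2 / h22**2 - 1) + 1
    return (1 + h11**2 * P1 > h21**2 / h22**2 >= 1) and (h12**2 / h11**2 >= t)

def in_S3(h11, h12, h21, h22, P1, P2):
    """Membership test for S^3_m."""
    t = (h22**2 * P2) / (h11**2 * P1) * (h21**2 / h22**2 - 1) + 1
    return (1 + h22**2 * P2 > h12**2 / h11**2 >= 1) and (h12**2 / h11**2 < t)

def in_N(h11, h12, h21, h22):
    """Noisy-interference condition of Lemma 6."""
    return abs(h21) / abs(h22) + abs(h12) / abs(h11) <= 1

# Theorem 1: a strong TPGIC is separable iff EVERY sub-channel lies in the
# SAME set S^1, S^2, or S^3 -- the set may differ between channels, but not
# between sub-channels of one channel.
subchannels = [(1.0, 3.0, 3.0, 1.0, 1.0, 1.0),  # (h11, h12, h21, h22, P1, P2)
               (1.0, 4.0, 2.0, 1.0, 1.0, 1.0)]
separable = any(all(test(*sc) for sc in subchannels)
                for test in (in_S1, in_S2, in_S3))
print(separable)
```

In this example both sub-channels fall in S^1 (very strong interference on each), so the channel is separable; replacing one sub-channel with a realization from a different set breaks the uniformity and hence the separability.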
Proof: Since (16) is always greater than or equal to (17), we only need to consider the condition for equality in

min( sum_{m=1}^{M} (A_m + D_m), sum_{m=1}^{M} E_m, sum_{m=1}^{M} F_m ) ≥ sum_{m=1}^{M} min{A_m + D_m, E_m, F_m}.   (27)

A necessary and sufficient condition for equality is given by

A_m + D_m ≤ min{E_m, F_m} for all m = 1, 2, ..., M,
E_m ≤ min{A_m + D_m, F_m} for all m = 1, 2, ..., M, or
F_m ≤ min{A_m + D_m, E_m} for all m = 1, 2, ..., M,

which completes the proof.

2) Mixed TPGIC:

Theorem 2: The mixed TPGIC with H^2_{12} ≥ H^2_{11} and H^2_{21} ≤ H^2_{22} is separable iff

H[m] ∈ M^{[1]}_{1m} for all m = 1, 2, ..., M   (28)

or

H[m] ∈ M^{[1]}_{2m} for all m = 1, 2, ..., M,   (29)

where

M^{[1]}_{1m} = { H[m] : |h_{12,m}|^2/|h_{11,m}|^2 ≤ (1 + |h_{22,m}|^2 P_{2m}) / (1 + |h_{21,m}|^2 P_{2m}) }   (30)

and

M^{[1]}_{2m} = { H[m] : |h_{12,m}|^2/|h_{11,m}|^2 > (1 + |h_{22,m}|^2 P_{2m}) / (1 + |h_{21,m}|^2 P_{2m}) }.   (31)

Proof: The sum capacity is given by min( sum_{m=1}^{M} F_m, sum_{m=1}^{M} (D_m + G_m) ) from Lemma 4. Under independent coding, the sum capacity is given by sum_{m=1}^{M} min(F_m, D_m + G_m) from Corollary 2. Note that

min( sum_{m=1}^{M} F_m, sum_{m=1}^{M} (D_m + G_m) ) ≥ sum_{m=1}^{M} min(F_m, D_m + G_m),

where the necessary and sufficient condition for equality is given by

F_m ≤ D_m + G_m for all m = 1, 2, ..., M   (32)

or

F_m > D_m + G_m for all m = 1, 2, ..., M,   (33)

which completes the proof.

Remark 3: In the low SNR regime, strong TPGICs are asymptotically separable in the sense of the sum capacity, which follows from Lemma 3. Also, for mixed TPGICs with H^2_{12} ≥ H^2_{11} and H^2_{21} ≤ H^2_{22}, separability holds asymptotically, which is confirmed by Lemma 5.

3) Weak TPGIC: From Lemma 6, noisy-interference TPGICs are separable in the sense of the sum capacity. Specifically, single-user decoding at each receiver per sub-channel is enough to achieve the sum capacity. Except for the noisy-interference TPGICs, it is not known whether weak TPGICs with H^2_{12} ≤ H^2_{11} and H^2_{21} ≤ H^2_{22} are separable or not. However, we can settle the separability partially based on some known inner and outer bounds. First, for the TPGIC, the sum capacity under independent coding is upper bounded by

sum_{m=1}^{M} min(A_m + H_m, D_m + G_m, I_m + J_m)   (34)

based on the outer bound results in [13] and [14]. Second, for the TPGIC, the sum capacity under joint coding is lower bounded by

max min(R_{1c} + R_{2c}, R_{12c}) + R_{1p} + R_{2p},   (35)

where the maximization is over diag(S_{kc}) ≤ β_k P_k, diag(S_{kp}) ≤ (1 − β_k) P_k, and 0 ≤ β_k ≤ 1 for k = 1, 2; this follows from the superposition-coding-based achievable scheme in [15]. Here,

R_{1c} = min( (1/2) log_2 |I_M + H_{11} S_{1c} H_{11}^T Z_1^{-1}|, (1/2) log_2 |I_M + H_{12} S_{1c} H_{12}^T Z_2^{-1}| ),
R_{2c} = min( (1/2) log_2 |I_M + H_{21} S_{2c} H_{21}^T Z_1^{-1}|, (1/2) log_2 |I_M + H_{22} S_{2c} H_{22}^T Z_2^{-1}| ),
R_{12c} = min( (1/2) log_2 |I_M + (H_{11} S_{1c} H_{11}^T + H_{21} S_{2c} H_{21}^T) Z_1^{-1}|, (1/2) log_2 |I_M + (H_{12} S_{1c} H_{12}^T + H_{22} S_{2c} H_{22}^T) Z_2^{-1}| ),
R_{1p} = (1/2) log_2 |I_M + H_{11} S_{1p} H_{11}^T (I_M + H_{21} S_{2p} H_{21}^T)^{-1}|,
R_{2p} = (1/2) log_2 |I_M + H_{22} S_{2p} H_{22}^T (I_M + H_{12} S_{1p} H_{12}^T)^{-1}|,

where, for k = 1, 2, S_{kc} (S_{kp}) is the covariance matrix of X_{kc} (X_{kp}), the transmitted signal vector for common (private) information with X_k = X_{kc} + X_{kp}, and the Z_k's are defined as

Z_1 = I_M + H_{11} S_{1p} H_{11}^T + H_{21} S_{2p} H_{21}^T   (36)

and

Z_2 = I_M + H_{12} S_{1p} H_{12}^T + H_{22} S_{2p} H_{22}^T.   (37)

Since we can show that there exist TPGICs for which (35) > (34), inseparable TPGICs are guaranteed to exist in terms of the sum capacity.
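Theorem 2's uniform-sign condition (32)/(33) also makes it easy to construct an inseparable mixed TPGIC: pick two sub-channels that fall on opposite sides of the F_m versus D_m + G_m comparison. The sketch below does exactly that (helper names and channel values are my own assumptions for illustration):

```python
import numpy as np

def F(h12, h22, P1, P2):
    """F_m of eq. (10): multiple-access sum rate at receiver 2."""
    return 0.5 * np.log2(1 + h12**2 * P1 + h22**2 * P2)

def DG(h11, h21, h22, P1, P2):
    """D_m + G_m of eqs. (8) and (11): user 2 at full rate, user 1
    treating interference as noise."""
    D = 0.5 * np.log2(1 + h22**2 * P2)
    G = 0.5 * np.log2(1 + h11**2 * P1 / (1 + h21**2 * P2))
    return D + G

def mixed_separable(subchannels):
    """Theorem 2: the mixed TPGIC is separable iff F_m <= D_m + G_m on
    every sub-channel, or F_m > D_m + G_m on every sub-channel."""
    signs = [F(h12, h22, P1, P2) <= DG(h11, h21, h22, P1, P2)
             for (h11, h12, h21, h22, P1, P2) in subchannels]
    return all(signs) or not any(signs)

# Mixed TPGIC (|h12| >= |h11|, |h21| <= |h22|) whose two sub-channels fall
# on opposite sides of the F vs. D+G comparison -- hence inseparable.
chan = [(1.0, 1.1, 0.1, 1.0, 1.0, 1.0),   # (h11, h12, h21, h22, P1, P2)
        (1.0, 5.0, 0.1, 1.0, 1.0, 1.0)]
print(mixed_separable(chan))
```

Here the first sub-channel satisfies (30) while the second satisfies (31), so neither (28) nor (29) holds and joint coding strictly outperforms independent coding.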
IV. CONCLUSION

We have considered separability, in the sense of the sum capacity, for some TPGICs. Since joint coding is more complicated to implement than independent coding, separability results can help us identify channels for which we can lower complexity without any loss in sum rate. We have shown necessary and sufficient conditions for the separability of strong and mixed TPGICs. One interesting observation is that, unlike for weak OPGICs, independent coding is not always sum-rate optimal for strong and mixed TPGICs. However, in the low SNR regime, separability holds asymptotically for strong and mixed TPGICs.

ACKNOWLEDGMENT

This work was supported by the IT R&D program of MKE/IITA [2008-F-004-01, 5G mobile communication systems based on beam division multiple access and relays with group cooperation].

[Fig. 1 (figure omitted): Separable parallel Gaussian interference channels whose channel realizations are H[m] ∈ S^i_m for all m = 1, 2, ..., M, H[m] ∈ M^{[j]}_{km} for all m = 1, 2, ..., M, or H[m] ∈ N_m for all m = 1, 2, ..., M, where i = 1, 2, 3, j = 1, 2, and k = 1, 2.]

REFERENCES

[1] A. B. Carleial, "A case where interference does not reduce capacity," IEEE Trans. Inf. Theory, vol. IT-21, no. 5, pp. 569-570, Sep. 1975.
[2] H. Sato, "The capacity of the Gaussian interference channel under strong interference," IEEE Trans. Inf. Theory, vol. IT-27, no. 6, pp. 786-788, Nov. 1981.
[3] M. H. M. Costa and A. A. El Gamal, "The capacity region of the discrete memoryless interference channel with strong interference," IEEE Trans. Inf. Theory, vol. IT-33, no. 5, pp. 710-711, Sep. 1987.
[4] X. Shang, G. Kramer, and B. Chen, "A new outer bound and the noisy-interference sum-rate capacity for Gaussian interference channels," preprint.
[5] A. S. Motahari and A. K. Khandani, "Capacity bounds for the Gaussian interference channel," IEEE Trans. Inf. Theory, vol. IT-55, no. 2, pp. 620-643, Feb. 2009.
[6] V. S. Annapureddy and V. V. Veeravalli, "Gaussian interference networks: sum capacity in the low interference regime and new outer bounds on the capacity region," preprint.
[7] C. W. Sung, K. W. K. Lui, K. W. Shum, and H. C. So, "Sum capacity of one-sided parallel Gaussian interference channels," IEEE Trans. Inf. Theory, vol. 54, no. 1, pp. 468-472, Jan. 2008.
[8] L. Sankar, X. Shang, E. Erkip, and H. V. Poor, "Ergodic two-user interference channels: Is separability optimal?," in Proc. Allerton Conference, 2008.
[9] X. Shang, B. Chen, G. Kramer, and H. V. Poor, "Noisy-interference sum-rate capacity of parallel Gaussian interference channels," preprint.
[10] S. Viswanath and S. A. Jafar, "On the capacity of vector Gaussian interference channels," in Proc. IEEE ITW, 2004.
[11] X. Shang, B. Chen, G. Kramer, and H. V. Poor, "On the capacity of MIMO interference channels," in Proc. Allerton Conference, 2008.
[12] R. A. Horn and C. R. Johnson, Matrix Analysis, New York: Cambridge Univ. Press, 1988.
[13] G. Kramer, "Outer bounds on the capacity of Gaussian interference channels," IEEE Trans. Inf. Theory, vol. 50, no. 3, pp. 581-586, Mar. 2004.
[14] R. H. Etkin, D. N. C. Tse, and H. Wang, "Gaussian interference channel capacity to within one bit," IEEE Trans. Inf. Theory, submitted for publication, 2007.
[15] X. Shang, B. Chen, and M. J. Gans, "On the achievable sum rate for MIMO interference channels," IEEE Trans. Inf. Theory, vol. 52, no. 9, pp. 4313-4320, Sep. 2006.
[16] S. W. Choi and S.-Y. Chung, "On the separability of the parallel Gaussian interference channels," in preparation.
