Multipath Channels of Bounded Capacity

Tobias Koch and Amos Lapidoth
ETH Zurich, Switzerland
Email: {tkoch, lapidoth}@isi.ee.ethz.ch

Abstract—The capacity of discrete-time, non-coherent, multipath fading channels is considered. It is shown that if the delay spread is large in the sense that the variances of the path gains do not decay faster than geometrically, then capacity is bounded in the signal-to-noise ratio.

I. INTRODUCTION

This paper studies non-coherent multipath (frequency-selective) fading channels. Such channels have been investigated extensively in the wideband regime, where the signal-to-noise ratio (SNR) is typically small, and it was shown that in the limit as the available bandwidth tends to infinity the capacity of the fading channel is the same as the capacity of the additive white Gaussian noise (AWGN) channel of equal received power; see [1].¹

When the SNR is large we encounter a different situation. Indeed, it has been shown in [5] for non-coherent frequency-flat fading channels that if the fading process is regular in the sense that the present fading cannot be predicted perfectly from its past, then at high SNR capacity increases only double-logarithmically in the SNR. This is in stark contrast to the logarithmic growth of the AWGN capacity. See [6], [7], [8], and [9] for extensions to multi-antenna systems, and see [10] and [11] for extensions to non-regular fading, i.e., when the present fading can be predicted perfectly from its past. Thus, communicating over non-coherent flat-fading channels at high SNR is power inefficient.

In this paper, we show that communicating over non-coherent multipath fading channels at high SNR is not merely power inefficient, but even worse: if the delay spread is large in the sense that the variances of the path gains do not decay faster than geometrically, then capacity is bounded in the SNR.
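The gap between logarithmic and double-logarithmic growth can be made concrete numerically. The following sketch is our own illustration (not part of the paper); it evaluates both growth laws in nats at a few SNR values:

```python
import math

# Our own illustration (not from the paper): AWGN capacity grows like
# log(1 + SNR), whereas the capacity of regular non-coherent flat-fading
# channels grows only like log log SNR at high SNR [5].
for snr_db in (20, 60, 100):
    snr = 10.0 ** (snr_db / 10.0)        # SNR in linear scale
    awgn = math.log(1.0 + snr)           # nats per channel use
    loglog = math.log(math.log(snr))     # growth order only; constants omitted
    print(f"{snr_db:3d} dB: log(1+SNR) = {awgn:6.2f}, log(log(SNR)) = {loglog:4.2f}")
```

At 100 dB the logarithmic term is about 23 nats, while the double-logarithmic term is only about 3.1, which is why communicating over such fading channels at high SNR is power inefficient.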
For such channels, capacity does not tend to infinity as the SNR tends to infinity. To state this result precisely, we begin with a mathematical description of the channel model.

A. Channel Model

Let $\mathbb{C}$ and $\mathbb{Z}^+$ denote the set of complex numbers and the set of positive integers, respectively. We consider a discrete-time multipath fading channel whose channel output $Y_k \in \mathbb{C}$ at time $k \in \mathbb{Z}^+$ corresponding to the channel inputs $(x_1, x_2, \ldots, x_k) \in \mathbb{C}^k$ is given by

\[ Y_k = \sum_{\ell=1}^{k} H_k^{(k-\ell)} x_\ell + Z_k. \tag{1} \]

Here, $H_k^{(\ell)}$ denotes the time-$k$ gain of the $\ell$-th path, and $\{Z_k\}$ is a sequence of independent and identically distributed (IID), zero-mean, variance-$\sigma^2$, circularly-symmetric, complex Gaussian random variables. We assume that for each path $\ell \in \mathbb{Z}_0^+$ (with $\mathbb{Z}_0^+$ denoting the set of non-negative integers) the stochastic process $\bigl\{H_k^{(\ell)},\, k \in \mathbb{Z}^+\bigr\}$ is a zero-mean stationary process. We denote its variance and its differential entropy rate by

\[ \alpha_\ell \triangleq \mathsf{E}\Bigl[\bigl|H_k^{(\ell)}\bigr|^2\Bigr], \quad \ell \in \mathbb{Z}_0^+ \]

and

\[ h_\ell \triangleq \lim_{n\to\infty} \frac{1}{n}\, h\bigl(H_1^{(\ell)}, H_2^{(\ell)}, \ldots, H_n^{(\ell)}\bigr), \quad \ell \in \mathbb{Z}_0^+, \]

respectively. We further assume that

\[ \sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell < \infty \quad \text{and} \quad \inf_{\ell \in \mathcal{L}} h_\ell > -\infty, \tag{2} \]

where the set $\mathcal{L}$ is defined as $\mathcal{L} \triangleq \{\ell \in \mathbb{Z}_0^+ : \alpha_\ell > 0\}$. We finally assume that the processes $\bigl\{H_k^{(0)}\bigr\}, \bigl\{H_k^{(1)}\bigr\}, \ldots$ are independent ("uncorrelated scattering"), that they are jointly independent of $\{Z_k\}$, and that the joint law of $\{Z_k\}$ and $\bigl\{H_k^{(0)}\bigr\}, \bigl\{H_k^{(1)}\bigr\}, \ldots$ does not depend on the input sequence $\{x_k\}$.

¹However, in contrast to the infinite-bandwidth capacity of the AWGN channel, where the conditions on the capacity-achieving input distribution are not so stringent, the infinite-bandwidth capacity of non-coherent fading channels can only be achieved by signaling schemes that are "peaky"; see also [2], [3], [4] and references therein.
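As a quick illustration of the channel (1), the following sketch (ours, not from the paper) samples outputs with geometrically decaying path-gain variances $\alpha_\ell = \rho^\ell$. For simplicity the gains are drawn IID across time, a special case of the stationary path-gain processes assumed above:

```python
import math
import random

def sample_cgauss(var, rng):
    """Zero-mean circularly-symmetric complex Gaussian with variance var."""
    s = math.sqrt(var / 2.0)
    return complex(rng.gauss(0.0, s), rng.gauss(0.0, s))

def channel_outputs(x, rho=0.5, sigma2=1.0, seed=0):
    """Sample Y_1..Y_k from equation (1) with alpha_l = rho**l.

    Gains are drawn IID across time for simplicity; the paper only
    requires each path's gain process to be stationary.
    """
    rng = random.Random(seed)
    y = []
    for k in range(1, len(x) + 1):
        # input x_ell arrives at time k through path l = k - ell
        yk = sum(sample_cgauss(rho ** (k - ell), rng) * x[ell - 1]
                 for ell in range(1, k + 1))
        y.append(yk + sample_cgauss(sigma2, rng))
    return y

print(len(channel_outputs([1.0] * 8)))  # 8 output samples
```

Note how the number of terms influencing $Y_k$ grows with $k$; this is the source of the non-stationarity discussed below.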
We consider a non-coherent channel model where neither the transmitter nor the receiver is cognizant of the realization of $\bigl\{H_k^{(\ell)},\, k \in \mathbb{Z}^+\bigr\}$, $\ell \in \mathbb{Z}_0^+$, but both are aware of their statistics. We do not assume that the path gains are Gaussian.

B. Channel Capacity

Let $A_m^n$ denote the sequence $A_m, A_{m+1}, \ldots, A_n$. We define the capacity as

\[ C(\mathrm{SNR}) \triangleq \liminf_{n\to\infty} \frac{1}{n} \sup I\bigl(X_1^n; Y_1^n\bigr), \tag{3} \]

where the maximization is over all joint distributions on $X_1, X_2, \ldots, X_n$ satisfying the power constraint

\[ \frac{1}{n} \sum_{k=1}^{n} \mathsf{E}\bigl[|X_k|^2\bigr] \le \mathcal{P}, \tag{4} \]

and where $\mathrm{SNR}$ is defined as

\[ \mathrm{SNR} \triangleq \frac{\mathcal{P}}{\sigma^2}. \tag{5} \]

By Fano's inequality, no rate above $C(\mathrm{SNR})$ is achievable.² (See [13] for a definition of an achievable rate.) Notice that the above channel (1) is generally not stationary³ since the number of terms (paths) influencing $Y_k$ depends on $k$. It is therefore prima facie not clear whether the liminf on the RHS of (3) is a limit.

C. Main Result

Theorem 1: Consider the above channel model. Then

\[ \liminf_{\ell\to\infty} \frac{\alpha_{\ell+1}}{\alpha_\ell} > 0 \;\Longrightarrow\; \sup_{\mathrm{SNR}>0} C(\mathrm{SNR}) < \infty, \tag{6} \]

where we define, for any $a > 0$, $a/0 \triangleq \infty$ and $0/0 \triangleq 0$.

For example, when $\{\alpha_\ell\}$ is a geometric sequence, i.e., $\alpha_\ell = \rho^\ell$, $\ell \in \mathbb{Z}_0^+$, for some $0 < \rho < 1$, then capacity is bounded. Theorem 1 is proved in Section II, where it is even shown that (6) would continue to hold if we replaced the liminf in (3) by a limsup. Section III briefly addresses multipath channels of unbounded capacity.

II. PROOF OF THEOREM 1

The proof follows along the same lines as the proof of [14, Thm. 1 i)]. We first note that it follows from the left-hand side (LHS) of (6) that we can find an $\ell_0 \in \mathbb{Z}_0^+$ and a $0 < \rho < 1$ so that

\[ \alpha_{\ell_0} > 0 \quad \text{and} \quad \frac{\alpha_{\ell+1}}{\alpha_\ell} \ge \rho, \quad \ell = \ell_0, \ell_0+1, \ldots. \tag{7} \]

We continue with the chain rule for mutual information:

\[ \frac{1}{n} I\bigl(X_1^n; Y_1^n\bigr) = \frac{1}{n} \sum_{k=1}^{\ell_0} I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr) + \frac{1}{n} \sum_{k=\ell_0+1}^{n} I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr). \tag{8} \]

Each term in the first sum on the right-hand side (RHS) of (8) is upper bounded by⁴

\[ \begin{aligned} I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr) &\le h(Y_k) - h\bigl(Y_k \bigm| Y_1^{k-1}, X_1^n, H_k^{(0)}, H_k^{(1)}, \ldots, H_k^{(k-1)}\bigr) \\ &\le \log\Biggl(\pi e \Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell}\, \mathsf{E}\bigl[|X_\ell|^2\bigr]\Bigr)\Biggr) - \log\bigl(\pi e \sigma^2\bigr) \\ &\le \log\Bigl(1 + \sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\Bigr), \end{aligned} \tag{9} \]

where the first inequality follows because conditioning cannot increase entropy; the second inequality follows from the entropy-maximizing property of Gaussian random variables [13, Thm. 9.6.5]; and the last inequality follows by upper bounding $\alpha_{k-\ell} \le \sup_{\ell' \in \mathbb{Z}_0^+} \alpha_{\ell'}$ and from the power constraint (4).

²See [12] for conditions that guarantee that $C(\mathrm{SNR})$ is achievable.
³By a stationary channel we mean a channel where for any stationary input $\{X_k\}$ the pair $\{(X_k, Y_k)\}$ is jointly stationary.
⁴Throughout this paper, $\log(\cdot)$ denotes the natural logarithm function.

For $k = \ell_0+1, \ell_0+2, \ldots, n$, we upper bound $I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr)$ using the general upper bound for mutual information [5, Thm. 5.1]

\[ I(X; Y) \le \int D\bigl(W(\cdot \mid x) \,\big\|\, R(\cdot)\bigr)\, \mathrm{d}Q(x), \tag{10} \]

where $D(\cdot\|\cdot)$ denotes relative entropy, $W(\cdot|\cdot)$ is the channel law, $Q(\cdot)$ denotes the distribution on the channel input $X$, and $R(\cdot)$ is any distribution on the output alphabet.⁵ Thus, any choice of output distribution $R(\cdot)$ yields an upper bound on the mutual information. For any given $Y_1^{k-1} = y_1^{k-1}$, we choose the output distribution $R(\cdot)$ to be of density

\[ \frac{\sqrt{\beta}}{\pi^2\, |y_k|} \cdot \frac{1}{1 + \beta |y_k|^2}, \quad y_k \in \mathbb{C}, \tag{11} \]

with $\beta = 1/\bigl(\tilde{\beta}\, |y_{k-\ell_0}|^2\bigr)$ and⁶

\[ \tilde{\beta} = \min\Biggl\{ \rho^{\ell_0-1}\, \frac{\alpha_{\ell_0}}{\max_{0 \le \ell' \le \ell_0} \alpha_{\ell'}},\ \alpha_{\ell_0},\ \rho^{\ell_0} \Biggr\}. \tag{12} \]

With this choice,

\[ 0 < \tilde{\beta} < 1 \quad \text{and} \quad \tilde{\beta}\, \alpha_\ell \le \alpha_{\ell+\ell_0}, \quad \ell \in \mathbb{Z}_0^+. \tag{13} \]
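As a sanity check on the output law (11) (our own verification, assuming the density reads $f(y) = \sqrt{\beta}\,/\bigl(\pi^2 |y| (1+\beta|y|^2)\bigr)$): in polar coordinates the factor $|y|$ in the denominator cancels the Jacobian $r$, so the total mass is $(2\sqrt{\beta}/\pi)\int_0^\infty (1+\beta r^2)^{-1}\,\mathrm{d}r = 1$ for every $\beta > 0$. A crude midpoint-rule quadrature confirms this:

```python
import math

def radial_mass(beta, r_max=1e4, n=1_000_000):
    """Numerically integrate the density (11) over the complex plane.

    In polar coordinates the integral reduces to
      (2*sqrt(beta)/pi) * Integral_0^inf dr / (1 + beta*r^2),
    which equals 1; the tail beyond r_max is O(1/(beta*r_max)).
    """
    h = r_max / n
    s = sum(1.0 / (1.0 + beta * ((i + 0.5) * h) ** 2) for i in range(n))
    return (2.0 * math.sqrt(beta) / math.pi) * s * h

print(round(radial_mass(2.0), 3))  # close to 1.0
```

The heavy (Cauchy-like) tail of this density is what makes the duality bound (10) insensitive to the unbounded input power at high SNR.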
Using (11) in (10), and averaging over $Y_1^{k-1}$, we obtain

\[ \begin{aligned} I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr) \le{}& \frac{1}{2} \mathsf{E}\bigl[\log |Y_k|^2\bigr] + \frac{1}{2} \mathsf{E}\Bigl[\log\bigl(\tilde{\beta}\, |Y_{k-\ell_0}|^2\bigr)\Bigr] + \mathsf{E}\Bigl[\log\bigl(\tilde{\beta}\, |Y_{k-\ell_0}|^2 + |Y_k|^2\bigr)\Bigr] \\ &- h\bigl(Y_k \bigm| X_1^n, Y_1^{k-1}\bigr) - \mathsf{E}\bigl[\log |Y_{k-\ell_0}|^2\bigr] + \log \frac{\pi^2}{\tilde{\beta}}. \end{aligned} \tag{14} \]

We bound the terms in (14) separately. We begin with

\[ \mathsf{E}\bigl[\log |Y_k|^2\bigr] = \mathsf{E}\Bigl[\mathsf{E}\bigl[\log |Y_k|^2 \bigm| X_1^k\bigr]\Bigr] \le \mathsf{E}\Bigl[\log \mathsf{E}\bigl[|Y_k|^2 \bigm| X_1^k\bigr]\Bigr] = \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr], \tag{15} \]

where the inequality follows from Jensen's inequality. Likewise, we use Jensen's inequality and (13) to upper bound

\[ \mathsf{E}\Bigl[\log\bigl(\tilde{\beta}\, |Y_{k-\ell_0}|^2\bigr)\Bigr] \le \mathsf{E}\Biggl[\log\Bigl(\tilde{\beta}\sigma^2 + \tilde{\beta} \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\Bigr)\Biggr] \le \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] \tag{16} \]

and

\[ \begin{aligned} \mathsf{E}\Bigl[\log\bigl(\tilde{\beta}\, |Y_{k-\ell_0}|^2 + |Y_k|^2\bigr)\Bigr] &\le \mathsf{E}\Biggl[\log\Bigl(2\sigma^2 + 2\sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2 + \sum_{\ell=k-\ell_0+1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] \\ &\le \log 2 + \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr], \end{aligned} \tag{17} \]

where the second inequality follows because $\sum_{\ell=k-\ell_0+1}^{k} \alpha_{k-\ell} |X_\ell|^2 \le 2 \sum_{\ell=k-\ell_0+1}^{k} \alpha_{k-\ell} |X_\ell|^2$.

⁵For channels with finite input and output alphabets this inequality follows from Topsøe's identity [15]; see also [16, Thm. 3.4].
⁶When $y_{k-\ell_0} = 0$, the density of the Cauchy distribution (11) is undefined. However, this event is of zero probability and therefore has no impact on the mutual information $I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr)$.

Next, we derive a lower bound on $h\bigl(Y_k \bigm| X_1^n, Y_1^{k-1}\bigr)$. Let $\mathsf{H}_{k'} = \bigl(H_{k'}^{(0)}, H_{k'}^{(1)}, \ldots, H_{k'}^{(k-1)}\bigr)$, $k' = 1, 2, \ldots, k-1$. We have

\[ h\bigl(Y_k \bigm| X_1^n, Y_1^{k-1}\bigr) \ge h\bigl(Y_k \bigm| X_1^n, Y_1^{k-1}, \mathsf{H}_1^{k-1}\bigr) = h\bigl(Y_k \bigm| X_1^n, \mathsf{H}_1^{k-1}\bigr), \tag{18} \]

where the inequality follows because conditioning cannot increase entropy, and where the equality follows because, conditional on $\bigl(X_1^n, \mathsf{H}_1^{k-1}\bigr)$, $Y_k$ is independent of $Y_1^{k-1}$. Let $\mathcal{S}_k$ be defined as

\[ \mathcal{S}_k \triangleq \bigl\{\ell = 1, 2, \ldots, k : \min\{|x_\ell|^2, \alpha_{k-\ell}\} > 0\bigr\}. \tag{19} \]

Using the entropy power inequality [13, Thm. 16.6.3], and using that the processes $\bigl\{H_k^{(0)}\bigr\}, \bigl\{H_k^{(1)}\bigr\}, \ldots$ are independent and jointly independent of $X_1^n$, it can be shown that for any given $X_1^n = x_1^n$

\[ h\Biggl(\sum_{\ell=1}^{k} H_k^{(k-\ell)} X_\ell + Z_k \Bigm| X_1^n = x_1^n,\, \mathsf{H}_1^{k-1}\Biggr) \ge \log\Biggl(\sum_{\ell \in \mathcal{S}_k} e^{h\bigl(H_k^{(k-\ell)} X_\ell \,\mid\, X_\ell = x_\ell,\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\bigr)} + e^{h(Z_k)}\Biggr). \tag{20} \]

We lower bound the differential entropies on the RHS of (20) as follows. The differential entropies in the sum are lower bounded by

\[ \begin{aligned} h\bigl(H_k^{(k-\ell)} X_\ell \bigm| X_\ell = x_\ell,\, \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\bigr) &= \log\bigl(\alpha_{k-\ell} |x_\ell|^2\bigr) + h\bigl(H_k^{(k-\ell)} \bigm| \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\bigr) - \log \alpha_{k-\ell} \\ &\ge \log\bigl(\alpha_{k-\ell} |x_\ell|^2\bigr) + \inf_{\ell' \in \mathcal{L}} \bigl(h_{\ell'} - \log \alpha_{\ell'}\bigr), \quad \ell \in \mathcal{S}_k, \end{aligned} \tag{21} \]

where the equality follows from the behavior of differential entropy under scaling; and where the inequality follows by the stationarity of the process $\bigl\{H_k^{(k-\ell)},\, k \in \mathbb{Z}^+\bigr\}$, which implies that the differential entropy $h\bigl(H_k^{(k-\ell)} \bigm| \{H_{k'}^{(k-\ell)}\}_{k'=1}^{k-1}\bigr)$ cannot be smaller than the differential entropy rate $h_{k-\ell}$ [13, Thms. 4.2.1 & 4.2.2], and by lower bounding $\bigl(h_{k-\ell} - \log \alpha_{k-\ell}\bigr)$ by $\inf_{\ell' \in \mathcal{L}} \bigl(h_{\ell'} - \log \alpha_{\ell'}\bigr)$ (which is permissible for each $\ell \in \mathcal{S}_k$ because $\ell \in \mathcal{S}_k$ implies $\alpha_{k-\ell} > 0$ and hence $k-\ell \in \mathcal{L}$). The last differential entropy on the RHS of (20) is lower bounded by

\[ h(Z_k) \ge \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr) + \log \sigma^2, \tag{22} \]

which follows from $h(Z_k) = \log\bigl(\pi e \sigma^2\bigr)$ by noting that

\[ \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr) \le \log(\pi e) \tag{23} \]

(since $h_\ell \le \log\bigl(\pi e\, \alpha_\ell\bigr)$ for every $\ell \in \mathcal{L}$). Applying (21) and (22) to (20), and averaging over $X_1^n$, then yields

\[ h\bigl(Y_k \bigm| X_1^n, Y_1^{k-1}\bigr) \ge \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] + \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr). \tag{24} \]

We continue with the analysis of (14) by lower bounding $\mathsf{E}\bigl[\log |Y_{k-\ell_0}|^2\bigr]$. To this end, we write the expectation as

\[ \mathsf{E}\Biggl[\mathsf{E}\Biggl[\log \Bigl|\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}\Bigr|^2 \Bigm| X_1^{k-\ell_0}\Biggr]\Biggr]. \]

To simplify notation, for given $X_1^{k-\ell_0} = x_1^{k-\ell_0}$ define

\[ A \triangleq \frac{\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)}\, x_\ell + Z_{k-\ell_0}}{\sqrt{\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell}\, |x_\ell|^2}}, \]

and note that $\mathsf{E}\bigl[|A|^2\bigr] = 1$. We lower bound the conditional expectation by

\[ \begin{aligned} \mathsf{E}\Biggl[\log \Bigl|\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0}\Bigr|^2 \Bigm| X_1^{k-\ell_0} = x_1^{k-\ell_0}\Biggr] &= \log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2\Bigr) - 2\, \mathsf{E}\bigl[\log |A|^{-1}\bigr] \\ &\ge \log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2\Bigr) + \log \delta^2 - 2\,\epsilon(\delta, \eta) - \frac{2}{\eta}\, h^{-}(A) \end{aligned} \tag{25} \]

for some $0 < \delta \le 1$ and $0 < \eta < 1$, where

\[ h^{-}(X) \triangleq \int_{\{x \in \mathbb{C} \,:\, f_X(x) > 1\}} f_X(x) \log f_X(x)\, \mathrm{d}x, \tag{26} \]

and where $\epsilon(\delta, \eta) > 0$ tends to zero as $\delta \downarrow 0$. (We write $x_\ell$ in lower case to indicate that expectation and entropy are conditional on $X_1^{k-\ell_0} = x_1^{k-\ell_0}$.) Here, the inequality follows by writing the expectation in the form $\mathsf{E}\bigl[\log |A|^{-1} \cdot \mathrm{I}\{|A| > \delta\}\bigr] + \mathsf{E}\bigl[\log |A|^{-1} \cdot \mathrm{I}\{|A| \le \delta\}\bigr]$ (where $\mathrm{I}\{\cdot\}$ denotes the indicator function), and by then upper bounding the first expectation by $-\log \delta$ and the second expectation using [5, Lemma 6.7].

We continue by upper bounding

\[ h^{-}(A) = h^{+}(A) - h(A) \le \frac{2}{e} + \log(\pi e) + \log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |x_\ell|^2\Bigr) - h\Biggl(\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} x_\ell + Z_{k-\ell_0}\Biggr), \tag{27} \]

where $h^{+}(X)$ is defined as $h^{+}(X) \triangleq h(X) + h^{-}(X)$. Here, we applied [5, Lemma 6.4] to upper bound

\[ h^{+}(A) \le \frac{2}{e} + \log(\pi e). \tag{28} \]

Averaging (27) over $X_1^{k-\ell_0}$ yields

\[ \begin{aligned} h^{-}\bigl(A \bigm| X_1^{k-\ell_0}\bigr) &\le \frac{2}{e} + \log(\pi e) + \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\Bigr)\Biggr] - h\Biggl(\sum_{\ell=1}^{k-\ell_0} H_{k-\ell_0}^{(k-\ell_0-\ell)} X_\ell + Z_{k-\ell_0} \Bigm| X_1^{k-\ell_0}\Biggr) \\ &\le \frac{2}{e} + \log(\pi e) - \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr), \end{aligned} \tag{29} \]

where the second inequality follows by additionally conditioning the differential entropy on $Y_1^{k-\ell_0-1}$, and by then using the lower bound (24). A lower bound on $\mathsf{E}\bigl[\log |Y_{k-\ell_0}|^2\bigr]$ follows now by averaging (25) over $X_1^{k-\ell_0}$ and applying (29):

\[ \mathsf{E}\bigl[\log |Y_{k-\ell_0}|^2\bigr] \ge \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\Bigr)\Biggr] + \log \delta^2 - 2\,\epsilon(\delta, \eta) - \frac{2}{\eta}\Bigl(\frac{2}{e} + \log(\pi e)\Bigr) + \frac{2}{\eta} \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr). \tag{30} \]

Returning to the analysis of (14), we obtain from (30), (24), (17), (16), and (15)

\[ \begin{aligned} I\bigl(X_1^n; Y_k \bigm| Y_1^{k-1}\bigr) \le{}& \frac{1}{2} \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] + \frac{1}{2} \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] + \log 2 \\ &+ \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] - \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] - \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr) \\ &- \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\Bigr)\Biggr] - \log \delta^2 + 2\,\epsilon(\delta, \eta) + \frac{2}{\eta}\Bigl(\frac{2}{e} + \log(\pi e)\Bigr) - \frac{2}{\eta} \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr) + \log \frac{\pi^2}{\tilde{\beta}} \\ \le{}& K + \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] - \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\Bigr)\Biggr], \end{aligned} \tag{31} \]

with

\[ K \triangleq -\Bigl(1 + \frac{2}{\eta}\Bigr) \inf_{\ell \in \mathcal{L}} \bigl(h_\ell - \log \alpha_\ell\bigr) + \log \frac{2\pi^2}{\tilde{\beta}\, \delta^2} + 2\,\epsilon(\delta, \eta) + \frac{2}{\eta}\Bigl(\frac{2}{e} + \log(\pi e)\Bigr). \tag{32} \]

The second inequality in (31) follows because $\sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell} |X_\ell|^2 \le \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2$.

In order to show that the capacity is bounded in the SNR, we apply (31) and (9) to (8), and then use that for any sequences $\{a_k\}$ and $\{b_k\}$

\[ \sum_{k=\ell_0+1}^{n} (a_k - b_k) = \sum_{k=n-\ell_0+1}^{n} \bigl(a_k - b_{k-n+2\ell_0}\bigr) + \sum_{k=\ell_0+1}^{n-\ell_0} \bigl(a_k - b_{k+\ell_0}\bigr). \tag{33} \]

Defining

\[ a_k \triangleq \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2\Bigr)\Biggr] \tag{34} \]

and

\[ b_k \triangleq \mathsf{E}\Biggl[\log\Bigl(\sigma^2 + \sum_{\ell=1}^{k-\ell_0} \alpha_{k-\ell_0-\ell} |X_\ell|^2\Bigr)\Biggr], \tag{35} \]

we have for the first sum on the RHS of (33)

\[ \sum_{k=n-\ell_0+1}^{n} \bigl(a_k - b_{k-n+2\ell_0}\bigr) = \sum_{k=n-\ell_0+1}^{n} \mathsf{E}\Biggl[\log\Biggl(\frac{\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2}{\sigma^2 + \sum_{\ell=1}^{k-n+\ell_0} \alpha_{k-n+\ell_0-\ell} |X_\ell|^2}\Biggr)\Biggr] \le \ell_0 \log\Bigl(1 + \sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\Bigr), \tag{36} \]
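The index shift in (33) merely re-partitions the same sums: on both sides, $a_k$ runs over $k = \ell_0+1, \ldots, n$ and $b_k$ over the same index range. A quick numerical check (our own) with random sequences:

```python
import random

def identity_holds(n, l0, seed=0):
    """Check equation (33) for random sequences {a_k}, {b_k}, 1 <= k <= n."""
    rng = random.Random(seed)
    a = [rng.random() for _ in range(n + 1)]  # a[k] for k = 0..n (index 0 unused)
    b = [rng.random() for _ in range(n + 1)]
    lhs = sum(a[k] - b[k] for k in range(l0 + 1, n + 1))
    rhs = (sum(a[k] - b[k - n + 2 * l0] for k in range(n - l0 + 1, n + 1))
           + sum(a[k] - b[k + l0] for k in range(l0 + 1, n - l0 + 1)))
    return abs(lhs - rhs) < 1e-9

print(all(identity_holds(n, l0) for n in (10, 25) for l0 in (1, 2, 4)))  # True
```

The point of the re-partition is that only the $\ell_0$ boundary terms (36) grow with $n$, and they do so only logarithmically, so they vanish after division by $n$.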
which follows by lower bounding the denominator by $\sigma^2$, and by then using Jensen's inequality along with the last inequality in (9). For the second sum on the RHS of (33) we have

\[ \sum_{k=\ell_0+1}^{n-\ell_0} \bigl(a_k - b_{k+\ell_0}\bigr) = \sum_{k=\ell_0+1}^{n-\ell_0} \mathsf{E}\Biggl[\log\Biggl(\frac{\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2}{\sigma^2 + \sum_{\ell=1}^{k} \alpha_{k-\ell} |X_\ell|^2}\Biggr)\Biggr] = 0. \tag{37} \]

Thus, applying (31)–(37) and (9) to (8), we obtain

\[ \frac{1}{n} I\bigl(X_1^n; Y_1^n\bigr) \le \frac{\ell_0}{n} \log\Bigl(1 + \sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\Bigr) + \frac{\ell_0}{n} \log\Bigl(1 + \sup_{\ell \in \mathbb{Z}_0^+} \alpha_\ell \cdot n \cdot \mathrm{SNR}\Bigr) + \frac{n - 2\ell_0}{n}\, K, \tag{38} \]

which tends to $K < \infty$ as $n$ tends to infinity. This proves Theorem 1.

III. MULTIPATH CHANNELS OF UNBOUNDED CAPACITY

We have seen in Theorem 1 that if the variances of the path gains $\{\alpha_\ell\}$ do not decay faster than geometrically, then capacity is bounded in the SNR. In this section, we demonstrate that this need not be the case when the variances of the path gains decay faster than geometrically. The following theorem presents a sufficient condition for the capacity $C(\mathrm{SNR})$ to be unbounded in the SNR.

Theorem 2: Consider the above channel model. Then

\[ \lim_{\ell\to\infty} \frac{1}{\ell} \log\log\frac{1}{\alpha_\ell} = \infty \;\Longrightarrow\; \sup_{\mathrm{SNR}>0} C(\mathrm{SNR}) = \infty. \tag{39} \]

Proof: Omitted.

Note: We do not claim that $C(\mathrm{SNR})$ is achievable. However, it can be shown that when, for example, the processes $\bigl\{H_k^{(\ell)},\, k \in \mathbb{Z}^+\bigr\}$, $\ell \in \mathbb{Z}_0^+$, are IID Gaussian, then the maximum achievable rate is unbounded in the SNR, i.e., any rate is achievable for sufficiently large SNR.

Certainly, the condition on the LHS of (39) is satisfied when the channel has finite memory in the sense that for some finite $L \in \mathbb{Z}_0^+$

\[ \alpha_\ell = 0, \quad \ell = L+1, L+2, \ldots. \]

In this case, (1) becomes

\[ Y_k = \begin{cases} \displaystyle\sum_{\ell=0}^{k-1} H_k^{(\ell)} x_{k-\ell} + Z_k, & k = 1, 2, \ldots, L \\[2ex] \displaystyle\sum_{\ell=0}^{L} H_k^{(\ell)} x_{k-\ell} + Z_k, & k = L+1, L+2, \ldots. \end{cases} \tag{40} \]
This channel (40) was studied for general (but finite) $L$ in [17], where it was shown that its capacity satisfies

\[ \lim_{\mathrm{SNR}\to\infty} \frac{C(\mathrm{SNR})}{\log\log \mathrm{SNR}} = 1. \tag{41} \]

Thus, for finite $L$, the capacity pre-loglog (41) is not affected by the multipath behavior. This is perhaps surprising, as Theorem 1 implies that if $L = \infty$, and if the variances of the path gains do not decay faster than geometrically, then the pre-loglog is zero.

ACKNOWLEDGMENT

Discussions with Helmut Bölcskei and Giuseppe Durisi are gratefully acknowledged.

REFERENCES

[1] R. G. Gallager, Information Theory and Reliable Communication. John Wiley & Sons, 1968.
[2] V. Sethuraman, L. Wang, B. Hajek, and A. Lapidoth, "Low SNR capacity of fading channels - MIMO and delay spread," in Proc. IEEE Int. Symposium on Inf. Theory, Nice, France, June 24–29, 2007.
[3] M. Médard and R. G. Gallager, "Bandwidth scaling for fading multipath channels," IEEE Trans. Inform. Theory, vol. 48, no. 4, pp. 840–852, Apr. 2002.
[4] İ. E. Telatar and D. N. C. Tse, "Capacity and mutual information of wideband multipath fading channels," IEEE Trans. Inform. Theory, vol. 46, no. 4, pp. 1384–1400, July 2000.
[5] A. Lapidoth and S. M. Moser, "Capacity bounds via duality with applications to multiple-antenna systems on flat fading channels," IEEE Trans. Inform. Theory, vol. 49, no. 10, pp. 2426–2467, Oct. 2003.
[6] T. Koch and A. Lapidoth, "Degrees of freedom in non-coherent stationary MIMO fading channels," in Proc. Winter School Cod. and Inform. Theory, Bratislava, Slovakia, Feb. 20–25, 2005.
[7] ——, "The fading number and degrees of freedom in non-coherent MIMO fading channels: a peace pipe," in Proc. IEEE Int. Symposium on Inf. Theory, Adelaide, Australia, Sept. 4–9, 2005.
[8] A. Lapidoth and S. M. Moser, "The fading number of single-input multiple-output fading channels with memory," IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 437–453, Feb. 2006.
[9] S. M. Moser, "The fading number of multiple-input multiple-output fading channels with memory," in Proc. IEEE Int. Symposium on Inf. Theory, Nice, France, June 24–29, 2007.
[10] A. Lapidoth, "On the asymptotic capacity of stationary Gaussian fading channels," IEEE Trans. Inform. Theory, vol. 51, no. 2, pp. 437–446, Feb. 2005.
[11] T. Koch and A. Lapidoth, "Gaussian fading is the worst fading," in Proc. IEEE Int. Symposium on Inf. Theory, Seattle, Washington, USA, July 9–14, 2006.
[12] S. Verdú and T. S. Han, "A general formula for channel capacity," IEEE Trans. Inform. Theory, vol. 40, no. 4, pp. 1147–1157, July 1994.
[13] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 1991.
[14] T. Koch, A. Lapidoth, and P. P. Sotiriadis, "A hot channel," in Proc. Inform. Theory Workshop (ITW), Lake Tahoe, CA, USA, Sept. 2–6, 2007.
[15] F. Topsøe, "An information theoretical identity and a problem involving capacity," Studia Sci. Math. Hungar., vol. 2, pp. 291–292, 1967.
[16] I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, 1981.
[17] T. Koch and A. Lapidoth, "On multipath fading channels at high SNR," 2008, subm. to IEEE Int. Symposium on Inf. Theory, Toronto, Canada. [Online]. Available: http://arxiv.org/abs/0801.0672