Approaching Blokh-Zyablov Error Exponent with Linear-Time Encodable/Decodable Codes
Authors: Zheng Wang, Jie Luo
Abstract — Guruswami and Indyk showed in [1] that Forney's error exponent can be achieved with linear coding complexity over binary symmetric channels. This paper extends this conclusion to general discrete-time memoryless channels and shows that Forney's and Blokh-Zyablov error exponents can be arbitrarily approached by one-level and multi-level concatenated codes with linear encoding/decoding complexity. The key result is a revision to Forney's generalized minimum distance decoding algorithm, which enables a low-complexity integration of Guruswami-Indyk's outer codes into the concatenated coding schemes.

Index Terms — coding complexity, concatenated code, error exponent

I. Introduction

Consider communication over a discrete-time memoryless channel modeled by a conditional point mass function (PMF) or probability density function (PDF) $p_{Y|X}(y|x)$, where $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ are the input and output symbols, and $\mathcal{X}$ and $\mathcal{Y}$ are the input and output alphabets, respectively. Let $C$ be the Shannon capacity. Fano showed in [2] that the minimum error probability $P_e$ for block channel codes of rate $R$ and length $N$ is bounded by

$$\lim_{N \to \infty} \frac{-\log P_e}{N} \geq E(R), \qquad (1)$$

where $E(R)$ is a positive function of the channel transition probabilities, known as the error exponent. For finite input and output alphabets, without coding complexity constraint, the maximum achievable $E(R)$ was given by Gallager in [3]:

$$E(R) = \max_{p_X} E_L(R, p_X), \qquad (2)$$

where $p_X$ is the input distribution, and $E_L(R, p_X)$ is given for different values of $R$ as follows:

$$E_L(R, p_X) = \begin{cases} \max_{\rho \geq 1} \{-\rho R + E_x(\rho, p_X)\} & 0 \leq R \leq R_x \\ -R + E_0(1, p_X) & R_x \leq R \leq R_{crit} \\ \max_{0 \leq \rho \leq 1} \{-\rho R + E_0(\rho, p_X)\} & R_{crit} \leq R \leq C \end{cases}. \qquad (3)$$

The definitions of the other variables in (3) can be found in [4].
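For concreteness, the high-rate branch of (3) can be evaluated numerically. The sketch below does this for a binary symmetric channel with uniform input, using the standard Gallager function $E_0(\rho, p_X) = -\log \sum_y \big[\sum_x p_X(x)\, p_{Y|X}(y|x)^{1/(1+\rho)}\big]^{1+\rho}$; the crossover probability and the rates (in nats per channel use) are illustrative choices, not values from the paper.

```python
import numpy as np

def E0_bsc(rho, q):
    """Gallager function E_0(rho, p_X) for a BSC with crossover probability q
    and uniform input p_X; rho may be a scalar or a numpy array."""
    s = 1.0 / (1.0 + rho)
    return rho * np.log(2) - (1.0 + rho) * np.log((1.0 - q) ** s + q ** s)

def random_coding_exponent(R, q, num=2001):
    """Third branch of (3): max over 0 <= rho <= 1 of -rho R + E_0(rho, p_X),
    evaluated by a grid search over rho (rates in nats)."""
    rho = np.linspace(0.0, 1.0, num)
    return np.max(-rho * R + E0_bsc(rho, q))
```

For $q = 0.05$ the capacity is $C = \log 2 - H(0.05) \approx 0.495$ nats per use; the computed exponent is strictly positive for $R < C$, decreases in $R$, and vanishes as $R \to C$.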
If we replace the PMF by a PDF, the summations by integrals, and the max operators by sup in (2) and (3), the maximum achievable error exponent for continuous channels, i.e., channels whose input and/or output alphabets are the set of real numbers [3], is still given by (2).

(The authors are with the Electrical and Computer Engineering Department, Colorado State University, Fort Collins, CO 80523. E-mail: {zhwang, rockey}@engr.colostate.edu.)

In [4], Forney proposed a one-level concatenated coding scheme that achieves the following error exponent, known as Forney's exponent, for any rate $R < C$ with a complexity of $O(N^4)$:

$$E_c(R) = \max_{r_o \in [R/C,\, 1]} (1 - r_o)\, E\!\left(\frac{R}{r_o}\right), \qquad (4)$$

where $r_o$ and $R$ are the outer and the overall rates, respectively. Forney's coding scheme concatenates a maximum distance separable (MDS) outer error-correction code with well-performing inner channel codes. To achieve $E_c(R)$, the decoder is required to exploit reliability information from the inner codes using a generalized minimum distance (GMD) decoding algorithm [4]. Forney's GMD algorithm essentially carries out outer code decoding, under various conditions, $O(N)$ times. The overall decoding complexity of $O(N^4)$ is due to the fact that the outer code (a Reed-Solomon code) used in [4] has a decoding complexity of $O(N^3)$. Forney's concatenated codes were generalized to multi-level concatenated codes, also known as generalized concatenated codes, by Blokh and Zyablov in [5]. As the order of concatenation goes to infinity, the error exponent approaches the following Blokh-Zyablov bound (or Blokh-Zyablov error exponent) [5][6]:

$$E^{(\infty)}(R) = \max_{p_X,\; r_o \in [R/C,\, 1]} \left(\frac{R}{r_o} - R\right) \left[\int_0^{R/r_o} \frac{dx}{E_L(x, p_X)}\right]^{-1}. \qquad (5)$$

In [1], Guruswami and Indyk proposed a family of linear-time encodable/decodable nearly-MDS error-correction codes.
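Forney's exponent (4) and the Blokh-Zyablov bound (5) can be compared by direct numerical evaluation. The sketch below uses a hypothetical inner exponent $E_L(x) = (C - x)^2$ with $C = 1$ purely for illustration (for this choice the integral in (5) has the closed form $a/(1-a)$ with $a = R/r_o$); it is not the exponent of any particular channel.

```python
import numpy as np

# Hypothetical inner-code error exponent, for illustration only: E_L(x) = (C - x)^2.
C = 1.0
E_L = lambda x: (C - x) ** 2

def forney_exponent(R, num=4000):
    """Forney's exponent (4): max over r_o in [R/C, 1] of (1 - r_o) E_L(R / r_o)."""
    r_o = np.linspace(R / C, 1.0, num)
    return np.max((1.0 - r_o) * E_L(R / r_o))

def blokh_zyablov_exponent(R, num=800, grid=2000):
    """Blokh-Zyablov bound (5): maximize over r_o the ratio of (R/r_o - R) to
    the integral of 1/E_L from 0 to R/r_o (trapezoid rule)."""
    best = 0.0
    for r_o in np.linspace(R / C + 1e-6, 1.0, num):
        x = np.linspace(0.0, R / r_o, grid)
        f = 1.0 / E_L(x)
        integral = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(x))
        best = max(best, (R / r_o - R) / integral)
    return best
```

For $R = 0.3$ the closed form gives $E^{(\infty)}(0.3) = (1 - \sqrt{0.3})^2 \approx 0.205$ under this toy $E_L$, versus $E_c(0.3) \approx 0.102$, illustrating the gain promised by multi-level concatenation.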
By concatenating these codes (as outer codes) with fixed-length binary inner codes, together with Justesen's GMD algorithm [7], Forney's error exponent was shown to be achievable over binary symmetric channels (BSCs) with a complexity of $O(N)$ [1], i.e., linear in the codeword length. The number of outer code decodings required by Justesen's GMD algorithm is only a constant¹, as opposed to $O(N)$ in Forney's case [4]. Since each outer code decoding has a complexity of $O(N)$, upper-bounding the number of outer code decodings by a constant is required for achieving the overall linear complexity. Because Justesen's GMD algorithm assumes binary channel outputs [7][8], achievability of Forney's exponent was only proven for BSCs in [1, Theorem 8].

¹ Strictly speaking, the required number of outer code decodings is linear in the inner codeword length, which is fixed at a reasonably large constant.

In this paper, we show that Forney's GMD algorithm can be revised to carry out outer code decoding only a constant number of times². With the help of the revised GMD algorithm, by using Guruswami-Indyk's outer codes with fixed-length inner codes, one-level and multi-level concatenated codes can arbitrarily approach Forney's and Blokh-Zyablov exponents with linear complexity over general discrete-time memoryless channels.

II. Revised GMD Algorithm and Its Impact on Concatenated Codes

Consider one-level concatenated coding schemes. Assume that, for an arbitrarily small $\varepsilon_1 > 0$, we can construct a linear-time encodable/decodable outer error-correction code, with rate $r_o$ and length $N_o$, which can correct $t$ symbol errors and $d$ symbol erasures so long as $2t + d < N_o(1 - r_o - \varepsilon_1)$. Note that this is possible for large $N_o$, as shown by Guruswami and Indyk in [1]. To simplify the notation, we assume $N_o(1 - r_o - \varepsilon_1)$ is an integer.
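The outer code guarantee above is just a threshold condition on the error/erasure pattern; a toy predicate makes it explicit (the numbers below are illustrative, not parameters from the paper).

```python
def correctable(t, d, N_o, r_o, eps1):
    """Guruswami-Indyk outer code guarantee: t symbol errors and d symbol
    erasures are decodable as long as 2t + d < N_o (1 - r_o - eps1)."""
    return 2 * t + d < N_o * (1 - r_o - eps1)
```

For instance, with $N_o = 1000$, $r_o = 0.5$, and $\varepsilon_1 = 0.01$, the design threshold is 490 symbols: 100 errors plus 200 erasures ($2 \cdot 100 + 200 = 400$) are correctable, while 200 errors plus 100 erasures ($500$) are not.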
The outer code is concatenated with suitable inner codes with rate $R_i$ and fixed length $N_i$. The rate and length of the concatenated code are $R = r_o R_i$ and $N = N_o N_i$, respectively. In Forney's GMD decoding, the inner codes forward not only the estimates $\hat{x}_m = [\hat{x}_1, \ldots, \hat{x}_i, \ldots, \hat{x}_{N_o}]$ but also a reliability vector $\alpha = [\alpha_1, \ldots, \alpha_i, \ldots, \alpha_{N_o}]$ to the outer code, where $\hat{x}_i \in GF(q)$, $0 \leq \alpha_i \leq 1$, and $1 \leq i \leq N_o$. Let

$$s(\hat{x}, x) = \begin{cases} +1 & x = \hat{x} \\ -1 & x \neq \hat{x} \end{cases}. \qquad (6)$$

For any outer codeword $x_m = [x_{m1}, x_{m2}, \ldots, x_{mN_o}]$, define a dot product $\alpha \cdot x_m$ as follows:

$$\alpha \cdot x_m = \sum_{i=1}^{N_o} \alpha_i\, s(\hat{x}_i, x_{mi}) = \sum_{i=1}^{N_o} \alpha_i s_i. \qquad (7)$$

Theorem 1: There is at most one codeword $x_m$ that satisfies

$$\alpha \cdot x_m > N_o(r_o + \varepsilon_1). \qquad (8)$$

Theorem 1 is implied by Theorem 3.1 in [4].

Rearrange the weights in ascending order of their values and let $i_1, \ldots, i_j, \ldots, i_{N_o}$ be the indices such that

$$\alpha_{i_1} \leq \ldots \leq \alpha_{i_j} \leq \ldots \leq \alpha_{i_{N_o}}. \qquad (9)$$

Define $q_k = [q_k(\alpha_1), \ldots, q_k(\alpha_j), \ldots, q_k(\alpha_{N_o})]$ for $0 \leq k < 1/\varepsilon_2$, where $\varepsilon_2 > 0$ is a positive constant with $1/\varepsilon_2$ being an integer, and $q_k(\alpha_{i_j})$ is given by

$$q_k(\alpha_{i_j}) = \begin{cases} 0 & \text{if } \alpha_{i_j} \leq k\varepsilon_2 \text{ and } j \leq N_o(1 - r_o - \varepsilon_1) \\ 1 & \text{otherwise} \end{cases}. \qquad (10)$$

Define the dot product $q_k \cdot x_m$ as

$$q_k \cdot x_m = \sum_{i=1}^{N_o} q_k(\alpha_i)\, s(\hat{x}_i, x_{mi}) = \sum_{i=1}^{N_o} q_k(\alpha_i) s_i. \qquad (11)$$

Then the following theorem gives the key result that enables the revision of Forney's GMD decoder².

² The revision can also be regarded as an extension of Justesen's GMD decoding given in [7].

Theorem 2: If $\alpha \cdot x_m > N_o\left[\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right]$, then for some $0 \leq k < 1/\varepsilon_2$, $q_k \cdot x_m > N_o(r_o + \varepsilon_1)$.

Proof: Define a set of values $c_j = (j - 1/2)\varepsilon_2$ for $1 \leq j \leq 1/\varepsilon_2$, and an integer $p = \lceil \alpha_{i_{N_o(1 - r_o - \varepsilon_1)}} / \varepsilon_2 \rceil$, where $1 \leq p \leq 1/\varepsilon_2$³.
Let

$$\begin{aligned} \lambda_0 &= c_1 \\ \lambda_k &= c_{k+1} - c_k, \quad 1 \leq k \leq p - 1 \\ \lambda_p &= \alpha_{i_{N_o(1 - r_o - \varepsilon_1) + 1}} - c_p \\ \lambda_h &= \alpha_{i_{h - p + N_o(1 - r_o - \varepsilon_1) + 1}} - \alpha_{i_{h - p + N_o(1 - r_o - \varepsilon_1)}}, \quad p < h < p + N_o(r_o + \varepsilon_1) \\ \lambda_{p + N_o(r_o + \varepsilon_1)} &= 1 - \alpha_{i_{N_o}}. \end{aligned} \qquad (12)$$

We have

$$\sum_{k=0}^{j-1} \lambda_k = \begin{cases} c_j & 1 \leq j \leq p \\ \alpha_{i_{j - p + N_o(1 - r_o - \varepsilon_1)}} & p < j \leq p + N_o(r_o + \varepsilon_1) \end{cases}, \qquad (13)$$

and

$$\sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k = 1. \qquad (14)$$

Define a new weight vector $\tilde{\alpha} = [\tilde{\alpha}_1, \ldots, \tilde{\alpha}_i, \ldots, \tilde{\alpha}_{N_o}]$ with

$$\tilde{\alpha}_i = \begin{cases} \arg\min_{c_j,\, 1 \leq j \leq p} |c_j - \alpha_i| & \alpha_i \leq \alpha_{i_{N_o(1 - r_o - \varepsilon_1)}} \\ \alpha_i & \alpha_i > \alpha_{i_{N_o(1 - r_o - \varepsilon_1)}} \end{cases}. \qquad (15)$$

Define $p_k = [p_k(\alpha_1), \ldots, p_k(\alpha_i), \ldots, p_k(\alpha_{N_o})]$ with $0 \leq k \leq p + N_o(r_o + \varepsilon_1)$ such that, for $0 \leq k < p$,

$$p_k = q_k, \qquad (16)$$

and, for $p \leq k \leq p + N_o(r_o + \varepsilon_1)$,

$$p_k(\alpha_i) = \begin{cases} 0 & \alpha_i \leq \alpha_{i_{k - p + N_o(1 - r_o - \varepsilon_1)}} \\ 1 & \alpha_i > \alpha_{i_{k - p + N_o(1 - r_o - \varepsilon_1)}} \end{cases}. \qquad (17)$$

We have

$$\tilde{\alpha} = \sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k\, p_k. \qquad (18)$$

Define a set of indices

$$\mathcal{U} = \{i_1, i_2, \ldots, i_{N_o(1 - r_o - \varepsilon_1)}\}. \qquad (19)$$

According to the definition of $\tilde{\alpha}_i$, for $i \notin \mathcal{U}$, $\tilde{\alpha}_i = \alpha_i$. Hence

$$\tilde{\alpha} \cdot x_m = \alpha \cdot x_m + \sum_{i \in \mathcal{U}} (\tilde{\alpha}_i - \alpha_i) s_i. \qquad (20)$$

Since $|\tilde{\alpha}_i - \alpha_i| \leq \varepsilon_2 / 2$ and $s_i = \pm 1$, we have

$$\sum_{i \in \mathcal{U}} (\tilde{\alpha}_i - \alpha_i) s_i \geq -N_o(1 - r_o - \varepsilon_1)\, \frac{\varepsilon_2}{2}. \qquad (21)$$

³ Note that the value of $p$ cannot be 0. If $p = 0$, i.e., $\alpha_{i_{N_o(1 - r_o - \varepsilon_1)}} = 0$, then there are at least $N_o(1 - r_o - \varepsilon_1)$ zeros in the vector $\alpha$. Consequently, $\alpha \cdot x_m \leq N_o(r_o + \varepsilon_1) < N_o\left[\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right]$, which contradicts the assumption that $\alpha \cdot x_m > N_o\left[\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right]$.

Consequently, $\alpha \cdot x_m > N_o\left[\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right]$ implies

$$\tilde{\alpha} \cdot x_m > N_o(r_o + \varepsilon_1). \qquad (22)$$

If $p_k \cdot x_m \leq N_o(r_o + \varepsilon_1)$ for all $p_k$'s, then

$$\tilde{\alpha} \cdot x_m = \sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k\, p_k \cdot x_m \leq N_o(r_o + \varepsilon_1) \sum_{k=0}^{p + N_o(r_o + \varepsilon_1)} \lambda_k = N_o(r_o + \varepsilon_1), \qquad (23)$$

which contradicts (22).
Therefore, there must be some $p_k$ that satisfies

$$p_k \cdot x_m > N_o(r_o + \varepsilon_1). \qquad (24)$$

Since, for $k \geq p$, $p_k$ has no more than $N_o(r_o + \varepsilon_1)$ ones, which implies $p_k \cdot x_m \leq N_o(r_o + \varepsilon_1)$, the vectors that satisfy (24) must exist among the $p_k$ with $0 \leq k < p$. In other words, for some $k$, $q_k \cdot x_m > N_o(r_o + \varepsilon_1)$. ∎

Theorems 1 and 2 indicate that, if $x_m$ is transmitted and $\alpha \cdot x_m > N_o\left[\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right]$, then for some $0 \leq k < 1/\varepsilon_2$, the errors-and-erasures decoding specified by $q_k$ (where symbols with $q_k(\alpha_i) = 0$ are erased) will output $x_m$. Since the total number of $q_k$ vectors is upper bounded by the constant $1/\varepsilon_2$, the outer code carries out errors-and-erasures decoding only a constant number of times. Consequently, a GMD decoder that carries out errors-and-erasures decoding for all the $q_k$'s and compares their decoding outputs can recover $x_m$ with a complexity of $O(N_o)$. Since the inner code length $N_i$ is fixed, the overall complexity is $O(N)$.

The following theorem gives an error probability bound for one-level concatenated codes with the revised GMD decoder.

Theorem 3: Assume the inner codes achieve Gallager's error exponent given in (2). Let the reliability vector $\alpha$ be generated according to Forney's algorithm presented in [4, Section 4.2], and let $x_m$ be the transmitted outer codeword. For large enough $N$, the error probability of the one-level concatenated code is upper bounded by

$$P_e \leq P\left\{\alpha \cdot x_m \leq N_o\left[\frac{\varepsilon_2}{2} + (r_o + \varepsilon_1)\left(1 - \frac{\varepsilon_2}{2}\right)\right]\right\} \leq \exp[-N(E_c(R) - \varepsilon)], \qquad (25)$$

where $E_c(R)$ is Forney's error exponent given by (4), and $\varepsilon$ is a function of $\varepsilon_1$ and $\varepsilon_2$ with $\varepsilon \to 0$ as $\varepsilon_1, \varepsilon_2 \to 0$.

The proof of Theorem 3 can be obtained by first replacing Theorem 3.2 in [4] with Theorem 2, and then following Forney's analysis presented in [4, Section 4.2].
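The constant-trial decoder described above can be sketched as follows. The pattern construction follows (9)-(10); `ee_decode` is a stand-in for the outer code's errors-and-erasures decoder (whose internals are not specified here), and the helper names and test values are illustrative.

```python
import numpy as np

def erasure_patterns(alpha, r_o, eps1, eps2):
    """Trial vectors q_k of (10): q_k[i] = 0 (erase symbol i) iff
    alpha[i] <= k * eps2 and i is among the N_o(1 - r_o - eps1) least
    reliable positions; exactly 1/eps2 patterns, independent of N_o."""
    N_o = len(alpha)
    erasable = set(np.argsort(alpha)[: int(N_o * (1 - r_o - eps1))])
    return [np.array([0 if (alpha[i] <= k * eps2 and i in erasable) else 1
                      for i in range(N_o)])
            for k in range(int(round(1 / eps2)))]

def revised_gmd_decode(alpha, x_hat, ee_decode, r_o, eps1, eps2):
    """Run errors-and-erasures decoding once per q_k and return the candidate
    whose dot product (7) with alpha exceeds N_o (r_o + eps1); by Theorem 1,
    at most one codeword can pass this test."""
    N_o = len(alpha)
    for q_k in erasure_patterns(alpha, r_o, eps1, eps2):
        cand = ee_decode(x_hat, q_k)           # positions with q_k[i] == 0 erased
        if cand is None:                       # this trial failed to decode
            continue
        dot = sum(a * (1 if h == c else -1)    # dot product of (7)
                  for a, h, c in zip(alpha, x_hat, cand))
        if dot > N_o * (r_o + eps1):
            return cand
    return None                                # overall decoding failure
```

Since at most $1/\varepsilon_2$ outer decodings are attempted, each of cost $O(N_o)$, the loop runs in $O(N_o)$ time, which is the source of the overall $O(N)$ complexity.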
The difference between Forney's and the revised GMD decoding schemes lies in the definition of the errors-and-erasures decodable vectors $q_k$, the number of which determines the decoding complexity. Forney's GMD decoding needs to carry out errors-and-erasures decoding a number of times linear in $N_o$, whereas ours does so only a constant number of times. Although the idea behind the revised GMD decoding is similar to Justesen's GMD algorithm [7], Justesen's work focused on error-correction codes where the inner codes forward Hamming distance information (in the form of an $\alpha$ vector) to the outer code.

Applying the revised GMD algorithm to multi-level concatenated codes [5][6] is quite straightforward. The achievable error exponent of an $m$-level concatenated code is given in the following theorem.

Theorem 4: For a discrete-time memoryless channel with capacity $C$, for any $\varepsilon > 0$ and any integer $m > 0$, one can construct a sequence of $m$-level concatenated codes whose encoding/decoding complexity is linear in $N$ and whose error probability is bounded by

$$\lim_{N \to \infty} \frac{-\log P_e}{N} \geq E^{(m)}(R) - \varepsilon,$$

$$E^{(m)}(R) = \max_{p_X,\; r_o \in [R/C,\, 1]} \left(\frac{R}{r_o} - R\right) \left[\frac{R}{r_o m} \sum_{i=1}^{m} \left[E_L\!\left(\frac{i}{m}\frac{R}{r_o}, p_X\right)\right]^{-1}\right]^{-1}. \qquad (26)$$

The proof of Theorem 4 can be obtained by combining Theorem 3 and the derivation of $E^{(m)}(R)$ in [5][6]. Note that $\lim_{m \to \infty} E^{(m)}(R) = E^{(\infty)}(R)$, where $E^{(\infty)}(R)$ is the Blokh-Zyablov error exponent given in (5), since the bracketed sum in (26) is a Riemann sum for the integral in (5). Theorem 4 implies that, for discrete-time memoryless channels, the Blokh-Zyablov error exponent can be arbitrarily approached with linear encoding/decoding complexity.

III. Conclusions

We proposed a revised GMD decoding algorithm for concatenated codes over general discrete-time memoryless channels.
By combining the GMD algorithm with Guruswami and Indyk's error-correction codes, we showed that Forney's and Blokh-Zyablov error exponents can be arbitrarily approached by one-level and multi-level concatenated coding schemes, respectively, with linear encoding/decoding complexity.

Acknowledgment

The authors would like to thank Professor Alexander Barg for his help on multi-level concatenated codes.

References

[1] V. Guruswami and P. Indyk, "Linear-Time Encodable/Decodable Codes With Near-Optimal Rate," IEEE Trans. Inform. Theory, Vol. 51, No. 10, pp. 3393-3400, Oct. 2005.
[2] R. Fano, "Transmission of Information," The M.I.T. Press and John Wiley & Sons, Inc., New York, N.Y., 1961.
[3] R. Gallager, "A Simple Derivation of the Coding Theorem and Some Applications," IEEE Trans. Inform. Theory, Vol. 11, pp. 3-18, Jan. 1965.
[4] G. Forney, "Concatenated Codes," The MIT Press, 1966.
[5] E. Blokh and V. Zyablov, "Linear Concatenated Codes," Nauka, Moscow, 1982 (in Russian).
[6] A. Barg and G. Zémor, "Concatenated Codes: Serial and Parallel," IEEE Trans. Inform. Theory, Vol. 51, pp. 1625-1634, May 2005.
[7] J. Justesen, "A Class of Constructive Asymptotically Good Algebraic Codes," IEEE Trans. Inform. Theory, Vol. IT-18, pp. 652-656, Sep. 1972.
[8] V. Guruswami, "List Decoding of Error-Correcting Codes," Ph.D. dissertation, MIT, Cambridge, MA, 2001.