Minimum-Delay Decoding of Turbo-Codes for Upper-Layer FEC

In this paper we investigate the decoding of parallel turbo codes over the binary erasure channel suited for upper-layer error correction. The proposed algorithm performs on-the-fly decoding, i.e. it starts decoding as soon as the first symbols are received.

Authors: Ghassan M. Kraidy, Valentin Savin

Ghassan M. Kraidy and Valentin Savin
CEA-LETI, 17 Rue des Martyrs, 38054 Grenoble, France
{ghassan.kraidy, valentin.savin}@cea.fr

ABSTRACT

In this paper we investigate the decoding of parallel turbo codes over the binary erasure channel suited for upper-layer error correction. The proposed algorithm performs "on-the-fly" decoding, i.e. it starts decoding as soon as the first symbols are received. This algorithm compares with the iterative decoding of codes defined on graphs, in that it propagates in the trellises of the turbo code by removing transitions in the same way edges are removed in a bipartite graph under message-passing decoding. A performance comparison with LDPC codes for different coding rates is shown.

1. INTRODUCTION

The binary erasure channel (BEC) introduced by Elias [1] is one of the simplest channel models: a symbol is either erased with probability p, or exactly received with probability 1 − p. The capacity of such a channel with a uniform source is given by:

  C = 1 − p

Codes that achieve this capacity are called Maximum-Distance Separable (MDS) codes, and they can recover the K information symbols from any K of the N codeword symbols. An MDS code that is widely used over the BEC is the non-binary Reed-Solomon (RS) code, but its block length is limited by the Galois field cardinality, which dramatically increases the decoding complexity. For large block lengths, low-density parity-check (LDPC) codes [2][3][4][5] and repeat-accumulate (RA) codes [6] with message-passing decoding have proved to perform very close to the channel capacity with reasonable complexity. Moreover, "rateless" codes [7][8], capable of generating an infinite sequence of parity symbols, were proposed for the BEC. Their main strength is their high performance together with linear-time encoding and decoding.
However, convolutional-based codes, which are widely used for Gaussian channels, are less investigated for the BEC. Among the few papers that treat convolutional and turbo codes [9] in this context are [10][11][12][13][14].

In practical systems, data packets received at the upper layers encounter erasures. In the Internet, for instance, it is frequent to have datagrams discarded by the physical-layer cyclic redundancy check (CRC) or forward error correction (FEC), or even by the transport-level user datagram protocol (UDP) checksums. Another example is transmission links that exhibit deep fading of the signal (fades of 10 dB or more) for short periods. This is the case of the satellite channel, where weather conditions (especially rain) severely degrade the channel quality, or of mobile transmissions affected by terrain. In such situations, the physical-layer FEC fails, and we can either ask for retransmission (only if a return channel exists, and penalizing in broadcast/multicast scenarios) or use upper-layer (UL) FEC.

In this paper, we propose a minimum-delay decoding algorithm for turbo codes suited for UL-FEC, in the sense that the decoding starts upon reception of the first symbols, where a symbol can be a bit or a packet. The paper is organized as follows: Section 2 gives the system model and a brief recall of the existing decoding algorithms. Section 3 explains the minimum-delay decoding algorithm. Simulation results and comparisons with LDPC codes are shown in Section 4, and Section 5 gives the concluding remarks.

2. SYSTEM MODEL AND NOTATIONS

We consider the transmission of a parallel turbo code [9] with rate R_c = K/N over the BEC. An information bit sequence of length K is fed to a recursive systematic convolutional (RSC) code with rate ρ = k/n to generate a first parity bit sequence.
The same information sequence is scrambled via an interleaver Π to generate a second parity sequence. With half-rate RSC constituents, the resulting turbo code has rate 1/3. In order to raise the rate of the turbo code, parity bits are punctured. In this paper, we consider rate-1/3, punctured rate-1/2, and punctured rate-2/3 turbo codes.

The decoding of turbo codes is performed iteratively using probabilities on information bits, which requires the reception of the entire codeword before the decoding process starts. For instance, the soft-input soft-output (SISO) "Forward-Backward" (FB) algorithm [15], optimal in terms of a posteriori probability (APP) on symbols, consists of one forward recursion and one backward recursion over the trellises of the two constituent codes. As turbo codes are classically used over Gaussian channels, a SISO algorithm (the FB or another sub-optimal decoding algorithm) is required to attain low error rates. Exchanging hard information between the constituent codes using an algorithm such as the well-known Viterbi Algorithm (VA) [16] (which is a Maximum-Likelihood Sequence Estimator (MLSE) for convolutional codes) is harshly penalizing. However, in the case of the BEC, a SISO decoding algorithm is not necessary. In fact, it has been shown in [12] that the VA is optimal in terms of symbol (or bit) probability on the BEC, which means that one can achieve optimal decoding of turbo codes on the BEC without using soft information. In other words, if a bit is known to (or correctly decoded by) one trellis, its value cannot be modified by the other trellis. Motivated by this key property, we propose a decoding algorithm for turbo codes based on hard information exchange.

3. ON-THE-FLY DECODING OF TURBO CODES

The turbo code has two trellises with K steps each, and one codeword represents a path in the trellises.
In order to minimize the decoding delay, we propose an algorithm that starts decoding directly after the reception of the first bits of the transmitted codeword. First, at every step of the trellises, if one of the n bits of the binary labeling is received (i.e. is known), we remove the transitions that do not match this bit. If, at some step in the trellis, all the transitions leaving a state e_i on the left are removed, we then know that no transition may arrive at this state at the previous step. Consequently, all the incoming transitions to state e_i from the left are removed. Similarly, if, at some step, there are no transitions arriving at a state e_j on the right, this means that we cannot leave state e_j at the following step, and all the transitions outgoing from state e_j are removed. This way the information propagates in the trellis, and some bits can be determined without being received. This algorithm is inspired by the message-passing decoding of LDPC codes over the BEC, where edges connected to a variable node are removed once this variable is received.

Now, at some stage of the decoding process, if an information bit is determined in one trellis without being received, we set its interleaved (or de-interleaved) counterpart as known, and the same propagation is triggered in the other trellis. The information exchange between the two trellises continues until propagation stops in both trellises. This way we can recover all the transmitted information bits without receiving the whole transmitted codeword.

In the sequel, for the sake of clarity, we will only consider parallel turbo codes built from the concatenation of two RSC codes with generator polynomials (7, 5) in octal (the polynomial (7)_8 being the feedback polynomial), constraint length L = 3, and coding rate ρ = k/n = 1/2, a code that has a simple trellis structure with four states.
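This constituent code admits a very small implementation. The sketch below is our own illustration (the names rsc75_step and rsc75_encode are not from the paper), assuming the usual shift-register realization with feedback polynomial (7)_8 = 1 + D + D^2 and forward polynomial (5)_8 = 1 + D^2:

```python
def rsc75_step(state, u):
    """One step of the four-state RSC (7,5)_8 encoder.

    state packs the two memory bits (s1, s2) as s1*2 + s2;
    returns (next_state, parity). The systematic output is u itself.
    """
    s1, s2 = (state >> 1) & 1, state & 1
    a = u ^ s1 ^ s2      # feedback bit, polynomial (7)_8 = 1 + D + D^2
    parity = a ^ s2      # forward polynomial (5)_8 = 1 + D^2
    return (a << 1) | s1, parity

def rsc75_encode(bits):
    """Encode a bit list; returns (parity sequence, final state).

    The systematic bits are the input itself; L - 1 = 2 termination
    steps drive the encoder back to the zero state.
    """
    state, parities = 0, []
    for u in bits:
        state, p = rsc75_step(state, u)
        parities.append(p)
    for _ in range(2):
        s1, s2 = (state >> 1) & 1, state & 1
        u = s1 ^ s2                  # input that forces the feedback bit to zero
        state, p = rsc75_step(state, u)
        parities.append(p)
    return parities, state           # state is 0 after termination
```

For instance, starting from the all-zero state, input u = 1 produces parity 1 and moves the encoder to state 10 = e_3, matching the transition e_1 → e_3 labeled 11 in Fig. 1.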
The algorithm can be applied to any parallel turbo code built from other RSC constituents. The transitions of the RSC (7, 5)_8 code between two trellis steps are shown in Fig. 1. As the code is systematic, the bit b_1 represents the information bit, and the bit b_2 the parity bit.

Fig. 1. Transitions of the half-rate four-state RSC (7, 5)_8 code (states e_1 = 00, e_2 = 01, e_3 = 10, e_4 = 11; each transition is labeled with its output bits b_1 b_2).

There are 2^k = 2 transitions leaving and 2 transitions arriving at each state. The transitions between two steps of the trellis can be represented by a 2^(L−1) × 2^(L−1) matrix (a 4 × 4 matrix in this case). For the (7, 5)_8 code, for instance, the transition table is given by:

         e_1   e_2   e_3   e_4
  e_1    00    X     11    X
  e_2    11    X     00    X
  e_3    X     10    X     01
  e_4    X     01    X     10

where an X means that the transition does not exist. For the needs of the proposed algorithm, we will use the transition table of the code to build binary transition matrices T_xx, T_{b1 x}, and T_{x b2} with b_1, b_2 ∈ {0, 1} that contain the allowed transitions depending on the known bits. These matrices will be stored at the decoder and used as look-up tables throughout the decoding process. For instance, if the two bits of the transition are unknown, we define the matrix:

  T_xx = [ 1 0 1 0 ]
         [ 1 0 1 0 ]
         [ 0 1 0 1 ]
         [ 0 1 0 1 ]

where a one in position (i, j) means that there is a transition between state e_i and state e_j, and a zero means that no transition exists.
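These look-up tables need not be enumerated by hand: they can be generated directly from the shift-register equations of the constituent code. A minimal sketch (function names are ours, not the paper's):

```python
def rsc75_step(state, u):
    # one trellis step of the RSC (7,5)_8 code; state packs (s1, s2) as s1*2 + s2
    s1, s2 = (state >> 1) & 1, state & 1
    a = u ^ s1 ^ s2                  # feedback (7)_8
    return (a << 1) | s1, a ^ s2     # next state, parity bit b2 (b1 is u)

def transition_matrix(b1=None, b2=None):
    """4x4 binary matrix of the transitions compatible with the known bits.

    b1/b2 set to None mean 'unknown': T_xx is transition_matrix(None, None),
    T_0x is transition_matrix(0, None), T_x0 is transition_matrix(None, 0), etc.
    """
    T = [[0] * 4 for _ in range(4)]
    for s in range(4):
        for u in (0, 1):
            ns, p = rsc75_step(s, u)
            if (b1 is None or u == b1) and (b2 is None or p == b2):
                T[s][ns] = 1
    return T
```

Here transition_matrix(None, None) reproduces T_xx above; the remaining matrices follow by fixing the corresponding argument.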
However, if b_1 = 0 and b_2 is unknown, or if b_1 is unknown and b_2 = 0, we define the following matrices corresponding to the allowed transitions:

  T_0x = [ 1 0 0 0 ]      T_x0 = [ 1 0 0 0 ]
         [ 0 0 1 0 ]             [ 0 0 1 0 ]
         [ 0 0 0 1 ]             [ 0 1 0 0 ]
         [ 0 1 0 0 ]             [ 0 0 0 1 ]

We build the other matrices similarly. Note that there are a total of 3^n matrices, each of size 2^(L−1) × 2^(L−1).

On-the-fly decoding algorithm

1) Initialization step. We consider matrices M_1(i) and M_2(j) corresponding to transitions at steps i and j of the two trellises of the constituent codes. These matrices are initialized as follows:

  M_{1,2}(0) = [ 1 0 1 0 ]     M_{1,2}(1) = [ 1 0 1 0 ]
               [ 0 0 0 0 ]                  [ 0 0 0 0 ]
               [ 0 0 0 0 ]                  [ 0 1 0 1 ]
               [ 0 0 0 0 ]                  [ 0 0 0 0 ]

  M_{1,2}(K) = [ 1 0 0 0 ]     M_{1,2}(K+1) = [ 1 0 0 0 ]
               [ 1 0 0 0 ]                    [ 1 0 0 0 ]
               [ 0 1 0 0 ]                    [ 0 0 0 0 ]
               [ 0 1 0 0 ]                    [ 0 0 0 0 ]

  M_1(t) = M_2(t) = T_xx,  t = 2, ..., K − 1

The matrices at steps 0 and 1 (namely M_1(0), M_2(0), M_1(1), and M_2(1)) represent the fact that any codeword starts in the zero state. The matrices at steps K and K + 1 represent the two steps required for trellis termination (i.e. ending in the zero state).

2) Reception step. Each time a bit r ∈ {0, 1} is received:

• If r is an information bit, it is placed in the appropriate positions in both trellises as r_{2i} = r_{2j} = r, where j = Π(i). We then compute:

  M_1(i) = M_1(i) ∧ T_rx  and  M_2(j) = M_2(j) ∧ T_rx

where the ∧ operator is a logical AND between corresponding entries of the two matrices. In other words, we only keep the transitions in M_1(i) with b_1 = r.

• If r is a parity bit, we set r_{2i+1} = r if r belongs to the first trellis, or r_{2j+1} = r if r belongs to the second one. We then compute:

  M_1(i) = M_1(i) ∧ T_xr  or  M_2(j) = M_2(j) ∧ T_xr

3) Propagation step.
If either M_1(i) or M_2(j) has at least one all-zero column or one all-zero row, the algorithm is able to propagate in either direction in either trellis using the following rules:

• Let d ∈ {1, 2} represent the trellis indices, and initialize a counter t ∈ {i, j} representing the step index through each trellis.

• Left propagation: an all-zero row with index u in M_d(t) generates an all-zero column with index u in M_d(t − 1).

• Right propagation: an all-zero column with index v in M_d(t) generates an all-zero row with index v in M_d(t + 1).

If we get new all-zero columns or new all-zero rows at steps t ± 1, we set t ← t ± 1 and continue the propagation (Step 3).

4) Duplication step. If during the propagation we get some M_d(t) ⊆ T_bx (i.e. the value of the information bit of the t-th transition in the d-th trellis is equal to b), we proceed as follows:

• If M_1(t) ⊆ T_bx, we compute:

  M_2(Π(t)) = M_2(Π(t)) ∧ T_bx

and then we propagate from Π(t) in the second trellis (Step 3).

• If M_2(t) ⊆ T_bx, we compute:

  M_1(Π^(−1)(t)) = M_1(Π^(−1)(t)) ∧ T_bx

and then we propagate from Π^(−1)(t) in the first trellis (Step 3).

5) New reception step. If the propagation in both trellises stops, we go back to Step 2.

6) Decoding stop. The decoding is successful if M_1(i) ⊆ T_bx for all i ∈ {0, ..., K − 1}. We then define the inefficiency ratio μ as follows:

  μ = r_stop / K

where r_stop ≥ K is the number of bits received at the moment when the decoding stops.

An illustration of the proposed algorithm is shown in Fig. 2. First, at the reception of an information bit b_1 = 0, we remove the transitions in the corresponding step of the trellis where b_1 = 1. Note that this step is done at interleaved positions in both trellises at the reception of an information bit.
At this stage, no propagation in the trellis is possible, as all the states are still connected. Next we receive a parity bit b_2 = 1; the remaining transitions corresponding to b_2 = 0 are removed. At that point, we notice that states e_1 and e_2 on the left are not connected. This means that the transitions arriving from the left at these states are not allowed anymore, thus they are removed. Similarly, we remove the transitions leaving the states e_1 and e_3 on the right.

In fact, the average decoding inefficiency μ_av of the code relates to its erasure recovery capacity as follows: suppose that, on average, the proposed decoding algorithm requires K' (K' ≥ K) symbols to be able to recover the K information symbols. We can write the following:

  μ_av = K'/K = (1 − p_th) N / K = (1 − p_th) / R_c

where the threshold probability p_th corresponds to the average fraction of erasures the decoder can recover. We can then write p_th as:

  p_th = 1 − μ_av R_c
Fig. 2. On-the-fly decoding; removed edges are dashed: (a) trellis after the reception of the source bit b_1 = 0, (b) trellis after the reception of the parity bit b_2 = 1, (c) trellis after left and right propagation.

For instance, if a code with R_c = 1/3 has μ_av = 1.076, it has p_th = 0.641. As a code with this coding rate is, theoretically, capable of correcting a probability of erasure of p = 2/3, the gap to capacity is:

  Δp = p − p_th ≃ 0.025

With codes such as LDPC or turbo codes, it is possible to achieve near-capacity performance with iterative decoding, with μ_av ≃ 1. Ideally, an MDS code (that achieves capacity) has μ_av = 1, i.e. it is capable of recovering the K information symbols from any K received symbols out of the N codeword symbols.
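The matrix operations of Steps 1-3 and 6 can be sketched compactly. The fragment below is our own minimal illustration (all names are ours): for brevity it prunes a single terminated constituent trellis to a global fixed point instead of following the paper's directed left/right walk, and it omits the duplication step that couples the two trellises through Π. It recovers an erased information bit from the received parity bits alone:

```python
def rsc75_step(state, u):
    # one step of the RSC (7,5)_8 encoder; state packs (s1, s2) as s1*2 + s2
    s1, s2 = (state >> 1) & 1, state & 1
    a = u ^ s1 ^ s2                  # feedback polynomial (7)_8
    return (a << 1) | s1, a ^ s2     # next state, parity bit b2

def T(b1=None, b2=None):
    # 4x4 transition matrix keeping only transitions compatible with known bits
    M = [[0] * 4 for _ in range(4)]
    for s in range(4):
        for u in (0, 1):
            ns, p = rsc75_step(s, u)
            if (b1 is None or u == b1) and (b2 is None or p == b2):
                M[s][ns] = 1
    return M

def propagate(M):
    # fixed-point pruning: a state with no incoming transitions loses its
    # outgoing ones (right propagation) and vice versa (left propagation);
    # the path starts and ends in state 0 (trellis termination)
    n, changed = len(M), True
    while changed:
        changed = False
        for t in range(n):
            for s in range(4):
                into_s = any(M[t-1][x][s] for x in range(4)) if t > 0 else s == 0
                if not into_s and any(M[t][s]):
                    M[t][s] = [0, 0, 0, 0]
                    changed = True
                out_of_s = any(M[t+1][s]) if t < n - 1 else s == 0
                if not out_of_s and any(M[t][x][s] for x in range(4)):
                    for x in range(4):
                        M[t][x][s] = 0
                    changed = True

# encode K = 6 information bits, plus L - 1 = 2 termination steps
info = [1, 0, 1, 1, 0, 0]
state, steps, bits = 0, [], list(info)
for t in range(8):
    if t >= 6:
        s1, s2 = (state >> 1) & 1, state & 1
        bits.append(s1 ^ s2)         # termination input drives the state to zero
    state, p = rsc75_step(state, bits[t])
    steps.append((bits[t], p))

# reception: all parity bits arrive, but the information bit of step 2 is erased
M = [T(b1 if t != 2 else None, b2) for t, (b1, b2) in enumerate(steps)]
propagate(M)

# decoding stop (Step 6): the erased bit is determined once M[2] is a subset
# of T(b, None) for exactly one value b
recovered = []
for b in (0, 1):
    Tb = T(b)
    if all(M[2][i][j] <= Tb[i][j] for i in range(4) for j in range(4)):
        recovered.append(b)
print(recovered)   # [1] — the single surviving hypothesis equals info[2]
```

In the full algorithm, each newly determined information bit would additionally be copied to its interleaved position in the other trellis before propagation resumes.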
Finally, it is important to note that the algorithm proposed in this section is linear in the interleaver size K. In fact, an RSC code with 2^(L−1) states and 2^k transitions leaving each state has 2^(L−1) × 2^k = 2^(k+L−1) transitions between two trellis steps. This means that the turbo code has a total of approximately 2 × K × 2^(k+L−1) transitions. Even if the decoding is exponential in k and L, it is linear in K. As we can obtain very powerful turbo codes with relatively small k and L, we can say that a turbo code with the proposed algorithm has linear-time encoding and decoding, and thus it is suited for applications where low-complexity "on-the-fly" encoding/decoding is required (as with the "Raptor codes" [8], for instance).

4. SIMULATION RESULTS

In this section, the performance of the proposed algorithm with parallel turbo codes is shown. The coding rate of the turbo code using half-rate constituent codes is R_c = 1/3. However, we also consider turbo codes with R_c = 1/2 and R_c = 2/3 obtained by puncturing the R_c = 1/3 turbo code. We use two types of interleavers: 1) pseudo-random (PR) interleavers (not optimized) and 2) quasi-cyclic (QC) bi-dimensional interleavers [17], which are the best known interleavers in the literature: in fact, it was shown in [18] that the minimum distance d_min of a turbo code is upper-bounded by a quantity that grows logarithmically with the interleaver size K, and the QC interleavers always achieve this bound.

The comparison is made with regular and irregular staircase LDPC codes. An LDPC code is said to be staircase if the right-hand side of its parity-check matrix consists of a double diagonal. The advantage of a staircase LDPC code is that the encoding can be performed in linear time using the parity-check matrix; therefore, there is no need for the generator matrix, which generally is not low-density.
A staircase LDPC code is said to be regular if the left-hand side of the parity-check matrix is regular, i.e. the number of 1's per column is constant. Otherwise, it is said to be irregular. In this section, we consider regular staircase LDPC codes with four 1's per left-hand-side column. Irregular staircase LDPC codes are optimized for the BEC channel by density evolution.

In Fig. 3, we compare the performance of turbo codes and LDPC codes for R_c = 1/3. Turbo codes with RSC (7,5)_8 constituents and PR interleaving achieve an average inefficiency μ_av of about 1.09, which means that they require K' = 1.09 K received bits (or 9% overhead) to be able to recover the K information bits. However, using a QC interleaver, the overhead with the same turbo code is of about 7.6%, which is very

Fig. 3. Average inefficiency (μ_av) with respect to the interleaver size K over the BEC: turbo codes (RSC (7,5)_8 with PR and QC interleaving; RSC (13,15)_8 and (17,15)_8 with QC interleaving) versus regular (d_v = 4) and irregular staircase LDPC codes, R_c = 1/3.

Fig. 4. Average inefficiency (μ_av) with respect to the interleaver size K over the BEC: turbo code with half-rate (7,5)_8 RSC constituents (PR and QC interleaving) versus regular (d_v = 4) and irregular staircase LDPC codes, punctured to R_c = 1/2.

Fig. 5.
Average inefficiency (μ_av) with respect to the interleaver size K over the BEC: turbo code with half-rate (7,5)_8 RSC constituents versus LDPC codes, punctured to R_c = 2/3.

close to the irregular staircase LDPC code, while the regular staircase LDPC is far above regular turbo codes (16% overhead). In addition, it is important to note that using turbo codes with RSC constituents with L = 4 (trellis with eight states), namely the RSC (13, 15)_8 and the (17, 15)_8 codes, increases the decoding complexity without improving the performance. Punctured half-rate turbo codes are compared with half-rate LDPC codes in Fig. 4. Again, turbo codes with QC interleavers largely outperform regular LDPC codes (6% to 11% overhead), and thus perform closer to irregular LDPC codes (5% overhead). However, puncturing the turbo code even more, to raise it to R_c = 2/3, widens the gap with irregular LDPC codes, placing the performance curve with QC interleaving (5.2% overhead) at equal distance from regular LDPC codes (7.5%) and irregular LDPC codes (3%).

5. CONCLUSION

In this paper we proposed a novel decoding algorithm for turbo codes over the BEC. This algorithm, characterized by "on-the-fly" propagation in the trellises and hard information exchange between the two codes, is appropriate for UL-FEC. Performance results with very small overhead were shown for different interleaver sizes and coding rates. Although the turbo codes presented in this paper were not optimized for the BEC, the results are very promising. Further improvements can be achieved by optimizing turbo codes for this channel.

6. REFERENCES

[1] P. Elias, "Coding for noisy channels," IRE Conv. Rec., vol. 4, pp. 37-47, 1955.
[2] C. Di, D. Proietti, E. Telatar, T. Richardson, and R. Urbanke, "Finite-length analysis of low-density parity-check codes on the binary erasure channel," IEEE Trans. Inf. Theory, vol. 48, no. 6, pp.
1570-1579, 2002.
[3] T. Richardson, A. Shokrollahi, and R. Urbanke, "Finite-length analysis of various low-density parity-check ensembles for the binary erasure channel," IEEE Int. Symp. Inf. Theory, 2002.
[4] T. Richardson, A. Shokrollahi, and R. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inf. Theory, vol. 47, pp. 619-637, 2001.
[5] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Efficient erasure correcting codes," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 569-584, 2001.
[6] H. D. Pfister, I. Sason, and R. Urbanke, "Capacity-achieving ensembles for the binary erasure channel with bounded complexity," IEEE Trans. Inf. Theory, vol. 51, no. 7, pp. 2352-2379, 2005.
[7] M. Luby, "LT codes," Proc. ACM Symp. Found. Comp. Sci., pp. 271-280, 2002.
[8] A. Shokrollahi, "Raptor codes," IEEE/ACM Trans. Networking, vol. 14, pp. 2551-2567, 2006.
[9] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: turbo-codes," IEEE Trans. Comm., vol. 44, pp. 1261-1271, 1996.
[10] K. E. Tepe and J. B. Anderson, "Turbo codes for binary symmetric and binary erasure channels," IEEE Int. Symp. Inf. Theory, 1998.
[11] B. M. Kurkoski, P. H. Siegel, and J. K. Wolf, "Exact probability of erasure and a decoding algorithm for convolutional codes on the binary erasure channel," IEEE GLOBECOM, 2003.
[12] B. M. Kurkoski, P. H. Siegel, and J. K. Wolf, "Analysis of convolutional codes on the erasure channel," IEEE Int. Symp. Inf. Theory, 2004.
[13] E. Rosnes and O. Ytrehus, "Turbo decoding on the binary erasure channel: Finite-length analysis and turbo stopping sets," IEEE Trans. Inf. Theory, vol. 53, pp. 4059-4075, 2007.
[14] J. W. Lee, R. Urbanke, and R. E. Blahut, "On the performance of turbo codes over the binary erasure channel," IEEE Comm. Lett., vol. 11, pp. 67-69, 2007.
[15] L.
Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, vol. 20, no. 2, pp. 284-287, March 1974.
[16] A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inf. Theory, vol. 13, pp. 260-269, 1967.
[17] J. J. Boutros and G. Zemor, "On quasi-cyclic interleavers for parallel turbo codes," IEEE Trans. Inf. Theory, vol. 52, pp. 1732-1739, 2006.
[18] M. Breiling, "A logarithmic upper bound on the minimum distance of turbo codes," IEEE Trans. Inf. Theory, vol. 50, pp. 1692-1710, 2004.
