On "A Novel Maximum Likelihood Decoding Algorithm for Orthogonal Space-Time Block Codes"
The computational complexity of the Maximum Likelihood decoding algorithm in [1], [2] for orthogonal space-time block codes is smaller than specified.
Authors: Ender Ayanoglu
Center for Pervasive Communications and Computing
Department of Electrical Engineering and Computer Science
University of California, Irvine

I. INTRODUCTION

In [1], [2], the decoding of an Orthogonal Space-Time Block Code (OSTBC) in a Multi-Input Multi-Output (MIMO) system with $N$ transmit and $M$ receive antennas, and an interval of $T$ symbols during which the channel is constant, is considered. The received signal is given by

$$ Y = G_N H + V \qquad (1) $$

where $Y = [y_t^j]_{T \times M}$ is the received signal matrix of size $T \times M$ whose entry $y_t^j$ is the signal received at antenna $j$ at time $t$, $t = 1, 2, \ldots, T$, $j = 1, 2, \ldots, M$; $V = [v_t^j]_{T \times M}$ is the noise matrix; and $G_N = [g_t^i]_{T \times N}$ is the transmitted signal matrix whose entry $g_t^i$ is the signal transmitted at antenna $i$ at time $t$, $i = 1, 2, \ldots, N$. The matrix $H = [h_{i,j}]_{N \times M}$ is the channel coefficient matrix of size $N \times M$ whose entry $h_{i,j}$ is the channel coefficient from transmit antenna $i$ to receive antenna $j$. The entries of the matrices $H$ and $V$ are independent, zero-mean, and circularly symmetric complex Gaussian random variables. The real-valued representation of (1) is obtained by first arranging the matrices $Y$, $H$, and $V$, each in one column vector by stacking their columns one after the other as

$$ \begin{bmatrix} y_1^1 \\ \vdots \\ y_T^M \end{bmatrix} = \check{G}_N \begin{bmatrix} h_{1,1} \\ \vdots \\ h_{N,M} \end{bmatrix} + \begin{bmatrix} v_1^1 \\ \vdots \\ v_T^M \end{bmatrix} \qquad (2) $$

where $\check{G}_N = I_M \otimes G_N$, with $I_M$ denoting the identity matrix of size $M$ and $\otimes$ denoting the Kronecker product, and then decomposing the $MT$-dimensional complex problem defined by (2) into a $2MT$-dimensional real-valued problem by applying the real-valued lattice representation defined in [3] to obtain

$$ \check{y} = \check{H} x + \check{v} \qquad (3) $$

or, equivalently,

$$ \begin{bmatrix} \mathrm{Re}(y_1^1) \\ \mathrm{Im}(y_1^1) \\ \vdots \\ \mathrm{Re}(y_T^M) \\ \mathrm{Im}(y_T^M) \end{bmatrix} = \check{H} \begin{bmatrix} \mathrm{Re}(s_1) \\ \mathrm{Im}(s_1) \\ \vdots \\ \mathrm{Re}(s_K) \\ \mathrm{Im}(s_K) \end{bmatrix} + \begin{bmatrix} \mathrm{Re}(v_1^1) \\ \mathrm{Im}(v_1^1) \\ \vdots \\ \mathrm{Re}(v_T^M) \\ \mathrm{Im}(v_T^M) \end{bmatrix}. \qquad (4) $$

The real-valued fading coefficients of $\check{H}$ are defined using the complex fading coefficients $h_{l,j}$ from transmit antenna $l$ to receive antenna $j$ as $h_{2l-1+2(j-1)N} = \mathrm{Re}(h_{l,j})$ and $h_{2l+2(j-1)N} = \mathrm{Im}(h_{l,j})$ for $l = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, M$. Since $G_N$ is an orthogonal matrix and due to the real-valued representation of the system using (4), it can be observed that

• All columns of $\check{H} = [\check{h}_1\ \check{h}_2\ \cdots\ \check{h}_{2K}]$, where $\check{h}_i$ is the $i$th column of $\check{H}$, are orthogonal to each other, or equivalently

$$ \check{h}_i^T \check{h}_j = 0, \quad i, j = 1, 2, \ldots, 2K,\ i \neq j. \qquad (5) $$

• The inner product of every column of $\check{H}$ with itself is equal to a constant, i.e.,

$$ \check{h}_i^T \check{h}_i = \check{h}_j^T \check{h}_j, \quad i, j = 1, 2, \ldots, 2K. \qquad (6) $$

II. DECODING

Let

$$ \sigma = \check{h}_1^T \check{h}_1. \qquad (7) $$

Note $\sigma = \check{h}_i^T \check{h}_i$, $i = 1, 2, \ldots, 2K$. Due to the orthogonality properties (5)-(6), we have

$$ \check{H}^T \check{H} = \sigma I_{2K}. \qquad (8) $$

Let's represent (4) as

$$ \check{y} = \check{H} x + \check{v}. \qquad (9) $$

By multiplying this expression by $\check{H}^T$ on the left, we have

$$ \bar{y} = \check{H}^T \check{y} \qquad (10) $$
$$ \phantom{\bar{y}} = \sigma x + \bar{v} \qquad (11) $$

where $\bar{v}$ is zero-mean and, due to (8), has independent and identically distributed Gaussian members. The Maximum Likelihood solution is found by minimizing

$$ \| \bar{y} - \sigma x \|_2^2 \qquad (12) $$

over all combinations of $x \in \Omega^{2K}$. This can be further simplified as

$$ \hat{x}_i = \arg\min_{x_i \in \Omega} |\bar{y}_i - \sigma x_i|^2 \qquad (13) $$

for $i = 1, 2, \ldots, 2K$. Then, the decoded message is

$$ \hat{x} = (\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_{2K})^T. \qquad (14) $$

III. COMPUTATIONAL COMPLEXITY

The decoding operation consists of the multiplication $\check{H}^T \check{y}$, calculation of $\sigma = \check{h}_1^T \check{h}_1$, the multiplications $\sigma x_i$, and performing (13). With a slight change, we will consider the calculation of $\sigma^{-1}$ and the multiplications

$$ z_i = \sigma^{-1} \bar{y}_i, \quad i = 1, 2, \ldots, 2K. \qquad (15) $$

Then

$$ \hat{x}_i = \arg\min_{x_i \in \Omega} |z_i - x_i|^2 \qquad (16) $$

for $i = 1, 2, \ldots, 2K$, which is a standard quantization operation in conventional Quadrature Amplitude Modulation. We will compute the decoding complexity up to this quantization operation. Note that $\check{H}$ is a $2MT \times 2K$ matrix, $\check{h}_1$ is a $2MT$-dimensional vector, and we will assume the complexity of a real division to be equivalent to 4 real multiplications, as in [1], [2]. The multiplication $\check{H}^T \check{y}$ takes $2MT \cdot 2K$, calculation of $\sigma$ takes $2MT$, its inverse takes 4, and $\sigma^{-1}\bar{y}$ takes $2K$ real multiplications. Similarly, the multiplication $\check{H}^T \check{y}$ takes $2K \cdot (2MT - 1)$, and calculation of $\sigma$ takes $2MT - 1$, real additions. Letting $R_M$ and $R_A$ denote the number of real multiplications and real additions, the complexity of decoding the transmitted complex signal $(s_1, s_2, \ldots, s_K)$ with the technique described in (7), (10), and (15) is

$$ C_{PR} = (4KMT + 2MT + 2K + 4)\, R_M,\ (4KMT + 2MT - 2K - 1)\, R_A, \qquad (17) $$

which is smaller than the complexity specified in [1], [2] and does not depend on the constellation size $L$. However, as will be seen in the examples, the matrix $\check{H}$ can include values identical to 0, or multiplications by a scalar, which result in deviations from (17). Also, in (56), we will provide a slightly smaller figure for this complexity. In what follows, we will calculate the exact complexity values for four examples.
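As a concrete illustration of the steps (10), (15), and (16), the decoder reduces to a matched filter, a scaling by $\sigma^{-1}$, and a per-coordinate nearest-point search. The following pure-Python sketch is ours, not from [1], [2]; the function name and the real PAM alphabet passed as `omega` are illustrative assumptions:

```python
def ostbc_decode(H_check, y_check, omega):
    """ML decoding of y_check = H_check x + v_check when the columns of
    H_check are mutually orthogonal with a common squared norm sigma,
    i.e. properties (5)-(6) hold."""
    rows, cols = len(H_check), len(H_check[0])
    # sigma = h_1^T h_1; by (6) every column gives the same value, eq. (7)
    sigma = sum(H_check[t][0] ** 2 for t in range(rows))
    # matched filter y_bar = H_check^T y_check, eq. (10)
    y_bar = [sum(H_check[t][i] * y_check[t] for t in range(rows))
             for i in range(cols)]
    # scale by 1/sigma, eq. (15), then quantize each coordinate to the
    # nearest point of the real constellation Omega, eq. (16)
    z = [yb / sigma for yb in y_bar]
    return [min(omega, key=lambda s: (zi - s) ** 2) for zi in z]
```

With a noiseless observation $\check{y} = \check{H}x$, the matched filter yields $\bar{y} = \sigma x$, so the quantizer recovers $x$ exactly; the noisy case degrades gracefully because, by (8), the effective noise $\bar{v}$ has i.i.d. components.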
Due to the orthogonality property (8) of $\check{H}$, the QR decomposition of $\check{H}$ is

$$ Q = \frac{1}{\sqrt{\sigma}} \check{H}, \quad R = \sqrt{\sigma}\, I_{2K} \qquad (18) $$

and therefore does not need to be computed explicitly. The procedure described above is equivalent and has lower computational complexity.

IV. COMPARISON WITH A CONVENTIONAL TECHNIQUE

We will now compare the technique in (7), (10), and (15) with one from the literature. In [4], it has been shown that

$$ \| Y - G_N H \|^2 = \|H\|^2 \sum_{k=1}^{K} |s_k - \hat{s}_k|^2 + \text{constant}, \qquad (19) $$

where

$$ \hat{s}_k = \frac{1}{\|H\|^2} \left[ \mathrm{Re}\{ \mathrm{Tr}(H^H A_k^H Y) \} - \hat{\imath} \cdot \mathrm{Im}\{ \mathrm{Tr}(H^H B_k^H Y) \} \right] \qquad (20) $$

and where $A_k$ and $B_k$ are the matrices in the linear representation of $G_N$ in terms of $\bar{s}_k = \mathrm{Re}[s_k]$ and $\tilde{s}_k = \mathrm{Im}[s_k]$ for $k = 1, 2, \ldots, K$ as [4]

$$ G_N = \sum_{k=1}^{K} \bar{s}_k A_k + \hat{\imath}\, \tilde{s}_k B_k = \sum_{k=1}^{K} s_k \check{A}_k + s_k^* \check{B}_k, \qquad (21) $$

$\hat{\imath} = \sqrt{-1}$, $A_k = \check{A}_k + \check{B}_k$, and $B_k = \check{A}_k - \check{B}_k$. Once $\{\hat{s}_k\}_{k=1}^{K}$ are calculated, the decoding problem can be solved by

$$ \min_{s_k \in \Omega^2} |s_k - \hat{s}_k|^2 \qquad (22) $$

once for each $k = 1, 2, \ldots, K$. Similarly to (16), this is a standard quantization problem in Quadrature Amplitude Modulation, and we will calculate the computational complexity of this approach up to this point. We will carry out the computational complexity analysis of the technique in (7), (10), and (15) against the complexity of the technique in (20) for four examples, including those in [1], [2].

Example 1: Consider the Alamouti OSTBC with $N = K = T = 2$ and $M = 1$, where

$$ G_2 = \begin{bmatrix} s_1 & s_2 \\ -s_2^* & s_1^* \end{bmatrix}. \qquad (23) $$

The received signal is given by

$$ \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} s_1 & s_2 \\ -s_2^* & s_1^* \end{bmatrix} \begin{bmatrix} h_{1,1} \\ h_{2,1} \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}. \qquad (24) $$

Representing (24) in the real domain, we have

$$ \begin{bmatrix} \mathrm{Re}(y_1) \\ \mathrm{Im}(y_1) \\ \mathrm{Re}(y_2) \\ \mathrm{Im}(y_2) \end{bmatrix} = \check{H} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} + \begin{bmatrix} \mathrm{Re}(v_1) \\ \mathrm{Im}(v_1) \\ \mathrm{Re}(v_2) \\ \mathrm{Im}(v_2) \end{bmatrix} \qquad (25) $$

where $x_1 = \mathrm{Re}(s_1)$, $x_2 = \mathrm{Im}(s_1)$, $x_3 = \mathrm{Re}(s_2)$, $x_4 = \mathrm{Im}(s_2)$, and

$$ \check{H} = \begin{bmatrix} h_1 & -h_2 & h_3 & -h_4 \\ h_2 & h_1 & h_4 & h_3 \\ h_3 & h_4 & -h_1 & -h_2 \\ h_4 & -h_3 & -h_2 & h_1 \end{bmatrix}. \qquad (26) $$

Note that the matrix $\check{H}$ is orthogonal and all of its columns have the same squared norm. One needs 16 real multiplications to calculate $\bar{y} = \check{H}^T \check{y}$, 4 real multiplications to calculate $\sigma = \check{h}_1^T \check{h}_1$, 4 real multiplications to calculate $\sigma^{-1}$, and 4 real multiplications to calculate $\sigma^{-1}\bar{y}$. There are $3 \cdot 4 = 12$ real additions to calculate $\check{H}^T \check{y}$ and 3 real additions to calculate $\sigma$. As a result, with this approach, decoding takes a total of 28 real multiplications and 15 real additions.

For the method in (20) above, the products $H^H A_1^H$, $H^H B_1^H$, $H^H A_2^H$, $H^H B_2^H$ are

$$ H^H A_1^H = [\,h_{1,1}^*\ \ h_{2,1}^*\,], \quad H^H B_1^H = [\,h_{1,1}^*\ \ {-h_{2,1}^*}\,], \quad H^H A_2^H = [\,h_{2,1}^*\ \ {-h_{1,1}^*}\,], \quad H^H B_2^H = [\,h_{2,1}^*\ \ h_{1,1}^*\,] \qquad (27) $$

which will be multiplied by $Y = (y_1, y_2)^T$, where $h_{1,1}$, $h_{2,1}$, $y_1$, $y_2$ are all complex. It can be observed from (20) and (27) that one needs all products $h_{i,1}^* y_j$, $i, j = 1, 2$. Therefore, one needs 4 complex or 16 real multiplications. The calculation of $\|H\|^2$ takes 4, its reciprocal $1/\|H\|^2$ takes 4, and the multiplication of $1/\|H\|^2$ with $\mathrm{Re}\{\mathrm{Tr}[H^H A_k^H Y]\}$ and $\mathrm{Im}\{\mathrm{Tr}[H^H B_k^H Y]\}$ for $k = 1, 2$ another 4 real multiplications. It can be calculated that each of $\mathrm{Re}\{\mathrm{Tr}[H^H A_k^H Y]\}$ and $\mathrm{Im}\{\mathrm{Tr}[H^H B_k^H Y]\}$ requires 3 distinct real additions for $k = 1, 2$, which means there are a total of 12 real additions for this operation. Calculation of $\|H\|^2$ takes 3 real additions. As a result, this approach employs 28 real multiplications and 15 real additions to decode. Note that, in this case, the complexity figures in (17) are 28 real multiplications and 15 real additions, which hold exactly.

Example 2: Consider the OSTBC with $M = 2$, $N = 3$, $T = 8$, and $K = 4$ given by [5]

$$ G_3 = \begin{bmatrix} s_1 & -s_2 & -s_3 & -s_4 & s_1^* & -s_2^* & -s_3^* & -s_4^* \\ s_2 & s_1 & s_4 & -s_3 & s_2^* & s_1^* & s_4^* & -s_3^* \\ s_3 & -s_4 & s_1 & s_2 & s_3^* & -s_4^* & s_1^* & s_2^* \end{bmatrix}^T. \qquad (28) $$

The received signal can be written as

$$ \begin{bmatrix} y_1^1 & y_1^2 \\ \vdots & \vdots \\ y_8^1 & y_8^2 \end{bmatrix} = G_3 \begin{bmatrix} h_{1,1} & h_{1,2} \\ h_{2,1} & h_{2,2} \\ h_{3,1} & h_{3,2} \end{bmatrix} + \begin{bmatrix} v_1^1 & v_1^2 \\ \vdots & \vdots \\ v_8^1 & v_8^2 \end{bmatrix}. \qquad (29) $$

In [2], it has been shown that the $32 \times 8$ real-valued channel matrix $\check{H}$ is

$$ \check{H} = \begin{bmatrix} h_1 & -h_2 & h_3 & -h_4 & h_5 & -h_6 & 0 & 0 \\ h_2 & h_1 & h_4 & h_3 & h_6 & h_5 & 0 & 0 \\ \vdots & & & & & & & \vdots \\ h_7 & -h_8 & h_9 & -h_{10} & h_{11} & -h_{12} & 0 & 0 \\ h_8 & h_7 & h_{10} & h_9 & h_{12} & h_{11} & 0 & 0 \\ \vdots & & & & & & & \vdots \\ 0 & 0 & h_{11} & h_{12} & -h_9 & -h_{10} & -h_7 & -h_8 \\ 0 & 0 & h_{12} & -h_{11} & -h_{10} & h_9 & -h_8 & h_7 \end{bmatrix} \qquad (30) $$

where $h_i$, $i = 1, 3, \ldots, 11$, and $h_j$, $j = 2, 4, \ldots, 12$, are the real and imaginary parts, respectively, of $h_{1,1}, h_{2,1}, h_{3,1}, h_{1,2}, h_{2,2}, h_{3,2}$. The matrix $\check{H}^T$ is $8 \times 32$, where each row has 8 zeros, while each of the remaining 24 entries is one of $h_1, h_2, \ldots, h_{12}$, each repeated twice. Let's first ignore the repetition of $h_i$ in a row. Then, the calculation of $\check{H}^T \check{y}$ takes $8 \cdot 24 = 192$ real multiplications. The calculation of $\sigma = \check{h}_1^T \check{h}_1 = 2\sum_{k=1}^{12} h_k^2$ takes $12 + 1 = 13$ real multiplications. In addition, one needs 4 real multiplications to calculate $\sigma^{-1}$, and 8 real multiplications to calculate $\sigma^{-1}\bar{y}$. To calculate $\check{H}^T \check{y}$, one needs $8 \cdot 23 = 184$ real additions, and to calculate $\sigma$, one needs 11 real additions. As a result, with this approach, one needs a total of 217 real multiplications and 195 real additions to decode.
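The per-example counts in this section all rest on the orthogonality property (8), $\check{H}^T\check{H} = \sigma I$ with $\sigma = c\|H\|^2$. As a numerical sanity check (our code, not part of [1], [2]), one can build the Alamouti matrix (26) of Example 1 from a complex channel pair and verify the property with $c = 1$:

```python
def alamouti_H_check(h11, h21):
    """Real-valued 4x4 channel matrix of eq. (26) for the Alamouti code, M = 1."""
    h1, h2 = h11.real, h11.imag
    h3, h4 = h21.real, h21.imag
    return [[h1, -h2, h3, -h4],
            [h2,  h1, h4,  h3],
            [h3,  h4, -h1, -h2],
            [h4, -h3, -h2,  h1]]

h11, h21 = 0.6 - 0.8j, -0.1 + 0.3j
H = alamouti_H_check(h11, h21)
sigma = abs(h11) ** 2 + abs(h21) ** 2          # sigma = ||H||^2 here (c = 1)
# Gram matrix H_check^T H_check should equal sigma * I_4, eq. (8)
gram = [[sum(H[t][i] * H[t][j] for t in range(4)) for j in range(4)]
        for i in range(4)]
for i in range(4):
    for j in range(4):
        expect = sigma if i == j else 0.0
        assert abs(gram[i][j] - expect) < 1e-12
```

The same style of check applied to the $32 \times 8$ matrix (30) would confirm $\sigma = 2\|H\|^2$ for $G_3$, i.e., the $c = 2$ case discussed below.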
For the method in (20) above, the products $H^H A_1^H$ and $H^H B_1^H$ are

$$ H^H A_1^H = \begin{bmatrix} h_{1,1}^* & h_{2,1}^* & h_{3,1}^* & 0 & h_{1,1}^* & h_{2,1}^* & h_{3,1}^* & 0 \\ h_{1,2}^* & h_{2,2}^* & h_{3,2}^* & 0 & h_{1,2}^* & h_{2,2}^* & h_{3,2}^* & 0 \end{bmatrix} $$
$$ H^H B_1^H = \begin{bmatrix} h_{1,1}^* & h_{2,1}^* & h_{3,1}^* & 0 & -h_{1,1}^* & -h_{2,1}^* & -h_{3,1}^* & 0 \\ h_{1,2}^* & h_{2,2}^* & h_{3,2}^* & 0 & -h_{1,2}^* & -h_{2,2}^* & -h_{3,2}^* & 0 \end{bmatrix}. \qquad (31) $$

The other $H^H A_k^H$ and $H^H B_k^H$ have similar structures, with the zero columns located elsewhere, at the same location in $H^H A_k^H$ and $H^H B_k^H$, $k = 2, 3, 4$. The nonzero columns of $H^H A_k^H$ and $H^H B_k^H$ are shuffled versions of the columns of $H^H A_1^H$ and $H^H B_1^H$, with the same shuffling for $H^H A_k^H$ and $H^H B_k^H$, possibly with sign changes. As a result, the first four columns of $H^H A_k^H$ and $H^H B_k^H$ are the same, the first and second four columns of $H^H A_k^H$ are the same, while the first and second four columns of $H^H B_k^H$ are negatives of each other, $k = 1, 2, 3, 4$. For this $G_N$, one has

$$ G_N^H G_N = 2 \left( \sum_{k=1}^{K} |s_k|^2 \right) I \qquad (32) $$

which makes it necessary to replace $\|H\|^2$ with $2\|H\|^2$ in (20) above. The matrix $Y$ is given as

$$ Y = \begin{bmatrix} y_1^1 & y_1^2 \\ \vdots & \vdots \\ y_8^1 & y_8^2 \end{bmatrix}. \qquad (33) $$

The complex multiplications in calculating $\mathrm{Re}\{\mathrm{Tr}[H^H A_1^H Y]\}$ can be used to calculate $\mathrm{Im}\{\mathrm{Tr}[H^H B_1^H Y]\}$, due to the sign changes and the calculation of real and imaginary parts. Ignoring the repetition of $h_{i,j}^*$, there are 12 different complex numbers in $H^H A_1^H$ and, due to the trace operation, they will be multiplied with 12 complex numbers from $Y$. As a result, to calculate $\mathrm{Tr}[H^H A_k^H Y]$ (equivalently $\mathrm{Tr}[H^H B_k^H Y]$), one needs 12 complex or 48 real multiplications for one $k$. To calculate the numerators of $\hat{s}_k$ for all $k = 1, 2, 3, 4$, one needs 192 real multiplications. To calculate $2\|H\|^2$ in the denominator, one needs 13 real multiplications. To calculate its inverse, one needs 4 real multiplications.
Finally, to complete the calculation of $\hat{s}_k$ for $k = 1, 2, 3, 4$ by multiplying the numerators of their real and imaginary parts by $1/(2\|H\|^2)$, one needs 8 real multiplications. To calculate each $\mathrm{Re}\{\mathrm{Tr}[H^H A_k^H Y]\}$ or $\mathrm{Im}\{\mathrm{Tr}[H^H B_k^H Y]\}$ for $k = 1, 2, 3, 4$, one needs $12 + 11 = 23$ real additions. To calculate $\|H\|^2$, one needs 11 additions. As a result, with this approach, one needs 217 real multiplications and 195 real additions to decode, the same numbers as in the approach specified by (7), (10), and (15). For this example, (17) specifies 300 real multiplications and 279 real additions. The reduction is due to the elements with zero values in $\check{H}$.

It is important to observe that the repeated values of $h_i$ in the columns of $\check{H}$, or equivalently of $h_{m,n}^*$ in the rows of $H^H A_k^H$ or $H^H B_k^H$, have a substantial impact on complexity. We will carry out the rest of this discussion only for the approach in (7), (10), and (15); the one for (20) is essentially the same. Due to the repetition of $h_i$, by grouping the two values of $\check{y}_j$ that it multiplies, it takes $8 \cdot 12 = 96$ real multiplications to compute $\check{H}^T \check{y}$, not $8 \cdot 24 = 192$. The summations for each row of $\check{H}^T \check{y}$ will now be done in two steps: first 12 pairs of additions, one pair per $h_i$, and then, after multiplication by $h_i$, addition of 12 real numbers. This takes $12 + 11 = 23$ real additions, with no change from the way the calculation was made without grouping. With this change, the complexity of decoding becomes 121 real multiplications and 195 real additions, a huge reduction from 300 real multiplications and 279 real additions.

Example 3: We will now consider the code $G_4$ from [5]. The parameters for this code are $N = K = 4$, $M = 1$, and $T = 8$.
It is given as

$$ G_4 = \begin{bmatrix} s_1 & -s_2 & -s_3 & -s_4 & s_1^* & -s_2^* & -s_3^* & -s_4^* \\ s_2 & s_1 & s_4 & -s_3 & s_2^* & s_1^* & s_4^* & -s_3^* \\ s_3 & -s_4 & s_1 & s_2 & s_3^* & -s_4^* & s_1^* & s_2^* \\ s_4 & s_3 & -s_2 & s_1 & s_4^* & s_3^* & -s_2^* & s_1^* \end{bmatrix}^T. \qquad (34) $$

Similarly to $G_3$ of Example 2, this code has the property that $G_4^H G_4 = 2(\sum_{k=1}^{K} |s_k|^2) I$. As a result, $\|H\|^2$ in the denominator of (20) should be replaced with $2\|H\|^2$. The matrix $\check{H}$ is $16 \times 8$ and can be calculated as

$$ \check{H} = \begin{bmatrix} h_1 & -h_2 & h_3 & -h_4 & h_5 & -h_6 & h_7 & h_8 \\ h_2 & h_1 & h_4 & h_3 & h_6 & h_5 & h_8 & h_7 \\ h_3 & -h_4 & -h_1 & h_2 & h_7 & -h_8 & -h_5 & h_6 \\ h_4 & h_3 & -h_2 & -h_1 & h_8 & h_7 & -h_6 & -h_5 \\ \vdots & & & & & & & \vdots \\ h_5 & h_6 & -h_7 & h_8 & -h_1 & -h_2 & h_3 & h_4 \\ h_6 & -h_5 & -h_8 & h_7 & -h_2 & h_1 & h_4 & -h_3 \end{bmatrix}. \qquad (35) $$

This matrix consists entirely of nonzero entries. Each entry in a column equals $\pm h_i$ for some $i \in \{1, 2, \ldots, 8\}$, every $h_i$ appearing twice in a column. Ignoring this repetition for now, calculation of $\check{H}^T \check{y}$ takes $8 \cdot 16 = 128$ real multiplications. Calculation of $\sigma$ takes 9 real multiplications, its inverse 4 real multiplications, and the calculation of $\sigma^{-1}\bar{y}$ takes 8 real multiplications. Calculation of $\check{H}^T \check{y}$ takes $8 \cdot 15 = 120$ real additions, and calculation of $\sigma$ takes 7 real additions. As a result, with this approach, to decode one needs 149 real multiplications and 127 real additions.

For this code, for the method in (20), the matrices $H^H A_k^H$ and $H^H B_k^H$, $k = 1, 2, 3, 4$, are as follows:

$$ H^H A_1^H = [\,h_{1,1}^*\ \ h_{2,1}^*\ \ h_{3,1}^*\ \ h_{4,1}^*\ \ h_{1,1}^*\ \ h_{2,1}^*\ \ h_{3,1}^*\ \ h_{4,1}^*\,] $$
$$ H^H A_2^H = [\,h_{2,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{4,1}^*}\ \ h_{3,1}^*\ \ h_{2,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{4,1}^*}\ \ h_{3,1}^*\,] $$
$$ H^H A_3^H = [\,h_{3,1}^*\ \ h_{4,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{2,1}^*}\ \ h_{3,1}^*\ \ h_{4,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{2,1}^*}\,] $$
$$ H^H A_4^H = [\,h_{4,1}^*\ \ {-h_{3,1}^*}\ \ h_{2,1}^*\ \ {-h_{1,1}^*}\ \ h_{4,1}^*\ \ {-h_{3,1}^*}\ \ h_{2,1}^*\ \ {-h_{1,1}^*}\,] \qquad (36) $$
$$ H^H B_1^H = [\,h_{1,1}^*\ \ h_{2,1}^*\ \ h_{3,1}^*\ \ h_{4,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{2,1}^*}\ \ {-h_{3,1}^*}\ \ {-h_{4,1}^*}\,] $$
$$ H^H B_2^H = [\,h_{2,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{4,1}^*}\ \ h_{3,1}^*\ \ {-h_{2,1}^*}\ \ h_{1,1}^*\ \ h_{4,1}^*\ \ {-h_{3,1}^*}\,] $$
$$ H^H B_3^H = [\,h_{3,1}^*\ \ h_{4,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{2,1}^*}\ \ {-h_{3,1}^*}\ \ {-h_{4,1}^*}\ \ h_{1,1}^*\ \ h_{2,1}^*\,] $$
$$ H^H B_4^H = [\,h_{4,1}^*\ \ {-h_{3,1}^*}\ \ h_{2,1}^*\ \ {-h_{1,1}^*}\ \ {-h_{4,1}^*}\ \ h_{3,1}^*\ \ {-h_{2,1}^*}\ \ h_{1,1}^*\,] $$

From this set we conclude that the complex multiplications between $H^H A_k^H Y$ and $H^H B_k^H Y$ can be shared for a given $k = 1, 2, 3, 4$. The number of real multiplications to calculate $H^H A_k^H Y$ for all $k = 1, 2, 3, 4$ is $4 \cdot 8 \cdot 4 = 128$. The number of real multiplications to calculate $2\|H\|^2$ is $8 + 1 = 9$, and calculating its inverse takes 4 real multiplications. Finally, the number of real multiplications to complete the calculation of $\hat{s}_k$ for all $k = 1, 2, 3, 4$ is 8. In order to calculate $H^H A_k^H Y$ or $H^H B_k^H Y$, one needs 8 real additions for the complex multiplications and 7 real additions to calculate the sum. As a result, the calculation of $\mathrm{Re}\{\mathrm{Tr}[H^H A_k^H Y]\}$ and $\mathrm{Im}\{\mathrm{Tr}[H^H B_k^H Y]\}$ for all $k = 1, 2, 3, 4$ takes $8 \cdot 15 = 120$ real additions. Calculation of $\|H\|^2$ takes 7 real additions. Therefore, with this approach, the number of real multiplications and additions to decode are 149 and 127, respectively, the same as the numbers needed for the approach in (7), (10), and (15). For this example, equation (17) specifies 156 real multiplications and 135 real additions.
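The baseline counts of (17) quoted in Examples 1-3 depend only on $K$, $M$, and $T$. A small helper (the naming is ours) reproduces them:

```python
def complexity_pr(K, M, T):
    """Real multiplications and additions of the estimate C_PR in eq. (17)."""
    mults = 4 * K * M * T + 2 * M * T + 2 * K + 4
    adds = 4 * K * M * T + 2 * M * T - 2 * K - 1
    return mults, adds
```

Example 1 ($K=2$, $M=1$, $T=2$) gives $(28, 15)$, Example 2 ($K=4$, $M=2$, $T=8$) gives $(300, 279)$, and Example 3 ($K=4$, $M=1$, $T=8$) gives $(156, 135)$, matching the figures in the text; the exact counts fall below these whenever $\check{H}$ contains zeros, repeated coefficients, or scalar factors, as the examples show.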
The reduction is due to the fact that each row of $\check{H}^T$ has each $h_i$ appearing twice. This reduces the number of multiplications and summations needed to calculate $\sigma$ by about a factor of 2. Moreover, because each $h_i$ appears twice in every row of $\check{H}^T$, the number of multiplications can actually be reduced substantially, as discussed in Example 2: we can reduce the number of multiplications to calculate $\check{H}^T \check{y}$ by grouping the two multipliers of each $h_i$, summing them prior to multiplication by $h_i$, $i = 1, 2, \ldots, 8$. As seen in Example 2, this does not alter the number of real additions. With this simple change, the number of real multiplications to decode becomes 85, and the number of real additions to decode remains at 127.

Example 4: It is instructive to consider the code $H_3$ given in [5] with $N = 3$, $K = 3$, $T = 4$, which we will consider for $M = 1$, where

$$ H_3 = \begin{bmatrix} s_1 & s_2 & s_3/\sqrt{2} \\ -s_2^* & s_1^* & s_3/\sqrt{2} \\ s_3^*/\sqrt{2} & s_3^*/\sqrt{2} & (-s_1 - s_1^* + s_2 - s_2^*)/2 \\ s_3^*/\sqrt{2} & -s_3^*/\sqrt{2} & (s_2 + s_2^* + s_1 - s_1^*)/2 \end{bmatrix}. \qquad (37) $$

For this code, $H_3^H H_3 = (\sum_{k=1}^{3} |s_k|^2) I$ is satisfied. In this case, the matrix $\check{H}$ can be calculated as

$$ \check{H} = \begin{bmatrix} h_1 & -h_2 & h_3 & -h_4 & h_5/\sqrt{2} & -h_6/\sqrt{2} \\ h_2 & h_1 & h_4 & h_3 & h_6/\sqrt{2} & h_5/\sqrt{2} \\ h_3 & h_4 & -h_1 & -h_2 & h_5/\sqrt{2} & -h_6/\sqrt{2} \\ h_4 & -h_3 & -h_2 & h_1 & h_6/\sqrt{2} & h_5/\sqrt{2} \\ -h_5 & 0 & 0 & -h_6 & (h_1 + h_3)/\sqrt{2} & (h_2 + h_4)/\sqrt{2} \\ -h_6 & 0 & 0 & h_5 & (h_2 + h_4)/\sqrt{2} & -(h_1 + h_3)/\sqrt{2} \\ 0 & h_6 & h_5 & 0 & (h_1 - h_3)/\sqrt{2} & (h_2 - h_4)/\sqrt{2} \\ 0 & -h_5 & h_6 & 0 & (h_2 - h_4)/\sqrt{2} & (-h_1 + h_3)/\sqrt{2} \end{bmatrix}. \qquad (38) $$

It can be verified that every column $\check{h}_i$ of $\check{H}$ has the property that $\check{h}_i^T \check{h}_i = \sigma = \|H\|^2 = \sum_{k=1}^{6} h_k^2$ for $i = 1, 2, \ldots, 6$. In this case, the number of real multiplications to calculate $\check{H}^T \check{y}$ requires more caution than in the previous examples. For the first four rows of $\check{H}^T$, this number is 6 real multiplications per row. For the last two rows, due to the combining of, e.g., $h_1$ and $h_3$ in $(h_1 + h_3)/\sqrt{2}$ in the fifth element of $\check{h}_5$, the commonality of $h_5$ and $h_6$ for the first and third, and second and fourth, respectively, elements of $\check{h}_5$, and one single multiplier $1/\sqrt{2}$ for the whole column, the number of real multiplications needed is 7. As a result, calculation of $\check{H}^T \check{y}$ takes 38 real multiplications. Calculation of $\sigma$ takes 6 real multiplications. One needs 4 real multiplications to calculate $\sigma^{-1}$, and 6 real multiplications to calculate $\sigma^{-1}\bar{y}$. The first four rows of $\check{H}^T \check{y}$ require 5 real additions each. The last two rows of $\check{H}^T \check{y}$ require $4 + 7 = 11$ real additions each. This is a total of 42 real additions to calculate $\check{H}^T \check{y}$. Calculation of $\sigma$ requires 5 real additions. Overall, with this approach, one needs 54 real multiplications and 47 real additions to decode.

For this code, for the method in (20) above, the matrices $H^H A_k^H$ and $H^H B_k^H$, $k = 1, 2, 3$, are as follows:

$$ H^H A_1^H = [\,h_{1,1}^*\ \ h_{2,1}^*\ \ {-h_{3,1}^*}\ \ 0\,] \qquad H^H A_2^H = [\,h_{2,1}^*\ \ {-h_{1,1}^*}\ \ 0\ \ h_{3,1}^*\,] $$
$$ H^H A_3^H = \tfrac{1}{\sqrt{2}}\, [\,h_{3,1}^*\ \ h_{3,1}^*\ \ h_{1,1}^* + h_{2,1}^*\ \ h_{1,1}^* - h_{2,1}^*\,] \qquad (39) $$
$$ H^H B_1^H = [\,h_{1,1}^*\ \ {-h_{2,1}^*}\ \ 0\ \ h_{3,1}^*\,] \qquad H^H B_2^H = [\,h_{2,1}^*\ \ h_{1,1}^*\ \ h_{3,1}^*\ \ 0\,] $$
$$ H^H B_3^H = \tfrac{1}{\sqrt{2}}\, [\,h_{3,1}^*\ \ h_{3,1}^*\ \ {-h_{1,1}^*} - h_{2,1}^*\ \ {-h_{1,1}^*} + h_{2,1}^*\,] $$

Before discussing the complexity of the approach in (20), we would like to make an observation. A careful examination shows that the complex multiplications between $H^H A_k^H Y$ for $k = 1, 2, 3$ and $H^H B_j^H Y$ for $j = 1, 2, 3$ can be shared in the method outlined in (20). In this case, since $h_{3,1}^*$ in the first and second elements of $H^H A_3^H$ can be shared, there are 9 complex multiplications needed for the calculation of $H^H A_k^H Y$ for $k = 1, 2, 3$. The real parts of those will be used in calculating the real parts of $\hat{s}_k$, $k = 1, 2, 3$, and the imaginary parts in calculating the imaginary parts of $\hat{s}_k$, $k = 1, 2, 3$, albeit with possibly different signs or locations. This requires a careful implementation where the needed complex multiplications are calculated, stored, and their real and imaginary parts distributed in the most judicious manner. The 9 complex multiplications correspond to 36 real multiplications, and there are 2 more real multiplications, by $1/\sqrt{2}$, for the real and imaginary parts of $\hat{s}_3$. As in the previous method, 6 real multiplications are needed to calculate $\|H\|^2$, 4 real multiplications to calculate $1/\|H\|^2$, and 6 real multiplications to complete the calculation of $\hat{s}_k$, $k = 1, 2, 3$. The calculation of $\mathrm{Re}\{\mathrm{Tr}[H^H A_k^H Y]\}$ and $\mathrm{Im}\{\mathrm{Tr}[H^H B_k^H Y]\}$ for all $k = 1, 2, 3$ takes $4 \cdot 5 + 2 \cdot (6 + 5) = 42$ real additions, and the calculation of $\|H\|^2$ takes 5 more real additions. This approach results in a total of 54 real multiplications and 47 real additions to decode, as in the technique in (7), (10), and (15). For this example, (17) specifies 66 real multiplications and 49 real additions. The reduction is due to the presence of the zero entries in $\check{H}$. On the other hand, the presence of the factor $1/\sqrt{2}$ in the last two rows of $\check{H}^T$ adds two real multiplications to the total number of real multiplications.

Before concluding this example, we would like to display the matrices $A_3$ and $B_3$ for this code:

$$ A_3 = \begin{bmatrix} 0 & 0 & 1/\sqrt{2} \\ 0 & 0 & 1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{bmatrix} \qquad B_3 = \begin{bmatrix} 0 & 0 & 1/\sqrt{2} \\ 0 & 0 & 1/\sqrt{2} \\ -1/\sqrt{2} & -1/\sqrt{2} & 0 \\ -1/\sqrt{2} & 1/\sqrt{2} & 0 \end{bmatrix} \qquad (40) $$

In all other $A_k$ and $B_k$ matrices in the four examples studied, the nonzero entries were $\pm 1$. Furthermore, in all other $A_k$ and $B_k$ matrices in the four examples, there was at most one nonzero value in a row.
In both $A_3$ and $B_3$ above, the entries are irrational numbers and two rows have two nonzero entries. From the examples above, by studying the operations of the two techniques in detail, it can actually be seen that not only is the computational complexity of the technique in (7), (10), and (15) the same as that of the technique in (20), but they actually perform equivalent operations.

V. ORTHOGONALITY OF $\check{H}$ AND COMPUTATIONAL COMPLEXITY REVISITED

We have seen in the examples that when $G_N^H G_N = c(\sum_{k=1}^{K} |s_k|^2) I$ with $c = 1, 2$, then $\sigma = c\|H\|^2$. We will now show that this holds in general. Based on that result, we will then reduce the computational complexity estimate in (17). Let

$$ z = \mathrm{vec}(Y) = (y_1^1, \ldots, y_T^M)^T. \qquad (41) $$

Form two vectors, $\bar{s}$ and $\tilde{s}$, consisting of the real and imaginary parts of $s_k$, and form a vector $s'$ that is the concatenation of $\bar{s}$ and $\tilde{s}$:

$$ \bar{s} = (\bar{s}_1, \bar{s}_2, \ldots, \bar{s}_K)^T, \quad \tilde{s} = (\tilde{s}_1, \tilde{s}_2, \ldots, \tilde{s}_K)^T, \quad s' = (\bar{s}, \tilde{s})^T. \qquad (42) $$

By rearranging the right-hand side of (2), we can write

$$ z = F s' + e = F_a \bar{s} + F_b \tilde{s} + e \qquad (43) $$

where $F = [F_a\ F_b]$ is an $MT \times 2K$ complex matrix, $F_a$ and $F_b$ are $MT \times K$ complex matrices whose entries consist of (linear combinations of) the channel coefficients $h_{i,j}$, and $e$ is the corresponding complex Gaussian noise vector. In [4], it was shown that when $G_N^H G_N = (\sum_{k=1}^{K} |s_k|^2) I$, then $\mathrm{Re}[F^H F] = \|H\|^2 I$. It is straightforward to extend this result so that when $G_N^H G_N = c(\sum_{k=1}^{K} |s_k|^2) I$, then

$$ \mathrm{Re}[F^H F] = c\|H\|^2 I \qquad (44) $$

where $c$ is a positive integer. Let

$$ \bar{z} = \mathrm{Re}[z], \quad \tilde{z} = \mathrm{Im}[z], \quad \bar{e} = \mathrm{Re}[e], \quad \tilde{e} = \mathrm{Im}[e], \qquad (45) $$

and

$$ \bar{F}_a = \mathrm{Re}[F_a], \quad \tilde{F}_a = \mathrm{Im}[F_a], \quad \bar{F}_b = \mathrm{Re}[F_b], \quad \tilde{F}_b = \mathrm{Im}[F_b]. \qquad (46) $$

Now define

$$ z' = \begin{bmatrix} \bar{z} \\ \tilde{z} \end{bmatrix}, \quad F' = \begin{bmatrix} \bar{F}_a & \bar{F}_b \\ \tilde{F}_a & \tilde{F}_b \end{bmatrix}, \quad e' = \begin{bmatrix} \bar{e} \\ \tilde{e} \end{bmatrix} \qquad (47) $$

so that we can write

$$ z' = F' s' + e' \qquad (48) $$

which is actually the same expression as (4) except that the vectors and matrices have their rows and columns permuted. It can be shown that (44) implies

$$ F'^T F' = c\|H\|^2 I. \qquad (49) $$

Let $P_y$ and $P_s$ be $2MT \times 2MT$ and $2K \times 2K$, respectively, permutation matrices such that

$$ (\mathrm{Re}(y_1^1), \mathrm{Im}(y_1^1), \ldots, \mathrm{Re}(y_T^M), \mathrm{Im}(y_T^M))^T = P_y z', \qquad (\mathrm{Re}(s_1), \mathrm{Im}(s_1), \ldots, \mathrm{Re}(s_K), \mathrm{Im}(s_K))^T = P_s s'. \qquad (50) $$

It follows that $P_y^T P_y = P_y P_y^T = I$ and $P_s^T P_s = P_s P_s^T = I$. We now have

$$ (\mathrm{Re}(y_1^1), \mathrm{Im}(y_1^1), \ldots, \mathrm{Re}(y_T^M), \mathrm{Im}(y_T^M))^T = P_y (F' s' + e') \qquad (51) $$
$$ = P_y F' P_s^T \, (\mathrm{Re}(s_1), \mathrm{Im}(s_1), \ldots, \mathrm{Re}(s_K), \mathrm{Im}(s_K))^T + P_y e' \qquad (52) $$
$$ = \check{H} \, (\mathrm{Re}(s_1), \mathrm{Im}(s_1), \ldots, \mathrm{Re}(s_K), \mathrm{Im}(s_K))^T + (\mathrm{Re}(v_1^1), \mathrm{Im}(v_1^1), \ldots, \mathrm{Re}(v_T^M), \mathrm{Im}(v_T^M))^T. \qquad (53) $$

Therefore,

$$ \check{H} = P_y F' P_s^T \qquad (54) $$

which implies

$$ \check{H}^T \check{H} = P_s F'^T P_y^T P_y F' P_s^T = c\|H\|^2 I. \qquad (55) $$

In other words, $\sigma = c\|H\|^2$. This has an impact on the computational complexity formula (17), which we discuss next. First, let $c = 1$. Since $\sigma = \|H\|^2$, its calculation takes $2MN$ real multiplications and $2MN - 1$ real additions. As a result, the computational complexity formula (17) can be updated as

$$ C_{PR} = (4KMT + 2MN + 2K + 4)\, R_M,\ (4KMT + 2MN - 2K - 1)\, R_A. \qquad (56) $$

When $c > 1$, the number of real multiplications to calculate $\sigma$ increases by 1; however, the complexity of the calculation of $\check{H}^T \check{y}$ is reduced by a factor of $c$, as seen in the examples. As also seen in the examples, the presence of values of 0 within $\check{H}$ reduces the computational complexity. Its effect is a reduction in the number of real multiplications to calculate $\check{H}^T \check{y}$ by a factor equal to the ratio of the rows of $A_k$ and $B_k$ that consist only of 0 values to the total number of rows in $A_k$ and $B_k$, $k = 1, 2, \ldots, K$, with a similar (not the same) reduction in the number of real additions to calculate $\check{H}^T \check{y}$. It will also reduce the number of real multiplications and additions to calculate $\sigma$, but that effect can be more complicated, as seen in Example 4. Also, as seen in Example 4, the contents of the $\check{H}$ matrix can include linear combinations of the $h_i$ values, which also result in changes in computational complexity.

VI. DISCUSSION

For an OSTBC $G_N$ satisfying $G_N^H G_N = c(\sum_{k=1}^{K} |s_k|^2) I$, where $c$ is a positive integer, the Maximum Likelihood solution is formulated in four equivalent ways:

$$ \| Y - G_N H \|^2 = \| z - F s' \|^2 = \| z' - F' s' \|^2 = \| \check{y} - \check{H} x \|^2. \qquad (57) $$

There are four solutions, all equal. The first solution is obtained by expanding $\|Y - G_N H\|^2$ and is given by (20) when $c = 1$ [4, eq. (7.4.2)]. When $c > 1$, it should be altered as

$$ \hat{s}_k = \frac{1}{c\|H\|^2} \left[ \mathrm{Re}\{\mathrm{Tr}(H^H A_k^H Y)\} - \hat{\imath} \cdot \mathrm{Im}\{\mathrm{Tr}(H^H B_k^H Y)\} \right], \quad k = 1, 2, \ldots, K. \qquad (58) $$

The second solution is obtained by expanding the second expression in (57) and is given by

$$ \hat{s}' = \frac{\mathrm{Re}[F^H z]}{c\|H\|^2}. \qquad (59) $$

This is given in [4, eq. (7.4.20)] for $c = 1$. The third solution is the solution to the third equation in (57):

$$ \hat{s}' = \frac{F'^T z'}{c\|H\|^2}. \qquad (60) $$

The fourth solution is the one introduced in [1]. It is the solution to the fourth equation in (57) and is given by

$$ (\mathrm{Re}(\hat{s}_1), \mathrm{Im}(\hat{s}_1), \ldots, \mathrm{Re}(\hat{s}_K), \mathrm{Im}(\hat{s}_K))^T = \frac{\check{H}^T \check{y}}{\sigma} = \frac{\check{H}^T \check{y}}{c\|H\|^2}. \qquad (61) $$

Considering that

$$ F_a = [\mathrm{vec}(A_1 H) \cdots \mathrm{vec}(A_K H)], \quad F_b = [\hat{\imath}\,\mathrm{vec}(B_1 H) \cdots \hat{\imath}\,\mathrm{vec}(B_K H)] \qquad (62) $$

[4, eq. (7.1.7)], it can be verified that (58) and (59) are equal. The equality of (59) and (60) follows from (45)-(47). The equality of (60) and (61) follows from (50) and (54). Therefore, equations (58)-(61) yield the same result and, when properly implemented, will have identical computational complexity.
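Since $\sigma = c\|H\|^2$ can be computed directly from the $N \times M$ channel matrix, (56) replaces the $2MT$ term of (17) with $2MN$. A small comparison helper (the naming is ours; the $c = 1$ form of (56) is assumed) makes the difference concrete:

```python
def c_pr(K, M, T):
    """Eq. (17): sigma computed as h_1^T h_1, a 2MT-term inner product."""
    return (4 * K * M * T + 2 * M * T + 2 * K + 4,
            4 * K * M * T + 2 * M * T - 2 * K - 1)

def c_pr_updated(K, M, T, N):
    """Eq. (56), c = 1: sigma computed as ||H||^2 from 2MN real coefficients."""
    return (4 * K * M * T + 2 * M * N + 2 * K + 4,
            4 * K * M * T + 2 * M * N - 2 * K - 1)
```

For the $G_4$ code of Example 3 ($K = 4$, $M = 1$, $T = 8$, $N = 4$), (17) gives $(156, 135)$ while (56) gives $(148, 127)$; whenever $T > N$, as for all rate-deficient OSTBCs with long block lengths, the updated figure is smaller.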
Finally, we would like to state that a straightforward implementation of (58) or (59) can actually result in larger complexity than (60) and (61). The proper implementation requires that, in (58) and (59), the terms not needed due to elimination by the $\mathrm{Tr}[\cdot]$, $\mathrm{Re}[\cdot]$, and $\mathrm{Im}[\cdot]$ operators are not calculated. We calculated the computational complexity values for the examples taking this fact into account.

REFERENCES

[1] L. Azzam and E. Ayanoglu, "A novel maximum likelihood decoding algorithm for orthogonal space-time block codes," IEEE Transactions on Communications, vol. 57, pp. 606-609, March 2009.
[2] ——, "Low-complexity maximum likelihood detection of orthogonal space-time block codes," in Proc. IEEE Global Telecommunications Conference, November 2008.
[3] ——, "Reduced complexity sphere decoding for square QAM via a new lattice representation," in Proc. IEEE Global Telecommunications Conference, November 2007.
[4] E. G. Larsson and P. Stoica, Space-Time Block Coding for Wireless Communications. Cambridge University Press, 2003.
[5] V. Tarokh, H. Jafarkhani, and R. Calderbank, "Space-time block coding for wireless communications: Performance results," IEEE Journal on Selected Areas in Communications, vol. 17, pp. 451-460, July 1999.