A Low Density Lattice Decoder via Non-Parametric Belief Propagation
Danny Bickson, IBM Haifa Research Lab, Mount Carmel, Haifa 31905, Israel. Email: danny.bickson@gmail.com
Alexander T. Ihler, Bren School of Information and Computer Science, University of California, Irvine. Email: ihler@ics.uci.edu
Harel Avissar and Danny Dolev, School of Computer Science and Engineering, Hebrew University of Jerusalem, Jerusalem 91904, Israel. Email: {harela01,dolev}@cs.huji.ac.il

Abstract—The recent work of Sommer, Feder and Shalvi presented a new family of codes called low density lattice codes (LDLC) that can be decoded efficiently and approach the capacity of the AWGN channel. A linear time iterative decoding scheme based on a message-passing formulation on a factor graph is given. In the current work we report our theoretical findings regarding the relation between the LDLC decoder and belief propagation. We show that the LDLC decoder is an instance of non-parametric belief propagation and further connect it to the Gaussian belief propagation algorithm. Our new results enable borrowing knowledge from the non-parametric and Gaussian belief propagation domains into the LDLC domain. Specifically, we give more general conditions for convergence of the LDLC decoder (under the same assumptions as the original LDLC convergence analysis). We discuss how to extend the LDLC decoder from Latin square to full rank, non-square matrices. We propose an efficient construction of a sparse generator matrix and its matching decoder. We report preliminary experimental results which show our decoder has a symbol error rate comparable to the original LDLC decoder.

I. INTRODUCTION

Lattice codes provide a continuous-alphabet encoding procedure, in which integer-valued information bits are converted to positions in Euclidean space.
Motivated by the success of low-density parity check (LDPC) codes [1], recent work by Sommer et al. [2] presented low density lattice codes (LDLC). Like LDPC codes, an LDLC code has a sparse decoding matrix which can be decoded efficiently using an iterative message-passing algorithm defined over a factor graph. In the original paper, the lattice codes were limited to Latin squares, and some theoretical results were proven for this special case.

The non-parametric belief propagation (NBP) algorithm is an efficient method for approximate inference on continuous graphical models. The NBP algorithm was originally introduced in [3], but has recently been rediscovered independently in several domains, among them compressive sensing [4], [5] and low density lattice decoding [2], demonstrating very good empirical performance in these systems.

In this work, we investigate the theoretical relations between the LDLC decoder and belief propagation, and show it is an instance of the NBP algorithm. This understanding has both theoretical and practical consequences. From the theory point of view, we provide a cleaner and more standard derivation of the LDLC update rules, from the graphical models perspective. From the practical side, we propose to use the considerable body of research that exists in the NBP domain to allow construction of efficient decoders. We further propose a new family of LDLC codes as well as a new LDLC decoder based on the NBP algorithm. By utilizing sparse generator matrices rather than the sparse parity check matrices used in the original LDLC work, we can obtain a more efficient encoder and decoder. We introduce the theoretical foundations which are the basis of our new decoder and give preliminary experimental results which show our decoder has comparable performance to the LDLC decoder.
The structure of this paper is as follows. Section II overviews LDLC codes, belief propagation on factor graphs, and the LDLC decoder algorithm. Section III rederives the original LDLC algorithm using standard graphical models terminology, and shows it is an instance of the NBP algorithm. Section IV presents a new family of LDLC codes as well as our novel decoder; we further discuss the relation to the GaBP algorithm. In Section V we discuss convergence and give more general sufficient conditions for convergence, under the same assumptions used in the original LDLC work. Section VI presents preliminary experimental results evaluating our NBP decoder vs. the LDLC decoder. We conclude in Section VII.

II. BACKGROUND

A. Lattices and low-density lattice codes

An n-dimensional lattice Λ is defined by a generator matrix G of size n × n. The lattice consists of the discrete set of points x = (x_1, x_2, ..., x_n) ∈ R^n with x = Gb, where b ∈ Z^n ranges over all possible integer vectors. A low-density lattice code (LDLC) is a lattice with a non-singular generator matrix G, for which H = G^{-1} is sparse. It is convenient to assume that det(H) = 1/det(G) = 1. An (n, d)-regular LDLC code has an H matrix with constant row and column degree d. In a Latin square LDLC, the values of the d non-zero coefficients in each row and each column are some permutation of the values h_1, h_2, ..., h_d.

We assume a linear channel with additive white Gaussian noise (AWGN). For a vector of integer-valued information b, the transmitted codeword is x = Gb, where G is the LDLC encoding matrix, and the received observation is y = x + w, where w is a vector of i.i.d. AWGN with diagonal covariance σ²I. The decoding problem is then to estimate b given the observation vector y; for the AWGN channel, the MMSE estimator is

  b* = arg min_{b ∈ Z^n} ||y − Gb||².   (1)
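To make the encoding model and the decoding problem (1) concrete, here is a minimal sketch in Python. The 3×3 matrix H and all numeric values are hypothetical, chosen only for illustration (they are not a real LDLC construction), and (1) is solved by brute force over a small integer box, which is feasible only for tiny n:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy 3x3 decoding matrix H (illustrative values, not a real LDLC construction).
H = np.array([[1.0, 0.3, 0.0],
              [0.0, 1.0, 0.3],
              [0.3, 0.0, 1.0]])
G = np.linalg.inv(H)                     # generator matrix G = H^{-1}

b = np.array([1, -2, 0])                 # integer information vector b in Z^n
x = G @ b                                # transmitted lattice point x = Gb
y = x + 0.05 * rng.standard_normal(3)    # AWGN observation, sigma = 0.05

# Brute-force version of (1) over a small search box (feasible only for tiny n).
b_hat = min(product(range(-3, 4), repeat=3),
            key=lambda c: float(np.sum((y - G @ np.array(c)) ** 2)))
print(b_hat)   # recovers (1, -2, 0) at this low noise level
```

At realistic block lengths this exhaustive search is of course hopeless, which is exactly why the iterative message-passing decoder described next is needed.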
B. Factor graphs and belief propagation

Factor graphs provide a convenient mechanism for representing structure among random variables. Suppose a function or distribution p(x) defined on a large set of variables x = [x_1, ..., x_n] factors into a collection of smaller functions p(x) = ∏_s f_s(x_s), where each x_s is a vector composed of a smaller subset of the x_i. We represent this factorization as a bipartite graph with "factor nodes" f_s and "variable nodes" x_i, where the neighbors Γ_s of f_s are the variables in x_s, and the neighbors of x_i are the factor nodes which have x_i as an argument (f_s such that x_i ∈ x_s). For compactness, we use subscripts s, t to indicate factor nodes and i, j to indicate variable nodes, and will use x and x_s to indicate sets of variables, typically formed into a vector whose entries are the variables x_i in the set.

The belief propagation (BP) or sum-product algorithm [6] is a popular technique for estimating the marginal probabilities of each of the variables x_i. BP follows a message-passing formulation, in which at each iteration τ every variable passes a message (denoted M^τ_{is}) to its neighboring factors, and factors to their neighboring variables. These messages are given by the general form

  M^{τ+1}_{is}(x_i) = f_i(x_i) ∏_{t ∈ Γ_i \ s} M^τ_{ti}(x_i),
  M^{τ+1}_{si}(x_i) = ∫_{x_s \ x_i} f_s(x_s) ∏_{j ∈ Γ_s \ i} M^τ_{js}(x_j) dx_s.   (2)

Here we have included a "local factor" f_i(x_i) for each variable, to better parallel our development in the sequel. When the variables x_i take on only a finite number of values, the messages may be represented as vectors; the resulting algorithm has proven effective in many coding applications, including low-density parity check (LDPC) codes [7].
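A tiny discrete instance of the message rules (2) may help fix ideas. The example below (two binary variables joined by one pairwise factor; all numbers are arbitrary illustrations) computes one factor-to-variable message and the resulting belief, and checks it against the brute-force marginal; the two agree exactly here because the graph is a tree:

```python
from itertools import product

# Two binary variables, one pairwise factor, plus local factors (values arbitrary).
f1 = {0: 0.7, 1: 0.3}                                  # local factor f_1(x_1)
f2 = {0: 0.4, 1: 0.6}                                  # local factor f_2(x_2)
fs = {(a, c): 1.5 if a == c else 0.5
      for a, c in product((0, 1), repeat=2)}           # pairwise factor f_s

# Factor-to-variable message into x_1: sum over x_2 of f_s times the message from x_2.
M_s1 = {a: sum(fs[a, c] * f2[c] for c in (0, 1)) for a in (0, 1)}
belief = {a: f1[a] * M_s1[a] for a in (0, 1)}
Z = sum(belief.values())
marg = {a: belief[a] / Z for a in (0, 1)}

# Brute-force marginal from the full joint, for comparison.
joint = {(a, c): f1[a] * f2[c] * fs[a, c] for a, c in product((0, 1), repeat=2)}
Zj = sum(joint.values())
brute = {a: sum(p for (aa, c), p in joint.items() if aa == a) / Zj for a in (0, 1)}
print(all(abs(marg[a] - brute[a]) < 1e-12 for a in (0, 1)))   # True
```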
In keeping with our focus on continuous-alphabet codes, however, we will focus on implementations for continuous-valued random variables.

1) Gaussian belief propagation: When the joint distribution p(x) is Gaussian, p(x) ∝ exp{−½ x^T J x + h^T x}, the BP messages may also be compactly represented in the same form. Here we use the "information form" of the Gaussian distribution, N(x; μ, Σ) = N^{-1}(h, J), where J = Σ^{-1} and h = Jμ. In this case, the distribution's factors can always be written in a pairwise form, so that each function involves at most two variables x_i, x_j, with f_{ij}(x_i, x_j) = exp{−J_{ij} x_i x_j}, j ≠ i, and f_i(x_i) = exp{−½ J_{ii} x_i² + h_i x_i}.

Gaussian BP (GaBP) then has messages that are also conveniently represented as information-form Gaussian distributions. If s refers to factor f_{ij}, we have

  M^{τ+1}_{is}(x_i) = N^{-1}(β_{i\j}, α_{i\j}),  α_{i\j} = J_{ii} + Σ_{k ∈ Γ_i \ j} α_{ki},  β_{i\j} = h_i + Σ_{k ∈ Γ_i \ j} β_{ki},   (3)
  M^{τ+1}_{sj}(x_j) = N^{-1}(β_{ij}, α_{ij}),  α_{ij} = −J_{ij}² α_{i\j}^{-1},  β_{ij} = −J_{ij} α_{i\j}^{-1} β_{i\j}.   (4)

From the α and β values we can compute the estimated marginal distributions, which are Gaussian with mean μ̂_i = K̂_i (h_i + Σ_{k ∈ Γ_i} β_{ki}) and variance K̂_i = (J_{ii} + Σ_{k ∈ Γ_i} α_{ki})^{-1}. It is known that if GaBP converges, it results in the exact MAP estimate x*, although the variance estimates K̂_i computed by GaBP are only approximations to the correct variances [8].

2) Nonparametric belief propagation: In more general continuous-valued systems, the messages do not have a simple closed form and must be approximated. Nonparametric belief propagation, or NBP, extends the popular class of particle filtering algorithms, which assume variables are related by a Markov chain, to general graphs.
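Before turning to NBP, the GaBP recursions (3)-(4) can be sketched in a few lines. The model below (a toy J and h, chosen only for illustration; the graph is a chain, so the fixed point is exact) iterates the α/β updates and checks that the resulting means match the exact solution J^{-1}h:

```python
import numpy as np

# Sketch of the GaBP recursions (3)-(4) on a toy model (J, h illustrative).
# p(x) ∝ exp(-x^T J x / 2 + h^T x); at convergence the means equal J^{-1} h.
J = np.array([[2.0, 0.3, 0.0],
              [0.3, 2.0, 0.4],
              [0.0, 0.4, 2.0]])
h = np.array([1.0, 0.0, -1.0])
n = len(h)

alpha = np.zeros((n, n))      # alpha[k, i]: precision of message k -> i
beta = np.zeros((n, n))       # beta[k, i]: shift of message k -> i

for _ in range(50):
    new_a, new_b = np.zeros_like(alpha), np.zeros_like(beta)
    for i in range(n):
        for j in range(n):
            if i != j and J[i, j] != 0.0:
                others = [k for k in range(n) if k not in (i, j) and J[k, i] != 0.0]
                a = J[i, i] + sum(alpha[k, i] for k in others)   # alpha_{i\j}, eq. (3)
                b = h[i] + sum(beta[k, i] for k in others)       # beta_{i\j}, eq. (3)
                new_a[i, j] = -J[i, j] ** 2 / a                  # eq. (4)
                new_b[i, j] = -J[i, j] * b / a
    alpha, beta = new_a, new_b

mu = np.array([(h[i] + sum(beta[k, i] for k in range(n) if k != i)) /
               (J[i, i] + sum(alpha[k, i] for k in range(n) if k != i))
               for i in range(n)])
print(np.allclose(mu, np.linalg.solve(J, h)))   # True
```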
In NBP, messages are represented by collections of weighted samples, smoothed by a Gaussian shape; in other words, Gaussian mixtures. NBP follows the same message update structure of (2). Notably, when the factors are all either Gaussian or mixtures of Gaussians, the messages remain mixtures of Gaussians as well, since the product or marginalization of any mixture of Gaussians is also a mixture of Gaussians [3]. However, the product of d Gaussian mixtures, each with N components, produces a mixture of N^d components; thus every message product creates an exponential increase in the size of the mixture. For this reason, one must approximate the mixture in some way. NBP typically relies on a stochastic sampling process to preserve only high-likelihood components, and a number of sampling algorithms have been designed to ensure that this process is as efficient as possible [9]–[11]. One may also apply various deterministic algorithms to reduce the number of Gaussian mixture components [12]; for example, in [13], [14] an O(N) greedy algorithm (where N is the number of components before reduction) is used to trade off representation size against approximation error under various measures.

C. LDLC decoder

The LDLC decoding algorithm is also described as a message-passing algorithm defined on a factor graph [6], whose factors represent the information and constraints on x arising from our knowledge of y and the fact that b is integer-valued. Here, we rewrite the LDLC decoder update rules in the more standard graphical models notation. The factor graph used is a bipartite graph with variable nodes {x_i}, representing each element of the vector x, and factor nodes {f_i, g_s} corresponding to the functions

  f_i(x_i) = N(x_i; y_i, σ²),  g_s(x_s) = { 1 if H_s x ∈ Z, 0 otherwise },

where H_s is the s-th row of the decoding matrix H.
Each variable node x_i is connected to those factors for which it is an argument; since H is sparse, H_s has few non-zero entries, making the resulting factor graph sparse as well. Notice that unlike the construction of [2], this formulation does not require that H be square, and it may have arbitrary entries rather than being restricted to a Latin square construction. Sparsity is preferred both for computational efficiency and because belief propagation is typically better behaved on sparse systems with sufficiently long cycles [6].

We can now directly derive the belief propagation update equations as Gaussian mixture distributions, corresponding to an instance of the NBP algorithm. We suppress the iteration number τ to reduce clutter.

Variable to factor messages. Suppose that our factor to variable messages M_{si}(x_i) are each described by a Gaussian mixture distribution, which we will write in both the moment and information forms:

  M_{si}(x_i) = Σ_l w^l_{si} N(x_i; m^l_{si}, ν^l_{si}) = Σ_l w^l_{si} N^{-1}(x_i; β^l_{si}, α^l_{si}).   (5)

Then the variable to factor message M_{is}(x_s) is given by

  M_{is}(x_s) = Σ_l w^l_{is} N(x_s; m^l_{is}, ν^l_{is}) = Σ_l w^l_{is} N^{-1}(x_s; β^l_{is}, α^l_{is}),   (6)

where l refers to a vector of indices [l_t], one for each neighbor t, and

  α^l_{is} = σ^{-2} + Σ_{t ∈ Γ_i \ s} α^{l_t}_{ti},  β^l_{is} = y_i σ^{-2} + Σ_{t ∈ Γ_i \ s} β^{l_t}_{ti},
  w^l_{is} = N(x*; y_i, σ²) ∏_{t ∈ Γ_i \ s} w^{l_t}_{ti} N^{-1}(x*; β^{l_t}_{ti}, α^{l_t}_{ti}) / N^{-1}(x*; β^l_{is}, α^l_{is}).   (7)

The moment parameters are then given by ν^l_{is} = (α^l_{is})^{-1} and m^l_{is} = β^l_{is} (α^l_{is})^{-1}. The value x* is an arbitrarily chosen point, often taken to be the mean m^l_{is} for numerical reasons.
Factor to variable messages. Assume that the incoming messages are of the form (6), and note that the factor g_s(·) can be rewritten in a summation form, g_s(x_s) = Σ_{b_s} δ(H_s x − b_s), which includes all possible integer values b_s. If we condition on the value of both the integer b_s and the indices of the incoming messages, again formed into a vector l = [l_j] with an element for each variable j, we can see that g_s enforces the linear equality H_{si} x_i = b_s − Σ_j H_{sj} x_j. Using standard Gaussian identities in the moment parameterization and summing over all possible b_s ∈ Z and l, we obtain

  M_{si}(x_i) = Σ_{b_s} Σ_l w^l_{si} N(x_i; m^l_{si}, ν^l_{si}) = Σ_{b_s} Σ_l w^l_{si} N^{-1}(x_i; β^l_{si}, α^l_{si}),   (8)

where

  ν^l_{si} = H_{si}^{-2} ( Σ_{j ∈ Γ_s \ i} H_{sj}² ν^{l_j}_{js} ),
  m^l_{si} = H_{si}^{-1} ( −b_s + Σ_{j ∈ Γ_s \ i} H_{sj} m^{l_j}_{js} ),
  w^l_{si} = ∏_{j ∈ Γ_s \ i} w^{l_j}_{js},   (9)

and the information parameters are given by α^l_{si} = (ν^l_{si})^{-1} and β^l_{si} = m^l_{si} (ν^l_{si})^{-1}. (Since b_s ranges over all of Z, the sign convention on b_s is immaterial.)

Notice that (8) matches the initial assumption of a Gaussian mixture given in (5). At each iteration, the exact messages remain mixtures of Gaussians, and the algorithm itself corresponds to an instance of NBP. As in any NBP implementation, we also see that the number of components increases at each iteration, so we must eventually approximate the messages using some finite number of components. To date, the work on LDLC decoders has focused on deterministic approximations [2], [15]–[17], often greedy in nature. However, the existing literature on NBP contains a large number of deterministic and stochastic approximation algorithms [9]–[13]. These algorithms can use spatial data structures such as KD-trees to improve efficiency and avoid the pitfalls that come with greedy optimization.
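The moment-matching step behind (9) can be sanity-checked by simulation: drawing each neighbor variable from its component Gaussian and solving the linear constraint for x_i should reproduce the mean and variance in (9). The coefficients below are arbitrary toy values chosen for illustration; as noted above, the sign of b_s is immaterial because (8) sums it over all of Z:

```python
import math
import random

random.seed(1)

# Monte Carlo check of the moment parameters in (9): if
# x_i = H_si^{-1} * (-b_s + sum_j H_sj x_j), with independent x_j ~ N(m_j, v_j),
# then x_i has exactly the mean and variance given in (9). Toy coefficients.
H_si, b_s = -0.8, 2
nbrs = [(0.5, 0.3, 0.4), (1.0, -0.2, 0.1)]   # (H_sj, m_j, v_j) per neighbor j

m_pred = (-b_s + sum(Hj * mj for Hj, mj, _ in nbrs)) / H_si
v_pred = sum(Hj ** 2 * vj for Hj, _, vj in nbrs) / H_si ** 2

samples = [(-b_s + sum(Hj * random.gauss(mj, math.sqrt(vj))
                       for Hj, mj, vj in nbrs)) / H_si
           for _ in range(200_000)]
m_mc = sum(samples) / len(samples)
v_mc = sum((s - m_mc) ** 2 for s in samples) / len(samples)
print(round(m_pred, 2), round(m_mc, 2))   # the two should agree closely
```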
Estimating the codewords. The original codeword x can be estimated using its belief, an approximation to its marginal distribution given the constraints and observations:

  B_i(x_i) = f_i(x_i) ∏_{s ∈ Γ_i} M_{si}(x_i).   (10)

The value of each x_i can then be estimated as either the mean or mode of the belief, e.g., x*_i = arg max B_i(x_i), and the integer-valued information vector estimated as b* = round(H x*).

III. A PAIRWISE CONSTRUCTION OF THE LDLC DECODER

Before introducing our novel lattice code construction, we demonstrate that the LDLC decoder can be equivalently constructed using a pairwise graphical model. This construction will have important consequences when relating the LDLC decoder to Gaussian belief propagation (Section IV-B) and when understanding convergence properties (Section V).

Theorem 1: The LDLC decoder algorithm is an instance of the NBP algorithm executed on the following pairwise graphical model. Denote the number of LDLC variable nodes by n and the number of check nodes by k.¹ We construct a new graphical model with n + k variables, X = (x_1, ..., x_{n+k}), as follows. To match the LDLC notation we use the index letters i, j, ... to denote variables 1, ..., n and the letters s, t, ... to denote new variables n + 1, ..., n + k, which will take the place of the check node factors in the original formulation. We further define the self and edge potentials:

  ψ_i(x_i) ∝ N(x_i; y_i, σ²),  ψ_s(x_s) ≜ Σ_{b_s = −∞}^{∞} N(x_s; b_s, 0),  ψ_{i,s}(x_i, x_s) ≜ exp(−x_i H_{is} x_s).   (11)

Proof: The proof is constructed by substituting the edge and self potentials (11) into the belief propagation update rules. Since we are using a pairwise graphical model, we do not have two separate update rules from variables to factors and from factors to variables.
However, to recover the LDLC update rules, we make an artificial distinction between variable and factor nodes: the nodes x_i will be shown to correspond to the variable nodes of the LDLC decoder, and the nodes x_s to its factor nodes.

a) LDLC variable to factor nodes: We start with the integral-product rule computed at the x_i nodes:

  M_{is}(x_s) = ∫_{x_i} ψ_{i,s}(x_i, x_s) ψ_i(x_i) ∏_{t ∈ Γ_i \ s} M_{ti}(x_i) dx_i.

(¹ Our construction extends the square parity check matrix assumption to the general case.)

The product of Gaussian mixtures ∏_{t ∈ Γ_i \ s} M_{ti}(x_i) is itself a mixture of Gaussians, where each component in the output mixture is the product of a single Gaussian selected from each input mixture M_{ti}(x_i).

Lemma 2 (Gaussian product) [18, Claim 10], [2, Claim 2]: Given p Gaussians N(m_1, v_1), ..., N(m_p, v_p), their product is proportional to a Gaussian N(m̄, v̄) with

  v̄^{-1} = Σ_{i=1}^{p} 1/v_i = Σ_{i=1}^{p} α_i,  m̄ = ( Σ_{i=1}^{p} m_i/v_i ) v̄ = ( Σ_{i=1}^{p} β_i ) v̄.

Proof: Given in [18, Claim 10].

Using the Gaussian product lemma, the l-th mixture component (l = [l_t], one index per incoming message) of the message from variable node i to factor node s is a single Gaussian given by

  M^l_{is}(x_s) = ∫_{x_i} ψ_{is}(x_i, x_s) ( ψ_i(x_i) ∏_{t ∈ Γ_i \ s} M^τ_{ti}(x_i) ) dx_i
  = ∫_{x_i} ψ_{is}(x_i, x_s) ψ_i(x_i) exp{ −½ x_i² ( Σ_{t ∈ Γ_i \ s} α^{l_t}_{ti} ) + x_i ( Σ_{t ∈ Γ_i \ s} β^{l_t}_{ti} ) } dx_i
  = ∫_{x_i} ψ_{is}(x_i, x_s) exp( −½ x_i² σ^{-2} + x_i y_i σ^{-2} ) exp{ −½ x_i² ( Σ_{t ∈ Γ_i \ s} α^{l_t}_{ti} ) + x_i ( Σ_{t ∈ Γ_i \ s} β^{l_t}_{ti} ) } dx_i
  = ∫_{x_i} ψ_{is}(x_i, x_s) exp{ −½ x_i² ( σ^{-2} + Σ_{t ∈ Γ_i \ s} α^{l_t}_{ti} ) + x_i ( y_i σ^{-2} + Σ_{t ∈ Γ_i \ s} β^{l_t}_{ti} ) } dx_i.

We obtain a formulation equivalent to the LDLC variable node update rule given in (7).
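Lemma 2 is easy to verify numerically: the pointwise product of several Gaussian densities, divided by the Gaussian N(m̄, v̄) given by the lemma, should be constant in x. The parameters below are arbitrary illustrative values:

```python
import math

# Numerical check of Lemma 2: the pointwise product of Gaussian densities is
# proportional to N(m_bar, v_bar), with precisions and precision-weighted
# means summed. Parameters are arbitrary.
def npdf(x, m, v):
    return math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)

params = [(0.0, 1.0), (1.0, 0.5), (-0.5, 2.0)]
v_bar = 1.0 / sum(1.0 / v for _, v in params)
m_bar = v_bar * sum(m / v for m, v in params)

# The ratio product / N(m_bar, v_bar) must not depend on x.
ratios = [math.prod(npdf(x, m, v) for m, v in params) / npdf(x, m_bar, v_bar)
          for x in (-1.0, 0.0, 0.7, 2.0)]
print(max(ratios) / min(ratios))   # ≈ 1, up to floating point error
```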
Now we use the following lemma for computing the integral:

Lemma 3 (Gaussian integral): Given a (one dimensional) Gaussian φ_i(x_i) ∝ N(x_i; m, v), the integral ∫_{x_i} ψ_{i,s}(x_i, x_s) φ_i(x_i) dx_i, where ψ_{i,s}(x_i, x_s) ≜ exp(−x_i H_{is} x_s) is a (two dimensional) Gaussian potential, is proportional to a (one dimensional) Gaussian N^{-1}(H_{is} m, H_{is}² v).

Proof:

  ∫_{x_i} ψ_{is}(x_i, x_s) φ_i(x_i) dx_i ∝ ∫_{x_i} exp(−x_i H_{is} x_s) exp{ −½ (x_i − m)²/v } dx_i
  = ∫_{x_i} exp( −½ x_i²/v + (m/v − H_{is} x_s) x_i ) dx_i ∝ exp( (m/v − H_{is} x_s)² / (−2v) ),

where the last transition was obtained using the Gaussian integral

  ∫_{−∞}^{∞} exp(−a x² + b x) dx = √(π/a) exp(b²/(4a)).

Expanding,

  exp( (m/v − H_{is} x_s)² / (−2v) ) = exp{ −½ v (m/v − H_{is} x_s)² }
  = exp{ −½ (H_{is}² v) x_s² + (H_{is} m) x_s − ½ v (m/v)² } ∝ exp{ −½ (H_{is}² v) x_s² + (H_{is} m) x_s }.

Using the result of Lemma 3, the message sent from a variable node to a factor node is a mixture of Gaussians, where each Gaussian component l is given by

  M^l_{is}(x_s) = N^{-1}(x_s; H_{is} m^l_{is}, H_{is}² ν^l_{is}).

Note that in the LDLC terminology, the integral operation defined in Lemma 3 is called stretching. In the LDLC algorithm, the stretching is computed by the factor node as it receives the message from the variable node; in NBP, the integral operation is computed at the variable nodes.

b) LDLC factor to variable nodes: We start again with the BP integral-product rule and handle the x_s variables computed at the factor nodes:

  M_{si}(x_i) = ∫_{x_s} ψ_{is}(x_i, x_s) ψ_s(x_s) ∏_{j ∈ Γ_s \ i} M_{js}(x_s) dx_s.

Note that the product ∏_{j ∈ Γ_s \ i} M^τ_{js}(x_s) is a mixture of Gaussians, where each component is computed by selecting a single Gaussian l_j from each message M^τ_{js}, j ∈ Γ_s \ i, and applying the product lemma (Lemma 2).
We get

  M^l_{si}(x_i) = ∫_{x_s} ψ_{is}(x_i, x_s) ψ_s(x_s) exp{ −½ x_s² ( Σ_{k ∈ Γ_s \ i} H_{ks}² ν^{l_k}_{ks} ) + x_s ( Σ_{k ∈ Γ_s \ i} H_{ks} m^{l_k}_{ks} ) } dx_s.   (12)

We continue by computing the product with the self potential ψ_s(x_s):

  = Σ_{b_s = −∞}^{∞} ∫_{x_s} ψ_{is}(x_i, x_s) exp(b_s x_s) exp{ −½ x_s² ( Σ_{k ∈ Γ_s \ i} H_{ks}² ν^{l_k}_{ks} ) + x_s ( Σ_{k ∈ Γ_s \ i} H_{ks} m^{l_k}_{ks} ) } dx_s
  = Σ_{b_s = −∞}^{∞} ∫_{x_s} ψ_{is}(x_i, x_s) exp{ −½ x_s² ( Σ_{k ∈ Γ_s \ i} H_{ks}² ν^{l_k}_{ks} ) + x_s ( b_s + Σ_{k ∈ Γ_s \ i} H_{ks} m^{l_k}_{ks} ) } dx_s
  = Σ_{b_s = −∞}^{∞} ∫_{x_s} ψ_{is}(x_i, x_s) exp{ −½ x_s² ( Σ_{k ∈ Γ_s \ i} H_{ks}² ν^{l_k}_{ks} ) + x_s ( −b_s + Σ_{k ∈ Γ_s \ i} H_{ks} m^{l_k}_{ks} ) } dx_s,

where the last step reindexes b_s → −b_s, which leaves the sum over Z unchanged. Finally, we use Lemma 3 to compute the integral and get

  = Σ_{b_s = −∞}^{∞} exp{ −½ x_i² H_{si}² ( Σ_{k ∈ Γ_s \ i} H_{ks}² ν^{l_k}_{ks} )^{-1} + x_i H_{si} ( Σ_{k ∈ Γ_s \ i} H_{ks}² ν^{l_k}_{ks} )^{-1} ( −b_s + Σ_{k ∈ Γ_s \ i} H_{ks} m^{l_k}_{ks} ) }.

It is easy to verify that this formulation is identical to the LDLC update rules (9).

IV. USING SPARSE GENERATOR MATRICES

We propose a new family of LDLC codes where the generator matrix G is sparse, in contrast to the original LDLC codes where the parity check matrix H is sparse. Table I outlines the properties of our proposed decoder. Our decoder is designed to be more efficient than the original LDLC decoder since, as we will soon show, the encoding, initialization and final operations are all more efficient in the NBP decoder. We are currently in the process of fully evaluating our decoder's performance relative to the LDLC decoder; initial results are reported in Section VI.

Fig. 1. The approximating function ψ_s^relax(x) for the binary case.
A. The NBP decoder

We use an undirected bipartite graph, with variable nodes {b_s} representing each element of the vector b, and observation nodes {z_i} for each element of the observation vector y. We define the self potentials ψ_i(z_i) and ψ_s(b_s) as follows:

  ψ_i(z_i) ∝ N(z_i; y_i, σ²),  ψ_s(b_s) = { 1 if b_s ∈ Z, 0 otherwise },   (13)

and the edge potentials:

  ψ_{i,s}(z_i, b_s) ≜ exp(−z_i G_{is} b_s).

Each variable node b_s is connected to the observation nodes as defined by the encoding matrix G. Since G is sparse, the resulting bipartite graph is sparse as well. As with LDPC decoders [7], the belief propagation or sum-product algorithm [6], [19] provides a powerful approximate decoding scheme.

For computing the MAP assignment of the transmitted vector b using non-parametric belief propagation, we perform the following relaxation, which is one of the main novel contributions of this paper. Recall that in the original problem, the entries of b are only allowed to be integers. We relax the function ψ_s(b_s) from a delta function to a mixture of Gaussians centered around the integers:

  ψ_s^relax(b_s) ∝ Σ_{i ∈ Z} N(i, v).

The variance parameter v controls the approximation quality; as v → 0, the approximation quality is higher. Figure 1 plots an example relaxation of ψ_s(b_s) in the binary case.

We have now defined the self and edge potentials which are the input to the NBP algorithm, and it is possible to run NBP using (2) to get an approximate MAP solution to (1). The derivation of the NBP decoder update rules is similar to the one done for the LDLC decoder, and is thus omitted. However, there are several important differences that should be addressed. We start by analyzing the algorithm's efficiency.
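The relaxed integer prior can be sketched directly; the sum over Z must be truncated to a finite window for computation, and the window and variance values below are illustrative only:

```python
import math

# Sketch of the relaxed prior: the delta train over the integers is replaced by
# a Gaussian mixture psi_relax(b) = sum_i N(b; i, v), truncated to a finite
# window for computation (window and v are illustrative).
def psi_relax(b, v, support=range(-5, 6)):
    return sum(math.exp(-0.5 * (b - i) ** 2 / v) / math.sqrt(2 * math.pi * v)
               for i in support)

for v in (0.1, 0.01):
    on, off = psi_relax(1.0, v), psi_relax(0.5, v)
    print(v, on / off)   # the prior concentrates on the integers as v shrinks
```

The growing ratio between the prior's value at an integer and at a half-integer illustrates the statement that smaller v gives a tighter approximation of the exact integer constraint.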
We assume that the input to our decoder is the sparse matrix G; there is no need to compute the encoding matrix G = H^{-1} as done in the LDLC decoder. Naively, this initialization costs O(n³). Encoding in our scheme is done as in LDLC, by computing the multiplication Gb. However, since G is sparse in our case, the encoding cost is O(nd), where d ≪ n is the average number of non-zero entries in each row. Encoding in the LDLC method costs O(n²), since even if H is sparse, G is typically dense. After convergence, the LDLC decoder multiplies by the matrix H and rounds the result to get b; this operation costs O(nd), where d is the average number of non-zero entries in H. In contrast, in the NBP decoder, b is computed directly at the variable nodes.

TABLE I. COMPARISON OF LDLC DECODER VS. NBP DECODER

  Algorithm            | LDLC       | NBP
  Initialization op.   | G = H^{-1} | None
  Initialization cost  | O(n³)      | -
  Encoding operation   | Gb         | Gb
  Encoding cost        | O(n²)      | O(nd), d ≪ n
  Post-run operation   | Hx         | None
  Post-run cost        | O(nd)      | -

TABLE II. INHERENT DIFFERENCES BETWEEN LDLC AND NBP DECODERS

  Algorithm            | LDLC decoder       | NBP decoder
  Update rules         | Two                | One
  Sparsity assumption  | Decoding mat. H    | Encoding mat. G
  Algorithm derivation | Custom             | Standard NBP
  Graphical model      | Factor graph       | Pairwise potentials
  Related operations   | Stretch/unstretch  | Integral
                       | Convolution        | Product
                       | Periodic extension | Product

Besides efficiency, there are several inherent differences between the two algorithms; a summary is given in Table II. We use a standard formulation of BP with pairwise potentials, which means there is a single update rule rather than two update rules from left to right and from right to left. We have shown that the convolution operation in the LDLC decoder relates to the product step of the BP algorithm.
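The O(nd) encoding cost discussed above is just sparse matrix-vector multiplication. A minimal row-wise sketch follows, with a random sparsity pattern and values chosen purely for illustration (a real decoder would use a sparse-matrix library):

```python
import random

random.seed(2)

# Encoding x = Gb with a sparse G touches only the nonzeros: n*d multiplies
# instead of n^2. Minimal row-wise sparse representation (illustrative only).
n, d = 1000, 5
rows = [sorted(random.sample(range(n), d)) for _ in range(n)]     # column indices
vals = [[random.choice((-1.0, 0.5, 1.0)) for _ in range(d)] for _ in range(n)]

b = [random.randint(-2, 2) for _ in range(n)]                     # integer vector
x = [sum(v * b[j] for j, v in zip(cols, rvals))                   # n*d multiplies
     for cols, rvals in zip(rows, vals)]
print(len(x))   # 1000
```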
The stretch/unstretch operations in the LDLC decoder are implemented using the integral step of the BP algorithm. The periodic extension operation in the LDLC decoder is incorporated into our decoder algorithm through the self potentials.

B. The relation of the NBP decoder to GaBP

In this section we show that a simplified version of the NBP decoder coincides with the GaBP algorithm. The simplified version is obtained when, instead of using our proposed Gaussian mixture prior, we initialize the NBP algorithm with a prior composed of a single Gaussian.

Theorem 4: By initializing ψ_s(b_s) ∼ N(0, 1) to be a (single) Gaussian, the NBP decoder update rules are identical to the update rules of the GaBP algorithm.

Lemma 5: By initializing ψ_s(b_s) to be a (single) Gaussian, the messages of the NBP decoder are single Gaussians.

Proof: Assume both self potentials ψ_s(b_s) and ψ_i(z_i) are initialized to single Gaussians; then every message of the NBP decoder algorithm remains a single Gaussian. This is because the product (3) of single Gaussians is a single Gaussian, and the integral (4) of a single Gaussian produces a single Gaussian as well.

Now we are able to prove Theorem 4.

Proof: We start by writing the update rules of the variable nodes. We initialize the self potentials of the variable nodes to ψ_i(z_i) = N(z_i; y_i, σ²). Now we substitute, using the product lemma and Lemma 3.
  M_{is}(b_s) = ∫_{z_i} ψ_{i,s}(z_i, b_s) ( ψ_i(z_i) ∏_{t ∈ Γ_i \ s} M_{ti}(z_i) ) dz_i
  = ∫_{z_i} ψ_{i,s}(z_i, b_s) exp( −½ z_i² σ^{-2} + y_i z_i σ^{-2} ) ∏_{t ∈ Γ_i \ s} exp( −½ z_i² α_{ti} + z_i β_{ti} ) dz_i
  = ∫_{z_i} ψ_{i,s}(z_i, b_s) exp( −½ z_i² ( σ^{-2} + Σ_{t ∈ Γ_i \ s} α_{ti} ) + z_i ( σ^{-2} y_i + Σ_{t ∈ Γ_i \ s} β_{ti} ) ) dz_i
  ∝ exp( −½ b_s² G_{is}² ( σ^{-2} + Σ_{t ∈ Γ_i \ s} α_{ti} )^{-1} + b_s G_{is} ( σ^{-2} + Σ_{t ∈ Γ_i \ s} α_{ti} )^{-1} ( σ^{-2} y_i + Σ_{t ∈ Γ_i \ s} β_{ti} ) ).

Now we recover the GaBP update rules by substituting J_{ii} ≜ σ^{-2}, J_{is} ≜ G_{is}, h_i ≜ σ^{-2} y_i:

  α_{is} = −J_{is}² α_{i\s}^{-1} = −J_{is}² ( J_{ii} + Σ_{t ∈ Γ_i \ s} α_{ti} )^{-1},
  β_{is} = −J_{is} α_{i\s}^{-1} β_{i\s} = −J_{is} α_{i\s}^{-1} ( h_i + Σ_{t ∈ Γ_i \ s} β_{ti} ).

We continue by expanding

  M_{si}(z_i) = ∫_{b_s} ψ_{i,s}(z_i, b_s) ψ_s(b_s) ∏_{k ∈ Γ_s \ i} M^τ_{ks}(b_s) db_s.

Similarly, using the initializations ψ_s(b_s) = exp{−½ b_s²} and ψ_{i,s}(z_i, b_s) ≜ exp(−z_i G_{is} b_s):

  = ∫_{b_s} ψ_{i,s}(z_i, b_s) exp{−½ b_s²} ∏_{k ∈ Γ_s \ i} exp( −½ b_s² α_{ks} + b_s β_{ks} ) db_s
  = ∫_{b_s} ψ_{i,s}(z_i, b_s) exp{ −½ b_s² ( 1 + Σ_{k ∈ Γ_s \ i} α_{ks} ) + b_s ( Σ_{k ∈ Γ_s \ i} β_{ks} ) } db_s
  ∝ exp{ −½ z_i² G_{is}² ( 1 + Σ_{k ∈ Γ_s \ i} α_{ks} )^{-1} + z_i G_{is} ( 1 + Σ_{k ∈ Γ_s \ i} α_{ks} )^{-1} ( Σ_{k ∈ Γ_s \ i} β_{ks} ) }.

Now we recover the GaBP update rules by substituting J_{ss} ≜ 1, J_{si} ≜ G_{is}, h_s ≜ 0:

  α_{si} = −J_{si}² α_{s\i}^{-1} = −J_{si}² ( J_{ss} + Σ_{k ∈ Γ_s \ i} α_{ks} )^{-1},
  β_{si} = −J_{si} α_{s\i}^{-1} β_{s\i} = −J_{si} α_{s\i}^{-1} ( h_s + Σ_{k ∈ Γ_s \ i} β_{ks} ).

Tying the results together, in the case of a single-Gaussian self potential, the NBP decoder is initialized using the following inverse covariance matrix:

  J ≜ [ I      G
        G^T    diag(σ^{-2}) ].

We have shown that a simpler version of the NBP decoder, where the self potentials are initialized to single Gaussians, boils down to the GaBP algorithm.
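This reduction can be exercised end-to-end: with a single-Gaussian prior on b, decoding is a least-squares problem, whose solution the small check below computes directly via the normal equations and then rounds. G, b, and the noise level are illustrative random values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy check: with a single-Gaussian prior, decoding reduces to least squares.
# G, b_true, and the noise level are illustrative.
G = rng.standard_normal((6, 4))
b_true = rng.integers(-2, 3, size=4)
y = G @ b_true + 0.01 * rng.standard_normal(6)

b_ls = np.linalg.solve(G.T @ G, G.T @ y)   # b* = (G^T G)^{-1} G^T y
print(np.round(b_ls))                      # recovers b_true when the noise is small
```

At higher noise levels the single-Gaussian prior becomes a poor model of the integer constraint, which is the motivation for the Gaussian mixture relaxation above.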
It is known [ 20] that the GaBP a lgorithm solves th e fo llowing least square pr oblem min b ∈ R n k Gb − y k assumin g a Gaussian prior on b , p ( b ) ∼ N (0 , 1) , we get the MMSE so lution b ∗ = ( G T G ) − 1 G T y . Note the relation to (1). The difference is that we re lax the LDLC d ecoder assump tion that b ∈ Z n , with b ∈ R n . Getting back to the NBP deco der , Figure 2 compa res th e two dif ferent pr iors used, in the NBP dec oder and in the GaBP algo rithm, for the bipolar case. It is clear that the Gaussian prio r ass umption on b is not accurate eno ugh. In the NBP decod er , we relax the delta fun ction (13) to a Gaussian mixture prior compo sed o f mixtures centered around Integers. Overall, the NBP decoder algorithm can b e thou ght o f as a n extension of the GaBP algorithm with mo re acc urate prior s. −4 −3 −2 −1 0 1 2 3 4 0 0.002 0.004 0.006 0.008 0.01 0.012 0.014 0.016 0.018 0.02 GaBP prior NBP prior Fig. 2. Compa ring GaBP prior to the prior we use in the NBP decod er for the bipolar case ( b ∈ {− 1 , 1 } ). V . C O N V E R G E N C E A N A L Y S I S The behavior o f the belief propag ation a lgorithm h as been extensi vely studied in the literature, resultin g in sufficient condition s for con vergence in th e discrete case [21 ] an d in jointly Gau ssian mod els [22 ]. Howe ver, little is known about the b ehavior of BP in mo re g eneral continu ous sy stems. The origin al L DLC paper [2 ] gi ves some ch aracterization of its c on vergen ce pr operties un der sev er al simplif ying as- sumptions. Relaxing som e of these assum ptions and using our pair wise factor for mulation, we show that the cond itions for GaBP co n vergence can also b e ap plied to y ield new conv e rgence p roperties for the L DLC de coder . 
The most important assumption made in the LDLC convergence analysis [2] is that the system converges to a set of "consistent" Gaussians; specifically, that at all iterations τ beyond some number τ₀, only a single integer b_s contributes to the Gaussian mixture. Notionally, this corresponds to the idea that the decoded information values themselves are well resolved, and the convergence being analyzed is with respect to the transmitted bits x_i. Under this (potentially strong) assumption, sufficient conditions are given for the decoder's convergence. The authors also assume that H is a Latin square in which each row and column contains some permutation of the scalar values h₁ ≥ … ≥ h_d, up to an arbitrary sign. Four conditions are given, all of which must hold to ensure convergence:

• LDLC-I: det(H) = det(G) = 1.
• LDLC-II: α ≤ 1, where α ≜ (Σ_{i=2}^d h_i²) / h₁².
• LDLC-III: the spectral radius satisfies ρ(F) < 1, where F is an n × n matrix defined by
  F_{k,l} = h_{r_k} h_{r_l} if k ≠ l and there exists a row r of H for which |H_{r,l}| = h₁ and H_{r,k} ≠ 0, and F_{k,l} = 0 otherwise.
• LDLC-IV: the spectral radius satisfies ρ(H̃) < 1, where H̃ is derived from H by permuting the rows so that the h₁ elements are placed on the diagonal, dividing each row by the appropriate diagonal element (+h₁ or −h₁), and then nullifying the diagonal.

Using our new results we are now able to provide new convergence conditions for the LDLC decoder.

Corollary 6: The convergence of the LDLC decoder depends on the properties of the following matrix:

J ≜ [ 0     H
      Hᵀ    diag(1/σ²) ].    (14)

Proof: In Theorem 1 we have shown an equivalence between the LDLC algorithm and NBP initialized with the following potentials:

ψ_i(x_i) ∝ N(x_i; y_i, σ²),   ψ_s(x_s) ≜ Σ_{b_s=−∞}^{∞} N⁻¹(x_s; b_s, 0),   ψ_{i,s}(x_i, x_s) ≜ exp(x_i H_is x_s).   (15)
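The LDLC-IV condition above is straightforward to check mechanically. A small sketch (toy Latin square H and helper name are ours) that permutes the magnitude-h₁ entries onto the diagonal, normalizes, nullifies the diagonal, and computes the spectral radius:

```python
import numpy as np

def ldlc_iv_radius(H):
    """Spectral radius of H~ as defined by LDLC-IV: rows permuted so the
    magnitude-h1 entry of each row sits on the diagonal, each row divided
    by its (+h1 or -h1) diagonal element, diagonal then zeroed."""
    cols = np.argmax(np.abs(H), axis=1)   # column of the h1 entry per row
    Hp = H[np.argsort(cols)]              # permute rows: h1 on the diagonal
    Ht = Hp / np.diag(Hp)[:, None]        # divide each row by +-h1
    np.fill_diagonal(Ht, 0.0)             # nullify the diagonal
    return np.max(np.abs(np.linalg.eigvals(Ht)))

# Toy Latin square with generating sequence (1, 0.5, 0.3):
H = np.array([[1.0, 0.5, 0.3],
              [0.3, 1.0, 0.5],
              [0.5, 0.3, 1.0]])
print(ldlc_iv_radius(H))  # radius 0.8 < 1, so LDLC-IV holds for this H
```

The radius is invariant to row permutations of H, as the helper re-permutes internally.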
We have further discussed the relation between the self potential ψ_s(x_s) and the periodic extension operation. We have also shown in Theorem 4 that if ψ_s(x_s) is a single Gaussian (equivalent to the assumption of "consistent" behavior), the distribution is jointly Gaussian, and rather than NBP (with Gaussian mixture messages) we obtain GaBP (with Gaussian messages). Convergence of the GaBP algorithm depends on the inverse covariance matrix J and not on the shift vector h. Now we are able to construct the appropriate inverse covariance matrix J based on the pairwise factors given in Theorem 1. The matrix J is a 2 × 2 block matrix, where the check variables x_s are assigned the upper rows and the original variables are assigned the lower rows. The entries can be read off from the quadratic terms of the potentials (15), with the only non-zero entries corresponding to the pairs (x_i, x_s) and the self potentials (x_i, x_i).

Based on Corollary 6 we can characterize the convergence of the LDLC decoder using the sufficient conditions for convergence of GaBP. Either one of the following two conditions is sufficient for convergence:

[GaBP-I] (walk-summability [22]): ρ(|I − D^{−1/2} J D^{−1/2}|) < 1, where D ≜ diag(J).
[GaBP-II] (diagonal dominance [8]): J is diagonally dominant, i.e. |J_ii| ≥ Σ_{j≠i} |J_ij| for all i.

A further difficulty arises from the fact that the upper left block of (14) is zero, which means that both [GaBP-I,II] fail to hold. There are three possible ways to overcome this.

1) Create an approximation to the original problem by setting the upper left block matrix of (14) to diag(ε), where ε > 0 is a small constant. The approximation becomes more accurate as ε decreases. In case either of [GaBP-I,II] holds for the fixed matrix, the "consistent Gaussians" converge to an approximate solution.
2) In case a permutation of J (14) exists for which either of [GaBP-I,II] holds for the permuted matrix, the "consistent Gaussians" converge to the correct solution.

3) Use preconditioning to create a new graphical model where the edge potentials are determined by the information matrix HHᵀ, ψ_{i,s}(x_i, x_s) ≜ exp(x_i {HHᵀ}_is x_s), and the self potentials of the x_i nodes are ψ_i(x_i) ≜ exp(−½ x_i² σ⁻² + x_i {Hy}_i). The proof of correctness of this construction is given in [23]. The benefit of this preconditioning is that the main diagonal of HHᵀ is surely non-zero. If either of [GaBP-I,II] holds for HHᵀ, the "consistent Gaussians" converge to the correct solution. However, the matrix HHᵀ may no longer be sparse, so we pay in decoder efficiency.

Overall, we have given two sufficient conditions for convergence, under the "consistent Gaussians" assumption, for the means and variances of the LDLC decoder. Our conditions are more general for two reasons. First, we present a single sufficient condition instead of four that must hold concurrently in the original LDLC work. Second, our convergence analysis does not assume Latin squares, or even square matrices, and makes no assumption about the sparsity of H. This extends the applicability of the LDLC decoder to other types of codes. Note that our convergence analysis relates to the means and variances of the Gaussian mixture messages. A remaining open problem is the convergence of the amplitudes, the relative heights of the different consistent Gaussians.

VI. EXPERIMENTAL RESULTS

In this section we report preliminary experimental results of our NBP-based decoder. Our implementation is general and not restricted to the LDLC domain. Specifically, recent work by Baron et al.
[5] has extensively tested our NBP implementation in the context of the related compressive sensing domain. Our Matlab code is available on the web [24].

We used code lengths of n = 100 and n = 1000, where the number of non-zeros in each row and each column is d = 3. Unlike LDLC Latin squares, which are formed using a generating sequence h_i, we selected the non-zero entries of the sparse encoding matrix G randomly from {−1, 1}. This construction further optimizes LDLC decoding, since bipolar entries avoid the integral computation (the stretch/unstretch operation). We used bipolar signaling, b ∈ {−1, 1}. We calculated the maximal noise level σ²_max using Poltyrev's generalized definition of channel capacity under the unrestricted power assumption [25]. For bipolar signaling, σ²_max = 4 det(G)^{2/n} / (2πe). When applied to lattices, the generalized capacity implies that there exists a lattice G of high enough dimension n that enables transmission with arbitrarily small error probability if and only if σ² < σ²_max.

Figure 3 plots the SER (symbol error rate) of the NBP decoder vs. the LDLC decoder for code lengths n = 100 and n = 1000. The x-axis represents the distance from capacity in dB, calculated using Poltyrev's equation. As can be seen, our novel NBP decoder has better SER for n = 100 at all noise levels. For n = 1000 we have better performance at high noise levels, and comparable performance up to 0.3 dB from LDLC at low noise levels. We are currently extending our implementation to support code lengths of up to n = 100,000; initial performance results are very promising.

Fig. 3. NBP vs. LDLC decoder performance.
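The noise threshold and the distance-from-capacity axis described above are simple to compute. A sketch (function names are ours; the formula is taken from the text) for the bipolar case:

```python
import math

def sigma2_max(det_G, n):
    """Poltyrev noise threshold for bipolar signaling, as stated above:
    sigma^2_max = 4 * det(G)^(2/n) / (2*pi*e)."""
    return 4.0 * det_G ** (2.0 / n) / (2.0 * math.pi * math.e)

def distance_from_capacity_db(sigma2, det_G, n):
    """How far an operating noise level sits below the threshold, in dB
    (the x-axis of Fig. 3)."""
    return 10.0 * math.log10(sigma2_max(det_G, n) / sigma2)

s2max = sigma2_max(1.0, 100)   # det(G) = 1, consistent with LDLC-I
print(round(s2max, 4))                                    # ≈ 0.2342
print(distance_from_capacity_db(s2max / 2.0, 1.0, 100))   # ≈ 3.01 dB
```

Note that with det(G) = 1 the threshold is independent of n, so the same σ² grid can be reused across code lengths.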
VII. FUTURE WORK AND OPEN PROBLEMS

We have shown that the LDLC decoder is a variant of the NBP algorithm. This allowed us to use current research results from the non-parametric belief propagation domain to extend the decoder's applicability in several directions. First, we extended the algorithm's applicability from Latin squares to full column rank matrices (possibly non-square). Second, we extended the LDLC convergence analysis by deriving simpler conditions for convergence. Third, we presented a new family of LDLC codes based on sparse encoding matrices.

We are currently working on an open source implementation of the NBP-based decoder, using an undirected graphical model, including a complete comparison of performance with the LDLC decoder. Another area of future work is to examine the practical performance of the efficient Gaussian mixture product sampling algorithms developed in the NBP domain when applied to the LDLC decoder. As little is known about the convergence of the NBP algorithm, we plan to continue examining its convergence in different settings. Finally, we plan to investigate the applicability of the recent convergence fix algorithm [26] for supporting decoding matrices where the sufficient conditions for convergence do not hold.

ACKNOWLEDGMENT

D. Bickson would like to thank N. Sommer, M. Feder and Y. Yona from Tel Aviv University for interesting discussions and helpful insights regarding the LDLC algorithm and its implementation. D. Bickson was partially supported by grants NSF IIS-0803333, NSF NeTS-NBD CNS-0721591 and DARPA IPTO FA8750-09-1-0141. Danny Dolev is Incumbent of the Berthold Badler Chair in Computer Science. Danny Dolev was supported in part by the Israeli Science Foundation (ISF) Grant number 0397373.

REFERENCES

[1] R. G. Gallager, "Low density parity check codes," IRE Trans. Inform.
Theory, vol. 8, pp. 21–28, 1962.
[2] N. Sommer, M. Feder, and O. Shalvi, "Low-density lattice codes," IEEE Transactions on Information Theory, vol. 54, no. 4, pp. 1561–1585, 2008.
[3] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky, "Nonparametric belief propagation," in Conference on Computer Vision and Pattern Recognition (CVPR), June 2003.
[4] S. Sarvotham, D. Baron, and R. G. Baraniuk, "Compressed sensing reconstruction via belief propagation," Rice University, Houston, TX, Tech. Rep. TREE0601, July 2006.
[5] D. Baron, S. Sarvotham, and R. G. Baraniuk, "Bayesian compressive sensing via belief propagation," IEEE Trans. Signal Processing, to appear, 2009.
[6] F. Kschischang, B. Frey, and H. A. Loeliger, "Factor graphs and the sum-product algorithm," IEEE Trans. Inform. Theory, vol. 47, pp. 498–519, Feb. 2001.
[7] R. J. McEliece, D. J. C. MacKay, and J. F. Cheng, "Turbo decoding as an instance of Pearl's 'belief propagation' algorithm," IEEE J. Select. Areas Commun., vol. 16, pp. 140–152, Feb. 1998.
[8] Y. Weiss and W. T. Freeman, "Correctness of belief propagation in Gaussian graphical models of arbitrary topology," Neural Computation, vol. 13, no. 10, pp. 2173–2200, 2001.
[9] A. Ihler, E. Sudderth, W. Freeman, and A. Willsky, "Efficient multiscale sampling from products of Gaussian mixtures," in Neural Information Processing Systems (NIPS), Dec. 2003.
[10] M. Briers, A. Doucet, and S. S. Singh, "Sequential auxiliary particle belief propagation," in International Conference on Information Fusion, 2005, pp. 705–711.
[11] D. Rudoy and P. J. Wolf, "Multi-scale MCMC methods for sampling from products of Gaussian mixtures," in IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, 2007, pp. III-1201–III-1204.
[12] A. T. Ihler, Kernel Density Estimation Toolbox for MATLAB [online] http://www.ics.uci.edu/~ihler/code/.
[13] A. T. Ihler, J. W. Fisher, R. L. Moses, and A. S.
Willsky, "Nonparametric belief propagation for self-localization of sensor networks," IEEE Journal on Selected Areas in Communications, vol. 23, no. 4, pp. 809–819, 2005.
[14] A. T. Ihler, J. W. Fisher, and A. S. Willsky, "Particle filtering under communications constraints," in Statistical Signal Processing, 2005 IEEE/SP 13th Workshop on, 2005, pp. 89–94.
[15] B. Kurkoski and J. Dauwels, "Message-passing decoding of lattices using Gaussian mixtures," in IEEE Int. Symp. on Inform. Theory (ISIT), Toronto, Canada, July 2008.
[16] Y. Yona and M. Feder, "Efficient parametric decoder of low density lattice codes," in IEEE International Symposium on Information Theory (ISIT), Seoul, S. Korea, July 2009.
[17] B. M. Kurkoski, K. Yamaguchi, and K. Kobayashi, "Single-Gaussian messages and noise thresholds for decoding low-density lattice codes," in IEEE International Symposium on Information Theory (ISIT), Seoul, S. Korea, July 2009.
[18] D. Bickson, "Gaussian belief propagation: Theory and application," Ph.D. dissertation, The Hebrew University of Jerusalem, October 2008.
[19] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Francisco: Morgan Kaufmann, 1988.
[20] O. Shental, D. Bickson, P. H. Siegel, J. K. Wolf, and D. Dolev, "Gaussian belief propagation solver for systems of linear equations," in IEEE International Symposium on Information Theory (ISIT), Toronto, Canada, July 2008.
[21] A. T. Ihler, J. W. Fisher III, and A. S. Willsky, "Loopy belief propagation: Convergence and effects of message errors," Journal of Machine Learning Research, vol. 6, pp. 905–936, May 2005.
[22] D. M. Malioutov, J. K. Johnson, and A. S. Willsky, "Walk-sums and belief propagation in Gaussian graphical models," Journal of Machine Learning Research, vol. 7, Oct. 2006.
[23] D. Bickson, O. Shental, P. H. Siegel, J. K. Wolf, and D.
Dolev, "Gaussian belief propagation based multiuser detection," in IEEE International Symposium on Information Theory (ISIT), Toronto, Canada, July 2008.
[24] Gaussian Belief Propagation implementation in Matlab [online] http://www.cs.huji.ac.il/labs/danss/p2p/gabp/.
[25] G. Poltyrev, "On coding without restrictions for the AWGN channel," IEEE Trans. Inform. Theory, vol. 40, pp. 409–417, Mar. 1994.
[26] J. K. Johnson, D. Bickson, and D. Dolev, "Fixing convergence of Gaussian belief propagation," in IEEE International Symposium on Information Theory (ISIT), Seoul, South Korea, 2009.