Diffusion Leaky Zero Attracting Least Mean Square Algorithm and Its Performance Analysis


Authors: Long Shi, Haiquan Zhao

Long Shi 1,2 and Haiquan Zhao 1,2, Senior Member, IEEE
1 Key Laboratory of Magnetic Suspension Technology and Maglev Vehicle, Ministry of Education
2 School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031, People's Republic of China
Corresponding author: Haiquan Zhao (e-mail: hqzhao_swjtu@126.com).

ABSTRACT Recently, the leaky diffusion least-mean-square (DLMS) algorithm has received much attention because of its good performance under high input eigenvalue spread and low signal-to-noise ratio (SNR). However, the leaky DLMS algorithm may suffer from performance deterioration in sparse systems. To overcome this drawback, this paper develops the leaky zero attracting DLMS (LZA-DLMS) algorithm, which adds an l1-norm penalty to the cost function to exploit the sparsity of the system. The leaky reweighted zero attracting DLMS (LRZA-DLMS) algorithm is also put forward, which improves the estimation performance in the presence of time-varying sparsity. Instead of the l1-norm penalty, the reweighted version employs a log-sum function. Based on the weight error variance relation and several common assumptions, we analyze the transient behavior of the proposed algorithms and determine the stability bound of the step-size. Moreover, we carry out the steady-state theoretical analysis for the proposed algorithms. Simulations in the context of distributed network system identification illustrate that the proposed schemes outperform various existing algorithms and validate the accuracy of the theoretical results.

INDEX TERMS Leaky, low SNR, zero attracting, sparse system, weight error variance
I. INTRODUCTION
Parameter estimation plays an important role in adaptive signal processing and has attracted much attention over the past decades [1]-[8]. Recently, distributed estimation has gained popularity because it can extract information from data collected at the nodes of a network; it has been applied in various fields such as environment monitoring, disaster relief management, and source localization [9]-[12]. In previous work, two strategies have been extensively studied, namely the incremental strategy [13], [14] and the diffusion strategy [15], [16]. In the incremental strategy, each node communicates only with its adjacent node along a sequential path. This strategy has low power requirements owing to its simple communications [14], [17]. However, the incremental strategy is sensitive to link failure, which is frequently encountered in distributed networks [18]-[20]. In contrast, the diffusion strategy is more widely applied in distributed estimation due to its greater robustness against link failure [16], [21], [22]. In the diffusion strategy, each node communicates with all its neighbors and fuses the local estimates by a specific combination rule, such as the uniform, Metropolis, and relative-degree rules [23]-[25]. The implementation of the diffusion strategy contains a combination stage and an adaptation stage. Based on the order of these two stages, the Combine-then-Adapt (CTA) and the Adapt-then-Combine (ATC) diffusion strategies were developed [22]. Previous literature has illustrated that the ATC-type diffusion strategy outperforms the CTA-type strategy under a fair comparison [26]-[28].
In the family of diffusion algorithms, the diffusion least-mean-square (DLMS) algorithm was proposed first [15]. In [25], a more general diffusion algorithm was put forward, which allows measurement exchange in the adaptation stage. Subsequently, two modifications were developed to overcome the tradeoff between fast convergence rate and low steady-state misalignment [29], [30]. In [31], the diffusion affine projection algorithm (DAPA) was proposed to speed up convergence for colored inputs. To address other situations, various algorithms were investigated [20], [32]-[35]. In practical applications such as acoustic echo cancellation and active noise control, input signals usually exhibit high eigenvalue spread and low signal-to-noise ratio (SNR) [36], [37], which can reduce the convergence rate and even cause instability of the conventional DLMS algorithm [38]. To this end, the leaky DLMS algorithm was proposed [39], in which the leakage term prevents unbounded growth of the weight vectors and thereby ensures the stability of the algorithm [40]. Note also that in many practical applications the unknown system is sparse (a large proportion of its coefficients are zero or near-zero) [41]-[43], which degrades the performance of sparsity-agnostic algorithms. It has been reported that exploiting the sparsity of the unknown system enhances the estimation performance. Therefore, the zero attracting DLMS (ZA DLMS) and the reweighted zero attracting DLMS (RZA DLMS) algorithms were put forward, which accelerate the convergence of the near-zero coefficients in sparse systems [44], [45].
Motivated by the above facts, although the leaky DLMS algorithm is well suited to highly correlated inputs in a low-SNR environment, we expect to further improve its estimation performance in sparse systems. Thus, this paper proposes the leaky zero attracting DLMS (LZA-DLMS) algorithm, which adds a penalty to the cost function to exploit the sparsity of the system. As reported in [46], [47], the l0-norm constraint renders the cost function non-convex, which makes minimization of the l0-norm an NP-hard problem. Therefore, we employ the l1-norm penalty for the proposed algorithms. In addition, the leaky reweighted zero attracting DLMS (LRZA-DLMS) algorithm is proposed to handle time-varying sparsity. Unlike the LZA-DLMS algorithm, the reweighted version employs a log-sum function as the constraint. Furthermore, we develop their ATC and CTA variants, namely the ATC-LZA-DLMS, ATC-LRZA-DLMS, CTA-LZA-DLMS, and CTA-LRZA-DLMS algorithms. In terms of theoretical contribution, we present a detailed transient analysis of the proposed ATC-type algorithms by invoking several assumptions, which characterizes the evolution of the global weight error variance. We also determine the stability bound of the step-size. In addition, we carry out the steady-state theoretical analysis for the ATC-type algorithms. Finally, Monte Carlo (MC) simulations conducted in various scenarios demonstrate that the proposed algorithms outperform various existing algorithms and verify the accuracy of the theoretical analysis. The rest of the paper is organized as follows. In Section 2, we derive the LZA-DLMS and LRZA-DLMS algorithms, including their ATC and CTA versions.
In Section 3, we analyze the transient and steady-state behaviors of the ATC-type algorithms. We also discuss the stability bound of the step-size as well as the computational complexity. In Section 4, numerical simulations are conducted to test our findings and validate the theoretical analysis. In Section 5, we draw several conclusions.

Notation: We use normal letters to denote scalars and boldface letters to denote vectors or matrices. The mathematical notations used in what follows are summarized in Table I.

TABLE I
MATHEMATICAL NOTATIONS

Operator     Description
(.)^T        Transposition
⊗            Kronecker product
col{.}       Standard vectorization operation
vec(.)       Stack the columns of a matrix into a column vector
diag{.}      Diagonal matrix
E[.]         Expectation
Tr(.)        Trace operation
λ_max(.)     Largest eigenvalue of a matrix
|.|          Absolute value
||.||_p      l_p-norm
I_n          n-by-n identity matrix

II. PROPOSED ALGORITHMS
Consider a distributed network composed of N nodes over a geographic region. At time i, each node k has access to the realization {d_k(i), u_{k,i}} of a zero-mean random sequence. We are interested in estimating the unknown M-dimensional vector w^o that satisfies the linear model

    d_k(i) = u_{k,i} w^o + v_k(i)    (1)

where d_k(i) denotes the desired signal, u_{k,i} = [u_k(i), u_k(i-1), ..., u_k(i-M+1)] is the input (row) vector, w^o = [w^o(1), w^o(2), ..., w^o(M)]^T stands for the unknown weight vector, and v_k(i) represents the background noise with zero mean and variance σ²_{v,k}.

A. THE LZA-DLMS ALGORITHM
In what follows, we consider the case of no measurement exchange in the adaptation stage [16]. Let N_k denote the set of nodes in the neighborhood of node k (including itself).
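As a concrete illustration of the data model (1), the following minimal sketch generates one realization {d_k(i), u_{k,i}} per node. The variances, the random seed, and the single active tap are illustrative choices, not the paper's exact simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 20, 64                  # nodes and taps, matching the simulation section
w_o = np.zeros(M)              # sparse unknown system with a single active tap
w_o[rng.integers(M)] = 1.0

sigma2_u = rng.uniform(0.2, 1.0, size=N)    # per-node input variances (illustrative)
sigma2_v = rng.uniform(0.01, 0.04, size=N)  # per-node noise variances (illustrative)

def node_sample(k):
    """One realization {d_k(i), u_{k,i}} of the linear model (1)."""
    u = rng.normal(0.0, np.sqrt(sigma2_u[k]), size=M)  # white Gaussian regressor row
    v = rng.normal(0.0, np.sqrt(sigma2_v[k]))          # background noise v_k(i)
    return u @ w_o + v, u                              # d_k(i) = u_{k,i} w^o + v_k(i)

d, u = node_sample(0)
```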
To derive the ATC-LZA-DLMS algorithm, we minimize the following cost function

    J_k^{dist}(w) = E|d_k(i) - u_{k,i} w|² + γ ||w||² + Σ_{l ∈ N_k\{k}} b_{l,k} ||w - φ_l||² + ρ ||w||_1    (2)

where b_{l,k} denotes the combination rule, γ represents the positive leaky factor, ρ is an attracting factor that balances the penalty and the estimation error, and w and φ_l stand for the global and local estimates of w^o, respectively. Note that the combination rule b_{l,k} satisfies [22]

    b_{l,k} = 0 if l ∉ N_k,    1^T Ω = 1^T    (3)

where Ω is the N × N matrix with individual entries b_{l,k}. Taking the gradient of (2), we achieve

    ∇_w J_k^{dist} = -(R_{du,k} - R_{u,k} w) + γ w + Σ_{l ∈ N_k\{k}} b_{l,k} (w - φ_l) + ρ sign(w)    (4)

where R_{u,k} = E[u_{k,i}^T u_{k,i}] is assumed positive-definite (i.e., R_{u,k} > 0), and R_{du,k} = E[d_k(i) u_{k,i}^T] [15], [16]. Using the steepest-descent method for the estimate of w^o at node k, we obtain the following recursion

    w_{k,i} = w_{k,i-1} + μ (R_{du,k} - R_{u,k} w_{k,i-1}) - μγ w_{k,i-1} - μρ sign(w_{k,i-1}) + χ Σ_{l ∈ N_k\{k}} b_{l,k} (φ_l - w_{k,i-1})    (5)

where {μ, χ} denote positive step-sizes.

ATC-LZA-DLMS algorithm: By introducing an intermediate estimate φ_{k,i}, Eq. (5) can be split into two steps

    φ_{k,i} = (1 - μγ) w_{k,i-1} + μ (R_{du,k} - R_{u,k} w_{k,i-1}) - μρ sign(w_{k,i-1})
    w_{k,i} = φ_{k,i} + χ Σ_{l ∈ N_k\{k}} b_{l,k} (φ_l - w_{k,i-1})    (6)

We then replace φ_l in (6) with the intermediate estimate φ_{l,i}, which is available at node l at time i, and also replace w_{k,i-1} with the intermediate estimate φ_{k,i}.
This substitution is reasonable since φ_{k,i} contains more information than w_{k,i-1} [39], [45], and leads to

    φ_{k,i} = (1 - μγ) w_{k,i-1} + μ (R_{du,k} - R_{u,k} w_{k,i-1}) - μρ sign(w_{k,i-1})
    w_{k,i} = φ_{k,i} + χ Σ_{l ∈ N_k\{k}} b_{l,k} (φ_{l,i} - φ_{k,i})    (7)

Noting the second equation in (7) and combining it with (3), we get

    w_{k,i} = (1 - χ + χ b_{k,k}) φ_{k,i} + χ Σ_{l ∈ N_k\{k}} b_{l,k} φ_{l,i}    (8)

As is done in [16], if we introduce the coefficients

    a_{k,k} = 1 - χ + χ b_{k,k}  and  a_{l,k} = χ b_{l,k} for l ≠ k    (9)

we have

    φ_{k,i} = w_{k,i-1} + μ (R_{du,k} - R_{u,k} w_{k,i-1}) - μγ w_{k,i-1} - μρ sign(w_{k,i-1})
    w_{k,i} = Σ_{l ∈ N_k} a_{l,k} φ_{l,i}    (10)

where the coefficients a_{l,k} are real, non-negative, and satisfy

    a_{l,k} = 0 if l ∉ N_k,    1^T Γ = 1^T    (11)

where Γ is an N × N matrix with individual entries a_{l,k}. Now employing the instantaneous approximations

    R_{u,k} ≈ u_{k,i}^T u_{k,i},    R_{du,k} ≈ d_k(i) u_{k,i}^T    (12)

we obtain the update equations for the proposed ATC-LZA-DLMS algorithm

    φ_{k,i} = (1 - μγ) w_{k,i-1} + μ u_{k,i}^T (d_k(i) - u_{k,i} w_{k,i-1}) - μρ sign(w_{k,i-1})
    w_{k,i} = Σ_{l ∈ N_k} a_{l,k} φ_{l,i}    (13)

CTA-LZA-DLMS algorithm: We can also exchange the order of the two steps in (6) and carry out a similar derivation. Finally, we achieve the recursion for the CTA-LZA-DLMS algorithm

    φ_{k,i} = Σ_{l ∈ N_k} a_{l,k} w_{l,i-1}
    w_{k,i} = (1 - μγ) φ_{k,i} + μ u_{k,i}^T (d_k(i) - u_{k,i} φ_{k,i}) - μρ sign(φ_{k,i})    (14)

Remark 1. It can be observed from (13) and (14) that both the ATC-LZA-DLMS and CTA-LZA-DLMS algorithms have zero-attractors, denoted by ρ sign(w_{k,i-1}) and ρ sign(φ_{k,i}), whose function is to shrink small weight coefficients toward zero in a sparse system, speeding up the convergence rate.
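The two-step ATC recursion in (13) can be sketched as follows for all nodes at once. This is a minimal sketch, not the authors' code: the function name, the vectorized layout, and the default parameter values are illustrative choices.

```python
import numpy as np

def atc_lza_dlms_step(w_prev, d, U, A, mu=0.01, gamma=0.01, rho=1e-4):
    """One ATC-LZA-DLMS iteration, Eq. (13), over all N nodes.

    w_prev : (N, M) estimates w_{k,i-1};  d : (N,) desired signals d_k(i)
    U      : (N, M) regressors u_{k,i};   A : (N, N) combination matrix a_{l,k}
    """
    # adaptation: phi_k = (1 - mu*gamma) w_k + mu u_k^T (d_k - u_k w_k) - mu*rho*sign(w_k)
    e = d - np.einsum('km,km->k', U, w_prev)     # e_k(i) = d_k(i) - u_{k,i} w_{k,i-1}
    phi = (1.0 - mu * gamma) * w_prev + mu * e[:, None] * U \
          - mu * rho * np.sign(w_prev)
    # combination: w_k = sum_l a_{l,k} phi_l
    return A.T @ phi
```

With a uniform combination matrix (all entries 1/N) this reduces to averaging the intermediate estimates over the whole network.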
However, if the system to be estimated is not sparse, the zero-attractors will also attract the non-sparse coefficients toward zero regardless of their amplitudes, because the sign function only cares about the sign of a coefficient; this is undesirable for large weight coefficients. Therefore, an improved version is presented below.

B. THE LRZA-DLMS ALGORITHM
Motivated by the reweighted method [45], the LRZA-DLMS algorithm is derived by minimizing the following cost function

    J_k^{dist}(w) = E|d_k(i) - u_{k,i} w|² + γ ||w||² + Σ_{l ∈ N_k\{k}} b_{l,k} ||w - φ_l||² + ρ' Σ_{m=1}^{M} log(1 + |w(m)| / ε')    (15)

where ρ' and ε' are positive constants, and w(m) denotes the m-th entry of w. Since the derivation of the LRZA-DLMS algorithm is similar to that of the LZA-DLMS algorithm, we omit the detailed process. The update equations for the ATC-LRZA-DLMS and CTA-LRZA-DLMS algorithms are as follows.

ATC-LRZA-DLMS algorithm:

    φ_{k,i} = (1 - μγ) w_{k,i-1} + μ u_{k,i}^T (d_k(i) - u_{k,i} w_{k,i-1}) - μρ sign(w_{k,i-1}) / (1 + ε |w_{k,i-1}|)
    w_{k,i} = Σ_{l ∈ N_k} a_{l,k} φ_{l,i}    (16)

CTA-LRZA-DLMS algorithm:

    φ_{k,i} = Σ_{l ∈ N_k} a_{l,k} w_{l,i-1}
    w_{k,i} = (1 - μγ) φ_{k,i} + μ u_{k,i}^T (d_k(i) - u_{k,i} φ_{k,i}) - μρ sign(φ_{k,i}) / (1 + ε |φ_{k,i}|)    (17)

where ρ = ρ'/ε' and ε = 1/ε'.

Remark 2. The zero-attractors in (16) and (17) are ρ sign(w_{k,i-1}) / (1 + ε |w_{k,i-1}|) and ρ sign(φ_{k,i}) / (1 + ε |φ_{k,i}|), respectively. The reweighted zero-attractor not only shrinks small weight coefficients toward zero but also discriminates the non-zero coefficients, because it reflects the effect of the amplitudes instead of directly taking the signs of the coefficients. For a better understanding, the proposed algorithms are summarized in Table II.
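To make the distinction in Remark 2 concrete, the two attractors can be compared numerically. This standalone sketch uses illustrative values of ρ and ε; the printed ratio is what motivates the reweighted design.

```python
import numpy as np

def za_attractor(w, rho):
    """LZA-DLMS attractor: rho * sign(w), an equal pull on every nonzero tap."""
    return rho * np.sign(w)

def rza_attractor(w, rho, eps):
    """LRZA-DLMS attractor: rho * sign(w) / (1 + eps*|w|), a weaker pull on large taps."""
    return rho * np.sign(w) / (1.0 + eps * np.abs(w))

w = np.array([0.001, 0.5, 1.0])              # a near-zero, a medium, and a large tap
print(za_attractor(w, rho=1e-3))             # identical shrinkage on all three taps
print(rza_attractor(w, rho=1e-3, eps=10.0))  # near-zero tap pulled about 11x harder than the unit tap
```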
TABLE II
SUMMARY OF THE ALGORITHMS

ATC-type:
  Initialization: w_{k,-1} = φ_{k,-1} = 0 for each node k
  Adaptation:
    ATC-LZA-DLMS:  φ_{k,i} = (1 - μγ) w_{k,i-1} + μ u_{k,i}^T (d_k(i) - u_{k,i} w_{k,i-1}) - μρ sign(w_{k,i-1})
    ATC-LRZA-DLMS: φ_{k,i} = (1 - μγ) w_{k,i-1} + μ u_{k,i}^T (d_k(i) - u_{k,i} w_{k,i-1}) - μρ sign(w_{k,i-1}) / (1 + ε |w_{k,i-1}|)
  Combination: w_{k,i} = Σ_{l ∈ N_k} a_{l,k} φ_{l,i}

CTA-type:
  Initialization: w_{k,-1} = φ_{k,-1} = 0 for each node k
  Combination: φ_{k,i} = Σ_{l ∈ N_k} a_{l,k} w_{l,i-1}
  Adaptation:
    CTA-LZA-DLMS:  w_{k,i} = (1 - μγ) φ_{k,i} + μ u_{k,i}^T (d_k(i) - u_{k,i} φ_{k,i}) - μρ sign(φ_{k,i})
    CTA-LRZA-DLMS: w_{k,i} = (1 - μγ) φ_{k,i} + μ u_{k,i}^T (d_k(i) - u_{k,i} φ_{k,i}) - μρ sign(φ_{k,i}) / (1 + ε |φ_{k,i}|)

III. PERFORMANCE ANALYSIS
In this section, we perform the transient behavior analysis and determine the stability bound of the step-size for the proposed algorithms. Moreover, we analyze the steady-state performance and discuss the computational complexity. Since the ATC and CTA algorithms are similar in terms of the analysis, we only carry out the analysis for the ATC-type algorithms as a demonstration. In order to make the analysis tractable, we utilize the following unified model to characterize the ATC-type algorithms

    φ_{k,i} = (1 - μγ) w_{k,i-1} + μ u_{k,i}^T e_k(i) - μρ g[w_{k,i-1}]
    w_{k,i} = Σ_{l ∈ N_k} a_{l,k} φ_{l,i}    (18)

where e_k(i) = d_k(i) - u_{k,i} w_{k,i-1}, and g[w_{k,i-1}] denotes sign(w_{k,i-1}) for the ATC-LZA-DLMS algorithm and sign(w_{k,i-1}) / (1 + ε |w_{k,i-1}|) for the ATC-LRZA-DLMS algorithm. To proceed, it is necessary to introduce some statistical assumptions and approximations.

Assumption 1. The regressors u_{k,i} are temporally and spatially independent and identically distributed (i.i.d.) with zero mean [16], [25], [26].

Assumption 2.
The background noise v_k(i) is i.i.d. with zero mean and variance σ²_{v,k}, and is independent of u_{k,i} [9], [21].

Assumption 3. The m-th entry of the weight error vector at node k at time i, namely w̃_{k,i}(m), follows a Gaussian distribution with mean μ_{k,i}(m) and variance σ²_{k,i}(m), i.e., w̃_{k,i}(m) ~ N(μ_{k,i}(m), σ²_{k,i}(m)) [48]-[51]. Thus, the m-th entry of the estimated weight vector w_{k,i} also follows a Gaussian distribution, expressed as

    w_{k,i}(m) = w^o(m) - w̃_{k,i}(m) ~ N(w^o(m) - μ_{k,i}(m), σ²_{k,i}(m))

Approximation 1. For m ≠ n, we make the approximations E[g(w_{k,i}(m)) g(w_{k,i}(n))] ≈ E[g(w_{k,i}(m))] E[g(w_{k,i}(n))] and E[w_{k,i}(m) g(w_{k,i}(n))] ≈ E[w_{k,i}(m)] E[g(w_{k,i}(n))] [48], [50], [51].

Approximation 2. The fluctuations of w_{k,i}(m) from one iteration to the next are small enough that

    E[ sign(w_{k,i-1}(m)) / (1 + ε |w_{k,i-1}(m)|) ] ≈ E[sign(w_{k,i-1}(m))] / (1 + ε E|w_{k,i-1}(m)|)
    E[ w_{k,i-1}(m) / (1 + ε |w_{k,i-1}(m)|) ] ≈ E[w_{k,i-1}(m)] / (1 + ε E|w_{k,i-1}(m)|)
    E[ 1 / (1 + ε |w_{k,i-1}(m)|)² ] ≈ 1 / (1 + ε E|w_{k,i-1}(m)|)²

[51], [52].

Remark 3. Assumptions 1-3 have been successfully used in analyzing adaptive filtering algorithms, although they may not hold exactly in practical applications. Approximations 1-2 facilitate the calculation of expectations of nonlinear functions of the adaptive tap-weights, which has been verified as an effective methodology, especially in the case of white input signals [50], [51]. Furthermore, these approximations make it feasible to predict the behavior of the proposed algorithms.
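The first identity of Approximation 2 can be sanity-checked by Monte Carlo for a single Gaussian tap. The tap statistics below (mean, standard deviation, ε) are illustrative; the approximation is tight precisely when the fluctuation of the tap around its mean is small, as the approximation presumes.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, mean, std = 10.0, 0.3, 0.05          # illustrative tap statistics (small fluctuation)
x = rng.normal(mean, std, size=200_000)   # stand-in for w_{k,i-1}(m)

lhs = np.mean(np.sign(x) / (1.0 + eps * np.abs(x)))           # exact expectation (sampled)
rhs = np.mean(np.sign(x)) / (1.0 + eps * np.mean(np.abs(x)))  # Approximation 2
print(lhs, rhs)   # the two values agree closely for this small-fluctuation regime
```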
A. MEAN BEHAVIOR MODEL
We define the weight error vectors w̃_{k,i} = w^o - w_{k,i} and φ̃_{k,i} = w^o - φ_{k,i}, and the global network vectors and matrices

    w^{opt} = col{w^o, w^o, ..., w^o}    (19)
    w_i = col{w_{1,i}, w_{2,i}, ..., w_{N,i}}    (20)
    w̃_i = col{w̃_{1,i}, w̃_{2,i}, ..., w̃_{N,i}}    (21)
    φ̃_i = col{φ̃_{1,i}, φ̃_{2,i}, ..., φ̃_{N,i}}    (22)
    g[w_{i-1}] = col{g[w_{1,i-1}], g[w_{2,i-1}], ..., g[w_{N,i-1}]}    (23)
    U_i = diag{u_{1,i}, u_{2,i}, ..., u_{N,i}}    (24)

We also define the error vector, the noise vector, and the desired vector of the network

    e_i = col{e_1(i), e_2(i), ..., e_N(i)}    (25)
    v_i = col{v_1(i), v_2(i), ..., v_N(i)}    (26)
    d_i = col{d_1(i), d_2(i), ..., d_N(i)}    (27)

Also, the diagonal matrices collecting the step-sizes μ, leaky factors γ, and attracting factors ρ are given by

    M = diag{μ, μ, ..., μ},  γ_s = diag{γ, γ, ..., γ},  ρ_s = diag{ρ, ρ, ..., ρ}    (28)

Considering e_i = U_i w̃_{i-1} + v_i and rewriting the adaptation stage in (18) in terms of the network error vectors yields

    φ̃_i = (I_MN - Q U_i^T U_i) w̃_{i-1} + Q γ w_{i-1} - Q U_i^T v_i + Q ρ g[w_{i-1}]    (29)

where Q = M ⊗ I_M, γ = γ_s ⊗ I_M, and ρ = ρ_s ⊗ I_M. Taking into account the combination stage in (18), the recursion can be integrated into

    w̃_i = P (I_MN - Q U_i^T U_i) w̃_{i-1} + P Q γ w_{i-1} - P Q U_i^T v_i + P ρ g[w_{i-1}]    (30)

where P = Γ ⊗ I_M. Now, taking expectations of both sides of (30) and invoking Assumptions 1 and 2, we obtain

    E[w̃_i] = P (I_MN - Q E[U_i^T U_i]) E[w̃_{i-1}] + P Q γ E[w_{i-1}] + P ρ E(g[w_{i-1}])    (31)

where E[U_i^T U_i] = S ⊗ I_M with S = diag{σ²_{u,1}, σ²_{u,2}, ..., σ²_{u,N}}. The expectation E(g[w_{i-1}]) can be calculated as follows.

ATC-LZA-DLMS algorithm: For the ATC-LZA-DLMS, E(g[w_{i-1}]) is built from E(sign[w_{k,i-1}]), whose m-th component is E(sign[w_{k,i-1}(m)]).
Applying Assumption 3, we can calculate E(sign[w_{k,i-1}(m)]):

    E(sign[w_{k,i-1}(m)]) = - ∫_{-∞}^{0} N(x; w^o(m) - μ_{k,i-1}(m), σ²_{k,i-1}(m)) dx
                            + ∫_{0}^{+∞} N(x; w^o(m) - μ_{k,i-1}(m), σ²_{k,i-1}(m)) dx
                          = 1 - 2Φ( -(w^o(m) - μ_{k,i-1}(m)) / σ_{k,i-1}(m) )
                          = erf( (w^o(m) - μ_{k,i-1}(m)) / (√2 σ_{k,i-1}(m)) )    (32)

where μ_{k,i-1}(m) = E[w̃_{k,i-1}(m)], σ²_{k,i-1}(m) is the corresponding diagonal entry of W_{i-1} ≜ E[w̃_{i-1} w̃_{i-1}^T] minus (E[w̃_{k,i-1}(m)])², Φ(.) denotes the cumulative distribution function (CDF) of the standard normal distribution, and erf(.) is the error function defined as erf(x) = (2/√π) ∫_0^x exp(-t²) dt.

ATC-LRZA-DLMS algorithm: For the ATC-LRZA-DLMS, E(g[w_{i-1}]) is built from E[ sign(w_{k,i-1}) / (1 + ε |w_{k,i-1}|) ], whose m-th component is E[ sign(w_{k,i-1}(m)) / (1 + ε |w_{k,i-1}(m)|) ]. Employing Approximation 2, we have

    E[ sign(w_{k,i-1}(m)) / (1 + ε |w_{k,i-1}(m)|) ] ≈ E[sign(w_{k,i-1}(m))] / (1 + ε E|w_{k,i-1}(m)|)

and E|w_{k,i-1}(m)| can be computed by invoking Assumption 3 [51]

    E|w_{k,i-1}(m)| = (w^o(m) - μ_{k,i-1}(m)) erf( (w^o(m) - μ_{k,i-1}(m)) / (√2 σ_{k,i-1}(m)) )
                      + √(2/π) σ_{k,i-1}(m) exp( -(w^o(m) - μ_{k,i-1}(m))² / (2 σ²_{k,i-1}(m)) )    (33)
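The closed form (32) is easy to verify against a direct Monte Carlo average. The tap statistics below are illustrative, not taken from the paper's experiments.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)
w_o_m, mu_m, sigma_m = 0.8, 0.3, 0.4   # illustrative w^o(m), mu_{k,i-1}(m), sigma_{k,i-1}(m)

# Under Assumption 3, w_{k,i-1}(m) ~ N(w^o(m) - mu, sigma^2)
samples = rng.normal(w_o_m - mu_m, sigma_m, size=500_000)

closed_form = erf((w_o_m - mu_m) / (sqrt(2) * sigma_m))  # Eq. (32)
monte_carlo = np.mean(np.sign(samples))
print(closed_form, monte_carlo)   # the two estimates of E[sign(w(m))] agree closely
```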
B. MEAN SQUARE BEHAVIOR MODEL
Forming the outer product of

    w̃_i = A_i w̃_{i-1} + B w_{i-1} - C U_i^T v_i + D g[w_{i-1}]

with its own transpose, where A_i ≜ P (I_MN - Q U_i^T U_i), B ≜ P Q γ, C ≜ P Q, and D ≜ P ρ, expands w̃_i w̃_i^T into sixteen terms (34). Taking expectations of both sides of (34) and invoking Assumptions 1 and 2, all cross terms involving the zero-mean noise v_i vanish, which yields

    W_i ≜ E[w̃_i w̃_i^T]
        = E[ (A_i w̃_{i-1} + B w_{i-1} + D g[w_{i-1}]) (A_i w̃_{i-1} + B w_{i-1} + D g[w_{i-1}])^T ]
          + C E[U_i^T v_i v_i^T U_i] C^T    (35)

To implement the following analysis, we introduce the Kronecker product operation and its property [14], [24], [25]: for arbitrary matrices {X, Y, Z} of compatible dimensions, vec(XYZ) = (Z^T ⊗ X) vec(Y).
Applying the above operation to (35), we achieve

    vec(W_i) = (A ⊗ A) vec(W_{i-1})
             + (B ⊗ B) vec(E[w_{i-1} w_{i-1}^T])
             + (C ⊗ C) vec(E[U_i^T v_i v_i^T U_i])
             + (D ⊗ D) vec(E(g[w_{i-1}] g^T[w_{i-1}]))
             + (B ⊗ A) vec(E[w̃_{i-1} w_{i-1}^T]) + (A ⊗ B) vec(E[w_{i-1} w̃_{i-1}^T])
             + (D ⊗ A) vec(E(w̃_{i-1} g^T[w_{i-1}])) + (A ⊗ D) vec(E(g[w_{i-1}] w̃_{i-1}^T))
             + (D ⊗ B) vec(E(w_{i-1} g^T[w_{i-1}])) + (B ⊗ D) vec(E(g[w_{i-1}] w_{i-1}^T))    (36)

where W_i ≜ E[w̃_i w̃_i^T], A = P (I_MN - Q E[U_i^T U_i]), B = P Q γ, C = P Q, and D = P ρ. Under Assumptions 1 and 2, E[U_i^T v_i v_i^T U_i] can be expressed as

    E[U_i^T v_i v_i^T U_i] = G ⊗ I_M    (37)

where G = diag{σ²_{u,1} σ²_{v,1}, σ²_{u,2} σ²_{v,2}, ..., σ²_{u,N} σ²_{v,N}}.

Remark 4. The network mean-square deviation (MSD) is defined as the average of the MSDs over the nodes, i.e., MSD_{net,i} = (1/N) Σ_{k=1}^{N} MSD_{k,i}. Noting that MSD_{net,i} = (1/N) Tr(W_i), one can obtain the recursion for MSD_{net,i} from (31) and (36). The remaining task is to calculate several expectations in (36), including E(g[w_{i-1}] g^T[w_{i-1}]), E[w_{i-1} w_{i-1}^T], E(w_{i-1} g^T[w_{i-1}]), and E(w̃_{i-1} g^T[w_{i-1}]). Given that E[w̃_{i-1} w_{i-1}^T] and E(w̃_{i-1} g^T[w_{i-1}]) can be rewritten as w^{opt} E[w_{i-1}^T] - E[w_{i-1} w_{i-1}^T] and w^{opt} E(g^T[w_{i-1}]) - E(w_{i-1} g^T[w_{i-1}]), respectively, we only need to take into account the calculation of E(g[w_{i-1}] g^T[w_{i-1}]) and E(w_{i-1} g^T[w_{i-1}]) so that (36) can be implemented. These expectations can be calculated by using Approximations 1 and 2.

C. STABILITY BOUND OF THE STEP-SIZE
To ensure that the proposed ATC-type algorithms converge in the mean and in the mean-square, the bound of the step-size is discussed in this part. From the mean aspect, Eq.
(31) can be reformulated as

    E[w̃_i] = P (I_MN - Q E[U_i^T U_i]) E[w̃_{i-1}] + P Q γ E[w_{i-1}] + P ρ E(g[w_{i-1}])
            = P (I_MN - Q E[U_i^T U_i] - Q γ) E[w̃_{i-1}] + P Q γ w^{opt} + P ρ E(g[w_{i-1}])    (38)

Note that E(g[w_{i-1}]) only characterizes the zero-attractor, which has been proved to be bounded in prior work [51], [52]. Therefore, the proposed algorithms converge in the mean if the condition λ_max(P F) < 1 holds, where F ≜ I_MN - μ E[U_i^T U_i] - μ γ. Recalling that P = Γ ⊗ I_M, we get

    λ_max(P F) ≤ ||Γ||_2 · λ_max(F)    (39)

Owing to the property of the combination rule, ||Γ||_2 ≤ 1 is guaranteed. Thus, the network stability in the mean depends on

    λ_max(P F) ≤ λ_max(F)    (40)

Deducing from (40), the ATC-type algorithms asymptotically converge in the mean if the step-size is chosen to satisfy

    0 < μ < (2 - γ) / λ_max(E[U_i^T U_i])    (41)

We now consider the bound in the mean-square. To proceed, we rewrite (36) in an alternative formulation by substituting w_{i-1} = w^{opt} - w̃_{i-1}:

    vec(W_i) = (A ⊗ A + B ⊗ B - A ⊗ B - B ⊗ A) vec(W_{i-1})
             + (B ⊗ B) vec(W_o) - (B ⊗ B) vec(w^{opt} E[w̃_{i-1}^T]) - (B ⊗ B) vec(E[w̃_{i-1}] w^{opt,T})
             + (C ⊗ C) vec(E[U_i^T v_i v_i^T U_i]) + (D ⊗ D) vec(E(g[w_{i-1}] g^T[w_{i-1}]))
             + (B ⊗ A) vec(E[w̃_{i-1} w_{i-1}^T]) + (D ⊗ A) vec(E(w̃_{i-1} g^T[w_{i-1}]))
             + (A ⊗ B) vec(E[w_{i-1} w̃_{i-1}^T]) + (D ⊗ B) vec(E(w_{i-1} g^T[w_{i-1}]))
             + (A ⊗ D) vec(E(g[w_{i-1}] w̃_{i-1}^T)) + (B ⊗ D) vec(E(g[w_{i-1}] w_{i-1}^T))    (42)

where W_o ≜ w^{opt} w^{opt,T}. Also, as shown in [51], [52], the quantities E(g[w_{i-1}] g^T[w_{i-1}]), E(g[w_{i-1}] w_{i-1}^T), and E(g[w_{i-1}] w̃_{i-1}^T) in (42) are bounded.
Using the Kronecker product property (X ⊗ Y)(Z ⊗ W) = (XZ) ⊗ (YW), which holds for arbitrary matrices {X, Y, Z, W} of compatible dimensions [27], we can express the terms A ⊗ A + B ⊗ B - A ⊗ B - B ⊗ A in (42) as

    A ⊗ A + B ⊗ B - A ⊗ B - B ⊗ A = (P ⊗ P) { μ² K - μ J + I_{M²N²} }    (43)

where

    K = E[(U_i^T U_i) ⊗ (U_i^T U_i)] + E[U_i^T U_i] ⊗ γ + γ ⊗ E[U_i^T U_i] + γ ⊗ γ    (44)

and

    J = I_MN ⊗ E[U_i^T U_i] + E[U_i^T U_i] ⊗ I_MN + γ ⊗ I_MN + I_MN ⊗ γ    (45)

Therefore, Eq. (42) converges in the mean-square if (P ⊗ P){μ² K - μ J + I_{M²N²}} is guaranteed to be stable. Due to the property of the combination rule, we can ensure ||P ⊗ P||_2 ≤ 1. It follows that the proposed ATC-type algorithms converge in the mean-square if L = μ² K - μ J + I_{M²N²} is stable. Following the same argument as in [51], the stability condition of the matrix L can be determined as

    0 < μ < min{ 1 / λ_max(J^{-1} K), 1 / max{λ(H) : λ(H) ∈ R^+} }    (46)

where

    H = [ J/2  -K/2 ]
        [ I     0  ]

Therefore, to guarantee the stability of our ATC-type algorithms in both the mean and the mean-square, a stringent condition for the step-size is

    0 < μ < min{ (2 - γ) / λ_max(E[U_i^T U_i]), 1 / λ_max(J^{-1} K), 1 / max{λ(H) : λ(H) ∈ R^+} }    (47)
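As a small numeric illustration, the mean-stability bound (41) can be evaluated directly for white inputs, where E[U_i^T U_i] = S ⊗ I_M and the largest eigenvalue reduces to the largest per-node input variance. The leaky factor and variances below are illustrative values, not the paper's settings.

```python
import numpy as np

gamma = 0.01                                     # leaky factor (illustrative)
sigma2_u = np.array([0.3, 0.6, 1.0, 0.45, 0.8])  # per-node input variances (illustrative)

# For white inputs, lambda_max(E[U_i^T U_i]) is just the largest per-node variance.
lam_max = sigma2_u.max()
mu_bound = (2.0 - gamma) / lam_max               # bound (41) on the step-size
print(mu_bound)
```

Any step-size chosen in (0, mu_bound) then satisfies the mean-convergence condition (41) for this network.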
D. STEADY STATE PERFORMANCE
The network MSD in the steady state is defined by

    MSD_{net,∞} = (1/N) Tr(W_∞)    (48)

As i → ∞, taking the limit and the trace of (36) (in the substituted form (42)), we obtain

    Tr(W_∞) = vec(I_MN)^T Φ^T [ (B ⊗ B) vec(W_o)
              - (B ⊗ B) vec(E[w̃_∞] w^{opt,T}) - (B ⊗ B) vec(w^{opt} E[w̃_∞^T])
              + (C ⊗ C) vec(E[U_i^T v_i v_i^T U_i]) + (D ⊗ D) vec(E(g[w_∞] g^T[w_∞]))
              + (B ⊗ A) vec(E(w̃_∞ w_∞^T)) + (D ⊗ A) vec(E(w̃_∞ g^T[w_∞]))
              + (A ⊗ B) vec(E(w_∞ w̃_∞^T)) + (D ⊗ B) vec(E(w_∞ g^T[w_∞]))
              + (A ⊗ D) vec(E(g[w_∞] w̃_∞^T)) + (B ⊗ D) vec(E(g[w_∞] w_∞^T)) ]    (49)

where Φ = (I_{M²N²} - A ⊗ A - B ⊗ B + A ⊗ B + B ⊗ A)^{-1}. Then, taking the limit as i → ∞ of (31), we arrive at

    E[w̃_∞] = (I_MN - A)^{-1} ( P Q γ E[w_∞] + P ρ E(g[w_∞]) )    (50)

To obtain the analytical results from (49) and (50), we make some assumptions in the steady state:

    E(g[w_∞] g^T[w_∞]) ≈ g[w^o] g^T[w^o]    (51)
    E(w̃_∞ g^T[w_∞]) ≈ E(w̃_∞) g^T[w^o]    (52)
    E(g[w_∞] w̃_∞^T) ≈ g[w^o] E(w̃_∞^T)    (53)

These assumptions are reasonable because w_∞ ≈ w^o holds in the steady state. Combining (49) and (50) into (48), one can obtain MSD_{net,∞}.

Remark 5. The detailed derivation of (49) is given in Appendix A. The developed analysis can also be used to obtain the steady-state behavior of the ATC DLMS, ATC ZA DLMS, and ATC leaky DLMS algorithms. For example, when D = 0 and ρ = 0, the proposed algorithm reduces to the ATC leaky DLMS algorithm.
In this case, E[w̃_∞] and Tr(W_∞) are given by

    E[w̃_∞] = (I_MN - A)^{-1} P Q γ E[w_∞]    (54)

and

    Tr(W_∞) = vec(I_MN)^T Φ^T [ (B ⊗ B) vec(W_o)
              - (B ⊗ B) vec(E[w̃_∞] w^{opt,T}) - (B ⊗ B) vec(w^{opt} E[w̃_∞^T])
              + (C ⊗ C) vec(E[U_i^T v_i v_i^T U_i])
              + (B ⊗ A) vec(E(w̃_∞ w_∞^T)) + (A ⊗ B) vec(E(w_∞ w̃_∞^T)) ]    (55)

E. COMPUTATIONAL COMPLEXITY
Table III summarizes the computational complexity, including multiplications, additions, and memory requirements, for various algorithms, where n_k denotes the number of nodes in the neighborhood set N_k and P stands for the projection order of the ATC DAPA. Compared with the ATC DLMS algorithm, the ATC DAPA has a significant increase in complexity, while the ATC leaky DLMS, ATC ZA DLMS, and ATC RZA DLMS algorithms have only a moderate increase. Compared with the ATC ZA DLMS and ATC RZA DLMS algorithms, the proposed ATC-LZA-DLMS and ATC-LRZA-DLMS algorithms are more computationally expensive because an additional calculation for the leaky term is needed.

TABLE III
COMPUTATIONAL COMPLEXITY FOR NODE k PER ITERATION

Algorithm            Multiplications           Additions                Memory words
ATC DLMS [15]        2M + 2 + M n_k            2M + (M-1) n_k - 1       (n_k + 2)M + 5
ATC DAPA [31]        O(P²M) (see [31])         O(P²M) (see [31])        O(PM + M n_k) (see [31])
ATC leaky DLMS [39]  3M + 2 + M n_k            2M + (M-1) n_k           (n_k + 3)M + 7
ATC ZA DLMS [45]     3M + 2 + M n_k            3M + (M-1) n_k - 1       (n_k + 3)M + 6
ATC RZA DLMS [45]    4M + 2 + M n_k            4M + (M-1) n_k - 1       (n_k + 3)M + 7
ATC-LZA-DLMS         4M + 3 + M n_k            3M + (M-1) n_k           (n_k + 3)M + 7
ATC-LRZA-DLMS        5M + 3 + M n_k            4M + (M-1) n_k           (n_k + 3)M + 8
IV. SIMULATIONS

In this section, we conduct Monte Carlo (MC) simulations to test the estimation performance of the proposed algorithms and to evaluate the accuracy of the theoretical analysis. The adaptive filter and the unknown system are assumed to have the same number of taps. In Section 4.1, we show the estimation performance of the leaky algorithms and the proposed algorithms in a synthetic sparse system. In Section 4.2, we compare the proposed algorithms with various existing algorithms for colored inputs. In Section 4.3, we verify the theoretical results by extensive simulations. The performance of all tested algorithms is evaluated by $10\log_{10}\mathrm{MSD}_{net,i}$. The uniform rule is used in the simulations, defined as $a_{lk}=1/n_k$ for all $l\in\mathcal{N}_k$ [16]. Except for the theoretical verification, the simulation results are averaged over 100 independent trials.

A. SYNTHETIC SPARSE SYSTEM

In this subsection, we consider a network containing 20 nodes, shown in Fig. 1. The unknown system $\mathbf{w}_o$ has $M=64$ taps. Initially, only one coefficient of $\mathbf{w}_o$ is set to 1, with its position randomly selected, while the other coefficients are equal to 0, making the system highly sparse. After 3000 iterations, 16 coefficients are set to 1 with their positions randomly selected, so that the system has a sparsity ratio of 16/64. After 6000 iterations, 32 coefficients are randomly set equal to 1, yielding a non-sparse system. Both Gaussian inputs and colored inputs are used to examine the algorithms. The variances of the Gaussian inputs and background noises are depicted in Fig. 2. The colored inputs are generated by passing the Gaussian inputs through a first-order system $G(z)=1/(1-0.7z^{-1})$. As can be seen from Fig.
3, when the system is highly sparse, the proposed ATC-type algorithms outperform the corresponding CTA-type algorithms for both Gaussian and colored inputs. From Fig. 3(a), when the system is highly sparse, the LRZA-DLMS algorithm yields lower steady-state misalignment than the LZA-DLMS algorithm for both the ATC and CTA types, and the CTA-LRZA-DLMS algorithm is superior to the ATC-LZA-DLMS algorithm. Moreover, the proposed algorithms behave better than the leaky DLMS algorithm. When the system is less sparse (sparsity ratio 16/64), the CTA-LZA-DLMS and ATC-LZA-DLMS algorithms perform almost identically, and the CTA-LRZA-DLMS and ATC-LRZA-DLMS algorithms also achieve similar performance. However, at this stage, the leaky DLMS algorithm provides lower steady-state misalignment than the proposed algorithms. When the system is only half sparse, the performance of the proposed algorithms deteriorates further. From Fig. 3(b), it is clear that the ATC-LRZA-DLMS algorithm achieves the best performance in every stage. When the system is highly sparse, the proposed algorithms outperform the leaky DLMS algorithm. Interestingly, the ATC-LZA-DLMS algorithm and the CTA-LRZA-DLMS algorithm have almost the same performance. When the system is less sparse (sparsity ratio 16/64), the CTA-LRZA-DLMS algorithm is superior to the ATC-LZA-DLMS algorithm, and the leaky DLMS algorithm yields lower steady-state misalignment than the LZA-DLMS algorithm. When the system is only half sparse, the performance of all tested algorithms does not change significantly.

Fig. 1 Network topology of 20 nodes
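The uniform combination rule $a_{lk}=1/n_k$ used in these simulations can be formed directly from the network adjacency. The sketch below uses a hypothetical 4-node line topology (not the 20-node network of Fig. 1) and assumes self-loops are counted in $n_k$.

```python
import numpy as np

def uniform_combiner(adjacency):
    """Uniform rule: a_{lk} = 1/n_k for every neighbor l of node k
    (each node counted as its own neighbor), zero otherwise.
    Columns then sum to one, i.e. the matrix is left-stochastic."""
    adj = np.array(adjacency, dtype=float)   # copy so the caller's array is untouched
    np.fill_diagonal(adj, 1.0)               # include the self-loop
    n_k = adj.sum(axis=0)                    # neighborhood size of each node k
    return adj / n_k                         # entry [l, k] becomes adj[l, k] / n_k

# Hypothetical 4-node line network: 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
A = uniform_combiner(adj)
assert np.allclose(A.sum(axis=0), 1.0)       # left-stochastic, as required
```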
Fig. 2 Input variances and noise variances of the 20 nodes

Therefore, the proposed algorithms are sensitive to the sparsity of the system. Fortunately, when the input is correlated, the ATC-LRZA-DLMS algorithm still maintains good performance even when the unknown system becomes non-sparse.

Fig. 3 MSD curves of the leaky DLMS and proposed algorithms. The leaky DLMS: $\mu=0.01$ and $\gamma=0.002$. The proposed algorithms: $\mu=0.01$, $\gamma=0.002$, $\rho=0.0005$, and $\epsilon=1$. (a) Gaussian inputs. (b) Colored inputs.

B. SYSTEM IDENTIFICATION

In this subsection, we again employ the network shown in Fig. 1. The colored regressors $\mathbf{u}_{k,i}$ have length $M=128$ and are generated by filtering the Gaussian inputs through a first-order system $G(z)=1/(1-0.7z^{-1})$. The variances of the Gaussian inputs and background noises are the same as those in Section 4.1. In the simulations, the input signals first pass through a highly sparse system whose coefficients are 0 except for the first coefficient, which is set to 1, and then pass through a system modeled by an FIR filter whose frequency response and impulse response are depicted in Fig. 4. To facilitate the comparison, the proposed CTA-type algorithms are presented in Fig. 5(a), and the ATC-type algorithms are shown in Fig. 5(b).
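The simulation setups above, a time-varying sparse $\mathbf{w}_o$ and inputs colored by $G(z)=1/(1-0.7z^{-1})$, can be generated as follows. This is a minimal sketch with assumed helper names; the seed and the per-node variance profile of Fig. 2 are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_system(n_active, M=64):
    """Unknown vector w_o with n_active coefficients set to 1 at randomly
    chosen positions, as in the three stages (1/64, 16/64, 32/64)."""
    w_o = np.zeros(M)
    idx = rng.choice(M, size=n_active, replace=False)
    w_o[idx] = 1.0
    return w_o

def colored_input(n, sigma2=1.0):
    """White Gaussian input passed through G(z) = 1 / (1 - 0.7 z^-1),
    i.e. the AR(1) recursion u[i] = x[i] + 0.7 u[i-1]."""
    white = rng.normal(0.0, np.sqrt(sigma2), n)
    u = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = white[i] + 0.7 * prev
        u[i] = prev
    return u

w_highly_sparse = sparse_system(1)    # stage 1: sparsity 1/64
w_less_sparse   = sparse_system(16)   # stage 2: sparsity 16/64
w_half_sparse   = sparse_system(32)   # stage 3: only half sparse
```

The AR(1) coloring raises the input variance (toward $\sigma^2/(1-0.7^2)$) and the eigenvalue spread of the input correlation matrix, which is exactly the regime where the leaky term is expected to help.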
Fig. 4 Frequency response and impulse response of the acoustic path

As can be seen from Fig. 5(a), the CTA DLMS algorithm behaves worse than the CTA DAPA due to the effect of the colored inputs. In addition, both the CTA leaky DLMS and CTA ZA DLMS algorithms outperform the CTA DLMS algorithm thanks to the leaky factor and the zero attractor, and the CTA RZA DLMS algorithm outperforms the CTA ZA DLMS algorithm because of the reweighted regularization. Compared with the other tested algorithms, both the CTA-LZA-DLMS and CTA-LRZA-DLMS algorithms yield lower steady-state misalignment at the same convergence rate. In particular, the proposed CTA-LZA-DLMS is slightly inferior to the CTA-LRZA-DLMS algorithm in terms of steady-state misalignment.

Fig. 5 MSD curves of the proposed algorithms and some existing algorithms. (a) CTA-type: CTA DLMS ($\mu=0.01$) [15], CTA DAPA ($\mu=0.1$, $P=4$) [31], CTA leaky DLMS ($\mu=0.005$, $\gamma=0.02$) [39], CTA ZA DLMS ($\mu=0.01$, $\rho=10^{-4}$) [45], CTA RZA DLMS ($\mu=0.01$, $\rho=10^{-4}$, $\epsilon=10$) [45], and the proposed CTA-LZA-DLMS ($\mu=0.005$, $\gamma=0.02$, $\rho=5\times10^{-5}$) and CTA-LRZA-DLMS ($\mu=0.005$, $\gamma=0.02$, $\rho=5\times10^{-5}$, $\epsilon=10$). (b) ATC-type: ATC DLMS ($\mu=0.009$) [15], ATC DAPA ($\mu=0.08$, $P=4$) [31], ATC leaky DLMS ($\mu=0.005$, $\gamma=0.02$) [39], ATC ZA DLMS ($\mu=0.01$, $\rho=10^{-4}$) [45], ATC RZA DLMS ($\mu=0.01$, $\rho=10^{-4}$, $\epsilon=10$) [45], and the proposed ATC-LZA-DLMS ($\mu=0.005$, $\gamma=0.02$, $\rho=5\times10^{-5}$) and ATC-LRZA-DLMS ($\mu=0.005$, $\gamma=0.02$, $\rho=5\times10^{-5}$, $\epsilon=10$).

As shown in Fig. 5(b), the ATC DLMS algorithm still exhibits the poorest performance among all the algorithms, while the ATC leaky DLMS, ATC ZA DLMS, and ATC RZA DLMS algorithms perform better. By contrast, the proposed algorithms are clearly superior to the other algorithms, yielding much lower steady-state misalignment.

C. TRANSIENT THEORETICAL VALIDATION

In this subsection, we verify the transient theoretical analysis for the proposed ATC-type algorithms. We consider a network composed of 5 nodes. The unknown system is modeled by $\mathbf{w}_o=[0\ 0\ 1\ 0\ 0]^T$. The variances of the Gaussian inputs and background noises are depicted in Fig. 6. The transient MSD curves are obtained from (31) and (36). We first carry out the verification for the ATC-LZA-DLMS algorithm with respect to $\mu$. The parameters are selected as $\rho=0.005$ and $\gamma=0.001$. As can be seen from Fig. 7, the theoretical results match the experimental results well. Moreover, the performance of the ATC-LZA-DLMS algorithm improves as the step-size $\mu$ increases. According to the characteristics of leaky algorithms and zero-attracting algorithms, it is reasonable to infer that the algorithm performance will deteriorate, and may even become unstable, when the step-size increases beyond a certain value [53], [51].

Fig. 6 Input variances and noise variances of the 5 nodes

Fig. 7 The experimental and theoretical MSD curves of the proposed ATC-LZA-DLMS algorithm with respect to $\mu$ ($\mu\in\{0.005, 0.008, 0.01, 0.02, 0.03\}$)

Then, we conduct the comparison for the ATC-LZA-DLMS algorithm with respect to $\gamma$, shown in Fig. 8. The parameters are set to $\mu=0.03$ and $\rho=0.001$. As can be seen, the theoretical results again match the experimental results accurately. It is observed that the ATC-LZA-DLMS algorithm with $\gamma=0.001$ outperforms that with $\gamma=0.01$ and $\gamma=0.1$. Moreover, we implement the verification for the ATC-LZA-DLMS algorithm with respect to $\rho$, depicted in Fig. 9. The parameters are chosen as $\mu=0.03$ and $\gamma=0.001$. It is clear that the parameter $\rho$ has a significant influence on the performance of the algorithm. For example, the ATC-LZA-DLMS algorithm with $\rho=0.001$ is about 10 dB lower than that with $\rho=0.003$ in terms of steady-state misalignment.

Fig. 8 The experimental and theoretical MSD curves of the proposed ATC-LZA-DLMS algorithm with respect to $\gamma$ ($\gamma\in\{0.001, 0.01, 0.1\}$)

Fig. 9 The experimental and theoretical MSD curves of the proposed ATC-LZA-DLMS algorithm with respect to $\rho$ ($\rho\in\{0.001, 0.003, 0.005\}$)

Next, we examine the theoretical accuracy of the ATC-LRZA-DLMS algorithm with respect to $\mu$.
The parameters are selected as $\rho=0.005$, $\gamma=0.001$, and $\epsilon=1$. As can be seen from Fig. 10, conclusions similar to those for the ATC-LZA-DLMS algorithm can be drawn. Furthermore, we conduct the comparison for the ATC-LRZA-DLMS algorithm with respect to $\gamma$, depicted in Fig. 11. The parameters are chosen as $\mu=0.008$, $\rho=0.001$, and $\epsilon=1$. As can be seen, there is only a small gap in steady-state misalignment between the ATC-LRZA-DLMS algorithm with $\gamma=0.001$ and that with $\gamma=0.01$. However, as the leaky factor $\gamma$ increases to 0.1, the performance of the algorithm deteriorates rapidly, being about 8 dB higher than that with $\gamma=0.01$ in terms of steady-state misalignment.

Fig. 10 The experimental and theoretical MSD curves of the proposed ATC-LRZA-DLMS algorithm with respect to $\mu$ ($\mu\in\{0.005, 0.008, 0.01, 0.02, 0.03\}$)

Fig. 11 The experimental and theoretical MSD curves of the proposed ATC-LRZA-DLMS algorithm with respect to $\gamma$ ($\gamma\in\{0.001, 0.01, 0.1\}$)

Fig. 12 The experimental and theoretical MSD curves of the proposed ATC-LRZA-DLMS algorithm with respect to $\rho$ ($\rho\in\{0.001, 0.003, 0.005\}$)

Finally, we implement the verification for the ATC-LRZA-DLMS algorithm with respect to $\rho$, shown in Fig. 12. The parameters are set to $\mu=0.008$, $\gamma=0.001$, and $\epsilon=1$. Similar to the results for the ATC-LZA-DLMS algorithm, the parameter $\rho$ also has a great impact on the performance of the ATC-LRZA-DLMS algorithm.

D. STEADY STATE THEORETICAL VALIDATION

We here evaluate the steady-state theoretical analysis for the ATC-LZA-DLMS and ATC-LRZA-DLMS algorithms. The network and the unknown system are consistent with those in the transient validation. The steady-state MSD curves are obtained from (48)-(50). As can be seen from Figs. 13 and 14, the steady-state theoretical values are in good agreement with the experimental results.

Fig. 13 The experimental and theoretical MSD curves of the proposed ATC-LZA-DLMS algorithm ($\mu=0.005$, $\rho=0.001$, $\gamma=0.005$; $\mu=0.01$, $\rho=0.001$, $\gamma=0.003$; $\mu=0.02$, $\rho=0.001$, $\gamma=0.005$).

Fig. 14 The experimental and theoretical MSD curves of the proposed ATC-LRZA-DLMS algorithm ($\mu=0.005$, $\rho=0.001$, $\gamma=0.005$, $\epsilon=1$; $\mu=0.008$, $\rho=0.001$, $\gamma=0.005$, $\epsilon=1$; $\mu=0.01$, $\rho=0.001$, $\gamma=0.003$, $\epsilon=1$).
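The steady-state values validated above come from solving a linear fixed point in the vectorized weight-error covariance, as in (48)-(50). The toy sketch below illustrates that mechanics on a stand-in transition matrix `F` and forcing vector `f` (random placeholders, not the paper's $\boldsymbol{\Phi}$ or forcing terms): the closed-form solve agrees with direct iteration of the recursion, and the trace in (48) is recovered through $\mathrm{vec}^{T}(\mathbf{I}_{MN})\,\mathrm{vec}(\cdot)$.

```python
import numpy as np

# Hypothetical small dimensions: N = 3 nodes, M = 2 taps.
N, M = 3, 2
MN = M * N
rng = np.random.default_rng(0)

# Toy transition matrix standing in for the Kronecker-form operator,
# rescaled so the recursion is stable (spectral radius 0.9).
F = rng.standard_normal((MN * MN, MN * MN))
F *= 0.9 / np.max(np.abs(np.linalg.eigvals(F)))

# Toy forcing vector standing in for the bracketed sum of terms in (49).
f = rng.standard_normal(MN * MN)

# Steady state of vec(W_{i+1}) = F vec(W_i) + f is (I - F)^{-1} f,
# playing the role of Phi^{-1} applied to the forcing terms.
vec_W_inf = np.linalg.solve(np.eye(MN * MN) - F, f)

# Network MSD as in (48): the trace is recovered via vec(I_MN)^T vec(.),
# the same identity used in Appendix A.
msd_net = np.eye(MN).reshape(-1) @ vec_W_inf / N

# Cross-check against iterating the recursion directly.
v = np.zeros(MN * MN)
for _ in range(2000):
    v = F @ v + f
assert np.allclose(v, vec_W_inf, atol=1e-6)
```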
V. CONCLUSIONS

In this paper, by incorporating zero attractors into the leaky DLMS algorithm, we have proposed the LZA-DLMS and LRZA-DLMS algorithms, including their ATC and CTA versions. For sparse system identification, the proposed algorithms outperform various existing algorithms when the inputs are colored. In particular, in the case of a time-varying sparse system, the LRZA-DLMS algorithms exhibit performance superior to the LZA-DLMS algorithms thanks to the reweighted regularization. Employing several common assumptions and approximations, we have derived a theoretical recursion that successfully characterizes the transient network MSD of the proposed algorithms. To guarantee convergence in the mean and mean-square senses, the stability bound on the step-size for the proposed ATC-type algorithms has been determined. Moreover, we have carried out the steady-state theoretical analysis for the ATC-type algorithms. Extensive experiments under various conditions have shown that the theoretical MSD curves match the experimental curves accurately. In future work, we will investigate a time-varying leaky factor to further enhance the estimation performance.
APPENDIX A

After a simple matrix calculation on (36), the steady-state value of $\mathrm{Tr}(\widetilde{\mathbf{W}}_{\infty})$ is given by
$$
\begin{aligned}
\mathrm{Tr}\big(\widetilde{\mathbf{W}}_{\infty}\big)
=\;&\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{W}_{o})\big)\big)
+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{w}_{o}E^{T}[\widetilde{\mathbf{w}}_{\infty}])\big)\big)\\
&+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(E[\widetilde{\mathbf{w}}_{\infty}]\mathbf{w}_{o}^{T})\big)\big)
+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{C}^{T}\otimes\mathcal{C}^{T})\,\mathrm{vec}(E[\mathcal{U}_{\infty}^{T}\mathbf{v}_{\infty}\mathbf{v}_{\infty}^{T}\mathcal{U}_{\infty}])\big)\big)\\
&+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{D}^{T}\otimes\mathcal{D}^{T})\,\mathrm{vec}(E(g[\mathbf{w}_{\infty}]g^{T}[\mathbf{w}_{\infty}]))\big)\big)
+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{A}^{T})\,\mathrm{vec}(E[\widetilde{\mathbf{w}}_{\infty}]\mathbf{w}_{o}^{T})\big)\big)\\
&+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{D}^{T}\otimes\mathcal{A}^{T})\,\mathrm{vec}(E(\widetilde{\mathbf{w}}_{\infty}g^{T}[\mathbf{w}_{\infty}]))\big)\big)
+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{A}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{w}_{o}E^{T}[\widetilde{\mathbf{w}}_{\infty}])\big)\big)\\
&+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{D}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{w}_{o}E(g^{T}[\mathbf{w}_{\infty}]))\big)\big)
+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{A}^{T}\otimes\mathcal{D}^{T})\,\mathrm{vec}(E(g[\mathbf{w}_{\infty}]\widetilde{\mathbf{w}}_{\infty}^{T}))\big)\big)\\
&+\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{D}^{T})\,\mathrm{vec}(E(g[\mathbf{w}_{\infty}])\mathbf{w}_{o}^{T})\big)\big)
\qquad (56)
\end{aligned}
$$
Using the property $\mathrm{Tr}(X^{T}Y)=\mathrm{vec}^{T}(X)\,\mathrm{vec}(Y)$ and letting $X=\mathbf{I}_{MN}$ and $Y=\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{W}_{o})\big)$ for the first term on the RHS of (56), we arrive at
$$\mathrm{Tr}\big(\mathrm{vec}^{-1}\big(\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{W}_{o})\big)\big)=\mathrm{vec}^{T}(\mathbf{I}_{MN})\,\boldsymbol{\Phi}^{-1}(\mathcal{B}^{T}\otimes\mathcal{B}^{T})\,\mathrm{vec}(\mathbf{W}_{o}) \qquad (57)$$
Performing the same operation on the remaining terms of (56), we finally obtain (49).

ACKNOWLEDGMENT

This work was partially supported by the National Natural Science Foundation of China (Grants 61571374, 61871461, and 61433011).

REFERENCES

[1] M.H. Li, X.M. Liu, "The least squares based iterative algorithms for parameter estimation of a bilinear system with autoregressive noise using the data filtering technique," Signal Processing, vol. 147, pp. 23-34, Jun. 2018.
[2] L. Xu, "The damping iterative parameter identification method for dynamical systems based on the sine signal measurement," Signal Processing, vol. 120, pp. 660-667, Mar. 2016.
[3] L. Xu, F. Ding, "Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling," Circuits Syst. Signal Process., vol. 36, no. 4, pp. 1735-1753, Apr.
2017.
[4] F. Ding, X.H. Wang, L. Mao, L. Xu, "Joint state and multi-innovation parameter estimation for time-delay linear systems and its convergence based on the Kalman filtering," Digital Signal Processing, vol. 62, pp. 211-223, Mar. 2017.
[5] Y.J. Wang, F. Ding, L. Xu, "Some new results of designing an IIR filter with colored noise for signal processing," Digital Signal Processing, vol. 71, pp. 44-58, Jan. 2017.
[6] F. Ding, Y.J. Wang, J.Y. Dai, Q.S. Li, Q.J. Chen, "A recursive least squares parameter estimation algorithm for output nonlinear autoregressive systems using the input-output data filtering," J. Frankl. Inst., vol. 354, no. 15, pp. 6938-6955, Oct. 2017.
[7] M.T. Chen, F. Ding, L. Xu, T. Hayat, A. Alsaedi, "Iterative identification algorithms for bilinear-in-parameter systems with autoregressive moving average noise," J. Frankl. Inst., vol. 354, no. 17, pp. 7885-7898, Nov. 2017.
[8] X. Zhang, F. Ding, A. Alsaadi, T. Hayat, "Recursive parameter identification of the dynamical models for bilinear state space systems," Nonlinear Dynamics, vol. 89, no. 4, pp. 2415-2429, Sep. 2017.
[9] J. Chen, A.H. Sayed, "Diffusion adaptation strategies for distributed optimization and learning over networks," IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4289-4305, Aug. 2012.
[10] R. Abdolee, B. Champagne, "Diffusion LMS strategies in sensor networks with noisy input data," IEEE/ACM Transactions on Networking, vol. 24, no. 1, pp. 3-14, Feb. 2016.
[11] R. Abdolee, B. Champagne, A.H. Sayed, "Diffusion adaptation over multi-agent networks with wireless link impairments," IEEE Transactions on Mobile Computing, vol. 15, no. 6, pp. 1362-1376, Jun. 2016.
[12] L. Shi, H.
Zhao, "Two Diffusion Proportionate Sign Subband Adaptive Filtering Algorithms," Circuits, Systems, and Signal Processing, vol. 36, no. 10, pp. 4242-4259, Oct. 2017.
[13] C.G. Lopes, A.H. Sayed, "Distributed adaptive incremental strategies: Formulation and performance analysis," in Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 2006, pp. 584-587.
[14] C.G. Lopes, A.H. Sayed, "Incremental adaptive strategies over distributed networks," IEEE Transactions on Signal Processing, vol. 55, no. 8, pp. 4064-4077, Aug. 2007.
[15] C.G. Lopes, A.H. Sayed, "Diffusion least-mean squares over adaptive networks," in Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Honolulu, HI, 2007, pp. 917-920.
[16] C.G. Lopes, A.H. Sayed, "Diffusion least-mean squares over adaptive networks: Formulation and performance analysis," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3122-3136, Jul. 2008.
[17] L. Shi, H. Zhao, "Variable step-size distributed incremental normalised LMS algorithm," Electronics Letters, vol. 52, no. 7, pp. 519-521, Apr. 2016.
[18] F.S. Cattivelli, A.H. Sayed, "Analysis of spatial and incremental LMS processing for distributed estimation," IEEE Transactions on Signal Processing, vol. 59, no. 4, pp. 1465-1480, Apr. 2011.
[19] L. Li, J.A. Chambers, C.G. Lopes, A.H. Sayed, "Distributed estimation over an adaptive incremental network based on the affine projection algorithm," IEEE Transactions on Signal Processing, vol. 58, no. 1, pp. 151-164, Jan. 2010.
[20] Y. Yu, H. Zhao, "Robust incremental normalized least mean square algorithm with variable step sizes over distributed networks," Signal Processing, vol. 144, pp. 1-6, Mar. 2018.
[21] J.
Chen, A.H. Sayed, "Distributed Pareto optimization via diffusion strategies," IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 2, pp. 205-220, Apr. 2013.
[22] N. Takahashi, I. Yamada, A.H. Sayed, "Diffusion least-mean squares with adaptive combiners: Formulation and performance analysis," IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4795-4810, Sept. 2010.
[23] L. Xiao, S. Boyd, "Fast linear iterations for distributed averaging," Systems & Control Letters, vol. 53, no. 1, pp. 65-78, Sept. 2004.
[24] F.S. Cattivelli, C.G. Lopes, A.H. Sayed, "Diffusion recursive least-squares for distributed estimation over adaptive networks," IEEE Transactions on Signal Processing, vol. 56, no. 5, pp. 1865-1877, May 2008.
[25] F.S. Cattivelli, A.H. Sayed, "Diffusion LMS strategies for distributed estimation," IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1035-1048, Mar. 2010.
[26] F.S. Cattivelli, A.H. Sayed, "Diffusion strategies for distributed Kalman filtering and smoothing," IEEE Transactions on Automatic Control, vol. 55, no. 9, pp. 2069-2084, Sept. 2010.
[27] A.H. Sayed, Diffusion Adaptation over Networks, Academic Press Library in Signal Processing, 2013.
[28] A.H. Sayed, S.-Y. Tu, J. Chen, X. Zhao, Z.J. Towfic, "Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior," IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 155-171, May 2013.
[29] M.O. Bin Saeed, A. Zerguine, S.A. Zummo, "A noise-constrained algorithm for estimation over distributed networks," International Journal of Adaptive Control and Signal Processing, vol. 27, no. 10, pp. 827-845, Oct. 2013.
[30] M.O.B. Saeed, A. Zerguine, S.A.
Zummo, "A variable step-size strategy for distributed estimation over adaptive networks," EURASIP Journal on Advances in Signal Processing, vol. 2013, no. 135, Dec. 2013.
[31] M.S.E. Abadi, Z. Saffari, "Distributed estimation over an adaptive diffusion network based on the family of affine projection algorithms," in Proc. 6th Int. Symp. Telecommun. (IST), 2012, pp. 607-611.
[32] W. Ma, B. Chen, J. Duan, H. Zhao, "Diffusion maximum correntropy criterion algorithms for robust distributed estimation," Digital Signal Processing, vol. 58, pp. 10-19, Nov. 2016.
[33] J. Ni, "Diffusion sign subband adaptive filtering algorithm for distributed estimation," IEEE Signal Processing Letters, vol. 22, no. 11, pp. 2029-2033, Nov. 2015.
[34] J. Chen, C. Richard, A.H. Sayed, "Multitask diffusion adaptation over networks," IEEE Transactions on Signal Processing, vol. 62, no. 16, pp. 4129-4144, Aug. 2014.
[35] Y. Xia, D.P. Mandic, A.H. Sayed, "An adaptive diffusion augmented CLMS algorithm for distributed filtering of noncircular complex signals," IEEE Signal Processing Letters, vol. 18, no. 11, pp. 659-662, Nov. 2011.
[36] L. Lu, H.Q. Zhao, "Active impulsive noise control using maximum correntropy with adaptive kernel size," Mechanical Systems and Signal Processing, vol. 87, pp. 180-191, Mar. 2017.
[37] L. Lu, H.Q. Zhao, "Improved filtered-x least mean kurtosis algorithm for active noise control," Circuits, Systems, and Signal Processing, vol. 36, no. 4, pp. 1586-1603, Apr. 2017.
[38] M.S. Salman, "Sparse leaky-LMS algorithm for system identification and its convergence analysis," International Journal of Adaptive Control and Signal Processing, vol. 28, no. 10, pp. 1065-1072, Oct. 2015.
[39] L. Lu, H.
Zhao, "Diffusion leaky LMS algorithm: Analysis and implementation," Signal Processing, vol. 140, pp. 77-86, Nov. 2017.
[40] V.H. Nascimento, A.H. Sayed, "Unbiased and stable leakage-based adaptive filters," IEEE Transactions on Signal Processing, vol. 47, no. 12, pp. 3261-3276, Dec. 1999.
[41] W.U. Bajwa, J. Haupt, G. Raz, R. Nowak, "Compressed channel sensing," in Proc. 42nd Annu. Conf. Inf. Sci. Syst. (CISS), Princeton, NJ, 2008, pp. 5-10.
[42] D.L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.
[43] J.A. Bazerque, G.B. Giannakis, "Distributed spectrum sensing for cognitive radio networks by exploiting sparsity," IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1847-1862, Mar. 2010.
[44] Y. Liu, C. Li, Z. Zhang, "Diffusion sparse least-mean squares over networks," IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4480-4485, Aug. 2012.
[45] P.D. Lorenzo, A.H. Sayed, "Sparse distributed learning based on diffusion adaptation," IEEE Transactions on Signal Processing, vol. 61, no. 6, pp. 1419-1433, Mar. 2013.
[46] K. Shi, P. Shi, "Convergence analysis of sparse LMS algorithms with l1-norm penalty based on white input signal," Signal Processing, vol. 90, no. 12, pp. 3289-3293, Dec. 2010.
[47] Y. Murakami, M. Yamagishi, M. Yukawa, I. Yamada, "A sparse adaptive filtering using time-varying soft-thresholding techniques," in Proc. IEEE ICASSP, Dallas, TX, Mar. 2010, pp. 3734-3737.
[48] J. Chen, C. Richard, Y. Song, D. Brie, "Transient performance analysis of zero-attracting LMS," IEEE Signal Processing Letters, vol. 23, no. 12, pp. 1786-1790, Dec. 2016.
[49] W. Wang, H. Zhao, B.
Chen, "Bias compensated zero attracting normalized least mean square adaptive filter and its performance analysis," Signal Processing, vol. 143, pp. 94-105, Feb. 2018.
[50] J. Chen, C. Richard, J.-C.M. Bermudez, P. Honeine, "Nonnegative least-mean-square algorithm," IEEE Transactions on Signal Processing, vol. 59, no. 11, pp. 5225-5235, Nov. 2011.
[51] S. Zhang, J. Zhang, "Transient analysis of zero attracting NLMS algorithm without Gaussian inputs assumption," Signal Processing, vol. 97, pp. 100-109, Apr. 2014.
[52] G. Su, J. Jin, Y. Gu, J. Wang, "Performance analysis of l0 norm constraint least mean square algorithm," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2223-2235, May 2012.
[53] A.H. Sayed, T.Y. Al-Naffouri, "Mean-square analysis of normalized leaky adaptive filters," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2001, pp. 3873-3876.
