Mean-Square Performance Analysis of Noise-Robust Normalized Subband Adaptive Filter Algorithm


Authors: Yi Yu, Haiquan Zhao, Badong Chen

Yi Yu 1, Haiquan Zhao 2, Badong Chen 3, Wenyuan Wang 2, Lu Lu 4
1 School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
2 School of Electrical Engineering, Southwest Jiaotong University, Chengdu, China
3 School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, China
4 School of Electronics and Information Engineering, Sichuan University, Chengdu, China

Abstract: This paper studies the statistical models of the noise-robust normalized subband adaptive filter (NR-NSAF) algorithm in the mean and mean-square-deviation senses, covering both transient and steady-state behavior, by resorting to the vectorization and Kronecker product of matrices. The proposed analysis therefore does not require a Gaussian assumption on the input signal. Moreover, it removes the paraunitary assumption on the analysis filter banks made in existing analyses of subband adaptive algorithms. Simulation results under various conditions demonstrate the effectiveness of the theoretical analysis. For a special form of the algorithm, the proposed steady-state expression is also more accurate than the previous analysis.

1. Introduction

Adaptive filter algorithms hold a pivotal position in applications such as system identification, channel equalization, active noise control, and echo cancellation [1], [2]. One of the most popular algorithms is the normalized least mean square (NLMS) algorithm, which is simple and easy to implement. Nevertheless, it suffers from a very slow convergence rate for correlated input signals. To overcome this problem, the subband adaptive filter (SAF) with a multiband structure has attracted much attention thanks to its decorrelation property. The multiband structure eliminates the aliasing and band-edge effects of the conventional structure [3].
Accordingly, the normalized SAF (NSAF) algorithm was proposed by Lee and Gan in [3]. Compared with the NLMS, the NSAF converges faster when the input signal is highly correlated in the time domain, while retaining comparable computational complexity. Over the past decade, many works have been reported to further improve the performance of the NSAF [4]-[12]. Typically, inspired by the NLMS with reuse of past weight vectors at each update [13], a noise-robust NSAF (NR-NSAF) algorithm was proposed that improves the steady-state performance in highly noisy environments [8]; almost at the same time, Ni proposed an improved NSAF (INSAF) algorithm [9]. The NR-NSAF algorithm is a more general form of the INSAF algorithm.

Performance analysis is a crucial point in the study of adaptive filter algorithms [2], [14]-[18]. Much literature has addressed the performance analysis of the NSAF algorithm [19]-[23]. Specifically, the steady-state mean-square error (MSE) of the NSAF with a fixed step size and with a fixed regularization parameter was studied in [19] and [20], respectively. In some applications, e.g., system identification and echo cancellation, the adaptive filter estimates the impulse response of the underlying system, so studying the mean square deviation (MSD) performance of adaptive algorithms is more appropriate than the MSE. In general, the MSE can also be obtained from the MSD through the autocovariance matrix of the input vector. In [24], Jeong et al. analyzed the steady-state MSD of the INSAF algorithm, and this analysis framework has also been extended to the under-modeling scenario [25] and to an affine projection variant [10]. The theoretical results coincide with the simulations, but their accuracy depends on a large number of subbands and a long adaptive filter. Moreover, the transient behavior of the INSAF algorithm has not been studied.
In this paper, we analyze the MSD performance of the NR-NSAF algorithm. Our analysis is based on the method of vectorization and the Kronecker product of matrices developed originally by Sayed [2]. This method has become very popular, since it does not force the input signal to follow a specified model (e.g., a Gaussian distribution). Our contributions are summarized as follows: 1) we analyze the transient and steady-state MSD of the NR-NSAF algorithm; 2) we provide the mean-sense condition on the step size that ensures the stability of the algorithm. Moreover, unlike existing analyses of subband adaptive algorithms, the proposed analysis does not assume the analysis filter banks to be paraunitary. Extensive simulations verify the proposed theoretical results.

Notations: (·)^T, ||·||_2, λ_max(·), E{·}, Tr(·), ρ(·), and ⊗ denote the transpose operator, the Euclidean norm of a vector, the largest eigenvalue of a matrix, the expectation of a random variable, the trace of a matrix, the spectral radius of a matrix, and the Kronecker product, respectively. The notation diag{·} yields the diagonal matrix built from its vector argument. The vectorization operator vec(·) transforms an M×M matrix into an M^2×1 column vector by stacking the columns of the matrix successively, and vec^{-1}(·) is the inverse of vec(·). The symbols I and 0 denote the identity and zero matrices of appropriate sizes, respectively. All vectors in this paper are column vectors.

2. The NR-NSAF Algorithm

Suppose that the desired signal d(n) is given by the linear model with respect to the input signal u(n):

d(n) = u^T(n) w_o + ν(n),    (1)

where w_o is an unknown M-length vector that needs to be identified, u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T is the input vector, and ν(n) is the system noise. Fig. 1 shows the multiband-structured SAF with N subbands.
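As a concrete illustration of the model (1), the following sketch (illustrative; all names and parameter values are my own choices, echoing the simulation setup used later in the paper) builds a correlated input through a first-order autoregressive system with a pole at 0.9 and forms d(n) = u^T(n) w_o + ν(n) at a prescribed SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_samples, snr_db = 16, 4000, 10

# correlated input: white Gaussian noise through a first-order AR system, pole 0.9
white = rng.standard_normal(n_samples)
u = np.zeros(n_samples)
for n in range(1, n_samples):
    u[n] = 0.9 * u[n - 1] + white[n]

w_o = rng.standard_normal(M)        # unknown system to be identified
w_o /= np.linalg.norm(w_o)          # normalized so that w_o^T w_o = 1

# desired signal d(n) = u^T(n) w_o + nu(n), u(n) = [u(n), ..., u(n-M+1)]^T,
# which is exactly a convolution of u with w_o
d_clean = np.convolve(u, w_o)[:n_samples]
noise_var = np.var(d_clean) / 10 ** (snr_db / 10)
d = d_clean + rng.normal(0.0, np.sqrt(noise_var), n_samples)
```

This fullband pair (u(n), d(n)) is what the analysis filter bank of Fig. 1 subsequently splits into the N subband signals.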
The input signal u(n) and the desired signal d(n) are partitioned into multiple subband signals u_i(n) and d_i(n) by the analysis filters H_i(z), i = 0, 1, ..., N-1, respectively. The subband outputs y_i(n) are obtained by filtering the signals u_i(n) through a fullband adaptive filter whose weight vector is denoted as w(k) = [w_1(k), w_2(k), ..., w_M(k)]^T. Then, by N-fold decimating the signals y_i(n) and d_i(n), we obtain y_{i,D}(k) and d_{i,D}(k) at the lower sampling rate, respectively. We use n and k to index the original and the decimated sequences, respectively. For the i-th subband, the decimated error signal is expressed as

e_{i,D}(k) = d_{i,D}(k) - u_i^T(k) w(k),    (2)

where u_i(k) = [u_i(kN), u_i(kN-1), ..., u_i(kN-M+1)]^T and d_{i,D}(k) = d_i(kN).

Fig. 1 Multiband-structured SAF. [Block diagram: the unknown system w_o and the adaptive filter w(k) are driven by u(n); the analysis filters H_i(z) followed by N-fold decimators produce the subband signals u_i(n), d_{i,D}(k), y_{i,D}(k), and e_{i,D}(k); synthesis filters G_i(z) reconstruct the fullband error e(n).]

In [8], the NR-NSAF algorithm for updating the weight vector is described as

w(k+1) = Σ_{p=0}^{P-1} λ_p w(k-p) + μ Σ_{i=0}^{N-1} [ε_{i,D}(k) / (||u_i(k)||_2^2 + δ)] u_i(k),    (3)

ε_{i,D}(k) = d_{i,D}(k) - u_i^T(k) Σ_{p=0}^{P-1} λ_p w(k-p),    (4)

where λ_p = β^p / Σ_{q=0}^{P-1} β^q, β (0 < β ≤ 1) is a weighting factor, μ > 0 is the step size, δ > 0 is a small regularization constant to avoid division by zero, and P denotes the number of recent weight vectors reused at each iteration. Note that the NR-NSAF algorithm reduces to the INSAF and NSAF algorithms when β = 1 and P = 1, respectively.
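One iteration of the update (3)-(4) can be sketched as follows (function and variable names are my own; the decimated subband regressors u_i(k) and desired samples d_{i,D}(k) are assumed to be already available from the analysis filter bank and decimators):

```python
import numpy as np

def nr_nsaf_update(w_hist, U_sub, d_sub, mu=0.5, beta=1.0, delta=1e-3):
    """One NR-NSAF iteration, equations (3)-(4).

    w_hist : list of the P most recent weight vectors [w(k), w(k-1), ..., w(k-P+1)]
    U_sub  : N x M array whose i-th row is the decimated subband regressor u_i(k)
    d_sub  : length-N array of decimated subband desired samples d_{i,D}(k)
    """
    P = len(w_hist)
    lam = beta ** np.arange(P, dtype=float)
    lam /= lam.sum()                       # lambda_p = beta^p / sum_q beta^q, sums to 1
    w_bar = sum(l * w for l, w in zip(lam, w_hist))   # reused-weights combination
    w_new = w_bar.copy()
    for i in range(U_sub.shape[0]):
        eps_i = d_sub[i] - U_sub[i] @ w_bar           # subband error (4)
        w_new += mu * eps_i * U_sub[i] / (U_sub[i] @ U_sub[i] + delta)  # update (3)
    return w_new
```

With P = 1 the function reduces to the NSAF update, and with beta = 1 it reduces to the INSAF update, matching the special cases noted above.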
3. Performance Analysis

Let us introduce two matrices,

U(k) = [u(kN), u(kN-1), ..., u(kN-L+1)],    H = [h_0, h_1, ..., h_{N-1}],

and two vectors,

d(k) = [d(kN), d(kN-1), ..., d(kN-L+1)]^T,    u(kN) = [u(kN), u(kN-1), ..., u(kN-M+1)]^T,

where h_i is the impulse response of the i-th analysis filter H_i(z), with length L. Then, we can find the following relations:

U_D(k) = [u_0(k), u_1(k), ..., u_{N-1}(k)] = U(k) H,    (5)

d_D(k) = [d_{0,D}(k), d_{1,D}(k), ..., d_{N-1,D}(k)]^T = H^T d(k),    (6)

d(k) = U^T(k) w_o + η(k),    (7)

where η(k) = [ν(kN), ν(kN-1), ..., ν(kN-L+1)]^T. Using (4)-(6), we can rearrange (3) as

w(k+1) = Σ_{p=0}^{P-1} λ_p w(k-p) + μ U(k) H Λ^{-1}(k) [ H^T d(k) - H^T U^T(k) Σ_{p=0}^{P-1} λ_p w(k-p) ],    (8)

where Λ(k) = δ I_N + diag{U_D^T(k) U_D(k)}. Subtracting w_o from both sides of (8) yields

w̃(k+1) = Σ_{p=0}^{P-1} λ_p [ I_M - μ A(k) ] w̃(k-p) + μ b(k),    (9)

where w̃(k) = w(k) - w_o represents the weight-error vector, A(k) = U_D(k) Λ^{-1}(k) U_D^T(k), and b(k) = U_D(k) Λ^{-1}(k) H^T η(k). In deriving (9), we also use the relation Σ_{p=0}^{P-1} λ_p = 1. Before proceeding, we define some block matrices and vectors:

𝒜(k) = [ μA(k), 0_{M×M(P-1)}; 0_{M(P-1)×M}, 0_{M(P-1)×M(P-1)} ],    β̄ = [ β; I_{M(P-1)}, 0_{M(P-1)×M} ],

𝒲(k) = [ w̃(k); w̃(k-1); ...; w̃(k-P+1) ],    𝓑(k) = [ μb(k); 0_{M(P-1)×1} ],

where β = [λ_0 I_M, λ_1 I_M, ..., λ_{P-1} I_M]. With these definitions, (9) can be rewritten as

𝒲(k+1) = [ I_MP - 𝒜(k) ] β̄ 𝒲(k) + 𝓑(k).    (10)

Equation (10) is the starting point for analyzing the performance of the NR-NSAF algorithm. For a tractable analysis, the following assumptions are necessary.
A1): The system noise ν(n) is a white process with zero mean and variance σ_ν^2, which is independent of u(n).

A2): u(n) is a zero-mean stationary random vector with a positive definite covariance matrix.

A3): u_i(k) for i = 0, 1, ..., N-1 and w(k-p) for p = 0, ..., P-1 are independent of each other, which is the well-known independence assumption used for analyzing the performance of adaptive algorithms [2], [14]-[27].

From assumption A2), u_i(k) for i = 0, 1, ..., N-1 are also zero-mean stationary with positive definite covariance matrices. According to assumptions A1) and A3), we can further assume that 𝒲(k) is independent of 𝒜(k) and 𝓑(k).

3.1 Mean behavior

Under assumptions A1) and A3), taking the expectation of both sides of (10) yields

E{𝒲(k+1)} = [ I_MP - E{𝒜(k)} ] β̄ E{𝒲(k)}.    (11)

Theorem 1: The NR-NSAF algorithm is stable in the mean if, and only if, the step size satisfies

0 < μ < 2 / λ_max(E{A(k)}).    (12)

Proof: See the Appendix.

At the steady state, i.e., when k → ∞, we obtain from (11)

E{𝒲(∞)} = 0_{MP×1}.    (13)

This relation means that the NR-NSAF algorithm yields an unbiased estimate of the unknown vector w_o.

3.2 Mean square behavior

Post-multiplying (10) by its transpose and defining Φ(k) = 𝒲(k)𝒲^T(k), the following matrix recursion is developed:

Φ(k+1) = [I_MP - 𝒜(k)] β̄ Φ(k) β̄^T [I_MP - 𝒜(k)]^T + 𝓑(k)𝓑^T(k) + [I_MP - 𝒜(k)] β̄ 𝒲(k) 𝓑^T(k) + 𝓑(k) 𝒲^T(k) β̄^T [I_MP - 𝒜(k)]^T.    (14)

Under assumptions A1)-A3), the expectations of the last two terms in (14) are zero. Consequently, taking the expectation of both sides of (14), we have

E{Φ(k+1)} = β̄ E{Φ(k)} β̄^T - E{𝒜(k)} β̄ E{Φ(k)} β̄^T - β̄ E{Φ(k)} β̄^T E{𝒜^T(k)} + E{𝒜(k) β̄ Φ(k) β̄^T 𝒜^T(k)} + E{𝓑(k)𝓑^T(k)}.    (15)

The last expectation term of (15) can be further expressed as

E{𝓑(k)𝓑^T(k)} = [ μ^2 E{b(k)b^T(k)}, 0_{M×M(P-1)}; 0_{M(P-1)×M}, 0_{M(P-1)×M(P-1)} ],    (16)

E{b(k)b^T(k)} = E{U_D(k) Λ^{-1}(k) H^T η(k) η^T(k) H Λ^{-1}(k) U_D^T(k)} = σ_ν^2 Σ_{i=0}^{N-1} ||h_i||_2^2 E{ u_i(k) u_i^T(k) / (||u_i(k)||_2^2 + δ)^2 }.    (17)

To continue the analysis, we need two properties of the Kronecker product, namely,

vec(XZY) = (Y^T ⊗ X) vec(Z)    (18)

and

(X ⊗ Y)(Z ⊗ Ω) = (XZ) ⊗ (YΩ)    (19)

for any matrices {X, Y, Z, Ω} of compatible dimensions [28]. Subsequently, taking the vectorization of all the matrices in the recursion (15) and merging similar terms, it is established that

vec(E{Φ(k+1)}) = F vec(E{Φ(k)}) + vec(E{𝓑(k)𝓑^T(k)}),    (20)

where the M^2 P^2 × M^2 P^2 matrix F is given by

F = [ I_{M^2 P^2} - I_MP ⊗ E{𝒜(k)} - E{𝒜(k)} ⊗ I_MP + E{𝒜(k) ⊗ 𝒜(k)} ] (β̄ ⊗ β̄).    (21)

The MSD is defined as

MSD(k) = E{||w̃(k)||_2^2} = Tr(E{w̃(k)w̃^T(k)}) = Tr(E{Φ_1(k)}),    (22)

where E{Φ_1(k)} is the first M×M diagonal block of E{Φ(k)}. Based on the inverse operator vec^{-1}(·), the recursion (20) thus models the MSD evolution of the NR-NSAF algorithm over the iterations k. It is seen from (21) that the NR-NSAF algorithm converges in the mean square sense if, and only if, the matrix F is stable, i.e., all the eigenvalues of F lie in the range (-1, 1) [14]. However, obtaining the step-size range from this condition directly is difficult due to the presence of the matrix β̄ ⊗ β̄. Fortunately, we have deduced the mean square convergence condition 0 < μ < 2 by an alternative method; see the Appendix in [12].
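As a numerical companion to the recursion (20)-(22), the sketch below (all variable names are mine; the moments are Monte-Carlo averages over white Gaussian stand-in regressors, whereas the analysis uses ensemble averages over the true decimated subband inputs) builds the matrix F of (21), checks the mean-square stability ρ(F) < 1 together with the mean bound (12), iterates the vectorized recursion (20), and reads MSD(k) off the first M×M diagonal block as in (22); the fixed point of the same recursion gives the steady-state value:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P, mu, delta, sig2 = 4, 4, 2, 0.5, 1e-3, 1e-2
lam = np.array([1.0, 0.5]); lam /= lam.sum()      # lambda_p for beta = 0.5, P = 2

# companion matrix beta_bar of the extended recursion (10)
Bbar = np.zeros((M * P, M * P))
for p in range(P):
    Bbar[:M, p * M:(p + 1) * M] = lam[p] * np.eye(M)
Bbar[M:, :-M] = np.eye(M * (P - 1))

# Monte-Carlo moments E{A_ext}, E{A_ext (x) A_ext}, E{B B^T}
S1 = np.zeros((M * P, M * P))
S2 = np.zeros(((M * P) ** 2, (M * P) ** 2))
Rb = np.zeros((M * P, M * P))
trials = 400
for _ in range(trials):
    U = rng.standard_normal((M, N))               # columns stand in for u_i(k)
    Lam = delta + np.sum(U ** 2, axis=0)          # ||u_i(k)||^2 + delta
    A = (U / Lam) @ U.T                           # A(k) = U_D Lambda^{-1} U_D^T
    b = (U / Lam) @ rng.normal(0.0, np.sqrt(sig2), N)   # noise-driven b(k)
    Aext = np.zeros((M * P, M * P)); Aext[:M, :M] = mu * A
    Bk = np.zeros(M * P); Bk[:M] = mu * b
    S1 += Aext; S2 += np.kron(Aext, Aext); Rb += np.outer(Bk, Bk)
S1 /= trials; S2 /= trials; Rb /= trials

I2 = np.eye(M * P)
F = (np.kron(I2, I2) - np.kron(I2, S1) - np.kron(S1, I2) + S2) @ np.kron(Bbar, Bbar)  # (21)
rho_F = np.abs(np.linalg.eigvals(F)).max()        # mean-square stability needs rho(F) < 1
mu_max = 2.0 / np.linalg.eigvalsh(S1[:M, :M] / mu).max()   # mean-stability bound (12)

# transient MSD via the vectorized recursion (20) and the readout (22)
wt0 = np.tile(np.ones(M) / np.sqrt(M), P)         # illustrative stacked initial error
phi = np.kron(wt0, wt0)                           # vec(Phi(0)), since vec(x x^T) = x (x) x
r = Rb.reshape(-1, order="F")                     # vec(E{B B^T})
msd = []
for _ in range(300):
    msd.append(np.trace(phi.reshape((M * P, M * P), order="F")[:M, :M]))
    phi = F @ phi + r
# steady state: the fixed point of (20)
phi_inf = np.linalg.solve(np.eye((M * P) ** 2) - F, r)
msd_inf = np.trace(phi_inf.reshape((M * P, M * P), order="F")[:M, :M])
```

Because F has (MP)^2 rows and columns, this direct construction is only practical for short filters; with the paper's M = 16 and P = 3, F is 2304×2304, which is still tractable.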
If the algorithm converges to the steady state, the equality E{Φ(k+1)} = E{Φ(k)} holds as k → ∞. Accordingly, we arrive at the steady-state solution of (20):

vec(E{Φ(∞)}) = [ I_{M^2 P^2} - F ]^{-1} vec(E{𝓑(k)𝓑^T(k)}).    (23)

Using the relation Tr(X^T Y) = vec^T(X) vec(Y), the steady-state MSD of the NR-NSAF algorithm can be derived from (23) as

MSD(∞) = Tr(E{Φ_1(∞)}) = vec^T(P̄) [ I_{M^2 P^2} - F ]^{-1} vec(E{𝓑(k)𝓑^T(k)}),    (24)

where P̄ is the MP×MP matrix whose first M×M diagonal block is I_M and whose remaining entries are zero.

Remark 1. Reference [24] presents an MSD(∞) expression for the special form of the NR-NSAF algorithm with β = 1 and δ = 0 (that is, the INSAF algorithm). Nonetheless, it relies on two extra assumptions: 1) the number of subbands is sufficiently large that the decimated subband input signals are close to white, i.e., E{u_i(k) u_i^T(k)} ≈ σ_{u_i}^2 I_M and E{u_i^T(k) u_i(k)} ≈ M σ_{u_i}^2; 2) the length M of the adaptive filter is large, so that the fluctuation of the energy of the decimated subband input signals from one iteration to the next is small. In addition, the MSD(∞) in [24] requires knowing the variances of the subband noises, σ_{ν_i}^2, which are usually given by σ_{ν_i}^2 = σ_ν^2 / N under the paraunitary assumption on the analysis filters [21]. Instead, we propose to use σ_{ν_i}^2 = ||h_i||_2^2 σ_ν^2.

4. Simulation Results

Simulations are conducted in a system identification setup. Both the adaptive filter and the unknown vector have the same length M = 16. The weight vector of the adaptive filter is initialized as the null vector. The correlated input signal u(n) is generated by filtering either a zero-mean white Gaussian signal with unit variance or a signal uniformly distributed over the interval [-1, 1] through a first-order autoregressive system with a pole at 0.9 [22]; these are called the Gaussian input and the uniform input in the simulations, respectively.
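Expectations such as the one in (17) have no convenient closed form for a general analysis bank, but they can be estimated by ensemble averaging. The sketch below (names are mine; a two-band Haar analysis bank serves as a small stand-in for a cosine-modulated bank, and the AR(1) input with a pole at 0.9 matches the setup above) accumulates the ||h_i||^2-weighted subband outer products of (17):

```python
import numpy as np

rng = np.random.default_rng(7)
M, N, delta, sig2_nu = 8, 2, 1e-3, 1e-2
# two-band Haar analysis bank as a small stand-in; rows are h_0 and h_1
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

def subband_regressors(u, k):
    """u_i(k) = [u_i(kN), u_i(kN-1), ..., u_i(kN-M+1)]^T for i = 0, ..., N-1."""
    regs = []
    for i in range(N):
        ui = np.convolve(u, H[i])               # subband signal u_i(n)
        regs.append(ui[k * N:k * N - M:-1])     # decimated regressor, length M
    return np.array(regs)

# ensemble average of sigma_nu^2 * sum_i ||h_i||^2 E{u_i u_i^T/(||u_i||^2+delta)^2}, (17)
R = np.zeros((M, M))
trials, k = 400, 50
for _ in range(trials):
    white = rng.standard_normal(200)
    u = np.zeros(200)
    for n in range(1, 200):                     # AR(1) input with pole 0.9
        u[n] = 0.9 * u[n - 1] + white[n]
    for i, ui in enumerate(subband_regressors(u, k)):
        norm2 = ui @ ui + delta
        R += (H[i] @ H[i]) * np.outer(ui, ui) / norm2 ** 2   # weight ||h_i||^2
R *= sig2_nu / trials                           # estimate of E{b b^T} per (17)
```

The resulting matrix R is the noise-driven term that feeds the transient recursion and the steady-state expression.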
The system noise ν(n) is a zero-mean white Gaussian process, scaled to give a certain signal-to-noise ratio (SNR). The analysis filters are designed based on cosine-modulated filter banks, where the length of the prototype filter is 64 when the number of subbands is N = 8, unless otherwise specified. For evaluating the proposed theoretical expressions, the expectations of the subband inputs appearing in (17) and (21) are estimated by ensemble averaging. The regularization constant δ is set to 0.001 except for Fig. 8. All simulation results are averaged over 200 independent trials.

4.1 Transient performance

To begin with, the mean evolution behavior of the algorithm is checked in Fig. 2 for identifying the unknown vector w_o = [0.51, -0.04, 0.02, 0.09, 0.22, 0.20, 0.13, -0.48, -0.39, 0.32, -0.11, -0.30, 0.25, -0.24, 0.6, -0.01]^T. The theoretical weights calculated by (11) match well with the simulated weights.

Fig. 2 Mean behavior of the algorithm. (a) β = 0.5, (b) β = 1. [Gaussian input, SNR = 10 dB, μ = 0.5, and P = 3].

In the following examples, we investigate the MSD performance of the algorithm via 10 log_10 MSD(k) (dB). The unknown vector is randomly generated using rand(M,1) - 0.5 in MATLAB and normalized so that w_o^T w_o = 1. Fig. 3 shows the effect of the parameter β on the NR-NSAF performance. As can be seen, the theoretical MSD curves computed by (20) are in good agreement with the simulated curves. In addition, for values of β closer to 1, the steady-state MSD is lower while the convergence rate remains comparable. Therefore, β = 1 is preferred for the NR-NSAF algorithm.

Fig. 3 MSD curves of the algorithm versus β values (β = 1, 0.5, 0.2). [Gaussian input, SNR = 10 dB, μ = 0.5, and P = 3].

Fig. 4 depicts the MSD results of the NR-NSAF algorithm with P = 1, 2, 3, and 4, where P = 1 corresponds to the NSAF algorithm. Again, the theoretical calculation fits the simulation well. Fig. 4(a) also shows that, in a low-SNR scenario, the NR-NSAF algorithm achieves a smaller steady-state MSD for larger P without slowing convergence. In a high-SNR case, see Fig. 4(b), however, the convergence of the algorithm slows as P increases. It follows that the NR-NSAF algorithm works better than the NSAF algorithm when the environment is highly noisy.

Fig. 4 MSD curves of the algorithm versus P values. (a) SNR = 10 dB, (b) SNR = 40 dB. [μ = 0.5 and β = 1].

In Fig. 5, the MSD curves of the NR-NSAF algorithm with different step sizes (μ = 0.8, 0.3, and 0.1) are shown for both Gaussian and uniform inputs. Fig. 6 depicts similar results when the number of subbands is N = 4. As one can see, the theoretical results agree fairly well with the simulated results. It is worth noting that, as with any constant-step-size adaptive algorithm, the user needs to trade off fast convergence against low steady-state MSD when choosing the step size for the NR-NSAF algorithm.

Fig. 5 MSD curves of the algorithm versus step sizes. (a) Gaussian input, (b) uniform input. [SNR = 10 dB, N = 8, P = 3, and β = 1].

Fig. 6 MSD curves of the algorithm versus step sizes. (a) Gaussian input, (b) uniform input. [SNR = 10 dB, N = 4, P = 3, and β = 1].

4.2 Steady-state performance

Fig. 7 examines the effectiveness of (24) for predicting the steady-state MSD of the NR-NSAF algorithm as a function of the step size. The simulated values are the average over 200 MSDs at the steady-state stage. The step size μ is increased from 0.1 to 1. For a fair comparison with the theory presented in [24], we choose β = 1 and δ = 0. As can be seen from Fig. 7, the proposed theoretical results match the simulated results well for smaller step sizes, while a discrepancy can be observed for larger step sizes. In comparison, the proposed (24) predicts the steady-state MSD of the NR-NSAF algorithm better than the theory from [24]. This is because the model in [24] requires a large enough number of subbands and a long adaptive filter.

Fig. 7 Steady-state MSDs versus step sizes, for P = 1, 2, 3, and 4: proposed theory and theory from [24]. [Gaussian input].

5. Conclusion

We have analyzed in detail the performance of the NR-NSAF algorithm in terms of the transient and steady-state MSD. The proposed analysis is based on the vectorization and the Kronecker product of matrices, and thereby drops any specific distribution assumption on the input signal. In addition, the paraunitary assumption on the analysis filter banks is unnecessary in our analysis.
For the special case of the INSAF algorithm, the proposed steady-state expression outperforms the previous theory in [24] in the low-order adaptive filter scenario. Simulation results have shown good agreement with the theoretical results.

Appendix

To ensure the stability of the recursion (11) as the iteration k evolves, all the eigenvalues of the matrix Ξ = [I_MP - E{𝒜(k)}] β̄ must lie inside the unit circle, i.e., ρ(Ξ) < 1. We rearrange Ξ as

Ξ = [ Ξ_0, Ξ_1, ..., Ξ_{P-1}; I_{M(P-1)}, 0_{M(P-1)×M} ],    (31)

where Ξ_p = λ_p [ I_M - μ E{A(k)} ] for p = 0, 1, ..., P-1. To proceed, we take advantage of the block maximum norm, denoted ||·||_{b,∞} [29]:

||x||_{b,∞} = max_{0≤p≤P-1} ||x_p||_2,    ||Ψ||_{b,∞} = max_{x≠0} ||Ψx||_{b,∞} / ||x||_{b,∞},    (32)

where Ψ is an MP×MP matrix with block entries of size M×M each, and x = [x_0^T, ..., x_{P-1}^T]^T is an MP×1 vector with block entries of size M×1 each. Then, we obtain the following inequalities:

||Ξ||_{b,∞} = max_{x≠0} max{ ||Σ_{p=0}^{P-1} Ξ_p x_p||_2, max_{0≤p≤P-2} ||x_p||_2 } / max_{0≤p≤P-1} ||x_p||_2,    (33)

||Σ_{p=0}^{P-1} Ξ_p x_p||_2 ≤ Σ_{p=0}^{P-1} λ_p ||[I_M - μE{A(k)}] x_p||_2 ≤ ||I_M - μE{A(k)}||_2 max_{0≤p≤P-1} ||x_p||_2.    (34)

Inserting (34) into (33) yields

||Ξ||_{b,∞} ≤ max{ ||I_M - μE{A(k)}||_2, 1 }.    (35)

Since the spectral radius of a matrix is upper bounded by any of its norms [30], it can be established that

ρ(Ξ) ≤ ||Ξ||_{b,∞} ≤ max{ ||I_M - μE{A(k)}||_2, 1 },    (36)

which leads further to the condition

||I_M - μE{A(k)}||_2 < 1.    (37)

Equation (36) means that any eigenvalue λ_j of Ξ satisfies |λ_j| ≤ 1 for j = 1, ..., MP. It is stressed that Ξ could still have an eigenvalue λ with |λ| = 1; however, (37) removes this possibility. To prove this, assume that such an eigenvalue exists, with an MP×1 eigenvector x = [x_0^T, ..., x_{P-1}^T]^T.
Again using (31), the relation

Ξ x = e^{jθ} x    (38)

with angle θ can be expanded blockwise as

Σ_{p=0}^{P-1} Ξ_p x_p = e^{jθ} x_0,    x_p = e^{jθ} x_{p+1} for p = 0, ..., P-2,    (39)

which further reduces to

Σ_{p=0}^{P-1} e^{-j(p+1)θ} Ξ_p x_{P-1} = x_{P-1}.    (40)

Note that x_{P-1} ≠ 0, since otherwise (39) would force x = 0. By the triangle inequality of norms, we obtain

||x_{P-1}||_2 = ||Σ_{p=0}^{P-1} e^{-j(p+1)θ} Ξ_p x_{P-1}||_2 ≤ Σ_{p=0}^{P-1} λ_p ||[I_M - μE{A(k)}] x_{P-1}||_2 = ||[I_M - μE{A(k)}] x_{P-1}||_2 ≤ ||I_M - μE{A(k)}||_2 ||x_{P-1}||_2 < ||x_{P-1}||_2.    (41)

Since (41) contradicts the assumption |λ| = 1, we have |λ_j| < 1 for every eigenvalue λ_j of Ξ [17]. Subsequently, by means of the eigenvalue decomposition of E{A(k)}, the range of the step size that guarantees the mean stability of the algorithm is obtained from (37):

0 < μ < 2 / λ_max(E{A(k)}).    (42)

6. References

[1] J. Benesty and Y. Huang, Adaptive Signal Processing: Applications to Real-World Problems. Berlin, Germany: Springer-Verlag, 2003.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering. Hoboken, NJ, USA: Wiley, 2003.
[3] K. A. Lee and W. S. Gan, "Improving convergence of the NLMS algorithm using constrained subband updates," IEEE Signal Process. Lett., vol. 11, no. 9, pp. 736-739, 2004.
[4] J. Ni and F. Li, "A variable step-size matrix normalized subband adaptive filter," IEEE Trans. Audio Speech Lang. Process., vol. 18, no. 6, pp. 1290-1299, 2010.
[5] J. J. Jeong, K. Koo, G. T. Choi, and S. W. Kim, "A variable step size for normalized subband adaptive filters," IEEE Signal Process. Lett., vol. 19, no. 12, pp. 906-909, 2012.
[6] J. H. Seo and P. G. Park, "Variable individual step-size subband adaptive filtering algorithm," Electron. Lett., vol. 50, no. 3, pp. 177-178, 2014.
[7] F. Yang, M. Wu, P. Ji, and J. Yang, "An improved multiband-structured subband adaptive filter algorithm," IEEE Signal Process. Lett., vol. 19, no. 10, pp. 647-650, 2012.
[8] Y. S. Choi, S. E. Kim, and W. J. Song, "Noise-robust normalised subband adaptive filtering," Electron. Lett., vol. 48, no. 8, pp. 432-434, 2012.
[9] J. Ni, "Improved normalised subband adaptive filter," Electron. Lett., vol. 48, no. 6, pp. 320-321, 2012.
[10] H. Zhao, Z. Zheng, Z. Wang, and B. Chen, "Improved affine projection subband adaptive filter for high background noise environments," Signal Process., vol. 137, pp. 356-362, 2017.
[11] Y. Yu, H. Zhao, R. C. de Lamare, and L. Lu, "Sparsity-aware subband adaptive algorithms with adjustable penalties," Digital Signal Process., vol. 84, pp. 93-106, 2018.
[12] Y. Yu, H. Zhao, and B. Chen, "Set-membership improved normalised subband adaptive filter algorithms for acoustic echo cancellation," IET Signal Process., vol. 12, no. 1, pp. 42-50, 2018.
[13] H. Cho, C. W. Lee, and S. W. Kim, "Derivation of a new normalized least mean squares algorithm with modified minimization criterion," Signal Process., vol. 89, no. 4, pp. 692-695, 2009.
[14] H. C. Shin and A. H. Sayed, "Mean-square performance of a family of affine projection algorithms," IEEE Trans. Signal Process., vol. 52, no. 1, pp. 90-102, 2004.
[15] S. Zhang, J. Zhang, and H. C. So, "Mean square deviation analysis of LMS and NLMS algorithms with white reference inputs," Signal Process., vol. 131, pp. 20-26, 2017.
[16] S. E. Kim, J. W. Lee, and W. J. Song, "A theory on the convergence behavior of the affine projection algorithm," IEEE Trans. Signal Process., vol. 59, no. 12, pp. 6233-6239, 2011.
[17] S. E. Kim, J. W. Lee, and W. J. Song, "A noise-resilient affine projection algorithm and its convergence analysis," Signal Process., vol. 121, pp. 94-101, 2016.
[18] M. S. E. Abadi, M. S. Shafiee, and M. Zalaghi, "A low computational complexity normalized subband adaptive filter algorithm employing signed regressor of input signal," EURASIP J. Adv. Signal Process., vol. 2018, no. 1, pp. 1-23, 2018.
[19] K. A. Lee, W. S. Gan, and S. M. Kuo, "Mean-square performance analysis of the normalized subband adaptive filter," in Proc. 40th Asilomar Conf. Signals, Syst., Comput., 2006, pp. 248-252.
[20] J. Ni and X. Chen, "Steady-state mean-square error analysis of regularized normalized subband adaptive filters," Signal Process., vol. 93, no. 9, pp. 2648-2652, 2013.
[21] W. Yin and A. S. Mehr, "Stochastic analysis of the normalized subband adaptive filter algorithm," IEEE Trans. Circuits Syst. I: Reg. Papers, vol. 58, no. 5, pp. 1020-1033, 2011.
[22] J. J. Jeong, S. H. Kim, G. Koo, and S. W. Kim, "Mean-square deviation analysis of multiband-structured subband adaptive filter algorithm," IEEE Trans. Signal Process., vol. 64, no. 4, pp. 985-994, 2016.
[23] F. Yang, M. Wu, P. Ji, Z. Kuang, and J. Yang, "Transient and steady-state analyses of the improved multiband-structured subband adaptive filter algorithm," IET Signal Process., vol. 9, no. 8, pp. 596-604, 2015.
[24] J. J. Jeong, K. Koo, G. Koo, and S. W. Kim, "Steady-state mean-square deviation analysis of improved normalized subband adaptive filter," Signal Process., vol. 106, pp. 49-54, 2015.
[25] Y. Yu, H. Zhao, and L. Lu, "Steady-state behavior of the improved normalized subband adaptive filter algorithm and its improvement in under-modeling," Signal, Image and Video Process., vol. 12, no. 4, pp. 617-624, 2018.
[26] T. Y. Al-Naffouri and A. H. Sayed, "Transient analysis of adaptive filters with error nonlinearities," IEEE Trans. Signal Process., vol. 51, no. 3, pp. 653-663, 2003.
[27] B. Chen, L. Xing, H. Zhao, N. Zheng, and J. C. Principe, "Generalized correntropy for robust adaptive filtering," IEEE Trans. Signal Process., vol. 64, no. 13, pp. 3376-3387, 2016.
[28] A. Graham, Kronecker Products and Matrix Calculus with Applications. New York: Halsted, 1981.
[29] A. H. Sayed, "Diffusion adaptation over networks," in E-Reference Signal Processing, S. Theodoridis and R. Chellappa, Eds. Amsterdam, The Netherlands: Elsevier, 2013.
[30] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 2012.
