Anti-measurement Matrix Uncertainty Sparse Signal Recovery for Compressive Sensing


Authors: Yipeng Liu, Qun Wan, Fei Wen, Jia Xu, Yingning Peng

Anti-measurement Matrix Uncertainty Sparse Signal Recovery for Compressive Sensing

1,2 Yipeng Liu, 1 Qun Wan, 1 Fei Wen, 2 Jia Xu, 2 Yingning Peng
1 Department of Electronic Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
2 Department of Electronic Engineering, Tsinghua University, Beijing 100084, China

ABSTRACT

Compressive sensing (CS) is a technique for estimating a sparse signal from the random measurements and the measurement matrix. Traditional sparse signal recovery methods degrade seriously under measurement matrix uncertainty (MMU). Here the MMU is modeled as a bounded additive error. An anti-uncertainty constraint in the form of a mixed $\ell_2$ and $\ell_1$ norm is deduced from the sparse signal model with MMU. We then combine the sparse constraint with the anti-uncertainty constraint to obtain an anti-uncertainty sparse signal recovery operator. Numerical simulations demonstrate that the proposed operator reconstructs the signal under MMU better than traditional methods.

Index Terms — compressive sensing, sparse signal recovery, measurement matrix uncertainty.

1. INTRODUCTION

Estimation of an unknown sparse vector from a limited number of observations is a common problem. Provided the signal has a sparse representation, it can be reconstructed with high probability from far fewer randomized samples than Nyquist sampling requires (Candes and Wakin, 2008). The vector is said to be sparse in that most of its coefficients are zero or very small and only a few are large. Sparse signal recovery refers to the problem of correctly estimating the positions and amplitudes of the nonzero entries from a set of linear measurements. It is of broad interest, arising in areas including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising.
Most sparse signal recovery algorithms, such as orthogonal matching pursuit (OMP) (Needell and Vershynin, 2009), basis pursuit (BP) (Chen et al., 1999) and the Dantzig selector (DS) (Candes and Tao, 2007), do not consider uncertainty in the measurement matrix, i.e., the case where the measurement matrix is observed with an additive error. The measurement matrix uncertainty (MMU) problem arises in many applications, including image processing with finite frequency grids, channel estimation with finite channel impulse response (CIR) grids, and ADCs with aliasing, jitter, finite quantization, aperture effects, and nonlinear effects. It is a common problem in signal processing. Under MMU, the existing sparse signal recovery algorithms (Candes and Tao, 2007; Chen et al., 1999; Needell and Vershynin, 2009) turn out to be extremely unstable. Apart from signal processing, MMU also arises in many other areas, such as model selection with missing data and portfolio selection.

To prevent the performance degradation caused by the MMU, this paper proposes a robust sparse signal recovery method. Combined with the sparse constraint in the form of $\ell_1$ norm minimization, a mixed $\ell_2$ and $\ell_1$ norm constraint involving the MMU parameter is imposed to allow a certain amount of uncertainty in the measurement matrix. Numerical experiments demonstrate that the proposed robust sparse signal recovery performs better than the traditional approach.

2. SIGNAL MODEL

Consider the random measurement model with MMU:

$$\mathbf{y} = \mathbf{A}\boldsymbol{\theta}, \quad \boldsymbol{\theta} \in \mathbb{R}^N \qquad (1)$$

$$\mathbf{B} = \mathbf{A} + \mathbf{V} \qquad (2)$$

where the $M \times 1$ vector $\mathbf{y}$ holds the random samples, the real $M \times N$ random measurement matrix $\mathbf{A}$ is unknown, $\mathbf{B}$ is the $M \times N$ observed measurement matrix corrupted by the additive noise $\mathbf{V}$, and the $N \times 1$ vector $\boldsymbol{\theta}$ is sparse, with most of its elements zero or close to zero.
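As a quick illustration, the measurement model (1)-(2) can be instantiated numerically. The sketch below (Python with NumPy; the dimensions follow the simulation section of the paper, while the perturbation scale and random seed are arbitrary assumptions) builds a sparse $\boldsymbol{\theta}$, a random sub-sampling matrix $\mathbf{A}$, and the perturbed observed matrix $\mathbf{B}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 500, 125, 6            # dimensions as used in the simulation section

# K-sparse signal: unit amplitudes at random positions
theta = np.zeros(N)
theta[rng.choice(N, size=K, replace=False)] = 1.0

# real measurement matrix A: M rows of the N x N identity, chosen at random
A = np.eye(N)[rng.choice(N, size=M, replace=False)]

# bounded additive uncertainty V, and the observed matrix B = A + V, eq. (2)
V = 0.01 * rng.uniform(-1.0, 1.0, size=(M, N))   # scale 0.01 is an assumption
B = A + V

# the samples are generated by the true (unknown) matrix A, eq. (1)
y = A @ theta
```

A recovery algorithm only ever sees `y` and `B`; the mismatch between `B` and the true `A` is exactly the MMU the paper addresses.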
Without loss of generality, we assume that $\mathbf{V}$ is random but bounded:

$$\|\mathbf{V}\| \le \delta \qquad (3)$$

where $\delta > 0$ and $\|\cdot\|$ stands for a matrix norm (in the derivation of Section 4, the maximum $\ell_1$ norm over the rows of $\mathbf{V}$).

3. TRADITIONAL SPARSE SIGNAL RECOVERY

There are two groups of traditional methods for reconstructing the sparse signal. One is convex programming, such as basis pursuit (BP) (Chen et al., 1999) and the Dantzig selector (DS) (Candes and Tao, 2007); the other is greedy algorithms, such as orthogonal matching pursuit (OMP) (Needell and Vershynin, 2009) and iterative thresholding (Blumensath and Davies, 2009). BP has almost the same performance as DS (James et al., 2009). Convex programming achieves better reconstruction accuracy than greedy algorithms, while greedy algorithms have lower computational complexity. For higher accuracy, BP is usually chosen to reconstruct the sparse signal.

To encourage sparsity, $\ell_0$ minimization is optimal but non-convex and known to be NP-hard. Its practical convex approximation is basis pursuit (BP), a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest $\ell_1$ norm of coefficients among all such decompositions. With the observed random measurement matrix $\mathbf{B}$ and the random measurement vector $\mathbf{y}$, it can be formulated as:

$$\min_{\boldsymbol{\theta}} \|\boldsymbol{\theta}\|_1, \quad \text{s.t. } \mathbf{y} = \mathbf{B}\boldsymbol{\theta} \qquad (4)$$

Problem (4) is a convex program (it can be cast as a second-order cone program, SOCP) and can be solved by many convex programming tools, such as SeDuMi (Sturm, 1999).

4. THE PROPOSED ROBUST SPARSE SIGNAL RECOVERY

As BP is designed without allowing for MMU, it mismatches the sparse signal model with MMU (1)-(2). The simulation in the next section also demonstrates that it performs poorly when estimating the sparse signal $\boldsymbol{\theta}$ from the measurements $\mathbf{y}$ and the observed measurement matrix $\mathbf{B}$.
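Problem (4) can also be solved without a dedicated SOCP toolbox. The sketch below (Python with NumPy/SciPy, used here in place of the paper's SeDuMi/MATLAB setup) casts BP as the standard equivalent linear program $\min \mathbf{1}^T(\mathbf{u}+\mathbf{v})$ s.t. $\mathbf{B}(\mathbf{u}-\mathbf{v})=\mathbf{y}$, $\mathbf{u},\mathbf{v}\ge 0$, with $\boldsymbol{\theta}=\mathbf{u}-\mathbf{v}$; the demo dimensions are illustrative assumptions, smaller than the paper's:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(B, y):
    """Solve (4): min ||theta||_1 s.t. B theta = y, as a linear program.

    Split theta = u - v with u, v >= 0, so that ||theta||_1 = 1^T (u + v).
    """
    M, N = B.shape
    c = np.ones(2 * N)                     # objective: sum(u) + sum(v)
    A_eq = np.hstack([B, -B])              # equality constraint B(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    u, v = res.x[:N], res.x[N:]
    return u - v

# small demo: recovery of a 3-sparse signal from 20 random measurements
rng = np.random.default_rng(0)
N, M, K = 40, 20, 3
theta = np.zeros(N)
theta[rng.choice(N, size=K, replace=False)] = 1.0
B = rng.standard_normal((M, N)) / np.sqrt(M)
y = B @ theta                              # no MMU here: B is exact
theta_hat = basis_pursuit(B, y)
print(np.max(np.abs(theta_hat - theta)))   # small recovery error expected
```

Note that the demo feeds BP the exact matrix; it is precisely when `y` is generated by `A` but the solver only sees `B = A + V` that this formulation breaks down, motivating the operator proposed next.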
To insert the MMU factor into the sparse signal recovery algorithm, we propose an anti-uncertainty constraint (AUC), which can be formulated as

$$\|\mathbf{y} - \mathbf{B}\boldsymbol{\theta}\|_2 \le \sqrt{M}\,\delta\,\|\boldsymbol{\theta}\|_1 \qquad (5)$$

where $\|\mathbf{x}\|_1 = \sum_i |x_i|$ and $\|\mathbf{x}\|_2 = \left(\sum_i x_i^2\right)^{1/2}$ are the $\ell_1$ norm and the $\ell_2$ norm of the vector $\mathbf{x} = [x_1, x_2, \dots, x_N]^T$. It gives the relationship between the squared error and the sparsity of the estimated signal with the MMU parameter incorporated. Combining the AUC with the sparse constraint, we get the anti-uncertainty operator (AUO):

$$\min_{\boldsymbol{\theta}} \|\boldsymbol{\theta}\|_1, \quad \text{s.t. } \|\mathbf{y} - \mathbf{B}\boldsymbol{\theta}\|_2 \le \sqrt{M}\,\delta\,\|\boldsymbol{\theta}\|_1 \qquad (6)$$

It can be reformulated as

$$\min_{\boldsymbol{\theta},\, t}\; t, \quad \text{s.t. } \|\mathbf{y} - \mathbf{B}\boldsymbol{\theta}\|_2 \le \sqrt{M}\,\delta\,t, \quad \|\boldsymbol{\theta}\|_1 \le t \qquad (7)$$

This is a convex program and can be solved by software such as SeDuMi (Sturm, 1999). With a tighter constraint coupling the squared error and the sparsity, the denoising performance is improved and higher reconstruction accuracy is obtained.

The detailed derivation of the AUC is as follows. Using (1) and (2),

$$\|\mathbf{y} - \mathbf{B}\boldsymbol{\theta}\|_2 = \|\mathbf{y} - (\mathbf{A} + \mathbf{V})\boldsymbol{\theta}\|_2 = \|(\mathbf{y} - \mathbf{A}\boldsymbol{\theta}) - \mathbf{V}\boldsymbol{\theta}\|_2 = \|\mathbf{V}\boldsymbol{\theta}\|_2 \qquad (8)$$

Here we define the row vectors $\mathbf{v}_m$, $m = 1, 2, \dots, M$, of the matrix $\mathbf{V}$ as

$$\mathbf{V} = \begin{bmatrix} \mathbf{v}_1 \\ \mathbf{v}_2 \\ \vdots \\ \mathbf{v}_M \end{bmatrix} \qquad (9)$$

Then (8) can be reformulated as

$$\|\mathbf{V}\boldsymbol{\theta}\|_2^2 = \sum_{m=1}^{M} |\mathbf{v}_m \boldsymbol{\theta}|^2 \qquad (10)$$

where $|\cdot|$ denotes the modulus of a scalar. It is easy to prove that

$$|\mathbf{v}_m \boldsymbol{\theta}| = |v_{m,1}\theta_1 + v_{m,2}\theta_2 + \dots + v_{m,N}\theta_N| \le \left(|v_{m,1}| + |v_{m,2}| + \dots + |v_{m,N}|\right)\left(|\theta_1| + |\theta_2| + \dots + |\theta_N|\right) = \|\mathbf{v}_m\|_1 \|\boldsymbol{\theta}\|_1 \qquad (11)$$

where $v_{m,i}$, $i = 1, 2, \dots, N$, is the $i$-th element of the vector $\mathbf{v}_m$, and $\theta_i$, $i = 1, 2, \dots, N$, is the $i$-th element of the vector $\boldsymbol{\theta}$. Substituting (11) into (10), we get

$$\|\mathbf{V}\boldsymbol{\theta}\|_2^2 \le \sum_{m=1}^{M} \|\mathbf{v}_m\|_1^2 \|\boldsymbol{\theta}\|_1^2 \qquad (12)$$

Then, obviously, we have

$$\|\mathbf{V}\boldsymbol{\theta}\|_2^2 \le M\,\|\mathbf{v}\|_{1,\max}^2 \|\boldsymbol{\theta}\|_1^2, \quad \text{i.e.}\quad \|\mathbf{V}\boldsymbol{\theta}\|_2 \le \sqrt{M}\,\|\mathbf{v}\|_{1,\max} \|\boldsymbol{\theta}\|_1 \qquad (13)$$

where $\|\mathbf{v}\|_{1,\max}$ denotes the maximum of the values $\|\mathbf{v}_m\|_1$, $m = 1, 2, \dots, M$. With condition (3), we get, for any vector $\mathbf{r}$,

$$\|\mathbf{V}\mathbf{r}\|_2 \le \sqrt{M}\,\delta\,\|\mathbf{r}\|_1 \qquad (14)$$

Combining (8) and (14), we get the AUC (5).
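The chain of inequalities (8)-(14) can be sanity-checked numerically. The sketch below (Python/NumPy; the matrix sizes and perturbation range are arbitrary test assumptions) draws a bounded perturbation, takes $\delta$ as the maximum row $\ell_1$ norm as the derivation does in (13)-(14), and verifies $\|\mathbf{V}\mathbf{r}\|_2 \le \sqrt{M}\,\delta\,\|\mathbf{r}\|_1$ over random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 200                           # arbitrary test sizes

# bounded perturbation matrix, with delta = max_m ||v_m||_1, cf. (3) and (13)
V = rng.uniform(-0.1, 0.1, size=(M, N))
delta = np.abs(V).sum(axis=1).max()

# check the AUC bound (14): ||V r||_2 <= sqrt(M) * delta * ||r||_1 for every r
for _ in range(1000):
    r = rng.standard_normal(N)
    lhs = np.linalg.norm(V @ r, 2)
    rhs = np.sqrt(M) * delta * np.abs(r).sum()
    assert lhs <= rhs
print("bound (14) holds on all random trials")
```

The bound is guaranteed analytically by (11)-(13), so the loop is a consistency check rather than a proof; the slack between `lhs` and `rhs` also shows how loose the relaxation can be.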
It is a relaxed version of the standard squared-error bound constraint, and it links the squared error to the sparsity measure of the estimated signal with the sampling distortion parameter incorporated.

5. SIMULATION

In this section we present simulation results to demonstrate the performance gain of the AUO. Without loss of generality, we assume the length of the sparse signal is N = 500, the length of the random measurement vector is M = 125, and the number of nonzero entries of the sparse signal is K = 6. The positions of the nonzero entries are randomly distributed, and the amplitudes of the nonzero entries are all set to one. V is a Gaussian random matrix, with the bound on the measurement matrix elements being δ = 0.7. The real measurement matrix A is a random sub-sampling matrix, generated by choosing M separate rows uniformly at random from the identity matrix.

Figure 1 shows the real sparse signal and the normalized sparse signals recovered by BP and the AUO, averaged over 1000 independent trials. Compared with the real sparse signal, the noise level is too high to correctly distinguish the nonzero entries of the signal recovered by BP. The AUO, however, successfully suppresses the noise: the recovered signal exhibits conspicuous nonzero entries whose positions correctly correspond to the real sparse signal.

To further demonstrate the performance of the proposed AUO, Monte Carlo simulation is used. A performance evaluation function is defined to represent the average number of incorrectly estimated elements over L Monte Carlo runs:

$$\xi = \frac{\#\{\hat{\theta}_i \ne 0 \mid \theta_i = 0\} + \#\{\hat{\theta}_i = 0 \mid \theta_i \ne 0\}}{L} \qquad (15)$$

where $\#\{\hat{\theta}_i \ne 0 \mid \theta_i = 0\}$ counts the estimated nonzero elements that are in fact zero, and $\#\{\hat{\theta}_i = 0 \mid \theta_i \ne 0\}$ counts the estimated zero elements that are in fact nonzero.
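The per-trial error count in (15) can be computed directly from the supports of the true and estimated signals. A minimal sketch (Python/NumPy; the function name and the tolerance used to decide "nonzero" are illustrative choices, not from the paper):

```python
import numpy as np

def incorrect_elements(theta_hat, theta, tol=1e-6):
    """Count the two error types in (15) for a single trial:
    estimated nonzero but truly zero, plus estimated zero but truly nonzero."""
    est_nz = np.abs(theta_hat) > tol
    true_nz = np.abs(theta) > tol
    false_alarms = np.count_nonzero(est_nz & ~true_nz)
    misses = np.count_nonzero(~est_nz & true_nz)
    return false_alarms + misses

# xi in (15) is the average of this count over L Monte Carlo trials
theta = np.array([1.0, 0.0, 0.0, 1.0])
theta_hat = np.array([0.9, 0.2, 0.0, 0.0])   # one false alarm, one miss
print(incorrect_elements(theta_hat, theta))   # → 2
```

Averaging `incorrect_elements` over the trials of Figures 2 and 3 would reproduce the $\xi$ curves shown there.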
Figure 2 gives the performance function ξ for different numbers of nonzero elements. For both the AUO and BP, ξ increases with the number of nonzero elements. Moreover, the AUO outperforms BP, achieving a smaller number of incorrectly estimated elements at every sparsity level. Figure 3 gives the performance function ξ for different numbers of measurements. Both the AUO and BP have ξ decrease as the number of measurements increases, and for the same number of measurements the AUO achieves a smaller ξ than BP. Under the conditions of this simulation, the AUO's performance gain over BP is most pronounced from M = 30 to M = 140.

6. CONCLUSION

In this paper, we propose a robust sparse signal recovery method for the case of measurement matrix uncertainty. By combining the AUC with $\ell_1$ norm minimization, a performance gain over BP is obtained when measurement matrix uncertainty is present. In the future, theoretical performance evaluations can be researched to further consolidate the performance of the proposed method. Besides, the proposed AUO is a convex program, and a corresponding greedy algorithm can be developed. Finally, the proposed AUO estimates a single vector from a single snapshot, and it can be generalized to the multiple measurement vectors (MMV) situation.

ACKNOWLEDGEMENT

This work was supported in part by the National Natural Science Foundation of China under grant 60772146, the National High Technology Research and Development Program of China (863 Program) under grant 2008AA12Z306, the Key Project of the Chinese Ministry of Education under grant 109139, the China National Science Foundation under grant 60971087, the China Ministry Research Foundation under grants 9140A07011810JW0111 and 9140C130510D246, and the Aerospace Innovation Foundation under grant CASC200904.

REFERENCES

Blumensath T, Davies ME (2009). Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis 27(3): 265-274.
Candes EJ, Tao T (2007). The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics 35(6): 2313-2351.
Candes EJ, Wakin MB (2008). An introduction to compressive sampling. IEEE Signal Processing Magazine 25(2): 21-30.
Chen SS, Donoho DL, Saunders MA (1999). Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing 20(1): 33-61.
James GM, Radchenko P, Lv J (2009). DASSO: connections between the Dantzig selector and lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 71(1): 127-142.
Needell D, Vershynin R (2009). Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Foundations of Computational Mathematics 9(3): 317-334.
Sturm J (1999). Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11(1-2): 625-653.

Figure 1. The normalized real sparse signal and the sparse signals recovered by BP and the AUO, averaged over 1000 independent trials. (Panels: (a) the real compressive signal; (b) LASSO; (c) AUO.)

Figure 2. The incorrectly estimated element numbers of the AUO and BP for different numbers of nonzero elements.

Figure 3. The incorrectly estimated element numbers of the AUO and BP for different numbers of measurements.
