Compressive sensing: a paradigm shift in signal processing

Olga V. Holtz
University of California, Berkeley & Technische Universität Berlin

Abstract

We survey a new paradigm in signal processing known as "compressive sensing". Contrary to old practices of data acquisition and reconstruction based on the Shannon-Nyquist sampling principle, the new theory shows that it is possible to reconstruct images or signals of scientific interest accurately, and even exactly, from a number of samples far smaller than the desired resolution of the image/signal, e.g., the number of pixels in the image. This new technique draws on results from several fields of mathematics, including algebra, optimization, probability theory, and harmonic analysis. We will discuss some of the key mathematical ideas behind compressive sensing, as well as its implications for other fields: numerical analysis, information theory, theoretical computer science, and engineering.

1 Introduction

Compressive sensing [45, 119] is a new concept in signal processing in which one seeks to minimize the number of measurements taken from a signal while still retaining the information necessary to approximate it well. The ideas have their origins in certain abstract results from functional analysis and approximation theory [79, 92], but were recently brought to the forefront by the work of Candès, Romberg and Tao [13, 15, 12] and Donoho [45], who constructed concrete algorithms and showed their promise in applications.

Sparse approximation has been studied for nearly a century and has numerous applications. Temlyakov [111] locates the first example in a 1907 paper of Schmidt [104]. In the 1950s, statisticians launched an extensive investigation of another sparse approximation problem called subset selection in regression [87], followed more recently by least angle regression [54, 113]. Later, approximation theorists began a systematic study of $m$-term approximation with respect to orthonormal bases and redundant systems [38, 111], continued very recently in [25, 26]. Over the last decade, the signal processing community, spurred by the work of Coifman et al. [28, 29] and Mallat et al. [84, 37, 36], has become interested in sparse representations for compression and analysis of audio [72], images [63] and video [90]. Sparsity criteria also arise in deconvolution [110], signal modeling [100], preconditioning [74], machine learning [70], de-noising [22], regularization [33, 35] and error correction [16, 19, 60, 58, 59, 61].

Most sparse approximation problems employ a linear model in which the collection of elementary signals is both linearly dependent and large. Such models are often called redundant or overcomplete. Recent research suggests that overcomplete models offer a genuine increase in approximation power [95, 62]. Unfortunately, they also raise a serious challenge: how do we find a good representation of the input signal among the plethora of possibilities? One method is to select a parsimonious or sparse representation. The exact rationale for invoking sparsity may range from engineering to economics to philosophy. At least three justifications are commonly given:

1. It is sometimes known a priori that the input signal can be expressed as a short linear combination of elementary signals, possibly contaminated with noise.

2. The approximation may have an associated cost that must be controlled.
For example, the computational cost of evaluating the approximation depends on the number of elementary signals that participate. In compression, the goal is to minimize the number of bits required to store the approximation.

3. Some researchers cite Occam's Razor, "Pluralitas non est ponenda sine necessitate" (causes must not be multiplied beyond necessity).

Sparse approximation problems are computationally challenging because most reasonable sparsity measures are not convex. A formal hardness proof for one important class of problems appeared independently in [88] and [36]. A vast array of heuristic methods for producing sparse approximations has been proposed, but the literature contains few guarantees of their performance. The pertinent numerical techniques fall into at least three basic categories:

1. The convex relaxation approach replaces the nonconvex sparsity measure with a related convex function to obtain a convex programming problem. The convex program can be solved in polynomial time with standard software [8], and one expects that it will yield a good sparse approximation. More will be said on this in the sequel.

2. Greedy methods make a sequence of locally optimal choices in an effort to produce a good global solution to the approximation problem. This category includes forward selection procedures (such as matching pursuits), backward selection, and others. Although these approaches sometimes succeed [31, 67, 69, 68, 115, 117, 118], they can also fail spectacularly [40, 22]. The monographs of Miller [87] and Temlyakov [111] taste the many flavors of greedy heuristics.

3. Specialized nonlinear programming software has been developed that attempts to solve sparse approximation problems directly using, for example, interior point methods [96]. These techniques are only guaranteed to discover a locally optimal solution, though.

Several problems require solutions to be obtained from underdetermined systems of linear equations, i.e., systems with fewer equations than unknowns. Examples of such problems arise in linear filtering, signal processing, and inverse problems. For an underdetermined system of linear equations, if there is any solution, there are infinitely many solutions. In many applications, the "simplest" solution is the most acceptable; such a solution is inspired by the minimalist principle of Occam's Razor. For example, if the parameters of a model are being estimated, then among all models that explain the data equally well, the one with the minimum number of parameters is most desirable.

Figure 1: When Fourier coefficients of a testbed medical image known as the Logan-Shepp phantom (top left) are sampled along 22 radial lines in the frequency domain (top right), a naive, "minimal energy" reconstruction that sets the unobserved Fourier coefficients to 0 is marred by artifacts (bottom left). The $\ell_1$ reconstruction (bottom right) is exact.

The notion that sparse signals (signals with a small number of nonzero coefficients in a given basis) can, in the absence of noise, be reconstructed exactly and with high probability via $\ell_1$-minimization is not exactly new. The idea was first expressed in 1986 by Fadil Santosa and William Symes [103]. But the full extent of the theory, including the robustness of the reconstruction procedure, is only now coming into full focus.
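To make the phenomenon concrete, here is a minimal numerical sketch (our illustration, not from the survey; the Gaussian matrix, sizes and random seed are arbitrary assumptions). It solves the $\ell_1$ problem by recasting it as a linear program in the standard way, splitting $x = u - v$ with $u, v \ge 0$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 40, 128, 5                      # measurements, ambient dimension, sparsity

# Illustrative Gaussian sensing matrix and a k-sparse test signal.
Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)
y = Phi @ x

# Basis pursuit, min ||x||_1 s.t. Phi x = y, as a linear program:
# write x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("max |x_hat - x| =", np.max(np.abs(x_hat - x)))
```

In this regime a random Gaussian $\Phi$ typically recovers the 5-sparse vector exactly, up to solver tolerance, even though the system is badly underdetermined; explaining why is exactly the subject of the theory surveyed below.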
One of the champions of this approach, David Donoho, coined the term "compressed sensing" to emphasize that $\ell_1$-minimization is not just a new way of massaging a "complete" set of measurements into a compact form, but rather a new way of thinking about how to measure things in the first place [45].

This new way of thinking has profoundly practical implications. Making measurements can be expensive in terms of time, money, or (in the case of, say, x-rays) damage done to the object being imaged. Compressive sensing has the potential to provide substantial cost savings without sacrificing accuracy. In one impressive numerical experiment, Candès, Romberg, and Tao [12] showed that a $512 \times 512$-pixel test image, known as the Logan-Shepp phantom, can be reconstructed exactly from 512 Fourier coefficients sampled along 22 radial lines, in other words, with more than 95% of the ostensibly relevant data missing (see Figure 1). A host of practical applications is now being explored, including new sensing techniques, new analog-to-digital converters, and a new digital camera with a single photon detector, being developed by Kevin Kelly, Richard Baraniuk and the Digital Signal Processing group at Rice (dsp.rice.edu/cs/cscamera) [107, 121].

2 Mathematical foundations

2.1 Sparsity and undersampling

The celebrated Nyquist-Shannon-Whittaker sampling theorem shows that a signal with bandwidth $2\Omega$ is completely determined by its uniform samples if and only if the samples are taken at least at the Nyquist rate $\Omega/\pi$. This principle used to underlie all signal acquisition techniques used in practice: consumer electronics, medical imaging, analog-to-digital conversion, and so on. Compressive sampling puts forward a novel sampling paradigm that replaces the notion of band-limited signals with that of sparse signals. The new notion allows dramatically "undersampled" signals to be captured and manipulated using a very small amount of data. The point of this section is to explain the basic mathematics behind this new theory.

Suppose $x$ is an unknown vector in $\mathbb{R}^N$ (a digital image or signal). We plan to sample $x$ using $n$ linear functionals of $x$ and then reconstruct. We are interested in the case $n \ll N$, when we have many fewer measurements than the dimension of the signal space. Such situations arise in many applications. For example, in biomedical imaging, far fewer measurements are typically collected than the number of pixels in the image of interest. Further examples are provided by virtually any domain of science or technology where the amounts of data are very large and the costs of observation, acquisition or measurement are nontrivial.

The measurements $y_k$ are obtained by sensing $x$ against $n$ vectors $\varphi_k \in \mathbb{R}^N$. Thus $y_k = \langle x, \varphi_k \rangle$ for $k = 1, \ldots, n$ or, equivalently,

$$y = \Phi x \qquad (1)$$

for some $n \times N$ measurement (sensing) matrix $\Phi$. Thus we arrive at an underdetermined system of linear equations which, as is well known, in general has infinitely many solutions, so our problem is ill-posed. But suppose that our signal $x$ is sparse or compressible, i.e., that it (essentially) depends only on a small number of degrees of freedom. To give a first impression of the theory, we in fact assume that the signal can be written exactly as a linear combination of only a few basis vectors.
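Before formalizing this, a quick sketch (illustrative setup with invented sizes, not from the survey) shows the trouble with the classical least-squares answer: among the infinitely many solutions of $y = \Phi x$, the minimum-energy one is generically dense even when the true signal is sparse.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k = 30, 100, 4
Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, N))
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = 1.0
y = Phi @ x                               # n measurements of an N-dimensional signal

# Minimum-energy solution: the least 2-norm vector solving Phi x = y,
# given in closed form by Phi^T (Phi Phi^T)^{-1} y.
x_l2 = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
print(np.count_nonzero(x), "nonzeros in x")
print(np.count_nonzero(np.abs(x_l2) > 1e-8), "entries of x_l2 above 1e-8")  # ~N: dense
```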
Mathematically, the problem can be formulated as follows. Given a matrix $\Phi \in \mathbb{R}^{n \times N}$ with many more columns than rows ($n \ll N$) and a vector $y \in \mathbb{R}^n$, find a vector $x \in \mathbb{R}^N$ with the minimum possible number of nonzero entries, i.e.,

$$\text{minimize } \|x\|_0 \text{ subject to } \Phi x = y, \qquad (2)$$

where $\|x\|_0$ is the number of nonzero entries of $x$ [30]. By allowing noise ($\varepsilon \ge 0$), we obtain a variation of problem (2):

$$\text{minimize } \|x\|_0 \text{ subject to } \|\Phi x - y\|_2 \le \varepsilon. \qquad (3)$$

These problems per se are NP-hard, even for $\varepsilon = 0$; see [65, 88]. The classical, well-studied approach would be to minimize the 2-norm $\|x\|_2$ in the above problems, but this usually yields a solution vector $x$ that is full, while for a sparse representation we would like to find a vector $x$ with few nonzero entries. The main approach taken in compressive sensing is to minimize the 1-norm $\|x\|_1 := \sum_i |x_i|$ instead:

$$\text{minimize } \|x\|_1 \text{ subject to } \Phi x = y \qquad (4)$$

and

$$\text{minimize } \|x\|_1 \text{ subject to } \|\Phi x - y\|_2 \le \varepsilon, \qquad (5)$$

respectively [22] (see Figure 2).

Figure 2: $\ell_1$ minimization.

Very surprisingly, $\ell_1$ minimization yields the same result as $\ell_0$ minimization in many cases of practical interest. This phenomenon was initially observed by engineers and geophysicists, most notably Claerbout and Logan, as early as the 1970s (see [110]), and by Santosa and Symes in 1986 [103], as mentioned in the introduction. In the last five years or so, a series of papers [12, 41, 42, 46, 47, 56, 64, 73, 105] explained why $\ell_1$ minimization can recover sparse signals in a variety of practical setups. In the next section, we give a few sample theorems about this remarkable phenomenon.

Finally, the $\ell_1$ minimization problem can be efficiently solved by convex programming or by linear programming (LP) [8]. Most compressive sensing results due to Candès, Donoho, Romberg, Tao and others [41, 42, 12, 45, 25, 15] are based on this method (see also [125, 126, 127]). Other approaches include greedy algorithms, for instance the so-called matching pursuit introduced by Mallat and Zhang [84, 91, 114, 115]. Recently many variations on matching pursuit have been proposed, among them orthogonal matching pursuit [91, 89], stagewise orthogonal matching pursuit [51], gradient pursuit [6], and others.

2.2 Incoherence and restricted isometry

Given an $n \times N$ matrix $\Phi$, the first basic question is to determine whether $\Phi$ is good for compressive sensing, i.e., whether it will lead to good recovery of sparse solutions to the equations $\Phi x = y$. Candès and Tao [10]–[20] introduced a condition that guarantees an estimate of its performance on classes of sparse vectors.

Definition ([19, 12, 13]). A matrix $\Phi$ is said to satisfy the Restricted Isometry Property (RIP) of order $k$ with constant $\delta := \delta_k \in (0, 1)$ if

$$(1 - \delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_k)\|x\|_2^2 \qquad (6)$$

for any $x$ such that $\|x\|_0 \le k$.

It is straightforward to see that this condition can be reformulated as follows. Consider the $n \times \#T$ matrices $\Phi_T$ formed by the columns of $\Phi$ with indices in a set $T$. Then the Gramian matrices $G_T := \Phi_T^t \Phi_T$ are bounded and boundedly invertible on $\ell_2$, with bounds as in (6), uniformly for all $T$ of size $\#T = k$. Since each matrix $G_T$ is symmetric and nonnegative definite, this is equivalent to each of these matrices having its eigenvalues in the interval $[1 - \delta_k, 1 + \delta_k]$.
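The eigenvalue reformulation suggests a direct, if brutally expensive, way to measure the RIP constant of a small matrix: enumerate every $k$-column Gramian. The sketch below (sizes arbitrary; our illustration, not an algorithm from the survey) also foreshadows why testing the RIP is computationally hard, since the number of subsets explodes with $k$:

```python
import numpy as np
from itertools import combinations

def rip_constant(Phi, k):
    """Smallest delta_k with (1 - delta_k)||x||^2 <= ||Phi x||^2 <= (1 + delta_k)||x||^2
    for all k-sparse x: brute force over the eigenvalues of every k-column Gramian."""
    delta = 0.0
    for T in combinations(range(Phi.shape[1]), k):
        eig = np.linalg.eigvalsh(Phi[:, T].T @ Phi[:, T])   # Gramian G_T
        delta = max(delta, 1.0 - eig[0], eig[-1] - 1.0)
    return delta

rng = np.random.default_rng(2)
Phi = rng.normal(0.0, 1.0 / np.sqrt(20), size=(20, 30))
print("delta_2 =", rip_constant(Phi, 2))   # already C(30,2) = 435 subsets to check
```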
The role played by the RIP becomes clear from the following result of Candès and Tao.

Theorem 1 ([18, 11]). If the $n \times N$ matrix $\Phi$ satisfies the RIP of order $3k$ with some $\delta \in (0, 1)$, then, for any vector $x \in \mathbb{R}^N$, the $\ell_1$ minimization problem (4) has a solution $x^*$ such that

$$\|x - x^*\|_2 \le C \cdot \frac{\|x - x_k\|_1}{\sqrt{k}}, \qquad (7)$$

where $x_k$ denotes the best $k$-sparse approximation to $x$ and $C$ denotes a constant.

Estimate (7) means that matrices $\Phi$ satisfying the RIP for higher values of $k$ perform better in compressive sensing. For example, if an $n \times N$ matrix $\Phi$ has the restricted isometry property of order $k$, then its performance in the $\ell_2$-norm on the unit ball of $\ell_1^N$ is of order $C/\sqrt{k}$, and the optimal performance is achieved if $\Phi$ satisfies the RIP of order $k = \Theta(n/\log(N/n))$ [39]. This is indeed achieved via various probabilistic constructions [41, 42, 43, 44, 48].

A primary matrix measure related to the RIP is mutual incoherence [22, 118, 114, 115]:

$$M(\Phi) := \max_{i \ne j} |(\Phi^t \Phi)_{i,j}|,$$

i.e., the maximum inner product of distinct columns of $\Phi$. Since the columns of $\Phi$ are usually normalized to have 2-norm 1, the mutual incoherence of a matrix lies between 0 and 1. This notion can be generalized [77] as follows: for a given normalized matrix $\Phi \in \mathbb{R}^{n \times N}$, its $k$-mutual incoherence $M_k(\Phi)$ is defined by

$$M_k(\Phi) := \max_{\#S \le k} \max_{i \ne j} |(\Phi_S^T \Phi_S)_{i,j}|.$$

The mutual incoherences $M_k$ are intimately related to the best constant $\delta_k$ with which the matrix $\Phi$ satisfies the RIP of order $k$, but a full understanding of this connection has not been reached [77, 78].

A challenging aspect of the RIP is its computational cost. Indeed, the RIP is a property of the submatrices of a specific size, and at present no subexponential-time algorithm is known for testing it. Introducing other matrix measures may potentially help in effectively verifying the RIP or in finding other, less demanding conditions for sparse recovery. One such weaker condition has been introduced by Cohen, Dahmen and DeVore in [25]. To motivate their condition, we first recall that a pair $(\Phi, \Delta)$, where $\Phi$ is a sensing matrix and $\Delta$ is a decoder, is called instance-optimal of order $k$ for a normed space $(V, \|\cdot\|_V)$ if there exists an absolute constant $C$ such that

$$\|x - \Delta(\Phi x)\|_V \le C \|x - x_k\|_V.$$

The matrix $\Phi$ has the null space property in $V$ if $\|x\|_V \le c \|x - x_k\|_V$ for all $x$ such that $\Phi x = 0$. The importance of the null space property can be seen from the following result.

Theorem 2 ([25, 26]). Given an $n \times N$ matrix $\Phi$, a norm $\|\cdot\|_V$ and a value $k$, instance optimality in $V$ with constant $C_0$ is equivalent to the null space property of $\Phi$ of order $2k$, with constant $C_0/2$ in the sufficiency part and the same constant $C_0$ in the necessity part.

Note that the null space property is preserved under row operations on the matrix $\Phi$ since, as its name suggests, it is simply a property of its null space. This property is therefore less rigid than the RIP and may allow for more efficient verification.

2.3 Compressible signals

In practice, most signals may not be exactly sparse in a given basis but may instead concentrate near a sparse set. In fact, the most commonly used models in signal processing assume that the coefficients of the signal with respect to, say, a wavelet basis decay rapidly away from their essential support. Smooth signals, images with bounded variation and signals with bounded Besov norm are known to be of that type.
Given a nearly sparse signal $x$, denote by $x_k$ its best $k$-sparse approximation, i.e., the vector obtained by keeping the $k$ largest coefficients of $x$ and discarding the rest. Candès, Romberg and Tao [12] showed that the initial signal can be recovered with error of order $\|x - x_k\|_1/\sqrt{k}$ whenever the sensing matrix satisfies the RIP of order $4k$ and the RIP constants $\delta_{3k}$ and $\delta_{4k}$ are not too close to 1.

Theorem 3 ([18]). Let $\Phi$ satisfy the RIP of order $4k$ with $\delta_{3k} + 3\delta_{4k} < 2$. Then, for any signal $x$, the solution $x^*$ to (4) satisfies

$$\|x^* - x\|_2 \le C \cdot \frac{\|x - x_k\|_1}{\sqrt{k}},$$

with a well-behaved constant $C$.

A similar result holds [12] for stable recovery from imperfect measurements, i.e., in the setting of problem (5). Altogether, this indicates that $\ell_1$ minimization stably recovers the largest $k$ coefficients of a nearly $k$-sparse vector, even in the presence of noise.

This result is in fact optimal for important classes of signals. Let $x$ belong to the weak-$\ell_p$ ball of radius $R$, i.e., let the decreasing rearrangement of its coefficients $|x|_{(1)} \ge |x|_{(2)} \ge \cdots \ge |x|_{(N)}$ satisfy the condition

$$|x|_{(i)} \le R \cdot i^{-1/p}, \qquad i = 1, \ldots, N.$$

This can be shown to imply

$$\|x - x_k\|_2 \le C \cdot R \cdot k^{1/2 - 1/p} \quad \text{and} \quad \|x - x_k\|_1 \le C \cdot R \cdot k^{1 - 1/p}$$

for some constant $C$. Moreover, for generic elements in weak-$\ell_p$, no better estimates are obtainable. In other words, $\ell_1$ recovery achieves an approximation error roughly as small as the error obtained by deliberately selecting the $k$ largest coefficients of the signal.

2.4 Good sensing matrices

Most sampling algorithms developed so far in compressive sensing are based on randomization [17, 45]. Typically, the sensing matrices are produced by taking i.i.d. random variables with some given probability distribution and then normalizing their columns. Such matrices are guaranteed to perform well with very high probability, i.e., with a failure rate exponentially small in the size of the matrix [45]. Following [10], we mention three random constructions that are by now standard.

Random matrices with i.i.d. entries. Consider the matrix $\Phi$ with entries drawn independently at random from the Gaussian probability distribution with mean zero and variance $1/n$. Then [15, 42], with overwhelming probability, the $\ell_1$ minimization (4) recovers $k$-sparse solutions whenever $k \le \mathrm{const} \cdot n/\log(N/n)$.

Fourier ensemble. Let $\Phi$ be obtained by randomly selecting $n$ rows from the $N \times N$ discrete Fourier transform and renormalizing the columns so that they have 2-norm 1. If the rows are selected at random, then [15], as above, with overwhelming probability, the $\ell_1$ minimization (4) recovers $k$-sparse vectors for $k \le \mathrm{const} \cdot n/(\log N)^6$.

General orthogonal ensembles. Suppose $\Phi$ is obtained by selecting $n$ rows from an $N \times N$ orthonormal matrix $U$ and renormalizing the columns to be of unit length. If the rows are selected at random, then [15] $k$-sparse recovery by $\ell_1$ minimization (4) is guaranteed with overwhelming probability provided that

$$k \le \mathrm{const} \cdot \frac{1}{M^2(U)} \cdot \frac{n}{(\log N)^6}.$$

Note that the Fourier matrix $U$ satisfies $M(U) = 1$, so this is a generalization of the Fourier ensemble.
The natural problem, already being addressed by several authors, is how to achieve robust deterministic constructions of good CS matrices. Tao in [109] points out the importance of this problem, as well as its similarity to other derandomization problems from theoretical computer science and combinatorics. Several deterministic constructions are currently known (see, e.g., [39, 76]). However, the performance of the matrices provided by these deterministic constructions is not yet on a par with that of matrices arising probabilistically.

To give several examples, DeVore in [39] proposes a construction of cyclic matrices using finite fields that satisfy the RIP of order $k$ for $k \le C \sqrt{n} \log n / \log(N/n)$, which falls short of the above-mentioned range $k \le C n/\log(N/n)$ known for probabilistic constructions. Indyk in [76] and Xu and Hassibi in [123] propose other schemes for compressive sensing with deterministic performance guarantees based on bipartite expander graphs. Another flavor of randomness is introduced in [2], where random Toeplitz matrices are constructed with entries drawn independently from a given probability distribution.

2.5 Optimality and n-widths

The performance of the best sensing matrices $\Phi$, presently achieved by random matrices with probabilistic guarantees, yields recovery of $k$-sparse vectors using $n$ samples (so that the matrix $\Phi$ is $n \times N$) provided that $k \le \mathrm{const} \cdot n/\log(N/n)$. In particular, a $k$-sparse vector can be recovered, say, by random projections of dimension $O(k \cdot \log(N/k))$ [41]. For signals $x$ in the weak-$\ell_p$ ball of radius $R$, $\ell_1$ recovery gives the error [45]

$$\|x^* - x\|_2 \le \mathrm{const} \cdot R \cdot (n/\log(N/n))^{-1/p + 1/2}.$$

It turns out that this performance cannot be improved even by using possibly adaptive sets of measurements and reconstruction algorithms. The matter turns out to be closely related to the so-called Gelfand widths [92] known from approximation theory. For a class $\mathcal{F}$, let $E_n(\mathcal{F})$ be the best reconstruction error from $n$ linear measurements,

$$E_n(\mathcal{F}) := \inf \sup_{f \in \mathcal{F}} \|f - D(y)\|_2, \qquad y = \Phi f,$$

where the infimum is taken over all sets of $n$ linear functionals and all reconstruction algorithms $D$. The error $E_n(\mathcal{F})$ is essentially equal [92] to the Gelfand width of the class $\mathcal{F}$, defined as

$$d^n(\mathcal{F}) := \inf_V \{\, \sup_{f \in \mathcal{F}} \|P_V f\| : \mathrm{codim}(V) < n \,\},$$

where $P_V$ is the orthogonal projector onto the subspace $V$. Gelfand widths are known for many classes of interest. In particular, Kashin [79] and Garnaev and Gluskin [66] showed that the Gelfand widths of the weak-$\ell_p$ ball of radius $R$ satisfy

$$c \cdot R \cdot \left(\frac{\log(N/n) + 1}{n}\right)^{1/p - 1/2} \le d^n(\mathcal{F}) \le C \cdot R \cdot \left(\frac{\log(N/n) + 1}{n}\right)^{1/p - 1/2}$$

for some universal constants $c$ and $C$. This shows that the recovery provided by compressive sensing techniques is in fact optimal for weak-$\ell_p$ norms in spite of being completely non-adaptive [25, 10]. This is one more indication of the great potential of compressive sensing in applications.

3 Connections with other fields

3.1 Statistical estimation

Candès [10] and Donoho [45] point out a number of connections between compressive sensing and ideas from statistics and coding theory. We briefly mention the main ideas here.

In statistical estimation, the signal is assumed to be measured with stochastic errors:

$$y = \Phi x + z,$$

where $z$ is a vector of i.i.d. (independent identically distributed) random variables with mean zero and variance $\sigma^2$. Very often, $z$ is assumed to be Gaussian.
The problem is again to recover $x$ from $y$. One seeks to design an estimator whose accuracy depends on the information content of the object $x$. The Dantzig selector [20] estimates $x$ by solving the convex program

$$\text{minimize } \|\tilde{x}\|_1 \text{ subject to } \sup_i |(\Phi^T r)_i| \le \lambda \sigma$$

for some $\lambda > 0$, where $r$ is the residual $r := y - \Phi \tilde{x}$. These ideas are very close to the so-called lasso approach [113, 54]. Analogously to $\ell_1$ minimization in compressive sensing, the Dantzig selector was shown [20] to recover sparse and compressible signals from a number of measurements much smaller than the dimension of $x$, with mean squared error within a logarithmic factor of the ideal error one could achieve only with an oracle supplying perfect information about which coordinates are nonzero and which are above the noise level.

3.2 Error-correcting codes

In coding theory [60, 58, 59, 61], a vector $x$ is transmitted to a remote receiver. The information $x$ is encoded using an $n \times N$ matrix $C$ with $n \ll N$. Gross errors may occur during transmission, so that a fraction of the entries of $Cx$ is completely corrupted. The locations of the corrupted entries and the damage done to them are unknown. It turns out that a constant fraction of errors with arbitrary magnitudes can still be corrected [19] by solving a suitable linear minimization problem. In fact, known methods recover the vector $x$ exactly provided the fraction of corrupted entries is not too big [20, 10].

3.3 Frame theory

The theory of compressive sensing matrices closely resembles the basic theory of frames [23, 32, 83, 124]. A countable collection of elements $\{f_i\}_{i \in I}$ is a frame for a Hilbert space $H$ if there exist constants $0 < A \le B < \infty$ (the lower and upper frame bounds) such that, for all $g \in H$,

$$A \|g\|_H^2 \le \sum_{i \in I} |\langle g, f_i \rangle|^2 \le B \|g\|_H^2.$$

A frame is called tight if the upper and lower bounds coincide, $A = B$. A frame is bounded if $\inf_{i \in I} \|f_i\|_H > 0$ (the condition $\sup_{i \in I} \|f_i\|_H < \infty$ follows automatically from the definition of a frame), and unit norm if $\|f_i\|_H = 1$ for all $i \in I$. If $\{f_i\}_{i \in I}$ is a frame only for its closed linear span, it is called a frame sequence. A family $\{f_i\}_{i \in I}$ is a Riesz basic sequence for $H$ if it is a Riesz basis for its closed linear span, i.e., if, for some constants $0 < A \le B < \infty$ and for all sequences of scalars $\{c_i\}_{i \in I}$,

$$A \sum_{i \in I} |c_i|^2 \le \Big\| \sum_{i \in I} c_i f_i \Big\|_H^2 \le B \sum_{i \in I} |c_i|^2.$$

The analogy with the restricted isometry property is obvious; the latter, however, is imposed only on submatrices formed from the original matrix. This analogy may well be worth pursuing in both directions, i.e., looking for applications of the theory and methodology of compressive sensing to frames and vice versa. Randomization techniques from compressive sensing could be of particular interest in attacking problems from frame theory (cf. [4, 116]).
4 Practical implications

Compressive sensing, and more generally the possibility of efficiently capturing sparse and compressible signals using a relatively small number of measurements, paves the way for a number of possible applications.

Data acquisition. New physical sampling devices may be designed that directly record discrete low-rate incoherent measurements of the analog signal. This should be especially useful in situations where large collections of samples may be costly, difficult or impossible to obtain.

Data compression. The sparse basis in which the signal is to be represented may be unknown or unavailable. However, a randomly designed $\Phi$ is suitable for almost all signals. We stress that these protocols are nonadaptive to the signal and simply require correlating it with a small number of fixed vectors.

Inverse problems. The measurement system may have to satisfy rigid constraints, as in MR angiography and other MR setups, where $\Phi$ records a subset of the Fourier transform. However, if a sparse basis exists that is also incoherent with $\Phi$, then efficient sensing is possible.

A particularly interesting example of a successful implementation of compressive sensing methodology is provided by a digital camera newly developed by Richard Baraniuk and Kevin Kelly at Rice University (see dsp.rice.edu/cs/cscamera) [107, 121].

Figure 3: The scheme of the CS camera.

In the detector array of a conventional digital camera, each pixel performs an analog-to-digital conversion; for example, the detector of a 5-megapixel camera produces 5 million bits for each image. This large amount of data is then dramatically reduced through a compression algorithm (using wavelet or other techniques) so as not to overburden typical storage and transfer capacities. Rather than collect 5 million pixels for an image, the new camera samples only a factor of about four times the 50,000 pixels that JPEG compression might typically output. These 200,000 single-pixel measurements provide an immediate 25-fold savings in data collected compared with 5 megapixels.

The camera developed at Rice replaces the CCD array with a digital micromirror device (DMD). A sequence of random projections is performed on the micromirror array, so that the image "bounces off" each random pattern in the sequence, and the reflected light from each pattern is collected sequentially by a photodiode sensor that acts as the single-pixel detector (see Figure 3). After taking a sequence of essentially time-multiplexed measurements, a specific $\ell_1$ minimization algorithm decodes the picture from the collected sequence of single-pixel measurements.
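A toy simulation of the single-pixel measurement process (all sizes invented; a real DMD uses binary mirror orientations and the optics add noise, both ignored here):

```python
import numpy as np

rng = np.random.default_rng(6)
p = 64 * 64                   # pixels in the (flattened) scene
m = 800                       # single-pixel measurements, m << p
scene = np.zeros(p)
scene[rng.choice(p, size=50, replace=False)] = 1.0   # a sparse scene

# Each measurement: the DMD displays a random +/-1 mirror pattern and the
# photodiode records one number, the inner product of scene and pattern.
patterns = rng.choice([-1.0, 1.0], size=(m, p))
y = patterns @ scene          # collected sequentially in the real device
print(y.shape)                # (800,): far fewer numbers than p = 4096 pixels
# Decoding would then solve the l1 problem (4)/(5) with Phi = patterns.
```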
References

[1] G. Aubert, J. Aujol, Modeling very oscillating signals, application to image processing, Applied Mathematics and Optimization 51 (2) (2005), 163–182.
[2] W. Bajwa, J. Haupt, G. Raz, S. Wright, and R. Nowak, Toeplitz-structured compressed sensing matrices. IEEE Workshop on Statistical Signal Processing (SSP), Madison, Wisconsin, August 2007.
[3] R. Baraniuk, Optimal tree approximation using wavelets. In A. J. Aldroubi and M. Unser, editors, Wavelet Applications in Signal Processing, volume VII, pages 196–207, SPIE, 1999.
[4] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, A simple proof of the restricted isometry property for random matrices. To appear in Constr. Approx., 2007.
[5] L. A. Bassalygo, M. S. Pinsker, Complexity of an optimum nonblocking switching network without reconnections, Problems in Information Transmission 9 (1) (1973), 84–87.
[6] T. Blumensath and M. E. Davies, Iterative thresholding for sparse approximations. Preprint, 2007.
[7] T. Blumensath and M. E. Davies, Gradient pursuits. Preprint, 2007.
[8] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004 (available at http://www.stanford.edu/~boyd/cvxbook/).
[9] A. M. Bruckstein, M. Elad, M. Zibulevsky, A non-negative and sparse enough solution of an underdetermined linear system of equations is unique. Submitted to IEEE Trans. Inform. Theory, 2007.
[10] E. J. Candès, Compressive sampling, Proceedings of the Int. Congress of Mathematicians, 3, pp. 1433–1452, Madrid, Spain, 2006.
[11] E. J. Candès and P. Randall, Highly robust error correction by convex programming. Preprint, 2006.
[12] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory 52 (2), pp. 489–509, February 2006.
[13] E. J. Candès and J. Romberg, Quantitative robust uncertainty principles and optimally sparse decompositions. Foundations of Comput. Math. 6 (2), pp. 227–254, April 2006.
[14] E. J. Candès and J. Romberg, Sparsity and incoherence in compressive sampling. Inverse Problems 23 (3), pp. 969–985, 2007.
[15] E. J. Candès and T. Tao, Near optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory 52 (12), pp. 5406–5425, December 2006.
[16] E. J. Candès and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51, 4203–4215, 2005.
[17] E. Candès and J. Romberg, Practical signal recovery from random projections, Proceedings of SPIE, Volume 5674, pp. 76–86, 2005.
[18] E. Candès, J. Romberg, and T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Communications on Pure and Applied Mathematics 59 (8), pp. 1207–1223, August 2006.
[19] E. J. Candès and T. Tao, Error correction via linear programming, in Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 295–308, 2005.
[20] E. Candès and T. Tao, The Dantzig selector: Statistical estimation when p is much larger than n. To appear in Annals of Statistics.
[21] S. S. Chen, Basis Pursuit, Ph.D. Thesis, Department of Statistics, Stanford University, 1995.
[22] S. S. Chen, D. L. Donoho, and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Scientific Computing 20 (1999), 33–61.
[23] O. Christensen, An Introduction to Frames and Riesz Bases. Applied and Numerical Harmonic Analysis, Birkhäuser Boston, Inc., Boston, MA, 2003.
[24] A. Cohen, Numerical Analysis of Wavelet Methods, Studies in Mathematics and its Applications, vol. 32, Elsevier, Amsterdam, 2003.
[25] A. Cohen, W. Dahmen, and R. DeVore, Compressed sensing and best k-term approximation. Preprint, 2006.
[26] A. Cohen, W. Dahmen, and R. DeVore, Near optimal approximation of arbitrary vectors from highly incomplete measurements. Preprint, 2007.
[27] A. Cohen, W. Dahmen, I. Daubechies, and R. DeVore, Tree approximation and optimal encoding. Appl. Comput. Harmon. Anal. 11 (2001), no. 2, 192–226.
[28] R. R. Coifman and Y. Meyer, Nouvelles bases orthonormées de L^2(R) ayant la structure du système de Walsh. Manuscript, Mathematics Dept., Yale Univ., 1989.
[29] R. R. Coifman and M. V. Wickerhauser, Entropy-based algorithms for best-basis selection. IEEE Trans. Inform. Theory 38, pp. 713–718, March 1992.
[30] G. Cormode, M. Datar, P. Indyk and S. Muthukrishnan, Comparing data streams using Hamming norms (how to zero in). IEEE Trans. Knowl. Data Eng. 15 (3), 529–540, 2003.
[31] C. Couvreur and Y. Bresler, On the optimality of the backward greedy algorithm for the subset selection problem. SIAM J. Matrix Anal. Appl. 21 (3), 797–808, 2000.
[32] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Lecture Notes no. 61, SIAM, 1992.
[33] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Comm. Pure Appl. Math. 57 (11), pp. 1413–1457, 2004.
[34] I. Daubechies, M. Fornasier, and I. Loris, Accelerated projected gradient method for linear inverse problems with sparsity constraints. Preprint, 2007.
[35] I. Daubechies, G. Teschke and L. Vese, Iteratively solving linear inverse problems with general convex constraints, Inverse Problems and Imaging 1 (1), 29–46, 2007.
[36] G. Davis, S. Mallat, and M. Avellaneda, Greedy adaptive approximation. J. Constr. Approx. 13, 57–98, 1997.
[37] G. Davis, S. Mallat, and Z. Zhang, Adaptive time-frequency decompositions. SPIE Journal of Optical Engineering 33 (7), 2183–2191, 1994.
[38] R. A. DeVore, Nonlinear approximation. Acta Numer. 7 (1998), 51–150.
[39] R. A. DeVore, Deterministic constructions of compressed sensing matrices. Preprint, 2007.
[40] R. DeVore and V. N. Temlyakov, Some remarks on greedy algorithms. Adv. Comput. Math. 5, 173–187, 1996.
[41] D. L. Donoho, For most large underdetermined systems of equations, the minimal $\ell_1$-norm near-solution approximates the sparsest near-solution, Comm. Pure Appl. Math. 59, no. 7 (2006), pp. 907–934.
[42] D. L. Donoho, For most large underdetermined systems of linear equations the minimal $\ell_1$-norm solution is also the sparsest solution, Comm. Pure Appl. Math. 59, no. 6 (2006), pp. 797–829.
[43] D. L. Donoho, Neighborly polytopes and sparse solutions of underdetermined linear equations. Preprint, 2005.
[44] D. L. Donoho, High-dimensional centrally-symmetric polytopes with neighborliness proportional to dimension. Disc. Comput. Geometry 35 (4), pp. 617–652, 2006.
[45] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52, no. 4 (2006), pp. 1289–1306.
[46] D. L. Donoho and X. Huo, Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory 47 (2001), 2845–2862.
[47] D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization, Proc. Natl. Acad. Sci. USA 100 (2003), 2197–2202.
[48] D. L. Donoho, M. Elad, and V. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Trans. Inform. Theory 52, no. 1 (2006), pp. 6–18.
[49] D. L. Donoho and J. Tanner, Sparse nonnegative solutions of underdetermined linear equations by linear programming. Proc. National Academy of Sciences 102 (27), pp. 9446–9451, 2005.
[50] D. Donoho and Y. Tsaig, Fast solution of $\ell_1$-norm minimization problems when the solution may be sparse. Preprint, October 2006.
[51] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck, Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. Preprint, March 2006.
[52] P. L. Dragotti, M. Vetterli, and T. Blu, Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang–Fix. IEEE Trans. on Signal Proc. 55 (7), pp. 1741–1757, May 2007.
[53] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk, Fast reconstruction of piecewise smooth signals from random projections, in Online Proc. SPARS05, Rennes, France, Nov. 2005.
[54] B. Efron, T. Hastie, I. Johnstone and R. Tibshirani, Least angle regression, Ann. Statist. 32, no. 2 (2004), 407–499.
[55] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, vol. 28, SIAM, Philadelphia, PA, 1999.
[56] M. Elad and A. M. Bruckstein, A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inform. Theory 48 (9), 2558–2567, 2002.
[57] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
[58] J. Feldman, Decoding Error-Correcting Codes via Linear Programming. PhD thesis, Massachusetts Institute of Technology, 2003.
[59] J. Feldman, D. R. Karger, and M. J. Wainwright, LP decoding. In Proc. 41st Annual Allerton Conference on Communication, Control, and Computing, October 2003.
[60] J. Feldman and D. R. Karger, Decoding turbo-like codes via linear programming. Proc. of the 43rd Symposium on Foundations of Computer Science, pp. 251–260, November 16–19, 2002.
[61] J. Feldman, M. J. Wainwright, and D. R. Karger, Using linear programming to decode linear codes. IEEE Trans. Inform. Theory 51 (3), 954–972, 2005.
[62] P. Frossard and P. Vandergheynst, Redundant representations in image processing. In Proc. of the 2003 IEEE International Conference on Image Processing, 2003. Special session.
[63] P. Frossard, P. Vandergheynst, R. M. Figueras i Ventura, and M. Kunt, A posteriori quantization of progressive matching pursuit streams. IEEE Trans. Signal Proc. 52 (2), 525–535, Feb. 2004.
[64] J. J. Fuchs, On sparse representations in arbitrary redundant bases. IEEE Trans. Inform. Theory 50 (6), 1341–1344, 2004.
[65] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, New York, 1979.
[66] A. Garnaev and E. Gluskin, The widths of a Euclidean ball, Dokl. Akad. Nauk USSR 277 (1984), 1048–1052; English transl. Soviet Math. Dokl. 30 (1984), 200–204.
[67] A. C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan and M. Strauss, Near-optimal sparse Fourier representations via sampling, in Proc. 34th ACM Symposium on Theory of Computing, pp. 152–161, ACM Press, 2002.
[68] A. C. Gilbert, S. Muthukrishnan and M. Strauss, Improved time bounds for near-optimal sparse Fourier representation, to appear at the Wavelets XI conference in the SPIE Symposium on Optics & Photonics, 2005, San Diego, California, USA.
[69] A. C. Gilbert, S. Muthukrishnan, and M. J. Strauss, Approximation of functions over redundant dictionaries using coherence, in Proceedings of the 2003 SIAM Symposium on Discrete Algorithms (SODA), pp. 243–252, 2003.
[70] F. Girosi, An equivalence between sparse approximation and Support Vector Machines. Neural Comput. 10 (6), 1455–1480, 1998.
[71] I. F. Gorodnitsky and B. D. Rao, Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm, IEEE Trans. Signal Proc. 45, no. 3, pp. 600–616, March 1997.
[72] R. Gribonval and E. Bacry, Harmonic decomposition of audio signals with matching pursuit. IEEE Trans. Signal Proc. 51 (1), 101–111, 2003.
[73] R. Gribonval and M. Nielsen, Sparse representations in unions of bases. IEEE Trans. Inform. Theory 49 (12), 3320–3325, 2003.
[74] M. Grote and T. Huckle, Parallel preconditioning with sparse approximate inverses. SIAM J. Sci. Comput. 18 (3), 838–853, 1997.
[75] J. Haupt and R. Nowak, Signal reconstruction from noisy random projections, IEEE Trans. Inform. Theory 52, no. 9, pp. 4036–4048, Sept. 2006.
[76] P. Indyk, Explicit constructions for compressed sensing of sparse signals. Symp. on Discrete Algorithms, 2008.
[77] S. Jokar, Mutual incoherence, restricted isometry property and Kronecker product of matrices. Preprint, 2008.
[78] S. Jokar and M. Pfetsch, Exact and approximate sparse solutions of underdetermined linear equations, Matheon-Preprint 377, March 2007.
[79] B. Kashin, The widths of certain finite dimensional sets and classes of smooth functions, Izvestia 41 (1977), 334–351.
[80] B. S. Kashin and V. N. Temlyakov, A remark on compressed sensing. Preprint, 2007.
[81] S. Kunis and H. Rauhut, Random sampling of sparse trigonometric polynomials II: Orthogonal matching pursuit versus basis pursuit. Foundations of Computational Mathematics, Springer New York, August 10, 2007.
[82] D. M. Malioutov, M. Çetin, and A. S. Willsky, Optimal sparse representations in general overcomplete bases. IEEE Int. Conf. Acoustics, Speech and Signal Processing, May 2004, Montreal, Canada.
[83] S. Mallat, A Wavelet Tour of Signal Processing. Boston, MA: Academic, 1998.
[84] S. Mallat and Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Proc. 41 (12), 3397–3415, 1993.
[85] I. Maravic and M. Vetterli, Sampling and reconstruction of signals with finite rate of innovation in the presence of noise. IEEE Trans. Signal Proc. 53 (8), pp. 2788–2805, August 2005.
[86] Y. Meyer, Oscillatory Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, Vol. 22, American Mathematical Society, Providence, 2001.
[87] A. J. Miller, Subset Selection in Regression. Chapman and Hall, London, 2nd edition, 2002.
[88] B. K. Natarajan, Sparse approximate solutions to linear systems. SIAM J. Comput. 24, 227–234, 1995.
[89] D. Needell and R. Vershynin, Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Preprint, 2007.
[90] T. Nguyen and A. Zakhor, Matching pursuits based multiple description video coding for lossy environments. In Proceedings of the 2003 IEEE International Conference on Image Processing, Barcelona, 2003.
[91] Y. C. Pati, R. Rezaiifar, P. Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in Proceedings of the 27th Annual Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 40–44, 1993.
[92] A. M. Pinkus, N-widths in Approximation Theory. Ergeb. Math. Grenzgeb. (3) 7, Springer-Verlag, Berlin, 1985.
[93] A. M. Pinkus, On L^1-Approximation. Cambridge Tracts in Mathematics, Vol. 93, Cambridge University Press, 1989.
[94] R. Ramlau and G. Teschke, A thresholding iteration for nonlinear operator equations with sparsity constraints. DFG-SPP-1114 Preprint, 2005.
[95] B. D. Rao and Y. Bresler, Signal processing with sparseness constraints. In Proc. of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, 12–15 May 1998, vol. 3, 1861–1864.
[96] B. D. Rao and K. Kreutz-Delgado, An affine scaling methodology for best basis selection. IEEE Trans. Signal Proc. 47 (1), 187–200, 1999.
[97] H. Rauhut, Random sampling of sparse trigonometric polynomials. Appl. Comput. Harmon. Anal. 22 (2007), no. 1, 16–42.
[98] H. Rauhut, Stability results for random sampling of sparse trigonometric polynomials. Preprint, 2006.
[99] H. Rauhut, K. Schnass, and P. Vandergheynst, Compressed sensing and redundant dictionaries. Preprint, 2006.
[100] J. Rissanen, Modeling by shortest data description. Automatica 14, 465–471, 1978.
[101] R. Rockafellar and R. Wets, Variational Analysis, Springer-Verlag, Berlin, 1998.
[102] M. Rudelson and R. Vershynin, Geometric approach to error-correcting codes and reconstruction of signals. Technical report, Department of Mathematics, University of California, Davis, 2005.
[103] F. Santosa and W. W. Symes, Linear inversion of band-limited reflection seismograms, SIAM J. Sci. Statist. Comput. 7 (1986), 1307–1330.
[104] E. Schmidt, Zur Theorie der linearen und nichtlinearen Integralgleichungen, Math. Ann. 63 (1907), 433–476.
[105] Y. Sharon, J. Wright, and Y. Ma, Computation and relaxation of conditions for equivalence between $\ell_1$ and $\ell_0$ minimization. Preprint, 2007.
[106] T. Strohmer and R. Heath Jr., Grassmannian frames with applications to coding and communications. Appl. Comp. Harm. Anal. 14 (3), 257–275, 2003.
[107] D. Takhar, J. Laska, M. Wakin, M. Duarte, D. Baron, S. Sarvotham, K. Kelly, and R. Baraniuk, A new compressive imaging camera architecture using optical-domain compression. Proc. of Computational Imaging IV at SPIE Electronic Imaging, San Jose, California, January 2006.
[108] T. Tao, An uncertainty principle for cyclic groups of prime order, Math. Res. Letters 12 (2005), 121–127.
[109] T. Tao, http://terrytao.wordpress.com/2007/07/02/open-question-deterministic-uup-matrices/.
[110] H. L. Taylor, S. C. Banks, and J. F. McCoy, Deconvolution with the $\ell_1$ norm. Geophysics 44 (1), 39–52, 1979.
[111] V. N. Temlyakov, Nonlinear methods of approximation, Found. Comput. Math. 3 (2003), 33–107.
[112] G. Teschke, Multi-frame representations in linear inverse problems with mixed multi-constraints. Appl. Comput. Harmon. Anal. 22 (2007), no. 1, 43–60.
[113] R. Tibshirani, Regression shrinkage and selection via the lasso, J. Royal. Statist. Soc. B 58 (1996), 267–288.
[114] J. A. Tropp, Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50 (10), pp. 2231–2242, Oct. 2004.
[115] J. A. Tropp, Just relax: Convex programming methods for subset selection and sparse approximation. IEEE Trans. Inform. Theory 51 (3), pp. 1030–1051, March 2006.
[116] J. A. Tropp, The random paving property for uniformly bounded matrices. To appear in Studia Math., 2007.
[117] J. A. Tropp and A. C. Gilbert, Signal recovery from partial information via orthogonal matching pursuit. IEEE Trans. Inform. Theory, to appear, 2007.
[118] J. A. Tropp, A. C. Gilbert, S. Muthukrishnan, and M. J. Strauss, Improved sparse approximation over quasi-incoherent dictionaries. In Proc. of the 2003 IEEE International Conference on Image Processing, Barcelona, 2003.
[119] Y. Tsaig and D. L. Donoho, Extensions of compressed sensing. Signal Processing 86 (3), pp. 549–571, March 2006.
[120] Y. Tsaig and D. L. Donoho, Breakdown of equivalence between the minimal $\ell_1$-norm solution and the sparsest solution, Signal Processing 86 (3), pp. 533–548, March 2006.
[121] M. Wakin, J. Laska, M. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. Kelly, and R. Baraniuk, An architecture for compressive imaging, Proc. International Conference on Image Processing (ICIP) 2006, Atlanta, GA, Oct. 2006.
[122] M. Vetterli, P. Marziliano, and T. Blu, Sampling signals with finite rate of innovation, IEEE Trans. Signal Proc. 50, no. 6, June 2002.
[123] W. Xu, B. Hassibi, Efficient compressive sensing with deterministic guarantees using expander graphs. IEEE Information Theory Workshop, Lake Tahoe, September 2007.
[124] R. Young, An Introduction to Nonharmonic Fourier Series. Academic Press, New York, 1980.
[125] Y. Zhang, Solution-recovery in $\ell_1$-norm for non-square linear systems: deterministic conditions and open questions. Technical report TR05-06, Department of Computational and Applied Mathematics, Rice University, Houston, TX, 2005.
[126] Y. Zhang, A simple proof for recoverability of $\ell_1$-minimization: Go over or under? Technical report TR05-09, Department of Computational and Applied Mathematics, Rice University, Houston, TX, 2005.
[127] Y. Zhang, A simple proof for recoverability of $\ell_1$-minimization (II): the nonnegativity case. Technical report TR05-10, Department of Computational and Applied Mathematics, Rice University, Houston, TX, 2005.
