Iterative Reweighted Algorithms for Sparse Signal Recovery with Temporally Correlated Source Vectors


Iterative reweighted algorithms, as a class of algorithms for sparse signal recovery, have been found to outperform their non-reweighted counterparts. However, in the multiple measurement vector (MMV) setting, existing reweighted algorithms do not account for temporal correlation among source vectors, and their performance therefore degrades significantly when such correlation is present. In this work we propose an iterative reweighted sparse Bayesian learning (SBL) algorithm that exploits the temporal correlation, and, motivated by it, we propose a strategy to improve existing reweighted $\ell_2$ algorithms for the MMV problem, namely replacing their row norms with a Mahalanobis distance measure. Simulations show that the proposed reweighted SBL algorithm has superior performance, and that the proposed improvement strategy is effective for existing reweighted $\ell_2$ algorithms.


💡 Research Summary

The paper addresses a critical gap in multiple‑measurement‑vector (MMV) sparse recovery: existing iterative re‑weighted algorithms (both ℓ₁- and ℓ₂-based) ignore temporal correlation among the source vectors, leading to severe performance degradation when such correlation is present. Building on the block sparse Bayesian learning (bSBL) framework, the authors model each row of the source matrix X as a multivariate Gaussian N(0, γ_i B), where γ_i controls sparsity and a common positive‑definite matrix B captures the temporal covariance shared by all sources. By assuming a single B for all rows they avoid over‑fitting while still allowing the algorithm to learn the dominant correlation structure.
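The generative model above can be illustrated with a short NumPy sketch. The dimensions, the AR(1)-style choice for B, and all variable names here are illustrative assumptions, not taken from the paper; the only structure carried over is that each active row x_i is drawn from N(0, γ_i B) with a single shared B.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 50, 4          # candidate sources (rows of X), measurement vectors (columns)
K = 5                 # number of active (nonzero) rows

# Shared temporal covariance B: an assumed AR(1)-style Toeplitz example
beta = 0.9
B = beta ** np.abs(np.subtract.outer(np.arange(L), np.arange(L)))

# Row-sparse source matrix X: each active row i is drawn as N(0, gamma_i * B)
gamma = np.zeros(N)
gamma[rng.choice(N, K, replace=False)] = rng.uniform(0.5, 2.0, K)
X = np.zeros((N, L))
for i in np.flatnonzero(gamma):
    X[i] = rng.multivariate_normal(np.zeros(L), gamma[i] * B)
```

Because all active rows share the same B, samples of X exhibit the within-row temporal correlation that the proposed algorithm is designed to learn and exploit.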

Through a duality transformation of the Type‑II marginal likelihood, they derive a source‑space penalty that replaces the usual ℓ_q norm of a row with its Mahalanobis distance x_iᵀ B⁻¹ x_i. Consequently, the weight associated with each row becomes 1/γ_i, which now depends on the whole current estimate of X (via B and Σ₀), making the algorithm non‑separable. The resulting iterative scheme—named ReSBL‑QM—updates X, the hyper‑parameters γ_i, and the covariance B in closed form:
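The row penalty substitution described above is easy to state in code. The sketch below (function name and shapes are my own) compares the Mahalanobis penalty x_i B⁻¹ x_iᵀ against the plain squared ℓ₂ row norm it replaces; the two coincide when B = I, i.e. when there is no temporal correlation to exploit.

```python
import numpy as np

def row_penalties(X, B):
    """Per-row Mahalanobis penalty x_i B^{-1} x_i^T vs. squared l2 row norm.

    X : (N, L) source matrix estimate, B : (L, L) temporal covariance.
    """
    Binv = np.linalg.inv(B)
    maha = np.einsum('il,lm,im->i', X, Binv, X)   # x_i B^{-1} x_i^T for every row
    l2sq = np.einsum('il,il->i', X, X)            # ||x_i||_2^2 for every row
    return maha, l2sq
```

Using einsum keeps the computation vectorized over rows; with B = np.eye(L) the two returned arrays are identical, which makes the penalty a strict generalization of the usual row norm.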

  1. X^{k+1} = W Φᵀ(λI + ΦWΦᵀ)^{-1} Y,
  2. γ_i^{k+1} = (1/L) …
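The closed-form X update in the list above can be sketched directly. This is an assumption-laden illustration, not the authors' code: I take W = diag(γ_i), as is standard in reweighted-ℓ₂ SBL schemes, and use a linear solve rather than an explicit inverse for (λI + ΦWΦᵀ).

```python
import numpy as np

def x_update(Y, Phi, gamma, lam):
    """One closed-form source update: X = W Phi^T (lam*I + Phi W Phi^T)^{-1} Y.

    Y : (M, L) measurements, Phi : (M, N) dictionary,
    gamma : (N,) current hyper-parameters, lam : noise level.
    Assumes W = diag(gamma); solve() avoids forming the matrix inverse.
    """
    M = Phi.shape[0]
    W = np.diag(gamma)
    G = lam * np.eye(M) + Phi @ W @ Phi.T
    return W @ Phi.T @ np.linalg.solve(G, Y)
```

Two sanity checks follow from the formula: with Φ = I, γ ≡ 1 and λ = 0 the update returns Y itself, and any row with γ_i = 0 is forced to zero, which is how the scheme prunes inactive sources.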
