📝 Original Info
- Title: On the Iterative Decoding of High-Rate LDPC Codes With Applications in Compressed Sensing
- ArXiv ID: 0903.2232
- Date: 2009-03-12
- Authors: Fan Zhang, Henry D. Pfister
📝 Abstract
This paper considers the performance of $(j,k)$-regular low-density parity-check (LDPC) codes with message-passing (MP) decoding algorithms in the high-rate regime. In particular, we derive the high-rate scaling law for MP decoding of LDPC codes on the binary erasure channel (BEC) and the $q$-ary symmetric channel ($q$-SC). For the BEC, the density evolution (DE) threshold of iterative decoding scales like $\Theta(k^{-1})$ and the critical stopping ratio scales like $\Theta(k^{-j/(j-2)})$. For the $q$-SC, the DE threshold of verification decoding depends on the details of the decoder and scales like $\Theta(k^{-1})$ for one decoder. Using the fact that coding over large finite alphabets is very similar to coding over the real numbers, the analysis of verification decoding is also extended to the compressed sensing (CS) of strictly-sparse signals. A DE-based approach is used to analyze CS systems with randomized-reconstruction guarantees. This leads to the result that strictly-sparse signals can be reconstructed efficiently with high probability using a constant oversampling ratio (i.e., when the number of measurements scales linearly with the sparsity of the signal). A stopping-set-based approach is also used to get stronger (e.g., uniform-in-probability) reconstruction guarantees.
📄 Full Content
to this system because the measurements are real-valued and provide an infinite amount of information when there is no measurement noise.
This paper provides detailed descriptions and extensions of work reported in two conference papers [13], [14].
We believe the main contributions of these results are:
- The observation that the Sudocodes reconstruction algorithm is an instance of verification decoding, so its decoding thresholds can be computed precisely using numerical DE [13]. For ensembles with at least 3 nonzero entries in each column, this implies that no outer code is required. For signals with $\delta n$ non-zero entries, this reduces the lower bound on the number of noiseless measurements required from $O(n \ln n)$ to $O(n)$.
- The introduction of the high-rate scaling analysis for iterative erasure and verification decoding of LDPC codes [13], [14]. This technique provides closed-form upper and lower bounds on decoding thresholds that hold uniformly as the rate approaches 1. For example, it shows that $(3,k)$-LDPC codes achieve 81% of capacity on the BEC for sufficiently large $k$. It also shows that, for strictly-sparse signals with $\delta n$ non-zero entries and noiseless measurements, $3\delta n$ measurements are sufficient (with $(4,k)$-LDPC codes) for verification-based reconstruction uniformly as $\delta \to 0$. While it is known that $\delta n + 1$ measurements are sufficient for reconstruction via exhaustive search over all support sets [30], this shows that $O(\delta n)$ measurements also suffice for sparse measurement matrices with low-complexity reconstruction. In contrast, the best bounds for linear-programming reconstruction require at least $O(\delta n \ln \frac{1}{\delta})$ measurements.
- The application of the high-rate scaling analysis to compute the stopping distance of erasure and verification decoding. For example, this shows that almost all long $(j,k)$-LDPC codes, with $j = 2 + \lceil 2 \ln(k-1) \rceil$, can correct all erasure patterns whose fraction of erasures is smaller than $\frac{1}{k-1}$.
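As a quick numerical illustration of the last point, the sketch below tabulates the prescribed column weight $j = 2 + \lceil 2\ln(k-1) \rceil$ and the guaranteed correctable erasure fraction $1/(k-1)$ for a few check degrees $k$; the design rate $1 - j/k$ is included for reference (the choice of $k$ values is ours, not from the paper):

```python
import math

# Stopping-set guarantee from the text: almost all long (j,k)-regular
# LDPC codes with j = 2 + ceil(2*ln(k-1)) correct every erasure
# pattern whose erasure fraction is below 1/(k-1).
for k in (8, 16, 32, 64):
    j = 2 + math.ceil(2 * math.log(k - 1))
    design_rate = 1 - j / k          # lower bound on the code rate
    frac = 1 / (k - 1)               # guaranteed correctable erasure fraction
    print(f"k={k:3d}  j={j:2d}  rate>={design_rate:.3f}  erasures<{frac:.4f}")
```

Note how the guaranteed erasure fraction shrinks like $\Theta(k^{-1})$ while the rate approaches 1, matching the scaling regime studied in the paper.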
Section II provides background information on coding and CS. Section III summarizes the main results. In Section IV, proofs and details are given for the main results based on DE. In Section V, proofs and details are provided for the main results based on stopping-set analysis. Section VI discusses a simple information-theoretic bound on the number of measurements required for reconstruction. Section VII presents simulation results comparing the algorithms discussed in this paper with a range of other algorithms. Finally, some conclusions are discussed in Section VIII.
[Author’s Note: The equations in this paper were originally typeset for two-column presentation, but we have submitted it in one-column format for easier reading. Please accept our apologies for some of the rough looking equations.]
LDPC codes are linear codes introduced by Gallager in 1962 [31] and re-discovered by MacKay in 1995 [32].
Binary LDPC codes are now known to be capacity approaching on various channels when the block length tends to infinity. They can be represented by a Tanner graph, where the i-th variable node is connected to the j-th check node if the entry in the j-th row and i-th column of the parity-check matrix is non-zero.
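The Tanner-graph correspondence can be made concrete with a toy parity-check matrix (the matrix below is our own small example, not one from the paper): each non-zero entry $H_{ji}$ becomes an edge between check node $j$ and variable node $i$.

```python
import numpy as np

# A small illustrative parity-check matrix (3 checks, 6 variables).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

# Tanner-graph edge list: (check node j, variable node i) for H[j, i] != 0.
edges = list(zip(*np.nonzero(H)))
print(edges)
```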
LDPC codes can be decoded by an iterative message-passing (MP) algorithm, which passes messages between the variable nodes and check nodes iteratively. If the messages passed along the edges are probabilities, then the algorithm is also called belief propagation (BP) decoding. The performance of the MP algorithm can be evaluated using density evolution (DE) [33] and stopping set (SS) analysis [34], [35]. These techniques allow one to compute noise thresholds (below which decoding succeeds w.h.p.) for average-case and worst-case error models, respectively.
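For a $(j,k)$-regular ensemble on the BEC, DE reduces to the well-known scalar recursion $x_{t+1} = \varepsilon\,(1-(1-x_t)^{k-1})^{j-1}$, where $x_t$ is the erasure probability of a variable-to-check message and $\varepsilon$ is the channel erasure rate; the decoding threshold is the largest $\varepsilon$ for which the recursion converges to zero. A minimal sketch (the iteration count, convergence tolerance, and bisection scheme are our own arbitrary choices):

```python
def de_fixed_point(eps, j, k, iters=10_000):
    # Scalar BEC density evolution for a (j,k)-regular ensemble:
    # x_{t+1} = eps * (1 - (1 - x_t)^(k-1))^(j-1)
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (k - 1)) ** (j - 1)
        if x < 1e-12:
            return 0.0          # erasure probability driven to zero
    return x                    # stuck at a non-trivial fixed point

def bec_threshold(j, k, tol=1e-5):
    # Bisect on the channel erasure rate eps.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if de_fixed_point(mid, j, k) == 0.0:
            lo = mid            # decoding succeeds: threshold is above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# (3,6)-regular: threshold near 0.4294; the Shannon limit at rate 1/2
# would be eps = 0.5.
print(round(bec_threshold(3, 6), 4))
```

The same bisection-on-threshold idea underlies the numerical DE computations the paper uses for verification decoding, though there the DE recursion tracks more than one quantity.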
An LDPC code is defined by its parity-check matrix Φ, which can be represented by a sparse bipartite graph. In the bipartite graph, there are two types of nodes: variable nodes representing code symbols and check nodes representing parity-check equations. In the standard irregular code ensemble [36], the connections between variable nodes and check nodes are defined by the degree distribution (d.d.) pair $\lambda(x) = \sum_{i=1}^{d_v} \lambda_i x^{i-1}$ and $\rho(x) = \sum_{i=1}^{d_c} \rho_i x^{i-1}$, where $d_v$ and $d_c$ are the maximum variable and check node degrees and $\lambda_i$ and $\rho_i$ denote the fraction of edges connected to degree-$i$ variable and check nodes, respectively. The sparse graph representation of LDPC codes implies that the encoding and decoding algorithms can be implemented with complexity linear in the block length. Since LDPC codes are usually defined over the finite field GF(q) instead of the real numbers, we need to modify the encoding/decoding algorithms to handle signals over the real numbers. Each entry in the parity-check matrix is chosen either to be 0 or to be a real number drawn from a continuous distribution. The parity-check matrix $\Phi \in \mathbb{R}^{m \times n}$ can also be used as the measurement matrix in the CS system (e.g., the signal vector x
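The construction just described, and the node-based verification rules it enables, can be sketched as follows (all names and the peeling schedule are our own; this is an illustrative sketch, not the paper's exact algorithm). The matrix builder pairs edge "sockets" at random (a configuration-model construction, so occasional parallel edges simply collapse), and the decoder applies two rules to noiseless measurements of a strictly-sparse signal: a measurement with zero residual verifies all of its unresolved neighbours as zero (true with probability 1 for continuous entries), and a measurement with exactly one unresolved neighbour solves for that entry directly.

```python
import numpy as np

def regular_measurement_matrix(n, j, k, seed=0):
    """(j,k)-regular Phi: j non-zeros per column, k per row, with values
    drawn from a continuous (Gaussian) distribution. Parallel edges from
    the random socket pairing collapse, so a few rows/columns may end up
    slightly lighter in this simple sketch."""
    assert (n * j) % k == 0
    m = n * j // k
    rng = np.random.default_rng(seed)
    row_sockets = np.repeat(np.arange(m), k)
    rng.shuffle(row_sockets)
    Phi = np.zeros((m, n))
    Phi[row_sockets, np.repeat(np.arange(n), j)] = rng.standard_normal(n * j)
    return Phi

def verification_decode(Phi, y, max_sweeps=100, tol=1e-9):
    """Peeling-style verification decoding of y = Phi @ x for strictly
    sparse x, assuming noiseless real-valued measurements."""
    m, n = Phi.shape
    x_hat = np.zeros(n)
    verified = np.zeros(n, dtype=bool)
    for _ in range(max_sweeps):
        progress = False
        for c in range(m):
            nbrs = np.flatnonzero(Phi[c])
            unres = nbrs[~verified[nbrs]]
            if unres.size == 0:
                continue
            resid = y[c] - Phi[c] @ x_hat
            if abs(resid) < tol:      # zero residual => neighbours are 0 w.p. 1
                verified[unres] = True
                progress = True
            elif unres.size == 1:     # one unknown left => solve for it
                v = unres[0]
                x_hat[v] = resid / Phi[c, v]
                verified[v] = True
                progress = True
        if not progress:
            break
    return x_hat, bool(verified.all())

# Example: n = 60 signal entries, m = n*j/k = 30 measurements,
# a strictly-sparse signal with 3 non-zero entries.
rng = np.random.default_rng(1)
n, j, k = 60, 3, 6
Phi = regular_measurement_matrix(n, j, k)
x = np.zeros(n)
x[rng.choice(n, size=3, replace=False)] = rng.standard_normal(3)
x_hat, ok = verification_decode(Phi, Phi @ x)
print(ok, np.allclose(x_hat, x))
```

Decoding succeeds exactly when the peeling process resolves every variable node, which is what the DE and stopping-set analyses in the paper characterize for the average and worst case, respectively.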
…(Full text truncated)…
Reference
This content is AI-processed based on ArXiv data.