Deep Dive into Instanton-based Techniques for Analysis and Reduction of Error Floors of LDPC Codes

We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise that result in decoding failures. We show that the general idea and the respective optimization technique apply broadly to a variety of channels, discrete or continuous, and to a variety of sub-optimal decoders. Specifically, we consider iterative belief propagation (BP) decoders, Gallager-type decoders, and linear programming (LP) decoders operating over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons for the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instantons and its relation to decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures and thus have less significant error floors.
LDPC codes [1], [2] have been the focus of intense research over the past decade because they can approach the theoretical limits of reliable transmission over various channels even when decoded by sub-optimal, low-complexity algorithms.
Two important classes of such algorithms are (i) iterative decoding algorithms, which include message passing algorithms (variants of the BP algorithm [3] and Gallager-type algorithms [1]) and bit flipping algorithms [4], [5] (serial and parallel), and (ii) the LP decoding algorithm [6]. Characterization of the error performance of sub-optimal algorithms (or simply decoders) is still an open problem, and has been addressed both for LDPC code ensembles and for individual codes [7]. The error performance of LDPC codes in the asymptotic limit of the code length is well characterized for a large class of sub-optimal decoders over different channels (the interested reader is referred to [1], [8], [9], [10] for the general theory of message passing algorithms, to [4], [5], [11], [12] for the analysis of bit flipping algorithms and expander-based arguments, and to [13], [14], [15] for the analysis of the LP decoder).
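To make the bit-flipping class concrete, here is a minimal sketch of a parallel bit-flipping decoder for the BSC. The function name, the majority flipping rule, and the stopping criterion are our illustrative choices, not the specific serial/parallel schedules of [4], [5]:

```python
import numpy as np

def parallel_bit_flip(H, y, max_iters=50):
    """Sketch of a parallel bit-flipping decoder for the BSC.

    H : (m, n) binary parity-check matrix (numpy array of 0/1)
    y : length-n received hard-decision vector (0/1)

    Each iteration flips, in parallel, every bit that participates in
    more unsatisfied than satisfied checks; decoding stops as soon as
    all parity checks are satisfied.
    """
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2          # 1 marks an unsatisfied check
        if not syndrome.any():
            return x, True               # valid codeword found
        unsat = H.T.dot(syndrome)        # unsatisfied checks touching each bit
        total = H.sum(axis=0)            # check degree of each bit
        flip = unsat > (total - unsat)   # majority of its checks unsatisfied
        x = (x + flip.astype(int)) % 2
    return x, False                      # decoding failure
```

The parallel (flooding) schedule shown here updates all bits simultaneously from the same syndrome; a serial variant would instead flip one bit at a time and recompute the syndrome after each flip.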
A common feature of all the analysis methods used in deriving the asymptotic results is that the underlying assumptions hold only in the limit of infinitely long codes and/or apply to an ensemble of codes. Hence, they are of limited use for the analysis of a given finite-length code. The performance of a code under a particular decoding algorithm is characterized by the bit-error-rate (BER) or the frame-error-rate (FER) curve plotted as a function of the signal-to-noise ratio (SNR). A typical BER/FER vs. SNR curve consists of two distinct regions. At small SNR, the error probability decreases rapidly with SNR, the curve resembling a waterfall. The decrease slows down at moderate SNR, turning into the error floor asymptotic at very large SNR [16]. This transient behavior and the error floor asymptotic originate from the sub-optimality of the decoder; the ideal maximum-likelihood (ML) curve would not show such a dramatic change in the BER/FER as the SNR increases. While the slope of the BER/FER curve in the waterfall region is the same for almost all the codes in an ensemble, there can be a huge variation in the slopes of different codes in the error floor region [7]. Since for sufficiently long codes the error floor phenomenon manifests itself in a domain unreachable by brute-force Monte-Carlo (MC) simulations, analytical methods are necessary to characterize the FER performance.
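The last point is easy to quantify with a standard back-of-the-envelope estimate (ours, not from the original text). The MC estimate of the FER from $N$ transmitted frames with $N_{\mathrm{err}}$ observed frame errors is binomial, so

\[
\widehat{\mathrm{FER}} = \frac{N_{\mathrm{err}}}{N},
\qquad
\frac{\sigma\bigl(\widehat{\mathrm{FER}}\bigr)}{\mathrm{FER}}
\approx \frac{1}{\sqrt{N\,\mathrm{FER}}},
\]

and resolving an error floor at $\mathrm{FER} = 10^{-12}$ to within roughly $10\%$ relative accuracy requires $N\,\mathrm{FER} \approx 100$, i.e., $N \approx 10^{14}$ frames, far beyond the reach of direct simulation.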
Finite length analysis of LDPC codes is well understood for decoding over the binary erasure channel (BEC). The decoder failures in the error floor domain are governed by combinatorial structures known as stopping sets [17]. Stopping set distributions of various LDPC ensembles have been studied by Orlitsky et al. (see [18] and references therein for related works). Unfortunately, such a level of understanding of the decoding failures has not been achieved for other important channels such as the BSC and the AWGNC.
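Stopping sets admit a purely combinatorial definition: a subset $S$ of variable nodes such that every check node with at least one neighbor in $S$ has at least two neighbors in $S$; when all positions in $S$ are erased, BEC peeling decoding cannot proceed. Membership is therefore easy to test; here is a minimal sketch (function name and matrix representation are our own, for illustration only):

```python
import numpy as np

def is_stopping_set(H, S):
    """Check whether the variable-node subset S is a stopping set of
    the Tanner graph of H: every check with a neighbor in S must have
    at least two neighbors in S."""
    cols = H[:, sorted(S)]      # restrict H to the columns (bits) in S
    deg = cols.sum(axis=1)      # number of neighbors in S, per check
    return bool(np.all((deg == 0) | (deg >= 2)))

# Toy example with two checks over three bits:
H = np.array([[1, 1, 0],
              [0, 1, 1]])
print(is_stopping_set(H, {0, 1}))     # False: check 1 touches S only once
print(is_stopping_set(H, {0, 1, 2}))  # True: every check is touched twice
```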
In this paper, we focus on the decoding failures of LDPC codes for iterative as well as LP decoders over the BSC and the AWGNC. Failures of iterative decoders for graph based codes were first studied by Wiberg [19], who introduced the notions of computation trees and pseudo-codewords. Subsequent analysis of the computation trees was carried out by Frey et al. [20] and Forney et al. [21]. The failures of the LP decoder can be understood in terms of the vertices of the so-called fundamental polytope, which are also known as pseudo-codewords [6]. Vontobel and Koetter [22] introduced a theoretical tool known as the graph cover approach and used it to establish connections between the LP and the message passing decoders using the notion of the fundamental polytope. They showed that the pseudo-codewords arising from the Tanner graph covers are identical to the pseudo-codewords of the LP decoder. Vontobel and Koetter [23] also studied the relation between the LP and the min-sum decoders.
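For concreteness, the LP decoder of [6] can be stated as follows (our transcription of the standard formulation; $N(j)$ denotes the set of variable nodes adjacent to check $j$). Given log-likelihood ratios $\gamma_i = \log\bigl(\Pr(y_i \mid x_i = 0)/\Pr(y_i \mid x_i = 1)\bigr)$, solve

\begin{align*}
\min_{x}\ & \sum_i \gamma_i x_i \\
\text{s.t.}\ & \sum_{i \in S} x_i \;-\; \sum_{i \in N(j)\setminus S} x_i \;\le\; |S| - 1
&& \forall\, j,\ \forall\, S \subseteq N(j),\ |S|\ \text{odd},\\
& 0 \le x_i \le 1 && \forall\, i.
\end{align*}

The feasible region is the fundamental polytope, and its vertices are exactly the pseudo-codewords referred to above.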
For iterative decoding on the AWGNC, MacKay and Postol [24] were the first to discover that certain “near codewords” are to blame for the high error floor of the Margulis code. Richardson [16] reproduced their results and developed a computational technique to predict the performance of a given LDPC code in the error floor domain. He characterized the troublesome noise configurations leading to the error floor using combinatorial objects termed trapping sets and described a technique (of the Monte-Carlo importance sampling type) to evaluate the error rate associated with a particular class of trapping sets. The method of [16] was further refined for the AWGNC by Stepanov et al. [25], who introduced the notion of instantons. In a nutshell, an instanton is a configuration of the noise positioned in between a codeword (say, the zero codeword) and another pseudo-codeword (which is not necessarily a codeword).
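In formulas (our paraphrase of the standard AWGNC setup, not a verbatim statement from [25]): with the zero codeword transmitted and noise $n$ of variance $\sigma^2$ per symbol, $\Pr(n) \propto \exp\bigl(-\|n\|^2/2\sigma^2\bigr)$, so the most probable noise configuration causing a decoding failure is

\[
n_{\mathrm{inst}} = \arg\min_{n \,:\, \text{decoder fails on } n} \|n\|^2,
\qquad
\mathrm{FER} \sim \exp\!\left(-\frac{\|n_{\mathrm{inst}}\|^2}{2\sigma^2}\right)
\quad \text{as } \sigma \to 0,
\]

up to a prefactor. This Laplace-type estimate is why the minimum-weight instanton dominates the error floor at high SNR.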
…(Full text truncated)…