Error Exponents of Erasure/List Decoding Revisited via Moments of Distance Enumerators

Reading time: 6 minutes

📝 Original Info

  • Title: Error Exponents of Erasure/List Decoding Revisited via Moments of Distance Enumerators
  • ArXiv ID: 0711.2501
  • Date: 2016-11-17
  • Authors: Not provided in the source metadata (please consult the original paper if possible)

📝 Abstract

The analysis of random coding error exponents pertaining to erasure/list decoding, due to Forney, is revisited. Instead of using Jensen's inequality as well as some other inequalities in the derivation, we demonstrate that an exponentially tight analysis can be carried out by assessing the relevant moments of a certain distance enumerator. The resulting bound has the following advantages: (i) it is at least as tight as Forney's bound, (ii) under certain symmetry conditions associated with the channel and the random coding distribution, it is simpler than Forney's bound in the sense that it involves an optimization over one parameter only (rather than two), and (iii) in certain special cases, like the binary symmetric channel (BSC), the optimum value of this parameter can be found in closed form, and so, there is no need to conduct a numerical search. We have not yet found, however, a numerical example where this new bound is strictly better than Forney's bound. This may provide additional evidence in support of Forney's conjecture that his bound is tight for the average code. We believe that the technique we suggest in this paper can be useful in simplifying, and hopefully also improving, exponential error bounds in other problem settings as well.


📄 Full Content

In his celebrated paper [3], Forney extended Gallager's bounding techniques [2] and found exponential error bounds for the ensemble performance of optimum generalized decoding rules that include the options of erasure, variable-size lists, and decision feedback (see also later studies, e.g., [1], [4], [5], [6], [8], and [10]).

Stated informally, Forney [3] considered a communication system where a code of block length n and size $M = e^{nR}$ (R being the coding rate), drawn independently at random under a distribution {P(x)}, is used for a discrete memoryless channel (DMC) {P(y|x)} and decoded with an erasure/list option. For the erasure case, on which we focus hereafter, an optimum tradeoff was sought between the probability of erasure (no decoding) and the probability of undetected decoding error. This tradeoff is optimally controlled by a threshold parameter T of the function $e^{nT}$, to which one compares the ratio between the likelihood of each hypothesized message and the sum of likelihoods of all other messages. If this ratio exceeds $e^{nT}$ for some message, a decision is made in favor of that message; otherwise, an erasure is declared.
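In symbols (our paraphrase of the rule just described, with $\boldsymbol{x}_m$ denoting the codeword of message $m$ and $\boldsymbol{y}$ the channel output), the decoder decides in favor of message $m$ if and only if

$$
\frac{P(\boldsymbol{y}\,|\,\boldsymbol{x}_m)}{\sum_{m' \ne m} P(\boldsymbol{y}\,|\,\boldsymbol{x}_{m'})} \;\ge\; e^{nT},
$$

and declares an erasure if no message satisfies this condition.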

Forney’s main result in [3] is a single-letter lower bound $E_1(R,T)$ on the exponent of the probability of the event $\mathcal{E}_1$ of not making the correct decision, namely, either erasing or making a wrong decision. This lower bound is given by

$$
E_1(R,T) = \max_{0 \le s \le \rho \le 1}\left[E_0(s,\rho) - \rho R - sT\right],
$$

where

$$
E_0(s,\rho) = -\ln \sum_{y}\left[\sum_{x} P(x)\,P^{1-s}(y|x)\right]\left[\sum_{x'} P(x')\,P^{s/\rho}(y|x')\right]^{\rho}.
$$
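As a quick numerical illustration (a sketch, not code from the paper; the crossover probability, rate, threshold, and grid resolution below are arbitrary choices), this bound can be evaluated for a BSC with uniform random coding by a brute-force two-dimensional grid search over $(s, \rho)$:

```python
import numpy as np

def forney_E0(s: float, rho: float, p: float) -> float:
    """Forney's E_0(s, rho) for a BSC with crossover probability p
    and uniform random coding P(x) = 1/2 on {0, 1}; nats throughout."""
    total = 0.0
    for y in (0, 1):
        # Channel transition probabilities P(y|x) for x in {0, 1}.
        py_given_x = [1.0 - p if x == y else p for x in (0, 1)]
        inner1 = 0.5 * sum(q ** (1.0 - s) for q in py_given_x)   # sum over x
        inner2 = 0.5 * sum(q ** (s / rho) for q in py_given_x)   # sum over x'
        total += inner1 * inner2 ** rho
    return -np.log(total)

def forney_E1(R: float, T: float, p: float, grid: int = 200) -> float:
    """Grid search over 0 < s <= rho <= 1 for
    E_1(R, T) = max [E_0(s, rho) - rho*R - s*T]."""
    best = -np.inf
    for rho in np.linspace(1e-3, 1.0, grid):
        for s in np.linspace(1e-3, rho, grid):
            best = max(best, forney_E0(s, rho, p) - rho * R - s * T)
    return best

if __name__ == "__main__":
    p, R, T = 0.1, 0.2, 0.05   # illustrative values (rates in nats)
    e1 = forney_E1(R, T, p)
    print(f"E_1(R,T) ~ {e1:.4f}, E_2(R,T) = E_1(R,T) + T ~ {e1 + T:.4f}")
```

The one-parameter bound derived in this paper removes the inner search dimension in precisely this kind of computation.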

The exponent of the undetected error event $\mathcal{E}_2$ (i.e., the event of not erasing but making a wrong estimate of the transmitted message) is given by $E_2(R,T) = E_1(R,T) + T$. As can be seen, the computation of $E_1(R,T)$ involves an optimization over two auxiliary parameters, $\rho$ and $s$, which in general requires a two-dimensional numerical search. This is different from Gallager’s random coding error exponent function for ordinary decoding (without erasures), which is given by

$$
E_r(R) = \max_{0 \le \rho \le 1}\left[E_0(\rho) - \rho R\right],
$$

with $E_0(\rho)$ being defined as

$$
E_0(\rho) = -\ln \sum_{y}\left[\sum_{x} P(x)\,P^{1/(1+\rho)}(y|x)\right]^{1+\rho},
$$

where there is only one parameter to be optimized. In [3], one of the steps in the derivation involves the inequality $\left(\sum_i a_i\right)^r \le \sum_i a_i^r$, which holds for $r \le 1$ and non-negative $\{a_i\}$ (cf. eq. (90) in [3]), and another step (eq. (91e) therein) applies Jensen’s inequality. The former inequality introduces an additional parameter, denoted $\rho$, to be optimized together with the original parameter, $s$. In this paper, we offer a different technique for deriving a lower bound on the exponent of the probability of $\mathcal{E}_1$, which avoids the use of these inequalities. Instead, an exponentially tight evaluation of the relevant expression is derived by assessing the moments of a certain distance enumerator, and so the resulting bound is at least as tight as Forney’s bound. Since the first inequality mentioned above is bypassed, there is no need for the additional parameter $\rho$, and so, under certain symmetry conditions (which often hold) on the random coding distribution and the channel, the resulting bound is not only at least as tight as Forney’s bound, but also simpler, in the sense that there is only one parameter to optimize rather than two. Moreover, this optimization can be carried out in closed form, at least in some special cases such as the binary symmetric channel (BSC). We have not yet found, however, a convincing numerical example where the new bound is strictly better than Forney’s bound. This may serve as additional evidence in support of Forney’s conjecture that his bound is tight for the average code. Nevertheless, the question of whether there exist situations where the new bound is strictly better remains open.
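To convey the flavor of the distance-enumerator argument in the BSC case (a sketch in our own notation, not the paper's exact development; $d_H$ is the Hamming distance, $h(\cdot)$ the binary entropy in nats, and $\doteq$ denotes equality of exponential orders), consider codewords $\boldsymbol{X}_1,\dots,\boldsymbol{X}_M$ drawn i.i.d. uniformly over $\{0,1\}^n$ and define, for a channel output $\boldsymbol{y}$,

$$
N_{\boldsymbol{y}}(n\delta) = \big|\{m:\ d_H(\boldsymbol{X}_m,\boldsymbol{y}) = n\delta\}\big|,
\qquad
\mathbb{E}\,N_{\boldsymbol{y}}(n\delta) \doteq e^{n[R + h(\delta) - \ln 2]}.
$$

For a moment order $\lambda > 0$, one then has

$$
\mathbb{E}\big[N_{\boldsymbol{y}}(n\delta)^{\lambda}\big] \doteq
\begin{cases}
e^{n\lambda[R + h(\delta) - \ln 2]}, & R > \ln 2 - h(\delta), \\
e^{n[R + h(\delta) - \ln 2]}, & R < \ln 2 - h(\delta),
\end{cases}
$$

since in the first case $N_{\boldsymbol{y}}(n\delta)$ concentrates at its exponentially large mean, while in the second the moment is dominated by the rare event $\{N_{\boldsymbol{y}}(n\delta) \ge 1\}$. Assessing such moments exactly on the exponential scale is, roughly speaking, what replaces the inequality $(\sum_i a_i)^r \le \sum_i a_i^r$ and Jensen's inequality in the analysis.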

We wish to emphasize that the main message of this contribution lies not merely in the simplification of the error exponent bound in this specific problem of decoding with erasures, but, more importantly, in the analysis technique we offer here, which, we believe, is applicable to many other problem settings as well. It is conceivable that in some of these problems, the proposed technique could not only simplify, but perhaps also improve upon, currently known bounds. The underlying ideas behind this technique are inspired by the statistical-mechanical point of view on random code ensembles, offered in [9] and further elaborated on in [7].

The outline of this paper is as follows. In Section 2, we establish notation conventions and give some necessary background in more detail. In Section 3, we present the main result along with a short discussion. Finally, in Section 4, we provide the detailed derivation of the new bound, first for the special case of the BSC, and then more generally.

Throughout this paper, scalar random variables (RVs) will be denoted by capital letters, their sample values will be denoted by the respective lower-case letters, and their alphabets will be denoted by the respective calligraphic letters. A similar convention will apply to random vectors of dimension n and their sample values, which will be denoted by the same symbols in boldface font.

