Error Exponents of Erasure/List Decoding Revisited via Moments of Distance Enumerators


The analysis of random coding error exponents pertaining to erasure/list decoding, due to Forney, is revisited. Instead of using Jensen’s inequality as well as some other inequalities in the derivation, we demonstrate that an exponentially tight analysis can be carried out by assessing the relevant moments of a certain distance enumerator. The resulting bound has the following advantages: (i) it is at least as tight as Forney’s bound, (ii) under certain symmetry conditions associated with the channel and the random coding distribution, it is simpler than Forney’s bound in the sense that it involves an optimization over one parameter only (rather than two), and (iii) in certain special cases, like the binary symmetric channel (BSC), the optimum value of this parameter can be found in closed form, so there is no need to conduct a numerical search. We have not yet found, however, a numerical example where this new bound is strictly better than Forney’s bound. This may provide additional evidence supporting Forney’s conjecture that his bound is tight for the average code. We believe that the technique suggested in this paper can be useful in simplifying, and hopefully also improving, exponential error bounds in other problem settings as well.


💡 Research Summary

The paper revisits Forney’s classic error‑exponent analysis for erasure/list decoding, which traditionally relies on a two‑parameter optimization over (ρ, s) and uses Jensen’s inequality together with the inequality (∑_i a_i P_i)^r ≤ ∑_i a_i P_i^r for r ≤ 1. These steps introduce an auxiliary parameter ρ that complicates the computation of the exponent E₁(R,T). The authors propose a fundamentally different technique: instead of applying those inequalities, they evaluate the moments of a “distance enumerator” that counts codewords at a given distance from the received vector. By doing so, they obtain an exponentially tight bound without the need for the extra parameter.
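To make the central object concrete, here is a small illustrative sketch (not the paper's code; all function names are ours) of the distance enumerator N_y(d), i.e., the number of codewords of a random binary codebook at Hamming distance d from a received word y, together with a Monte-Carlo estimate of its moments E[N_y(d)^s]:

```python
# Hypothetical illustration: empirically estimate moments of the distance
# enumerator N_y(d) -- the number of random codewords at Hamming distance d
# from a received word y -- for a uniform random binary codebook.
import random

def distance_enumerator(codebook, y):
    """Return the list [N_y(0), N_y(1), ..., N_y(n)] for a length-n word y."""
    n = len(y)
    counts = [0] * (n + 1)
    for c in codebook:
        d = sum(ci != yi for ci, yi in zip(c, y))  # Hamming distance
        counts[d] += 1
    return counts

def moment_estimate(n=10, M=32, s=0.5, trials=2000, seed=0):
    """Monte-Carlo estimate of E[N_y(d)^s] for each d, over random codebooks."""
    rng = random.Random(seed)
    acc = [0.0] * (n + 1)
    y = [0] * n  # by symmetry of the uniform ensemble, y may be fixed
    for _ in range(trials):
        codebook = [[rng.randint(0, 1) for _ in range(n)] for _ in range(M)]
        for d, nd in enumerate(distance_enumerator(codebook, y)):
            acc[d] += nd ** s
    return [a / trials for a in acc]
```

The point of the paper's technique is that such moments can be assessed exponentially tightly in analysis, avoiding the auxiliary parameter ρ; the sketch above only shows what the enumerator and its moments are, not the analytical evaluation.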

The key technical condition is that the quantity γ_y(s) = −ln ∑_x P(x) P^s(y|x) be independent of the output symbol y; when this holds, they denote it simply by γ(s). This symmetry is satisfied for uniform input distributions and channels whose transition matrix columns are permutations of each other (e.g., binary symmetric channel, additive noise models over groups). Under this condition, the authors derive a single‑parameter expression:

E₁*(R,T) = sup_{s≥0} [⋯]
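The symmetry condition above is easy to check numerically. The following sketch (illustrative, not from the paper; the names and the crossover probability are our choices) verifies that for a BSC with uniform inputs, γ_y(s) = −ln ∑_x P(x) P(y|x)^s takes the same value for both output symbols y, so γ(s) is well defined:

```python
# Illustrative check of the symmetry condition gamma_y(s) = gamma(s) for a
# binary symmetric channel (BSC) with a uniform input distribution.
import math

def gamma_y(P_in, W, y, s):
    """gamma_y(s) = -ln sum_x P(x) * W[x][y]**s."""
    return -math.log(sum(P_in[x] * W[x][y] ** s for x in range(len(P_in))))

p = 0.1                           # assumed BSC crossover probability
W = [[1 - p, p], [p, 1 - p]]      # transition matrix W[x][y]; columns are
                                  # permutations of each other
P_in = [0.5, 0.5]                 # uniform input distribution
s = 0.7
vals = [gamma_y(P_in, W, y, s) for y in (0, 1)]
# vals[0] == vals[1], so the condition holds and gamma(s) is well defined.
```

The same check fails for a non-uniform input or a channel whose columns are not permutations of one another, which is exactly when the single-parameter form is not claimed to apply.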

