On Low Complexity Maximum Likelihood Decoding of Convolutional Codes

This paper considers the average complexity of maximum likelihood (ML) decoding of convolutional codes. ML decoding can be modeled as finding the most probable path taken through a Markov graph. Integrated with the Viterbi algorithm (VA), complexity reduction methods such as the sphere decoder often use the sum log likelihood (SLL) of a Markov path as a bound to disprove the optimality of other Markov path sets and to consequently avoid exhaustive path search. In this paper, it is shown that SLL-based optimality tests are inefficient if one fixes the coding memory and takes the codeword length to infinity. Alternatively, optimality of a source symbol at a given time index can be verified using bounds derived from log likelihoods of the neighboring symbols. It is demonstrated that such neighboring log likelihood (NLL)-based optimality tests, whose efficiency does not depend on the codeword length, can bring significant complexity reduction to ML decoding of convolutional codes. The results are generalized to ML sequence detection in a class of discrete-time hidden Markov systems.


💡 Research Summary

The paper investigates the average computational complexity of maximum‑likelihood (ML) decoding for convolutional codes, framing the problem as a search for the most probable path through a Markov graph. Traditional complexity‑reduction techniques—such as sphere decoding—are typically integrated with the Viterbi algorithm (VA) and rely on the sum‑log‑likelihood (SLL) of an entire candidate path as a bound. By comparing the SLL of a hypothesized path with that of the current best path, these methods can prune large portions of the search space, thereby avoiding exhaustive enumeration.
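The baseline that these pruning methods build on is the Viterbi algorithm itself. As a concrete illustration (the rate, memory, and generator polynomials below are our own choices for a minimal example, not taken from the paper), here is a hard-decision Viterbi decoder for a rate-1/2, memory-2 convolutional code:

```python
# Minimal illustration of Viterbi decoding on a trellis (not the paper's
# algorithm). Code parameters are assumptions: rate 1/2, memory 2,
# generators (7, 5) in octal.

def encode(bits):
    """Encode a bit sequence; the state is the last two input bits."""
    s1 = s2 = 0
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111 (octal 7)
        out.append(b ^ s2)       # generator 101 (octal 5)
        s1, s2 = b, s1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    n = len(received) // 2
    metric = [0, INF, INF, INF]  # path metric per state, start in state 0
    back = []                    # backpointers: (prev_state, input_bit)
    for t in range(n):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * 4
        ptr = [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            s1, s2 = (state >> 1) & 1, state & 1
            for b in (0, 1):
                o = [b ^ s1 ^ s2, b ^ s2]       # branch output
                cost = metric[state] + (o[0] != r[0]) + (o[1] != r[1])
                nxt = (b << 1) | s1             # next state
                if cost < new_metric[nxt]:
                    new_metric[nxt] = cost
                    ptr[nxt] = (state, b)
        metric = new_metric
        back.append(ptr)
    # trace back from the best terminal state
    state = min(range(4), key=lambda s: metric[s])
    bits = []
    for ptr in reversed(back):
        state, b = ptr[state]
        bits.append(b)
    return bits[::-1]
```

Note that this baseline visits every state at every time step; the pruning methods discussed above try to skip parts of this work by discarding paths whose SLL bound already rules them out.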

The authors first demonstrate that SLL‑based optimality tests become fundamentally inefficient when the coding memory (i.e., the number of states of the underlying trellis) is held fixed while the codeword length N grows without bound. They prove that, as N → ∞, the distribution of SLL values over all admissible paths concentrates, causing the gap between the best and second‑best SLL to shrink to zero in probability. Consequently, the SLL bound loses its discriminative power, and the decoder is forced to examine essentially all paths, negating any complexity gain. This result formalizes, for convolutional codes, the long‑standing observation that sphere‑decoder‑style pruning works well for short blocks but deteriorates for long sequences.
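One compact way to state this concentration effect (in notation of our own choosing, not the paper's: write \(L_N(\mathbf{x})\) for the SLL of path \(\mathbf{x}\) and \(\hat{\mathbf{x}}_{\mathrm{ML}}\) for the ML path) is:

```latex
% Informal restatement; notation is ours, not the paper's.
\Pr\!\left( L_N(\hat{\mathbf{x}}_{\mathrm{ML}})
          - \max_{\mathbf{x}\neq \hat{\mathbf{x}}_{\mathrm{ML}}} L_N(\mathbf{x})
          > \epsilon \right)
\;\longrightarrow\; 0
\qquad (N \to \infty),
\quad \text{for every fixed } \epsilon > 0 .
```

In words: for long codewords, with high probability some competing path comes within any fixed margin of the ML path's SLL, so an SLL threshold can no longer certify optimality cheaply.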

To overcome this limitation, the paper introduces a novel class of optimality tests based on neighboring log‑likelihoods (NLL). Instead of aggregating likelihoods over the entire path, the NLL approach examines a local window around each source symbol at time index t, deriving bounds on that symbol's optimality from the log‑likelihoods of its neighboring symbols. Because each test involves only a fixed-size neighborhood, its efficiency does not depend on the codeword length.
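A toy sketch of what a window-limited local test of this general flavor could look like (everything below — the encoder, the window rule, the brute-force search, and the margin — is our own illustrative construction, not the paper's NLL test):

```python
# Hypothetical illustration of a window-limited local decision with a margin.
# This is NOT the paper's NLL test; code parameters (rate 1/2, memory 2,
# generators (7,5) octal) and the window rule are assumptions for the sketch.
from itertools import product

def encode75(bits, s1=0, s2=0):
    """Rate-1/2, memory-2 encoder, generators (7,5) octal."""
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]
        s1, s2 = b, s1
    return out

def local_margin(received, t, w=3):
    """Brute-force test of bit u_t restricted to a window of w symbols on
    each side. Compares the best local Hamming cost achievable with u_t = 0
    against u_t = 1, minimizing over all local input patterns and all
    starting states at the window edge. The work per symbol is bounded by
    the window size, independent of the codeword length."""
    n = len(received) // 2
    lo, hi = max(0, t - w), min(n, t + w + 1)
    seg = received[2 * lo:2 * hi]
    best = {0: float("inf"), 1: float("inf")}
    for state in range(4):
        for pat in product((0, 1), repeat=hi - lo):
            out = encode75(list(pat), s1=(state >> 1) & 1, s2=state & 1)
            cost = sum(a != b for a, b in zip(out, seg))
            bit = pat[t - lo]
            best[bit] = min(best[bit], cost)
    bit = 0 if best[0] <= best[1] else 1
    return bit, abs(best[0] - best[1])
```

The point of the sketch is the complexity profile, which mirrors the claim in the abstract: the cost of one local decision is fixed by the window width w, so it does not grow with the codeword length, whereas an SLL-style bound requires metrics accumulated over the whole path.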

