Neural Window Decoder for SC-LDPC Codes
In this paper, we propose a neural window decoder (NWD) for spatially coupled low-density parity-check (SC-LDPC) codes. The proposed NWD retains the conventional window decoder (WD) process but incorporates trainable neural weights. To train the weights of the NWD, we introduce two novel training strategies. First, we restrict the loss function to the target variable nodes (VNs) of the window, which prunes the neural network and accordingly enhances training efficiency. Second, we employ an active learning technique with a normalized loss term to prevent the training process from becoming biased toward specific training regions. Next, we develop a systematic method to derive non-uniform schedules for the NWD based on the training results. We introduce trainable damping factors that reflect the relative importance of check node (CN) updates. By skipping updates of lesser importance, we can omit 41% of CN updates without performance degradation compared to the conventional WD. Lastly, we address the error propagation problem inherent in SC-LDPC codes by deploying a complementary weight set, which is activated when an error is detected in the previous window. This adaptive decoding strategy effectively mitigates error propagation without requiring modifications to the code and decoder structures.
💡 Research Summary
The paper introduces a Neural Window Decoder (NWD) specifically designed for spatially coupled low‑density parity‑check (SC‑LDPC) codes. While conventional window decoding (WD) already offers low latency, reduced memory, and moderate complexity by processing a sliding sub‑graph (window) of the coupled chain, it suffers from two major drawbacks: sub‑optimal error‑correction performance and error propagation (EP) across windows. NWD augments the classic WD with trainable neural weights, preserving the original message‑passing schedule but allowing the decoder to learn how to attenuate or amplify messages for better convergence.
Three novel contributions are presented:
- Target‑Specific Training – The loss function is restricted to the target variable nodes (VNs) within the current window. By pruning the output space to only these nodes, the neural network automatically discards unnecessary parameters, leading to faster convergence and a three‑fold improvement in block error rate (BLER) compared to the conventional WD under the same computational budget. Active learning and a normalized validation error are employed to select the most informative training samples and to keep the validation metric balanced across SNRs.
- Neural Non‑Uniform Scheduling – A trainable damping factor (γ) is introduced for each check‑node (CN) update. The damping factor mixes the newly computed CN message with its previous value; a large γ indicates that the current update contributes little to decoding progress. After training, the γ values are examined and the least important CN updates are permanently skipped. This yields a static, hardware‑friendly schedule that eliminates about 41% of CN updates (roughly a 45% reduction in overall decoding complexity) without any measurable performance loss, outperforming prior soft‑BER‑based dynamic schedules that require costly online estimation.
- Adaptive NWD for Error Propagation – To mitigate EP, the authors collect training data that deliberately includes EP‑inducing error patterns and train a second weight set, called the EP‑resilient weight set. During decoding, a simple error‑detection flag from the previous window determines which weight set to use: the standard set for normal operation, or the EP‑resilient set when an error was previously observed. This switch is performed without any changes to the code structure or decoder architecture and without additional online BER calculations, yet it significantly lowers the probability that a single window failure cascades through the chain.
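The damped CN update and schedule pruning described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact blend rule, the per-update granularity of γ, and the skip threshold are assumptions for demonstration.

```python
import numpy as np

def damped_cn_update(m_new, m_prev, gamma):
    """Blend the freshly computed CN message with its previous value.
    A gamma close to 1 means the update barely changes the message,
    so that CN update is a candidate for skipping."""
    return (1.0 - gamma) * m_new + gamma * m_prev

def prune_schedule(gammas, threshold=0.9):
    """After training, keep only the CN updates whose learned damping
    factor indicates they still contribute to decoding progress.
    The threshold value here is illustrative."""
    return [idx for idx, g in enumerate(gammas) if g < threshold]

# Example: updates 1 and 3 learned gamma near 1, so they are dropped
# from the static schedule.
kept = prune_schedule([0.1, 0.95, 0.4, 0.99])
```

Because the pruning happens once, offline, the surviving update indices can be frozen into a fixed schedule with no runtime decision logic.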
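The weight-set switch for error propagation reduces to a single flag, which a short sketch makes concrete. The syndrome check as the error-detection mechanism and the function names here are illustrative assumptions; the paper only specifies that a detected error in the previous window activates the complementary weight set.

```python
import numpy as np

def syndrome_ok(H, hard_bits):
    """Parity check on the decoded window: an all-zero syndrome
    means no error was detected (assumed detection mechanism)."""
    return not np.any((H @ hard_bits) % 2)

def pick_weights(prev_window_failed, standard_w, ep_resilient_w):
    """Single-flag switch between the two trained weight sets:
    no change to the decoder structure, just a multiplexer."""
    return ep_resilient_w if prev_window_failed else standard_w
```

In hardware, `prev_window_failed` is one stored bit per window position, and the two weight sets occupy two ROM banks selected by that bit.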
The overall training pipeline consists of (i) building the unrolled Tanner graph for a window of size W and a maximum of ℓ iterations, (ii) applying target‑specific loss, (iii) learning both CN weights and damping factors, and (iv) optionally training the EP‑resilient set using a boosting‑style learning approach. The authors use the neural min‑sum algorithm, share weights across edges to keep memory requirements modest, and train with the Adam optimizer over 1 000 epochs with 10 000 samples per epoch drawn from multiple SNR points.
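Two pieces of the pipeline above admit compact sketches: the weighted (neural) min-sum check-node update, and the loss restricted to target VNs. The scalar shared weight `w`, the LLR sign convention (positive LLR favors bit 0), and the loss form are assumptions consistent with common neural min-sum formulations, not verbatim from the paper.

```python
import numpy as np

def weighted_min_sum_cn(v2c, w):
    """Neural min-sum CN update: each outgoing message takes the sign
    product and minimum magnitude of the *other* incoming VN messages,
    scaled by a trainable weight w (shared across edges)."""
    n = len(v2c)
    out = np.empty(n)
    for i in range(n):
        others = np.delete(v2c, i)
        out[i] = w * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def target_bce_loss(llrs, bits, target_idx):
    """Binary cross-entropy evaluated only on the target VNs of the
    window, pruning all other outputs from the loss."""
    p = 1.0 / (1.0 + np.exp(llrs[target_idx]))  # P(bit = 1) from LLR
    b = bits[target_idx]
    return float(np.mean(-(b * np.log(p + 1e-12)
                           + (1 - b) * np.log(1 - p + 1e-12))))
```

In the actual training loop the weight `w` (and the damping factors) would be optimized with Adam through the unrolled window graph; the functions above only show the forward computations.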
Experimental validation is performed on a protograph‑based (3, 6) regular SC‑LDPC code of length 20 000, window size W = 6, target size T = 1, and up to 16 iterations per window. Results show:
- A 0.5 dB SNR gain in BLER over the conventional WD at the same complexity, or equivalently the same BLER at roughly 45% lower overall decoding complexity (about 41% of CN updates skipped).
- The adaptive scheme reduces the EP‑induced BLER spike dramatically, improving both block error rate and frame error rate for long transmission chains.
- The non‑uniform schedule is static and can be stored as a lookup table, eliminating runtime computation overhead.
From a hardware perspective, NWD retains the WD’s simple message‑passing architecture, requiring only additional storage for the learned CN weights, damping factors, and the optional EP‑resilient weight set. The static schedule enables straightforward pipeline optimization, and the EP‑resilient switch can be implemented with a single flag and multiplexers.
In summary, the Neural Window Decoder provides a comprehensive solution that simultaneously boosts decoding performance, cuts computational load, and suppresses error propagation in SC‑LDPC systems. Its design respects the hardware‑friendly nature of window decoding while leveraging deep‑learning techniques to achieve gains that would otherwise demand more invasive code or decoder modifications. This makes NWD a promising candidate for next‑generation low‑latency communication standards that employ spatially coupled codes.