EqDeepRx: Learning a Scalable MIMO Receiver
While machine learning (ML)-based receiver algorithms have received a great deal of attention in the recent literature, they often suffer from poor scaling with increasing spatial multiplexing order and lack of explainability and generalization. This paper presents EqDeepRx, a practical deep-learning-aided multiple-input multiple-output (MIMO) receiver, which is built by augmenting linear receiver processing with carefully engineered ML blocks. At the core of the receiver model is a shared-weight DetectorNN that operates independently on each spatial stream or layer, enabling near-linear complexity scaling with respect to multiplexing order. To ensure better explainability and generalization, EqDeepRx retains conventional channel estimation and augments it with a lightweight DenoiseNN that learns frequency-domain smoothing. To reduce the dimensionality of the DetectorNN inputs, the receiver utilizes two linear equalizers in parallel: a linear minimum mean-square error (LMMSE) equalizer with interference-plus-noise covariance estimation and a regularized zero-forcing (RZF) equalizer. The parallel equalized streams are jointly consumed by the DetectorNN, after which a compact DemapperNN produces bit log-likelihood ratios for channel decoding. 5G/6G-compliant end-to-end simulations across multiple channel scenarios, pilot patterns, and inter-cell interference conditions show improved error rate and spectral efficiency over a conventional baseline, while maintaining low-complexity inference and support for different MIMO configurations without retraining.
💡 Research Summary
EqDeepRx is a practical deep‑learning‑enhanced MIMO OFDM receiver that augments conventional linear processing with a few carefully designed neural‑network blocks. The architecture preserves the standard signal‑processing chain—channel estimation, equalization, detection, and demapping—while inserting lightweight modules that improve performance without incurring prohibitive computational cost.
The core of the system is a shared‑weight DetectorNN. Unlike previous end‑to‑end CNN receivers that process the entire time‑frequency grid at once, DetectorNN operates independently on each spatial stream (or layer) using the same set of parameters. Consequently, the number of learnable parameters does not grow with the number of MIMO layers, and the overall complexity scales almost linearly with the multiplexing order, making the design suitable for massive MIMO scenarios.
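The weight-sharing idea can be illustrated with a minimal numpy sketch. The layer sizes and the single dense layer below are purely illustrative, not the paper's actual DetectorNN architecture; the point is only that one set of weights serves every stream, so the parameter count stays constant as the layer count grows.

```python
import numpy as np

# Illustrative shared-weight per-stream detection (hypothetical shapes,
# not the paper's exact architecture).
rng = np.random.default_rng(0)

FEATURES_IN = 4    # e.g. real/imag parts of two parallel equalizer outputs
FEATURES_OUT = 2   # real/imag parts of the refined soft symbol

# A single shared dense layer: these are the ONLY learnable parameters,
# regardless of how many MIMO layers are processed.
W = rng.standard_normal((FEATURES_IN, FEATURES_OUT)) * 0.1
b = np.zeros(FEATURES_OUT)

def detector_nn(stream_features: np.ndarray) -> np.ndarray:
    """Apply the same weights to one stream's per-RE feature vectors."""
    return np.tanh(stream_features @ W + b)

def detect_all_streams(x: np.ndarray) -> np.ndarray:
    """x: (num_layers, num_res, FEATURES_IN); the loop reuses W and b."""
    return np.stack([detector_nn(x[layer]) for layer in range(x.shape[0])])

for num_layers in (2, 4, 8):
    x = rng.standard_normal((num_layers, 12, FEATURES_IN))
    y = detect_all_streams(x)
    # The output grows with the layer count; the parameter count does not.
    print(num_layers, y.shape, W.size + b.size)
```

Because the per-stream function is identical, inference cost grows linearly in the number of layers while the model itself is fixed, which matches the near-linear complexity scaling claimed in the summary.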
Channel estimation follows the classic pilot‑based approach. Raw rank‑one estimates are obtained from orthogonal pilots, linearly interpolated across the frequency axis, and then refined by a DenoiseNN. This small network learns a frequency‑domain smoothing filter that adapts to the statistics of the training data, yielding more accurate and robust channel estimates than a fixed linear filter, especially in the presence of noise and inter‑cell interference.
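The three-step chain (least-squares pilot estimates, frequency interpolation, denoising) can be sketched as follows. The pilot spacing, subcarrier count, and the fixed moving-average filter standing in for DenoiseNN are all assumptions for illustration; in the paper the smoothing is learned from data rather than fixed.

```python
import numpy as np

# Sketch of the pilot-based channel-estimation chain. Pilot spacing (every
# 6th subcarrier) and grid size are assumed; the moving-average kernel is a
# fixed stand-in for the learned DenoiseNN smoothing.
rng = np.random.default_rng(0)
N_SC = 72                            # subcarriers (illustrative)
pilot_idx = np.arange(0, N_SC, 6)    # assumed pilot positions

h_true = np.ones(N_SC, dtype=complex)            # flat channel for clarity
pilots = np.ones(len(pilot_idx), dtype=complex)  # known pilot symbols
noise = 0.5 * (rng.standard_normal(len(pilot_idx)) +
               1j * rng.standard_normal(len(pilot_idx)))

# 1) Raw least-squares estimate at pilot positions: h_ls = y_p / x_p
h_ls = (h_true[pilot_idx] * pilots + noise) / pilots

# 2) Linear interpolation across the frequency axis (real/imag separately)
h_interp = (np.interp(np.arange(N_SC), pilot_idx, h_ls.real)
            + 1j * np.interp(np.arange(N_SC), pilot_idx, h_ls.imag))

# 3) Frequency-domain smoothing (stand-in for DenoiseNN)
kernel = np.ones(5) / 5
h_smooth = (np.convolve(h_interp.real, kernel, mode="same")
            + 1j * np.convolve(h_interp.imag, kernel, mode="same"))

mse_raw = np.mean(np.abs(h_interp - h_true) ** 2)
mse_smooth = np.mean(np.abs(h_smooth - h_true) ** 2)
print(mse_raw, mse_smooth)
```

A learned filter improves on this fixed kernel precisely because its coefficients adapt to the delay-spread and interference statistics seen in training, rather than assuming a flat channel as this toy example does.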
Two linear equalizers are run in parallel: a regularized zero‑forcing (RZF) equalizer and an LMMSE equalizer that incorporates an interference‑plus‑noise covariance matrix (INCM). The INCM is estimated from pilot‑based residuals over an “interference coherence bandwidth” (two PRBs, i.e., 24 subcarriers). By sharing the same covariance estimate across this bandwidth, the LMMSE equalizer avoids per‑resource‑element matrix inversions, dramatically reducing the arithmetic load. Both equalizers produce unit‑gain symbols, which are concatenated and fed to DetectorNN. The parallel structure provides complementary views of the received signal—RZF offers a low‑complexity baseline, while LMMSE mitigates colored interference—improving training stability and final detection performance.
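The complexity saving from the shared INCM can be made concrete in a small sketch. The 4×2 antenna configuration and the white-noise INCM below are illustrative assumptions (in the paper the INCM is estimated from pilot residuals); the structural point is that the covariance inverse is computed once per 24-subcarrier block, while only a small per-RE system solve remains.

```python
import numpy as np

# Sketch of block-wise LMMSE equalization with one INCM inversion per
# coherence block of 24 subcarriers. Antenna counts and the identity-scaled
# INCM are illustrative assumptions.
rng = np.random.default_rng(0)
N_RX, N_TX = 4, 2
N_SC = 48                  # two coherence blocks
BLOCK = 24                 # "interference coherence bandwidth" (two PRBs)

H = (rng.standard_normal((N_SC, N_RX, N_TX)) +
     1j * rng.standard_normal((N_SC, N_RX, N_TX))) / np.sqrt(2)
x = (2 * rng.integers(0, 2, (N_SC, N_TX)) - 1).astype(complex)  # BPSK
noise_var = 1e-3
n = np.sqrt(noise_var / 2) * (rng.standard_normal((N_SC, N_RX)) +
                              1j * rng.standard_normal((N_SC, N_RX)))
y = np.einsum("krt,kt->kr", H, x) + n

x_hat = np.zeros((N_SC, N_TX), dtype=complex)
incm_inversions = 0
for start in range(0, N_SC, BLOCK):
    # One INCM estimate/inversion per coherence block (white noise assumed
    # here; the paper estimates it from pilot-based residuals).
    R_inv = np.linalg.inv(noise_var * np.eye(N_RX))
    incm_inversions += 1
    for k in range(start, start + BLOCK):
        Hk = H[k]
        # Per-RE LMMSE filter: (H^H R^-1 H + I)^-1 H^H R^-1; only the small
        # N_TX x N_TX inverse remains per RE, the N_RX x N_RX INCM is shared.
        G = np.linalg.inv(Hk.conj().T @ R_inv @ Hk + np.eye(N_TX)) \
            @ Hk.conj().T @ R_inv
        x_hat[k] = G @ y[k]

print(incm_inversions)  # 2 INCM inversions for 48 resource elements
```

At high SNR the recovered symbols closely track the transmitted BPSK symbols, while the expensive receive-covariance inversion has run only once per block.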
DetectorNN processes the concatenated equalized symbols and outputs soft symbol estimates for each stream. A compact DemapperNN then converts these soft symbols into bit log‑likelihood ratios (LLRs) suitable for the LDPC decoder. All neural‑network components are deliberately shallow (a few fully‑connected layers), keeping inference latency and memory footprint low.
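For intuition about what the DemapperNN learns to approximate, here is the conventional closed-form demapper for Gray-mapped QPSK. The bit-to-symbol convention below (bit value 0 mapped to the positive amplitude, LLR = log P(b=0)/P(b=1)) is an assumption for this sketch.

```python
import numpy as np

# Closed-form LLR demapper for Gray-mapped QPSK with constellation points
# (+/-1 +/- 1j)/sqrt(2): bit 0 rides on the real axis, bit 1 on the
# imaginary axis, and the exact LLRs are linear in the soft symbol.
def qpsk_llrs(soft_symbols: np.ndarray, noise_var: float) -> np.ndarray:
    """Return LLRs with shape (..., 2): bit 0 from Re, bit 1 from Im."""
    scale = 2.0 * np.sqrt(2.0) / noise_var
    return np.stack([scale * soft_symbols.real,
                     scale * soft_symbols.imag], axis=-1)

# Usage: a noiseless symbol for bits (0, 1) under the convention that
# bit value 0 maps to the positive amplitude.
s = (1 - 1j) / np.sqrt(2)
llr = qpsk_llrs(np.array(s), noise_var=0.5)
print(llr)  # positive LLR for bit 0 (= 0), negative LLR for bit 1 (= 1)
```

For QPSK this mapping is exactly linear, so a learned demapper adds little; its value grows for higher-order QAM and for residual post-equalization distortion, where the true LLR function is no longer a simple closed form.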
The authors evaluate EqDeepRx with 5G/6G‑compliant OFDM parameters across a broad set of channel models (EPA, EVA, 3GPP Urban Macro), pilot patterns, and varying levels of inter‑cell interference. Simulations are performed in a full time‑domain environment, so inter‑symbol and inter‑carrier interference (ISI/ICI) are present even though the receiver model assumes a frequency‑domain linear system. Results show that EqDeepRx consistently outperforms a baseline linear receiver (LMMSE + ZF) by 1–2 dB in SNR for both uncoded BER and coded BLER, translating into a 10–15 % gain in spectral efficiency. Importantly, the same trained model works for 2, 4, 6, and 8 MIMO layers without retraining, demonstrating true scalability.
An extensive ablation study compares several variants: (i) removing DenoiseNN (pure linear interpolation), (ii) using only a single equalizer, and (iii) employing independent DetectorNNs per stream (no weight sharing). The full EqDeepRx configuration yields the best trade‑off between performance and complexity, confirming that (a) frequency‑domain denoising is essential for accurate channel estimates, (b) parallel equalization provides robustness against strong interference, and (c) shared‑weight detection dramatically reduces parameter count while preserving generalization.
Complexity analysis reports FLOP counts for each block. DenoiseNN, DetectorNN, and DemapperNN each require on the order of 10⁴–10⁵ FLOPs per OFDM slot, while the LMMSE equalizer’s matrix inversion is performed once per interference coherence bandwidth, further limiting the overall arithmetic budget. The total increase over a pure linear receiver is roughly 10–15 % in FLOPs, well within the processing capabilities of modern base‑station ASICs or GPUs.
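A back-of-envelope count shows where the covariance sharing pays off. The carrier numerology below (100 PRBs of 12 subcarriers) is an illustrative assumption, not a figure from the paper; only the 24-subcarrier coherence bandwidth comes from the summary above.

```python
# Back-of-envelope count of INCM inversions per OFDM symbol. The 100-PRB
# carrier is an assumed numerology; the 24-subcarrier coherence bandwidth
# (two PRBs) is the value stated in the summary.
PRBS = 100
SC_PER_PRB = 12
N_SC = PRBS * SC_PER_PRB             # 1200 subcarriers
COHERENCE_BW = 2 * SC_PER_PRB        # 24 subcarriers

per_re_inversions = N_SC             # one inversion per resource element
shared_inversions = N_SC // COHERENCE_BW  # one per coherence block
print(per_re_inversions, shared_inversions)  # 1200 vs 50: a 24x reduction
```

Even before counting the neural-network FLOPs, cutting the number of covariance inversions by the coherence-bandwidth factor keeps the LMMSE branch well inside the reported 10–15 % overall FLOP overhead.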
In summary, EqDeepRx demonstrates that a judicious blend of expert knowledge (linear channel estimation, LMMSE/INCM equalization, ZF) and lightweight deep learning (DenoiseNN, shared‑weight DetectorNN, DemapperNN) can deliver a scalable, explainable, and high‑performance MIMO receiver ready for 5G/6G deployments. The design addresses the three major hurdles of earlier ML‑based receivers—poor scaling with multiplexing order, lack of interpretability, and excessive computational demand—while achieving measurable gains in error‑rate performance and spectral efficiency across realistic cellular scenarios.