Deep Unfolded Fractional Optimization for Maximizing Robust Throughput in 6G Networks
Sixth-generation (6G) wireless communication networks aim to leverage artificial intelligence tools for efficient and robust network optimization, particularly because traditional optimization methods often incur high computational complexity, motivating the use of deep learning (DL)-based optimization frameworks. In this context, this paper considers a multi-antenna base station (BS) serving multiple users simultaneously through transmit beamforming in the downlink. To account for robustness, this work proposes an uncertainty-injected deep unfolded fractional programming (UI-DUFP) framework for weighted sum rate (WSR) maximization under imperfect channel conditions. The proposed method unfolds fractional programming (FP) iterations into trainable neural network layers refined by projected gradient descent (PGD) steps, while robustness is introduced by injecting sampled channel uncertainties during training and optimizing a quantile-based objective. Simulation results show that the proposed UI-DUFP achieves higher WSR and improved robustness compared to classical weighted minimum mean square error (WMMSE), FP, and DL baselines, while maintaining low inference time and good scalability. These findings highlight the potential of deep unfolding combined with uncertainty-aware training as a powerful approach for robust optimization in 6G networks.
💡 Research Summary
The paper addresses the problem of robust weighted‑sum‑rate (WSR) maximization in a downlink multi‑antenna base‑station (BS) serving multiple single‑antenna users, a scenario that is central to the envisioned 6G networks. Classical fractional programming (FP) techniques, especially the quadratic transform (QT), can reformulate the non‑convex WSR problem into tractable sub‑problems, but they require many iterative updates involving matrix inversions, eigen‑decompositions and binary searches, which makes them unsuitable for real‑time operation in highly dynamic 6G environments.
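As a concrete illustration of why the QT is useful (a minimal sketch of the transform's tightness property, not code from the paper): for a scalar ratio A/B with A ≥ 0 and B > 0, the QT replaces the ratio with the surrogate 2y√A − y²B, which matches the original ratio exactly at the closed-form optimal auxiliary variable y* = √A/B.

```python
import numpy as np

def qt_surrogate(A, B, y):
    """Quadratic-transform surrogate of the ratio A / B (A >= 0, B > 0)."""
    return 2.0 * y * np.sqrt(A) - y**2 * B

A, B = 3.0, 2.0
y_star = np.sqrt(A) / B              # closed-form optimal auxiliary variable
print(qt_surrogate(A, B, y_star))    # equals A / B = 1.5 (up to float round-off)
```

Alternating between updating y in closed form and maximizing the surrogate over the original variables is what generates the iterative FP updates that the paper subsequently unfolds.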
To overcome this limitation, the authors propose Uncertainty-Injected Deep Unfolded Fractional Programming (UI-DUFP). The core idea is to "unfold" the FP iterations into a finite-depth neural network: each iteration becomes a trainable layer, and within each layer a small number N of projected gradient descent (PGD) steps refine the beamforming matrix V while enforcing the total transmit-power constraint via a projection operator Ω_C. The only learnable parameters are the step sizes μ^(n)_m of the PGD steps, which are optimized end-to-end by back-propagation.
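One such layer can be sketched as follows (a simplified NumPy illustration: the names `project_power` and `pgd_refine` are my own, `grad_fn` stands in for the gradient of the layer's WSR surrogate, and the end-to-end learning of the step sizes μ^(n)_m is omitted).

```python
import numpy as np

def project_power(V, P_max):
    """Ω_C: project V onto the total transmit-power ball ||V||_F^2 <= P_max."""
    norm_sq = np.sum(np.abs(V) ** 2)
    return V if norm_sq <= P_max else V * np.sqrt(P_max / norm_sq)

def pgd_refine(V, grad_fn, mus, P_max):
    """One unfolded layer: N projected-gradient ascent steps on the beamformer V.

    `mus` plays the role of the per-step sizes μ^(n)_m (learnable in the paper);
    `grad_fn(V)` is a placeholder for the gradient of the layer's objective.
    """
    for mu in mus:
        V = project_power(V + mu * grad_fn(V), P_max)
    return V
```

Because the projection simply rescales V whenever the power budget is exceeded, it is differentiable almost everywhere, which is what allows the step sizes to be trained by back-propagation through the unfolded layers.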
Robustness to channel estimation errors is incorporated by uncertainty injection during training. The true channel h_k is modeled as an estimated channel ĥ_k plus a complex Gaussian error h_err,k ∼ CN(0, σ_h² I). In each training iteration, B independent error realizations are drawn, producing B perturbed channel matrices Ĥ_b. The unfolded network computes the WSR R_Q,b for each realization, and the empirical γ-quantile (the γ·B-th smallest value) is taken as the robust objective R_γ^max. The loss function is the negative of the sum of these γ-quantile values over all M unfolded layers, i.e.,
L(μ) = −∑_{m=1}^{M} quantile_γ{ R_Q,b(V_m^{(N)}, g_m, u_m) }.
By maximizing this quantile, the model directly enforces the probabilistic robustness constraint Pr{R_Q ≥ R_γ^max} ≥ 1 − γ, i.e., the achieved WSR exceeds the reported robust value in at least a (1 − γ) fraction of the channel-error realizations.
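The uncertainty injection and quantile-based loss can be sketched as follows (a simplified NumPy illustration; the names `sample_perturbed_channels`, `gamma_quantile`, and `ui_dufp_loss` are my assumptions, and the per-layer rates R_Q,b would in practice come from the unfolded network rather than being passed in directly).

```python
import numpy as np

def sample_perturbed_channels(H_hat, sigma_h, B, rng):
    """Draw B realizations Ĥ_b = Ĥ + H_err,b with i.i.d. CN(0, σ_h²) entries."""
    shape = (B, *H_hat.shape)
    err = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    return H_hat[None, ...] + (sigma_h / np.sqrt(2)) * err

def gamma_quantile(rates, gamma):
    """Empirical γ-quantile: the ⌈γ·B⌉-th smallest of the B sampled WSR values."""
    rates = np.sort(np.asarray(rates))
    return rates[max(int(np.ceil(gamma * rates.size)) - 1, 0)]

def ui_dufp_loss(layer_rates, gamma):
    """L(μ): negative sum of per-layer γ-quantiles over the M unfolded layers."""
    return -sum(gamma_quantile(r, gamma) for r in layer_rates)
```

For example, with B = 10 sampled rates 1, 2, …, 10 and γ = 0.1, the robust objective is the smallest rate (1), so minimizing the loss pushes up the worst 10% of realizations rather than the average.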