Adaptive Aggregation with Two Gains in QFL
Federated learning (FL) deployed over quantum-enabled and heterogeneous classical networks faces significant performance degradation due to uneven client quality, stochastic teleportation fidelity, device instability, and geometric mismatch between local and global models. Classical aggregation rules assume Euclidean topology and uniform communication reliability, limiting their suitability for emerging quantum federated systems. This paper introduces A2G (Adaptive Aggregation with Two Gains), a dual-gain framework that jointly regulates geometric blending through a geometry gain and modulates client importance using a QoS gain derived from teleportation fidelity, latency, and instability.
💡 Research Summary
The paper tackles the emerging challenge of federated learning (FL) in environments where quantum‑enabled communication coexists with heterogeneous classical networks. In such settings, client reliability varies widely due to stochastic teleportation fidelity, latency fluctuations, and device instability, while model parameters often live on non‑Euclidean manifolds (e.g., angles of variational quantum circuits on toroidal spaces). Traditional FL aggregation methods such as FedAvg assume a Euclidean parameter space and uniform communication quality, leading to severe performance degradation in quantum federated learning (QFL).
To address these issues, the authors propose A2G (Adaptive Aggregation with Two Gains), a dual‑gain aggregation framework that simultaneously incorporates (i) a Quality‑of‑Service (QoS) gain α (with exponents γ and δ) to weight client contributions according to teleportation fidelity, latency, and instability, and (ii) a geometry gain β to control curvature‑aware blending between local and global models on a Riemannian manifold.
QoS Gain (α). For each client i at round t the framework measures three physical‑layer indicators: teleportation fidelity F_i,t, communication latency τ_i,t, and channel variance σ_i,t². These are combined into a QoS factor q_i,t = F_i,t^α·(τ_i,t+ε)^γ·(σ_i,t²+ε)^δ, where the latency and variance exponents γ and δ are negative so that slow or noisy channels lower the factor rather than raise it. The factor is multiplied by the data‑size proportion p_i and then normalized across all K clients to produce trust weights w_i,t. When α = γ = δ = 0 the weights reduce to the data‑size‑proportional averaging of FedAvg; larger α amplifies high‑fidelity clients, and larger |γ| and |δ| more strongly penalize latency and variance, effectively filtering unreliable participants.
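The weighting scheme above can be sketched in a few lines of NumPy. This is an illustrative reading of the formula, not the authors' code; the function name and default exponents are assumptions, and the latency/variance exponents are applied with negative sign so they act as penalties:

```python
import numpy as np

def a2g_qos_weights(fidelity, latency, variance, p,
                    alpha=2.0, gamma=1.0, delta=1.0, eps=1e-8):
    """Sketch of A2G QoS trust weights (hypothetical helper).

    q_i = F_i^alpha * (tau_i + eps)^(-gamma) * (sigma_i^2 + eps)^(-delta),
    scaled by the data proportion p_i and normalized over all clients.
    Negative exponents on latency/variance penalize slow, noisy channels.
    """
    fidelity = np.asarray(fidelity, float)
    latency = np.asarray(latency, float)
    variance = np.asarray(variance, float)
    p = np.asarray(p, float)

    q = fidelity**alpha * (latency + eps)**(-gamma) * (variance + eps)**(-delta)
    w = p * q
    return w / w.sum()  # normalized trust weights w_i,t

# Client 0 has high fidelity and low latency/variance, so it dominates;
# with alpha = gamma = delta = 0 the weights fall back to p (FedAvg-style).
w = a2g_qos_weights(fidelity=[0.99, 0.80, 0.95],
                    latency=[0.01, 0.50, 0.10],
                    variance=[0.001, 0.050, 0.010],
                    p=[1/3, 1/3, 1/3])
```

Normalizing after multiplying by p_i keeps the update a convex combination of client models, so the QoS gain only redistributes influence and never changes the overall step size.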
Geometry Gain (β). Because many quantum model parameters are periodic, the authors embed the aggregation in a Riemannian manifold M. Using the logarithmic map Log_{θ_t}(·) and exponential map Exp_{θ_t}(·), each local model θ_i,t is projected onto the tangent space at the current global model θ_t, yielding a weighted tangent vector v_t = Σ_i w_i,t·Log_{θ_t}(θ_i,t). The manifold correction term is defined as Ψ(θ_t,{θ_i,t}) = Exp_{θ_t}(v_t) – θ_t. The scalar β ∈ [0, 1] then controls how far the global model moves along this correction: β = 0 leaves the global model unchanged, while β = 1 applies the full manifold‑aware aggregation step.
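For the toroidal parameter spaces mentioned above (e.g., VQC rotation angles), the Log and Exp maps reduce to wrapped angular differences and wrapped addition. The sketch below illustrates one geometry‑gain step under that flat‑torus assumption; the function names are hypothetical and the general Riemannian case would substitute the manifold's own Log/Exp maps:

```python
import numpy as np

def wrap(x):
    """Wrap angles into (-pi, pi] (the flat-torus Log map at 0)."""
    return (np.asarray(x, float) + np.pi) % (2 * np.pi) - np.pi

def a2g_geometry_step(theta_g, thetas_local, weights, beta=0.5):
    """One geometry-gain update on the flat torus (illustrative sketch).

    Log_{theta_g}(theta_i) = shortest signed angular difference;
    v = sum_i w_i * Log_{theta_g}(theta_i);
    the update moves a fraction beta along v and wraps back onto the torus.
    """
    thetas_local = np.asarray(thetas_local, float)
    w = np.asarray(weights, float).reshape(-1, 1)
    v = (w * wrap(thetas_local - theta_g)).sum(axis=0)  # tangent vector v_t
    return wrap(theta_g + beta * v)                      # Exp_{theta_g}(beta*v)

# One global parameter near +pi, one client near -pi: the wrapped Log map
# takes the short path across the periodic boundary instead of averaging
# through 0, which is exactly the failure mode of Euclidean FedAvg here.
theta_next = a2g_geometry_step(np.array([3.0]), [[-3.0]], [1.0], beta=1.0)
```

With β = 1 and a single client, the step lands on that client's angle by crossing the ±π boundary, whereas a Euclidean average of 3.0 and −3.0 would give 0.0, a point far from both models on the circle.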