A Scalable Cloud-Edge Collaborative CKM Construction Framework Enabled by a Foundation Prior Model
Channel knowledge maps (CKMs) provide a site-specific, location-indexed knowledge base that supports environment-aware communications and sensing in 6G networks. In practical deployments, CKM observations are often noisy and irregular due to coverage-induced sparsity and hardware-induced linear/nonlinear degradations. Conventional end-to-end algorithms couple CKM prior information with task- and device-specific observations, and require labeled data and separate training for each construction configuration, which is expensive and therefore incompatible with scalable edge deployments. Motivated by the trends toward cloud-edge collaboration and the Artificial Intelligence - Radio Access Network (AI-RAN) paradigm, we develop a cloud-edge collaborative framework for scalable CKM construction, which enables knowledge sharing across tasks, devices, and regions by explicitly decoupling a generalizable CKM prior from the information provided by local observations. A foundation model is trained once in the cloud using unlabeled data to learn a generalizable CKM prior. During inference, edge nodes combine the shared prior with local observations. Experiments on the CKMImageNet dataset show that the proposed method achieves competitive construction accuracy while substantially reducing training cost and data requirements, mitigating negative transfer, and offering clear advantages in generalization and deployment scalability.
💡 Research Summary
The paper addresses the challenge of constructing Channel Knowledge Maps (CKMs) for 6G networks, where CKMs serve as spatially indexed repositories of long‑term channel characteristics and environmental information, enabling tasks such as predictive beam selection, resource allocation, and handover with reduced online overhead. In real deployments, CKM observations are often sparse, noisy, and degraded by both coverage‑induced masking and hardware‑induced nonlinear effects such as truncation and quantization. Existing end‑to‑end supervised approaches (e.g., U‑Net, Transformers) treat each task–device–configuration combination as a separate learning problem, requiring large labeled datasets and incurring prohibitive training costs, which limits scalability.
To overcome these limitations, the authors propose a system‑level cloud‑edge collaborative framework that decouples the learning of a universal CKM prior from the incorporation of local observations. The key idea is that the spatial‑spectral regularities inherent in CKMs are transferable across tasks, regions, and sensing devices. By training a foundation model once in the cloud on a large unlabeled CKM dataset, the framework captures this “observation‑agnostic” prior. The foundation model is instantiated as a score‑based diffusion model: a neural network learns the gradient of the log‑density (the score) of CKM data, enabling efficient sampling from the prior distribution via stochastic differential equations (SDEs).
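To make the score-based prior concrete, the sketch below discretizes a reverse-time variance-exploding (VE) SDE to draw samples from a learned prior. This is a generic illustration, not the paper's exact sampler: the score network is stood in for by `score_fn`, and the noise schedule, step count, and geometric sigma spacing are assumptions.

```python
import numpy as np

def reverse_ve_sde_sample(score_fn, shape, sigma_min=0.01, sigma_max=10.0,
                          n_steps=200, rng=None):
    """Draw prior samples by discretizing the reverse-time VE-SDE.

    score_fn(x, sigma) must return the score (gradient of the log-density)
    of the data distribution perturbed with Gaussian noise of std sigma.
    In the paper's setting this role is played by the cloud-trained
    foundation model; here it is any callable with that signature.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    x = rng.normal(0.0, sigma_max, size=shape)        # start from the wide noise prior
    for i in range(n_steps - 1):
        dv = sigmas[i] ** 2 - sigmas[i + 1] ** 2      # variance removed this step
        x = x + dv * score_fn(x, sigmas[i])           # drift toward high-density regions
        x = x + np.sqrt(dv) * rng.normal(size=shape)  # reverse-time diffusion noise
    return x
```

With an analytically known score (e.g., for standard-normal toy data, `score(x, s) = -x / (1 + s**2)`), the sampler recovers samples with the correct statistics, which is a useful sanity check before plugging in a trained network.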
During inference, edge nodes receive local measurements y that are related to the true CKM x through a possibly nonlinear forward operator A(·) (masking, down‑sampling, truncation, quantization) and additive Gaussian noise. The edge performs posterior inference by combining the learned prior p(x) with the likelihood p(y|x) defined by A(·). Two reconstruction criteria are considered: (i) MMSE‑style, where the posterior mean is approximated via score‑based sampling; and (ii) MAP‑style, where the posterior mode is obtained by minimizing −log p(x) + ‖y − A(x)‖²/(2σ²) using reverse‑SDE dynamics. Because A(·) is treated as a differentiable degradation operator, the same foundation model can be reused for any combination of masking, resolution, dynamic‑range, or quantization effects, achieving zero‑shot adaptation without additional training.
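The edge-side inference step can be sketched as follows. This is a simplified illustration under stated assumptions, not the paper's implementation: the masking operator is linear (so its fidelity gradient is exact), quantization is shown only as a forward operator (a nonlinear op whose gradient would in practice need a smooth or straight-through surrogate), and `score_fn`, the step size, and the noise level are all hypothetical placeholders.

```python
import numpy as np

def degrade_mask(x, mask):
    # coverage-induced masking: only observed entries survive
    return x * mask

def degrade_quantize(x, n_bits=4, lo=0.0, hi=1.0):
    # hardware-induced quantization (nonlinear; needs a smooth
    # surrogate before it can supply gradients for MAP inference)
    levels = 2 ** n_bits - 1
    return np.round(np.clip(x, lo, hi) * levels) / levels

def fidelity_grad_mask(x, y, mask, noise_std):
    # gradient of the data-fidelity term ||y - A(x)||^2 / (2 sigma^2)
    # for the linear masking operator A(x) = mask * x
    return mask * (x - y) / noise_std ** 2

def map_step(x, y, mask, score_fn, sigma_t, noise_std, lr=0.01):
    """One MAP-style update: follow the prior score while descending
    the data-fidelity gradient, i.e. a gradient step on
    -log p(x) + ||y - A(x)||^2 / (2 sigma^2)."""
    g = score_fn(x, sigma_t) - fidelity_grad_mask(x, y, mask, noise_std)
    return x + lr * g
```

On observed pixels the fidelity term dominates and pulls the estimate toward the measurement; on masked pixels only the prior acts, which is exactly the division of labor the framework relies on: the same cloud-trained score is reused while only the degradation operator changes per device.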
The authors evaluate the approach on the CKMImageNet benchmark across four representative tasks: inpainting (masking), super‑resolution (down‑sampling), truncation recovery, and quantization recovery. Compared with supervised baselines, the proposed method achieves competitive PSNR/SSIM while eliminating the need for labeled data and reducing overall training compute by more than 70%. Moreover, the shared prior mitigates negative transfer when moving across tasks and devices, and edge‑side inference runs within a few milliseconds, satisfying real‑time requirements. Sensitivity analyses on diffusion hyper‑parameters (noise schedule, sampling steps) provide practical guidelines for deployment.
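For reference, the PSNR metric used in the evaluation is a standard, easily reproduced quantity; the `max_val` normalization below assumes CKM images scaled to [0, 1], which is an illustration choice rather than the paper's stated convention.

```python
import numpy as np

def psnr(x_ref, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((np.asarray(x_ref) - np.asarray(x_hat)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB.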
In summary, the paper demonstrates that a cloud‑trained, score‑based foundation model can serve as a reusable CKM prior, while edge nodes perform lightweight, task‑specific posterior inference. This division of labor—cloud for data aggregation and prior learning, edge for observation‑driven inference—realizes a scalable, cost‑effective solution for CKM construction in future AI‑RAN and 6G architectures, paving the way for widespread environment‑aware services across heterogeneous networks.