Hybrid Cognitive IoT with Cooperative Caching and SWIPT-EH: A Hierarchical Reinforcement Learning Framework

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

This paper proposes a hierarchical deep reinforcement learning (DRL) framework based on the soft actor-critic (SAC) algorithm for hybrid underlay-overlay cognitive Internet of Things (CIoT) networks with simultaneous wireless information and power transfer (SWIPT)-energy harvesting (EH) and cooperative caching. Unlike prior hierarchical DRL approaches that focus primarily on spectrum access or power control, this work jointly optimizes EH, hybrid access coordination, power allocation, and caching in a unified framework. The joint optimization problem is formulated as a weighted-sum multi-objective task, designed to maximize throughput and cache hit ratio while simultaneously minimizing transmission delay. In the proposed model, CIoT agents jointly optimize EH and data transmission using a learnable time switching (TS) factor. They also coordinate spectrum access under the hybrid overlay-underlay paradigm and make power control and cache placement decisions while respecting energy, interference, and storage constraints. Specifically, cooperative caching enables overlay access, while power control governs underlay access. A novel three-level hierarchical SAC (H-SAC) agent decomposes the mixed discrete-continuous action space into modular subproblems, improving scalability and convergence over flat DRL methods. The high-level policy adjusts the TS factor, the mid-level policy manages spectrum access coordination and cache sharing, and the low-level policy decides transmit power and caching actions for both the CIoT agent and primary user (PU) content. Simulation results show that the proposed hierarchical SAC approach significantly outperforms benchmark and greedy strategies, achieving better average sum rate, delay, cache hit ratio, and energy efficiency, even under channel fading and uncertain conditions.


💡 Research Summary

This paper tackles the joint optimization of energy harvesting, spectrum access, power allocation, and cooperative caching in a hybrid underlay‑overlay cognitive Internet‑of‑Things (CIoT) network. Existing works typically address only one of these aspects or assume static parameters such as a fixed time‑switching (TS) ratio for simultaneous wireless information and power transfer (SWIPT). To overcome these limitations, the authors propose a three‑level hierarchical reinforcement learning (HRL) framework built on the Soft Actor‑Critic (SAC) algorithm, termed H‑SAC.

The system model consists of a CIoT transmitter‑receiver pair coexisting with a primary‑user (PU) pair. Time is divided into equal slots; the PU may be active or idle in each slot, and the CIoT device can harvest RF energy and transmit data within the same slot using a TS protocol. When the PU is active, the CIoT operates in underlay mode, controlling its transmit power to satisfy interference constraints. When the PU is idle or when the CIoT has cached popular PU content, it switches to overlay mode, gaining spectrum access by offering the cached data to the PU. This hybrid access strategy, together with a finite cache at the CIoT node, creates a mixed discrete‑continuous decision space: the TS ratio and transmit power are continuous, while the choice of access mode, cache sharing, and cache update are discrete.
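The mixed discrete-continuous decision space and the hybrid access rule described above can be illustrated with a minimal sketch. The field names and the `access_mode` helper are assumptions for illustration, not the paper's notation: the key points are that the TS ratio and power are continuous, the cache/mode choices are discrete, and cached PU content unlocks overlay access even while the PU is active.

```python
from dataclasses import dataclass

@dataclass
class SlotAction:
    """Per-slot decision of the CIoT agent (illustrative field names)."""
    ts_ratio: float      # continuous: fraction of the slot spent harvesting RF energy, in [0, 1]
    tx_power: float      # continuous: transmit power for the remainder of the slot
    share_cache: bool    # discrete: whether to offer cached PU content in exchange for spectrum
    cache_update: int    # discrete: index of the content item to cache (-1 = keep cache as-is)

def access_mode(pu_active: bool, has_cached_pu_content: bool) -> str:
    """Hybrid access rule from the system model: underlay while the PU
    transmits, overlay when the band is idle or when cached PU content
    can be traded for spectrum access."""
    if not pu_active:
        return "overlay"
    return "overlay" if has_cached_pu_content else "underlay"
```

In underlay mode the continuous `tx_power` would additionally be clipped to satisfy the PU interference constraint, which is exactly the coupling that makes a joint (rather than separate) optimization necessary.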

The authors formulate a weighted‑sum multi‑objective problem that simultaneously maximizes system throughput and cache hit ratio while minimizing transmission delay, subject to power, interference, cache‑capacity, and energy‑availability constraints. Because the problem is stochastic and combinatorial, a model‑free learning approach is required.
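The scalarized objective can be written in a generic weighted-sum form. The weights $w_1, w_2, w_3$ and the symbols below (average throughput $\bar{R}$, cache hit ratio $\bar{H}$, average delay $\bar{D}$) are illustrative placeholders rather than the paper's exact notation:

```latex
\max_{\{\alpha_t,\ p_t,\ a_t,\ c_t\}} \; w_1 \bar{R} + w_2 \bar{H} - w_3 \bar{D}
\quad \text{s.t.} \quad
0 \le p_t \le p_{\max}, \qquad
I_t \le I_{\mathrm{th}}, \qquad
\sum_{f} c_{t,f}\, s_f \le C, \qquad
E_t^{\mathrm{tx}} \le E_t^{\mathrm{avail}}
```

Here $\alpha_t$ is the TS factor, $p_t$ the transmit power, $a_t$ the discrete access/sharing action, and $c_{t,f}$ the cache placement indicator for content $f$ of size $s_f$; the four constraints correspond to the power, interference, cache-capacity, and energy-availability limits listed above.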

The H‑SAC architecture decomposes the overall decision into three layers:

  1. High‑level policy (Level‑1) outputs the TS factor α ∈ [0, 1], splitting each slot between energy harvesting and data transmission.
  2. Mid‑level policy (Level‑2) coordinates hybrid overlay‑underlay spectrum access and decides whether to share cached content with the PU.
  3. Low‑level policy (Level‑3) selects the transmit power and the cache placement actions for both CIoT and PU content.
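The three-level decomposition can be sketched as a cascade in which each layer conditions on the decisions of the layer above. The policies below are placeholders standing in for trained SAC actors, and all state keys are assumed names, not the paper's notation; the point is the layering, where continuous and discrete sub-actions are produced by separate modules:

```python
import random

def high_level_policy(state):
    """Level 1: choose the time-switching factor alpha in [0, 1]
    (a uniform draw stands in for a learned SAC actor)."""
    return random.uniform(0.0, 1.0)

def mid_level_policy(state, alpha):
    """Level 2: spectrum-access coordination and cache sharing (discrete)."""
    underlay = state["pu_active"] and not state["has_cached_pu_content"]
    return {"mode": "underlay" if underlay else "overlay",
            "share_cache": state["has_cached_pu_content"]}

def low_level_policy(state, alpha, coord):
    """Level 3: transmit power (continuous) and cache placement (discrete).
    Underlay power is backed off here as a stand-in for the interference limit."""
    p_max = state["p_max"]
    power = 0.5 * p_max if coord["mode"] == "underlay" else p_max
    return {"tx_power": power, "cache_item": None}

def act(state):
    """Compose the three levels into one joint action, mirroring the H-SAC layering."""
    alpha = high_level_policy(state)
    coord = mid_level_policy(state, alpha)
    ctrl = low_level_policy(state, alpha, coord)
    return {"alpha": alpha, **coord, **ctrl}
```

Decomposing the joint action this way keeps each sub-policy's action space small and homogeneous (purely continuous or purely discrete), which is the stated reason the hierarchical agent converges better than a flat DRL agent on the mixed space.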
