On Oligopoly Spectrum Allocation Game in Cognitive Radio Networks with Capacity Constraints


Dynamic spectrum sharing is a promising technology for improving spectrum utilization in future wireless networks. Flexible spectrum management provides new opportunities for licensed primary users and unlicensed secondary users to reallocate spectrum resources efficiently. In this paper, we present an oligopoly pricing framework for dynamic spectrum allocation in which the primary users sell excess spectrum to the secondary users for monetary return. We present two approaches, the strict constraints (type-I) and the QoS penalty (type-II), to model the realistic situation in which the primary users have limited capacities. In the oligopoly model with strict constraints, we propose a low-complexity searching method to obtain the Nash Equilibrium and prove its uniqueness. When the game is reduced to a duopoly, we analytically characterize the price gaps that arise in the leader-follower pricing strategy. In the QoS-penalty-based oligopoly model, a novel variable transformation method is developed to derive the unique Nash Equilibrium. When market information is limited, we provide three myopically optimal algorithms, “StrictBEST”, “StrictBR” and “QoSBEST”, that enable price adjustment for duopoly primary users based on the Best Response Function (BRF) and the bounded rationality (BR) principles. Numerical results validate the effectiveness of our analysis and demonstrate the fast convergence of “StrictBEST” as well as “QoSBEST” to the Nash Equilibrium. For the “StrictBR” algorithm, we reveal the chaotic behaviors of dynamic price adaptation in response to the learning rates.


💡 Research Summary

This paper investigates dynamic spectrum sharing in cognitive radio networks from an oligopoly pricing perspective, where primary users (PUs) sell surplus spectrum to secondary users (SUs) for monetary compensation. Two realistic capacity‑constraint models are introduced. The first, termed “strict‑constraint” (type‑I), imposes a hard cap on the amount of spectrum each PU can offer. Under this model the interaction follows a classic Bertrand competition: each PU chooses a price, SUs respond with demand based on the price vector, and the market reaches a Nash equilibrium (NE). The authors develop a low‑complexity searching (LCS) algorithm that recursively halves the price interval to locate the unique NE efficiently. Uniqueness is proved by exploiting the monotonicity of the best‑response functions and the strict convexity of the profit functions. When the game is reduced to a duopoly, a closed‑form expression for the leader‑follower price gap is derived, revealing an inherent asymmetry: the leader sets a slightly higher price to maximize total revenue while the follower adjusts accordingly.
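The recursive-halving idea behind the LCS algorithm can be sketched in a few lines. The linear demand model, the parameter values, and the capacity caps below are illustrative assumptions, not the paper's formulation: because each PU's profit is concave in its own price, a bisection on the sign of the profit slope locates the best response, and iterating best responses reaches the fixed point, i.e., the NE.

```python
# Illustrative sketch of a "halving" (bisection) search for the NE of a
# capacity-constrained Bertrand duopoly. The linear demand model and all
# parameter values are assumptions for demonstration, not the paper's.

def demand(p_own, p_other, a=10.0, b=2.0, c=1.0):
    """Linear demand: falls in the PU's own price, rises in the rival's."""
    return max(0.0, a - b * p_own + c * p_other)

def profit(p_own, p_other, cap):
    """Revenue from served demand, hard-capped by spare capacity (type-I)."""
    return p_own * min(demand(p_own, p_other), cap)

def best_response(p_other, cap, lo=0.0, hi=20.0, iters=60):
    """Profit is concave in own price here, so the sign of a finite-difference
    slope tells which half of the price interval to keep."""
    eps = 1e-6
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        slope = profit(mid + eps, p_other, cap) - profit(mid - eps, p_other, cap)
        if slope > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def find_ne(cap1=3.0, cap2=3.0, rounds=100, tol=1e-8):
    """Iterate best responses until the price vector stops moving."""
    p1 = p2 = 1.0
    for _ in range(rounds):
        n1, n2 = best_response(p2, cap1), best_response(p1, cap2)
        if abs(n1 - p1) + abs(n2 - p2) < tol:
            return n1, n2
        p1, p2 = n1, n2
    return p1, p2
```

With these toy numbers both capacities bind, and the iteration settles at symmetric prices where each PU's demand exactly equals its capacity, illustrating how a hard cap pushes equilibrium prices above the unconstrained Bertrand level.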

The second model, “QoS‑penalty” (type‑II), captures the situation where exceeding a PU’s capacity degrades service quality and incurs a penalty cost. The penalty is modeled as a nonlinear function of the overload amount. To handle the non‑convexity, the authors introduce a novel variable transformation that linearizes the penalty term, converting the game back into a tractable Bertrand form. After transformation, the best‑response functions remain monotone, allowing a proof of existence and uniqueness of the NE.

Recognizing that market participants often lack complete information about rivals’ cost structures or demand functions, the paper proposes three myopically optimal learning algorithms for duopolistic settings:

  1. StrictBEST – each PU observes the opponent’s current price and instantly applies its best response. Numerical experiments show convergence within fewer than ten iterations to an error below 10⁻⁴.

  2. StrictBR – based on bounded rationality, price updates follow a gradient‑like rule with a learning rate α, of the standard bounded‑rationality form p_i^{(t+1)} = p_i^{(t)} + α · p_i^{(t)} · ∂π_i/∂p_i, where π_i is PU i's profit. Small learning rates yield smooth convergence, while larger rates drive the price dynamics into oscillation and chaos.

  3. QoSBEST – the best‑response counterpart of StrictBEST for the QoS‑penalty (type‑II) model; numerical results show that it likewise converges rapidly to the Nash equilibrium.

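The two strict-constraint learning rules can be sketched as follows, again under an assumed quadratic (linear-demand) profit model rather than the paper's exact one; the parameters a, b, c and the gradient form are illustrative:

```python
# Hedged sketch of the two duopoly learning rules. The quadratic profit
# model and its parameters are illustrative assumptions, not the paper's.

def profit(p_own, p_other, a=10.0, b=2.0, c=1.0):
    """Unconstrained linear-demand revenue, for illustration only."""
    return p_own * max(0.0, a - b * p_own + c * p_other)

def strict_best(p1=1.0, p2=1.0, rounds=50):
    """StrictBEST-style play: each PU jumps straight to its myopic best
    response; for this quadratic profit that is (a + c * p_other) / (2b)."""
    for _ in range(rounds):
        p1, p2 = (10.0 + p2) / 4.0, (10.0 + p1) / 4.0
    return p1, p2

def strict_br(alpha, p1=1.0, p2=1.0, rounds=2000, eps=1e-6):
    """StrictBR-style play: bounded-rationality step p <- p + alpha * p * dpi/dp,
    with the profit gradient estimated by a central difference."""
    for _ in range(rounds):
        g1 = (profit(p1 + eps, p2) - profit(p1 - eps, p2)) / (2.0 * eps)
        g2 = (profit(p2 + eps, p1) - profit(p2 - eps, p1)) / (2.0 * eps)
        p1, p2 = p1 + alpha * p1 * g1, p2 + alpha * p2 * g2
    return p1, p2
```

With a small rate such as α = 0.01, both prices settle at the same fixed point that strict_best reaches almost immediately; pushing α past the stability threshold destabilizes the StrictBR trajectory, mirroring the sensitivity to the learning rate that the paper reports.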
