Shannon Meets Nash on the Interference Channel

The interference channel is the simplest communication scenario in which multiple autonomous users compete for shared resources. We combine game theory and information theory to define a notion of a Nash equilibrium region of the interference channel. The notion is game theoretic: it captures the selfish behavior of each user as it competes with the other. The notion is also information theoretic: it allows each user to use arbitrary communication strategies as it optimizes its own performance. We give an exact characterization of the Nash equilibrium region of the two-user linear deterministic interference channel and an approximate characterization of the Nash equilibrium region of the two-user Gaussian interference channel to within 1 bit/s/Hz.


💡 Research Summary

The paper tackles one of the most fundamental multi‑user communication scenarios—the two‑user interference channel—by merging concepts from information theory and game theory. Each transmitter–receiver pair (user) is modeled as a selfish player who can freely choose any coding, modulation, and power allocation strategy permitted by information theory, with the sole objective of maximizing its own achievable rate. This leads to a non‑cooperative game where the strategy set of each player is the entire space of admissible communication schemes, and the payoff is the resulting Shannon‑theoretic rate given the opponent’s strategy. A Nash equilibrium (NE) is defined as a pair of strategies where neither user can unilaterally improve its rate.
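The NE condition described above — neither user can improve its rate by a unilateral deviation — can be sketched for a finite toy game. The strategy sets and rate values below are illustrative stand-ins, not the paper's (infinite) space of admissible coding schemes:

```python
# Minimal sketch of the Nash-equilibrium condition for two players with
# finite strategy sets. Payoff tables are hypothetical rates, for
# illustration only.

def is_nash_equilibrium(payoff1, payoff2, s1, s2):
    """Return True if the strategy pair (s1, s2) is a Nash equilibrium.

    payoff1[a][b] is user 1's rate when user 1 plays a and user 2 plays b;
    payoff2[a][b] is user 2's rate for the same pair.
    """
    # User 1 cannot gain by deviating unilaterally from s1 ...
    best1 = all(payoff1[s1][s2] >= payoff1[a][s2] for a in range(len(payoff1)))
    # ... and user 2 cannot gain by deviating unilaterally from s2.
    best2 = all(payoff2[s1][s2] >= payoff2[s1][b] for b in range(len(payoff2[0])))
    return best1 and best2

# Toy 2x2 game (rates in bits/s/Hz, made up): strategy 0 = "treat
# interference as noise", strategy 1 = "decode and cancel".
R1 = [[1.0, 0.5],
      [0.8, 1.2]]
R2 = [[1.0, 0.8],
      [0.5, 1.2]]

print(is_nash_equilibrium(R1, R2, 0, 0))  # True: neither user gains by deviating
print(is_nash_equilibrium(R1, R2, 1, 0))  # False: user 1 would switch to strategy 0
```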

The authors first consider the linear deterministic interference channel (LDIC), a simplified bit‑level model introduced by Avestimehr, Diggavi, and Tse that captures the essence of Gaussian interference while being analytically tractable. By exhaustively characterizing each user’s best‑response function in the LDIC, they obtain an exact description of the NE region. The region consists of two qualitatively different sub‑regions. In the “treat‑interference‑as‑noise” (TIN) sub‑region, each user simply decodes its own signal while regarding the interfering signal as additional noise; this is optimal when the interfering link is sufficiently weak. In the complementary sub‑region, a partial‑decode‑and‑cancel strategy reminiscent of the Han‑Kobayashi scheme becomes a best response: each transmitter splits its message into a “common” part that can be decoded by the unintended receiver and a “private” part that is treated as noise. The deterministic analysis reveals that the optimal structure is to allocate the most significant bits of the transmitted vector to the common part and the less significant bits to the private part, thereby achieving the NE.
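The bit-level split just described — most significant bits common, remaining bits private — can be sketched for the simple weak-interference case. The function name and the parameters `n` (direct-link levels) and `m` (cross-link levels) follow standard LDIC notation but are my illustration, not code from the paper:

```python
# Illustrative sketch of the common/private split in the linear
# deterministic model, restricted to weak interference (m <= n).
# With cross-link gain m, only the top m signal levels are visible
# at the unintended receiver.

def split_common_private(bits, n, m):
    """Split an n-bit transmit vector (MSB first) into common and private parts."""
    assert len(bits) == n and m <= n
    common = bits[:m]    # seen, and decodable, at the unintended receiver
    private = bits[m:]   # arrives below the unintended receiver's noise floor
    return common, private

c, p = split_common_private([1, 0, 1, 1, 0], n=5, m=2)
print(c)  # [1, 0]      -> common part (most significant bits)
print(p)  # [1, 1, 0]   -> private part (treated as noise by the other user)
```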

Having obtained a clean structural insight from the deterministic model, the paper turns to the more realistic two‑user Gaussian interference channel (GIC). Exact NE characterization for the GIC is intractable, so the authors adopt a constant‑gap approach. They prove that for every channel‑gain regime there exists an NE in which each user either (i) treats interference as Gaussian noise or (ii) decodes the interfering signal’s common part and then cancels it before decoding its own private part. By carefully bounding the gap between the achievable rates of these strategies and the outer bounds on the capacity region, they show that the resulting NE rates lie within at most one bit per second per Hertz of the information‑theoretic capacity region. Consequently, the NE region of the GIC is approximated to within a 1‑bit gap.
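Strategy (i) has a simple closed form: treating the interference as Gaussian noise turns each link into a point-to-point AWGN channel with the interference folded into the noise power. A small sketch, with illustrative SNR/INR values:

```python
import math

# Achievable rate pair (bits/s/Hz) when both users of the two-user
# Gaussian interference channel treat interference as Gaussian noise
# at full power. SNRi / INRi are user i's signal- and
# interference-to-noise ratios (standard notation); the numbers in
# the example are illustrative.

def tin_rates(snr1, inr1, snr2, inr2):
    r1 = math.log2(1 + snr1 / (1 + inr1))
    r2 = math.log2(1 + snr2 / (1 + inr2))
    return r1, r2

r1, r2 = tin_rates(snr1=100.0, inr1=1.0, snr2=100.0, inr2=1.0)
print(round(r1, 2), round(r2, 2))  # 5.67 5.67 for this symmetric example
```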

The main contributions can be summarized as follows:

  1. A unified game‑theoretic formulation of the interference channel that permits arbitrary coding strategies, thereby bridging the gap between classical capacity analysis and selfish user behavior.
  2. An exact NE region for the LDIC, providing a complete description of when TIN or partial decode‑and‑cancel is the best response, and demonstrating that an NE always exists.
  3. A 1‑bit approximation of the NE region for the GIC, establishing that selfish, non‑cooperative behavior does not lead to catastrophic performance loss; the equilibrium rates are essentially capacity‑optimal.

Beyond the theoretical results, the paper offers several practical insights. First, the existence of an NE in every channel regime suggests that distributed, uncoordinated power and coding control can be stable in dense wireless networks, alleviating the need for centralized scheduling. Second, the 1‑bit guarantee provides a simple design rule: each device need only decide between two elementary strategies (TIN or partial decode‑and‑cancel) based on its measured interference‑to‑signal ratio, and the resulting performance will be near‑optimal. Third, the work opens a pathway for extending NE analysis to larger networks, dynamic fading environments, and learning‑based best‑response algorithms.
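The two-way design rule above can be sketched as a single local decision. The paper's exact regime boundaries are not reproduced here; the `sqrt(SNR)` threshold below is a common weak-interference rule of thumb, used purely as a placeholder:

```python
import math

# Placeholder decision rule between the two elementary strategies,
# based only on locally measured SNR and INR. The sqrt(SNR) threshold
# is an assumed rule of thumb, not the paper's exact regime boundary.

def choose_strategy(snr, inr):
    if inr <= math.sqrt(snr):  # interference weak relative to the desired signal
        return "treat-interference-as-noise"
    return "partial-decode-and-cancel"

print(choose_strategy(snr=1000.0, inr=10.0))  # weak interference -> TIN
print(choose_strategy(snr=100.0, inr=50.0))   # strong interference -> decode/cancel
```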

In the context of related literature, the paper distinguishes itself from earlier works that either (a) restrict the strategy space to power control only, or (b) assume full cooperation (e.g., Han‑Kobayashi region) when deriving achievable rates. By allowing the full suite of information‑theoretic strategies, the authors capture a richer set of equilibria and demonstrate that even without cooperation, the system can operate close to the Shannon limit.

Future research directions suggested include extending the NE characterization to K‑user interference channels, incorporating time‑varying channels and stochastic learning dynamics for best‑response updates, and validating the theoretical predictions through experimental testbeds. In sum, “Shannon Meets Nash on the Interference Channel” provides a seminal framework that unites the two pillars of communication theory—capacity and strategic interaction—offering both deep theoretical insight and actionable guidance for the design of autonomous wireless networks.