Distributed spatial multiplexing with 1-bit feedback


📝 Original Info

  • Title: Distributed spatial multiplexing with 1-bit feedback
  • ArXiv ID: 0710.1522
  • Date: 2007-10-09
  • Authors: Jatin Thukral (ETH Zurich), Helmut Bölcskei (ETH Zurich)

📝 Abstract

We analyze a slow-fading interference network with MN non-cooperating single-antenna sources and M non-cooperating single-antenna destinations. In particular, we assume that the sources are divided into M mutually exclusive groups of N sources each, every group is dedicated to transmit a common message to a unique destination, all transmissions occur concurrently and in the same frequency band, and a dedicated 1-bit broadcast feedback channel from each destination to its corresponding group of sources exists. We provide a feedback-based iterative distributed (multi-user) beamforming algorithm, which "learns" the channels between each group of sources and its assigned destination. This algorithm is a straightforward generalization, to the multi-user case, of the feedback-based iterative distributed beamforming algorithm proposed recently by Mudumbai et al., in IEEE Trans. Inf. Th. (submitted), for networks with a single group of sources and a single destination. Putting the algorithm into a Markov chain context, we provide a simple convergence proof. We then show that, for M finite and N approaching infinity, spatial multiplexing based on the beamforming weights produced by the algorithm achieves full spatial multiplexing gain of M and full per-stream array gain of N, provided the time spent "learning" the channels scales linearly in N. The network is furthermore shown to "crystallize". Finally, we characterize the corresponding crystallization rate.

💡 Deep Analysis

Figure 1

📄 Full Content

We consider a special class of interference networks, where MN non-cooperating sources are divided into M mutually exclusive groups G_i, i = 1, 2, ..., M, such that the N sources in the i-th group, denoted as S_i^j, j = 1, 2, ..., N, are dedicated to transmit, through slow-fading channels, a common message to their assigned single-antenna destination D_i. All transmissions occur concurrently and in the same frequency band, and the destinations D_i do not cooperate. This network models the second hop of the coherent multi-user relaying protocol in [2] under the assumption that the first-hop transmission is error-free. The results in [2] imply that, for the interference network considered in this paper, for M fixed and N → ∞, full spatial multiplexing gain of M and a per-stream (distributed) array gain of N can be obtained, provided that each source knows the channel to its assigned destination perfectly. In this paper, we analyze the case where the perfect channel state information assumption is relaxed to having a 1-bit broadcast feedback channel from each destination D_i to its group of sources G_i. These broadcast feedback channels are non-interfering.

We provide a feedback-based iterative distributed (multi-user) beamforming algorithm, which "learns" the channels between each group of sources and its assigned destination. This algorithm is a straightforward generalization, to the multi-user case, of the feedback-based iterative distributed beamforming algorithm proposed recently by Mudumbai, Hespanha, Madhow and Barriac in [1] for networks with a single group of sources and a single destination. Making the simplifying assumption, compared to [1], that the fading coefficients as well as all the signals are real-valued allows us to put the iterative algorithm into a Markov chain context, thereby setting the stage for a simple convergence proof. We then show that, for M finite and N → ∞, spatial multiplexing based on the beamforming weights produced by the iterative algorithm achieves full spatial multiplexing gain of M and full per-stream (distributed) array gain of N, provided the time spent "learning" the channels scales linearly in N. We furthermore demonstrate that the M effective links G_i → D_i in the network not only decouple (reflected by full spatial multiplexing gain) but also converge to non-fading links as N → ∞, i.e., in the terminology of [3], the network "crystallizes". Finally, we quantify the impact of the performance of the iterative algorithm on the crystallization rate, and we show that the multi-user nature of the network leads to a significant reduction in the crystallization rate when compared to the M = 1 case.

This research was supported in part by the Swiss National Science Foundation (SNF) under grant No. 200020-109619 and by the Nokia Research Center, Helsinki, Finland. The authors are with ETH Zurich, Switzerland (Email: {jatin,boelcskei}@nari.ee.ethz.ch).

Notation: The superscripts T and -1 stand for transposition and inverse, respectively. |G| denotes the cardinality of the set G, |x| is the absolute value of the scalar x, and ⌊a⌋ denotes the greatest integer smaller than or equal to the real number a. N(µ, σ²) stands for the normal distribution with mean µ and variance σ².
log(•) denotes the logarithm to base 2. f_X(•) stands for the probability density function (p.d.f.) of the random variable X, and X ∼ Y denotes equivalence in distribution. A ≡ B denotes that the sets (of terminals) A and B are equal. P(ω) is the probability of the event ω, E[X] and VAR[X] are the expected value and the variance, respectively, of the random variable X, and w.p. stands for "with probability". Since the terminals in G_i are assumed to have a common message for their assigned destination D_i, we will use the notation G_i → D_i to denote the corresponding single-input single-output link between the group G_i and the destination D_i. Vectors and matrices are set in lower-case and upper-case bold-face letters, respectively.
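The decoupling and per-stream array gain claims above can be illustrated with a small numerical sketch (an illustrative simulation, not taken from the paper). With ideally sign-matched weights α_i^j, the desired-signal amplitude at a destination grows linearly in N, whereas the interference its group causes at another destination, whose channel the weights are not matched to, grows only like √N; this is the mechanism behind the decoupling of the effective links G_i → D_i as N → ∞.

```python
import math
import random

rng = random.Random(0)
N = 20000  # number of sources in one group

# Real-valued flat-fading coefficients: h[j] from group G_1 to its own
# destination D_1, and g[j] from G_1 to another destination D_2.
h = [rng.gauss(0.0, 1.0) for _ in range(N)]
g = [rng.gauss(0.0, 1.0) for _ in range(N)]

# Sign-matched beamforming weights for D_1 (what the iterative
# algorithm is designed to "learn"): alpha_j = sign(h_j).
alpha = [1 if c >= 0 else -1 for c in h]

desired = sum(a * c for a, c in zip(alpha, h))       # grows like N * sqrt(2/pi)
interference = sum(a * c for a, c in zip(alpha, g))  # grows only like sqrt(N)

print(desired / N)                        # close to sqrt(2/pi) ≈ 0.80
print(abs(interference) / math.sqrt(N))   # stays O(1)
```

Since the weights are independent of g, the interference term is a sum of N zero-mean terms and scales like √N, so the per-stream signal-to-interference ratio grows without bound as N → ∞.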

We assume that x_i[n] is the common message of the sources in G_i to be transmitted to D_i. In the remainder of the paper, we distinguish between a training phase, during which the N scalar channels between S_i^j, j = 1, 2, ..., N, and D_i are "learned" for each i ∈ {1, 2, ..., M}, and a data transmission phase following the training phase. During the training phase for G_i, the broadcast feedback channel D_i → G_i is used once every frame of T_f time slots. The l-th frame (l = 0, 1, ...), denoted by F_l, consists of the time slots n = lT_f, lT_f + 1, ..., (l + 1)T_f - 1.

In the l-th frame, each source S_i^j multiplies the sequence x_i[n] (which is a training sequence during the training phase) by a corresponding beamforming weight α_i^j[l] ∈ {1, -1} before transmission; these beamforming weights are kept constant during the entire frame. We furthermore assume that all the channels in the network are flat-fading.
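The per-frame training loop this sets up can be sketched as a random search driven by the destination's 1-bit feedback. The sketch below is an illustrative reconstruction in the spirit of the Mudumbai et al. algorithm for a single group (M = 1) with real-valued channels and ±1 weights, as assumed above; it is not the paper's exact protocol, and names such as flip_prob and the frame count are hypothetical parameters.

```python
import random

def distributed_beamforming(h, frames, flip_prob, rng):
    """Single-group (M = 1) sketch of 1-bit-feedback beamforming.

    h: real-valued channel coefficients h_j of the N sources.
    In each frame, every source independently flips its weight
    alpha_j in {1, -1} with probability flip_prob; the destination
    broadcasts one bit ("improved" / "not improved") comparing the
    received signal strength |sum_j alpha_j h_j| against the best
    value seen so far, and the sources keep or revert accordingly.
    """
    n = len(h)
    alpha = [rng.choice((1, -1)) for _ in range(n)]
    best = abs(sum(a * c for a, c in zip(alpha, h)))
    for _ in range(frames):
        trial = [-a if rng.random() < flip_prob else a for a in alpha]
        strength = abs(sum(a * c for a, c in zip(trial, h)))
        if strength > best:  # the 1-bit broadcast feedback
            alpha, best = trial, strength
    return alpha, best

rng = random.Random(1)
h = [rng.gauss(0.0, 1.0) for _ in range(20)]
alpha, best = distributed_beamforming(h, frames=2000, flip_prob=0.05, rng=rng)
optimum = sum(abs(c) for c in h)  # coherent upper bound: alpha_j = sign(h_j)
print(best / optimum)  # approaches 1 as the training time grows
```

Because an update is kept only when the received strength improves, the strength sequence is non-decreasing, which is what makes the Markov chain convergence argument in the paper tractable; the paper's result is that reaching near-coherent combining requires training time that scales linearly in N.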
