Technical Report: Achievable Rates for the MAC with Correlated Channel-State Information

Reading time: 6 minutes

๐Ÿ“ Original Info

  • Title: Technical Report: Achievable Rates for the MAC with Correlated Channel-State Information
  • ArXiv ID: 0812.4803
  • Date: 2008-12-31
  • Authors: Researchers from original ArXiv paper

๐Ÿ“ Abstract

In this paper we provide an achievable rate region for the discrete memoryless multiple access channel with correlated state information known non-causally at the encoders, using a random binning technique. This result is a generalization of the random binning technique used by Gel'fand and Pinsker for the problem with non-causal channel-state information at the encoder in point-to-point communication.


📄 Full Content

Keywords: multi-user information theory, random binning, multiple access channel, dirty paper coding.

We consider a discrete memoryless multiple access channel (MAC) with two correlated states, each known by one of the encoders. Specifically, we assume the following model:

$$\left( \mathcal{X}_1 \times \mathcal{X}_2, \; P(s_1, s_2)\, P(y | x_1, x_2, s_1, s_2), \; \mathcal{Y} \right),$$

where $s_1 \in \mathcal{S}_1$ and $s_2 \in \mathcal{S}_2$ are known non-causally at encoder 1 and encoder 2, respectively. The channel inputs are $x_1 \in \mathcal{X}_1$ and $x_2 \in \mathcal{X}_2$, and the channel output is $y \in \mathcal{Y}$.

The first user transmits the message $m_1 \in \{1, \ldots, M_1\}$, and the second user transmits the message $m_2 \in \{1, \ldots, M_2\}$, where $m_1$ and $m_2$ are independent random variables with uniform distributions, and $M_1 = 2^{nR_1}$, $M_2 = 2^{nR_2}$. The first encoder observes the channel state information $S_1$ non-causally and generates the transmitted codeword

$$x_1 = f_1(m_1, s_1), \qquad f_1 : \{1, \ldots, M_1\} \times \mathcal{S}_1^n \to \mathcal{X}_1^n.$$

In the same way, the second encoder generates the transmitted codeword

$$x_2 = f_2(m_2, s_2), \qquad f_2 : \{1, \ldots, M_2\} \times \mathcal{S}_2^n \to \mathcal{X}_2^n.$$

The decoder uses the mapping

$$\psi : \mathcal{Y}^n \to \{1, \ldots, M_1\} \times \{1, \ldots, M_2\}$$

to reconstruct the transmitted messages, i.e., $(\hat{m}_1, \hat{m}_2) = \psi(y)$. The error probability is defined as

$$P_e^{(n)} = \Pr\left\{ (\hat{m}_1, \hat{m}_2) \neq (m_1, m_2) \right\}.$$

In the following theorem we provide an inner bound on the capacity region of the channel defined above, derived using a generalization of the random binning technique [1].

Theorem 1. An inner bound on the capacity region of the channel defined above is given by

$$\begin{aligned}
R_1 &\le I(U; Y, V) - I(U; S_1), \\
R_2 &\le I(V; Y, U) - I(V; S_2), \\
R_1 + R_2 &\le I(U, V; Y) + I(U; V) - I(U; S_1) - I(V; S_2),
\end{aligned}$$

for some admissible pair $(U, V)$,

where the admissible pairs satisfy:

$$P(s_1, s_2, u, v, x_1, x_2, y) = P(s_1, s_2)\, P(u, x_1 | s_1)\, P(v, x_2 | s_2)\, P(y | x_1, x_2, s_1, s_2).$$

The theorem implies that the following two Markov chains are satisfied:

$$(U, X_1) \leftrightarrow S_1 \leftrightarrow S_2 \leftrightarrow (V, X_2), \qquad (U, V) \leftrightarrow (X_1, X_2, S_1, S_2) \leftrightarrow Y.$$
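As a quick sanity check (our addition, not part of the report): deactivating user 2 should recover the Gel'fand-Pinsker rate, and indeed it does:

```latex
% Setting V to a constant and fixing X_2 (user 2 silent), the region reduces to
% the Gel'fand--Pinsker achievable rate for the point-to-point channel
% P(y | x_1, s_1) with state S_1 known non-causally at the encoder:
\begin{align}
R_1 \le I(U; Y, V) - I(U; S_1)
    \;\stackrel{V\ \mathrm{const.}}{=}\; I(U; Y) - I(U; S_1).
\end{align}
```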

Proof: We denote by $A_\epsilon^n(A, B)$ the set of $\epsilon$-typical pairs of $n$-sequences $a$ and $b$ (we use the same notation as in [2]).

Fix the distributions $P(u, x_1 | s_1)$ and $P(v, x_2 | s_2)$, and calculate the marginal distributions $P(u)$ and $P(v)$.

• Codebook generation: Let the binning rates $J_1$ and $J_2$ satisfy $J_1 > I(U; S_1) + 3\epsilon$ and $J_2 > I(V; S_2) + 3\epsilon$.

Codebook 1: Generate $2^{n(J_1 + R_1)}$ independent $u$ sequences of length $n$, drawing each element i.i.d. according to $\prod_{i=1}^{n} P(u_i)$, and distribute these sequences randomly among $M_1$ bins so that each bin contains $2^{nJ_1}$ sequences.

Codebook 2: Generate $2^{n(J_2 + R_2)}$ independent $v$ sequences of length $n$, drawing each element i.i.d. according to $\prod_{i=1}^{n} P(v_i)$, and distribute these sequences randomly among $M_2$ bins so that each bin contains $2^{nJ_2}$ sequences.

• Encoder of user 1: Given the state sequence $s_1$ and the message $m_1$, search in bin $m_1$ of codebook 1 for a $u$ sequence such that $(u, s_1) \in A_\epsilon^n(U, S_1)$. Send $x_1$ which is jointly typical with $u$ and $s_1$, i.e., $(u, s_1, x_1) \in A_\epsilon^n(U, S_1, X_1)$. (A toy numerical sketch of this covering step appears after this list.)

• Encoder of user 2: Given the state sequence $s_2$ and the message $m_2$, search in bin $m_2$ of codebook 2 for a $v$ sequence such that $(v, s_2) \in A_\epsilon^n(V, S_2)$, and send $x_2$ which is jointly typical with $v$ and $s_2$.

• Decoder: Given the received vector $y$, search for unique sequences $u$ and $v$ such that $(u, v, y) \in A_\epsilon^n(U, V, Y)$, and declare $(\hat{m}_1, \hat{m}_2)$ to be the indices of the bins containing $u$ and $v$, respectively.

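To make the binning mechanics concrete, here is a toy numerical sketch of codebook 1 and the covering step of encoder 1 (our illustration, not the report's code); the block length, rates, and the joint law $P(u, s_1)$ are all arbitrary illustrative choices, and typicality is tested crudely via empirical types:

```python
# Toy sketch of random binning and the covering step of encoder 1
# (illustrative only -- not the report's code). Binary alphabets;
# n, R1, J1 and the joint law P(u, s1) are assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n = 12                         # tiny block length, for illustration
R1, J1 = 0.25, 0.5             # message rate and binning rate (bits/symbol)
M1 = 2 ** round(n * R1)        # number of bins = number of messages
bin_size = 2 ** round(n * J1)  # u-sequences per bin, ~2^{nJ1}
p_u1 = 0.5                     # marginal P(u = 1)

# Codebook 1: 2^{n(J1+R1)} i.i.d. u-sequences, pre-partitioned into M1 bins.
codebook1 = rng.binomial(1, p_u1, size=(M1, bin_size, n))

# Target joint law P(u, s1): U positively correlated with S1 (an assumption).
p_joint = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35}

def jointly_typical(u, s, eps=0.15):
    """Crude epsilon-typicality: every pair frequency within eps of p_joint."""
    return all(abs(np.mean((u == a) & (s == b)) - p_joint[(a, b)]) <= eps
               for a, b in itertools.product((0, 1), repeat=2))

def encode1(m1, s1):
    """Encoder 1: search bin m1 for a u-sequence jointly typical with s1;
    x1 would then be drawn jointly typical with (u, s1)."""
    for u in codebook1[m1]:
        if jointly_typical(u, s1):
            return u
    return None  # covering failure: the event E1(s1, m1)

s1 = rng.binomial(1, 0.5, size=n)  # the non-causally known state sequence
u = encode1(m1=0, s1=s1)
print("covering succeeded:", u is not None)
```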
The error probability is upper bounded by

$$P_e^{(n)} \le \Pr\left( (s_1, s_2) \notin A_\epsilon^n(S_1, S_2) \right) + \Pr\left( \text{error} \mid (s_1, s_2) \in A_\epsilon^n(S_1, S_2) \right),$$

where the first term vanishes as $n \to \infty$ by the asymptotic equipartition property (AEP) [2]. Hence, we need to evaluate only the second term. We define the following error events for specific state sequences $(s_1, s_2)$:

• $E_1(s_1, m_1)$: no sequence in bin $m_1$ of codebook 1 is jointly typical with $s_1$;

• $E_2(s_2, m_2)$: no sequence in bin $m_2$ of codebook 2 is jointly typical with $s_2$;

• $E_3(s_1, s_2, m_1, m_2)$: the chosen sequences satisfy $(u_{m_1, j_1}, v_{m_2, j_2}, s_1, s_2) \notin A_\epsilon^n(U, V, S_1, S_2)$;

• $E_4$: the decoder output is wrong, i.e., $(\hat{m}_1, \hat{m}_2) \neq (m_1, m_2)$.

Then, by the union bound, the error probability is upper bounded by

$$\Pr\left( \text{error} \mid (s_1, s_2) \in A_\epsilon^n(S_1, S_2) \right) \le \Pr(E_1) + \Pr(E_2) + \Pr(E_3) + \Pr(E_4).$$

We now evaluate the probability of each error event. For $u$ independent of $s_1$, the probability that $(u, s_1) \in A_\epsilon^n(U, S_1)$ is bounded below by

$$\Pr\left( (u, s_1) \in A_\epsilon^n(U, S_1) \right) = \sum_{(u, s_1) \in A_\epsilon^n(U, S_1)} P(u) P(s_1) \ge 2^{-n(I(U; S_1) + 3\epsilon)}.$$

Hence, we have that

$$\Pr(E_1(s_1, m_1)) \le \left( 1 - 2^{-n(I(U; S_1) + 3\epsilon)} \right)^{2^{nJ_1}} \le \exp\left( -2^{n(J_1 - I(U; S_1) - 3\epsilon)} \right),$$

where the last inequality follows since $1 - x \le \exp(-x)$. Hence, since $J_1 > I(U; S_1) + 3\epsilon$, this term decays to zero as $n \to \infty$. In the same way,

$\Pr(E_2(s_2, m_2))$ goes to zero as $n \to \infty$.
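To get a feel for how quickly this covering-failure bound vanishes, here is a quick numeric check (a sketch; the values of $J_1$, $I(U; S_1)$, and $\epsilon$ are arbitrary illustrative choices):

```python
# Numeric illustration of the bound Pr(E1) <= exp(-2^{n(J1 - I(U;S1) - 3*eps)}):
# once J1 exceeds I(U;S1) + 3*eps, the covering failure probability dies
# doubly exponentially in the block length n. All values are illustrative.
import math

J1, I_US1, eps = 0.40, 0.25, 0.02  # assumed rates, bits/symbol
gap = J1 - I_US1 - 3 * eps          # positive exponent gap

for n in (10, 20, 40, 80):
    bound = math.exp(-(2 ** (n * gap)))
    print(f"n={n:3d}  Pr(E1) <= {bound:.3e}")
```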

Provided that $E_1(s_1, m_1)$ and $E_2(s_2, m_2)$ have not occurred, i.e., $(u_{m_1, j_1}, s_1) \in A_\epsilon^n(U, S_1)$ and $(v_{m_2, j_2}, s_2) \in A_\epsilon^n(V, S_2)$, by the Markov lemma [2] we have that

$$\Pr\left( (u_{m_1, j_1}, v_{m_2, j_2}, s_1, s_2) \in A_\epsilon^n(U, V, S_1, S_2) \right) \to 1 \quad \text{as } n \to \infty,$$

where the typical set $A_\epsilon^n(U, V, S_1, S_2)$ is associated with the joint distribution

$$P(u, v, s_1, s_2) = P(s_1, s_2)\, P(u | s_1)\, P(v | s_2).$$

Hence, we have that $\Pr(E_3(s_1, s_2, m_1, m_2)) \to 0$ as $n \to \infty$.

In fact, we have (with high probability) that the sequences $(u_{m_1, j_1}, v_{m_2, j_2}, s_1, s_2)$ are generated according to the joint distribution above, which is equivalent to the Markov chain $U \leftrightarrow S_1 \leftrightarrow S_2 \leftrightarrow V$.

Provided that $E_3(s_1, s_2, m_1, m_2)$ has not occurred, from the AEP we have that

$$\Pr\left( (u_{m_1, j_1}, v_{m_2, j_2}, y) \notin A_\epsilon^n(U, V, Y) \right) \to 0 \quad \text{as } n \to \infty.$$

Likewise, for an incorrect $u$ (drawn independently of $y$ and $v_{m_2, j_2}$), we have that

$$\Pr\left( (u, v_{m_2, j_2}, y) \in A_\epsilon^n(U, V, Y) \right) \le 2^{-n(I(U; Y, V) - \delta(\epsilon))},$$

so that, by the union bound over the $2^{n(R_1 + J_1)}$ candidate sequences, the probability of finding an incorrect $u$ jointly typical with $(v_{m_2, j_2}, y)$ vanishes provided

$$R_1 + J_1 < I(U; Y, V) = I(U; V) + I(U; Y | V),$$

where the probability bound follows from the AEP; the decomposition of $I(U; Y, V)$ follows from the chain rule for mutual information; and the evaluation of $I(U; V)$ uses the Markov chain $U \leftrightarrow S_1 \leftrightarrow V$. In the same way, it can be shown that the probability of finding an incorrect $v$ vanishes provided

$$R_2 + J_2 < I(V; Y, U).$$

Furthermore, for an incorrect pair $(u, v)$, drawn independently of each other and of $y$,

$$\Pr\left( (u, v, y) \in A_\epsilon^n(U, V, Y) \right) \le 2^{-n(I(U, V; Y) + I(U; V) - \delta(\epsilon))},$$

so that the probability of finding an incorrect pair vanishes provided

$$R_1 + J_1 + R_2 + J_2 < I(U, V; Y) + I(U; V),$$

where the probability bound follows from the AEP; the term $I(U; V)$ is evaluated using the Markov chain $U \leftrightarrow S_1 \leftrightarrow S_2 \leftrightarrow V$; and the final form follows from the chain rule for mutual information.
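To connect these bounds to the theorem statement (a reconstruction sketch, since the original equation numbers are not reproduced here), substitute the binning rates into the packing constraints:

```latex
% Substituting J_1 = I(U;S_1) + O(\epsilon) and J_2 = I(V;S_2) + O(\epsilon)
% into the packing constraints derived above:
\begin{align}
R_1 + J_1 &< I(U;Y,V)
  &&\Longrightarrow& R_1 &< I(U;Y,V) - I(U;S_1), \\
R_2 + J_2 &< I(V;Y,U)
  &&\Longrightarrow& R_2 &< I(V;Y,U) - I(V;S_2), \\
R_1 + J_1 + R_2 + J_2 &< I(U,V;Y) + I(U;V)
  &&\Longrightarrow& R_1 + R_2 &< I(U,V;Y) + I(U;V) - I(U;S_1) - I(V;S_2).
\end{align}
```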

The theorem now follows from the covering and packing constraints above, since for any arbitrary $\epsilon > 0$ the rate conditions of Theorem 1 imply that $P_e^{(n)} \to 0$ as $n \to \infty$.

We consider now two special cases of the memoryless MAC with correlated state information known non-causally at the encoders. The first case is $S_1 = S_2$, i.e., the relation between the states is deterministic. The second case is that of independent states.

I. Single state: in this case a single state is known to both encoders, i.e., $S = S_1 = S_2$, and the achievable rate region is given by

$$\begin{aligned}
R_1 &\le I(U; Y, V) - I(U; S), \\
R_2 &\le I(V; Y, U) - I(V; S), \\
R_1 + R_2 &\le I(U, V; Y) + I(U; V) - I(U; S) - I(V; S),
\end{aligned}$$

for some admissible pair $(U, V)$,

where the admissible pairs satisfy $(U, X_1) \leftrightarrow S \leftrightarrow (V, X_2)$ and $(U, V) \leftrightarrow (X_1, X_2, S) \leftrightarrow Y$.

The Gaussian case of a single interference is given by

$$Y = X_1 + X_2 + S + Z,$$

where $Z \sim \mathcal{N}(0, N)$, the interference $S$ is known non-causally to user 1 and user 2, and the power constraints are $P_1$ and $P_2$ for user 1 and user 2, respectively.

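The binning mechanism is what makes the interference penalty vanish in this Gaussian setting. As a point-to-point sanity check (our sketch, not the report's two-user evaluation), Costa's dirty-paper choice of auxiliary $U = X + \alpha S$ achieves $\tfrac{1}{2}\log_2(1 + P/N)$ regardless of the interference power; all numeric values below are arbitrary:

```python
# Numeric illustration of the dirty-paper mechanism behind the Gaussian
# example: for Y = X + S + Z with X ~ N(0, P) independent of S ~ N(0, Q)
# and Z ~ N(0, N), the auxiliary U = X + a*S gives the Gel'fand-Pinsker rate
#   R(a) = I(U;Y) - I(U;S)
#        = 0.5 * log2( P*(P+Q+N) / (P*Q*(1-a)^2 + N*(P + a^2*Q)) ),
# which peaks at Costa's a* = P/(P+N) and then equals 0.5*log2(1 + P/N),
# independent of the interference power Q. (Point-to-point sanity check,
# not the paper's two-user region; P, Q, N below are arbitrary choices.)
import math

P, Q, N = 1.0, 10.0, 1.0

def gp_rate(a):
    return 0.5 * math.log2(
        P * (P + Q + N) / (P * Q * (1 - a) ** 2 + N * (P + a * a * Q)))

a_star = P / (P + N)
print(f"R(a*)             = {gp_rate(a_star):.4f} bits")   # ~0.5
print(f"0.5*log2(1+P/N)   = {0.5 * math.log2(1 + P / N):.4f} bits")
print(f"R(0), no binning  = {gp_rate(0.0):.4f} bits")      # S treated as noise
```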
…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
