Achievable Rates for the MAC with Correlated Channel-State Information
In this paper we provide an achievable rate region for the discrete memoryless multiple access channel with correlated state information known non-causally at the encoders, using a random binning technique. This result is a generalization of the random binning technique used by Gel'fand and Pinsker for the problem with non-causal channel state information at the encoder in point-to-point communication.
Multi-user information theory, random binning, multiple access channel, dirty paper coding.
We consider a discrete memoryless multiple access channel (MAC) with two correlated states, each known to one of the encoders. Specifically, we assume the model $P(y \mid x_1, x_2, s_1, s_2)$, where $s_1 \in \mathcal{S}_1$ and $s_2 \in \mathcal{S}_2$ are known non-causally at encoder 1 and encoder 2, respectively. The channel inputs are $x_1 \in \mathcal{X}_1$ and $x_2 \in \mathcal{X}_2$, and the channel output is $y \in \mathcal{Y}$. That the channel is memoryless implies that
$$P(y^n \mid x_1^n, x_2^n, s_1^n, s_2^n) = \prod_{i=1}^{n} P(y_i \mid x_{1,i}, x_{2,i}, s_{1,i}, s_{2,i}).$$
The first user transmits the message $m_1 \in \{1, \ldots, M_1\}$ and the second user transmits the message $m_2 \in \{1, \ldots, M_2\}$, where $m_1$ and $m_2$ are independent random variables with uniform distributions, and $M_1 = 2^{nR_1}$, $M_2 = 2^{nR_2}$. The first encoder observes the channel state information $s_1^n$ non-causally and generates the transmitted codeword $x_1^n = f_1(m_1, s_1^n)$.
In the same way, the second encoder generates the transmitted codeword $x_2^n = f_2(m_2, s_2^n)$.
The decoder uses a mapping $\psi: \mathcal{Y}^n \to \{1, \ldots, M_1\} \times \{1, \ldots, M_2\}$ to reconstruct the transmitted messages, i.e., $(\hat{m}_1, \hat{m}_2) = \psi(y^n)$. The error probability is defined as $P_e^{(n)} = \Pr\{(\hat{m}_1, \hat{m}_2) \neq (m_1, m_2)\}$.
In the following theorem we provide an inner bound for the capacity region of (1) which is derived using a generalization of the random binning technique [1].
Theorem 1. An inner bound for the capacity region of (1) is given by
for some admissible pair $(U, V)$
where the admissible pairs satisfy:
The theorem implies that the following two Markov chains are satisfied:
Proof: We denote the set of jointly $\epsilon$-typical $n$-sequences $a$ and $b$ by $A_\epsilon^n(A, B)$ (we use the same notation as in [2]).
Fix the distributions $P(U, X_1 \mid S_1)$ and $P(V, X_2 \mid S_2)$, and compute the marginal distributions $P(U)$ and $P(V)$.
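As a small numerical illustration of this step, the marginal $P(U)$ can be computed from the fixed conditional and the state distribution via $P(u) = \sum_{s_1, x_1} P(s_1) P(u, x_1 \mid s_1)$. The alphabets and numbers below are hypothetical, chosen only to make the sketch concrete:

```python
import numpy as np

# Hypothetical binary alphabets: S1 = {0,1}, U = {0,1}, X1 = {0,1}.
P_s1 = np.array([0.5, 0.5])                  # assumed state distribution P(s1)

# P(u, x1 | s1), indexed as [s1, u, x1]; entries sum to 1 for each s1.
P_ux1_given_s1 = np.array([
    [[0.4, 0.1], [0.3, 0.2]],                # conditional for s1 = 0
    [[0.1, 0.2], [0.3, 0.4]],                # conditional for s1 = 1
])

# P(u) = sum over s1, x1 of P(s1) * P(u, x1 | s1)
P_u = np.einsum('s,sux->u', P_s1, P_ux1_given_s1)
print(P_u)  # marginal distribution of U
```

The marginal $P(V)$ is obtained the same way from $P(V, X_2 \mid S_2)$ and $P(s_2)$.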
• Codebook generation: Let
Codebook 1: Generate $2^{n(J_1+R_1)}$ independent $u$ sequences of length $n$, drawing each element i.i.d. according to the distribution $\prod_{i=1}^{n} P(u_i)$, and distribute these sequences randomly among $M_1$ bins so that each bin contains $2^{nJ_1}$ sequences.
Codebook 2: Generate $2^{n(J_2+R_2)}$ independent $v$ sequences of length $n$, drawing each element i.i.d. according to the distribution $\prod_{i=1}^{n} P(v_i)$, and distribute these sequences randomly among $M_2$ bins so that each bin contains $2^{nJ_2}$ sequences.
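The codebook construction above can be sketched in a few lines. This is a minimal illustration, not the paper's construction: the function name and the small sample parameters ($n = 8$, $R = J = 0.25$) are our own, chosen so the counts stay tiny:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_binned_codebook(P_u, n, J, R):
    """Generate 2^{n(J+R)} i.i.d. length-n sequences from P_u and split
    them uniformly at random into 2^{nR} bins of 2^{nJ} sequences each."""
    num_bins = 2 ** round(n * R)        # one bin per message m
    bin_size = 2 ** round(n * J)
    total = num_bins * bin_size
    # Each codeword element drawn i.i.d. according to P(u).
    codewords = rng.choice(len(P_u), size=(total, n), p=P_u)
    perm = rng.permutation(total)       # random assignment to bins
    bins = perm.reshape(num_bins, bin_size)
    return codewords, bins              # bins[m] holds codeword indices of bin m

# Hypothetical small parameters for illustration.
codewords, bins = generate_binned_codebook(np.array([0.5, 0.5]), n=8, J=0.25, R=0.25)
print(bins.shape)  # 2^{nR} bins, each with 2^{nJ} codewords
```

Codebook 2 is generated identically from $P(v)$ with parameters $J_2, R_2$.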
• Encoder of user 1: Given the state sequence $s_1$ and the message $m_1$, search in bin $m_1$ of codebook 1 for a $u$ sequence such that $(u, s_1) \in A_\epsilon^n(U, S_1)$. Send $x_1$, which is jointly typical with $u$ and $s_1$, i.e., $(u, s_1, x_1) \in A_\epsilon^n(U, S_1, X_1)$.
• Encoder of user 2: Given the state sequence $s_2$ and the message $m_2$, search in bin $m_2$ of codebook 2 for a $v$ sequence such that $(v, s_2) \in A_\epsilon^n(V, S_2)$, and send $x_2$ jointly typical with $v$ and $s_2$.
• Decoder: Given the received vector $y$, search for a unique pair of sequences $u$ and $v$ such that $(u, v, y) \in A_\epsilon^n(U, V, Y)$, and declare $\hat{m}_1$ and $\hat{m}_2$ to be the indices of the bins containing $u$ and $v$, respectively.
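The typicality tests used by the encoders and decoder can be sketched with a strong-typicality-style membership check (a simplified stand-in for the $A_\epsilon^n$ sets of [2], not the exact definition used in the proof); the joint distribution and sequences below are hypothetical:

```python
import numpy as np

def jointly_typical(a, b, P_ab, eps):
    """Return True if the empirical joint type of (a, b) is within eps of
    P_ab in every entry -- a simplified strong-typicality criterion."""
    n = len(a)
    counts = np.zeros_like(P_ab, dtype=float)
    for ai, bi in zip(a, b):
        counts[ai, bi] += 1
    return bool(np.all(np.abs(counts / n - P_ab) <= eps))

# Hypothetical joint distribution of (U, Y) over binary alphabets.
P_uy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

u = [0, 0, 0, 0, 1, 1, 1, 1]
y = [0, 0, 0, 1, 1, 1, 1, 0]
print(jointly_typical(u, y, P_uy, eps=0.1))   # True: empirical type is close
```

In the scheme itself the decoder would run such a test over every pair of candidate $u$ and $v$ sequences and declare an error unless exactly one pair passes.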
The error probability is given by
where the inequality follows from the asymptotic equipartition property (AEP) [2]. Hence, we need to evaluate only the second term. We define the following error events for specific state sequences $(s_1, s_2)$:
Then, by the union bound, the error probability is upper bounded by
We now evaluate the probability of each error event. For independent $u$ and $s_1$, the probability that $(u, s_1) \in A_\epsilon^n(U, S_1)$ is bounded below by
$$\Pr\big((u, s_1) \in A_\epsilon^n(U, S_1)\big) = \sum_{(u, s_1) \in A_\epsilon^n(U, S_1)} P(u) P(s_1) \geq (1 - \epsilon)\, 2^{-n(I(U; S_1) + 3\epsilon)}.$$
Hence, we have that
where (15) follows since $1 - x \leq \exp(-x)$. Hence, this term decays to zero as $n \to \infty$. In the same way, $\Pr(E_2(s_2, m_2))$ goes to zero as $n \to \infty$.
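The role of the inequality $1 - x \leq \exp(-x)$ here is to turn the per-codeword typicality probability into a double-exponential bound on the event that an entire bin of $2^{nJ_1}$ independent codewords contains no sequence jointly typical with the state. A numeric sketch, with hypothetical values of $J_1$ and $I(U; S_1)$ (the function name is our own):

```python
import math

def encoder_failure_bound(n, J1, I_US1):
    """Bound exp(-2^{n J1} * 2^{-n I(U;S1)}) = exp(-2^{n(J1 - I(U;S1))}) on
    the probability that no codeword in a bin of 2^{n J1} i.i.d. sequences
    is jointly typical with the state, via (1 - x)^k <= exp(-k x)."""
    return math.exp(-(2 ** (n * (J1 - I_US1))))

# Hypothetical values: J1 = 0.6 bits exceeds I(U; S1) = 0.5 bits,
# so the bound vanishes double-exponentially in n.
for n in (50, 100, 200):
    print(n, encoder_failure_bound(n, J1=0.6, I_US1=0.5))
```

When $J_1 < I(U; S_1)$ the exponent $2^{n(J_1 - I(U;S_1))}$ tends to zero and the bound becomes vacuous, which is why the binning rate must exceed $I(U; S_1)$.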
Provided that $E_1(s_1, m_1)$ and $E_2(s_2, m_2)$ have not occurred, i.e., $(u_{m_1,j_1}, s_1) \in A_\epsilon^n(U, S_1)$ and $(v_{m_2,j_2}, s_2) \in A_\epsilon^n(V, S_2)$, we have that
where the typical set $A_\epsilon^n(U, V, S_1, S_2)$ is associated with the joint distribution
Hence, we have that
In fact, we have (with high probability) that the sequences $(u_{m_1,j_1}, v_{m_2,j_2}, s_1, s_2)$ are generated according to the joint distribution (18), which corresponds to the Markov chain $U - S_1 - S_2 - V$.
Provided that $E_3(s_1, s_2, m_1, m_2)$ has not occurred, from the AEP we have that
Likewise, we have that
where (23) and (24) follow from the AEP; (26) follows from the chain rule for mutual information; (29) follows from the Markov chain $U - S_1 - V$. In the same way, it can be shown that
Furthermore,
where (35) and (36) follow from the AEP; (41) follows from the Markov chain $U - S_1 - S_2 - V$; (43) follows from the chain rule for mutual information.
The theorem follows from (30), (31), and (43), since for any arbitrary $\epsilon > 0$ the conditions in (7) imply that $P_e^{(n)} \to 0$ as $n \to \infty$.
We now consider two special cases of the memoryless MAC with correlated state information known non-causally at the encoders. The first case is $S_1 = S_2$, i.e., the relation between the states is deterministic. The second is the case of independent states.
I. Single state: in this case there is a single state known to both encoders, i.e., $S = S_1 = S_2$, and the achievable rate region is given by
for some admissible pair $(U, V)$
where the admissible pairs satisfy $(U, X_1) - S - (V, X_2)$ and $(U, V) - (X_1, X_2, S) - Y$.
The Gaussian case with a single interference is given by
where $Z \sim \mathcal{N}(0, N)$, the interference $S$ is known non-causally to user 1 and user 2, and the power constraints are $P_1$ and $P_2$ for user 1 and user 2, respectively.
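For context, in the point-to-point version of this Gaussian problem, Costa's dirty-paper coding achieves $\frac{1}{2}\log_2(1 + P/N)$ with the inflation factor $\alpha = P/(P+N)$, regardless of the interference power. A minimal sketch of these two textbook formulas (the function names and sample powers are our own, and this illustrates Costa's point-to-point result, not the MAC region above):

```python
import math

def dpc_rate(P, N):
    """Costa's dirty-paper coding rate 0.5*log2(1 + P/N): the known
    interference is canceled completely, so the achievable rate does
    not depend on the interference power."""
    return 0.5 * math.log2(1 + P / N)

def costa_alpha(P, N):
    """Optimal inflation factor alpha = P/(P + N) in U = X + alpha*S."""
    return P / (P + N)

# Hypothetical powers: P = 10, N = 1; the interference power never appears.
print(dpc_rate(10, 1))     # equals the interference-free AWGN capacity
print(costa_alpha(10, 1))
```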
…(Full text truncated)…