One-shot Multiple Access Channel Simulation


We consider the problem of shared-randomness-assisted multiple access channel (MAC) simulation for product inputs and characterize the one-shot communication cost region via almost-matching inner and outer bounds in terms of the smooth max-information of the channel, featuring auxiliary random variables of bounded size. The achievability relies on a rejection-sampling algorithm that simulates an auxiliary channel between each sender and the decoder and then produces the final output from the outputs of these intermediate channels. The converse follows via information-spectrum-based arguments. To bound the cardinality of the auxiliary random variables, we employ the perturbation method from [Anantharam et al., IEEE Trans. Inf. Theory (2019)] in the one-shot setting. In the asymptotic setting with vanishing error, our result yields a tight single-letter rate characterization and consequently extends a special case of the simulation results of [Kurri et al., IEEE Trans. Inf. Theory (2022)] from fixed, independent and identically distributed (iid) product inputs to universal simulation for arbitrary product inputs. We broaden the discussion into the quantum realm by studying feedback simulation of quantum-to-classical (QC) MACs with product measurements [Atif et al., IEEE Trans. Inf. Theory (2022)]. For fixed product inputs and with shared-randomness assistance, we give a quasi-tight one-shot communication cost region with a corresponding single-letter asymptotic iid expansion.


💡 Research Summary

This paper studies the problem of simulating a two‑sender, single‑receiver multiple‑access channel (MAC) when the two senders share unlimited common randomness with the receiver. The focus is on the one‑shot (non‑asymptotic) setting and on product input distributions, i.e., the joint input is the product of two independent marginals. The authors give a near‑tight characterization of the required communication rates (R₁,R₂) in terms of smooth max‑information quantities, together with explicit cardinality bounds on auxiliary random variables.

The main technical contribution is a coding scheme based on rejection sampling. Each sender j observes its own input X_j and the shared randomness S_j, draws an auxiliary variable U_j according to a conditional distribution p_{U_j|X_j}, and compresses U_j into a message M_j taking one of 2^{R_j} values (i.e., R_j bits). The receiver, having both messages and both copies of the shared randomness, reconstructs (U₁,U₂) and then generates the final output Y by sampling from a conditional distribution p_{Y|U₁,U₂} that exactly reproduces the target MAC law q_{Y|X₁,X₂}. The simulation error is measured in total variation distance and is bounded by a parameter ε that can be made arbitrarily small through appropriate choices of the smoothing parameters ε₁, ε₂ and a small slack parameter δ used in the analysis.
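To make this building block concrete, here is a minimal sketch (in Python, with hypothetical helper names; this is a standard shared-randomness rejection-sampling scheme, not the paper's exact protocol) of how one sender can simulate its auxiliary channel p_{U|X} while transmitting only the index of an accepted candidate:

```python
import numpy as np

def make_shared_candidates(p_u, n, seed):
    # Shared randomness: a common candidate list U^(1),...,U^(n) ~ p_U,
    # available to both the sender and the receiver.
    rng = np.random.default_rng(seed)
    return rng.choice(len(p_u), size=n, p=p_u)

def sender_encode(x, candidates, p_u_given_x, p_u, rng):
    # Scan the shared list and accept the first candidate u with
    # probability p(u|x) / (c * p(u)); transmit only its index.
    c = (p_u_given_x[:, x] / p_u).max()  # rejection-sampling envelope for input x
    for idx, u in enumerate(candidates):
        if rng.random() < p_u_given_x[u, x] / (c * p_u[u]):
            return idx
    return len(candidates) - 1  # failure event (vanishing for long candidate lists)

def receiver_decode(idx, candidates):
    # The receiver holds the same shared list and simply reads off U at the index.
    return candidates[idx]
```

In standard rejection sampling the expected number of candidates scanned is the envelope constant c, so the index costs roughly log c bits, and log c is controlled by the max-information between X and U; this gives a rough intuition for why the communication cost in the paper is governed by (smooth) max-information.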

The achievable region (inner bound) is expressed as
 R_j ≥ I^{ε_j−δ}_{max}(X_j;U_j) + log log(1/δ) ,
where I^{ε}_{max} denotes the ε‑smoothed max‑mutual information. The outer (converse) region requires only
 R_j ≥ I^{ε_j}_{max}(X_j;U_j) .
Both bounds are taken over all auxiliary conditional distributions p_{U₁|X₁}, p_{U₂|X₂} that admit a joint distribution p_{X₁,X₂,U₁,U₂,Y} = q_{X₁} q_{X₂} p_{U₁|X₁} p_{U₂|X₂} p_{Y|U₁,U₂} matching the target MAC.
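For concreteness, the unsmoothed max-mutual information I_max(X;U) = D_max(p_{XU} ‖ p_X × p_U) = log max_{(x,u): p(x,u)>0} p(u|x)/p(u) can be computed directly for small alphabets. The sketch below (hypothetical helper, assuming p_U is positive on the relevant support) computes this quantity; the bounds above use the smoothed version, which additionally minimizes over distributions ε-close to p_{XU}:

```python
import numpy as np

def max_information(p_x, p_u_given_x):
    # Unsmoothed max-mutual information in bits:
    #   I_max(X;U) = log2 max_{(x,u) in supp(p_{XU})} p(u|x) / p(u).
    # p_u_given_x has shape (|U|, |X|); column x is the distribution p(.|x).
    p_u = p_u_given_x @ p_x                      # marginal of U
    ratio = p_u_given_x / p_u[:, None]           # p(u|x) / p(u)
    support = (p_u_given_x * p_x[None, :]) > 0   # restrict to the joint support
    return float(np.log2(ratio[support].max()))
```

For a noiseless binary channel with uniform input this evaluates to 1 bit, while a channel whose output is independent of the input gives 0, matching the intuition that I_max upper-bounds the ordinary mutual information.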

A notable methodological advance is the use of the perturbation technique of Anantharam et al. (2019) to bound the cardinalities of the auxiliary alphabets in the one‑shot setting. The classic support‑lemma approach does not preserve the required correlation structure between the output Y and the auxiliaries, so the perturbation method is adapted to the smooth‑max‑information framework, yielding finite bounds that depend only on the sizes of the input alphabets and the smoothing parameters.

The authors then take the i.i.d. limit: when the product input distribution is repeated n times, the smooth max‑information converges to the ordinary mutual information I(X_j;U_j). Consequently the inner and outer bounds collapse to a single‑letter rate region R_j ≥ I(X_j;U_j), which exactly recovers the fixed‑input result of Kurri et al. (2022) and extends it to universal simulation, i.e., the same rates work for any product input distribution, not just a predetermined one.
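The collapse of the bounds can be summarized by the asymptotic-equipartition behavior of smooth max-information (stated here informally, in the notation above):

```latex
\lim_{\varepsilon \to 0^{+}} \lim_{n \to \infty} \frac{1}{n}\,
I_{\max}^{\varepsilon}\!\bigl(X_j^{n};U_j^{n}\bigr) \;=\; I(X_j;U_j),
\qquad j \in \{1,2\},
```

so both the inner and the outer one-shot bound, evaluated on n iid copies and normalized by n, approach the same single-letter quantity.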

The paper also treats feedback simulation of a quantum‑to‑classical (QC) MAC with product measurements. In this model each sender holds a quantum input and the target channel is a product of local measurements followed by classical post‑processing; in the feedback variant, the simulated measurement outcomes are additionally made available to the senders. Using shared randomness, the authors construct a similar rejection‑sampling protocol and characterize the one‑shot communication cost in terms of the smooth quantum max‑mutual information I^{ε}_{max}(A;B). The asymptotic expansion again yields a single‑letter expression analogous to the classical case.

Overall, the work provides (i) a clean information‑theoretic formulation of MAC simulation in the one‑shot regime, (ii) a practically implementable coding scheme based on rejection sampling, (iii) rigorous inner and outer bounds expressed via smooth max‑information, (iv) finite cardinality guarantees for auxiliary variables, (v) a seamless bridge to the asymptotic i.i.d. regime, and (vi) an extension to quantum‑classical MACs with feedback. These contributions fill a gap in the literature where MAC simulation had previously been understood only for fixed i.i.d. inputs or in the broadcast setting, and they open the door to further investigations of multi‑terminal channel simulation under limited communication and shared randomness.

