Coding Versus ARQ in Fading Channels: How reliable should the PHY be?

Reading time: 6 minutes

📝 Original Info

  • Title: Coding Versus ARQ in Fading Channels: How reliable should the PHY be?
  • ArXiv ID: 0904.0226
  • Date: 2009-04-01
  • Authors: Peng Wu, Nihar Jindal

📝 Abstract

This paper studies the tradeoff between channel coding and ARQ (automatic repeat request) in Rayleigh block-fading channels. A heavily coded system corresponds to a low transmission rate with few ARQ re-transmissions, whereas lighter coding corresponds to a higher transmitted rate but more re-transmissions. The optimum error probability, where optimum refers to the maximization of the average successful throughput, is derived and is shown to be a decreasing function of the average signal-to-noise ratio and of the channel diversity order. A general conclusion of the work is that the optimum error probability is quite large (e.g., 10% or larger) for reasonable channel parameters, and that operating at a very small error probability can lead to a significantly reduced throughput. This conclusion holds even when a number of practical ARQ considerations, such as delay constraints and acknowledgement feedback errors, are taken into account.


📄 Full Content

In contemporary wireless communication systems, ARQ (automatic repeat request) is generally used above the physical layer (PHY) to compensate for packet errors: incorrectly decoded packets are detected by the receiver, and a negative acknowledgement is sent back to the transmitter to request a re-transmission. In such an architecture there is a natural tradeoff between the transmitted rate and ARQ re-transmissions. A high transmitted rate corresponds to many packet errors and thus many ARQ retransmissions, but each successfully received packet contains many information bits. On the other hand, a low transmitted rate corresponds to few ARQ re-transmissions, but few information bits are contained per packet. Thus, a fundamental design challenge is determining the transmitted rate that maximizes the rate at which bits are successfully delivered. Since the packet error probability is an increasing function of the transmitted rate, this is equivalent to determining the optimal packet error probability, i.e., the optimal PHY reliability level.
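The rate–retransmission accounting above can be sketched numerically: with unlimited retransmissions and independent packet failures with probability ε, the long-term goodput converges to R(1 − ε). A minimal simulation (the function name `arq_goodput` and its parameters are illustrative, not from the paper):

```python
import random

def arq_goodput(rate, eps, packets=100_000, seed=0):
    """Simulate basic ARQ: each transmission attempt fails independently
    with probability eps, and a failed packet is retransmitted until it
    gets through.  Returns delivered bits per channel use (goodput)."""
    rng = random.Random(seed)
    attempts = 0
    for _ in range(packets):
        attempts += 1                  # first transmission
        while rng.random() < eps:      # retransmit on failure
            attempts += 1
    # Each delivered packet carries `rate` bits/symbol, but the channel
    # was occupied for `attempts` packet slots in total.
    return rate * packets / attempts

print(arq_goodput(2.0, 0.1))  # ≈ 2.0 * (1 - 0.1) = 1.8
```

The simulation simply confirms the bookkeeping: raising the transmitted rate only pays off if the induced increase in ε does not outweigh it, which is the optimization studied in the paper.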

We consider a wireless channel where the transmitter chooses the rate based only on the fading statistics because knowledge of the instantaneous channel conditions is not available (e.g., high velocity mobiles in cellular systems). The transmitted rate-ARQ tradeoff is interesting in this setting because the packet error probability depends on the transmitted rate in a non-trivial fashion; on the other hand, this tradeoff is somewhat trivial when instantaneous channel state information at the transmitter (CSIT) is available (see Remark 1).

We begin by analyzing an idealized system, for which we find that making the PHY too reliable can lead to a significant penalty in terms of the achieved goodput (long-term average successful throughput), and that the optimal packet error probability is decreasing in the average SNR and in the fading selectivity experienced by each transmitted codeword. We also see that for a wide range of system parameters, choosing an error probability of 10% leads to near-optimal performance. We then consider a number of important practical considerations, such as a limit on the number of ARQ re-transmissions and unreliable acknowledgement feedback. Even after taking these issues into account, we find that a relatively unreliable PHY is still preferred. Because of fading, the PHY can be made reliable only if the transmitted rate is significantly reduced. However, this reduction in rate is not made up for by the corresponding reduction in ARQ re-transmissions.
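The goodput penalty of an over-reliable PHY is easy to see in the simplest case of no diversity (L = 1), where the Rayleigh outage probability has the standard closed form ε = 1 − exp(−(2^R − 1)/SNR); the sweep below is a toy illustration, not the paper's derivation, and the SNR value is arbitrary:

```python
import math

def goodput_L1(rate, snr):
    """R * (1 - eps) for a single Rayleigh fading block (L = 1), using the
    closed-form outage probability eps = 1 - exp(-(2^R - 1) / SNR)."""
    eps = 1.0 - math.exp(-(2.0 ** rate - 1.0) / snr)
    return rate * (1.0 - eps)

snr = 10.0  # 10 dB average SNR (illustrative)
rates = [r / 100.0 for r in range(1, 801)]
best = max(rates, key=lambda r: goodput_L1(r, snr))
eps_star = 1.0 - math.exp(-(2.0 ** best - 1.0) / snr)
print(best, eps_star)  # the goodput-maximizing error probability is far above 1%
```

With no diversity the optimizing ε comes out very large; per the paper's analysis it shrinks as SNR and the selectivity L grow, but remains on the order of 10% for reasonable parameters.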

There has been some recent work on the joint optimization of packet-level erasure-correction codes (e.g., fountain codes) and PHY-layer error correction [1]-[4]. The fundamental metric with erasure codes is the product of the transmitted rate and the packet success probability, which is the same as in the idealized ARQ setting studied in Section III. Even in that idealized setting, our work differs in a number of ways. References [1], [3], [4] study multicast (i.e., multiple receivers) while [2] considers unicast assuming no diversity per transmission, whereas our focus is on the unicast setting with diversity per transmission. Furthermore, our analysis provides a general explanation of how the PHY reliability should depend on both the diversity and the average SNR. In addition, we consider a number of practical issues specific to ARQ, such as acknowledgement errors (Section IV), as well as hybrid-ARQ (Section V).

We consider a Rayleigh block-fading channel where the channel remains constant within each block but changes independently from one block to another. The t-th (t = 1, 2, …) received symbol in the i-th (i = 1, 2, …) fading block is given by

y_{t,i} = √SNR · h_i · x_{t,i} + z_{t,i},

where h_i ∼ CN(0, 1) represents the channel gain and is i.i.d. across fading blocks, x_{t,i} ∼ CN(0, 1) denotes the Gaussian input symbol constrained to have unit average power, and z_{t,i} ∼ CN(0, 1) models the additive Gaussian noise, assumed i.i.d. across channel uses and fading blocks. Although we focus on single-antenna systems and Rayleigh fading, the model extends readily to multiple-input multiple-output (MIMO) systems and other fading distributions, as commented upon in Remark 2.

Each transmission (i.e., codeword) is assumed to span L fading blocks, and thus L represents the time/frequency selectivity experienced by each codeword. In analyzing ARQ systems, the packet error probability is the key quantity. If a strong channel code (with suitably long blocklength) is used, it is well known that the packet error probability is accurately approximated by the mutual information outage probability [5]-[8]. Under this assumption (which is examined in Section IV-A), the packet error probability for transmission at rate R bits/symbol is given by [9, eq (5.83)]:

ε(SNR, L, R) = P[ (1/L) Σ_{i=1}^{L} log₂(1 + SNR·|h_i|²) < R ].

Here we explicitly denote the dependence of the error probability on the average signal-to-noise ratio SNR, the selectivity order L, and the transmitted rate R.
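The outage probability has no simple closed form for general L, but it is straightforward to estimate by Monte Carlo, since |h_i|² is exponentially distributed with unit mean under Rayleigh fading. A hedged sketch (function name and parameter values are illustrative):

```python
import math
import random

def outage_prob(rate, snr, L, trials=100_000, seed=1):
    """Monte Carlo estimate of P[(1/L) * sum_i log2(1 + SNR*|h_i|^2) < R]
    for Rayleigh block fading, where |h_i|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        mi = sum(math.log2(1.0 + snr * rng.expovariate(1.0))
                 for _ in range(L)) / L
        fails += mi < rate
    return fails / trials

# Larger selectivity order L hardens the channel: at fixed R and SNR the
# outage (packet error) probability drops as L grows.
print(outage_prob(2.0, 10.0, 1), outage_prob(2.0, 10.0, 8))
```

For L = 1 the estimate can be checked against the closed form 1 − exp(−(2^R − 1)/SNR); the L = 8 figure illustrates the diversity effect that drives the paper's conclusion that the optimal ε decreases with L.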

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
