Feedback-based online network coding

Reading time: 6 minutes

📝 Original Info

  • Title: Feedback-based online network coding
  • ArXiv ID: 0904.1730
  • Date: 2009-04-13
  • Authors: Jay Kumar Sundararajan, Devavrat Shah, Muriel Médard

📝 Abstract

Current approaches to the practical implementation of network coding are batch-based, and often do not use feedback, except possibly to signal completion of a file download. In this paper, the various benefits of using feedback in a network coded system are studied. It is shown that network coding can be performed in a completely online manner, without the need for batches or generations, and that such online operation does not affect the throughput. Although these ideas are presented in a single-hop packet erasure broadcast setting, they naturally extend to more general lossy networks which employ network coding in the presence of feedback. The impact of feedback on queue size at the sender and decoding delay at the receivers is studied. Strategies for adaptive coding based on feedback are presented, with the goal of minimizing the queue size and delay. The asymptotic behavior of these metrics is characterized, in the limit of the traffic load approaching capacity. Different notions of decoding delay are considered, including an order-sensitive notion which assumes that packets are useful only when delivered in order. Our work may be viewed as a natural extension of Automatic Repeat reQuest (ARQ) schemes to coded networks.


📄 Full Content

This paper is a step towards low-delay, high-throughput solutions based on network coding for real-time data streaming applications over a packet erasure network. In particular, it considers the role of feedback for queue management and delay control in such systems.

Reliable communication over a network of packet erasure channels is a well-studied problem. Several solutions have been proposed, especially for the case when there is no feedback. Below, we compare three such approaches: digital fountain codes, random linear network coding, and priority encoding transmission.

  1. Digital fountain codes: The digital fountain codes ([1], [2]) constitute a well-known approach to this problem. From a block of k transmit packets, the sender generates random linear combinations in such a way that the receiver can, with high probability, decode the block once it receives any set of slightly more than k linear combinations. This approach has low complexity and requires no feedback, except to signal successful decoding of the block. However, fountain codes are designed for a point-to-point erasure channel and in their original form do not extend readily to a network setting. Consider a two-link tandem network. An end-to-end fountain code with simple forwarding at the middle node will result in throughput loss. If the middle node chooses to decode and re-encode an entire block, the scheme will be sub-optimal in terms of delay, as pointed out by [3]. In this sense, the fountain code approach is not composable across links. For the special case of tree networks, there has been some recent work on composing fountain codes across links by enabling the middle node to re-encode even before decoding the entire block [4].
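As a rough illustration of this idea (a sketch of ours, not the codes of [1] or [2], which use carefully designed degree distributions; here we simply take uniformly random XOR combinations over GF(2)), a sender can keep emitting random combinations of a block of k packets, and the receiver decodes once it has collected k linearly independent ones:

```python
import random

random.seed(1)
K = 8  # block size: number of source packets


def random_combo():
    """A non-zero random GF(2) coefficient vector over the block, as a bitmask."""
    c = 0
    while c == 0:
        c = random.getrandbits(K)
    return c


def add_to_basis(basis, vec):
    """One Gaussian-elimination step over GF(2).

    basis[bit] holds the row whose leading (highest) set bit is `bit`.
    Returns True iff `vec` was linearly independent of the basis (innovative).
    """
    for bit in reversed(range(K)):
        if not (vec >> bit) & 1:
            continue
        if basis[bit] is None:
            basis[bit] = vec
            return True
        vec ^= basis[bit]
    return False


basis = [None] * K
rank = sent = 0
while rank < K:
    sent += 1
    if random.random() < 0.2:  # 20% packet erasure on the channel
        continue
    if add_to_basis(basis, random_combo()):
        rank += 1

# `sent` is slightly more than K / (1 - 0.2), w.h.p.: the receiver needed a
# few extra combinations beyond the erasures to reach full rank.
```

Note that nothing here depends on which original packets a combination covers; the receiver recovers the block all at once, which is exactly the block-decoding delay discussed later.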

  2. Random linear network coding: Network coding was originally introduced for the case of error-free networks with specified link capacities ([5], [6]), and was extended to the case of erasure networks [7]. In contrast to fountain codes, the random linear network coding solution of [8] does not require decoding at intermediate nodes and can be applied in any network. Each node transmits a random linear combination of all coded packets it has received so far. This solution ensures that, with high probability, the transmitted packet will have what we call the innovation guarantee property, i.e., it will be innovative to every receiver that receives it successfully, unless the receiver already knows as much as the sender. Thus, every successful reception will bring a unit of new information. In [8], this scheme is shown to achieve capacity for the case of a multicast session.
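The innovation guarantee can be made concrete with a toy check (our construction, over GF(2) for readability rather than the larger field sizes used in [8]): a coded packet is innovative to a receiver exactly when its coefficient vector lies outside the receiver's knowledge space, which the sender can test against a row basis of that space:

```python
K = 6  # number of source packets currently in play


def reduce(vec, basis):
    """Reduce a GF(2) coefficient vector by a basis (dict: pivot bit -> row).

    Returns 0 iff `vec` is already in the span of the basis.
    """
    for bit in reversed(range(K)):
        if (vec >> bit) & 1 and bit in basis:
            vec ^= basis[bit]
    return vec


def is_innovative(vec, basis):
    """True iff delivering `vec` would increase this receiver's rank."""
    return reduce(vec, basis) != 0


# Hypothetical knowledge states: receiver A holds packets p1 and p2;
# receiver B holds only the combination p1 + p2.
basis_A = {0: 0b000001, 1: 0b000010}
basis_B = {1: 0b000011}

combo = 0b000110  # the combination p2 + p3

assert is_innovative(combo, basis_A)      # brings p3 to A
assert is_innovative(combo, basis_B)      # new dimension for B as well
assert not is_innovative(0b000011, basis_A)  # A can already form p1 + p2
```

With feedback, the sender knows `basis_A` and `basis_B` exactly and can choose coefficients deterministically; without feedback, random coefficients satisfy the guarantee only with high probability.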

An important problem with both fountain codes and random linear network coding is that although they are rateless, the encoding operation is performed on a block (or generation) of packets. This means that, in general, there is no guarantee that the receiver will be able to extract, and pass on to higher layers, any of the original packets from the coded packets until the entire block has been received. This leads to a decoding delay.

Such a decoding delay is not a problem if the higher layers will use a block only as a whole anyway (e.g., file download). This corresponds to traditional approaches in information theory, where the message is assumed to be useful only as a whole. No incentive is placed on decoding "a part of the message" using a part of the codeword. However, many applications today involve broadcasting a continuous stream of packets in real-time (e.g., video streaming). Sources generate a stream of messages which have an intrinsic temporal ordering. In such cases, playback is possible only up to the point where all packets have been recovered, which we call the front of contiguous knowledge. Thus, there is incentive to decode the older messages earlier, as this will reduce the playback latency. The above schemes would segment the stream into blocks and process one block at a time. Block sizes would have to be large to ensure high throughput. However, if playback can begin only after receiving a full block, then large blocks imply a large delay.
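To make the front of contiguous knowledge concrete, here is a small hypothetical helper (`contiguous_front` is our name, not the paper's) computing how far playback can proceed given which source packets have been decoded so far:

```python
def contiguous_front(decoded):
    """Largest n such that packets 1..n have all been decoded.

    Playback can proceed only up to this front, no matter how many later
    packets have been recovered out of order.
    """
    front = 0
    while front + 1 in decoded:
        front += 1
    return front


# Packets 5 and 6 are decoded, but playback is stuck at 3 until 4 arrives.
assert contiguous_front({1, 2, 3, 5, 6}) == 3
assert contiguous_front(set()) == 0
```

This is why an order-sensitive notion of decoding delay matters: recovering many packets is not enough if the oldest missing packet keeps the front from advancing.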

This raises an interesting question: can we code in such a way that playback can begin even before the full block is received? In other words, we are more interested in packet delay than block delay. These issues have been studied using various approaches by [9], [10] and [11] in a point-to-point setting. However, in a network setting, the problem is not well understood. Moreover, these works do not consider the queue management aspects of the problem. In related work, [12] and [13] address the question of how many original packets are revealed before the whole block is decoded in a fountain code setting. However, performance may depend not only on how much data reaches the receiver in a given time, but also on which part of the data. For instance, playback delay depends on not just the number of original packets that are recovered, but also the order in which they are recovered.

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
