Robust Network Coding in the Presence of Untrusted Nodes

Reading time: 6 minutes

📝 Abstract

While network coding can be an efficient means of information dissemination in networks, it is highly susceptible to “pollution attacks,” as the injection of even a single erroneous packet has the potential to corrupt each and every packet received by a given destination. Even when suitable error-control coding is applied, an adversary can, in many interesting practical situations, overwhelm the error-correcting capability of the code. To limit the power of potential adversaries, a broadcast transformation is introduced, in which nodes are limited to just a single (broadcast) transmission per generation. Under this broadcast transformation, the multicast capacity of a network is changed (in general reduced) from the number of edge-disjoint paths between source and sink to the number of internally-disjoint paths. Exploiting this fact, we propose a family of networks whose capacity is largely unaffected by a broadcast transformation. This results in a significant achievable transmission rate for such networks, even in the presence of adversaries.
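The abstract's two capacity notions can be made concrete with a small sketch. The toy graph, node names, and the use of Edmonds-Karp max-flow below are illustrative choices, not from the paper: counting edge-disjoint s-t paths is a unit-capacity max-flow, and counting internally-disjoint paths is the same computation after splitting each internal node into a unit-capacity in/out pair.

```python
# Illustrative sketch (assumed toy topology, not from the paper):
# edge-disjoint vs. internally-disjoint s-t path counts via max-flow.

from collections import defaultdict, deque

def max_flow(edges, s, t):
    """Integer max-flow (Edmonds-Karp) on a digraph with unit edge capacities;
    parallel edges accumulate capacity."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)  # residual edges must be traversable
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # every residual capacity on the path is >= 1; augment by 1
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def split(edges, s, t):
    """Split each internal node into an in/out pair with a unit-capacity
    internal edge, so vertex-disjointness reduces to edge-disjointness."""
    out = []
    nodes = {u for e in edges for u in e}
    for u in nodes - {s, t}:
        out.append((u + "_in", u + "_out"))
    for u, v in edges:
        uu = u if u in (s, t) else u + "_out"
        vv = v if v in (s, t) else v + "_in"
        out.append((uu, vv))
    return out

# Assumed network: two edge-disjoint s-t paths, both through internal node a.
E = [("s", "a"), ("s", "a"), ("a", "t"), ("a", "t")]
print(max_flow(E, "s", "t"))                    # 2 edge-disjoint paths
print(max_flow(split(E, "s", "t"), "s", "t"))   # 1 internally-disjoint path
```

On this assumed topology the two counts differ (2 vs. 1), which is exactly the kind of capacity reduction the broadcast transformation can cause; the family of networks proposed in the paper is chosen so that the two counts nearly coincide.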

📄 Content

arXiv:0811.3475v3 [cs.IT] 26 Apr 2010

Robust Network Coding in the Presence of Untrusted Nodes

Da Wang, Danilo Silva and Frank R. Kschischang

Index Terms—adversarial nodes, broadcast transformation, error correction, JLC networks, multicast capacity, network coding.

I. INTRODUCTION

Network coding [1] is a promising approach for efficient information dissemination in packet networks. Network coding generalizes routing, allowing nodes in the network not only to switch packets from input ports to output ports, but also to combine incoming packets in some manner to form outgoing packets. For example, in linear network coding, fixed-length packets are regarded as vectors over a finite field Fq, and nodes in the network form outgoing packets as Fq-linear combinations of incoming packets. For the single-source multicast problem, it is known that linear network coding suffices to achieve the network capacity [2], [3].
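A linear network coding step of the kind just described can be sketched in a few lines. The field size (q = 13) and packet length here are illustrative assumptions, not parameters from the paper:

```python
# Minimal sketch of a linear network coding operation, assuming packets
# are length-4 vectors over the prime field F_13 (illustrative choices).

Q = 13  # prime field size, so arithmetic mod Q is a field

def combine(packets, coeffs):
    """Form an outgoing packet as an F_q-linear combination of incoming ones."""
    length = len(packets[0])
    return [sum(c * p[i] for c, p in zip(coeffs, packets)) % Q
            for i in range(length)]

# Two incoming packets at an internal node:
p1 = [1, 2, 3, 4]
p2 = [5, 6, 7, 8]

# The node picks local coefficients (3, 10) and emits their combination.
out = combine([p1, p2], [3, 10])
print(out)  # [1, 1, 1, 1]
```

A pure router would forward p1 or p2 unchanged; a coding node may emit any such combination, which is what enables multicast capacity to be achieved.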
Recently the problem of error correction in network coding has received significant attention due to the fact that pollution attacks can be catastrophic. Indeed, the injection of even a single erroneous packet somewhere in the network has the potential to corrupt each and every packet received by a given sink node. This problem was first investigated from an edge-centric perspective [4], where a number of packet errors could arise in any of the links in the network. Alternatively, under a node-centric perspective, it is assumed that an adversarial node may join the network and transmit corrupt packets on all its outgoing links, but the other links in the network remain free of error.

(Author footnote: The work of D. Wang was supported by an NSERC Undergraduate Summer Research Award. The work of D. Silva was supported by the CAPES Foundation, Brazil. The material in this paper was presented in part at the 45th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, September 2007. D. Wang was with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada. He is now with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139 USA (e-mail: dawang@mit.edu). D. Silva was with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada. He is now with the School of Electrical and Computer Engineering, State University of Campinas, Campinas, SP 13083-970, Brazil (e-mail: danilo@decom.fee.unicamp.br). F. R. Kschischang is with The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: frank@comm.utoronto.ca).)
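The catastrophic nature of a pollution attack follows directly from the linearity of the coding: any combination that gives nonzero weight to a corrupted packet is itself corrupted. A minimal sketch, assuming a hypothetical two-packet generation over F_13 (field size, packets, and topology are illustrative, not from the paper):

```python
# Sketch of a pollution attack: one corrupted packet contaminates every
# linear combination that involves it. All parameters are illustrative.

import random

Q = 13

def combine(packets, coeffs):
    return [sum(c * p[i] for c, p in zip(coeffs, packets)) % Q
            for i in range(len(packets[0]))]

honest = [[1, 2, 3], [4, 5, 6]]   # the source's generation
error = [1, 0, 0]                 # a single injected error packet
# One edge carries honest[1] plus the error; the rest are untouched.
polluted = [honest[0], combine([honest[1], error], [1, 1])]

# Any combination with nonzero weight on the corrupted packet differs
# from the corresponding clean combination by a nonzero multiple of the
# error, so every such received packet is polluted.
random.seed(0)
for _ in range(5):
    c = [random.randrange(1, Q), random.randrange(1, Q)]
    clean = combine(honest, c)
    dirty = combine(polluted, c)
    assert clean != dirty
```

In random linear network coding nearly all downstream combinations place nonzero weight on every upstream packet, which is why a single injection can reach every packet a sink receives.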
One approach, investigated in [5], [6], for dealing with the pollution problem is to apply cryptographic techniques to ensure the validity of received packets, permitting corrupted packets to be discarded by each node, and therefore preventing the contamination of other packets. This approach typically requires the use of large field and packet sizes, which leads to computationally expensive operations at the nodes and possibly to significant transmission delay. These requirements may be acceptable in the large-file-downloading scenario, but may be incompatible with delay-constrained applications such as streaming-media distribution. Another approach (and the one followed in this paper) is to look for end-to-end coding techniques that require little or no intelligence at the internal nodes. Jaggi et al. [7] show that, if C is the network capacity (per transmission-generation) and z is the min-cut from the adversary to a destination, then a rate of C − 2z packets per generation is achievable. The same rate can also be achieved using the subspace approach introduced by Kötter and Kschischang [8], [9]. A higher rate C − z can be achieved using a scheme proposed in [7] (see also [10]) if

