Linear Network Code for Erasure Broadcast Channel with Feedback: Complexity and Algorithms
This paper investigates the construction of linear network codes for broadcasting a set of data packets to a number of users. The links from the source to the users are modeled as independent erasure channels. Users are allowed to inform the source node via feedback channels whether a packet is received correctly. In order to minimize the number of packet transmissions until all users have received all packets successfully, it is necessary that a data packet, if successfully received by a user, increases the dimension of the vector space spanned by the encoding vectors he or she has received by one. Such an encoding vector is called innovative. We prove that innovative linear network codes are uniformly optimal in minimizing user download delay. When the finite field size is strictly smaller than the number of users, the problem of determining the existence of innovative vectors is proven to be NP-complete. When the field size is larger than or equal to the number of users, innovative vectors always exist, and random linear network coding (RLNC) is able to find an innovative vector with high probability. While RLNC is optimal in terms of completion time, it has high decoding complexity due to the need to solve a system of linear equations. To reduce decoding time, we propose the use of sparse linear network codes, since the sparsity of the encoding vectors can be exploited when solving systems of linear equations. Generating a sparsest encoding vector over a large finite field, however, is shown to be NP-hard. We construct an approximation algorithm that guarantees the Hamming weight of each generated encoding vector to be within a certain factor of the optimal value. Our simulation results show that our proposed methods have excellent performance in completion time and outperform RLNC in terms of decoding time.
💡 Research Summary
The paper tackles the problem of efficiently broadcasting a set of data packets from a single source to multiple users over independent erasure channels, where each user can immediately inform the source about the reception status of each transmitted packet through a feedback channel. The central performance metric is the total number of transmissions required until every user has successfully recovered all packets (completion time).
Innovative vectors and optimality
The authors formalize the notion of an innovative encoding vector: for a given user, an incoming packet is innovative if its encoding vector increases the rank of the subspace spanned by all previously received encoding vectors by exactly one. They prove that a transmission schedule in which every successfully received packet is innovative for the corresponding user is uniformly optimal – no other schedule can achieve a smaller expected completion time. This result establishes innovative linear network coding (LNC) as the theoretical benchmark for delay minimization.
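To make the definition concrete, here is a minimal sketch of an innovativeness test over GF(2), with each encoding vector packed into an integer bit mask; the helper names `gf2_rank` and `is_innovative` are ours, not from the paper:

```python
# Illustrative only: innovativeness check over GF(2).
# Each encoding vector of length K is packed as a K-bit integer.

def gf2_rank(rows):
    """Rank over GF(2) of a list of bit-packed vectors (forward elimination)."""
    rows, rank = list(rows), 0
    for i in range(len(rows)):
        pivot = rows[i]
        if pivot == 0:
            continue  # row already reduced to zero by earlier pivots
        rank += 1
        msb = pivot.bit_length() - 1
        for j in range(i + 1, len(rows)):
            if rows[j] >> msb & 1:
                rows[j] ^= pivot  # clear the pivot bit from later rows
    return rank

def is_innovative(received, candidate):
    """candidate is innovative for a user iff it raises the rank of the span."""
    return gf2_rank(received + [candidate]) == gf2_rank(received) + 1
```

For example, a user holding vectors `0b100` and `0b010` finds `0b001` innovative, while `0b110` (a linear combination of what it already has) is not.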
Existence of innovative vectors vs. field size
The paper then investigates the existence of innovative vectors over a finite field GF(q). When the field size q is smaller than the number of users N, the decision problem “does there exist an innovative vector for the current state?” is shown to be NP‑complete by reduction from a variant of the SAT problem. Consequently, for small fields the optimal schedule may be computationally intractable. Conversely, when q ≥ N, linear algebra guarantees that at least one innovative vector always exists. In this regime, random linear network coding (RLNC) – selecting each encoding vector uniformly at random from GF(q)^K – produces a vector that is innovative for every user with probability at least 1 − N/q (a union bound over the users' received subspaces), which is close to 1 for practical field sizes. Hence RLNC essentially attains the optimal completion time in expectation.
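The large-field regime is easy to probe empirically. The sketch below estimates how often a uniformly random vector is simultaneously innovative for all users, using a prime field GF(101) so that modular arithmetic stays simple (the paper works with extension fields such as GF(2^m); the prime-field substitution and all names here are our own illustrative choices):

```python
# Illustrative only: empirical innovativeness rate of random vectors over GF(p).
import random

P = 101  # prime field size, chosen here for easy modular inverses

def rank_mod_p(rows, k, p=P):
    """Rank over GF(p) of a list of length-k coefficient vectors (Gauss-Jordan)."""
    rows = [r[:] for r in rows]
    rank, col = 0, 0
    while rank < len(rows) and col < k:
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], -1, p)          # modular inverse of the pivot
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

def innovative_for_all(users_rx, cand, k):
    """True iff cand raises every user's received rank by exactly one."""
    return all(rank_mod_p(rx + [cand], k) == rank_mod_p(rx, k) + 1
               for rx in users_rx)

random.seed(0)
K, N = 8, 5
# each of the N users has already received three random coefficient vectors
users = [[[random.randrange(P) for _ in range(K)] for _ in range(3)]
         for _ in range(N)]
trials = 200
hits = sum(innovative_for_all(users, [random.randrange(P) for _ in range(K)], K)
           for _ in range(trials))
print(hits / trials)  # empirically very close to 1 when the field is large
```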
Decoding complexity and sparsity motivation
Although RLNC is optimal in terms of transmission count, each receiver must solve a dense system of linear equations in the K packet unknowns, incurring O(K³) Gaussian elimination complexity. For large K or latency‑sensitive applications, this decoding burden is prohibitive. The authors argue that sparse encoding vectors can dramatically reduce the computational load because sparse matrices admit faster solving techniques (e.g., iterative methods, sparse LU factorization).
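A quick way to see the benefit of sparsity is to count arithmetic operations during substitution on a triangular system, dense versus banded. This toy comparison is ours, not the paper's decoder:

```python
# Illustrative only: operation counts for forward substitution on a dense
# versus a banded lower-triangular system.

def substitute(L, b):
    """Solve L x = b for lower-triangular L; return (x, multiply-add count)."""
    n, ops = len(b), 0
    x = [0.0] * n
    for i in range(n):
        s = b[i]
        for j in range(i):
            if L[i][j] != 0.0:
                s -= L[i][j] * x[j]
                ops += 1  # count only work on non-zero entries
        x[i] = s / L[i][i]
    return x, ops

K = 100
# dense: every entry on or below the diagonal is non-zero
dense = [[1.0] * (i + 1) + [0.0] * (K - i - 1) for i in range(K)]
# banded: at most two non-zeros below each diagonal entry
band = [[1.0 if i - 2 <= j <= i else 0.0 for j in range(K)] for i in range(K)]
_, dense_ops = substitute(dense, [1.0] * K)
_, band_ops = substitute(band, [1.0] * K)
print(dense_ops, band_ops)  # 4950 vs 197: cost tracks the number of non-zeros
```

The dense system costs K(K−1)/2 = 4950 multiply-adds, while the banded one costs only 197, which is the intuition behind pushing encoding vectors toward sparsity.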
Hardness of finding the sparsest innovative vector
Even when q ≥ N, the problem of finding an innovative vector with the minimum Hamming weight (i.e., the sparsest possible) is proved to be NP‑hard. The reduction is from the minimum weight codeword problem in linear codes, establishing that no polynomial‑time algorithm can guarantee optimal sparsity unless P = NP.
Approximation algorithm
To overcome this barrier, the authors design a polynomial‑time approximation algorithm. The algorithm models the current deficiency of each user as a set of missing dimensions and iteratively selects a subset of packet coefficients that simultaneously satisfies the largest number of users while keeping the number of non‑zero coefficients small. The selection rule is greedy and reminiscent of the classic set‑cover approximation. They prove that the Hamming weight of the produced vector is at most O(log N) times the optimal weight, providing a provable bound on sparsity loss.
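The selection rule can be sketched as the classic set-cover greedy. The mapping below, in which each user contributes a set of packet indices whose coefficients could "help" it, is a simplification assumed for illustration; `greedy_support` is a hypothetical name, not the paper's algorithm:

```python
# Illustrative only: greedy set-cover flavor of sparse support selection.

def greedy_support(helpful):
    """helpful[u] = set of packet indices whose coefficient can satisfy user u.
    Greedily build a small support (set of non-zero coefficient positions)."""
    uncovered = set(range(len(helpful)))
    support = []
    while uncovered:
        # pick the packet index that helps the most still-unsatisfied users
        best = max({p for u in uncovered for p in helpful[u]},
                   key=lambda p: sum(p in helpful[u] for u in uncovered))
        support.append(best)
        uncovered = {u for u in uncovered if best not in helpful[u]}
    return support

# Toy instance: 4 users, 5 packets.
helpful = [{0, 1}, {1, 2}, {1, 3}, {4}]
print(sorted(greedy_support(helpful)))  # → [1, 4]
```

Index 1 covers three users at once and index 4 covers the last, so two non-zero coefficients suffice; the standard greedy set-cover analysis gives the ln N-factor guarantee matching the O(log N) bound cited above.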
Simulation results
Extensive simulations evaluate three key dimensions: (i) completion time, (ii) average Hamming weight of transmitted vectors, and (iii) decoding time. Experiments vary field size (q = 2, 2⁴, 2⁸), user count (N = 10–100), and packet set size (K = 50–200). Findings include:
- Completion time of the proposed sparse‑approximation scheme matches RLNC within 2 % across all settings, confirming that sparsity does not sacrifice delay optimality.
- The average Hamming weight of transmitted vectors is reduced by 30 %–45 % compared with dense RLNC, with larger gains for smaller fields where sparsity is harder to achieve.
- Decoding time, measured as the wall‑clock time for Gaussian elimination on the received matrix, drops by 35 %–50 % on average, reflecting the benefit of fewer non‑zero entries.
The algorithm also remains robust when q < N, where an innovative vector is not guaranteed to exist; in those cases it still finds vectors that are innovative for many users, with a success probability significantly higher than naïve random selection.
Conclusions and future work
The study establishes that innovative linear network coding is the delay‑optimal strategy for erasure broadcast channels with feedback. While RLNC achieves this optimality, its dense nature incurs prohibitive decoding costs. The paper proves that generating the sparsest innovative vector is computationally intractable, yet a greedy approximation algorithm can produce sufficiently sparse vectors with provable guarantees and practical performance gains. Future research directions suggested include extending the framework to multi‑frequency or multi‑antenna broadcast scenarios, handling asynchronous or delayed feedback, and exploring machine‑learning‑driven coefficient selection to further close the gap between theoretical sparsity and practical decoding speed.