Q-Feed - An Effective Solution for the Free-riding Problem in Unstructured P2P Networks
This paper presents a solution for reducing the ill effects of free-riders in decentralised unstructured P2P networks. An autonomous replication scheme is proposed to improve file availability and enhance system performance. Q-learning is employed to improve the accuracy of each peer's decision making. Based on its observed performance, each neighbour of a peer is awarded a rank, and a low-performing node is given several ways to improve its rank. Simulation results show that the Q-learning-based free-riding control mechanism effectively limits the services received by free-riders and encourages low-performing neighbours to improve their position. Popular files are autonomously replicated to nodes possessing the required parameters; this increase in the number of copies of popular files gives free-riders an opportunity to lift their position through active participation in file sharing. Q-Feed effectively manages queries from free-riders and significantly reduces network traffic.
💡 Research Summary
The paper tackles the persistent free‑rider problem in unstructured peer‑to‑peer (P2P) systems by introducing a novel framework called Q‑Feed. Free‑riders—nodes that consume resources without contributing—degrade overall performance, increase traffic, and reduce file availability. Existing countermeasures such as reputation systems, centralized incentives, or simple replication schemes either assume a structured overlay or fail to adapt to the highly dynamic nature of unstructured networks. Q‑Feed addresses these gaps by integrating reinforcement learning (specifically Q‑learning) with an autonomous replication mechanism, thereby providing a decentralized, self‑organizing solution.
Core Mechanism
Each peer maintains a Q‑table that records a Q‑value for every neighboring node. After each query round, the peer measures the neighbor’s response time, success ratio, and amount of data transferred. These metrics are transformed into a scalar reward, r, which is fed into the standard Q‑learning update:
Q(s,a) ← (1−α)·Q(s,a) + α·(r + γ·max_{a′} Q(s′,a′))
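The update above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the reward weights (`w1`–`w3`), the learning rate, the discount factor, and the example metric values are all hypothetical assumptions chosen only to show the shape of the computation.

```python
# Minimal sketch of the per-neighbor Q-value update described above.
# All numeric parameters here are illustrative assumptions, not values
# taken from the paper.

ALPHA = 0.1   # learning rate (assumed)
GAMMA = 0.8   # discount factor (assumed)

def reward(response_time, success_ratio, bytes_transferred):
    """Combine the three observed metrics into a scalar reward r.
    Weights are hypothetical: response time is penalized, the other
    two metrics are rewarded."""
    w1, w2, w3 = 0.4, 0.4, 0.2
    return -w1 * response_time + w2 * success_ratio + w3 * bytes_transferred

def update_q(q_table, neighbor, r, best_next_q=0.0):
    """Standard Q-learning update:
    Q <- (1 - alpha) * Q + alpha * (r + gamma * max_a' Q(s', a'))."""
    old = q_table.get(neighbor, 0.0)
    q_table[neighbor] = (1 - ALPHA) * old + ALPHA * (r + GAMMA * best_next_q)
    return q_table[neighbor]

# Example: a peer scores neighbor "n1" after one query round,
# using normalized (0..1) metric values.
q = {}
r = reward(response_time=0.2, success_ratio=0.9, bytes_transferred=1.0)
update_q(q, "n1", r)
```

After enough rounds, each peer's Q-table reflects how useful each neighbour has been, which is the basis for the ranking described above.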