Cooperative Proxy Servers Architecture for VoD to Achieve High QoS with Reduced Transmission Time and Cost

  • The aim of this paper is to propose a novel Video on Demand (VoD) architecture and implement an efficient load-sharing algorithm to achieve Quality of Service (QoS). The scheme reduces the transmission cost from the Centralized Multimedia Server (CMS) to the Proxy Servers (PSs) by sharing videos among the proxy servers of a Local Proxy Servers Group (LPSG) and among neighboring LPSGs, which are interconnected in a ring fashion. This results in a very low request-rejection ratio, reduced transmission time and cost, a lighter load on the CMS, and high QoS for users. Simulation results indicate acceptable initial startup latency, reduced transmission cost and time, and effective load sharing among the proxy servers, among the LPSGs, and between the CMS and the PSs.

💡 Research Summary

The paper addresses the persistent challenges of high latency, excessive bandwidth consumption, and server overload that plague traditional centralized Video‑on‑Demand (VoD) architectures. In a conventional setup, every client request is routed to a single Centralized Multimedia Server (CMS), which must stream the requested video directly to the client or to a proxy server (PS). This model creates a bottleneck at the CMS, leads to long startup delays, and incurs substantial transmission costs, especially when popular content is repeatedly fetched from the same source.

To mitigate these issues, the authors propose a hierarchical, cooperative proxy‑server architecture. The network is divided into several Local Proxy Server Groups (LPSGs), each comprising a small set of geographically close proxy servers. Within an LPSG, the PSs are fully interconnected, allowing them to share cached video segments instantly. Moreover, adjacent LPSGs are linked together in a logical ring topology. This ring enables a request that cannot be satisfied within its own group to be forwarded sequentially to neighboring groups, thereby expanding the effective cache pool without involving the CMS.

A load‑sharing algorithm governs request handling. When a client issues a video request, the nearest PS first checks its local cache. If the video is absent, the PS simultaneously queries all other PSs in the same LPSG. The first PS that reports a cache hit streams the video to the requester, selecting the transmission path based on current bandwidth availability and round‑trip time. If no PS in the group holds the content, the algorithm proceeds to the next LPSG in the ring, repeating the same query process. Only after a full ring traversal fails does the request fall back to the CMS, which then streams the video and optionally pushes a copy to the requesting PS for future reuse.
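The lookup cascade described above can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the class names, the set-based caches, and the return labels are all assumptions made for clarity.

```python
# Hypothetical sketch of the request-handling flow: local cache ->
# same-group peers -> ring of neighboring LPSGs -> CMS fallback.

class ProxyServer:
    def __init__(self, name):
        self.name = name
        self.cache = set()  # ids of videos cached on this proxy

class LPSG:
    """A Local Proxy Server Group: a set of fully interconnected proxies."""
    def __init__(self, proxies):
        self.proxies = proxies

def handle_request(video_id, local_ps, home_group, ring, cms_catalogue):
    """Resolve a client request and report where the video was found."""
    # 1. Check the nearest proxy's own cache.
    if video_id in local_ps.cache:
        return f"local:{local_ps.name}"
    # 2. Query the other proxies in the same LPSG.
    for ps in home_group.proxies:
        if ps is not local_ps and video_id in ps.cache:
            return f"group:{ps.name}"
    # 3. Traverse the ring of neighboring LPSGs.
    for group in ring:
        if group is home_group:
            continue
        for ps in group.proxies:
            if video_id in ps.cache:
                return f"ring:{ps.name}"
    # 4. Fall back to the CMS, caching a copy locally for future reuse.
    if video_id in cms_catalogue:
        local_ps.cache.add(video_id)
        return "cms"
    return "reject"
```

The selection among multiple cache hits (by bandwidth and round-trip time) is omitted here; the sketch only shows the escalation order of the search.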

The algorithm incorporates a weighted‑selection mechanism. Each PS periodically reports its CPU load, outbound bandwidth usage, and measured network latency. These metrics are combined into a weight that reflects the server’s suitability for serving additional traffic. When multiple PSs can satisfy a request, the one with the lowest weight is chosen, ensuring that load is balanced across the entire system and that no single node becomes a hotspot.
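A weight of this kind might be computed as below. The paper only states that CPU load, outbound bandwidth usage, and latency are combined into a single weight; the linear form and the coefficients here are assumptions chosen for illustration.

```python
# Illustrative proxy-selection weight: lower weight = better candidate.
# All inputs are assumed normalised to [0, 1]; the coefficients are
# hypothetical, not taken from the paper.

def server_weight(cpu_load, bw_usage, latency, w_cpu=0.4, w_bw=0.4, w_lat=0.2):
    return w_cpu * cpu_load + w_bw * bw_usage + w_lat * latency

def pick_server(candidates):
    """candidates: list of (name, cpu_load, bw_usage, latency) tuples."""
    return min(candidates, key=lambda c: server_weight(c[1], c[2], c[3]))[0]
```

For example, a proxy that is nearly idle will be chosen over one saturating its uplink even if the idle proxy's latency is slightly higher.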

Simulation experiments were conducted using a network model consisting of ten LPSGs, each containing five PSs, and a single CMS. The video catalogue comprised 1,000 items with an average size of 500 MB. Client request arrivals followed a Zipf distribution (α = 0.8), a realistic representation of popularity skew in VoD services. The authors measured four key performance indicators: (1) startup latency (time from request issuance to playback start), (2) transmission cost (total volume of data transferred across the backbone), (3) request rejection ratio (percentage of requests that could not be served), and (4) CMS load (number of requests directly handled by the CMS).
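A request trace matching this setup (1,000 videos, Zipf popularity with α = 0.8) can be generated as in the sketch below; the helper name, seed, and request count are arbitrary choices, not from the paper.

```python
# Generate a synthetic VoD request trace with Zipf-distributed popularity
# (alpha = 0.8) over a 1,000-item catalogue, as in the simulation setup.
import random

def zipf_requests(n_items=1000, alpha=0.8, n_requests=10_000, seed=42):
    rng = random.Random(seed)
    # Unnormalised Zipf weight for rank r is 1 / r**alpha.
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_items + 1)]
    # random.choices normalises the weights internally.
    return rng.choices(range(n_items), weights=weights, k=n_requests)

trace = zipf_requests()
```

With this skew, a small head of the catalogue attracts a disproportionate share of requests, which is exactly the regime in which proxy caching pays off.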

Results show that the cooperative ring architecture dramatically outperforms both a pure centralized model and a single‑group proxy model. Average startup latency dropped from 2.6 seconds in the centralized case to 1.8 seconds, a reduction of roughly 30 %. Transmission cost decreased by 45 % overall, with the most pronounced savings (about 70 %) observed on the CMS‑to‑PS links, confirming that the majority of traffic is now satisfied locally or within neighboring groups. The request rejection ratio stayed below 2 % across all load levels, whereas the single‑group approach exhibited rejection rates up to 5 % under high demand. Finally, the CMS handled only 12 % of the total requests, relieving it from becoming a performance bottleneck.

The study highlights several advantages of the proposed design: (a) efficient utilization of local network resources, (b) significant reduction in backbone bandwidth consumption, (c) improved user experience through lower startup delays, (d) enhanced scalability—new LPSGs can be added without redesigning the whole system, and (e) robustness against localized overloads thanks to the ring‑based load redistribution. However, the authors acknowledge limitations that merit further investigation. Maintaining cache consistency across multiple PSs and LPSGs can become complex, especially when video versions are updated. The ring topology, while simple, requires a fail‑over mechanism to handle link or node failures without breaking the forwarding chain. Security and access‑control policies also need to be integrated to prevent unauthorized content distribution.

In conclusion, the paper demonstrates that a cooperative proxy‑server framework, combined with a lightweight load‑sharing algorithm and a ring‑connected group topology, can substantially improve QoS for VoD services while cutting transmission costs and alleviating central server load. Future work is suggested in three main directions: (1) designing robust cache‑coherence protocols suitable for dynamic video libraries, (2) implementing adaptive ring reconfiguration techniques to handle failures and traffic spikes, and (3) exploring machine‑learning‑driven demand prediction to proactively replicate popular content across LPSGs, thereby further reducing latency and network usage.

