Comparative Study Of Congestion Control Techniques In High Speed Networks
Congestion occurs in a network when aggregate demand exceeds the available capacity of its resources. Congestion worsens as network speeds increase, so new and effective congestion control methods are needed, especially to handle the bursty traffic of today's very high speed networks. Since the late 1990s numerous schemes, e.g. [1]…[10], have been proposed. This paper presents a comparative study of different congestion control schemes based on key performance metrics. An effort has been made to judge, against these metrics, the performance of a Maximum Entropy (ME) based solution for steady-state GE/GE/1/N censored queues with a partial buffer sharing scheme.
💡 Research Summary
The paper presents a comprehensive comparative study of congestion control techniques applicable to modern high‑speed networks, where traffic is increasingly bursty and delay‑sensitive. It begins by outlining the growing demand on Internet and wireless infrastructures, emphasizing that traditional congestion control mechanisms designed for low‑speed environments are inadequate for today’s high‑throughput, multimedia‑heavy traffic. The authors categorize congestion control into two broad paradigms: open‑loop (pre‑emptive bandwidth reservation) and closed‑loop (feedback‑driven rate adjustment, typified by TCP).
Five performance metrics are defined as the basis for evaluation: throughput, mean queue length, packet loss probability, link utilization, and end‑to‑end latency. These metrics capture both efficiency (throughput, link utilization) and quality of service (delay, loss).
The paper then surveys the most widely deployed mechanisms:
- Drop‑Tail – a simple FIFO queue that drops packets only when the buffer is full. While easy to implement, it suffers from lock‑out and persistent full‑queue conditions, leading to high latency and loss.
- AIMD (Additive Increase/Multiplicative Decrease) – the classic TCP window control algorithm that linearly increases the congestion window and halves it upon loss. It works well under homogeneous RTTs but can be unfair when flows have diverse round‑trip times.
- DECbit – routers set a congestion‑indication bit when the average queue length exceeds a threshold; sources reduce their window multiplicatively once a sufficient fraction of acknowledged packets carry the bit. Its reliance on a short‑term average makes it sensitive to rapid traffic spikes.
- RED (Random Early Detection) – computes an exponentially weighted moving average of queue size and probabilistically drops or marks packets before the buffer fills. RED mitigates full‑queue problems but its performance is highly dependent on correctly tuned parameters (min‑th, max‑th, max‑p).
- RED Variants – ARED (Adaptive RED) dynamically adjusts max‑p based on observed queue behavior; ECN (Explicit Congestion Notification) marks packets instead of dropping them, requiring end‑to‑end support; Blue uses packet loss and link utilization as control variables rather than queue length.
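The AIMD window dynamics described above can be sketched in a few lines. This is an illustrative trace only; the function name, the loss-event driver, and the per-round granularity are assumptions for the example, not the paper's notation.

```python
def aimd_trace(losses, cwnd=1.0, incr=1.0, decr=0.5, rounds=10):
    """Illustrative AIMD congestion-window trace.

    losses: set of round indices at which a loss is observed
            (a stand-in for real loss detection).
    """
    trace = []
    for r in range(rounds):
        if r in losses:
            # multiplicative decrease: halve the window on loss
            cwnd = max(1.0, cwnd * decr)
        else:
            # additive increase: grow by one segment per round (RTT)
            cwnd += incr
        trace.append(cwnd)
    return trace
```

For example, `aimd_trace({3}, rounds=6)` grows the window linearly for three rounds, halves it at the loss, then resumes linear growth; the characteristic sawtooth emerges when losses recur.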
The authors point out that each of these schemes either requires delicate parameter configuration (RED/ARED), incurs additional protocol overhead (ECN), or cannot fully prevent lock‑out and latency spikes (Drop‑Tail, AIMD).
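RED's sensitivity to parameter configuration is easiest to see in its averaging and early-drop logic. The sketch below is a minimal illustration of that logic, not the paper's implementation; the class and parameter names (`min_th`, `max_th`, `max_p`, `wq`) follow common RED descriptions but their default values here are arbitrary.

```python
import random

class RedQueueSketch:
    """Minimal sketch of RED's EWMA averaging and probabilistic early drop."""

    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, wq=0.002):
        self.min_th = min_th    # below this average: always enqueue
        self.max_th = max_th    # at or above this average: always drop/mark
        self.max_p = max_p      # maximum early-drop probability
        self.wq = wq            # EWMA weight for the queue average
        self.avg = 0.0

    def should_drop(self, queue_len):
        # exponentially weighted moving average of instantaneous queue length
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # drop probability rises linearly between the two thresholds
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
    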
To address these shortcomings, the paper introduces a Maximum Entropy (ME) based solution applied to a steady‑state GE/GE/1/N censored queue with a partial buffer sharing (PBS) policy. The GE/GE/1/N model captures generalized exponential (GE) inter‑arrival and service time distributions, reflecting the variability observed in high‑speed routers. PBS allocates a shared buffer among multiple traffic classes, giving higher‑priority flows preferential access while still utilizing the entire buffer pool efficiently.
The ME approach maximizes the entropy of the system’s state distribution subject to known constraints (arrival rates, service rates, buffer capacity). This yields a closed‑form approximation of the stationary probabilities without enumerating the full Markov chain, thereby avoiding state‑space explosion. By estimating the probability of each buffer occupancy level, the model can predict throughput, loss probability, and average delay analytically.
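To illustrate the analytic payoff, the sketch below computes the stationary distribution and the derived metrics for the simpler M/M/1/N special case, where the maximum-entropy solution under the usual constraints coincides with the exact truncated-geometric form p(n) ∝ ρⁿ. The function name and interface are assumptions for the example; the paper's GE/GE/1/N model with PBS generalizes this to bursty (GE) arrivals and multiple classes.

```python
def mm1n_metrics(lam, mu, N):
    """Stationary probabilities and metrics for an M/M/1/N queue.

    lam: arrival rate, mu: service rate, N: buffer capacity.
    Returns (probabilities, loss probability, throughput, mean queue length).
    Illustrative special case of the ME product-form solution.
    """
    rho = lam / mu
    weights = [rho ** n for n in range(N + 1)]
    Z = sum(weights)                       # normalizing constant
    p = [w / Z for w in weights]
    loss = p[N]                            # arriving customer blocked when full
    throughput = lam * (1 - loss)          # rate of admitted (served) traffic
    mean_q = sum(n * pn for n, pn in enumerate(p))
    return p, loss, throughput, mean_q
```

Once the occupancy probabilities are in closed form, loss, throughput, and mean delay (via Little's law) follow directly, with no enumeration of the underlying Markov chain, which is the state-space advantage the ME approach claims.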
Simulation results compare the ME‑PBS scheme against Drop‑Tail, AIMD, DECbit, RED, ARED, ECN, and Blue under identical traffic loads and buffer sizes. Key findings include:
- Throughput – ME consistently achieves 5‑10 % higher aggregate throughput, especially under heavy bursty traffic, because it prevents the buffer from entering a full‑queue state.
- Mean Queue Length & Latency – Average queue occupancy remains moderate, leading to end‑to‑end delays 30 % lower than Drop‑Tail and 15 % lower than RED‑based schemes.
- Packet Loss Probability – Loss rates drop below 2 % in most scenarios, compared to up to 8 % for Drop‑Tail and 5 % for RED when buffers are saturated.
- Link Utilization – Link utilization stays above 95 % across all tested loads, indicating that the PBS policy efficiently shares buffer space among classes without under‑utilizing the link.
The authors argue that the ME‑based analysis provides a unified framework that captures the statistical nature of high‑speed traffic while offering practical guidance for buffer management. Unlike RED‑type algorithms that require continual parameter tuning, the ME model adapts automatically to changes in traffic intensity because its entropy maximization inherently reflects the current distribution of arrivals and services.
In conclusion, the paper demonstrates that the Maximum Entropy solution, combined with partial buffer sharing, outperforms traditional congestion control mechanisms on all evaluated metrics. It offers a promising direction for designing robust, high‑performance congestion control in next‑generation networks, particularly for latency‑sensitive applications such as video streaming, online gaming, and real‑time communications. The authors suggest future work on extending the model to multi‑router topologies, integrating adaptive parameter estimation, and implementing the scheme on real router hardware for empirical validation.