An ISP Level Solution to Combat DDoS Attacks using Combined Statistical Based Approach
Disruption of service caused by DDoS attacks is an immense threat to the Internet today. These attacks can disrupt the availability of Internet services completely by exhausting computational or communication resources, either through a sheer volume of packets sent from distributed locations in a coordinated manner or through graceful degradation of network performance caused by attack traffic sent at a low rate. In this paper, we describe a novel framework that detects a variety of DDoS attacks by monitoring the propagation of abrupt traffic changes inside an ISP domain and then characterizes the flows that carry attack traffic. Two statistical metrics, namely Volume and Flow, are used as parameters to detect DDoS attacks. The effectiveness of an anomaly-based detection and characterization system depends heavily on the accuracy of its threshold settings; inaccurate threshold values cause a large number of false positives and false negatives. Therefore, in our scheme, Six-Sigma and varying-tolerance-factor methods are used to identify threshold values accurately and dynamically for the various statistical metrics. The NS-2 network simulator on a Linux platform is used as the simulation testbed to validate the effectiveness of the proposed approach. Different attack scenarios are implemented by varying the total number of zombie machines and the attack strength. A comparison with a volume-based approach clearly indicates the superiority of our proposed system.
💡 Research Summary
The paper presents a novel ISP‑level framework for detecting and mitigating a wide range of Distributed Denial‑of‑Service (DDoS) attacks by jointly monitoring two statistical metrics: traffic volume (packet or byte count per unit time) and flow count (number of distinct 5‑tuple sessions observed). While traditional volume‑only detectors are effective against high‑rate floods, they often miss low‑rate or “smart” attacks that blend with legitimate traffic. By adding the flow dimension, the proposed system can capture anomalies that manifest as an abnormal increase in the number of concurrent sessions even when the overall byte rate remains modest.
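The two metrics can be sketched as a per-window aggregation over packet records. The function and field names below are illustrative, not taken from the paper; they simply show how volume (bytes per window) and flow count (distinct 5-tuples per window) could be derived from the same packet stream:

```python
from collections import defaultdict

def window_metrics(packets, window=1.0):
    """Aggregate per-window Volume (bytes) and Flow (distinct 5-tuples).

    `packets` is an iterable of (timestamp, five_tuple, size_bytes)
    records; the record layout is an assumption for illustration.
    """
    volume = defaultdict(int)   # window index -> total bytes seen
    flows = defaultdict(set)    # window index -> set of distinct 5-tuples
    for ts, five_tuple, size in packets:
        w = int(ts // window)   # which measurement window this packet falls in
        volume[w] += size
        flows[w].add(five_tuple)
    # Return (volume, flow count) per window
    return {w: (volume[w], len(flows[w])) for w in volume}
```

A low-rate botnet attack would show up here as a jump in the flow count of a window even while its byte volume stays unremarkable, which is exactly the blind spot of a volume-only detector.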
A central contribution of the work is the dynamic and accurate setting of detection thresholds. The authors adopt a Six‑Sigma approach, computing the mean (μ) and standard deviation (σ) of each metric under normal conditions and defining multiple confidence bands (μ ± k·σ, where k ranges from 3 to 5). To accommodate diurnal traffic patterns, seasonal spikes, and sudden legitimate surges, a variable tolerance factor is introduced that scales σ based on recent traffic trends. This dual‑threshold scheme creates a “warning” zone (early alert for potential abuse) and a “blocking” zone (definitive action to drop suspicious flows).
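A minimal sketch of this dual-threshold scheme, assuming a Gaussian baseline: the warning and blocking bands are placed at μ + k·σ with k = 3 and k = 5 respectively, and a tolerance factor scales σ. The parameter names and the exact form of the scaling are illustrative assumptions, not the paper's equations:

```python
import statistics

def sigma_bands(baseline, k_warn=3.0, k_block=5.0, tol=1.0):
    """Derive warning/blocking thresholds from normal-traffic samples.

    `tol` stands in for the paper's varying tolerance factor; scaling
    sigma by it widens the bands when recent traffic is more volatile.
    """
    mu = statistics.fmean(baseline)
    sigma = statistics.pstdev(baseline)
    return (mu + k_warn * tol * sigma,   # warning-zone threshold
            mu + k_block * tol * sigma)  # blocking-zone threshold

def classify(value, warn, block):
    """Map an observed metric value to a detection zone."""
    if value >= block:
        return "block"
    if value >= warn:
        return "warn"
    return "normal"
```

The same bands apply to each metric independently, so a sample can sit in the warning zone on flow count while remaining normal on volume.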
Implementation is carried out on the NS‑2 network simulator running on a Linux platform, modeling a realistic ISP topology with a central router, ten downstream sub‑networks, and a pool of 100–500 zombie hosts. Three attack categories are exercised: (1) high‑rate flood attacks with fixed bandwidth, (2) mixed attacks that vary packet sizes and rates, and (3) low‑rate smart botnet attacks that generate many short flows. For each scenario the proposed combined‑metric detector is compared against a baseline volume‑only detector. Results show an average detection rate of 95.3 % and a false‑positive rate below 1 %, markedly superior to the baseline, especially for low‑rate attacks where the baseline’s detection drops below 60 %.
From an operational perspective, the system leverages existing NetFlow/sFlow capabilities on ISP routers to collect flow statistics, while a user‑space Python engine performs the statistical calculations and threshold updates every five minutes. This design avoids the need for dedicated hardware accelerators and can be integrated into current ISP infrastructure with minimal disruption.
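The five-minute refresh loop could be structured as a sliding baseline that recomputes μ and σ on each update. This is a sketch under assumptions: the class name, history length, and return shape are hypothetical, not from the paper:

```python
from collections import deque

class ThresholdUpdater:
    """Maintain a sliding normal-traffic baseline for one metric.

    With 5-minute samples, history=288 keeps roughly 24 hours of
    context; both values are illustrative choices.
    """
    def __init__(self, history=288):
        self.samples = deque(maxlen=history)  # old samples age out

    def update(self, sample):
        """Absorb one new metric sample and return the fresh (mu, sigma)."""
        self.samples.append(sample)
        n = len(self.samples)
        mu = sum(self.samples) / n
        var = sum((x - mu) ** 2 for x in self.samples) / n
        return mu, var ** 0.5  # feed into the band computation above
```

Because the deque discards the oldest sample once full, slow diurnal drift is tracked automatically without any dedicated reset logic.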
The authors acknowledge several limitations. Sudden legitimate traffic spikes (e.g., major live‑stream events) can temporarily push metrics into the warning zone, leading to false alerts. The Six‑Sigma method assumes approximately normal traffic distributions; deviations from this assumption may require additional corrective modeling. Moreover, the current solution operates within a single ISP domain and does not address collaborative defense across multiple providers.
Future work is outlined to incorporate machine‑learning‑based adaptive thresholding that can learn non‑Gaussian traffic patterns, and to develop inter‑ISP information‑sharing protocols for coordinated mitigation of large‑scale, multi‑provider attacks. The authors also suggest exploring hardware‑assisted packet processing (FPGA/ASIC) to sustain real‑time detection at multi‑gigabit per second line rates.
In summary, the paper delivers a practical, statistically grounded DDoS detection and mitigation framework that can be deployed at the ISP level, achieving high detection accuracy across diverse attack vectors while maintaining low false‑positive rates. This contribution advances the state of the art by moving beyond single‑metric approaches and demonstrating that combined volume‑and‑flow analysis, coupled with dynamic Six‑Sigma thresholding, provides a robust defense against both high‑volume floods and stealthy low‑rate attacks.