BLADE: Adaptive Wi-Fi Contention Control for Next-Generation Real-Time Communication



Fengqian Guo¹∗  Yuhan Zhou²,¹∗  Longwei Jiang¹  Congcong Miao¹  Yuxin Liu³  Chenren Xu²  Hancheng Lu⁴  Chang Wen Chen⁵  Yaxiong Xie³†  Honghao Liu¹†

¹Tencent  ²Peking University  ³University at Buffalo, SUNY  ⁴Institute of Artificial Intelligence, China  ⁵The Hong Kong Polytechnic University

Abstract

Next-generation real-time communication (NGRTC) applications, such as cloud gaming and XR, demand consistently ultra-low latency. However, through our first large-scale measurement, we find that despite the deployment of edge servers, dedicated congestion control, and loss recovery mechanisms, cloud gaming users still experience long-tail latency in Wi-Fi networks. We further identify that Wi-Fi last-mile access points (APs) serve as the primary latency bottleneck. Specifically, short-term packet delivery droughts, caused by fundamental limitations in Wi-Fi contention control standards, are the root cause. To address this issue, we propose BLADE, an adaptive contention control algorithm that dynamically adjusts the contention windows (CW) of all Wi-Fi transmitters based on the channel contention level in a fully distributed manner. Our ns-3 simulations and real-world evaluations with commercial Wi-Fi APs demonstrate that, compared to standard contention control, BLADE reduces Wi-Fi packet transmission tail latency by over 5× under heavy channel contention and significantly stabilizes MAC throughput while ensuring fast and fair convergence. Consequently, BLADE reduces the video stall rate in cloud gaming by over 90%.

1 Introduction

Emerging next-generation real-time communication (NGRTC) systems such as cloud gaming [1, 2] and Extended Reality (XR) [3, 4] are revolutionizing how users experience interactive digital content.
These applications have been rapidly adopted across both entertainment and business, with the global cloud gaming market alone growing from $1,286.6 million in 2022 to a projected $13.6 billion by 2028 [5]. Such next-generation real-time streaming applications require both high bandwidth (e.g., ~30 Mbps for cloud gaming [6] and ~50 Mbps for XR [7]) and consistently low latency to maintain their interactive nature and deliver immersive user experiences [8, 9].

Long-tail Latency. Beyond average latency and throughput, NGRTC applications are extremely sensitive to long-tail latency: even a single latency spike, i.e., latency larger than 200 ms, can trigger a video stall, catastrophically disrupting the user's immersive experience [8-13]. The stakes are high; prior work has shown that a mere 0.5% increase in stall rate can reduce user retention time by a third [8, 9]. Consequently, the viability of this burgeoning multi-billion-dollar market hinges on delivering data with near-perfect punctuality.

∗ Equal contribution. † Corresponding authors.

To prevent these disruptive stalls, the underlying network must provide highly predictable, timely delivery. With a video stall defined as any frame delivery taking longer than 200 ms, NGRTC applications implicitly demand that the network deliver nearly every frame within this strict budget; even failures at the 99.99th percentile can severely degrade the user experience. This imposes a stringent requirement for predictable network performance. The Internet's most prevalent last-hop technology, Wi-Fi, is theoretically incapable of providing this guarantee. Unlike cellular networks, which leverage centralized scheduling to allocate resources and manage latency [14, 15], Wi-Fi's design is fundamentally distributed and uncoordinated.
Its reliance on a contention-based channel access protocol (CSMA/CA) means that devices must independently compete for transmission opportunities without any central control. This lack of coordination makes it impossible to guarantee timely packet delivery, as latency can vary dramatically with the instantaneous level of channel contention.

Our large-scale measurement study on the Tencent START cloud gaming platform confirms this theoretical limitation (§3.1). The study, which analyzed 336 million video frames from 200 commercial Wi-Fi access points deployed nationwide, reveals that while the wired portion of the network (from server to access point) maintains low latency, staying below 200 ms even at the 99.99th percentile, the total end-to-end latency can exceed 1000 ms when the wireless last hop is included. This empirical evidence dictates that the solution must reside at the Wi-Fi last hop. This is not a problem that traditional end-to-end congestion control can solve: congestion control only mitigates queuing delay, whereas the catastrophic latency spikes are brief, intermittent, and localized entirely within the Wi-Fi last hop. Therefore, resolving the random jitter of over-the-air transmission is critical for NGRTC. Our research focuses on the core cause of video stalls: the Wi-Fi last hop. We address this by designing a deployable Media Access Control (MAC)-layer contention-control algorithm that reduces the tail latency of the last hop and enhances the smoothness of application-layer transmission.

Packet-Delivery Drought and its Root Cause. Drilling down into the Wi-Fi bottleneck, our measurements reveal that these latency spikes are caused by a specific, recurring failure mode we term a packet-delivery drought: a 200 ms interval during which an access point fails to deliver a single packet to a user.
This MAC-layer phenomenon is the direct cause of the application-layer failure: video stalls. Our central empirical finding is that 86.19% of all video stalls are directly correlated with the occurrence of at least one such drought, establishing a near one-to-one mapping between the two.

The root cause of these droughts is not slow physical transmission or a lack of channel capacity. Our measurements confirm that the time spent on physical packet transmission (PHY TX) is consistently brief, with a 99.99th-percentile delay below 5 ms. In stark contrast, the contention interval, i.e., the time a device spends waiting for channel access, exhibits an alarming heavy tail, exceeding 200 ms at the 99.99th percentile. This delay originates from a fundamental flaw in the IEEE 802.11 CSMA/CA protocol: short-term unfairness driven by its exponential backoff mechanism. Specifically, after a collision, a device doubles its contention window (CW), creating a temporary but severe priority asymmetry. Other devices with smaller CWs can repeatedly seize the channel while the device with the large CW is forced to wait, its backoff counter perpetually frozen. In congested environments, these interruptions can extend a simple backoff from milliseconds to hundreds of milliseconds, starving the device of access and creating a packet-delivery drought. This is a failure of micro-fairness, not aggregate channel efficiency.

A conventional approach is to leverage existing Wi-Fi Quality of Service (QoS) mechanisms, such as the priority queues of the Enhanced Distributed Channel Access (EDCA) mechanism defined in the IEEE 802.11e standard. On the one hand, encrypted traffic has become the dominant form of Internet traffic [16], making the identification of traffic for specific priority levels extremely challenging.
On the other hand, in dense network environments, concurrent contention for the channel by multiple high-priority traffic flows merely intensifies channel competition, leading to more frequent packet collisions and thereby exacerbating the very tail-latency issues that such QoS mechanisms were originally designed to mitigate. This demonstrates that a simple priority scheme is insufficient. What is needed is a cooperative mechanism that allows all devices to adapt to the actual level of channel contention.

Solution: Predictable and Cooperative Contention. To eliminate these droughts, we must replace Wi-Fi's flawed signaling with a mechanism that enables predictable, cooperative behavior. We present BLADE, an adaptive contention control algorithm that fundamentally changes how devices perceive and react to network congestion. The critical flaw in the standard protocol is its reliance on a local and reactive signal: a collision. While all devices listen before talking using clear channel assessment (CCA), they only adjust their behavior aggressively after their own transmission fails. This signal is local: uninvolved devices remain oblivious to the contention event; and reactive: it addresses congestion only after it has already caused a failure. This leads to uncoordinated responses where some devices are forced to wait while others, with smaller contention windows, continue to seize the channel, creating the priority asymmetries that cause droughts.

BLADE solves this by deriving a universal and proactive signal from the same CCA mechanism. Instead of waiting for a personal failure, each device continuously measures the microscopic access rate (MAR): the ratio of successful transmission events (from any device) to the number of idle time slots it observes.
Because the protocol forces every device to defer to any ongoing transmission, devices within the same carrier-sense domain typically observe consistent busy/idle slot dynamics. (Hidden terminals and partial visibility can violate this assumption; we make this explicit and discuss mitigation via RTS/CTS in §4.2.1 and §H.) This provides a consistent, shared, and quantitative measure of the channel's current contention level. By shifting from a local, reactive signal to a universal, proactive one, BLADE enables all devices to act cooperatively. They adjust their contention windows based on a shared understanding of network congestion, preventing the short-term unfairness that causes packet-delivery droughts in the first place. BLADE is a MAC-layer, transmitter-side mechanism. In our primary, downlink-dominated cloud gaming setting, deploying BLADE on APs (the dominant transmitters) already addresses long-tail contention among neighboring APs and does not require client STA modifications; when uplink traffic is significant, an AP can optionally advertise contention parameters via standards-compliant EDCA parameter sets, or STAs can run BLADE locally.

BLADE achieves this cooperative behavior through two core mechanisms. First, it introduces the universally observable contention signal: MAR. Second, it employs a hybrid increase multiplicative decrease (HIMD) policy that uses MAR as feedback, enabling co-channel devices to collectively and dynamically adapt their contention windows to match the competition level. This allows the network to converge on fair and efficient operation without explicit coordination, proactively preventing the priority asymmetries that cause droughts.

We evaluate BLADE through extensive real-world experiments with commercial Wi-Fi APs and ns-3 simulations. The results demonstrate that BLADE directly remedies the root cause of tail latency.
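To make the two mechanisms concrete, the sketch below shows a MAR computation and one HIMD step in Python. This is a minimal sketch under assumed constants: the text does not specify BLADE's target MAR, gain factors, or the exact shape of the hybrid increase, so TARGET_MAR, ALPHA, and BETA are illustrative placeholders, not BLADE's actual parameters.

```python
# Minimal sketch of BLADE's two building blocks. TARGET_MAR, ALPHA,
# and BETA are illustrative placeholders, NOT the paper's parameters.

CW_MIN, CW_MAX = 15, 1023   # standard 802.11 contention-window bounds
TARGET_MAR = 0.05           # assumed setpoint: ~1 success per 20 idle slots
ALPHA = 16                  # assumed increase step (slots)
BETA = 0.5                  # assumed multiplicative-decrease factor

def microscopic_access_rate(tx_successes: int, idle_slots: int) -> float:
    """MAR: successful transmissions observed (from ANY device) divided
    by the number of idle slots observed over the same window."""
    if idle_slots == 0:
        return float("inf")  # no idle slots seen: channel is saturated
    return tx_successes / idle_slots

def himd_update(cw: int, mar: float) -> int:
    """One HIMD step: grow the CW when the shared MAR signal indicates
    heavy contention, shrink it multiplicatively when contention eases."""
    if mar > TARGET_MAR:
        cw += ALPHA          # contention high: back off more
    else:
        cw = int(cw * BETA)  # contention low: reclaim channel time
    return max(CW_MIN, min(CW_MAX, cw))
```

Because every co-channel device computes MAR from the same carrier-sensed busy/idle pattern, all devices feed approximately the same input into the same update rule, which is what lets their windows stay consistent without explicit coordination.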
Compared to standard contention control, BLADE reduces Wi-Fi packet transmission tail latency by over 5× under heavy channel contention. This MAC-layer improvement translates directly into application-level benefits: for cloud gaming, BLADE reduces the 99th-percentile video frame delivery latency to ≤0.5× the baseline and, consequently, cuts the video stall rate by over 90%.

[Figure 1: The system architecture of next-generation real-time streaming over wireless LAN.]

Contributions. This paper makes the following contributions:
• We conduct a large-scale measurement of a commercial cloud gaming service and identify that packet-delivery droughts in the Wi-Fi last hop, caused by fundamental limitations in standard contention control, are the root cause of high tail latency for NGRTC applications.
• We design and implement BLADE, an adaptive contention control algorithm that dynamically and cooperatively adjusts the contention windows of all transmitters based on a novel, universally observable contention signal.
• We evaluate BLADE using both simulations and commercial Wi-Fi APs, demonstrating that it significantly reduces Wi-Fi packet transmission latency and stabilizes throughput, ultimately reducing the video stall rate by over 90%.

Ethical claim. All user data collected in this work were obtained with explicit permission from the users and are anonymized to protect their privacy. This work does not raise any ethical concerns and conforms to the IRB policies of the authors' institutions.

2 Background

To understand the latency challenges facing NGRTC applications, this section provides essential background. We first describe the architecture of these systems and their strict performance requirements.
We then examine the details of Wi-Fi's contention-based channel access, which is the root of the performance bottlenecks we address.

2.1 Next-Generation RTC

System Architecture. The inherent contradiction between computing-power demands and terminal portability has given rise to the core technical paradigm of decoupling computation and interaction: computationally intensive tasks are processed in the cloud, while terminals focus on low-latency local interaction and content presentation. Typical applications such as cloud extended reality, AI smart glasses, and cloud gaming are all built on this paradigm. Compared with traditional Real-Time Communication (RTC) scenarios like live streaming and video conferencing, these immersive interactive applications impose much more stringent performance requirements on network transmission [8, 17, 18]: the end-to-end latency needs to be reduced from hundreds of milliseconds to tens of milliseconds, and the bit rate must be increased from several megabits per second to tens of megabits per second. We define such dedicated network transmission demands for immersive real-time interaction as Next-Generation Real-Time Communication (NGRTC).

[Figure 2: Wi-Fi frame exchange sequence.]

Our analysis is based on the Tencent START cloud gaming service.¹ A typical NGRTC-over-WLAN system comprises four key components: the cloud server, the WAN, the Wi-Fi Access Point (AP), and the user device, as illustrated in Fig. 1. The cloud server generates video frames at a fixed frame rate, and each frame is packetized into multiple packets for network transmission. For example, at 60 FPS, a new frame is generated and transmitted every 16.7 ms. These packets traverse the WAN to reach the Wi-Fi AP, which then delivers them wirelessly to the user's device.
The system operates bidirectionally: upon receiving video frames, the user device sends acknowledgments (ACKs) along with interactive commands (e.g., character movement or action triggers in mobile games) back to the cloud server. These user inputs then influence the generation of subsequent video frames.

QoE Requirements on WLAN. The QoE of NGRTC critically depends on two key parameters: video quality [8] and interaction smoothness [8-10, 19, 20]. For better video quality, these applications stream at much higher bitrates (e.g., over 30 Mbps for cloud gaming [8] and over 200 Mbps for VR [21]) compared to traditional RTC applications. For smooth interaction, they require a higher frame rate (i.e., 60 to 144 FPS) and demand consistently low video frame delivery latency. Tail latency is particularly crucial: elevated tail latency, even at the 99.99th percentile, can directly cause frequent video freezes and stalls [8, 9], significantly degrading user experience; a recent study [9] has shown that even a minor 0.5% increase in stall rate leads to a dramatic 33% reduction in user retention time. Thus, although Wi-Fi 6/7 offers theoretical rates of 9.6 Gbps and 46 Gbps, far exceeding NGRTC's bitrate requirements, NGRTC's sensitivity to tail latency imposes higher demands on the real-time performance and stability of wireless network transmission.

2.2 WLAN Channel Access

In this section, we introduce Wi-Fi's contention-based channel access and packet transmission procedures.

Channel Access via CSMA/CA. Wi-Fi leverages carrier sense multiple access with collision avoidance (CSMA/CA) for channel access, which requires a device to monitor channel activity before packet transmission. Specifically, as illustrated in Fig. 2, a device must detect the channel as idle for B backoff slots before initiating transmission.
The value of B is randomly chosen from the range [0, CW] upon each transmission, where CW is the contention window of the device. If the device detects an ongoing transmission during its countdown from B, it suspends the countdown and resumes only after detecting the channel as idle for a DCF² interframe space (DIFS) interval. Upon successful completion of the B-slot countdown, the device gains channel access for transmission.

¹ We infer access type from the client's active network interface at session start and exclude sessions where access type is unknown.

Channel Contention Interval. We define the contention interval as the period starting from DIFS until the successful completion of the B-backoff-slot countdown. During one device's contention interval, other devices may gain channel access first, as illustrated in Fig. 2. Consequently, the duration of a contention interval is determined by two factors: the initial number of backoff slots (B) and the number of channel access instances obtained by competing devices.

Wi-Fi Packet Transmission Procedure. After gaining channel access through CSMA/CA, the device encapsulates packets into PLCP Protocol Data Units (PPDUs) with radio headers and proceeds with packet transmission during the PHY TX period shown in Fig. 2. Upon successful PPDU reception, the receiver must acknowledge by sending an ACK frame after a short interframe space (SIFS) interval. If a transmission fails (i.e., no ACK or a NACK is received), the sender triggers a retransmission and re-gains channel access through CSMA/CA. The period from the initial DIFS to the final ACK is defined as a frame exchange sequence (FES) for a PPDU.

Wi-Fi MAC Throughput Analysis.
Wi-Fi MAC throughput is determined by three key components: i) the PHY transmission rate, which dictates how quickly data packets can be transmitted over the air; ii) channel access overhead, including variable contention intervals and fixed intervals like DIFS, SIFS, and the ACK; and iii) the transmission failure rate, where failures are primarily caused by poor signal strength or by collisions, in which multiple devices attempt to transmit simultaneously.

² DCF denotes the distributed coordination function in IEEE 802.11 [22].

2.3 Predictability vs. Efficiency in CSMA

It is important to distinguish the problem of tail latency from the well-studied issue of CSMA efficiency. CSMA efficiency is typically defined as the ratio of airtime used for successful data payload transmission to the total airtime consumed by a full frame exchange sequence, which includes fixed overheads like the contention interval, DIFS, SIFS, and the ACK frame. A long-standing challenge in Wi-Fi has been that as physical data rates increased, the time to transmit a packet shrank while these overheads remained constant, causing a decline in overall channel efficiency. This problem has been the subject of extensive research over the past two decades. Representative examples include fine-grained or frequency-domain contention to shrink time-domain overheads [23, 24], explicit collision notification to curtail wasted airtime and hidden-terminal losses [25], and hybrid centralized-distributed coordination to exploit controller visibility while retaining CSMA agility [26]. Earlier algorithmic tuning of contention parameters likewise sought high throughput and fairness under CSMA [27]. This problem is now largely mitigated in modern Wi-Fi standards by highly effective solutions like frame aggregation (e.g., A-MPDU). By allowing multiple packets to be transmitted after a single contention event, aggregation amortizes the overhead cost and significantly improves system throughput.
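The amortization argument can be made concrete with a back-of-the-envelope calculation. This is our own illustration, not a measurement from the study: the timing constants below are typical 5 GHz OFDM values assumed for the sketch, and `fes_efficiency` is a simplified model that ignores preambles and block-ACK details.

```python
# Rough airtime efficiency of one frame exchange sequence (FES).
# Timing constants are typical 5 GHz values, assumed for illustration.
SLOT_US, DIFS_US, SIFS_US, ACK_US = 9, 34, 16, 44

def fes_efficiency(payload_us: float, avg_backoff_slots: float,
                   n_aggregated: int = 1) -> float:
    """Fraction of FES airtime spent on data payload. With A-MPDU,
    n_aggregated packets share a single contention event."""
    payload = payload_us * n_aggregated
    overhead = DIFS_US + avg_backoff_slots * SLOT_US + SIFS_US + ACK_US
    return payload / (payload + overhead)

# A lone 50 us packet spends most of its FES on overhead...
single = fes_efficiency(50, avg_backoff_slots=7.5)
# ...while aggregating 32 packets amortizes the same overhead.
aggregated = fes_efficiency(50, avg_backoff_slots=7.5, n_aggregated=32)
```

Note that aggregation raises average efficiency but does nothing for the tail case this paper targets: a single inflated contention interval still delays every packet riding in the aggregate.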
This paper, however, does not aim to solve the general CSMA efficiency problem. Instead, we focus on a distinct but equally critical issue: the opportunistic and severe inflation of the contention interval for specific packets. While low efficiency is a systemic issue affecting average throughput, the problem we address is transient and statistical. The long-term throughput and average latency for a user can be perfectly acceptable, yet the user's experience can be ruined by intermittent packet-delivery droughts where the contention window for a single video frame inflates to an extreme value. Efficiency is a problem of averages; contention-driven tail latency is a problem of outliers, and for NGRTC applications, these outliers are catastrophic.

3 Measurement and Motivation

In this section, we build the case for our proposed solution by first identifying and then diagnosing the core performance problem for NGRTC applications in today's Wi-Fi networks.

3.1 Large-Scale Online Measurement

In this section, we present the first large-scale measurement study of Wi-Fi performance for NGRTC, revealing critical limitations in current Wi-Fi APs' ability to support these demanding workloads. While it is well known that CSMA/CA contention introduces variable per-packet delay, the application-level implications for modern high-bitrate interactive streaming, and the concrete failure mode that triggers stalls, have been unclear. Our measurement contributes three pieces of evidence: (i) quantified stall-rate tails under Wi-Fi versus wired access, (ii) a latency decomposition showing that the Wi-Fi last hop dominates even when the WAN RTT is low and stable, and (iii) a near one-to-one correlation between 200 ms packet-delivery droughts and video stalls. These findings motivate a link-layer contention-control mechanism that targets micro-level access fairness and bounds last-hop tail latency.

3.1.1 Testbed and Data Collection Scheme

Testbed.
To understand how Wi-Fi last-hop performance impacts next-generation real-time streaming applications, we conducted an extensive measurement study on the Tencent START cloud gaming platform. This platform delivers high-quality interactive gaming content, streaming 1080p to 4K video at 60-144 FPS with bitrates around 50 Mbps. Our measurement infrastructure consists of 200 commercial Wi-Fi access points distributed to volunteer users nationwide. To reduce and stabilize server-side queuing delay (so that last-hop effects are more visible), we deployed Pudica [8], the state-of-the-art congestion control algorithm tailored for low-latency demands, on the cloud-gaming servers; Pudica enables near-zero queuing delay.

[Figure 3: Stall rate percentiles in Dec. 2024.]
[Figure 4: Stall rate for 5 GHz Wi-Fi in Dec. 2022 and 2024.]

Data Collection. To collect comprehensive network performance data, we instrumented the WNIC driver on our Wi-Fi APs to report essential channel status, MAC-layer metrics, and PHY-layer parameters. The AP also records the number of successfully transmitted packets within each 200 ms interval, providing direct insight into wireless channel contention. Along with traditional metrics like RSSI, transmission delay, packet loss, and channel properties, the APs report these measurements every 200 ms. This data collection scheme allows us to measure the server-to-router RTT and distinguish between wired and wireless latency issues. Our server also collects transport-layer statistics including frame-level RTT, packet loss, and jitter. Concretely, the AP periodically measures a server↔AP RTT (every 200 ms) over the control channel and reports it with the same granularity as the MAC/PHY metrics.
The server separately obtains per-frame end-to-end RTT from the cloud-gaming feedback path. We align the two by time (no clock synchronization is needed because both are RTTs). Over one year, we gathered data from 336 million video frames, representing the first large-scale study of wireless last-hop performance for real-time streaming.

3.1.2 Measurement Results

High Video Stall Rate. Fig. 3 shows the video stall rate³ percentiles for cloud gaming users in December 2024 across different networks. We report the stall rate as stalls per 10,000 frames (×10⁻⁴), so values above 100 correspond to more than 1% of frames stalling. 5 GHz Wi-Fi exhibits a significantly higher stall rate than wired networks, indicating the superior stability of wired connections. Fig. 4 compares the stall-rate percentiles for 5 GHz Wi-Fi sessions from two matched one-month snapshots (Dec. 2022 vs. Dec. 2024) under the same stall definition. The similarity indicates that, even as Wi-Fi hardware evolves, tail stalls driven by CSMA/CA contention remain a dominant factor in dense environments.

³ We define a video stall as occurring when end-to-end frame delivery latency exceeds 200 ms. This metric is based on user QoE feedback and has been adopted by many previous studies in NGRTC [8, 9, 11].

Table 1: Distribution of the number of packets transmitted by the Wi-Fi router within 200 ms when downstream long-tail latency occurs (with the absence of wired-network issues confirmed).

  Packets   Probability (%)   Packets    Probability (%)
  0         86.19             5          0.78
  1         0.29              [6,10)     2.55
  2         0.39              [10,20)    2.86
  3         0.36              [20,50)    2.46
  4         0.29              (50,∞)     3.82

Wi-Fi Last Mile Causes the High Video Stall Rate. Our measurement campaign reveals the root cause of these video stalls: the Wi-Fi last hop acts as a critical performance bottleneck. By analyzing each video frame's end-to-end path, we find striking differences between the wired and wireless segments. As shown in Fig.
5, the wired portion (server to AP) maintains consistently low latency, staying below 200 ms even at the 99.99th percentile. However, when the wireless last hop (AP to user) is included, the total latency can exceed 1000 ms. To precisely quantify this impact, we decomposed each frame's delivery time into wired and wireless components. Fig. 6 reveals that the wireless segment contributes disproportionately to total latency, with its share growing dramatically as delivery times increase. This finding is particularly concerning because it shows that even with state-of-the-art WAN congestion control, the wireless last hop remains the primary obstacle to reliable real-time streaming.

Packet-Delivery Droughts: Root Cause of Frame Stalls. To identify the root cause of Wi-Fi last-hop delays, we analyzed the correlation between stalled video frames and successful packet transmissions. For each frame with a high end-to-end delay (server to client), we examined the number of successfully transmitted packets within each 200 ms window of the frame's transmission. To isolate Wi-Fi-induced stalls, we focused on frames where server-to-client latency exceeded 200 ms while server-to-router RTT remained below 50 ms, effectively filtering out stalls caused by wired-network issues. Table 1 reveals a striking pattern: in 86.19% of these stalled frames, the router failed to successfully transmit even a single packet during at least one 200 ms interval, despite potentially having transmission opportunities. This near one-to-one correspondence between packet-delivery droughts and frame stalls suggests a fundamental issue in Wi-Fi's channel access mechanism, where either transmission opportunities are not obtained or packets fail to be delivered even when opportunities are granted. In contrast, we calculate the PHY transmission delay of PPDUs and present its distribution in Fig. 7.
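The filtering logic just described can be summarized in a few lines. The function and label names below are ours; only the thresholds (the 200 ms stall definition, the 50 ms wired-RTT filter, and the zero-packet window) come from the text.

```python
def classify_stalled_frame(frame_e2e_ms: float, server_router_rtt_ms: float,
                           pkts_per_200ms_window: list) -> str:
    """Attribute a frame's fate per the measurement methodology:
    exclude wired-suspect stalls, then check for a delivery drought,
    i.e. a 200 ms window in which the AP delivered zero packets."""
    if frame_e2e_ms <= 200:
        return "no-stall"
    if server_router_rtt_ms >= 50:
        return "wired-suspect"      # excluded from the Wi-Fi analysis
    if any(n == 0 for n in pkts_per_200ms_window):
        return "wifi-drought"       # the 86.19% case in Table 1
    return "wifi-other"
```

For example, a frame delivered in 350 ms while the wired RTT stayed at 20 ms, with per-window packet counts [0, 12, 30], would be classified as a Wi-Fi drought.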
Once PPDUs are granted transmission opportunities, the actual transmission completes quickly, with 92.7% finishing within 3.5 ms and a maximum delay of 7.5 ms.

To further understand why an AP fails to deliver packets, we investigated the relationship between packet delivery and channel contention. We define the channel contention rate as the proportion of airtime occupied by other transmitters within each 200 ms interval (longer airtime by others indicates higher contention). Fig. 8 shows that the probability of zero packet deliveries rises dramatically with increased channel contention: when contention exceeds 80%, the probability of a complete delivery drought is 74.5 times higher than under 20% contention.

[Figure 5: Distribution of video frame latency in cloud gaming.]
[Figure 6: Cloud gaming video frame latency decomposition.]
[Figure 7: Distribution of Wi-Fi PHY transmission delay: 67.1% in [0,1.5] ms, 25.6% in [1.5,3.5] ms, 5.7% in [3.5,5.5] ms, 1.6% in [5.5,7.5] ms.]
[Figure 8: Probability of zero packet deliveries in a 200 ms window vs. channel contention rate: 0.02%, 0.03%, 0.05%, 0.23%, and 1.49% for contention ranges [0,20]% through [80,100]%.]

We further validated this relationship through an 8-week field study in which, with user consent, we monitored the number of nearby Wi-Fi APs as a proxy for potential channel contention. Table 2 demonstrates that video stall rates, particularly at the tail, increase systematically with the number of surrounding APs.

Table 2: Relation between the video stall rate of Wi-Fi sessions and the number of Wi-Fi APs in the environment, from our online cloud gaming platform over 8 weeks.

  AP Num.   Session Num.   Stall Rate (%)
  2         52349          0.08
  4         25624          0.17
  6         14414          0.42
  ≥8         7976          1.34
These findings establish that frame stalls primarily occur when routers experience packet-delivery droughts during periods of intensive channel contention. Importantly, a single 200 ms delivery drought already crosses the stall threshold used by the application, so mitigating micro-level droughts is directly reflected in a lower stall rate and better user QoE.

3.2 Mechanism Behind Packet-Delivery Droughts

Our online measurements establish that delivery droughts concentrate under high contention, but they do not expose the packet-level dynamics that create 100-200 ms gaps. We therefore complement them with ns-3 simulations and controlled experiments with commercial Wi-Fi APs. Across both settings, we find that (i) collisions increase retransmissions, and (ii) each retransmission triggers binary exponential backoff whose countdown is repeatedly frozen under a busy channel, stretching the effective contention interval from sub-millisecond to hundreds of milliseconds. We report the full methodology and supporting figures in §D. These dynamics point to a deeper limitation: 802.11's contention control is collision-driven and purely reactive, which we summarize next.

3.2.1 Root Cause: Collision-Driven Reactive Contention

The fundamental issue lies in 802.11's reactive approach to contention control. As detailed in §D, the long gaps are dominated by collision-driven retransmissions and prolonged countdown freezes rather than by PHY transmission time. The current CSMA/CA mechanism has two critical limitations that lead to extended packet delivery times. First, the protocol always initializes transmission with a small contention window (CWmin), regardless of network contention levels. In dense networks with high contention, this approach inevitably leads to frequent collisions: multiple devices are likely to select similar small backoff values.
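How likely are such overlaps? A small worked example (ours, not from the study) computes the exact probability that the minimum backoff among n contenders is shared, i.e. that the next channel access attempt collides, when each device draws uniformly from [0, CW].

```python
def collision_prob(n: int, cw: int) -> float:
    """Probability that >= 2 of n devices share the minimum backoff
    drawn uniformly from {0, ..., cw}, so the next access collides."""
    w = cw + 1
    # P(unique minimum) = sum over values v of
    #   n * P(one device picks v) * P(the other n-1 all pick > v)
    p_unique = sum(n * (1 / w) * ((cw - v) / w) ** (n - 1) for v in range(w))
    return 1 - p_unique
```

Under this model, two devices at CWmin = 15 collide with probability 1/16, eight devices collide roughly a quarter of the time, and widening the window to 127 makes collisions rare again, which is exactly the intuition behind starting with a larger window when contention is high.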
A more effective approach would be to proactively adjust the initial window size based on observed network contention, starting with larger windows when contention is high.

Second, the protocol creates unfair channel access after collisions. When a device experiences a collision, it doubles its contention window, while devices without recent collisions maintain small windows. This creates a problematic asymmetry: devices with larger windows must count down through more slots, making them more likely to be interrupted by transmissions from devices with smaller windows. Each interruption forces the device to pause its countdown until the channel becomes idle again. In dense networks, these interruptions can extend a simple backoff period from milliseconds to hundreds of milliseconds.

This reactive, device-by-device approach means the system never achieves a coordinated response to network contention. Instead, devices independently adjust their windows based only on their own collision experiences, leading to persistent unfairness and inefficient channel utilization. A better approach would be to maintain similar window sizes across all devices based on overall network contention levels, ensuring fair channel access while proactively preventing collisions.

4 BLADE Design

To meet the latency and throughput stability requirements of NGRTC in Wi-Fi networks, we consider the following two aspects as the most critical: (1) maintaining a low collision probability; (2) keeping the contention windows (CW) of different co-channel Wi-Fi devices as consistent as possible. Based on these two principles, we design BLADE, which leverages MAR and employs the HIMD approach to adaptively adjust the CW, achieving a balance between high throughput and low collision probability without relying on priority queues in Wi-Fi networks. We first present the design goals of BLADE.
4.1 Design Goals

A practical and effective contention window adjustment mechanism should achieve four key goals:

High Transmission Efficiency. The system must maximize channel utilization by maintaining an optimal collision rate. Since collision recovery (3–5 ms) costs significantly more than a contention slot time (9 µs), we need to balance between avoiding excessive collisions from small contention windows and preventing unnecessary idle periods from large windows.

Fair Channel Access. All devices should reach a consensus on network contention levels and adjust their windows accordingly. This differs from current IEEE mechanisms, where devices react individually to their own transmission outcomes, leading to unfair access patterns. Instead, devices should collectively adapt their contention windows based on shared network conditions, ensuring balanced transmission opportunities across the network.

Fast Convergence. The system should rapidly adapt to network changes while maintaining stable operation. When network conditions shift (e.g., traffic flows joining or leaving), all devices should quickly converge to appropriate window sizes, maintaining both efficiency and fairness in their channel access patterns without oscillating between states.

Minimal Assumptions. The system should not rely on assumptions about user traffic patterns, the number of competing flows, or PPDU PHY transmission duration, as real-world networks are inherently complex: they exhibit unpredictable user traffic, highly dynamic competing transmitters, and varying PHY transmission rates. This contrasts with existing contention window control algorithms beyond the IEEE 802.11 standard [28–32], which depend on such assumptions.

These goals are particularly challenging because Wi-Fi operates as a fully distributed system where devices must make decisions without explicit coordination.
4.2 Search for a Universal Contention Signal

Requirement. To enable coordinated contention window adjustment across distributed Wi-Fi devices, we need a reliable signal that indicates network contention levels. This signal must satisfy three key requirements: it should be universally observable by all devices, accurately reflect current network competition, and remain stable enough to facilitate consensus. Several candidate signals face fundamental limitations: i) collision-based signals provide only local feedback to the involved devices; ii) detecting competing flows requires packet-level decoding at the MAC layer, which is both complex to implement and potentially misleading, since flows operate at longer time scales and may be temporarily inactive, making them poor indicators of instantaneous network contention; iii) the airtime utilization rate (the fraction of airtime occupied by transmissions) can be deceptive, since high utilization may simply be caused by large PPDUs from a few devices rather than actual competition for channel access.

4.2.1 Proposed Signal: MAR

Definition of MAR. We define the microscopic access rate (MAR) as the ratio of transmission opportunities to total available slots in the channel. As shown in Fig. 9, each device monitors both idle slots during its backoff countdown and transmission events in the channel. A transmission event occurs either when the device itself gains channel access or when it detects other devices' transmissions through CCA.

Figure 9: Illustration of MAR. Two Wi-Fi devices (one with a successful TX, one with failed TXs due to collision) detect channel busy time via CCA along a shared timeline of idle slot times and TX durations. With 9 idle slot times and 2 TX durations, the MAR that both device 1 and device 2 detect via CCA is 2/(9+2).
Mathematically, MAR is defined as:

  MAR = N_tx / (N_tx + N_idle)    (1)

where N_tx is the number of transmission events and N_idle is the number of idle slots during backoff countdown. In the example shown in Fig. 9, there are 2 transmission events and 9 idle slots, resulting in a MAR of 2/11.

MAR: Properties and Advantages. MAR offers three key advantages as a contention signal:

Universal Observability. For devices that can carrier-sense each other (i.e., within the same carrier-sense domain), MAR is consistently observable: when any device transmits, others detect it via CCA and freeze their backoff, leading to a shared sequence of transmission events and idle slots. Hidden terminals and partial visibility can violate this assumption; we discuss mitigation via RTS/CTS and empirically validate robustness in §H.

Direct Competition Indicator. MAR directly reflects the intensity of channel competition by measuring the ratio of transmission attempts to available slots. Unlike network utilization or flow counts, MAR captures the actual contention for transmission opportunities, allowing devices to accurately gauge network competition levels.

Predictable Collision Control. When devices maintain MAR at a target threshold through contention window adjustment, the collision probability remains stable regardless of the number of competing devices (see §L for proof). This property enables systematic congestion management without requiring knowledge of network size or traffic patterns.

4.3 MAR-Driven Contention Window Control

Problem Statement. The core challenge in MAR-driven contention control is to dynamically adjust each transmitter's contention window (CW) to achieve three key objectives.
i) The system must maintain the observed MAR close to a target value MAR_tar to ensure efficient channel utilization; ii) all competing transmitters must converge to similar CW values to guarantee fair channel access, since significant differences in CW values would give some transmitters unfair advantages in channel competition; iii) the system needs to rapidly adapt CW values in response to network changes, converging quickly to optimal settings without oscillation.

These objectives present inherent tensions. Aggressive CW adjustment achieves faster convergence but risks creating temporary unfairness or oscillations. Conservative adjustment provides more stability but may react too slowly to network changes. Additionally, transmitters must achieve these objectives through independent decisions without explicit coordination, as the distributed nature of Wi-Fi networks precludes direct communication between devices.

4.3.1 HIMD-based Contention Window Control

Drawing inspiration from traditional TCP congestion control, we design a hybrid increase multiplicative decrease (HIMD) policy for CW adjustment. Note that the "increase/decrease" directions are inverted compared to transport-layer congestion windows: in Wi-Fi, a larger contention window reduces a transmitter's attempt probability, so "increasing CW" makes the transmitter less aggressive. Traditional AIMD (additive increase multiplicative decrease) has been proven to achieve fair bandwidth sharing in congestion control. We extend it with a hybrid increase phase that combines both additive and multiplicative components: the additive component ensures steady fairness convergence, while the multiplicative component provides rapid response to severe congestion. This hybrid approach offers better adaptivity than pure AIMD while maintaining its fairness properties.
Like AIMD, our HIMD policy increases CW when MAR exceeds MAR_tar to reduce channel contention, and decreases CW when MAR is below MAR_tar to encourage more transmission attempts. Here, MAR_tar is the target microscopic access rate that we regulate to in steady state (default 0.1), and MAR_max is an empirical upper bound of MAR under saturated contention (default 0.35), used to normalize/clip the control signal and avoid over-reacting when the channel is nearly fully occupied by transmissions and fixed MAC overheads.

Hybrid Increase. When the observed MAR exceeds the target MAR_tar, it indicates excessive contention, leading to more collisions and longer contention intervals. To alleviate this, BLADE increases the contention window CW to yield more transmission opportunities and reduce contention:

  CW = CW + M_inc * (min{MAR, MAR_max} - MAR_tar) + A_inc + CW * max{0, MAR - MAR_max}    (2)

Eqn. 2 involves two additive terms and one multiplicative term: i) the additive term M_inc * (min{MAR, MAR_max} - MAR_tar) ensures a faster increase in CW when the observed MAR significantly exceeds the target and a slower increase when it is close to the target; we use M_inc = (CW_max - CW_min)/2 by default. ii) The additive term A_inc guarantees a minimum increase, promoting fairness among all transmitters' CW values. iii) The multiplicative term CW * max{0, MAR - MAR_max} handles extreme contention scenarios: when the observed MAR exceeds MAR_max, the channel is considered highly congested and unstable, and CW is increased multiplicatively to rapidly reduce contention. Overall, Eqn. 2 behaves as a stable proportional controller on the MAR error within a safe range, while providing (i) a fairness floor via A_inc and (ii) an emergency brake via the multiplicative term when the system enters a highly congested regime (MAR > MAR_max).
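The increase step of Eqn. 2 can be sketched as follows (a minimal illustration using the defaults stated above; the default value of A_inc is not restated here, so the value below is a placeholder):

```python
# Sketch of BLADE's hybrid-increase step (Eqn. 2). Parameter names
# mirror the paper's; A_INC is a hypothetical placeholder value.
CW_MIN, CW_MAX = 15, 1023
MAR_TAR, MAR_MAX = 0.1, 0.35
M_INC = (CW_MAX - CW_MIN) / 2   # paper's default M_inc
A_INC = 1                       # placeholder minimum additive increase

def hybrid_increase(cw, mar):
    """Grow CW when the observed MAR exceeds the target MAR_TAR."""
    additive = M_INC * (min(mar, MAR_MAX) - MAR_TAR) + A_INC
    emergency = cw * max(0.0, mar - MAR_MAX)  # kicks in only beyond MAR_MAX
    return min(CW_MAX, cw + additive + emergency)
```

For example, at the target (mar = 0.1) only the A_inc fairness floor applies, while above MAR_max the multiplicative emergency term adds a fraction of the current CW on top of the clipped additive term.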
The default value of MAR_max is set to 35%, as our simulation experiments revealed that under the IEEE standard, the MAR tends to rise to approximately 35% with an increasing number of competing Wi-Fi flows.

Multiplicative Decrease. When the observed MAR is below the target MAR_tar, it indicates insufficient traffic load, wasting transmission chances and overall bandwidth. To effectively utilize the airtime resource, BLADE should rapidly contend for more transmission chances by decreasing the contention window CW multiplicatively: CW = β * CW, with β < 1. Our choice of the β value involves two factors. On the one hand, to quickly converge without oscillation, we aim for the observed MAR to increase by (MAR_tar - MAR)/2 per step. In the converged state, MAR is (approximately) inversely proportional to the converged CW: with attempt probability τ ≈ 2/(CW + 1) per transmission chance, MAR = 1 - (1 - τ)^N ≈ N·τ ≈ 2N/(CW + 1) for τ ≪ 1. Therefore, we use

  β_1 = MAR / (MAR_tar - (MAR_tar - MAR)/2) = 2·MAR / (MAR_tar + MAR)    (3)

On the other hand, to accelerate fair convergence, the greater the CW value is, the larger the reduction magnitude should be; therefore, we use

  β_2 = M_dec - (1 - M_dec) * (CW - CW_min) / (CW_max - CW_min)    (4)

where M_dec is a minimum decrease factor, with a default value of 0.95. The second term on the right-hand side of Eqn. 4 ensures that Wi-Fi devices with larger CW values experience a greater reduction, thereby speeding up the convergence process. Finally, combining the two considerations, we update the contention window as:

  CW = min(β_1, β_2) * CW    (5)

Because MAR is a channel-wide consensus signal (all transmitters on the same channel observe the same busy/idle pattern), all nodes react to a common feedback loop. β_1 drives the system toward the fixed point where MAR ≈ MAR_tar, while β_2 contracts CW disparities faster by applying larger reductions to larger CW values.
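Eqns. 3–5 can be sketched as the following decrease step (a minimal illustration using the paper's stated defaults, with the hard lower bound CW_min folded in):

```python
# Sketch of BLADE's multiplicative-decrease step (Eqns. 3-5),
# applied when the observed MAR falls below the target.
CW_MIN, CW_MAX = 15, 1023
MAR_TAR = 0.1
M_DEC = 0.95  # minimum decrease factor (paper default)

def multiplicative_decrease(cw, mar):
    """Shrink CW toward the MAR target; larger CWs shrink faster."""
    # beta_1: close half the gap to MAR_TAR per step, using the
    # converged-state relation MAR ~ 1/CW (Eqn. 3).
    beta1 = 2 * mar / (MAR_TAR + mar)
    # beta_2: larger CW values receive a larger reduction (Eqn. 4).
    beta2 = M_DEC - (1 - M_DEC) * (cw - CW_MIN) / (CW_MAX - CW_MIN)
    # Eqn. 5, clamped to the lower hard bound.
    return max(CW_MIN, min(beta1, beta2) * cw)
```

At CW = CW_min, β_2 equals M_dec = 0.95; at CW = CW_max it reaches 2·M_dec - 1 = 0.90, so the most inflated windows contract fastest.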
Taking min(β_1, β_2) avoids overshooting and reduces oscillation. Together with the hard bounds [CW_min, CW_max], this yields rapid convergence without persistent unfairness.

Target MAR. The target microscopic access rate MAR_tar is a critical parameter in our HIMD control policy. Using a standard CSMA/CA throughput model, the throughput-optimal MAR is approximately MAR_opt = 1/(√η + 1), where η = T_c / T_s is the collision duration (in slots) relative to an idle backoff slot. In modern Wi-Fi, η is typically large (collisions last tens to hundreds of slots), which places MAR_opt in a narrow "safe" band around 0.1. Accordingly, we set MAR_tar = 0.1 by default, and §6.2.1 shows that BLADE remains robust when MAR_tar varies within this band.

Fast Recovery Policy for Collisions. While our HIMD policy ensures stable convergence, random collisions can still occur when multiple transmitters select the same backoff value. To minimize the delay impact of these collisions, we implement special handling for retransmissions. Upon a transmission failure, instead of the standard IEEE 802.11 approach of doubling CW, we set:

  CW_fail = CW + A_fail,  CW = CW_fail / 2    (6)

This temporary CW reduction accelerates retransmission of collided packets, while A_fail serves as a compensation term. After a successful retransmission, we restore CW to CW_fail before resuming normal HIMD control. To prevent excessive contention, this halving is applied only to the first retransmission attempt.

5 Implementation

We implement BLADE on Tenda AX12 Pro Wi-Fi APs, accessing the Wi-Fi driver layer to monitor channel activity. Our implementation primarily leverages three hardware counters from the CCA mechanism: TX_time, the duration of the AP's active data transmission; BUSY_time, the duration when the channel is busy with other transmissions; and IDLE_slot_time, the count of idle channel slots.
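As a rough, hypothetical illustration of how such counters could be turned into the N_tx and N_idle quantities of §4.2.1 (the paper's driver code is not shown; the `avg_tx_us` parameter and the division of busy-time deltas into transmission events are our assumptions, not the paper's method):

```python
# Hypothetical sketch: estimating MAR from CCA counter deltas.
# TX_time/BUSY_time are durations in microseconds; IDLE_slot_time is
# a slot count. We approximate the number of transmission events by
# dividing the busy-time delta by an assumed average frame-exchange
# airtime (avg_tx_us is a made-up parameter for illustration).
def estimate_mar(prev, curr, avg_tx_us=2000):
    tx_us = (curr["TX_time"] - prev["TX_time"]) + \
            (curr["BUSY_time"] - prev["BUSY_time"])
    n_tx = tx_us / avg_tx_us                          # approximate N_tx
    n_idle = curr["IDLE_slot_time"] - prev["IDLE_slot_time"]  # N_idle
    total = n_tx + n_idle
    return n_tx / total if total > 0 else 0.0

prev = {"TX_time": 0, "BUSY_time": 0, "IDLE_slot_time": 0}
curr = {"TX_time": 4000, "BUSY_time": 0, "IDLE_slot_time": 18}
# 4000 us / 2000 us -> ~2 TX events; MAR = 2 / (2 + 18) = 0.1
```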
We poll these microsecond-precision counters every 1 ms and calculate the observed MAR by tracking changes in counter values. This provides an accurate measurement of N_tx and N_idle as defined in §4.2.1. For CW control, we implement the complete HIMD algorithm with an observation interval of 300 slots when calculating MAR (justified in §J) and standard BE queue parameters (CW_min = 15, CW_max = 1023). The implementation consists of approximately 500 lines of C code, focusing on counter monitoring and CW adjustment logic.

6 Evaluation

We evaluate BLADE's performance through extensive experiments on commercial Wi-Fi APs and ns3 simulations. The assessment covers diverse network conditions, from saturated links to realistic traffic, culminating in real-world tests with cloud gaming applications. We benchmark BLADE against the standard IEEE 802.11 contention control and other relevant algorithms to validate its effectiveness.

6.1 Trace-driven Simulation

We use ns3 for our experimental environment because it accurately simulates the CSMA/CA behavior of Wi-Fi networks [33, 34] and allows easy modification of the contention control policy. For PHY transmission rate selection, we use Minstrel [35], the default rate adaptation algorithm in both ns3 and the mac80211 module of the Linux kernel.

Baselines. We evaluate BLADE against the following contention window control mechanisms:
• BLADE_SC: BLADE with only the stable-state control logic (i.e., HIMD), to demonstrate the effectiveness of the fast recovery policy for collisions;
• IEEE: the default policy in the IEEE 802.11 standard, as explained in §3.2, using the BE (Best Effort) AC queue (CW_min = 15, CW_max = 1023);
• IdleSense [28]: observes the mean number of idle slots between transmission attempts to control the contention window.
We provide the transmitter number N to it, as it requires such information to operate;
• DDA [29]: controls the contention window to match the backoff delay threshold ∆ imposed by applications. We set ∆ to 5 ms (the 99th percentile value in Fig. 29).

6.1.1 Saturated Link

Experimental Setup. To evaluate the performance of BLADE under intensive contention, we deploy N AP-STA pairs (N = 2, 4, 8, 16), each transmitting traffic from AP to STA using iperf to saturate the link. The evaluation uses the 802.11ax standard (Wi-Fi 6) operating in the 5 GHz band with a 40 MHz bandwidth. All transmitters share the same channel and can hear each other with equal signal strength.

AP Transmission Latency. To demonstrate BLADE's effectiveness in reducing tail latency for the Wi-Fi last hop, we first evaluate the PPDU transmission latency (i.e., the frame exchange sequence duration in Fig. 2) to show how long a PPDU blocks the AP sending queue. As shown in Fig. 10, as the number of competing flows increases from 2 to 16, the median latency remains similar across all methods. However, the tail latency increases rapidly for the IEEE 802.11 standard contention control policy, exceeding 300 ms at the 99th percentile with 8 competing flows. In contrast, BLADE achieves the lowest tail latency among all methods, limiting the 99.99th percentile latency to 200 ms even with 16 competing flows. Notably, under 16 competing flows and the standard contention control policy, we observe frequent AP-STA disconnections because Beacon frames experience excessively long contention intervals before transmission. This indicates that the standard contention control policy fails to operate effectively under such high contention levels. Additionally, BLADE without the fast recovery policy shows a slight increase in tail latency, highlighting the effectiveness of BLADE's fast recovery mechanism.

Retransmission Rate.
To show BLADE's effectiveness in avoiding collisions and improving transmission efficiency, we plot the distribution of PPDU retransmission counts under 8 competing flows (N = 8) in Fig. 12. Thanks to the stable-state control policy, BLADE adapts the contention windows of all transmitters to the channel contention level, achieving a low retransmission rate. Specifically, only 10% of PPDUs are retransmitted once, and 1% are retransmitted twice. In contrast, under intensive contention, the standard contention control policy results in 34% of PPDUs being retransmitted at least once, with 4% retransmitted more than twice.

Figure 10: PPDU transmission delay distribution under N competing flows (CDFs for Blade, Blade_SC, IEEE, IdleSense, and DDA; panels (a)–(d) for N = 2, 4, 8, 16).

MAC Throughput. We calculate the MAC throughput in 100 ms intervals and show the distribution in Fig. 11. Due to its lower PPDU retransmission rate and transmission latency,
BLADE achieves higher median throughput compared to the standard policy as the number of competing flows increases. This result demonstrates that BLADE improves MAC-layer transmission efficiency for Wi-Fi APs. Furthermore, BLADE yields a steadier and more converged throughput distribution. In contrast to the standard policy, BLADE prevents transient starvation, where the MAC throughput within 100 ms drops to zero, demonstrating that BLADE achieves fairer bandwidth allocation for all transmitters at the micro level.

Figure 11: Distribution of MAC throughput within 100 ms intervals under N competing flows (CDFs for Blade, Blade_SC, IEEE, IdleSense, and DDA; panels (a)–(d) for N = 2, 4, 8, 16).

Convergence & Fairness. To demonstrate the convergence of BLADE, we deploy five AP-STA pairs (N = 5) and sequentially start and stop their transmissions over a 5-minute period. As shown in Fig. 13a, with the arrival and departure of competing flows, the contention windows of all transmitters adapt dynamically to the contention level and converge within 1 second. Consequently, BLADE quickly achieves a fair bandwidth share among all transmitters, as illustrated in Fig. 13b.

6.1.2 Real-world Traffic

Experimental Setup. To evaluate BLADE's performance under real-world network traffic, we follow the simulation guidelines outlined in the IEEE standard [36] and simulate a three-floor apartment in ns3, as shown in Fig. 14.
Each floor has eight rooms, each with one Wi-Fi AP (centrally placed) and ten randomly distributed STAs forming a BSS. In every BSS, the AP sends two cloud gaming flows to two STAs, and the other STAs run real-world traffic traces (video streaming, web browsing, file transfer, etc.). We utilize four channels (i.e., channel numbers 42, 58, 106, and 122) in the 5 GHz band with an 80 MHz bandwidth, ensuring that BSSes in adjacent rooms operate on different channels.

Traces. We use real-world open-source traces collected from routers [37] and base stations [38], covering the traffic patterns used in our simulation. These traces include timestamps and packet sizes for packet arrivals in both downlink and uplink, representing traffic patterns at wireless last hops. For cloud gaming, we additionally access our cloud gaming platform and collect traffic traces directly from the Wi-Fi router.

Performance. Following the calculation in §6.1.1, we plot the PPDU transmission latency and MAC throughput for cloud gaming flows in Fig. 15 and Fig. 16. With contention from real-world competing network traffic, BLADE constrains the 99.9th and 99.99th percentile latency to 75 ms and 120 ms, respectively. In contrast, other methods inflate the tail latency to over 300 ms at the 99.99th percentile, while the standard control policy exceeds 500 ms. As a result, BLADE achieves the lowest starvation rate (i.e., the fraction of 100 ms intervals in which MAC throughput drops to zero) among all methods, at only 5%, compared to the 25% starvation rate observed with the standard control policy. Notably, DDA and IdleSense perform worse than in the saturated link scenario because they assume i.i.d. traffic patterns from all competing flows, which does not hold for real-world traffic.

6.2 Microbenchmarks

6.2.1 Influence of Target MAR

We evaluate the impact of the target MAR on the performance of BLADE.
We repeat the experiment in §6.1.1 with N = 4 and MAR_tar varying from 0.05 to MAR_max = 0.35. As shown in Fig. 17, when MAR_tar deviates from the default value of 0.1 within ±0.05, the performance of BLADE remains relatively stable, with a ±5 ms tail PPDU transmission delay deviation and a ±2.5 Mbps median MAC throughput deviation. However, as MAR_tar approaches MAR_max, the tail latency increases rapidly, reaching 150% of the default value. These results align with our analysis in §F, which shows that MAR_tar = 0.1 is an appropriate and robust default value.

Figure 12: Retransmission times for each PPDU under 8 competing flows (CDF for Blade, Blade_SC, IEEE, IdleSense, and DDA).
Figure 13: Convergence of BLADE with five competing flows: (a) contention window value and (b) MAC throughput per flow over a 5-minute period.
Figure 14: Simulation topology of an apartment: 3 floors, 8 rooms per floor, one AP and 10 STAs per room, 4 channels (42, 58, 106, 122).
Figure 15: Cloud gaming flow PPDU transmission delay distribution.

6.2.2 Parameter Sensitivity

BLADE is robust to parameter choices; varying M_inc, M_dec, A_inc, and A_fail yields negligible changes in throughput and
PPDU TX-delay percentiles (details in §C.1).

Figure 16: Cloud gaming flow MAC throughput within 100 ms intervals (CDF for Blade, Blade_SC, IEEE, IdleSense, and DDA).
Figure 17: Performance of BLADE under different target rates MAR_tar (5%–35%): (a) PPDU transmission delay and (b) MAC throughput.

Table 3: Mobile gaming packet latency distribution (%)

  RTT (ms)  | 0 Competing Flow | 1 Competing Flow | 2 Competing Flows | 3 Competing Flows
            | IEEE    Blade    | IEEE    Blade    | IEEE    Blade     | IEEE    Blade
  [0, 10)   | 99.7    99.8     | 12.4    88.6     | 2.1     85.9      | 2.3     84.1
  [10, 20)  | 0.3     0.2      | 32.1    11.2     | 30.6    13.8      | 22.7    15.7
  [20, 30)  | 0.0     0.0      | 28.3    0.2      | 29.5    0.1       | 27.8    0.1
  [30, 40)  | 0.0     0.0      | 18.1    0.0      | 19.0    0.2       | 22.7    0.1
  [40, 50)  | 0.0     0.0      | 5.3     0.0      | 10.1    0.0       | 11.3    0.0
  [50, 100) | 0.0     0.0      | 3.8     0.0      | 8.7     0.0       | 13.2    0.0

6.3 Real-World Experiments

Experimental Setup. To evaluate the performance of BLADE in the real world, we conduct experiments using commercial Wi-Fi APs with BLADE implemented. We deploy 4 AP-STA pairs. The evaluation uses the 802.11ax standard (Wi-Fi 6) operating in the 5 GHz band with a 40 MHz bandwidth. All transmitters share the same channel and can hear each other.

6.3.1 Saturated Links

We first saturate the wireless link with 4 APs, each transmitting an iperf flow to its STA. As shown in Fig. 18, BLADE consistently achieves lower tail PPDU transmission delay compared to the IEEE standard, with more than a 4× reduction. Therefore, as shown in Fig.
19, BLADE achieves more stable and higher MAC bandwidth utilization than the IEEE standard, demonstrating better adaptation to real-world wireless channel dynamics and reducing inefficiencies caused by excessive contention window growth.

6.3.2 Cloud Gaming

We replay our cloud gaming session while injecting 0–3 contending iperf flows. Fig. 20 shows that BLADE keeps the 99th-percentile end-to-end frame delay below 100 ms under heavy contention (vs. over 200 ms for IEEE), cutting the stall rate by more than 90%. This directly translates to smoother interactive gameplay, aligning with the motivation in §3.1.

6.3.3 Mobile Game Traffic

We compare the RTT distribution of a mobile game under different numbers of competing flows using the IEEE 802.11 standard and BLADE; all flows adopt the same CW adjustment algorithm during the experiment. As shown in Tab. 3, without competing flows, both strategies achieve ultra-low latency. However, IEEE 802.11 performance degrades sharply with more competing flows, with far fewer low-latency packets and higher RTTs. In contrast, BLADE maintains over 84% of packets within [0, 10) ms even under three competing flows, while IEEE 802.11 drops to only 2.3%. This demonstrates BLADE's effectiveness in mitigating contention and ensuring low-latency performance for mobile gaming applications.

6.3.4 File Downloading

Tab. 4 presents the speed distribution under different contention levels while downloading a large file, comparing IEEE 802.11 with BLADE. Without competing flows, both schemes maintain speeds above 50 Mbps. However, IEEE degrades under one competing flow, with 40% of traffic falling below 50 Mbps (BLADE keeps 94% at 10–50 Mbps). Under heavy contention, 50% of IEEE traffic drops below 10 Mbps, while 67% of BLADE traffic exceeds 20 Mbps. These results show that BLADE mitigates contention-induced degradation, delivering more stable throughput.
Figure 18: Distribution of transmission delay for four iperf flows (CDF per flow, Blade vs. IEEE).
Figure 19: Distribution of MAC throughput for four iperf flows (CDF per flow, Blade vs. IEEE).
Figure 20: End-to-end frame delay under a varying number of iperf flows (CDF, Blade vs. IEEE with 0–3 conflicting flows).

Table 4: Download bandwidth distribution under different contention levels (%)

  Bandwidth (Mbps) | 0 Flow       | 1 Flow       | 2 Flows      | 3 Flows
                   | IEEE  Blade  | IEEE  Blade  | IEEE  Blade  | IEEE  Blade
  0–5              | 0     0      | 1     0      | 0     0      | 1     0
  5–10             | 0     0      | 5     0      | 43    1      | 79    0
  10–20            | 0     0      | 50    2      | 57    17     | 10    24
  20–30            | 0     0      | 3     3      | 0     71     | 0     74
  30–40            | 0     0      | 0     52     | 0     9      | 0     2
  40+              | 100   100    | 41    43     | 0     2      | 0     0

Overall, these real-world experiments demonstrate that BLADE effectively optimizes CW adjustment, accommodating both throughput- and latency-sensitive applications.

7 Discussion

Why not centralized scheduling? Explicit scheduling (e.g., TDMA-style airtime reservation) avoids contention, but requires tight coordination and a common administrative domain, which is unachievable for Wi-Fi routers purchased and controlled by end consumers who lack centralized management capabilities. In contrast, BLADE operates within CSMA/CA, only adjusting local contention windows (CW). Its fully distributed design enables incremental deployment in commodity APs without neighbor coordination.

Coexistence with IEEE 802.11 Contention Control.
BLADE achieves fair convergence when universally deployed, but may inadvertently cede transmission opportunities to IEEE 802.11-compliant devices (which typically retain small contention windows). As shown in § G, configuring BLADE with a higher MAR_tar enhances its competitiveness with legacy devices. Notably, BLADE supports incremental deployment: even under partial adoption, it suppresses contention-driven packet-delivery droughts for its own traffic, while full deployment maximizes fairness and latency performance.

Hidden Terminal. Since BLADE relies on the consensus signal from the same channel, it may be affected by the Hidden Terminal Problem [39], where transmitters perceive different transmission opportunity utilization rates. In this setting, MAR should be interpreted as a "local" contention signal within a carrier-sensing domain rather than a globally consistent metric. RTS/CTS is widely used to mitigate this issue. Since a CTS is followed by a PPDU transmission from a hidden terminal, upon receiving a CTS, BLADE can infer that two transmission opportunities have been utilized when calculating MAR. We show in § H that BLADE maintains low PPDU transmission delay for all transmitters in the presence of hidden terminals.

8 Related Work

Real-Time Streaming in Wireless Networks. Many prior studies have discussed the latency bottleneck of wireless networks in real-time streaming. They propose various methods to reduce the tail frame delivery latency, including congestion control algorithms [8], multipath transmission [9], and novel loss recovery schemes at both the transport layer [11] and the application layer [40]. These studies regard wireless fluctuations as inherent to the link and rely on indirect solutions from higher layers to alleviate their impact.
Orthogonal to all these methods, BLADE takes a direct approach at the link layer to mitigate the long tail latency induced by the Wi-Fi last hop, and can be jointly deployed with them.

Wi-Fi Performance Enhancement. Prior work improves Wi-Fi via rate adaptation, contention control, channel selection, and AQM [28, 29, 35, 41–43]. Few focus on latency-sensitive flows [10, 44]. BLADE targets contention-driven tail latency with no traffic-pattern assumptions.

9 Conclusion

In this paper, we reveal the fundamental limitation of the contention window adjustment mechanism in the IEEE 802.11 standard and identify it as the root cause of the long-tail video frame delivery latency of next-generation real-time communication (NGRTC) applications in Wi-Fi networks. We present BLADE, a novel contention window control algorithm. Compared to the standard, BLADE significantly reduces Wi-Fi transmission latency and improves MAC throughput. We believe BLADE to be an important building block toward the rapid development of NGRTC.

Acknowledgments

We thank our shepherd, Robert Ricci, and the anonymous reviewers for their valuable comments. We would like to express our sincere gratitude to our colleagues at Tencent START, including Weiting Xiao, Zhenxing Wen, Jianjun Xiao, Nian Wen and Jiafeng Chen, for their invaluable technical support and insightful discussions.

References

[1] Samsung Gaming Hub. https://www.samsung.com/us/televisions-home-theater/tvs/gaming-hub/. (Accessed on 01/02/2024).

[2] Google Cloud Gaming. https://cloud.google.com/solutions/games. (Accessed on 01/02/2024).

[3] Google Maps Live View support. https://support.google.com/maps/answer/9332056?hl=en&co=GENIE.Platform%3DiOS, 2023.

[4] YouTube VR - Home. https://vr.youtube.com/, 2023.

[5] Cloud gaming market: Global industry trends, share, size, growth, opportunity and forecast 2023-2028. Market report 5732901, IMARC Group, 2023.
[6] Xiaokun Xu and Mark Claypool. Measurement of cloud-based game streaming system response to competing TCP CUBIC or TCP BBR flows. In ACM IMC, 2022.

[7] Simone Mangiante, Guenter Klas, Amit Navon, Zhuang GuanHua, Ju Ran, and Marco Dias Silva. VR is on the Edge: How to Deliver 360° Videos in Mobile Networks. In ACM VR/AR Network, 2017.

[8] Shibo Wang, Shusen Yang, Xiao Kong, Chenglei Wu, Longwei Jiang, Chenren Xu, Cong Zhao, Xuesong Yang, Jianjun Xiao, Xin Liu, Changxi Zheng, Jing Wang, and Honghao Liu. Pudica: Toward near-zero queuing delay in congestion control for cloud gaming. In USENIX NSDI, 2024.

[9] Yuhan Zhou, Tingfeng Wang, Liying Wang, Nian Wen, Rui Han, Jing Wang, Chenglei Wu, Jiafeng Chen, Longwei Jiang, Shibo Wang, Honghao Liu, and Chenren Xu. AUGUR: Practical mobile multipath transport service for low tail latency in real-time streaming. In USENIX NSDI, 2024.

[10] Zili Meng, Yaning Guo, Chen Sun, Bo Wang, Justine Sherry, Hongqiang Harry Liu, and Mingwei Xu. Achieving Consistent Low Latency for Wireless Real-Time Communications with the Shortest Control Loop. In ACM SIGCOMM, 2022.

[11] Zili Meng, Xiao Kong, Jing Chen, Bo Wang, Mingwei Xu, Rui Han, Honghao Liu, Venkat Arun, Hongxin Hu, and Xue Wei. Hairpin: Rethinking Packet Loss Recovery in Edge-based Interactive Video Streaming. In USENIX NSDI, 2024.

[12] Zili Meng, Tingfeng Wang, Yixin Shen, Bo Wang, Mingwei Xu, Rui Han, Honghao Liu, Venkat Arun, Hongxin Hu, and Xue Wei. Enabling High Quality Real-Time Communications with Adaptive Frame-Rate. In USENIX NSDI, 2023.

[13] Jiangkai Wu, Yu Guan, Qi Mao, Yong Cui, Zongming Guo, and Xinggong Zhang. ZGaming: Zero-Latency 3D Cloud Gaming by Image Prediction. In ACM SIGCOMM, 2023.

[14] 3GPP TR 38.211 (Release 16). 5G; NR; Physical channels and modulation. https://www.etsi.org/deliver/etsi_ts/138200_138299/138211/16.02.00_60/ts_138211v160200p.pdf, 2020.

[15] 3GPP TR 38.213 (Release 16).
5G; NR; Physical layer procedures for control. https://www.etsi.org/deliver/etsi_ts/138200_138299/138213/16.02.00_60/ts_138213v160200p.pdf, 2020.

[16] Cloudflare Radar 2025 Review. https://blog.cloudflare.com/radar-2025-year-in-review/.

[17] Rachel Albert, Anjul Patney, David Luebke, and Joohwan Kim. Latency requirements for foveated rendering in virtual reality. ACM Trans. Appl. Percept., 14(4), September 2017.

[18] Richard Yao, Tom Heath, Aaron Davies, Tom Forsyth, Nate Mitchell, and Perry Hoberman. Oculus VR Best Practices Guide. Oculus VR, Inc., April 2014. Version 0.008 (April 30, 2014).

[19] Sara Vlahovic, Mirko Suznjevic, and Lea Skorin-Kapov. The Impact of Network Latency on Gaming QoE for an FPS VR Game. In IEEE QoMEX, 2019.

[20] Mohammed S. Elbamby, Cristina Perfecto, Mehdi Bennis, and Klaus Doppler. Toward Low-Latency and Ultra-Reliable Virtual Reality. IEEE Network, 32(2):78–84, 2018.

[21] Eugene Korneev, Mikhail Liubogoshchev, Dmitry Bankov, and Evgeny Khorov. How to model cloud VR: An empirical study of features that matter. IEEE Open Journal of the Communications Society, 5:4155–4170, 2024.

[22] IEEE 802.11ax Standard. https://standards.ieee.org/ieee/802.11ax/7180/. (Accessed on 22/10/2024).

[23] Kun Tan, Ji Fang, Yuanyang Zhang, Shouyuan Chen, Lixin Shi, Jiansong Zhang, and Yongguang Zhang. Fine-grained channel access in wireless LAN. In ACM SIGCOMM, 2010.

[24] Souvik Sen, Romit Roy Choudhury, and Srihari Nelakuditi. No time to countdown: Migrating backoff to the frequency domain. In ACM MobiCom, 2011.

[25] Souvik Sen, Romit Roy Choudhury, and Srihari Nelakuditi. CSMA/CN: Carrier sense multiple access with collision notification.
In ACM MobiCom, 2010.

[26] Vivek Shrivastava, Nabeel Ahmed, Shravan Rayanchu, Suman Banerjee, Srinivasan Keshav, Konstantina Papagiannaki, and Arunesh Mishra. Centaur: Realizing the full potential of centralized WLANs through a hybrid data path. In ACM MobiCom, 2009.

[27] Martin Heusse, Franck Rousseau, Romaric Guillier, and Andrzej Duda. Idle Sense: An optimal access method for high throughput and fairness in rate diverse wireless LANs. In ACM SIGCOMM, 2005.

[28] Martin Heusse, Franck Rousseau, Romaric Guillier, and Andrzej Duda. Idle Sense: An optimal access method for high throughput and fairness in rate diverse wireless LANs. In ACM SIGCOMM, 2005.

[29] Y. Yang and R. Kravets. Achieving delay guarantees in ad hoc networks through dynamic contention window adaptation. In IEEE INFOCOM, 2006.

[30] Xuejun Tian, Xiang Chen, Tetsuo Ideguchi, and Yuguang Fang. Improving throughput and fairness in WLANs through dynamically optimizing backoff. IEICE Transactions on Communications, 88(11):4328–4338, 2005.

[31] Qin Yu, Yiqun Zhuang, and Lixiang Ma. Dynamic contention window adjustment scheme for improving throughput and fairness in IEEE 802.11 wireless LANs. In IEEE GLOBECOM, 2012.

[32] Qiang Ni, I. Aad, C. Barakat, and T. Turletti. Modeling and analysis of slow CW decrease in IEEE 802.11 WLAN. In IEEE PIMRC, 2003.
[33] Nicola Baldo, Manuel Requena-Esteso, José Núñez-Martínez, Marc Portolès-Comeras, Jaume Nin-Guerrero, Paolo Dini, and Josep Mangues-Bafalluy. Validation of the IEEE 802.11 MAC model in the ns3 simulator using the EXTREME testbed. In Proceedings of the 3rd International ICST Conference on Simulation Tools and Techniques, 2010.

[34] NS3 Wi-Fi Validation against Bianchi Model. https://www.nsnam.org/docs/release/3.41/models/html/wifi-testing.html#bianchi-validation. (Accessed on 12/23/2024).

[35] Andrew McGregor and Derek Smithies. Rate adaptation for 802.11 wireless networks: Minstrel. Submitted to ACM SIGCOMM, 2010.

[36] "TGax Simulation Scenarios", IEEE 802.11-14/0980r16. https://mentor.ieee.org/802.11/dcn/14/11-14-0980-16-00ax-simulation-scenarios.docx. (Accessed on 12/10/2024).

[37] VPN/Non-VPN Network Application Traffic Dataset (VNAT). https://www.ll.mit.edu/r-d/datasets/vpnnonvpn-network-application-traffic-dataset-vnat. (Accessed on 01/08/2025).

[38] 5G Traffic Datasets. https://www.kaggle.com/datasets/kimdaegyeom/5g-traffic-datasets. (Accessed on 01/08/2025).

[39] Wi-Fi hidden terminal problem. https://en.wikipedia.org/wiki/Hidden_node_problem, 2025.

[40] Yihua Cheng, Ziyi Zhang, Hanchen Li, Anton Arapin, Yue Zhang, Qizheng Zhang, Yuhan Liu, Kuntai Du, Xu Zhang, Francis Y. Yan, Amrita Mazumdar, Nick Feamster, and Junchen Jiang. GRACE: Loss-Resilient Real-Time Video through Neural Codecs. In USENIX NSDI, 2024.

[41] Mathieu Lacage, Mohammad Hossein Manshaei, and Thierry Turletti. IEEE 802.11 rate adaptation: A practical approach. In ACM MSWiM, 2004.

[42] S. Vasudevan, K. Papagiannaki, C. Diot, J. Kurose, and D. Towsley. Facilitating Access Point Selection in IEEE 802.11 Wireless Networks. In ACM IMC, 2005.
[43] Toke Høiland-Jørgensen, Michał Kazior, Dave Täht, Per Hurtig, and Anna Brunstrom. Ending the anomaly: Achieving low latency and airtime fairness in WiFi. In USENIX ATC, 2017.

[44] Changhua Pei, Youjian Zhao, Yunxin Liu, Kun Tan, Jiansong Zhang, Yuan Meng, and Dan Pei. Latency-based WiFi congestion control in the air for dense WiFi networks. In IEEE IWQoS, 2017.

[45] IEEE 802.11e Standard. https://standards.ieee.org/ieee/802.11e/3131/. (Accessed on 22/10/2024).

[46] Giuseppe Bianchi. Performance analysis of the IEEE 802.11 distributed coordination function. IEEE Journal on Selected Areas in Communications, 18(3):535–547, 2000.

A PPDU Contention Interval Calculation

Since the random backoff procedure is handled by WNIC firmware, it is not easy to directly acquire the contention interval for each PPDU transmitted in Wi-Fi. Therefore, we adopt a passive approach: we deploy an AP-STA pair and use iperf to keep the AP WNIC busy. We place a Wi-Fi sniffer close to the AP to capture all traffic (including PPDUs and ACKs) related to the AP. As shown in Fig. 21a, because the WNIC stays busy, the frame exchange sequence (FES) of the i-th PPDU is immediately followed by the FES of the (i+1)-th PPDU. From the sniffed trace, we can acquire the precise timestamp of the PHY transmission event T_tx^i and the ACK event T_ack^i of the i-th PPDU. Since DIFS, SIFS, and the ACK are standard intervals with fixed values, we can calculate the contention interval of the (i+1)-th PPDU as T_tx^{i+1} − T_ack^i − ACK − DIFS. The PHY transmission time of the i-th PPDU can be calculated as T_ack^i − T_tx^i − SIFS. Upon transmission failure, no ACK frame is sniffed. As illustrated in Fig. 21b, in this case, we can still calculate the PHY transmission time τ_PHY^i of the i-th PPDU, since we can acquire the PPDU size and transmission rate from the sniffed trace.
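The success-case bookkeeping above can be sketched as follows. This is a minimal sketch, not the paper's tooling: the timestamp values are hypothetical, the SIFS/DIFS constants assume a 5 GHz OFDM PHY, and the ACK airtime is passed in explicitly since it depends on the rate.

```python
# Sketch: recover per-PPDU contention interval and PHY time from a
# sniffed trace of back-to-back frame exchange sequences (FESs).
# Assumed interval values (5 GHz OFDM PHY); adjust for other PHYs.
SIFS_US = 16.0
DIFS_US = 34.0

def phy_tx_time(t_tx_i, t_ack_i):
    """PHY transmission time of the i-th PPDU: T_ack^i - T_tx^i - SIFS."""
    return t_ack_i - t_tx_i - SIFS_US

def contention_interval(t_tx_next, t_ack_i, ack_airtime_us):
    """Contention interval of the (i+1)-th PPDU (success case):
    T_tx^{i+1} - T_ack^i - ACK - DIFS."""
    return t_tx_next - t_ack_i - ack_airtime_us - DIFS_US

# Hypothetical timestamps (microseconds) read from a sniffed trace:
t_tx_i, t_ack_i, t_tx_next = 0.0, 1216.0, 1550.0
print(phy_tx_time(t_tx_i, t_ack_i))
print(contention_interval(t_tx_next, t_ack_i, ack_airtime_us=32.0))
```

The same subtraction logic extends to the failure case once τ_PHY is computed from the PPDU size and rate.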
Therefore, the contention interval of the (i+1)-th PPDU can be calculated as T_tx^{i+1} − T_tx^i − τ_PHY^i − SIFS − DIFS.

B Limitation of Priority-Based IEEE 802.11 Contention Control

The IEEE 802.11e standard [45] proposed Enhanced Distributed Channel Access (EDCA), defining several Access Category (AC) queues with different CW_min and CW_max values to accommodate various QoS requirements. Specifically, there are the BK (Background) and BE (Best Effort) queues with CW_min = 15, CW_max = 1023 (differentiated by AIFSN), the VI (Video) queue with CW_min = 7, CW_max = 15, and the VO (Voice) queue with CW_min = 3, CW_max = 7. While the BE queue is the default, adopting AC queues with higher priority lowers the CW value and occupies more transmission chances. However, with the rapid development of RTC applications, it is now common to have multiple RTC sessions in the same wireless environment. When multiple high-priority flows contend for transmission chances in the same channel, the contention level can be severely intensified, leading to more collisions. We demonstrate this by repeating the experiment in § 6.1.1, with N (N = 2, 4, 6) iperf flows saturating the link with the VI queue. As shown in Fig. 22, with competing flows from VI queues, the PPDU transmission delay increases significantly even with N = 2, compared to the BE queue (the 99.99th-percentile delay is 56 ms for the BE queue, as shown in Fig. 10a). Accordingly, the MAC throughput within 100 ms intervals exhibits a more unsteady pattern, with a 19% starvation rate when N = 4 (the starvation rate for the BE queue when N = 4 is 4%, as shown in Fig. 11b).

(a) Transmission success.
(b) Transmission failure.

Figure 21: Illustration of PPDU contention interval calculation.

Figure 22: Performance of the VI AC queue with N competing flows: (a) PPDU transmission delay; (b) MAC throughput.

Figure 23: Influence of hidden terminals on BLADE with (a) RTS/CTS disabled and (b) RTS/CTS enabled.

C Additional Evaluation Results

C.1 Parameter Sensitivity

We evaluate the parameter sensitivity of BLADE by repeating the experiment in § 6.1.1 with N = 4 and different parameter values. As shown in Tab. 5, changes in parameter values lead to negligible performance shifts compared to the default configuration. Therefore, BLADE is robust and not sensitive to its parameters.

D Detailed Anatomy of Packet Delivery Droughts

This appendix expands the mechanism summary in § 3.2.
We provide (i) ns-3 simulation evidence that collisions increase retransmissions under multi-AP contention, (ii) controlled AP experiments validating the same effect in practice, and (iii) a packet-level example and delay decomposition illustrating how countdown freezes amplify contention intervals.

Table 5: Parameter sensitivity of BLADE.

Variant        | Avg. MAC Throughput (Mbps) | 50/95/99/99.9/99.99th PPDU TX Delay (ms)
BLADE Default  | 48.5 | 9.8/21.0/26.7/34.6/42.1
M_inc = 250    | 48.1 | 9.8/21.2/27.3/35.2/43.1
M_inc = 125    | 48.6 | 9.7/21.0/26.6/34.3/42.4
M_dec = 0.85   | 48.5 | 9.7/21.4/27.7/36.5/44.9
M_dec = 0.75   | 48.1 | 9.7/21.9/28.6/37.8/46.4
A_inc = 10     | 48.2 | 9.8/21.2/26.9/35.0/42.1
A_inc = 30     | 48.9 | 9.6/21.7/28.2/37.1/46.1
A_fail = 10    | 48.6 | 9.7/21.2/27.1/35.3/42.5
A_fail = 20    | 48.7 | 9.7/21.7/28.1/37.7/44.4

To understand why packets fail to be delivered within measurement intervals, we conducted systematic ns-3 simulations, which accurately model Wi-Fi's CSMA/CA behavior [33, 34]. We deploy N Wi-Fi APs contending in the same channel, each transmitting iperf flows to saturate the link. Our analysis reveals two key factors that lead to packet delivery droughts.

First, packet retransmissions significantly extend the total delivery time, as each failure requires an additional transmission attempt. Fig. 26 demonstrates how channel contention directly impacts retransmission frequency: with 2 competing devices (N = 2), almost all packets are delivered successfully on the first attempt. However, as contention increases, packets require more retransmission attempts. When eight devices compete (N = 8), 34% of packets need at least one retransmission, with some requiring up to 6 retries. This means a single packet can require multiple transmission attempts before successful delivery, substantially extending its delivery time.
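The binary exponential backoff that governs these retries can be sketched as follows. This is an illustrative sketch under assumed defaults (BE-queue CW_min = 15, CW_max = 1023, 9 µs OFDM slot), not the simulator's code.

```python
# Sketch: contention window growth across retransmission attempts
# under 802.11 binary exponential backoff (BE queue defaults assumed).
CW_MIN, CW_MAX = 15, 1023
SLOT_US = 9  # OFDM slot time, assumed

def cw_at_attempt(k):
    """CW used for the k-th transmission attempt (k = 0 is the first try).
    CW grows as (CW + 1) * 2 - 1 after each failure, capped at CW_MAX."""
    cw = CW_MIN
    for _ in range(k):
        cw = min((cw + 1) * 2 - 1, CW_MAX)
    return cw

for k in range(7):
    cw = cw_at_attempt(k)
    # Expected backoff is cw/2 slots if the countdown never freezes;
    # the measurements below show freezes dominate under contention.
    print(k, cw, cw / 2 * SLOT_US)
```

The window thus grows 15 → 31 → 63 → ... → 1023, which is the doubling the next paragraph analyzes.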
Second, and more critically, each retransmission triggers a vicious cycle due to Wi-Fi's exponential backoff mechanism. When a transmission fails, the device doubles its contention window, effectively reducing its ability to compete for channel access compared to devices with smaller windows. Fig. 27 demonstrates this effect by tracking contention intervals across successive retransmission attempts (with N = 6). While the first transmission attempt experiences relatively short contention intervals, each subsequent attempt faces progressively longer delays due to the enlarged contention window. By the sixth retransmission, over 60% of PPDUs experience contention intervals exceeding 200 ms. As a result, the transmission delay for each PPDU (i.e., the frame exchange sequence duration in Fig. 2) increases significantly with the number of competing flows N, as shown in Fig. 28. This combination of frequent retransmissions and extended contention intervals explains the packet delivery droughts we observed in production networks.

Figure 24: L(MAR) dynamics with respect to different values of MAR and η under different transmitter numbers N (panels: N = 2, 4, 8, 16, 32, 64). The blue line illustrates the optimal MAR value for each η.

Figure 25: Comparison of the convergence speed of (a) traditional AIMD and (b) BLADE's HIMD (two Wi-Fi devices with initial CW = 15 and 300).

Figure 26: PPDU retransmission times with N competing flows.

Figure 27: PPDU backoff time at the n-th transmission attempt with N = 6.

D.0.1 Validating Simulation Results with Wi-Fi APs

To complement our simulation findings and validate them in real-world conditions, we conducted controlled experiments using commercial Wi-Fi APs. While our simulations provide insights into the fundamental relationship between contention and packet delivery, real-world factors such as channel dynamics and interference could affect these behaviors. We set up a testbed using Xiaomi AX3600 Wi-Fi APs in a typical office environment, where they experience natural channel contention from existing network traffic. Through these experiments, we aim to verify whether the packet delivery patterns and retransmission behaviors observed in simulations manifest similarly in practice.
In our experiments, we established a saturated link between the AP and a client device (STA) using iperf, allowing us to observe the system under consistent load. By analyzing air-sniffed traces, we calculated the precise PHY transmission delay and contention interval for each PPDU (detailed methodology in § A). This controlled setting enabled us to dissect the transmission process and identify the key factors leading to packet delivery drought.

Figure 28: PPDU transmission delay with N competing flows.

Figure 29: Contention interval and PHY latency distribution for each Wi-Fi PPDU.

Figure 30: Lifetime of a single PPDU (in red). Green triangles illustrate competing traffic from other devices.

An Example of Extended Packet Delivery Time. Fig. 30 shows how a single packet's delivery time can extend to 75.9 ms, orders of magnitude longer than expected. This significant extension stems from two factors. First, after each collision, the PPDU requires a retransmission attempt, and our example packet needs multiple retransmissions. Second, during each retransmission attempt, the contention interval becomes severely extended. While the doubled contention window only increases the nominal backoff from CW = 15 (at most 135 µs) to CW = 31 (at most 279 µs), the actual contention intervals stretch to 43.5 ms and 25.5 ms because other devices (green triangles) repeatedly gain channel access during the countdown process. Each time another device transmits, our PPDU must freeze its countdown. Through this combination of multiple retransmission attempts and extended contention intervals, what should be a quick packet delivery becomes a 75.9 ms process.
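The gap between the nominal backoff budget and the measured intervals above can be checked with quick arithmetic; a sketch assuming the 9 µs OFDM slot time:

```python
# Nominal worst-case backoff: CW slots of 9 us each, with no freezes.
SLOT_US = 9

def max_backoff_us(cw):
    """Upper bound on backoff airtime if the countdown never freezes."""
    return cw * SLOT_US

print(max_backoff_us(15))  # first attempt, CW = 15
print(max_backoff_us(31))  # after one collision, CW = 31

# Measured contention intervals for the packet in Fig. 30 (ms):
measured_ms = [43.5, 25.5]
# Countdown freezes make the observed intervals roughly two orders of
# magnitude larger than the nominal bound above.
print(measured_ms[0] * 1000 / max_backoff_us(31))
```

The ratio makes the point of the example concrete: the contention window itself stays tiny; it is the freezes, not the window size, that inflate the delay.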
Statistical Analysis of Contention Intervals. To quantify this phenomenon, we analyzed the distribution of both PHY transmission times and contention intervals across all PPDUs (Fig. 29). The PHY transmission time—the actual time spent transmitting data over the air, as shown in Fig. 2—remains predictably brief (< 5 ms at the 99.99th percentile) due to fixed Wi-Fi hardware constraints. In stark contrast, contention intervals, representing time spent competing for channel access, exhibit alarming variability. While their median stays below 1 ms, the tail extends dramatically, exceeding 200 ms at the 99.99th percentile. This means Wi-Fi devices spend orders of magnitude more time competing for transmission opportunities than actually transmitting data.

Takeaway. Our real-world measurements validate that packet delivery drought stems from both retransmission attempts and extended contention intervals, with devices spending up to 200 ms competing for channel access compared to just 5 ms for actual data transmission.

E MAR-Driven CW Control Algorithm

The detailed pseudo-code of the MAR-driven CW control algorithm is shown in Alg. 1.

F Target MAR Analysis

The target microscopic access rate MAR_tar plays an essential role in BLADE's stable-state control policy. Here, we analyze the impact of MAR_tar and discuss the criteria for selecting its optimal value.

F.1 Inverse Proportion

We first analyze the relationship between the CW value and the microscopic access rate MAR. For a transmitter i with contention window CW_i, the probability τ_i of attempting a transmission at any given transmission chance (highlighted in red in Fig. 9) is the probability that its random backoff timer reaches zero at that moment:

τ_i = CW_i / (Σ_{k=1}^{CW_i} k) = 2 / (CW_i + 1)    (7)

In the stable state, the contention windows of all transmitters converge to the same value CW (i.e., τ_i = τ = 2 / (CW + 1)).
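The closed form in Eqn. 7 can be sanity-checked numerically; a quick sketch:

```python
# Verify CW / (sum of 1..CW) == 2 / (CW + 1) for a few window sizes,
# i.e. the two forms of Eqn. 7 agree.
def tau_exact(cw):
    return cw / sum(range(1, cw + 1))

def tau_closed(cw):
    return 2 / (cw + 1)

for cw in (15, 31, 63, 1023):
    assert abs(tau_exact(cw) - tau_closed(cw)) < 1e-12

print(tau_closed(15))  # a device with CW = 15 attempts on 1/8 of chances
```

The identity follows from Σ_{k=1}^{CW} k = CW (CW + 1) / 2.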
With N transmitters, the probabilities of a transmission chance being idle, occupied by a successful transmission, or resulting in a collision are as follows:

P_i = (1 − τ)^N,  P_s = N τ (1 − τ)^{N−1},  P_c = 1 − P_i − P_s    (8)

The microscopic access rate MAR represents the probability that a transmission chance is used:

MAR = 1 − P_i = 1 − (1 − τ)^N ≈ N τ = 2N / (CW + 1)  (for τ ≪ 1)    (9)

Since CW values are typically much larger than 1, we apply a first-order approximation in Eqn. 9. This shows that, in the stable state, the microscopic access rate MAR is inversely proportional to the converged contention window value CW.

F.2 Robustness

Here, we discuss the selection of the target MAR. In the stable state, micro-level bandwidth fairness is achieved as all transmitters' CW values converge to the same value. Therefore, our target MAR aims to maximize the overall transmission bandwidth. Suppose the average PPDU size is S, and the average time of a successful transmission, a collided transmission, and a slot is T_t, T_c, and T_s, respectively. Note that BLADE's design principle and control logic do not rely on these assumptions. Similar to the derivation in [28], the overall throughput can be expressed as:

Thp = P_s S / (P_s T_t + P_c T_c + P_i T_s) = S / (T_t + ((1 − P_i − P_s) η + P_i) / P_s · T_s)    (10)

where η = T_c / T_s. To maximize Thp, combined with Eqn. 8 and Eqn. 9, we only have to minimize the cost function:

L(MAR) = ((1 − P_i − P_s) η + P_i) / P_s = (N − MAR) / N · ((η − 1) MAR + 1) / (MAR (1 − MAR))    (11)

The optimal value MAR_opt is determined by N and η. However, since MAR < 1 and N ≥ 2, MAR has a negligible effect on the first term (N − MAR) / N. As a result, MAR_opt is almost independent of the number of transmitters N. Next, consider the second term ((η − 1) MAR + 1) / (MAR (1 − MAR)).
Taking the derivative of this term and setting it equal to zero yields:

MAR_opt = 1 / (√η + 1)    (12)

Since η represents the number of time slots occupied by a collided transmission, its average value depends on the PHY transmission time of each PPDU, which in turn is determined by the PPDU size and PHY transmission rate. In the 802.11ax (Wi-Fi 6) standard, η can range from 20 to over 500. More importantly, although the optimal value of MAR is primarily determined by η, the cost function L(MAR) is relatively insensitive to changes in MAR. We illustrate this by plotting L(MAR) against different values of MAR and η in a heatmap (Fig. 24) with increasing N values. As MAR deviates from MAR_opt, the change in L(MAR) is minimal, and this pattern does not change as N increases. Therefore, BLADE is robust to the selection of MAR_tar, as long as it remains within a "safe zone" (± 0.1) around MAR_opt. As a result, based on Fig. 24, we set the default value of MAR_tar to 0.1.

G Coexistence with IEEE 802.11 Standard Contention Control

Similar to the experimental setup in § 6.1.1, we deploy four AP-STA pairs, with two pairs running BLADE and the other two pairs using the IEEE 802.11 standard contention control policy. The APs saturate the wireless link by sending iperf traffic to the STAs. As shown in Tab. 6, as MAR_tar increases from 0.1 to 0.5, BLADE becomes more competitive with the standard control policy and gains more MAC throughput with lower PPDU transmission delay. Therefore, BLADE can be configured with higher MAR_tar values when coexisting with the standard policy, while still ensuring convergence and fairness when all APs implement BLADE.

H Influence of Hidden Terminals

Since BLADE relies on the consensus signal from the same channel, it may be affected by the Hidden Terminal Problem [39].
To demonstrate this impact, we use a setup similar to § 6.1.1, but deploy the AP-STA pairs in three rooms arranged in a row. Transmitters at the two ends cannot hear each other, acting as hidden terminals, while transmitters in the middle can hear traffic from both ends, acting as exposed terminals. We show the PPDU transmission delay distribution for hidden and exposed terminals in Fig. 23. When RTS/CTS is disabled, both BLADE and the IEEE 802.11 standard contention control policy result in increased tail latency for exposed terminals, as they undergo more intensive contention. However, with RTS/CTS enabled, because BLADE counts CTS signals in its opportunity utilization rate calculation, BLADE shows much smaller differences between the delay distributions of exposed and hidden terminals. These results demonstrate that BLADE is compatible with the widely deployed RTS/CTS mechanism and is robust to the Hidden Terminal Problem when RTS/CTS is enabled.

I BLADE Contention Window Control Algorithm

We show the pseudo-code of the BLADE contention window control algorithm in Alg. 1.

J Observation Interval Analysis

Assume that all Wi-Fi devices in the current Wi-Fi environment are collaboratively working to stabilize the MAR around the target value MAR_tar. We represent the channel being busy by 1 and the channel being idle by 0. The busy/idle states of the channel over a period of time form an approximately i.i.d. sequence of Bernoulli random variables X_i with success probability MAR_tar = 0.15. The sample mean over N_obs observations is:

X̄_{N_obs} = (1 / N_obs) Σ_{i=1}^{N_obs} X_i.

For N_obs = 300, the standard error (SE) of X̄_300 is:

SE(X̄_300) = √(0.15 × 0.85 / 300) ≈ 0.0206.
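A quick numeric check of this standard error (a sketch; the helper name is ours):

```python
import math

def bernoulli_se(p, n):
    """Standard error of the sample mean of n i.i.d. Bernoulli(p) draws:
    sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

se = bernoulli_se(0.15, 300)
print(round(se, 4))  # matches the ~0.0206 quoted above
```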
By the Chernoff bound for binomial distributions, the probability of a deviation exceeding δ satisfies:

    P(|X̄_{N_obs} − MAR_tar| ≥ δ) ≤ 2 exp(−N_obs δ² / (3 MAR_tar (1 − MAR_tar))).

For N_obs = 300 and δ = 0.02:

    P(|X̄_300 − 0.15| ≥ 0.02) ≤ 2e^{−0.314} ≈ 1.462%.

This confirms that the estimation error remains negligible with high probability. Thus, N_obs = 300 is sufficient.

| Metric (BLADE / IEEE)       | MAR_tar = 0.1 | MAR_tar = 0.25 | MAR_tar = 0.35 | MAR_tar = 0.5 |
|-----------------------------|---------------|----------------|----------------|---------------|
| Avg. MAC Throughput (Mbps)  | 2.2 / 94.1    | 21.8 / 60.0    | 28.1 / 52.5    | 32.0 / 43.9   |
| 50th PPDU TX Delay (ms)     | 224.8 / 4.6   | 21.3 / 6.5     | 16.1 / 7.2     | 13.7 / 7.9    |
| 95th PPDU TX Delay (ms)     | 491.7 / 11.3  | 48.6 / 21.6    | 38.9 / 25.1    | 38.9 / 25.1   |
| 99th PPDU TX Delay (ms)     | 634.1 / 17.9  | 63.9 / 38.8    | 52.9 / 48.5    | 52.9 / 48.5   |
| 99.9th PPDU TX Delay (ms)   | 888.2 / 34.9  | 88.8 / 85.7    | 72.2 / 112.6   | 72.2 / 112.6  |

Table 6: The performance of BLADE coexisting with the IEEE 802.11 standard contention control policy.

K Collision Probability in Wi-Fi Networks

Consider a scenario with N Wi-Fi devices, where each device has a transmission probability of τ in an arbitrary time slot and the transmission queue of each device remains non-empty. The collision probability, denoted ρ, can be expressed as:

    ρ = 1 − (1 − τ)^(N−1).    (13)

Assume all Wi-Fi devices operate in the BE (Best Effort) queue, where the contention window (CW) doubles from CW_min to CW_max after each retransmission, up to a maximum of r retransmissions. Suppose x transmissions use CW_min, and x·ρ^i transmissions use CW_min·2^i for 0 ≤ i ≤ r. The probability of transmitting with CW_min·2^i is then given by:

    P_i = x·ρ^i / Σ_{j=0}^{r} x·ρ^j = ρ^i / Σ_{j=0}^{r} ρ^j.    (14)

Thus, the transmission probability τ can be derived as:

    τ = Σ_{i=0}^{r} 2·P_i / (CW_min·2^i).    (15)

By solving Eqn. 13, Eqn. 14, and Eqn.
15 simultaneously, the solution for ρ is obtained numerically using the bisection method within the range (0, 1), ensuring convergence to the unique solution for each N. Fig. 31 illustrates the variation in collision probability as the number of co-channel Wi-Fi devices increases. The analysis assumes that all Wi-Fi devices operate under the BE queue and maintain continuously non-empty transmission queues. The results indicate that when the number of co-channel Wi-Fi devices reaches 10, the collision probability exceeds 50%, highlighting the significant impact of device density on network performance.

L When MAR is Fixed, the Collision Probability is Constrained Below MAR

Consider a scenario where N Wi-Fi devices continuously transmit data packets. Assuming that the contention window (CW) value of each device is ω − 1, the attempt probability τ (i.e., the probability that any given Wi-Fi device completes its random backoff and begins transmission in an arbitrary slot-time [46]) can be expressed as:

    τ = ω / (ω² / 2) = 2 / ω.    (16)

Figure 31: Collision probability vs. the number of co-channel Wi-Fi devices (when the transmission queue remains non-empty).

The probability of a collision occurring when a Wi-Fi device attempts transmission, denoted ρ, is given by Eqn. 13. The MAR is defined as the probability that the channel is in a busy state, i.e., that at least one Wi-Fi device is transmitting:

    MAR = 1 − (1 − τ)^N.    (17)

Given that τ = 2/ω ∈ (0, 1) and is relatively close to zero, it follows that:

    MAR = 1 − (1 − τ)^N > 1 − (1 − τ)^(N−1) = ρ.    (18)

This implies that, once the MAR is determined, the collision probability remains stable at a level below the MAR.

Algorithm 1: BLADE contention window control.
Parameters:
    N_obs, MAR_tar, MAR_max            ▷ Default 300, 0.1, 0.35
    CW_min, CW_max                     ▷ Default 15, 1023
    M_inc, M_dec, A_inc, A_fail        ▷ Default 500, 0.95, 15, 5
Initialization:
    N_idle ← 0, N_tx ← 0, first_rtx ← True, CW ← CW_min, CW_fail ← CW
Func OnNewWNICState(state, duration):
    if state = IDLE then N_idle ← N_idle + duration / slot_time
    if state = BUSY then N_tx ← N_tx + 1
Func OnACK():                          ▷ Stable Control Policy
    CW ← CW_fail                       ▷ Restore the CW at the previous failure
    if N_idle + N_tx < N_obs then return   ▷ Not enough samples
    MAR ← N_tx / (N_tx + N_idle)
    if MAR > MAR_tar then
        CW ← CW + CW · max{0, MAR − MAR_max}
               + M_inc · (min{MAR, MAR_max} − MAR_tar) + A_inc
    else
        CW ← min{ M_dec − (1 − M_dec)(CW − CW_min) / (CW_max − CW_min),
                  2·MAR / (MAR_tar + MAR) } × CW
    N_idle ← 0, N_tx ← 0, CW_fail ← CW, first_rtx ← True
Func OnACKFailure():                   ▷ Fast Recovery from Collision
    if first_rtx then
        CW_fail ← CW + A_fail          ▷ Increase and store CW
        CW ← CW_fail / 2               ▷ Accelerated retransmission
        first_rtx ← False              ▷ Only accelerate once
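For readers who prefer an executable form, the control loop of Alg. 1 can be rendered as a short Python sketch. The event interface (a `BladeCW` class with per-event methods, durations assumed to already be expressed in slot-times) and the explicit clamps at CW_min/CW_max are our simplifications; parameter names and defaults follow the pseudocode.

```python
class BladeCW:
    """Minimal sketch of the BLADE contention-window control loop (Alg. 1)."""

    def __init__(self, n_obs=300, mar_tar=0.1, mar_max=0.35,
                 cw_min=15, cw_max=1023,
                 m_inc=500, m_dec=0.95, a_inc=15, a_fail=5):
        self.n_obs, self.mar_tar, self.mar_max = n_obs, mar_tar, mar_max
        self.cw_min, self.cw_max = cw_min, cw_max
        self.m_inc, self.m_dec = m_inc, m_dec
        self.a_inc, self.a_fail = a_inc, a_fail
        self.n_idle, self.n_tx = 0.0, 0   # idle slots / busy events observed
        self.first_rtx = True
        self.cw = float(cw_min)
        self.cw_fail = self.cw

    def on_new_wnic_state(self, state, duration_slots=0):
        """Feed channel state transitions; durations already in slot-times."""
        if state == "IDLE":
            self.n_idle += duration_slots
        elif state == "BUSY":
            self.n_tx += 1

    def on_ack(self):
        """Stable control policy, run on each successful transmission."""
        self.cw = self.cw_fail            # restore CW at the previous failure
        if self.n_idle + self.n_tx < self.n_obs:
            return                        # not enough samples yet
        mar = self.n_tx / (self.n_tx + self.n_idle)
        if mar > self.mar_tar:            # channel too busy: grow CW
            self.cw = min(self.cw_max,    # clamp is our added safeguard
                          self.cw
                          + self.cw * max(0.0, mar - self.mar_max)
                          + self.m_inc * (min(mar, self.mar_max) - self.mar_tar)
                          + self.a_inc)
        else:                             # under-utilized: shrink CW
            scale = min(self.m_dec
                        - (1 - self.m_dec) * (self.cw - self.cw_min)
                          / (self.cw_max - self.cw_min),
                        2 * mar / (self.mar_tar + mar))
            self.cw = max(self.cw_min, scale * self.cw)  # clamp added
        self.n_idle, self.n_tx = 0.0, 0
        self.cw_fail, self.first_rtx = self.cw, True

    def on_ack_failure(self):
        """Fast recovery from collision."""
        if self.first_rtx:
            self.cw_fail = self.cw + self.a_fail  # increase and store CW
            self.cw = self.cw_fail / 2            # accelerated retransmission
            self.first_rtx = False                # only accelerate once
```

As a usage example, feeding an entirely idle 300-slot window keeps CW at CW_min, a window with MAR = 0.2 (> MAR_tar) grows CW from 15 to 80 under the default parameters, and a first ACK failure then halves the stored CW_fail = 85 to 42.5 for the accelerated retransmission.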
