Buffer Sizing for 802.11 Based Networks

Tianji Li, Douglas Leith, David Malone
Hamilton Institute, National University of Ireland Maynooth, Ireland
Email: {tianji.li, doug.leith, david.malone}@nuim.ie

Abstract—We consider the sizing of network buffers in 802.11 based networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed size buffers in 802.11 networks inevitably leads to either undesirable channel under-utilization or unnecessarily high delays. We present two novel dynamic buffer sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a production WLAN and a lab testbed.

Index Terms—IEEE 802.11, IEEE 802.11e, Wireless LANs (WLANs), Medium access control (MAC), Transmission control protocol (TCP), Buffer Sizing, Stability Analysis.

I. INTRODUCTION

In communication networks, buffers are used to accommodate short-term packet bursts so as to mitigate packet drops and to maintain high link efficiency. Packets are queued if too many packets arrive in a sufficiently short interval of time during which a network device lacks the capacity to process all of them immediately. For wired routers, the sizing of buffers is an active research topic ([31], [5], [27], [32], [9]). The classical rule of thumb for sizing wired buffers is to set buffer sizes to be the product of the bandwidth and the average delay of the flows utilizing the link, namely the Bandwidth-Delay Product (BDP) rule [31]. See Section VII for discussion of other related work. Surprisingly, however, the sizing of buffers in wireless networks (especially those based on 802.11/802.11e) appears to have received very little attention within the networking community. Exceptions include the recent work in [21] relating to buffer sizing for voice traffic in 802.11e [2] WLANs, work in [23] which considers the impact of buffer sizing on TCP upload/download fairness, and work in [29] which is related to 802.11e parameter settings.

Buffers play a key role in 802.11/802.11e wireless networks. To illustrate this, we present measurements from the production WLAN of the Hamilton Institute, which show that the current state of the art, which makes use of fixed size buffers, can easily lead to poor performance. The topology of this WLAN is shown in Fig. 23; see the Appendix for further details of the configuration used. We recorded RTTs before and after one wireless station started to download a 37MByte file from a web-site. Before starting the download, we pinged the access point (AP) from a laptop 5 times, each time sending 100 ping packets. The RTTs reported by the ping program were between 2.6-3.2 ms. However, after starting the download and allowing it to continue for a while (to let the congestion control algorithm of TCP probe for the available bandwidth), the RTTs to the AP increased dramatically to 2900-3400 ms. During the test, normal services such as web browsing experienced obvious pauses/lags on wireless stations using the network.

(This work is supported by the Irish Research Council for Science, Engineering and Technology and Science Foundation Ireland Grant 07/IN.1/I901.)
Closer inspection revealed that the buffer occupancy at the AP exceeded 200 packets most of the time and reached 250 packets from time to time during the test. Note that the increase in measured RTT can be almost entirely attributed to the resulting queuing delay at the AP, and indicates that a more sophisticated approach to buffer sizing is required. Indeed, using the A* algorithm proposed in this paper, the RTTs observed when repeating the same experiment fall to only 90-130 ms. This reduction in delay does not come at the cost of reduced throughput, i.e., the measured throughput with the A* algorithm and the default buffers is similar.

In this paper, we consider the sizing of buffers in 802.11/802.11e ([1], [2]) based WLANs. We focus on single-hop WLANs since these are rapidly becoming ubiquitous as the last hop on home and office networks as well as in so-called "hot spots" in airports and hotels, but note that the proposed schemes can be easily applied in multi-hop wireless networks. Our main focus in this paper is on TCP traffic since this continues to constitute the bulk of traffic in modern networks (80-90% [35] of current Internet traffic and also of WLAN traffic [28]), although we extend consideration to UDP traffic at various points during the discussion and also during our experimental tests.

Compared to sizing buffers in wired routers, a number of fundamental new issues arise when considering 802.11-based networks. Firstly, unlike wired networks, wireless transmissions are inherently broadcast in nature, which leads to the packet service times at different stations in a WLAN being strongly coupled. For example, the basic 802.11 DCF ensures that the wireless stations in a WLAN win a roughly equal number of transmission opportunities [19]; hence, the mean packet service time at a station is an order of magnitude longer when 10 other stations are active than when only a single station is active. Consequently, the buffering requirements at each station also differ, depending on the number of other active stations in the WLAN. In addition to variations in the mean service time, the distribution of packet service times is also strongly dependent on the WLAN offered load. This directly affects the burstiness of transmissions and so buffering requirements (see Section III for details). Secondly, wireless stations dynamically adjust the physical transmission rate/modulation used in order to regulate non-congestive channel losses. This rate adaptation, whereby the transmit rate may change by a factor of 50 or more (e.g. from 1Mbps to 54Mbps in 802.11a/g), may induce large and rapid variations in required buffer sizes. Thirdly, the ongoing 802.11n standards process proposes to improve throughput efficiency by the use of large frames formed by aggregation of multiple packets ([3], [18]). This acts to couple throughput efficiency and buffer sizing in a new way, since the latter directly affects the availability of sufficient packets for aggregation into large frames. It follows from these observations that, amongst other things, there does not exist a fixed buffer size which can be used for sizing buffers in WLANs. This leads naturally to consideration of dynamic buffer sizing strategies that adapt to changing conditions.
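To see roughly why the first of these couplings arises, here is a back-of-envelope calculation of ours (an idealization, not an analysis from the paper): if the DCF shares transmission opportunities approximately round-robin among n+1 saturated stations, and one transmission (including contention overhead) takes on average $\bar{T}_{tx}$ seconds, then a tagged station's mean per-packet service time is approximately

$$E[T_{serv}] \approx (n+1)\,\bar{T}_{tx},$$

so going from n = 0 to n = 10 competing stations scales the mean service time by roughly a factor of 11, consistent with the order-of-magnitude increase noted above.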
In this paper we demonstrate the major performance costs associated with the use of fixed buffer sizes in 802.11 WLANs (Section III) and present two novel dynamic buffer sizing algorithms (Sections IV and V) that achieve significant performance gains. The stability of the feedback loop induced by the adaptation is analyzed, including when cascaded with the feedback loop created by TCP congestion control action. The proposed dynamic buffer sizing algorithms are computationally cheap and suited to implementation on standard hardware. Indeed, we have implemented the algorithms in both the NS-2 simulator and the Linux MadWifi driver [4]. In addition to extensive simulation results, we also present experimental measurements demonstrating the utility of the proposed algorithms in a testbed located in an office environment and with realistic traffic. The latter includes a mix of TCP and UDP traffic, a mix of uploads and downloads, and a mix of connection sizes.

The remainder of the paper is organized as follows. Section II introduces the background of this work. In Section III, simulation results with fixed size buffers are reported to further motivate this work. The proposed algorithms are then detailed in Sections IV and V. Experiment details are presented in Section VI. After introducing related work in Section VII, we summarize our conclusions in Section VIII.

II. PRELIMINARIES

A. IEEE 802.11 DCF

IEEE 802.11a/b/g WLANs all share a common MAC algorithm called the Distributed Coordinated Function (DCF), which is a CSMA/CA based algorithm. On detecting the wireless medium to be idle for a period DIFS, each wireless station initializes a backoff counter to a random number selected uniformly from the interval [0, CW-1], where CW is the contention window. Time is slotted and the backoff counter is decremented in each slot that the medium is idle. An important feature is that the countdown halts when the medium is detected busy and only resumes after the medium is idle again for a period DIFS. On the counter reaching zero, a station transmits a packet. If a collision occurs (two or more stations transmit simultaneously), CW is doubled and the process is repeated. On a successful transmission, CW is reset to the value CW_min and a new countdown starts.
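As a concrete illustration of the countdown-and-double behaviour just described, the following is a minimal event-driven sketch in C. It is our simplification (single packet at a time, no DIFS/SIFS timing, no retry limit; medium_idle() and transmit_attempt() are assumed stubs), not the actual MAC state machine:

```c
#include <stdlib.h>

#define CW_MIN 32
#define CW_MAX 1024

/* Assumed stubs: whether the medium is idle this slot, and an
 * attempted transmission returning 1 on success (MAC ACK received). */
extern int medium_idle(void);
extern int transmit_attempt(void);

static int cw = CW_MIN;

/* One DCF transmission cycle for a single packet. */
void dcf_send(void)
{
    for (;;) {
        int backoff = rand() % cw;      /* uniform in [0, CW-1] */
        while (backoff > 0) {
            /* Countdown proceeds only in idle slots; it halts while the
             * medium is busy (and, in the real MAC, resumes only after
             * the medium has again been idle for DIFS). */
            if (medium_idle())
                backoff--;
        }
        if (transmit_attempt()) {
            cw = CW_MIN;                /* success: reset contention window */
            return;
        }
        if (cw < CW_MAX)
            cw *= 2;                    /* collision: double CW and retry */
    }
}
```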
B. IEEE 802.11e EDCA

The 802.11e standard extends the DCF algorithm (yielding the EDCA) by allowing the adjustment of MAC parameters that were previously fixed. In particular, the values of DIFS (called AIFS in 802.11e) and CW_min may be set on a per class basis for each station. While the full 802.11e standard is not implemented in current commodity hardware, the EDCA extensions have been widely implemented for some years.

TABLE I. MAC/PHY PARAMETERS USED IN SIMULATIONS, CORRESPONDING TO 802.11G
  T_SIFS (us): 10
  Idle slot duration (sigma) (us): 9
  Retry limit: 11
  Packet size (bytes): 1000
  PHY data rate (Mbps): 54
  PHY basic rate (Mbps): 6
  PLCP rate (Mbps): 6

[Fig. 1: WLAN topology used in simulations. Wired backhaul link bandwidth 100Mbps. MAC parameters of the WLAN are listed in Table I.]

C. Unfairness among TCP Flows

Consider a WLAN consisting of n client stations, each carrying one TCP upload flow. The TCP ACKs are transmitted by the wireless AP. In this case TCP ACK packets can easily be queued/dropped because the basic 802.11 DCF ensures that stations win a roughly equal number of transmission opportunities. Namely, while the data packets for the n flows have an aggregate n/(n+1) share of the transmission opportunities, the TCP ACKs for the n flows have only a 1/(n+1) share. Issues of this sort are known to lead to significant unfairness amongst TCP flows, but can be readily resolved using 802.11e functionality by treating TCP ACKs as a separate traffic class which is assigned higher priority [15]. With regard to throughput efficiency, the algorithms in this paper perform similarly when the DCF is used and when TCP ACKs are prioritized using the EDCA as in [15]. Per flow behavior does, of course, differ due to the inherent unfairness in the DCF, and we therefore mainly present results using the EDCA to avoid flow-level unfairness.

D. Simulation Topology

In Sections III, IV and V-G, we use the simulation topology shown in Fig. 1, where the AP acts as a wireless router between the WLAN and the Internet. Upload flows originate from stations in the WLAN on the left and are destined to wired host(s) in the wired network on the right. Download flows are from the wired host(s) to stations in the WLAN. We ignore differences in wired bandwidth and delay from the AP to the wired hosts, which can cause TCP unfairness issues on the wired side (an orthogonal issue), by using the same wired-part RTT for all flows. Unless otherwise stated, we use the IEEE 802.11g PHY parameters shown in Table I, and the wired backhaul link bandwidth is 100Mbps with RTT 200ms. For TCP traffic, the widely deployed TCP Reno with SACK extension is used. The advertised window size is set to be 4096 packets (each with a payload of 1000 bytes), which is the default size in current Linux kernels. The maximum value of the TCP smoothed RTT measurements (sRTT) is used as the measure of the delay experienced by a flow.

[Fig. 2: Measured distribution of per packet MAC service time; panels: (a) 2 stations, (b) 12 stations. Solid vertical lines mark the mean values of the distributions. Physical layer data/basic rates are 11/1Mbps.]

III. MOTIVATION AND OBJECTIVES

Wireless communication in 802.11 networks is time-varying in nature, i.e., the mean service time and the distribution of service time at a wireless station vary in time. The variations are primarily due to (i) changes in the number of active wireless stations and their load (i.e. offered load on the WLAN) and (ii) changes in the physical transmit rate used (i.e. in response to changing radio channel conditions). In the latter case, it is straightforward to see that the service time increases/decreases as low/high physical layer rates are used. To see the impact of offered load on the service time at a station, Fig. 2 plots the measured distribution of the MAC layer service time when there are 2 and 12 stations active. It can be seen that the mean service time changes by over an order of magnitude as the number of stations varies. Observe also from these measured distributions that there are significant fluctuations in the service time for a given fixed load. This is a direct consequence of the stochastic nature of the CSMA/CA contention mechanism used by the 802.11/802.11e MAC.
This time-varying nature directly affects buffering requirements. Figure 3 plots link utilization (see Footnote 1) and max sRTT (propagation plus smoothed queuing delay) vs buffer size for a range of WLAN offered loads and physical transmit rates. We can make a number of observations.

First, as the physical layer transmit rate is varied from 1Mbps to 216Mbps, the minimum buffer size needed to ensure at least 90% throughput efficiency varies from about 20 packets to about 800 packets. No compromise buffer size exists that ensures both high efficiency and low delay across this range of transmit rates. For example, a buffer size of 80 packets leads to RTTs exceeding 500ms at 1Mbps (even when only a single station is active and so there are no competing wireless stations) and to throughput efficiency below 50% at 216Mbps. Note that the transmit rates in currently available draft 802.11n equipment already exceed 216Mbps (e.g. 300Mbps is supported by current Atheros chipsets) and the trend is towards still higher transmit rates. Even across the restricted range of transmit rates 1Mbps to 54Mbps supported by 802.11a/b/g, a buffer size of 50 packets is required to ensure throughput efficiency above 80%, yet this buffer size induces delays exceeding 1000 and 3000 ms at transmit rates of 11 and 1Mbps, respectively.

Second, delay is strongly dependent on the traffic load and the physical rates. For example, as the number of competing stations (marked as "uploads" in the figure) is varied from 0 to 10, for a buffer size of 20 packets and a physical transmit rate of 1Mbps the delay varies from 300ms to over 2000ms. This reflects the fact that the 802.11 MAC allocates available transmission opportunities equally on average amongst the wireless stations, and so the mean service time (and thus delay) increases with the number of stations. In contrast, at 216Mbps the delay remains below 500ms for buffer sizes up to 1600 packets.

Our key conclusion from these observations is that there exists no fixed buffer size capable of ensuring both high throughput efficiency and reasonable delay across the range of physical rates and offered loads experienced by modern WLANs. Any fixed choice of buffer size necessarily carries the cost of significantly reduced throughput efficiency and/or excessive queuing delays. This leads naturally to the consideration of adaptive approaches to buffer sizing, which dynamically adjust the buffer size in response to changing network conditions so as to ensure high utilization of the wireless link while avoiding unnecessarily long queuing delays.

(Footnote 1: Here the AP throughput percentage is the ratio between the actual throughput achieved using the buffer size shown on the x-axis and the maximum throughput achieved over the buffer sizes shown on the x-axis.)

[Fig. 3: Throughput efficiency and maximum smoothed round trip delays (max sRTT) for the topology in Fig. 1 when fixed size buffers are used; panels: (a) 1/1Mbps throughput, (b) 1/1Mbps delay, (c) 11/1Mbps throughput, (d) 11/1Mbps delay, (e) 54/6Mbps throughput, (f) 54/6Mbps delay, (g) 216/54Mbps throughput, (h) 216/54Mbps delay. Here, the AP throughput efficiency is the ratio between the download throughput achieved using the buffer sizes indicated on the x-axis and the maximum download throughput achieved using fixed size buffers. Rates before and after the '/' are the physical layer data and basic rates used. For the 216Mbps data, 8 packets are aggregated into each frame at the MAC layer to improve throughput efficiency in an 802.11n-like scheme. The wired RTT is 200 ms.]

IV. EMULATING BDP

We begin by considering a simple adaptive algorithm based on the classical BDP rule. Although this algorithm cannot take advantage of statistical multiplexing opportunities, it is of interest both for its simplicity and because it will play a role in the more sophisticated A* algorithm developed in the next section.
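As a reference point for what follows, the classical wired rule is simply buffer = rate x delay. A worked instance with illustrative numbers of ours (1000-byte packets, as in the simulations):

$$Q_{BDP} = \frac{C \times RTT}{\text{packet size}} = \frac{10 \times 10^6\ \text{bit/s} \times 0.2\ \text{s}}{8000\ \text{bit}} = 250\ \text{packets}.$$

The difficulty in a WLAN is that the effective rate C seen by a station is itself time-varying, which is what the eBDP algorithm below addresses by estimating the service rate online.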
As noted previously, and in contrast to wired networks, in 802.11 WLANs the mean service time is generally time-varying (dependent on the WLAN load and the physical transmit rate selected by a station). Consequently, there does not exist a fixed BDP value. However, we note that a wireless station can measure its own packet service times by direct observation, i.e., by recording the time between a packet arriving at the head of the network interface queue, t_s, and being successfully transmitted, t_e (which is indicated by correctly receiving the corresponding MAC ACK). Note that this measurement can be readily implemented in real devices, e.g. by asking the hardware to raise an interrupt on receipt of a MAC ACK, and incurs only a minor computational burden. Averaging these per packet service times yields the mean service time T_serv. To accommodate the time-varying nature of the mean service time, this average can be taken over a sliding window. In this paper, we consider the use of exponential smoothing,
Here, the AP throug hput effici ency is the ratio between the do wnload throughput achie ved using buf fer sizes indicat ed on the x-axi s and the maximum do wnload throughput achie ved using fixed size buf fers. Ra tes bef ore and after the ’/’ are used physica l layer data and basic rates. For the 216Mbps dat a, 8 packet s are aggre gated into each frame at the MA C layer t o impro ve t hroughput ef ficiency in an 802.11n-lik e scheme. The wired R TT is 2 00 ms. T serv ( k + 1) = (1 − W ) T serv ( k ) + W ( t e − t s ) to calculate a r unnin g average since this has the merit o f simplicity and statistical robustness (b y central limit argume nts). T he choice of smoothing parameter W in volves a trade- off between accommo dating time variations and en suring the accuracy of the estimate – this cho ice is con sidered in detail later . Giv en an online measurement of the mean service time T serv , the classical BDP ru le yields the following eBDP buf fer sizing strategy . Let T max be the target m aximum queuing delay . Noting that 1 /T serv is the mean service rate, we select buffer size Q eB DP accordin g to Q eB DP = min ( T max /T serv , Q eB DP max ) where Q eB DP max is th e u pper limit on buffer size. Th is effecti vely regulates the buffer size to equal the curren t mean BDP . The b uffer size d ecreases when the service rate falls and increases when the service r ate rises, so as to maintain an app roximately co nstant q ueuing dela y of T max seconds. W e may measure the flows ’ R TTs to d eriv e the value for T max in a similar w ay to me asuring the mean service rate, but in the examples presented h ere we simply use a fixed value of 200ms since this is an ap proxim ate upper bound on the R TT o f th e majority o f the cu rrent Intern et flows. W e note th at th e classical BDP rule is der i ved from the behavior o f TCP cong estion c ontrol (in p articular, th e reduc- tion of cwnd by half on p acket loss) and assumes a constant service rate and fluid-like packet arr i vals. Hence, fo r e xample, at low serv ice rate s the BDP r ule sugg ests use of extremely small buf fer sizes. Howev er , in add ition to accommo dating TCP beh avior , buffers have the ad ditional role o f ab sorbing short-term packet bursts and, in the case o f wire less lin ks, short-term fluctuation s in packet service times. It is these latter effects that lead to the steep drop -off in th rough put efficiency that can be observed in Fig. 3 when ther e are competing uplo ads (and so stochastic variations in packet service times due to channel conten tion, see Fig. 2.) plus small Algorithm 1 Drop tail operation of the eBDP algorithm . 1: Set the target queuing delay T max . 2: Set the over -provision para meter c . 3: for each inco ming packet p do 4: Calculate Q eB DP = min ( T max /T serv + c, Q eB DP max ) where T serv is f rom MA C Algor ithm 2. 5: if cu rrent queu e occupancy < Q eB DP then 6: Put p into queue 7: else 8: Drop p . 9: end if 10: end for buf fer sizes. W e therefore modif y th e eBDP update rule to Q eB DP = min ( T max /T serv + c, Q eB DP max ) where c is an over - provisioning amount to accommod ate short-term fluctuation s in service rate. Due to the complex nature of the serv ice time proc ess at a wireless station (which is cou pled to the traffic arrivals e tc at o ther stations in the WLAN) and o f the TCP traffic arriv al proce ss (where feedb ack creates co upling to the service time pr ocess), obtainin g an an alytic value f or c is intractab le. 
The effectiveness of this simple adaptive algorithm is illustrated in Fig. 4. Fig. 4(a) shows the buffer size and queue occupancy time histories when only a single station is active in a WLAN, while Fig. 4(b) shows the corresponding results when ten additional stations also contend for channel access. Comparing with Fig. 3(e), it can be seen that buffer sizes of 330 packets and 70 packets, respectively, are needed to yield 100% throughput efficiency, and eBDP selects buffer sizes which are in good agreement with these thresholds.

[Fig. 4: Histories of buffer size and buffer occupancy with the eBDP algorithm; panels: (a) 1 download, 0 uploads, (b) 1 download, 10 uploads. 54/6Mbps physical data/basic rates.]

In Fig. 5 we plot the throughput efficiency (measured as the ratio of the achieved throughput to that with a fixed 400-packet buffer) and the max smoothed RTT over a range of network conditions obtained using the eBDP algorithm. It can be seen that the adaptive algorithm maintains high throughput efficiency across the entire range of operating conditions. This is achieved while maintaining the latency approximately constant at around 400ms (200ms propagation delay plus T_max = 200ms queuing delay); the latency rises slightly with the number of uploads due to the over-provisioning parameter c used to accommodate stochastic fluctuations in the service rate.

[Fig. 5: Performance of the eBDP algorithm as the number of upload flows is varied; panels: (a) throughput, (b) delay. Data is shown for 1 and 10 download flows and 0, 2, 5, 10 uploads. Wired RTT 200ms. Here the AP throughput percentage is the ratio between the throughput achieved using the eBDP algorithm and that with a fixed buffer size of 400 packets (i.e. the maximum achievable throughput in this case).]
While T_max = 200ms is used as the target drain time in the eBDP algorithm, realistic traffic tends to consist of flows with a mix of RTTs. Fig. 6 plots the results as we vary the RTT of the wired backhaul link while keeping T_max = 200ms. We observe that the throughput efficiency is close to 100% for RTTs up to 200ms. For an RTT of 300ms, we observe a slight decrease in throughput when there is 1 download and 10 contending upload flows, which is to be expected since T_max is then less than the link delay and so the buffer is less than the BDP. This could be improved by measuring the average RTT instead of using a fixed value, but it is not clear that the benefit is worth the extra effort.

[Fig. 6: Performance of the eBDP algorithm as the RTT of the wired backhaul is varied; panels: (a) throughput, (b) delay. Data is shown for 1 and 10 downloads and 0, 10 uploads. Here the AP throughput percentage is the ratio between the throughput achieved using the eBDP algorithm and that with a fixed buffer size of 400 packets (i.e. the maximum achievable throughput in this case).]

We also observe that there is a difference between the max smoothed RTT with and without upload flows. The RTT in our setup consists of the wired link RTT, the queuing delays for TCP data and ACK packets, and the MAC layer transmission delays for TCP data and ACK packets. When there are no upload flows, TCP ACK packets can be transmitted with negligible queuing delays since they only have to contend with the AP. When there are upload flows, however, stations with TCP ACK packets have to contend with other stations sending TCP data packets as well. TCP ACK packets can therefore be delayed accordingly, which causes the increase in RTT observed in Fig. 6.

[Fig. 7: Convergence of the eBDP algorithm following a change in network conditions. One download flow. At time 200s the number of upload flows is increased from 0 to 10.]

Fig. 7 demonstrates the ability of the eBDP algorithm to respond to changing network conditions. At time 300s the number of uploads is increased from 0 to 10 flows. It can be seen that the buffer size quickly adapts to the changed conditions when the weight W = 0.001. This roughly corresponds to averaging over the last 1000 packets (see Footnote 2). When the number of uploads is increased at time 300s, it takes 0.6 seconds (the current throughput is 13.5Mbps, so t = 1000 * 8000 / (13.5 * 10^6) = 0.6) to send 1000 packets, i.e., the eBDP algorithm is able to react to network changes roughly on a timescale of 0.6 seconds.

(Footnote 2: As per [8], the current value is averaged over the last t observations for x% accuracy, where x = 1 - (1 - W)^t and t is the number of updates (which are packets in our case). When W = 0.001 and t = 1000 we have x = 0.64.)
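As a quick sanity check of Footnote 2 and the reaction-time estimate (our arithmetic, using the formula quoted from [8]):

$$x = 1 - (1 - W)^t = 1 - 0.999^{1000} \approx 1 - e^{-1} \approx 0.63,$$

matching the quoted 0.64 up to rounding, while at 13.5Mbps with 1000-byte packets the 1000 most heavily weighted updates span $1000 \times 8000 / (13.5 \times 10^6) \approx 0.59$ seconds, i.e., the 0.6 second reaction timescale above.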
V. EXPLOITING STATISTICAL MULTIPLEXING: THE A* ALGORITHM

While the eBDP algorithm is simple and effective, it is unable to take advantage of the statistical multiplexing of TCP cwnd backoffs when multiple flows share the same link. For example, it can be seen from Fig. 8 that while a buffer size of 338 packets is needed to maximize throughput with a single download flow, this falls to around 100 packets when 10 download flows share the link. However, in both cases the eBDP algorithm selects a buffer size of approximately 350 packets (see Figs. 4(a) and 9). It can be seen from Fig. 9 that, as a result, with the eBDP algorithm the buffer rarely empties when 10 flows share the link. That is, the potential exists to lower the buffer size without loss of throughput. In this section we consider the design of a measurement-based algorithm (the ALT algorithm) that is capable of taking advantage of such statistical multiplexing opportunities.

[Fig. 8: Impact of statistical multiplexing; panels: (a) throughput, (b) delay, with BDP = 338 and BDP/N^(1/2) = 102 marked. There are 1/10 downloads and no uploads. Wired RTT 200ms.]

[Fig. 9: Histories of buffer size and buffer occupancy with the eBDP algorithm when there are 10 downloads and no uploads.]

A. Adaptive Limit Tuning (ALT) Feedback Algorithm

Our objective is to simultaneously achieve both efficient link utilization and low delays in the face of stochastic time-variations in the service time. Intuitively, for efficient link utilization we need to ensure that there is a packet available to transmit whenever the station wins a transmission opportunity. That is, we want to minimize the time that the station buffer lies empty, which in turn can be achieved by making the buffer size sufficiently large (under fairly general traffic conditions, buffer occupancy is a monotonically increasing function of buffer size [13]). However, using large buffers can lead to high queuing delays, and to ensure low delays the buffer should be as small as possible. We would therefore like to operate with the smallest buffer size that ensures sufficiently high link utilization.

This intuition suggests the following approach. We observe the buffer occupancy over an interval of time. If the buffer rarely empties, we decrease the buffer size. Conversely, if the buffer is observed to be empty for too long, we increase the buffer size. Of course, further work is required to convert this basic intuition into a well-behaved algorithm suited to practical implementation. Not only do the terms "rarely", "too long" etc. need to be made precise, but we note that an inner feedback loop is created whereby the buffer size is adjusted depending on the measured link utilization, which in turn depends on the buffer size. This new feedback loop is in addition to the existing outer feedback loop created by TCP congestion control, whereby the offered load is adjusted based on the packet loss rate, which in turn is dependent on the buffer size. Stability analysis of these cascaded loops is therefore essential.

We now introduce the following Adaptive Limit Tuning (ALT) algorithm; the dynamics and stability of this algorithm are analyzed in later sections. Define a queue occupancy threshold q_thr, and let t_i(k) (referred to as the idle time) be the duration of time that the queue spends at or below this threshold in a fixed observation interval t, and t_b(k) (referred to as the busy time) be the corresponding duration spent above the threshold.
Note that t = t_i(k) + t_b(k), and that the aggregate amounts of idle/busy time t_i and t_b over an interval can be readily observed by a station. Also, the link utilization is lower bounded by t_b/(t_b + t_i). Let q(k) denote the buffer size during the k-th observation interval. The buffer size is then updated according to

$$q(k+1) = q(k) + a_1 t_i(k) - b_1 t_b(k), \qquad (1)$$

where a_1 and b_1 are design parameters. Pseudo-code for this ALT algorithm is given in Algorithm 3.

Algorithm 3 The ALT algorithm.
 1: Set the initial queue size, the maximum buffer size q_max and the minimum buffer size q_min.
 2: Set the increase step size a_1 and the decrease step size b_1.
 3: for every t seconds do
 4:   Measure the idle time t_i.
 5:   q_ALT = q_ALT + a_1 t_i - b_1 (t - t_i).
 6:   q_ALT = min(max(q_ALT, q_min), q_max).
 7: end for

This algorithm seeks to maintain a balance between the time t_i that the queue is idle and the time t_b that the queue is busy. That is, when a_1 t_i(k) = b_1 t_b(k), the buffer size is kept unchanged. When the idle time is larger, so that a_1 t_i(k) > b_1 t_b(k), the buffer size is increased. Conversely, when the busy time is large enough that a_1 t_i(k) < b_1 t_b(k), the buffer size is decreased. More generally, assuming q converges to a stationary distribution (we discuss this in more detail later), then in steady-state we have a_1 E[t_i] = b_1 E[t_b], i.e., E[t_i] = (b_1/a_1) E[t_b], and the mean link utilization is therefore lower bounded by

$$E\left[\frac{t_b}{t_i + t_b}\right] = \frac{E[t_b]}{t} = \frac{1}{1 + b_1/a_1}, \qquad (2)$$

where we have made use of the fact that t = t_i(k) + t_b(k) is constant. It can therefore be seen that choosing b_1/a_1 to be small ensures high utilization. Choosing values for the parameters a_1 and b_1 is discussed in detail in Section V-B, but we note here that the values a_1 = 10 and b_1 = 1 are found to work well and, unless otherwise stated, are used in this paper. With regard to the choice of observation interval t, this is largely determined by the time required to obtain accurate estimates of the queue idle and busy times. In the remainder of this paper we find a value of t = 1 second to be a good choice. It is prudent to constrain the buffer size q to lie between minimum and maximum values q_min and q_max. In the following, the maximum buffer size q_max and the minimum buffer size q_min are set to 1600 and 5 packets, respectively.
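The per-interval update of Algorithm 3 is small enough to state completely in C. The sketch below is our illustration (the idle-time measurement measure_idle_time() is an assumed stub; in a driver it would come from timestamping queue occupancy transitions across q_thr):

```c
#define Q_MIN 5
#define Q_MAX 1600

static double a1 = 10.0;     /* increase step size */
static double b1 = 1.0;      /* decrease step size */
static double t  = 1.0;      /* observation interval (s) */
static double q_alt = 400.0; /* current ALT buffer size (packets) */

/* Assumed stub: seconds within the last interval during which the
 * queue occupancy was at or below q_thr. */
extern double measure_idle_time(void);

/* Called once every t seconds (Algorithm 3, lines 4-6). */
void alt_interval_update(void)
{
    double t_i = measure_idle_time();
    q_alt += a1 * t_i - b1 * (t - t_i);
    if (q_alt < Q_MIN) q_alt = Q_MIN;
    if (q_alt > Q_MAX) q_alt = Q_MAX;
}
```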
B. Selecting the Step Sizes for ALT

Define a congestion event as an event where the sum of all senders' TCP cwnds decreases. This cwnd decrease can be caused by the response of TCP congestion control to a single packet loss, or to multiple packet losses that are lumped together in one RTT. Define a congestion epoch as the duration between two adjacent congestion events. Let Q(k) denote the buffer size at the k-th congestion event. Then,

$$Q(k+1) = Q(k) + a\,T_I(k) - b\,T_B(k), \qquad (3)$$

where T_I is the "idle" time, i.e., the duration in seconds during which the queue occupancy is below q_thr during the k-th congestion epoch, and T_B is the "busy" time, i.e., the duration during which the queue occupancy is above q_thr. This is illustrated in Fig. 10 for the case of a single TCP flow. Notice that a = a_1 and b = b_1, where a_1 and b_1 are the parameters used in the ALT algorithm.

[Fig. 10: Illustrating the evolution of the buffer size over a congestion epoch: buffer size Q(k) and Q(k+1), occupancy and cwnd, with the idle time T_I(k) and busy time T_B(k) marked.]

In the remainder of this section we investigate conditions that guarantee convergence and stability of the buffer dynamics with TCP traffic, which naturally lead to guidelines for the selection of a_1 and b_1. We first define some TCP related quantities before proceeding. Consider the case where TCP flows may have different round-trip times and drops need not be synchronized. Let n be the number of TCP flows sharing a link, w_i(k) the cwnd of flow i at the k-th congestion event, and T_i the round-trip propagation delay of flow i. To describe the cwnd additive increase we define the following quantities: (i) α_i is the rate in packets/s at which flow i increases its congestion window (see Footnote 3), (ii) α_T = Σ_{i=1}^n α_i is the aggregate rate at which flows increase their congestion windows, in packets/s, and (iii) A_T = Σ_{i=1}^n α_i/T_i approximates the aggregate rate, in packets/s², at which flows increase their sending rates. Following the k-th congestion event, flows back off their cwnds to β_i(k) w_i(k). Flows may be unsynchronized, i.e., not all flows need back off at a congestion event. We capture this by setting β_i(k) = 1 if flow i does not back off at event k. We assume that the α_i are constant and that the β_i(k) (i.e. the pattern of flow backoffs) are independent of the flow congestion windows w_i(k) and the buffer size Q(k) (this appears to be a good approximation in many practical situations, see [26]).

(Footnote 3: Standard TCP increases the flow congestion window by one packet per RTT, in which case α_i ≈ 1/T_i.)

To relate the queue occupancy to the flow cwnds, we adopt a fluid-like approach and ignore sub-RTT burstiness. We also assume that q_thr is sufficiently small relative to the buffer size that we can approximate it as zero. Considering now the idle time T_I(k): on backoff after the k-th congestion event, if the queue occupancy does not fall below q_thr then T_I(k) = 0. Otherwise, immediately after backoff the send rate of flow i is β_i(k) w_i(k)/T_i and we have that

$$T_I(k) = \frac{E[B] - \sum_{i=1}^n \beta_i(k)\, w_i(k)/T_i}{A_T}, \qquad (4)$$

where E[B] is the mean service rate of the considered buffer. At congestion event k the aggregate flow throughput necessarily equals the link capacity, i.e.,

$$\sum_{i=1}^n \frac{w_i(k)}{T_i + Q(k)/E[B]} = E[B].$$

We then have that

$$\sum_{i=1}^n \frac{w_i(k)}{T_i} = \sum_{i=1}^n \frac{w_i(k)}{T_i + Q(k)/E[B]} \cdot \frac{T_i + Q(k)/E[B]}{T_i} = \sum_{i=1}^n \frac{w_i(k)}{T_i + Q(k)/E[B]} + \frac{Q(k)}{E[B]} \sum_{i=1}^n \frac{w_i(k)}{T_i + Q(k)/E[B]} \cdot \frac{1}{T_i}.$$

Assume that the spread in flow round-trip propagation delays and congestion windows is small enough that $\sum_{i=1}^n \frac{w_i(k)}{E[B]\,T_i + Q(k)} \cdot \frac{1}{T_i}$ can be accurately approximated by 1/T_T, where $T_T = n / \sum_{i=1}^n (1/T_i)$ is the harmonic mean of the T_i. Then

$$\sum_{i=1}^n \frac{w_i(k)}{T_i} \approx E[B] + \frac{Q(k)}{T_T},$$

and

$$T_I(k) \approx \frac{(1 - \beta_T(k))\,E[B] - \beta_T(k)\,Q(k)/T_T}{A_T}, \qquad (5)$$

where

$$\beta_T(k) = \frac{\sum_{i=1}^n \beta_i(k)\, w_i(k)/T_i}{\sum_{i=1}^n w_i(k)/T_i}$$

is the effective aggregate backoff factor of the flows. When flows are synchronized, i.e., β_i = β for all i, then β_T = β. When flows are unsynchronized but have the same average backoff factor, i.e., E[β_i] = β, then E[β_T] = β.
If the queue empties after backoff, the queue busy time T_B(k) is directly given by

$$T_B(k) = Q(k+1)/\alpha_T, \qquad (6)$$

where α_T = Σ_{i=1}^n α_i is the aggregate rate at which flows increase their congestion windows, in packets/s. Otherwise,

$$T_B(k) = (Q(k+1) - q(k))/\alpha_T, \qquad (7)$$

where q(k) is the buffer occupancy after backoff. It turns out that for the analysis of stability it is not necessary to calculate q(k) explicitly. Instead, letting δ(k) = q(k)/Q(k), it is enough to note that 0 ≤ δ(k) < 1. Combining (3), (5), (6) and (7),

$$Q(k+1) = \begin{cases} \lambda_e(k)\,Q(k) + \gamma_e(k)\,E[B]\,T_T, & q(k) \le q_{thr} \\ \lambda_f(k)\,Q(k), & \text{otherwise,} \end{cases}$$

where

$$\lambda_e(k) = \frac{\alpha_T - a\,\beta_T(k)\,\alpha_T/(A_T T_T)}{\alpha_T + b}, \qquad \lambda_f(k) = \frac{\alpha_T + b\,\delta(k)}{\alpha_T + b}, \qquad \gamma_e(k) = a\,\frac{1 - \beta_T(k)}{\alpha_T + b}\cdot\frac{\alpha_T}{A_T T_T}.$$

Taking expectations,

$$E[Q(k+1)] = E[\lambda_e(k)Q(k) + \gamma_e(k)E[B]T_T \mid q(k) \le q_{thr}]\,p_e(k) + E[\lambda_f(k)Q(k) \mid q(k) > q_{thr}]\,(1 - p_e(k)),$$

with p_e(k) the probability that the queue occupancy is at or below q_thr following the k-th congestion event. Since the β_i(k) are assumed independent of Q(k), we may assume that E[Q(k) | q(k) ≤ q_thr] = E[Q(k) | q(k) > q_thr] = E[Q(k)], and

$$E[Q(k+1)] = \lambda(k)\,E[Q(k)] + \gamma(k)\,E[B]\,T_T, \qquad (8)$$

where

$$\lambda(k) = p_e(k)\,E[\lambda_e(k) \mid q(k) \le q_{thr}] + (1 - p_e(k))\,E[\lambda_f(k) \mid q(k) > q_{thr}], \qquad \gamma(k) = p_e(k)\,E[\gamma_e(k) \mid q(k) \le q_{thr}].$$

C. A Sufficient Condition for Stability

Provided |λ(k)| < 1, the queue dynamics in (8) are exponentially stable. In more detail, λ(k) is a convex combination of E[λ_e(k)] and E[λ_f(k)] (where the conditional dependence of these expectations is understood, but omitted to streamline notation). Stability is therefore guaranteed provided |E[λ_e(k)]| < 1 and |E[λ_f(k)]| < 1. We have that 0 < E[λ_f(k)] < 1 when b > 0, since α_T is non-negative and 0 ≤ δ(k) < 1. The stability condition is therefore that |E[λ_e(k)]| < 1. Under mild independence conditions,

$$E[\lambda_e(k)] = \frac{\alpha_T - a\,E[\beta_T(k)]\,\alpha_T/(A_T T_T)}{\alpha_T + b}.$$

Observe that

$$\frac{\alpha_T}{A_T T_T} = \frac{1}{n}\,\frac{\left(\sum_{i=1}^n 1/T_i\right)^2}{\sum_{i=1}^n 1/T_i^2}$$

when we use the standard TCP AIMD increase of one packet per RTT, in which case α_i ≈ 1/T_i. We therefore have that 1/n ≤ α_T/(A_T T_T) ≤ 1. Also, when the standard AIMD backoff factor of 0.5 is used, 0.5 < E[β_T(k)] < 1. Thus, since a > 0, b > 0 and α_T > 0, it is sufficient that

$$-1 < \frac{\alpha_T - a}{\alpha_T + b} \le E[\lambda_e(k)] \le \frac{\alpha_T}{\alpha_T + b} < 1.$$

A sufficient condition for stability (from the left inequality) is then that a < 2α_T + b. Using again (as in the eBDP algorithm) 200ms as the maximum RTT, a rough lower bound on α_T is 5 packets/s (corresponding to 1 flow with RTT 200ms). The stability constraint is then that

$$a < 10 + b. \qquad (9)$$
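To see what (9) means numerically, the following short C program (our illustration, using the single-flow values from the text: α_T = 5 packets/s, E[β_T] = 0.5 and α_T/(A_T T_T) = 1 for one flow with RTT 200ms) evaluates E[λ_e] for the two parameter choices compared in Fig. 11:

```c
#include <stdio.h>

/* E[lambda_e] = (alpha_T - a*E[beta_T]*r) / (alpha_T + b),
 * with r = alpha_T/(A_T*T_T), equal to 1 for a single flow. */
static double lambda_e(double a, double b,
                       double alpha_t, double beta_t, double r)
{
    return (alpha_t - a * beta_t * r) / (alpha_t + b);
}

int main(void)
{
    double alpha_t = 5.0, beta_t = 0.5, r = 1.0;
    /* a = 10, b = 1: satisfies a < 10 + b */
    printf("a=10,  b=1: E[lambda_e] = %.2f\n",
           lambda_e(10.0, 1.0, alpha_t, beta_t, r));   /* prints  0.00 */
    /* a = 100, b = 1: violates a < 10 + b */
    printf("a=100, b=1: E[lambda_e] = %.2f\n",
           lambda_e(100.0, 1.0, alpha_t, beta_t, r));  /* prints -7.50 */
    return 0;
}
```

|E[λ_e]| < 1 holds for the first choice but not the second, consistent with the stable and unstable traces in Fig. 11.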
Fig. 11(a) demonstrates that the instability is indeed observed in simulations. Here, a = 100 and b = 1 are used as example values, i.e., the stability conditions are not satisfied. It can be seen that the buffer size at congestion events oscillates around 400 packets rather than converging to a constant value. We note, however, that in this example and others the instability consistently manifests itself in a benign manner (small oscillations); we leave detailed analysis of the onset of instability as future work. Fig. 11(b) shows the corresponding results with a = 10 and b = 1, i.e., when the stability conditions are satisfied. It can be seen that the buffer size at congestion events settles to a constant value, and thus the buffer size time history converges to a periodic cycle.

[Fig. 11: Instability and stability of the ALT algorithm; panels: (a) instability (a = 100, b = 1, maximum buffer size 50000 packets), (b) stability (a = 10, b = 1, maximum buffer size 400 packets). In both cases there is 1 download and no uploads.]

D. Fixed Point

When the system dynamics are stable and the queue empties after each backoff (i.e., p_e = 1), from (8) we have that

$$\lim_{k \to \infty} E[Q(k)] = \frac{1 - E[\beta_T]}{b/a + E[\beta_T]}\,E[B]\,T_T. \qquad (10)$$

For synchronized flows with the standard TCP backoff factor of 0.5 (i.e., E[β_T] = 0.5) and the same RTT, (10) reduces to the BDP when b/a = 0. This indicates that for high link utilization we would like the ratio b/a to be small. Using (5), (6) and (10), we have that in steady-state the expected link utilization is lower bounded by

$$\frac{1}{1 + \frac{b}{a}\,\frac{\alpha_T}{A_T T_T}} \ge \frac{1}{1 + b/a}. \qquad (11)$$

This lower bound is plotted in Fig. 12, together with the measured throughput efficiency vs b/a in a variety of traffic conditions. Note that in this figure the lower bound is violated by the measured data when b/a > 0.1 and there are a large number of uploads. At such large values of b/a, plus many contending stations, the target buffer sizes are extremely small and micro-scale burstiness means that TCP RTOs occur frequently. It is this that leads to the violation of the lower bound (11), since the assumptions behind (10) then no longer hold. However, this corresponds to an extreme operating regime, and for smaller values of b/a the lower bound is respected.

[Fig. 12: Impact of b/a on throughput efficiency, together with the 1/(1 + b/a) lower bound. The maximum buffer size is 400 packets, and the minimum buffer size is 2 packets.]

It can be seen from Fig. 12 that the efficiency decreases as the ratio b/a increases. In order to ensure throughput efficiency ≥ 90%, it is required that

$$\frac{b}{a} \le 0.1. \qquad (12)$$

Combined with the stability condition in inequality (9), we have that a = 10, b = 1 are feasible integer values; that is, we choose a_1 = 10 and b_1 = 1 for the A* algorithm.
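Plugging the chosen step sizes into the bound (our arithmetic): with a = 10 and b = 1,

$$\frac{1}{1 + b/a} = \frac{1}{1 + 0.1} \approx 0.91,$$

so the idle/busy balance point of (2) is consistent with the ≥ 90% efficiency requirement that led to (12).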
E. Convergence Rate

In Fig. 13(a) we illustrate the convergence rate of the ALT algorithm. There is one download, and at time 500s the number of upload flows is increased from 0 to 10. It can be seen that the buffer size limit converges to its new value in around 200 seconds. In general, the convergence rate is determined by the product λ(0)λ(1)···λ(k). In this example, the buffer does not empty after backoff, and the convergence rate is thus determined by λ_f(k) = (α_T + b δ(k))/(α_T + b). To achieve fast convergence, we require small λ_f(k), so that Q(k+1) = λ_f(k) Q(k) decreases quickly to the desired value; we thus need large b for fast convergence. However, b = 1 is used here in order to respect the stability condition in (9) and the throughput efficiency condition in (12). Note that when conditions change such that the buffer size needs to increase, the convergence rate is instead determined by the a parameter. This has a value of a = 10, and thus the algorithm adapts much more quickly to increase the buffer than to decrease it; the example in Fig. 13(a) is essentially a worst case. In the next section, we address the slow convergence by combining the ALT and eBDP algorithms to create a hybrid algorithm.

[Fig. 13: Convergence rate of the ALT and A* algorithms; panels: (a) the ALT algorithm, (b) the A* algorithm. One download flow, a = 10, b = 1. At time 500s the number of upload flows is increased from 0 to 10.]

[Fig. 14: Buffer time histories with the A* algorithm, a = 10, b = 1; panels: (a) 10 downloads only, (b) 10 downloads, with 10 uploads starting at time 500s.]

F. Combining eBDP and ALT: The A* Algorithm

We can combine the eBDP and ALT algorithms by using the mean packet service time to calculate Q_eBDP as per the eBDP algorithm (see Section IV), and the idle/busy times to calculate q_ALT as per the ALT algorithm. We then select the buffer size as min{Q_eBDP, q_ALT}, yielding a hybrid algorithm, referred to as the A* algorithm, that combines the eBDP and ALT algorithms. When channel conditions change, the A* algorithm uses the eBDP measured service time to adjust the buffer size promptly; the convergence rate depends on the smoothing weight W, and as calculated in Section IV it takes around 0.6 seconds for Q_eBDP to converge. The A* algorithm can then further use the ALT algorithm to fine-tune the buffer size so as to exploit the potential reduction due to statistical multiplexing. The effectiveness of this hybrid approach when the traffic load increases suddenly is illustrated in Fig. 13(b) (which can be directly compared with Fig. 13(a)). Fig. 14(b) shows the corresponding time histories for 10 download flows and a changing number of competing uploads.
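Expressed in C on top of the two sketches above, the hybrid selection is a one-liner; again this is our illustration rather than the MadWifi implementation:

```c
/* Assumed to be maintained elsewhere, as in the earlier sketches:
 * the eBDP estimate (from the service-time hook) and the ALT
 * estimate (from the per-interval idle/busy update). */
extern double ebdp_buffer_size(void);  /* T_max/T_serv + c, capped */
extern double alt_buffer_size(void);   /* q_ALT, clamped to [q_min, q_max] */

/* A* buffer size: eBDP reacts quickly to service-rate changes,
 * ALT fine-tunes downwards when statistical multiplexing allows. */
double astar_buffer_size(void)
{
    double q_ebdp = ebdp_buffer_size();
    double q_alt  = alt_buffer_size();
    return (q_ebdp < q_alt) ? q_ebdp : q_alt;  /* min{Q_eBDP, q_ALT} */
}
```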
G. Performance

The basic impetus for the design of the A* algorithm is to exploit the possibility of statistical multiplexing to reduce buffer sizes. Fig. 14(a) illustrates the performance of the A* algorithm when there are 10 downloads and no upload flows. Comparing with the results in Fig. 9 using fixed size buffers, we can see that the A* algorithm achieves significantly smaller buffer sizes (i.e., a reduction from more than 350 packets to approximately 100 packets) when multiplexing exists. Fig. 15 summarizes the throughput and delay performance of the A* algorithm for a range of network conditions (numbers of uploads and downloads) and physical transmit rates ranging from 1Mbps to 216Mbps. This can be compared with Fig. 3. It can be seen that, in comparison with the use of a fixed buffer size, the A* algorithm is able to achieve high throughput efficiency across a wide range of operating conditions while minimizing queuing delays. In Fig. 16 we further evaluate the A* algorithm when the wired RTTs are varied from 50-300ms and the number of uploads is varied from 0-10. Comparing these with the corresponding results for the eBDP algorithm (Figs. 5 and 6), we can see that the A* algorithm is capable of exploiting statistical multiplexing where feasible. In particular, significantly lower delays are achieved with 10 download flows whilst maintaining comparable throughput efficiency.

[Fig. 16: Performance of the A* algorithm as the wired RTT is varied; panels: (a) throughput efficiency, (b) delay. Physical layer data and basic rates are 54 and 6Mbps. Here the AP throughput percentage is the ratio between the throughput achieved using the A* algorithm and the maximum throughput using fixed size buffers.]

H. Impact of Channel Errors

In the foregoing simulations the channel is error free, and packet losses are solely due to buffer overflow and MAC-layer collisions. In fact, channel errors have only a minor impact on the effectiveness of the buffer sizing algorithms, as errors play a similar role to collisions with regard to their impact on link utilization. We support this claim first using a simulation example with a channel having an i.i.d. noise process inducing a bit error rate (BER) of 10^-5. Results are shown in Fig. 17, where we can see a similar trend as in the cases where the medium is error free (Figs. 15(e) and 15(f)). We further confirm this claim in our testbed implementation, where tests were conducted in 802.11b/g channels and noise related losses were observed. See Section VI for details.

[Fig. 17: Performance of the A* algorithm when the channel has a BER of 10^-5; panels: (a) throughput, (b) delay. Physical layer data and basic rates are 54 and 6Mbps. Here the AP throughput percentage is the ratio between the throughput achieved using the A* algorithm and the maximum throughput using fixed size buffers.]

I. DCF Operation

The proposed buffer sizing algorithms remain valid for the DCF, since the link utilization and delay considerations remain applicable, as does the availability of service time measurements (for the eBDP algorithm) and idle/busy time measurements (for the ALT algorithm). In particular, if the considered buffer is heavily backlogged, then to ensure low delays the buffer size should be reduced. If the buffer instead lies empty, it may be because the current buffer size is too small, causing the TCP source to back off after buffer overflow. To accommodate more future packets, the buffer size can then be increased. Note that increasing the buffer size in this case does not lead to high delays, but has the potential to improve throughput. This tradeoff between throughput and delay thus holds for both the EDCA and the DCF. However, the DCF allocates roughly equal numbers of transmission opportunities to stations. A consequence of using the DCF is thus that when the number of upload flows increases, the uploads may produce enough TCP ACK packets to keep the AP's queue saturated.
In fact, once there are two upload flows, TCP becomes unstable due to repeated timeouts (see [20] for a detailed demonstration), causing the unfairness issue discussed in Section II-C. Therefore, we present results for up to two uploads in Fig. 18, as this is the greatest number of upload flows for which TCP with the DCF can exhibit stable behavior using both fixed size buffers and the A* algorithm. Note that in this case, using the A* algorithm on upload stations can also decrease delays and maintain high throughput efficiency if their buffers are frequently backlogged. We also present results when there are download flows only (so the unfairness issue does not arise). Fig. 19 illustrates the throughput and delay performance achieved using the A* algorithm and fixed 400-packet buffers. As in the EDCA cases, we can see that the A* algorithm is able to maintain high throughput efficiency with comparatively low delays. Note that the DCF is also used in the production WLAN test, where the A* algorithm is observed to perform well (see Section I).

J. Rate Adaptation

We did not implement rate adaptation in our simulations. However, we did implement the A* algorithm in the Linux MadWifi driver, which includes rate adaptation algorithms. We tested the A* algorithm in the production WLAN of the Hamilton Institute with the default SampleRate algorithm enabled. See Section I.

VI. EXPERIMENTAL RESULTS

We have implemented the proposed algorithms in the Linux MadWifi driver, and in this section we present tests on an experimental testbed located in an office environment, introducing results that illustrate operation with complex traffic that includes both TCP and UDP, a mix of uploads and downloads, and a mix of connection sizes.

[Fig. 15: Throughput efficiency and maximum smoothed round trip delays (max sRTT) for the topology in Fig. 1 when the A* algorithm is used; panels: (a) 1/1Mbps throughput, (b) 1/1Mbps delay, (c) 11/1Mbps throughput, (d) 11/1Mbps delay, (e) 54/6Mbps throughput, (f) 54/6Mbps delay, (g) 216/54Mbps throughput, (h) 216/54Mbps delay. Here, the AP throughput efficiency is the ratio between the throughput achieved using the A* algorithm and the maximum throughput achieved using fixed size buffers. Rates before and after the '/' are the physical layer data and basic rates used. For the 216Mbps data, 8 packets are aggregated into each frame at the MAC layer to improve throughput efficiency in an 802.11n-like scheme. The wired RTT is 200 ms.]

[Fig. 18: Performance of the A* algorithm for 802.11 DCF operation when there are both upload and download flows in the network; panels: (a) throughput, (b) delay. Here the AP throughput percentage is the ratio between the throughput achieved using the A* algorithm and the maximum throughput using fixed size buffers. Physical layer data and basic rates used are 54 and 6Mbps.]
A. Testbed Experiment

The testbed topology is shown in Fig. 20. A wired network is emulated using a desktop PC running dummynet software on FreeBSD 6.2, which allows link rates and propagation delays to be controlled. The wireless AP and the server are connected to the dummynet PC by 100 Mbps Ethernet links. Routing in the network is statically configured, and network management is carried out using ssh over a wired control plane to avoid affecting wireless traffic.

In the WLAN, a desktop PC is used as the AP and 12 PC-based embedded Linux boxes based on the Soekris net4801 are used as client stations. All are equipped with an Atheros 802.11b/g PCI card with an external antenna. All nodes run a Linux 2.6.21.1 kernel and a MadWifi wireless driver (version r2366) modified to allow us to adjust the 802.11e CWmin, CWmax and AIFS parameters as required. Specific vendor features on the wireless card, such as turbo mode, rate adaptation and multi-rate retries, are disabled. All tests are performed at a transmission rate of 11 Mbps (i.e., we use an 802.11b PHY) with RTS/CTS disabled and the channel number explicitly set; channel 1 is used throughout. The testbed is not in an isolated radio environment and is subject to the usual impairments seen in an office environment. Since the wireless stations are based on low-power embedded systems, we have verified that their hardware performance (especially the CPU) is not a bottleneck for wireless transmissions at the 11 Mbps PHY rate used. The configuration of the various network buffers and MAC parameters is detailed in Table II.

Although both SACK-enabled TCP NewReno and TCP CUBIC with receiver buffers of 4096 KB have been tested, here we report only the results for the latter, as CUBIC is now the default congestion control algorithm in Linux. Default values of the Linux 2.6.21.1 kernel are used for all other TCP parameters.

Fig. 20. Topology used in experimental tests: the wired server is connected via the dummynet PC to the AP, which serves client stations STA_1 to STA_12.

TABLE II
TESTBED PARAMETERS SUMMARY

  Parameter            Value
  Interface tx queue   2 packets
  Dummynet queue       100 packets
  MAC preamble         long
  MAC data rate        11 Mbps
  MAC ACK rate         11 Mbps
  MAC retries          11
We put TCP ACK packets into a high-priority queue (we use the WME AC_VO queue of MadWifi as an example), which is assigned the parameters CWmin = 3, CWmax = 7 and AIFS = 2. TCP data packets are collected into a lower-priority queue (we use the WME AC_VI queue), which is assigned CWmin = 31, CWmax = 1023 and AIFS = 6. We use iperf to generate TCP traffic, and results are collected using both iperf and tcpdump.

B. Traffic Mix

We configure the traffic mix on the network to capture the complexity of real networks, in order to help gain greater confidence in the practical utility of the proposed buffer sizing approach. With reference to the network topology in Fig. 20, we create the following traffic flows (the sketch after this list illustrates how such a mix might be launched):

• TCP uploads. One long-lived TCP upload from each of STAs 1, 2 and 3 to the server in the wired network. STAs 2 and 3 always use a fixed 400-packet buffer, while STA 1 uses both a fixed 400-packet buffer and the A* algorithm.
• TCP downloads. One long-lived TCP download from the wired server to each of STAs 4, 5 and 6.
• Two-way UDP. One two-way UDP flow between the wired server and STA 7, with a packet size of 64 bytes and a mean inter-packet interval of 1 s. Another UDP flow from the wired server to STA 8, with a packet size of 1000 bytes and a mean inter-packet interval of 1 s.
• Mix of TCP connection sizes. These flows mimic web traffic.4 A short TCP download from the wired server to STA 9, with a connection size of 5 KB (approximately 3 packets). A slightly longer TCP download from the wired server to STA 10 with a connection size of 20 KB (approximately 13 packets), and another to STA 11 (connection size 30 KB, i.e., around 20 packets). A fourth connection, sized 100 KB, from the server to STA 12. For each of these connection sizes, a new flow is started every 10 s to allow collection of statistics on the mean completion time.

4. Note that in the production WLAN test, we used real web traffic.
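To give a feel for how such a mix can be produced, the sketch below launches a few of the flows above with iperf from a Python control script. This is an illustration under assumptions, not the scripts used in our tests: the host name wired-server is a placeholder, server-side iperf listeners (iperf -s, and iperf -s -u for UDP) are assumed to be running, and only standard iperf options are used (-c client, -t duration in seconds, -u UDP, -b target bit rate, -l datagram size, -n bytes to transfer).

    import subprocess

    SERVER = "wired-server"  # placeholder for the wired server's address

    cmds = [
        # Long-lived TCP flow, as for the uploads/downloads of STAs 1-6.
        ["iperf", "-c", SERVER, "-t", "600"],
        # 64-byte UDP at ~1 packet/s: 64 bytes/s = 512 bit/s target rate.
        ["iperf", "-u", "-c", SERVER, "-b", "512", "-l", "64", "-t", "600"],
        # Short TCP connection of ~5 KB, as for the web-like flow to STA 9.
        ["iperf", "-c", SERVER, "-n", "5K"],
    ]

    # Launch the flows concurrently and wait for them to finish.
    procs = [subprocess.Popen(c) for c in cmds]
    for p in procs:
        p.wait()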
C. Results

Fig. 21 shows example time histories of the buffer size and occupancy at the AP, with a fixed buffer size of 400 packets and with the A* algorithm used for dynamic buffer sizing. Note that in this example the 400-packet buffer never completely fills; instead, the buffer occupancy peaks at around 250 packets. This is due to non-congestive packet losses caused by channel noise (the testbed operates in a real office environment with significant interference from Bluetooth devices and other WLANs operating on channel 1), which prevent the TCP congestion window from growing to completely fill the buffer. Nevertheless, it can be seen that the buffer rarely empties, and so the measured throughput is a reliable indication of the throughput when the wireless link is fully utilized. We observe that while the buffer histories are very different with a fixed size buffer and with the A* algorithm, the throughput is very similar in the two cases (see Table III).

Fig. 21. Buffer size and occupancy time histories measured at the AP with fixed 400-packet buffers and with the A* algorithm.

TABLE III
MEASURED THROUGHPUT

                          Fixed 400 packets   A*
  Throughput of STA 1     1.36 Mbps           1.33 Mbps
  Throughput of STA 2     1.29 Mbps           1.30 Mbps
  Throughput of STA 3     1.37 Mbps           1.33 Mbps
  Throughput of STA 4     0.35 Mbps           0.41 Mbps
  Throughput of STA 5     0.39 Mbps           0.39 Mbps
  Throughput of STA 6     0.52 Mbps           0.42 Mbps

One immediate benefit of using smaller buffers is a reduction in network delay. Table IV shows the measured delays experienced by the UDP flows sharing the WLAN with the TCP traffic. It can be seen that for STA 8 both the mean and the maximum delays are significantly reduced when the A* algorithm is used. This potentially has major implications for time-sensitive traffic sharing a wireless link with data traffic. Note that the queuing delays from STA 7 are for traffic passing through the high-priority traffic class used for TCP ACKs, while the measurements from STA 8 are for traffic in the same class as TCP data packets. For the offered loads used, the service rate of the high-priority class is sufficient to avoid queue buildup, and this is reflected in the measurements.

TABLE IV
MEASURED DELAYS OF THE UDP FLOWS. STA 7'S TRAFFIC IS PRIORITIZED TO AVOID QUEUE BUILDUP AND THIS IS REFLECTED IN THE MEASUREMENTS.

                  Fixed 400 packets       A*
                  mean (max)              mean (max)
  RTT to STA 7    201 ms (239 ms)         200 ms (236 ms)
  RTT to STA 8    1465 ms (2430 ms)       258 ms (482 ms)

The reduction in network delay benefits not only UDP traffic but also short-lived TCP connections. Fig. 22 shows the measured completion time versus connection size for TCP flows. It can be seen that the completion time is consistently lower, by a factor of at least two, when A* dynamic buffer sizing is used. Since the majority of Internet flows are short-lived TCP connections (e.g., most web traffic), this potentially translates into a significant improvement in user experience.

Fig. 22. Measured completion time versus connection size, with fixed 400-packet buffers and with the A* algorithm. Results are averages of multiple runs.

Note that STAs 2 and 3 in the A* column of Table III use fixed size buffers rather than the A* algorithm. The results shown are the throughput they achieve when the other stations run the A* algorithm. It can be seen that the A* algorithm does not significantly impact STAs 2 and 3, confirming that A* can support incremental roll-out without negatively impacting legacy stations that use fixed size buffers.

VII. RELATED WORK

The classical approach to sizing Internet router buffers is the BDP rule proposed in [31]. Recently, it has been argued in [5] that the BDP rule may be overly conservative on links shared by a large number of flows. In this case it is unlikely that TCP congestion window sizes (cwnd) evolve synchronously, and due to statistical multiplexing of cwnd backoff the combined buffer requirement can be considerably less than the BDP. The analysis in [5] suggests that it may be sufficient to size buffers as BDP/√n.
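To illustrate the gap between the two rules, consider a back-of-the-envelope calculation; the numbers below are illustrative assumptions, not values taken from our experiments.

    from math import sqrt

    link_rate = 50e6     # link rate in bit/s (assumed for illustration)
    rtt = 0.2            # average flow RTT in seconds
    n = 100              # number of long-lived TCP flows
    pkt_bits = 1500 * 8  # packet size in bits

    bdp = link_rate * rtt / pkt_bits  # classical BDP rule [31]
    small = bdp / sqrt(n)             # statistical-multiplexing rule [5]

    print(f"BDP rule:    {bdp:.0f} packets")    # ~833 packets
    print(f"BDP/sqrt(n): {small:.0f} packets")  # ~83 packets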
This work is extended in [25], [10] and [33] to consider the performance of TCP congestion control with many connections under the assumption of small, medium and large buffer sizes. Several authors have pointed out that the value n can be difficult to determine for realistic traffic patterns, which not only include a mix of connection sizes and RTTs, but can also be strongly time-varying [9], [32]. In [32], it is observed from measurements on a production link that traffic patterns vary significantly over time, and may contain a complex mix of flow connection lengths and RTTs. It is demonstrated in [9], [32] that the use of very small buffers can lead to an excessive loss rate. Motivated by these observations, a measurement-based adaptive buffer size tuning method is proposed in [27], [12]. However, this approach is not applicable to WLANs, since it requires a priori knowledge of the link capacity or line rate, which in WLANs is time-varying and load-dependent. Another adaptive buffer sizing algorithm, based on control theory and aimed at Internet core routers, is introduced in [34]. The role of the output/input capacity ratio at a network link in determining the required buffer size is considered in [24], [14]. The analytic results reported in [5], [25], [10] and [33] are investigated experimentally in [6]. Sizing of buffers managed with active queue management techniques is considered in [11].

The foregoing work is in the context of wired links, and to our knowledge the question of buffer sizing for 802.11 wireless links has received almost no attention in the literature. Exceptions include [21], [23], [29]. Sizing of buffers for voice traffic in WLANs is investigated in [21]. The impact of fixed buffer sizes on TCP flows is studied in [23]. In [29], TCP performance with a variety of AP buffer sizes and 802.11e parameter settings is investigated. In [16], [17], initial investigations related to the eBDP algorithm and to the ALT component of the A* algorithm are reported. In this paper we substantially extend that previous work with theoretical analysis, experimental implementations in both a testbed and a production WLAN, and additional NS simulations.

VIII. CONCLUSIONS

We consider the sizing of network buffers in 802.11 based wireless networks. Wireless networks face a number of fundamental issues that do not arise in wired networks. We demonstrate that the use of fixed size buffers in 802.11 networks inevitably leads to either undesirable channel under-utilization or unnecessarily high delays. We present two novel buffer sizing algorithms that achieve high throughput while maintaining low delay across a wide range of network conditions. Experimental measurements demonstrate the utility of the proposed algorithms in a real environment with real traffic. The source code used in the NS-2 simulations and the experimental implementation in MadWifi can be downloaded from www.hamilton.ie/tianji_li/buffersizing.html.

APPENDIX

In the production WLAN of the Hamilton Institute, the AP is equipped with an Atheros 802.11a/b/g PCI card and an external antenna. The operating system is a recent Fedora 8 (kernel version 2.6.24.5). The latest MadWifi driver version (0.9.4) is used, in which the buffer size is fixed at 250 packets. The AP runs in 802.11g mode with the default rate adaptation algorithm enabled (i.e., SampleRate [7]). All data traffic is processed via the Best Effort queue, i.e., MadWifi operates in 802.11 rather than 802.11e mode. A mix of Windows/Apple Mac/Linux laptops and PCs uses the WLAN from time to time.

Fig. 23. WLAN of the Hamilton Institute. Stars represent users' approximate locations.

REFERENCES

[1] IEEE 802.11 WG, International standard for information technology – local and metropolitan area networks, part 11: wireless LAN MAC and PHY specifications, 1999.
[2] Part 11: wireless LAN medium access control (MAC) and physical layer (PHY) specifications: Medium Access Control (MAC) Quality of Service (QoS) Enhancements, IEEE 802.11e/D8.0, Feb. 2004.
[3] S. A. Mujtaba et al., "TGn Sync Proposal Technical Specification," www.tgnsync.org, IEEE 802.11-04/889r6, May 2005.
[4] MadWifi project, madwifi-project.org.
[5] G. Appenzeller, I. Keslassy, and N. McKeown, "Sizing Router Buffers," in Proc. of ACM SIGCOMM, 2004, pp. 281-292.
[6] N. Beheshti, Y. Ganjali, M. Ghobadi, N. McKeown, and G. Salmon, "Experimental Study of Router Buffer Sizing," in Proc. of IMC, Oct. 2008.
[7] J. Bicket, "Bit-rate Selection in Wireless Networks," M.Sc. thesis, MIT, 2005.
[8] C. Chatfield, The Analysis of Time Series: An Introduction, CRC Press, 2004.
[9] A. Dhamdhere and C. Dovrolis, "Open Issues in Router Buffer Sizing," ACM Computer Communication Review, Jan. 2006.
[10] M. Enachescu, Y. Ganjali, A. Goel, N. McKeown, and T. Roughgarden, "Routers with Very Small Buffers," in Proc. of INFOCOM, Dec. 2006.
[11] D. Y. Eun and X. Wang, "Achieving 100% Throughput in TCP/AQM Under Aggressive Packet Marking With Small Buffer," IEEE/ACM Transactions on Networking, vol. 16, no. 4, pp. 945-956, Aug. 2008.
[12] C. Kellett, R. Shorten, and D. Leith, "Sizing Internet Router Buffers, Active Queue Management, and the Lur'e Problem," in Proc. of IEEE CDC, 2006.
[13] K. Kumaran, M. Mandjes, and A. L. Stolyar, "Convexity Properties of Loss and Overflow Functions," Operations Research Letters, vol. 31, no. 2, pp. 95-100, 2003.
[14] A. Lakshmikantha, R. Srikant, and C. Beck, "Impact of File Arrivals and Departures on Buffer Sizing in Core Routers," in Proc. of INFOCOM, Apr. 2008.
[15] D. Leith, P. Clifford, D. Malone, and A. Ng, "TCP Fairness in 802.11e WLANs," IEEE Communications Letters, vol. 9, no. 11, Jun. 2005.
[16] T. Li and D. Leith, "Buffer Sizing for TCP Flows in 802.11e WLANs," IEEE Communications Letters, Mar. 2008.
[17] T. Li and D. Leith, "Adaptive Buffer Sizing for TCP Flows in 802.11e WLANs," in Proc. of Chinacom, 2008.
[18] T. Li, Q. Ni, D. Malone, D. Leith, T. Turletti, and Y. Xiao, "Aggregation with Fragment Retransmission for Very High-Speed WLANs," IEEE/ACM Transactions on Networking, vol. 17, no. 2, pp. 591-604, Apr. 2009.
[19] D. Malone, K. Duffy, and D. J. Leith, "Modeling the 802.11 distributed coordination function in non-saturated heterogeneous conditions," IEEE/ACM Transactions on Networking, vol. 15, no. 1, Feb. 2007.
[20] D. Malone, D. J. Leith, A. Aggarwal, and I. Dangerfield, "Spurious TCP Timeouts in 802.11 Networks," in Proc. of WiNMee, Apr. 2008.
[21] D. Malone, P. Clifford, and D. J. Leith, "On Buffer Sizing for Voice in 802.11 WLANs," IEEE Communications Letters, vol. 10, no. 10, pp. 701-703, Oct. 2006.
[22] V. Paxson and S. Floyd, "Wide-Area Traffic: The Failure of Poisson Modeling," IEEE/ACM Transactions on Networking, vol. 3, no. 3, pp. 226-244, Jun. 1995.
[23] S. Pilosof et al., "Understanding TCP fairness over Wireless LAN," in Proc. of IEEE INFOCOM, 2003.
[24] R. Prasad, C. Dovrolis, and M. Thottan, "Router Buffer Sizing Revisited: The role of the input/output capacity ratio," IEEE/ACM Transactions on Networking, to appear.
[25] G. Raina and D. Wischik, "Buffer Sizes for Large Multiplexers: TCP Queueing Theory and Instability Analysis," in Proc. of EuroNGI, Jul. 2005.
[26] R. Shorten, F. Wirth, and D. Leith, "A Positive Systems Model of TCP-Like Congestion Control: Asymptotic Results," IEEE/ACM Transactions on Networking, vol. 14, no. 3, pp. 616-629, Jun. 2006.
[27] R. Stanojevic, C. Kellett, and R. Shorten, "Adaptive Tuning of Drop-Tail Buffers for Reducing Queueing Delays," IEEE Communications Letters, vol. 10, no. 7, pp. 570-572, Jul. 2006.
[28] D. Tang and M. Baker, "Analysis of a Local-Area Wireless Network," in Proc. of ACM MobiCom, Aug. 2000.
[29] M. Thottan and M. C. Weigle, "Impact of 802.11e EDCA on mixed TCP-based applications," in Proc. of IEEE WICON, 2006.
[30] O. Tickoo and B. Sikdar, "On the Impact of IEEE 802.11 MAC on Traffic Characteristics," IEEE Journal on Selected Areas in Communications, vol. 21, no. 2, pp. 189-203, Feb. 2003.
[31] C. Villamizar and C. Song, "High Performance TCP in ANSNET," ACM Computer Communication Review, vol. 24, no. 5, pp. 45-60, Oct. 1994.
[32] G. Vu-Brugier, R. Stanojevic, D. Leith, and R. Shorten, "A Critique of Recently Proposed Buffer-Sizing Strategies," ACM Computer Communication Review, vol. 37, no. 1, Jan. 2007.
[33] D. Wischik and N. McKeown, "Part I: Buffer sizes for core routers," ACM Computer Communication Review, vol. 35, no. 3, Jul. 2005.
[34] Y. Zhang and D. Loguinov, "ABS: Adaptive Buffer Sizing for Heterogeneous Networks," in Proc. of IWQoS, Jun. 2008.
[35] Z. Zhao, S. Darbha, and A. L. N. Reddy, "A Method for Estimating the Proportion of Nonresponsive Traffic At a Router," IEEE/ACM Transactions on Networking, vol. 12, no. 4, pp. 708-718, Aug. 2004.

Tianji Li received the M.Sc. (2004) degree in networking and distributed computation from École Doctorale STIC, Université de Nice-Sophia Antipolis, France, and the Ph.D. (2008) degree from the Hamilton Institute, National University of Ireland Maynooth, Ireland, where he is currently a research fellow. He is interested in improving performance for computer and telecommunication networks.

Douglas Leith graduated from the University of Glasgow in 1986 and was awarded his PhD, also from the University of Glasgow, in 1989. In 2001, Prof. Leith moved to the National University of Ireland, Maynooth to assume the position of SFI Principal Investigator and to establish the Hamilton Institute (www.hamilton.ie), of which he is Director. His current research interests include the analysis and design of network congestion control and distributed resource allocation in wireless networks.

David Malone received B.A.(mod), M.Sc. and Ph.D. degrees in mathematics from Trinity College Dublin. During his time as a postgraduate, he became a member of the FreeBSD development team. He is a research fellow at the Hamilton Institute, NUI Maynooth, working on wireless networking. His interests include wavelets, mathematics of networks, IPv6 and systems administration. He is a co-author of O'Reilly's "IPv6 Network Administration".
