Analyzing the Performance of Active Queue Management Algorithms
Authors: **G.F. Ali Ahammed, Reshma Banu**
DOI: 10.5121/ijcnc.2010.2201

G.F. Ali Ahammed¹, Reshma Banu²
¹ Department of Electronics & Communication, Ghousia College of Engg., Ramanagaram. ali_ahammed@rediffmail.com
² Department of Information Science & Engg., Ghousia College of Engg., Ramanagaram.

ABSTRACT

Congestion is an important issue which researchers focus on in the Transmission Control Protocol (TCP) network environment. To keep the whole network stable, congestion control algorithms have been extensively studied. The queue management method employed by routers is one of the important issues in the study of congestion control. Active queue management (AQM) has been proposed as a router-based mechanism for early detection of congestion inside the network. In this paper we analyze several active queue management algorithms with respect to their ability to maintain high resource utilization, their ability to identify and restrict disproportionate bandwidth usage, and their deployment complexity. We compare the performance of FRED, BLUE, SFB, and CHOKe based on simulation results, using RED and Drop Tail as the evaluation baseline. The characteristics of the different algorithms are also discussed and compared. Simulation is done using Network Simulator (ns-2), and the graphs are drawn using Xgraph.

KEYWORDS

RED; Drop Tail; Fairness Index; Throughput; AQM; NS-2; FRED; BLUE; SFB; CHOKe; ECN

1. INTRODUCTION

When there are too many incoming packets contending for limited shared resources, such as the queue buffer in the router and the outgoing bandwidth, congestion may occur in data communication. During congestion, large numbers of packets experience delay or are even dropped due to queue overflow. Severe congestion problems result in degraded throughput and a large packet loss rate.
Congestion also decreases the efficiency and reliability of the whole network; furthermore, at very high traffic load, performance collapses completely and almost no packets are delivered. As a result, many congestion control methods [2] have been proposed to solve this problem and avoid the damage. Most congestion control algorithms are based on evaluating network feedback [2] to detect when and where congestion occurs, and then taking actions to adjust the output of the source, such as reducing the congestion window (cwnd). Various kinds of feedback are used in congestion detection and analysis, but they fall into two main categories: explicit and implicit. In explicit feedback algorithms, signal packets are sent back from the congestion point to warn the source to slow down [4], while in implicit feedback algorithms, the source deduces the existence of congestion by observing changes in network factors such as delay, throughput difference, and packet loss [4].

Researchers and the IETF have proposed active queue management (AQM) as a mechanism for detecting congestion inside the network. Further, they have strongly recommended the deployment of AQM in routers as a measure to preserve and improve WAN performance. AQM algorithms run on routers and detect incipient congestion, typically by monitoring the instantaneous or average queue size. When the average queue size exceeds a certain threshold but is still less than the capacity of the queue, AQM algorithms infer congestion on the link and notify the end systems to back off by proactively dropping some of the packets arriving at the router. Alternately, instead of dropping a packet, AQM algorithms can set a specific bit in the header of that packet and forward it toward the receiver after congestion has been inferred. Upon receiving that packet, the receiver in turn sets another bit in its next ACK.
When the sender receives this ACK, it reduces its transmission rate as if its packet were lost. The process of setting a specific bit in the packet header and forwarding the packet is also called marking; a packet with this bit turned on is called a marked packet. End systems that observe marked or dropped packets reduce their transmission rates to relieve congestion and prevent the queue from overflowing.

In practice, most deployed routers use the simplistic Drop Tail algorithm, which is simple to implement with minimal computational overhead but provides unsatisfactory performance. To attack this problem, many queue management algorithms have been proposed, such as Random Early Drop (RED) [3], Flow Random Early Drop (FRED) [4], BLUE [5], Stochastic Fair BLUE (SFB) [5], and CHOKe (CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows) [7]. Most of these algorithms claim to provide fair sharing among different flows without imposing too much deployment complexity. However, most of the proposals focus on only one aspect of the problem (fairness, deployment complexity, or computational overhead), or fix imperfections of previous algorithms, and their simulation settings differ from one another. All of this makes it difficult to evaluate the algorithms and to choose one for a given traffic load. This paper aims at a thorough evaluation of these algorithms and an illustration of their characteristics by simulation. We compare the performance of FRED, BLUE, SFB, and CHOKe, using RED and Drop Tail as the evaluation baseline.
For each of these algorithms, three aspects are discussed: (1) resource utilization (whether the link bandwidth is fully utilized), (2) fairness among different traffic flows (whether different flows get their fair share), and (3) implementation and deployment complexity (whether the algorithm requires too much space or computational resources).

This paper is organized as follows. In Section 2 we introduce the queue management algorithms to be evaluated and how to configure their key parameters. Section 3 presents our simulation design, parameter settings, simulation results, and comparison. Section 4 discusses the key features of the different algorithms and their impact on performance. Section 5 summarizes our conclusions.

2. QUEUE MANAGEMENT ALGORITHMS

2.1 RED (Random Early Drop)

RED [2] was designed with the objectives to (1) minimize packet loss and queuing delay, (2) avoid global synchronization of sources, (3) maintain high link utilization, and (4) remove biases against bursty sources. The basic idea behind RED queue management is to detect incipient congestion early and to convey congestion notification to the end hosts, allowing them to reduce their transmission rates before queues in the network overflow and packets are dropped. To do this, RED maintains an exponentially weighted moving average (EWMA) of the queue length, which it uses to detect congestion. When the average queue length exceeds a minimum threshold (min_th), packets are randomly dropped or marked with an explicit congestion notification (ECN) bit [2]. When the average queue length exceeds a maximum threshold (max_th), all packets are dropped or marked.

While RED is certainly an improvement over traditional Drop Tail queues, it has several shortcomings. One of the fundamental problems with RED is that it relies on queue length as an estimator of congestion.
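The EWMA-plus-thresholds mechanism just described is small enough to state as runnable code. The following is a minimal sketch: the thresholds match the simulation settings used later in this paper (min_th = 50, max_th = 100 packets), max_p = 0.02 is an illustrative choice, and RED's count-based correction of the drop probability is omitted.

```python
def update_ewma(avg, qlen, wq=0.002):
    """EWMA of the instantaneous queue length (aging weight wq)."""
    return (1.0 - wq) * avg + wq * qlen

def red_drop_probability(avg, min_th=50, max_th=100, max_p=0.02):
    """Base drop/mark probability as a function of the average queue size.

    Below min_th nothing is dropped; at or above max_th everything is;
    in between, the probability ramps linearly up to max_p.
    """
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)
```

Note the gentle ramp up to max_p followed by the jump to certain drop at max_th: this discontinuity is one reason RED's behavior is so sensitive to how min_th and max_th are parameterized.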
While the presence of a persistent queue indicates congestion, its length gives very little information about the severity of congestion, that is, the number of competing connections sharing the link. In a busy period, a single source transmitting at a rate greater than the bottleneck link capacity can cause a queue to build up just as easily as a large number of sources can. Since the RED algorithm relies on queue length, it has an inherent problem in determining the severity of congestion. As a result, RED requires a wide range of parameters to operate correctly under different congestion scenarios. While RED can achieve an ideal operating point, it can only do so when it has a sufficient amount of buffer space and is correctly parameterized.

RED represents a class of queue management mechanisms that do not keep per-flow state: they put the data from all flows into one queue and focus on overall performance. This is the origin of the problems caused by non-responsive flows. To deal with them, a few congestion control algorithms have tried to separate different kinds of data flows, for example Fair Queueing [6] and Weighted Fair Queueing [6]. But their per-flow-scheduling philosophy differs from that of RED, so we do not discuss them here.

2.2 FRED (Flow Random Early Drop)

Flow Random Early Drop (FRED) [4] is a modified version of RED which uses per-active-flow accounting to make different dropping decisions for connections with different bandwidth usages. FRED only keeps track of flows that have packets in the buffer, so the cost of FRED is proportional to the buffer size and independent of the total number of flows (including short-lived and idle flows). FRED can achieve the benefits of per-flow queuing and round-robin scheduling with substantially less complexity.
Some other interesting features of FRED include: (1) penalizing non-adaptive flows by imposing a maximum number of buffered packets and capping their share at the average per-flow buffer usage; (2) protecting fragile flows by deterministically accepting packets from low-bandwidth connections; (3) providing fair sharing for large numbers of flows by using a "two-packet buffer" when the buffer is used up; (4) fixing several imperfections of RED by calculating the average queue length at both packet arrival and departure (which also incurs more overhead).

Two parameters are introduced in FRED: min_q and max_q, the minimum and maximum number of packets that each flow is allowed to buffer. To track the average per-active-flow buffer usage, FRED estimates it with a global variable avgcq. FRED maintains the number of active flows and, for each of them, a count of buffered packets, qlen, and a count, strike, of the number of times the flow has been unresponsive (qlen > max_q). FRED penalizes flows with high strike values. FRED processes arriving packets using the following algorithm:

    For each arriving packet P:
        calculate the average queue length avg
        obtain the connection ID of the arriving packet: flow_i <- connectionID(P)
        if flow_i has no state table then
            qlen_i <- 0
            strike_i <- 0
        end if
        compute the drop probability p as in RED:
            p <- max_p * (avg - min_th) / (max_th - min_th)
        max_q <- min_th
        if (avg >= max_th) then
            max_q <- 2
        end if
        if (qlen_i >= max_q || (avg >= max_th && qlen_i > 2 * avgcq)
                || (qlen_i >= avgcq && strike_i > 1)) then
            strike_i <- strike_i + 1
            drop the arriving packet and return
        end if
        if (min_th <= avg < max_th) then
            if (qlen_i >= max(min_q, avgcq)) then
                drop packet P with probability p as in RED
            end if
        else if (avg < min_th) then
            return
        else
            drop packet P and return
        end if
        if (qlen_i == 0) then
            N_active <- N_active + 1
        end if
        enqueue packet P

    For each departing packet P:
        calculate the average queue length avg
        if (qlen_i == 0) then
            N_active <- N_active - 1
            delete the state table for flow_i
        end if
        if (N_active > 0) then
            avgcq <- avg / N_active
        else
            avgcq <- avg
        end if

Pseudo code for the FRED algorithm

2.3 BLUE

BLUE is an active queue management algorithm that manages congestion using packet loss and link utilization history instead of queue occupancy. BLUE maintains a single probability, Pm, to mark (or drop) packets. If the queue is continually dropping packets due to buffer overflow, BLUE increases Pm, thus increasing the rate at which it sends back congestion notification or drops packets. Conversely, if the queue becomes empty or the link is idle, BLUE decreases its marking probability. This effectively allows BLUE to "learn" the correct rate at which it needs to send back congestion notification or drop packets.

The typical parameters of BLUE are d1, d2, and freeze_time. d1 determines the amount by which Pm is increased when the queue overflows, while d2 determines the amount by which Pm is decreased when the link is idle. freeze_time is an important parameter that determines the minimum time interval between two successive updates of Pm; it allows changes in the marking probability to take effect before the value is updated again.
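BLUE's update rule is simple enough to express as a runnable Python class. This is a sketch: the parameter defaults follow the values used in our simulation (d1 = 0.02, d2 = 0.002, freeze_time = 0.01 s), and clamping Pm to [0, 1] is our assumption.

```python
class Blue:
    """Marking probability Pm driven by loss and link-idle events."""

    def __init__(self, d1=0.02, d2=0.002, freeze_time=0.01):
        self.pm = 0.0
        self.d1 = d1                    # increment on packet loss
        self.d2 = d2                    # decrement on link idle
        self.freeze_time = freeze_time  # min seconds between updates
        self.last_update = -1.0         # allow an immediate first update

    def on_packet_loss(self, now):
        if now - self.last_update > self.freeze_time:
            self.pm = min(1.0, self.pm + self.d1)
            self.last_update = now

    def on_link_idle(self, now):
        if now - self.last_update > self.freeze_time:
            self.pm = max(0.0, self.pm - self.d2)
            self.last_update = now
```

Because freeze_time gates both updates, a burst of losses inside one freeze interval moves Pm only once, which is what lets BLUE settle on a stable marking rate.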
Based on those parameters, the basic BLUE algorithm can be summarized as follows:

    Upon link idle event:
        if ((now - last_update) > freeze_time)
            Pm = Pm - d2;
            last_update = now;

    Upon packet loss event:
        if ((now - last_update) > freeze_time)
            Pm = Pm + d1;
            last_update = now;

2.4 SFB

Based on BLUE, Stochastic Fair Blue (SFB) is a novel technique for protecting TCP flows against non-responsive flows. SFB is a FIFO queueing algorithm that identifies and rate-limits non-responsive flows based on accounting mechanisms similar to those used in BLUE. SFB maintains accounting bins organized in L levels with N bins in each level. In addition, SFB maintains L independent hash functions, each associated with one level of the accounting bins. Each hash function maps a flow into one of the N accounting bins in that level. The accounting bins are used to keep track of queue occupancy statistics for packets belonging to a particular bin. As a packet arrives at the queue, it is hashed into one of the N bins in each of the L levels. If the number of packets mapped to a bin goes above a certain threshold (i.e., the size of the bin), the packet-dropping probability Pm for that bin is increased. If the number of packets in that bin drops to zero, Pm is decreased. The observation is that a non-responsive flow quickly drives Pm to 1 in all of the L bins it is hashed into. Responsive flows may share one or two bins with non-responsive flows; however, unless the number of non-responsive flows is extremely large compared to the number of bins, a responsive flow is likely to be hashed into at least one bin that is not polluted by non-responsive flows, and thus has a normal Pm value. The decision to mark a packet is based on P_min, the minimum Pm value of all bins to which the flow is mapped. If P_min is 1, the packet is identified as belonging to a non-responsive flow and is then rate-limited.
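The bin accounting just described can be sketched compactly in Python. The hash construction and the step size delta below are illustrative choices of ours; a full implementation would update Pm with d1, d2, and freeze_time as in BLUE.

```python
import hashlib

class SFB:
    """L levels of N accounting bins; each flow hashes to one bin per level."""

    def __init__(self, levels=2, bins=23, bin_size=10, delta=0.005):
        self.L, self.N = levels, bins
        self.bin_size = bin_size
        self.delta = delta
        self.qlen = [[0] * bins for _ in range(levels)]
        self.pm = [[0.0] * bins for _ in range(levels)]

    def _bin(self, flow_id, level):
        # One independent hash function per level.
        digest = hashlib.sha256(f"{level}:{flow_id}".encode()).digest()
        return int.from_bytes(digest[:4], "big") % self.N

    def p_min(self, flow_id):
        """Minimum Pm over the flow's bins; 1.0 flags a non-responsive flow."""
        return min(self.pm[i][self._bin(flow_id, i)] for i in range(self.L))

    def on_enqueue(self, flow_id):
        for i in range(self.L):
            b = self._bin(flow_id, i)
            self.qlen[i][b] += 1
            if self.qlen[i][b] > self.bin_size:
                self.pm[i][b] = min(1.0, self.pm[i][b] + self.delta)

    def on_dequeue(self, flow_id):
        for i in range(self.L):
            b = self._bin(flow_id, i)
            self.qlen[i][b] -= 1
            if self.qlen[i][b] == 0:
                self.pm[i][b] = max(0.0, self.pm[i][b] - self.delta)
```

A flow whose p_min has been driven to 1.0 would then be rate-limited instead of being enqueued normally.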
    On every packet arrival:
        calculate hashes h0, h1, ..., h(L-1)
        update the bins at each level:
        for i = 0 to L - 1 do
            if (B[i][hi].qlen > bin_size) then
                B[i][hi].pm = B[i][hi].pm + delta
                drop packet
            else if (B[i][hi].qlen == 0) then
                B[i][hi].pm = B[i][hi].pm - delta
            end if
        end for
        pmin = min(B[0][h0].pm, B[1][h1].pm, ..., B[L-1][h(L-1)].pm)
        if (pmin == 1) then
            ratelimit()
        else
            mark or drop packet with probability pmin
        end if

    On every packet departure:
        calculate hashes h0, h1, ..., h(L-1)
        update the bins at each level:
        for i = 0 to L - 1 do
            if (B[i][hi].qlen == 0) then
                B[i][hi].pm = B[i][hi].pm - delta
            end if
        end for

Pseudo code for the SFB algorithm

The typical parameters of the SFB algorithm are Qlen, Bin_Size, d1, d2, freeze_time, N, L, Boxtime, and Hinterval. Bin_Size is the buffer space of each bin; Qlen is the actual queue length of each bin. For each bin, d1, d2, and freeze_time have the same meaning as in BLUE. N and L determine the size of the accounting structure, as the bins are organized in L levels with N bins in each level. Boxtime is used by the penalty box of SFB as a time interval that controls how much bandwidth non-responsive flows may take from the bottleneck link. Hinterval is the time interval used to change hash functions in our implementation of double-buffered moving hashing. Based on those parameters, the basic SFB queue management algorithm is shown above.

2.5 CHOKe

As a queue management algorithm, CHOKe [7] differentially penalizes non-responsive and unfriendly flows using the buffer occupancy information of each flow. CHOKe calculates the average occupancy of the FIFO buffer using an exponential moving average, just as RED does. It also marks two thresholds on the buffer, a minimum threshold min_th and a maximum threshold max_th. If the average queue size is less than min_th, every arriving packet is queued into the FIFO buffer.
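CHOKe's per-arrival procedure, including the random-candidate comparison detailed in the rest of this section, can be sketched as follows. This is a simplified sketch of basic CHOKe under our assumptions: the FIFO buffer is modeled as a list of flow IDs, only one candidate packet is drawn, and the thresholds and max_p values are illustrative.

```python
import random

def choke_on_arrival(queue, flow_id, avg,
                     min_th=50, max_th=100, max_p=0.02, rng=random):
    """Process one arriving packet and return the action taken.

    `queue` is a list of flow IDs standing in for the FIFO buffer.
    """
    if avg < min_th or not queue:
        queue.append(flow_id)
        return "enqueue"
    # Draw a random drop-candidate packet from the buffer.
    idx = rng.randrange(len(queue))
    if queue[idx] == flow_id:
        del queue[idx]            # same flow: drop the candidate ...
        return "drop both"        # ... and the arriving packet
    if avg >= max_th:
        return "drop"             # above max_th every arrival is dropped
    # Between the thresholds, drop with a RED-style probability.
    p = max_p * (avg - min_th) / (max_th - min_th)
    if rng.random() < p:
        return "drop"
    queue.append(flow_id)
    return "enqueue"
```

Note how a high-rate flow both fills the buffer with its own packets and arrives often, so its arrivals keep matching its own packets in the comparison step; this is the mechanism that penalizes non-responsive flows without any per-flow state.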
If the aggregate arrival rate is smaller than the output link capacity, the average queue size should not build up to min_th very often, and packets are not dropped frequently. If the average queue size is greater than max_th, every arriving packet is dropped, which moves the queue occupancy back to below max_th. When the average queue size is bigger than min_th, each arriving packet is compared with a randomly selected packet, called the drop candidate packet, from the FIFO buffer. If they have the same flow ID, they are both dropped. Otherwise, the randomly chosen packet is kept in the buffer (in the same position as before) and the arriving packet is dropped with a probability that depends on the average queue size. The drop probability is computed exactly as in RED. In particular, this means that packets are dropped with probability 1 if they arrive when the average queue size exceeds max_th. A flow chart of the algorithm is given in Figure 2. In order to bring the queue occupancy back to below max_th as fast as possible, we still compare and drop packets from the queue when the queue size is above max_th.

CHOKe has three variants:

A. Basic CHOKe (CHOKe): behaves exactly as described above; that is, one packet is chosen each time to compare with the incoming packet.

B. Multi-drop CHOKe (M-CHOKe): m packets are chosen from the buffer to compare with the incoming packet, and all packets that have the same flow ID as the incoming packet are dropped. Clearly, choosing more than one candidate packet improves CHOKe's performance. This is especially true when there are multiple non-responsive flows; indeed, as the number of non-responsive flows increases, it is necessary to choose more drop candidate packets. Basic CHOKe is a special case of M-CHOKe with m = 1.

C. Adaptive CHOKe (A-CHOKe): a more sophisticated way to do M-CHOKe is to let the algorithm automatically choose the proper number of candidate packets. A-CHOKe partitions the interval between min_th and max_th into k regions, R1, R2, ..., Rk. When the average buffer occupancy is in Ri, m is automatically set to 2i (i = 1, 2, ..., k).

    On every packet arrival:
        if (avg < min_th) then
            enqueue packet
        else
            draw a random packet from the router queue
            if (both packets are from the same flow) then
                drop both packets
            else if (avg < max_th) then
                enqueue packet with a probability p
            else
                drop packet
            end if
        end if

Pseudo code for the CHOKe algorithm

3. SIMULATION AND COMPARISON

In this section, we compare the performance of FRED, BLUE, SFB, and CHOKe. We use RED and Drop Tail as the evaluation baseline. Our simulation is based on ns-2 [8]. Both RED and FRED have implementations for ns-2. BLUE and SFB were originally implemented in a previous version of ns, ns-1.1, and are re-implemented in ns-2. Based on the CHOKe paper [7], we implemented CHOKe in ns-2. In our simulation, ECN support is disabled, and "marking a packet" means "dropping a packet".

3.1 Simulation Settings

As different algorithms have different preferences or assumptions about the network configuration and traffic pattern, one of the challenges in designing our simulation is to select a typical set of network topology and parameters (link bandwidth, RTT, and gateway buffer size), as well as load parameters (numbers of TCP and UDP flows, packet size, TCP window size, traffic patterns), as the basis for evaluation. We have not found a systematic way or guidance for designing the simulation, so we made our decisions by reading all related papers and extracting and combining the key characteristics of their simulations.

Figure 3.
Simulation topology

The network topology we used is a classic dumb-bell configuration, as shown in Figure 3. This is a typical scenario in which different types of traffic share a bottleneck router. TCP flows (an FTP application in particular) and UDP flows (a CBR application in particular) are chosen as typical traffic patterns. In our simulation, we use 10 TCP flows and 1 UDP flow. The bottleneck link in this scenario is the link between the two gateways. We set the TCP window size to 50 packets and the router queue buffer size to 150 packets (the packet size for both TCP and UDP is 1000 bytes). For RED, we also need to choose values for min_th and max_th, which are typically set to 20% and 80% of the queue buffer size. In the following, we set them to 50 and 100 packets.

3.2 Metrics

Throughput and queue size are the two major metrics in our simulations. The throughput of each flow is used to illustrate the fairness among different flows, and the total throughput can be compared with the bottleneck bandwidth as an indicator of resource utilization. Queue size is a direct indicator of router resource utilization. The average queue size of each flow illustrates the fairness of router resource allocation, which also shows the different characteristics of the algorithms. We calculate the average queue size using an exponentially weighted moving average (EWMA); the aging weight is set to 0.002.

3.3 Algorithm Parameters

How to configure the different algorithms for the simulation is also an issue. First, we want to show the best performance of each algorithm under the same network topology and traffic load. For the best performance, we need to fine-tune the algorithms for the fixed setting (as described above) to achieve the fairest sharing with a high utilization value. The results are presented in Section 3.4 to show their "best-effort" performance.
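The keywords of this paper list a Fairness Index. One standard way to quantify the per-flow throughput fairness measured in these simulations is Jain's fairness index; this is our addition, as the paper does not state the formula it uses.

```python
def jain_fairness_index(throughputs):
    """Jain's index: 1.0 for a perfectly even split, 1/n when a
    single flow takes all the bandwidth."""
    n = len(throughputs)
    total = sum(throughputs)
    return (total * total) / (n * sum(x * x for x in throughputs))
```

Applied to the per-flow throughputs of the 10 TCP flows and 1 UDP flow, the index summarizes in a single number how far an algorithm is from a perfectly fair share.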
On the other hand, an ideal algorithm should always achieve the best performance under all possible settings without human intervention. Different parameter sets for these algorithms impact their performance in different ways. We discuss the impact of algorithm-specific parameters and the ease of algorithm configuration in Section 4.

3.3.1 FRED

It is easy to set the parameters of FRED compared with RED. For the parameters inherited from RED, FRED uses a simple formula to calculate min_th and max_th, and assigns fixed values to w_q (0.002) and max_p (0.02). The only parameter new to FRED is min_q, whose value depends on the router buffer size. It is usually set to 2 or 4 because a TCP source sends no more than 3 packets back-to-back: two because of delayed ACK, and one more due to congestion window increase. We chose to set it to 2 (which is also the built-in setting of the FRED implementation in ns-2) after some experimentation. In most cases it turned out that FRED is not sensitive to min_q.

3.3.2 BLUE

In our simulation, the default values of the BLUE static parameters are: d1 = 0.02, d2 = 0.002, freeze_time = 0.01 s. d1 is set significantly larger than d2 because link under-utilization can occur when congestion management is either too conservative or too aggressive, but packet loss occurs only when congestion management is too conservative. By weighting heavily against packet loss, BLUE can quickly react to a substantial increase in traffic load. A rule of thumb is d2 = d1 / 10.

3.3.3 SFB

The default parameter values for SFB are: d1 = 0.005, d2 = 0.001, freeze_time = 0.001 s, N = 23, L = 2, Boxtime = 0.05 s, Hinterval = 5. Bin_Size is set to (1.5/N) of the total buffer size of the bottleneck link. N and L are related to the number of flows in the router.
If the number of non-responsive flows is large while N and L are small, TCP flows are easily misclassified as non-responsive flows [5]. Furthermore, since Boxtime indirectly determines the total bandwidth that non-responsive flows can take on the bottleneck link, it is fine-tuned according to the different policies for treating non-responsive flows. So in SFB, ideal parameters for one case are not necessarily good for other cases.

3.3.4 CHOKe

Apart from the parameters inherited from RED (min_th, max_th, etc.), our implementation maintains three parameters specific to CHOKe:

• adaptive_: controls whether or not A-CHOKe should be applied; setting adaptive_ = 1 enables A-CHOKe;
• cand_num_: effective when adaptive_ is not set; when cand_num_ = 1 it is basic CHOKe, otherwise it is M-CHOKe, and cand_num_ is the number of packets to be selected from the queue;
• interval_num_: effective when adaptive_ is set; this parameter determines the number of intervals to be divided.

In our experience running CHOKe, A-CHOKe has the best performance, so in the following simulations we choose adaptive_ = 1 and interval_num_ = 5.

3.4 Comparison

Figure 4 and Figure 5 show the major results of the simulation. The total throughput values of all TCP and UDP flows are not shown here. For all the simulations, the total throughputs are reasonably high (about 90-96% of the available bandwidth), indicating that all these algorithms provide high link utilization. Figure 4-1 shows the UDP throughput and queue length in simulations using 10 TCP flows and 1 UDP flow, with the UDP sending rate varying from 0.1 Mbps to 8 Mbps.¹ According to this diagram, Drop Tail is the worst in terms of unfairness: it provides no protection for adaptive flows and yields the highest UDP throughput.
RED and BLUE do not work well under high UDP sending rates. When the UDP sending rate is above the bottleneck link bandwidth, the UDP flow quickly dominates the transmission on the bottleneck link, and the TCP flows can only share the remaining bandwidth. On the other hand, FRED, SFB, and CHOKe properly penalize the UDP flow, and the TCP flows achieve their fair share. One interesting point in Figure 4-1 is the behavior of CHOKe: UDP throughput decreases as the UDP rate grows from 2 Mbps to 8 Mbps. This is because, as the UDP rate increases, the total number of packets selected for comparison increases, which increases the dropping probability for UDP packets and decreases the UDP flow's throughput as a result.

Figure 4-2 illustrates the size of the queue buffer occupied by the UDP flow. It seems that buffer usage is a good indicator of link bandwidth utilization. Similar to Figure 4-1, Drop Tail is the worst in fairness. Although RED and BLUE are similarly permissive to non-responsive flows, BLUE uses much less buffer. FRED and SFB are again the fairest.

¹ Due to the method for changing the UDP rate in ns-2, the sample intervals we chose are not uniform, but this does not affect our analysis.

Figure 4-1. UDP flow throughput

Figure 4-2. UDP flow queue size

Figure 5 illustrates the average queue size for UDP and TCP flows as well as the average total buffer usage. The differences between the algorithms are clearly captured in the buffer usage plots.
We can see that for Drop Tail, RED, and BLUE, most of the packets in the queue are UDP packets, while only a small percentage belongs to the TCP flows. FRED, SFB, and CHOKe effectively penalize the UDP flow and allow the TCP flows to achieve a higher throughput.

It is also interesting to notice the differences among the total queue sizes. Since Drop Tail only drops packets when the queue buffer is full, most of the time its total queue size is the maximum queue buffer size. For RED, although it begins to provide congestion notification when the queue size reaches min_th, this only affects the TCP flows, while the UDP flow keeps the same sending rate, which quickly drives the total queue size to max_th; after that, all incoming packets are dropped and the total queue size stays at max_th. In CHOKe, however, the random packet selection mechanism effectively penalizes the UDP flow after the average queue size reaches min_th. What is more, the UDP dropping rate is proportional to its incoming rate, which effectively keeps the total queue size around min_th, as illustrated in Figure 5f. FRED, BLUE, and SFB are not directly affected by the min_th and max_th settings, so their total queue sizes have no obvious relation to these two parameters in Figure 5.

In some of the plots in Figure 5 where the TCP flow queue size is very small, the UDP flow queue size is the same as the total queue size, yet the corresponding queue size for the TCP flows is not zero, which seems to be a contradiction. The reason is that we draw these figures using the EWMA value of the queue size: although we calculate the queue size every time we get a new packet, only the EWMA value (weight = 0.002) is plotted.²
It is the EWMA that eliminates the difference between the UDP flow queue size and the total queue size when the TCP flow queue size is very small.

² The figures of the real queue size have a lot of jitter and are difficult to read.

(a) Drop Tail (b) RED (c) FRED (d) BLUE (e) SFB (f) CHOKe

Figure 5. Queue size under different algorithms (notice that the total queue sizes of the different algorithms are different)

4. ALGORITHM CHARACTERISTICS

4.1 FRED

The FRED algorithm focuses on the management of per-flow queue length. The parameter qlen is compared with min_q and max_q and used as a traffic classifier. Fragile flows are those whose qlen <= min_q; robust flows are those whose min_q