The Lotus-Eater Attack


Authors: Ian A. Kash, Eric J. Friedman, Joseph Y. Halpern

Ian A. Kash (Computer Science Dept., Cornell University, kash@cs.cornell.edu), Eric J. Friedman (School of Operations Research and Information Engineering, Cornell University, ejf27@cornell.edu), Joseph Y. Halpern (Computer Science Dept., Cornell University, halpern@cs.cornell.edu)

"They started at once, and went about among the Lotus-eaters, who did them no hurt, but gave them to eat of the lotus, which was so delicious that those who ate of it left off caring about home, and did not even want to go back and say what had happened to them, but were for staying and munching lotus with the Lotus-eaters without thinking further of their return." -- The Odyssey [9]

Abstract

Many protocols for distributed and peer-to-peer systems have the feature that nodes will stop providing service for others once they have received a certain amount of service. Examples include BitTorrent's unchoking policy, BAR Gossip's balanced exchanges, and threshold strategies in scrip systems. An attacker can exploit this by providing service in a targeted way to prevent chosen nodes from providing service. While such attacks cannot be prevented, we discuss techniques that can be used to limit the damage they do. These techniques presume that a certain number of processes will follow the recommended protocol, even if they could do better by "gaming" the system.

1 Introduction

Many current distributed and peer-to-peer systems have the feature that they are satiable; they have users that (by design) will stop providing service to others if they are themselves receiving a sufficient quantity of service. In many cases this is the product of "tit-for-tat-like" designs, which attempt to combat free riding by denying service to those who are not providing it.
While this approach provides an incentive for cooperation, it has the unfortunate side effect that if there is no service for a peer to provide, then he will generally receive reduced or no service. Ironically, this opens the systems up to an attack that we call the lotus-eater attack: the attacker does no direct harm to any peer; instead he supplies the service to some peers, thus satiating them. Once those peers are satiated, they stop providing service to others. The peers not being satiated by the attacker then receive reduced or no service.

A wide range of systems are satiable and thus potentially vulnerable to this attack. In direct reciprocity systems like BitTorrent [4] and BAR Gossip [16], peers trade with the best partners they can find (BitTorrent) or stop trading when there is nothing they want (BAR Gossip). An attacker can prevent a peer from serving others by being a good trading partner and satisfying all of its requests. In indirect reciprocity systems, such as reputation systems [7, 12] and scrip systems [10, 19], peers need to perform service for others often enough to maintain a good reputation or supply of money. If an attacker can ensure that a peer maintains a good reputation or supply of money despite any requests the peer makes, then that peer will no longer provide service for others. Even systems not designed to be tit-for-tat-like may be satiable. For example, a node in a sensor network might shut down to save power if it has received all the updates it needs. All of these systems have users that will stop providing service in response to this attack. However, the exact way that the attack is carried out and the overall impact of the attack on the system varies significantly.

Consider the case of BAR Gossip. In most gossip protocols, nodes randomly select partners to pass updates on to so that the updates spread through the entire system.
However, this allows nodes to free ride by receiving updates while not using their own bandwidth to pass them on to anyone else. BAR Gossip encourages rational nodes to provide service by having nodes give away updates on an exchange basis. The downside is that a node following the protocol will not continue to provide service when there are no more updates for it to receive. If an attacker successfully distributes all of the updates to a large percentage of the nodes in the system, then the majority of interactions will result in no updates being exchanged. For those nodes being satiated by the attack, this is a wonderful outcome; they are receiving perfect service. However, those nodes that are not receiving the updates from the attacker will have few opportunities to get the updates they need. Since the updates in the intended application of BAR Gossip (for example, a streaming video service) are time sensitive, this minority of nodes will miss updates and may find the service unusable. By changing who is satiated over time, the attacker could even make the service intermittently unusable for all nodes.

Another context where this attack can be effective is in a scrip system. In these systems users are paid for providing service in scrip, a currency issued by the system. They can then redeem this scrip later in exchange for service. An optimal strategy for a rational agent in such a system is to choose a threshold and provide service only when he has less than that threshold amount of scrip [14]. If an attacker can ensure that an agent has a large amount of money (either by giving money away, or providing cheap service to him), the agent will stop providing service. By targeting a user or users who control important or rare resources, the attacker could prevent all users from receiving certain kinds of services.
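The threshold behavior just described, and the way gifting scrip satiates a node, can be sketched as a toy; the class, method names, and threshold value below are our own illustrative assumptions, not the mechanism analyzed in [14]:

```python
# Toy sketch of a threshold strategy in a scrip system, and how an
# attacker satiates a node by giving it money. Illustrative only.

class ScripAgent:
    def __init__(self, threshold, scrip=0):
        self.threshold = threshold  # serve others only below this balance
        self.scrip = scrip

    def willing_to_serve(self):
        # Rational behavior: work only while "poor" (below the threshold).
        return self.scrip < self.threshold

    def receive_payment(self, amount):
        self.scrip += amount

agent = ScripAgent(threshold=10)
assert agent.willing_to_serve()      # below threshold: still serves others

# Lotus-eater attack: the attacker gives the agent money (or cheap
# service) until its balance reaches the threshold...
agent.receive_payment(10)
assert not agent.willing_to_serve()  # ...and the agent stops serving
```

The attack does the agent no harm at all; it is everyone who depended on that agent's service that suffers.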
This type of behavior occurs regularly in the traditional economy, when companies sign an exclusive contract or put particular lawyers on retainer to deny others access to them.

Despite the attack being possible in BitTorrent, it seems likely to do significantly less damage. In BitTorrent, peers (known as leechers) cooperatively download a file. Each leecher has k other unchoked peers to whom he provides pieces of the file. These unchoked peers are mainly leechers that have recently provided it with the most service, but some may be chosen randomly (optimistic unchokes) to try to find better peers. It is quite possible to ensure that, excluding these random choices, all of his unchoked peers are controlled by the attacker. However, since most leechers are downloading more than they upload, this is often actually a net benefit to the torrent. Even targeting users that are uploading more than they download seems likely to only modestly impair the progress of the torrent, especially since the attacker must contribute significant bandwidth of his own to make sure he stays unchoked. The attacker could try to target leechers who have rare pieces to artificially create a "last pieces problem," but BitTorrent's rarest-first policy does a good job of resolving this problem [15].

These three examples are cases where, to varying extents, a lotus-eater attack can impair the performance of a system. In order to understand how the attack works in general and why the effectiveness varies, we state an informal theorem that characterizes the conditions under which an attacker can cause nodes in a system to stop providing service, and develop a simple model of how this lack of service can be used to harm the system. Using that model we examine design principles that make systems resilient to lotus-eater attacks.
Two of these are traditional: tolerating non-random failures and making satiation hard by the use of coding or a scrip system. The other two are newer principles that have received relatively little study: making use of obedient nodes and encouraging altruism.

The remainder of this paper is organized as follows. In Section 2 we examine in detail the effectiveness of a lotus-eater attack on BAR Gossip as well as changes to the algorithm that can make it more robust. In Section 3 we present a model that abstracts the general structure of systems built on tit-for-tat style mechanisms and state an informal theorem that captures the essential nature of the attack and the possible avenues for preventing it. In Section 4 we examine design principles relevant to preventing the attack and some of the subtleties involved in implementing them. We conclude in Section 5 with discussion of some open questions raised by this attack.

Table 1: Simulation Parameters

    Parameter               Value
    Number of Nodes         250
    Updates per Round       10
    Update Lifetime (rds)   10
    Copies Seeded           12
    Opt. Push Size (upd)    2

2 Attacking BAR Gossip

In a BAR Gossip system, a broadcaster is releasing updates that nodes need to collect within a certain period of time. For example, in a streaming video application, the updates are frames of the video that need to be received in time to display. Each round, the broadcaster sends each of the updates for that round to a random subset of the nodes. Nodes then gossip the updates through two protocols, which each node can initiate once with a pseudorandomly chosen partner (nodes have no control over who their partner will be). In a balanced exchange, nodes exchange as many updates as possible on a one-for-one basis. In an optimistic push, the node initiating the push sends a list of recently released updates it has to offer and a list of updates expiring relatively soon that it needs.
The other node can then receive a limited number of the recent updates in exchange for older updates or junk data. The push is optimistic on the part of the initiator because he hopes he receives useful updates in return. In particular, if a node has no missing older updates, he has nothing to gain by initiating an optimistic push, and a rational node will not. These protocols are described in greater detail in [16].

To mount an attack on BAR Gossip, the attacker divides the nodes into two groups. The first group is the satiated nodes, to whom the attacker attempts to provide as many updates as possible. The second group is the isolated nodes, to whom the attacker provides no service. If the attacker provides enough updates to the satiated nodes, they will make relatively few and small balanced exchanges because most of their updates are provided by the attacker. This also means they will rarely be missing very old updates and so will rarely initiate optimistic pushes. Since isolated nodes receive no service from attacking nodes and limited service from satiated nodes, they have few opportunities to trade for each update.

In our experiments, the attacker attempts to satiate 70% of the system (including whatever percentage he controls). The reason for our choice of 70% will be explained shortly. Figure 1 shows the results of three versions of the attack on a BAR Gossip system using the same parameters as, and an updated version of, the simulation from [16]. The parameters are summarized in Table 1. In their simulation, nodes need to receive more than 93% of the updates for the stream to be usable. The results in Figure 1 and later figures are given for isolated nodes; satiated nodes receive near perfect service. The curve labeled "crash attack" provides a baseline where the attacker simply does nothing.
He may simply have crashed or be a Byzantine node following the strategy of initiating but never completing exchanges to waste bandwidth (this was the strategy used by Byzantine nodes in [16]). With this attack, the attacker needs to control 42% of the system to ensure fewer than 93% of the updates are delivered. This curve is very similar to Figure 6 of [16], where colluding nodes provide very little service to others because they receive most updates from other colluding nodes. This curve also guided our decision to choose 70% as the fraction to satiate; it strikes a balance between the need to satiate enough nodes to limit trade opportunities for isolated nodes and a desire to isolate as many nodes as possible.

The "ideal lotus-eater attack" curve assumes that attacking nodes can immediately send updates to all satiated nodes as soon as they receive them. This might be the case if the attacker can exploit the implementation of the protocol to send updates to nodes with whom he has not started an exchange. Attacking nodes never trade, merely forwarding all updates they receive from the broadcaster. Note that this means that satiated nodes will have to trade for any updates the attacker did not receive from the broadcaster. With this attack, the attacker can control as few as 4% of the system and still make service unreliable. Note that with so few nodes under his control, the attacker is receiving only 39% of the updates. This shows that frequent partial satiation can be sufficient to attack the system.

[Figure 1: Three attacks on BAR Gossip. The plot shows the fraction of updates received by isolated nodes against the fraction of nodes controlled by the attacker, for the crash attack, the ideal lotus-eater attack, and the trade lotus-eater attack.]
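The mechanics of the ideal attack can be illustrated with a toy round-based simulation; this is a drastic simplification (the group sizes, parameters, and one-for-one exchange rule below are our own), not the actual BAR Gossip protocol or the simulation from [16]:

```python
# Toy sketch of the "ideal lotus-eater attack": attacker nodes forward
# everything they receive from the broadcaster to the satiated group,
# which then has little left to trade away. Illustrative only.
import random

random.seed(0)
N, UPDATES, SEED_COPIES, ROUNDS = 50, 40, 5, 30
nodes = {i: set() for i in range(N)}
attacker = set(range(5))          # nodes the attacker controls
satiated = set(range(5, 35))      # group the attacker feeds
isolated = set(range(35, N))      # group the attacker starves

# Broadcaster seeds each update to a few random nodes.
for u in range(UPDATES):
    for i in random.sample(range(N), SEED_COPIES):
        nodes[i].add(u)

# Ideal attack: attacker instantly relays all its updates to satiated nodes.
attacker_has = set().union(*(nodes[i] for i in attacker))
for i in satiated:
    nodes[i] |= attacker_has

# Gossip: each round, every honest node does one balanced exchange with a
# random partner; each side gives one update for each it receives, so a
# node with little to offer can obtain little in return.
for _ in range(ROUNDS):
    for i in set(range(N)) - attacker:
        j = random.choice([k for k in range(N) if k != i])
        if j in attacker:
            continue                          # attacking nodes never trade
        give, take = nodes[i] - nodes[j], nodes[j] - nodes[i]
        k = min(len(give), len(take))         # balanced: equal counts
        nodes[j] |= set(list(give)[:k])
        nodes[i] |= set(list(take)[:k])

iso = sum(len(nodes[i]) for i in isolated) / (len(isolated) * UPDATES)
print(f"isolated nodes received {iso:.0%} of updates")
```

Because satiated nodes are missing almost nothing, isolated nodes have almost nothing they can offer them, and the one-for-one rule starves the isolated group even though the attacker never harms anyone directly.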
The "trade lotus-eater attack" curve makes the typically more reasonable assumption that the attacker can give updates to nodes only during interactions dictated by the protocol. However, he is able to give nodes more updates than a normal node would (every update he has). Because the attacker needs to control enough nodes to communicate with satiated nodes reasonably often, he needs significantly more nodes than in the ideal lotus-eater attack; with this version of the attack, the attacker needs to control at least 22% of the nodes in the system, far less than the approximately 42% that the more traditional crash attack required. This may make it possible to launch a lotus-eater attack in some settings where a crash attack would be impossible. We should note, however, that this does require enough bandwidth at each attacking node to satiate multiple nodes every round, while the crash attack requires essentially no bandwidth beyond that needed to maintain the nodes in the system.

In anticipation of the coming discussion, we also investigate the impact of two changes on the effectiveness of lotus-eater attacks. First, Figure 2 shows the effect of increasing the maximum size of an optimistic push to 10 updates. This has the effect that nodes that are willing to initiate optimistic pushes will be more altruistic towards other nodes; they are willing to give away more updates at the risk of receiving junk. This makes partial satiation much less effective, so the ideal lotus-eater attack now requires at least 15% of the nodes in the system, which is enough to allow him to provide 85% of the updates to satiated nodes.

[Figure 2: Larger push size reduces effectiveness. Same axes and attack curves as Figure 1.]
It also makes the trade lotus-eater attack impractical, by nearly doubling the required fraction of nodes to 40%. This change does have two downsides. First, rational nodes might no longer be willing to participate in optimistic pushes if they tend to receive significantly more junk updates due to the higher push size. Second, Byzantine nodes can create more work by asking for a larger number of updates.

The other change we consider is to the behavior of the nodes in the system. In addition to Byzantine and rational nodes, the BAR model includes the possibility of obedient "altruist" nodes who are willing to follow the protocol even if it is not optimal. One way we could exploit this is by allowing balanced exchanges to be slightly unbalanced. We modified the protocol so that nodes are willing to give one more update than they receive, assuming they are receiving at least one update. Since the node already has the overhead of a balanced exchange, it doesn't seem unreasonable that a node would be willing to upload a little extra data. Incentives to free ride or exploit by Byzantine agents are minimal because there must already be a balanced exchange occurring and there is only a single additional update involved. Figure 3 shows the effects of this change on a trade lotus-eater attack, both alone and in conjunction with a more modest increase in the push size to 4. The combination of these two small changes is enough to increase the fraction of the system the attacker needs to control by almost 50%.

[Figure 3: Obedient nodes reduce effectiveness. The plot compares push sizes 2 and 4, each with balanced and slightly unbalanced exchanges.]
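The slightly-unbalanced exchange rule above can be sketched as a small helper; the function and parameter names are our own illustrative choices, not code from the modified protocol:

```python
# Sketch of the slightly-unbalanced exchange tweak: an obedient node
# gives up to one more update than it receives, provided it receives
# at least one. Illustrative only.

def exchange_sizes(i_can_give, i_can_take, allow_extra=True):
    """Return (gives, takes) for node i under the modified rule."""
    k = min(i_can_give, i_can_take)      # classic balanced exchange
    if allow_extra and k >= 1 and i_can_give > k:
        return k + 1, k                  # one extra update, for free
    return k, k

# Strictly balanced: 3-for-3 even though i has 5 updates to offer.
assert exchange_sizes(5, 3, allow_extra=False) == (3, 3)
# Modified rule: i throws in one extra (4-for-3).
assert exchange_sizes(5, 3) == (4, 3)
# No extra when i receives nothing, so pure free riders gain nothing.
assert exchange_sizes(5, 0) == (0, 0)
```

The `k >= 1` guard is what keeps the incentive to exploit the rule minimal: the single free update is only available inside an exchange that is already taking place.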
3 Understanding the Lotus-eater Attack

In this section, we develop an understanding, at an abstract level, of how lotus-eater attacks work and what can be done about them. We first examine what properties of a system allow an attacker to use a lotus-eater attack, and state a "theorem" that encapsulates the essence of the attack. We then examine how this attack, which is not directly harmful, can be used to harm a system. To do this, we develop a simple model whose parameters characterize features of systems that affect the viability of an attack.

A system is characterized by a graph G = (V, E). The nodes are the users, each of which is a state machine implementing some protocol. The edges are the pairs of nodes that can potentially communicate. There is a set T of labeled tokens; one feature of a node's state is the set of tokens that the node currently has. A node may reach the point where it has all the tokens that it wishes to collect. This is captured by a satiation function sat, a monotone function that maps a user i, a time t, and a set T' ⊆ T of tokens to {true, false}. Intuitively, sat(i, t, T') = true if i does not need any more tokens at time t if it has all the tokens in T'. A state s for i is satiated at time t if sat(i, t, T') = true, where T' is the set of tokens associated with s at time t.

In a satiated state, a node has all of its current desires met. It may eventually leave the state if new tokens enter the system or it loses some of its current tokens, but until that time it can gain no benefit from other nodes. Many protocols have the property that a node in a satiated state will not provide service to other nodes. In many cases this design is due to a desire to make the protocol incentive-compatible. We adopt the term satiation-compatible to describe protocols where nodes in a satiated state do not provide service.
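As a toy rendering of these definitions, one possible satiation function (wanting the complete token set; the three-token universe and function names below are our own assumptions) and the corresponding satiation-compatible service rule might look like:

```python
# Minimal rendering of the definitions above: a satiation function
# sat(i, t, T') and a satiation-compatible service rule that refuses
# requests once a node is satiated. Illustrative only.

ALL_TOKENS = frozenset({"a", "b", "c"})

def sat(i, t, tokens):
    # Example satiation function: node i is satiated once it holds
    # every token (independent of i and t in this toy instance).
    return frozenset(tokens) == ALL_TOKENS

def provides_service(i, t, tokens):
    # Satiation-compatible protocol: serve others iff not satiated.
    return not sat(i, t, tokens)

assert provides_service(0, 0, {"a", "b"})           # still collecting
assert not provides_service(0, 0, {"a", "b", "c"})  # satiated: stops
```

Monotonicity here means that adding tokens to T' can only turn sat from false to true, never the reverse.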
With these definitions in place we can state our "theorem."

Observation 3.1: In a system where a satiation-compatible protocol is used, an attacker that can provide a node with tokens sufficiently rapidly can prevent it from ever providing service.

The observation is trivial. In the extreme case, if an attacker provides a full set of tokens instantaneously, then clearly the node will be satiated when a message arrives. The importance of this observation is that it helps abstract out the two key factors that allow the lotus-eater attack to stop a node from providing service: a satiation-compatible protocol and an attacker that provides tokens "sufficiently rapidly." These two factors are what a protocol designer must target in order to mitigate lotus-eater attacks. In Section 4, we will analyze these two factors in more detail.

While the observation tells us that an attacker can cause nodes to stop providing service using a lotus-eater attack, it does not say anything about why this would be a bad thing. Indeed, for a (well-designed) system this attack should have little or no negative impact. In order to get a sense of the ways this attack can actually damage a system, we consider a simplified model of a token-collecting system. This system uses a simple protocol. In each round, each node i selects up to c communication partners from among its adjacent nodes, and i gets a copy of the tokens that each partner has, while each partner gets a copy of the tokens i has (for simplicity, assume all of these events happen simultaneously). Once i has a copy of all the tokens (i.e., once i is satiated), he stops communicating. In many real systems, rather than stopping service entirely, nodes actually continue to provide some service even though they are satiated (for example, seeding in BitTorrent).
We allow for this possibility in our model by having the probability that a node responds to requests even when satiated be nonzero. A system in this model is a tuple (G, T, sat, f, c, a) where:

- G = (V, E) is the underlying graph, which we assume to be connected;
- T is a finite set of tokens;
- sat(i, t, T') = true iff T' = T (i.e., every node wishes to collect every token);
- f : V → T is an initial allocation of tokens to nodes;
- c is a bound on the number of nodes that each node can contact each round;
- a is the probability that a node responds to requests even if satiated. This captures the amount of altruism in the system.

We assume that, at the start of every round, an attacker chooses a subset of the nodes and gives each node in the set all the tokens. Clearly this overestimates the power of the attacker in most real systems, and ignores the possibility that T will grow over time. However, this simple model suffices to help us see where problems may lie.

Of the six parameters in our model, T and sat are typically beyond the control of the designer (although, as we discuss in Section 4, techniques like coding, which can be viewed as changing the set of tokens, may be of some use in specific cases). Knowledge of G, f, and c can help an attacker know what to target; we discuss each of these three parameters in turn, and then consider the role of a. (Note that incentive-compatible and satiation-compatible are not equivalent notions. It is easy to construct protocols that satisfy one property and not the other.)

Suppose that the underlying graph G is a grid. Then at any time the attacker can partition the graph with relatively little cost by removing any set of nodes that constitutes a cut. If some side of the cut is missing a token, nodes on that side of the cut will never be able to collect all the tokens.
Clearly, an attacker can always make a cut around a single node, but doing this on a large scale is expensive. While finding inexpensive cuts depends on the structure of G, the damage is significant only if some side of the cut is missing a token. Whether this is the case depends (in part) on f, the initial allocation. If many nodes start with each token and those nodes are well spread, this attack is likely to be ineffective. (Note that, in a real system, what we are calling the "initial allocation" may actually include some of the initial exchanges, because an attacker cannot always satiate instantly.) This version of the attack is also likely to be ineffective in random networks, but in, for example, sensor networks, there is often an inherent structure an attacker may be able to make use of.

Even in the absence of significant structure, knowledge of the initial allocation may help an attacker, particularly if one or more of the tokens is rare. In the extreme case where some token is initially at a single node, an attacker can deny the entire system access to that token for the cost of satiating one node. This version of the attack may be relevant to networks set up for file sharing, grid computing, and other similar applications, where some resources are often rare. Furthermore, in these systems it tends to be relatively easy to determine what the rare resources are and who has them.

As an alternative to targeted removals, an attacker with sufficient resources may simply attempt to satiate a large fraction of the system. Here the parameter c is relevant. This parameter is, in a sense, a measure of how many "trade opportunities" a node gets each round. If the attacker can successfully reduce the number of trade opportunities, the overall rate at which tokens spread through the system may decrease.
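A toy instance of the (G, T, sat, f, c, a) model can make the role of a concrete; the ring graph, parameter values, and function below are our own illustrative assumptions, not the paper's analysis:

```python
# Toy instance of the token-collecting model: a ring graph G, node i
# initially holding token i (the allocation f), c = 1 contact per round,
# and altruism probability a of responding when satiated. Illustrative only.
import random

def run(a, n=12, rounds=200, seed=1):
    rng = random.Random(seed)
    tokens = [{i} for i in range(n)]          # f: node i starts with token i
    full = set(range(n))                      # sat: want every token
    for _ in range(rounds):
        for i in range(n):
            if tokens[i] == full:
                continue                      # satiated nodes stop asking
            j = rng.choice([(i - 1) % n, (i + 1) % n])   # c = 1, ring G
            if tokens[j] != full or rng.random() < a:
                # Partner responds: both sides copy each other's tokens.
                tokens[i] |= tokens[j]
                tokens[j] |= tokens[i]
            # With a = 0, a satiated partner silently refuses.
    return sum(t == full for t in tokens)

print("satiated nodes with a=1.0:", run(1.0))
print("satiated nodes with a=0.0:", run(0.0))
```

With a > 0 every node eventually collects all tokens, as the text claims; with a = 0, a node whose neighbors all happen to satiate first can stall forever, which is exactly the situation a lotus-eater attacker tries to engineer.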
This approach is what we used in the case of BAR Gossip, and it was particularly damaging there because the updates had hard deadlines.

The final parameter a is a factor that helps mitigate lotus-eater attacks. A system with a > 0 is not truly satiation-compatible (and generally not truly incentive-compatible either, because agents can often free ride on altruistically provided service). However, adding a little bit of altruism can make a big difference in reducing the harm of attacks, since satiated nodes can still provide some service. For our simple model, any system with a > 0 will eventually end up with all nodes satiated. Although we capture the degree of altruism here by the probability of responsiveness even when satiated, altruism can be introduced into the system in other ways. For example, seeding and optimistic unchokes in BitTorrent and optimistic pushes in BAR Gossip can all be viewed as ways to introduce some altruism into the system. We discuss ways that altruism can be leveraged in greater detail in Section 4.

4 Preventing the Lotus-eater Attack

Our observation says that if a satiation-compatible protocol is used in a system, then a lotus-eater attack succeeds if an attacker can provide service "sufficiently rapidly." This suggests that one way to prevent lotus-eater attacks is to abandon satiation-compatibility entirely. In general this seems undesirable, as many popular systems are satiation-compatible. Furthermore, it seems difficult to design a robust incentive-compatible system that is not also satiation-compatible. Most designs for incentive-compatible systems are tit-for-tat-like; they rely on some notion of reciprocity to provide incentive-compatibility. Satiation-compatibility is a natural consequence of this, because when a node is satiated there is no room for reciprocity.
Despite being among the simplest systems in which to analyze the incentives of users, even BitTorrent is still vulnerable to free riding [11, 18]. So abandoning these relatively simple systems to try to maintain incentive-compatibility while avoiding satiation-compatibility seems likely to introduce as many problems as it solves.

Since we do not wish to abandon satiation-compatibility entirely, we focus on ways to tolerate lotus-eater attacks. In this section, we examine four design principles that can help do this: being resilient to non-random failures, making satiation difficult, leveraging obedience, and encouraging altruism. Of the four principles, resilience to non-random failures is the best studied; we have nothing new to add.

As we saw in Section 3, attacks based on the structure of G and f are essentially independent of the fact that we are using a lotus-eater attack. These attacks work by removing key nodes from the system; the way they are removed is essentially incidental. A system vulnerable to this type of attack is also vulnerable to many others, and may experience difficulties even without an attack if key nodes happen to become satiated. We thus assume that G and f have been chosen to prevent this.

The second principle, making satiation hard, is more interesting. As a general principle, it is good even when an attack is not underway, because less satiation means more opportunities for useful work to be done. In the context of our model, making satiation hard means focusing on T and sat. While there may be an underlying set of tokens that a user wants to collect, using a scrip system or reputation system effectively allows the set of relevant tokens to be changed. In such a system, a node will determine satiation based on its current amount of money or reputation.
This generally makes it easy to satiate a few nodes, but difficult to satiate a large number of nodes. For example, in a scrip system there is generally a fixed amount of money. While it is easy for an attacker to accumulate enough money to satiate a few nodes, there may not even be enough money in the system to satiate a significant fraction of the nodes. This suggests that scrip could be the basis for an incentive-compatible gossip system that is robust against lotus-eater attacks.

In many systems, the goal is to collect a complete set of tokens. A node might need the complete set of updates or all the pieces of a file. If a node only has a few tokens, he may be unable to trade with most agents because they already have them, making him "effectively satiated." Similarly, a node with almost every token may have a hard time finding nodes with the remaining tokens he needs. Another way to make satiation hard is to adopt policies that increase the likelihood that nodes in such a situation will be able to make a useful exchange. BitTorrent uses a number of optimizations for these cases. In general it tries to avoid this effective satiation by using a "rarest-first" policy, where leechers will target rare pieces first. When first joining the system, leechers will request random pieces to get pieces to trade as quickly as possible. Finally, BitTorrent has a special "endgame mode" to allow for the rapid acquisition of the final pieces [4]. Another approach is to use ideas from network coding, as done by Avalanche [6], to change the requirements so that nodes need to collect only enough independent tokens to reconstruct the full information rather than the complete set of tokens.

The last two principles, leveraging obedience and encouraging altruism, are perhaps the most interesting, in terms of both broad applicability and directions for future research.
Work in fault tolerance typically considers all nodes to be either good or bad; work in game theory considers all nodes to be rational. But in practice, even in a system with rational nodes, there will be a pool of users running the default client on the default settings as long as this serves them reasonably well. The BAR model [2] bridges this gap by considering systems with a mix of Byzantine, rational, and altruistic nodes. (We prefer to use the term obedient for what Aiyer et al. call "altruistic," since these are nodes that simply follow the protocol, and use the term altruistic somewhat differently, using it to refer to nodes that provide service even when satiated. Of course, such nodes may be obedient as well, if they are simply following the protocol.) We know of only one protocol that explicitly seeks to exploit these obedient nodes [17]. In the remainder of this section, we examine how obedience and altruism can be used to prevent lotus-eater attacks.

One use of obedience is to prevent sufficiently rapid satiation, by limiting the rate at which an attacker can provide service. Doing this represents a radical departure from the typical design goal for most P2P systems. In general a designer strives to provide as much service as rapidly as possible. Now the goal becomes that of providing service at a reasonable pace, and enforcing that pace. At first glance, reducing the rate at which a node provides service seems like a silly idea. However, there are a number of cases beyond lotus-eater attacks in which this might be beneficial. In general, the incentive for a user to contribute to a system is that the service he would receive if he free-rides is inferior. If the service provided to free-riders is good, there is little incentive for participation.
Limiting the amount of service provided can increase the incentives for cooperation and, in some cases, even make all nodes better off. For example, in a scrip system, if altruists are not handled appropriately they can cause what would otherwise be a thriving economy to crash, making all agents worse off because they now receive only the level of service altruists are providing [14].

The BAR Gossip protocol [16] gives some insight into how the number of nodes an attacking node contacts each round can be limited, but limiting the amount of service the attacker provides in each trade is more subtle. Only two parties know if an attacker provides excessive service: the attacker and the node that benefits from it. Suppose that we require a node to report if it is getting excessive service from another node. Since this excessive service is to its benefit, a rational node might not report it. But an obedient node would, if its protocol required it. A node can use the signed messages generated by BAR Gossip to prove that excessive service occurred, and get the reported node removed from the system. If there are sufficiently many obedient nodes in the system, then we can essentially prevent a lotus-eater attack. Moreover, the cost of obedience will be low if the attack is successfully prevented, so it seems reasonable to expect that a reasonable proportion of nodes will in fact be obedient.²

Even if an attacker successfully satiates a large fraction of the nodes, this will have no negative impact if the remaining nodes still receive sufficient service. One way to achieve this would be to increase the opportunities that the remaining nodes have to trade. The parameter c describes the bound on the number of peers a node has. BitTorrent has caps on both the number of open connections to maintain and the number of those connections to unchoke.
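The reporting duty of an obedient node might be sketched as follows. The threshold `limit`, the `report` callback, and the shape of the proof are all hypothetical; BAR Gossip's actual signed-message formats are not reproduced here.

```python
def check_exchange(peer_id, signed_messages, received_units, limit, report):
    """Per-exchange check run by an obedient node.

    A rational node has no incentive to run this check, since the excess
    service benefits it; an obedient node reports excessive service anyway,
    attaching the peer's signed messages as proof so the offending node
    can be evicted from the system.
    """
    if received_units > limit:
        report({'peer': peer_id,
                'excess': received_units - limit,
                'proof': list(signed_messages)})
        return False  # exchange flagged as excessive
    return True
```

The key property is that the proof consists of messages the attacker itself signed, so a report cannot be fabricated by a malicious accuser.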
BAR Gossip limits the number of exchanges per round to minimize the damage Byzantine nodes can do. However, these systems need to make sure that c is still large enough that the system performs well. Thus, selecting a good value for c involves a careful balancing by the system designer. We have seen that an attacker who can satiate a large fraction of the nodes can effectively decrease c to the point where performance becomes unacceptable. This could be prevented by increasing c but, to guarantee the desired robustness, c might have to be unacceptably high.

An alternative is to increase the value of a. Adding enough altruism means that isolated nodes will still receive service despite the attack. One way to encourage altruism is to provide incentives for rational nodes to behave in a way that ends up being altruistic. This can be done by having nodes optimistically provide service in the hope of return service. If this generally ends up being a net benefit for the node, a rational node will still participate even though he might get away with providing less service. In BitTorrent, even if every other leecher is satiated, a leecher will still receive service through optimistic unchokes. When a leecher in BitTorrent optimistically unchokes another leecher, he is picking someone to send data to in hopes of finding a reliable partner for the future. We saw another example of this with the optimistic push protocol of BAR Gossip in Figure 2. Rational agents may be willing to participate in large optimistic pushes if there is a reasonable chance it will get them an update they would otherwise miss.

Another way to add altruism to a system is to leverage obedience by having a protocol that requires nodes to behave altruistically. In BitTorrent, an isolated leecher gets service from seeds (who have already downloaded the complete file).
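The interplay between the cap c and optimistic unchoking can be sketched as below. This is a simplification: real BitTorrent clients rotate the optimistic slot on a timer rather than redrawing it each round, and the function name and arguments here are hypothetical.

```python
import random

def choose_unchoked(upload_rates, c):
    """Pick which peers to unchoke this round.

    upload_rates: dict mapping peer id -> recent upload rate from that peer
    c:            cap on simultaneously unchoked peers

    The c - 1 best recent uploaders are unchoked reciprocally
    (tit-for-tat); the last slot goes to a random other peer (an
    optimistic unchoke), so even a peer with nothing to trade yet
    gets some service.
    """
    ranked = sorted(upload_rates, key=upload_rates.get, reverse=True)
    regular = ranked[:max(c - 1, 0)]
    rest = [p for p in ranked if p not in regular]
    optimistic = [random.choice(rest)] if rest else []
    return regular + optimistic
```

The optimistic slot is exactly the altruism-inducing mechanism described above: the node gives service away in the hope of discovering a reliable future partner, and in the process guarantees isolated nodes a trickle of service even under a lotus-eater attack.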
Seeding in BitTorrent is not an incentive-compatible behavior. Perhaps unsurprisingly, many leechers never remain to seed, or seed only for a limited time. However, a sufficient number do seed to maintain reasonably popular torrents. In the context of BAR Gossip, Figure 3 shows that having nodes perform slightly unbalanced exchanges can make lotus-eater attacks more difficult.

Increasing a does make systems more robust, but there is a tradeoff. The same mechanisms also provide a limited amount of free service to all nodes, whether attacked or not. If this amount is too generous, nodes have an incentive to free ride using just this amount of service. For example, in BitTorrent, a node that connects to a large number of peers can get good service even if he never uploads any data [18]. In many cases this incentive can be eliminated if nodes have to perform nonproductive work in exchange for the altruistic service. For example, in BAR Gossip, nodes that receive updates through an optimistic push but have no updates to return must upload junk data.

² We remark that this is an example of the more general phenomenon that maintaining cooperation often requires the existence of players willing to incur costs [8, 13].

5 Conclusion

The lotus-eater attack is, at least in the context of incentive-compatible systems, an attack on the incentives of agents. As incentive-compatible systems grow in popularity, we expect that other ways will be found for an attacker to target systems through the incentives of their users. On a theoretical level, this points to the need for a better understanding of equilibria in the presence of Byzantine agents. Some work has been done in this direction with solution concepts like k fault tolerant Nash equilibria [5], (k, t)-robust equilibria [1], and BAR games [3]. The last is the solution concept used to analyze BAR Gossip.
What that definition in particular excludes (at least with the assumption of risk-averse agents, which is typically made in practice) is the possibility for Byzantine and rational nodes to collude, either explicitly or, as in the case of the lotus-eater attack, implicitly. Thus, to the extent that we want to provide guarantees about system performance in the presence of both Byzantine and rational agents, we need a solution concept that considers the possibility of such collusion. Another concrete open problem that arises from this attack is how we can design a system that limits the rate at which nodes can provide service. As we saw in Section 4, this is potentially a strong technique for preventing lotus-eater attacks, since it prevents an attacker from providing service rapidly enough to satiate targeted nodes. This problem seems to be relevant for other attacks on incentives as well, since typically these require the attacker to be "too nice." Even if they are not explicitly attacking, nodes that provide a disproportionate amount of service can become a point of centralization in what is otherwise a decentralized system.

Acknowledgements

We would like to thank Harry Li and Lorenzo Alvisi for allowing us to use their BAR Gossip simulation. EF, IK and JH are supported in part by NSF grant ITR-0325453. JH is also supported in part by NSF grant IIS-0534064 and by AFOSR grant FA9550-05-1-0055.

References

[1] I. Abraham, D. Dolev, R. Gonen, and J. Halpern. Distributed computing meets game theory: Robust mechanisms for rational secret sharing and multiparty computation. In Proc. 25th Symposium on Principles of Distributed Computing (PODC), pages 53–62, 2006.

[2] A. Aiyer, L. Alvisi, A. Clement, M. Dahlin, J. Martin, and C. Porth. BAR fault tolerance for cooperative services. In Proc. 20th ACM Symposium on Operating Systems Principles (SOSP 2005), pages 45–58, 2005.
[3] A. Clement, J. Napper, H. C. Li, J.-P. Martin, L. Alvisi, and M. Dahlin. Theory of BAR games. In Proc. 26th Symposium on Principles of Distributed Computing (PODC), pages 358–359, 2007.

[4] B. Cohen. Incentives build robustness in BitTorrent. In First Workshop on the Economics of Peer-to-Peer Systems (P2PECON), 2003.

[5] K. Eliaz. Fault tolerant implementation. Review of Economic Studies, 69:589–610, 2002.

[6] C. Gkantsidis and P. Rodriguez. Network coding for large scale content distribution. In INFOCOM 2005, 24th Annual Joint Conference of the IEEE Computer and Communications Societies, pages 2235–2245, 2005.

[7] R. Guha, R. Kumar, P. Raghavan, and A. Tomkins. Propagation of trust and distrust. In Conference on the World Wide Web (WWW), pages 403–412, 2004.

[8] C. Hauert, A. Traulsen, H. Brandt, M. A. Nowak, and K. Sigmund. Via freedom to coercion: The emergence of costly punishment. Science, 316:1905–1907, 2007.

[9] Homer. The Odyssey. Translated by S. Butler, http://www.gutenberg.org/dirs/etext99/dyssy10.txt, 1900.

[10] J. Ioannidis, S. Ioannidis, A. D. Keromytis, and V. Prevelakis. Fileteller: Paying and getting paid for file storage. In Financial Cryptography, pages 282–299, 2002.

[11] S. Jun and M. Ahamad. Incentives in BitTorrent induce free riding. In Workshop on Economics of Peer-to-Peer Systems (P2PECON), pages 116–121, 2005.

[12] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The Eigentrust algorithm for reputation management in P2P networks. In Conference on the World Wide Web (WWW), pages 640–651, 2003.

[13] M. Kandori. Social norms and community enforcement. Review of Economic Studies, 59(1):63–80, 1992.

[14] I. A. Kash, E. J. Friedman, and J. Y. Halpern. Optimizing scrip systems: Efficiency, crashes, hoarders and altruists. In Proc. Eighth ACM Conference on Electronic Commerce (EC), pages 305–315, 2007.
[15] A. Legout, G. Urvoy-Keller, and P. Michiardi. Rarest first and choke algorithms are enough. In Proceedings of the 6th ACM SIGCOMM Conference on Internet Measurement, pages 203–216, 2006.

[16] H. C. Li, A. Clement, E. L. Wong, J. Napper, I. Roy, L. Alvisi, and M. Dahlin. BAR gossip. In Sixth Symposium on Operating Systems Design and Implementation (OSDI), pages 191–204, 2006.

[17] J.-P. Martin. Leveraging altruism in cooperative services. Technical report, Microsoft Research, 2007.

[18] M. Sirivianos, J. H. Park, R. Chen, and X. Yang. Free-riding in BitTorrent networks with the large view exploit. In Sixth International Workshop on Peer-to-Peer Systems (IPTPS), 2007.

[19] V. Vishnumurthy, S. Chandrakumar, and E. G. Sirer. KARMA: A secure economic framework for peer-to-peer resource sharing. In First Workshop on Economics of Peer-to-Peer Systems (P2PECON), 2003.
