InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

Authors: Rajkumar Buyya, Rajiv Ranjan, Rodrigo N. Calheiros

Rajkumar Buyya (1,2), Rajiv Ranjan (3), Rodrigo N. Calheiros (1)

(1) Cloud Computing and Distributed Systems (CLOUDS) Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Australia
(2) Manjrasoft Pty Ltd, Australia
(3) School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia

Abstract

Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services so as to achieve reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services; hence load coordination must happen automatically, and the distribution of services must change in response to changes in the load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and database) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds.
We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.

1. Introduction

In 1969, Leonard Kleinrock [1], one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET) project which seeded the Internet, said: "As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country." This vision of computing utilities based on a service provisioning model anticipated the massive transformation of the entire computing industry in the 21st century, whereby computing services are readily available on demand, like other utility services in today's society. Similarly, computing service users (consumers) need to pay providers only when they access computing services. In addition, consumers no longer need to invest heavily or encounter difficulties in building and maintaining complex IT infrastructure.

In such a model, users access services based on their requirements, without regard to where the services are hosted. This model has been referred to as utility computing, or recently as Cloud computing [3][7]. The latter term denotes the infrastructure as a "Cloud" from which businesses and users are able to access application services from anywhere in the world on demand.
Hence, Cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services, typically supported by state-of-the-art data centers containing ensembles of networked Virtual Machines.

Cloud computing delivers infrastructure, platform, and software (application) as services, which are made available as subscription-based services in a pay-as-you-go model to consumers. These services are respectively referred to in industry as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). A Berkeley report from February 2009 states: "Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service" [2].

Clouds aim to power the next generation of data centers by architecting them as a network of virtual services (hardware, database, user interface, application logic), so that users are able to access and deploy applications from anywhere in the world on demand, at competitive costs, depending on users' QoS (Quality of Service) requirements [3]. Developers with innovative ideas for new Internet services no longer require large capital outlays in hardware to deploy their service, or human expense to operate it [2]. Cloud computing offers significant benefit to IT companies by freeing them from the low-level task of setting up basic hardware (servers) and software infrastructures, thus enabling them to focus more on innovation and on creating business value for their services.

The business potential of Cloud computing is recognised by several market research firms, including IDC, which reports that worldwide spending on Cloud services will grow from $16 billion in 2008 to $42 billion in 2012.
Furthermore, many applications making use of utility-oriented computing systems such as Clouds emerge simply as catalysts or market makers that bring buyers and sellers together. This creates several trillion dollars of worth to the utility/pervasive computing industry, as noted by Sun Microsystems co-founder Bill Joy [4]. He also indicated: "It would take time until these markets mature to generate this kind of value. Predicting now which companies will capture the value is impossible. Many of them have not even been created yet."

1.1 Application Scaling and Cloud Infrastructure: Challenges and Requirements

Providers such as Amazon [15], Google [16], Salesforce [21], IBM, Microsoft [17], and Sun Microsystems have begun to establish new data centers for hosting Cloud computing application services such as social networking and gaming portals, business applications (e.g., SalesForce.com), media content delivery, and scientific workflows. Actual usage patterns of many real-world application services vary with time, most of the time in unpredictable ways. To illustrate this, let us consider an "elastic" application in the business/social networking domain that needs to scale up and down over the course of its deployment.

Social Networking Web Applications

Social networks such as Facebook and MySpace are popular Web 2.0-based applications. They serve dynamic content to millions of users, whose access and interaction patterns are hard to predict. In addition, their features are very dynamic, in the sense that new plug-ins can be created by independent developers, added to the main system, and used by other users. In several situations load spikes can take place, for instance, whenever new system features become popular or a new plug-in application is deployed.
As these social networks are organized in communities of highly interacting users distributed all over the world, load spikes can take place at different locations at any time. In order to handle unpredictable seasonal and geographical changes in system workload, an automatic scaling scheme is paramount to keep QoS and resource consumption at suitable levels.

Social networking websites are built using multi-tiered web technologies, which consist of application servers such as IBM WebSphere and persistency layers such as the MySQL relational database. Usually, each component runs in a separate virtual machine, which can be hosted in data centers that are owned by different Cloud computing providers. Additionally, each plug-in developer has the freedom to choose which Cloud computing provider offers the services that are most suitable to run his/her plug-in. As a consequence, a typical social networking web application is formed by hundreds of different services, which may be hosted by dozens of Cloud data centers around the world. Whenever there is a variation in the temporal and spatial locality of workload, each application component must dynamically scale to offer good quality of experience to users.

1.2 Federated Cloud Infrastructures for Elastic Applications

In order to support a large number of application service consumers from around the world, Cloud infrastructure providers (i.e., IaaS providers) have established data centers in multiple geographical locations to provide redundancy and to ensure reliability in case of site failures. For example, Amazon has data centers in the US (e.g., one on the East Coast and another on the West Coast) and in Europe.
However, currently they (1) expect their Cloud customers (i.e., SaaS providers) to express a preference about the location where they want their application services to be hosted and (2) do not provide seamless/automatic mechanisms for scaling their hosted services across multiple, geographically distributed data centers. This approach has many shortcomings: (1) it is difficult for Cloud customers to determine in advance the best location for hosting their services, as they may not know the origin of the consumers of their services, and (2) Cloud SaaS providers may not be able to meet the QoS expectations of service consumers originating from multiple geographical locations. This necessitates building mechanisms for seamless federation of data centers of one or more Cloud providers, supporting dynamic scaling of applications across multiple domains in order to meet the QoS targets of Cloud customers.

In addition, no single Cloud infrastructure provider will be able to establish data centers at all possible locations throughout the world. As a result, Cloud application service (SaaS) providers will have difficulty in meeting QoS expectations for all their consumers. Hence, they would like to make use of the services of multiple Cloud infrastructure providers who can provide better support for their specific consumer needs. This kind of requirement often arises in enterprises with global operations and applications such as Internet services, media hosting, and Web 2.0 applications. This necessitates building mechanisms for federation of Cloud infrastructure service providers for seamless provisioning of services across different Cloud providers. There are many challenges involved in creating such Cloud interconnections through federation.
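The first shortcoming above, that a customer cannot know in advance where its consumers will originate, can be illustrated with a toy placement calculation. The regions, latencies, and demand shares below are invented for illustration; they are not taken from the chapter:

```python
# Hypothetical sketch: choosing a hosting data center under an assumed
# geographic distribution of consumer demand. All names/numbers are invented.

# Fraction of requests expected from each consumer region.
demand = {"us-east": 0.5, "eu-west": 0.3, "apac": 0.2}

# Assumed mean network latency (ms) from each region to each candidate data center.
latency = {
    "us-east-dc": {"us-east": 20, "eu-west": 90, "apac": 180},
    "eu-west-dc": {"us-east": 90, "eu-west": 15, "apac": 160},
}

def expected_latency(dc):
    """Demand-weighted mean latency seen by consumers if hosted at dc."""
    return sum(share * latency[dc][region] for region, share in demand.items())

best = min(latency, key=expected_latency)
```

The point of the sketch is that `best` is only optimal for the assumed demand distribution; if the actual consumer origins shift, a statically chosen location becomes suboptimal, which is exactly why dynamic, federated placement is needed.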
To meet these requirements, next-generation Cloud service providers should be able to: (i) dynamically expand or resize their provisioning capability based on sudden spikes in workload demands, by leasing available computational and storage capabilities from other Cloud service providers; (ii) operate as parts of a market-driven resource leasing federation, where application service providers such as Salesforce.com host their services based on negotiated Service Level Agreement (SLA) contracts driven by competitive market prices; and (iii) deliver on-demand, reliable, cost-effective, and QoS-aware services based on virtualization technologies, while ensuring high QoS standards and minimizing service costs. They need to be able to utilize market-based utility models as the basis for provisioning of virtualized software services and federated hardware infrastructure among users with heterogeneous applications and QoS targets.

1.3 Research Issues

The diversity and flexibility of the functionalities (dynamically shrinking and growing computing systems) envisioned by the federated Cloud computing model, combined with the magnitudes and uncertainties of its components (workload, compute servers, services), pose difficult problems for effective provisioning and delivery of application services. Provisioning means "high-level management of computing, network, and storage resources that allows them to effectively provide and deliver services to customers". In particular, finding efficient solutions for the following challenges is critical to exploiting the potential of federated Cloud infrastructures:

Application Service Behavior Prediction: It is critical that the system is able to predict the demands and behaviors of the hosted services, so that it can intelligently undertake decisions related to dynamic scaling or de-scaling of services over federated Cloud infrastructures.
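A minimal sketch of such a demand predictor, using a simple exponentially weighted moving average over a synthetic request trace, is shown below; the trace, the smoothing factor, and the function name are invented for illustration, and real predictors would fit far richer statistical models:

```python
# Toy illustration of service-demand forecasting via an exponentially
# weighted moving average (EWMA). Trace and alpha are hypothetical.

def ewma_forecast(arrivals, alpha=0.3):
    """Return one-step-ahead forecasts of the request arrival rate."""
    forecast = arrivals[0]
    forecasts = [forecast]
    for observed in arrivals[1:]:
        # Blend the newest observation with the running forecast.
        forecast = alpha * observed + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

# A synthetic trace containing a load spike.
trace = [100, 110, 105, 400, 420, 410, 120]
predicted = ewma_forecast(trace)
```

Note how the EWMA lags behind the spike at the fourth observation; capturing such bursts is precisely where the richer models discussed next are needed.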
Concrete prediction or forecasting models must be built before the behavior of a service, in terms of computing, storage, and network bandwidth requirements, can be predicted accurately. The real challenge in devising such models is accurately learning and fitting statistical functions [31] to the observed distributions of service behaviors, such as request arrival pattern, service time distributions, I/O system behaviors, and network usage. This challenge is further aggravated by the existence of statistical correlations (such as stationarity, short- and long-range dependence, and pseudo-periodicity) between different behaviors of a service.

Flexible Mapping of Services to Resources: With the increased operating costs and energy requirements of composite systems, it becomes critical to maximize their efficiency, cost-effectiveness, and utilization [30]. The process of mapping services to resources is a complex undertaking, as it requires the system to compute the best software and hardware configuration (system size and mix of resources) to ensure that the QoS targets of services are achieved, while maximizing system efficiency and utilization. This process is further complicated by the uncertain behavior of resources and services. Consequently, there is an immediate need to devise performance modeling and market-based service mapping techniques that ensure efficient system utilization without having an unacceptable impact on QoS targets.

Economic Models Driven Optimization Techniques: The market-driven decision making problem [6] is a combinatorial optimization problem that searches for the optimal combinations of services and their deployment plans. Unlike many existing multi-objective optimization solutions, optimization models need to be developed that ultimately aim to optimize both resource-centric (utilization, availability, reliability, incentive) and user-centric (response time, budget spent, fairness) QoS targets.

Integration and Interoperability: For many SMEs, there is a large amount of IT assets in house, in the form of line-of-business applications that are unlikely to ever be migrated to the cloud. Further, there is a huge amount of sensitive data in an enterprise, which is unlikely to migrate to the cloud due to privacy and security issues. As a result, there is a need to look into issues related to integration and interoperability between the software on premises and the services in the cloud. In particular [28]: (i) Identity management: authentication and authorization of service users; provisioning user access; federated security model. (ii) Data management: not all data will be stored in a relational database in the cloud; eventual consistency (BASE) is taking over from the traditional ACID transaction guarantees, in order to ensure sharable data structures that achieve high scalability. (iii) Business process orchestration: how does integration at the business process level happen across the software-on-premises and service-in-the-Cloud boundary? Where do we store the business rules that govern the business process orchestration?

Scalable Monitoring of System Components: Although the components that contribute to a federated system may be distributed, existing techniques usually employ centralized approaches to overall system monitoring and management.
We claim that centralized approaches are not an appropriate solution for this purpose, due to concerns of scalability, performance, and reliability arising from the management of multiple service queues and the expected large volume of service requests. Monitoring of system components is required for effecting on-line control through a collection of system performance characteristics. Therefore, we advocate architecting service monitoring and management services based on decentralized messaging and indexing models [27].

1.4 Overall Vision

To meet the aforementioned requirements of auto-scaling Cloud applications, future efforts should focus on the design, development, and implementation of software systems and policies for federation of Clouds across network and administrative boundaries. The key elements for enabling federation of Clouds and auto-scaling of application services are Cloud Coordinators, Brokers, and an Exchange. Resource provisioning within these federated clouds will be driven by market-oriented principles for efficient resource allocation, depending on user QoS targets and workload demand patterns. To reduce power consumption costs and improve service localization while complying with Service Level Agreement (SLA) contracts, new on-line algorithms for energy-aware placement and live migration of virtual machines between Clouds would need to be developed. The approach for realisation of this research vision consists of investigation, design, and development of the following:

- An architectural framework and principles for the development of utility-oriented clouds and their federation.
- A Cloud Coordinator for exporting Cloud services and their management, driven by market-based trading and negotiation protocols, for optimal QoS delivery at minimal cost and energy.
- A Cloud Broker responsible for mediating between service consumers and Cloud Coordinators.
- A Cloud Exchange that acts as a market maker, enabling capability sharing across multiple Cloud domains through its matchmaking services.
- A software platform implementing the Cloud Coordinator, Broker, and Exchange for federation.

The rest of this paper is organized as follows. First, a concise survey of the existing state of the art in Cloud provisioning is presented. Next, a comprehensive description of the overall system architecture and the elements that form the basis for constructing federated Cloud infrastructures is given. This is followed by some initial experiments and results, which quantify the performance gains delivered by the proposed approach. Finally, the paper ends with brief concluding remarks and a discussion of future research directions.

2. State-of-the-art in Cloud Provisioning

The key Cloud platforms in the Cloud computing domain, including Amazon Web Services [15], Microsoft Azure [17], Google App Engine [16], Manjrasoft Aneka [32], Eucalyptus [22], and GoGrid [23], offer a variety of pre-packaged services for monitoring, managing, and provisioning resources and application services. However, the techniques implemented in each of these Cloud platforms vary (refer to Table 1).

Three Amazon Web Services (AWS), namely the Elastic Load Balancer [25], Auto Scaling, and CloudWatch [24], together expose the functionalities required for provisioning application services on Amazon EC2. The Elastic Load Balancer service automatically distributes incoming application workload across available Amazon EC2 instances. The Auto Scaling service can be used for dynamically scaling in or scaling out the number of Amazon EC2 instances to handle changes in service demand patterns.
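A threshold-based scale-out/scale-in rule of the general kind such services apply can be sketched as follows; the thresholds, bounds, and function name are hypothetical and do not reflect any particular provider's API:

```python
# Hedged sketch of threshold-based auto-scaling: add an instance when average
# utilization is high, remove one when it is low. Numbers are illustrative only.

def autoscale(current_instances, avg_utilization,
              scale_out_at=0.75, scale_in_at=0.25,
              min_instances=1, max_instances=20):
    """Return the new instance count for the observed average utilization."""
    if avg_utilization > scale_out_at and current_instances < max_instances:
        return current_instances + 1   # scale out
    if avg_utilization < scale_in_at and current_instances > min_instances:
        return current_instances - 1   # scale in
    return current_instances           # stay within the dead band
```

The dead band between the two thresholds prevents oscillation when utilization hovers near a single trigger point.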
Finally, the CloudWatch service can be integrated with the above services for strategic decision making based on real-time aggregated resource and service performance information.

Table 1: Summary of provisioning capabilities exposed by public Cloud platforms

Cloud Platform               | Load Balancing | Provisioning                 | Auto Scaling
-----------------------------|----------------|------------------------------|--------------------------
Amazon Elastic Compute Cloud | √              | √                            | √
Eucalyptus                   | √              | √                            | ×
Microsoft Windows Azure      | √              | √ (fixed templates so far)   | √ (manual)
Google App Engine            | √              | √                            | √
Manjrasoft Aneka             | √              | √                            | √
GoGrid Cloud Hosting         | √              | √                            | √ (programmatic way only)

Manjrasoft Aneka is a platform for building and deploying distributed applications on Clouds. It provides a rich set of APIs for transparently exploiting distributed resources and expressing the business logic of applications using the preferred programming abstractions. Aneka is also a market-oriented Cloud platform, since it allows users to build and schedule applications, provision resources, and monitor results using pricing, accounting, and QoS/SLA services in private and/or public (leased) Cloud environments. Aneka also allows users to build different run-time environments: an enterprise/private Cloud, by harnessing computing resources in network or enterprise data centers; public Clouds such as Amazon EC2; and hybrid clouds that combine enterprise private Clouds managed by Aneka with resources from Amazon EC2 or other enterprise Clouds built and managed using technologies such as XenServer.

Eucalyptus [22] is an open-source Cloud computing platform. It is composed of three controllers. Among these, the Cluster Controller is a key component for application service provisioning and load balancing. Each Cluster Controller is hosted on the head node of a cluster to interconnect the outer public network and the inner private network.
By monitoring the state information of instances in the pool of server controllers, the Cluster Controller can select the available service/server for provisioning incoming requests. However, compared to AWS, Eucalyptus still lacks some critical functionalities, such as a built-in auto-scaling provisioner.

Fundamentally, the Windows Azure Fabric [17] has a weave-like structure, composed of nodes (servers and load balancers) and edges (power, Ethernet, and serial communications). The Fabric Controller manages a service node through a built-in service, the Azure Fabric Controller Agent, which runs in the background, tracking the state of the server and reporting these metrics to the Controller. If a fault state is reported, the Controller can manage a reboot of the server or a migration of services from the current server to other healthy servers. Moreover, the Controller also supports service provisioning by matching the services against the VMs that meet the required demands.

GoGrid Cloud Hosting offers developers the F5 Load Balancer [23] for distributing application service traffic across servers, as long as the IPs and specific ports of these servers are attached. The load balancer provides the Round Robin algorithm and the Least Connect algorithm for routing application service requests. Also, the load balancer is able to sense a crash of a server, redirecting further requests to other available servers. But currently, GoGrid Cloud Hosting only gives developers programmatic APIs to implement their own custom auto-scaling service.

Unlike other Cloud platforms, Google App Engine offers developers a scalable platform in which applications can run, rather than providing direct access to a customized virtual machine. Therefore, access to the underlying operating system is restricted in App Engine.
Load-balancing strategies, service provisioning, and auto scaling are all automatically managed by the system behind the scenes. However, at this time Google App Engine can only support provisioning of web-hosting types of applications.

However, no single Cloud infrastructure provider has data centers at all possible locations throughout the world. As a result, Cloud application service (SaaS) providers will have difficulty in meeting QoS expectations for all their users. Hence, they would prefer to logically construct federated Cloud infrastructures (mixing multiple public and private clouds) to provide better support for their specific user needs. This kind of requirement often arises in enterprises with global operations and applications such as Internet services, media hosting, and Web 2.0 applications. This necessitates building technologies and algorithms for seamless federation of Cloud infrastructure service providers for autonomic provisioning of services across different Cloud providers.

3. System Architecture and Elements of InterCloud

Figure 1 shows the high-level components of the service-oriented architectural framework, consisting of clients' brokering and coordinator services that support utility-driven federation of clouds: application scheduling, resource allocation, and migration of workloads. The architecture cohesively couples the administratively and topologically distributed storage and compute capabilities of Clouds as parts of a single resource leasing abstraction. The system will ease cross-domain capability integration for on-demand, flexible, energy-efficient, and reliable access to the infrastructure based on emerging virtualization technologies [8][9].
[Figure 1: Federated network of clouds mediated by a Cloud Exchange. The diagram shows Cloud Coordinators (fronting compute and storage Clouds organized as VM pools, and an enterprise resource server proxy) publishing offers to a Cloud Exchange (CEx) comprising a Directory, Bank, and Auctioneer, while Cloud Brokers negotiate/bid and request capacity on behalf of users and enterprise IT consumers.]

The Cloud Exchange (CEx) acts as a market maker, bringing together service producers and consumers. It aggregates the infrastructure demands from the application brokers and evaluates them against the available supply currently published by the Cloud Coordinators. It supports trading of Cloud services based on competitive economic models [6] such as commodity markets and auctions. CEx allows the participants (Cloud Coordinators and Cloud Brokers) to locate providers and consumers with fitting offers. Such markets enable services to be commoditized and thus would pave the way for the creation of a dynamic market infrastructure for trading based on SLAs. An SLA specifies the details of the service to be provided in terms of metrics agreed upon by all parties, and the incentives and penalties for meeting and violating the expectations, respectively. The availability of a banking system within the market ensures that financial transactions pertaining to SLAs between participants are carried out in a secure and dependable environment.
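The SLA notion above, metrics agreed upon by the parties plus an incentive for meeting them and a penalty for violating them, can be made concrete with a small sketch; the field names, metric, and amounts are hypothetical:

```python
# Illustrative SLA record and settlement rule. All names/values are invented.

from dataclasses import dataclass

@dataclass
class SLA:
    provider: str
    consumer: str
    max_response_ms: float   # agreed QoS metric
    price: float             # amount paid when the SLA is met (incentive)
    penalty: float           # amount refunded when the SLA is violated

def settle(sla, measured_response_ms):
    """Amount transferred to the provider (negative means a penalty is paid)."""
    if measured_response_ms <= sla.max_response_ms:
        return sla.price
    return -sla.penalty

contract = SLA("CloudA", "SaaS-X", max_response_ms=200.0, price=10.0, penalty=4.0)
```

In the full architecture, the market's banking system would execute such settlements securely on behalf of the participants.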
Every client in the federated platform needs to instantiate a Cloud Brokering service that can dynamically establish service contracts with Cloud Coordinators via the trading functions exposed by the Cloud Exchange.

3.1 Cloud Coordinator (CC)

The Cloud Coordinator service is responsible for the management of domain-specific enterprise Clouds and their membership in the overall federation, driven by market-based trading and negotiation protocols. It provides a programming, management, and deployment environment for applications in a federation of Clouds. Figure 2 shows a detailed depiction of the resource management components in the Cloud Coordinator service.

[Figure 2: Cloud Coordinator software architecture. The diagram shows the Coordinator's components over the data center resources: Scheduling & Allocation (Allocator, Scheduler, Monitoring), Market & Policy Engine (SLA, Pricing, Accounting, Billing), Virtualization (Sensor for power/heat/utilization, Hypervisor, VM Manager, Mobility Manager), Discovery & Monitoring (Querying, Updating), and an Application Composition Engine (user interface, deployer, database, application server), topped by a programming layer with APIs, workload models, and performance models serving services such as e-Business workflows, e-Science workflows, CDN, parameter sweep, web hosting, and social networking.]

The Cloud Coordinator exports the services of a cloud to the federation by implementing basic functionalities for resource management, such as scheduling, allocation, (workload and performance) models, market enabling, virtualization, dynamic sensing/monitoring, discovery, and application composition, as discussed below.

Scheduling and Allocation: This component allocates virtual machines to the Cloud nodes based on users' QoS targets and the Cloud's energy management goals.
On receiving a user application, the scheduler does the following: (i) consults the Application Composition Engine about the availability of the software and hardware infrastructure services that are required to satisfy the request locally; (ii) asks the Sensor component to submit feedback on the local Cloud nodes' energy consumption and utilization status; and (iii) enquires of the Market and Policy Engine about the accountability of the submitted request. A request is termed accountable if the concerned user has available credits in the Cloud bank and, given the specified QoS constraints, the establishment of an SLA is feasible. In case all three components reply favorably, the application is hosted locally and is periodically monitored until it finishes execution.

Data center resources may deliver different levels of performance to their clients; hence, QoS-aware resource selection plays an important role in Cloud computing. Additionally, Cloud applications can present varying workloads. It is therefore essential to carry out a study of Cloud services and their workloads in order to identify common behaviors and patterns, and to explore load forecasting approaches that can potentially lead to more efficient scheduling and allocation. In this context, there is a need to analyse sample applications and correlations between workloads, and to attempt to build performance models that can help explore trade-offs between QoS targets.

Market and Policy Engine: The SLA module stores the service terms and conditions that are being supported by the Cloud for each respective Cloud Broker on a per-user basis. Based on these terms and conditions, the Pricing module can determine how service requests are charged, based on the available supply of and required demand for computing resources within the Cloud.
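The supply-and-demand idea behind such pricing might be sketched as follows; the base price and the functional form are assumptions for illustration, not the chapter's actual pricing model:

```python
# Sketch of demand-pressure pricing: the unit price rises with the ratio of
# requested capacity to published supply. Base price and form are hypothetical.

def unit_price(base_price, demanded_units, available_units):
    """Scale the base price by current demand pressure on the Cloud."""
    if available_units <= 0:
        raise ValueError("no capacity published")
    pressure = demanded_units / available_units
    # Never charge below the base price; charge proportionally more under load.
    return base_price * max(1.0, pressure)
```

Under light load the base price applies unchanged, while oversubscription doubles or triples the charge, signalling brokers to seek capacity elsewhere in the federation.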
The Accounting module stores the actual usage information of resources by requests so that the total usage cost of each user can be calculated. The Billing module then charges the usage costs to users accordingly.

Cloud customers can normally associate two or more conflicting QoS targets with their application services. In such cases, it is necessary to trade off one or more QoS targets to find a superior solution. Due to such diverse QoS targets and varying optimization objectives, we end up with a Multi-dimensional Optimization Problem (MOP). For solving the MOP, one can explore multiple heterogeneous optimization algorithms, such as dynamic programming, hill climbing, particle swarm optimization, and multi-objective genetic algorithms.

Application Composition Engine: This component of the Cloud Coordinator encompasses a set of features intended to help application developers create and deploy [18] applications, including the ability for on-demand interaction with a database backend such as the SQL Data Services provided by Microsoft Azure, an application server such as Internet Information Server (IIS) enabled with a secure ASP.Net scripting engine to host web applications, and a SOAP-driven Web services API for programmatic access along with combination and integration with other applications and data.

Virtualization: VMs support flexible and utility-driven configurations that control the share of processing power they can consume based on the time criticality of the underlying application. However, current approaches to VM-based Cloud computing are limited to rather inflexible configurations within a single Cloud.
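As a very simple baseline for the MOP mentioned under the Market and Policy Engine, conflicting QoS targets can be collapsed into a single score via weighted-sum scalarization; the solvers the text lists (dynamic programming, genetic algorithms, etc.) would replace this exhaustive scan in practice. The metric names and weights below are illustrative assumptions.

```python
# Hypothetical weighted-sum scalarization of conflicting QoS targets.
# Candidate allocations map normalized metrics (lower is better) to values.

def best_allocation(candidates, weights):
    """Pick the candidate minimizing the weighted sum of its QoS metrics."""
    def cost(candidate):
        return sum(weights[m] * candidate[m] for m in weights)
    return min(candidates, key=cost)

candidates = [
    {"response_time": 0.2, "price": 0.9},  # fast but expensive
    {"response_time": 0.6, "price": 0.3},  # slower but cheap
]
# With equal weights (0.5 each), the cheaper candidate wins: 0.45 < 0.55.
```

Weighted sums only reach points on the convex part of the Pareto front, which is one reason the text also considers multi-objective genetic algorithms.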
This limitation can be addressed by developing mechanisms for transparent migration of VMs across service boundaries with the aim of minimizing the cost of service delivery (e.g., by migrating to a Cloud located in a region where the energy cost is low) while still meeting the SLAs. The Mobility Manager is responsible for dynamic migration of VMs based on the real-time feedback given by the Sensor service. Currently, hypervisors such as VMware [8] and Xen [9] have the limitation that VMs can only be migrated between hypervisors that are within the same subnet and share common storage. Clearly, this is a serious bottleneck to achieving adaptive migration of VMs in federated Cloud environments. This limitation has to be addressed in order to support utility-driven, power-aware migration of VMs across service domains.

Sensor: The Sensor infrastructure will monitor the power consumption, heat dissipation, and utilization of computing nodes in a virtualized Cloud environment. To this end, we will extend our Service Oriented Sensor Web [14] software system. Sensor Web provides a middleware infrastructure and programming model for creating, accessing, and utilizing tiny sensor devices that are deployed within a Cloud. The Cloud Coordinator service makes use of Sensor Web services for dynamic sensing of Cloud nodes and the surrounding temperature. The data reported by sensors are fed back to the Coordinator's Virtualization and Scheduling components to optimize the placement, migration, and allocation of VMs in the Cloud. Such sensor-based real-time monitoring of the Cloud operating environment aids in avoiding server breakdown and in achieving optimal throughput from the available computing and storage nodes.
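The Mobility Manager's target-selection step, driven by Sensor feedback, could look like the sketch below: among clouds that have capacity and still meet the SLA's latency bound, pick the one with the lowest energy cost. The field names and the latency constraint are illustrative assumptions, not a prescribed interface.

```python
# Hypothetical Mobility Manager decision: pick the cheapest-energy cloud
# that has free capacity and still satisfies the VM's SLA latency bound.

def pick_migration_target(vm, clouds):
    """vm: dict with 'max_latency_ms'; clouds: dicts with
    'energy_cost', 'latency_ms', 'free_slots'. Returns a cloud or None."""
    feasible = [c for c in clouds
                if c["free_slots"] > 0
                and c["latency_ms"] <= vm["max_latency_ms"]]
    if not feasible:
        return None  # no SLA-preserving target: keep the VM where it is
    return min(feasible, key=lambda c: c["energy_cost"])
```

Relaxing the same-subnet/shared-storage hypervisor limitation noted above is a precondition for such cross-domain decisions to be actionable at all.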
Discovery and Monitoring: In order to dynamically perform scheduling, resource allocation, and VM migration to meet SLAs in a federated network, it is mandatory that up-to-date information related to each Cloud's availability, pricing, and SLA rules be made available to outside domains via the Cloud Exchange. This component of the Cloud Coordinator is solely responsible for interacting with the Cloud Exchange through remote messaging. The Discovery and Monitoring component undertakes the following activities: (i) it updates the resource status metrics, including utilization, heat dissipation, and power consumption, based on feedback given by the Sensor component; (ii) it facilitates the Market and Policy Engine in periodically publishing the pricing policies and SLA rules to the Cloud Exchange; (iii) it aids the Scheduling and Allocation component in dynamically discovering the Clouds that offer better optimization for SLA constraints such as deadline and budget limits; and (iv) it helps the Virtualization component in determining load and power consumption; such information aids the Virtualization component in performing load balancing through dynamic VM migration.

Further, system components will need to share scalable methods for collecting and representing monitored data. This leads us to believe that system components should be interconnected and monitored through a decentralized messaging and information indexing infrastructure based on Distributed Hash Tables (DHTs) [26]. However, implementing scalable techniques that monitor the dynamic behaviors related to services and resources is non-trivial. In order to support a scalable service monitoring algorithm over a DHT infrastructure, additional data distribution indexing techniques, such as logical multi-dimensional or spatial indices [27] (MX-CIF Quad trees, Hilbert Curves, Z Curves), should be implemented.
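The core idea behind DHT-based indexing of monitoring data can be sketched with a minimal hash ring: each status key deterministically maps to the coordinator node responsible for storing it, with no central registry. This is only a toy illustration; real DHTs such as those surveyed in [26] add routing, replication, and the spatial indices cited above for multi-dimensional queries.

```python
# Toy hash-ring sketch of DHT-style key placement for monitoring data.
# Node placement and ring size are illustrative assumptions.

import hashlib
from bisect import bisect

RING_SIZE = 2 ** 16

def ring_position(key):
    """Deterministically map a key onto the ring via SHA-1."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING_SIZE

def responsible_node(key, node_positions):
    """node_positions: sorted list of (ring_position, node_id).
    The first node clockwise from the key's position owns the key."""
    positions = [p for p, _ in node_positions]
    idx = bisect(positions, ring_position(key)) % len(node_positions)
    return node_positions[idx][1]
```

Because placement depends only on the key's hash, any component can locate, say, `"provider-0/utilization"` without consulting a central index, which is the scalability property the text is after.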
3.2 Cloud Broker (CB)

The Cloud Broker, acting on behalf of users, identifies suitable Cloud service providers through the Cloud Exchange and negotiates with Cloud Coordinators for an allocation of resources that meets the QoS needs of users. The architecture of the Cloud Broker is shown in Figure 3 and its components are discussed below:

User Interface: This provides the access linkage between a user application interface and the broker. The Application Interpreter translates the execution requirements of a user application, which include what is to be executed, the description of task inputs including remote data files (if required), the information about task outputs (if present), and the desired QoS. The Service Interpreter understands the service requirements needed for the execution, which comprise service location, service type, and specific details such as remote batch job submission systems for computational services. The Credential Interpreter reads the credentials for accessing the necessary services.

[Figure 3: High-level architecture of the Cloud Broker service. The broker comprises a User Interface (Application Interpreter, Service Interpreter, Credential Interpreter), Core Services (Service Negotiator, Scheduler, Service Monitor), an Execution Interface (Job Dispatcher, Job Monitor), and a Persistence database, interacting with the Global Cloud Market and with Cloud Coordinators 1..n.]

Core Services: These enable the main functionality of the broker. The Service Negotiator bargains for Cloud services from the Cloud Exchange. The Scheduler determines the most appropriate Cloud services for the user application based on its application and service requirements. The Service Monitor maintains the status of Cloud services by periodically checking the availability of known Cloud services and discovering new services that are available.
If the local Cloud is unable to satisfy application requirements, a Cloud Broker lookup request that encapsulates the user's QoS parameters is submitted to the Cloud Exchange, which matches the lookup request against the available offers. The matching procedure considers two main system performance metrics: first, the user-specified QoS targets must be satisfied within acceptable bounds and, second, the allocation should not lead to overloading (in terms of utilization or power consumption) of the nodes. If a match occurs, the quote is forwarded to the requester (Scheduler). Following that, the Scheduling and Allocation component deploys the application on the Cloud that was suggested by the Cloud market.

Execution Interface: This provides execution support for the user application. The Job Dispatcher creates the necessary broker agent and requests data files (if any) to be dispatched with the user application to the remote Cloud resources for execution. The Job Monitor observes the execution status of the job so that the results of the job are returned to the user upon job completion.

Persistence: This maintains the state of the User Interface, Core Services, and Execution Interface in a database. This facilitates recovery when the broker fails and assists in user-level accounting.

3.3 Cloud Exchange (CEx)

As a market maker, the CEx acts as an information registry that stores the Clouds' current usage costs and demand patterns. Cloud Coordinators periodically update their availability, pricing, and SLA policies with the CEx. Cloud Brokers query the registry to learn information about existing SLA offers and the resource availability of member Clouds in the federation. Furthermore, it provides match-making services that map user requests to suitable service providers.
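The Exchange's two-criterion matching step, satisfying the user's QoS bounds while refusing offers from overloaded providers, can be sketched as follows. The offer/lookup field names and the load thresholds are illustrative assumptions about how such a registry might represent its entries.

```python
# Hypothetical match-making step at the Cloud Exchange: an offer is a
# valid quote only if it meets the QoS bounds AND its node is not
# overloaded. Thresholds (0.85 utilization, 0.9 power) are assumptions.

def match_offer(lookup, offers, util_cap=0.85, power_cap=0.9):
    """Return the first acceptable offer, or None if nothing matches."""
    for offer in offers:
        meets_qos = (offer["deadline_s"] <= lookup["deadline_s"]
                     and offer["price"] <= lookup["budget"])
        not_overloaded = (offer["utilization"] < util_cap
                          and offer["power_load"] < power_cap)
        if meets_qos and not_overloaded:
            return offer  # quote forwarded to the requesting Scheduler
    return None
```

A production Exchange would rank matches (e.g., by price) rather than take the first, but the two gating conditions are the ones the text identifies.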
Mapping functions will be implemented by leveraging various economic models, such as the Continuous Double Auction (CDA), as proposed in earlier works [6]. As a market maker, the Cloud Exchange provides directory, dynamic bidding-based service clearance, and payment management services, as discussed below:

• Directory: The market directory allows the global CEx participants to locate providers or consumers with the appropriate bids/offers. Cloud providers can publish the available supply of resources and their offered prices. Cloud consumers can then search for suitable providers and submit their bids for required resources. Standard interfaces need to be provided so that both providers and consumers can access resource information from one another readily and seamlessly.

• Auctioneer: Auctioneers periodically clear bids and asks received from the global CEx participants. Auctioneers are third-party controllers that do not represent any providers or consumers. Since the auctioneers are in total control of the entire trading process, they need to be trusted by participants.

• Bank: The banking system enforces the financial transactions pertaining to agreements between the global CEx participants. The banks are also independent and not controlled by any providers or consumers, thus facilitating impartiality and the trust among all Cloud market participants that the financial transactions are conducted correctly without any bias. This should be realized by integrating online payment management services, such as PayPal, with Clouds providing accounting services.

4. Early Experiments and Preliminary Results

Although we have been working towards the implementation of a software system for federation of cloud computing environments, it is still a work in progress.
Hence, in this section, we present the experiments and evaluation that we undertook using the CloudSim [29] framework to study the feasibility of the proposed research vision. The experiments were conducted on a Celeron machine with the following configuration: 1.86 GHz with 1 MB of L2 cache and 1 GB of RAM, running a standard Ubuntu Linux version 8.04 and JDK 1.6.

4.1 Evaluating the Performance of Federated Cloud Computing Environments

The first experiment aims at showing that a federated infrastructure of clouds has the potential to deliver better performance and service quality than existing non-federated approaches. To this end, a simulation environment that models a federation of three Cloud providers and a user (Cloud Broker) is created. Every provider instantiates a Sensor component, which is responsible for dynamically sensing the availability information related to the local hosts. Next, the sensed statistics are reported to the Cloud Coordinator, which utilizes the information in undertaking load-migration decisions. We evaluate a straightforward load-migration policy that performs online migration of VMs among federated Cloud providers only if the origin provider does not have the requested number of free VM slots available. The migration process involves the following steps: (i) creating a virtual machine instance that has the same configuration as is supported at the destination provider; and (ii) migrating the applications assigned to the original virtual machine to the newly instantiated virtual machine at the destination provider. The federated network of Cloud providers is created based on the topology shown in Figure 4.
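The evaluated load-migration policy is simple enough to state in a few lines: place the VM locally if a slot is free, otherwise migrate it to a federated peer (here, the least-loaded one, as a plausible tie-breaking rule; the experiment itself only requires that some peer with capacity is chosen). The dictionary layout below is an illustrative assumption, not CloudSim's API.

```python
# Sketch of the load-migration policy under evaluation: migrate a VM to
# a federated peer only when the origin provider has no free VM slot.
# The least-loaded-peer tie-break and dict layout are assumptions.

def place_vm(origin, peers):
    """origin/peers: dicts with 'name', 'used', 'capacity' (VM slots).
    Returns the name of the provider hosting the new VM."""
    if origin["used"] < origin["capacity"]:
        origin["used"] += 1          # free slot locally: no migration
        return origin["name"]
    target = min(peers, key=lambda p: p["used"] / p["capacity"])
    target["used"] += 1              # migrate the excess load
    return target["name"]
```

In the non-federated configuration of the experiment, the `peers` list is effectively empty and all load must queue at the origin, which is what produces the longer turn-around times reported below.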
[Figure 4: A network topology of federated Data Centers. Three Public Cloud Providers (0, 1, and 2), each running a Cloud Coordinator, exchange load-status information; a Cloud Broker submits the user application and monitors resource utilization, network traffic, disk reads/writes, and other metrics.]

Every Public Cloud provider in the system is modeled to have 50 computing hosts, 10 GB of memory, 2 TB of storage, 1 processor with 1000 MIPS of capacity, and a time-shared VM scheduler. The Cloud Broker, on behalf of the user, requests instantiation of VMs that each require 256 MB of memory, 1 GB of storage, 1 CPU, and a time-shared Cloudlet scheduler. The broker requests instantiation of 25 VMs and associates one Cloudlet (the Cloud application abstraction) to each VM for execution. These requests are originally submitted to Cloud Provider 0. Each Cloudlet is modeled to have a length of 1,800,000 MIs (Million Instructions). The simulation experiments were run under the following system configurations: (i) a federated network of clouds is available, hence data centers are able to cope with peak demands by migrating the excess load to the least loaded ones; and (ii) the data centers are modeled as independent entities (without federation), so all the workload submitted to a Cloud provider must be processed and executed locally.

Table 2 shows the average turn-around time for each Cloudlet and the overall makespan of the user application for both cases. A user application consists of one or more Cloudlets with sequential dependencies. The simulation results reveal that the availability of a federated infrastructure of clouds reduces the average turn-around time by more than 50%, while improving the makespan by 20%. This shows that, even for a very simple load-migration policy, the availability of federation brings significant benefits to the performance of users' applications.

Table 2: Performance Results.
Performance Metric               With Federation   Without Federation   % Improvement
Average Turn-Around Time (Secs)  2221.13           4700.1               > 50%
Makespan (Secs)                  6613.1            8405                 20%

4.2 Evaluating a Cloud Provisioning Strategy in a Federated Environment

In the previous subsection, we focused on the evaluation of federated service and resource provisioning scenarios. In this section, a more complete experiment, which also models the interconnection network between federated clouds, is presented. This example shows how the adoption of federated clouds can improve the productivity of a company by expanding private cloud capacity through dynamically leasing resources from public clouds at a reasonably low cost.

The simulation scenario is based on federating a private cloud with the Amazon EC2 cloud. The public and the private clouds are represented as two data centers in the simulation. A Cloud Coordinator in the private data center receives the user's applications and processes them on an FCFS basis, queuing the tasks when there is no available capacity for them in the infrastructure. To evaluate the effectiveness of a hybrid cloud in speeding up task execution, two scenarios are simulated. In the first scenario, tasks are kept in the waiting queue until active (currently executing) tasks finish in the private cloud; all the workload is processed locally within the private cloud. In the second scenario, waiting tasks are directly sent to the available public cloud. In other words, the second scenario simulates a Cloud Burst case that integrates the local private cloud with a public cloud for handling peaks in service demand. Before tasks are submitted to the Amazon cloud, the VM images (AMIs) are loaded and instantiated. The number of images instantiated in the Cloud is varied in the experiment, from 10% to 100% of the number of machines available in the private cloud.
Once the images are created, tasks in the waiting queue are submitted to them, in such a way that only one task runs on each VM at a given instant of time. Every time a task finishes, the next task in the waiting queue is submitted to the available VM host. When no tasks remain to be submitted to a VM, it is destroyed in the cloud.

The local private data center hosted 100 machines. Each machine has 2 GB of RAM, 10 TB of storage, and one CPU running at 1000 MIPS. The virtual machines created in the public cloud were based on an Amazon small instance (1.7 GB of memory, 1 virtual core, and 160 GB of instance storage). We consider in this example that the virtual core of a small instance has the same processing power as the local machine.

The workload sent to the private cloud is composed of 10,000 tasks. Each task takes between 20 and 22 minutes to run on one CPU. The exact amount of time was randomly generated based on the normal distribution. Each of the 10,000 tasks is submitted at the same time to the scheduler queue.

Table 3 shows the makespan of the tasks running only in the private cloud and with the extra allocation of resources from the public cloud. In the third column, we quantify the overall cost of the services. The pricing policy was designed based on Amazon's pricing model for small instances (U$ 0.10 per instance per hour). It means that the cost per instance is charged hourly, at the beginning of each hour of execution; if an instance runs for 1 hour and 1 minute, the amount for 2 hours (U$ 0.20) will be charged.
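The hourly ceiling billing rule just described amounts to rounding the runtime up to whole hours before multiplying by the rate, as the short sketch below shows (the function name is ours; the U$ 0.10 rate is the one stated in the text).

```python
# Hourly ceiling billing as used for the Table 3 cost column: every
# started hour of an instance is charged in full at U$ 0.10/hour.

import math

def instance_cost(runtime_minutes, hourly_rate=0.10):
    """Cost of one instance: started hours are charged in full."""
    return math.ceil(runtime_minutes / 60) * hourly_rate

# Examples: 60 minutes costs U$ 0.10, but 61 minutes already spans a
# second billing hour and costs U$ 0.20.
```

Summing this function over all instantiated public-cloud VMs for their respective runtimes yields the per-strategy costs reported in Table 3.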
Table 3: Cost and performance of several public/private cloud strategies.

Strategy       Makespan (s)   Cloud Cost (U$)
Private only   127155.77      0.00
Public 10%     115902.34      32.60
Public 20%     106222.71      60.00
Public 30%     98195.57       83.30
Public 40%     91088.37       103.30
Public 50%     85136.78       120.00
Public 60%     79776.93       134.60
Public 70%     75195.84       147.00
Public 80%     70967.24       160.00
Public 90%     67238.07       171.00
Public 100%    64192.89       180.00

Increasing the amount of leased public resources reduces the job makespan at a corresponding rate, which is the expected outcome. However, the cost associated with the processing increases significantly at higher rates. Nevertheless, the cost is still acceptable, considering that peak demands happen only occasionally and that most of the time this extra public cloud capacity is not required. Hence, leasing public cloud resources is cheaper than buying and maintaining extra resources that would remain idle most of the time.

5. Conclusions and Future Directions

The development of fundamental techniques and software systems that integrate distributed clouds in a federated fashion is critical to enabling the composition and deployment of elastic application services. We believe that the outcomes of this research vision will constitute a significant scientific advancement in understanding the theoretical and practical problems of engineering services for federated environments. The resulting framework facilitates the federated management of system components and protects customers with guaranteed quality of service in large, federated, and highly dynamic environments. The different components of the proposed framework offer powerful capabilities to address both service and resource management, but their end-to-end combination aims to dramatically improve the effective usage, management, and administration of Cloud systems.
This will provide enhanced degrees of scalability, flexibility, and simplicity for the management and delivery of services in federations of clouds.

In our future work, we will focus on developing a comprehensive model-driven approach to provisioning and delivering services in federated environments. These models will be important because they allow adaptive system management by establishing useful relationships between high-level performance targets (specified by operators) and low-level control parameters and observables that system components can control or monitor. We will model the behaviour and performance of different types of services and resources to adaptively transform service requests. We will use a broad range of analytical models and statistical curve-fitting techniques, such as multi-class queuing models and linear regression time series. These models will drive and possibly transform the input to a service provisioner, which improves the efficiency of the system. Such improvements will better ensure the achievement of performance targets, while reducing costs due to improved utilization of resources. It will be a major advancement in the field to develop a robust and scalable system monitoring infrastructure that collects real-time data and re-adjusts these models dynamically with a minimum of data and training time. We believe that these models and techniques are critical for the design of stochastic provisioning algorithms across large federated Cloud systems where resource availability is uncertain.

Lowering the energy usage of data centers is a challenging and complex issue because computing applications and data are growing so quickly that increasingly larger servers and disks are needed to process them fast enough within the required time period.
Green Cloud computing is envisioned to achieve not only efficient processing and utilization of computing infrastructure, but also minimization of energy consumption. This is essential for ensuring that the future growth of Cloud computing is sustainable. Otherwise, Cloud computing with increasingly pervasive front-end client devices interacting with back-end data centers will cause an enormous escalation of energy usage. To address this problem, data center resources need to be managed in an energy-efficient manner to drive Green Cloud computing. In particular, Cloud resources need to be allocated not only to satisfy QoS targets specified by users via Service Level Agreements (SLAs), but also to reduce energy usage. This can be achieved by applying market-based utility models to accept, out of the many competing requests, those that can be fulfilled so that revenue is optimized along with energy-efficient utilization of the Cloud infrastructure.

Acknowledgements: We acknowledge all members of the Melbourne CLOUDS Lab (especially William Voorsluys and Suraj Pandey) for their contributions to the InterCloud investigation.

References

[1] L. Kleinrock. A Vision for the Internet. ST Journal of Research, 2(1):4-5, Nov. 2005.

[2] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia. Above the Clouds: A Berkeley View of Cloud Computing. University of California at Berkeley, USA. Technical Report UCB/EECS-2009-28, 2009.

[3] R. Buyya, C. Yeo, S. Venugopal, J. Broberg, and I. Brandic. Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility. Future Generation Computer Systems, 25(6):599-616, Elsevier Science, Amsterdam, The Netherlands, June 2009.

[4] S. London. Inside Track: The high-tech rebels. Financial Times, 6 Sept. 2002.

[5] The Reservoir Seed Team.
Reservoir – An ICT Infrastructure for Reliable and Effective Delivery of Services as Utilities. IBM Research Report, H-0262, Feb. 2008.

[6] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger. Economic Models for Resource Management and Scheduling in Grid Computing. Concurrency and Computation: Practice and Experience, 14(13-15):1507-1542, Wiley Press, New York, USA, Nov.-Dec. 2002.

[7] A. Weiss. Computing in the Clouds. netWorker, 11(4):16-25, ACM Press, New York, USA, Dec. 2007.

[8] VMware: Migrate Virtual Machines with Zero Downtime. http://www.vmware.com/.

[9] P. Barham et al. Xen and the Art of Virtualization. Proceedings of the 19th ACM Symposium on Operating Systems Principles, ACM Press, New York, 2003.

[10] R. Buyya, D. Abramson, and S. Venugopal. The Grid Economy. Special Issue on Grid Computing, Proceedings of the IEEE, M. Parashar and C. Lee (eds.), 93(3), IEEE Press, March 2005, pp. 698-714.

[11] C. Yeo and R. Buyya. Managing Risk of Inaccurate Runtime Estimates for Deadline Constrained Job Admission Control in Clusters. Proc. of the 35th Intl. Conference on Parallel Processing, Columbus, Ohio, USA, Aug. 2006.

[12] C. Yeo and R. Buyya. Integrated Risk Analysis for a Commercial Computing Service. Proc. of the 21st IEEE International Parallel and Distributed Processing Symposium, Long Beach, California, USA, March 2007.

[13] A. Sulistio, K. Kim, and R. Buyya. Managing Cancellations and No-shows of Reservations with Overbooking to Increase Resource Revenue. Proceedings of the 8th IEEE International Symposium on Cluster Computing and the Grid, Lyon, France, May 2008.

[14] X. Chu and R. Buyya. Service Oriented Sensor Web. Sensor Network and Configuration: Fundamentals, Standards, Platforms, and Applications, N. P. Mahalik (ed), Springer, Berlin, Germany, Jan. 2007.

[15] Amazon Elastic Compute Cloud (EC2), http://www.amazon.com/ec2/ [17 March 2010].
[16] Google App Engine, http://appengine.google.com [17 March 2010].

[17] Windows Azure Platform, http://www.microsoft.com/azure/ [17 March 2010].

[18] Spring.NET, http://www.springframework.net [17 March 2010].

[19] D. Chappell. Introducing the Azure Services Platform. White Paper, http://www.microsoft.com/azure [Jan 2009].

[20] S. Venugopal, X. Chu, and R. Buyya. A Negotiation Mechanism for Advance Resource Reservation using the Alternate Offers Protocol. Proceedings of the 16th International Workshop on Quality of Service (IWQoS 2008), Twente, The Netherlands, June 2008.

[21] Salesforce.com (2009) Application Development with Force.com's Cloud Computing Platform. http://www.salesforce.com/platform/. Accessed 16 December 2009.

[22] D. Nurmi, R. Wolski, C. Grzegorczyk, G. Obertelli, S. Soman, L. Youseff, D. Zagorodnov. The Eucalyptus Open-Source Cloud-Computing System. Proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 2009), May 18-21, 2009, Shanghai, China.

[23] GoGrid Cloud Hosting (2009) F5 Load Balancer. GoGrid Wiki. http://wiki.gogrid.com/wiki/index.php/(F5)_Load_Balancer. Accessed 21 September 2009.

[24] Amazon CloudWatch Service, http://aws.amazon.com/cloudwatch/.

[25] Amazon Load Balancer Service, http://aws.amazon.com/elasticloadbalancing/.

[26] K. Lua, J. Crowcroft, M. Pias, R. Sharma, and S. Lim. A Survey and Comparison of Peer-to-Peer Overlay Network Schemes. IEEE Communications Surveys and Tutorials, 7(2), Washington, DC, USA, 2005.

[27] R. Ranjan. Coordinated Resource Provisioning in Federated Grids. Ph.D. Thesis, The University of Melbourne, Australia, March 2009.

[28] R. Ranjan and A. Liu. Autonomic Cloud Services Aggregation. CRC Smart Services Report, July 15, 2009.

[29] R. Buyya, R. Ranjan, and R. N. Calheiros.
Modeling and Simulation of Scalable Cloud Computing Environments and the CloudSim Toolkit: Challenges and Opportunities. Proceedings of the 7th High Performance Computing and Simulation Conference (HPCS 2009, IEEE Press, New York, USA), Leipzig, Germany, June 21-24, 2009.

[30] A. Quiroz, H. Kim, M. Parashar, N. Gnanasambandam, and N. Sharma. Towards Autonomic Workload Provisioning for Enterprise Grids and Clouds. Proceedings of the 10th IEEE International Conference on Grid Computing (Grid 2009), Banff, Alberta, Canada, October 13-15, 2009.

[31] D. G. Feitelson. Workload Modeling for Computer Systems Performance Evaluation. In preparation, www.cs.huji.ac.il/~feit/wlmod/. (Accessed on March 19, 2010).

[32] C. Vecchiola, X. Chu, and R. Buyya. Aneka: A Software Platform for .NET-based Cloud Computing. High Speed and Large Scale Scientific Computing, pp. 267-295, W. Gentzsch, L. Grandinetti, G. Joubert (Eds.), ISBN: 978-1-60750-073-5, IOS Press, Amsterdam, Netherlands, 2009.
