GRID Architecture through a Public Cluster

Z. Akbar and L.T. Handoko
Group for Theoretical and Computational Physics, Research Center for Physics, Indonesian Institute of Sciences, Kompleks Puspiptek Serpong, Tangerang 15310, Indonesia
Email: zaenal@teori.fisika.lipi.go.id, handoko@teori.fisika.lipi.go.id

Abstract

An architecture is introduced that enables blocks, each consisting of several nodes in a public cluster, to connect to different grid collaborations. It is realized by inserting a web service alongside the standard Globus Toolkit. The new web service performs two main tasks: it authenticates the digital certificate contained in an incoming request, and it forwards the request to the designated block. The appropriate block is identified by mapping the username of the block's owner contained in the digital certificate. It is argued that this algorithm opens an opportunity for any block in a public cluster to join various global grids.

Keywords: distributed systems, public cluster, middleware, internet applications

1. Introduction

LIPI Public Cluster (LPC) is a unique parallel machine, perhaps the first of its kind [1,2], which is open for public use [3]. The main difference is that LPC grants the user full ownership of a block of the parallel machine consisting of several nodes, using the so-called multi-block approach [4]. Conventional "public" parallel machines, on the contrary, allow users only to put their jobs in queues and let a resource allocation management system such as openPBS distribute them appropriately according to the currently available resources [5]. In LPC, users are granted a much higher degree of freedom to control their own blocks, although all commands are executed through a web interface [6]. Microcontroller-based remote control and monitoring of the hardware in LPC makes it possible for the users to control it fully [7].
Since a user fully owns a block of nodes for a certain allocated period, a conventional resource management system is irrelevant in LPC. Instead, resource allocation is needed mainly to help the administrator assign an appropriate number and type of nodes to each incoming initial request, according to current availability and user needs [8]. This is the main reason we have removed the conventional resource allocation module from LPC [6,8]. On the other hand, advancing LPC by connecting a block as a participating node in a global grid requires a tool that guides a request from a partner grid so that it is forwarded to the appropriate block. In other words, we have to define the queues available in LPC and associate them with the blocks. In this context we should still deploy resource allocation management, but only to define the queues in LPC.

In this paper we discuss a compact architecture to connect any block in LPC to any global grid. The architecture makes use of available tools, such as modules embedded in the Globus Toolkit (GT) [10] and openPBS to set up the queues associated with each block [5]. Further, we develop a "router web service" that reroutes an incoming request to the appropriate block by inserting a label containing the username of the block's owner, since in LPC each block is exclusively owned by a particular user.

2. Concept and architecture

Fig. 1. A considerable case of connecting some blocks in LPC to different global grids.

Connecting a block in LPC to a grid requires the implementation of several things simultaneously:
● A grid middleware. In our case, we adopt the widely used Globus Toolkit version 4 (GT4) [10].
● A smart authentication method to reroute requests from separate grids to the appropriate blocks in LPC.

As mentioned above, due to its unique characteristics, LPC originally had no need to deploy a resource management tool at all.
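The "router web service" idea above, namely attaching a label carrying the block owner's username to an incoming request so that it can be forwarded to that owner's queue, can be sketched as follows. This is an illustrative Python sketch: the names (`BLOCK_QUEUES`, `label_request`, `route`) and the dictionary-based mapping are assumptions for exposition, not LPC's actual implementation.

```python
# Illustrative sketch of the request labeling and rerouting described above.
# Each block is owned by exactly one user, so the owner's username suffices
# to identify the target queue. All names here are hypothetical.

# Queue names that openPBS would associate with each allocated block.
BLOCK_QUEUES = {
    "alice": "queue_alice",  # e.g. a 4-node block joined to one grid
    "bob": "queue_bob",      # e.g. an 8-node block joined to another grid
}

def label_request(request: dict, username: str) -> dict:
    """Insert the block owner's username as a label on the request."""
    labeled = dict(request)
    labeled["owner"] = username
    return labeled

def route(labeled: dict) -> str:
    """Map the label to the queue associated with the owner's block."""
    owner = labeled["owner"]
    if owner not in BLOCK_QUEUES:
        raise LookupError(f"no block allocated to user {owner!r}")
    return BLOCK_QUEUES[owner]

if __name__ == "__main__":
    req = label_request({"job": "mc_simulation"}, "alice")
    print(route(req))  # queue_alice
```

The essential design point is that no scheduling decision is made here at all: because ownership is exclusive, routing reduces to a lookup.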
A resource manager is unnecessary because all the nodes in a block are allocated to a single user, who has full control over everything, as if the user owned a private parallel machine. Unfortunately, this rare concept leads to a severe problem once a block is to be involved in a global grid: how to forward a request to the block appropriately. Moreover, there is also the considerable case in which several blocks in LPC are connected to different grids, as depicted in Fig. 1. Using a resource management tool in the conventional way would not resolve the problem, since all available nodes are already assigned to several independent blocks. Independent here means that each block is owned by a different user and may deploy various middlewares according to user needs.

How should we overcome this problem? And how do we fulfill these needs while deploying the modules of GT as they are? Exploiting the fact that a block in LPC is always occupied by a single user, we deploy a resource management tool only to define a queue name for each block. In our case we deploy openPBS for this purpose [5]. The problem is then how to attach a label to each incoming request so that it is rerouted through GT to the appropriate block. To overcome this problem we have developed a unique web service, the Web Service Public Cluster (WSPC). WSPC plays an important role: it authenticates the digital certificate contained in a request, retrieves the username, and maps it to the queue name associated with the particular block. The WSPC is placed in front of GT, which also acts as certificate authority (CA) as usual. Since we deploy GT as it is, double authentication occurs, at WSPC and at MyProxy of GT as well.

Fig. 2. The flow diagram for an incoming request from a collaborating grid to a block through WSPC and WS GRAM of GT.

3. Implementation

Installing the whole of GT4 and our WSPC could, in principle, provide smooth communication between the global grids and the participating blocks.
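Defining a per-block queue with openPBS, as described above, can be illustrated with `qmgr` directives. The sketch below builds the directives in Python and follows common openPBS/Torque attribute names (`acl_users`, `resources_default.neednodes`); these attribute choices are an assumption for illustration, since the paper does not list LPC's exact queue configuration.

```python
# Sketch: build the qmgr directives that define a queue named after the
# block owner, restricted to the owner's nodes and to the owner alone.
# Attribute names follow common openPBS/Torque usage; treat this as an
# illustration rather than LPC's actual configuration.

def queue_setup_commands(username, nodes):
    """Return qmgr directives for a per-block execution queue."""
    q = username  # the queue carries the owner's username
    return [
        f"create queue {q} queue_type=execution",
        f"set queue {q} acl_user_enable=true",
        f"set queue {q} acl_users={username}",  # only the owner may submit
        f"set queue {q} resources_default.neednodes={'+'.join(nodes)}",
        f"set queue {q} enabled=true",
        f"set queue {q} started=true",
    ]

if __name__ == "__main__":
    # Print the shell commands an administrator (or a provisioning script)
    # would run when a new block is approved.
    for c in queue_setup_commands("alice", ["node03", "node04"]):
        print(f'qmgr -c "{c}"')
```

Naming the queue after the owner is what later lets WSPC resolve a certificate's username directly to a queue.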
More importantly, the blocks retain full flexibility in the choice of middleware for both parallel and grid environments. This is crucial to keep LPC an open infrastructure for the public, with its varying needs and requirements. The overall flow of these processes is shown in Fig. 2. In practice the implementation works as follows:
1. An approved user is issued a username and a set of nodes as a block. At the same time, a new queue is created under that username for the allocated block. The queue is bound to its specific nodes and is not allowed to use any others. Also, the only authorized user for the block is the user who has just been approved.
2. When the user makes a request to his/her block, the following parameters are sent together: username and userCA.
3. The queue name to which the request will be forwarded is determined by WSPC from the contained username, after mapping it against the data of blocks available at that time.

Finally, both GT4 and WSPC are installed on the gateway of LPC. All components handling the above-mentioned tasks are shown in Fig. 3. We remark that several grid middlewares can be deployed, from which users may choose. To distinguish them, each middleware is placed in a different directory; hence a user request should follow the correct path of the desired middleware according to the choice made when activating the block.

Fig. 3. The components of WSPC and its relation with the LPC.

4. Summary

We have introduced a new architecture for grid computing using public clusters with characteristics like those of LPC. The architecture consists of the standard GT and an additional web service. The web service, WSPC, acts as an intermediate interface that authenticates the digital certificate in a request, retrieves the username, and finally maps it to the queue name associated with the particular block.
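The WSPC flow just summarized, authenticate the certificate, retrieve the username, and map it to the block's queue, can be sketched as below. The `Certificate` type, the DN layout ("/O=LPC/CN=<user>"), and the signature check are illustrative assumptions; in the real system GT's CA and MyProxy perform a second, independent authentication after this step.

```python
from dataclasses import dataclass

# Sketch of the WSPC request flow summarized above. The Certificate type,
# the DN layout, and the signature check are illustrative assumptions.

@dataclass
class Certificate:
    subject: str        # e.g. "/O=LPC/CN=alice" (assumed DN layout)
    signature_ok: bool  # stand-in for a real chain-of-trust check

BLOCK_QUEUES = {"alice": "queue_alice"}  # maintained as blocks are allocated

def authenticate(cert):
    # WSPC's first authentication; MyProxy of GT authenticates again later,
    # which is the "double authentication" noted in the text.
    return cert.signature_ok

def extract_username(cert):
    """Take the CN component of the certificate subject as the block owner."""
    for part in cert.subject.split("/"):
        if part.startswith("CN="):
            return part[3:]
    raise ValueError("certificate subject carries no CN")

def wspc_route(cert):
    """Authenticate, retrieve the username, and map it to the queue name."""
    if not authenticate(cert):
        raise PermissionError("certificate rejected by WSPC")
    return BLOCK_QUEUES[extract_username(cert)]

if __name__ == "__main__":
    print(wspc_route(Certificate("/O=LPC/CN=alice", True)))  # queue_alice
```

The returned queue name is what would be handed on to WS GRAM / openPBS, so the block's own middleware never needs to know about the grid-facing authentication.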
It is argued that this architecture preserves freedom and flexibility for users, because different parallel and grid middlewares can be used simultaneously in separate blocks according to user needs.

As future work, we are going to integrate WSPC as a module or web service of GT4. This would improve the authentication process and avoid the delays caused by double authentication. It can be achieved either by removing MyProxy and embedding its relevant features into WSPC, or, inversely, by adding the unique functions of WSPC to MyProxy of GT4.

Acknowledgment

This work is financially supported by the Riset Kompetitif LIPI in fiscal year 2008 under Contract no. 11.04/SK/KPPI/II/2008.

References

[1] LIPI Public Cluster, http://www.cluster.lipi.go.id.
[2] L.T. Handoko, Indonesian Copyright, No. B 268487 (2006).
[3] Z. Akbar, Slamet, B.I. Ajinagoro, G.J. Ohara, I. Firmansyah, B. Hermanto and L.T. Handoko, "Open and Free Cluster for Public", Proceeding of the International Conference on Rural Information and Communication Technology 2007, Bandung, Indonesia (2007).
[4] Z. Akbar, Slamet, B.I. Ajinagoro, G.J. Ohara, I. Firmansyah, B. Hermanto and L.T. Handoko, "Public Cluster: parallel machine with multi-block approach", Proceeding of the International Conference on Electrical Engineering and Informatics, Bandung, Indonesia (2007).
[5] OpenPBS, http://www.openpbs.org.
[6] Z. Akbar and L.T. Handoko, "Web-based Interface in Public Cluster", Proceeding of the 9th International Conference on Information Integration and Web-based Applications and Services, Jakarta, Indonesia (2007).
[7] I. Firmansyah, B. Hermanto, Slamet, Hadiyanto and L.T. Handoko, "Real-time control and monitoring system for LIPI's public cluster", Proceeding of the International Conference on Instrumentation, Communication, and Information Technology, Bandung, Indonesia (2007).
[8] Z. Akbar and L.T. Handoko, "Resource Allocation in Public Cluster with Extended Optimization Algorithm", Proceeding of the International Conference on Instrumentation, Communication, and Information Technology, Bandung, Indonesia (2007).
[9] Z. Akbar et al., "openPC: Toolkit for Public Cluster", http://sourceforge.net/projects/openpc/.
[10] Ian Foster, "Globus Toolkit Version 4: Software for Service Oriented Systems", Journal of Computational Science and Technology 21, 513-520 (2006).