A task execution request is managed by the Resource Broker, which forwards it to a Computing Element (CE). The CE is the central server of a testbed site: it acts as the gateway to the nodes of a cluster, dispatches jobs to the local schedulers, and publishes the fabric characteristics and status. A Worker Node (WN) is a single system (generally a workstation); a grid site can have one or more WNs. The Storage Element (SE) is a server that stores, checks, and replicates the data distributed over the grid. The automatic site installation procedure is carried out using an LCFGng server (Local ConFiguration Next Generation, Division of Informatics, University of Edinburgh). The LCFGng server makes the software available as RPM packages (RedHat Package Manager) through an NFS directory shared with the site systems (CE, SE, WN, ...), and publishes the clients' profiles, via an HTTP server, as XML pages containing the complete description of each node.
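To make the profile mechanism concrete, the sketch below parses a node description of the kind the LCFGng server publishes over HTTP as XML. The element and attribute names (profile, component, resource, the node name, the paths) are illustrative assumptions, not the real LCFGng schema.

```python
# Minimal sketch: reading a simplified, hypothetical node profile of the
# kind an LCFGng server could serve as an XML page over HTTP.
import xml.etree.ElementTree as ET

# Illustrative profile; element names and values are invented examples.
profile = """\
<profile node="wn01.example.grid">
  <component name="nfsmount">
    <resource key="mountpoint">/opt/datagrid</resource>
  </component>
  <component name="updaterpms">
    <resource key="rpmdir">/export/rpms</resource>
  </component>
</profile>
"""

root = ET.fromstring(profile)
node = root.get("node")
# Map each component to its key/value resources.
components = {c.get("name"): {r.get("key"): r.text for r in c.findall("resource")}
              for c in root.findall("component")}
print(node, components)
```

A client would fetch such a page from the LCFGng HTTP server and configure itself from the listed components.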
Setting up our grid site required the installation of the following systems:
- one workstation for the LCFGng server, providing the NFS, DHCP, and HTTP services, with a large disk area to store all the DataGrid RPMs;
- one workstation for the CE system;
- one workstation for the SE system;
- the IBM SP system, which is the main WN of our site.
We have created two access points: the first for the INFN production grid and the second for the GILDA test grid.
The Astrocomp web portal is a user-friendly interface that allows users to run several astrophysical parallel codes on a pool of powerful computational resources, hiding the complexity of the underlying MPP systems. Astrocomp originated from an idea of V. Antonuccio and U. Becciani (both of the INAF Observatory of Catania), R. Capuzzo Dolcetta (Dept. of Physics, Univ. of Roma La Sapienza), and V. Rosato (ENEA), in collaboration with Oneiros s.r.l. Astrocomp-G is instead a grid-enabled portal for running codes on a computational grid. Specifically, we re-implemented the portal authentication mechanisms, adopting standards currently used in many international computational grids. To log into the portal, a user needs, besides the username/password released by the portal webmaster upon registration, a digital X.509 certificate from which a proxy is created and stored on a MyProxy server. The proxy is used to authenticate users who access the facilities offered by the grid. The use of proxies and X.509 certificates is a key point of the Grid Security Infrastructure (GSI), a standard adopted by grid infrastructures all over the world. To implement the portal login and proxy creation, we used GridPort 2, a Perl toolkit developed by NPACI to aid portal creation. As a further level of security, Astrocomp-G is accessed through HTTPS/SSL connections provided by the Apache web server. Moreover, we re-engineered the job submission mechanisms of the available parallel codes to handle their execution on the GRID-IT production grid. In particular, through the portal we are able to check the job status, watch the logs, retrieve the output from a Resource Broker, and visualize it. Because of the particular features of the available applications, the portal is currently set to direct submitted jobs to our IBM SP machine, previously integrated in our testbed. The experiments have been carried out using MARA, a parallel code for the analysis of light curves of close binary systems.
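As an illustration of the re-engineered submission path, the sketch below builds a minimal EDG-style JDL job description of the kind a portal back-end could hand to the Resource Broker. The JDL field names follow the EDG job description language; the executable and input file names (mara, lightcurve.dat) are hypothetical, and the edg-job-* commands in the comments are mentioned only as the usual EDG user-interface tools.

```python
# Minimal sketch of a JDL job description a portal back-end could
# generate for the Resource Broker. File names are hypothetical.

def quote_list(items):
    """Render a Python list as a JDL string list: {"a", "b"}."""
    return "{" + ", ".join('"%s"' % i for i in items) + "}"

def make_jdl(executable, arguments, inputs):
    """Build a minimal JDL job description as a string."""
    return "\n".join([
        "[",
        '  Executable    = "%s";' % executable,
        '  Arguments     = "%s";' % arguments,
        '  StdOutput     = "std.out";',
        '  StdError      = "std.err";',
        "  InputSandbox  = %s;" % quote_list(inputs),
        "  OutputSandbox = %s;" % quote_list(["std.out", "std.err"]),
        "]",
    ])

jdl = make_jdl("mara", "lightcurve.dat", ["mara", "lightcurve.dat"])
print(jdl)
# The resulting file would then be handled with the EDG user-interface
# commands, roughly:
#   edg-job-submit mara.jdl        # send the job to the Resource Broker
#   edg-job-status <job_id>        # check the job status
#   edg-job-get-output <job_id>    # retrieve the output sandbox
```

The portal wraps exactly these steps (submission, status check, log inspection, output retrieval) behind its web interface.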