Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment


📝 Original Info

  • Title: Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment
  • ArXiv ID: 0810.3453
  • Date: 2009-09-29
  • Authors: D. Benjamin (for the CDF Collaboration)

📝 Abstract

The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially, CDF computing consisted of dedicated resources located in computer farms around the world. With the widespread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world, and each grid has required different solutions. The use of portals as interfaces to the collaboration's computing resources has proven to be an extremely useful technique, allowing CDF physicists to migrate transparently from dedicated computer farms to computing located in grid farms, often far from Fermilab. Grid computing at CDF continues to evolve as grid standards and practices change.


📄 Full Content

34th International Conference on High Energy Physics, Philadelphia, 2008

Grid Computing in the Collider Detector at Fermilab (CDF) scientific experiment

D. Benjamin (for the CDF Collaboration)
Duke University, Durham, NC 27708, USA

The computing model for the Collider Detector at Fermilab (CDF) scientific experiment has evolved since the beginning of the experiment. Initially, CDF computing consisted of dedicated resources located in computer farms around the world. With the widespread acceptance of grid computing in High Energy Physics, CDF computing has migrated to using grid computing extensively. CDF uses computing grids around the world, and each grid has required different solutions. The use of portals as interfaces to the collaboration's computing resources has proven to be an extremely useful technique, allowing CDF physicists to migrate transparently from dedicated computer farms to computing located in grid farms, often far from Fermilab. Grid computing at CDF continues to evolve as grid standards and practices change.

  1. INTRODUCTION
    The Collider Detector at Fermilab (CDF) scientific experiment is a large multipurpose particle physics experiment located at the Fermi National Accelerator Laboratory (Fermilab), west of Chicago, Illinois, in the United States. The CDF detector measures the particles produced in proton–antiproton collisions at the Tevatron operating at √s = 1.96 TeV. There are two experiments at the Tevatron, and this paper describes the grid computing for CDF. The CDF detector comprises a tracking system located closest to the collision point, inside a solenoid magnet. Outside the solenoid magnet, calorimeter detectors are used to measure the energy of the particles resulting from the collisions. Finally, on the outside are muon detectors. The CDF experiment started collecting data in 1988 and will continue to take data until at least October 2009. There are ¾ million electronic channels read out, and there are collisions every 396 ns.
    The CDF computing model has several components. Events passing the trigger are read out and written to robotic tape storage for later calibration, reconstruction, and analysis. The reconstruction program reads the data from tape using the dCache system [1] to stage the data to disk, before copying the data files to worker nodes located in an onsite computer farm and writing the results back to tape. CDF users typically analyze the data at FNAL and use remote computer farms for Monte Carlo production, with the results sent back to FNAL for archival storage.
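The data flow above can be sketched as a simple staged pipeline. The sketch below is purely illustrative; every path and function body is a placeholder standing in for the corresponding CDF component (tape staging via dCache, copy to a worker node, reconstruction, archival), not actual CDF code:

```python
# Toy model of the CDF reconstruction data flow: tape -> dCache disk
# stage -> worker node -> reconstruction -> back to tape. All paths
# and names are illustrative placeholders.

def stage_from_tape(filename: str) -> str:
    """Stage a raw data file from tape to disk (dCache's role)."""
    return f"/pnfs/disk_cache/{filename}"

def copy_to_worker(disk_path: str) -> str:
    """Copy the staged file to local scratch on a farm worker node."""
    return f"/local/scratch/{disk_path.split('/')[-1]}"

def reconstruct(local_path: str) -> str:
    """Run the production reconstruction executable over the file."""
    return local_path.replace(".raw", ".reco")

def archive_to_tape(result_path: str) -> str:
    """Write the reconstruction output back to robotic tape storage."""
    return f"tape://{result_path.split('/')[-1]}"

def process(filename: str) -> str:
    """Push one raw data file through the full chain."""
    return archive_to_tape(reconstruct(copy_to_worker(stage_from_tape(filename))))
```

For example, `process("run123.raw")` traces a single file through all four stages, ending at an archived `tape://run123.reco`.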
    The CDF analysis farm (CAF) [2] is a software infrastructure developed to allow CDF to use computer farms of commodity hardware for data reconstruction, data analysis, and simulated data production. Users develop software, debug programs, and submit jobs from their own desktops. User authentication is based on Kerberos v5. System performance and user jobs can be monitored either through a web interface or through command line tools. The interactive monitoring tools allow users to check the progress of their running jobs by viewing the log files on the worker nodes and to obtain other information from the worker nodes (such as a directory listing of the job's remote working area). Users can also stop all or part of their jobs if needed. Most of this monitoring functionality is available to CDF users running on the grid. Users are notified of the completion of their job via a job summary sent by e-mail. The user can also specify a destination for a compressed tar ball containing the job logs and whatever files exist in the job's work area at completion; Kerberized rcp is used to transfer these files at present.
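The output-packaging step described above (bundling the job's work area into a compressed tar ball at job completion, before transfer back to the user) might look roughly like the following sketch. The function name and directory layout are assumptions for illustration, not the actual CAF wrapper code:

```python
# Hypothetical sketch of the packaging step a CAF-style job wrapper
# performs at job completion: bundle the job logs and whatever files
# remain in the work area into a gzipped tar ball. (CDF transferred
# the resulting file back with kerberized rcp.)
import tarfile
from pathlib import Path

def package_work_area(work_dir: str, dest_dir: str) -> str:
    """Compress the job's work area into a tar ball and return its path."""
    out = Path(dest_dir) / "job_output.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        # Archive the whole work area under a fixed top-level name.
        tar.add(work_dir, arcname="work_area")
    return str(out)
```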
    The current version of the CDF CAF used for processing data with the production reconstruction executable at FNAL is based on a dedicated Condor pool [3]. Condor daemons (collector, negotiator, multiple schedds) run on the head node (or portal machine), while the Condor startd and starter daemons run on the worker nodes. The CAF code includes a job submission client run on the user's desktop; on the portal machine it includes a job server daemon, monitoring daemons (web and command line), and mailer daemons. On the worker nodes, a CDF-specific job wrapper containing the user's job executables is used. The initial implementation was not grid compliant and used remote dedicated computer farms; it has since been adapted to use computer farms located at grid sites. Since the users interact with CDF computing through portals, they are insulated from the underlying computing farm architecture.
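For readers unfamiliar with Condor, a job handed to such a pool is described by a submit file that the schedd on the head node matches against worker-node startds. A minimal sketch is below; the file and executable names are purely illustrative, not CDF's actual wrapper or configuration:

```text
# Hypothetical Condor submit description for a CAF-style job.
# cdf_job_wrapper.sh stands in for the CDF-specific job wrapper
# that carries the user's executables to the worker node.
universe              = vanilla
executable            = cdf_job_wrapper.sh
arguments             = user_analysis.exe
output                = job_$(Process).out
error                 = job_$(Process).err
log                   = job.log
should_transfer_files = YES
queue 10
```

The `queue 10` line creates ten job instances, which is how a user's work is fanned out in parallel across the pool's worker nodes.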
  2. TRANSITION TO GRID COMPUTING

Use of the grid allowed CDF to reduce the need for dedicated computer farms. As CDF transitioned from dedicated computing farms

…(Full text truncated)…

