
Improving efficiency of analysis jobs in CMS

 

Abstract


Data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider are continuously analyzed by hundreds of physicists thanks to the CMS Remote Analysis Builder (CRAB) and the CMS global pool, exploiting the resources of the Worldwide LHC Computing Grid. Making efficient use of such an extensive and expensive system is crucial. Supporting a variety of workflows while preserving efficient resource usage poses particular challenges, such as: scheduling jobs in a multicore/pilot model, where several single-core jobs of undefined run time execute inside pilot jobs with a fixed lifetime; preventing too many concurrent reads from the same storage from pushing jobs into I/O wait and leaving CPU cycles idle; and monitoring user activity to detect low-efficiency workflows and provide optimizations. In this contribution we report on two novel, complementary approaches adopted in CMS to improve the scheduling efficiency of user analysis jobs: automatic job splitting and automated run time estimates. Both aim at finding an optimal value for the scheduling run time. We also report on how we use the flexibility of the CMS global pool to select the number, kind, and running locations of jobs, exploiting remote access to the input data.
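
As an illustration of the scheduling problem sketched above, the following Python snippet shows, in highly simplified form, the two ideas the contribution builds on: deriving a scheduling run time from the run times of already-completed jobs of the same task, and splitting a task so that each job's predicted duration fits inside a pilot job's fixed lifetime. It is not the actual CRAB or global-pool code; the function names, the 95th-percentile choice, the safety margin, and the example numbers are all assumptions made for this sketch.

```python
"""Illustrative sketch only; all names, thresholds, and numbers are assumptions."""

import math
import statistics


def estimate_scheduling_runtime(completed_runtimes_s, quantile=0.95, margin=1.2):
    """Conservative per-job run-time estimate (seconds) from completed jobs.

    Uses a high quantile of the observed run times plus a safety margin, so
    that most later jobs finish within the time requested from the scheduler.
    """
    if len(completed_runtimes_s) < 2:
        return None  # not enough history yet; caller falls back to a default
    cut_points = statistics.quantiles(completed_runtimes_s, n=100)
    return cut_points[int(quantile * 100) - 1] * margin


def split_task(total_events, events_per_second, pilot_lifetime_s, fill_fraction=0.8):
    """Split a task into jobs whose predicted run time fits a pilot lifetime.

    Each job is sized to use at most `fill_fraction` of the pilot lifetime,
    leaving room for stage-out and scheduling overhead (fraction is arbitrary).
    """
    target_runtime_s = fill_fraction * pilot_lifetime_s
    events_per_job = max(1, int(target_runtime_s * events_per_second))
    n_jobs = math.ceil(total_events / events_per_job)
    return n_jobs, events_per_job


if __name__ == "__main__":
    # Hypothetical numbers: four finished jobs of ~6-8 hours, 48-hour pilots,
    # and a 2M-event task processed at ~50 events/s per core.
    history = [6.1 * 3600, 6.8 * 3600, 7.5 * 3600, 8.0 * 3600]
    print("requested run time [s]:", estimate_scheduling_runtime(history))
    print("(jobs, events per job):", split_task(2_000_000, 50, 48 * 3600))
```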

Volume 351
Pages 15
DOI 10.22323/1.351.0015
Language English
