
Publication


Featured research published by Ajit Mohapatra.


Journal of Physics: Conference Series | 2012

A new era for central processing and production in CMS

E. Fajardo; Oliver Gutsche; S. Foulkes; J. Linacre; V. Spinoso; A. Lahiff; G. Gomez-Ceballos; M. Klute; Ajit Mohapatra

The goal for CMS computing is to maximise the throughput of simulated event generation while also processing event data generated by the detector as quickly and reliably as possible. To maintain this as the quantity of events increases, CMS computing has migrated at the Tier-1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and increased resource usage, as well as a reduction in operational manpower. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues arose during its commissioning. The largest operational challenges were in the usage and monitoring of resources, mainly as a result of a change in the way work is allocated: instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system, and the task of the operators has changed from running individual workflows to monitoring the global workload. In this report we present how we tackled some of the operational challenges, and how we benefitted from the lessons learned in commissioning the WMAgent framework at the Tier-2 level in late 2011. As case studies, we show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.
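The central idea described above is that work is injected into a single Request Manager and pulled by agents, while operators watch the global workload rather than individual workflows. The Python sketch below is a rough illustration of that pull model only; the names RequestManager, WorkflowRequest and acquire_work are invented for the example and are not the real WMAgent or Request Manager interfaces.

```python
# A minimal sketch, assuming only what the abstract states: work is injected
# into a central queue and pulled by agents, while operators monitor the
# aggregate state instead of individual workflows. RequestManager,
# WorkflowRequest and acquire_work are hypothetical names, not the real
# WMAgent / Request Manager interfaces.
from __future__ import annotations

from collections import deque
from dataclasses import dataclass


@dataclass
class WorkflowRequest:
    name: str
    priority: int = 0
    status: str = "new"          # new -> acquired -> completed


class RequestManager:
    """Toy central request queue: agents pull work, operators watch totals."""

    def __init__(self) -> None:
        self._pending: deque[WorkflowRequest] = deque()
        self._all: list[WorkflowRequest] = []

    def inject(self, request: WorkflowRequest) -> None:
        # Work is injected centrally instead of being handed to an operator.
        self._pending.append(request)
        self._all.append(request)

    def acquire_work(self) -> WorkflowRequest | None:
        # An agent asks for the next piece of work to run at its site.
        if not self._pending:
            return None
        request = self._pending.popleft()
        request.status = "acquired"
        return request

    def global_status(self) -> dict[str, int]:
        # Operators monitor the global workload rather than single workflows.
        counts: dict[str, int] = {}
        for request in self._all:
            counts[request.status] = counts.get(request.status, 0) + 1
        return counts


if __name__ == "__main__":
    reqmgr = RequestManager()
    reqmgr.inject(WorkflowRequest("MonteCarlo_DrellYan", priority=1))
    reqmgr.inject(WorkflowRequest("ReReco_2011A", priority=2))
    acquired = reqmgr.acquire_work()
    print("agent acquired:", acquired.name if acquired else None)
    print("global status:", reqmgr.global_status())
```

The point of the pull model, as the abstract suggests, is that adding agents scales the consumption of work without changing how work is injected or monitored.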


Proceedings of XII Advanced Computing and Analysis Techniques in Physics Research — PoS(ACAT08) | 2009

Large scale job management and experience in recent data challenges within the LHC CMS experiment

D. Evans; O. Gutsche; F. Van Lingen; Ajit Mohapatra; M. Miller; S. Metson; Ali Hassan; D. Mason; D. Hufnagel; S. Wakefield

From its conception the job management system has been distributed to increase scalability and robustness. The system consists of several applications (called ProdAgents) which manage Monte Carlo, reconstruction and skimming jobs on collections of sites within different Grid environments (OSG, NorduGrid, LCG) and submission systems such as GlideIn, local batch, etc. Production of simulated data in CMS mainly takes place on so-called Tier-2 resources (small to medium size computing centers); approximately 50% of the CMS Tier-2 resources are allocated to running simulation jobs, while the so-called Tier-1s (medium to large size computing centers with high-capacity tape storage systems) are mainly used for skimming and reconstructing detector data. During the last one and a half years the job management system has been adapted such that it can be configured to convert Data Acquisition (DAQ) / High Level Trigger (HLT) output from the CMS detector to the CMS data format and manage the real-time data stream from the experiment. Simultaneously, the system has been upgraded to handle the increasing scale of CMS production and to adapt to the procedures used by its operators. In this paper we discuss the current (high-level) architecture of ProdAgent, the experience of using this system in computing challenges, the feedback from these challenges, and future work, including migration to a set of core libraries to facilitate convergence between the different data management projects within CMS that deal with analysis, simulation, and initial reconstruction of real data. This migration is important, as it will decrease the code footprint used by these projects and increase the maintainability of the code base.
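The abstract above describes ProdAgent instances submitting jobs through different Grid environments (OSG, NorduGrid, LCG) and submission systems such as glide-ins or local batch. The following Python sketch illustrates, in a purely hypothetical way, how a dispatcher might map a site's environment to a submission backend; the backend classes, the site-to-environment table and the mapping itself are invented for the example and are not taken from the ProdAgent code base.

```python
# A purely hypothetical sketch of the idea described above: jobs are submitted
# through different backends (glide-in, local batch) depending on the Grid
# environment a site belongs to. None of the class names or the environment
# mapping are taken from the real ProdAgent code base.
from __future__ import annotations

from typing import Protocol


class SubmissionBackend(Protocol):
    def submit(self, site: str, job: str) -> str: ...


class GlideInBackend:
    def submit(self, site: str, job: str) -> str:
        # Stand-in for submission via a pilot / glide-in system.
        return f"glidein submitted {job} to {site}"


class LocalBatchBackend:
    def submit(self, site: str, job: str) -> str:
        # Stand-in for submission to a local batch scheduler.
        return f"local batch submitted {job} at {site}"


# Invented mapping of Grid environments to submission backends.
BACKENDS: dict[str, SubmissionBackend] = {
    "OSG": GlideInBackend(),
    "LCG": GlideInBackend(),
    "NorduGrid": LocalBatchBackend(),
}

# Invented mapping of sites to the environment they run in.
SITE_ENVIRONMENT = {
    "T2_US_Wisconsin": "OSG",
    "T2_DE_RWTH": "LCG",
    "T2_EE_Estonia": "NorduGrid",
}


def submit_job(site: str, job: str) -> str:
    """Pick the backend for the site's Grid environment and submit the job."""
    environment = SITE_ENVIRONMENT[site]
    return BACKENDS[environment].submit(site, job)


if __name__ == "__main__":
    print(submit_job("T2_US_Wisconsin", "minbias_simulation_0001"))
    print(submit_job("T2_EE_Estonia", "zmumu_skim_0042"))
```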


Journal of Physics: Conference Series | 2008

CMS Monte Carlo production in the WLCG computing Grid

J. M. Hernandez; P. Kreuzer; Ajit Mohapatra; N. De Filippis; S. De Weirdt; C. Hof; S. Wakefield; W. Guan; A. Khomitch; A. Fanfani; D. Evans; A. Flossdorf; J. Maes; P. van Mulders; I. Villella; A. Pompili; S. My; M. Abbrescia; G. Maggi; Giacinto Donvito; J. Caballero; J. A. Sanches; C. Kavka; F. van Lingen; W. Bacchi; G. Codispoti; P. Elmer; G. Eulisse; C. Lazaridis; S. Kalinin

Monte Carlo production in CMS has received a major boost in performance and scale since the previous CHEP06 conference. The production system has been re-engineered to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude: the system is capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day.
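One of the tasks listed above is job distribution according to the available resources. The snippet below is a minimal, hypothetical illustration of that idea: it splits a batch of jobs across sites in proportion to their free slots. The site names and slot counts are made up, and the real production system's scheduling is certainly more elaborate than this.

```python
# An illustrative reading of "job distribution according to the available
# resources": split a batch of jobs across sites in proportion to their free
# slots. The site names and slot counts are made up for this example.
def distribute_jobs(n_jobs: int, free_slots: dict[str, int]) -> dict[str, int]:
    total = sum(free_slots.values())
    if total == 0:
        return {site: 0 for site in free_slots}
    # Proportional share per site, rounded down.
    shares = {site: n_jobs * slots // total for site, slots in free_slots.items()}
    # Hand any remainder to the sites with the most free slots.
    leftover = n_jobs - sum(shares.values())
    for site in sorted(free_slots, key=free_slots.get, reverse=True):
        if leftover == 0:
            break
        shares[site] += 1
        leftover -= 1
    return shares


if __name__ == "__main__":
    slots = {"T2_US_Wisconsin": 1200, "T2_DE_RWTH": 800, "T2_ES_CIEMAT": 500}
    print(distribute_jobs(10000, slots))   # shares sum to 10000 across the sites
```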


International Conference on Computing in High Energy and Nuclear Physics (CHEP) | 2011

CMS distributed computing workflow experience

Jennifer K. Adelman-McCarthy; Oliver Gutsche; Jeffrey D. Haas; Harrison Prosper; V. Dutta; G. Gomez-Ceballos; Kristian Hahn; M. Klute; Ajit Mohapatra; Vincenzo Spinoso; D. Kcira; Julien Caudron; Junhui Liao; Arnaud Pin; N. Schul; Gilles De Lentdecker; Joseph McCartin; L. Vanelderen; X. Janssen; Andrey Tsyganov; D. Barge; Andrew Lahiff

The vast majority of the CMS computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The seven Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN and provide access to all recorded and simulated data for the Tier-2 sites via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail, along with the operational optimization of resource usage. In particular, the variation of the different workflows during the data-taking period of 2010, their efficiencies and latencies, and their impact on the delivery of physics results are discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure: half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data-taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures used to optimize the usage of available resources and discuss the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.
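The abstract describes a simple division of labour: re-reconstruction, skimming and reprocessing of simulated data at Tier-1, central MC production on the reserved half of the Tier-2 sites, and user analysis on the rest. The toy routing table below only restates that division in code; the workflow-type labels and tier names are chosen for illustration and do not come from the CMS workload management tools.

```python
# A toy restatement of the division of labour in the abstract: Tier-1 runs
# re-reconstruction, skimming and reprocessing of simulated data; half the
# Tier-2 sites run central MC production and the other half user analysis.
# The workflow-type labels and tier names are illustrative only.
TIER_FOR_WORKFLOW = {
    "rereco": "Tier-1",
    "skim": "Tier-1",
    "mc_reprocessing": "Tier-1",
    "mc_production": "Tier-2 (production half)",
    "user_analysis": "Tier-2 (analysis half)",
}


def route(workflow_type: str) -> str:
    """Return the tier where a workflow of the given type would run."""
    try:
        return TIER_FOR_WORKFLOW[workflow_type]
    except KeyError:
        raise ValueError(f"unknown workflow type: {workflow_type}") from None


if __name__ == "__main__":
    for workflow in ("rereco", "mc_production", "user_analysis"):
        print(f"{workflow} -> {route(workflow)}")
```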


Nuclear Physics B - Proceedings Supplements | 2008

CMS Monte Carlo production operations in a distributed computing environment

Ajit Mohapatra; C. Lazaridis; J. M. Hernandez; J. Caballero; C. Hof; S. Kalinin; A. Flossdorf; M. Abbrescia; N. De Filippis; Giacinto Donvito; G. Maggi; S. My; A. Pompili; S. Sarkar; J. Maes; P. Van Mulders; I. Villella; S. De Weirdt; G. H. Hammad; S. Wakefield; W. Guan; J.A.S. Lajas; P. Kreuzer; A. Khomich; P. Elmer; D. Evans; A. Fanfani; W. Bacchi; G. Codispoti; F. van Lingen

Collaboration


Dive into Ajit Mohapatra's collaboration.

Top Co-Authors

S. Wakefield | Imperial College London
C. Lazaridis | University of Wisconsin-Madison
G. Gomez-Ceballos | Massachusetts Institute of Technology
M. Klute | Massachusetts Institute of Technology
P. Elmer | Princeton University
C. Hof | RWTH Aachen University
P. Kreuzer | RWTH Aachen University