C. Kavka
University of Perugia
Publications
Featured research published by C. Kavka.
IEEE International Conference on High Performance Computing, Data and Analytics | 2007
D. Spiga; Stefano Lacaprara; W. Bacchi; Mattia Cinquilli; G. Codispoti; Marco Corvo; A. Dorigo; A. Fanfani; Federica Fanzago; F. M. Farina; M. Merlo; Oliver Gutsche; L. Servoli; C. Kavka
The CMS experiment will produce several Pbytes of data every year, to be distributed over many computing centers located in different countries. Analysis of this data will also be performed in a distributed way, using Grid infrastructure. CRAB (CMS Remote Analysis Builder) is a specific tool, designed and developed by the CMS collaboration, that allows end physicists transparent access to distributed data. Very limited knowledge of the underlying technicalities is required of the user. CRAB interacts with the local user environment, the CMS Data Management services and the Grid middleware, and is able to use WLCG, gLite and OSG middleware. CRAB has been in production and in routine use by end users since Spring 2004. It has been extensively used in studies to prepare the Physics Technical Design Report (PTDR) and in the analysis of reconstructed event samples generated during the Computing Software and Analysis Challenge (CSA06). This involved generating thousands of jobs per day at peak rates. In this paper we discuss the current implementation of CRAB, the experience with using it in production, and the plans to improve it in the immediate future.
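As an illustration of the kind of job splitting a tool like CRAB performs, the following is a minimal sketch of partitioning a dataset's file list into independent analysis jobs. The file names and the `files_per_job` parameter are hypothetical stand-ins, not CRAB's actual interface or catalogue.

```python
# Hypothetical sketch of splitting a dataset into independent analysis jobs,
# loosely inspired by the role CRAB plays; not actual CRAB code.

def split_into_jobs(files, files_per_job):
    """Partition a list of dataset files into per-job chunks."""
    return [files[i:i + files_per_job] for i in range(0, len(files), files_per_job)]

if __name__ == "__main__":
    # Placeholder file names standing in for entries from a data catalogue.
    dataset_files = [f"/store/data/file_{n:04d}.root" for n in range(10)]
    jobs = split_into_jobs(dataset_files, files_per_job=3)
    for job_id, chunk in enumerate(jobs):
        print(f"job {job_id}: {len(chunk)} files -> {chunk}")
```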
IEEE Nuclear Science Symposium | 2008
G. Codispoti; Mattia Cinquilli; A. Fanfani; Federica Fanzago; F. M. Farina; C. Kavka; Stefano Lacaprara; Vincenzo Miccio; D. Spiga; Eric Wayne Vaandering
Starting from 2008, the CMS experiment will produce several Pbytes of data every year, to be distributed over many computing centers located in different countries. The CMS computing model defines how the data has to be distributed and accessed in order to enable physicists to run their analyses efficiently over the data. The analysis will thus be performed in a distributed way using Grid infrastructure. CRAB (CMS Remote Analysis Builder) is a specific tool, designed and developed by the CMS collaboration, that allows end physicists transparent access to distributed data. CRAB interacts with the local user environment, the CMS Data Management services and the Grid middleware: it takes care of data and resource discovery; it splits the user task into several analysis processes (jobs) and distributes and parallelizes them over different Grid environments; and it takes care of process tracking and output handling. Very limited knowledge of the underlying technical details is required of the end user. The tool can be used as a direct interface to the computing system or can delegate the task to a server, which takes care of handling the user's jobs, providing services such as automatic resubmission in case of failures and notification to the user of the task status. Its current implementation is able to interact with the WLCG, gLite and OSG Grid middlewares. Furthermore, it allows in the very same way access to local data and batch systems such as LSF. CRAB has been in production and in routine use by end users since Spring 2004. It has been extensively used in studies to prepare the Physics Technical Design Report, in the analysis of reconstructed event samples generated during the Computing Software and Analysis Challenges, and in the preliminary cosmic ray data taking. The CRAB architecture and its usage inside the CMS community will be described in detail, as well as the current status and future development.
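The server-side handling described above (automatic resubmission on failure, task-status notification) can be pictured with a small hedged sketch. The `submit` function, the failure model and the retry limit below are invented for illustration and do not reflect the CRAB server's real interface.

```python
import random

# Hypothetical stand-in for submitting one job to a Grid or batch backend;
# a real submission would go through WLCG/gLite/OSG middleware or LSF.
def submit(job_id):
    return random.random() > 0.3  # True means the job succeeded

def run_with_resubmission(job_ids, max_attempts=3):
    """Resubmit failed jobs up to max_attempts, mimicking a server-side retry policy."""
    status = {}
    for job_id in job_ids:
        for attempt in range(1, max_attempts + 1):
            if submit(job_id):
                status[job_id] = f"done (attempt {attempt})"
                break
        else:
            status[job_id] = "failed"
    return status

if __name__ == "__main__":
    # Report the final task status back to the user, as the server would.
    print(run_with_resubmission(range(5)))
```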
Journal of Grid Computing | 2010
A. Fanfani; Anzar Afaq; Jose Afonso Sanches; Julia Andreeva; Giusepppe Bagliesi; L. A. T. Bauerdick; Stefano Belforte; Patricia Bittencourt Sampaio; K. Bloom; Barry Blumenfeld; D. Bonacorsi; C. Brew; Marco Calloni; Daniele Cesini; Mattia Cinquilli; G. Codispoti; Jorgen D’Hondt; Liang Dong; Danilo N. Dongiovanni; Giacinto Donvito; David Dykstra; Erik Edelmann; R. Egeland; P. Elmer; Giulio Eulisse; D Evans; Federica Fanzago; F. M. Farina; Derek Feichtinger; I. Fisk
The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community with thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations to prepare for CMS distributed analysis are presented, followed by the users' experience in their current analysis activities.
Journal of Physics: Conference Series | 2008
S. Metson; S. Belforte; Brian Bockelman; K Dziedziniewicz; R. Egeland; P. Elmer; Giulio Eulisse; D Evans; A. Fanfani; Derek Feichtinger; C. Kavka; V. E. Kuznetsov; F. van Lingen; Dave M Newbold; L. Tuura; S. Wakefield
We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. Like its collaborators, its computing resources are spread all over the globe and are accessed via the LHC Grid to run analysis, large-scale production and data transfer tasks. Due to the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added in HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and different practical knowledge of the systems to use the CMS computing system effectively. The CMS web tools project aims to provide a consistent interface to all these tools.
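As a toy illustration of exposing computing-system state over the web as described above, the sketch below serves a JSON job-status summary over HTTP using only the Python standard library. The endpoint path and the status numbers are invented for the example and do not reflect the actual CMS web tools framework.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented example data; a real service would query systems such as CRAB,
# ProdAgent or PhEDEx rather than a static dictionary.
JOB_STATUS = {"running": 120, "pending": 45, "done": 980, "failed": 12}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(JOB_STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Serve on localhost:8080; fetching http://localhost:8080/status returns the summary.
    HTTPServer(("localhost", 8080), StatusHandler).serve_forever()
```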
IEEE Transactions on Nuclear Science | 2009
G. Codispoti; Mattia Cinquilli; A. Fanfani; Federica Fanzago; F. M. Farina; C. Kavka; Stefano Lacaprara; Vincenzo Miccio; D. Spiga; Eric Wayne Vaandering
Beginning in 2009, the CMS experiment will produce several petabytes of data each year which will be distributed over many computing centres geographically distributed in different countries. The CMS computing model defines how the data is to be distributed and accessed to enable physicists to efficiently run their analyses over the data. The analysis will be performed in a distributed way using Grid infrastructure. CRAB (CMS remote analysis builder) is a specific tool, designed and developed by the CMS collaboration, that allows the end user to transparently access distributed data. CRAB interacts with the local user environment, the CMS data management services and with the Grid middleware; it takes care of the data and resource discovery; it splits the user's task into several processes (jobs) and distributes and parallelizes them over different Grid environments; it performs process tracking and output handling. Very limited knowledge of the underlying technical details is required of the end user. The tool can be used as a direct interface to the computing system or can delegate the task to a server, which takes care of the job handling, providing services such as automatic resubmission in case of failures and notification to the user of the task status. Its current implementation is able to interact with gLite and OSG Grid middlewares. Furthermore, with the same interface, it enables access to local data and batch systems such as load sharing facility (LSF). CRAB has been in production and in routine use by end users since Spring 2004. It has been extensively used in studies to prepare the Physics Technical Design Report, in the analysis of reconstructed event samples generated during the Computing Software and Analysis Challenges and in the preliminary cosmic ray data taking. The CRAB architecture and the usage inside the CMS community will be described in detail, as well as the current status and future development.
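The uniform access to gLite, OSG and local batch systems such as LSF mentioned above is, conceptually, a scheduler-abstraction layer: one submission interface with interchangeable backends. The sketch below illustrates that idea with hypothetical backend classes; these are not CRAB's actual scheduler plugins.

```python
# Hypothetical sketch of a common submission interface over different backends,
# illustrating one tool targeting Grid middleware or a local batch system
# through interchangeable schedulers; not CRAB's real plugin API.

class Scheduler:
    def submit(self, job):
        raise NotImplementedError

class GliteScheduler(Scheduler):
    def submit(self, job):
        return f"submitted {job} via gLite WMS"

class LSFScheduler(Scheduler):
    def submit(self, job):
        return f"submitted {job} to the local LSF queue"

def submit_task(jobs, scheduler: Scheduler):
    """Submit every job of a task through whichever backend was configured."""
    return [scheduler.submit(job) for job in jobs]

if __name__ == "__main__":
    jobs = [f"analysis_job_{i}" for i in range(3)]
    print(submit_task(jobs, GliteScheduler()))
    print(submit_task(jobs, LSFScheduler()))
```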
Journal of Physics: Conference Series | 2008
J. M. Hernandez; P. Kreuzer; Ajit Mohapatra; N D Filippis; S D Weirdt; C. Hof; S. Wakefield; W Guan; A. Khomitch; A. Fanfani; D. Evans; A. Flossdorf; J. Maes; P v Mulders; I. Villella; A. Pompili; S. My; M. Abbrescia; G. Maggi; Giacinto Donvito; J. Caballero; J A Sanches; C. Kavka; F v Lingen; W. Bacchi; G. Codispoti; P. Elmer; G. Eulisse; C. Lazaridis; S. Kalini
Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of the production scale by about an order of magnitude, capable of running on the order of ten thousand jobs in parallel and yielding more than two million events per day.
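The automated chain described above (submission, monitoring, merging, registration) can be pictured as a simple state-driven loop. The sketch below is a hypothetical illustration of that flow with invented functions; it is not the re-engineered production system's actual code.

```python
import random

# Hypothetical sketch of an automated production loop: submit jobs, poll their
# state, then merge the finished output and register it in the bookkeeping.

def submit_jobs(n):
    return {f"job_{i}": "running" for i in range(n)}

def poll(jobs):
    # Randomly finish running jobs, standing in for Grid status queries.
    for job, state in jobs.items():
        if state == "running" and random.random() > 0.5:
            jobs[job] = "done"
    return jobs

def merge_and_register(done_jobs):
    print(f"merged {len(done_jobs)} outputs and registered them in the bookkeeping")

if __name__ == "__main__":
    jobs = submit_jobs(5)
    while any(state == "running" for state in jobs.values()):
        jobs = poll(jobs)
    merge_and_register([j for j, s in jobs.items() if s == "done"])
```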
Nuclear Physics B - Proceedings Supplements | 2008
D Evans; A. Fanfani; C. Kavka; F. van Lingen; G. Eulisse; W. Bacchi; G. Codispoti; D. Mason; N. De Filippis; J. M. Hernandez; P. Elmer
Nuclear Physics B - Proceedings Supplements | 2008
D. Spiga; S. Lacaprara; W. Bacchi; Mattia Cinquilli; G. Codispoti; M. Corvo; A. Dorigo; A. Fanfani; Federica Fanzago; F. M. Farina; Oliver Gutsche; C. Kavka; M. Merlo; L. Servoli
Grid Computing | 2008
D. Spiga; S. Lacaprara; Mattia Cinquilli; G. Codispoti; Marco Corvo; A. Fanfani; A. Fanzago; F. M. Farina; C. Kavka; Vincenzo Miccio; E. W. Vaandering
Nuclear Physics B - Proceedings Supplements | 2008
Ajit Mohapatra; C. Lazaridis; J. M. Hernandez; J. Caballero; C. Hof; S. Kalinin; A. Flossdorf; M. Abbrescia; N. De Filippis; Giacinto Donvito; G. Maggi; S. My; A. Pompili; S. Sarkar; J. Maes; P. Van Mulders; I. Villella; S. De Weirdt; G. H. Hammad; S. Wakefield; W. Guan; J.A.S. Lajas; P. Kreuzer; A. Khomich; P. Elmer; D. Evans; A. Fanfani; W. Bacchi; G. Codispoti; F. van Lingen