M. Gheata
CERN
Publications
Featured research published by M. Gheata.
Journal of Physics: Conference Series | 2016
Guilherme Amadio; A Ananya; J. Apostolakis; A Arora; M Bandieramonte; A Bhattacharyya; C Bianchini; R. Brun; Philippe Canal; F. Carminati; L Duhem; Daniel Elvira; A. Gheata; M. Gheata; I Goulas; R Iope; Soon Yung Jun; G Lima; A Mohanty; T Nikitina; M Novak; Witold Pokorski; A. Ribon; R Sehgal; O Shadura; S Vallecorsa; S Wenzel; Yang Zhang
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models that take advantage of both SIMD and SIMT architectures. GeantV, a next-generation detector simulation framework, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors, including NVIDIA GPUs and the Intel Xeon Phi. These architectures differ considerably in the vectorization depth and the type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of a preliminary performance evaluation and physics validation are presented as well.
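The vectorization idea described above can be illustrated with a small sketch. This is not GeantV code: the "physics" formula is a toy stand-in, and NumPy arrays stand in for CPU vector units, but it shows the structure-of-arrays layout that lets one kernel call evaluate a whole basket of tracks at once.

```python
import numpy as np

# Illustrative sketch (not a real GeantV model): tracks are stored in
# structure-of-arrays form so one kernel call processes a whole basket,
# mapping naturally onto SIMD lanes.

def kernel_scalar(energy, density):
    # per-track evaluation (array-of-structures style)
    return 1.0 / (density * np.log1p(energy))

def kernel_basket(energies, densities):
    # identical formula evaluated over whole arrays (structure-of-arrays style)
    return 1.0 / (densities * np.log1p(energies))

energies = np.array([1.0, 10.0, 100.0])
densities = np.array([2.7, 2.7, 11.3])
vec = kernel_basket(energies, densities)
scl = np.array([kernel_scalar(e, d) for e, d in zip(energies, densities)])
assert np.allclose(vec, scl)  # both paths agree; only the memory layout differs
```

The point of the basket approach is exactly this equivalence: the physics result is unchanged, while the array form exposes the loop to the hardware's vector units.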
Journal of Physics: Conference Series | 2014
Ananya; A Alarcon Do Passo Suaide; C. Alves Garcia Prado; T. Alt; L. Aphecetche; N Agrawal; A Avasthi; M. Bach; R. Bala; G. G. Barnaföldi; A. Bhasin; J. Belikov; F. Bellini; L. Betev; T. Breitner; P. Buncic; F. Carena; S. Chapeland; V. Chibante Barroso; F Cliff; F. Costa; L Cunqueiro Mendez; Sadhana Dash; C Delort; E. Dénes; R. Divià; B. Doenigus; H. Engel; D. Eschweiler; U. Fuchs
ALICE (A Large Ion Collider Experiment) is a detector dedicated to the study of heavy-ion collisions, exploring the physics of strongly interacting nuclear matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). After the second long shutdown of the LHC, the ALICE experiment will be upgraded to make high-precision measurements of rare probes at low pT, which cannot be selected with a trigger and therefore require a very large sample of events recorded on tape. The online computing system will be completely redesigned to address the major challenge of sampling the full 50 kHz Pb-Pb interaction rate, increasing the present limit by a factor of 100. This upgrade will also include the continuous, un-triggered read-out of two detectors, the ITS (Inner Tracking System) and the TPC (Time Projection Chamber), producing a sustained throughput of 1 TB/s. This unprecedented data rate will be reduced by adopting an entirely new strategy in which calibration and reconstruction are performed online, and only the reconstruction results are stored while the raw data are discarded. This system, already demonstrated in production on TPC data since 2011, will be optimized for the online usage of reconstruction algorithms, implying a much tighter coupling between the online and offline computing systems. An R&D program has been set up to meet this huge challenge. The object of this paper is to present this program and its first results.
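The figures quoted above allow a quick back-of-the-envelope check of the per-collision data volume, using only the numbers in the abstract:

```python
# Quick arithmetic check using only the figures quoted in the abstract.
interaction_rate_hz = 50_000   # 50 kHz Pb-Pb interaction rate
throughput_bps = 1.0e12        # 1 TB/s sustained read-out from ITS + TPC

bytes_per_interaction = throughput_bps / interaction_rate_hz
assert bytes_per_interaction == 20e6  # roughly 20 MB of raw data per interaction
```

This per-interaction volume is what makes storing raw data impractical and motivates the online-reconstruction strategy in which only reconstruction results are kept.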
Journal of Physics: Conference Series | 2016
Guilherme Amadio; A Ananya; J. Apostolakis; A Arora; M Bandieramonte; A Bhattacharyya; C Bianchini; R. Brun; Philippe Canal; F. Carminati; L Duhem; Daniel Elvira; A. Gheata; M. Gheata; I Goulas; R Iope; Soon Yung Jun; G Lima; A Mohanty; T Nikitina; M Novak; Witold Pokorski; A. Ribon; R Sehgal; O Shadura; S Vallecorsa; S Wenzel; Yang Zhang
The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPUs, the Intel® Xeon Phi, Atom or ARM can no longer be ignored by CPU-bound HEP applications. The proof-of-concept GeantV prototype has been engineered mainly for CPUs with vector units, but a bridge to arbitrary accelerators was foreseen from the early stages. A software layer consisting of architecture- and technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, but also to formalize generic computation kernels that transparently use library- or device-specific constructs based on Vc, CUDA, Cilk Plus or Intel intrinsics. While the main goal of this approach is portable performance, it also insulates the core application and algorithms from the technology layer, keeping the application maintainable in the long term and resilient to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present a scalability and vectorization study conducted with Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results on running the GeantV transport kernel on GPUs.
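The backend layer described above can be sketched in miniature. This is a hypothetical illustration, not the GeantV API: in the real project the backends are C++ template parameters over Vc, CUDA, or intrinsics, but the same idea of writing one generic kernel against an abstract backend can be shown with two Python classes. All names below are invented.

```python
import math
import numpy as np

# Hypothetical sketch of the backend idea: one generic kernel is written
# against a "backend" that provides the math primitives; swapping the
# backend switches between scalar and vectorized execution.

class ScalarBackend:
    sqrt = staticmethod(math.sqrt)   # one value at a time

class VectorBackend:
    sqrt = staticmethod(np.sqrt)     # whole arrays at once

def safety_to_sphere(backend, x, y, z, radius):
    # generic geometry kernel: signed distance from point(s) to a sphere surface
    return backend.sqrt(x * x + y * y + z * z) - radius

# scalar call on a single point
s = safety_to_sphere(ScalarBackend, 3.0, 4.0, 0.0, 1.0)   # 4.0
# vectorized call on a basket of points
v = safety_to_sphere(VectorBackend, np.array([3.0, 0.0]),
                     np.array([4.0, 0.0]), np.zeros(2), 1.0)
assert s == 4.0
assert np.allclose(v, [4.0, -1.0])
```

The kernel body never mentions which backend it runs on, which is the insulation property the abstract claims: algorithms stay stable while the technology layer underneath can change.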
Journal of Physics: Conference Series | 2018
G Amadio; Ananya; J Apostolakis; M Bandieramonte; S Behera; A Bhattacharyya; R Brun; P Canal; F. Carminati; G Cosmo; V Drogan; L Duhem; D Elvira; K Genser; A. Gheata; M. Gheata; I Goulas; F Hariri; V Ivantchenko; S Jun; P Karpinski; G Khattak; D Konstantinov; H Kumawat; G Lima; J Martínez-Castro; P Mendez Lorenzo; A Miranda-Aguilar; K Nikolics; M Novak
In the fall of 2016, GeantV went through a thorough community evaluation of the project status and of its strategy for sharing the R&D results with the LHC experiments and the HEP simulation community in general. Following this discussion, GeantV embarked on an ambitious two-year road map aiming to deliver a beta version that has most of the final design and several performance features of the final product, partially integrated with some of the experiments' frameworks. The initial GeantV prototype has been updated to a vector-aware concurrent framework able to deliver high-density floating-point computation for the most performance-critical components, such as propagation in field and physics models. Electromagnetic physics models were adapted to the specific GeantV requirements, aiming for a full demonstration of shower physics performance in the alpha release at the end of 2017. We have revisited and formalized the GeantV user interfaces and helper protocols, allowing users to connect their code, providing recipes for efficient access to MC truth, and supporting the generation of user data in a concurrent environment.
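Generating user data in a concurrent environment typically follows a per-thread-accumulate, merge-at-end pattern. A minimal sketch of that pattern, with all class and method names invented (this is not the GeantV user interface):

```python
import threading

# Hypothetical sketch of a user-code hook in a concurrent transport
# framework: each worker thread fills its own thread-local data, and the
# results are merged once at the end, so user code needs no per-step locking.

class UserApplication:
    def __init__(self):
        self.local = threading.local()   # per-thread storage
        self.merged = []
        self.lock = threading.Lock()

    def begin_worker(self):
        self.local.hits = []             # each thread gets its own list

    def step(self, edep):
        # called once per simulation step, on the worker's own data
        self.local.hits.append(edep)

    def end_worker(self):
        with self.lock:                  # single synchronization point
            self.merged.extend(self.local.hits)

def worker(app, edeps):
    app.begin_worker()
    for e in edeps:
        app.step(e)
    app.end_worker()

app = UserApplication()
threads = [threading.Thread(target=worker, args=(app, [0.1 * i] * 3))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert len(app.merged) == 12  # 4 workers x 3 steps, nothing lost
```

Confining synchronization to the merge step is what keeps the per-step user callbacks cheap enough for a high-throughput transport loop.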
Journal of Physics: Conference Series | 2017
Guilherme Amadio; J. Apostolakis; M Bandieramonte; S.P. Behera; R. Brun; Philippe Canal; F. Carminati; G. Cosmo; L Duhem; Daniel Elvira; G. Folger; A. Gheata; M. Gheata; I Goulas; F. Hariri; Soon Yung Jun; D. Konstantinov; H. Kumawat; V. Ivantchenko; G Lima; T Nikitina; M Novak; Witold Pokorski; A. Ribon; R Sehgal; O Shadura; S. Vallecorsa; S Wenzel
GeantV is a complex system based on the interaction of different modules needed for detector simulation: transport of particles in fields, physics models simulating their interactions with matter, and a geometry modeler library for describing the detector, locating the particles, and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and massively parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of the cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching for the optimal set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massively parallel simulations. One of the described approaches introduces a specific multivariate analysis operator that can be used when evaluations of the fitness function are resource-expensive or time-consuming, in order to speed up the convergence of the black-box optimization problem.
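The black-box setting described above can be illustrated with a minimal genetic algorithm. This is only a sketch: the fitness function here is a known toy surrogate standing in for a point-wise simulation-throughput measurement, and the parameter names are invented; GeantV's actual tuning machinery is more involved.

```python
import random

# Minimal genetic-algorithm sketch for black-box parameter tuning.
# fitness() stands in for an expensive throughput measurement; here it is
# a toy function with a known peak at (basket_size=16, threads=8).

def fitness(params):
    b, t = params
    return -((b - 16) ** 2 + (t - 8) ** 2)

def evolve(pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    # random initial population over the parameter ranges
    pop = [(rng.uniform(0, 64), rng.uniform(0, 32)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # crossover (averaging) plus Gaussian mutation
            children.append(((a[0] + b[0]) / 2 + rng.gauss(0, 1),
                             (a[1] + b[1]) / 2 + rng.gauss(0, 1)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
assert fitness(best) > -25  # converged close to the known optimum (16, 8)
```

Each generation needs only point-wise fitness evaluations, which is exactly what makes the approach applicable when the objective is an opaque, expensive simulation run rather than an analytic function.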
Journal of Physics: Conference Series | 2011
C. Grigoras; F. Carminati; Olga Vladimirovna Datskova; S. Schreiner; Sehoon Lee; Jianlin Zhu; M. Gheata; A. Gheata; P. Saiz; L. Betev; Fabrizio Furano; Patricia Mendez Lorenzo; A. Grigoras; S. Bagnasco; Andreas Peters; Maria Dolores Saiz Santos
With the LHC and ALICE entering full operation and production mode, the amount of simulation, RAW data processing and end-user analysis tasks is increasing. The efficient management of all these tasks, which differ widely in lifecycle, amount of processed data and methods to analyze the end result, required the development and deployment of new tools in addition to the already existing Grid infrastructure. To facilitate the management of large-scale simulation and raw-data reconstruction tasks, ALICE has developed a production framework called the Lightweight Production Manager (LPM). LPM automatically submits jobs to the Grid based on triggers and conditions, for example the completion of a physics run. It follows the evolution of each job and publishes the results on the web for worldwide access by ALICE physicists. The framework is tightly integrated with the ALICE Grid framework AliEn. In addition to publishing job status, LPM provides a fully authenticated interface to the AliEn Grid catalogue for browsing and downloading files, and in the near future it will provide simple types of data analysis through ROOT plugins. The framework is also being extended to allow the management of end-user jobs.
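The trigger-driven submission pattern described above can be sketched as follows. All class, method, and run names are invented for illustration; the real LPM submits to the Grid through AliEn rather than keeping an in-memory table.

```python
# Hypothetical sketch of trigger-driven production management: a completed
# physics run triggers job submission, job status is tracked as it evolves,
# and the current state is exposed for web publication.

class ProductionManager:
    def __init__(self):
        self.jobs = {}

    def on_run_completed(self, run_number):
        # trigger: a finished run schedules a reconstruction job
        job_id = f"reco-{run_number}"
        self.jobs[job_id] = "SUBMITTED"
        return job_id

    def update_status(self, job_id, status):
        # follow the evolution of the job (e.g. RUNNING, DONE, ERROR)
        self.jobs[job_id] = status

    def publish(self):
        # snapshot of all job states, as would be rendered on the web page
        return dict(self.jobs)

lpm = ProductionManager()
jid = lpm.on_run_completed(244918)   # run number invented for the example
lpm.update_status(jid, "DONE")
assert lpm.publish() == {"reco-244918": "DONE"}
```

Decoupling the trigger (run completion) from the tracking and publication steps is what lets the same framework handle tasks with very different lifecycles.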
Journal of the Korean Physical Society | 2011
Sang In Bak; R. Brun; F. Carminati; Jong Seo Chai; A. Gheata; M. Gheata; Seung-Woo Hong; Y. Kadi; V. K. Manchanda; Tae-Sun Park; Claudio Tenreiro
Archive | 1997
M. M. Aggarwal; Yu. A. Alexandrov; Ramila Amirikas; N. P. Andreeva; F. A. Avetyan; S.K. Badyal; A. M. Bakich; E. S. Basova; Kuhulika Bhalla; Anju Bhasin; V. S. Bhatia; V. G. Bogdanov; V. Ya. Bradnova; V. I. Bubnov; X. Cai; I.Yu. Chasnikov; Guimin Chen; L.P. Chernova; M. M. Chernyavski; Ying Deng; Seema Dhamija; K. El Chenawi; Daniel Felea; S.-J. Feng; A. Sh. Gaitinov; Eberhard Ganssauge; Sten Garpman; S.G. Gerassimov; A. Gheata; M. Gheata