Publication


Featured research published by E. Karavakis.


Journal of Physics: Conference Series | 2012

Experiment Dashboard - a generic, scalable solution for monitoring of the LHC computing activities, distributed sites and services

Julia Andreeva; M Cinquilli; D Dieguez; Ivan Dzhunov; E. Karavakis; P Karhula; M Kenyon; Lukasz Kokoszkiewicz; M Nowotka; G Ro; P. Saiz; L Sargsyan; J. Schovancova; D Tuckett

The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years it has played a crucial role in the monitoring of the LHC computing activities, distributed sites and services, and it was one of the key elements during the commissioning of the distributed computing systems of the LHC experiments. The first years of data taking represented a serious test for the Experiment Dashboard in terms of functionality, scalability and performance, and given that usage of the Experiment Dashboard applications has been steadily increasing over time, it can be asserted that these objectives were fully accomplished.


Journal of Physics: Conference Series | 2010

CMS Dashboard Task Monitoring: A user-centric monitoring view

E. Karavakis; Julia Andreeva; Akram Khan; Gerhild Maier; Benjamin Gaidioz

We are now in a phase change of the CMS experiment in which people are turning more intensely to physics analysis and away from construction. This brings many challenging issues with respect to the monitoring of user analysis. Physicists must be able to monitor the execution status, application and grid-level messages of their tasks, which may run at any site within the CMS Virtual Organisation. The CMS Dashboard Task Monitoring project provides this information to individual analysis users by collecting and exposing a user-centric set of information about submitted tasks, including reasons for failure, distribution by site and over time, consumed time and efficiency. The development was user-driven, with physicists invited to test the prototype in order to gather further requirements and identify weaknesses in the application.
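
The abstract describes a user-centric, per-task roll-up of job-level information. Purely as an illustration, and not the actual Dashboard implementation, the following Python sketch shows how per-job records (with a hypothetical JobRecord layout) could be aggregated into such a summary of failure reasons, per-site distribution and CPU efficiency.

```python
# Illustrative sketch (not the Dashboard code): summarising a user's task
# from per-job records of the kind Task Monitoring exposes.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class JobRecord:                      # hypothetical record layout
    task: str
    site: str
    status: str                       # e.g. "finished" or "failed"
    failure_reason: Optional[str]
    cpu_time: float                   # seconds
    wall_time: float                  # seconds

def summarise_task(jobs: list) -> dict:
    """Build a user-centric summary: outcome counts, failure reasons,
    per-site distribution and CPU/wall-clock efficiency."""
    by_status = Counter(j.status for j in jobs)
    by_site = Counter(j.site for j in jobs)
    failures = Counter(j.failure_reason for j in jobs if j.failure_reason)
    cpu = sum(j.cpu_time for j in jobs)
    wall = sum(j.wall_time for j in jobs)
    return {
        "jobs_by_status": dict(by_status),
        "jobs_by_site": dict(by_site),
        "failure_reasons": dict(failures),
        "efficiency": cpu / wall if wall else 0.0,
    }
```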


Journal of Grid Computing | 2010

Experiment Dashboard for Monitoring Computing Activities of the LHC Virtual Organizations

Julia Andreeva; Max Boehm; Benjamin Gaidioz; E. Karavakis; Lukasz Kokoszkiewicz; Elisa Lanciotti; Gerhild Maier; William Ollivier; Ricardo Rocha; P. Saiz; Irina Sidorova

The Large Hadron Collider (LHC) is preparing for data taking at the end of 2009. The Worldwide LHC Computing Grid (WLCG) provides data storage and computational resources for the high energy physics community. Operating the heterogeneous WLCG infrastructure, which integrates 140 computing centers in 33 countries all over the world, is a complicated task. Reliable monitoring is one of the crucial components of the WLCG for providing the functionality and performance that is required by the LHC experiments. The Experiment Dashboard system provides monitoring of the WLCG infrastructure from the perspective of the LHC experiments and covers the complete range of their computing activities. This work describes the architecture of the Experiment Dashboard system and its main monitoring applications and summarizes current experiences by the LHC experiments, in particular during service challenges performed on the WLCG over the last years.


Journal of Physics: Conference Series | 2012

Implementing data placement strategies for the CMS experiment based on a popularity model

F Barreiro Megino; Mattia Cinquilli; D. Giordano; E. Karavakis; M. Girone; N Magini; V Mancinelli; D. Spiga

During the first two years of data taking, the CMS experiment collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the Worldwide LHC Computing Grid. Given the increasing data volume that has to be stored and efficiently analyzed, it is a challenge for several LHC experiments to optimize and automate their data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily computing operations. Building on previous experience acquired by ATLAS, we have developed the CMS Popularity Service, which tracks file accesses and user activity on the grid and serves as the foundation for the evolution of CMS data placement. A fully automated, popularity-based site-cleaning agent has been deployed to scan Tier-2 sites that are reaching their space quota and to suggest obsolete, unused data that can be safely deleted without disrupting analysis activity. Future work will demonstrate dynamic data placement functionality based on this popularity service and integrate it into the data and workload management systems; as a consequence, the pre-placement of data will be minimized and additional replication of hot datasets will be requested automatically. This paper gives an insight into the development, validation and production process and analyzes how the framework has influenced resource optimization and daily operations in CMS.
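
To make the cleaning idea concrete, here is a minimal sketch, assuming a hypothetical replica record fed by the popularity service, of how a site-cleaning heuristic could select unused data at a Tier-2 approaching its quota. It is not the CMS production agent, only an illustration of the pattern the abstract describes.

```python
# Illustrative sketch of a popularity-driven cleaning heuristic (not the
# CMS production agent): for a Tier-2 approaching its quota, propose
# unused replicas for deletion until usage falls below a target.
from dataclasses import dataclass

@dataclass
class Replica:                      # hypothetical fields
    dataset: str
    size_tb: float
    accesses_last_90d: int          # taken from the popularity service

def cleaning_candidates(replicas, used_tb, quota_tb, target_fraction=0.8):
    """Suggest unused replicas whose deletion brings usage below target."""
    to_free = used_tb - target_fraction * quota_tb
    if to_free <= 0:
        return []
    unused = [r for r in replicas if r.accesses_last_90d == 0]
    unused.sort(key=lambda r: r.size_tb, reverse=True)   # free space fastest
    suggestions, freed = [], 0.0
    for r in unused:
        if freed >= to_free:
            break
        suggestions.append(r)
        freed += r.size_tb
    return suggestions
```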


Journal of Physics: Conference Series | 2014

Common Accounting System for Monitoring the ATLAS Distributed Computing Resources

E. Karavakis; Julia Andreeva; S. Campana; Stavro Gayazov; S Jezequel; P. Saiz; L Sargsyan; J. Schovancova; I Ueda

This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.


Proceedings of EGI Community Forum 2012 / EMI Second Technical Conference — PoS(EGICF12-EMITC2) | 2012

hBrowse - Generic framework for hierarchical data visualisation

Lukasz Kokoszkiewicz; Julia Andreeva; Ivan Dzhunov; E. Karavakis; M. Lamanna; Jakub T. Moscicki; Laura Sargsyan

The hBrowse framework is a client-side JavaScript application that can be adjusted and implemented according to each specific community's needs. It utilises the latest web technologies (e.g. the jQuery framework, the Highcharts plotting library and the DataTables jQuery plugin) and capabilities that modern browsers expose to the user. It can be combined with any kind of server as long as it can send JSON-formatted data via the HTTP protocol. Each part of this software (dynamic tables overlay, user selection etc.) is in fact a separate plugin which can be used separately from the main application. The Experiment Dashboard framework utilises hBrowse to provide generic job monitoring applications for the ATLAS and CMS Large Hadron Collider (LHC) Virtual Organisations (VOs). hBrowse is also used in mini-Dashboard, which is part of the EGI Introductory Package and is used to monitor the status of jobs submitted through the Ganga or Diane submission systems.
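
Since hBrowse only requires a server that can return JSON over HTTP, the following small sketch shows one possible backend in Python's standard library; the endpoint, field names and data are hypothetical, and this is not part of hBrowse or the Experiment Dashboard.

```python
# Illustrative sketch: a minimal server returning JSON job records over
# HTTP, the only contract a client-side framework of this kind requires.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

JOBS = [
    {"id": 1, "site": "CERN-PROD", "status": "finished"},
    {"id": 2, "site": "FZK-LCG2", "status": "failed"},
]

class JobsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the job list as JSON for any GET request.
        body = json.dumps({"jobs": JOBS}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), JobsHandler).serve_forever()
```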


Journal of Physics: Conference Series | 2010

Job monitoring on the WLCG scope: Current status and new strategy

Julia Andreeva; Max Boehm; Sergey Belov; James Casey; Frantisek Dvorak; Benjamin Gaidioz; E. Karavakis; O. Kodolova; Lukasz Kokoszkiewicz; Ales Krenek; Elisa Lanciotti; Gerhild Maier; Milas Mulac; Daniele Filipe Rocha Da Cunha Rodrigues; Ricardo Rocha; P. Saiz; Irina Sidorova; Jirí Sitera; Elena Tikhonenko; Kumar Vaibhav; Michal Vocu

Job processing and data transfer are the main computing activities on the WLCG infrastructure. Reliable monitoring of job processing on the WLCG scope is a complicated task due to the complexity of the infrastructure itself and the diversity of the job submission methods currently in use. The paper describes the current status and the new strategy for job monitoring on the WLCG scope, covering the primary information sources, the publishing of job status changes, the transport mechanism and visualisation.
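
As a purely illustrative sketch of the "job status changes publishing" step, the snippet below defines a hypothetical status-change message and serialises it for a transport callback; the field names are not the actual WLCG message schema.

```python
# Illustrative sketch of a job status-change message as it might be
# published to a transport layer (hypothetical fields, not the real schema).
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class JobStatusChange:
    job_id: str
    vo: str                 # e.g. "atlas", "cms"
    site: str
    old_status: str
    new_status: str
    timestamp: float

def publish(msg: JobStatusChange, send) -> None:
    """Serialise the status change and hand it to a transport callback."""
    send(json.dumps(asdict(msg)))

# Usage: publish(JobStatusChange("job-42", "cms", "CERN-PROD",
#                                "running", "finished", time.time()), print)
```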


Journal of Physics: Conference Series | 2014

Processing of the WLCG monitoring data using NoSQL

Julia Andreeva; A Beche; Sergey Belov; I Dzhunov; I Kadochnikov; E. Karavakis; P. Saiz; J. Schovancova; D Tuckett

The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
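
To illustrate the kind of workload being moved to NoSQL back-ends, here is a minimal, self-contained Python sketch of the typical aggregation step: bucketing raw monitoring events into per-site, per-time-bin statistics. It is only an analogy for the processing pattern, not the Experiment Dashboard code, and the event fields are hypothetical.

```python
# Illustrative sketch: bucketing raw transfer/job events into per-site,
# per-10-minute statistics, the pattern typically pushed down to a NoSQL
# store's map-reduce or aggregation layer.
from collections import defaultdict

BIN_SECONDS = 600  # 10-minute bins

def aggregate(events):
    """events: iterable of dicts with 'site', 'timestamp' and 'bytes' keys."""
    stats = defaultdict(lambda: {"events": 0, "bytes": 0})
    for e in events:
        time_bin = int(e["timestamp"]) // BIN_SECONDS * BIN_SECONDS
        key = (e["site"], time_bin)
        stats[key]["events"] += 1
        stats[key]["bytes"] += e["bytes"]
    return dict(stats)

# Usage:
# aggregate([{"site": "CERN-PROD", "timestamp": 1400000123, "bytes": 10**9}])
```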


Journal of Physics: Conference Series | 2012

ATLAS job monitoring in the Dashboard Framework

Julia Andreeva; S. Campana; E. Karavakis; Lukasz Kokoszkiewicz; P. Saiz; L Sargsyan; J. Schovancova; D Tuckett

Monitoring of the large-scale data processing of the ATLAS experiment includes monitoring of production and user analysis jobs. The Experiment Dashboard provides a common job monitoring solution, which is shared by the ATLAS and CMS experiments and includes an accounting portal as well as real-time monitoring. Dashboard job monitoring for ATLAS combines information from the PanDA job processing database, the Production System database and monitoring information from jobs submitted through GANGA to the Workload Management System (WMS) or to local batch systems. Usage of Dashboard-based job monitoring applications will decrease the load on the PanDA database and overcome the scale limitations in PanDA monitoring caused by the short job rotation cycle in the PanDA database. Aggregation of the task/job metrics from different sources provides a complete view of job processing activity in the ATLAS scope.
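
Combining job information from several submission systems implies mapping each source's states onto a common vocabulary before aggregation. The sketch below illustrates that idea only; the state names and mappings are hypothetical and not the Dashboard's actual schema.

```python
# Illustrative sketch (not the Dashboard implementation): normalising job
# states reported by different sources (e.g. PanDA, GANGA/WMS) into one
# common vocabulary before aggregation.
STATE_MAP = {
    "panda": {"activated": "pending", "running": "running",
              "finished": "finished", "failed": "failed"},
    "ganga": {"submitted": "pending", "running": "running",
              "completed": "finished", "failed": "failed"},
}

def normalise(source: str, raw_state: str) -> str:
    """Map a source-specific state onto the common vocabulary."""
    try:
        return STATE_MAP[source][raw_state]
    except KeyError:
        raise ValueError(f"unknown state {raw_state!r} from {source!r}")

def merge(records):
    """records: iterable of (source, job_id, raw_state) triples."""
    return {job_id: normalise(source, raw)
            for source, job_id, raw in records}
```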


Journal of Physics: Conference Series | 2017

Consolidating WLCG topology and configuration in the Computing Resource Information Catalogue

Maria Alandes; Maria Dimou; E. Karavakis; A. Sciaba; Andrea Valassi; Stephan Lammel; Julia Andreeva; S. Campana; Jose Flix; A. Forti; G. Bagliesi; Maarten Litmaath; Stefano Belforte; Alexey Anisenkov; A. Di Girolamo

The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information is scattered over multiple generic information sources such as GOCDB, OIM, BDII and experiment-specific information systems. Such a model does not allow topology and configuration information to be validated easily. Moreover, information in the various sources is not always consistent. Finally, the evolution of computing technologies introduces new challenges: experiments rely more and more on opportunistic resources, which are by their nature more dynamic and should also be well described in the WLCG information system. This contribution describes the new WLCG configuration service CRIC (Computing Resource Information Catalogue), which collects information from various information providers, performs validation and provides a consistent set of UIs and APIs to the LHC VOs for service discovery and usage configuration. The main requirements for CRIC are simplicity, agility and robustness. CRIC should be quickly adaptable to new types of computing resources and new information sources, and should allow new data structures to be implemented easily, following the evolution of the computing models and operations of the experiments.
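
The core task the abstract assigns to CRIC is consolidating records from several providers and validating their consistency. The following sketch shows one simple way such a consolidation-plus-conflict-check could look; the record layout is hypothetical and this is not the CRIC implementation.

```python
# Illustrative sketch (not CRIC itself): combining service records from
# several information providers and flagging attributes on which they
# disagree, before exposing one consolidated view.
from collections import defaultdict

def consolidate(provider_records):
    """provider_records: dict mapping provider name (e.g. 'GOCDB', 'OIM',
    'BDII') to a list of {'endpoint': ..., attribute: value, ...} dicts."""
    merged = defaultdict(dict)      # endpoint -> attribute -> value
    conflicts = []
    for provider, records in provider_records.items():
        for rec in records:
            endpoint = rec["endpoint"]
            for attr, value in rec.items():
                if attr == "endpoint":
                    continue
                previous = merged[endpoint].get(attr)
                if previous is not None and previous != value:
                    # Record the disagreement for later validation/review.
                    conflicts.append((endpoint, attr, previous, value, provider))
                merged[endpoint][attr] = value
    return dict(merged), conflicts
```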
