Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rafael Mayo-García is active.

Publication


Featured research published by Rafael Mayo-García.


International Conference on High Performance Computing and Simulation | 2012

Performance improvements for the neoclassical transport calculation on Grid by means of pilot jobs

A. J. Rubio-Montero; Francisco Castejón; E. Huedo; Manuel Rodríguez-Pascual; Rafael Mayo-García

Neoclassical transport is a lower limit on the overall transport in plasmas confined in fusion devices, whether stellarators or tokamaks. Moreover, compiling a vast database of monoenergetic and transport coefficients is very useful for coupling different codes, which can use those values as input data. The DKEsG application is able to obtain such parameters on Grid infrastructures. Since a large number of regular jobs are needed to fill the aforementioned database, a fast and robust execution scheme is necessary. For this purpose, a new DRMAA-enabled DKEsG version that makes use of a generic pilot-job platform is employed, avoiding the most significant overheads of standard Grid middleware. This newly developed mechanism is suitable for many other scientific applications involving high-throughput calculations.
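
The pilot-job platform itself is not detailed in the abstract; the sketch below only illustrates the DRMAA bulk-submission pattern it alludes to, assuming the drmaa-python bindings and a hypothetical pilot_agent.sh payload that pulls DKEsG tasks from a queue.

```python
# Minimal sketch of DRMAA-based pilot-job submission. Assumptions: the
# drmaa-python bindings and a hypothetical pilot_agent.sh payload that
# pulls real DKEsG tasks from a task queue, as pilot frameworks usually do.
import drmaa

N_PILOTS = 10  # hypothetical size of the pilot pool

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "./pilot_agent.sh"        # hypothetical pilot payload
    jt.jobEnvironment = {"TASK_QUEUE": "dkesg"}  # hypothetical queue name
    # One bulk submission places the whole pilot pool at once, avoiding
    # the per-job middleware overhead the paper identifies.
    job_ids = session.runBulkJobs(jt, 1, N_PILOTS, 1)
    session.deleteJobTemplate(jt)
    # Wait for all pilots; each pilot runs many short DKEsG tasks.
    session.synchronize(job_ids, drmaa.Session.TIMEOUT_WAIT_FOREVER, True)
```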


Cluster Computing and the Grid | 2016

The Latin American Giant Observatory: A Successful Collaboration in Latin America Based on Cosmic Rays and Computer Science Domains

H. Asorey; Luis A. Núñez; M. Suárez-Durán; L. Torres-Niño; Manuel Rodríguez-Pascual; A.J. Rubio-Montero; Rafael Mayo-García

In this work, the strategy of the Latin American Giant Observatory (LAGO) to build a Latin American collaboration is presented. By installing cosmic-ray detectors all around the continent, from Mexico to Antarctica, this collaboration is forming a community that embraces both high-energy physicists and computer scientists: the measured data must be analytically processed, and a priori and a posteriori simulations representing the effects of the radiation must be performed. To carry out these calculations, customized codes have been implemented by the collaboration. Given the huge amount of data emerging from this network of sensors and from the computational simulations performed on a diversity of computing architectures and e-infrastructures, an effort is being carried out to catalogue and preserve the data produced by the water-Cherenkov detector network and the complete LAGO simulation workflow that characterizes each site. Metadata, Persistent Identifiers and the facilities of the LAGO Data Repository are described in this work, jointly with the simulation codes used. These initiatives allow researchers to produce and find data and to use them directly in running codes by means of a Science Gateway that provides access to different cluster, Grid and Cloud infrastructures worldwide.
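
LAGO's actual catalogue schema is not given here; the sketch below only illustrates the idea of attaching metadata and a persistent identifier to a measured or simulated dataset. The field names and the handle prefix are hypothetical, not LAGO's real format.

```python
# Illustrative sketch of cataloguing a detector or simulation dataset with
# metadata and a PID-like identifier. Field names and the handle prefix
# are hypothetical placeholders, not LAGO's actual schema.
import json
import uuid
from datetime import datetime, timezone

def make_catalog_entry(site, detector, data_path):
    """Build a metadata record with a freshly minted identifier."""
    return {
        "pid": f"hdl:12345/{uuid.uuid4()}",  # hypothetical handle prefix
        "site": site,                        # measurement site name
        "detector": detector,                # water-Cherenkov detector id
        "created": datetime.now(timezone.utc).isoformat(),
        "data": data_path,                   # location of the dataset
    }

entry = make_catalog_entry("Chacaltaya", "WCD-01", "/data/wcd01/run042.h5")
print(json.dumps(entry, indent=2))
```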


2014 Annual Global Online Conference on Information and Computer Technology | 2014

A Fault Tolerant Workflow for Reproducible Research

Manuel Rodríguez-Pascual; A. J. Rubio-Montero; Rafael Mayo-García; Christos Kanellopoulos; Ognjen Prnjat; Diego Darriba; David Posada

In this work, the authors present a set of tools to overcome the problem of creating and executing distributed applications on dynamic environments in a resilient way, while also ensuring the reproducibility of the performed experiments. The objective is to provide a portable, unattended and fault-tolerant set of tools, encapsulating the infrastructure-dependent operations away from application developers and users and allowing them to perform experiments based on open-access data repositories. In this way, users can seamlessly search for and later access datasets that are automatically retrieved as input data for a code already integrated in the proposed workflow. Such a search is based on metadata standards and relies on Persistent Identifiers (PIDs) to identify specific repositories. The applications profit from Distributed Toolbox, a newly created framework devoted to the creation and execution of distributed applications, which includes tools for unattended cluster and Grid execution with full fault tolerance. By decoupling the definition of remote tasks from their execution and control, the development, execution and maintenance of distributed applications is significantly simplified with respect to previous solutions, increasing their robustness and allowing them to run on different computational platforms with little effort. The integration with open-access databases and the use of PIDs as long-lasting references ensure that the data related to the experiments will persist, closing a complete research circle of data access / processing / storage / dissemination of results.
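
Distributed Toolbox's API is not shown in the abstract, so the following sketch only illustrates the design idea it describes: a remote task is defined once, independently of any infrastructure, and a separate executor handles placement and retries on failure. Every name here is hypothetical.

```python
# Sketch of decoupling task definition from execution and control: the
# task states *what* to run; the executor decides *where* and retries on
# failure. All names are hypothetical, not the Distributed Toolbox API.
from dataclasses import dataclass
import subprocess

@dataclass
class RemoteTask:
    command: list[str]  # what to run, independent of any infrastructure
    input_pid: str      # persistent identifier of the input dataset

def run_with_retries(task: RemoteTask, backends: list[list[str]],
                     max_attempts: int = 3) -> bool:
    """Try the task on the available backends until one succeeds."""
    for attempt in range(max_attempts):
        backend = backends[attempt % len(backends)]  # rotate on failure
        result = subprocess.run(backend + task.command)
        if result.returncode == 0:
            return True   # task completed somewhere; caller records where
    return False          # all attempts failed; caller may reschedule

task = RemoteTask(command=["./analysis", "--input", "hdl:12345/abc"],
                  input_pid="hdl:12345/abc")
ok = run_with_retries(task, backends=[["ssh", "cluster-a"], ["ssh", "cluster-b"]])
```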


The Journal of Supercomputing | 2018

On the modelling of optimal coordinated checkpoint period in supercomputers

José A. Moríñigo; Manuel Rodríguez-Pascual; Rafael Mayo-García

This work revises current assumptions adopted in checkpointing modelling and evaluates their impact on the predicted optimal coordinated single-level checkpoint period. An accurate a priori assessment of the optimal checkpoint period for a given computing facility is necessary, since the period drives the overhead incurred by frequent checkpointing and hence the drop in steady-state resource availability. The present study discusses the impact of the order of approximation used in single-level coordinated checkpoint modelling and extends previous results on the optimal checkpoint period to explore the effects of the checkpoint rate on cluster performance under total-execution-time and energy-consumption policies, as well as in terms of resource availability. A consequence of a prescribed checkpoint rate with current technology is a critical cluster size above which the attained availability is too poor for the cluster to remain a cost-effective platform. Thus, some guidelines for cluster sizing are indicated.
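
The paper's model is higher-order, but the classical first-order baseline it revisits is the Young/Daly approximation, T_opt ≈ sqrt(2 · C · MTBF) for a checkpoint cost C. A quick sketch under that assumption (the cluster figures below are illustrative, not from the paper):

```python
# First-order estimates of the optimal coordinated checkpoint period.
# These are the classical Young/Daly approximations that the paper's
# higher-order model refines; the numbers below are illustrative only.
import math

def young_period(checkpoint_cost: float, mtbf: float) -> float:
    """Young (1974): T_opt ~ sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def daly_period(checkpoint_cost: float, mtbf: float) -> float:
    """Daly's first-order refinement: subtract the checkpoint cost."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf) - checkpoint_cost

# Hypothetical system: 10-minute checkpoints, 24 h effective MTBF.
C, M = 600.0, 24 * 3600.0              # seconds
print(young_period(C, M) / 60.0)       # ~170 min between checkpoints
print(daly_period(C, M) / 60.0)        # ~160 min with the refinement
```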


IEEE International Conference on High Performance Computing, Data and Analytics | 2017

Benchmarking Performance: Influence of Task Location on Cluster Throughput

Manuel Rodríguez-Pascual; José A. Moríñigo; Rafael Mayo-García

A variety of properties characterizes the execution of scientific applications in HPC environments (CPU-, I/O- or memory-bound behaviour, execution time, degree of parallelism, dedicated computational resources, strong- and weak-scaling behaviour, to cite some). As a result, scheduling decisions have a great influence on application performance, making it difficult to exploit HPC resources optimally with cost-effective strategies. In this work, the NAS Parallel Benchmarks have been executed systematically on a modern state-of-the-art cluster and an older one to identify dependencies between the mapping of MPI tasks and the speedup or resource occupation. A full characterization with micro-benchmarks has been performed, followed by an examination of how different task-grouping strategies and cluster setups affect job execution time and infrastructure throughput. As a result, criteria for cluster setup arise, linked to maximizing the performance of individual jobs, maximizing total cluster throughput, or achieving better scheduling. It is expected that this work will be of interest for the design of scheduling policies and useful to HPC administrators.
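
The abstract does not name the tooling behind these mapping experiments; the sketch below shows one way such a sweep could be driven, assuming Open MPI's --map-by option and a hypothetical locally built NPB binary.

```python
# Sketch of sweeping MPI task placement for a NAS Parallel Benchmarks
# binary. Open MPI's --map-by flag is real; the binary path and task
# count are hypothetical placeholders for a local build.
import subprocess
import time

NPB_BINARY = "./bin/cg.C.x"   # hypothetical NPB build (CG kernel, class C)
NP = 64                       # hypothetical number of MPI tasks

for mapping in ("core", "socket", "node"):
    start = time.perf_counter()
    subprocess.run(["mpirun", "-np", str(NP), "--map-by", mapping, NPB_BINARY],
                   check=True)
    elapsed = time.perf_counter() - start
    # Comparing wall times across mappings exposes placement effects.
    print(f"--map-by {mapping}: {elapsed:.1f} s")
```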


IEEE International Conference on High Performance Computing, Data and Analytics | 2016

Adapting Reproducible Research Capabilities to Resilient Distributed Calculations

Manuel Rodríguez-Pascual; Christos Kanellopoulos; A. J. Rubio-Montero; Diego Darriba; Ognjen Prnjat; David Posada; Rafael Mayo-García

Nowadays, computing calculations are becoming more and more demanding given the huge pool of resources available. This demand must be satisfied in terms of computational efficiency and resilience, both of which are compromised on distributed and heterogeneous platforms. Moreover, the data obtained are often either reused by other researchers or recalculated. In this work, a set of tools to overcome the problem of creating and executing fault-tolerant distributed applications on dynamic environments is presented. This set also ensures the reproducibility of the performed experiments by providing a portable, unattended and resilient framework that encapsulates the infrastructure-dependent operations away from application developers and users, allowing them to perform experiments based on Open Access data repositories. In this way, users can seamlessly search for and later access datasets that are automatically retrieved as input data for a code already integrated in the proposed workflow. Such a search is based on metadata standards and relies on Persistent Identifiers (PIDs) to identify specific repositories. The applications profit from Distributed Toolbox, a framework devoted to the creation and execution of distributed applications, which includes tools for unattended cluster and Grid execution with full fault tolerance. By decoupling the definition of remote tasks from their execution and control, the development, execution and maintenance of distributed applications is significantly simplified with respect to previous solutions, increasing their robustness and allowing them to run on different computational platforms with little effort. The integration with Open Access databases and the use of PIDs as long-lasting references ensure that the data related to the experiments will persist, closing a complete research circle of data access / processing / storage / dissemination of results.


IEEE International Conference on High Performance Computing, Data and Analytics | 2016

Enhancing Energy Production with Exascale HPC Methods

José J. Camata; José María Cela; Danilo Costa; Alvaro Lga Coutinho; Daniel Fernández-Galisteo; Carmen Jiménez; Vadim Kourdioumov; Marta Mattoso; Rafael Mayo-García; Thomas Miras; José A. Moríñigo; Jorge Navarro; Philippe Olivier Alexandre Navaux; Daniel de Oliveira; Manuel Rodríguez-Pascual; Vítor Silva; Renan Souza; Patrick Valduriez

High Performance Computing (HPC) resources have become the key actor in achieving more ambitious challenges in many disciplines. In this step beyond, an explosion of available parallelism and the use of special-purpose processors are crucial. With such a goal, the HPC4E project applies new exascale HPC techniques to energy-industry simulations, customizing them where necessary and going beyond the state of the art in the HPC exascale simulations required for different energy sources. In this paper, a general overview of these methods is presented, as well as some specific preliminary results.


2015 IST-Africa Conference | 2015

Enabling intercontinental e-Infrastructures - a case for Africa

Ognjen Prnjat; Bruce Becker; R. Barbera; Christos Kanellopoulos; Kostas Koumantaros; Rafael Mayo-García; F. Ruggieri

CHAIN-REDS, an EU co-funded project, focuses on promoting and supporting technological and scientific collaboration across the different e-Infrastructures established and operated on various continents. The project implemented a Regional Operations Centre (ROC) model for enabling Grid computing interoperation across continents, and an operational ROC has been set up for Africa. Moreover, the project operates a global Cloud federation test-bed to which nascent African Cloud sites also contribute. Finally, the project is supporting a real-life use case from Africa, that of APHRC, which uses its data e-Infrastructure services.


Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | 2016

The data acquisition system of the Latin American Giant Observatory (LAGO)

M. Sofo Haro; L.H. Arnaldi; W. Alvarez; C. Alvarez; C. Araujo; O. Areso; H. Arnaldi; H. Asorey; M. Audelo; H. Barros; X. Bertou; M. Bonnett; R. Calderon; M. Calderon; A. Campos-Fauth; A. Carramiñana; E. Carrasco; E. Carrera; D. Cazar; E. Cifuentes; D. Cogollo; R. Conde; J. Cotzomi; S. Dasso; A. R. B. de Castro; J. De La Torre; R. De León; A. Estupiñan; A. Galindo; L. Garcia


Archive | 2014

A CHAIN-REDS solution for accessing computational services

Roberto Barbera; Bruce Becker; Carla Carrubba; Giuseppina Inserra; Salma Jalife Villalón; Christos Kanellopoulos; Kostas Koumantaros; Rafael Mayo-García; Luis Núñez de Villavicencio; Ognjen Prnjat; Rita Ricceri; Manuel Rodríguez-Pascual; A. J. Rubio-Montero; F. Ruggieri

Collaboration


Dive into Rafael Mayo-García's collaborations.

Top Co-Authors

Manuel Rodríguez-Pascual (Complutense University of Madrid)
A. J. Rubio-Montero (Complutense University of Madrid)
Ognjen Prnjat (Greek Research and Technology Network)
H. Asorey (National Scientific and Technical Research Council)
Christos Kanellopoulos (Greek Research and Technology Network)
E. Cifuentes (Universidad de San Carlos de Guatemala)
W. Alvarez (Universidad de San Carlos de Guatemala)
M. Calderon (National Technical University)
A. Campos-Fauth (State University of Campinas)
E. Carrera (Universidad San Francisco de Quito)