Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where F. Gomez-Folgar is active.

Publication


Featured research published by F. Gomez-Folgar.


international symposium on parallel and distributed processing and applications | 2012

Performance of the CloudStack KVM Pod Primary Storage under NFS Version 3

F. Gomez-Folgar; Antonio J. Garcia-Loureiro; Tomás F. Pena; Raul Valin

Currently, there is an increasing number of open-source solutions for building Clouds. The performance of the Virtual Machines running in these Clouds is a key concern, and the way the Virtual Machines are created in Clouds can significantly affect their disk I/O operations. We have selected the CloudStack platform to study the disk I/O performance of KVM Virtual Machines.
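The abstract does not specify the benchmark used; as a rough illustration of the kind of guest-side probe such a study relies on, the sketch below measures sequential write throughput inside a VM whose image resides on NFS-backed primary storage. The file path is a placeholder and the method is deliberately naive; a real study would use a dedicated tool and repeated runs to average out caching effects.

```python
import os
import time

def sequential_write_throughput(path, size_mb=256, block_kb=1024):
    """Write size_mb of data in block_kb chunks and report MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(blocks):
        os.write(fd, block)
    os.fsync(fd)          # flush to the (NFS-backed) disk before stopping the clock
    elapsed = time.time() - start
    os.close(fd)
    os.unlink(path)
    return size_mb / elapsed

if __name__ == "__main__":
    # Run inside the guest whose image lives on the NFSv3 primary storage.
    print("%.1f MB/s" % sequential_write_throughput("/tmp/io_probe.bin"))
```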


ieee/acm international symposium on cluster, cloud and grid computing | 2015

Study of the KVM CPU Performance of Open-Source Cloud Management Platforms

F. Gomez-Folgar; Antonio J. Garcia-Loureiro; Tomás F. Pena; J.I. Zablah; Natalia Seoane

Nowadays, there are several open-source solutions for building private, public and even hybrid clouds, such as Eucalyptus, Apache CloudStack and OpenStack. KVM is one of the supported hypervisors for these cloud platforms. Different KVM configurations are supplied by these platforms and, in some cases, only a subset of CPU features is presented to guest systems, providing a basic abstraction of the underlying CPU. One of the reasons for limiting the features of the virtual CPU is to guarantee guest compatibility with different hardware in heterogeneous environments. However, in a large number of situations, the cloud is deployed on a homogeneous set of hosts. In these cases, this limitation can affect the performance of applications executed in guest systems. In this paper, we have analyzed the architecture, the KVM setup, and the performance of the Virtual Machines deployed by three popular cloud management platforms: Eucalyptus, Apache CloudStack and OpenStack, employing a representative set of applications.
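Whether a platform passes the host CPU through or presents a restricted model is visible in the libvirt domain definition. The following sketch, assuming the libvirt Python bindings and a hypothetical domain name guest01, inspects the <cpu> element that the cloud platform generated:

```python
import libvirt                      # libvirt-python bindings
import xml.etree.ElementTree as ET

# Connect read-only to the local KVM hypervisor.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("guest01")  # hypothetical domain name

root = ET.fromstring(dom.XMLDesc(0))
cpu = root.find("cpu")
if cpu is None:
    print("No <cpu> element: the guest sees a minimal default virtual CPU")
else:
    # 'host-passthrough' exposes the full host CPU; 'custom' with a named
    # model (e.g. qemu64) restricts the feature set seen by the guest.
    print("CPU mode:", cpu.get("mode", "custom"))
    model = cpu.find("model")
    if model is not None:
        print("CPU model:", model.text)
conn.close()
```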


spanish conference on electron devices | 2011

An e-Science infrastructure for nanoelectronic simulations based on grid and cloud technologies

F. Gomez-Folgar; J. López Cacheiro; C. Fernández Sánchez; Antonio J. Garcia-Loureiro; Raul Valin

We propose the development of a new e-Science infrastructure that takes the best of both grid and cloud technologies and allows different research groups performing nanoelectronic simulations to share their local clusters, creating a common infrastructure accessible through a unified point of access. As a result, more computational power can be devoted to nanoelectronic simulations, with a consequent reduction in the time required to obtain results. The integration of local clusters to share resources, through the proposed cloud management stack, will allow the deployment of an elastic infrastructure that also permits prioritizing local computing tasks over shared ones. Furthermore, it will allow not only deploying ad-hoc virtual machines across local sites for specific tasks, but also deploying virtual machines in public clouds like Amazon AWS to obtain additional computing resources, and even avoiding data loss by using public storage clouds like Amazon S3.
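As a small illustration of the last point, here is a hedged sketch of backing a simulation checkpoint up to Amazon S3 with boto3; the bucket and key names are hypothetical, and credentials are taken from the standard AWS configuration:

```python
import boto3  # AWS SDK for Python

def backup_checkpoint(local_path, bucket, key):
    """Copy a simulation checkpoint to S3 so a lost local node
    does not mean lost results."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)

# Hypothetical names for illustration only.
backup_checkpoint("results/device42.chk", "nanosim-backups", "device42.chk")
```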


ieee international conference on cloud computing technology and science | 2011

DIRAC Integration with CloudStack

V. Fernandez Albor; J.J. Saborido; F. Gomez-Folgar; J. Lopez Cacheiro; R. Graciani Diaz

Grid is one option that researchers are already using to submit their scientific simulations. Several organizations such as CERN (European Organization for Nuclear Research) or EMBL (European Molecular Biology Laboratory) currently use the grid to run a large part of their simulation jobs. Nowadays, the increasing availability of cloud resources is making the scientific community shift its focus from grid to cloud as a way to extend the pool of resources where they can run their jobs. Unfortunately, running scientific jobs in the cloud usually requires starting again from the beginning and learning how to use new interfaces. CERN's LHCb experiment has developed a software framework called DIRAC (Distributed Infrastructure with Remote Agent Control) which provides researchers with the perfect environment for running their jobs and retrieving the results through a browser. CloudStack is an open-source cloud platform, now owned by Citrix Systems, that allows building any type of cloud, including public, private, and hybrid. It is the cloud management software selected in the FORMIGA CLOUD project to manage non-dedicated computer lab resources belonging to different Spanish universities. This article explains the work involved in the integration between CloudStack and the grid framework DIRAC. This integration allows users to use cloud resources transparently through a common interface.
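An integration layer such as this one talks to CloudStack through its HTTP query API, in which every request is signed with the user's secret key: the query parameters are sorted, lower-cased, signed with HMAC-SHA1, and the Base64 digest is appended as the signature parameter. A minimal Python sketch of such a signed call (the endpoint and keys are placeholders; listVirtualMachines is a real API command):

```python
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

def cloudstack_request(endpoint, api_key, secret_key, **params):
    """Build and issue a signed CloudStack API call."""
    params["apikey"] = api_key
    params["response"] = "json"
    # Sort parameters, then sign the lower-cased query string.
    query = urllib.parse.urlencode(sorted(params.items()))
    digest = hmac.new(secret_key.encode(),
                      query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest))
    url = f"{endpoint}?{query}&signature={signature}"
    with urllib.request.urlopen(url) as response:
        return response.read()

# Hypothetical endpoint and credentials.
print(cloudstack_request("http://cloud.example.org:8080/client/api",
                         "API_KEY", "SECRET_KEY",
                         command="listVirtualMachines"))
```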


international conference on communications | 2015

General Workload Manager: A task manager as a service

Guillermo Indalecio; F. Gomez-Folgar; Antonio J. Garcia-Loureiro

In the recent past, the demand for High Throughput Computing has been increasing because of new scientific challenges. Since accessing several computational resources to manage thousands of simulations can be difficult for scientists, different initiatives have tried to provide the scientific community with user-friendly interfaces to these resources. Usually, these are designed for specific codes and for a given research field, such as oceanography, climate modeling or physics, among others. To overcome this situation, we have developed the General Workload Manager (GWM), a general-purpose, very lightweight management system capable of working with different computing resources with as little configuration as possible, including HPC and HTC clusters, standalone worker nodes, hypervisor-enabled servers, and cloud platforms. The proposed system is able to deploy thousands of different simulation tasks over several computing resources and to collect the results in an easy way.
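GWM's implementation is not shown in the abstract; the toy dispatcher below merely illustrates the underlying idea of a thin manager that fans simulation commands out to a pool of workers and collects the results. The task contents and worker count are invented for the example; in a real deployment each task could equally be submitted through a batch system or to a cloud-provisioned node.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_task(cmd):
    """Execute one simulation command and capture its output."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return cmd, result.returncode, result.stdout

# Hypothetical task list standing in for thousands of simulations.
tasks = [f"echo simulating device {i}" for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    for cmd, rc, out in pool.map(run_task, tasks):
        print(f"[rc={rc}] {cmd}: {out.strip()}")
```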


high performance computing and communications | 2015

Improving CPU Service Offerings in Apache CloudStack

F. Gomez-Folgar; Antonio J. Garcia-Loureiro; Tomás F. Pena

Cloud computing has grown in popularity in recent years, partly due to the availability of different open-source solutions for building private, public and even hybrid clouds, such as Apache CloudStack. One of the challenges posed by cloud applications is Quality-of-Service (QoS) management. In the particular case of Apache CloudStack, Service Offerings are employed to specify QoS levels, as they define the characteristics of the Virtual Machines (VMs) running on the Compute Nodes (CNs) in the Cloud. However, the current implementation of the Apache CloudStack Agent cannot accurately limit the CPU usage of the VMs when the KVM hypervisor is used, resulting in high performance variability. This is due to the fact that the performance of a VM depends not only on the Service Offering selected by the user but also on the Service Offerings of the VMs coexisting on the same CN, even if the node is not over-provisioned. Cloud users expect providers to deliver fixed quality characteristics. If the VM characteristics fluctuate depending on the other VMs sharing the same hardware, it is impossible for the user to determine the best cloud configuration to achieve their requirements. To overcome this situation, we have implemented a supervised Apache CloudStack Agent that can limit the CPU usage accurately, providing performance isolation to user VMs, so that each VM behaves exactly as specified in its Service Offering. The behaviour of this new agent is analysed in this paper under several scenarios.
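The paper's supervised agent is CloudStack-specific, but the kind of enforcement it needs is exposed by libvirt as scheduler parameters backed by the Linux CFS bandwidth controls. A minimal sketch, assuming the libvirt Python bindings and a hypothetical domain name, of hard-capping a KVM guest's CPU share:

```python
import libvirt  # libvirt-python bindings

def cap_vm_cpu(domain_name, fraction):
    """Hard-cap a KVM guest at `fraction` of one physical core per vCPU.

    libvirt exposes the CFS bandwidth controls as scheduler parameters:
    each vCPU may run vcpu_quota microseconds in every vcpu_period
    microseconds.
    """
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(domain_name)
    period = 100000                      # 100 ms scheduling period
    dom.setSchedulerParameters({
        "vcpu_period": period,
        "vcpu_quota": int(period * fraction),
    })
    conn.close()

# Hypothetical: limit guest "vm-user42" to 50% of a core per vCPU,
# roughly mimicking a 1 GHz Service Offering on a 2 GHz host.
cap_vm_cpu("vm-user42", 0.5)
```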


Computing | 2018

MPI-Performance-Aware-Reallocation: method to optimize the mapping of processes applied to a cloud infrastructure

F. Gomez-Folgar; Guillermo Indalecio; Natalia Seoane; Tomás F. Pena; Antonio J. Garcia-Loureiro

The cloud brings new possibilities for running traditional HPC applications, given its flexibility and reduced cost. However, running MPI applications in the cloud can appreciably reduce their performance, because the cloud hides its internal network topology, and existing topology-aware techniques to optimize MPI communications cannot be directly applied to virtualized infrastructures. This paper presents the MPI-Performance-Aware-Reallocation method (MPAR), a general approach to improve MPI communications. This new approach: (i) is not linked to any specific software or hardware infrastructure, (ii) is applicable to the cloud, (iii) abstracts the network topology by performing experimental tests, and (iv) is able to improve the performance of the user's MPI application via the reallocation of the involved MPI processes. MPAR has been demonstrated for cloud infrastructures via the implementation of the Latency-Aware-MPI-Cloud-Scheduler (LAMPICS) layer. LAMPICS is able to improve the latency of MPI communications in clouds, without the need to create ad-hoc MPI implementations or to modify the source code of users' MPI applications. We have tested LAMPICS with the Sendrecv micro-benchmark provided by the Intel MPI Benchmarks, with performance improvements of up to 70%, and with two real-world applications from the Unified European Applications Benchmark Suite, obtaining performance improvements of up to 26.5%.
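A scheduler like LAMPICS needs latency measurements between process pairs before it can reallocate them. The sketch below, written with mpi4py and not taken from the paper, shows a basic ping-pong probe of the kind such a layer could run: co-located VMs will report visibly lower round-trip times than VMs on distant hosts.

```python
# Run with e.g.: mpirun -np 4 python latency_probe.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
REPS = 1000
msg = bytearray(1)
buf = bytearray(1)

# Rank 0 ping-pongs a 1-byte message with every other rank; the
# measured round-trip times expose the hidden network topology.
for peer in range(1, size):
    comm.Barrier()
    if rank == 0:
        t0 = MPI.Wtime()
        for _ in range(REPS):
            comm.Sendrecv(msg, dest=peer, recvbuf=buf, source=peer)
        rtt = (MPI.Wtime() - t0) / REPS
        print(f"rank 0 <-> rank {peer}: {rtt * 1e6 / 2:.1f} us one-way")
    elif rank == peer:
        for _ in range(REPS):
            comm.Sendrecv(msg, dest=0, recvbuf=buf, source=0)
```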


spanish conference on electron devices | 2015

Comparison of state-of-the-art distributed computing frameworks with the GWM

Guillermo Indalecio; F. Gomez-Folgar; Antonio J. Garcia-Loureiro

We have analysed the landscape of heterogeneous computing solutions in order to understand and explain the position of our application, the General Workload Manager, in that landscape. We have classified several applications into the following groups: Grid middleware, Grid-powered applications, Cloud computing, and modern lightweight solutions. We have analysed the characteristics of those groups and found similar characteristics in our application, which allows for a better understanding of both the landscape of existing solutions and the General Workload Manager.


spanish conference on electron devices | 2015

A tool to deploy nanodevice simulations on cloud

F. Gomez-Folgar; Guillermo Indalecio; E. Comesaña; Antonio J. Garcia-Loureiro; Tomás F. Pena

The emergence of cloud computing has made it possible to deploy scientific applications in which users can manage the computational capacity on demand. In order to provide nanodevice researchers with the flexibility they require, we present in this paper the Flexible Cluster Manager (FCM), a tool that allows them to deploy Sentaurus TCAD virtual clusters on demand on cloud infrastructures by means of a web interface. The virtual clusters provided by FCM can be resized on-line, adapting the computational capacity to the requirements of the researchers, which increases the reuse ratio of the computing resources. It is also possible to use the tool with other commercial or in-house simulators.
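FCM's interface is not described in detail in the abstract; the toy class below only illustrates the notion of on-line resizing, i.e. growing or shrinking the worker pool of a running virtual cluster without redeploying it. All names are invented and the real tool provisions actual VM nodes on a cloud platform.

```python
class VirtualCluster:
    """Toy model of an FCM-style resizable virtual cluster."""

    def __init__(self, name, workers):
        self.name = name
        self.workers = [f"{name}-node{i}" for i in range(workers)]

    def resize(self, workers):
        # Grow or shrink the worker pool without redeploying the cluster.
        while len(self.workers) < workers:
            self.workers.append(f"{self.name}-node{len(self.workers)}")
        del self.workers[workers:]

cluster = VirtualCluster("tcad", workers=2)
cluster.resize(4)       # demand grows: add two nodes on-line
print(cluster.workers)
```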


high performance computing and communications | 2015

A Flexible Cluster System for the Management of Virtual Clusters in the Cloud

F. Gomez-Folgar; Guillermo Indalecio; Antonio J. Garcia-Loureiro; Tomás F. Pena

Cluster computing is a fundamental tool for supporting enterprise services, and it also provides the computing capacity for modelling and simulation research. There have been several initiatives to improve the access of the scientific community to the cluster resources they need, but some of them are focused on a specific research field or are enterprise-grade solutions. In order to overcome this situation and to give system administrators and users the possibility of deploying specific Virtual Clusters on demand in the Cloud, we have developed a new tool called the Flexible Cluster Manager (FCM). It offers user-selectable cluster configuration packages, and more software can easily be included by defining its deployment workflow. FCM allows changing the software configuration of a deployed cluster on-line, including support for fixing damaged virtual clusters, i.e. clusters that have damaged or missing nodes. The performance of our tool on commodity hardware is also presented, using serial and parallel deployment of the virtual cluster.
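The "deployment workflow" mentioned above suggests a declarative description of the steps that turn a plain VM into a cluster node. A hypothetical miniature of that idea, including the repair pass for damaged nodes (the package names, commands and helper functions are all assumptions, not FCM's actual API):

```python
import subprocess

# Hypothetical declarative deployment workflow in the spirit of FCM:
# each package is a named sequence of shell steps run on every node.
WORKFLOWS = {
    "mpi-cluster": [
        "apt-get -y install openmpi-bin",
        "useradd -m mpiuser",
    ],
}

def deploy(node, workflow):
    """Run every step of a workflow on one node over SSH."""
    for step in WORKFLOWS[workflow]:
        subprocess.run(["ssh", node, step], check=True)

def repair(nodes, workflow, is_healthy):
    """Re-run the workflow only on nodes failing a health check,
    mirroring FCM's support for fixing damaged clusters."""
    for node in nodes:
        if not is_healthy(node):
            deploy(node, workflow)
```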

Collaboration


Dive into F. Gomez-Folgar's collaborations.

Top Co-Authors

Antonio J. Garcia-Loureiro (University of Santiago de Compostela)
Tomás F. Pena (University of Santiago de Compostela)
Guillermo Indalecio (University of Santiago de Compostela)
E. Comesaña (University of Santiago de Compostela)
Natalia Seoane (University of Santiago de Compostela)