
Publications


Featured research published by Rosa Filgueira.


BMC Medical Informatics and Decision Making | 2013

The cloud paradigm applied to e-Health.

Jordi Vilaplana; Francesc Solsona; Francesc Abella; Rosa Filgueira; Josep Rius

Background: Cloud computing is a new paradigm that is changing how enterprises, institutions and people understand, perceive and use current software systems. Under this paradigm, organizations no longer need to maintain their own servers or host their own software; instead, everything is moved to the cloud and provided on demand, saving energy, physical space and technical staff. Cloud-based system architectures provide many advantages in terms of scalability, maintainability and massive data processing.

Methods: We present the design of an e-health cloud system, modelled by an M/M/m queue with QoS capabilities, i.e. a maximum waiting time for requests.

Results: Detailed results are presented for the model, a Jackson network of two M/M/m queues, from the queueing-theory perspective. These results show a significant performance improvement as the number of servers increases.

Conclusions: Platform scalability becomes a critical issue since we aim to provide the system with high Quality of Service (QoS). In this paper we define an architecture capable of adapting itself to different diseases and growing numbers of patients. This platform could be applied in the medical field to greatly enhance the results of therapies that have an important psychological component, such as addictions and chronic diseases.
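The M/M/m model used in the paper has a standard closed form for queueing delay. As an illustration (not the authors' code; the arrival and service rates below are made-up numbers), the sketch computes the Erlang C waiting probability and the mean queueing time, and checks that adding servers shortens the expected wait, in line with the reported results:

```python
from math import factorial

def erlang_c(m, lam, mu):
    """Probability that an arriving request must queue in an M/M/m system."""
    a = lam / mu                      # offered load
    rho = a / m                       # per-server utilisation, must be < 1
    assert rho < 1, "system is unstable"
    num = a**m / factorial(m) / (1 - rho)
    den = sum(a**k / factorial(k) for k in range(m)) + num
    return num / den

def mean_wait(m, lam, mu):
    """Expected time a request waits before service starts (W_q)."""
    return erlang_c(m, lam, mu) / (m * mu - lam)

# More servers -> shorter expected wait, matching the paper's observation.
waits = [mean_wait(m, lam=8.0, mu=3.0) for m in (3, 4, 6)]
assert waits[0] > waits[1] > waits[2]
```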


Simulation Modelling Practice and Theory | 2012

SIMCAN: A flexible, scalable and expandable simulation platform for modelling and simulating distributed architectures and applications

Alberto Núñez; Javier Fernández; Rosa Filgueira; Félix García; Jesús Carretero

In this paper we propose a new simulation platform, called SIMCAN, for analyzing parallel and distributed systems. The platform is aimed at testing parallel and distributed architectures and applications. The main characteristics of SIMCAN are flexibility, accuracy, performance, and scalability. To that end, the platform has a modular design that eases the integration of different basic systems into a single architecture. Its design follows a hierarchical schema that includes simple modules, basic systems (computing, memory management, I/O, and networking), physical components (nodes, switches, …), and aggregations of components. New modules may also be incorporated to include new strategies and components. In addition, a graphical configuration tool has been developed to help untrained users model new architectures. Finally, a validation process and several evaluation tests have been performed to evaluate the SIMCAN platform.
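SIMCAN itself is a full simulation platform; as a loose, hypothetical illustration of the modular idea it describes (simple modules aggregated into a node and driven by one event queue), here is a minimal discrete-event sketch, with invented module names and service times:

```python
import heapq

class Module:
    """Basic building block: handles an event and routes the job onward."""
    def __init__(self, name, service_time):
        self.name, self.service_time = name, service_time
    def handle(self, sim, t, job):
        done = t + self.service_time
        nxt = job["route"].pop(0) if job["route"] else None
        if nxt:
            sim.schedule(done, nxt, job)   # pass the job to the next module
        else:
            sim.finished.append((done, job["id"]))

class Simulator:
    """Single event queue ordered by simulated time."""
    def __init__(self):
        self.queue, self.finished, self._seq = [], [], 0
    def schedule(self, t, module, job):
        self._seq += 1                     # tiebreaker: never compare modules
        heapq.heappush(self.queue, (t, self._seq, module, job))
    def run(self):
        while self.queue:
            t, _, module, job = heapq.heappop(self.queue)
            module.handle(self, t, job)

# A "node" is an aggregation of basic systems, mirroring the hierarchy above.
cpu, disk, nic = Module("cpu", 2.0), Module("disk", 5.0), Module("net", 1.0)
sim = Simulator()
sim.schedule(0.0, cpu, {"id": 1, "route": [disk, nic]})  # compute -> I/O -> send
sim.run()
assert sim.finished == [(8.0, 1)]          # 2.0 + 5.0 + 1.0
```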


IEEE International Conference on High Performance Computing, Data and Analytics | 2011

Adaptive-CoMPI: Enhancing MPI-Based Applications' Performance and Scalability by Using Adaptive Compression

Rosa Filgueira; David E. Singh; Jesús Carretero; Alejandro Calderón; Félix García

This paper presents an optimization of MPI communications, called Adaptive-CoMPI, based on runtime compression of the MPI messages exchanged by applications. The technique can be used with any application, because its implementation is transparent to the user, and it integrates different compression algorithms for both MPI collective and point-to-point primitives. Furthermore, compression is turned on and off, and the most appropriate compression algorithm is selected, at runtime, depending on the characteristics of each message, the network behavior, and the behavior of the compression algorithms, following a runtime adaptive strategy. Our system can also be tuned for a specific application, through a guided strategy, to reduce the overhead of the runtime strategy. Adaptive-CoMPI has been validated using several MPI benchmarks and real HPC applications. Results show that, in most cases, adaptive compression reduces communication time, enhancing application performance and scalability.
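The adaptive on/off decision can be illustrated with a simple cost model: compress only when compression time plus transmission of the smaller payload beats sending the raw message. This is a sketch, not the Adaptive-CoMPI implementation; `bandwidth_bps` stands in for the measured network behavior, and Python's zlib and lzma stand in for the integrated algorithms:

```python
import os, time, zlib, lzma

def choose_codec(payload, bandwidth_bps):
    """Return the cheapest codec name, or None if sending raw wins."""
    raw_cost = len(payload) / bandwidth_bps
    best_name, best_cost = None, raw_cost
    for name, compress in (("zlib", zlib.compress), ("lzma", lzma.compress)):
        t0 = time.perf_counter()
        packed = compress(payload)
        # total cost = time spent compressing + time to ship the smaller message
        cost = (time.perf_counter() - t0) + len(packed) / bandwidth_bps
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

# Highly redundant message over a slow link: compression should pay off.
assert choose_codec(b"0.0 " * 50_000, bandwidth_bps=1e6) is not None
# Tiny incompressible message over a fast link: sending it raw is cheaper.
assert choose_codec(os.urandom(64), bandwidth_bps=1e9) is None
```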


The Journal of Supercomputing | 2012

Dynamic-CoMPI: dynamic optimization techniques for MPI parallel applications

Rosa Filgueira; Jesús Carretero; David E. Singh; Alejandro Calderón; Alberto Núñez

This work presents an optimization of MPI communications, called Dynamic-CoMPI, which uses two techniques to reduce the impact of communications and non-contiguous I/O requests in parallel applications. These techniques are independent of the application and complementary to each other. The first is an optimization of the Two-Phase collective I/O technique from ROMIO, called Locality-Aware strategy for Two-Phase I/O (LA-Two-Phase I/O). In order to increase the locality of file accesses, LA-Two-Phase I/O employs the Linear Assignment Problem (LAP) to find an optimal I/O data communication schedule. The main purpose of this technique is to reduce the number of communications involved in collective I/O operations. The second technique, called Adaptive-CoMPI, is based on run-time compression of the MPI messages exchanged by applications. Both techniques can be applied to any application, because both are transparent to the users. Dynamic-CoMPI has been validated using several MPI benchmarks and real HPC applications. The results show that, for many of the considered scenarios, important reductions in execution time are achieved by reducing the size and number of messages. Additional benefits of our approach are the reduction of total communication time and network contention, thus enhancing not only performance but also scalability.
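LA-Two-Phase I/O casts the mapping between processes and file domains as a Linear Assignment Problem. Real implementations would use a polynomial-time LAP solver; the brute-force sketch below, on a hypothetical 3x3 communication-volume matrix, only illustrates the objective being minimised:

```python
from itertools import permutations

def optimal_assignment(cost):
    """Solve a small LAP by exhaustion: cost[p][d] is the volume process p
    would have to communicate if it were assigned file domain d."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[p][perm[p]] for p in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Hypothetical volumes (bytes each process would ship for each domain):
cost = [
    [10, 90, 80],
    [90, 15, 70],
    [60, 80, 20],
]
perm, total = optimal_assignment(cost)
assert perm == (0, 1, 2) and total == 45   # the diagonal is cheapest here
```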


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2009

CoMPI: Enhancing MPI Based Applications Performance and Scalability Using Run-Time Compression

Rosa Filgueira; David E. Singh; Alejandro Calderón; Jesús Carretero

This paper presents an optimization of MPI communications, called CoMPI, based on run-time compression of the MPI messages exchanged by applications. A broad set of compression algorithms has been fully implemented and tested for both MPI collective and point-to-point primitives. In addition, this paper presents a study of several compression algorithms that can be used for run-time compression, based on the datatypes used by applications. The approach has been validated using several MPI benchmarks and real HPC applications. Results show that, in most cases, compression reduces application communication time, enhancing application performance and scalability. In this way, CoMPI obtains important improvements in overall execution time for many of the considered scenarios.
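The influence of the application datatype on compressibility, which motivates the per-datatype algorithm study, can be shown with a small hypothetical experiment. Python's general-purpose codecs stand in for the algorithms evaluated in the paper; the payloads are invented:

```python
import bz2, lzma, random, struct, zlib

def ratios(payload):
    """Compressed-size / original-size for several general-purpose codecs."""
    return {name: len(fn(payload)) / len(payload)
            for name, fn in (("zlib", zlib.compress),
                             ("bz2", bz2.compress),
                             ("lzma", lzma.compress))}

random.seed(0)
ints = struct.pack("<1000i", *([7] * 1000))                  # constant integers
floats = struct.pack("<1000d", *(random.random() for _ in range(1000)))

int_ratios, float_ratios = ratios(ints), ratios(floats)
# Regular integer data compresses far better than random doubles, so both
# the best codec and whether to compress at all depend on the datatype.
assert min(int_ratios.values()) < 0.1
assert min(float_ratios.values()) > 0.5
```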


Future Generation Computer Systems | 2017

A characterization of workflow management systems for extreme-scale applications

Rafael Ferreira da Silva; Rosa Filgueira; Ilia Pietri; Ming Jiang; Rizos Sakellariou; Ewa Deelman

Automation of the execution of computational tasks is at the heart of improving scientific productivity. In recent years, scientific workflows have been established as an important abstraction that captures the data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists of the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications, which process vast amounts of data, keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. The paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.


European Conference on Parallel Processing | 2014

Applying Selectively Parallel I/O Compression to Parallel Storage Systems

Rosa Filgueira; Malcolm P. Atkinson; Yusuke Tanimura; Isao Kojima

This paper presents a new I/O technique called Selectively Parallel I/O Compression (SPIOC) for providing high-speed storage and access to data in QoS-enabled parallel storage systems. SPIOC reduces the time of I/O operations by applying transparent compression between the computing and storage systems. SPIOC predicts at runtime whether or not to compress, allows parallel or sequential compression techniques, guarantees QoS, and supports both partial and full reads by decompressing only the minimum necessary part of the file. SPIOC maximises the measured efficiency of data movement by applying run-time-customised compression before storing data in the Papio storage system.
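One common way to support partial reads over compressed data, in the spirit of SPIOC's "decompress only the minimum part of the file" (though not the actual SPIOC design, which is not detailed here), is to compress in independent fixed-size chunks and keep an index. A sketch:

```python
import zlib

CHUNK = 4096

def pack(data):
    """Compress data in independent fixed-size chunks; return (index, blob).
    The index records each compressed chunk's offset and length."""
    index, parts, off = [], [], 0
    for i in range(0, len(data), CHUNK):
        c = zlib.compress(data[i:i + CHUNK])
        index.append((off, len(c)))
        parts.append(c)
        off += len(c)
    return index, b"".join(parts)

def read_range(index, blob, start, length):
    """Serve a byte range by decompressing only the chunks that cover it."""
    first, last = start // CHUNK, (start + length - 1) // CHUNK
    out = b"".join(zlib.decompress(blob[o:o + n])
                   for o, n in index[first:last + 1])
    skip = start - first * CHUNK          # discard bytes before the range
    return out[skip:skip + length]

data = bytes(range(256)) * 100            # 25,600 bytes of sample data
index, blob = pack(data)
assert read_range(index, blob, 5000, 300) == data[5000:5300]
```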


The Journal of Object Technology | 2005

Specifying use case behavior with interaction models.

José Daniel García; Jesús Carretero; José María Pérez; Félix García Carballeira; Rosa Filgueira

Functional requirements for information systems can be modeled through use cases. Furthermore, use case models have been successfully used in contexts broader than software engineering, such as systems engineering. Even if small systems may be modeled as a set of use cases, several difficulties arise when the requirements of large systems are modeled with a plain use case model. Traditionally, the behavior of use cases has been modeled through textual specifications. In this paper we present an alternative approach based on interaction modeling. The behavior modeling has two variants (one for UML 1.x and one for UML 2.0). We also integrate our behavior modeling with standard use case relationships.


Workflows in Support of Large-Scale Science | 2014

Workflows in a dashboard: a new generation of usability

Sandra Gesing; Malcolm P. Atkinson; Rosa Filgueira; Ian Taylor; Andrew Clifford Jones; Vlado Stankovski; Chee Sun Liew; Alessandro Spinuso; Gabor Terstyanszky; Péter Kacsuk

In the last 20 years quite a few mature workflow engines and workflow editors have been developed to support communities in managing workflows. While providers of workflow engines tend to ease the creation of workflows tailored to their specific workflow system, the management tools still often require a deep understanding of workflow concepts and languages. This paper describes an approach that targets various workflow systems and builds a single user interface for editing and monitoring workflows, taking into consideration aspects such as optimization and provenance of data. The design employs agile Web frameworks and novel technologies to build a workflow dashboard that is offered in a web browser and connects seamlessly to available workflow systems and to external resources such as Cloud infrastructures. The user interface eliminates the need to become acquainted with diverse layouts, greatly increasing usability across the various aspects of managing workflows.


International Conference on e-Science | 2015

VERCE Delivers a Productive E-science Environment for Seismology Research

Malcolm P. Atkinson; Michele Carpenè; Emanuele Casarotti; Steffen Claus; Rosa Filgueira; Anton Frank; Michelle Galea; Tom Garth; André Gemünd; Heiner Igel; Iraklis Klampanos; Amrey Krause; Lion Krischer; Siew Hoon Leong; Federica Magnoni; Jonas Matser; Alberto Michelini; Andreas Rietbrock; Horst Schwichtenberg; Alessandro Spinuso; Jean-Pierre Vilotte

The VERCE project has pioneered an e-Infrastructure to support researchers using established simulation codes on high-performance computers in conjunction with multiple sources of observational data. These are accessed and organised via the VERCE science gateway, which makes it convenient for seismologists to use these resources from any location via the Internet. Data handling is made flexible and scalable by two Python libraries, ObsPy and dispel4py, and by data services delivered by ORFEUS and EUDAT. Provenance-driven tools enable rapid exploration of results and of the relationships between data, which accelerates understanding and method improvement. These powerful facilities are integrated and draw on many other e-Infrastructures. This paper presents the motivation for building such systems, reviews how solid-Earth scientists can make significant research progress using them, and explains the architecture and mechanisms that make their construction and operation achievable. We conclude with a summary of the achievements to date and identify the crucial steps needed to extend the capabilities for seismologists, for solid-Earth scientists, and for similar disciplines.
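dispel4py composes workflows from streaming processing elements. The sketch below imitates that style with plain Python generators; the processing elements (`source`, `detrend`, `peak`) and the toy traces are hypothetical illustrations, not dispel4py's API or VERCE's actual pipelines:

```python
def source(records):
    """Emit raw records, e.g. seismic traces fetched from a data service."""
    yield from records

def detrend(stream):
    """Remove each trace's mean, a typical pre-processing step."""
    for trace in stream:
        mean = sum(trace) / len(trace)
        yield [x - mean for x in trace]

def peak(stream):
    """Reduce each trace to its peak absolute amplitude."""
    for trace in stream:
        yield max(abs(x) for x in trace)

# Compose the processing elements into a pipeline and run it lazily:
traces = [[1.0, 2.0, 3.0], [10.0, 10.0, 16.0]]
result = list(peak(detrend(source(traces))))
assert result == [1.0, 4.0]
```

Because each stage is a generator, records flow through one at a time, which is the essence of the streaming-dataflow style rather than batch execution.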

Collaboration


Dive into Rosa Filgueira's collaborations.

Top Co-Authors

Jesús Carretero

Universidad Carlos III de Madrid

Amrey Krause

University of Edinburgh

Ewa Deelman

University of Southern California

Rafael Ferreira da Silva

University of Southern California

Yusuke Tanimura

National Institute of Advanced Industrial Science and Technology

Alessandro Spinuso

Royal Netherlands Meteorological Institute

Ian G. Main

University of Edinburgh

Isao Kojima

National Institute of Advanced Industrial Science and Technology
