
Publication


Featured research published by Eduardo César.


Parallel Computing | 2012

AutoTune: a plugin-driven approach to the automatic tuning of parallel applications

Renato Miceli; Gilles Civario; Anna Sikora; Eduardo César; Michael Gerndt; Houssam Haitof; Carmen B. Navarrete; Siegfried Benkner; Martin Sandrieser; Laurent Morin; François Bodin

Performance analysis and tuning is an important step in programming multicore- and manycore-based parallel architectures. While there are several tools to help developers analyze application performance, no tool provides recommendations about how to tune the code. The AutoTune project is extending Periscope, an automatic distributed performance analysis tool developed by Technische Universität München, with plugins for performance and energy efficiency tuning. The resulting Periscope Tuning Framework will be able to tune serial and parallel codes for multicore and manycore architectures and return tuning recommendations that can be integrated into the production version of the code. The whole tuning process, both performance analysis and tuning, will be performed automatically during a single run of the application.
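The plugin-driven cycle the abstract describes (propose configurations, run tuning experiments, return a recommendation) can be sketched as below. This is a hedged illustration, not the Periscope Tuning Framework API: the names `TuningPlugin`, `num_threads`, `cpu_freq_mhz`, and the synthetic `measure` function are all assumptions standing in for real instrumented runs.

```python
# Minimal sketch of a plugin-driven autotuning loop: each plugin contributes a
# search space for one tunable aspect; the framework evaluates configurations
# and emits the best one as a tuning recommendation. Illustrative only.
import itertools
import random

class TuningPlugin:
    """A tuning plugin proposes candidate values for one tunable aspect."""
    def __init__(self, name, values):
        self.name = name
        self.values = values

def measure(config):
    # Stand-in for executing and timing the application under one
    # configuration; deterministic within a run via a config-derived seed.
    random.seed(hash(frozenset(config.items())) % 2**32)
    return random.uniform(1.0, 10.0)

def autotune(plugins):
    """Evaluate the cross product of all plugin search spaces and return
    the fastest configuration found, plus its measured time."""
    best_cfg, best_time = None, float("inf")
    names = [p.name for p in plugins]
    for combo in itertools.product(*(p.values for p in plugins)):
        cfg = dict(zip(names, combo))
        t = measure(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

plugins = [TuningPlugin("num_threads", [1, 2, 4, 8]),
           TuningPlugin("cpu_freq_mhz", [1200, 1800, 2400])]
recommendation, runtime = autotune(plugins)
print(recommendation, round(runtime, 2))
```

In the real framework, both analysis and tuning happen during a single application run; the exhaustive product above is the simplest possible search strategy.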


Parallel Computing | 2006

Modeling master/worker applications for automatic performance tuning

Eduardo César; Andreu Moreno; Joan Sorribes; Emilio Luque

Parallel application development is a very difficult task for non-expert programmers, and support tools are therefore needed for all phases of the development cycle of this kind of application. This means that developing applications using predefined programming structures (frameworks/skeletons) should be easier than doing so from scratch. We propose to take advantage of the intrinsic knowledge that these programming structures provide about the application in order to develop a dynamic and automatic tuning tool. We show that, using this knowledge, the tool can efficiently make better tuning decisions. Specifically, this work focuses on the definition of the performance model associated with applications developed with the Master/Worker framework.
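An analytical Master/Worker performance model of the general shape such work builds on can be sketched as follows: total iteration time modeled as worker-management and communication overhead plus parallel compute time, minimized over the worker count. The cost terms and constants are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of a Master/Worker performance model: more workers reduce the
# parallel compute term but increase master-side management overhead, so the
# model predicts an optimal worker count. All parameter values are illustrative.

def predicted_time(n_workers, n_tasks, t_compute, t_setup, t_comm):
    """Predicted iteration time with n_workers workers.

    n_tasks   - number of equally sized work units
    t_compute - compute time of one work unit
    t_setup   - master-side overhead of managing one worker
    t_comm    - master-side send/receive cost per work unit (serialized)
    """
    overhead = n_workers * t_setup + n_tasks * t_comm
    compute = (n_tasks / n_workers) * t_compute  # workers run in parallel
    return overhead + compute

def best_worker_count(n_tasks, t_compute, t_setup, t_comm, max_workers=64):
    """Pick the worker count the model predicts to be fastest."""
    return min(range(1, max_workers + 1),
               key=lambda n: predicted_time(n, n_tasks, t_compute, t_setup, t_comm))

n = best_worker_count(n_tasks=1000, t_compute=0.05, t_setup=0.05, t_comm=0.001)
print(n)
```

A tuning tool with such a model can evaluate it analytically instead of searching blindly, which is what makes the framework-provided knowledge valuable.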


High-Level Parallel Programming Models and Supportive Environments | 2004

Modeling master-worker applications in POETRIES

Eduardo César; José G. Mesa; Joan Sorribes; Emilio Luque

Parallel/distributed application development is a very difficult task for non-expert programmers, and support tools are therefore needed for all phases of the development cycle of this kind of application. This means that developing applications using predefined programming structures (frameworks) should be easier than doing so from scratch. We propose to take advantage of the knowledge about the structure of the application in order to develop a dynamic and automatic tuning tool. To this end, we have designed POETRIES, a dynamic performance tuning tool based on the idea that a performance model can be associated with the high-level structure of the application. In this way, the tool can efficiently make better tuning decisions. Specifically, this work focuses on the definition of the performance model associated with applications developed with the master-worker framework.


European Conference on Parallel Processing | 2001

Dynamic Performance Tuning Environment

Anna Morajko; Eduardo César; Tomàs Margalef; Joan Sorribes; Emilio Luque

Performance analysis and tuning of parallel/distributed applications are very difficult tasks for non-expert programmers, so tools that carry out these tasks automatically are needed. Many applications behave differently depending on the input data set, or even change their behavior dynamically during execution. Performance tuning must therefore be done on the fly, modifying the application according to the particular conditions of each execution. A dynamic automatic performance tuning environment supported by dynamic instrumentation techniques is presented. The environment is completed by a pattern-based application design tool that allows the user to concentrate on the design phase and facilitates overcoming performance bottlenecks on the fly.


Cloud Computing Security Workshop | 2010

First principles vulnerability assessment

James A. Kupsch; Barton P. Miller; Elisa Heymann; Eduardo César

Clouds and Grids offer significant challenges to providing secure infrastructure software. As part of our effort to secure such middleware, we present First Principles Vulnerability Assessment (FPVA), a new analyst-centric (manual) technique that aims to focus the analyst's attention on the parts of the software system and its resources that are most likely to contain vulnerabilities providing access to high-value assets. FPVA finds new threats to a system and does not depend on a list of known threats. Manual assessment is labor-intensive, making the use of automated assessment tools quite attractive. We compared the results of FPVA to those of the top commercial tools, providing the first significant evaluation of these tools against a real-world collection of known serious vulnerabilities. While these tools can find common problems in a program's source code, they miss a significant number of serious vulnerabilities found by FPVA. We are now using the results of this comparison study to guide our future research into improving automated software assessment.


European Conference on Parallel Processing | 2005

Automatic tuning of master/worker applications

Anna Morajko; Eduardo César; Paola Caymes-Scutari; Tomàs Margalef; Joan Sorribes; Emilio Luque

The Master/Worker paradigm is one of the most commonly used by parallel/distributed application developers. It is easy to understand and fairly close to the abstract structure of a wide range of applications. However, to obtain adequate performance indexes, the paradigm must be managed very precisely. Certain features, such as data distribution or the number of workers, must be tuned properly in order to obtain such performance indexes, and in most cases they cannot be tuned statically since they depend on the particular conditions of each execution. In this context, dynamic tuning is a highly promising approach, since it provides the capability to change these parameters during the execution of the application to improve performance. In this paper, we demonstrate the use of a dynamic tuning environment that adapts the number of workers based on a theoretical model of Master/Worker behavior. The results show that this approach significantly improves the execution time when the application modifies its behavior during execution.
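The dynamic adaptation loop can be sketched as follows: between iterations, re-estimate the model parameters from measurements and resize the worker pool to the model's optimum. The closed-form model, the `WorkerPool` class, and the simulated measurements are illustrative assumptions, not the environment's actual implementation.

```python
# Hedged sketch of dynamic worker-count tuning: each iteration's measurements
# feed a simple analytical model T(n) = n*t_setup + (work/n)*t_compute, and the
# pool is resized to the integer n minimizing it. All values are illustrative.
import math

def optimal_workers(work, t_compute, t_setup):
    """Closed-form optimum of T(n), rounded to the better neighboring integer."""
    n_star = math.sqrt(work * t_compute / t_setup)
    lo, hi = max(1, math.floor(n_star)), max(1, math.ceil(n_star))
    cost = lambda n: n * t_setup + (work / n) * t_compute
    return lo if cost(lo) <= cost(hi) else hi

class WorkerPool:
    def __init__(self, size):
        self.size = size
    def resize(self, new_size):
        self.size = new_size  # a real tool would spawn or retire processes here

pool = WorkerPool(size=4)
# Simulated per-iteration measurements: (tasks, per-task compute, per-worker setup).
# The middle iteration models the application changing its behavior at run time.
measurements = [(1000, 0.05, 0.05), (1000, 0.20, 0.05), (1000, 0.01, 0.05)]
for work, t_c, t_s in measurements:
    pool.resize(optimal_workers(work, t_c, t_s))
    print(pool.size)
```

The point of the sketch is the shape of the loop: measure, re-solve the model, adapt, so the worker count tracks behavior changes during execution rather than being fixed at launch.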


International Journal of Parallel Programming | 2014

Improving Performance on Data-Intensive Applications Using a Load Balancing Methodology Based on Divisible Load Theory

Claudia Rosas; Anna Sikora; Josep Jorba; Andreu Moreno; Eduardo César

Data-intensive applications are those that explore, query, analyze, and, in general, process very large data sets. These applications can generally be implemented in parallel naturally but, in many cases, such implementations show severe performance problems, mainly due to load imbalances, inefficient use of available resources, and improper data partition policies. The problem becomes more complex when the conditions causing these problems change at run time. This paper proposes a methodology for dynamically improving the performance of certain data-intensive applications on homogeneous clusters by adapting the size and number of data partitions, and the number of processing nodes, to the current application conditions. To this end, the processing of each exploration is monitored, and the gathered data is used to dynamically tune the performance of the application. The tuning parameters included in the methodology are: (i) the partition factor of the data set, (ii) the distribution of the data chunks, and (iii) the number of processing nodes to be used. The methodology assumes that a single execution includes multiple related explorations on the same partitioned data set, and that data chunks are ordered according to their processing times during the application execution so that the most time-consuming partitions are assigned first. The methodology has been validated using the well-known bioinformatics tool BLAST and through extensive experimentation using simulation. Reported results are encouraging in terms of reducing the total execution time of the application (up to 40% in some cases).
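The chunk-ordering idea (dispatch the most time-consuming partitions first) can be sketched with a longest-processing-time-first greedy schedule over measured chunk times. This is a generic LPT sketch under assumed timings, not the paper's methodology; the chunk times and node count are invented for illustration.

```python
# Sketch of longest-first chunk scheduling: sort data chunks by their measured
# processing time from a previous exploration, then greedily assign each chunk
# to the currently least-loaded node. Timings are illustrative.
import heapq

def schedule_chunks(chunk_times, n_nodes):
    """Longest-processing-time-first greedy assignment.
    Returns {node: [chunk indices]} and the makespan (busiest node's load)."""
    loads = [(0.0, node) for node in range(n_nodes)]
    heapq.heapify(loads)  # min-heap keyed on current load
    assignment = {node: [] for node in range(n_nodes)}
    for chunk, t in sorted(enumerate(chunk_times), key=lambda kv: -kv[1]):
        load, node = heapq.heappop(loads)   # least-loaded node so far
        assignment[node].append(chunk)
        heapq.heappush(loads, (load + t, node))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

times = [9.0, 1.0, 2.0, 8.0, 3.0, 7.0, 4.0, 6.0, 5.0]  # measured chunk times
assignment, makespan = schedule_chunks(times, n_nodes=3)
print(makespan)
```

Dispatching expensive chunks first bounds how much a single slow partition arriving late can stretch the tail of the computation, which is where the load-imbalance losses the abstract mentions come from.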


Scientific Programming | 2002

Dynamic performance tuning supported by program specification

Eduardo César; Anna Morajko; Tomàs Margalef; Joan Sorribes; Antonio Espinosa; Emilio Luque

Performance analysis and tuning of parallel/distributed applications are very difficult tasks for non-expert programmers, so tools that carry out these tasks automatically are needed. These can be static tools that carry out the analysis in a post-mortem phase, or tools that tune the application on the fly. Both kinds of tools have their target applications: static automatic analysis tools are suitable for stable applications, while dynamic tuning tools are more appropriate for applications with dynamic behaviour. In this paper, we describe KappaPi as an example of a static automatic performance analysis tool, as well as a general environment based on parallel patterns for developing and dynamically tuning parallel/distributed applications.


European Conference on Parallel Processing | 2008

Dynamic Pipeline Mapping (DPM)

Andreu Moreno; Eduardo César; Andreu Guevara; Joan Sorribes; Tomàs Margalef; Emilio Luque

Parallel/distributed application development is an extremely difficult task for non-expert programmers, and support tools are therefore needed for all phases of the development cycle of this kind of application. In particular, dynamic performance tuning tools can take advantage of the knowledge about the application's structure given by a skeleton-based programming tool. This study presents a strategy for dynamically improving the performance of pipeline applications. The strategy, called Dynamic Pipeline Mapping, improves the application's throughput by gathering the pipeline's fastest stages and replicating its slowest ones. We have evaluated the new algorithm through experimentation and simulation, and the results show that DPM leads to significant performance improvements.
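Since pipeline throughput is limited by the slowest stage, the gather-and-replicate idea can be sketched as: merge the cheapest adjacent stages onto one processor and spend the freed processor on a replica of the most expensive stage. This is a toy heuristic under assumed per-stage service times, not the published DPM algorithm.

```python
# Toy sketch of one Dynamic-Pipeline-Mapping-style rebalancing step: merging
# fast adjacent stages frees a processor, which is used to replicate the
# slowest stage, halving its effective service time. Values are illustrative.

def throughput(stage_times):
    # The pipeline emits one item per max-stage-time units in steady state.
    return 1.0 / max(stage_times)

def dpm_step(stage_times):
    """Merge the cheapest adjacent pair of stages, then replicate the
    most expensive remaining stage with the freed processor."""
    times = list(stage_times)
    # Merge the adjacent pair with the smallest combined service time.
    i = min(range(len(times) - 1), key=lambda k: times[k] + times[k + 1])
    times[i:i + 2] = [times[i] + times[i + 1]]
    # Two copies of the slowest stage halve its effective service time.
    j = max(range(len(times)), key=lambda k: times[k])
    times[j] /= 2.0
    return times

stages = [1.0, 2.0, 8.0, 1.0]        # per-item service time of each stage
print(throughput(stages))            # baseline, bottlenecked by the 8.0 stage
print(throughput(dpm_step(stages)))  # after one merge+replicate step
```

In this example the bottleneck stage drops from 8.0 to an effective 4.0, doubling steady-state throughput with the same total processor count.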


Proceedings of the ACM Workshop on Software Engineering Methods for Parallel and High Performance Applications | 2016

Autotuning of MPI Applications Using PTF

Anna Sikora; Eduardo César; Isaías Comprés; Michael Gerndt

The main problem when trying to optimize the parameters of libraries such as MPI is that there are many parameters that users can configure. Moreover, predicting the behavior of the library for each configuration is non-trivial. This makes it very difficult to select good values for these parameters. This paper proposes a model for autotuning MPI applications. The model is developed to analyze different parameter configurations and is expected to help users find the best performance for executing their applications. As part of the AutoTune project, our work ultimately aims at extending Periscope to apply automatic tuning to parallel applications. In particular, our objective is to provide a straightforward way of tuning MPI parallel codes. The output of the framework is a set of tuning recommendations that can be integrated into the production version of the code. Experimental tests demonstrate that this methodology can lead to significant performance improvements.
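Because the full cross product of MPI parameter values is too large to measure exhaustively, a tuner can search it one parameter at a time. The sketch below shows such a coordinate-descent search; the parameter names (`eager_limit_kb`, `buffer_size_kb`) and the synthetic cost function are assumptions standing in for real MPI runtime knobs and measured runtimes, and this is not the PTF search strategy itself.

```python
# Hedged sketch of a per-parameter (coordinate-descent) search over MPI-style
# runtime parameters: tune one parameter at a time, keep the best value found,
# and repeat until a full sweep yields no improvement. Names are illustrative.

def run_benchmark(cfg):
    # Stand-in for timing the application under one configuration.
    # Synthetic cost: prefers a mid-range eager limit and larger buffers.
    return (abs(cfg["eager_limit_kb"] - 64) * 0.01
            + 100.0 / cfg["buffer_size_kb"])

def coordinate_descent(space, start):
    """Sweep each parameter's candidate values, accepting improvements,
    until no sweep improves the measured time."""
    cfg = dict(start)
    best = run_benchmark(cfg)
    improved = True
    while improved:
        improved = False
        for param, values in space.items():
            for v in values:
                trial = {**cfg, param: v}
                t = run_benchmark(trial)
                if t < best:
                    cfg, best, improved = trial, t, True
    return cfg, best

space = {"eager_limit_kb": [16, 32, 64, 128],
         "buffer_size_kb": [64, 256, 1024]}
cfg, runtime = coordinate_descent(space, {"eager_limit_kb": 16,
                                          "buffer_size_kb": 64})
print(cfg)
```

The trade-off is the usual one: coordinate descent needs far fewer benchmark runs than the full cross product, at the cost of possibly converging to a local rather than global optimum when parameters interact.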

Collaboration


Dive into Eduardo César's collaborations.

Top Co-Authors

Joan Sorribes, Autonomous University of Barcelona
Anna Sikora, Autonomous University of Barcelona
Emilio Luque, Autonomous University of Barcelona
Tomàs Margalef, Autonomous University of Barcelona
Andreu Moreno, Autonomous University of Barcelona
Anna Morajko, Autonomous University of Barcelona
Remo Suppi, Autonomous University of Barcelona
Andrea Martínez, Autonomous University of Barcelona
Josep Jorba, Open University of Catalonia
J. Falguera, Autonomous University of Barcelona