Enric Tejedor
Polytechnic University of Catalonia
Publications
Featured research published by Enric Tejedor.
grid computing | 2014
Francesc Lordan; Enric Tejedor; Jorge Ejarque; Roger Rafanell; Javier Alvarez; Fabrizio Marozzo; Daniele Lezzi; Raül Sirvent; Domenico Talia; Rosa M. Badia
The rise of virtualized and distributed infrastructures has led to new challenges in making effective use of compute resources through the design and orchestration of distributed applications. As legacy, monolithic applications are replaced with service-oriented applications, questions arise about the steps to be taken to maximize the usefulness of the infrastructures and to provide users with tools for the development and execution of distributed applications. One of the issues to be solved is the existence of multiple cloud solutions that are not interoperable, which forces users either into lock-in with a specific provider or into continuously adapting their applications. With the objective of simplifying the programmer's work, ServiceSs provides a straightforward programming model and an execution framework that helps abstract applications from the actual execution environment. This paper presents how ServiceSs transparently interoperates with multiple providers, implementing the appropriate interfaces to execute scientific applications on federated clouds.
cluster computing and the grid | 2008
Enric Tejedor; Rosa M. Badia
This paper presents the design, implementation and evaluation of COMP Superscalar, a new and componentised version of the GRID superscalar framework that enables the easy development of Grid-unaware applications. By means of a simple programming model, COMP Superscalar keeps the Grid as transparent as possible to the programmer. Moreover, the performance of the applications is optimized by exploiting their inherent concurrency when executing them on the Grid. The runtime of COMP Superscalar has been designed to follow the Grid Component Model (GCM) and is therefore formed by several components, each one encapsulating a given functionality identified in GRID superscalar.
high performance distributed computing | 2011
Enric Tejedor; Montse Farreras; David Grove; Rosa M. Badia; Gheorghe Almasi; Jesús Labarta
Programming for large-scale, multicore-based architectures requires adequate tools that offer ease of programming while not hindering application performance. StarSs is a family of parallel programming models based on automatic function-level parallelism that targets productivity. StarSs deploys a data-flow model: it analyses dependencies between tasks and manages their execution, exploiting their concurrency as much as possible. We introduce Cluster Superscalar (ClusterSs), a new StarSs member designed to execute on clusters of SMPs. ClusterSs tasks are asynchronously created and assigned to the available resources with the support of the IBM APGAS runtime, which provides an efficient and portable communication layer based on one-sided communication. This short paper gives an overview of the ClusterSs design on top of APGAS, as well as the conclusions of a productivity study; in this study, ClusterSs was compared to the IBM X10 language, both in terms of programmability and performance. A technical report is available with the details.
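To illustrate the data-flow idea described above, here is a minimal, self-contained sketch in Python of how a runtime can derive task dependencies from the data each task reads and writes. It is purely illustrative and does not reflect the actual ClusterSs or APGAS interfaces; all names are hypothetical.

    from collections import defaultdict

    class DataFlowGraph:
        """Toy data-flow runtime: tasks declare reads/writes, edges follow."""

        def __init__(self):
            self.last_writer = {}          # datum name -> id of task that last wrote it
            self.edges = defaultdict(set)  # task id -> ids of dependent tasks
            self.tasks = {}                # task id -> callable
            self._next_id = 0

        def add_task(self, fn, reads=(), writes=()):
            tid = self._next_id
            self._next_id += 1
            self.tasks[tid] = fn
            # A task depends on the last writer of every datum it touches
            # (anti-dependences, i.e. write-after-read, are ignored for brevity).
            for datum in tuple(reads) + tuple(writes):
                if datum in self.last_writer:
                    self.edges[self.last_writer[datum]].add(tid)
            for datum in writes:
                self.last_writer[datum] = tid
            return tid

        def run(self):
            # Execute in dependency order; tasks within the same wave are
            # independent and could run concurrently on different nodes.
            indegree = {tid: 0 for tid in self.tasks}
            for dsts in self.edges.values():
                for dst in dsts:
                    indegree[dst] += 1
            ready = [tid for tid, deg in indegree.items() if deg == 0]
            while ready:
                wave, ready = ready, []
                for tid in wave:
                    self.tasks[tid]()
                    for dst in self.edges[tid]:
                        indegree[dst] -= 1
                        if indegree[dst] == 0:
                            ready.append(dst)

    g = DataFlowGraph()
    g.add_task(lambda: print("produce A"), writes=["A"])
    g.add_task(lambda: print("produce B"), writes=["B"])
    g.add_task(lambda: print("consume A, B"), reads=["A", "B"], writes=["C"])
    g.run()  # the two producers form one wave; the consumer runs after both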
ieee international conference on cloud computing technology and science | 2011
Enric Tejedor; Jorge Ejarque; Francesc Lordan; Roger Rafanell; Javier Alvarez; Daniele Lezzi; Raül Sirvent; Rosa M. Badia
Cloud computing is inherently service-oriented: cloud applications are delivered to consumers as services via the Internet. Therefore, these applications can potentially benefit from the Service-Oriented Architecture (SOA) principles: they can be programmed as added-value services composed of pre-existing ones, thus favouring code reuse. However, new programming models are required to simplify their development, along with systems that are capable of orchestrating the execution of the resulting SaaS in the Cloud. In that regard, this paper presents Service Superscalar (ServiceSs), an alternative to existing PaaS which provides a programming model and execution runtime to ease the development and execution of service-based applications in clouds. ServiceSs is a task-based model: the user is only required to select the tasks, which can be services or regular methods, to be spawned asynchronously. The application, a composite service, is programmed in a totally sequential way, and no API calls need to be included in the code. The runtime is in charge of automatically orchestrating the execution of the tasks in the Cloud, as well as of elastically deploying new virtual resources depending on the load. After describing the main characteristics of the programming model and the runtime, we evaluate the productivity of ServiceSs and show how it offers a good trade-off between programmability and runtime performance.
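As a rough analogue of this composition style, the following Python sketch shows how selected functions can be spawned asynchronously while the composite remains sequential-looking. The real ServiceSs model is Java-based, so this is only an approximation built on standard Python futures; the task decorator and the example functions are hypothetical.

    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor()

    def task(fn):
        # Stand-in for task selection: each call to fn is spawned
        # asynchronously instead of being executed inline.
        def spawn(*args):
            # Resolve future arguments first, mimicking the dependency
            # handling that the ServiceSs runtime performs transparently.
            resolved = [a.result() if hasattr(a, "result") else a for a in args]
            return pool.submit(fn, *resolved)
        return spawn

    @task
    def fetch(region):        # in ServiceSs this could be a service call
        return f"data({region})"

    @task
    def summarize(a, b):      # or a regular method task
        return f"summary of {a} and {b}"

    # The composite service reads as purely sequential code.
    east = fetch("east")
    west = fetch("west")
    print(summarize(east, west).result())

Note that, unlike this sketch, the abstract stresses that ServiceSs requires no API calls in the application code at all: spawning and synchronization are handled entirely by the runtime.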
Concurrency and Computation: Practice and Experience | 2012
Enric Tejedor; Montse Farreras; David Grove; Rosa M. Badia; Gheorghe Almasi; Jesús Labarta
Programming for large-scale, multicore-based architectures requires adequate tools that offer ease of programming and do not hinder application performance. StarSs is a family of parallel programming models based on automatic function-level parallelism that targets productivity. StarSs deploys a data-flow model: it analyzes dependencies between tasks and manages their execution, exploiting their concurrency as much as possible.
International Journal of High Performance Computing Applications | 2017
Enric Tejedor; Yolanda Becerra; Guillem Alomar; Anna Queralt; Rosa M. Badia; Jordi Torres; Toni Cortes; Jesús Labarta
The use of the Python programming language for scientific computing has been gaining momentum in recent years. Its compact, readable syntax and its comprehensive set of scientific libraries are two important characteristics that favour its adoption. Nevertheless, Python still lacks a solution for easily parallelizing generic scripts on distributed infrastructures, since the current alternatives mostly require the use of APIs for message passing or are restricted to embarrassingly parallel computations. To that end, this paper presents PyCOMPSs, a framework that facilitates the development of parallel computational workflows in Python. In this approach, the user programs her script in a sequential fashion and decorates the functions to be run as asynchronous parallel tasks. A runtime system is in charge of exploiting the inherent concurrency of the script, detecting the data dependencies between tasks and spawning them onto the available resources. Furthermore, we show how this programming model can be built on top of a Big Data storage architecture, where the data stored in the backend is abstracted and accessed from the application in the form of persistent objects.
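A minimal example of the decorator-based style the abstract describes might look as follows. This mirrors the PyCOMPSs API (a task decorator plus a synchronization call), though exact decorator parameters can vary across versions, and the script must be launched through the COMPSs runtime rather than a plain Python interpreter.

    from pycompss.api.task import task
    from pycompss.api.api import compss_wait_on

    @task(returns=int)
    def square(x):
        # Each call becomes an asynchronous task scheduled by the runtime.
        return x * x

    # The main program is written sequentially; the runtime detects that the
    # square tasks are independent and can run them in parallel.
    partials = [square(i) for i in range(10)]
    partials = compss_wait_on(partials)  # synchronize: fetch the real values
    print(sum(partials))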
grid computing | 2010
Ramon Nou; Jacobo Giralt; Julita Corbalan; Enric Tejedor; J. Oriol Fitó; Josep M. Perez; Toni Cortes
Designing a job management system for the Grid is a non-trivial task. While complex middleware can provide many features, it often implies sacrificing performance. Such performance loss is especially noticeable for small jobs. A job manager's design also affects the capabilities of the monitoring system. We believe that monitoring a job or asking for a job's status should be fast and easy, like doing a simple 'ps'. In this paper, we present the job management of XtreemOS, a Linux-based operating system that supports Virtual Organizations for the Grid. This management is performed inside the Application Execution Manager (AEM). We evaluate its performance using only one job manager plus the built-in monitoring infrastructure. Furthermore, we present a set of real-world applications using AEM and its features. In XtreemOS we avoid reinventing the wheel and use the Linux paradigm as an abstraction.
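The 'ps' analogy suggests that a status query should cost no more than a table lookup. The toy Python sketch below captures only that idea; it is not the XtreemOS/AEM interface, and all names are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Job:
        jid: int
        command: str
        state: str  # e.g. "RUNNING", "DONE"

    class JobTable:
        """Process-table-like registry: status queries are O(1) lookups."""

        def __init__(self):
            self._jobs = {}

        def submit(self, jid, command):
            self._jobs[jid] = Job(jid, command, "RUNNING")

        def status(self, jid):
            # The cheap, 'ps'-like query: a single dictionary lookup,
            # with no round-trip through heavyweight middleware.
            return self._jobs[jid].state

    table = JobTable()
    table.submit(1, "simulate --steps 1000")
    print(table.status(1))  # RUNNING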
grid computing | 2011
Enric Tejedor; Francesc Lordan; Rosa M. Badia
While object-oriented programming (OOP) and parallelism originated as separate areas, there have been many attempts to bring those paradigms together. Few of them, though, meet the challenge of programming for parallel architectures and distributed platforms: offering good development expressiveness while not hindering application performance. This work presents the introduction of OOP into a parallel programming model for Java applications that targets productivity. In this model, one can develop a Java application in a totally sequential fashion, without using any new library or language construct, thus favouring programmability. We show how this model offers a good trade-off between ease of programming and runtime performance. A comparison with other approaches is provided, evaluating the key aspects of the model and discussing some results for a set of the NAS parallel benchmarks.
CoreGRID Workshop - Making Grids Work | 2008
Enric Tejedor; Rosa M. Badia; Thilo Kielmann; Vladimir Getov
This paper presents the Integrated Toolkit, a framework which enables the easy development of Grid-unaware applications. While keeping the Grid transparent to the programmer, the Integrated Toolkit tries to optimize the performance of such applications by exploiting their inherent concurrency when executing them on the Grid. The Integrated Toolkit is designed to follow the Grid Component Model (GCM) and is therefore formed by several components, each one encapsulating a given functionality identified in the GRID superscalar runtime.
international conference on conceptual structures | 2010
Enric Tejedor; Rosa M. Badia; Romina Royo; Josep Lluís Gelpí
The continuously increasing size of biological sequence databases has motivated the development of analysis suites that, by means of parallelization, are capable of performing faster searches on such databases. However, many of these tools are not suitable for execution on mid-to-large-scale parallel infrastructures such as computational Grids. This paper shows how COMP Superscalar can be used to effectively parallelize a sequence analysis program on the Grid. In particular, we present a sequential version of the HMMER hmmpfam tool that, when run with COMP Superscalar, is decomposed into tasks and run on a set of distributed resources, without burdening the programmer with parallelization efforts. Although performance is not a main objective of this work, we also present some test results where COMP Superscalar, using a new pre-scheduling technique, clearly outperforms a well-known parallelization of the hmmpfam algorithm.
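The decomposition the paper applies to hmmpfam can be pictured as follows: split the database into fragments, score each fragment as an independent task, and merge the partial results. The Python sketch below uses standard multiprocessing as a stand-in for the COMP Superscalar runtime; score_chunk is a hypothetical placeholder for an actual hmmpfam invocation, not HMMER code.

    from concurrent.futures import ProcessPoolExecutor

    def score_chunk(chunk):
        # Placeholder for running hmmpfam on one database fragment.
        return [(seq, len(seq)) for seq in chunk]

    def chunked(seqs, size):
        # Split the sequence database into independent fragments.
        for i in range(0, len(seqs), size):
            yield seqs[i:i + size]

    if __name__ == "__main__":
        database = ["MKTAYIAKQR", "GAVLKVLTT", "MSDNKQEELV", "AGHKLMT"]
        with ProcessPoolExecutor() as pool:
            results = pool.map(score_chunk, chunked(database, 2))
        # Merge per-fragment results, as the runtime gathers task outputs.
        merged = [hit for part in results for hit in part]
        print(merged)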