Francesc Lordan
Barcelona Supercomputing Center
Publications
Featured research published by Francesc Lordan.
Grid Computing | 2014
Francesc Lordan; Enric Tejedor; Jorge Ejarque; Roger Rafanell; Javier Alvarez; Fabrizio Marozzo; Daniele Lezzi; Raül Sirvent; Domenico Talia; Rosa M. Badia
The rise of virtualized and distributed infrastructures has led to new challenges in making effective use of compute resources through the design and orchestration of distributed applications. As legacy, monolithic applications are replaced with service-oriented applications, questions arise about the steps to be taken in order to maximize the usefulness of the infrastructures and to provide users with tools for the development and execution of distributed applications. One of the issues to be solved is the existence of multiple cloud solutions that are not interoperable, which forces the user to be locked into a specific provider or to continuously adapt applications. With the objective of simplifying the programmer's work, ServiceSs provides a straightforward programming model and an execution framework that helps abstract applications from the actual execution environment. This paper presents how ServiceSs transparently interoperates with multiple providers, implementing the appropriate interfaces to execute scientific applications on federated clouds.
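To illustrate the interoperability idea described above, the following is a minimal Java sketch of a provider-agnostic connector abstraction. The interface, class and method names (CloudConnector, OcciConnector, createResource) are hypothetical assumptions for illustration, not the actual ServiceSs API.

```java
// Hypothetical connector abstraction: only this layer changes per cloud provider,
// so applications and the task orchestration above it stay untouched.
interface CloudConnector {
    /** Request a virtual machine with the given number of cores; return its address. */
    String createResource(int cores) throws Exception;

    /** Release the virtual machine identified by the given address. */
    void destroyResource(String address) throws Exception;
}

/** Trivial stand-in for an OCCI-compliant provider. */
class OcciConnector implements CloudConnector {
    public String createResource(int cores) { return "occi-vm-" + cores + "-cores"; }
    public void destroyResource(String address) { /* invoke the provider's delete operation */ }
}

public class FederationDemo {
    public static void main(String[] args) throws Exception {
        CloudConnector connector = new OcciConnector();  // selected per provider at deployment
        String vm = connector.createResource(4);         // the runtime acquires resources...
        System.out.println("Acquired " + vm);
        connector.destroyResource(vm);                   // ...and releases them; the application never sees these calls
    }
}
```

The point of such a design is that only the connector implementation differs between providers, so application code stays unchanged when moving across a federated cloud.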
International Conference on Parallel Processing | 2012
Fabrizio Marozzo; Francesc Lordan; Roger Rafanell; Daniele Lezzi; Domenico Talia; Rosa M. Badia
The advent of Cloud computing has given researchers the ability to access resources that satisfy their growing needs, which could not be satisfied by traditional computing resources such as PCs and locally managed clusters. On the other hand, this ability has opened new challenges for the execution of their computational work and for the management of massive amounts of data on resources provided by different private and public infrastructures. COMP Superscalar (COMPSs) is a programming framework that provides a programming model and a runtime that ease the development of applications for distributed environments and their execution on a wide range of computational infrastructures. COMPSs has recently been extended in order to be interoperable with several cloud technologies such as Amazon, OpenNebula, Emotive and other OCCI-compliant offerings. This paper presents the extension of this interoperability layer to support the execution of COMPSs applications on the Windows Azure Platform. The framework has been evaluated through the porting of a data mining workflow to COMPSs and its execution on a hybrid testbed.
IEEE International Conference on Cloud Computing Technology and Science | 2011
Enric Tejedor; Jorge Ejarque; Francesc Lordan; Roger Rafanell; Javier Alvarez; Daniele Lezzi; Raül Sirvent; Rosa M. Badia
Cloud computing is inherently service-oriented: cloud applications are delivered to consumers as services via the Internet. Therefore, these applications can potentially benefit from the Service-Oriented Architecture (SOA) principles: they can be programmed as added-value services composed of pre-existing ones, thus favouring code reuse. However, new programming models are required to simplify their development, along with systems that are capable of orchestrating the execution of the resulting SaaS in the Cloud. In that regard, this paper presents Service Superscalar (ServiceSs), an alternative to existing PaaS which provides a programming model and execution runtime to ease the development and execution of service-based applications in clouds. ServiceSs is a task-based model: the user is only required to select the tasks, which can be services or regular methods, to be spawned asynchronously. The application, a composite service, is programmed in a totally sequential way and no API calls need to be included in the code. The runtime is in charge of automatically orchestrating the execution of the tasks in the Cloud, as well as of elastically deploying new virtual resources depending on the load. After describing the main characteristics of the programming model and the runtime, we evaluate the productivity of ServiceSs and show how it offers a good trade-off between programmability and runtime performance.
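As a rough illustration of the task-based model described above, here is a minimal sketch in plain Java: the application is fully sequential and the tasks are merely selected in a separate annotated interface. The @Task annotation is defined locally for the example; the real ServiceSs/COMPSs annotations and their parameters differ.

```java
// Sketch of the task-based, fully sequential programming style: the main program
// contains no runtime API calls, and tasks are only declared in an interface.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
@interface Task { }   // illustrative stand-in for the real task annotation

// Task selection: methods listed here would be spawned asynchronously by the runtime.
interface CompositeItf {
    @Task
    double simulate(int seed);

    @Task
    double merge(double a, double b);
}

public class Composite {
    static double simulate(int seed) { return seed * 0.5; }   // regular method, run as a task
    static double merge(double a, double b) { return a + b; } // reduction task

    // The composite service itself: written sequentially, with no API calls.
    public static void main(String[] args) {
        double r1 = simulate(1);
        double r2 = simulate(2);
        double total = merge(r1, r2);   // the runtime would detect the data dependency here
        System.out.println(total);      // and orchestrate the three tasks in the Cloud
    }
}
```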
Journal of Grid Computing | 2017
Francesc Lordan; Rosa M. Badia
The advent of the Cloud and the popularization of mobile devices have led to a shift in computing access where users have an interactive display and heavy computations run remotely, on Cloud servers. COMPSs-Mobile is a framework that aims to ease the development of energy-efficient and high-performing applications for this kind of environment. The framework provides an infrastructure-unaware programming model that allows developers to code regular Android applications whose computation is transparently parallelized and partially offloaded to remote resources. This paper gives an overview of the programming model and describes the internal components of the toolkit that supports it, focusing on the offloading and checkpointing mechanisms. It also presents the results of some tests conducted to evaluate the behavior of the solution and to measure the potential benefits in Android applications.
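As a toy illustration of the core decision behind an offloading mechanism of this kind, the sketch below compares a rough local-execution estimate against a remote estimate that includes transfer time. The cost model, numbers and class names are illustrative assumptions, not COMPSs-Mobile internals.

```java
// Minimal, hypothetical offloading decision: run a task on the handset or ship it
// to the Cloud, depending on estimated cost.
public class OffloadDecision {

    enum Target { LOCAL, REMOTE }

    /**
     * Pick the cheaper target given rough estimates:
     * local cost = local execution time; remote cost = transfer time + remote execution time.
     */
    static Target choose(double localExecMs, double remoteExecMs,
                         double payloadBytes, double bandwidthBytesPerMs) {
        double transferMs = payloadBytes / bandwidthBytesPerMs;
        return (remoteExecMs + transferMs < localExecMs) ? Target.REMOTE : Target.LOCAL;
    }

    public static void main(String[] args) {
        // A 2 MB input over a 100 KB/ms link, with the Cloud 4x faster than the handset.
        Target t = choose(800, 200, 2_000_000, 100_000);
        System.out.println("Run task on: " + t);   // REMOTE in this example
    }
}
```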
Parallel, Distributed and Network-Based Processing | 2016
Francesc Lordan; Jorge Ejarque; Raül Sirvent; Rosa M. Badia
Cloud technologies are being adopted by increasingly diverse types of stakeholders, and this success creates a side-effect problem: the energy spent by this kind of infrastructure grows every day. With the objective of reducing energy consumption when programming applications for cloud infrastructures, we have implemented energy-aware mechanisms in the COMPSs Programming Model, within the context of the ASCETiC Project. In this paper, we demonstrate that application-level scheduling can have a big impact on the energy consumed by an application when executed in a heterogeneous cloud. We have implemented an energy-aware scheduling mechanism in COMPSs, together with a versioning technique, and we have run experiments with a use case from the real estate sector that confirm our hypotheses.
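The sketch below illustrates the intuition behind combining energy-aware scheduling with versioning: when several implementations of a task are available, pick the one with the lowest estimated energy on the candidate resource. The Version record, the energy model and the numbers are illustrative assumptions, not the actual COMPSs scheduler.

```java
// Hypothetical energy-aware choice among task versions: energy = power x time.
import java.util.Comparator;
import java.util.List;

public class EnergyAwareChoice {

    record Version(String name, double powerWatts, double timeSeconds) {
        double energyJoules() { return powerWatts * timeSeconds; }
    }

    /** Select the version with the lowest estimated energy for a given resource. */
    static Version pickLowestEnergy(List<Version> versions) {
        return versions.stream()
                .min(Comparator.comparingDouble(Version::energyJoules))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Version> versions = List.of(
                new Version("cpu-sequential", 40.0, 120.0),   // 4800 J
                new Version("cpu-multithread", 95.0, 35.0),   // 3325 J
                new Version("gpu-offload", 150.0, 15.0));     // 2250 J
        System.out.println("Selected: " + pickLowestEnergy(versions).name());
    }
}
```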
European Conference on Parallel Processing | 2013
Daniele Lezzi; Francesc Lordan; Roger Rafanell; Rosa M. Badia
Recently, cloud services have been evaluated by scientific communities as a viable solution to satisfy their computing needs while reducing the cost of ownership and operation to a minimum. The analysis of the adoption of the cloud computing model for eScience has identified several areas of improvement, such as federation management and interoperability between providers. Portability between cloud vendors is widely recognized as a way to avoid the risk of locking users into proprietary systems, a barrier to the full adoption of clouds.
Cluster Computing and the Grid | 2016
Francesc Lordan; Rosa M. Badia
The advent of the Cloud and the popularization of mobile devices have led to a shift in computing access: users have an interactive display while the real computation is performed remotely, in the Cloud. COMPSs-Mobile is a framework that aims to ease the development of energy-efficient and high-performing applications for this environment. The framework provides an infrastructure-unaware programming model that allows developers to code regular Android applications that are transparently parallelized and partially offloaded to remote resources. This paper gives an overview of the programming model and describes the internal components of the toolkit that supports it, focusing on the offloading and checkpointing mechanisms. It also presents the results of some tests conducted to evaluate the behavior of the solution and to measure the potential benefits in Android applications.
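To make the checkpointing idea mentioned above concrete, here is a minimal, hypothetical sketch that persists the results of completed tasks so only unfinished work is replayed after a disconnection or restart. The file layout and class names are assumptions for illustration, not COMPSs-Mobile's implementation.

```java
// Hypothetical task checkpointing: completed results are written to disk so a
// restarted execution can skip tasks that already finished.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Checkpoint {
    private final Path dir;

    Checkpoint(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    /** Record that a task finished, together with its serialized result. */
    void save(int taskId, byte[] result) throws IOException {
        Files.write(dir.resolve("task-" + taskId), result);
    }

    /** On restart, tasks whose results were already checkpointed are not re-executed. */
    boolean isDone(int taskId) {
        return Files.exists(dir.resolve("task-" + taskId));
    }

    public static void main(String[] args) throws IOException {
        Checkpoint cp = new Checkpoint(Path.of("checkpoints"));
        if (!cp.isDone(1)) {
            byte[] result = "42".getBytes();   // stand-in for running the real task
            cp.save(1, result);
        }
        System.out.println("Task 1 done: " + cp.isDone(1));
    }
}
```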
Grid Computing | 2011
Enric Tejedor; Francesc Lordan; Rosa M. Badia
While object-oriented programming (OOP) and parallelism originated as separate areas, there have been many attempts to bring those paradigms together. Few of them, though, meet the challenge of programming for parallel architectures and distributed platforms: offering good development expressiveness while not hindering application performance. This work presents the introduction of OOP in a parallel programming model for Java applications which targets productivity. In this model, one can develop a Java application in a totally sequential fashion, without using any new library or language construct, thus favouring programmability. We show how this model offers a good trade-off between ease of programming and runtime performance. A comparison with other approaches is provided, evaluating the key aspects of the model and discussing some results for a set of the NAS parallel benchmarks.
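A minimal sketch of the kind of plain, sequential object-oriented Java code the model targets is shown below: no new library or language construct appears in the application, and a runtime like the one described could run the method invocations as asynchronous tasks, synchronizing only when a result is actually read. The classes here are illustrative, not taken from the paper.

```java
// Ordinary sequential Java with objects; nothing in this code refers to a runtime.
import java.util.List;

class Accumulator {
    private double total = 0;

    // In the model, an invocation like this could become a task operating on the object.
    void add(double value) { total += value; }

    double getTotal() { return total; }
}

public class SequentialApp {
    public static void main(String[] args) {
        List<Double> inputs = List.of(1.0, 2.0, 3.0, 4.0);
        Accumulator acc = new Accumulator();
        for (double v : inputs) {
            acc.add(v);                       // candidate tasks; dependencies on 'acc' are preserved
        }
        System.out.println(acc.getTotal());   // synchronization point: the value is needed here
    }
}
```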
Grid Economics and Business Models | 2016
Karim Djemame; Richard E. Kavanagh; Django Armstrong; Francesc Lordan; Jorge Ejarque; Mario Macías; Raül Sirvent; Jordi Guitart; Rosa M. Badia
Energy consumption is a key concern in cloud computing. The paper reports on a cloud architecture to support energy efficiency at service construction, deployment, and operation. This is achieved through self-adaptation applied in isolation within each of the SaaS, PaaS and IaaS layers. The self-adaptation mechanisms are discussed, as well as their implementation and evaluation. The experimental results show that the overall architecture is capable of adapting to meet the energy goals of applications on a per-layer basis.
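As a highly simplified, hypothetical illustration of intra-layer self-adaptation, the sketch below monitors a power metric and triggers an adaptation action when an application's energy goal is exceeded; the interfaces, values and thresholds are assumptions, not the project's actual components.

```java
// Toy self-adaptation step for one layer: observe, compare against the goal, adapt.
public class SelfAdaptationLoop {

    interface Monitor { double currentPowerWatts(); }
    interface Adaptation { void apply(); }

    static void step(Monitor monitor, Adaptation adaptation, double powerGoalWatts) {
        double observed = monitor.currentPowerWatts();
        if (observed > powerGoalWatts) {
            adaptation.apply();   // e.g. reconfigure or rescale within this layer only
        }
    }

    public static void main(String[] args) {
        step(() -> 180.0,                                    // measured power (illustrative)
             () -> System.out.println("Adapting layer..."),  // stand-in adaptation action
             150.0);                                         // per-application energy goal
    }
}
```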
IEEE International Conference on High Performance Computing Data and Analytics | 2017
Francesc Lordan; Rosa M. Badia; Wen-mei W. Hwu
Using the GPUs embedded in mobile devices allows for increasing the performance of the applications running on them while reducing the energy consumption of their execution. This article presents a task-based solution for adaptive, collaborative heterogeneous computing in mobile cloud environments. To implement our proposal, we extend the COMPSs-Mobile framework, an implementation of the COMPSs programming model for building mobile applications that offload part of the computation to the Cloud, to support offloading computation to GPUs through OpenCL. To evaluate our solution, we subject the prototype to three benchmark applications representing different application patterns.
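The sketch below illustrates the multi-version task idea behind the GPU extension: the same task has a plain Java (CPU) implementation and an OpenCL kernel source intended for the GPU, and a scheduler would pick one per device. Compiling and launching the kernel (for example through an OpenCL binding) is deliberately omitted; all names here are illustrative, not the framework's API.

```java
// Hypothetical heterogeneous task with two versions of the same computation.
import java.util.Arrays;

public class HeterogeneousTask {

    // CPU version of the task: element-wise vector addition in plain Java.
    static float[] vecAddCpu(float[] a, float[] b) {
        float[] c = new float[a.length];
        for (int i = 0; i < a.length; i++) c[i] = a[i] + b[i];
        return c;
    }

    // GPU version: the equivalent OpenCL C kernel source, to be compiled and
    // launched by the runtime on an available device.
    static final String VEC_ADD_KERNEL =
            "__kernel void vec_add(__global const float* a,"
          + "                      __global const float* b,"
          + "                      __global float* c) {"
          + "    int i = get_global_id(0);"
          + "    c[i] = a[i] + b[i];"
          + "}";

    public static void main(String[] args) {
        boolean gpuAvailable = false;   // a scheduler would probe the device at run time
        float[] a = {1, 2, 3}, b = {4, 5, 6};
        float[] c = gpuAvailable ? null /* launch VEC_ADD_KERNEL instead */ : vecAddCpu(a, b);
        System.out.println(Arrays.toString(c));
    }
}
```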