Publication


Featured research published by Jorge Ejarque.


Grid Computing | 2014

ServiceSs: An Interoperable Programming Framework for the Cloud

Francesc Lordan; Enric Tejedor; Jorge Ejarque; Roger Rafanell; Javier Alvarez; Fabrizio Marozzo; Daniele Lezzi; Raül Sirvent; Domenico Talia; Rosa M. Badia

The rise of virtualized and distributed infrastructures has led to new challenges in making effective use of compute resources through the design and orchestration of distributed applications. As legacy, monolithic applications are replaced with service-oriented applications, questions arise about the steps to be taken to maximize the usefulness of the infrastructures and to provide users with tools for the development and execution of distributed applications. One of the issues to be solved is the existence of multiple cloud solutions that are not interoperable, which forces users either to remain locked in to a specific provider or to continuously adapt their applications. With the objective of simplifying the programmer's work, ServiceSs provides a straightforward programming model and an execution framework that abstracts applications from the actual execution environment. This paper presents how ServiceSs transparently interoperates with multiple providers, implementing the appropriate interfaces to execute scientific applications on federated clouds.
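The interoperability layer described above suggests a provider-neutral connector abstraction. The sketch below is purely illustrative: the class and method names (CloudConnector, create_vm, destroy_vm) are hypothetical and not the actual ServiceSs interfaces; it only shows the general idea of hiding each provider's API behind a common interface.

from abc import ABC, abstractmethod

class CloudConnector(ABC):
    """Provider-neutral interface; the runtime only talks to this abstraction."""

    @abstractmethod
    def create_vm(self, image: str, cpus: int, memory_gb: int) -> str:
        """Provision a VM on the provider and return its identifier."""

    @abstractmethod
    def destroy_vm(self, vm_id: str) -> None:
        """Release the VM when the runtime scales down."""

class ExampleProviderConnector(CloudConnector):
    # A real connector would call the provider's own API or SDK here.
    def create_vm(self, image, cpus, memory_gb):
        print(f"requesting {cpus} CPUs / {memory_gb} GB from image {image}")
        return "vm-0001"

    def destroy_vm(self, vm_id):
        print(f"releasing {vm_id}")

# The runtime can then target any federated provider through the same calls:
connector = ExampleProviderConnector()
vm = connector.create_vm("ubuntu-22.04", cpus=4, memory_gb=8)
connector.destroy_vm(vm)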


IEEE International Conference on Cloud Computing Technology and Science | 2010

A Multi-agent Approach for Semantic Resource Allocation

Jorge Ejarque; Raül Sirvent; Rosa M. Badia

This paper presents a new approach to the Semantically Enhanced Resource Allocation (SERA), distributed as a multi-agent system. It describes a distributed resource allocation process which combines the benefits of Semantic Web technologies, which ease the integration of multiple resource providers in the Cloud, with agent technologies, which coordinate and adapt the execution across the different providers. The allocation process is based on negotiation between agents, which allows customer and provider policies to be combined, producing scheduling results that satisfy both parties. The SERA agents can be deployed in multiple locations, improving the system's scalability. The new approach makes SERA suitable for working as a scheduler inside a service provider as well as a metascheduler integrating resources from different providers and platforms (clusters, grids, clouds, etc.).


Network Computing and Applications | 2009

Introducing Virtual Execution Environments for Application Lifecycle Management and SLA-Driven Resource Distribution within Service Providers

Íñigo Goiri; Ferran Julià; Jorge Ejarque; Marc de Palol; Rosa M. Badia; Jordi Guitart; Jordi Torres

Resource management is a key challenge that service providers must adequately face in order to ensure their profitability. This paper describes a proof-of-concept framework for facilitating resource management in service providers, which allows costs to be reduced while fulfilling the quality of service agreed with the customers. This is accomplished by means of virtualization. Our approach provides application-specific virtual environments and consolidates them in order to achieve a better utilization of the provider's resources. In addition, it implements self-adaptive capabilities for dynamically distributing the provider's resources among these virtual environments based on Service Level Agreements. The proposed solution has been implemented as a part of the Semantically-Enhanced Resource Allocator prototype developed within the BREIN European project. The evaluation shows that our prototype is able to react in a very short time under changing conditions and avoid SLA violations by efficiently rescheduling the resources.
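As a purely illustrative sketch (not the algorithm of the BREIN prototype), the assumed redistribute helper below shows the general idea of SLA-driven redistribution: virtual environments that fall short of their agreed performance target receive a larger share of the host's CPU.

def redistribute(total_cpu, environments):
    """environments: list of dicts with 'name', 'target' (agreed SLA metric,
    e.g. requests/s) and 'measured' (currently observed performance)."""
    # An environment running below its target gets a proportionally larger
    # demand weight; one meeting its target keeps a baseline weight of 1.
    demand = {e["name"]: max(e["target"] / max(e["measured"], 1e-9), 1.0)
              for e in environments}
    total = sum(demand.values())
    return {name: total_cpu * d / total for name, d in demand.items()}

shares = redistribute(100, [
    {"name": "web-vm",   "target": 200, "measured": 120},  # below SLA: more CPU
    {"name": "batch-vm", "target": 50,  "measured": 60},   # meeting SLA: baseline
])
print(shares)  # {'web-vm': 62.5, 'batch-vm': 37.5}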


IEEE International Conference on eScience | 2008

SLA-Driven Semantically-Enhanced Dynamic Resource Allocator for Virtualized Service Providers

Jorge Ejarque; M. de Palol; Íñigo Goiri; Ferran Julià; Jordi Guitart; Rosa M. Badia; Jordi Torres

In order to be profitable, service providers must be able to undertake complex management tasks such as provisioning, deployment, execution and adaptation in an autonomic way. This paper introduces a framework, the Semantically-Enhanced Resource Allocator (SERA), aimed at facilitating service provider management, reducing costs and at the same time fulfilling the QoS agreed with the customers. The SERA assigns resources depending on the information given by the service provider according to its business goals and on the resource requirements of the tasks. Tasks and resources are semantically described, and these descriptions are used to infer the resource assignments. Virtualization is used to provide a fully customized and isolated virtual environment for each task. In addition, the system supports fine-grained dynamic resource distribution among these virtual environments based on SLAs. The required adaptation is implemented using agents, guaranteeing each task enough resources to meet the agreed performance goals.


Future Generation Computer Systems | 2018

Dynamic energy-aware scheduling for parallel task-based application in cloud computing

Fredy Juarez; Jorge Ejarque; Rosa M. Badia

Green Computing is a recent trend in computer science which tries to reduce the energy consumption and carbon footprint produced by computers on distributed platforms such as clusters, grids, and clouds. Traditional scheduling solutions attempt to minimize processing times without taking the energy cost into account. One method for reducing energy consumption is to provide scheduling policies that allocate tasks to specific resources according to their impact on processing times and energy consumption. In this paper, we propose a real-time dynamic scheduling system to execute task-based applications efficiently on distributed computing platforms in order to minimize the energy consumption. Since scheduling tasks on multiprocessors is a well-known NP-hard problem and computing optimal solutions is not feasible, we present a polynomial-time algorithm that combines a set of heuristic rules and a resource allocation technique in order to obtain good solutions on an affordable time scale. The proposed algorithm minimizes a multi-objective function which combines the energy consumption and execution time according to the energy-performance importance factor provided by the resource provider or user, also taking into account sequence-dependent setup times between tasks, setup times and down times for virtual machines (VMs), and energy profiles for different architectures. A prototype implementation of the scheduler has been tested with different kinds of randomly generated DAGs as well as with real task-based COMPSs applications. We have tested the system with different instance sizes and importance factors, and we have evaluated which combination provides a better solution and energy savings. Moreover, we have also evaluated the overhead introduced by measuring the time needed to obtain scheduling solutions for different numbers of tasks, kinds of DAGs, and resources, concluding that our method is suitable for run-time scheduling.
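To illustrate how such an energy-performance importance factor trades energy against execution time, here is a minimal sketch of a weighted objective of the kind described above; the exact formulation and normalisation used in the paper may differ, and the candidate placements and reference values are invented for the example.

def objective(energy_j, time_s, alpha, energy_ref, time_ref):
    """alpha is an energy-performance importance factor in [0, 1]:
    alpha = 1 optimises energy only, alpha = 0 optimises execution time only."""
    return alpha * (energy_j / energy_ref) + (1 - alpha) * (time_s / time_ref)

# Example: choosing between two candidate placements for the same set of tasks.
candidates = {
    "low-power nodes": {"energy_j": 1200.0, "time_s": 340.0},
    "fast nodes":      {"energy_j": 2100.0, "time_s": 190.0},
}
alpha = 0.7  # the provider or user weights energy savings above speed
best = min(candidates, key=lambda name: objective(candidates[name]["energy_j"],
                                                  candidates[name]["time_s"],
                                                  alpha,
                                                  energy_ref=2100.0,
                                                  time_ref=340.0))
print(best)  # with alpha = 0.7 the low-power placement scores better (0.70 vs 0.87)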


IEEE International Conference on Cloud Computing Technology and Science | 2011

A Cloud-unaware Programming Model for Easy Development of Composite Services

Enric Tejedor; Jorge Ejarque; Francesc Lordan; Roger Rafanell; Javier Alvarez; Daniele Lezzi; Raül Sirvent; Rosa M. Badia

Cloud computing is inherently service-oriented: cloud applications are delivered to consumers as services via the Internet. Therefore, these applications can potentially benefit from the Service-Oriented Architecture (SOA) principles: they can be programmed as added-value services composed of pre-existing ones, thus favouring code reuse. However, new programming models are required to simplify their development, along with systems that are capable of orchestrating the execution of the resulting SaaS in the Cloud. In that regard, this paper presents Service Superscalar (ServiceSs), an alternative to existing PaaS which provides a programming model and execution runtime to ease the development and execution of service-based applications in clouds. ServiceSs is a task-based model: the user is only required to select the tasks, which can be services or regular methods, to be spawned asynchronously. The application, a composite service, is programmed in a totally sequential way and no API calls need to be included in the code. The runtime is in charge of automatically orchestrating the execution of the tasks in the Cloud, as well as of elastically deploying new virtual resources depending on the load. After describing the main characteristics of the programming model and the runtime, we evaluate the productivity of ServiceSs and show how it offers a good trade-off between programmability and runtime performance.
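To give a feel for this kind of task-based model, here is a minimal sketch written in the style of PyCOMPSs, the Python binding of the same COMPSs/ServiceSs family; the @task decorator and compss_wait_on follow the PyCOMPSs API, while the tasks themselves (parse, analyse, merge) are hypothetical examples and not code from the paper.

from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def parse(block):
    # A regular method selected as a task; the runtime runs it asynchronously.
    return [int(x) for x in block.split()]

@task(returns=1)
def analyse(numbers):
    return sum(numbers)

@task(returns=1)
def merge(a, b):
    return a + b

def composite(blocks):
    # The composite service is plain sequential code: no explicit calls for
    # task submission, data transfer or resource selection appear here.
    partials = [analyse(parse(b)) for b in blocks]
    total = partials[0]
    for p in partials[1:]:
        total = merge(total, p)
    return compss_wait_on(total)  # synchronise only when the result is needed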


IEEE International Conference on Cloud Computing Technology and Science | 2011

A Rule-based Approach for Infrastructure Providers' Interoperability

Jorge Ejarque; Javier Alvarez; Raül Sirvent; Rosa M. Badia

Cloud Computing is a new computing paradigm where a large amount of computing capacity is offered on demand and users pay only for what they use. Several Infrastructure Providers have adopted this approach, offering resources which are easily managed by means of web-based APIs. However, if a user wants to use different providers, resource management becomes tedious because each provider defines a different API, requiring a specific implementation for interacting with each of them. In this paper, we present a methodology for making provider interoperability easier. In this methodology, each provider's API is modeled by an ontology. Equivalences between these ontologies are modeled by rules, and messages used in one provider's API are converted into calls to another provider's API by applying these rules. With our approach, users interact with Infrastructure Providers using the API they are most familiar with, and the translation to the other APIs is done automatically by the system.
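The sketch below only illustrates the general idea of rule-based API translation; it uses a plain lookup table rather than the ontologies and rules of the paper, and the provider names, operations and parameters are hypothetical.

# Equivalence rules between two hypothetical provider APIs: each rule maps an
# operation name and its parameter names from provider A onto provider B.
RULES = {
    ("providerA", "create_server"): {
        "target_op": "run_instance",
        "params": {"flavour": "instance_type", "image_id": "ami"},
    },
}

def translate(provider, operation, params):
    rule = RULES[(provider, operation)]
    mapped = {rule["params"].get(key, key): value for key, value in params.items()}
    return rule["target_op"], mapped

# A request written against provider A is replayed as a provider B call.
op, args = translate("providerA", "create_server",
                     {"flavour": "small", "image_id": "debian-12"})
print(op, args)  # run_instance {'instance_type': 'small', 'ami': 'debian-12'}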


Parallel, Distributed and Network-Based Processing | 2016

Energy-Aware Programming Model for Distributed Infrastructures

Francesc Lordan; Jorge Ejarque; Raül Sirvent; Rosa M. Badia

Cloud technologies are adopted by more and more diverse types of stakeholders every day, and this success creates a side-effect problem: the energy spent by this kind of infrastructure grows larger every day. With the objective of reducing energy consumption when programming applications for cloud infrastructures, we have implemented energy-aware mechanisms in the COMPSs Programming Model, within the context of the ASCETiC Project. In this paper, we demonstrate that application-level scheduling can have a big impact on the energy consumed by an application when executed in a heterogeneous cloud. We have implemented an energy-aware scheduling mechanism in COMPSs, together with a versioning technique, and we have run experiments with a use case coming from the real estate sector that proves our hypotheses.


Journal of Grid Computing | 2018

Transparent Orchestration of Task-based Parallel Applications in Containers Platforms

Cristian Ramon-Cortes; Albert Serven; Jorge Ejarque; Daniele Lezzi; Rosa M. Badia

This paper presents a framework to easily build and execute parallel applications in container-based distributed computing platforms in a user-transparent way. The proposed framework is a combination of the COMP Superscalar (COMPSs) programming model and runtime, which provides a straightforward way to develop task-based parallel applications from sequential code, and container management platforms (such as Docker, Mesos or Singularity) that ease the deployment of applications in computing environments. This framework provides scientists and developers with an easy way to implement parallel distributed applications and deploy them in a one-click fashion. We have built a prototype which integrates COMPSs with different container engines in different scenarios: i) a Docker cluster, ii) a Mesos cluster, and iii) Singularity in an HPC cluster. We have evaluated the overhead in the building, deployment and execution phases of two benchmark applications, compared to a Cloud testbed based on KVM and OpenStack and to the usage of bare-metal nodes. We have observed an important gain in comparison to cloud environments during the building and deployment phases, which enables better adaptation of resources with respect to the computational load. In contrast, we detected an extra overhead during the execution, which is mainly due to the multi-host Docker networking.


International Conference on Cloud Computing | 2015

Towards Automatic Application Migration to Clouds

Jorge Ejarque; András Micsik; Rosa M. Badia

Porting applications to Clouds is one of the key challenges in the software industry. The available approaches to perform this task are basically either services derived from alliances of major software vendors and Cloud providers focusing on their own products, or small platform providers focusing on the most popular software stacks. For migrating other types of software, the options are limited to Infrastructure-as-a-Service (IaaS) solutions, which require a lot of programming effort to adapt the software to a Cloud provider's API. Moreover, if it must be deployed with different providers, new integration procedures must be designed and implemented, which can become a significant burden. This paper presents a solution for facilitating the migration of any application to the cloud, inferring the most suitable deployment model for the application and automatically deploying it in the available Cloud providers.

Collaboration


Dive into Jorge Ejarque's collaborations.

Top Co-Authors

Rosa M. Badia, Barcelona Supercomputing Center
Raül Sirvent, Barcelona Supercomputing Center
Cristian Ramon-Cortes, Barcelona Supercomputing Center
Daniele Lezzi, Barcelona Supercomputing Center
Francesc Lordan, Barcelona Supercomputing Center
Jordi Guitart, Polytechnic University of Catalonia
Javier Alvarez, Barcelona Supercomputing Center
Ferran Julià, Polytechnic University of Catalonia