Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Íñigo Goiri is active.

Publication


Featured research published by Íñigo Goiri.


Information Systems Frontiers | 2012

Economic model of a Cloud provider operating in a federated Cloud

Íñigo Goiri; Jordi Guitart; Jordi Torres

Resource provisioning in Cloud providers is a challenge because of the high variability of load over time. On the one hand, providers can serve most of the requests owning only a restricted amount of resources, but this forces them to reject customers during peak hours. On the other hand, off-peak hours result in under-utilization of the resources, which forces the providers to increase their prices to remain profitable. Federation overcomes these limitations and allows providers to dynamically outsource resources to others in response to demand variations. Furthermore, it allows providers with underused resources to rent them to other providers. Both techniques increase the provider's profit when used adequately. Federation of Cloud providers requires a clear understanding of the consequences of each decision. In this paper, we present a characterization of providers operating in a federated Cloud which helps choose the most suitable decision depending on the environmental conditions: when to outsource to other providers, when to rent free resources to other providers (i.e., insourcing), or when to turn off unused nodes to save power. We characterize these decisions as a function of several parameters and implement a federated provider that uses this characterization to exploit federation. Finally, we evaluate the profitability of these techniques using data from a real provider.
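
A minimal sketch of how such a characterization could drive the per-node decision between outsourcing, insourcing, and shutting nodes down. The prices, costs, and the simple profit comparison are illustrative assumptions, not the model from the paper:

```python
# Hypothetical sketch: choose the most profitable action for a federated
# cloud provider. Prices and costs are illustrative, not the paper's.

def best_action(local_demand, capacity, local_price, outsource_price,
                insource_price, power_cost_per_node):
    """Return the action with the highest estimated profit per node-hour."""
    if local_demand > capacity:
        # Not enough local resources: outsource the overflow if the
        # federated price still leaves a margin, otherwise reject it.
        overflow = local_demand - capacity
        outsource_profit = overflow * (local_price - outsource_price)
        return "outsource" if outsource_profit > 0 else "reject"
    spare = capacity - local_demand
    # Spare resources: rent them to other providers (insourcing) or
    # turn the idle nodes off to save power.
    insource_profit = spare * insource_price
    shutdown_saving = spare * power_cost_per_node
    return "insource" if insource_profit > shutdown_saving else "turn_off"

if __name__ == "__main__":
    print(best_action(local_demand=80, capacity=100, local_price=0.12,
                      outsource_price=0.10, insource_price=0.03,
                      power_cost_per_node=0.05))
```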


international conference on cluster computing | 2010

Energy-Aware Scheduling in Virtualized Datacenters

Íñigo Goiri; Ferran Julià; Ramon Nou; Josep Lluis Berral; Jordi Guitart; Jordi Torres

The reduction of energy consumption in large-scale datacenters is being accomplished through an extensive use of virtualization, which enables the consolidation of multiple workloads in a smaller number of machines. Nevertheless, virtualization also incurs some additional overheads (e.g., virtual machine creation and migration) that can influence which consolidated configuration is best, and thus they must be taken into account. In this paper, we present a dynamic job scheduling policy for power-aware resource allocation in a virtualized datacenter. Our policy tries to consolidate workloads from separate machines into a smaller number of nodes, while fulfilling the amount of hardware resources needed to preserve the quality of service of each job. This allows turning off the spare servers, thus reducing the overall datacenter power consumption. As a novelty, this policy incorporates all the virtualization overheads in the decision process. In addition, our policy is prepared to consider other important parameters for a datacenter, such as reliability or dynamic SLA enforcement, in a synergistic way with power consumption. The proposed policy is evaluated against common policies in a simulated environment that accurately models the execution of HPC jobs in a virtualized datacenter, including power consumption modeling, and achieves a 15% reduction in power consumption with respect to typical policies.
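
A minimal sketch of the consolidation idea, assuming a simple first-fit-decreasing packing and a flat per-VM overhead charge; the paper's actual policy and overhead model differ:

```python
# Hypothetical sketch of power-aware consolidation: pack jobs onto as few
# nodes as possible, charging a VM creation/migration overhead before
# deciding where a job fits. Values are illustrative, not the paper's.

def consolidate(jobs, node_capacity, vm_overhead=0.05):
    """First-fit-decreasing packing of CPU demands; returns a list of nodes,
    each as [remaining_capacity, [job names]]."""
    nodes = []
    for name, demand in sorted(jobs.items(), key=lambda kv: -kv[1]):
        effective = demand + vm_overhead  # account for virtualization overhead
        for node in nodes:
            if node[0] >= effective:
                node[0] -= effective
                node[1].append(name)
                break
        else:
            nodes.append([node_capacity - effective, [name]])
    return nodes

if __name__ == "__main__":
    jobs = {"job1": 0.6, "job2": 0.3, "job3": 0.2, "job4": 0.4}
    placement = consolidate(jobs, node_capacity=1.0)
    print(f"{len(placement)} nodes powered on")
    for i, (_, assigned) in enumerate(placement):
        print(f"node{i}: {assigned}")
```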


architectural support for programming languages and operating systems | 2015

ApproxHadoop: Bringing Approximations to MapReduce Frameworks

Íñigo Goiri; Ricardo Bianchini; Santosh Nagarakatte; Thu D. Nguyen

We propose and evaluate a framework for creating and running approximation-enabled MapReduce programs. Specifically, we propose approximation mechanisms that fit naturally into the MapReduce paradigm, including input data sampling, task dropping, and accepting and running a precise and a user-defined approximate version of the MapReduce code. We then show how to leverage statistical theories to compute error bounds for popular classes of MapReduce programs when approximating with input data sampling and/or task dropping. We implement the proposed mechanisms and error bound estimations in a prototype system called ApproxHadoop. Our evaluation uses MapReduce applications from different domains, including data analytics, scientific computing, video encoding, and machine learning. Our results show that ApproxHadoop can significantly reduce application execution time and/or energy consumption when the user is willing to tolerate small errors. For example, ApproxHadoop can reduce runtimes by up to 32x when the user can tolerate an error of 1% with 95% confidence. We conclude that our framework and system can make approximation easily accessible to many application domains using the MapReduce model.
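
The statistical idea behind input-data sampling can be illustrated with a single-stage simple random sample and a normal-approximation confidence interval; ApproxHadoop itself relies on multistage sampling theory, so this sketch only shows the principle:

```python
# Hypothetical sketch: estimate an aggregate from a random sample of the
# input and report a 95% error bound (normal approximation, finite
# population correction). Simplified relative to the paper's estimators.
import math
import random

def estimate_sum(population_size, sample):
    """Estimate the population sum and its 95% error bound from a sample."""
    n = len(sample)
    mean = sum(sample) / n
    variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
    fpc = (population_size - n) / population_size  # sampling w/o replacement
    std_err = math.sqrt(fpc * variance / n)
    z = 1.96  # 95% confidence, normal approximation
    return population_size * mean, population_size * z * std_err

if __name__ == "__main__":
    data = [random.gauss(100, 15) for _ in range(1_000_000)]
    sample = random.sample(data, 10_000)
    estimate, bound = estimate_sum(len(data), sample)
    print(f"estimated sum = {estimate:.0f} +/- {bound:.0f}")
    print(f"exact sum     = {sum(data):.0f}")
```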


grid economics and business models | 2010

Resource-level QoS metric for CPU-based guarantees in cloud providers

Íñigo Goiri; Ferran Julià; J. Oriol Fitó; Mario Macías; Jordi Guitart

Success of Cloud computing requires that both customers and providers can be confident that signed Service Level Agreements (SLA) are supporting their respective business activities to their best extent. Currently used SLAs fail in providing such confidence, especially when providers outsource resources to other providers. These resource providers typically support very simple metrics, or metrics that hinder an efficient exploitation of their resources. In this paper, we propose a resource-level metric for specifying fine-grain guarantees on CPU performance. This metric allows resource providers to dynamically allocate their resources among the running services depending on their demand. This is accomplished by incorporating the customer's CPU usage in the metric definition, while avoiding fake SLA violations when the customer's task does not use all its allocated resources. As demonstrated in our evaluation, which has been conducted in a virtualized provider where we have implemented the needed infrastructure for using our metric, our solution presents fewer SLA violations than other CPU-related metrics.
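
A minimal sketch of the key idea, assuming a simple check that counts a violation only when the task actually demanded more CPU than it received; the paper's precise metric definition differs:

```python
# Hypothetical sketch of a usage-aware CPU guarantee check, so that an idle
# task does not trigger a "fake" SLA violation. Thresholds are illustrative.

def sla_violated(guaranteed_cpu, allocated_cpu, used_cpu, tolerance=0.01):
    """Violation only if the task was throttled at its allocation *and*
    the allocation fell short of the guaranteed CPU."""
    task_was_throttled = used_cpu >= allocated_cpu - tolerance
    under_provisioned = allocated_cpu < guaranteed_cpu - tolerance
    return task_was_throttled and under_provisioned

if __name__ == "__main__":
    # Task idle at 20% while only 50% of a 100% guarantee is allocated:
    print(sla_violated(1.0, 0.5, 0.2))   # False -> no fake violation
    # Task pinned at its 50% allocation while 100% was guaranteed:
    print(sla_violated(1.0, 0.5, 0.5))   # True  -> real violation
```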


network operations and management symposium | 2010

Checkpoint-based fault-tolerant infrastructure for virtualized service providers

Íñigo Goiri; Ferran Julià; Jordi Guitart; Jordi Torres

Crash and omission failures are common in service providers: a disk can break down or a link can fail at any time. In addition, the probability of a node failure increases with the number of nodes. Apart from reducing the provider's computation power and jeopardizing the fulfillment of its contracts, this can also lead to wasted computation time when the crash occurs before the task execution finishes. In order to avoid this problem, efficient checkpoint infrastructures are required, especially in virtualized environments where these infrastructures must deal with huge virtual machine images. This paper proposes a smart checkpoint infrastructure for virtualized service providers. It uses Another Union File System to differentiate read-only from read-write parts in the virtual machine image. In this way, read-only parts can be checkpointed only once, while subsequent checkpoints only need to save the modifications in the read-write parts, thus reducing the time needed to make a checkpoint. The checkpoints are stored in a Hadoop Distributed File System. This allows resuming a task execution faster after a node crash and increases the fault tolerance of the system, since checkpoints are distributed and replicated in all the nodes of the provider. This paper presents a running implementation of this infrastructure and its evaluation, demonstrating that it is an effective way to make faster checkpoints with low interference on task execution and efficient task recovery after a node failure.
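
A minimal sketch of the layered-checkpoint idea: save the read-only base image once and copy only the read-write layer on each subsequent checkpoint. Paths and helper names are hypothetical; the paper's implementation uses AUFS and stores checkpoints in HDFS rather than on a local path:

```python
# Hypothetical sketch: checkpoint a VM by saving the read-only base image
# once and only the union-filesystem read-write layer afterwards.
import shutil
import time
from pathlib import Path

def checkpoint(vm_name, base_image, rw_layer, store):
    """Copy the base image on first call, then only the read-write deltas."""
    store = Path(store)
    base_copy = store / f"{vm_name}-base.img"
    if not base_copy.exists():
        shutil.copy2(base_image, base_copy)        # read-only part: saved once
    stamp = time.strftime("%Y%m%d-%H%M%S")
    rw_copy = store / f"{vm_name}-rw-{stamp}"
    shutil.copytree(rw_layer, rw_copy)             # modifications only
    return base_copy, rw_copy
```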


2013 International Green Computing Conference Proceedings | 2013

Providing green SLAs in High Performance Computing clouds

E. Haque; Kien Le; Íñigo Goiri; Ricardo Bianchini; Thu D. Nguyen

Demand for clean products and services is increasing as society is becoming increasingly aware of climate change. In response, many enterprises are setting explicit sustainability goals and implementing initiatives to reduce carbon emissions. Quantification and disclosure of such goals and initiatives have become important marketing tools. As enterprises and individuals shift their workloads to the cloud, this drive toward quantification and disclosure will lead to demand for quantifiable green cloud services. Thus, we argue that cloud providers should offer a new class of green services, in addition to existing (energy-source-oblivious) services. This new class would provide clients with explicit service-level agreements (which we call Green SLAs) for the percentage of renewable energy used to run their workloads. In this paper, we first propose an approach for High Performance Computing cloud providers to offer such a Green SLA service. Specifically, each client job specifies a Green SLA, which is the minimum percentage of green energy that must be used to run the job. The provider earns a premium for meeting the Green SLA, but is penalized if it accepts the job but violates the Green SLA. We then propose (1) a power distribution and control infrastructure that uses a small amount of hardware to support Green SLAs, (2) an optimization-based framework for scheduling jobs and power sources that maximizes provider profits while respecting Green SLAs, and (3) two scheduling policies based on the framework. We evaluate our framework and policies extensively through simulations. Our main results show the tradeoffs between our policies, and their advantages over simpler greedy heuristics. We conclude that a Green SLA service that uses our policies would enable the provider to attract environmentally conscious clients, especially those who require strict guarantees on their use of green energy.
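
A minimal sketch of the constraint a Green SLA imposes at admission time, assuming a greedy feasibility check against a renewable-energy forecast; the paper formulates scheduling as an optimization problem with premiums and penalties, which this sketch does not model:

```python
# Hypothetical sketch of a Green SLA admission check: accept a job only if
# the predicted renewable energy in its execution window can cover the
# requested green fraction, with brown (grid) energy covering the rest.

def can_meet_green_sla(green_forecast_kwh, brown_available_kwh,
                       job_energy_kwh, green_sla_fraction):
    """Return True if the job's minimum green-energy percentage is feasible."""
    green_needed = job_energy_kwh * green_sla_fraction
    brown_needed = job_energy_kwh - green_needed
    return (green_forecast_kwh >= green_needed
            and brown_available_kwh >= brown_needed)

if __name__ == "__main__":
    # A 10 kWh job asking for at least 80% green energy:
    print(can_meet_green_sla(green_forecast_kwh=9.0, brown_available_kwh=5.0,
                             job_energy_kwh=10.0, green_sla_fraction=0.8))
```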


grid computing | 2010

Multifaceted resource management for dealing with heterogeneous workloads in virtualized data centers

Íñigo Goiri; J. Oriol Fitó; Ferran Julià; Ramon Nou; Josep Li. Berral; Jordi Guitart; Jordi Torres

Since virtualization was introduced in data centers, it has opened new opportunities for resource management. It is no longer just a tool for consolidating underused nodes and saving power; it also enables new solutions to well-known challenges, such as fault tolerance or heterogeneity management. Virtualization helps encapsulate Web-based applications or HPC jobs in virtual machines and treat them as single entities that can be managed more easily.


Future Generation Computer Systems | 2012

Supporting CPU-based guarantees in cloud SLAs via resource-level QoS metrics

Íñigo Goiri; Ferran Julià; J. Oriol Fitó; Mario Macías; Jordi Guitart

Success of Cloud computing requires that both customers and providers can be confident that signed Service Level Agreements (SLA) are supporting their respective business activities to their best extent. Currently used SLAs fail in providing such confidence, especially when providers outsource resources to other providers. These resource providers typically support very simple metrics like availability, or metrics that hinder an efficient exploitation of their resources. In this paper, we propose a resource-level metric for specifying fine-grain guarantees on CPU performance. This metric allows resource providers to dynamically allocate their resources among running services depending on their demand. This is accomplished by incorporating the customer's CPU usage in the metric definition, while avoiding fake SLA violations when the customer's task does not use all its allocated resources. We have conducted the evaluation in a virtualized provider where we have implemented the needed infrastructure for using our metric. As demonstrated in our evaluation, our solution presents fewer SLA violations than other CPU-related metrics while maintaining the Quality of Service.


network computing and applications | 2009

Introducing Virtual Execution Environments for Application Lifecycle Management and SLA-Driven Resource Distribution within Service Providers

Íñigo Goiri; Ferran Julià; Jorge Ejarque; Marc de Palol; Rosa M. Badia; Jordi Guitart; Jordi Torres

Resource management is a key challenge that service providers must adequately face in order to ensure their profitability. This paper describes a proof-of-concept framework for facilitating resource management in service providers, which allows reducing costs while fulfilling the quality of service agreed with the customers. This is accomplished by means of virtualization. Our approach provides application-specific virtual environments and consolidates them in order to achieve a better utilization of the provider's resources. In addition, it implements self-adaptive capabilities for dynamically distributing the provider's resources among these virtual environments based on Service Level Agreements. The proposed solution has been implemented as part of the Semantically-Enhanced Resource Allocator prototype developed within the BREIN European project. The evaluation shows that our prototype is able to react in a very short time under changing conditions and avoid SLA violations by efficiently rescheduling the resources.
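
A minimal sketch of SLA-driven redistribution, assuming each virtual environment receives its agreed minimum CPU share and any surplus is split in proportion to unmet demand; the prototype's actual distribution algorithm is not reproduced here:

```python
# Hypothetical sketch: distribute a node's CPU among virtual environments,
# honoring each SLA minimum and sharing surplus by unmet demand.

def distribute_cpu(total_cpu, environments):
    """environments: {name: {"sla_min": ..., "demand": ...}} -> {name: share}"""
    shares = {name: env["sla_min"] for name, env in environments.items()}
    surplus = total_cpu - sum(shares.values())
    extra_demand = {name: max(0.0, env["demand"] - env["sla_min"])
                    for name, env in environments.items()}
    total_extra = sum(extra_demand.values())
    if surplus > 0 and total_extra > 0:
        for name in shares:
            shares[name] += surplus * extra_demand[name] / total_extra
    return shares

if __name__ == "__main__":
    envs = {"web": {"sla_min": 2.0, "demand": 5.0},
            "batch": {"sla_min": 1.0, "demand": 1.0}}
    print(distribute_cpu(total_cpu=8.0, environments=envs))
```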


ieee international conference on escience | 2008

SLA-Driven Semantically-Enhanced Dynamic Resource Allocator for Virtualized Service Providers

Jorge Ejarque; M. de Palol; Íñigo Goiri; Ferran Julià; Jordi Guitart; Rosa M. Badia; Jordi Torres

In order to be profitable, service providers must be able to undertake complex management tasks such as provisioning, deployment, execution and adaptation in an autonomic way. This paper introduces a framework, the Semantically-Enhanced Resource Allocator (SERA), aimed at facilitating service provider management, reducing costs while fulfilling the QoS agreed with the customers. The SERA assigns resources depending on the information given by the service provider according to its business goals and on the resource requirements of the tasks. Tasks and resources are semantically described, and these descriptions are used to infer the resource assignments. Virtualization is used to provide a fully customized and isolated virtual environment for each task. In addition, the system supports fine-grain dynamic resource distribution among these virtual environments based on SLAs. The required adaptation is implemented using agents, guaranteeing each task enough resources to meet the agreed performance goals.
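
A minimal sketch of the matchmaking step, assuming plain capability sets stand in for the semantic descriptions; SERA itself uses ontologies and a reasoner to infer assignments, which this sketch does not implement:

```python
# Hypothetical sketch: select resources whose described capabilities cover
# all of a task's requirements. Capability names are illustrative.

def candidate_resources(task_requirements, resources):
    """resources: {name: set of capability terms} -> names satisfying the task."""
    return [name for name, caps in resources.items()
            if task_requirements <= caps]

if __name__ == "__main__":
    resources = {"node1": {"x86_64", "gpu", "infiniband"},
                 "node2": {"x86_64", "large_memory"}}
    print(candidate_resources({"x86_64", "gpu"}, resources))   # ['node1']
```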

Collaboration


Dive into Íñigo Goiri's collaborations.

Top Co-Authors

Jordi Guitart
Polytechnic University of Catalonia

Jordi Torres
Polytechnic University of Catalonia

Ferran Julià
Polytechnic University of Catalonia

J. Oriol Fitó
Polytechnic University of Catalonia

Jorge Ejarque
Barcelona Supercomputing Center

Ramon Nou
Polytechnic University of Catalonia

Rosa M. Badia
Barcelona Supercomputing Center