Rubén S. Montero
Complutense University of Madrid
Publications
Featured research published by Rubén S. Montero.
IBM Journal of Research and Development | 2009
Benny Rochwerger; David Breitgand; Eliezer Levy; Alex Galis; Kenneth Nagin; Ignacio Martín Llorente; Rubén S. Montero; Yaron Wolfsthal; Erik Elmroth; Juan Caceres; Muli Ben-Yehuda; Wolfgang Emmerich; Fermín Galán
The emerging cloud-computing paradigm is rapidly gaining momentum as an alternative to traditional IT (information technology). However, contemporary cloud-computing offerings are primarily targeted at Web 2.0-style applications. Only recently have they begun to address the requirements of enterprise solutions, such as support for infrastructure service-level agreements. To address the challenges and deficiencies in the current state of the art, we propose a modular, extensible cloud architecture with intrinsic support for business service management and the federation of clouds. The goal is to facilitate an open, service-based online economy in which resources and services are transparently provisioned and managed across clouds on an on-demand basis at competitive costs with high-quality service. The Reservoir project is motivated by the vision of implementing an architecture that would enable providers of cloud infrastructure to dynamically partner with each other to create a seemingly infinite pool of IT resources while fully preserving their individual autonomy in making technological and business management decisions. To this end, Reservoir leverages and extends the advantages of virtualization and embeds autonomous management in the infrastructure. At the same time, the Reservoir approach aims to achieve a very ambitious goal: creating a foundation for next-generation enterprise-grade cloud computing.
Future Generation Computer Systems | 2012
Johan Tordsson; Rubén S. Montero; Rafael Moreno-Vozmediano; Ignacio Martín Llorente
In the past few years, we have witnessed the proliferation of a heterogeneous ecosystem of cloud providers, each one with a different infrastructure offer and pricing policy. We explore this heterogeneity in a novel cloud brokering approach that optimizes placement of virtual infrastructures across multiple clouds and also abstracts the deployment and management of infrastructure components in these clouds. The feasibility of our approach is evaluated in a high throughput computing cluster case study. Experimental results confirm that multi-cloud deployment provides better performance and lower costs compared to the usage of a single cloud only.
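As a rough illustration of the kind of decision such a broker automates, the sketch below greedily fills the cheapest providers first for a set of identical worker VMs; the provider names, prices, and capacity caps are invented for the example and are not taken from the paper.

```python
# Illustrative multi-cloud placement: pick the cheapest mix of providers
# for a fixed number of identical worker VMs, subject to per-provider caps.
# Prices and caps are made-up examples, not figures from the paper.

providers = {
    "cloud_a": {"price_per_hour": 0.12, "max_instances": 8},
    "cloud_b": {"price_per_hour": 0.09, "max_instances": 4},
    "cloud_c": {"price_per_hour": 0.15, "max_instances": 16},
}

def place(num_vms):
    """Greedy cost-oriented placement: fill cheapest providers first."""
    plan, remaining = {}, num_vms
    for name, p in sorted(providers.items(), key=lambda kv: kv[1]["price_per_hour"]):
        n = min(remaining, p["max_instances"])
        if n:
            plan[name] = n
            remaining -= n
        if remaining == 0:
            break
    if remaining:
        raise RuntimeError("not enough aggregate capacity")
    return plan

if __name__ == "__main__":
    print(place(10))   # e.g. {'cloud_b': 4, 'cloud_a': 6}
```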
Software - Practice and Experience | 2004
Eduardo Huedo; Rubén S. Montero; Ignacio Martín Llorente
Grids offer a dramatic increase in the number of available processing and storage resources that can be delivered to applications. However, efficient job submission and management remain far from accessible to ordinary scientists and engineers because of the dynamic and complex nature of the Grid. This paper describes a new Globus-based framework that allows an easier and more efficient execution of jobs in a ‘submit and forget’ fashion. The framework automatically performs the steps involved in job submission and also watches over its efficient execution. In order to obtain a reasonable degree of performance, job execution is adapted to dynamic resource conditions and application demands. Adaptation is achieved by supporting automatic application migration following performance degradation, ‘better’ resource discovery, requirement change, owner decision or remote resource failure. The framework is currently functional on any Grid testbed based on Globus because it does not require new system software to be installed on the resources. The paper also includes practical experiences of the behavior of our framework on the TRGP and UCM-CAB testbeds.
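The adaptation logic described above can be pictured with a small decision routine; the thresholds, field names, and monitoring data below are hypothetical and do not correspond to the framework's real interfaces.

```python
# Schematic adaptation check: migrate a job when performance degrades,
# a better resource appears, or the current resource fails.
# All names and thresholds are illustrative, not the framework's real API.

DEGRADATION_THRESHOLD = 0.5   # observed/expected performance ratio

def should_migrate(job, resources):
    current = job["resource"]
    if not current["alive"]:
        return True, "remote resource failure"
    if job["observed_perf"] / job["expected_perf"] < DEGRADATION_THRESHOLD:
        return True, "performance degradation"
    better = [r for r in resources if r["rank"] > current["rank"] and r["free_slots"] > 0]
    if better:
        return True, "better resource discovered"
    return False, "keep running"

job = {"resource": {"alive": True, "rank": 2}, "observed_perf": 40.0, "expected_perf": 100.0}
resources = [{"rank": 5, "free_slots": 3}]
print(should_migrate(job, resources))   # (True, 'performance degradation')
```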
Future Generation Computer Systems | 2010
Luis Rodero-Merino; Luis M. Vaquero; Victor Gil; Fermín Galán; Javier Fontan; Rubén S. Montero; Ignacio Martín Llorente
Clouds have changed the way we think about IT infrastructure management. Providers of software-based services are now able to outsource the operation of the hardware platforms required by those services. However, as the utilization of cloud platforms grows, users are realizing that the implicit promise of clouds (relieving them of the tasks related to infrastructure management) is not being fulfilled. One reason is that current clouds offer interfaces too close to that infrastructure, while users demand functionality that automates the management of their services as a whole. To overcome this limitation, we propose a new abstraction layer, closer to the service lifecycle, that allows for the automatic deployment and scaling of services depending on the service status (not only on the infrastructure). This abstraction layer can sit on top of different cloud providers, hence mitigating the potential lock-in problem and allowing the transparent federation of clouds for the execution of services. Here, we present Claudia, a service management system that implements such an abstraction layer, and the results of deploying a grid service (based on the Sun Grid Engine software) on this system.
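To illustrate scaling on service status rather than on infrastructure metrics, a toy rule might derive the desired number of workers from a service-level indicator such as queued jobs per worker; the KPI, thresholds, and names below are invented for the example and are not Claudia's actual interface.

```python
# Toy scaling rule driven by a service-level indicator (queued jobs per
# worker) instead of raw infrastructure metrics such as CPU load.
# The KPI, thresholds, and function names are illustrative only.

def desired_workers(queued_jobs, current_workers,
                    target_jobs_per_worker=10, min_workers=1, max_workers=20):
    if current_workers == 0:
        return min_workers
    needed = -(-queued_jobs // target_jobs_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, needed))

print(desired_workers(queued_jobs=95, current_workers=4))   # -> 10
```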
IEEE Internet Computing | 2011
Dejan S. Milojicic; Ignacio Martín Llorente; Rubén S. Montero
In this installment of Trend Wars, we discuss cloud computing and OpenNebula with Ignacio M. Llorente and Rubén S. Montero, who are the principal investigator and the chief architect, respectively, of the open source OpenNebula project.
IEEE Computer | 2012
Rafael Moreno-Vozmediano; Rubén S. Montero; Ignacio Martín Llorente
As a key component in a modern datacenter, the cloud operating system is responsible for managing the physical and virtual infrastructure, orchestrating and commanding service provisioning and deployment, and providing federation capabilities for accessing and deploying virtual resources in remote cloud infrastructures.
Future Generation Computer Systems | 2013
José Luis Lucas-Simarro; Rafael Moreno-Vozmediano; Rubén S. Montero; Ignacio Martín Llorente
The current cloud market, constituted by many different public cloud providers, is highly fragmented in terms of interfaces, pricing schemes, virtual machine offers and value-added features. In this context, a cloud broker can provide intermediation and aggregation capabilities to enable users to deploy their virtual infrastructures across multiple clouds. However, most current cloud brokers do not provide advanced service management capabilities to make automatic decisions, based on optimization algorithms, about how to select the optimal cloud to deploy a service, how to optimally distribute the different components of a service among different clouds, or even when to move a given service component from one cloud to another to satisfy some optimization criteria. In this paper we present a modular broker architecture that can work with different scheduling strategies for optimal deployment of virtual services across multiple clouds, based on different optimization criteria (e.g. cost optimization or performance optimization), different user constraints (e.g. budget, performance, instance types, placement, reallocation or load balancing constraints), and different environmental conditions (i.e., static vs. dynamic conditions regarding instance prices, instance types, service workload, etc.). To demonstrate the benefits of this broker, we analyse the deployment of different clustered services (an HPC cluster and a Web server cluster) on a multi-cloud environment under different conditions, constraints, and optimization criteria.
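A minimal sketch of the modular idea is to apply interchangeable scheduling strategies to the same set of provider offers; the offer fields, prices, and performance scores below are illustrative and are not taken from the evaluation.

```python
# Sketch of a broker core with pluggable scheduling strategies
# (cost- vs. performance-oriented). Offer fields and numbers are made up.

offers = [
    {"cloud": "provider_1", "price": 0.10, "perf_score": 1.0},
    {"cloud": "provider_2", "price": 0.25, "perf_score": 2.1},
    {"cloud": "provider_3", "price": 0.18, "perf_score": 1.6},
]

def cost_strategy(candidates):
    return min(candidates, key=lambda o: o["price"])

def performance_strategy(candidates):
    return max(candidates, key=lambda o: o["perf_score"])

def schedule(candidates, strategy, budget=None):
    # Filter by a user constraint (budget), then apply the chosen criterion.
    if budget is not None:
        candidates = [o for o in candidates if o["price"] <= budget]
    return strategy(candidates)

print(schedule(offers, cost_strategy))                      # provider_1
print(schedule(offers, performance_strategy, budget=0.20))  # provider_3
```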
workshop on automated control for datacenters and clouds | 2009
Rafael Moreno-Vozmediano; Rubén S. Montero; Ignacio Martín Llorente
In this paper we analyze the deployment of generic clustered services on top of a virtualized infrastructure layer that combines a VM manager (the OpenNebula engine) and a cloud resource provider (Amazon EC2). The use of this virtualization layer between the service and the physical infrastructure extends the classical benefits of VM platforms to distributed infrastructures. Additionally, the integration of the cloud in this layer allows us to provide additional capacity to the services using an external provider, thus complementing the local infrastructure without the users noticing and without affecting the service workload. This flexible approach, which separates resource provisioning from service management, provides important benefits: elastic service capacity that adapts to the dynamic workload; physical infrastructure partitioning to isolate each service from other running services; and support for heterogeneous configurations tailored to each service class. The feasibility of the proposed approach is analyzed for two different clustered services: a classical computing cluster and a web server.
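The separation of resource provisioning from service management can be sketched as a simple cloudbursting rule: use local hosts first and request external capacity only for the excess. The capacity figure and function below are placeholders, not OpenNebula or EC2 API calls.

```python
# Conceptual cloudbursting: satisfy demand with local hosts first and
# only then request capacity from an external provider.
# The capacity value and provisioning plan are illustrative placeholders.

LOCAL_CAPACITY = 8  # local worker slots (illustrative)

def provision(workers_needed):
    local = min(workers_needed, LOCAL_CAPACITY)
    remote = workers_needed - local
    plan = {"local": local, "external_cloud": remote}
    # Real code would create the VMs through the VM manager / cloud driver here.
    return plan

print(provision(5))    # {'local': 5, 'external_cloud': 0}
print(provision(12))   # {'local': 8, 'external_cloud': 4}
```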
Future Generation Computer Systems | 2007
Eduardo Huedo; Rubén S. Montero; Ignacio Martín Llorente
The latest version of the Globus Toolkit includes both pre-WS and WS GRAM services to submit, monitor, and control jobs on remote Grid resources. In the medium term, and until a full transition is accomplished, both pre-WS and WS GRAM services will coexist in Grid infrastructures. In this paper, we describe the modular architecture of the GridWay meta-scheduler, which allows the simultaneous and coordinated use of pre-WS and WS GRAM services and therefore eases the transition to a Web Service implementation of the Globus components. This functionality is demonstrated on an infrastructure that comprises resources from a research testbed, based on the Globus Toolkit 4.0, and the EGEE production infrastructure, based on the LCG middleware. The Web Service implementation of the Globus components has been optimized for flexibility, stability and scalability. However, part of the Grid community is still reluctant to transition to the Web Service model, mainly because of its supposedly lower performance. We demonstrate that WS GRAM achieves a performance comparable to that of pre-WS GRAM.
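The driver-based design can be pictured as a uniform submission interface over two back ends, so jobs can be dispatched to either middleware transparently; the class and method names below are illustrative and are not GridWay's real middleware access driver interface.

```python
# Sketch of a uniform submission interface over two middleware back ends.
# Class and method names are illustrative, not GridWay's actual drivers.

class PreWsGramDriver:
    def submit(self, job):
        return f"pre-WS GRAM submit: {job}"

class WsGramDriver:
    def submit(self, job):
        return f"WS GRAM submit: {job}"

class MetaScheduler:
    def __init__(self):
        self.drivers = {"prews": PreWsGramDriver(), "ws": WsGramDriver()}

    def dispatch(self, job, resource):
        # Select the driver matching the middleware flavour of the resource.
        return self.drivers[resource["middleware"]].submit(job)

sched = MetaScheduler()
print(sched.dispatch("job.rsl", {"middleware": "ws"}))
print(sched.dispatch("job.rsl", {"middleware": "prews"}))
```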
IEEE Transactions on Parallel and Distributed Systems | 2011
Rafael Moreno-Vozmediano; Rubén S. Montero; Ignacio Martín Llorente
Cloud computing is gaining acceptance in many IT organizations as an elastic, flexible, and variable-cost way to deploy their service platforms using outsourced resources. Unlike traditional utilities, where a single-provider scheme is common practice, the ubiquitous access to cloud resources easily enables the simultaneous use of different clouds. In this paper, we explore this scenario to deploy a computing cluster on top of a multi-cloud infrastructure for solving loosely coupled Many-Task Computing (MTC) applications. In this way, the cluster nodes can be provisioned with resources from different clouds to improve the cost-effectiveness of the deployment or to implement high-availability strategies. We prove the viability of this kind of solution by evaluating the scalability, performance, and cost of different configurations of a Sun Grid Engine cluster deployed on a multi-cloud infrastructure spanning a local data center and three different cloud sites: Amazon EC2 Europe, Amazon EC2 US, and ElasticHosts. Although the testbed deployed in this work is limited to a small number of computing resources (due to hardware and budget limitations), we have complemented our analysis with a simulated infrastructure model that includes a larger number of resources and runs larger problem sizes. The data obtained by simulation show that the performance and cost results can be extrapolated to large-scale problems and cluster infrastructures.
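A back-of-the-envelope comparison of throughput and hourly cost for two hypothetical cluster configurations gives a feel for the cost-performance trade-off studied here; all task rates and prices below are invented examples, not measurements from the paper.

```python
# Rough throughput/cost comparison for hypothetical cluster configurations.
# Task rates and prices are invented examples, not results from the paper.

configs = {
    "local_only":       {"local_nodes": 4, "cloud_nodes": 0},
    "local_plus_cloud": {"local_nodes": 4, "cloud_nodes": 8},
}
TASKS_PER_HOUR_PER_NODE = 55      # illustrative throughput of one worker node
CLOUD_PRICE_PER_NODE_HOUR = 0.10  # illustrative price; local nodes treated as sunk cost

for name, c in configs.items():
    nodes = c["local_nodes"] + c["cloud_nodes"]
    throughput = nodes * TASKS_PER_HOUR_PER_NODE
    cost = c["cloud_nodes"] * CLOUD_PRICE_PER_NODE_HOUR
    print(f"{name}: {throughput} tasks/h for ${cost:.2f}/h")
```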