Luis M. Vaquero
Hewlett-Packard
Publication
Featured research published by Luis M. Vaquero.
IEEE International Conference on Cloud Computing Technology and Science | 2011
Luis M. Vaquero; Luis Rodero-Merino; Daniel Morán
Cloud computing is expected to become a common solution for deploying applications thanks to its capacity to free developers from infrastructure management tasks, thus reducing overall costs and services' time to market. Several concerns prevent players from entering the cloud; security is arguably the most relevant one. Many factors have an impact on cloud security, but it is its multitenant nature that brings the newest and most challenging problems to cloud settings. Here, we analyze the security risks that multitenancy induces in the most established clouds, Infrastructure as a Service (IaaS) clouds, and review the available literature to present the most relevant threats and the state of the art of solutions that address some of the associated risks. A major conclusion of our analysis is that most reported systems employ access control and encryption techniques to secure the different elements present in a virtualized (multitenant) datacenter. We also analyze the open issues and challenges to be addressed by cloud systems in the security field.
IEEE Transactions on Education | 2011
Luis M. Vaquero
The cloud has become a widely used term in academia and industry. Education has not remained unaware of this trend, and several educational solutions based on cloud technologies are already in place, especially for Software as a Service clouds. However, the educational potential of infrastructure and platform clouds has not been explored yet. An evaluation of which type of cloud would be the most beneficial for students to learn with, depending on the technical knowledge required for its usage, is missing. Here, the first systematic evaluation of different types of cloud technologies in an advanced course on network overlays, with 84 students and four professors, is presented. This evaluation tries to answer the question of whether cloud technologies (and which specific type of cloud) can be useful in educational scenarios for computer science students by letting students focus on the actual tasks at hand. This study demonstrates that platform clouds are valued by both students and professors for achieving the course objectives, and that clouds offer a significant improvement over the previous situation in labs, where much effort was devoted to setting up the software necessary for course activities. These results apply most strongly to courses in which students interact with resources that are not self-contained (e.g., network nodes, databases, mechanical equipment, or the cloud itself), but could also apply to other science disciplines that involve programming or performing virtual experiments.
Journal of Network and Systems Management | 2012
Luis M. Vaquero; Daniel Morán; Fermín Galán; Jose M. Alcaraz-Calero
The main contribution of this paper is the description of an architecture for dynamically controlling the behavior of applications deployed in the cloud by using a set of high-level rules. The architecture is flexible enough to enable the redefinition of behavior policies at runtime, which makes it possible to adapt the behavior of applications after deployment. It is also able to manage different cloud providers. The architecture has been implemented, and the most relevant details of that implementation are also covered in this paper. Moreover, some use cases are explained in order to better describe the advantages of the proposed architecture.
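The rule-driven control loop described in this abstract can be pictured with a minimal sketch. Everything below (the `Rule`/`RuleEngine` names, the metrics-dictionary model, the string actions) is an illustrative assumption, not the paper's actual API; the point is only that policies are plain data that can be swapped at runtime, after deployment.

```python
# Minimal sketch of rule-based runtime control (names are illustrative).

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # callable: metrics dict -> bool
        self.action = action        # callable: metrics dict -> str

class RuleEngine:
    """Evaluates high-level rules against runtime metrics; rules can be
    replaced while the system runs, mirroring redefinable policies."""
    def __init__(self):
        self.rules = {}

    def set_rule(self, rule):
        # Adding a rule with an existing name redefines the policy.
        self.rules[rule.name] = rule

    def evaluate(self, metrics):
        return [r.action(metrics) for r in self.rules.values()
                if r.condition(metrics)]

engine = RuleEngine()
engine.set_rule(Rule("scale-out",
                     lambda m: m["cpu"] > 0.8,
                     lambda m: "add VM replica"))
print(engine.evaluate({"cpu": 0.9}))   # fires at the initial threshold
# Redefine the same policy after "deployment", raising the threshold:
engine.set_rule(Rule("scale-out",
                     lambda m: m["cpu"] > 0.95,
                     lambda m: "add VM replica"))
print(engine.evaluate({"cpu": 0.9}))   # no longer fires
```

The same mechanism could route the emitted actions to different cloud providers' APIs, which is where the multi-provider aspect of the architecture would come in.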
IEEE International Conference on Cloud Computing Technology and Science | 2015
Luis M. Vaquero; Antonio Celorio; Félix Cuadrado; Ruben Cuevas
Public clouds have democratised access to analytics for virtually any institution in the world. Virtual machines (VMs) can be provisioned on demand to crunch data after it has been uploaded into the VMs. While this task is trivial for a few tens of VMs, it becomes increasingly complex and time consuming when the scale grows to hundreds or thousands of VMs crunching tens or hundreds of TB. Moreover, the elapsed time comes at a price: the cost of provisioning VMs in the cloud and keeping them waiting to load the data. In this paper we present a big data provisioning service that incorporates hierarchical and peer-to-peer data distribution techniques to speed up data loading into the VMs used for data processing. The system dynamically mutates the sources of the data for the VMs to speed up data loading. We tested this solution with 1,000 VMs and 100 TB of data, reducing loading time by at least 30 percent over current state-of-the-art techniques. This dynamic topology mechanism is tightly coupled with classic declarative machine configuration techniques: the system takes a single high-level declarative configuration file and configures both the software and the data loading. Together, these two techniques simplify the deployment of big data in the cloud for end users who may not be experts in infrastructure management.
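The "dynamically mutating sources" idea can be illustrated with a toy planner: once a VM has loaded a chunk, later VMs pull it from that peer instead of the origin store, so the origin is hit only once per chunk. The function and naming below are assumptions for illustration, not the paper's system.

```python
# Toy sketch of dynamic source mutation for data loading (illustrative).

def plan_transfers(vms, chunks):
    """Return a (vm, chunk, source) transfer plan in which the source of
    each chunk mutates from the origin store to already-seeded peers."""
    holders = {c: ["origin"] for c in chunks}  # who can serve each chunk
    plan = []
    for vm in vms:
        for chunk in chunks:
            source = holders[chunk][-1]   # most recently seeded holder
            plan.append((vm, chunk, source))
            holders[chunk].append(vm)     # this VM now seeds the chunk
    return plan

plan = plan_transfers(["vm1", "vm2", "vm3"], ["part-0"])
# Only vm1 touches the origin; vm2 pulls from vm1, and vm3 from vm2,
# forming the peer-to-peer chain that offloads the central store.
```

A real system would of course overlap transfers, pick the nearest or least-loaded holder rather than the newest, and split chunks further; the sketch only shows why origin load stays constant as the VM count grows.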
IEEE Communications Letters | 2012
Suksant Sae Lor; Luis M. Vaquero; Paul Murray
Emerging applications demand more efficient ways to handle the large amount of traffic they generate and send across sites. Current Content Delivery Network (CDN) architectures locate nodes at the network edges with fixed, pre-determined capacity, which is not suitable for the dynamic behaviour of many services. This letter proposes an alternative approach that employs data centres in core locations and eliminates the need for capacity planning. This can be done on demand, offering a variety of available locations and pay-as-you-go schemes. Through realistic emulations and a real testbed, we show that our framework enables better performance than existing approaches and gives network providers good incentives to implement the scheme thanks to a potential new source of revenue.
IEEE Communications Letters | 2012
Luis M. Vaquero; Suksant Sae Lor; Dev Audsin; Paul Murray; Nick Wainwright
Large computer networks cannot be emulated or physically reproduced in conventional lab environments. Graph generation/reduction techniques have been a valuable tool for overcoming this limitation. However, current techniques focus on local features (e.g., router out-degree, clustering coefficient, or traffic differences between edges for building a hierarchy) that preserve neither the geographical/hierarchical features of the router-level backbone nor the end-to-end delay between arbitrary points. This letter proposes a geography-based reduction mechanism that enables emulation in lab settings while preserving the global features of typical backbone networks. The performance evaluation is based on six inferred ISP backbone maps.
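One simple way to picture a geography-based reduction is to bucket routers into coarse latitude/longitude cells and collapse each cell into a single node, keeping only inter-cell links. This is an illustrative stand-in sketched under that assumption, not the letter's actual algorithm; the coordinates and cell size are made up.

```python
# Rough sketch: geographic bucketing as a graph-reduction step (illustrative).

def reduce_by_geography(nodes, edges, cell=10.0):
    """nodes: {name: (lat, lon)}; edges: iterable of (a, b) pairs.
    Returns the reduced node set and edge set, keyed by grid cell."""
    def cell_of(name):
        lat, lon = nodes[name]
        return (int(lat // cell), int(lon // cell))

    reduced_nodes = {cell_of(n) for n in nodes}
    reduced_edges = set()
    for a, b in edges:
        ca, cb = cell_of(a), cell_of(b)
        if ca != cb:                  # intra-cell links disappear
            reduced_edges.add(tuple(sorted((ca, cb))))
    return reduced_nodes, reduced_edges

# Two London-area routers and one in Madrid (made-up coordinates):
nodes = {"r1": (51.5, -0.1), "r2": (52.2, -0.3), "r3": (40.4, -3.7)}
edges = {("r1", "r2"), ("r2", "r3")}
rn, re = reduce_by_geography(nodes, edges)
# r1 and r2 collapse into one cell; the long-haul link to Madrid survives,
# which is the kind of global backbone feature the letter aims to keep.
```

Preserving end-to-end delay would additionally require annotating the surviving inter-cell links with representative latencies, which the sketch omits.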
International Conference on Computer Communications | 2014
Félix Cuadrado; Álvaro Navas; Juan C. Dueñas; Luis M. Vaquero
Federated clouds can expose the Internet as a homogeneous compute fabric. This creates an opportunity for cross-cloud applications that can be deployed pervasively over the Internet, dynamically adapting their internal topology to their needs. In this paper we explore the main challenges to fully realizing the potential of cross-cloud applications. First, we focus on the networking dimension of these applications: we evaluate what support is needed from the infrastructure and what the further implications of opening up the networking side are. Second, we examine the impact of a distributed deployment on applications, assessing the implications from a management perspective and how it affects the delivery of quality of service and non-functional requirements.
Future Generation Computer Systems | 2018
James Brook; Félix Cuadrado; Eric Deliot; Julio Guijarro; Rycharde Jeffery Hawkes; Marco Lotz; Romaric Pascal; Suksant Sae-Lor; Luis M. Vaquero; Joan Varvenne; Lawrence Wilcock
Interactive visual exploration techniques (IVET), such as those advocated by Shneiderman, and extreme-scale visual analytics have successfully increased our understanding of a variety of domains that produce huge amounts of complex data. In spite of their complexity, IT infrastructures have not benefited from the application of IVET techniques. Loom is inspired by IVET techniques and builds on them to tame the increasing complexity of IT infrastructure management (ITIM) systems, guaranteeing interactive response times and integrating key elements for IT management: relationships between managed entities coming from different IT management subsystems, alerts, and actions (or reconfigurations) of the IT setup. The Loom system builds on two main pillars: (1) a multiplex graph spanning data from different ITIM systems; and (2) a novel visualisation arrangement, the Loom "Thread" visualisation model. We have tested Loom in a number of real-world applications, showing that it can handle millions of entities without losing information, with minimum context switching, and with better performance than other relational/graph-based systems. This ensures interactive response times (a few seconds at the 90th percentile). The value of the "Thread" visualisation model is shown in a qualitative analysis of users' experiences with Loom.
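The multiplex-graph pillar can be sketched as a graph whose edges live in layers, one per IT management subsystem, with the same managed entity appearing in several layers. The class and method names below are assumptions made for illustration; they are not Loom's API.

```python
# Minimal multiplex graph: one adjacency structure per subsystem layer.

from collections import defaultdict

class MultiplexGraph:
    def __init__(self):
        # layer name -> {entity: set of neighbouring entities}
        self.layers = defaultdict(dict)

    def add_edge(self, layer, a, b):
        self.layers[layer].setdefault(a, set()).add(b)
        self.layers[layer].setdefault(b, set()).add(a)

    def neighbours(self, entity):
        """Aggregate view of one entity across all layers, the kind of
        cross-subsystem query an ITIM tool needs to answer."""
        return {layer: sorted(adj[entity])
                for layer, adj in self.layers.items()
                if entity in adj}

g = MultiplexGraph()
g.add_edge("compute", "vm-1", "host-7")    # e.g. hypervisor subsystem
g.add_edge("network", "vm-1", "switch-2")  # e.g. network subsystem
print(g.neighbours("vm-1"))
# -> {'compute': ['host-7'], 'network': ['switch-2']}
```

Answering such a query from a single structure, instead of joining several per-subsystem inventories at view time, is what makes interactive response times plausible at scale.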
International Conference on Management of Data | 2017
Benjamin A. Steer; Alhamza Alnaimi; Marco Lotz; Félix Cuadrado; Luis M. Vaquero; Joan Varvenne
The property graph model has recently gained significant popularity, combining great expressiveness with powerful declarative graph query languages. However, in order to take advantage of these features, data must be loaded into a specialised graph database. Additionally, property graphs are often schema-free, complicating efficient query execution. In this paper we present Cytosm, a middleware application that enables the execution of property graph queries on non-graph databases without data migration. Cytosm relies on gTop, a schema containing an abstract property graph topology and its mapping to specific database backends. Cytosm uses gTop to efficiently execute OpenCypher queries, exploiting schema information to optimise the query plan and mapping query concepts to the relational backend. Our experiments show that Cytosm achieves competitive query execution times on relational backends when compared to leading graph databases.
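The flavour of translation Cytosm performs can be sketched in miniature: a gTop-style mapping ties node labels and edge types to tables, and a one-hop graph pattern becomes a relational join. The mapping dictionary, table names, and single-pattern shape below are assumptions, far simpler than real gTop and OpenCypher.

```python
# Illustrative sketch: a one-hop graph pattern lowered to SQL via a
# gTop-like label-to-table mapping (assumed schema, not Cytosm's).

GTOP = {
    "Person": {"table": "person", "key": "id"},
    "KNOWS":  {"table": "knows", "src": "src_id", "dst": "dst_id"},
}

def translate(src_label, edge, dst_label, ret_col):
    s, e, d = GTOP[src_label], GTOP[edge], GTOP[dst_label]
    return (f"SELECT b.{ret_col} FROM {s['table']} a "
            f"JOIN {e['table']} r ON r.{e['src']} = a.{s['key']} "
            f"JOIN {d['table']} b ON b.{d['key']} = r.{e['dst']}")

# MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN b.name
sql = translate("Person", "KNOWS", "Person", "name")
print(sql)
```

The schema information is exactly what makes this lowering possible on "schema-free" property graphs: gTop supplies the topology that the data itself does not declare.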
Concurrency and Computation: Practice and Experience | 2013
Luis M. Vaquero; Luis Rodero-Merino; Rajkumar Buyya
Warship construction is a time-consuming, complicated business. The original inception, funding, design, creation of prototypes and training of personnel alone can take years. The actual construction is not typically much faster. The expenses are excessive in both funds and highly specialized labor. As could be expected, the pressure on starship architects is enormous; once a vessel has been built, the Empire is committing itself to that vessel for the next several decades. At some point, any changes – even trivial ones – in the vessel’s design can cost literally billions of credits and thousands of extra man-hours. This is the point we are at today in cloud computing: the pre-construction and initial phases are completed and much experience has been accumulated, but some inherent cloud features are still causing trouble. And providers stress their engineers to fulfill ever-mounting expectations, especially those related to scalability. This can be understood: the illusion of a virtually infinite computing infrastructure/platform capable of providing automated on-demand self-service is one of the paramount features of the cloud [1, 2], along with security (after all, no one aims for another massive and expensive Death Star vulnerable to a single X-Wing). Scalability is responsible for making any particular service something more than ‘just an outsourced service with a prettier marketing face’ [3]. This particular feature pushes cloud constructors to introduce changes to optimize resource consumption while preserving the performance of the deployed application. Cloud scalability is also an issue that is still poorly understood. Many open questions remain that call for new research, which will eventually incorporate new insight into already-running or newly built systems.
State-of-the-art technologies in cloud scalability typically focus on handling several replicas (service clones) of the image and load-balancing requests among them ([4] or Amazon’s EC2), or on federating clouds (infrastructure clones) to increase the pool of available resources [5, 6]. In some sense, these approaches can be compared with Corellian corvettes: they prove the concept in a quick and agile manner, but they are relatively vulnerable under huge business-level loads (keeping with our Star Wars analogy, you would not pit them against a Star Destroyer). Few academic approaches have reported reaching the scale of Amazon’s infrastructure in number of virtual machines, and sharing the lessons learned in that endeavor is still pending. This special issue covers some of the most relevant trends in scaling cloud infrastructures and platforms. Readers will gain insight into the steps required to optimize their own clouds to support more concurrent users or operations while minimizing resource usage. These articles will also interest users wondering what elements they would need in order to obtain maximum scalability for their applications. Section 2 of this document lists some of the strategies that researchers are working on to improve the scalability of clouds, whereas Section 3 briefly describes the contributions presented in this special issue.
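The replica-based strategy mentioned above can be shown in miniature: requests are spread round-robin over identical service clones, the simplest of the load-balancing schemes the special issue discusses. The `ReplicaPool` name and the string-based "requests" are illustrative assumptions, not any particular system's API.

```python
# Sketch of replica-based scaling: round-robin over service clones.

from itertools import cycle

class ReplicaPool:
    def __init__(self, replicas):
        self._ring = cycle(replicas)  # endlessly iterate over the clones

    def route(self, request):
        """Send each incoming request to the next clone in turn."""
        return (next(self._ring), request)

pool = ReplicaPool(["clone-a", "clone-b"])
routed = [pool.route(f"req-{i}")[0] for i in range(4)]
print(routed)  # -> ['clone-a', 'clone-b', 'clone-a', 'clone-b']
```

Scaling out then amounts to enlarging the pool; the harder open questions the issue tackles, such as when to add clones and how to keep their state consistent, sit above this routing layer.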