Publication


Featured research published by Daniel Higuero.


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2011

Predictive Data Grouping and Placement for Cloud-Based Elastic Server Infrastructures

Juan M. Tirado; Daniel Higuero; Florin Isaila; Jesús Carretero

Workload variations on Internet platforms such as YouTube, Flickr, and LastFM require novel approaches to dynamic resource provisioning in order to meet QoS requirements while reducing the Total Cost of Ownership (TCO) of the infrastructures. The economy-of-scale promise of cloud computing is a great opportunity to approach this problem by developing elastic large-scale server infrastructures. However, a proactive approach to dynamic resource provisioning requires prediction models forecasting future load patterns. On the other hand, unexpected volume and data spikes require reactive provisioning for serving unexpected surges in workload. When the workload cannot be predicted, adequate data grouping and placement algorithms may facilitate agile scaling up and down of an infrastructure. In this paper, we analyze a dynamic workload of an on-line music portal and present an elastic Web infrastructure that adapts to workload variations by dynamically scaling servers up and down. The workload is predicted by an autoregressive model capturing trends and seasonal patterns. Further, to enhance data locality, we propose a predictive data grouping based on the history of content access of a user community. Finally, in order to facilitate agile elasticity, we present a data placement method based on workload and access pattern prediction. The experimental results demonstrate that our forecasting model predicts workload with high precision. Further, the predictive data grouping and placement methods provide high locality, load balance, and high utilization of resources, allowing a server infrastructure to scale up and down depending on workload.
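The autoregressive workload model mentioned in this abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the model order and the plain least-squares fit are assumptions, and the paper's model additionally handles seasonality.

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + a1*y_{t-1} + ... + ap*y_{t-p}
    by ordinary least squares. Returns [c, a1, ..., ap]."""
    n = len(series)
    X = np.column_stack(
        [np.ones(n - p)] + [series[p - i:n - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

def forecast(series, coef, steps):
    """Iterate the fitted model forward to forecast future load."""
    p = len(coef) - 1
    hist = list(series[-p:])
    out = []
    for _ in range(steps):
        nxt = coef[0] + sum(coef[i] * hist[-i] for i in range(1, p + 1))
        hist.append(nxt)
        out.append(nxt)
    return out
```

A forecast like this drives proactive provisioning: servers are added ahead of a predicted surge rather than after it is observed.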


Computer Networks | 2010

Affinity P2P: A self-organizing content-based locality-aware collaborative peer-to-peer network

Juan M. Tirado; Daniel Higuero; Florin Isaila; Jesús Carretero; Adriana Iamnitchi

Recent years have brought a dramatic increase in the popularity of collaborative Web 2.0 sites. According to recent evaluations, this phenomenon accounts for a large share of Internet traffic and significantly augments the load on the end-servers of Web 2.0 sites. In this paper, we show how collaborative classifications extracted from Web 2.0-like sites can be leveraged in the design of a self-organizing peer-to-peer network in order to distribute data in a scalable manner while preserving high content locality. We propose Affinity P2P (AP2P), a novel cluster-based, locality-aware, self-organizing peer-to-peer network. AP2P self-organizes to improve content locality using a novel affinity-based metric for estimating the distance between clusters of nodes sharing similar content. Searches in AP2P are directed to the cluster of interest, where a logarithmic-time parallel flooding algorithm provides high recall, low latency, and low communication overhead. The order of clusters is periodically changed using a greedy cluster placement algorithm, which reorganizes clusters based on affinity in order to increase the locality of related content. The experimental and analytical results demonstrate that the locality-aware, cluster-based organization of content offers substantial benefits, achieving an average latency improvement of 45% and up to a 12% increase in search recall.
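The affinity-driven reorganization can be illustrated with a stand-in metric. The Jaccard similarity and the greedy chaining below are assumptions made for illustration; the paper defines its own affinity metric and placement algorithm.

```python
def affinity(cluster_a, cluster_b):
    """Stand-in affinity: Jaccard similarity between the content-category
    sets of two clusters (the paper's actual metric differs)."""
    a, b = set(cluster_a), set(cluster_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def greedy_order(clusters):
    """Greedily chain clusters so that neighbours share high affinity,
    mimicking the periodic cluster-placement step."""
    names = list(clusters)
    order = [names.pop(0)]
    while names:
        last = order[-1]
        nxt = max(names, key=lambda n: affinity(clusters[last], clusters[n]))
        names.remove(nxt)
        order.append(nxt)
    return order
```

Placing high-affinity clusters next to each other keeps searches for related content within a small neighbourhood of the overlay.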


IEEE International Conference on High Performance Computing, Data, and Analytics | 2011

Multi-model prediction for enhancing content locality in elastic server infrastructures

Juan M. Tirado; Daniel Higuero; Florin Isaila; Jesús Carretero

Infrastructures serving on-line applications experience dynamic workload variations depending on diverse factors such as popularity, marketing, periodic patterns, fads, trends, and events. Some predictable factors, such as trends, periodicity, or scheduled events, allow for proactive resource provisioning in order to meet fluctuations in workloads. However, proactive resource provisioning requires prediction models forecasting future workload patterns. This paper proposes a multi-model prediction approach in which data are grouped into bins based on content locality and an autoregressive prediction model is assigned to each locality-preserving bin. We show that the prediction models can be identified and fitted in a computationally efficient way. We demonstrate experimentally that our multi-model approach improves locality over the uni-model approach, while achieving efficient resource provisioning and preserving high resource utilization and load balance.
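The multi-model idea, one predictor per locality-preserving bin, can be sketched as follows. AR(1) fitted by least squares is an illustrative choice; the identification and fitting procedure in the paper is more involved.

```python
import numpy as np

def fit_bin_models(bin_series, p=1):
    """Fit one AR(p) model per locality-preserving bin.
    Returns {bin_name: [c, a1, ..., ap]}."""
    models = {}
    for name, series in bin_series.items():
        n = len(series)
        X = np.column_stack(
            [np.ones(n - p)] + [series[p - i:n - i] for i in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
        models[name] = coef
    return models

def predict_next(models, bin_series):
    """One-step-ahead load forecast for every bin."""
    out = {}
    for name, coef in models.items():
        hist = bin_series[name]
        p = len(coef) - 1
        out[name] = coef[0] + sum(coef[i] * hist[-i] for i in range(1, p + 1))
    return out
```

Because each bin keeps related content together, per-bin forecasts can be mapped onto servers without scattering related data.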


Simulation Modelling Practice and Theory | 2015

CoSMiC: A hierarchical cloudlet-based storage architecture for mobile clouds

Francisco Rodrigo Duro; Javier Garcia Blas; Daniel Higuero; Óscar Pérez; Jesús Carretero

Storage capacity is a constraint for current mobile devices. Mobile Cloud Computing (MCC) has been developed to augment device capabilities, enabling mobile users to store and access large datasets on the cloud through wireless networks. However, given the limitations of network bandwidth, latency, and device battery life, new solutions are needed to extend the usage of mobile devices. This paper presents a novel design and implementation of a hierarchical cloud storage system for mobile devices based on multiple I/O caching layers. The solution relies on Memcached as a cache system, preserving its powerful capabilities such as performance, scalability, and quick and portable deployment, and aims to reduce the I/O latency of current mobile cloud solutions. It consists of a user-level library and extended Memcached back-ends, and is hierarchical in that Memcached-based I/O cache servers are deployed across the entire I/O infrastructure datapath. Our experimental results demonstrate that CoSMiC can significantly reduce the round-trip latency in the presence of low cache hit ratios compared with a 3G connection, even when using a multi-level cache hierarchy.
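A multi-level lookup of the general shape described here can be sketched in a few lines. This is an assumption-laden toy: `CacheLevel` stands in for a Memcached back-end, and the promote-on-hit policy is a common multi-level cache choice, not necessarily CoSMiC's.

```python
class CacheLevel:
    """A single I/O cache tier (stand-in for a Memcached back-end)."""
    def __init__(self, name):
        self.name = name
        self.store = {}
    def get(self, key):
        return self.store.get(key)
    def put(self, key, value):
        self.store[key] = value

def hierarchical_get(levels, backing_store, key):
    """Look the key up tier by tier, fastest first; on a hit, populate
    the faster tiers above it; on a full miss, fetch from the cloud
    and fill every tier on the way back."""
    for i, level in enumerate(levels):
        value = level.get(key)
        if value is not None:
            for upper in levels[:i]:
                upper.put(key, value)
            return value, level.name
    value = backing_store[key]          # miss everywhere: go to the cloud
    for level in levels:
        level.put(key, value)
    return value, "cloud"
```

The payoff is that repeated accesses are served from the nearest tier instead of paying the wireless round trip each time.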


Social Network Systems | 2011

Analyzing the impact of events in an online music community

Juan M. Tirado; Daniel Higuero; Florin Isaila; Jesús Carretero

The huge popularity of on-line social networking sites has increased the likelihood that locally-relevant events propagate globally throughout the Web. Conversely, real-world events captured as digital content on the Web may influence the behavior of these digital social communities. In this work, we collected and analyzed event-related data from LastFM and the global volume of searches from Google Trends, with the following objectives: (1) to study the event mechanism provided by LastFM, (2) to evaluate the impact of global and local events on system utilization, and (3) to understand the event-related propagation of information over social links. We analyze the impact of LastFM events on user activity and interests. Our study indicates that half of LastFM events cause an increase in the interest for an artist. However, several peaks of popularity are not associated with LastFM events, while being highly correlated with the global volume of Internet searches provided by Google Trends. Finally, our analysis shows that the interest for an artist appears to be disseminated over social links. We find that there are two factors likely to make a user influential over his friends: the degree of interest and the number of social links.
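The link between interest peaks and search volume can be quantified with a plain Pearson correlation coefficient; the abstract does not specify the exact estimator used, so this is an illustrative choice.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. an artist's weekly play counts and its search volume."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Values near +1 indicate that popularity peaks track external search interest, which is the pattern the study observes for peaks not explained by LastFM events.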


International Symposium on Parallel and Distributed Processing and Applications | 2012

Exploiting Parallelism in an X-ray Tomography Reconstruction Algorithm on Hybrid Multi-GPU and Multi-core Platforms

Ernesto Liria; Daniel Higuero; Monica Abella; Claudia de Molina; Manuel Desco

Most small-animal X-ray computed tomography (CT) scanners are based on cone-beam geometry with a flat-panel detector orbiting in a circular trajectory. Image reconstruction in these systems is usually performed by approximate methods based on the algorithm proposed by Feldkamp et al. Currently, there is a strong need to speed up the reconstruction of X-ray CT data in order to extend its clinical applications. We present an efficient modular implementation of an FDK-based reconstruction algorithm that takes advantage of the parallel computing capabilities and the efficient bilinear interpolation provided by general-purpose graphics processing units (GPGPUs). The proposed implementation of the algorithm is evaluated for a high-resolution micro-CT and achieves a speed-up factor of 46, while preserving the reconstructed image quality.
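The bilinear interpolation that GPU texture units perform in hardware during backprojection can be written out explicitly. This scalar version only illustrates the arithmetic; it is not the CUDA implementation.

```python
def bilinear(img, x, y):
    """Bilinear interpolation of a 2-D grid at fractional detector
    coordinates (x, y): a weighted average of the four neighbouring
    samples, with weights given by the fractional offsets."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0] +
            dx * (1 - dy) * img[y0][x0 + 1] +
            (1 - dx) * dy * img[y0 + 1][x0] +
            dx * dy * img[y0 + 1][x0 + 1])
```

In FDK backprojection, each voxel projects onto a fractional detector position per view, so this interpolation runs in the innermost loop, which is why hardware support for it matters.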


International Conference on Parallel and Distributed Systems | 2012

Reconciling Dynamic System Sizing and Content Locality through Hierarchical Workload Forecasting

Juan M. Tirado; Daniel Higuero; Florin Isaila; Jesús Carretero

The cloud has recently surged as a promising paradigm for hosting scalable Web systems serving large numbers of users with large workload variations. It makes it possible to dynamically add and remove resources in horizontally scalable architectures in order to save costs while maintaining quality of service. However, in order to achieve these goals, the resource management of a platform must include policies and mechanisms for dynamically resizing the system, redistributing content, and redirecting user requests. In this work, we address the problem of reconciling dynamic system sizing and content locality. Our study makes three main contributions. First, we address the problem of determining the system size by employing a hierarchical prediction framework that proactively provisions resources based on statistical models of the incoming workload. Second, we show how to employ the hierarchical prediction framework to design a dispatching mechanism that can be used with any request distribution policy. Third, we propose two novel prediction-based, locality-aware request distribution policies: Oblivious Locality-Aware Request Distribution (OLARD) and Affinity-Based Locality-Aware Request Distribution (ABLARD). We demonstrate the advantages of using our hierarchical prediction framework and show how our approach achieves high content locality while adapting to unexpected workload changes.
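A proactive sizing-plus-dispatch loop of the kind described above can be sketched as follows. The policy shown, deriving the server count from the aggregate forecast and assigning whole content bins to the least-loaded server, is a hypothetical illustration, not OLARD or ABLARD.

```python
import math

def size_and_dispatch(predicted_load, capacity):
    """Size the system from the aggregate forecast, then assign whole
    content bins to servers so that requests for a bin always land on
    one server (preserving locality while balancing predicted load)."""
    n_servers = max(1, math.ceil(sum(predicted_load.values()) / capacity))
    assignment, loads = {}, [0.0] * n_servers
    # Largest predicted bins first, each to the least-loaded server.
    for name, load in sorted(predicted_load.items(), key=lambda kv: -kv[1]):
        s = min(range(n_servers), key=lambda i: loads[i])
        assignment[name] = s
        loads[s] += load
    return n_servers, assignment
```

Keeping each bin on a single server is what preserves content locality; the forecast-driven server count is what makes the sizing proactive rather than reactive.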


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2012

Enhancing File Transfer Scheduling and Server Utilization in Data Distribution Infrastructures

Daniel Higuero; Juan M. Tirado; Florin Isaila; Jesús Carretero

This paper presents a methodology for efficiently solving the file transfer scheduling problem in a distributed environment. Our solution is based on the relaxation of an objective-based, time-indexed formulation of a linear programming problem. The main contributions of this paper are the following. First, we introduce a novel approach to relaxing the time-indexed formulation of the transfer scheduling problem in multi-server and multi-user environments. Our solution consists of reducing the complexity of the optimization by transforming it into an approximation problem whose proximity to the optimal solution can be controlled depending on practical and computational needs. Second, we present a distributed deployment of our methodology, which leverages the inherent parallelism of the divide-and-conquer approach in order to speed up the solving process. Third, we demonstrate that our methodology is able to considerably reduce the schedule length and idle time in a computationally tractable way.
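The setting can be illustrated with a greedy stand-in: each transfer fills the earliest time slots in which its server has spare bandwidth. The paper instead relaxes a time-indexed LP formulation; nothing below reproduces that, it only makes the slot/bandwidth structure of the problem concrete.

```python
def greedy_schedule(transfers, server_capacity):
    """Greedy stand-in for time-indexed transfer scheduling.
    `transfers` maps name -> (server, size); each time slot gives every
    server `server_capacity` units of bandwidth. Returns, per transfer,
    the index of the first slot after it completes."""
    used = {}                      # (server, slot) -> bandwidth consumed
    finish = {}
    for name, (server, size) in transfers.items():
        remaining, slot = size, 0
        while remaining > 0:
            free = server_capacity - used.get((server, slot), 0)
            if free > 0:
                sent = min(free, remaining)
                used[(server, slot)] = used.get((server, slot), 0) + sent
                remaining -= sent
            slot += 1
        finish[name] = slot
    return finish
```

In the LP view, `used[(server, slot)]` corresponds to the time-indexed decision variables; the relaxation controls how coarsely the time axis is discretized, trading optimality for tractability.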


IEEE Transactions on Parallel and Distributed Systems | 2014

CONDESA: A Framework for Controlling Data Distribution on Elastic Server Architectures

Juan M. Tirado; Daniel Higuero; Javier Garcia Blas; Florin Isaila; Jesús Carretero

Applications running in today's data centers show high workload variability. While seasonal patterns, trends, and expected events may help build proactive resource allocation policies, this approach has to be complemented with adaptive strategies that address unexpected events such as flash crowds and volume spikes. Additionally, the limitations of current I/O infrastructures in the face of a dramatic increase in data generation require the ability to build novel abstractions and models for robust decision making regarding data layout and data locality. In this work, we present CONDESA (CONtrolling Data distribution on Elastic Server Architectures), a framework for exploring adaptive data distribution strategies for elastic server architectures. To the best of our knowledge, CONDESA is the first platform that permits systematic study of the interplay between five data-related strategies: workload prediction, adaptive control of data distribution and server provisioning, adaptive data grouping, adaptive data placement, and adaptive system sizing. We demonstrate how CONDESA can be used for browsing the design space of adaptive data distribution policies. We show how prediction models can be compared in terms of overhead and accuracy. We evaluate the impact of change detection on prediction accuracy and show how CONDESA can be used for choosing an adequate prediction horizon. We demonstrate how adaptive prediction can be used for sizing a server system. Finally, we show how prediction models, change detection strategies, and data placement policies can be combined and compared based on server utilization, load balance, data locality, and over- and underprovisioning.


Proceedings of the 20th European MPI Users' Group Meeting | 2013

Improving MPI applications with a new MPI_Info and the use of the memoization

Alejandro Calderón; Jesús Carretero; Félix García-Carballeira; Javier Fernández; Daniel Higuero; Borja Bergua

The MPI Forum is actively working toward a better MPI standard. The result is version 3 of the MPI standard, along with ongoing efforts toward MPI 3.1/4.0. These technological changes provide many opportunities for improvements and new ideas. This paper introduces two main contributions in this direction: (1) an improved MPI_Info object implementation, and (2) a new way of using the improved MPI_Info object as a storage solution. The MPI_Info object [1] is described as an unordered set of key-value pairs (both key and value are strings, and keys are unique). It is implemented as a linked list in the two major MPI implementations available. We propose a new MPI_Info object implementation that (1) is based on hash tables (which improves overall performance), and (2) abstracts the underlying hash table infrastructure (which facilitates the use of the most appropriate solution). Our proposal opens the possibility of extending the use of the MPI_Info object as a shared storage solution among MPI processes. To demonstrate its capabilities, we explore the use of memoization in MPI applications in order to improve execution performance.
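The two ideas, a hash-table-backed MPI_Info and memoization on top of it, can be mimicked in a few lines. The real MPI_Info API is a C interface of string key-value pairs; the class below only mirrors the data-structure point (a hash table instead of a linked list), and `memoized` is a hypothetical helper, not part of any MPI implementation.

```python
class Info:
    """Hash-table-backed key-value object in the spirit of MPI_Info:
    keys are unique, and both keys and values are stored as strings.
    Python dicts are hash tables, giving O(1) expected lookups where a
    linked-list implementation would scan."""
    def __init__(self):
        self._table = {}
    def set(self, key, value):
        self._table[str(key)] = str(value)
    def get(self, key):
        return self._table.get(str(key))

def memoized(info, fn):
    """Use an Info object as a cache of prior results (memoization)."""
    def wrapper(arg):
        key = f"{fn.__name__}:{arg}"
        hit = info.get(key)
        if hit is not None:
            return int(hit)          # values are strings, convert back
        result = fn(arg)
        info.set(key, result)
        return result
    return wrapper
```

Because the store is string-keyed and shared, the same object could in principle cache results across processes, which is the storage-solution angle the paper explores.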

Collaboration


Dive into Daniel Higuero's collaboration.

Top Co-Authors

Jesús Carretero (Instituto de Salud Carlos III)
Juan M. Tirado (Instituto de Salud Carlos III)
Florin Isaila (Instituto de Salud Carlos III)
Javier Garcia Blas (Instituto de Salud Carlos III)
Borja Bergua (Instituto de Salud Carlos III)
Ernesto Liria (Instituto de Salud Carlos III)
Javier Fernández (Instituto de Salud Carlos III)
Monica Abella (Instituto de Salud Carlos III)