Alberto Núñez
Complutense University of Madrid
Publications
Featured research published by Alberto Núñez.
Grid Computing | 2012
Alberto Núñez; José Luis Vázquez-Poletti; Agustín C. Caminero; Gabriel G. Castañé; Jesús Carretero; Ignacio Martín Llorente
Simulation techniques have become a powerful tool for deciding the best starting conditions in pay-as-you-go scenarios. This is the case of public cloud infrastructures, where a given number and type of virtual machines (VMs, for short) are instantiated for a specified time, which is reflected in the final budget. With this in mind, this paper introduces and validates iCanCloud, a novel simulator of cloud infrastructures with remarkable features such as flexibility, scalability, performance and usability. Furthermore, the iCanCloud simulator has been built on the following design principles: (1) it is targeted at conducting large experiments, as opposed to other simulators in the literature; (2) it provides a flexible and fully customizable global hypervisor for integrating any cloud brokering policy; (3) it reproduces the instance types provided by a given cloud infrastructure; and finally, (4) it contains a user-friendly GUI for configuring and launching simulations, which range from a single VM to large cloud computing systems composed of thousands of machines.
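As a rough illustration of the pay-as-you-go cost model that such simulations reason about, the sketch below estimates the budget for a set of VM instances billed per hour. This is not iCanCloud's actual API; the instance types and hourly prices are invented for illustration.

import math

# Hypothetical on-demand hourly prices per instance type (illustrative only).
HOURLY_PRICE = {"small": 0.05, "medium": 0.10, "large": 0.20}

def estimate_budget(instances):
    """instances: list of (instance_type, hours_used) pairs.
    Returns the total cost, assuming partial hours are billed as full hours."""
    total = 0.0
    for vm_type, hours in instances:
        billed_hours = math.ceil(hours)          # pay-as-you-go: round up to whole hours
        total += billed_hours * HOURLY_PRICE[vm_type]
    return total

# Example: four small VMs for 2.5 h plus one large VM for 10 h.
print(estimate_budget([("small", 2.5)] * 4 + [("large", 10)]))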
Simulation Modelling Practice and Theory | 2013
Gabriel G. Castañé; Alberto Núñez; Pablo Llopis; Jesús Carretero
Due to the energy crisis of recent years, energy waste and sustainability have been brought both to public attention and under industrial and scientific scrutiny. Thus, obtaining high performance at a reduced cost in cloud environments has reached a turning point where computing power is no longer the most important concern. Instead, the emphasis is shifting to managing energy efficiently, and providing techniques for measuring energy requirements in cloud systems becomes of capital importance. Currently there are different methods for measuring energy consumption in computer systems. The first consists in using power meter devices, which measure the aggregated power use of a machine. Another method involves directly instrumenting the motherboard with multimeters in order to obtain each power connector’s voltage and current, thus obtaining real-time power consumption. These techniques provide very accurate results, but they are not suitable for large-scale environments. On the contrary, simulation techniques provide good scalability for performing experiments on energy consumption in cloud environments. In this paper we propose E-mc², a formal framework integrated into the iCanCloud simulation platform for modelling the energy requirements of cloud computing systems.
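As a rough, illustrative sketch of this kind of energy accounting (not the E-mc² formalism itself), the following snippet integrates per-component power over simulated time intervals; the component names and power figures are invented.

# Minimal sketch: energy of a node as the sum of per-component power integrated
# over time intervals. Illustrative only; not the E-mc^2 framework.

def energy_joules(intervals):
    """intervals: list of (seconds, watts) samples for one component."""
    return sum(seconds * watts for seconds, watts in intervals)

# Hypothetical per-component power traces of one node (duration in s, power in W).
cpu  = [(60, 85.0), (120, 40.0)]   # busy phase, then idle phase
disk = [(180, 8.0)]
nic  = [(180, 4.5)]

total = energy_joules(cpu) + energy_joules(disk) + energy_joules(nic)
print(f"Node energy: {total:.0f} J ({total / 3.6e6:.4f} kWh)")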
Simulation Modelling Practice and Theory | 2012
Alberto Núñez; Javier Fernández; Rosa Filgueira; Félix García; Jesús Carretero
In this paper we propose a new simulation platform called SIMCAN for analyzing parallel and distributed systems. This platform is aimed at testing parallel and distributed architectures and applications. The main characteristics of SIMCAN are flexibility, accuracy, performance, and scalability. Hence, the proposed platform has a modular design that eases the integration of different basic systems into a single architecture. Its design follows a hierarchical schema that includes simple modules, basic systems (computing, memory management, I/O, and networking), physical components (nodes, switches, …), and aggregations of components. New modules may also be incorporated to include new strategies and components. In addition, a graphical configuration tool has been developed to help untrained users with the task of modelling new architectures. Finally, a validation process and some evaluation tests have been performed to evaluate the SIMCAN platform.
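The hierarchical composition described above can be illustrated with a minimal sketch; the class names and capacities below are hypothetical and do not correspond to SIMCAN's real module definitions.

from dataclasses import dataclass, field
from typing import List

# Illustrative only: basic systems are aggregated into a node, nodes into a rack.

@dataclass
class BasicSystem:
    kind: str          # "computing", "memory", "io" or "networking"
    capacity: float    # cores, GiB, TiB or Gbit/s, depending on the kind

@dataclass
class Node:
    name: str
    systems: List[BasicSystem] = field(default_factory=list)

@dataclass
class Rack:
    nodes: List[Node] = field(default_factory=list)

compute = Node("compute-0", [BasicSystem("computing", 16),
                             BasicSystem("memory", 64),
                             BasicSystem("io", 1.0),
                             BasicSystem("networking", 10)])
print(Rack(nodes=[compute]))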
Soft Computing | 2013
Alberto Núñez; Mercedes G. Merayo; Robert M. Hierons; Manuel Núñez
The generation of test data for state-based specifications is a computationally expensive process. This problem is magnified if we consider that time constraints have to be taken into account to govern the transitions of the studied system. The main goal of this paper is to introduce a complete methodology, supported by tools, that addresses this issue by representing the test data generation problem as an optimization problem. We use heuristics to generate test cases. In order to assess the suitability of our approach we consider two different case studies: a communication protocol and the scientific application BIPS3D. We give details concerning how the test case generation problem can be presented as a search problem and automated. Genetic algorithms (GAs) and random search are used to generate test data and evaluate the approach. GAs outperform random search and seem to scale well as the problem size increases. It is worth mentioning that we use a very simple fitness function that can be easily adapted for use with other evolutionary search techniques.
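A minimal sketch of casting test data generation as a search problem follows. The encoding, the fitness function and the system-under-test stub are invented for illustration and are not those used in the paper.

import random

# Illustrative only: test inputs are encoded as fixed-length integer vectors and a
# simple fitness function rewards inputs that drive a (fake) timed system close to
# a target response-time boundary.

TARGET_MS = 500          # hypothetical timeout the test data should exercise
GENE_RANGE = (0, 100)
GENOME_LEN = 5

def response_time_ms(genome):
    # Stand-in for executing the system under test with the generated input.
    return sum(g * 7 for g in genome) % 1000

def fitness(genome):
    # Smaller distance to the timeout boundary is better.
    return -abs(response_time_ms(genome) - TARGET_MS)

def random_genome():
    return [random.randint(*GENE_RANGE) for _ in range(GENOME_LEN)]

def evolve(pop_size=40, generations=50, mutation_rate=0.2):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                  # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)              # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:                # point mutation
                child[random.randrange(GENOME_LEN)] = random.randint(*GENE_RANGE)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, response_time_ms(best))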
The Journal of Supercomputing | 2012
Rosa Filgueira; Jesús Carretero; David E. Singh; Alejandro Calderón; Alberto Núñez
This work presents an optimization of MPI communications, called Dynamic-CoMPI, which uses two techniques in order to reduce the impact of communications and non-contiguous I/O requests in parallel applications. These techniques are independent of the application and complementary to each other. The first technique is an optimization of the Two-Phase collective I/O technique from ROMIO, called Locality-Aware strategy for Two-Phase I/O (LA-Two-Phase I/O). In order to increase the locality of the file accesses, LA-Two-Phase I/O employs the Linear Assignment Problem (LAP) to find an optimal I/O data communication schedule. The main purpose of this technique is to reduce the number of communications involved in collective I/O operations. The second technique, called Adaptive-CoMPI, is based on run-time compression of the MPI messages exchanged by applications. Both techniques can be applied to any application, because both of them are transparent to the user. Dynamic-CoMPI has been validated using several MPI benchmarks and real HPC applications. The results show that, for many of the considered scenarios, important reductions in execution time are achieved by reducing the size and the number of the messages. Additional benefits of our approach are the reduction of the total communication time and of network contention, thus enhancing not only performance but also scalability.
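The run-time compression idea behind Adaptive-CoMPI can be illustrated with a minimal sketch (not the actual CoMPI implementation, which works inside the MPI layer): a message is sent compressed only when compression actually reduces its size, and a one-byte flag tells the receiver how to decode it.

import zlib

def pack(payload: bytes) -> bytes:
    compressed = zlib.compress(payload)
    if len(compressed) < len(payload):
        return b"\x01" + compressed      # flag 1: payload is compressed
    return b"\x00" + payload             # flag 0: payload is sent as-is

def unpack(message: bytes) -> bytes:
    flag, body = message[:1], message[1:]
    return zlib.decompress(body) if flag == b"\x01" else body

msg = b"0.0 " * 10_000                   # highly compressible numeric text
wire = pack(msg)
assert unpack(wire) == msg
print(f"{len(msg)} bytes -> {len(wire)} bytes on the wire")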
International Conference on Cluster Computing | 2008
Alberto Núñez; Javier Fernández; José Daniel García; Jesús Carretero
In this paper we present new techniques for simulating high performance MPI applications on large storage networks. Performance analysis of high performance applications on large storage networks is a very complex and time-consuming task. However, modelling and studying the behaviour of any application on complex network architectures is crucial to obtain good performance. The goal of this work is to predict both the scalability and the performance of high performance computing applications on any network architecture. A very interesting feature of this work is that our approach does not require modifying the application in order to simulate its behaviour. In addition, there is no need to modify the simulator code to test different architectures; this can be done simply by creating a new configuration file. In order to perform these analyses we have used SIMCAN, a simulation tool for analyzing high-performance I/O architectures developed at Universidad Carlos III de Madrid. To validate this work we have used the BIPS3D application on several hardware-based architectures and on our simulator. The comparative results of those environments are presented to show the accuracy and efficiency of our approach.
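As a purely hypothetical illustration of describing an architecture through configuration rather than code, such a description might look as follows; the field names and values are invented and are not SIMCAN's real configuration syntax.

# Hypothetical architecture description (illustrative only): trying a different
# storage network would mean writing a new description like this one, with no
# change to the simulator code.
architecture = {
    "nodes": 64,
    "cores_per_node": 8,
    "memory_gib_per_node": 16,
    "storage": {
        "servers": 4,
        "disk_bandwidth_mib_s": 120,
        "stripe_size_kib": 256,
    },
    "network": {
        "topology": "fat-tree",
        "link_bandwidth_gbps": 10,
        "latency_us": 5,
    },
    "application": {"name": "BIPS3D", "processes": 256},
}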
Journal of Computational Science | 2014
Alberto Núñez; Mercedes G. Merayo
Satisfying the global throughput targets of scientific applications is an important challenge in high performance computing (HPC) systems. The main difficulty lies in the large number of parameters that have an important impact on overall system performance, including the number of storage servers, the features of the communication links, and the number of CPU cores per node, among many others. In this paper we present a model that computes a performance/cost ratio for different hardware configurations, focusing on scientific computing. The main goal of this approach is to balance the trade-off between cost and performance using different combinations of components for building the entire system. The main advantage of our approach is that the different configurations are simulated in a complex simulation platform; therefore, it is not necessary to make an investment until the system has evaluated the different alternatives and suggested the best solutions. In order to achieve this goal, both the system architecture and Map-Reduce applications are modeled. The proposed model has been evaluated by building complex systems in a simulated environment using the SIMCAN simulation platform.
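A minimal sketch of the performance/cost comparison follows; the candidate configurations, prices and throughput figures are invented placeholders standing in for simulation results.

# Illustrative only: rank candidate configurations by throughput per unit cost.
candidates = [
    # (name, price_per_hour_usd, simulated_throughput_jobs_per_hour)
    ("8 nodes, 2 storage servers, 1 GbE",   4.0, 120.0),
    ("8 nodes, 4 storage servers, 10 GbE",  6.5, 230.0),
    ("16 nodes, 4 storage servers, 10 GbE", 11.0, 310.0),
]

ranked = sorted(candidates, key=lambda c: c[2] / c[1], reverse=True)
for name, price, throughput in ranked:
    print(f"{name:38s} {throughput / price:6.1f} jobs per dollar")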
Annual Simulation Symposium | 2008
José Daniel García; Laura Prada; Javier Fernández; Alberto Núñez; Jesús Carretero
One of the most common techniques for evaluating the performance of a computer I/O subsystem has been based on detailed simulation models that include specific features of storage devices, such as disk geometry, zone splitting, caching, read-ahead buffers and request reordering. However, as soon as a new technological innovation appears, those models need to be reworked to include the new devices, making it difficult to keep general models up to date. An alternative is modeling a storage device as a black-box probabilistic model, where the storage device itself, its interface and the interconnection mechanisms are modeled as a single stochastic process, defining the service time as a random variable with an unknown distribution. This approach allows disk service times to be generated with less computational power by means of a variate generator included in a simulator, and thus reaches greater scalability in the simulation-based performance evaluation of I/O subsystems. In this paper, we present a method for building a variate generator from experimental service time data. In order to build the variate generator, both real and synthetic workloads may be used; the workload is used to feed the evaluated disk and obtain service time measurements. From the experimental data we build a variate generator that fits the distribution of the disk service times. We also present a use case of our method, where we have obtained a relative error ranging from 0.45% to 1%.
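A minimal sketch of a black-box variate generator follows, using inverse-transform sampling over the empirical distribution of measured service times; the measurements are invented and the paper's actual fitting procedure may differ.

import random

# Fake disk service time measurements, in milliseconds (illustrative only).
measured_ms = [4.1, 4.3, 4.8, 5.0, 5.2, 5.9, 6.4, 7.0, 8.3, 12.5]

def make_variate_generator(samples):
    """Return a function that draws synthetic service times from the empirical CDF."""
    xs = sorted(samples)
    n = len(xs)
    def draw():
        u = random.random()
        return xs[min(int(u * n), n - 1)]     # step-function inverse of the empirical CDF
    return draw

service_time = make_variate_generator(measured_ms)
print([service_time() for _ in range(5)])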
Information & Software Technology | 2015
Mercedes G. Merayo; Alberto Núñez
Context: The design of complex systems demands methodologies for analyzing their correct behaviour. It is usual that correct behaviour is determined by compliance with temporal requirements. Currently, testing is the most widely used technique for validating the correctness of systems. Although several techniques that take time aspects into account have been proposed, most of them require that the tester interacts with the system. When this is not possible, it is necessary to apply a passive testing approach, where the tester monitors the behaviour of the system. Objective: The aim of this paper is to propose a methodology for performing passive testing on communicating systems in which the behaviour of the components must fulfill temporal restrictions associated with both performance and delays/timeouts. Method: Our framework uses algorithms for checking traces collected from the system against invariants, which formally represent the most relevant properties that must be fulfilled by the system. In order to support the feasibility of the methodology, we have performed an empirical study on a complex system for the automatic recognition of images based on a pipeline architecture. We have analyzed the correctness of the system's behaviour with respect to a set of invariants. Finally, an experiment based on mutations of the system was conducted to study the level of detection achieved by a set of invariants. Results: Different errors were detected and fixed during the development of the system by means of the proposed methodology. The results of the experiments with the mutated versions of the system indicated that the designed set of invariants was more effective in finding errors associated with temporal aspects than those related to communication among components. Conclusion: The proposed technique has been shown to be very useful for analyzing complex timed systems and for finding errors when the tester has no control over their behaviour.
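A minimal sketch of checking one timed invariant against a collected trace follows; the trace format, the 200 ms bound and the invariant itself are invented for illustration.

# Illustrative only: every "request" observed in the monitored trace must be
# followed by a matching "response" within 200 ms.

TRACE = [  # (timestamp_ms, event, request_id)
    (0,   "request",  1),
    (120, "response", 1),
    (150, "request",  2),
    (420, "response", 2),   # violates the 200 ms bound
]

def check_response_time_invariant(trace, bound_ms=200):
    pending = {}
    violations = []
    for ts, event, rid in trace:
        if event == "request":
            pending[rid] = ts
        elif event == "response" and rid in pending:
            if ts - pending.pop(rid) > bound_ms:
                violations.append(rid)
    return violations

print(check_response_time_invariant(TRACE))   # -> [2]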
Annales des Télécommunications | 2015
Alberto Núñez; Robert M. Hierons
Cloud computing is a paradigm that provides access to a flexible, elastic and on-demand computing infrastructure, allowing users to dynamically request virtual resources. However, researchers typically cannot experiment with critical parts of cloud systems such as the underlying cloud architecture, resource-provisioning policies and the configuration of resource virtualisation. This problem can be partially addressed by using simulations of cloud systems. Unfortunately, testing cloud systems remains challenging due to the many parameters that such systems typically have and the difficulty in determining whether an observed behaviour is correct. In order to alleviate these issues, we propose a methodology to semi-automatically test and validate cloud models by integrating simulation techniques and metamorphic testing.
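A minimal sketch of a metamorphic check, usable when no exact oracle is available, follows; the simulate() stub and the metamorphic relation (adding identical VMs should not increase the simulated makespan) are hypothetical and not taken from the paper.

# Illustrative only: run the simulation on a base configuration and on a
# transformed one, then check that the outputs satisfy the metamorphic relation.

def simulate(num_vms, num_tasks, task_len_s=60):
    """Stand-in for a cloud-simulator run: ideal round-robin scheduling."""
    tasks_per_vm = -(-num_tasks // num_vms)        # ceiling division
    return tasks_per_vm * task_len_s               # makespan in seconds

def metamorphic_check(num_vms, num_tasks):
    base = simulate(num_vms, num_tasks)
    follow_up = simulate(2 * num_vms, num_tasks)   # metamorphic transformation
    assert follow_up <= base, f"relation violated: {follow_up} > {base}"

for vms, tasks in [(2, 10), (4, 100), (8, 64)]:
    metamorphic_check(vms, tasks)
print("metamorphic relation held for all tested configurations")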