Guadalupe Miñana
Complutense University of Madrid
Publication
Featured research published by Guadalupe Miñana.
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems | 2012
Victoria López; Guadalupe Miñana
Performance, reliability and safety are relevant factors when analyzing or designing a computer system. Many studies of performance are based on monitoring and analyzing data from a computer system. One of the most useful pieces of data is the Load Average (LA), which shows the average load of the system over the last minute, the last five minutes and the last fifteen minutes. Many studies of system performance are based on the load average, obtained by monitoring commands of the operating system, but these are sometimes difficult to understand and far removed from human intuition. The aim of this paper is to present a new procedure that allows us to determine the stability of a computer system from a list of load average sample data. The idea is presented as an algorithm based on statistical analysis, the aggregation of information and its formal specification. The result is an evaluation of the stability of the load and of the computer system by monitoring, but without adding any overhead to the system. In addition, the procedure can be used as a software monitor for risk prevention on any vulnerable system.
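The abstract does not reproduce the algorithm itself. As a minimal sketch of the underlying idea, assuming the coefficient of variation as the variability measure and an illustrative threshold (both assumptions, not the paper's specification), the following Python fragment classifies a list of load average samples:

```python
# Minimal sketch (not the paper's exact algorithm): classify the stability of
# a system from 1-minute load average samples using simple statistics.
# The cv_threshold value is an illustrative assumption.
import statistics

def load_stability(samples, cv_threshold=0.25):
    """Return a (label, cv) pair for a list of load average samples.

    The coefficient of variation (stdev / mean) is a crude stability
    indicator: low relative variability means a stable load.
    """
    mean = statistics.mean(samples)
    if mean == 0:
        return "idle", 0.0
    cv = statistics.stdev(samples) / mean
    return ("stable" if cv <= cv_threshold else "unstable"), cv

# Example: samples collected once per minute, e.g. from /proc/loadavg on Linux.
samples = [0.91, 0.88, 1.02, 0.95, 0.97, 1.10, 0.93]
label, cv = load_stability(samples)
print(f"{label} (cv={cv:.2f})")
```

Note that classifying an already-collected list in this way adds no instrumentation to the monitored system, which matches the paper's no-overhead goal.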
International Conference on Parallel Processing | 2006
José Manuel Colmenar; Oscar Garnica; Juan Lanchares; José Ignacio Hidalgo; Guadalupe Miñana; Sonia Martín López
In this paper we present sim-async, an architectural simulator able to model a 64-bit asynchronous superscalar microarchitecture. The aim of this tool is to serve designers in the study of different architectural proposals for asynchronous processors. Sim-async models the data-dependent timing of the processor modules by using distribution functions that describe the probability of a given delay being spent on a computation. This idea of characterizing the timing of the modules at the architectural level of abstraction using distribution functions is introduced for the first time in this work. In addition, sim-async models the delays of all the relevant hardware involved in the asynchronous communication between stages. To develop sim-async we have modified the source code of SimpleScalar by substituting the simulator's core with our own execution engine, which provides the functionality of a parameterizable microarchitecture adapted to the Alpha ISA. The correctness of sim-async was checked by comparing the outputs of the SPEC2000 benchmarks with SimpleScalar executions, and the asynchronous behavior was successfully tested against a synchronous configuration of sim-async.
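As a minimal sketch of the core modeling idea (the distribution values and module name below are hypothetical, not taken from sim-async), a module's data-dependent delay can be drawn from a distribution function mapping each possible delay to its probability:

```python
# Sketch: sample a module's data-dependent delay from a distribution
# function, as described in the abstract. Values are illustrative.
import random

# Hypothetical delay distribution for one module: delay (in ps) -> probability.
ALU_DELAY_DIST = {350: 0.60, 420: 0.30, 510: 0.10}

def sample_delay(dist):
    """Draw one delay from a {delay: probability} distribution function."""
    delays, probs = zip(*dist.items())
    return random.choices(delays, weights=probs, k=1)[0]

# Each simulated operation through the module gets its own sampled latency.
latencies = [sample_delay(ALU_DELAY_DIST) for _ in range(5)]
print(latencies)
```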
IEEE International Conference on Progress in Informatics and Computing | 2015
Victoria López; Guadalupe Miñana; Óscar Sánchez; Beatriz González; Gabriel Valverde; Raquel Caro
The way organizations and individuals think about Big Data and Open Data, and how they use them, has been a trending topic over the last few years. Big Data deals with collecting, storing, analyzing and putting data in value. Big, medium and small enterprises want to include information technologies in their management and decision processes. At the same time, movements advocating rights over public data have increased their presence and force. Data from governments must be open, and every day more cities and countries are opening their data. Open Data has emerged as a new paradigm for the public service provision model, with a special role in the Smart City. The main goal of Big and Open Data in a Smart City is to develop systems that are useful for citizens. In this work we analyze how both private enterprises and governments manage to improve the value of their data by combining private and public datasets, and we give some examples of our work in this area.
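As a minimal illustration of the combination step described above (the datasets, columns and figures are invented for the example), joining a private dataset with an open one can yield a metric neither holds alone:

```python
# Sketch: combine a private dataset with an open public one using pandas.
# All names and numbers here are hypothetical.
import pandas as pd

# Private data: an enterprise's sales per city district.
private = pd.DataFrame({"district": ["Centro", "Chamberí"],
                        "sales": [120_000, 95_000]})

# Open data: population per district, e.g. from a city open data portal.
public = pd.DataFrame({"district": ["Centro", "Chamberí"],
                       "population": [131_000, 137_000]})

# Joining both adds value neither dataset has alone: sales per inhabitant.
combined = private.merge(public, on="district")
combined["sales_per_capita"] = combined["sales"] / combined["population"]
print(combined)
```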
IEEE International Conference on Intelligent Systems and Knowledge Engineering | 2014
Victoria López; Guadalupe Miñana; Juan Tejada; Raquel Caro
In this paper we propose a new benchmark to drive decision-making in the maintenance of computer systems. This benchmark is built from load average sample data. The main goal is to improve the reliability and performance of a set of devices or components. In particular, the stability of the system is measured in terms of the variability of the load. A forecast of the behavior of this stability is also proposed as part of the reporting benchmark. At the final stage, a more stable system is obtained and its global reliability and performance can then be evaluated by means of appropriate specifications.
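The abstract names two ingredients: a variability measure of the load and a forecast of its behavior. A minimal sketch of both, under the assumption of a coefficient-of-variation measure and a simple exponential smoothing forecast (neither is stated in the abstract), could look like this:

```python
# Sketch of the benchmark's two ingredients (illustrative assumptions):
# stability measured as load variability over sliding windows, then a
# one-step-ahead forecast of that variability series.
import statistics

def variability(window):
    """Coefficient of variation of a window of load average samples."""
    m = statistics.mean(window)
    return statistics.stdev(window) / m if m else 0.0

def forecast(series, alpha=0.3):
    """One-step-ahead forecast by simple exponential smoothing."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

samples = [0.9, 1.1, 0.8, 1.4, 1.0, 1.2, 0.9, 1.3]
windows = [samples[i:i + 4] for i in range(len(samples) - 3)]
cvs = [variability(w) for w in windows]
print("stability series:", [round(c, 2) for c in cvs])
print("next expected variability:", round(forecast(cvs), 2))
```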
Archive | 2014
Guadalupe Miñana; H. Marrao; Raquel Caro; J. Gil; Victoria López; Beatriz González
The price of electricity in the MIBEL is very changeable. This creates a lot of uncertainty and risk for market actors. Due to continuous changes in demand and marginal price adjustment, buyers and sellers cannot know the evolution of prices in advance. The study of this uncertainty motivates this work. Unlike other published work, this paper analyzes the perspective of the buyer rather than that of the seller, as is usual in the literature. The aim of this work is to develop predictive models of the electricity price in order to build tools that manage and reduce the risk associated with the volatility of the wholesale electricity market, and therefore provide better opportunities for small traders to participate in that market. These models are also useful to large industrial consumers, enabling them to design strategies that optimize their production capacity as a function of electricity market price signals and so improve their production costs. Therefore, this article is based on the prediction of energy prices instead of demand. The paper analyzes the model of energy prices to determine the key variables that define their final value. The proposed model has been applied to MIBEL 2012 data. The results suggest the use of several calendar-based models, taking different combinations into account.
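The paper's concrete models are not given in this abstract. As a minimal sketch of what a calendar-based price model can look like (synthetic data, and the choice of ordinary least squares on day-of-week and hour-of-day features, are assumptions for illustration):

```python
# Sketch: a calendar-based electricity price model fitted with NumPy OLS.
# Training data and coefficients are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: (day_of_week, hour) -> price in EUR/MWh.
n = 500
dow = rng.integers(0, 7, n)          # 0 = Monday ... 6 = Sunday
hour = rng.integers(0, 24, n)
price = 45 + 3 * (dow < 5) + 0.4 * hour + rng.normal(0, 2, n)

# Design matrix: intercept, one dummy per day Monday..Saturday (Sunday is the
# reference level), and the hour of day.
X = np.column_stack([np.ones(n)] +
                    [(dow == d).astype(float) for d in range(6)] +
                    [hour])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

# Predict, e.g., Wednesday (dow=2) at 18:00.
x_new = np.array([1.0] + [1.0 if d == 2 else 0.0 for d in range(6)] + [18.0])
print(f"predicted price: {x_new @ coef:.1f} EUR/MWh")
```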
IET Computers and Digital Techniques | 2007
Guadalupe Miñana; José Ignacio Hidalgo; Juan Lanchares; José Manuel Colmenar; Oscar Garnica; Sonia Martín López
A hardware technique to reduce static and dynamic power consumption in the functional units of 64-bit high-performance processors is presented here. The instructions that require an adder have been studied, and it can be concluded that for a large percentage of instructions one of the two source operands is narrow and does not require a 64-bit adder. Furthermore, by analysing the executed applications, it is feasible to classify their internal operations according to their bit-width requirements and select the appropriate adder type that each instruction requires. This approach is based on substituting some of the 64-bit power-hungry adders with 32-bit ones, which consume much less power, and modifying the issue protocol to send as many instructions as possible to these low-power units while incurring negligible performance penalties. Five different configurations were tested for the execution units. Results indicate that this technique can save up to 50% of the power consumed by the adders and up to 21% of the overall power consumption in the execution unit of high-performance architectures. Moreover, the simulations show good results in terms of power efficiency (IPC/W), and it can be affirmed that the technique could prevent the creation of hot spots in the functional units.
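As a minimal sketch of the issue-time decision the abstract describes (the abstract's criterion is that one of the two source operands is narrow; how the hardware completes the upper result bits when the other operand is wide is a circuit detail not covered here, and the function names below are illustrative):

```python
# Sketch: steer an addition to a 32-bit low-power adder when, per the
# abstract's criterion, one of its source operands is narrow.
def is_narrow(value, bits=32):
    """True if a 64-bit two's-complement value fits in `bits` bits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return lo <= value <= hi

def select_adder(op_a, op_b):
    """Pick an adder for op_a + op_b based on operand widths."""
    return "adder32" if is_narrow(op_a) or is_narrow(op_b) else "adder64"

print(select_adder(7, 123))               # adder32: both operands narrow
print(select_adder(1 << 40, 5))           # adder32: one operand is narrow
print(select_adder(1 << 40, -(1 << 40)))  # adder64: both operands are wide
```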
Digital Systems Design | 2006
Guadalupe Miñana; Oscar Garnica; José Ignacio Hidalgo; Juan Lanchares; José Manuel Colmenar
This paper presents a hardware technique to reduce the static and dynamic power consumption in the functional units of a 64-bit superscalar processor. Our approach is based on substituting some of the 64-bit power-hungry adders with 32-bit lower-power adders, and modifying the protocol in order to issue as many instructions as possible to those low-power units while incurring a negligible performance penalty. Our technique saves between 14.7% and 50% of the power consumption in the adders, which is between 6.1% and 20% of the power consumption in the execution units. This reduction is important because it can avoid the creation of hot spots in the functional units.
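A rough consistency check on these figures (an inference from the reported numbers, not a value stated in the paper): 6.1/14.7 ≈ 0.41 and 20/50 = 0.40, which suggests the adders draw roughly 40% of the execution-unit power in the evaluated configurations.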
Digital Systems Design | 2006
José Manuel Colmenar; Oscar Garnica; Juan Lanchares; José Ignacio Hidalgo; Guadalupe Miñana; Sonia Martín López
Nowadays, synchronous processor designers have to deal with severe problems related to the distribution of a complex clock network, such as skew reduction, high power consumption, synchronization of clocks, etc. Asynchronous or self-timed architectures are becoming an interesting design alternative because they usually avoid these drawbacks, and they are able to achieve high performance at a low power consumption cost. However, in the first steps of the design process, evaluating the performance of such architectures through simulation is much more complicated due to the need to model the data-dependent timing of each system module. The aim of this paper is to evaluate the performance of a 64-bit fully-asynchronous superscalar processor microarchitecture with dynamically scheduled instruction flow, out-of-order speculative execution of instructions and advanced branch prediction. To tackle this goal we have described the asynchronous microarchitecture, solving the synchronization between structures through a four-phase handshake protocol. Then, we have used a modification of the SimpleScalar suite to model the asynchronous microarchitecture in order to run Alpha programs on it. Finally, we have compared the performance of this fully-asynchronous processor with that of its synchronous counterpart by running architectural simulations of the SPEC2000 benchmarks on both models.
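The four-phase handshake named above is a standard protocol; as a minimal sketch of one transfer between two stages (an illustration, not the paper's implementation):

```python
# Sketch of a four-phase handshake between two pipeline stages. The four
# phases: (1) sender raises req with valid data, (2) receiver latches data
# and raises ack, (3) sender lowers req, (4) receiver lowers ack (idle).
class Channel:
    def __init__(self):
        self.req = False   # request wire, driven by the sender
        self.ack = False   # acknowledge wire, driven by the receiver
        self.data = None   # data bundle, valid while req is high

def four_phase_transfer(ch, value, trace):
    ch.data, ch.req = value, True      # phase 1: sender asserts request
    trace.append("req+")
    latched = ch.data                  # phase 2: receiver latches data...
    ch.ack = True                      # ...and asserts acknowledge
    trace.append("ack+")
    ch.req = False                     # phase 3: sender withdraws request
    trace.append("req-")
    ch.ack = False                     # phase 4: receiver releases acknowledge
    trace.append("ack-")
    return latched

ch, trace = Channel(), []
value = four_phase_transfer(ch, 0xCAFE, trace)
print(hex(value), trace)  # 0xcafe ['req+', 'ack+', 'req-', 'ack-']
```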
Power and Timing Modeling, Optimization and Simulation | 2005
Guadalupe Miñana; Oscar Garnica; José Ignacio Hidalgo; Juan Lanchares; José Manuel Colmenar
This paper presents a hardware technique to reduce static and dynamic power consumption in FUs. This approach entails substituting some of the power-hungry adders of a 64-bit superscalar processor with lower-power ones, and modifying the slot protocol in order to issue as many instructions as possible to those low-power units while incurring marginal performance penalties. Our proposal saves between 2% and 45% of the power consumption in the FUs and between 16% and 65% of the power consumption in the adders.
IEEE International Conference on Progress in Informatics and Computing | 2014
José Manuel Velasco; Beatriz González Pérez; Guadalupe Miñana; Victoria López; Raquel Caro
Since 1998, the Spanish and Portuguese administrations have shared a common path in building the Iberian Electricity Market (MIBEL). This cooperation has been very successful, not only for its contribution to the existence of an electricity market at an Iberian level, but also on a European scale, as a significant step in building the Internal Energy Market. The price of electricity in the MIBEL is very changeable. This creates a lot of uncertainty and risk for market actors. Due to continuous changes in demand and marginal price adjustment, buyers and sellers cannot know the evolution of prices in advance. Our interest is to study this uncertainty from the perspective of the buyer rather than the seller. The aim of this work is to develop a graphical analysis of the variables involved in the Spanish energy market in order to explain the electricity price and to provide small traders that could be interested in participating in that market with better knowledge of the schedule. Large industrial consumers, on the other hand, can use this information to design strategies to optimize their production capacity and improve their production costs. In this article the variable of interest is the marginal price instead of demand. The paper provides a graphical analysis by means of an easily reproducible automatization with the R project for statistical computing that makes it possible to explore, visualize and understand the key variables that define the final price. MIBEL 2011 and 2012 data are used for illustration. The results show the importance of the calendar effect, seasonality and trend as the principal factors to take into account in the subsequent phases: modeling and forecasting.
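The paper carries out this exploration in R; as a comparable sketch of a calendar-effect summary in Python/pandas (the hourly data below is synthetic, and the column name is an assumption):

```python
# Sketch: expose the calendar effect on hourly prices by averaging over
# day of week and hour of day. Data is synthetic, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2012-01-01", periods=24 * 90, freq="h")
price = 45 + 5 * (idx.dayofweek < 5) + 3 * np.sin(idx.hour / 24 * 2 * np.pi)
df = pd.DataFrame({"price": price + rng.normal(0, 2, len(idx))}, index=idx)

# Calendar effect: average price by day of week and by hour of day.
print(df.groupby(df.index.dayofweek)["price"].mean().round(1))
print(df.groupby(df.index.hour)["price"].mean().round(1))
```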