José Enrique Armendáriz-Iñigo
Universidad Pública de Navarra
Publication
Featured research published by José Enrique Armendáriz-Iñigo.
ACM Transactions on Database Systems | 2009
Yi Lin; Bettina Kemme; Ricardo Jiménez-Peris; Marta Patiño-Martínez; José Enrique Armendáriz-Iñigo
Database replication is widely used for fault tolerance and performance. However, it requires replica control to keep data copies consistent despite updates. The traditional correctness criterion for the concurrent execution of transactions in a replicated database is 1-copy-serializability. It is based on serializability, the strongest isolation level in a nonreplicated system. In recent years, however, Snapshot Isolation (SI), a slightly weaker isolation level, has become popular in commercial database systems. Several replica control protocols that provide SI in a replicated system already exist, but most of the correctness reasoning for these protocols has been rather informal. Additionally, most of the work so far ignores the issue of integrity constraints. In this article, we provide a formal definition of 1-copy-SI, using and extending a well-established definition of SI in a nonreplicated system. Our definition considers integrity constraints in a way that conforms to how integrity constraints are handled in commercial systems. We discuss a set of necessary and sufficient conditions for a replicated history to be producible under 1-copy-SI. This makes our formalism a convenient tool for proving the correctness of replica control algorithms.
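The core mechanism behind SI-based replica control is easy to sketch. Below is a minimal illustration (ours, not the article's formalism) of the first-committer-wins certification rule: a transaction may commit only if no transaction that committed after its snapshot was taken wrote an overlapping data item.

```python
# Hedged sketch of first-committer-wins certification under SI; the
# data structures are illustrative assumptions, not from the article.

def certify(start_ts, writeset, committed):
    """`committed` is a list of (commit_ts, writeset) pairs; writesets
    are sets of data-item identifiers."""
    for commit_ts, other_ws in committed:
        if commit_ts > start_ts and other_ws & writeset:
            return False  # write-write conflict with a concurrent txn
    return True  # safe to commit under SI

# certify(start_ts=5, writeset={"x"}, committed=[(7, {"x", "y"})]) -> False
```

When every replica runs such a certification over a common total order of writesets, all replicas reach the same commit/abort decisions, which is roughly the intuition that the 1-copy-SI conditions make precise.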
Symposium on Reliable Distributed Systems | 2006
Francesc D. Muñoz-Escoí; J. Pla-Civera; María Idoia Ruiz-Fuertes; Luis Irún-Briz; Hendrik Decker; José Enrique Armendáriz-Iñigo; J. R. González de Mendívil
Database replication protocols need to detect and block or abort conflicting transactions. A possible solution is to check their writesets (and also their readsets when a serialisable isolation level is requested), but this consumes considerable CPU time. The problem gets worse when replication support is provided by a middleware, since there is no direct DBMS support in that layer. We propose and discuss the use of the concurrency control support of the local DBMS for detecting conflicts between local transactions and the writesets of remote transactions. This simplifies many database replication protocols and enhances their performance.
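To make the idea concrete, here is a minimal sketch (ours, with sqlite3 standing in for the local DBMS): applying a remote writeset through ordinary UPDATE statements lets the DBMS's own locking detect conflicts with local transactions, so the middleware needs no readset/writeset checks of its own. The schema (an `id` primary key) is an assumption for illustration.

```python
import sqlite3  # stand-in for any DBMS with its own concurrency control

def apply_remote_writeset(conn, writeset):
    """`writeset` is a list of (table, row_id, {column: value}) tuples."""
    cur = conn.cursor()
    for table, row_id, values in writeset:
        assignments = ", ".join(f"{col} = ?" for col in values)
        # The UPDATE acquires the same locks a local writer would need,
        # so the DBMS itself blocks or aborts conflicting local txns.
        cur.execute(f"UPDATE {table} SET {assignments} WHERE id = ?",
                    (*values.values(), row_id))
    conn.commit()
```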
European Conference on Parallel Processing | 2005
Luis Irún-Briz; Hendrik Decker; Rubén de Juan-Marín; Francisco Castro-Company; José Enrique Armendáriz-Iñigo; Francesc D. Muñoz-Escoí
Data replication serves to improve the availability and performance of distributed systems. The price to be paid consists of the costs of the protocols by which a sufficient degree of consistency of the replicated data is maintained. Different kinds of targeted applications require different kinds of replication protocols, each one requiring a different set of metadata. We discuss the middleware architecture used in the MADIS project for maintaining the consistency of replicated databases. Instead of reinventing wheels, MADIS makes use, to a large extent, of basic resources provided by conventional database systems (e.g., triggers, views) to achieve its purpose. The underlying databases can thus perform many of the routines needed to support any consistency protocol more efficiently, and the implementation of the protocols becomes much simpler and easier. MADIS enables the databases to simultaneously maintain the different metadata needed by different replication protocols, so that the latter can be chosen, plugged in, and exchanged on the fly as online-configurable modules, in order to best fit the shifting needs of a given application at each moment.
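As an illustration of the trigger-based approach, the sketch below generates DDL for a writeset-collecting trigger; the table, column, and function names are our assumptions, not MADIS identifiers, and `current_txn_id()` stands for whatever transaction identifier the underlying DBMS exposes.

```python
def writeset_trigger_ddl(table):
    """DDL for a trigger that records every updated row of `table` in a
    metadata relation any plugged-in replication protocol can consult."""
    return (
        f"CREATE TRIGGER {table}_ws AFTER UPDATE ON {table} "
        f"FOR EACH ROW "
        f"INSERT INTO madis_writesets(tbl, row_id, txn) "
        f"VALUES ('{table}', NEW.id, current_txn_id());"
    )
```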
ACM Symposium on Applied Computing | 2008
José Enrique Armendáriz-Iñigo; A. Mauch-Goya; J. R. González de Mendívil; Francesc D. Muñoz-Escoí
Database replication has been researched as a solution to overcome the performance and availability problems of distributed systems. Full database replication based on group communication systems is an attempt to enhance performance that works well for a small number of sites. If application locality is taken into consideration, partial replication, i.e., not all sites store the full database, also enhances scalability. On the other hand, all copies must be kept consistent. If each DBMS provides SI, the execution of transactions has to be coordinated so as to obtain Generalized SI (GSI). In this paper, a partial replication protocol providing GSI is introduced that gives a consistent view of the database, provides an adaptive replication technique, and supports the failure and recovery of replicas.
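The routing step that partial replication adds can be pictured as follows. This is a hedged sketch with made-up site names and partition sets, not the paper's protocol: a transaction may only execute at sites that hold every partition it touches.

```python
# Which sites can run a transaction, given the tables it accesses?
SITE_PARTITIONS = {
    "site_a": {"customers", "orders"},
    "site_b": {"orders", "stock"},
    "site_c": {"customers", "orders", "stock"},
}

def candidate_sites(tables_touched):
    return [site for site, stored in SITE_PARTITIONS.items()
            if tables_touched <= stored]  # subset test per site

# candidate_sites({"orders", "stock"}) -> ["site_b", "site_c"]
```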
IEEE Transactions on Industrial Informatics | 2013
Joan Navarro; Agustín Zaballos; Andreu Sancho-Asensio; Guillermo Ravera; José Enrique Armendáriz-Iñigo
Smart grids are typically built by means of several techniques and technologies from poorly correlated research disciplines. Up to now, practitioners have decomposed the Smart Grid problem according to each knowledge domain, and thus only partial solutions have been presented so far. However, these proposals are often difficult to integrate with each other and with existing platforms because they do not consider the Smart Grid as a whole. The purpose of this paper is to propose a simple and complete secure QoS-aware ICT architecture with self-management capabilities, provided by a cognitive system, to meet the requirements of Smart Grids. The presented experiments show the feasibility of our solution and encourage practitioners to focus their efforts in this direction.
Expert Systems With Applications | 2014
Andreu Sancho-Asensio; Joan Navarro; Itziar Arrieta-Salinas; José Enrique Armendáriz-Iñigo; Agustín Zaballos; Elisabet Golobardes
Data mining techniques are traditionally divided into two distinct disciplines depending on the task to be performed by the algorithm: supervised learning and unsupervised learning. While the former aims at making accurate predictions after assuming an underlying structure in the data (which requires the presence of a teacher during the learning phase), the latter aims at discovering regularly occurring patterns beneath the data without any a priori assumptions about their underlying structure. The pure supervised model can construct a very accurate predictive model from data streams. However, in many real-world problems this paradigm may be ill-suited due to (1) the dearth of training examples and (2) the cost of labeling the information required to train the system. A sound use case of this concern is found when defining data replication and partitioning policies for storing the data generated in the Smart Grid domain, in order to adapt electric networks to current application demands (e.g., real-time consumption, network self-adaptation). As opposed to classic electrical architectures, Smart Grids encompass a fully distributed scheme with several diverse data generation sources. Current data storage and replication systems fail both at coping with such an overwhelming amount of heterogeneous data and at satisfying the stringent requirements posed by this technology (i.e., the dynamic nature of the physical resources, the continuous flow of information, and the demand for autonomous behavior). The purpose of this paper is to apply unsupervised learning techniques to enhance the performance of data storage in Smart Grids. More specifically, we have improved the eXtended Classifier System for Clustering (XCSc) algorithm to present a hybrid system that mixes data replication and partitioning policies by means of an online clustering approach. Conducted experiments show that the proposed system outperforms previous proposals and truly fits the premises of the Smart Grid.
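To give a flavor of the online clustering at the heart of the proposal, here is a sketch using plain incremental k-means on scalar access features; XCSc, the algorithm the paper actually extends, is a learning classifier system and considerably more involved.

```python
def online_kmeans(stream, k, lr=0.1):
    """Cluster a stream of scalar features one item at a time, so data
    with similar access patterns gravitates to the same partition."""
    stream = iter(stream)
    centroids = [next(stream) for _ in range(k)]   # seed from the stream
    for x in stream:
        i = min(range(k), key=lambda j: abs(x - centroids[j]))
        centroids[i] += lr * (x - centroids[i])    # nudge winning centroid
    return centroids

# online_kmeans([0.1, 0.2, 5.0, 5.1, 0.15, 4.9], k=2) -> two centroids
```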
ACM Symposium on Applied Computing | 2007
José Enrique Armendáriz-Iñigo; J. R. Juárez; J. R. González de Mendívil; Hendrik Decker; Francesc D. Muñoz-Escoí
Several previous works have proven that there is no way to guarantee a snapshot isolation level in symmetric replicated database systems without blocking transactions when they start. As a result, the generalized snapshot isolation (GSI) level was defined, slightly relaxing the freshness of the snapshot taken when a transaction is initiated at its local replica. This enhances performance, since transactions do not need to block, but in some cases it increases the abort rate. This paper proposes a flexible protocol that is able to bound the degree of snapshot outdatedness, from a relaxed GSI to the strict one-copy-equivalent SI (1CSI). Additionally, it proposes an optimistic solution where transactions do not block and only need to be re-initiated when their optimistic start fails. Such re-initiation happens very early and only rolls back the transaction's first accesses, without waiting for the transaction to complete. Finally, if 1CSI is not enough, the protocol is also able to manage transactions with serializable isolation when such a level is requested.
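The staleness bound that such a protocol tunes can be captured in a few lines. The sketch below is ours, with assumed names, and the same rule spans the whole spectrum: a large max_lag yields relaxed GSI, while max_lag = 0 forces the strict one-copy-equivalent snapshot.

```python
def snapshot_to_use(local_version, latest_known_version, max_lag):
    """Decide whether a transaction may start on the local snapshot."""
    if latest_known_version - local_version <= max_lag:
        return local_version   # fresh enough: start optimistically
    return None                # too stale: re-initiate on a newer snapshot
```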
International Symposium on Computer and Information Sciences | 2007
R. Salinas; Josep M. Bernabé-Gisbert; Francesc D. Muñoz-Escoí; José Enrique Armendáriz-Iñigo; J. R. González de Mendívil
One of the weaknesses of database replication protocols, compared to centralized DBMSs, is that they are unable to manage the concurrent execution of transactions at different isolation levels. In recent years, some theoretical works along this research line have appeared, but none of them has proposed and implemented a real replication protocol with support for multiple isolation levels. This paper takes advantage of our MADIS middleware and one of its implemented Snapshot Isolation protocols to design and implement SIRC, a protocol that is able to execute concurrently both generalized snapshot isolation (GSI) and generalized loose read committed (GLRC) transactions. We have also carried out a performance analysis showing how this kind of protocol can improve system performance and decrease the transaction abort rate in applications that do not require the strictest isolation level for every transaction.
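The benefit of mixing levels can be seen in a toy certification routine. This is an illustration under our own assumptions, not SIRC's code: GLRC transactions skip the snapshot validation that GSI transactions require, so fewer transactions abort.

```python
def certify_mixed(level, start_ts, writeset, committed):
    """`committed` is a list of (commit_ts, writeset) pairs."""
    if level == "GLRC":
        return True                     # read committed: no snapshot check
    return all(not (ws & writeset)      # GSI: first committer wins
               for ts, ws in committed if ts > start_ts)
```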
On the Move to Meaningful Internet Systems: OTM 2008 Confederated International Conferences (CoopIS, DOA, GADA, IS, and ODBASE), Part I | 2008
Francesc D. Muñoz-Escoí; María Idoia Ruiz-Fuertes; Hendrik Decker; José Enrique Armendáriz-Iñigo; José Ramón González de Mendívil
Current middleware database replication protocols take care of read-write conflict evaluation. If there are no such conflicts, the protocols allow transactions to commit. Other conflicts may arise due to integrity violations. So if, in addition to the consistency of transactions and replicas, the consistency of integrity constraints is also to be supported, some more care must be taken. Some classes of replication protocols are able to seamlessly use the integrity support of the underlying DBMS, but others are not. In this paper, we analyze the integrity support that can be provided by various classes of replication protocols. We also propose extensions for those that cannot directly manage certain kinds of constraints that are usually supported in DBMSs.
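One way a protocol can delegate integrity checking, sketched here under the assumption of a Python DB-API connection (not taken from the paper): the DBMS evaluates the constraints while the writeset is applied, and a violation is turned into an abort rather than a silent commit.

```python
import sqlite3

def apply_with_integrity(conn, statements):
    """`statements` is a list of (sql, params) pairs for one writeset."""
    try:
        for sql, params in statements:
            conn.execute(sql, params)   # constraint checks happen here
        conn.commit()
        return True                     # certified and committed
    except sqlite3.IntegrityError:
        conn.rollback()                 # integrity conflict: abort txn
        return False
```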
Advanced Information Networking and Applications | 2007
Luis H. García-Muñoz; José Enrique Armendáriz-Iñigo; Hendrik Decker; Francesc D. Muñoz-Escoí
The main goal of replication is to increase dependability. Recovery protocols are a critical building block for realizing this goal. In this survey, we present an analysis of recovery protocols proposed in recent years. In particular, we relate these protocols to the replication protocols that use them, and discuss their main advantages and disadvantages. We classify replication and recovery protocols by several characteristics and point out interrelationships between them.