Publications

Featured research published by J. R. Juárez-Rodríguez.


International Symposium on Computer and Information Sciences | 2006

A protocol for reconciling recovery and high-availability in replicated databases

José Enrique Armendáriz-Iñigo; Francesc D. Muñoz-Escoí; Hendrik Decker; J. R. Juárez-Rodríguez; J. R. González de Mendívil

We describe a recovery protocol that boosts availability, fault tolerance and performance by enabling failed network nodes to resume an active role immediately after they start recovering. The protocol is designed to work in tandem with middleware-based eager update-everywhere strategies and related group communication systems. The latter provide view synchrony, i.e., knowledge about currently reachable nodes and about the status of messages delivered by faulty and alive nodes. This enables a fast replay of missed updates, which defines a dynamic database recovery partition and thus speeds up the recovery of failed nodes; these nodes, together with the rest of the network, may seamlessly continue to process transactions even before their recovery has completed. We specify the protocol in terms of the procedures executed with every message and event of interest, and outline a correctness proof.
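The "recover while serving" idea in this abstract can be illustrated with a toy model: a rejoining node replays missed updates partition by partition and can already process transactions on any partition it has caught up on. This is a minimal sketch of the general concept; the class and method names are illustrative, not the paper's actual protocol.

```python
# Toy model of dynamic partition-based recovery: a rejoining node replays
# the writesets it missed, one partition at a time, and a partition becomes
# usable as soon as its replay finishes (before total recovery completes).

class RecoveringNode:
    def __init__(self, missed):
        # missed: dict mapping partition name -> list of writesets to replay
        self.pending = dict(missed)
        self.recovered = set()

    def replay_partition(self, part):
        for ws in self.pending.pop(part, []):
            pass                      # apply the missed writeset (omitted)
        self.recovered.add(part)

    def can_serve(self, part):
        # serve transactions on a partition once its missed updates are applied
        return part in self.recovered


n = RecoveringNode({"p1": ["w1"], "p2": ["w2", "w3"]})
n.replay_partition("p1")
assert n.can_serve("p1") is True      # p1 already serves transactions
assert n.can_serve("p2") is False     # p2 is still recovering
```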


International Conference On the Move to Meaningful Internet Systems | 2007

A deterministic database replication protocol where multicast writesets never get aborted

J. R. Juárez-Rodríguez; José Enrique Armendáriz-Iñigo; Francesc D. Muñoz-Escoí; J. R. González de Mendívil; José Ramón Garitagoitia

Several approaches to the full replication of data in distributed databases [1] have been studied. One of the preferred techniques is eager update everywhere based on a total-order multicast delivery service [2], whose most outstanding varieties are certification-based and weak-voting protocols [1]. Under this approach, the execution flow of a transaction can be split into two main phases: in the first, all operations are entirely executed at the delegate replica of the transaction; in the second, which starts when the transaction requests its commit, all updates are collected and grouped (into a so-called writeset) at the delegate replica and sent to all replicas. The commitment or abortion of a transaction is decided upon the delivery of this message. In certification-based protocols, each replica holds an ordered log of already committed transactions, and the writeset is certified against the log [3] to commit or abort the transaction. Weak-voting protocols, on the other hand, atomically apply the delivered writeset at remote replicas while the delegate, if it is still active, reliably multicasts [2] a commit message. Thus, certification-based protocols show better behavior in terms of performance, since only one message is multicast per transaction, but suffer higher abortion rates [1]. Recently, owing to the use of DBMSs providing SI, several certification-based protocols have been proposed that achieve this isolation level (actually a weaker form called GSI [3]) in a replicated setting, whereas only a few weak-voting ones exist [4].


Data and Knowledge Engineering | 2011

A formal characterization of SI-based ROWA replication protocols

José Enrique Armendáriz-Iñigo; J. R. Juárez-Rodríguez; J. R. González de Mendívil; José Ramón Garitagoitia; Luis Irún-Briz; Francesc D. Muñoz-Escoí

Snapshot isolation (SI) is commonly used in some commercial DBMSs with a multiversion concurrency control mechanism, since it never blocks read-only transactions. Recent database replication protocols have been designed using SI replicas, where transactions are first executed in a delegate replica and their updates (if any) are propagated to the rest of the replicas at commit time; i.e., they follow the Read One Write All (ROWA) approach. This paper provides a formalization that shows the correctness of abstract protocols covering these replication proposals. These abstract protocols differ in the properties demanded for achieving a global SI level and those needed for its generalized SI (GSI) variant, which allows reads from old snapshots. Additionally, we propose two more relaxed properties that also ensure a global GSI level. Thus, some applications can further optimize their performance in a replicated system while obtaining GSI.


Principles of Distributed Computing | 2008

Correctness criteria for replicated database systems with snapshot isolation replicas

José Enrique Armendáriz-Iñigo; J. R. Juárez-Rodríguez; José Ramón González de Mendívil; Francesc D. Muñoz-Escoí

In this work, we present the correctness criteria that ensure a replicated database behaves like a single copy in which transactions see a weaker form of SI, called Generalized SI, under deferred update protocols in a crash failure scenario.


Availability, Reliability and Security | 2008

A Database Replication Protocol Where Multicast Writesets Are Always Committed

J. R. Juárez-Rodríguez; José Enrique Armendáriz-Iñigo; José Ramón González de Mendívil; Francesc D. Muñoz-Escoí

Database replication protocols based on a certification approach are usually the best ones for achieving good performance. The weak voting approach yields a slightly longer transaction completion time, but a lower abortion rate. Both techniques can therefore be considered the best ones for replication when performance is a must, and both take advantage of the properties provided by atomic broadcast. We propose a new database replication strategy that shares many characteristics with these previous strategies. It is likewise based on totally ordering the application of writesets, but uses only an unordered reliable broadcast instead of an atomic broadcast. Additionally, the writesets of transactions that are aborted in the final validation phase are never broadcast in our strategy. Thus, this new approach reduces the communication traffic and also achieves a good transaction response time (even shorter than that of the previous strategies in some system configurations).
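A deterministic total order over writeset application, as the abstract describes, does not strictly require an atomic broadcast: one generic way to obtain it over a plain reliable broadcast is for every replica to buffer messages per round and, once a round is complete, apply them sorted by a (round, sender id) key that all replicas compute identically. The sketch below shows that generic idea only; it is an assumption for illustration, not the paper's actual protocol.

```python
# Deterministic ordering without an ordering service: every replica holds the
# same set of delivered messages for a completed round, so sorting the keys
# (round, sender_id) yields the identical application order everywhere.

def apply_order(buffered):
    """buffered: dict mapping (round, sender_id) -> writeset.
    Returns the writesets in the deterministic order all replicas agree on."""
    return [buffered[key] for key in sorted(buffered)]


msgs = {(1, "B"): "ws-b1", (1, "A"): "ws-a1", (2, "A"): "ws-a2"}
assert apply_order(msgs) == ["ws-a1", "ws-b1", "ws-a2"]
```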


Database and Expert Systems Applications | 2006

Trying to Cater for Replication Consistency and Integrity of Highly Available Data

José Enrique Armendáriz-Iñigo; J. R. Juárez-Rodríguez; Hendrik Decker; Francesc D. Muñoz-Escoí

Replication increases the availability of data. Availability, consistency and integrity are competing objectives; they need to be reconciled, and adapted to the needs of different applications and users, by appropriate replication strategies. We outline work in progress on a middleware architecture for replicated databases. It simultaneously maintains several protocols, so that it can be reconfigured on the fly to the actual needs of availability, consistency and integrity of possibly simultaneous applications and users.


The Journal of Supercomputing | 2012

An implementation of a replicated file server supporting the crash-recovery failure model

Itziar Arrieta-Salinas; José Enrique Armendáriz-Iñigo; J. R. Juárez-Rodríguez; José Ramón González de Mendívil

Data replication techniques are widely used for improving availability in software applications. Replicated systems have traditionally assumed the fail-stop model, which limits fault tolerance. For this reason, there is a strong motivation to adopt the crash-recovery model, in which replicas can dynamically leave and join the system. With the aim of pointing out some key issues that must be considered when dealing with replication and recovery, we have implemented a replicated file server that satisfies the crash-recovery model, making use of a Group Communication System. According to our experiments, the most interesting results are that the type of replication and the number of replicas must be carefully determined, especially in update-intensive scenarios, and that the recovery protocol imposes a variable overhead on the system. Given the latter, it would be convenient to adjust the desired trade-off between recovery time and system throughput in terms of the service state size and the number of missed operations.


International Conference on Software and Data Technologies | 2009

A Hybrid Approach for Database Replication: Finding the Optimal Configuration between Update Everywhere and Primary Copy Paradigms

M. Liroz-Gistau; J. R. Juárez-Rodríguez; José Enrique Armendáriz-Iñigo; J. R. González de Mendívil; Francesc D. Muñoz-Escoí

Database replication has been the subject of two different approaches, namely primary copy and update everywhere protocols. The former only allows update transactions to be performed at the primary replica, while the rest are used solely to execute read-only transactions. Update everywhere protocols, on the other hand, allow the system to schedule update transactions at any replica, thus increasing its capacity to deal with update-intensive workloads and to overcome failures. However, synchronization costs rise, and throughput may fall below that obtained by primary copy approaches. Under these circumstances, we propose a new database replication paradigm, halfway between the primary copy and update everywhere approaches, which improves the system's performance by adapting its configuration depending on the workload submitted to the system. The core of this approach is a deterministic replication protocol which propagates changes so that broadcast transactions are never aborted. We also propose a recovery algorithm to ensure fault tolerance.


International Conference on Software and Data Technologies | 2008

Relaxed Approaches for Correct DB-Replication with SI Replicas

José Enrique Armendáriz-Iñigo; J. R. Juárez-Rodríguez; J. R. González de Mendívil; José Ramón Garitagoitia; Francesc D. Muñoz-Escoí; Luis Irún-Briz

The concept of Generalized Snapshot Isolation (GSI) has recently been proposed as a suitable extension of conventional Snapshot Isolation (SI) for replicated databases. In GSI, transactions may use older snapshots instead of the latest snapshot required by SI, which can provide better performance without significantly increasing the abortion rate when write/write conflicts among transactions are low. We study and formally prove a sufficient condition that replication protocols with SI replicas following the deferred update technique must obey to achieve GSI: they must provide global atomicity and commit update transactions in the very same order at all sites. However, as this is only a sufficient condition, it is possible to obtain GSI by relaxing certain assumptions about the commit ordering of certain update transactions.
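The sufficient condition stated in this abstract (global atomicity plus identical commit order of update transactions at every site) can be checked mechanically over execution traces. Below is a toy checker for the ordering half of the condition; the trace format is a hypothetical simplification, not taken from the paper.

```python
# Check whether every site committed the same update transactions in the
# very same order, i.e. the ordering part of the sufficient condition for GSI.

def same_commit_order(site_traces):
    """site_traces: dict mapping site name -> list of committed
    update-transaction ids, in commit order at that site.
    Returns True iff all sites agree on the sequence."""
    orders = list(site_traces.values())
    return all(order == orders[0] for order in orders[1:])


assert same_commit_order({"s1": ["t1", "t2"], "s2": ["t1", "t2"]}) is True
assert same_commit_order({"s1": ["t1", "t2"], "s2": ["t2", "t1"]}) is False
```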


International Conference on Software and Data Technologies | 2018

On the Study of Dynamic and Adaptive Dependable Distributed Systems

José Enrique Armendáriz-Iñigo; J. R. Juárez-Rodríguez; J. R. González de Mendívil; Francesc D. Muñoz-Escoí; R. de Juan-Marín

Collaboration

Top co-authors of J. R. Juárez-Rodríguez:

Francesc D. Muñoz-Escoí (Polytechnic University of Valencia)
Luis Irún-Briz (Polytechnic University of Valencia)
Hendrik Decker (Polytechnic University of Valencia)
Itziar Arrieta-Salinas (Universidad Pública de Navarra)
M. Liroz-Gistau (Universidad Pública de Navarra)