Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Bettina Kemme is active.

Publication


Featured research published by Bettina Kemme.


ACM Transactions on Database Systems | 2000

A new approach to developing and implementing eager database replication protocols

Bettina Kemme; Gustavo Alonso

Database replication is traditionally seen as a way to increase the availability and performance of distributed databases. Although a large number of protocols providing data consistency and fault-tolerance have been proposed, few of these ideas have ever been used in commercial products due to their complexity and performance implications. Instead, current products allow inconsistencies and often resort to centralized approaches, thereby eliminating some of the advantages of replication. As an alternative, we propose a suite of replication protocols that addresses the main problems related to database replication. On the one hand, our protocols maintain data consistency and the same transactional semantics found in centralized systems. On the other hand, they provide flexibility and reasonable performance. To do so, our protocols take advantage of the rich semantics of group communication primitives and the relaxed isolation guarantees provided by most databases. This allows us to eliminate the possibility of deadlocks, reduce the message overhead and increase performance. A detailed simulation study shows the feasibility of the approach and the flexibility with which different types of bottlenecks can be circumvented.
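
As a rough illustration of the core idea, the sketch below shows write sets being applied in the delivery order imposed by a total-order broadcast, which is why all replicas end up in the same state and write-write deadlocks cannot arise. It is a minimal sketch assuming a hypothetical `Replica`/`WriteSet` interface, not the paper's actual protocol.

```python
# Minimal sketch (illustrative names, not the paper's protocol): write sets
# are installed in the total order delivered by the group-communication
# layer, so every replica applies updates in the same sequence.
from dataclasses import dataclass, field


@dataclass
class WriteSet:
    tx_id: int
    updates: dict          # item -> new value


@dataclass
class Replica:
    store: dict = field(default_factory=dict)
    applied: list = field(default_factory=list)

    def deliver(self, ws: WriteSet) -> None:
        """Invoked in total order for every broadcast write set."""
        self.store.update(ws.updates)   # install the updates
        self.applied.append(ws.tx_id)   # identical sequence on every replica


replicas = [Replica(), Replica(), Replica()]
for ws in [WriteSet(1, {"x": 10}), WriteSet(2, {"x": 11, "y": 5})]:
    for r in replicas:                  # same delivery order everywhere
        r.deliver(ws)

assert all(r.store == replicas[0].store for r in replicas)
```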


Symposium on Reliable Distributed Systems | 2000

Database replication techniques: a three parameter classification

Matthias Wiesmann; Fernando Pedone; André Schiper; Bettina Kemme; Gustavo Alonso

Data replication is an increasingly important topic as databases are more and more deployed over clusters of workstations. One of the challenges in database replication is to introduce replication without severely affecting performance. Because of this difficulty, current database products use lazy replication, which is very efficient but can compromise consistency. As an alternative, eager replication guarantees consistency but most existing protocols have a prohibitive cost. In order to clarify the current state of the art and open up new avenues for research, this paper analyses existing eager techniques using three key parameters (server architecture, server interaction and transaction termination). In our analysis, we distinguish eight classes of eager replication protocols and, for each category, discuss its requirements, capabilities and cost. The contribution lies in showing when eager replication is feasible and in spelling out the different aspects a database replication protocol must account for.
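
The eight classes follow directly from treating each of the three parameters as a binary choice; the sketch below simply enumerates that design space. The concrete parameter values shown are the commonly cited ones (primary copy vs. update everywhere, constant vs. linear interaction, voting vs. non-voting termination) and should be read as an illustration of the taxonomy rather than a restatement of the paper.

```python
# Illustrative enumeration of the three-parameter design space; the binary
# values below are the commonly cited ones and are shown only to make the
# "eight classes" arithmetic concrete.
from enum import Enum
from itertools import product


class ServerArchitecture(Enum):
    PRIMARY_COPY = "primary copy"
    UPDATE_EVERYWHERE = "update everywhere"


class ServerInteraction(Enum):
    CONSTANT = "constant number of messages per transaction"
    LINEAR = "messages per operation"


class TransactionTermination(Enum):
    VOTING = "voting"
    NON_VOTING = "non-voting"


classes = list(product(ServerArchitecture, ServerInteraction, TransactionTermination))
print(len(classes))   # 8 classes of eager replication protocols
```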


ACM Transactions on Computer Systems | 2005

MIDDLE-R: Consistent database replication at the middleware level

Marta Patiño-Martínez; Ricardo Jiménez-Peris; Bettina Kemme; Gustavo Alonso

The widespread use of clusters and Web farms has increased the importance of data replication. In this article, we show how to implement consistent and scalable data replication at the middleware level. We do this by combining transactional concurrency control with group communication primitives. The article presents different replication protocols, argues their correctness, describes their implementation as part of a generic middleware, Middle-R, and proves their feasibility with an extensive performance evaluation. The solution proposed is well suited for a variety of applications including Web farms and distributed object platforms.


International Conference on Distributed Computing Systems | 1998

A suite of database replication protocols based on group communication primitives

Bettina Kemme; Gustavo Alonso

This paper proposes a family of replication protocols based on group communication in order to address some of the concerns expressed by database designers regarding existing replication solutions. Due to these concerns, current database systems allow inconsistencies and often resort to centralized approaches, thereby reducing some of the key advantages provided by replication. The protocols presented in this paper take advantage of the semantics of group communication and use relaxed isolation guarantees to eliminate the possibility of deadlocks, reduce the message overhead, and increase performance. A simulation study shows the feasibility of the approach and the flexibility with which different types of bottlenecks can be circumvented.


ACM Transactions on Database Systems | 2003

Are quorums an alternative for data replication?

Ricardo Jiménez-Peris; Marta Patiño-Martínez; Gustavo Alonso; Bettina Kemme

Data replication is playing an increasingly important role in the design of parallel information systems. In particular, the widespread use of cluster architectures often requires replicating data for performance and availability reasons. However, maintaining the consistency of the different replicas is known to cause severe scalability problems. To address this limitation, quorums are often suggested as a way to reduce the overall overhead of replication. In this article, we analyze several quorum types in order to better understand their behavior in practice. The results obtained challenge many of the assumptions behind quorum-based replication. Our evaluation indicates that the conventional read-one/write-all-available approach is the best choice for a large range of applications requiring data replication. We believe this is an important result for anybody developing code for computing clusters, as the read-one/write-all-available strategy is much simpler to implement and more flexible than quorum-based approaches. In this article, we show that, in addition, it is also the best choice using a number of other selection criteria.
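
The arithmetic behind that conclusion is easy to picture: with read-one/write-all-available (ROWAA) a read touches a single replica no matter how large the cluster is, whereas quorum schemes force every read to contact several sites. The sketch below compares the two, using a simple majority quorum as a stand-in for the quorum types the article evaluates; it is not the article's evaluation.

```python
# Illustrative comparison only: quorum sizes for a simple majority scheme
# versus read-one/write-all-available (ROWAA).

def majority_quorum(n: int) -> tuple:
    """(read quorum, write quorum) sizes; read and write quorums always intersect."""
    write_q = n // 2 + 1
    read_q = n - write_q + 1
    return read_q, write_q


def rowaa(n_available: int) -> tuple:
    """ROWAA: read any single replica, write to all currently available ones."""
    return 1, n_available


for n in (3, 5, 9):
    print(n, majority_quorum(n), rowaa(n))
# Reads stay at cost 1 under ROWAA regardless of cluster size, which is the
# main reason it wins for the read-dominated workloads discussed above.
```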


International Conference on Data Engineering | 2005

Postgres-R(SI): combining replica control with concurrency control based on snapshot isolation

Shuqing Wu; Bettina Kemme

Replicating data over a cluster of workstations is a powerful tool to increase performance and provide fault-tolerance for demanding database applications. The big challenge in such systems is to combine replica control (keeping the copies consistent) with concurrency control. Most of the research so far has focused on providing the traditional correctness criterion, serializability. However, more and more database systems, e.g., Oracle and PostgreSQL, use multi-version concurrency control providing the isolation level snapshot isolation. In this paper, we present Postgres-R(SI), an extension of PostgreSQL offering transparent replication. Our replication tool is designed to work smoothly with PostgreSQL's concurrency control, providing snapshot isolation for the entire replicated system. We present a detailed description of the replica control algorithm and how it is combined with PostgreSQL's concurrency control component. Furthermore, we discuss some challenges we encountered when implementing the protocol. Our performance analysis based on the TPC-W benchmark shows that this approach exhibits excellent performance for real-life applications even if they are update intensive.
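
The key rule behind snapshot-isolation-based replica control is first-committer-wins: a transaction may commit only if no transaction that committed after its snapshot wrote an overlapping item. The sketch below illustrates just that validation rule; it is a toy under that assumption, not Postgres-R(SI)'s actual implementation.

```python
# Toy illustration of the first-committer-wins rule used under snapshot
# isolation; not Postgres-R(SI) code.
committed = []   # (commit_timestamp, frozenset of written items)
clock = 0


def validate_and_commit(snapshot_ts: int, write_set: set) -> bool:
    """Commit unless a transaction that committed after our snapshot wrote
    an overlapping item (write-write conflict)."""
    global clock
    for commit_ts, ws in committed:
        if commit_ts > snapshot_ts and ws & write_set:
            return False                     # conflict: abort
    clock += 1
    committed.append((clock, frozenset(write_set)))
    return True


t1_snapshot = t2_snapshot = clock                # both start on the same snapshot
print(validate_and_commit(t1_snapshot, {"x"}))   # True: first committer wins
print(validate_and_commit(t2_snapshot, {"x"}))   # False: overlapping write
print(validate_and_commit(t2_snapshot, {"y"}))   # True: disjoint items
```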


International Symposium on Distributed Computing | 2000

Scalable Replication in Database Clusters

Marta Patiño-Martínez; Ricardo Jiménez-Peris; Bettina Kemme; Gustavo Alonso

In this paper, we explore data replication protocols that provide both fault tolerance and good performance without compromising consistency. We do this by combining transactional concurrency control with group communication primitives. In our approach, transactions are executed at only one site so that not all nodes incur the overhead of producing results. To further reduce latency, we use an optimistic multicast technique that overlaps transaction execution with total order message delivery. The protocols we present in the paper provide correct executions while minimizing overhead and providing higher scalability.
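
A back-of-the-envelope calculation shows why executing each transaction at a single delegate site pays off: the other replicas only install the resulting write set, which is far cheaper than re-executing the transaction. The costs in the sketch below are invented purely for illustration.

```python
# Hypothetical cost model, for illustration only: one delegate executes the
# transaction, the remaining replicas just apply its write set.
EXECUTE_COST = 10    # run the full transaction logic (made-up units)
APPLY_COST = 2       # install an already-computed write set (made-up units)


def delegate_scheme(n_replicas: int, n_transactions: int) -> int:
    """Total cluster work when only one site executes each transaction."""
    return n_transactions * (EXECUTE_COST + (n_replicas - 1) * APPLY_COST)


def everyone_executes(n_replicas: int, n_transactions: int) -> int:
    """Total cluster work if every replica re-executed every transaction."""
    return n_transactions * n_replicas * EXECUTE_COST


print(delegate_scheme(5, 100), everyone_executes(5, 100))   # 1800 vs 5000
```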


Dependable Systems and Networks | 2001

Online reconfiguration in replicated databases based on group communication

Bettina Kemme; Alberto Bartoli; Ozalp Babaoglu

Over the last few years, many replica control protocols have been developed that take advantage of the ordering and reliability semantics of group communication primitives to simplify database system design and to improve performance. Although current solutions are able to mask site failures effectively, many of them are unable to cope with recovery of failed sites, merging of partitions, or joining of new sites. This paper addresses this important issue. It proposes efficient solutions for online system reconfiguration, providing new sites with a current state of the database without interrupting transaction processing in the rest of the system. Furthermore, the paper analyzes the impact of cascading reconfigurations and argues that they can be handled in an elegant way by extended forms of group communication.
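
The essence of such online reconfiguration can be pictured as a joining site receiving a database snapshot from a peer and then applying only the write sets delivered after the snapshot point, while the rest of the system keeps processing transactions. The sketch below is a simplified illustration under that assumption, not the paper's algorithm.

```python
# Simplified illustration of bringing a joining site up to date from a peer
# snapshot plus the totally ordered write sets delivered afterwards.
def bring_up_to_date(snapshot: dict, snapshot_seq: int, delivered: list) -> dict:
    """delivered: list of (sequence number, updates) in total order."""
    state = dict(snapshot)
    for seq, updates in delivered:
        if seq > snapshot_seq:       # skip what the snapshot already contains
            state.update(updates)
    return state


# Peer snapshot taken after sequence number 2; write set 3 arrived later.
log = [(1, {"x": 1}), (2, {"y": 2}), (3, {"x": 9})]
print(bring_up_to_date({"x": 1, "y": 2}, 2, log))   # {'x': 9, 'y': 2}
```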


International Conference on Distributed Computing Systems | 1999

Processing transactions over optimistic atomic broadcast protocols

Bettina Kemme; Fernando Pedone; Gustavo Alonso; André Schiper

Atomic broadcast primitives allow fault-tolerant cooperation between sites in a distributed system. Unfortunately, the delay incurred before a message can be delivered makes it difficult to implement high performance, scalable applications on top of atomic broadcast primitives. A new approach has been proposed which, based on optimistic assumptions about the communication system, reduces the average delay for message delivery. We develop this idea further and present a replicated database architecture that employs the new atomic broadcast primitive in such a way that the coordination phase of the atomic broadcast is fully overlapped with the execution of transactions, providing high performance without relaxing transaction correctness.
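
The overlap can be pictured as follows: a transaction starts executing as soon as its message is optimistically (tentatively) delivered, and commits once the final total order confirms the tentative position; only on a mismatch is work redone. The sketch below illustrates that idea with hypothetical names and a deliberately crude re-execution policy; it is not the paper's architecture.

```python
# Illustration of overlapping execution with total-order delivery: execute on
# tentative delivery, commit when the final order confirms the guess.
def process(tentative_order: list, final_order: list) -> list:
    log = []
    for tx in tentative_order:
        log.append(f"execute {tx} optimistically")       # overlap with ordering
    for pos, tx in enumerate(final_order):
        if pos < len(tentative_order) and tentative_order[pos] == tx:
            log.append(f"commit {tx}")                   # guess was correct
        else:
            log.append(f"re-execute and commit {tx}")    # order mismatch
    return log


for line in process(["T1", "T2"], ["T1", "T2"]):
    print(line)
# In the common case the tentative and final orders agree, so the
# coordination delay of the atomic broadcast is hidden behind execution.
```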


International Conference on Distributed Computing Systems | 2002

Improving the scalability of fault-tolerant database clusters

Ricardo Jiménez-Peris; Marta Patiño-Martínez; Bettina Kemme; Gustavo Alonso

Replication has become a central element in modern information systems, playing a dual role: increasing availability and enhancing scalability. Unfortunately, most existing protocols increase availability at the cost of scalability. This paper presents the architecture, implementation, and performance of a middleware-based replication tool that provides both availability and better scalability than existing systems. Its main characteristics are the use of specialized broadcast primitives and efficient data propagation.

Collaboration


Dive into Bettina Kemme's collaborations.

Top Co-Authors

Marta Patiño-Martínez
Technical University of Madrid

Ricardo Jiménez-Peris
Technical University of Madrid