Publication


Featured research published by Uwe Röhm.


International Conference on Management of Data | 2008

Serializable isolation for snapshot databases

Michael J. Cahill; Uwe Röhm; Alan Fekete

Many popular database management systems offer snapshot isolation rather than full serializability. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that individually maintain consistency. Until now, the only way to prevent these anomalies was to modify the applications by introducing artificial locking or update conflicts, following careful analysis of conflicts between all pairs of transactions. This paper describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation and performance study of the algorithm are described, showing that the throughput approaches that of snapshot isolation in most cases.
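The kind of anomaly this paper targets can be illustrated with the classic write-skew pattern. Below is a minimal, hypothetical Python simulation (not the paper's algorithm): two transactions each read from their own snapshot, check an invariant, and write disjoint items, so snapshot isolation's first-committer-wins rule lets both commit and the invariant breaks.

```python
# Minimal write-skew simulation: two doctors must not both go off call.
# Under snapshot isolation each transaction reads from its own snapshot,
# sees the invariant ("at least one on call") as satisfied, and writes a
# disjoint item -- so both commit and the invariant is violated.

def run_under_snapshot_isolation():
    committed = {"alice": "on_call", "bob": "on_call"}

    # Both transactions take their snapshot before either commits.
    snap_t1 = dict(committed)
    snap_t2 = dict(committed)

    writes_t1, writes_t2 = {}, {}

    # T1: Alice goes off call if someone else is still on call.
    if sum(v == "on_call" for v in snap_t1.values()) >= 2:
        writes_t1["alice"] = "off_call"

    # T2: Bob goes off call under the same check, on his own snapshot.
    if sum(v == "on_call" for v in snap_t2.values()) >= 2:
        writes_t2["bob"] = "off_call"

    # SI's first-committer-wins only aborts on overlapping write sets;
    # here the write sets are disjoint, so both commits succeed.
    assert not (writes_t1.keys() & writes_t2.keys())
    committed.update(writes_t1)
    committed.update(writes_t2)
    return committed

final = run_under_snapshot_isolation()
# The invariant "at least one doctor on call" no longer holds.
print(all(v == "off_call" for v in final.values()))
```

The paper's contribution is detecting exactly such dangerous read-write dependency patterns inside the engine at runtime, so applications need no manual conflict analysis.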


Extending Database Technology | 2000

OLAP Query Routing and Physical Design in a Database Cluster

Uwe Röhm; Klemens Böhm; Hans-Jörg Schek

This article quantifies the benefit from simple data organization schemes and elementary query routing techniques for the PowerDB engine, a system that coordinates a cluster of databases. We report on evaluations for a specific scenario: the workload contains OLAP queries, OLTP queries, and simple updates, borrowed from the TPC-R benchmark. We investigate affinity of OLAP queries and different routing strategies for such queries. We then compare two simple data placement schemes, namely full replication and a hybrid one combining partial replication with partitioning. We run different experiments with queries only, with updates only, and with queries concurrently with simple updates. It turns out that hybrid is superior to full replication, even without updates. Our overall conclusion is that coordinator-based routing has good scaleup properties for scenarios with complex analysis queries.


International Conference on Data Engineering | 2001

Cache-aware query routing in a cluster of databases

Uwe Röhm; Klemens Böhm; Hans-Jörg Schek

We investigate query routing techniques in a cluster of databases for a query-dominant environment. The objective is to decrease query response time. Each component of the cluster runs an off-the-shelf DBMS and holds a copy of the whole database. The cluster has a coordinator that routes each query to an appropriate component. Considering queries of realistic complexity, e.g., TPC-R, this article addresses the following questions: Can routing benefit from caching effects due to previous queries? Since our components are black-boxes, how can we approximate their cache content? How to route a query, given such cache approximations? To answer these questions, we have developed a cache-aware query router that is based on signature approximations of queries. We report on experimental evaluations with the TPC-R benchmark using our PowerDB database cluster prototype. Our main result is that our approach of cache approximation routing is better than state-of-the-art strategies by a factor of two with regard to mean response time.
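The routing idea can be sketched in a few lines. This is a simplified assumption-laden illustration, not the paper's router: here a query's "signature" is just the set of table names it touches, and each black-box component's cache is approximated by the signatures of the last few queries routed to it. The real system uses richer signature approximations and cost estimates.

```python
# Sketch of signature-based cache-aware routing. Assumption: a query
# signature is the set of tables it reads; a component's cache content is
# approximated by the signatures of recently routed queries.

class CacheAwareRouter:
    def __init__(self, components, history=4):
        self.recent = {c: [] for c in components}  # component -> signatures
        self.history = history

    def route(self, query_signature):
        def overlap(component):
            sigs = self.recent[component]
            cached = set().union(*sigs) if sigs else set()
            return len(cached & query_signature)

        # Prefer the component whose approximated cache overlaps most
        # with the incoming query, hoping for warm-cache execution.
        best = max(self.recent, key=overlap)
        self.recent[best].append(query_signature)
        self.recent[best] = self.recent[best][-self.history:]
        return best

router = CacheAwareRouter(["node1", "node2"])
router.route({"lineitem", "orders"})           # cold start
target = router.route({"orders", "customer"})  # overlaps previous tables
```

A query sharing tables with an earlier query is steered to the same component, which likely still caches the relevant pages.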


International Conference on Data Engineering | 2008

The Cost of Serializability on Platforms That Use Snapshot Isolation

Mohammad Alomari; Michael J. Cahill; Alan Fekete; Uwe Röhm

Several common DBMS engines use the multi-version concurrency control mechanism called Snapshot Isolation, even though application programs can experience non-serializable executions when run concurrently on such a platform. Several proposals exist for modifying the application programs, without changing their semantics, so that they are certain to execute serializably even on an engine that uses SI. We evaluate the performance impact of these proposals, and find that some have limited impact (only a few percent drop in throughput at a given multi-programming level) while others lead to a much greater reduction in throughput of up to 60% in high contention scenarios. We present experimental results for both an open-source and a commercial engine. We relate these to the theory, giving guidelines on which conflicts to introduce so as to ensure correctness with little impact on performance.


Computational Intelligence and Data Mining | 2007

Resource-aware Online Data Mining in Wireless Sensor Networks

Nhan Duc Phung; Mohamed Medhat Gaber; Uwe Röhm

Data processing in wireless sensor networks often relies on high-speed data stream input, but at the same time is inherently constrained by limited resource availability. Thus, energy efficiency and good resource management are vital for in-network processing techniques. We propose enabling resource-awareness for in-network processing algorithms by means of a resource monitoring component and designed a corresponding framework. As proof of concept, we implement an online clustering algorithm, which uses the resource monitor to adapt to resource availability, on the Sun SPOT sensor nodes from Sun Microsystems. We refer to this adaptive clustering algorithm as extended resource-aware cluster (ERA-cluster). Finally, we report on the outcome of several experiments to evaluate the validity of our approach in terms of resource adaptiveness and accuracy of the ERA-cluster. Results show that ERA-cluster can effectively adapt to resource availability while maintaining an acceptable level of accuracy.
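One plausible form of such adaptation can be sketched as follows. This is a hypothetical simplification (the names `ResourceMonitor` and the radius heuristic are my own, not the paper's): when the monitor reports low battery, the clusterer widens its cluster radius, keeping fewer clusters and thus doing less work and using less memory.

```python
# Sketch of resource-aware online clustering over 1-D readings.
# Assumption: adaptation = widening the cluster radius when resources
# run low; the actual ERA-cluster algorithm is more elaborate.

class ResourceMonitor:
    def __init__(self, battery=1.0):
        self.battery = battery  # fraction of battery remaining

class ERAClusterSketch:
    def __init__(self, monitor, base_radius=1.0):
        self.monitor = monitor
        self.base_radius = base_radius
        self.centers = []  # list of (center, count)

    def radius(self):
        # Low battery -> larger radius -> fewer clusters -> less work.
        return self.base_radius / max(self.monitor.battery, 0.1)

    def add(self, value):
        for i, (center, n) in enumerate(self.centers):
            if abs(value - center) <= self.radius():
                # Fold the reading into the matched cluster incrementally.
                self.centers[i] = ((center * n + value) / (n + 1), n + 1)
                return
        self.centers.append((value, 1))

mon = ResourceMonitor(battery=1.0)
clu = ERAClusterSketch(mon)
for v in [0.0, 0.5, 5.0]:
    clu.add(v)
n_full = len(clu.centers)   # fine-grained clustering at full battery

mon.battery = 0.2           # resources drop: radius widens
clu2 = ERAClusterSketch(mon)
for v in [0.0, 0.5, 5.0]:
    clu2.add(v)
n_low = len(clu2.centers)   # coarser clustering under low battery
```

The same stream yields fewer, coarser clusters when the monitor reports scarce resources, trading accuracy for lifetime.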


International Conference on Data Engineering | 2014

YCSB+T: Benchmarking web-scale transactional databases

Akon Dey; Alan Fekete; Raghunath Nambiar; Uwe Röhm

Database system benchmarks like TPC-C and TPC-E focus on emulating database applications to compare different DBMS implementations. These benchmarks use carefully constructed queries executed within the context of transactions to exercise specific RDBMS features, and measure the throughput achieved. Cloud services benchmark frameworks like YCSB, on the other hand, are designed for performance evaluation of distributed NoSQL key-value stores, early examples of which did not support transactions, and so the benchmarks use single operations that are not inside transactions. Recent web-scale distributed NoSQL systems such as Spanner and Percolator offer transaction features to cater to new web-scale applications. This has exposed a gap in standard benchmarks. We identify the issues that need to be addressed when evaluating transaction support in NoSQL databases. We describe YCSB+T, an extension of YCSB, that wraps database operations within transactions. In this framework, we include a validation stage to detect and quantify database anomalies resulting from any workload, and we gather metrics that measure transactional overhead. We have designed a specific workload called Closed Economy Workload (CEW), which can run within the YCSB+T framework. We share our experience with using CEW to evaluate some NoSQL systems.
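The core idea of a closed-economy workload can be shown with a toy sketch. This is an assumption-heavy illustration using plain in-memory accounts (YCSB+T drives a real store through its client API): money only moves between accounts, so the total balance is invariant, and a validation pass that re-reads all accounts turns any deviation from the initial total into a quantitative anomaly score.

```python
# Sketch of the Closed Economy Workload idea. Assumption: in-memory
# accounts stand in for a key-value store; transfers debit one account
# and credit another, so the total balance must be preserved.

import random

def run_cew(num_accounts=10, num_transfers=100, seed=7):
    rng = random.Random(seed)
    accounts = {i: 100 for i in range(num_accounts)}
    initial_total = sum(accounts.values())

    for _ in range(num_transfers):
        src, dst = rng.sample(range(num_accounts), 2)
        amount = rng.randint(1, 10)
        # One "transaction": debit and credit applied together.
        accounts[src] -= amount
        accounts[dst] += amount

    # Validation stage: a serializable execution preserves the total;
    # any deviation quantifies anomalies caused by the workload.
    return abs(sum(accounts.values()) - initial_total)

print(run_cew())
```

Against a real store with lost updates or write skew, the same validation pass would report a nonzero score.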


Database Systems for Advanced Applications | 2010

Corona: energy-efficient multi-query processing in wireless sensor networks

Raymes Khoury; Tim Dawborn; Bulat Gafurov; Glen Pink; Edmund Tse; Quincy Tse; K. Almi’Ani; Mohamed Medhat Gaber; Uwe Röhm; Bernhard Scholz

Wireless sensor networks (WSNs) are a core infrastructure for automatic environmental monitoring. We developed Corona as an in-network distributed query processor that allows several users to share a sensor network through a declarative query language. It includes a novel approach for minimising sensor activations in shared wireless sensor networks: we introduce the notion of freshness into WSNs so that users can ask for cached sensor readings with freshness guarantees. We further integrated a resource-awareness framework that allows the query processor to dynamically adapt to changing resource levels. The capabilities of this system are demonstrated with several aggregation queries for different users with different freshness and result precision needs.
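The freshness mechanism can be sketched as a cache check before each sensor activation. This is a hypothetical coordinator-side simplification (Corona implements this inside the network): a read carrying a freshness bound is served from cache when a recent-enough reading exists, and activates the sensor only otherwise.

```python
# Sketch of freshness-bounded reads. Assumption: a simple cache keyed by
# sensor id; a query states how stale a reading it will accept.

import time

class FreshnessCache:
    def __init__(self):
        self.cache = {}  # sensor_id -> (value, timestamp)

    def read(self, sensor_id, max_age_s, sample_fn):
        entry = self.cache.get(sensor_id)
        if entry is not None and time.monotonic() - entry[1] <= max_age_s:
            # Cached reading is fresh enough: no sensor activation needed.
            return entry[0]
        value = sample_fn()  # activate the sensor only when necessary
        self.cache[sensor_id] = (value, time.monotonic())
        return value

activations = []
def sample():
    activations.append(1)
    return 21.5

cache = FreshnessCache()
cache.read("temp1", max_age_s=10, sample_fn=sample)  # activates sensor
cache.read("temp1", max_age_s=10, sample_fn=sample)  # served from cache
```

Two users asking for the same sensor within the freshness bound share one activation, which is exactly the energy saving the paper aims for.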


International Conference on Data Engineering | 2009

A Robust Technique to Ensure Serializable Executions with Snapshot Isolation DBMS

Mohammad Alomari; Alan Fekete; Uwe Röhm

Snapshot Isolation (SI) is a popular concurrency control mechanism that has been implemented by many commercial and open-source platforms (e.g. Oracle, PostgreSQL, and MS SQL Server 2005). Unfortunately, SI can result in nonserializable execution, in which database integrity constraints can be violated. The literature reports some techniques to ensure that all executions are serializable when run in an engine that uses SI for concurrency control. These modify the application by introducing conflicting SQL statements. However, with each of these techniques the DBA has to make a choice among possible transactions to modify, and as we previously showed, making a bad choice of which transactions to modify can come with a hefty performance reduction. In this paper we propose a novel technique called ELM to introduce conflicts in a separate lock-manager object. Experiments with two platforms show that ELM has peak performance which is similar to SI, no matter which transactions are chosen for modification. That is, ELM is much less vulnerable to poor DBA choices than the previous techniques.
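The ELM idea can be sketched as follows. This is a hypothetical in-process stand-in (the class and lock names are mine; the paper's ELM is a separate lock-manager object, and lock names come from offline conflict analysis): transactions identified as a dangerous pair acquire the same named lock before running, so at most one of them is in flight at a time, without adding conflicting SQL statements to the transactions themselves.

```python
# Sketch of an external lock manager. Assumption: an in-process lock
# table stands in for the separate lock-manager service.

import threading

class ExternalLockManager:
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def _lock_for(self, name):
        with self._guard:
            return self._locks.setdefault(name, threading.Lock())

    def run_with_locks(self, lock_names, transaction_fn):
        # Acquire in sorted order so two transactions requesting
        # overlapping lock sets cannot deadlock each other.
        locks = [self._lock_for(n) for n in sorted(lock_names)]
        for lk in locks:
            lk.acquire()
        try:
            return transaction_fn()  # run the DB transaction under SI
        finally:
            for lk in reversed(locks):
                lk.release()

elm = ExternalLockManager()
result = elm.run_with_locks({"doctors_on_call"}, lambda: "committed")
```

Because the serialization happens outside the DBMS, the hot-path SQL stays unchanged, which is why peak throughput stays close to plain SI.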


International Conference on Data Engineering | 1999

Working together in Harmony: an implementation of the CORBA object query service and its evaluation

Uwe Röhm; Klemens Böhm

The CORBA standard, together with its service specifications, has gained considerable attention in recent years. The CORBA Object Query Service allows for declarative access to heterogeneous storage systems. We have come up with an implementation of this service called Harmony. The objective of the article is to provide a detailed description and quantitative assessment of Harmony. Its main technical characteristics are data-flow evaluation, bulk transfer and intra-query parallelism. To carry out the evaluation, we have classified data exchange between components of applications in several dimensions: one is to distinguish between point-, context- and bulk data access. We have compared Harmony with: (1) data access through application-specific CORBA objects, and (2) conventional client/server communication, i.e., Embedded SQL. Our results show that Harmony performs much better than Alternative 1 for bulk data access. Besides that, due to the features mentioned above, Harmony performs approximately as well as conventional client/server communication mechanisms.


Very Large Data Bases | 2013

Scalable transactions across heterogeneous NoSQL key-value data stores

Akon Dey; Alan Fekete; Uwe Röhm

Many cloud systems provide data stores with limited features; in particular, they may not provide transactions, or may restrict transactions to a single item. We propose an approach that gives multi-item transactions across heterogeneous data stores, using only a minimal set of features from each store such as single-item consistency, conditional update, and the ability to include extra metadata within a value. We offer a client-coordinated transaction protocol that does not need a central coordinating infrastructure. A prototype implementation has been built as a Java library and measured with an extension of the YCSB benchmark to exercise multi-item transactions.
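The conditional-update building block can be illustrated with a toy sketch. This is a strong simplification under stated assumptions (a dict-backed store with compare-and-set stands in for a real key-value store, and the rollback/recovery machinery is omitted; the paper's protocol additionally embeds transaction metadata inside values): each write is installed only if the item's version still matches what the transaction read, so a concurrent update causes an abort instead of a lost update.

```python
# Sketch of multi-item commit over single-item conditional updates.
# Assumption: versioned values with compare-and-set; no recovery logic.

class KVStore:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)  # (value, version) or None

    def conditional_put(self, key, value, expected_version):
        current = self._data.get(key)
        current_version = current[1] if current else 0
        if current_version != expected_version:
            return False  # another client updated the item concurrently
        self._data[key] = (value, current_version + 1)
        return True

def commit(store, writes, read_versions):
    # Conditionally install every write; abort on the first conflict.
    # (A full protocol would also roll back already-installed writes.)
    for key, value in writes.items():
        if not store.conditional_put(key, value, read_versions[key]):
            return False
    return True

store = KVStore()
ok = commit(store, {"a": 1, "b": 2}, {"a": 0, "b": 0})
```

The same primitive works against any store offering conditional update, which is what makes the protocol portable across heterogeneous back ends.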

Collaboration


Dive into Uwe Röhm's collaborations.

Top Co-Authors

Akon Dey (University of Sydney)
Hyuck Han (Dongduk Women's University)