Publication


Featured research published by Guido Moerkotte.


International Conference on Management of Data | 1990

Access support in object bases

Alfons Kemper; Guido Moerkotte

In this work access support relations are introduced as a means for optimizing query processing in object-oriented database systems. The general idea is to maintain redundant separate structures (disassociated from the object representation) to store object references that are frequently traversed in database queries. The proposed access support relation technique is no longer restricted to relate an object (tuple) to an atomic value (attribute value) as in conventional indexing. Rather, access support relations relate objects with each other and can span over reference chains which may contain collection-valued components in order to support queries involving path expressions. We present several alternative extensions of access support relations for a given path expression, the best of which has to be determined according to the application-specific database usage profile. An analytical cost model for access support relations and their application is developed. This analytical cost model is, in particular, used to determine the best access support relation extension and decomposition with respect to the specific database configuration and application profile.
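
A minimal sketch of the idea (our illustration, not code from the paper): an access support relation materializes the object identifiers along a path expression such as Emp.dept.mgr.name, so path queries become lookups in the relation instead of object-to-object traversals. All names and data below are made up.

```python
# A minimal sketch (not from the paper) of an access support relation (ASR)
# for the hypothetical path expression Emp.dept.mgr.name. The ASR materializes
# the OID chains so that path queries become lookups instead of traversals.

employees = {
    "e1": {"name": "Jones", "dept": "d1"},
    "e2": {"name": "Smith", "dept": "d1"},
}
departments = {"d1": {"name": "R&D", "mgr": "e9"}}
managers = {"e9": {"name": "Miller"}}

# Full extension of the ASR: one tuple (emp_oid, dept_oid, mgr_oid, name)
# per complete path instance.
asr = [
    (e, emp["dept"], departments[emp["dept"]]["mgr"],
     managers[departments[emp["dept"]]["mgr"]]["name"])
    for e, emp in employees.items()
]

# Query: "all employees whose manager is named Miller", answered from the
# ASR alone, without traversing employee -> department -> manager objects.
result = [e for (e, _, _, mgr_name) in asr if mgr_name == "Miller"]
print(result)  # ['e1', 'e2']
```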


Database Programming Languages | 1993

Nested Queries in Object Bases

Sophie Cluet; Guido Moerkotte

Many declarative query languages for object-oriented databases allow nested subqueries. This paper contains the first (to our knowledge) proposal to optimize them. A two-phase approach is used to optimize nested queries in the object-oriented context. The first phase—called dependency-based optimization—transforms queries at the query language level in order to treat common subexpressions and independent subqueries more efficiently. The transformed queries are translated to nested algebraic expressions. These entail nested loop evaluation which may be very inefficient. Hence, the second phase unnests nested algebraic expressions to allow for more efficient evaluation.
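
To see why unnesting matters, here is a minimal sketch (our illustration, not the paper's algebra): a correlated nested query re-evaluates its inner block once per outer tuple, while the unnested form computes a grouped aggregate once and reuses it. All names below are made up.

```python
# A minimal sketch of unnesting. Nested query: for each employee, does the
# salary exceed the average salary of the employee's department?
from collections import defaultdict

emps = [
    {"name": "a", "dept": "d1", "sal": 100},
    {"name": "b", "dept": "d1", "sal": 200},
    {"name": "c", "dept": "d2", "sal": 300},
]

# Nested-loop evaluation: the inner aggregate is recomputed per outer tuple.
nested = [e["name"] for e in emps
          if e["sal"] > sum(x["sal"] for x in emps if x["dept"] == e["dept"])
                        / len([x for x in emps if x["dept"] == e["dept"]])]

# Unnested evaluation: compute all averages once (a group-by), then join.
sums, counts = defaultdict(float), defaultdict(int)
for e in emps:
    sums[e["dept"]] += e["sal"]
    counts[e["dept"]] += 1
avg = {d: sums[d] / counts[d] for d in sums}
unnested = [e["name"] for e in emps if e["sal"] > avg[e["dept"]]]

assert nested == unnested  # same result, one pass for the aggregate
```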


International Conference on Management of Data | 1994

Optimizing disjunctive queries with expensive predicates

Alfons Kemper; Guido Moerkotte; Klaus Peithner; Michael Steinbrunn

In this work, we propose and assess a technique called bypass processing for optimizing the evaluation of disjunctive queries with expensive predicates. The technique is particularly useful for optimizing selection predicates that contain terms whose evaluation costs vary tremendously; e.g., the evaluation of a nested subquery or the invocation of a user-defined function in an object-oriented or extended relational model may be orders of magnitude more expensive than an attribute access (and comparison). The idea of bypass processing consists of avoiding the evaluation of such expensive terms whenever the outcome of the entire selection predicate can already be induced by testing other, less expensive terms. In order to validate the viability of bypass evaluation, we extend a previously developed optimizer architecture and incorporate three alternative optimization algorithms for generating bypass processing plans.
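
A minimal sketch of bypass processing (our illustration, not the paper's plan generator): for a disjunction cheap(t) OR expensive(t), tuples satisfying the cheap term qualify immediately and bypass the expensive term; only the remaining tuples pay for it. The predicates below are made up.

```python
# A minimal sketch of bypass processing for the disjunctive predicate
# cheap(t) OR expensive(t). Tuples that already satisfy the cheap term
# bypass the expensive term entirely.

def cheap(t):
    return t % 2 == 0          # e.g., a simple attribute comparison

def expensive(t):
    # stands in for a nested subquery or user-defined function call
    expensive.calls += 1
    return t > 90
expensive.calls = 0

tuples = range(100)
true_stream = [t for t in tuples if cheap(t)]       # qualify immediately
false_stream = [t for t in tuples if not cheap(t)]  # still need expensive term
result = true_stream + [t for t in false_stream if expensive(t)]

print(len(result), expensive.calls)  # expensive evaluated on 50 tuples, not 100
```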


Information Systems | 1992

Access support relations: an indexing method for object bases

Alfons Kemper; Guido Moerkotte

In this work access support relations are introduced as a means for optimizing query processing in object-oriented database systems. The general idea is to maintain separate structures (dissociated from the object representation) to redundantly store those object references that are frequently traversed in database queries. The proposed access support relation technique is no longer restricted to relate an object (tuple) to an atomic value (attribute value) as in conventional indexing. Rather, access support relations relate objects with each other and can span over reference chains which may contain collection-valued components in order to support queries involving path expressions. We present several alternative extensions and decompositions of access support relations for a given path expression, the best of which has to be determined according to the application-specific database usage profile. An analytical cost model for access support relations is developed. This model is used, in particular, to determine the best access support relation extension and decomposition with respect to the specific database configuration and usage characteristics.
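
The decompositions studied here can be pictured as follows (our illustration, reusing the hypothetical Emp.dept.mgr.name path from the sketch above): a binary decomposition keeps one relation per reference step and reassembles the full path by joins, so an update to one step touches only one relation.

```python
# A minimal sketch of a binary decomposition of the ASR: each reference
# step is stored in its own relation, and the full path is reassembled
# by joins on demand.

emp_dept = [("e1", "d1"), ("e2", "d1")]   # Emp -> dept
dept_mgr = [("d1", "e9")]                 # Dept -> mgr
mgr_name = [("e9", "Miller")]             # Mgr -> name

def join(left, right):
    # natural join on the shared middle column
    return [(l0, r1) for (l0, l1) in left for (r0, r1) in right if l1 == r0]

full_path = join(join(emp_dept, dept_mgr), mgr_name)
print(full_path)  # [('e1', 'Miller'), ('e2', 'Miller')]
```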


International Conference on Management of Data | 1991

Function materialization in object bases

Alfons Kemper; Christoph Kilger; Guido Moerkotte

We describe function materialization as an optimization concept in object-oriented databases. Exploiting the object-oriented paradigm—namely classification, object identity, and encapsulation—facilitates a rather easy incorporation of function materialization into (existing) object-oriented systems. Furthermore, the exploitation of encapsulation (information hiding) and object identity provides for additional performance tuning measures which drastically decrease the rematerialization overhead incurred by updates in the object base. The paper concludes with a quantitative analysis of function materialization based on a sample performance benchmark obtained from our experimental object base system GOM.
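
A minimal sketch of the materialization idea (our illustration, not GOM code): the result of an expensive derived function is stored alongside the object and invalidated only when an update comes in through the encapsulated interface. The Emp class and its attributes are made up.

```python
# A minimal sketch of function materialization: the result of an expensive
# derived function is cached per object and invalidated only when an
# attribute it depends on is updated through the encapsulated interface.

class Emp:
    def __init__(self, salary, bonus):
        self.salary, self.bonus = salary, bonus
        self._income = None            # materialized result, None = invalid

    def income(self):                  # expensive derived function
        if self._income is None:
            self._income = self.salary + self.bonus   # (re)materialize
        return self._income

    def set_salary(self, s):           # update via the encapsulated interface
        self.salary = s
        self._income = None            # rematerialization deferred to next use

e = Emp(100, 10)
print(e.income())   # 110, computed and materialized
print(e.income())   # 110, served from the materialized value
e.set_salary(120)   # invalidates the cached result
print(e.income())   # 130, recomputed exactly once
```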


Very Large Data Bases | 1993

Generating consistent test data: restricting the search space by a generator formula

Andrea Neufeld; Guido Moerkotte; Peter C. Lockemann

To address the problem of generating test data for a set of general consistency constraints, we propose a new two-step approach: First, the interdependencies between consistency constraints are explored and a generator formula is derived on their basis. During its creation, the user may exert control. In essence, the generator formula contains information to restrict the search for consistent test databases. In the second step, the test database is generated. Here, two different approaches are proposed. The first adapts an already published approach to generating finite models by enhancing it with requirements imposed by test data generation. The second, a new approach, operationalizes the generator formula by translating it into a sequence of operators, and then executes it to construct the test database. For this purpose, we introduce two powerful operators: the generation operator and the test-and-repair operator. This approach also allows for enhancing the generation operators with heuristics for generating facts in a goal-directed fashion. It avoids the generation of test data that may contradict the consistency constraints, and limits the search space for the test data. This article concludes with a careful evaluation and comparison of the performance of the two approaches and their variants by describing a number of benchmarks and their results.
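
A minimal sketch of the operator pipeline (our illustration, not the paper's operators): a generation operator produces candidate facts and a test-and-repair operator fixes violations instead of discarding the data. The constraint and schema below are made up.

```python
# A minimal sketch of the generate / test-and-repair idea: facts are
# generated, then repaired against a consistency constraint rather than
# being discarded wholesale.

import random
random.seed(0)

# Constraint: every employee's salary lies between 50 and 100.
def violates(emp):
    return not (50 <= emp["sal"] <= 100)

def generate(n):                       # generation operator
    return [{"id": i, "sal": random.randint(0, 150)} for i in range(n)]

def test_and_repair(db):               # test-and-repair operator
    for emp in db:
        if violates(emp):
            emp["sal"] = max(50, min(100, emp["sal"]))  # minimal repair
    return db

db = test_and_repair(generate(5))
assert not any(violates(e) for e in db)  # the test database is consistent
```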


FODO '93 Proceedings of the 4th International Conference on Foundations of Data Organization and Algorithms | 1993

Partition-Based Clustering in Object Bases: From Theory to Practice

Carsten Andreas Gerlhof; Alfons Kemper; Christoph Kilger; Guido Moerkotte

We classify clustering algorithms into sequence-based techniques—which transform the object net into a linear sequence—and partition-based clustering algorithms. Tsangaris and Naughton [TN91, TN92] have shown that the partition-based techniques are superior. However, their work is based on a single partitioning algorithm, the Kernighan and Lin heuristics, which is not applicable to realistically large object bases because of its high running-time complexity. The contribution of this paper is two-fold: (1) we devise a new class of greedy object graph partitioning algorithms (GGP) whose running-time complexity is moderate while still yielding good quality results. (2) Our extensive quantitative analysis of all well-known partitioning algorithms indicates that no one algorithm performs superior for all object net characteristics. Therefore, we propose an adaptable clustering strategy according to a multi-dimensional grid: the dimensions correspond to particular characteristics of the object base—given by, e.g., number and size of objects, degree of object sharing—and the grid entries indicate the most suitable clustering algorithm for the particular configuration.
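
A minimal sketch of a greedy object-graph partitioning step (our illustration, not the GGP algorithms themselves): edges of the object net are visited in descending weight order and objects are packed onto fixed-capacity pages. The object net below is made up.

```python
# A minimal sketch of greedy object-graph partitioning: object pairs are
# assigned to pages in descending order of reference weight, so heavily
# co-referenced objects end up clustered on the same page.

PAGE_CAPACITY = 2

# (object_a, object_b, traversal_weight) edges of the object net
edges = [("o1", "o2", 10), ("o2", "o3", 8), ("o4", "o5", 5), ("o3", "o4", 1)]

page_of = {}
pages = []

for a, b, _ in sorted(edges, key=lambda e: -e[2]):   # heaviest edges first
    if a not in page_of and b not in page_of:
        pages.append({a, b})                         # open a new page
        page_of[a] = page_of[b] = len(pages) - 1
    elif a in page_of and b not in page_of:
        p = pages[page_of[a]]
        if len(p) < PAGE_CAPACITY:                   # room left on a's page
            p.add(b); page_of[b] = page_of[a]
    elif b in page_of and a not in page_of:
        p = pages[page_of[b]]
        if len(p) < PAGE_CAPACITY:
            p.add(a); page_of[a] = page_of[b]
    # if both are already placed, the edge becomes a cross-page reference

print(pages)  # [{'o1', 'o2'}, {'o4', 'o5'}]; o3 would get a page of its own
```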


Proceedings of the GI Conference Datenbanksysteme für Büro, Technik und Wissenschaft (BTW) | 1991

GOM: A Strongly Typed Persistent Object Model With Polymorphism

Alfons Kemper; Guido Moerkotte; Hans-Dirk Walter; Andreas Zachmann

In this paper the persistent object model GOM is described. GOM is an object-oriented data model that provides the most essential object features in a “lean” and coherent syntactical framework. These features include: object identity, object instantiation, subtyping and inheritance, operation refinement, and dynamic (late) binding. One of the main goals in the design of GOM was type safety. In order to achieve this we developed a strongly typed language that enables the verification of type safety at compile time. It is shown in this paper how commonly encountered “traps” for strong typing are avoided in GOM by specifying a very clean subtyping semantics on the basis of substitutability and type signatures.
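
A loose analogy of substitutability and late binding (our illustration in Python type hints, not GOM syntax); a static checker such as mypy would play the role of GOM's compile-time type-safety verification.

```python
# A loose analogy (not GOM syntax) of substitutability and late binding.

class Person:
    def display(self) -> str:
        return "Person"

class Employee(Person):                # subtype via inheritance
    def display(self) -> str:          # operation refinement
        return "Employee"

def show(p: Person) -> str:
    return p.display()                 # dynamic (late) binding

print(show(Person()))                  # Person
print(show(Employee()))                # Employee: substitutable for Person
```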


International Conference on Database Theory | 1988

Efficient Consistency Control in Deductive Databases

Guido Moerkotte; Stefan Karl

In this paper a theoretical framework for efficiently checking the consistency of deductive databases is provided and proven to be correct. Our method is based on focussing on the relevant parts of the database by reasoning forwards from the updates of a transaction, and using this knowledge about real or just possible implicit updates for simplifying the consistency constraints in question. Opposite to the algorithms by Kowalski/Sadri and Lloyd/Topor, we are neither committed to determine the exact set of implicit updates nor to determine a fairly large superset of it by only considering the head literals of deductive rule clauses. Rather, our algorithm unifies these two approaches by allowing to choose any of the above or even intermediate strategies for any step of reasoning forwards. This flexibility renders possible the integration of statistical data and knowledge about access paths into the checking process. Second, deductive rules are organized into a graph to avoid searching for applicable rules in the proof procedure. This graph resembles a connection graph, however, a new method of interpreting it avoids the introduction of new clauses and links.
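
A minimal sketch of the forward-reasoning idea (our illustration, not the paper's algorithm): starting from the relations an update touches, propagate through rule heads to find the possibly affected derived predicates, and re-check only the constraints that mention them. Rules and constraints below are made up.

```python
# A minimal sketch of forward reasoning from updates: only constraints whose
# predicates overlap the (possibly implicitly) updated relations are
# re-checked, not the whole constraint set.

rules = {"grandparent": {"parent"}}          # grandparent derived from parent
constraints = {
    "c_ages": {"person"},                    # mentions only person facts
    "c_cycle": {"grandparent"},              # mentions the derived predicate
}

def affected(updated_relations):
    # fixpoint: an update to 'parent' implicitly updates 'grandparent'
    closure = set(updated_relations)
    changed = True
    while changed:
        changed = False
        for head, body in rules.items():
            if body & closure and head not in closure:
                closure.add(head)
                changed = True
    return {c for c, preds in constraints.items() if preds & closure}

print(affected({"parent"}))   # {'c_cycle'}: c_ages need not be re-checked
```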


Intelligent Information Systems | 1994

Autonomous objects: a natural model for complex applications

Alfons Kemper; Peter C. Lockemann; Guido Moerkotte; Hans-Dirk Walter


Collaboration


Dive into Guido Moerkotte's collaboration.

Top Co-Authors

Peter C. Lockemann, Karlsruhe Institute of Technology
Christoph Kilger, Karlsruhe Institute of Technology
Hans-Dirk Walter, Karlsruhe Institute of Technology
Andrea Neufeld, Karlsruhe Institute of Technology
Andreas Zachmann, Karlsruhe Institute of Technology
Holger Müller, Karlsruhe Institute of Technology
Klaus Radermacher, Karlsruhe Institute of Technology
Norbert Runge, Karlsruhe Institute of Technology
Martin Decker, Karlsruhe Institute of Technology