
Publications


Featured research published by Norbert Ritter.


web information systems engineering | 2000

XML content management based on object-relational database technology

Budi Surjanto; Norbert Ritter; Henrik Loeser

XML (Extensible Markup Language) is a textual markup language designed for the creation of self-describing documents, which combine textual data with structural information describing that data's organization. We introduce XCoP (XML Content Repository), a repository based on an object-relational database management system (ORDBMS) that improves content management of XML documents by exploiting their structural information. It allows users to reuse and process the textual portions of document contents, called fragments, which are flexibly configurable. Moreover, it enables collaborative development of documents and facilitates synchronization of fragment modification and versioning. Thus, XCoP offers comprehensive content management functionality by taking advantage of the available structural information.
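
The fragment-based idea can be sketched with a small example (illustrative only; the `shred` function and the path/text row format are assumptions, not XCoP's actual schema): an XML document is decomposed into rows that a relational fragment table could hold, preserving the structural path of each textual fragment.

```python
import xml.etree.ElementTree as ET

def shred(xml_text):
    """Decompose an XML document into (path, text) fragment rows --
    the kind of structure a relational fragment table could store."""
    root = ET.fromstring(xml_text)
    rows = []

    def walk(elem, path):
        p = f"{path}/{elem.tag}"
        # Keep only non-empty textual content as a fragment.
        if elem.text and elem.text.strip():
            rows.append((p, elem.text.strip()))
        for child in elem:
            walk(child, p)

    walk(root, "")
    return rows

doc = "<article><title>XCoP</title><sec><p>Fragment one</p></sec></article>"
for path, text in shred(doc):
    print(path, "->", text)
```

Each row carries both the fragment text and the structural information needed to reassemble or version it independently.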


Lecture Notes in Computer Science | 2013

On the Move to Meaningful Internet Systems: OTM 2013 Conferences: Confederated International Conferences: CoopIS, DOA-Trusted Cloud, and ODBASE 2013, Graz, Austria, September 9-13, 2013. Proceedings

Robert Meersman; Hervé Panetto; Tharam S. Dillon; Johann Eder; Zohra Bellahsene; Norbert Ritter; Pieter De Leenheer; Dejing Dou

The OnTheMove 2013 event, held 9-13 September in Graz, Austria, further consolidated the importance of the series of annual conferences that was started in 2002 in Irvine, California. It then moved to Catania, Sicily in 2003, to Cyprus in 2004 and 2005, Montpellier in 2006, Vilamoura in 2007 and 2009, to Monterrey, Mexico in 2008, to Heraklion, Crete in 2010 and 2011, and to Rome in 2012. This prime event continues to attract a diverse and relevant selection of today's research worldwide on the scientific concepts underlying new computing paradigms, which, of necessity, must be distributed, heterogeneous, and supportive of an environment of resources that are autonomous and yet must meaningfully cooperate. Indeed, as such large, complex, and networked intelligent information systems become the focus and norm for computing, there continues to be an acute and even increasing need to address the implied software, system, and enterprise issues and to discuss them face to face in an integrated forum covering methodological, semantic, theoretical, and application issues alike. As we all realize, email, the Internet, and even video conferences are by themselves neither optimal nor sufficient for effective and efficient scientific exchange.


BTW | 1999

Towards Generating Object-Relational Software Engineering Repositories

Wolfgang Mahnke; Norbert Ritter; Hans-Peter Steiert

Nowadays the complexity of design processes, no matter which design domain (CAD, software engineering, etc.) they belong to, requires system support by means of so-called repositories. Repositories help manage design artifacts by offering adequate storage and manipulation services. One among several important features of a repository is version management. Current repository technology fails to adequately exploit database technology and to adapt to special application needs, e.g., support for application-specific notions of versioning. For that reason, we propose new repository technology which is not completely generic (as current repositories are) but exploits generic methods for generating tailored repository managers. Furthermore, we show that new, object-relational database technology is extremely beneficial for this purpose.


international conference on data engineering | 2010

Duplicate detection in probabilistic data

Fabian Panse; Maurice van Keulen; Ander de Keijzer; Norbert Ritter

Collected data often contains uncertainties. Probabilistic databases have been proposed to manage uncertain data. To combine data from multiple autonomous probabilistic databases, an integration of probabilistic data has to be performed. Until now, however, data integration approaches have focused on the integration of certain source data (relational or XML). There is no work on the integration of uncertain source data so far. In this paper, we present a first step towards a concise consolidation of probabilistic data. We focus on duplicate detection as a representative and essential step in an integration process. We present techniques for identifying multiple probabilistic representations of the same real-world entities.
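
One way to picture matching under uncertainty (a minimal sketch under assumed encodings; `expected_similarity`, the `{value: probability}` attribute representation, and the threshold rule are illustrative assumptions, not the paper's actual technique) is to compare two probabilistic attribute representations by their expected similarity over all value pairs:

```python
def expected_similarity(attr_a, attr_b, sim):
    """Expected pairwise similarity of two uncertain attribute values,
    each given as a {value: probability} distribution."""
    return sum(pa * pb * sim(va, vb)
               for va, pa in attr_a.items()
               for vb, pb in attr_b.items())

def is_duplicate(tuple_a, tuple_b, sim, threshold=0.8):
    """Average the expected similarity over shared attributes and
    flag the pair as a duplicate above the threshold."""
    attrs = tuple_a.keys() & tuple_b.keys()
    score = sum(expected_similarity(tuple_a[k], tuple_b[k], sim)
                for k in attrs) / len(attrs)
    return score >= threshold

# Exact-match similarity, just for illustration.
exact = lambda x, y: 1.0 if x == y else 0.0

a = {"name": {"J. Smith": 0.7, "John Smith": 0.3}}
b = {"name": {"John Smith": 0.6, "J. Smyth": 0.4}}
print(is_duplicate(a, b, exact, threshold=0.15))
```

The two representations here agree on "John Smith" with probability 0.3 × 0.6 = 0.18, so whether they are flagged depends entirely on the chosen threshold.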


conference on information and knowledge management | 2007

Towards workload shift detection and prediction for autonomic databases

Marc Holze; Norbert Ritter

Due to the complexity of industry-scale database systems, the total cost of ownership for these systems is no longer dominated by hardware and software, but by administration expenses. Autonomic databases intend to reduce these costs by providing self-management features. Existing approaches towards this goal are supportive advisors for the database administrator and feedback control loops for online monitoring, analysis and re-configuration. But while advisors are too resource-consuming for continuous operation, feedback control loops suffer from overreaction, oscillation and interference. In this position paper we give a general analysis of the parameters that affect the self-management of a database. Out of these parameters, we identify that the workload has major influence on both physical design of data and DBMS configuration. Hence, we propose to employ a workload model for light-weight, continuous workload monitoring and analysis. This model can be used for the identification and prediction of significant workload shifts, which require autonomic re-configuration of the database.


data and knowledge engineering | 1999

Semantic serializability: a correctness criterion for processing transactions in advanced database applications

A. Brayner; Theo Härder; Norbert Ritter

Serializability requires that the execution of each transaction must give the illusion of being an atomic action, i.e., the execution of a set of transactions must appear to be a serial one. This requirement, however, is too strong and unnecessarily restricts concurrency among transactions when semantic information is available to the transaction processing mechanism. In this paper, a new correctness criterion for concurrent execution of database transactions, denoted semantic serializability, is proposed. Semantic serializability is based on the use of semantic information about database objects (and not about transactions). The main idea of our proposal is to provide different atomicity views for each transaction and, for this reason, to allow interleavings among transactions which are nonserializable but which preserve database consistency. We develop two concurrency control protocols based on semantic serializability: one uses a locking mechanism and the other a non-locking approach. Our proposal is suitable for a wide variety of advanced database applications, such as CAx, MDBS, GIS and WFMS.


conference on information and knowledge management | 2009

Consistent on-line classification of DBS workload events

Marc Holze; Claas Gaidies; Norbert Ritter

An important goal of self-managing databases is the autonomic adaptation of the database configuration to evolving workloads. However, the diversity of SQL statements in real-world workloads typically causes the required analysis overhead to be prohibitive for a continuous workload analysis. The workload classification presented in this paper reduces the workload analysis overhead by grouping similar workload events into classes. Our approach employs clustering techniques based upon a general distance function for DBS workload events. To be applicable for a continuous workload analysis, our workload classification specifically addresses a stream-based, lightweight operation, a controllable loss of quality, and self-management.
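
A single-pass, threshold-based clustering of numeric workload features gives a flavor of such stream-based classification (a hedged sketch only: the function names, the incremental-mean centroid update, and the use of scalar events stand in for the paper's general distance function over SQL workload events):

```python
def classify_events(events, distance, max_dist):
    """Single-pass, threshold-based clustering: assign each event to the
    nearest existing class centroid, or open a new class if no centroid
    lies within max_dist. Centroids are updated incrementally, so the
    pass needs only O(1) state per class."""
    centroids, counts, labels = [], [], []
    for e in events:
        best, best_d = None, max_dist
        for i, c in enumerate(centroids):
            d = distance(e, c)
            if d <= best_d:
                best, best_d = i, d
        if best is None:
            # No class is close enough: open a new one.
            centroids.append(e)
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[best] += 1
            # Incremental mean update of the winning centroid.
            centroids[best] = centroids[best] + (e - centroids[best]) / counts[best]
            labels.append(best)
    return labels, centroids

events = [1.0, 1.1, 5.0, 5.2, 0.9]
labels, centroids = classify_events(events, lambda a, b: abs(a - b), max_dist=0.5)
print(labels)  # → [0, 0, 1, 1, 0]
```

The `max_dist` threshold is what makes the quality loss controllable: a larger value yields fewer, coarser classes and thus less analysis overhead.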


advances in databases and information systems | 2008

Autonomic Databases: Detection of Workload Shifts with n-Gram-Models

Marc Holze; Norbert Ritter

Autonomic databases are intended to reduce the total cost of ownership for a database system by providing self-management functionality. The self-management decisions heavily depend on the database workload, as the workload influences both the physical design and the DBMS configuration. In particular, a database reconfiguration is required whenever there is a significant change, i.e., a shift, in the workload. In this paper we present an approach for continuous, light-weight workload monitoring in autonomic databases. Our concept is based on a workload model, which describes the typical workload of a particular DBS using n-Gram-Models. We show how this model can be used to detect significant workload changes. Additionally, a processing model for the instrumentation of the workload is proposed. We evaluate our approach using several workload shift scenarios.
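
The n-gram idea can be sketched in a few lines (an assumption-laden illustration: the statement labels, the `known_fraction` score, and the shift criterion are invented here, not taken from the paper): train a model of typical statement-type sequences, then flag a shift when new traces consist largely of n-grams the model has never seen.

```python
from collections import Counter

def ngram_model(seq, n=2):
    """Relative frequencies of n-grams in a training workload trace."""
    grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    freq = Counter(grams)
    return {g: c / len(grams) for g, c in freq.items()}

def known_fraction(model, seq, n=2):
    """Fraction of n-grams in a new trace that the model has seen.
    A sharp drop in this score signals a workload shift."""
    grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
    if not grams:
        return 1.0
    return sum(1 for g in grams if g in model) / len(grams)

train = ["SEL_A", "UPD_B", "SEL_A", "UPD_B", "SEL_A"]
model = ngram_model(train)
print(known_fraction(model, ["SEL_A", "UPD_B", "SEL_A"]))  # stable workload
print(known_fraction(model, ["INS_C", "DEL_D", "INS_C"]))  # shifted workload
```

Because the model only stores n-gram frequencies, monitoring stays light-weight enough for continuous operation.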


database systems for advanced applications | 2001

Enriched relationship processing in object-relational database management systems

Nan Zhang; Norbert Ritter; Theo Härder

We bring together two important topics of current database research: enhancing the data model by refined relationship semantics, and exploiting ORDBMS extensibility to equip the system with new functionality. Regarding the first topic, we introduce a framework to capture diverse semantic characteristics of application-specific relationships. Then, in order to integrate the conceptual extensions with the data model provided by SQL:1999, the second topic comes into play. Our efforts to realize semantically rich relationships with current ORDB technology clearly point out the benefits as well as the shortcomings of its extensibility facilities. Unfortunately, deficiencies still prevail in the OR infrastructure: the features specific to the extensions cannot sufficiently be taken into account by DBMS-internal processing such as query optimization, and there are only very limited mechanisms for adequately supporting the required properties, e.g., by adjusted index and storage structures as well as suitable operational units of processing.


Computer Science - Research and Development | 2017

NoSQL database systems: a survey and decision guidance

Felix Gessert; Wolfram Wingerath; Steffen Friedrich; Norbert Ritter

Today, data is generated and consumed at unprecedented scale. This has led to novel approaches for scalable data management subsumed under the term "NoSQL" database systems to handle the ever-increasing data volume and request loads. However, the heterogeneity and diversity of the numerous existing systems impede the well-informed selection of a data store appropriate for a given application context. Therefore, this article gives a top-down overview of the field: instead of contrasting the implementation specifics of individual representatives, we propose a comparative classification model that relates functional and non-functional requirements to techniques and algorithms employed in NoSQL databases. This NoSQL Toolbox allows us to derive a simple decision tree to help practitioners and researchers filter potential system candidates based on central application requirements.
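
The spirit of requirement-based filtering can be sketched as follows (a hypothetical miniature: the system names, capability labels, and flat subset check are invented stand-ins for the much richer classification in the actual NoSQL Toolbox):

```python
# Hypothetical catalogue mapping each system to the requirements it
# can satisfy. Labels and systems are illustrative, not the survey's
# actual classification.
SYSTEMS = {
    "KeyValueStoreX": {"high_write_throughput", "horizontal_scaling"},
    "DocumentStoreY": {"rich_queries", "horizontal_scaling"},
    "RelationalZ":    {"rich_queries", "acid_transactions"},
}

def candidates(required):
    """Keep only systems whose capabilities cover every requirement."""
    return sorted(name for name, caps in SYSTEMS.items()
                  if required <= caps)

print(candidates({"rich_queries"}))
print(candidates({"rich_queries", "acid_transactions"}))
```

Each added requirement prunes the candidate set further, which is exactly how a decision tree narrows the choice step by step.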

Collaboration


Dive into Norbert Ritter's collaboration.

Top Co-Authors

Theo Härder
Kaiserslautern University of Technology

Wolfgang Mahnke
Kaiserslautern University of Technology