Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter C. Lockemann is active.

Publication


Featured research published by Peter C. Lockemann.


Archive | 2000

Advances in Database Technology — EDBT 2000

Carlo Zaniolo; Peter C. Lockemann; Marc H. Scholl; Torsten Grust

While we can take as a fact “the Web changes everything”, we argue that “XML is the means” for such a change to make a significant step forward. We therefore regard XML-related research as the most promising and challenging direction for the community of database researchers. In this paper, we approach XML-related research by taking three progressive perspectives. We first consider XML as a data representation standard (in the small), then as a data interchange standard (in the large), and finally as a basis for building a new repository technology. After a broad and necessarily coarse-grain analysis, we turn our focus to three specific research projects which are currently ongoing at the Politecnico di Milano, concerned with XML query languages, with active document management, and with XML-based specifications of Web sites.


ACM Transactions on Database Systems | 1991

Reactive consistency control in deductive databases

Guido Moerkotte; Peter C. Lockemann

The classical treatment of a consistency violation is to back out the offending database operation or transaction. In applications with large numbers of fairly complex consistency constraints this is clearly an unsatisfactory solution. Instead, when a violation is detected the user should be given a diagnosis of the constraints that failed, a line of reasoning on the cause that could have led to the violation, and suggestions for a repair. The problem is particularly complicated in a deductive database system, where failures may be due to an inferred condition rather than simply a stored fact, yet the repair can only be applied to the underlying facts. The paper presents a system that provides automated support in such situations. It concentrates on the concepts and ideas underlying the approach, on an appropriate system architecture and user guidance, and sketches some of the heuristics used to improve performance.
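
The reactive approach replaces rollback with diagnosis and repair suggestions. Below is a minimal sketch of that control flow, with an invented fact and constraint representation; the paper's deductive-database machinery is far richer:

```python
# Minimal sketch of reactive (rather than abortive) consistency control:
# instead of rolling back on violation, diagnose the failed constraints
# and propose repairs on the stored facts. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Constraint:
    name: str
    check: Callable[[set], bool]             # True if the database satisfies it
    repairs: Callable[[set], Iterable[str]]  # human-readable repair suggestions

def apply_update(db: set, update: set, constraints: list) -> set:
    candidate = db | update
    violated = [c for c in constraints if not c.check(candidate)]
    if not violated:
        return candidate                     # update commits
    # Reactive path: keep the user in the loop instead of backing out silently.
    for c in violated:
        print(f"violated: {c.name}")
        for r in c.repairs(candidate):
            print(f"  suggested repair: {r}")
    return db                                # leave the database unchanged

# Example: every employee fact ('emp', name) needs a department fact ('dept', name).
emps_have_dept = Constraint(
    name="every employee is assigned to a department",
    check=lambda db: all(("dept", n) in db for (k, n) in db if k == "emp"),
    repairs=lambda db: [f"insert ('dept', '{n}')"
                        for (k, n) in db if k == "emp" and ("dept", n) not in db],
)

db = {("dept", "alice")}
db = apply_update(db, {("emp", "alice"), ("emp", "bob")}, [emps_have_dept])
```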


IEEE Transactions on Knowledge and Data Engineering | 1992

System-guided view integration for object-oriented databases

Willi Gotthard; Peter C. Lockemann; Andrea Neufeld

Some of the shortcomings of current view integration methodologies, namely, a low emphasis on full-scale automated systems, a lack of algorithmic specifications of the integration activities, inattention to the design of databases with new properties such as databases for computer-aided design, and insufficient experience with data models with a rich set of type and abstraction mechanisms, are attacked simultaneously. The focus is on design databases for software engineering applications. The approach relies on a semantic model based on structural object-orientation with various features tailored to these applications. The expressiveness of the model is used to take the first steps toward algorithmic solutions, and it is demonstrated how corresponding tools could be embedded methodically within the view integration process and technically within a database design environment. The central idea is to compute so-called assumption predicates that express suggested similarities between structures in two schemas to be integrated, and then have a human integrator confirm or reject them. The basic method is exemplified for the CERM data model that includes molecular aggregation, generalization, and versioning.
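
The central step, computing assumption predicates and deferring the decision to a human integrator, can be sketched as follows. The schema representation and similarity heuristics here are invented placeholders, not the paper's CERM-based algorithm:

```python
# Sketch of the "assumption predicate" idea: mechanically propose similarities
# between structures of two schemas and let a human integrator confirm or
# reject each one. Scoring is a deliberately crude stand-in.

def name_similarity(a: str, b: str) -> float:
    """Crude similarity: shared character trigrams (stand-in for real heuristics)."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / max(len(ga | gb), 1)

def assumption_predicates(schema1: dict, schema2: dict, threshold: float = 0.3):
    """Yield suggested correspondences: (type1, type2, score)."""
    for t1, attrs1 in schema1.items():
        for t2, attrs2 in schema2.items():
            score = name_similarity(t1, t2)
            # Attribute overlap strengthens the assumption.
            score += 0.5 * len(set(attrs1) & set(attrs2)) / max(len(attrs1), 1)
            if score >= threshold:
                yield (t1, t2, round(score, 2))

s1 = {"Module": ["name", "version", "author"], "Interface": ["name", "ops"]}
s2 = {"SoftwareModule": ["name", "version"], "TestPlan": ["id", "steps"]}

for t1, t2, score in assumption_predicates(s1, s2):
    # In the paper's process, a human integrator confirms or rejects each one.
    print(f"assume {t1} ~ {t2} (score {score})")
```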


Proceedings of the 1969 24th National Conference | 1969

REL: A Rapidly Extensible Language system

F. B. Thompson; Peter C. Lockemann; B. Dostert; R. S. Deverill

In the first two sections of this paper we review the design philosophy which gives rise to these features and sketch the system architecture which reflects them. Within this framework, we have sought to provide languages which are natural for typical users. The third section of this paper outlines one such application language, REL English. The REL system has been implemented at the California Institute of Technology and will be the conversational system for the Caltech campus this fall. The system hardware consists of an IBM 360/50 computer with 256K bytes of core, a drum, IBM 2314 disks, an IBM 2250 display, and 62 IBM 2741 typewriter consoles distributed around the campus and at neighboring colleges. Base languages provided are CITRAN (similar to RAND's JOSS) and REL English. A basic statistical package and a graphics package are also available for building special-purpose languages around specific courses and user requirements.


ACM Transactions on Database Systems | 1979

Data abstractions for database systems

Peter C. Lockemann; Heinrich C. Mayr; Wolfgang H. Weil; Wolfgang H. Wohlleber

Data abstractions were originally conceived as a specification tool in programming. They also appear to be useful for exploring and explaining the capabilities and shortcomings of the data definition and manipulation facilities of present-day database systems. Moreover they may lead to new approaches to the design of these facilities. In the first section the paper introduces an axiomatic method for specifying data abstractions and, on that basis, gives precise meaning to familiar notions such as data model, data type, and database schema. In a second step the various possibilities for specifying data types within a given data model are examined and illustrated. It is shown that data types prescribe the individual operations that are allowed within a database. Finally, some additions to the method are discussed which permit the formulation of interrelationships between arbitrary operations.
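
To make the axiomatic flavor concrete: under this method a data type is given by its operations plus axioms relating them. The generic stack below illustrates the style of specification, not an example from the paper:

```python
# A data type as a set of operations plus axioms relating them, stated as
# executable checks. Functional style: operations return new values.

# Operations of the abstract type STACK.
empty = ()
def push(s, x): return s + (x,)
def pop(s):     return s[:-1]
def top(s):     return s[-1]
def is_empty(s): return s == ()

# Axioms that must hold for all stacks s and values x.
axioms = [
    ("top(push(s, x)) == x",     lambda s, x: top(push(s, x)) == x),
    ("pop(push(s, x)) == s",     lambda s, x: pop(push(s, x)) == s),
    ("not is_empty(push(s, x))", lambda s, x: not is_empty(push(s, x))),
    ("is_empty(empty)",          lambda s, x: is_empty(empty)),
]

# Crude property check over a few sample instances.
samples = [((), 1), ((1, 2), 3), (("a",), "b")]
for text, axiom in axioms:
    assert all(axiom(s, x) for s, x in samples), text
print("all axioms hold on the samples")
```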


Very Large Data Bases | 1993

Generating consistent test data: restricting the search space by a generator formula

Andrea Neufeld; Guido Moerkotte; Peter C. Lockemann

To address the problem of generating test data for a set of general consistency constraints, we propose a new two-step approach: First the interdependencies between consistency constraints are explored and a generator formula is derived on their basis. During its creation, the user may exert control. In essence, the generator formula contains information to restrict the search for consistent test databases. In the second step, the test database is generated. Here, two different approaches are proposed. The first adapts an already published approach to generating finite models by enhancing it with requirements imposed by test data generation. The second, a new approach, operationalizes the generator formula by translating it into a sequence of operators, and then executes it to construct the test database. For this purpose, we introduce two powerful operators: the generation operator and the test-and-repair operator. This approach also allows for enhancing the generation operators with heuristics for generating facts in a goal-directed fashion. It avoids the generation of test data that may contradict the consistency constraints, and limits the search space for the test data. The article concludes with a careful evaluation and comparison of the performance of the two approaches and their variants by describing a number of benchmarks and their results.
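
A rough sketch of the second approach, operationalizing a generator formula as a sequence of generation and test-and-repair operators. The operator signatures and the toy referential constraint are illustrative stand-ins, not the paper's formalism:

```python
# Build a consistent test database by executing a sequence of operators.
import random
random.seed(0)

def generate(db, pred, make_fact, n):
    """Generation operator: add n facts of the given predicate."""
    for _ in range(n):
        db.add((pred,) + make_fact())
    return db

def test_and_repair(db, violated, repair):
    """Test-and-repair operator: find violating facts and repair them."""
    for fact in [f for f in db if violated(f, db)]:
        db.discard(fact)
        db.add(repair(fact))
    return db

# Toy constraint: every order must reference an existing customer id.
db = set()
db = generate(db, "customer", lambda: (random.randint(1, 3),), 3)
db = generate(db, "order", lambda: (random.randint(1, 6),), 5)  # may violate

orphan = lambda f, db: f[0] == "order" and ("customer", f[1]) not in db
fix = lambda f: ("order", random.choice([c[1] for c in db if c[0] == "customer"]))

db = test_and_repair(db, orphan, fix)
assert not any(orphan(f, db) for f in db)   # the test database is now consistent
print(sorted(db))
```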


Data and Knowledge Engineering | 1998

Distributed events in active database systems: letting the genie out of the bottle

Arne Koschel; Peter C. Lockemann

Similar to grouping autonomous DBMSs within large-scale, distributed, heterogeneous and loosely coupled networks into DBMS federations, active information sources may be grouped into active DBMS federations. The paper claims that as a prerequisite for such federations, event handling must be separated from the members of the federation and concentrated within an independent network component. Two consequences of such a decision are explored in the paper: the construction of suitable wrappers that perform the task of event detection for a large variety of active and passive information sources, and the unbundling of event processing based on ECA rules into a number of individual functions which can then be distributed across the network and configured into event handlers that pursue specific strategies.
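
The unbundling idea can be sketched as follows: a wrapper turns a passive source into an event emitter, and ECA (event-condition-action) processing is split into separately deployable functions. All class, queue, and rule names below are invented:

```python
# A wrapper detects events at a passive source; an unbundled handler
# consumes events, evaluates conditions, and runs actions. In the paper's
# setting these pieces would be distributed across a network.

from queue import Queue

event_bus = Queue()   # stand-in for a network channel between components

class PollingWrapper:
    """Makes a passive source 'active' by polling it and emitting change events."""
    def __init__(self, source):
        self.source, self.last = source, dict(source)
    def poll(self):
        for key, value in self.source.items():
            if self.last.get(key) != value:
                event_bus.put(("changed", key, value))
        self.last = dict(self.source)

def eca_handler(rules):
    """Unbundled event processing: consume events, check conditions, act."""
    while not event_bus.empty():
        event = event_bus.get()
        for on, condition, action in rules:
            if event[0] == on and condition(event):
                action(event)

stock = {"widgets": 10}
wrapper = PollingWrapper(stock)
rules = [("changed",
          lambda e: e[2] < 5,                     # condition: low stock
          lambda e: print(f"reorder {e[1]}"))]    # action

stock["widgets"] = 3   # a passive update at the source
wrapper.poll()         # wrapper detects it and emits an event
eca_handler(rules)     # prints: reorder widgets
```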


International Conference on Management of Data | 1985

Acquisition of terminological knowledge using database design techniques

Christoph F. Eick; Peter C. Lockemann

One of the most difficult problems in knowledge base design is the acquisition and formalization of an expert's rules concerning a special universe of discourse. In most cases different experts, and the knowledge base designer himself, will use different terminologies and will represent rules concerning the same objects in a different way. Therefore, one of the first steps in knowledge base design has to be the construction of an integrated, commonly accepted terminology that can be shared by all persons involved in the design process. This design step will be the topic of the paper. The paper proposes concepts, methods and tools, based on database design techniques, to support the extraction, integration, transformation and evaluation of terminological knowledge, and discusses the possibilities and limitations of automating these.

Keywords and phrases: knowledge-based systems, knowledge base design, database design, conceptual modelling, semantic modelling, terminological knowledge acquisition, knowledge integration, design automation
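
One small, purely illustrative step of such terminology integration: flagging candidate synonyms and homonyms between two experts' glossaries for human review. The matching criterion (exact definition equality) is a naive stand-in for the paper's database-design-based methods:

```python
# Compare two experts' glossaries and propose terminology conflicts
# for review: synonyms (different terms, same meaning) and homonyms
# (same term, different meanings). Example data is invented.

expert_a = {"client": "party that places an order",
            "item":   "unit of goods kept in stock"}
expert_b = {"customer": "party that places an order",
            "item":     "single line of an invoice"}

def integrate(a: dict, b: dict):
    synonyms = [(ta, tb) for ta, da in a.items()
                for tb, db_ in b.items() if ta != tb and da == db_]
    homonyms = [t for t in a.keys() & b.keys() if a[t] != b[t]]
    return synonyms, homonyms

syn, hom = integrate(expert_a, expert_b)
print("candidate synonyms:", syn)   # [('client', 'customer')]
print("candidate homonyms:", hom)   # ['item']
```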


International Database Engineering and Applications Symposium | 2005

Agents and databases: friends or foes?

Peter C. Lockemann; René Witte

At first glance, agent technology seems more like a hostile intruder into the database world. On the other hand, the two could easily complement each other, since agents carry out information processes whereas databases supply information to processes. Nonetheless, viewing agent technology from a database perspective seems to question some of the basic paradigms of database technology, particularly the premise of semantic consistency of a database. The paper argues that the ensuing uncertainty in distributed databases can be modelled by beliefs, and develops the basic concepts for adjusting peer-to-peer databases to the individual beliefs in single nodes and the collective beliefs in the entire distributed database.
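
A toy sketch of belief-annotated facts in a peer-to-peer setting: each node holds facts with a local degree of belief, and a collective belief is derived by combining the nodes' views. The averaging rule and node names are simple stand-ins for the paper's belief model:

```python
# Individual beliefs per node, and a derived collective belief per fact.

nodes = {
    "node1": {("price", "bolt"): 0.9, ("price", "nut"): 0.4},
    "node2": {("price", "bolt"): 0.7},
    "node3": {("price", "bolt"): 0.2, ("price", "nut"): 0.8},
}

def collective_belief(fact):
    """Average the belief of every node that has an opinion on the fact."""
    opinions = [beliefs[fact] for beliefs in nodes.values() if fact in beliefs]
    return sum(opinions) / len(opinions) if opinions else None

for fact in {f for beliefs in nodes.values() for f in beliefs}:
    print(fact, "collective belief:", round(collective_belief(fact), 2))
```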


Proceedings of an International Workshop on Advanced Programming Environments | 1986

DAMOKLES—a database system for software engineering environments

Klaus R. Dittrich; Willi Gotthard; Peter C. Lockemann

Comprehensive software engineering environments consist of a large number of cooperating tools in order to support the various phases of some software life cycle. The cooperation depends largely on the availability of basic mechanisms that manage the large quantities of information involved in a consistent fashion. While database concepts are desirable for this purpose, current systems prove to be unsuitable.

Collaboration


Dive into Peter C. Lockemann's collaborations.

Top Co-Authors

Klaus R. Dittrich
Forschungszentrum Informatik

Heinrich C. Mayr
Alpen-Adria-Universität Klagenfurt

Stefan M. Lang
Karlsruhe Institute of Technology

Guido Moerkotte
Karlsruhe Institute of Technology

Jens Nimis
Karlsruhe Institute of Technology

Jutta A. Mülle
Forschungszentrum Informatik

Willi Gotthard
Karlsruhe Institute of Technology

Hans-Dirk Walter
Karlsruhe Institute of Technology

Michael Christoffel
Karlsruhe Institute of Technology

Sebastian Pulkowski
Karlsruhe Institute of Technology