Peter Muth
Saarland University
Publications
Featured research published by Peter Muth.
intelligent information systems | 1998
Peter Muth; Dirk Wodtke; Jeanine Weissenfels; Angelika Kotz Dittrich; Gerhard Weikum
Current workflow management systems fall short of supporting large-scale distributed, enterprise-wide applications. We present a scalable, rigorously founded approach to enterprise-wide workflow management, based on the distributed execution of state and activity charts. By exploiting the formal semantics of state and activity charts, we develop an algorithm for transforming a centralized state and activity chart into a provably equivalent partitioned one, suitable for distributed execution. A synchronization scheme is developed that guarantees an execution equivalent to a non-distributed one. This basic solution is further refined in order to reduce communication overhead and exploit parallelism between partitions whenever possible. The developed synchronization schemes are compared in terms of the number and size of synchronization messages.
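As a rough illustration of the basic, fully synchronized scheme described above (all names, the transition encoding, and the lock-step loop below are invented for this sketch, not taken from the paper), two state-chart partitions can be kept equivalent to a centralized execution by broadcasting the events each partition produces after every step; the paper's refined schemes reduce exactly this message traffic.

from collections import deque

class Partition:
    def __init__(self, name, transitions):
        self.name = name
        self.state = "init"
        self.transitions = transitions      # {(state, event): (next_state, emitted_event)}
        self.inbox = deque()

    def step(self):
        """Consume one pending event; return the event emitted, if any."""
        if not self.inbox:
            return None
        event = self.inbox.popleft()
        next_state, emitted = self.transitions.get((self.state, event), (self.state, None))
        self.state = next_state
        return emitted

def run(partitions, initial_events, max_steps=10):
    for p, e in initial_events:
        p.inbox.append(e)
    for _ in range(max_steps):
        emitted = [p.step() for p in partitions]        # one global step
        for e in emitted:                               # basic synchronization: broadcast
            if e is not None:
                for p in partitions:
                    p.inbox.append(e)

# Example: partition A starts an activity, partition B reacts to its completion.
a = Partition("A", {("init", "start"): ("checking", "check_done")})
b = Partition("B", {("init", "check_done"): ("approved", None)})
run([a, b], [(a, "start")])
print(a.state, b.state)   # checking approved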
symposium on principles of database systems | 1990
Gerhard Weikum; Christof Hasse; Peter Broessler; Peter Muth
Multi-level transactions have received considerable attention as a framework for high-performance concurrency control methods. An inherent property of multi-level transactions is the need for compensating actions, since state-based recovery methods no longer work correctly for transaction undo. The resulting requirement of operation logging adds to the complexity of crash recovery. In addition, multi-level recovery algorithms have to take into account that high-level actions are not necessarily atomic, e.g., if multiple pages are updated in a single action. In this paper, we present a recovery algorithm for multi-level transactions. Unlike typical commercial database systems, we have striven for simplicity rather than employing special tricks. It is important to note, though, that simplicity is not achieved at the expense of performance. We show how a high-performance multi-level recovery algorithm can be systematically developed based on a few fundamental principles. The presented algorithm has been implemented in the DASDBS database kernel system.
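A hedged sketch of the core idea behind operation logging with compensation (the class and its interface are illustrative, not the DASDBS implementation): high-level operations are logged together with compensating actions, and transaction undo replays those compensations in reverse order instead of restoring old page states.

class MultiLevelTxn:
    def __init__(self):
        self.log = []          # entries of (description, compensation_callable)

    def execute(self, description, action, compensation):
        action()                               # high-level action, e.g. "insert key"
        self.log.append((description, compensation))

    def abort(self):
        # Undo by compensation, latest operation first.
        for description, compensation in reversed(self.log):
            compensation()
        self.log.clear()

# Example: a dictionary "table" with insert/delete as inverse high-level operations.
table = {}
txn = MultiLevelTxn()
txn.execute("insert k1", lambda: table.update(k1=1), lambda: table.pop("k1"))
txn.execute("insert k2", lambda: table.update(k2=2), lambda: table.pop("k2"))
txn.abort()
print(table)   # {} -- both inserts compensated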
international conference on data engineering | 1993
Peter Muth; Thomas C. Rakow; Gerhard Weikum; Peter Brössler; Christof Hasse
A locking protocol for object-oriented database systems (OODBSs) is presented. The protocol can exploit the semantics of methods invoked on encapsulated objects. Compared to conventional page-oriented or record-oriented concurrency control protocols, the proposed protocol greatly improves the possible concurrency because commutative method executions on the same object are not considered a conflict. An OODBS application example is presented. The principle of open-nested transactions is reviewed. It is shown that, using the locking protocol in an open-nested transaction, the locks of a subtransaction are released when the subtransaction completes, and only a semantic lock is held further by the parent of the subtransaction.
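A minimal sketch of semantic locking on an encapsulated object (the methods, commutativity table, and lock-manager interface are invented for illustration): two method invocations conflict only if they do not commute, e.g. two increment() calls on a counter commute, while increment() and reset() do not. Under open nesting, such semantic locks are what the parent retains after a subtransaction commits and releases its low-level locks.

COMMUTES = {
    ("increment", "increment"): True,
    ("increment", "reset"): False,
    ("reset", "increment"): False,
    ("reset", "reset"): False,
}

class SemanticLockManager:
    def __init__(self):
        self.held = []         # entries of (txn_id, method)

    def acquire(self, txn_id, method):
        for other_txn, other_method in self.held:
            if other_txn != txn_id and not COMMUTES[(method, other_method)]:
                return False   # non-commuting method held by another transaction
        self.held.append((txn_id, method))
        return True

    def release(self, txn_id):
        self.held = [(t, m) for (t, m) in self.held if t != txn_id]

mgr = SemanticLockManager()
print(mgr.acquire("T1", "increment"))   # True
print(mgr.acquire("T2", "increment"))   # True  -- commuting methods, no conflict
print(mgr.acquire("T3", "reset"))       # False -- reset conflicts with held increments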
international conference on data engineering | 1991
Peter Muth; Thomas C. Rakow
A systematic discussion of atomic commitment for heterogeneous database systems is presented. An analysis is given of two alternative protocols for atomic commitment: committing local transactions either after or before the global commit or abort decision is made. The impact of the protocols on recovery and concurrency control is shown. Atomicity, consistency, isolation, and durability properties are achieved for global transactions. It is demonstrated that committing before the global decision fits best with multi-level transactions. In this case, the commitment protocol causes no additional overhead and a higher degree of concurrency can be achieved.
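A simplified sketch contrasting the two commitment orders (the dictionary-of-callables interface is hypothetical, not the paper's protocol description): "commit after" holds the local transactions until the global decision arrives, while "commit before" commits them locally right away and semantically compensates committed work if the global decision turns out to be abort, which is the multi-level transaction case mentioned above.

def commit_after(local_txns, global_decision):
    if global_decision == "commit":
        for txn in local_txns:
            txn["commit"]()
    else:
        for txn in local_txns:
            txn["abort"]()

def commit_before(local_txns, global_decision):
    for txn in local_txns:
        txn["commit"]()                 # local commit before the global decision
    if global_decision == "abort":
        for txn in reversed(local_txns):
            txn["compensate"]()         # semantic undo of already-committed work

# Example: one local subtransaction, global decision is abort.
log = []
sub = {"commit": lambda: log.append("local commit"),
       "abort": lambda: log.append("local abort"),
       "compensate": lambda: log.append("compensated")}
commit_before([sub], "abort")
print(log)   # ['local commit', 'compensated']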
NATO advanced study institute on workflow management systems | 1998
Peter Muth; Dirk Wodtke; Jeanine Weissenfels; Gerhard Weikum; Angelika Kotz Dittrich
This paper presents an approach towards the specification, verification, and distributed execution of workflows based on state and activity charts. The formal foundation of state and activity charts is exploited at three levels. At the specification level, the formalism enforces precise descriptions of business processes while also allowing subsequent refinements. In addition, precise specifications based on other methods can be automatically converted into state and activity charts. At the level of verification, state charts are amenable to the efficient method of model checking, in order to verify particularly critical workflow properties. Finally, at the execution level, a state chart specification forms the basis for the automatic generation of modules that can be directly executed in a distributed manner. Within the MENTOR project, a coherent prototype system has been built that comprises all three levels: specification, verification, and distributed execution.
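An illustrative sketch of the kind of check that model checking enables at the verification level (the transition system and the property below are invented for the example, not taken from MENTOR): exhaustively explore all execution paths of a small workflow specification and verify a critical property, here "a rejected claim is never paid".

from collections import deque

TRANSITIONS = {
    "received": ["assessed"],
    "assessed": ["approved", "rejected"],
    "approved": ["paid"],
    "rejected": ["closed"],
    "paid": ["closed"],
    "closed": [],
}

def violates(path):
    return "rejected" in path and "paid" in path

def check(start="received"):
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if violates(path):
            return path                      # counterexample path
        for nxt in TRANSITIONS[path[-1]]:
            queue.append(path + [nxt])
    return None                              # property holds on all paths

print(check())   # None -- no execution path both rejects and pays the claim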
very large data bases | 2000
Peter Muth; Patrick E. O'Neil; Achim Pick; Gerhard Weikum
Numerous applications such as stock market or medical information systems require that both historical and current data be logically integrated into a temporal database. The underlying access method must support different forms of “time-travel” queries, the migration of old record versions onto inexpensive archive media, and high insertion and update rates. This paper presents an access method for transaction-time temporal data, called the log-structured history data access method (LHAM), which meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.
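A minimal sketch of LHAM's partitioning principle (class name, capacities, and the flat in-memory representation are invented for this sketch): record versions are appended to a current component; when it fills up, it is rolled over into the next component of the hierarchy, so each component covers a contiguous, successively older range of timestamps, and a time-travel lookup only needs components whose range covers the query time.

class LHAMSketch:
    def __init__(self, c0_capacity=4):
        self.c0 = []                 # current component: list of (ts, key, value)
        self.older = []              # rolled-over components, newest first
        self.c0_capacity = c0_capacity

    def insert(self, ts, key, value):
        self.c0.append((ts, key, value))
        if len(self.c0) >= self.c0_capacity:
            self.older.insert(0, sorted(self.c0))   # migrate C0 down the hierarchy
            self.c0 = []

    def lookup(self, key, as_of_ts):
        """Return the value of `key` valid at time `as_of_ts`, searching newest components first."""
        for component in [sorted(self.c0)] + self.older:
            candidates = [(ts, v) for ts, k, v in component if k == key and ts <= as_of_ts]
            if candidates:
                return max(candidates)[1]
        return None

store = LHAMSketch()
for ts, key, value in [(1, "x", 10), (2, "y", 5), (3, "x", 11), (5, "x", 12)]:
    store.insert(ts, key, value)
print(store.lookup("x", as_of_ts=4))   # 11 -- the version written at ts=3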
acm multimedia | 1998
Y. Rompogiannakis; Guido Nerjes; Peter Muth; Michael Paterakis; Peter Triantafillou; Gerhard Weikum
Most multimedia applications require storage and retrieval of large amounts of continuous and discrete data at very high rates. Disk drives must service such mixed workloads while achieving low response times for discrete requests and guaranteeing the uninterrupted delivery of continuous data. Disk scheduling algorithms for mixed workloads play a central role in this task, yet they have been overlooked by related multimedia research efforts, which have so far mostly concentrated on the scheduling of continuous requests only. The focus of this paper is on efficient disk I/O scheduling algorithms for mixed workloads in a multimedia storage server. We propose novel algorithms, develop a taxonomy of relevant algorithms, and study their performance through experimentation. Our results show that the proposed algorithms offer drastic improvements in average response times for discrete requests and low response-time variability, while serving continuous requests without interruption.
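A hedged sketch of one plausible round-based policy in the spirit of the problem described above (the concrete algorithms and their taxonomy are in the paper; the function below is not a reproduction of them, and its names and parameters are invented): reserve enough of each service round for the admitted continuous streams, then fill the remaining slack with discrete requests in seek-optimized (SCAN) order.

def plan_round(continuous_reqs, discrete_reqs, round_length, service_time):
    """Return the requests served in one round. Each request is (id, cylinder)."""
    plan, budget = [], round_length
    for req in continuous_reqs:                              # continuous streams get priority
        plan.append(req)
        budget -= service_time
    for req in sorted(discrete_reqs, key=lambda r: r[1]):    # SCAN order by cylinder
        if budget < service_time:
            break
        plan.append(req)
        budget -= service_time
    return plan

served = plan_round(
    continuous_reqs=[("video1", 120), ("video2", 480)],
    discrete_reqs=[("q3", 300), ("q1", 50), ("q2", 700)],
    round_length=4,          # 4 time units per round, for illustration
    service_time=1,          # 1 time unit per request, for illustration
)
print(served)   # video1, video2, then q1 and q3 in SCAN order; q2 waits for the next round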
symposium on principles of database systems | 1997
Guido Nerjes; Peter Muth; Gerhard Weikum
Continuous data types like video and audio require the real-time delivery of data fragments from a server’s disks to the client at which the data is displayed. This paper develops a stochastic model for analyzing the rate at which data fragments arrive too late at the client and thus cause display “glitches”. The model is based on deriving the Laplace-Stieltjes transform of the service time distribution for batched disk service under a multi-user load of concurrently served continuous-data streams, and applying Chernoff bounds to the tail of the service time distribution and the resulting distribution of the glitch rate per stream. The results from the model provide the basis for configuring a server and exerting admission control such that the admitted streams suffer no more than a specified (small) rate of glitches with a specified (very high) probability. The model considers variable display bandwidth both across different streams and within a single stream, and also the variable transfer rate of modern multi-zone disks. The accuracy of the model is validated by detailed simulations.
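For reference, the standard Chernoff-bound step the abstract alludes to, written in generic notation rather than the paper's own: with S the per-round service time and \(\hat S(s) = \mathbb{E}[e^{-sS}]\) its Laplace-Stieltjes transform, the tail of S is bounded, for every \(\theta > 0\) with finite \(\mathbb{E}[e^{\theta S}]\), by

\[
  \Pr[S > t] \;\le\; \inf_{\theta > 0} e^{-\theta t}\,\mathbb{E}\!\left[e^{\theta S}\right]
  \;=\; \inf_{\theta > 0} e^{-\theta t}\,\hat S(-\theta).
\]

Such a tail bound on the service time is what translates, per stream, into a bound on the probability of exceeding a specified glitch rate, and hence into an admission-control criterion.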
international conference on data engineering | 1999
Peter Muth; Jeanine Weissenfels; Michael Gillmann; Gerhard Weikum
Workflow management systems (WfMSs) support the efficient, largely automated execution of business processes. However, using a WfMS typically requires implementing the application's control flow exclusively within the WfMS. This approach is powerful if the control flow is specified and implemented from scratch, but it has severe drawbacks if a WfMS is to be integrated into environments with existing solutions for implementing control flow. Usually, the existing solutions are too complex to be substituted by the WfMS all at once. Hence, the WfMS must support an incremental integration, i.e., the reuse of existing implementations of control flow as well as their incremental substitution. Extending the WfMS's functionality according to future application needs, e.g., by worklist and history management, must also be possible. In particular, at the beginning of an incremental integration process, only a limited amount of a WfMS's functionality is actually exploited by the workflow application. Later on, as the integration proceeds, more advanced requirements arise and demand the customization of the WfMS to the evolving application needs. In this paper, we present the architecture and implementation of a light-weight WfMS, called Mentor-lite, which aims to overcome the above-mentioned shortcomings of conventional WfMSs. Mentor-lite supports an easy integration of workflow functionality into an existing environment, and can be tailored to specific workflow application needs.
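An illustrative sketch of the incremental-integration idea (the activity interface and class names are invented, not Mentor-lite's API): existing control-flow code is wrapped behind the same activity interface the workflow engine uses, so individual steps can later be replaced by native workflow specifications without touching the rest of the application.

class Activity:
    def run(self, context):
        raise NotImplementedError

class LegacyActivity(Activity):
    """Wraps an existing implementation of a control-flow step as-is."""
    def __init__(self, legacy_callable):
        self.legacy_callable = legacy_callable
    def run(self, context):
        return self.legacy_callable(context)

class NativeActivity(Activity):
    """A step already migrated to the workflow engine's own specification."""
    def __init__(self, name):
        self.name = name
    def run(self, context):
        context[self.name] = "done"
        return context

def execute(workflow, context):
    for activity in workflow:          # trivial sequential engine, for the sketch only
        context = activity.run(context)
    return context

legacy_check = lambda ctx: {**ctx, "credit_checked": True}   # pre-existing application code
print(execute([LegacyActivity(legacy_check), NativeActivity("approval")], {}))
# {'credit_checked': True, 'approval': 'done'}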
international conference on management of data | 1997
Dirk Wodtke; Jeanine Weissenfels; Gerhard Weikum; Angelika Kotz Dittrich; Peter Muth
MENTOR (“Middleware for Enterprise-Wide Workflow Management”) is a joint project of the University of the Saarland, the Union Bank of Switzerland, and ETH Zurich [1, 2, 3]. The focus of the project is on enterprise-wide workflow management. Workflows in this category may span multiple organizational units, with each unit having its own workflow server, involve a variety of heterogeneous information systems, and require many thousands of clients to interact with the workflow management system (WFMS). The project aims to develop a scalable and highly available environment for the execution and monitoring of workflows, seamlessly integrated with a specification and verification environment. For the specification of workflows, MENTOR utilizes the formalism of state and activity charts. The mathematical rigor of the specification method establishes a basis both for correctness reasoning and for partitioning a large workflow into a number of subworkflows according to the organizational responsibilities of the enterprise. For the distributed execution of the partitioned workflow specification, MENTOR relies mostly on standard middleware components and adds its own components only where the standard components fall short in functionality or scalability. In particular, the run-time environment is based on a TP monitor and a CORBA implementation.