Publication
Featured research published by Lothar F. Mackert.
international conference on management of data | 1986
Lothar F. Mackert; Guy M. Lohman
Few database query optimizer models have been validated against actual performance. This paper presents the methodology and results of a thorough validation of the optimizer and evaluation of the performance of the experimental distributed relational database management system R*, which inherited and extended to a distributed environment the optimization algorithms of System R. Optimizer-estimated costs and actual R* resources consumed were written to database tables using new SQL commands, permitting automated control from SQL application programs of test data collection and reduction. A number of tests were run over a wide variety of dynamically created test databases, SQL queries, and system parameters. The results for single-table access, sorting, and local 2-table joins are reported here. The tests confirmed the accuracy of most of the I/O cost model, the significant contribution of CPU cost to total cost, and the need to model CPU cost in more detail than was done in System R. The R* optimizer now retains cost components separately and estimates the number of CPU instructions, including those for applying different kinds of predicates. The sensitivity of I/O cost to buffer space motivated the development of more detailed models of buffer utilization: unclustered index scans and nested-loop joins often benefit from pages remaining in the buffers, whereas concurrent scans of the data pages and the index pages for multiple tables during joins compete for buffer share. Without an index on the join column of the inner table, the optimizer correctly avoids the nested-loop join, confirming the need for merge-scan joins.
When the join column of the inner table is indexed, the optimizer overestimates the cost of the nested-loop join, whose actual performance is very sensitive to three parameters that are extremely difficult to estimate: (1) the join (result) cardinality, (2) the outer table's cardinality, and (3) the number of buffer pages available to store the inner table. Suggestions are given for improved database statistics, prefetch and page-replacement strategies for the buffer manager, and the use of temporary indexes and Bloom filters (hashed semijoins) to reduce access to unneeded data.
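The Bloom-filter (hashed-semijoin) idea mentioned in the abstract is to ship a compact bit array derived from one site's join-column values instead of the values themselves, so the other site can discard rows that cannot possibly join. The following is a minimal Python sketch of that technique; the function names and the SHA-256-based hashing scheme are illustrative assumptions, not the R* implementation.

```python
import hashlib

def bloom_bits(key, m, k=3):
    # Derive k bit positions for a key (illustrative hashing scheme).
    digest = hashlib.sha256(str(key).encode()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % m for i in range(k)]

def build_filter(join_keys, m=1024):
    # Build a Bloom filter (bit array) over the join-column values.
    bits = [False] * m
    for key in join_keys:
        for pos in bloom_bits(key, m):
            bits[pos] = True
    return bits

def semijoin_reduce(rows, key_of, bits):
    # Keep only rows whose join key *might* match; true matches are
    # never dropped (no false negatives), occasional non-matches may
    # slip through (false positives) and are removed by the real join.
    m = len(bits)
    return [r for r in rows if all(bits[p] for p in bloom_bits(key_of(r), m))]

# One site ships the m-bit filter instead of its full join column;
# the other site ships back only the rows that pass the filter.
outer_keys = [10, 20, 30]
inner_rows = [(10, "x"), (15, "y"), (20, "z"), (99, "w")]
bits = build_filter(outer_keys)
reduced = semijoin_reduce(inner_rows, lambda r: r[0], bits)
```

The saving comes from the filter being far smaller than the column it summarizes, at the cost of a tunable false-positive rate.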
ACM Transactions on Database Systems | 1989
Lothar F. Mackert; Guy M. Lohman
Indexes are commonly employed to retrieve a portion of a file or to retrieve its records in a particular order. An accurate performance model of indexes is essential to the design, analysis, and tuning of file management and database systems, and particularly to database query optimization. Many previous studies have addressed the problem of estimating the number of disk page fetches when randomly accessing k records out of N given records stored on T disk pages. This paper generalizes these results, relaxing two assumptions that usually do not hold in practice: unlimited buffer and unique records for each key value. Experiments show that the performance of an index scan is very sensitive to buffer size limitations and multiple records per key value. A model for these more practical situations is presented and a formula derived for estimating the performance of an index scan. We also give a closed-form approximation that is easy to compute. The theoretical results are validated using the R* distributed relational database system. Although we use database terminology throughout the paper, the model is more generally applicable whenever random accesses are made using keys.
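The classical starting point that this paper generalizes is the estimate of distinct page fetches when k of N uniformly distributed records (stored evenly across T pages) are accessed at random, under the unlimited-buffer, unique-key assumptions the paper relaxes. A minimal Python sketch of that baseline formula (often attributed to Yao), not of the paper's generalized model:

```python
from math import comb

def expected_pages(N, T, k):
    # Expected number of distinct pages touched when k of N records
    # are chosen at random, with N/T records stored per page.
    # Assumes unlimited buffer and unique records per key value --
    # exactly the assumptions the paper relaxes.
    per_page = N // T  # assumes records are spread evenly over pages
    return T * (1 - comb(N - per_page, k) / comb(N, k))

# Fetching a single record out of 1000 on 100 pages touches 1 page
# on average; fetching all records touches all 100 pages.
one = expected_pages(1000, 100, 1)
all_pages = expected_pages(1000, 100, 1000)
```

Because a limited buffer forces re-fetches and duplicate key values cluster accesses, this baseline can misestimate real index-scan cost, which is what motivates the paper's generalized formula and its closed-form approximation.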
international conference on distributed computing systems | 1988
Lothar F. Mackert; R. Meyer; U. Scheere; Johannes Schneider; Roelof Jan Velthuys; J. Burmeister; J. de Meer; I. Schroer
The major objectives of testing the conformance of protocol implementations to standards are discussed. A design is presented for a generalized conformance test tool that permits testing of different protocols, that can be tailored to different test configurations, and that is portable to different environments. Both interactive and automatic modes of testing are supported. The integration of three different test description languages, TTCN, LOTOS, and CRS, is discussed.
Computer Networks and Isdn Systems | 1992
Jürgen M. Schneider; Lothar F. Mackert; Georg Zörntlein; Roelof Jan Velthuys; Udo Bär
Due to technological advances and the international trend toward open, interoperable systems, the development of communication protocols for computer networks and distributed systems is becoming increasingly complex and cost-sensitive. Protocol engineers require an improved methodology, supported by powerful tools across the whole development process. In this paper, we introduce a development life-cycle based on formal methods. It is used to identify the different activities, from requirements definition to specification, implementation, and testing, together with the set of tools that apply to each phase. We then describe the architecture of an integrated tool environment for protocol engineering and report on a realization of the basic components.
very large data bases | 1986
Lothar F. Mackert; Guy M. Lohman
Proceedings of the IFIP WG6.1 Seventh International Conference on Protocol Specification, Testing and Verification VII | 1987
Lothar F. Mackert; Iris B. Neumeier-Mackert
Proceedings of the IFIP WG6.1 International Symposium on Protocol Specification, Testing and Verification XI | 1991
Roelof Jan Velthuys; Lothar F. Mackert; Jürgen M. Schneider; Georg Zörntlein
very large data bases | 1986
Lothar F. Mackert; Guy M. Lohman
formal techniques for (networked and) distributed systems | 1989
Jürgen M. Schneider; Iris B. Neumeier-Mackert; Lothar F. Mackert; Roelof Jan Velthuys
Archive | 1986
Lothar F. Mackert; Guy M. Lohman