Andreas Reuter
Kaiserslautern University of Technology
Publications
Featured research published by Andreas Reuter.
ACM Transactions on Database Systems | 1984
Andreas Reuter
Various logging and recovery techniques for centralized transaction-oriented database systems are described and discussed from a performance perspective. The classification of functional principles developed in a companion paper serves as the terminological basis. In the main sections, a set of analytic models is introduced and evaluated in order to compare the performance characteristics of nine different recovery techniques with respect to four key parameters and a set of further parameters with less influence. Finally, the results of the model evaluation, as well as the limitations of the models themselves, are discussed.
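The paper compares nine specific recovery techniques analytically; as a rough illustration of the common underlying idea only, the following minimal sketch (not taken from the paper; all names and log-record shapes are assumptions made for illustration) shows an undo/redo recovery pass over a simple log of before- and after-images.

```python
# A minimal, generic sketch of undo/redo logging (not one of the nine techniques
# modeled in the paper). Each update writes a log record with before- and
# after-images; after a crash, committed transactions are redone and
# uncommitted ones are undone. All names are illustrative.

def recover(log, database):
    """log: list of records of the form
         ('update', txn, key, before_image, after_image)
         ('commit', txn)
       database: dict mapping key -> value (the state found after the crash)."""
    committed = {rec[1] for rec in log if rec[0] == 'commit'}

    # Redo pass: reapply after-images of committed transactions in log order.
    for rec in log:
        if rec[0] == 'update' and rec[1] in committed:
            _, _, key, _, after = rec
            database[key] = after

    # Undo pass: scan the log backwards and restore the before-images
    # written by uncommitted ("loser") transactions.
    for rec in reversed(log):
        if rec[0] == 'update' and rec[1] not in committed:
            _, _, key, before, _ = rec
            database[key] = before
    return database
```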
Symposium on Principles of Database Systems | 1982
Andreas Reuter
In many large database applications there are certain data elements, mostly containing aggregate information, that are very frequently referred to (read and modified) by many transactions. If access to such fields has to obey conventional two-phase lock protocols (1,2), transactions will be serialized in front of these hot spots, i.e., the degree of parallelism is reduced. To avoid this kind of lock contention, some improved lock protocols have been proposed, the most interesting of which is the one implemented in IMS Fast Path (3,4), where additions and subtractions may be performed concurrently on numerical fields, since backout is always possible with the unique inverse of each operand. A similar scheme is proposed in (10). We extend this idea to parallel readers and writers on numerical data types, proving that under certain conditions the result of such concurrent operations is consistent in the sense that it equals the result of some serial schedule (2,5).
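As a rough illustration of the Fast Path idea described above, the sketch below (hypothetical names, not the paper's formal protocol) applies add/subtract operations to a hot-spot field under only a short-term latch, with backout always possible through each operation's unique inverse.

```python
# A minimal sketch, assuming a single in-memory counter field; the class and
# method names are illustrative and not taken from the paper.

import threading

class HotSpotField:
    def __init__(self, value=0):
        self.value = value
        self._latch = threading.Lock()   # short-term latch, not a transaction lock
        self._undo_log = {}              # txn id -> list of inverse operations

    def apply(self, txn, delta):
        """Add `delta` (may be negative) on behalf of transaction `txn`."""
        with self._latch:                # held only for the duration of the update
            self.value += delta
            self._undo_log.setdefault(txn, []).append(-delta)   # unique inverse

    def commit(self, txn):
        self._undo_log.pop(txn, None)    # inverses no longer needed

    def backout(self, txn):
        """Undo all of `txn`'s operations; correct regardless of interleaving."""
        with self._latch:
            for inverse in reversed(self._undo_log.pop(txn, [])):
                self.value += inverse

# Because addition is commutative, any interleaving of committed transactions
# yields the same final value as some serial schedule.
balance = HotSpotField(100)
balance.apply("T1", +10)
balance.apply("T2", -25)
balance.backout("T2")     # T2 aborts; its effect is removed exactly
balance.commit("T1")
assert balance.value == 110
```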
International Conference on Management of Data | 1989
Philip A. Bernstein; Umeshwar Dayal; David J. DeWitt; Dieter Gawlick; Jim Gray; Matthias Jarke; Bruce G. Lindsay; Peter C. Lockemann; David Maier; Erich J. Neuhold; Andreas Reuter; Lawrence A. Rowe; Hans-Jörg Schek; Joachim W. Schmidt; Michael Schrefl; Michael Stonebraker
On February 4-5, 1988, the International Computer Science Institute sponsored a two-day workshop at which 16 senior members of the database research community discussed future research topics in the DBMS area. This paper summarizes the discussion that took place.
IEEE Transactions on Software Engineering | 1984
Andreas Reuter; Horst Kinzinger
This paper describes the concepts and implementation of a design aid for the internal schema of an existing CODASYL-like database system. It allows the storage structure level to be tailored to a given logical schema and a specified workload. In line with the 1978 CODASYL report, our DBMS provides two levels of schema declaration: the DDL level for describing the logical schema and a DSDL-like level for specifying the storage structures that implement the objects of the logical schema. The repertoire of storage structures supported by our system is described, and the rules applied in deriving a good internal schema are basically heuristic. This approach is justified by weighing its advantages and shortcomings against those of analytic models and simulation. Finally, some preliminary user experiences with a pilot version are related.
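To give a flavor of what such heuristic rules might look like, here is a minimal sketch with invented thresholds, profile keys, and structure names; the actual rules of the design aid are not reproduced here.

```python
# A hypothetical heuristic of the general kind described above: pick a storage
# structure per record type from a simple workload profile. Thresholds and
# names are assumptions for illustration only.

def choose_storage_structure(profile):
    """profile: dict with illustrative keys
         'direct_lookups' - relative frequency of key-based accesses
         'range_scans'    - relative frequency of sorted/sequential scans
         'volatility'     - relative frequency of inserts and deletes"""
    if profile['range_scans'] > profile['direct_lookups']:
        return 'b-tree (key-sequenced)'   # favors ordered scans
    if profile['volatility'] > 0.5:
        return 'b-tree (key-sequenced)'   # tolerates growth better than hashing
    return 'hashing (direct access)'      # fastest for exact-match lookups

workload = {'direct_lookups': 0.7, 'range_scans': 0.1, 'volatility': 0.2}
print(choose_storage_structure(workload))   # -> 'hashing (direct access)'
```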
International Conference on Management of Data | 2008
Andreas Reuter
In this article I will reflect on the writing of Transaction Processing -- Concepts and Techniques [1], which was published by Morgan Kaufmann Publishers in 1992. The process of writing had many aspects of a typical software project: in the end, the book was more than twice as thick as we had planned, it covered only 3/4 of the material we wanted to cover, and completing it took much longer than we had anticipated. Nevertheless, it was a moderate success and served as a basic reference for many developers in industry for at least 10 years after its publication. It was translated into Chinese and Japanese, and occasionally one still finds references to it -- despite the fact that (apart from simple bug fixes) there has been no technical update of the material, and the book deals with outdated subjects like transaction processing and client/server architectures.
Messung, Modellierung und Bewertung von Rechensystemen, GI/NTG-Fachtagung | 1981
Wolfgang Effelsberg; Theo Härder; Andreas Reuter; J. Schultze-Bohl
Quite generally, the goal of performance measurement is to determine and evaluate the behavior of an existing system in various operating states, for example in order to detect and eliminate bottlenecks early, to optimize operational behavior, or to demonstrate compliance with given performance specifications.
Messung, Modellierung und Bewertung von Rechensystemen, GI/NTG-Fachtagung | 1981
Wolfgang Effelsberg; Theo Härder; Andreas Reuter; J. Schultze-Bohl
The development and use of a software product as complex as a DBMS requires detailed knowledge of its dynamic behavior under different load characteristics, of the effects of the fundamental design decisions, and of the parameters most important for performance. The results of performance studies of database systems are therefore relevant to the system developer as well as to the database administrator and the application programmer; this is discussed in more detail in this paper. However, the results of measurements as described in /EHRS81/ are not directly usable by any of these target groups, since the sheer volume of raw numbers they produce can only be interpreted with the help of evaluation models. It is not possible, though, to develop a single, closed interpretation tool for a DBMS that covers all relevant aspects and can therefore be used in the same way by everyone interested in performance measurement results. A large part of this work therefore deals with the question of how measured values from
- individual measurements of the execution times of DML statements,
- single-user measurements of the execution times of transactions, and
- measurements of execution times, interference, etc. in multi-user operation
can be aggregated in such a way that reliable statements about performance bottlenecks, optimization opportunities, the magnitudes of critical parameters, etc. can be derived from them. This presupposes that sufficiently accurate models of the dynamic system behavior are developed for the particular sub-aspects under investigation.
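As a small illustration of the first aggregation step mentioned above (hypothetical data and names; not the evaluation models developed in the paper), raw per-DML-statement timings can be rolled up into per-transaction summaries so that potential bottlenecks become visible.

```python
# A minimal sketch: aggregate per-DML-statement timings into per-transaction
# summaries. The sample data and statement names are purely illustrative.

from collections import defaultdict
from statistics import mean

# Hypothetical raw measurements: (transaction_type, dml_statement, elapsed_ms)
samples = [
    ('booking', 'FIND',   4.1), ('booking', 'MODIFY', 11.8),
    ('booking', 'FIND',   3.9), ('booking', 'STORE',  15.2),
    ('report',  'FIND',   4.4), ('report',  'GET',     1.2),
]

per_txn = defaultdict(lambda: defaultdict(list))
for txn, stmt, ms in samples:
    per_txn[txn][stmt].append(ms)

for txn, stmts in per_txn.items():
    total = sum(sum(times) for times in stmts.values())
    print(f"{txn}: total {total:.1f} ms")
    for stmt, times in sorted(stmts.items(), key=lambda kv: -sum(kv[1])):
        share = 100 * sum(times) / total
        print(f"  {stmt:<6} avg {mean(times):.1f} ms, {share:.0f}% of time")
```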
Information Systems Technology and its Applications | 2008
Andreas Reuter
A system is dependable if you can trust it to work. This seems to be a completely obvious, almost trivial definition. But the question of what it means for a system to “work” is influenced, among other things, by the type of system and the perspective of the user. Depending on the function, reliability can be an important criterion, but in other cases it can be throughput, response time, accuracy of computations, or immunity against malicious attacks; this list could be continued for a while.
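One common quantitative notion related to dependability, not taken from this article, is steady-state availability expressed through mean time to failure (MTTF) and mean time to repair (MTTR); a minimal sketch with purely illustrative numbers:

```python
# Steady-state availability = MTTF / (MTTF + MTTR); numbers are illustrative.

def availability(mttf_hours, mttr_hours):
    """Fraction of time the system can be trusted to be working."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system failing once a year (8760 h of operation) and taking 1 h to repair:
print(f"{availability(8760, 1):.5f}")   # -> 0.99989, i.e. about 99.99% availability
```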
Formale Modelle für Informationssysteme | 1979
Wolfgang Effelsberg; Theo Härder; Andreas Reuter
When developing large software systems, it should be ensured as early as possible, by examining their performance, that they meet the given performance requirements and can support the planned applications in accordance with the performance specifications. Detecting bottlenecks in system behavior or uncovering performance errors demands corrective measures from the developer as early as possible, so that their effects remain limited. Ideally, therefore, suitable methods of performance analysis should be available in the various development phases of a system, that is, already during planning and design and later during implementation and operation.
Archive | 1992
Jim Gray; Andreas Reuter