Philipp Reinecke
Free University of Berlin
Publications
Featured research published by Philipp Reinecke.
International Service Availability Symposium | 2004
Philipp Reinecke; Aad P. A. van Moorsel; Katinka Wolter
Restart is an application-level mechanism to speed up the completion of tasks that are subject to failures or unpredictable delays. In this paper we investigate if restart can be beneficial for Internet applications. For that reason we conduct and analyze a measurement study for restart applied to HTTP GET over TCP. Since application-level restart and TCP time-out mechanisms may interfere, we discuss in detail the relation between restart and transport protocol. The analysis shows that restart may especially be beneficial in the TCP set-up phase, in essence tuning TCP time-out values for the application at hand. In addition, we discuss the design of and experimentation with a proxy-based restart tool that includes a statistical oracle module to automatically adapt and optimize the restart time.
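As a concrete illustration of the mechanism studied in this paper, the minimal Python sketch below aborts and reissues an HTTP GET that has not completed within a fixed deadline. The deadline and retry cap are illustrative placeholders, not the statistical oracle the paper describes.

```python
import requests

def get_with_restart(url, restart_timeout=2.0, max_tries=5):
    """Abort and reissue an HTTP GET that has not completed within
    restart_timeout seconds (values are illustrative placeholders)."""
    for attempt in range(max_tries):
        try:
            # Each try opens a fresh TCP connection, so a slow
            # SYN/SYN-ACK exchange is also retried -- the set-up phase
            # where the paper finds restart to be most beneficial.
            return requests.get(url, timeout=restart_timeout)
        except requests.exceptions.RequestException:
            continue  # restart: abandon this try and issue a new one
    raise TimeoutError("no completion within %d tries" % max_tries)
```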
Quantitative Evaluation of Systems | 2012
Philipp Reinecke; Tilman Krauss; Katinka Wolter
We present HyperStar, an extensible tool with a graphical user interface that enables simple and efficient fitting of phase-type distributions to data sets. The fitting process can be optimised using intuitive graphical selection of important features of the density. An extensible module interface enables application of the tool as a GUI for prototype implementations of new fitting algorithms.
Computers & Mathematics with Applications | 2012
Philipp Reinecke; Tilman Krauß; Katinka Wolter
We present a clustering-based fitting approach for phase-type distributions that is particularly suited to capture common characteristics of empirical data sets. The distributions fitted by this approach are especially useful in efficient simulation approaches. We describe the Hyper-* tool, which implements the algorithm and offers a user-friendly interface to efficient phase-type fitting. We provide a comparison of cluster-based fitting with segmentation-based approaches and other algorithms and show that clustering provides good results for typical empirical data sets.
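The sketch below illustrates the general idea of cluster-based fitting under simplifying assumptions: partition the samples with k-means, then moment-match one Erlang branch per cluster. The result is a hyper-Erlang distribution, a subclass of the acyclic phase-type distributions that lend themselves to efficient simulation. The tool's actual clustering and per-branch fitting steps differ in detail.

```python
import numpy as np
from sklearn.cluster import KMeans

def hyper_erlang_fit(samples, n_branches=3):
    """Sketch of cluster-based phase-type fitting: cluster the data,
    then fit one Erlang branch per cluster by moment matching.
    Illustrates the general idea only, not the tool's exact algorithm."""
    data = np.asarray(samples, dtype=float)
    labels = KMeans(n_clusters=n_branches, n_init=10).fit_predict(
        data.reshape(-1, 1))
    branches = []
    for c in range(n_branches):
        cluster = data[labels == c]
        m, v = cluster.mean(), cluster.var()
        # An Erlang-k distribution has squared coefficient of variation 1/k.
        k = max(1, round(m * m / v)) if v > 0 else 1
        branches.append((len(cluster) / len(data),  # branch probability
                         k,                         # number of phases
                         k / m))                    # rate of each phase
    return branches  # (weight, k, rate) triples: a hyper-Erlang distribution
```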
Quantitative Evaluation of Systems | 2006
Philipp Reinecke; A. van Moorsel; K. Wolter
Web services are typically deployed in Internet or Intranet environments, making message transfers susceptible to a wide variety of network, protocol and system failures. To mitigate these problems, reliable messaging solutions for Web services have been proposed that retry messages suspected to be lost. It is of interest to evaluate the performance of such reliable messaging solutions, and in this paper we therefore utilise a newly developed fault-injection environment for the analysis of time-out strategies for the Web Services Reliable Messaging standard. We compare oracles that determine retransmission times with respect to the tradeoff between two metrics: the effective transfer time and the overhead in terms of additional message transmissions. Our fault-injection environment allows faults to be injected in the IP layer, representing a variety of failure situations in the underlying system. The study presented in this paper includes two adaptive oracles, which set the length of the retransmission interval based on round-trip time measurements, and two static oracles. The study considers both HTTP and Mail as SOAP transports. We conclude that adaptive oracles may significantly outperform static oracles when the underlying system exhibits more complex behaviour.
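One plausible shape for such an adaptive oracle is the smoothed round-trip-time estimator TCP itself uses for its retransmission timeout (RFC 6298). The sketch below illustrates that idea; it is not one of the oracles evaluated in the paper.

```python
class AdaptiveOracle:
    """Retransmission-interval oracle in the style of TCP's RTO
    computation (RFC 6298); an illustration, not the paper's oracles."""

    def __init__(self, alpha=0.125, beta=0.25, initial_rto=3.0):
        self.alpha, self.beta = alpha, beta
        self.srtt = None       # smoothed round-trip time
        self.rttvar = None     # round-trip time variation
        self.rto = initial_rto

    def observe(self, rtt):
        """Update the estimator with a new round-trip time measurement."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = ((1 - self.beta) * self.rttvar
                           + self.beta * abs(self.srtt - rtt))
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        self.rto = self.srtt + 4 * self.rttvar

    def retransmission_interval(self):
        return self.rto
```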
SPEC International Performance Evaluation Workshop | 2008
Philipp Reinecke; Katinka Wolter
Web-Services-based Service-Oriented Architectures (SOAs) are becoming ever more important. The Web Services Reliable Messaging standard (WSRM) provides a reliable messaging layer for these systems. In this work we present parameters for acyclic continuous phase-type (ACPH) approximations of message transmission times in a WSRM implementation confronted with several different levels of IP packet loss. These parameters illustrate how large data sets may be represented by just a few parameters. The ACPH approximations presented here can be used for the stochastic modelling of SOA systems. We demonstrate the application of the models using an M/PH/1 queue.
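To make the modelling step concrete: in an M/PH/1 queue the mean waiting time follows from the Pollaczek-Khinchine formula once the first two moments of the service time are computed from the phase-type representation (alpha, A) via E[S^k] = k! alpha (-A)^(-k) 1. The parameters below are illustrative, not the fitted ACPH values from the paper.

```python
import numpy as np

def mph1_mean_wait(alpha, A, lam):
    """Mean waiting time in an M/PH/1 queue (Pollaczek-Khinchine).
    (alpha, A) is a continuous phase-type representation; the k-th
    service-time moment is E[S^k] = k! * alpha * (-A)^(-k) * 1."""
    alpha, A = np.asarray(alpha, float), np.asarray(A, float)
    ones = np.ones(len(alpha))
    inv = np.linalg.inv(-A)
    m1 = alpha @ inv @ ones               # E[S]
    m2 = 2.0 * alpha @ inv @ inv @ ones   # E[S^2]
    rho = lam * m1                        # utilisation
    assert rho < 1, "queue is unstable"
    return lam * m2 / (2.0 * (1.0 - rho))

# Illustrative (not the paper's) ACPH parameters:
alpha = [0.6, 0.4]
A = [[-3.0, 3.0],
     [0.0, -5.0]]
print(mph1_mean_wait(alpha, A, lam=1.0))
```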
Formal Methods | 2006
Philipp Reinecke; Aad P. A. van Moorsel; Katinka Wolter
In this paper we experimentally investigate whether optimal retry times can be determined based on models that assume independence of successive tries. We do this using data obtained for HTTP GET. This data provides application-perceived timing characteristics for the various phases of web page download, including response times for TCP connection set-ups and individual object downloads. The data consists of pairs of consecutive downloads for over one thousand randomly chosen URLs. Our analysis shows that correlation exists for normally completed invocations, but is remarkably low for relatively slow downloads. This implies that for typical situations in which retries are applied, models relying on the independence assumption are appropriate.
Formal Methods | 2010
Katinka Wolter; Philipp Reinecke
A tradeoff is a situation that involves losing one quality or aspect of something in return for gaining another quality or aspect. Speaking about the tradeoff between performance and security indicates that both performance and security can be measured, and that to increase one, we have to pay in terms of the other. While established metrics exist for the performance of systems, this is not quite the case for security. In this chapter we present standard performance metrics and discuss proposed security metrics that are suitable for quantification. The dilemma of inferior metrics can be resolved by considering indirect metrics such as the computation cost of security mechanisms. Security mechanisms such as encryption or security protocols come at a cost in terms of computing resources. Quantification of performance has long been done by means of stochastic models. With growing interest in the quantification of security, stochastic modelling has been applied to security issues as well. This chapter reviews existing approaches to the combined analysis and evaluation of performance and security. We find that most existing approaches take either security or performance as given and investigate the other. For instance, [34] investigates the performance of a server running a security protocol, while [21] quantifies security without considering the cost of increased security. For special applications, mobile ad-hoc networks in [5] and the email system in [32], we will see that models exist which can be used to explore the performance-security tradeoff. To illustrate general aspects of the security-performance tradeoff, we set up a simple Generalised Stochastic Petri Net (GSPN) model that allows us to study both performance and security, and especially the tradeoff between the two. We formulate metrics, such as cost and an abstract combined performance and security measure, that explicitly express the tradeoff, and we show that system parameters can be found that optimise those metrics. These parameters are optimal for neither performance nor security alone, but for the combination of both.
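A toy numerical illustration of such a combined metric (not the chapter's GSPN model): suppose a system re-keys at rate r, where frequent re-keying improves security but costs performance. A weighted combination of the two costs then has an interior optimum that favours neither aspect alone.

```python
import numpy as np

def combined_cost(r, w_perf=0.5, w_sec=0.5):
    """Toy combined performance-security cost for re-keying rate r;
    the cost functions are assumptions chosen for illustration."""
    perf_cost = r / (1.0 + r)   # overhead grows with the re-keying rate
    sec_cost = np.exp(-r)       # key-exposure risk shrinks with the rate
    return w_perf * perf_cost + w_sec * sec_cost

rates = np.linspace(0.01, 10.0, 1000)
best = rates[np.argmin(combined_cost(rates))]
print("optimal re-keying rate: %.2f" % best)  # neither extreme wins
```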
Workshop on Software and Performance | 2008
Philipp Reinecke; Katinka Wolter
Adaptivity, the ability of a system to adapt itself to its environment, is a key property of autonomous systems. In this paper we propose a benefit-based framework for the definition of metrics to measure adaptivity. We demonstrate the application of the framework in a case study of the adaptivity of restart strategies for Web Services Reliable Messaging (WSRM). Using the framework, we define two adaptivity metrics for a fault-injection-driven evaluation of the adaptivity of three restart strategies in a WSRM implementation. The adaptivity measurements are complemented by a thorough discussion of the performance of the restart strategies.
Performance Evaluation | 2010
Philipp Reinecke; Katinka Wolter; Aad P. A. van Moorsel
Although adaptivity, the ability to adapt, is an important property of complex computing systems, so far little thought has been given to its evaluation. In this paper we propose a framework and methodology for the definition of benefit-based adaptivity metrics. The metrics thus defined allow an informed choice to be made between systems based on their adaptivity. We demonstrate application of the framework in a case study of restart strategies for Web Services Reliable Messaging. Additionally, we provide a broad survey of related approaches that may be used in the study of adaptivity (comprising, among others, robustness, performability, and control analysis), and evaluate their respective merits in relation to the proposed adaptivity metric.
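In the spirit of a benefit-based metric (the paper's exact definition differs), one simple instantiation compares the benefit a system accrues across a sequence of environment phases against the benefit an ideally adapted reference would accrue:

```python
def adaptivity_score(benefit_system, benefit_ideal):
    """Ratio of accrued benefit to the benefit of an ideally adapted
    reference across environment phases; a sketch in the spirit of the
    paper's benefit-based framework, not its exact metric."""
    return sum(benefit_system) / sum(benefit_ideal)

# Hypothetical per-phase goodput under three injected fault loads:
print(adaptivity_score([80, 40, 70], [100, 60, 90]))  # -> 0.76
```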
MMB&DFT'10 Proceedings of the 15th International GI/ITG Conference on Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance | 2010
Philipp Reinecke; Miklós Telek; Katinka Wolter
Phase-type (PH) distributions have proven to be very powerful tools in the modelling and analysis of a wide range of phenomena in computer systems. The use of these distributions in simulation studies requires efficient methods for generating PH-distributed random numbers. In this work, we discuss algorithms for generating random numbers from PH distributions and propose two algorithms for reducing the cost associated with generating random numbers from Acyclic Phase-Type distributions (APH).
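For context, the baseline that such cost-reduction techniques compete with is the "play" method: simulate the underlying absorbing Markov chain until absorption. A minimal sketch, with (alpha, A) as an assumed PH representation:

```python
import numpy as np

rng = np.random.default_rng()

def ph_sample(alpha, A):
    """Draw one PH-distributed random number by playing the underlying
    absorbing Markov chain; the baseline that the paper's cheaper APH
    algorithms improve upon."""
    alpha, A = np.asarray(alpha, float), np.asarray(A, float)
    n = len(alpha)
    exit_rates = -A.sum(axis=1)       # rates into the absorbing state
    state = rng.choice(n, p=alpha)    # initial phase drawn from alpha
    t = 0.0
    while state is not None:
        total = -A[state, state]
        t += rng.exponential(1.0 / total)  # sojourn time in this phase
        # Jump probabilities: other phases first, then the absorbing state.
        probs = np.append(A[state].clip(min=0.0) / total,
                          exit_rates[state] / total)
        nxt = rng.choice(n + 1, p=probs)
        state = nxt if nxt < n else None   # index n means absorption
    return t
```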