Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Teerat Pitakrat is active.

Publication


Featured research published by Teerat Pitakrat.


Recommendation Systems in Software Engineering | 2014

Dimensions and Metrics for Evaluating Recommendation Systems

Iman Avazpour; Teerat Pitakrat; Lars Grunske; John C. Grundy

Recommendation systems support users and developers of various computer and software systems in overcoming information overload, performing information discovery tasks, and approximating computations, among other activities. They have recently become popular and are now applied in a wide variety of scenarios, ranging from business process modeling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, and coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key directions for future research and practice.
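
Below is a minimal sketch, not taken from the chapter, of how two correctness metrics (precision and recall at k) and a simple catalogue-coverage metric might be computed for a recommender; the item names and data are made up for illustration.

```python
# Illustrative sketch (not from the chapter): two common correctness metrics
# and a simple coverage metric for a recommendation system.

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are actually relevant."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

def recall_at_k(recommended, relevant, k):
    """Fraction of all relevant items that appear in the top-k recommendations."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def catalogue_coverage(all_recommendations, catalogue):
    """Share of the item catalogue that is ever recommended to any user."""
    recommended_items = set(item for recs in all_recommendations for item in recs)
    return len(recommended_items & set(catalogue)) / len(catalogue)

if __name__ == "__main__":
    # Hypothetical data: one user's ranked recommendations and the ground truth.
    recommended = ["refactor_method", "extract_class", "rename_var", "inline_temp"]
    relevant = {"extract_class", "rename_var"}
    print(precision_at_k(recommended, relevant, k=3))   # 2/3
    print(recall_at_k(recommended, relevant, k=3))      # 1.0
    print(catalogue_coverage([recommended],
                             ["refactor_method", "extract_class", "rename_var",
                              "inline_temp", "move_method"]))  # 0.8
```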


Proceedings of the 4th International ACM SIGSOFT Symposium on Architecting Critical Systems | 2013

A comparison of machine learning algorithms for proactive hard disk drive failure detection

Teerat Pitakrat; André van Hoorn; Lars Grunske

Failures or unexpected events are inevitable in critical and complex systems. Proactive failure detection is an approach that aims to detect such events in advance so that preventative or recovery measures can be planned, thus improving system availability. Machine learning techniques have been successfully applied to learn patterns from available datasets and to classify or predict to which class a new instance of data belongs. In this paper, we evaluate and compare the performance of 21 machine learning algorithms by using them for proactive hard disk drive failure detection. For this comparison, we use WEKA as an experimentation platform together with publicly available benchmark datasets of hard disk drives, with the goal of predicting imminent failures before they actually occur. The results show that different algorithms are suitable for different applications, depending on the desired prediction quality and the tolerated training and prediction time.
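
The following sketch illustrates the comparison idea using scikit-learn as a stand-in for WEKA (the paper's experiments are run in WEKA itself); the SMART-like features and labels below are synthetic and purely illustrative.

```python
# Sketch of the comparison idea with scikit-learn as a stand-in for WEKA;
# data is synthetic, features mimic SMART-like drive attributes.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Hypothetical drive attributes for 1000 drives; label 1 = drive failed soon after.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "kNN": KNeighborsClassifier(),
}

for name, clf in classifiers.items():
    start = time.perf_counter()
    clf.fit(X_train, y_train)              # training time is part of the comparison
    train_time = time.perf_counter() - start
    y_pred = clf.predict(X_test)
    print(f"{name}: precision={precision_score(y_test, y_pred, zero_division=0):.2f} "
          f"recall={recall_score(y_test, y_pred, zero_division=0):.2f} "
          f"train_time={train_time:.4f}s")
```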


Quality of Software Architectures | 2016

An Architecture-Aware Approach to Hierarchical Online Failure Prediction

Teerat Pitakrat; Dušan Okanović; André van Hoorn; Lars Grunske

Failures in software systems during operation are inevitable. They cause system downtime, which needs to be minimized to reduce or avoid unnecessary costs and customer dissatisfaction. Online failure prediction aims at identifying upcoming failures at runtime to enable proactive maintenance actions. Existing online failure prediction approaches focus on predicting failures of either individual components or the system as a whole. They do not take into account software architectural dependencies, which determine the propagation of failures. In this paper, we propose a hierarchical online failure prediction approach, HORA, which employs a combination of both failure predictors and architectural models. We evaluate our approach using a distributed RSS reader application by Netflix and investigate the prediction quality for two representative types of failures, namely memory leak and system overload. The results show that, overall, our approach improves the area under the ROC curve by 10.7% compared to a monolithic approach.
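
As a rough illustration of the hierarchical idea, and not the HORA implementation, the sketch below combines per-component failure probabilities along assumed architectural dependencies using a simple noisy-OR style propagation; the component names and probabilities are hypothetical.

```python
# Minimal sketch of the hierarchical idea (not the HORA implementation):
# combine per-component failure predictions along architectural dependencies.

# Hypothetical architecture: 'frontend' depends on 'feed-service' and 'database'.
dependencies = {
    "frontend": ["feed-service", "database"],
    "feed-service": ["database"],
    "database": [],
}

# Output of per-component failure predictors (probability of an own failure).
local_failure_prob = {"frontend": 0.02, "feed-service": 0.10, "database": 0.30}

def predicted_failure_prob(component, memo=None):
    """Probability that a component fails, either on its own or because a
    component it depends on is predicted to fail (noisy-OR combination)."""
    if memo is None:
        memo = {}
    if component in memo:
        return memo[component]
    p_ok = 1.0 - local_failure_prob[component]
    for dep in dependencies[component]:
        p_ok *= 1.0 - predicted_failure_prob(dep, memo)
    memo[component] = 1.0 - p_ok
    return memo[component]

for c in dependencies:
    print(f"{c}: {predicted_failure_prob(c):.3f}")
```

A proper architecture-aware predictor would use a probabilistic model rather than this naive combination, for instance to avoid double-counting shared dependencies such as the database above; the journal version of Hora below models the propagation with Bayesian networks.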


Journal of Systems and Software | 2017

Hora: Architecture-aware online failure prediction

Teerat Pitakrat; Dušan Okanović; André van Hoorn; Lars Grunske

Complex software systems experience failures at runtime even though a lot of effort is put into their development and operation. Reactive approaches detect these failures after they have occurred and already caused serious consequences. In order to execute proactive actions, the goal of online failure prediction is to detect these failures in advance by monitoring the quality of service or the system events. Current failure prediction approaches look at the system or individual components as a monolith without considering the architecture of the system. They disregard the fact that a failure in one component can propagate through the system and cause problems in other components. In this paper, we propose a hierarchical online failure prediction approach, called Hora, which combines component failure predictors with architectural knowledge. The failure propagation is modeled using Bayesian networks which incorporate both prediction results and component dependencies extracted from the architectural models. Our approach is evaluated using Netflix's server-side distributed RSS reader application to predict failures caused by three representative types of faults: memory leak, system overload, and sudden node crash. We compare Hora to a monolithic approach and the results show that our approach can improve the area under the ROC curve by 9.9%.
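
The evaluation metric reported above, the area under the ROC curve, can be computed as in the following sketch; the prediction scores here are synthetic and the numbers are not the paper's results.

```python
# Sketch of the evaluation metric used in the paper: area under the ROC curve,
# computed with scikit-learn on synthetic prediction scores (illustrative only).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
# Ground truth: 1 where a failure actually occurred within the prediction lead time.
y_true = rng.integers(0, 2, size=200)
# Hypothetical failure probabilities produced by two predictors.
scores_architecture_aware = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 200), 0, 1)
scores_monolithic = np.clip(y_true * 0.4 + rng.normal(0.3, 0.3, 200), 0, 1)

auc_hora = roc_auc_score(y_true, scores_architecture_aware)
auc_mono = roc_auc_score(y_true, scores_monolithic)
print(f"architecture-aware AUC: {auc_hora:.3f}")
print(f"monolithic AUC:         {auc_mono:.3f}")
print(f"relative improvement:   {(auc_hora - auc_mono) / auc_mono:.1%}")
```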


European Dependable Computing Conference | 2014

Increasing Dependability of Component-Based Software Systems by Online Failure Prediction (Short Paper)

Teerat Pitakrat; André van Hoorn; Lars Grunske

Online failure prediction for large-scale software systems is a challenging task. One reason is the complex structure of many, partially inter-dependent, hardware and software components. State-of-the-art approaches use separate prediction models for parameters of interest or a monolithic prediction model which includes different parameters of all components. However, they have problems when dealing with evolving systems. In this paper, we propose our preliminary research work on online failure prediction targeting large-scale component-based software systems. For the prediction, three complementary types of models are used: (i) an architectural model captures relevant properties of hardware and software components as well as dependencies among them, (ii) for each component, a prediction model captures the current state of the component and predicts independent component failures in the future, and (iii) a system-level prediction model represents the current state of the system and, using the component-level prediction models and information on dependencies, makes it possible to predict failures and analyze the impact of architectural system changes for proactive failure management.
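
One possible way to picture how the three model types fit together is sketched below; the class and field names are hypothetical and the combination rule is deliberately simplistic.

```python
# Illustrative sketch (hypothetical names) of the three complementary model types:
# an architectural model, per-component predictors, and a system-level predictor.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ArchitecturalModel:
    # component name -> names of components it depends on
    dependencies: Dict[str, List[str]] = field(default_factory=dict)

@dataclass
class ComponentPredictor:
    # Returns the predicted probability of an independent failure of the component.
    predict: Callable[[], float]

@dataclass
class SystemPredictor:
    architecture: ArchitecturalModel
    component_predictors: Dict[str, ComponentPredictor]

    def predict_system_failure(self, component: str) -> float:
        """Combine component predictions along dependencies (simplified noisy-OR)."""
        p_ok = 1.0 - self.component_predictors[component].predict()
        for dep in self.architecture.dependencies.get(component, []):
            p_ok *= 1.0 - self.predict_system_failure(dep)
        return 1.0 - p_ok

if __name__ == "__main__":
    arch = ArchitecturalModel({"frontend": ["backend"], "backend": []})
    predictors = {
        "frontend": ComponentPredictor(lambda: 0.05),
        "backend": ComponentPredictor(lambda: 0.20),
    }
    system = SystemPredictor(arch, predictors)
    print(f"{system.predict_system_failure('frontend'):.3f}")  # 1 - 0.95*0.80 = 0.240
```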


IEEE International Conference on Software Architecture Workshops | 2017

CASPA: A Platform for Comparability of Architecture-Based Software Performance Engineering Approaches

Thomas F. Düllmann; Robert Heinrich; André van Hoorn; Teerat Pitakrat; Jürgen Walter; Felix Willnecker

Setting up an experimental evaluation of architecture-based Software Performance Engineering (SPE) approaches requires enormous effort. This includes the selection and installation of representative applications, usage profiles, supporting tools, infrastructures, etc. Quantitative comparisons with related approaches are hardly possible due to the limited repeatability of previous experiments by other researchers. This paper presents CASPA, a ready-to-use and extensible evaluation platform that already includes example applications and state-of-the-art SPE components, such as monitoring and model extraction. The platform explicitly provides interfaces to replace applications and components by custom(ized) components. The platform builds on state-of-the-art technologies such as container-based virtualization.


International Symposium on Software Reliability Engineering | 2016

Antipattern-Based Problem Injection for Assessing Performance and Reliability Evaluation Techniques

Philipp Keck; André van Hoorn; Dušan Okanović; Teerat Pitakrat; Thomas F. Düllmann

A challenging problem with today's increasingly large and distributed software systems is their performance behavior. To help developers avoid or detect mistakes that lead to performance problems, many researchers in software performance engineering have come up with classifications of such problems, called antipatterns. To test approaches for antipattern detection, data from running systems is required. However, the usefulness of this data is doubtful, as it may or may not include manifestations of performance problems. In this paper, we classify existing performance antipatterns with respect to their suitability for being injected and, based on this, introduce an extensible tool that allows injecting instances of these antipatterns into existing applications. The approach can be useful for researchers to test and validate their automated runtime problem evaluation and prevention techniques. Using two exemplary performance antipatterns, we demonstrate that the injection is easily possible and produces feasible, though currently rather clinical, results.
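
As an illustration of the injection idea (the paper's tool targets existing applications and does not prescribe this particular mechanism), the sketch below injects a "ramp" style antipattern, i.e. steadily growing response times, into a function via a Python decorator; all names are made up.

```python
# Illustrative sketch of antipattern injection: "The Ramp", where response times
# grow the longer the system runs, injected via a decorator (names are made up).
import time
import functools

def inject_ramp(delay_increment_s=0.001):
    """Decorator that injects an ever-growing artificial delay into a function."""
    def decorator(func):
        call_count = 0
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal call_count
            call_count += 1
            time.sleep(call_count * delay_increment_s)  # delay ramps up per call
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_ramp(delay_increment_s=0.002)
def handle_request(payload):
    # Stand-in for an existing application operation.
    return {"status": "ok", "echo": payload}

if __name__ == "__main__":
    for i in range(5):
        start = time.perf_counter()
        handle_request({"id": i})
        print(f"call {i}: {time.perf_counter() - start:.3f}s")
```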


Performance Evaluation Methodologies and Tools | 2014

A framework for system event classification and prediction by means of machine learning

Teerat Pitakrat; Jonas Grunert; Oliver Kabierschke; Fabian Keller; André van Hoorn

During operation, software systems produce large amounts of log events, comprising notifications of different severity from various hardware and software components. These data include important information that helps to diagnose problems in the system, e.g., for post-mortem root cause analysis. Manual processing of system logs after a problem has occurred is common practice. However, it is time-consuming and error-prone. Moreover, problems are then diagnosed only after they have occurred, even though the data may already include symptoms of upcoming problems. To address these challenges, we developed the SCAPE approach for automatic system event classification and prediction, employing machine learning techniques. This paper introduces SCAPE, including a brief description of the proof-of-concept implementation. SCAPE is part of our Hora framework for online failure prediction in component-based software systems. The experimental evaluation, using a publicly available supercomputer event log, demonstrates SCAPE's high classification accuracy and presents first results from applying the prediction to a real-world data set.
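
The sketch below shows the general flavour of event classification with machine learning, not the SCAPE implementation; it uses scikit-learn with a handful of made-up log messages.

```python
# Minimal illustration of classifying system log events with machine learning
# (not the SCAPE implementation); messages and labels below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_messages = [
    "link down on eth0",
    "temperature threshold exceeded on node 12",
    "scheduled backup completed successfully",
    "user login from 10.0.0.5",
    "uncorrectable ECC memory error detected",
    "heartbeat lost to storage controller",
]
# 1 = event indicating a (potential) problem, 0 = benign event.
train_labels = [1, 1, 0, 0, 1, 1]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(train_messages, train_labels)

new_events = ["backup completed", "ECC error on node 7"]
print(classifier.predict(new_events))  # e.g. [0 1]
```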


KPDAYS | 2013

Hora: Online Failure Prediction Framework for Component-based Software Systems Based on Kieker and Palladio

Teerat Pitakrat

Collaboration


Dive into Teerat Pitakrat's collaboration.

Top Co-Authors

Lars Grunske
University of Stuttgart

Robert Heinrich
Karlsruhe Institute of Technology

Iman Avazpour
Swinburne University of Technology