Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Petr Tuma is active.

Publication


Featured research published by Petr Tuma.


Cooperative Information Systems | 2002

Distributed Component System Based on Architecture Description: The SOFA Experience

Tomas Kalibera; Petr Tuma

In this paper, the authors share their experience gathered during the design and implementation of a runtime environment for the SOFA component system. The authors focus on the issues of mapping the SOFA component definition language into the C++ language and the integration of a CORBA middleware into the SOFA component system, aiming to support transparently distributed applications in a real-life environment. The experience highlights general problems related to the type system of architecture description languages and middleware implementations, the mapping of the type system into the implementation language, and the support for dynamic changes of the application architecture.


International Parallel and Distributed Processing Symposium | 2003

CORBA benchmarking: a course with hidden obstacles

Adam Buble; Lubomír Bulej; Petr Tuma

Numerous projects have evaluated the performance of CORBA middleware over the past decade. Interestingly, many of the published results are either gathered or analyzed imprecisely. We point out common causes of such imprecision related to the gathering of timing information and the effects of warm-up, randomization, cross talk and delayed or hidden functionality, and demonstrate their impact on the results of the evaluation. We also present suggestions related to reporting the results in a manner that is more relevant to the evaluation.
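The warm-up effect mentioned above is easy to reproduce. The following sketch shows a measurement loop that discards warm-up iterations before collecting steady-state samples; the invoke() function is a hypothetical stand-in for the operation under test, such as a CORBA remote call.

```python
# Minimal sketch of a measurement loop that separates warm-up from steady-state
# samples. invoke() is a hypothetical placeholder for the operation under test.
import time
import statistics

def invoke():
    # Placeholder for the measured operation (e.g. a remote invocation).
    sum(range(1000))

def measure(warmup=1000, samples=10000):
    for _ in range(warmup):          # discard warm-up iterations (JIT, caches, connection setup)
        invoke()
    times = []
    for _ in range(samples):
        start = time.perf_counter()  # monotonic, high-resolution timer
        invoke()
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    t = measure()
    print(f"median {statistics.median(t)*1e6:.1f} us, "
          f"stdev {statistics.stdev(t)*1e6:.1f} us over {len(t)} samples")
```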


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2005

Automated detection of performance regressions: the mono experience

Tomas Kalibera; Lubomír Bulej; Petr Tuma

Engineering a large software project involves tracking the impact of development and maintenance changes on the software performance. An approach for tracking the impact is regression benchmarking, which involves automated benchmarking and evaluation of performance at regular intervals. Regression benchmarking must tackle the nondeterminism inherent to contemporary computer systems and execution environments and the impact of the nondeterminism on the results. On the example of a fully automated regression benchmarking environment for the mono open-source project, we show how the problems associated with nondeterminism can be tackled using statistical methods.
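As a generic illustration of statistically driven regression detection (not the exact procedure used in the paper), the following sketch compares benchmark samples from two builds with Welch's t-test; the sample values and the alpha threshold are made up.

```python
# A minimal sketch of flagging a performance regression by comparing benchmark
# samples from two software versions with Welch's t-test. Illustration only.
from scipy import stats

def regression_detected(old_samples, new_samples, alpha=0.01):
    """Return True if the new version is significantly slower than the old one."""
    t, p = stats.ttest_ind(new_samples, old_samples, equal_var=False)
    # One-sided check: a regression means a larger mean response time.
    return t > 0 and p / 2 < alpha

old = [10.1, 10.3, 9.9, 10.2, 10.0]   # response times from the previous build (ms)
new = [11.0, 11.2, 10.9, 11.1, 11.3]  # response times from the current build (ms)
print(regression_detected(old, new))   # True: the current build is slower
```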


Formal Methods | 2006

Precise regression benchmarking with random effects: improving mono benchmark results

Tomas Kalibera; Petr Tuma

Benchmarking as a method of assessing software performance is known to suffer from random fluctuations that distort the observed performance. In this paper, we focus on the fluctuations caused by compilation. We show that the design of a benchmarking experiment must reflect the existence of the fluctuations if the performance observed during the experiment is to be representative of reality. We present a new statistical model of a benchmark experiment that reflects the presence of the fluctuations in compilation, execution and measurement. The model describes the observed performance and makes it possible to calculate the optimum dimensions of the experiment that yield the best precision within a given amount of time. Using a variety of benchmarks, we evaluate the model within the context of regression benchmarking. We show that the model significantly decreases the number of erroneously detected performance changes in regression benchmarking.
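The following sketch illustrates the kind of hierarchical random-effects structure the paper models, with separate fluctuations at the compilation, execution and measurement levels; the variances and repetition counts are made-up numbers, not values from the paper.

```python
# Hierarchical sketch: each compilation i adds a random shift a_i, each execution j
# within it adds b_ij, and each measurement k adds noise e_ijk. Made-up variances.
import random
import statistics

MU = 100.0                 # true mean operation time (ms)
SD_COMPILE, SD_EXEC, SD_MEAS = 3.0, 1.0, 0.5

def experiment(n_compilations, n_executions, n_measurements):
    samples = []
    for _ in range(n_compilations):
        a = random.gauss(0, SD_COMPILE)        # compilation-level fluctuation
        for _ in range(n_executions):
            b = random.gauss(0, SD_EXEC)       # execution-level fluctuation
            for _ in range(n_measurements):
                e = random.gauss(0, SD_MEAS)   # measurement noise
                samples.append(MU + a + b + e)
    return statistics.mean(samples)

# Spending the same 60 measurements differently changes the precision of the
# estimate: many compilations beat many measurements within one compilation.
means_wide = [experiment(10, 2, 3) for _ in range(200)]
means_deep = [experiment(1, 2, 30) for _ in range(200)]
print("stdev of estimate, 10 compilations:", round(statistics.stdev(means_wide), 2))
print("stdev of estimate,  1 compilation: ", round(statistics.stdev(means_deep), 2))
```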


Journal of Separation Science | 2008

A dual spectrophotometric/contactless conductivity detector for CE determination of incompletely separated amino acids

Jana Zikmundová; Petr Tuma; František Opekar

A new application of a dual photometric/contactless conductivity detector to the CE determination of incompletely separated compounds is described. These compounds are differentiated when one of them provides signals in both cells of the detector, whereas the other yields a signal in only one cell. The technique has been applied to the determination of proline and tyrosine in a dietary supplement.


Performance Evaluation Methodologies and Tools | 2006

Automated benchmarking and analysis tool

Tomas Kalibera; Jakub Lehotsky; David Majda; Branislav Repcek; Michal Tomcanyi; Antonin Tomecek; Petr Tuma; Jaroslav Urban

Benchmarking is an important performance evaluation technique that provides performance data representative of real systems. Such data can be used to verify the results of performance modeling and simulation, or to detect performance changes. Automated benchmarking is an increasingly popular approach to tracking performance changes during software development, which gives developers timely feedback on their work. In contrast with the advances in modeling and simulation tools, the tools for automated benchmarking are usually implemented ad hoc for each project, wasting resources and limiting functionality. We present the result of the BEEN project, a generic tool for automated benchmarking in a heterogeneous distributed environment. BEEN automates all steps of a benchmark experiment, from software building and deployment through measurement and load monitoring to the evaluation of results. Notable features include the separation of measurement from evaluation and the ability to adaptively scale the benchmark experiment based on the evaluation. BEEN has been designed to facilitate automated detection of performance changes during software development (regression benchmarking).
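One of the features mentioned above, adaptive scaling of the experiment based on the evaluation, can be illustrated with a simple stopping rule: keep adding samples until the confidence interval of the mean is tight enough. The sketch below shows the idea only; it is not the BEEN implementation, and run_once() is a hypothetical stand-in for a benchmark run.

```python
# Adaptive scaling sketch: measure until the relative half-width of the
# confidence interval of the mean drops below a threshold. Illustration only.
import math
import random
import statistics

def run_once():
    # Hypothetical stand-in for one benchmark run (ms).
    return random.gauss(50.0, 4.0)

def measure_until_precise(rel_halfwidth=0.01, z=1.96, min_runs=30, max_runs=10000):
    samples = [run_once() for _ in range(min_runs)]
    while len(samples) < max_runs:
        mean = statistics.mean(samples)
        half = z * statistics.stdev(samples) / math.sqrt(len(samples))
        if half / mean <= rel_halfwidth:       # precise enough, stop measuring
            break
        samples.append(run_once())
    return samples

s = measure_until_precise()
print(f"{len(s)} runs, mean {statistics.mean(s):.2f} ms")
```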


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2015

Analyzing the Impact of CPU Pinning and Partial CPU Loads on Performance and Energy Efficiency

Andrej Podzimek; Lubomír Bulej; Lydia Y. Chen; Walter Binder; Petr Tuma

While workload collocation is a necessity to increase energy efficiency of contemporary multi-core hardware, it also increases the risk of performance anomalies due to workload interference. Pinning certain workloads to a subset of CPUs is a simple approach to increasing workload isolation, but its effect depends on workload type and system architecture. Apart from common sense guidelines, the effect of pinning has not been extensively studied so far. In this paper we study the impact of CPU pinning on performance interference and energy efficiency for pairs of collocated workloads. Besides various combinations of workloads, virtualization and resource isolation, we explore the effects of pinning depending on the level of background load. The presented results are based on more than 1000 experiments carried out on an Intel-based NUMA system, with all power management features enabled to reflect real-world settings. We find that less common CPU pinning configurations improve energy efficiency at partial background loads, indicating that systems hosting collocated workloads could benefit from dynamic CPU pinning based on CPU load and workload type.
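The pinning mechanism itself is straightforward to demonstrate. The sketch below pins the current process to a subset of CPUs using the Linux-only os.sched_setaffinity call; the CPU set {0, 1} is an arbitrary example, not a configuration from the paper.

```python
# Pin the current process to CPUs 0 and 1 (Linux only). Arbitrary example set.
import os

print("allowed CPUs before:", sorted(os.sched_getaffinity(0)))
os.sched_setaffinity(0, {0, 1})          # pid 0 means the calling process
print("allowed CPUs after: ", sorted(os.sched_getaffinity(0)))
```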


TOOLS'12: Proceedings of the 50th International Conference on Objects, Models, Components, Patterns | 2012

Turbo DiSL: partial evaluation for high-level bytecode instrumentation

Yudi Zheng; Danilo Ansaloni; Lukáš Marek; Andreas Sewe; Walter Binder; Alex Villazón; Petr Tuma; Zhengwei Qi; Mira Mezini

Bytecode instrumentation is a key technique for the implementation of dynamic program analysis tools such as profilers and debuggers. Traditionally, bytecode instrumentation has been supported by low-level bytecode engineering libraries that are difficult to use. Recently, the domain-specific aspect language DiSL has been proposed to provide high-level abstractions for the rapid development of efficient bytecode instrumentations. While DiSL supports user-defined expressions that are evaluated at weave-time, the DiSL programming model requires these expressions to be implemented in separate classes, thus increasing code size and impairing code readability and maintenance. In addition, the DiSL weaver may produce a significant amount of dead code, which may impair some optimizations performed by the runtime. In this paper we introduce Turbo, a novel partial evaluator for DiSL, which processes the generated instrumentation code, performs constant propagation, conditional reduction, and pattern-based code simplification, and executes pure methods at weave-time. With Turbo, it is often unnecessary to wrap expressions for evaluation at weave-time in separate classes, thus simplifying the programming model. We present Turbo's partial evaluation algorithm and illustrate its benefits with several case studies. We evaluate the impact of Turbo on weave-time performance and on runtime performance of the instrumented application.
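The transformations named above, constant propagation and conditional reduction, can be illustrated on a tiny expression tree. The sketch below shows the general partial-evaluation idea only; Turbo itself operates on JVM bytecode, and its actual algorithm is described in the paper.

```python
# Constant folding and conditional reduction over a tiny expression tree,
# illustrating what a partial evaluator does. Not Turbo's actual algorithm.
def simplify(node):
    if isinstance(node, tuple):
        op, *args = node
        args = [simplify(a) for a in args]
        if op == "add" and all(isinstance(a, int) for a in args):
            return sum(args)                      # constant folding
        if op == "if":
            cond, then_branch, else_branch = args
            if isinstance(cond, bool):            # conditional reduction
                return then_branch if cond else else_branch
        return (op, *args)
    return node

# ("if", cond, then, else) with a weave-time constant condition collapses away.
tree = ("if", True, ("add", 1, 2), ("call", "expensiveAnalysis"))
print(simplify(tree))   # 3
```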


Lecture Notes in Computer Science | 2003

Fighting Class Name Clashes in Java Component Systems

Petr Hnetynka; Petr Tuma

This paper deals with class and interface name clashes in Java component systems that occur because of evolutionary changes during the lifecycle of a component application. We show that the standard facilities of the Java type system do not provide a satisfactory way to deal with the name clashes, and present a solution based on administering the names of classes and interfaces with a version identifier using a byte code manipulation tool. We provide a proof of concept implementation.
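The renaming idea can be sketched as a simple mapping from class names to versioned names; the naming scheme and helper functions below are illustrative assumptions, not the exact scheme or tooling from the paper.

```python
# Augment internal class names with a version identifier so that two versions
# of the same class can coexist. Hypothetical naming scheme for illustration.
def versioned_name(internal_name: str, version: str) -> str:
    """Map e.g. 'org/example/Parser' + '1.2' -> 'org/example/Parser$v1_2'."""
    return f"{internal_name}$v{version.replace('.', '_')}"

def rewrite_references(referenced_classes, versions):
    # A real system would rewrite every class reference in the bytecode
    # constant pool; here we just rewrite a list of names.
    return [versioned_name(c, versions[c]) if c in versions else c
            for c in referenced_classes]

refs = ["org/example/Parser", "java/lang/String"]
print(rewrite_references(refs, {"org/example/Parser": "1.2"}))
# ['org/example/Parser$v1_2', 'java/lang/String']
```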


Modeling, Analysis, and Simulation of Computer and Telecommunication Systems | 2013

I/O Performance Modeling of Virtualized Storage Systems

Qais Noorshams; Kiana Rostami; Samuel Kounev; Petr Tuma; Ralf H. Reussner

Server virtualization is a key technology to share physical resources efficiently and flexibly. With the increasing popularity of I/O-intensive applications, however, the virtualized storage used in shared environments can easily become a bottleneck and cause performance and scalability issues. Performance modeling and evaluation techniques applied prior to system deployment help to avoid such issues. In current practice, however, virtualized storage and its effects on the overall system performance are often neglected or treated as a black-box. In this paper, we present a systematic I/O performance modeling approach for virtualized storage systems based on queueing theory. We first propose a general performance model building methodology. Then, we demonstrate our methodology creating I/O queueing models of a real-world representative environment based on IBM System z and IBM DS8700 server hardware. Finally, we present an in-depth evaluation of our models considering both interpolation and extrapolation scenarios as well as scenarios with multiple virtual machines. Overall, we effectively create performance models with less than 11% mean prediction error in the worst case and less than 5% prediction error on average.
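As a minimal illustration of the queueing-theoretic approach (the models in the paper are considerably more elaborate), the sketch below computes the mean response time of an M/M/1 queue, R = 1 / (mu - lambda), for a hypothetical storage device.

```python
# Mean response time of an M/M/1 queue. Hypothetical rates for illustration;
# the paper's virtualized storage models are more elaborate than this.
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: a device serving 500 I/O requests/s under a load of 400 requests/s.
print(f"{mm1_response_time(400, 500) * 1000:.1f} ms mean response time")  # 10.0 ms
```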

Collaboration


Dive into Petr Tuma's collaboration.

Top Co-Authors

Lubomír Bulej, Charles University in Prague
Lukáš Marek, Charles University in Prague
Peter Libič, Charles University in Prague
Andrej Podzimek, Charles University in Prague