Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Matthias Hauswirth is active.

Publications


Featured research published by Matthias Hauswirth.


Architectural Support for Programming Languages and Operating Systems | 2009

Producing wrong data without doing anything obviously wrong

Todd Mytkowicz; Amer Diwan; Matthias Hauswirth; Peter F. Sweeney

This paper presents a surprising result: changing a seemingly innocuous aspect of an experimental setup can cause a systems researcher to draw wrong conclusions from an experiment. What appears to be an innocuous aspect in the experimental setup may in fact introduce a significant bias in an evaluation. This phenomenon is called measurement bias in the natural and social sciences. Our results demonstrate that measurement bias is significant and commonplace in computer system evaluation. By significant we mean that measurement bias can lead to a performance analysis that either overstates an effect or even yields an incorrect conclusion. By commonplace we mean that measurement bias occurs in all architectures that we tried (Pentium 4, Core 2, and m5 O3CPU), both compilers that we tried (gcc and Intel's C compiler), and most of the SPEC CPU2006 C programs. Thus, we cannot ignore measurement bias. Nevertheless, in a literature survey of 133 recent papers from ASPLOS, PACT, PLDI, and CGO, we determined that none of the papers with experimental results adequately consider measurement bias. Inspired by similar problems and their solutions in other sciences, we describe and demonstrate two methods, one for detecting (causal analysis) and one for avoiding (setup randomization) measurement bias.
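The setup-randomization remedy can be sketched in a few lines: repeat the same measurement under several deliberately varied setups and compare the per-setup means. The Java sketch below uses a randomly sized, otherwise unused allocation as a hypothetical stand-in for setup factors such as link order or environment size; it illustrates the idea and is not the paper's infrastructure.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sketch of setup randomization: measure the same workload under several
// randomized "setups" and compare per-setup means. A large spread across
// setups is a symptom of measurement bias.
public class SetupRandomization {

    // The code under measurement: a fixed workload whose running time we want.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i * 31L;
        return sum;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<Double> perSetupMeans = new ArrayList<>();

        for (int setup = 0; setup < 10; setup++) {
            // Randomize an innocuous-looking aspect of the setup
            // (a stand-in for link order, environment size, etc.).
            byte[] padding = new byte[rnd.nextInt(1 << 20)];

            double total = 0;
            int runs = 20;
            for (int r = 0; r < runs; r++) {
                long start = System.nanoTime();
                workload();
                total += (System.nanoTime() - start) / 1e6; // milliseconds
            }
            perSetupMeans.add(total / runs);
            if (padding.length == -1) System.out.println(); // keep padding live
        }

        double min = Collections.min(perSetupMeans);
        double max = Collections.max(perSetupMeans);
        System.out.printf("per-setup means range from %.2f ms to %.2f ms%n", min, max);
        System.out.printf("bias indicator: max/min = %.3f%n", max / min);
    }
}
```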


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2004

Vertical profiling: understanding the behavior of object-oriented applications

Matthias Hauswirth; Peter F. Sweeney; Amer Diwan; Michael Hind

Object-oriented programming languages offer a rich set of features that provide significant software engineering benefits. The increased productivity provided by these features comes at a justifiable cost in a more sophisticated runtime system whose responsibility is to implement these features efficiently. However, the virtualization introduced by this sophistication presents a significant challenge to understanding complete system performance, not found in traditionally compiled languages, such as C or C++. Thus, understanding the performance of such a system requires profiling that spans all levels of the execution stack, such as the hardware, operating system, virtual machine, and application. In this work, we suggest an approach, called vertical profiling, that enables this level of understanding. We illustrate the efficacy of this approach by providing a deep understanding of performance problems in Java applications run on a VM with vertical profiling support. By incorporating vertical profiling into a programming environment, the programmer will be able to understand how their program interacts with the underlying abstraction levels, such as application server, VM, operating system, and hardware.
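As a rough illustration of what a vertical profile contains, the sketch below samples one metric per layer on a common tick and stores them side by side. The metric names and values are invented stand-ins; a real vertical profiler would read hardware performance counters and VM instrumentation instead.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the vertical-profiling idea: metrics from different layers
// (hardware, VM, application -- all hypothetical values here) are sampled
// on a common tick and kept together, so behavior at one level can be
// correlated with behavior at another.
public class VerticalProfileSketch {

    // One sample interval: metric name -> value, across all layers.
    static Map<String, Long> sampleAllLayers(int tick) {
        Map<String, Long> row = new LinkedHashMap<>();
        row.put("hw.cycles",    1_000_000L + tick * 37);   // hardware counter (stand-in)
        row.put("vm.gcPauses",  tick % 5 == 0 ? 1L : 0L);  // VM event (stand-in)
        row.put("app.requests", 50L + tick);               // application metric (stand-in)
        return row;
    }

    public static void main(String[] args) {
        for (int tick = 0; tick < 5; tick++) {
            System.out.println("t=" + tick + " " + sampleAllLayers(tick));
        }
    }
}
```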


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2011

Catch me if you can: performance bug detection in the wild

Milan Jovic; Andrea Adamoli; Matthias Hauswirth

Profilers help developers to find and fix performance problems. But do they find performance bugs -- performance problems that real users actually notice? In this paper we argue that -- especially in the case of interactive applications -- traditional profilers find irrelevant problems but fail to find relevant bugs. We then introduce lag hunting, an approach that identifies perceptible performance bugs by monitoring the behavior of applications deployed in the wild. The approach transparently produces a list of performance issues, and for each issue provides the developer with information that helps in finding the cause of the problem. We evaluate our approach with an experiment where we monitor an application used by 24 users for 1958 hours over the course of 3 months. We characterize the resulting 881 issues, and we find and fix the causes of a set of representative examples.
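One common way to observe event-handling latency in a deployed Swing application is to interpose on the AWT event queue. The sketch below times every dispatch and flags episodes above an illustrative 100 ms threshold; it conveys the flavor of monitoring in the wild, but lag hunting itself collects and aggregates richer issue reports than this.

```java
import java.awt.AWTEvent;
import java.awt.EventQueue;
import java.awt.Toolkit;

// Sketch of field latency monitoring: a custom EventQueue times every event
// dispatch on the EDT and records episodes exceeding a perceptibility
// threshold. The threshold and the reporting are illustrative choices.
public class LatencyMonitor extends EventQueue {
    private static final long PERCEPTIBLE_MS = 100;

    @Override
    protected void dispatchEvent(AWTEvent event) {
        long start = System.nanoTime();
        super.dispatchEvent(event);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs >= PERCEPTIBLE_MS) {
            // In a deployed setting this would be logged and reported home
            // together with context about the issue.
            System.err.println("perceptible lag: " + elapsedMs + " ms for " + event);
        }
    }

    // Install the monitor in place of the default event queue.
    public static void install() {
        Toolkit.getDefaultToolkit().getSystemEventQueue().push(new LatencyMonitor());
    }
}
```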


International Symposium on Performance Analysis of Systems and Software | 2009

Accuracy of performance counter measurements

Dmitrijs Zaparanuks; Milan Jovic; Matthias Hauswirth

Many experimental performance evaluations depend on accurate measurements of the cost of executing a piece of code. Often these measurements are conducted using infrastructures to access hardware performance counters. Most modern processors provide such counters to count micro-architectural events such as retired instructions or clock cycles. These counters can be difficult to configure, may not be programmable or readable from user-level code, and cannot discriminate between events caused by different software threads. Various software infrastructures address this problem, providing access to per-thread counters from application code. This paper constitutes the first comparative study of the accuracy of three commonly used measurement infrastructures (perfctr, perfmon2, and PAPI) on three common processors (Pentium D, Core 2 Duo, and AMD Athlon 64 X2).
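The counter infrastructures compared in the paper (perfctr, perfmon2, PAPI) are native libraries and are not reachable from pure Java, but the kind of question asked there can be illustrated with any timing source: how much does a single reading cost, and how much does that cost vary? The sketch below measures back-to-back System.nanoTime reads as a loose analogue, not as a substitute for hardware counter measurements.

```java
// Estimate the overhead and run-to-run spread of a timing source -- the kind
// of error a measurement infrastructure adds to whatever it measures.
public class TimerAccuracy {
    public static void main(String[] args) {
        final int samples = 1_000_000;
        long minDelta = Long.MAX_VALUE, maxDelta = 0, total = 0;
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            long t1 = System.nanoTime();
            long d = t1 - t0;               // cost of one back-to-back read
            minDelta = Math.min(minDelta, d);
            maxDelta = Math.max(maxDelta, d);
            total += d;
        }
        System.out.printf("nanoTime read cost: min=%d ns, mean=%.1f ns, max=%d ns%n",
                minDelta, (double) total / samples, maxDelta);
    }
}
```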


Conference on Object-Oriented Programming, Systems, Languages, and Applications | 2005

Automating vertical profiling

Matthias Hauswirth; Amer Diwan; Peter F. Sweeney; Michael C. Mozer

Last year at OOPSLA we presented a methodology, vertical profiling, for understanding the performance of object-oriented programs. The key insight behind this methodology is that modern programs run on top of many layers (virtual machine, middleware, etc.) and thus we need to collect and combine information from all layers in order to understand system performance. Although our methodology was able to explain previously unexplained performance phenomena, it was extremely labor intensive. In this paper we describe and evaluate techniques for automating two significant activities of vertical profiling: trace alignment and correlation. Trace alignment aligns traces obtained from separate runs so that one can reason across the traces. We are not aware of any prior approach that effectively and automatically aligns traces. Correlation sifts through hundreds of metrics to find ones that have a bearing on a performance anomaly of interest. In prior work we found that statistical correlation was only sometimes effective. We have identified highly effective approaches for both activities. For aligning traces we explore dynamic time warping, and for correlation we explore eight correlators based on statistical correlation, distance measures, and piecewise linear segmentation. Although we explore these activities in the context of vertical profiling, both activities are widely applicable in the performance analysis area.
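Dynamic time warping, the alignment technique explored here, is compact enough to sketch directly. The version below computes only the warping cost between two metric traces; a full aligner would also recover the warping path and apply it to every metric in the trace.

```java
// Minimal dynamic-time-warping sketch for aligning metric traces from
// separate runs. Returns the DTW cost between two traces.
public class DtwSketch {
    static double dtw(double[] a, double[] b) {
        int n = a.length, m = b.length;
        double[][] cost = new double[n + 1][m + 1];
        for (double[] row : cost) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
        cost[0][0] = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                double d = Math.abs(a[i - 1] - b[j - 1]);
                cost[i][j] = d + Math.min(cost[i - 1][j - 1],
                        Math.min(cost[i - 1][j], cost[i][j - 1]));
            }
        }
        return cost[n][m];
    }

    public static void main(String[] args) {
        double[] run1 = {1, 2, 3, 5, 5, 4};
        double[] run2 = {1, 1, 2, 3, 5, 4};   // same shape, slightly stretched
        System.out.println("DTW cost = " + dtw(run1, run2));
    }
}
```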


International Symposium on Microarchitecture | 2007

Time Interpolation: So Many Metrics, So Few Registers

Todd Mytkowicz; Peter F. Sweeney; Matthias Hauswirth; Amer Diwan

The performance of computer systems varies over the course of their execution. A system may perform well during some parts of its execution and poorly during others. To understand why a system behaves in this way, performance analysts need to study its time-varying behavior. Fortunately, modern microprocessors support hardware performance monitors which enable performance analysts to collect time-varying metrics with relative ease. Unfortunately, even though modern microprocessors can collect hundreds of metrics, they can collect only a few of these metrics simultaneously. Prior work has proposed time-interpolation techniques for circumventing this limitation. Time interpolation collects different metrics at different points in time, either within the same trace (multiplexing) or in different traces (trace alignment), and interpolates the results to allow reasoning across all metrics at the same points in time. This paper introduces and uses a novel approach for evaluating time interpolation techniques. This evaluation leads to insights that improve both multiplexing and trace alignment. Specifically, this paper (i) improves the effectiveness and applicability of the best-performing trace alignment technique in prior work; and (ii) introduces criteria that performance analysts can use to determine whether or not to trust multiplexing or trace alignment results for their particular situation. Finally, this paper evaluates time interpolation techniques by exploring their performance in a wide variety of situations and on programs written in two different programming languages, C and Java, and on two different architectures, Pentium 4 and POWER4.
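The interpolation step itself is simple; the hard part addressed by the paper is knowing when its results can be trusted. The sketch below linearly interpolates two metrics that were sampled at interleaved times onto a common timeline, the basic move behind multiplexed counter readings; the sample data are invented.

```java
// Sketch of time interpolation: a metric sampled only at some timestamps is
// linearly interpolated onto a common timeline so it can be compared with
// metrics sampled at other times.
public class TimeInterpolation {
    // Linearly interpolate the series (times, values) at query time t.
    static double interpolate(double[] times, double[] values, double t) {
        if (t <= times[0]) return values[0];
        for (int i = 1; i < times.length; i++) {
            if (t <= times[i]) {
                double frac = (t - times[i - 1]) / (times[i] - times[i - 1]);
                return values[i - 1] + frac * (values[i] - values[i - 1]);
            }
        }
        return values[values.length - 1];
    }

    public static void main(String[] args) {
        // Metric A was sampled at even ticks, metric B at odd ticks.
        double[] tA = {0, 2, 4, 6}, vA = {10, 14, 13, 20};
        double[] tB = {1, 3, 5, 7}, vB = {100, 120, 110, 150};
        for (double t = 0; t <= 7; t++) {
            System.out.printf("t=%.0f  A=%.1f  B=%.1f%n",
                    t, interpolate(tA, vA, t), interpolate(tB, vB, t));
        }
    }
}
```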


Principles and Practice of Programming in Java | 2008

Measuring the performance of interactive applications with listener latency profiling

Milan Jovic; Matthias Hauswirth

When Java developers need to improve the performance of their applications, they usually use one of the many existing profilers for Java. These profilers generally capture a profile that represents the execution time spent in each method. The developer can thus focus her optimization efforts on the methods that consume the most time. In this paper we argue that this type of profile is insufficient for tuning interactive applications. Interactive applications respond to user events, such as mouse clicks and key presses. The perceived performance of interactive applications is directly related to the response time of the program. In this paper we present listener latency profiling, a profiling approach with two distinctive characteristics. First, we call it latency profiling because it helps developers to find long-latency operations. Second, we call it listener profiling because it abstracts away from method-level profiles to compute the latency of the various listeners. This allows a developer to reason about performance with respect to listeners, also called observers, the high-level abstraction at the core of any interactive Java application. We present our listener latency profiling approach, describe LiLa, our implementation, validate it on a set of micro-benchmarks, and evaluate it on a complex real-world interactive application.
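The core idea can be approximated with a simple wrapper: time each listener invocation and attribute the latency to that listener rather than to individual methods. The decorator below is an illustrative stand-in and is not LiLa's actual instrumentation.

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Attribute latency to the listener (observer) that handled the user event,
// instead of producing a method-level time profile.
public class TimedListener implements ActionListener {
    private final ActionListener delegate;
    private final String name;

    public TimedListener(String name, ActionListener delegate) {
        this.name = name;
        this.delegate = delegate;
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        long start = System.nanoTime();
        delegate.actionPerformed(e);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("listener " + name + " took " + ms + " ms");
    }
}
// Usage sketch: button.addActionListener(new TimedListener("save", saveListener));
```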


Software Quality Journal | 2011

Automated GUI performance testing

Andrea Adamoli; Dmitrijs Zaparanuks; Milan Jovic; Matthias Hauswirth

A significant body of prior work has devised approaches for automating the functional testing of interactive applications. However, little work exists for automatically testing their performance. Performance testing imposes additional requirements upon GUI test automation tools: the tools have to be able to replay complex interactive sessions, and they have to avoid perturbing the application’s performance. We study the feasibility of using five Java GUI capture and replay tools for GUI performance test automation. Besides confirming the severity of the previously known GUI element identification problem, we also describe a related problem, the temporal synchronization problem, which is of increasing importance for GUI applications that use timer-driven activity. We find that most of the tools we study have severe limitations when used for recording and replaying realistic sessions of real-world Java applications and that all of them suffer from the temporal synchronization problem. However, we find that the most reliable tool, Pounder, causes only limited perturbation and thus can be used to automate performance testing. Based on an investigation of Pounder’s approach, we further improve its robustness and reduce its perturbation. Finally, we demonstrate in a set of case studies that the conclusions about perceptible performance drawn from manual tests still hold when using automated tests driven by Pounder. Besides the significance of our findings to GUI performance testing, the results are also relevant to capture and replay-based functional GUI test automation approaches.
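One aspect of this work that can be sketched compactly is the temporal side of replay: rather than firing recorded events as fast as possible, a replay tool can preserve the original inter-event delays so that timer-driven application activity lines up with the session being replayed. The java.awt.Robot sketch below replays a hypothetical two-keystroke script this way; it illustrates the timing concern and does not reproduce how Pounder or the other studied tools work.

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;

// Replay recorded keystrokes while preserving the recorded inter-event
// delays, one way to keep replay in step with timer-driven GUI activity.
public class TimedReplay {
    record Step(long delayMs, int keyCode) {}   // one recorded event (hypothetical format)

    public static void main(String[] args) throws AWTException, InterruptedException {
        Step[] script = {
            new Step(500, KeyEvent.VK_H),
            new Step(200, KeyEvent.VK_I),
        };
        Robot robot = new Robot();
        for (Step step : script) {
            Thread.sleep(step.delayMs());       // preserve recorded timing
            robot.keyPress(step.keyCode());
            robot.keyRelease(step.keyCode());
        }
    }
}
```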


Programming Language Design and Implementation | 2002

Static load classification for improving the value predictability of data-cache misses

Martin Burtscher; Amer Diwan; Matthias Hauswirth

While caches are effective at avoiding most main-memory accesses, the few remaining memory references are still expensive. Even one cache miss per one hundred accesses can double a program's execution time. To better tolerate the data-cache miss latency, architects have proposed various speculation mechanisms, including load-value prediction. A load-value predictor guesses the result of a load so that the dependent instructions can immediately proceed without having to wait for the memory access to complete. To use the prediction resources most effectively, speculation should be restricted to loads that are likely to miss in the cache and that are likely to be predicted correctly. Prior work has considered hardware- and profile-based methods to make these decisions. Our work focuses on making these decisions at compile time. We show that a simple compiler classification is effective at separating the loads that should be speculated from the loads that should not. We present results for a number of C and Java programs and demonstrate that our results are consistent across programming languages and across program inputs.
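To make the prediction side concrete, the sketch below simulates a last-value predictor: each static load remembers the value it loaded last time and predicts that value the next time. The classification proposed in the paper decides at compile time which loads deserve predictor resources; this illustrative simulation simply tracks every load it is given.

```java
import java.util.HashMap;
import java.util.Map;

// Last-value load predictor sketch: remember the last value seen by each
// (static) load and predict it on the next execution of that load.
public class LastValuePredictor {
    private final Map<Integer, Long> lastValue = new HashMap<>();  // load id -> last value
    private int predictions, correct;

    // Called for every simulated load; records prediction accuracy.
    long access(int loadId, long actualValue) {
        Long predicted = lastValue.get(loadId);
        if (predicted != null) {
            predictions++;
            if (predicted == actualValue) correct++;
        }
        lastValue.put(loadId, actualValue);
        return predicted == null ? 0 : predicted;
    }

    public static void main(String[] args) {
        LastValuePredictor p = new LastValuePredictor();
        long[] values = {7, 7, 7, 3, 3, 7};    // values loaded by one static load
        for (long v : values) p.access(1, v);
        System.out.println("correct " + p.correct + " of " + p.predictions);
    }
}
```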


Science of Computer Programming | 2013

Teaching Java programming with the Informa clicker system

Matthias Hauswirth; Andrea Adamoli

This paper describes the use of clickers in a Java programming course. However, instead of using ordinary hardware clickers, we use software clickers, implemented in Java, that allow for much richer problem types than the traditional multiple-choice question. The problem types we introduce in this paper give students a much higher degree of freedom in solving a problem, and thus more opportunities for making mistakes. We look at mistakes as learning opportunities, and we introduce a pedagogical approach that allows students to learn from mistakes of their peers. We finish with a case study and an evaluation of our approach based on the detailed analysis of its use in two semesters of an undergraduate Java programming course.
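The richer problem types are what distinguish these software clickers from hardware ones. The sketch below shows one hypothetical way such problem types could be modeled in Java, with answers checked programmatically rather than picked from a fixed set; the interface and the example problems are invented and are not Informa's actual API.

```java
// Hypothetical model of clicker problem types: a problem presents a prompt
// and checks an arbitrary student-submitted answer of type A.
interface ClickerProblem<A> {
    String prompt();
    boolean isCorrect(A answer);
}

// Traditional multiple-choice problem: the answer is an option index.
class MultipleChoice implements ClickerProblem<Integer> {
    public String prompt() { return "Which keyword declares a constant? 0) var 1) final 2) static"; }
    public boolean isCorrect(Integer answer) { return answer == 1; }
}

// Richer problem type: the student submits free-form text, leaving more
// room for mistakes -- and thus for learning from them.
class CodeCompletion implements ClickerProblem<String> {
    public String prompt() { return "Complete: for (int i = 0; i < 10; ___ ) { }"; }
    public boolean isCorrect(String answer) { return answer.replace(" ", "").equals("i++"); }
}
```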

Collaboration


Dive into Matthias Hauswirth's collaborations.

Top Co-Authors

Amer Diwan, University of Colorado Boulder

Todd Mytkowicz, University of Colorado Boulder