
Publication


Featured research published by Alexander Tarvo.


IEEE Software | 2009

Mining Software History to Improve Software Maintenance Quality: A Case Study

Alexander Tarvo

Errors in software updates can cause regression failures in stable parts of the system. The Binary Change Tracer collects data on software projects and helps predict such regressions.


International Conference on Software Maintenance | 2011

An integration resolution algorithm for mining multiple branches in version control systems

Alexander Tarvo; Thomas Zimmermann; Jacek Czerwonka

The high cost of software maintenance necessitates methods to improve the efficiency of the maintenance process. Such methods typically need a vast amount of knowledge about a system, which is often mined from software repositories. Collecting this data becomes a challenge if the system was developed using multiple code branches. In this paper we present an integration resolution algorithm that facilitates data collection across multiple code branches. The algorithm tracks code integrations across different branches and associates code changes in the main development branch with corresponding changes in other branches. We provide evidence for the practical relevance of this algorithm during the development of Windows Vista Service Pack 2.


International Symposium on Software Reliability Engineering | 2008

Using Statistical Models to Predict Software Regressions

Alexander Tarvo

Incorrect changes made to the stable parts of a software system can cause failures, known as software regressions. Early detection of faulty code changes benefits the quality of a software system because these errors can be fixed before the system is released. In this paper, a statistical model for predicting software regressions is proposed. The model predicts the risk of regression for a code change using software metrics: the type and size of the change, the number of affected components, dependency metrics, the developer's experience, and code metrics of the affected components. Prediction results can be used to prioritize the testing of changes: the higher the risk of regression for a change, the more thorough testing it should receive.
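The abstract names the metrics but not the model's form or coefficients. As a hedged sketch only, a logistic scoring function over hypothetical change metrics (all names and weights invented here, not taken from the paper) could look like:

```python
import math

# Hypothetical weights for illustration only; the paper's actual model,
# metric definitions, and coefficients are not given in the abstract.
WEIGHTS = {
    "lines_changed": 0.004,
    "components_affected": 0.30,
    "dependency_fanout": 0.05,
    "developer_experience_years": -0.15,  # more experience lowers risk
}
BIAS = -2.0

def regression_risk(change: dict) -> float:
    """Return a probability-like regression-risk score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * change.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

small_fix = {"lines_changed": 10, "components_affected": 1,
             "dependency_fanout": 2, "developer_experience_years": 8}
large_refactor = {"lines_changed": 900, "components_affected": 6,
                  "dependency_fanout": 25, "developer_experience_years": 1}

# A large, wide-reaching change scores higher risk than a small fix.
print(regression_risk(small_fix) < regression_risk(large_refactor))  # True
```

In practice such weights would be fit to historical change data rather than set by hand; the sketch only shows how a per-change risk score can drive test prioritization.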


Software Visualization | 2013

Automatic categorization and visualization of lock behavior

Steven P. Reiss; Alexander Tarvo

We consider the problem of understanding locking behavior in large Java programs using a combination of data collection, data analysis, and visualization. Our technique starts by collecting partial information about all locks used in the program. It then analyzes this information to determine sets of locks with common behaviors and to determine, for each set, how those locks are used, e.g. as a mutex, a semaphore, a read-write lock, etc. The result of the analysis is then presented to the user, who can select specific locks for full analysis during a subsequent run. Visualizing locking information is particularly difficult since the time scale of a lock can be ten or more orders of magnitude different from the time scale of the overall run, and locks can be used millions of times. We provide different visualizations and visualization techniques for this purpose. First, we analyze either the partial or full traces, identify patterns of how each lock is used, and display just those patterns along with their frequency. Second, we provide a thread-centric view of locking that supports fish-eye views at the microsecond level as well as time compression. Third, we provide a lock-centric view that is based on the specific type of lock to show its particular behavior.
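The abstract does not spell out how a lock's category is inferred from its trace. As an illustrative heuristic only (the event vocabulary and rules below are assumptions, not the paper's analysis), a classifier over a lock's recorded operations might look like:

```python
def categorize_lock(events: list[str]) -> str:
    """Guess a lock's usage pattern from its operation trace.

    Heuristic sketch for illustration; the paper's actual analysis
    and event names are not specified in the abstract.
    """
    ops = set(events)
    # Distinct read/write acquisitions suggest a read-write lock.
    if {"read_lock", "write_lock"} & ops:
        return "read-write lock"
    # A semaphore admits more than one concurrent holder; a mutex does not.
    held = max_held = 0
    for ev in events:
        if ev == "acquire":
            held += 1
            max_held = max(max_held, held)
        elif ev == "release":
            held -= 1
    return "semaphore" if max_held > 1 else "mutex"

print(categorize_lock(["acquire", "release", "acquire", "release"]))    # mutex
print(categorize_lock(["acquire", "acquire", "release", "release"]))    # semaphore
print(categorize_lock(["read_lock", "unlock", "write_lock", "unlock"]))  # read-write lock
```

Grouping locks whose traces classify identically is what lets the tool summarize millions of lock operations as a handful of behavior patterns.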


International Symposium on Software Reliability Engineering | 2013

Predicting risk of pre-release code changes with Checkinmentor

Alexander Tarvo; Nachiappan Nagappan; Thomas Zimmermann; Thirumalesh Bhat; Jacek Czerwonka

Code defects introduced during the development of a software system can result in failures after its release. Such post-release failures are costly to fix and have a negative impact on the reputation of the released software. In this paper we propose a methodology for early detection of faulty code changes. We describe code changes with metrics and then use a statistical model that discriminates between faulty and non-faulty changes. The predictions are made not at the file or binary level but at the change level, thereby assessing the impact of each change. We also study the impact of code branches on collecting code metrics and on the accuracy of the model. The model has shown high accuracy and was developed into a tool called CheckinMentor. CheckinMentor was deployed to predict risk for the Windows Phone software. However, our methodology is versatile and can be used to predict risk in a variety of large, complex software systems.


Runtime Verification | 2011

What is my program doing? program dynamics in programmer's terms

Steven P. Reiss; Alexander Tarvo

Programmers need to understand their systems. They need to understand how their systems work and why they fail, why they perform well or poorly, and when they are behaving abnormally. Much of this involves understanding the dynamic behavior of complex software systems. These systems can involve multiple processes and threads, thousands of classes, and millions of lines of code. They are designed to run continuously, often for months at a time. We consider the problem of using dynamic analysis and visualization to help programmers achieve the necessary understanding. To be effective, this needs to be done on running applications with minimal overhead and in the high-level terms programmers use to think about their system. After going over past efforts in this area, we look at our current work and then present a number of challenges for the future.


Measurement and Modeling of Computer Systems | 2014

Automated analysis of multithreaded programs for performance modeling

Alexander Tarvo; Steven P. Reiss

We present an approach for building performance models of multithreaded programs automatically. We use a combination of static and dynamic analyses of a single representative run of the program to build its model. The model can predict the performance of the program under a variety of configurations. This paper outlines how we construct the model and demonstrates how the resulting models accurately predict the performance of complex multithreaded programs.
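The abstract does not describe the model itself. As a minimal stand-in only (an Amdahl-style extrapolation, deliberately much simpler than whatever the paper builds), predicting runtime across thread counts from one profiled run could be sketched as:

```python
def predict_runtime(measured_time: float, serial_fraction: float,
                    baseline_threads: int, target_threads: int) -> float:
    """Amdahl-style extrapolation from a single representative run.

    Illustrative assumption, not the paper's model: the profiled run
    splits into a serial part and a perfectly parallelizable part.
    """
    serial = measured_time * serial_fraction
    # Total parallel work equals the parallel portion times the threads
    # that performed it in the measured run.
    parallel_work = (measured_time - serial) * baseline_threads
    return serial + parallel_work / target_threads

# One representative run: 12 s on 2 threads, of which 20% is serial.
t4 = predict_runtime(12.0, 0.2, baseline_threads=2, target_threads=4)
t8 = predict_runtime(12.0, 0.2, baseline_threads=2, target_threads=8)
print(t8 < t4 < 12.0)  # more threads, shorter predicted runtime
```

The serial fraction here is exactly the kind of quantity the combined static and dynamic analysis would have to extract from the representative run; a real model would also capture lock contention and other non-linear effects.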


Software Visualization | 2013

Tool demonstration: The visualizations of code bubbles

Steven P. Reiss; Alexander Tarvo

Code Bubbles is an integrated development environment that concentrates on the user experience. The environment is very visual and includes a number of different visualizations, both static and dynamic. We will demonstrate the environment and the various visualizations on a realistic scenario based on our current work.


International Conference on Software Engineering | 2014

Automatic performance modeling of multithreaded programs

Alexander Tarvo

Multithreaded programs exhibit a complex, non-linear dependency between their configuration and their performance. To better understand this dependency, performance prediction models are used. However, building performance models manually is time-consuming and error-prone. We present a novel methodology for automatically building performance models of industrial multithreaded programs.


International Conference on Performance Engineering | 2012

Using computer simulation to predict the performance of multithreaded programs

Alexander Tarvo; Steven P. Reiss
