Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Moritz Beller is active.

Publication


Featured research published by Moritz Beller.


Mining Software Repositories | 2014

Modern code reviews in open-source projects: which problems do they fix?

Moritz Beller; Alberto Bacchelli; Andy Zaidman; Elmar Juergens

Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and in open-source software (OSS) systems. The objective of this paper is to increase our understanding of the practical benefits that the MCR process produces on reviewed source code. To that end, we empirically explore the problems fixed through MCR in OSS systems. We manually classified over 1,400 changes taking place in reviewed code from two OSS projects into a validated categorization scheme. Surprisingly, the results show that the types of changes due to the MCR process in OSS are strikingly similar to those in the industrial and academic systems from the literature, featuring a similar 75:25 ratio of maintainability-related to functional problems. We also reveal that 7–35% of review comments are discarded and that 10–22% of the changes are not triggered by an explicit review comment. Patterns emerged in the review data; investigating them revealed the technical factors that influence the number of changes due to the MCR process. We found that bug-fixing tasks lead to fewer changes, while tasks with more altered files and higher code churn have more changes. Contrary to intuition, the identity of the reviewer had no impact on the number of changes.


Mining Software Repositories | 2017

TravisTorrent: synthesizing Travis CI and GitHub for full-stack research on continuous integration

Moritz Beller; Georgios Gousios; Andy Zaidman

Continuous Integration (CI) has become a best practice of modern software development. Thanks in part to its tight integration with GitHub, Travis CI has emerged as arguably the most widely used CI platform for Open-Source Software (OSS) development. However, despite its prominent role in software engineering practice, the benefits, costs, and implications of doing CI are anything but clear from an academic standpoint. Little research has been done, and even less of it has been quantitative in nature. To lay the groundwork for data-driven research on CI, we built TravisTorrent (travistorrent.testroots.org), a freely available data set based on Travis CI and GitHub that provides easy access to hundreds of thousands of analyzed builds from more than 1,000 projects. Unique to TravisTorrent is that each of its 2,640,825 Travis builds is synthesized with metadata from Travis CI's API, the results of analyzing its textual build log, a link to the GitHub commit that triggered the build, and dynamically aggregated project data from the time of the commit, extracted through GHTorrent.
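
For readers who want to work with the data set, a minimal sketch of exploring a local export with pandas; the file name and the column names tr_status and gh_lang are assumptions to be checked against the published TravisTorrent schema:

    import pandas as pd

    # One row per analyzed Travis CI build (file name assumed).
    builds = pd.read_csv("travistorrent.csv")

    # Distribution of build outcomes.
    print(builds["tr_status"].value_counts(normalize=True))

    # Failure rate per project language (column names assumed from the schema).
    failed = builds["tr_status"].isin(["failed", "errored"])
    print(builds.assign(failed=failed).groupby("gh_lang")["failed"].mean())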


Foundations of Software Engineering | 2015

When, how, and why developers (do not) test in their IDEs

Moritz Beller; Georgios Gousios; Annibale Panichella; Andy Zaidman

The research community in Software Engineering, and in Software Testing in particular, builds many of its contributions on a set of mutually shared expectations. Despite the fact that they form the basis of many publications as well as open-source and commercial testing applications, these common expectations and beliefs are rarely ever questioned. For example, Frederick Brooks’ statement that testing takes half of the development time seems to have manifested itself within the community since he first made it in “The Mythical Man-Month” in 1975. With this paper, we report on the surprising results of a large-scale field study with 416 software engineers whose development activity we closely monitored over the course of five months, resulting in over 13 years of recorded work time in their integrated development environments (IDEs). Our findings question several commonly shared assumptions and beliefs about testing and might be contributing factors to the observed bug proneness of software in practice: the majority of developers in our study do not test; developers rarely run their tests in the IDE; Test-Driven Development (TDD) is not widely practiced; and, last but not least, software developers spend only a quarter of their work time engineering tests, whereas they believe testing takes half of their time.
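
The headline percentages rest on summing recorded IDE intervals per activity; a toy sketch of that aggregation, using a hypothetical in-memory interval format rather than WatchDog's actual storage schema:

    from collections import defaultdict

    # Hypothetical recorded IDE intervals: (developer, activity, seconds).
    intervals = [
        ("dev-1", "production_code", 5400),
        ("dev-1", "test_code", 1500),
        ("dev-2", "production_code", 7200),
        ("dev-2", "test_code", 600),
    ]

    per_dev = defaultdict(lambda: defaultdict(int))
    for dev, activity, seconds in intervals:
        per_dev[dev][activity] += seconds

    # Share of IDE time spent on test code, per developer.
    for dev, activities in per_dev.items():
        total = sum(activities.values())
        print(f"{dev}: {activities['test_code'] / total:.0%} of time on tests")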


IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER) | 2016

Analyzing the State of Static Analysis: A Large-Scale Evaluation in Open Source Software

Moritz Beller; Radjino Bholanath; Shane McIntosh; Andy Zaidman

The use of automatic static analysis has been a software engineering best practice for decades. However, we still do not know much about its use in real-world software projects: How prevalent is the use of Automated Static Analysis Tools (ASATs) such as FindBugs and JSHint? How do developers use these tools, and how does their use evolve over time? We research these questions in two studies on nine different ASATs for Java, JavaScript, Ruby, and Python, with populations of 122 and 168,214 open-source projects, respectively. To compare warnings across the ASATs, we introduce the General Defect Classification (GDC) and provide a grounded-theory-derived mapping of 1,825 ASAT-specific warnings to 16 top-level GDC classes. Our results show that ASAT use is widespread, but not ubiquitous, and that projects typically do not enforce a strict policy on ASAT use. Most ASAT configurations deviate slightly from the default, but hardly any introduce new custom analyses. Only a very small set of default ASAT analyses is widely changed. Finally, most ASAT configurations, once introduced, never change. If they do, the changes are small and tend to occur within one day of the configuration's initial introduction.
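
Conceptually, the GDC is a many-to-one lookup from tool-specific warnings to top-level defect classes. A toy illustration follows; the warning identifiers are real FindBugs/PMD/JSHint names, but the class assignments are placeholders, not the paper's actual mapping:

    # Hypothetical excerpt of a GDC-style mapping: (tool, warning) -> class.
    GDC_MAPPING = {
        ("findbugs", "NP_NULL_ON_SOME_PATH"): "Error Handling",
        ("pmd", "EmptyCatchBlock"): "Error Handling",
        ("jshint", "W033"): "Style Convention",
    }

    def classify(tool, warning):
        """Map a tool-specific warning to its top-level defect class, if known."""
        return GDC_MAPPING.get((tool, warning), "Unclassified")

    print(classify("pmd", "EmptyCatchBlock"))    # Error Handling
    print(classify("eslint", "no-unused-vars"))  # Unclassified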


International Conference on Software Engineering | 2016

The impact of test case summaries on bug fixing performance: an empirical investigation

Sebastiano Panichella; Annibale Panichella; Moritz Beller; Andy Zaidman; Harald C. Gall

Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers find more bugs, even though they reach higher structural coverage than manual testing. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement current techniques for automated unit test generation, as well as search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach, we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which developers consider particularly useful.
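
The core idea, describing in natural language what a generated test exercises, can be illustrated with a toy summarizer over covered method names; TestDescriber itself works on the covered code and is considerably more sophisticated:

    def summarize_test(test_name, covered_methods):
        """Toy TestDescriber-style summary from the methods a test covers."""
        return f"The test '{test_name}' exercises: {', '.join(covered_methods)}."

    print(summarize_test("testWithdrawBeyondBalance",
                         ["Account.withdraw", "Account.getBalance"]))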


Mining Software Repositories | 2017

Oops, my tests broke the build: an explorative analysis of Travis CI with GitHub

Moritz Beller; Georgios Gousios; Andy Zaidman

Continuous Integration (CI) has become a best practice of modern software development. Yet, at present, we have a shortfall of insight into the testing practices that are common in CI-based software development. In particular, we seek quantifiable evidence on how central testing is to the CI process, how strongly the project language influences testing, whether different integration environments are valuable, and whether testing on the CI can serve as a surrogate for local testing in the IDE. In an analysis of 2,640,825 Java and Ruby builds on Travis CI, we find that testing is the single most important reason why builds fail. Moreover, the programming language has a strong influence on the number of executed tests, their run time, and their proneness to fail. The use of multiple integration environments leads to 10% more failures being caught at build time. However, testing on Travis CI does not seem an adequate surrogate for running tests locally in the IDE. To enable further research on Travis CI with GitHub, we introduce TravisTorrent.
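
In TravisTorrent terms, the headline finding corresponds to a query along these lines; the column name tr_log_bool_tests_failed is an assumption based on the data set's naming scheme, and the parsing may need adapting to the export format:

    import pandas as pd

    builds = pd.read_csv("travistorrent.csv")
    broken = builds[builds["tr_status"].isin(["failed", "errored"])]

    # Fraction of broken builds whose logs show a failing test phase
    # (column name assumed; verify against the TravisTorrent schema).
    share = broken["tr_log_bool_tests_failed"].fillna(False).mean()
    print(f"{share:.0%} of broken builds have failing tests")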


PeerJ | 2016

Oops, my tests broke the build: An analysis of Travis CI builds with GitHub

Moritz Beller; Georgios Gousios; Andy Zaidman

Continuous Integration (CI) has become a best practice of modern software development. At present, we have a shortfall of insight into the testing practices that are common in CI-based software development. In particular, we seek quantifiable evidence on how central testing really is in CI, how strongly the project language influences testing, whether different integration environments are valuable, and whether testing on the CI can serve as a surrogate for local testing in the IDE. In an analysis of 2,640,825 Java and Ruby builds on Travis CI, we find that testing is the single most important reason why builds fail. Moreover, the programming language has a strong influence on the number of executed tests, their run time, and their proneness to fail. The use of multiple integration environments leads to 10% more failures being caught at build time. However, testing on the CI does not seem to be a good surrogate for running tests in the IDE. To facilitate further research on Travis CI with GitHub, we introduce TravisTorrent.


IEEE/ACM 3rd International Workshop on Software Engineering Research and Industrial Practice (SER&IP) | 2016

How to catch 'em all: WatchDog, a family of IDE plug-ins to assess testing

Moritz Beller; Igor Levaja; Annibale Panichella; Georgios Gousios; Andy Zaidman

As software engineering researchers, we are also zealous tool smiths. Building a research prototype is often a daunting task, let alone building an industry-grade family of tools supporting multiple platforms to ensure the generalizability of results. In this paper, we give advice to academic and industrial tool smiths on how to design and build an easy-to-maintain architecture capable of supporting multiple integrated development environments (IDEs). Our experiences stem from WatchDog, a multi-IDE infrastructure that assesses developer testing activities in vivo and that over 2,000 registered developers use. To these software engineering practitioners, WatchDog provides real-time and aggregated feedback in the form of individual testing reports.

Project Website: http://www.testroots.org
Demonstration Video: https://youtu.be/zXIihnmx3UE
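
The paper's architectural advice boils down to keeping one IDE-independent core behind thin per-IDE adapters. A structural sketch under that reading follows; the real WatchDog is a Java code base for Eclipse and IntelliJ, and all names here are illustrative:

    from abc import ABC, abstractmethod

    class IdeAdapter(ABC):
        """The only layer that touches an IDE's API; one subclass per IDE."""

        @abstractmethod
        def register_test_listener(self, callback) -> None: ...

    class EclipseAdapter(IdeAdapter):
        def register_test_listener(self, callback) -> None:
            # Would hook into Eclipse's JUnit listener API here.
            callback("example test run")

    class Core:
        """IDE-independent part: interval recording, aggregation, reporting."""

        def __init__(self, ide: IdeAdapter) -> None:
            ide.register_test_listener(self.on_test_run)

        def on_test_run(self, result) -> None:
            print(f"recorded: {result}")

    Core(EclipseAdapter())  # prints "recorded: example test run"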


IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER) | 2017

UAV: Warnings from multiple Automated Static Analysis Tools at a glance

Tim Buckers; Clinton Cao; Michiel Doesburg; Boning Gong; Sunwei Wang; Moritz Beller; Andy Zaidman

Automated Static Analysis Tools (ASATs) are an integral part of today's software quality assurance practices. At present, a plethora of ASATs exist, each with different strengths. However, there is little guidance for developers on which of these ASATs to choose and combine for a project. As a result, many projects still employ only one ASAT, with practically no customization. With UAV, the Unified ASAT Visualizer, we created an intuitive visualization that enables developers, researchers, and tool creators to compare the complementary strengths and overlaps of different Java ASATs. UAV's enriched treemap and source code views provide its users with a seamless exploration of the warning distribution, from a high-level overview down to the source code. We evaluated our UAV prototype in a user study with ten second-year Computer Science (CS) students and a visualization expert, and tested it on large Java repositories with several thousand PMD, FindBugs, and Checkstyle warnings.

Project Website: https://clintoncao.github.io/uav/
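
The treemap view reduces to counting warnings per file (for leaf sizes) and per tool (for the overlap comparison); a minimal sketch over a made-up warning list:

    from collections import Counter

    # Hypothetical combined ASAT output: (tool, file, warning).
    warnings = [
        ("pmd", "src/app/Main.java", "EmptyCatchBlock"),
        ("findbugs", "src/app/Main.java", "NP_NULL_ON_SOME_PATH"),
        ("checkstyle", "src/util/Io.java", "LineLength"),
    ]

    per_file = Counter(f for _, f, _ in warnings)  # treemap leaf sizes
    per_tool = Counter(t for t, _, _ in warnings)  # tool overlap view
    print(per_file.most_common())
    print(per_tool.most_common())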


PeerJ | 2017

How developers debug

Moritz Beller; Niels Spruit; Andy Zaidman

Debugging software is an inevitable chore, often difficult and more time-consuming than expected, giving it the nickname the “dirty little secret of computer science.” Surprisingly, we have little knowledge of how software engineers debug software problems in the real world, whether they use dedicated debugging tools, and how knowledgeable they are about debugging. This study aims to shed light on these aspects by following a mixed-methods research approach. We conduct an online survey capturing how 176 developers reflect on debugging. We augment this subjective survey data with objective observations of how 458 developers use the debugger included in their Integrated Development Environments (IDEs) by instrumenting the popular Eclipse and IntelliJ IDEs with our purpose-built plugin WatchDog 2.0. To better explain the insights and controversies obtained from the previous steps, we followed up by conducting interviews with debugging experts and regular debugging users. Our results indicate that the IDE-provided debugger is not used as often as expected, since “printf debugging” remains a feasible choice for many programmers. Furthermore, both knowledge and use of advanced debugging features are low. Our results are a call to strengthen hands-on debugging experience in Computer Science curricula, and they have already influenced the design of modern IDE debuggers.

Collaboration


Dive into Moritz Beller's collaborations.

Top Co-Authors

Andy Zaidman (Delft University of Technology)

Annibale Panichella (Delft University of Technology)

Georgios Gousios (Delft University of Technology)

Alberto Bacchelli (Delft University of Technology)

Niels Spruit (Delft University of Technology)

Boning Gong (Delft University of Technology)

Clinton Cao (Delft University of Technology)