Publication


Featured research published by Christopher Vendome.


automated software engineering | 2016

Deep learning code fragments for code clone detection

Martin White; Michele Tufano; Christopher Vendome; Denys Poshyvanyk

Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93% of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.
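
The paper's technique learns term and fragment representations from the corpus itself (patterns mined at the lexical level linked with patterns mined at the syntactic level). As a rough, minimal sketch of only the final comparison step, assuming fragments are reduced to averaged token vectors, the toy random embeddings below stand in for the learned ones:

```python
import re
import numpy as np

EMBED_DIM = 32
rng = np.random.default_rng(42)
_embeddings = {}  # token -> vector; stands in for representations mined from the repository

def embed(token):
    # Each unseen token gets a random but stable vector; the real approach
    # learns these with deep learning rather than sampling them.
    if token not in _embeddings:
        _embeddings[token] = rng.standard_normal(EMBED_DIM)
    return _embeddings[token]

def fragment_vector(code):
    tokens = re.findall(r"[A-Za-z_]\w*", code)
    return np.mean([embed(t) for t in tokens], axis=0)

def clone_candidate(frag_a, frag_b, threshold=0.85):
    va, vb = fragment_vector(frag_a), fragment_vector(frag_b)
    cos = float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return cos >= threshold, cos

a = "int max(int a, int b) { return a > b ? a : b; }"
b = "int maximum(int x, int y) { return x > y ? x : y; }"
print(clone_candidate(a, b))
```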


international conference on software testing verification and validation | 2016

Automatically Discovering, Reporting and Reproducing Android Application Crashes

Kevin Moran; Mario Linares-Vásquez; Carlos Bernal-Cárdenas; Christopher Vendome; Denys Poshyvanyk

Mobile developers face unique challenges when detecting and reporting crashes in apps due to their prevailing GUI event-driven nature and additional sources of inputs (e.g., sensor readings). To support developers in these tasks, we introduce a novel, automated approach called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device. We evaluated CRASHSCOPE's effectiveness in discovering crashes as compared to five state-of-the-art Android input generation tools on 61 applications. The results demonstrate that CRASHSCOPE performs about as well as current tools for detecting crashes and provides more detailed fault information. Additionally, in a study analyzing eight real-world Android app crashes, we found that CRASHSCOPE's reports are easily readable and allow for reliable reproduction of crashes by presenting more explicit information than human-written reports.
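
CRASHSCOPE drives real devices; the toy sketch below only conveys the shape of the core loop, systematic input generation plus crash capture into an augmented report. The event names and the simulated app are hypothetical stand-ins:

```python
import itertools, traceback

# Hypothetical stand-in for an instrumented app under test: each "event"
# is a GUI action; buggy_app raises when a crashing sequence is executed.
EVENTS = ["tap_login", "rotate", "tap_submit", "toggle_airplane_mode"]

def buggy_app(sequence):
    if ("toggle_airplane_mode", "tap_submit") in zip(sequence, sequence[1:]):
        raise RuntimeError("NetworkOnMainThreadException (simulated)")

def explore(max_len=2):
    reports = []
    for length in range(1, max_len + 1):
        for sequence in itertools.product(EVENTS, repeat=length):
            try:
                buggy_app(sequence)
            except Exception:
                reports.append({
                    "steps": list(sequence),                     # reproduction steps
                    "stack_trace": traceback.format_exc(),
                    "screenshot": f"screen_{len(reports)}.png",  # placeholder name
                })
    return reports

for report in explore():
    print(report["steps"])
```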


international conference on software testing verification and validation | 2016

Automatically Documenting Unit Test Cases

Boyang Li; Christopher Vendome; Mario Linares-Vásquez; Denys Poshyvanyk; Nicholas A. Kraft

Maintaining unit test cases is important during the maintenance and evolution of a software system. In particular, automatically documenting these unit test cases can alleviate the burden on developers maintaining them. For instance, by relying on up-to-date documentation, developers can more easily identify test cases that relate to some new or modified functionality of the system. We surveyed 212 developers (both industrial and open-source) to understand their perspective towards writing, maintaining, and documenting unit test cases. In addition, we mined change histories of C# software systems and empirically found that unit test methods seldom had preceding comments and infrequently had inner comments, and that both were rarely modified as those methods were modified. In order to support developers in maintaining unit test cases, we propose a novel approach, UnitTestScribe, that combines static analysis, natural language processing, backward slicing, and code summarization techniques to automatically generate natural language documentation of unit test cases. We evaluated UnitTestScribe on four subject systems by means of an online survey with industrial developers and graduate students. In general, participants indicated that UnitTestScribe descriptions are complete, concise, and easy to read.
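
UnitTestScribe itself combines static analysis, NLP, backward slicing, and summarization; the regex-based sketch below is a far cruder stand-in illustrating just the last step, mapping the assertions in a C# test method to template sentences (the templates and test method are invented for illustration):

```python
import re

TEST_METHOD = """
[TestMethod]
public void TestWithdraw() {
    var account = new Account(100);
    account.Withdraw(40);
    Assert.AreEqual(60, account.Balance);
    Assert.IsFalse(account.IsOverdrawn);
}
"""

# Hypothetical sentence templates keyed by assertion kind.
TEMPLATES = {
    "AreEqual": "verifies that {1} equals {0}",
    "IsFalse":  "verifies that {0} is false",
    "IsTrue":   "verifies that {0} is true",
}

def summarize(method_source):
    name = re.search(r"void\s+(\w+)", method_source).group(1)
    clauses = []
    for assertion, args in re.findall(r"Assert\.(\w+)\(([^;]*)\);", method_source):
        parts = [a.strip() for a in args.split(",")]
        template = TEMPLATES.get(assertion)
        if template:
            clauses.append(template.format(*parts))
    return f"{name} " + " and ".join(clauses) + "."

print(summarize(TEST_METHOD))
# -> TestWithdraw verifies that account.Balance equals 60 and
#    verifies that account.IsOverdrawn is false.
```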


international conference on software engineering | 2017

CrashScope: a practical tool for automated testing of Android applications

Kevin Moran; Mario Linares-Vásquez; Carlos Bernal-Cárdenas; Christopher Vendome; Denys Poshyvanyk

Unique challenges arise when testing mobile applications due to their prevailing event-driven nature and complex contextual features (e.g., sensors, notifications). Current automated input generation approaches for Android apps are typically not practical for developers to use, due to required instrumentation or platform dependence, and generally do not effectively exercise contextual features. To better support developers in mobile testing tasks, in this demo we present a novel, automated tool called CRASHSCOPE. This tool explores a given Android app using systematic input generation, according to several strategies informed by static and dynamic analyses, with the intrinsic goal of triggering crashes. When a crash is detected, CRASHSCOPE generates an augmented crash report containing screenshots, detailed crash reproduction steps, the captured exception stack trace, and a fully replayable script that automatically reproduces the crash on a target device. Results of preliminary studies show that CRASHSCOPE is able to uncover about as many crashes as other state-of-the-art tools, while providing detailed, useful crash reports and test scripts to developers. Website: www.android-dev-tools.com/crashscope-home Video URL: https://youtu.be/ii6S1JF6xDw
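
The abstract does not show CRASHSCOPE's actual script format, so the sketch below assumes a simple hypothetical step list and replays it with standard adb input commands:

```python
import subprocess

# Hypothetical recorded crash-reproduction steps; the real CRASHSCOPE
# script format may differ.
STEPS = [
    ("tap", 540, 960),
    ("text", "user@example.com"),
    ("keyevent", "KEYCODE_BACK"),
]

def replay(steps, serial=None):
    base = ["adb"] + (["-s", serial] if serial else [])
    for step in steps:
        if step[0] == "tap":
            cmd = base + ["shell", "input", "tap", str(step[1]), str(step[2])]
        elif step[0] == "text":
            cmd = base + ["shell", "input", "text", step[1]]
        elif step[0] == "keyevent":
            cmd = base + ["shell", "input", "keyevent", step[1]]
        else:
            raise ValueError(f"unknown step: {step}")
        subprocess.run(cmd, check=True)

# replay(STEPS)  # requires a connected device/emulator with adb on PATH
```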


foundations of software engineering | 2017

Enabling mutation testing for Android apps

Mario Linares-Vásquez; Gabriele Bavota; Michele Tufano; Kevin Moran; Massimiliano Di Penta; Christopher Vendome; Carlos Bernal-Cárdenas; Denys Poshyvanyk

Mutation testing has been widely used to assess the fault-detection effectiveness of a test suite, as well as to guide test case generation or prioritization. Empirical studies have shown that, while mutants are generally representative of real faults, an effective application of mutation testing requires “traditional” operators designed for programming languages to be augmented with operators specific to an application domain and/or technology. This paper proposes MDroid+, a framework for effective mutation testing of Android apps. First, we systematically devised a taxonomy of 262 types of Android faults, grouped in 14 categories, by manually analyzing 2,023 software artifacts from different sources (e.g., bug reports, commits). Then, we identified a set of 38 mutation operators and implemented an infrastructure to automatically seed mutations in Android apps with 35 of the identified operators. The taxonomy and the proposed operators have been evaluated in terms of stillborn/trivial mutants generated as compared to well-known mutation tools, and in terms of their capacity to represent real faults in Android apps.
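
For a flavor of what an Android-specific operator can look like, here is a text-level stand-in inspired by a null-intent-style mutation. Real mutation tools operate on ASTs or bytecode rather than raw source text, and this operator's mechanics are illustrative, not MDroid+'s implementation:

```python
import re

def mutate_null_intent(java_source):
    """Yield one mutant per Intent construction, replacing it with null.

    A crude, text-based stand-in for an Android-specific mutation
    operator: a null Intent typically crashes the receiving component.
    """
    pattern = re.compile(r"new\s+Intent\s*\([^)]*\)")
    for match in pattern.finditer(java_source):
        yield java_source[:match.start()] + "null" + java_source[match.end():]

SOURCE = """
Intent i = new Intent(this, DetailActivity.class);
startActivity(i);
"""

for n, mutant in enumerate(mutate_null_intent(SOURCE), 1):
    print(f"--- mutant {n} ---{mutant}")
```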


international conference on software engineering | 2015

A large scale study of license usage on GitHub

Christopher Vendome

The open source community relies upon licensing in order to govern the distribution, modification, and reuse of existing code. These licenses evolve to better suit the requirements of the development communities and to cope with unaddressed or new legal issues. In this paper, we report the results of a large empirical study conducted over the change history of 16,221 open source Java projects mined from GitHub. Our study investigates how license usage and adoption changed over a period of ten years. We consider both the distribution of license usage within projects of a rapidly growing forge and the extent to which new versions of licenses are introduced in these projects.
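
A study like this has to identify the license declared at each snapshot of a project. A minimal way to approximate that step is to match the license file against characteristic phrases of well-known licenses, as sketched below; dedicated license identifiers such as Ninka are far more robust:

```python
import re

# Characteristic phrases drawn from the canonical license texts;
# matching is done on whitespace-normalized, lowercased content.
SIGNATURES = [
    ("Apache-2.0", "apache license, version 2.0"),
    ("GPL-3.0", "gnu general public license version 3"),
    ("GPL-2.0", "gnu general public license version 2"),
    ("MIT", "permission is hereby granted, free of charge"),
    ("BSD", "redistribution and use in source and binary forms"),
]

def identify_license(text):
    normalized = re.sub(r"\s+", " ", text).strip().lower()
    for spdx_id, phrase in SIGNATURES:
        if phrase in normalized:
            return spdx_id
    return "unknown"

print(identify_license("Permission is hereby granted, free of charge, ..."))  # -> MIT
```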


automated software engineering | 2015

How do Developers Document Database Usages in Source Code? (N)

Mario Linares-Vásquez; Boyang Li; Christopher Vendome; Denys Poshyvanyk

Database-centric applications (DCAs) usually contain a large number of tables, attributes, and constraints describing the underlying data model. Understanding how database tables and attributes are used in the source code, along with the constraints related to these usages, is an important component of DCA maintenance. However, documenting database-related operations and their constraints in the source code is neither easy nor common in practice. In this paper, we present a two-fold empirical study aimed at identifying how developers document database usages at the method level in source code. In particular, (i) we surveyed open source developers to understand their practices for documenting database usages in source code, and (ii) we mined a large set of open source projects to measure to what extent database-related methods are commented and whether these comments are updated during evolution. Although 58% of the developers claimed to find value in method comments describing database usages, our findings suggest that 77% of 33K+ methods in 3.1K+ open-source Java projects with database accesses were completely undocumented.
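
The mining side of such a study needs two judgments per method: does it touch the database, and is it documented? The sketch below approximates both with regular expressions over Java source, flagging JDBC-style calls and checking for a Javadoc comment above the signature; a real analysis would use a proper parser:

```python
import re

JAVA_SOURCE = """
public class UserDao {
    /** Loads a user by id. */
    public User find(long id) throws SQLException {
        return conn.prepareStatement("SELECT * FROM users WHERE id=?").executeQuery();
    }

    public void delete(long id) throws SQLException {
        conn.prepareStatement("DELETE FROM users WHERE id=?").executeUpdate();
    }
}
"""

DB_CALL = re.compile(r"\b(prepareStatement|createStatement|executeQuery|executeUpdate)\b")
# Optional Javadoc, then a method signature: visibility, return type, name(args) {
METHOD = re.compile(
    r"(?P<doc>/\*\*.*?\*/\s*)?"
    r"(?:public|protected|private)\s[^;{()]*?(?P<name>\w+)\s*\([^)]*\)[^;{]*\{",
    re.S,
)

for m in METHOD.finditer(JAVA_SOURCE):
    body = JAVA_SOURCE[m.end(): JAVA_SOURCE.find("\n    }", m.end())]
    if DB_CALL.search(body):
        print(f"{m.group('name')}: documented={bool(m.group('doc'))}")
# -> find: documented=True
#    delete: documented=False
```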


Empirical Software Engineering | 2017

License usage and changes: a large-scale study on GitHub

Christopher Vendome; Gabriele Bavota; Massimiliano Di Penta; Mario Linares-Vásquez; Daniel M. German; Denys Poshyvanyk

Open source software licenses determine, from a legal point of view, under which conditions software can be integrated and redistributed. The reason why developers of a project adopt (or change) a license may depend on various factors, e.g., the need for ensuring compatibility with certain third-party components, the perspective towards redistribution or commercialization of the software, or the need for protecting against somebody else’s commercial usage of the software. This paper reports a large empirical study aimed at quantitatively and qualitatively investigating when and why developers adopt or change software licenses. Specifically, we first identify license changes in 1,731,828 commits, representing the entire history of 16,221 Java projects hosted on GitHub. Then, to understand the rationale of license changes, we perform a qualitative analysis on 1,160 projects written in seven different programming languages, namely C, C++, C#, Java, JavaScript, Python, and Ruby—following an open coding approach inspired by grounded theory—on commit messages and issue tracker discussions concerning licensing topics, and, whenever possible, try to build traceability links between discussions and changes. On one hand, our results highlight how, in different contexts, license adoption or changes can be triggered by various reasons. On the other hand, the results also highlight a lack of traceability of when and why licensing changes are made. This can be a major concern, because a change in the license of a system can negatively impact those that reuse it. In conclusion, the results of the study trigger the need for better tool support in guiding developers in choosing/changing licenses and in keeping track of the rationale of license changes.
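
Detecting license changes reduces to classifying the license at every commit that touches the license file and reporting transitions. A minimal sketch using plain git commands, and reusing an identify_license classifier like the phrase matcher sketched earlier, could look like this (it assumes the license file exists at each listed commit):

```python
import subprocess

def git(repo, *args):
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def license_history(repo, path="LICENSE"):
    """Yield (commit, license) for every commit touching the license file."""
    shas = git(repo, "log", "--reverse", "--format=%H", "--", path).split()
    for sha in shas:
        text = git(repo, "show", f"{sha}:{path}")
        yield sha, identify_license(text)  # e.g., the phrase matcher above

def license_changes(repo, path="LICENSE"):
    previous = None
    for sha, lic in license_history(repo, path):
        if previous is not None and lic != previous:
            print(f"{sha[:10]}: {previous} -> {lic}")
        previous = lic

# license_changes("/path/to/clone")  # requires git and a local clone
```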


automated software engineering | 2017

Automatically assessing code understandability: How far are we?

Simone Scalabrino; Gabriele Bavota; Christopher Vendome; Mario Linares-Vásquez; Denys Poshyvanyk

Program understanding plays a pivotal role in software maintenance and evolution: a deep understanding of code is the stepping stone for most software-related activities, such as bug fixing or testing. Being able to measure the understandability of a piece of code might help in estimating the effort required for a maintenance activity, in comparing the quality of alternative implementations, or even in predicting bugs. Unfortunately, there are no existing metrics specifically designed to assess the understandability of a given code snippet. In this paper, we perform a first step in this direction, by studying the extent to which several types of metrics computed on code, documentation, and developers correlate with code understandability. To perform such an investigation, we ran a study with 46 participants who were asked to understand eight code snippets each. We collected a total of 324 evaluations aiming at assessing the perceived understandability, the actual level of understanding, and the time needed to understand a code snippet. Our results demonstrate that none of the (existing and new) metrics we considered is able to capture code understandability, not even the ones assumed to assess quality attributes strongly related to it, such as code readability and complexity.
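
The core statistical question is whether a candidate metric correlates with measured understanding. Treating both as ordinal, a rank correlation such as Kendall's tau is one standard choice; the numbers below are made-up placeholders, not the paper's data:

```python
from scipy.stats import kendalltau

# Placeholder values for eight snippets: a candidate metric (e.g., cyclomatic
# complexity) and the participants' measured level of understanding.
metric_values = [3, 7, 2, 9, 5, 4, 8, 6]
understanding = [0.9, 0.4, 0.8, 0.3, 0.7, 0.8, 0.5, 0.6]

tau, p_value = kendalltau(metric_values, understanding)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
# A |tau| near 0 (or a large p-value) corresponds to the paper's finding:
# the metric fails to capture understandability.
```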


international conference on software engineering | 2017

Machine learning-based detection of open source license exceptions

Christopher Vendome; Mario Linares-Vásquez; Gabriele Bavota; Massimiliano Di Penta; Daniel M. German; Denys Poshyvanyk

From a legal perspective, software licenses govern the redistribution, reuse, and modification of software as both source and binary code. Free and Open Source Software (FOSS) licenses vary in the degree to which they are permissive or restrictive in allowing redistribution or modification under licenses different from the original one(s). In certain cases, developers may modify the license by appending to it an exception to specifically allow reuse or modification under a particular condition. These exceptions are an important factor to consider for license compliance analysis since they modify the standard (and widely understood) terms of the original license. In this work, we first performed a large-scale empirical study on the change history of over 51K FOSS systems, aimed at quantitatively investigating the prevalence of known license exceptions and identifying new ones. Subsequently, we performed a study on the detection of license exceptions by relying on machine learning. We evaluated the license exception classification with four different supervised learners and sensitivity analysis. Finally, we present a categorization of license exceptions and explain their implications.
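
The abstract does not name the four learners, so the sketch below shows only the general setup, TF-IDF features feeding one scikit-learn classifier, with invented placeholder snippets standing in for the mined texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder snippets; the real study trained on text mined from 51K+ systems.
texts = [
    "As a special exception, you may link this library with OpenSSL",
    "you have permission to link this program with the Qt library",
    "This program is free software; you can redistribute it",
    "Licensed under the Apache License, Version 2.0",
]
labels = ["exception", "exception", "no-exception", "no-exception"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["As a special exception, linking with the OpenSSL library is allowed"]))
```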
