
Publication


Featured research published by Gustavo Soares.


IEEE Transactions on Software Engineering | 2013

Automated Behavioral Testing of Refactoring Engines

Gustavo Soares; Rohit Gheyi; Tiago Massoni

Refactoring is a transformation that preserves the external behavior of a program and improves its internal quality. Usually, compilation errors and behavioral changes are avoided by preconditions determined for each refactoring transformation. However, formally defining these preconditions and translating them into program checks is a rather complex task. In practice, refactoring engine developers commonly implement refactorings in an ad hoc manner, since no guidelines are available for evaluating the correctness of refactoring implementations. As a result, even mainstream refactoring engines contain critical bugs. We present a technique to test Java refactoring engines. It automates test input generation by using a Java program generator that exhaustively generates programs for a given scope of Java declarations. The refactoring under test is applied to each generated program. The technique uses SafeRefactor, a tool for detecting behavioral changes, as an oracle to evaluate the correctness of these transformations. Finally, the technique classifies the failing transformations by the kind of behavioral change or compilation error they introduce. We have evaluated this technique by testing 29 refactorings in Eclipse JDT, NetBeans, and the JastAdd Refactoring Tools. We analyzed 153,444 transformations and identified 57 bugs related to compilation errors and 63 bugs related to behavioral changes.
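
The core loop of the technique is easy to picture. The following is a minimal sketch under stated assumptions, not the paper's implementation: ProgramGenerator, RefactoringEngine, and Oracle are hypothetical stand-ins for the Java program generator, the engine under test, and SafeRefactor.

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the components the abstract names.
interface ProgramGenerator { List<String> generatePrograms(int scope); }
interface RefactoringEngine { String apply(String program) throws Exception; }
interface Oracle { boolean preservesBehavior(String before, String after); }

public class RefactoringEngineTester {
    // The two failure classes the technique reports.
    enum Failure { COMPILATION_ERROR, BEHAVIORAL_CHANGE }

    static List<Failure> test(ProgramGenerator generator, RefactoringEngine engine,
                              Oracle safeRefactor, int scope) {
        List<Failure> failures = new ArrayList<>();
        // Exhaustively generate programs for a bounded scope of declarations.
        for (String program : generator.generatePrograms(scope)) {
            String refactored;
            try {
                refactored = engine.apply(program); // refactoring under test
            } catch (Exception compilationError) {
                failures.add(Failure.COMPILATION_ERROR);
                continue;
            }
            // The oracle generates tests and compares the behavior of the
            // original and refactored programs.
            if (!safeRefactor.preservesBehavior(program, refactored)) {
                failures.add(Failure.BEHAVIORAL_CHANGE);
            }
        }
        return failures;
    }
}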


learning at scale | 2017

Writing Reusable Code Feedback at Scale with Mixed-Initiative Program Synthesis

Andrew Head; Elena L. Glassman; Gustavo Soares; Ryo Suzuki; Lucas Figueredo; Loris D'Antoni; Björn Hartmann

In large introductory programming classes, teacher feedback on individual incorrect student submissions is often infeasible. Program synthesis techniques are capable of fixing student bugs and generating hints automatically, but they lack the deep domain knowledge of a teacher and can generate functionally correct but stylistically poor fixes. We introduce a mixed-initiative approach which combines teacher expertise with data-driven program synthesis techniques. We demonstrate our novel approach in two systems that use different interaction mechanisms. Our systems use program synthesis to learn bug-fixing code transformations and then cluster incorrect submissions by the transformations that correct them. The MistakeBrowser system learns transformations from examples of students fixing bugs in their own submissions. The FixPropagator system learns transformations from teachers fixing bugs in incorrect student submissions. Teachers can write feedback about a single submission or a cluster of submissions and propagate the feedback to all other submissions that can be fixed by the same transformation. Two studies suggest this approach helps teachers better understand student bugs and write reusable feedback that scales to a massive introductory programming classroom.
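
As a rough illustration of the clustering step, the sketch below groups incorrect submissions by the first learned transformation that fixes them; Transformation, Grader, and FeedbackPropagator are hypothetical names, not the MistakeBrowser or FixPropagator APIs.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins: a learned bug-fixing transformation and a checker.
interface Transformation { String apply(String submission); String id(); }
interface Grader { boolean isCorrect(String submission); }

public class FeedbackPropagator {
    // Cluster incorrect submissions by the first learned transformation that
    // turns them into a correct program. Feedback written for one member of
    // a cluster can then be propagated to the whole cluster.
    static Map<String, List<String>> cluster(List<String> incorrect,
                                             List<Transformation> learned,
                                             Grader grader) {
        Map<String, List<String>> clusters = new HashMap<>();
        for (String submission : incorrect) {
            for (Transformation t : learned) {
                if (grader.isCorrect(t.apply(submission))) {
                    clusters.computeIfAbsent(t.id(), k -> new ArrayList<>())
                            .add(submission);
                    break; // one fixing transformation suffices to cluster
                }
            }
        }
        return clusters;
    }
}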


international conference on software maintenance | 2011

Identifying overly strong conditions in refactoring implementations

Gustavo Soares; Melina Mongiovi; Rohit Gheyi

Each refactoring implementation must check a number of conditions to guarantee behavior preservation. However, specifying and checking these conditions is difficult. Refactoring tool developers may sometimes define overly strong conditions that prevent useful behavior-preserving transformations from being performed. We propose an approach for identifying overly strong conditions in refactoring implementations. We automatically generate a number of programs as test inputs for refactoring implementations. Then, we apply the same refactoring to each test input using two different implementations and compare the results. We use SafeRefactor to evaluate whether a transformation preserves behavior. We evaluated our approach on 10 kinds of refactorings for Java implemented by three tools: Eclipse, NetBeans, and the JastAdd Refactoring Tools (JRRT). In a sample of 42,774 transformations, we identified 17 and 7 kinds of overly strong conditions in Eclipse and JRRT, respectively.
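
The comparison at the heart of the approach is a differential test. Below is a minimal sketch, with hypothetical Engine and BehaviorOracle interfaces standing in for the two refactoring implementations and SafeRefactor.

import java.util.Optional;

// Hypothetical interfaces: an engine either applies the refactoring or
// rejects it (empty result), and an oracle judges behavior preservation.
interface Engine { Optional<String> tryApply(String program); }
interface BehaviorOracle { boolean preservesBehavior(String before, String after); }

public class OverlyStrongConditionFinder {
    // A condition in engine A is flagged as possibly overly strong when A
    // rejects a transformation that engine B applies and the oracle judges
    // behavior-preserving.
    static boolean overlyStrongInA(Engine a, Engine b,
                                   BehaviorOracle safeRefactor, String program) {
        Optional<String> resultA = a.tryApply(program);
        Optional<String> resultB = b.tryApply(program);
        return resultA.isEmpty()
            && resultB.isPresent()
            && safeRefactor.preservesBehavior(program, resultB.get());
    }
}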


Science of Computer Programming | 2014

Making refactoring safer through impact analysis

Melina Mongiovi; Rohit Gheyi; Gustavo Soares; Leopoldo Teixeira; Paulo Borba

Currently, most developers have to apply manual steps and use test suites to improve confidence that transformations applied to object-oriented (OO) and aspect-oriented (AO) programs are correct. However, manual reasoning is not simple, due to the nontrivial semantics of OO and AO languages. Moreover, most refactoring implementations contain a number of bugs, since it is difficult to establish all conditions required for a transformation to be behavior preserving. In this article, we propose a tool (SafeRefactorImpact) that analyzes a transformation and generates tests only for the methods that our change impact analyzer (Safira) identifies as impacted by the transformation. We compare SafeRefactorImpact with our previous tool (SafeRefactor) with respect to correctness, performance, number of methods passed to the automatic test suite generator, change coverage, and number of relevant tests generated in 45 transformations. SafeRefactorImpact identifies behavioral changes undetected by SafeRefactor. Moreover, it reduces the number of methods passed to the test suite generator. Finally, SafeRefactorImpact has better change coverage in larger subjects and generates more relevant tests than SafeRefactor.
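
A rough sketch of the pipeline the abstract describes follows, with hypothetical interfaces in place of Safira, the test generator, and the test runner; it is an illustration of the idea, not the tool's API.

import java.util.List;

// Hypothetical interfaces for the three stages of the pipeline.
interface ChangeImpactAnalyzer { List<String> impactedMethods(String before, String after); }
interface TestGenerator { List<String> generateTestsFor(List<String> methods); }
interface TestRunner { boolean passes(String test, String program); }

public class SafeRefactorImpactSketch {
    // Generate tests only for the methods the impact analysis reports as
    // affected, then check that both versions agree on every test.
    static boolean behaviorPreserved(String before, String after,
                                     ChangeImpactAnalyzer safira,
                                     TestGenerator generator, TestRunner runner) {
        List<String> impacted = safira.impactedMethods(before, after);
        for (String test : generator.generateTestsFor(impacted)) {
            // A test whose result differs between the two versions is
            // evidence of a behavioral change.
            if (runner.passes(test, before) != runner.passes(test, after)) {
                return false;
            }
        }
        return true;
    }
}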


international conference on software engineering | 2010

Making program refactoring safer

Gustavo Soares

Automated refactorings may change program behavior. We propose an approach, and its implementation called SafeRefactor, for making program refactoring safer. We applied 10 Eclipse refactorings to a number of automatically generated programs and used SafeRefactor to identify 50 bugs that lead to behavioral changes or compilation errors.


brazilian symposium on software engineering | 2011

Analyzing Refactorings on Software Repositories

Gustavo Soares; Bruno Catao; Catuxe Varjao; Solon Aguiar; Rohit Gheyi; Tiago Massoni

Currently, analysis of refactoring in software repositories is either manual or only syntactic, which is time-consuming, error-prone, and non-scalable. Such analysis is useful for understanding the dynamics of refactoring throughout development, especially in multi-developer environments such as open source projects. In this work, we propose a fully automatic technique to analyze refactoring frequency, granularity, and scope in software repositories. It is based on SafeRefactor, a tool that analyzes transformations by generating tests to detect behavioral changes; it has found a number of bugs in refactoring implementations within IDEs such as Eclipse and NetBeans. We use our technique to analyze five open source Java projects (JHotDraw, ArgoUML, SweetHome 3D, HSQLDB, and jEdit). From more than 40,723 software versions, 39 years of software development, 80 developers, and 1.5 TLOC, we found that 27% of changes are refactorings. Of the refactorings, 63.83% are low level and 71% have local scope. Our results indicate that refactorings are frequently applied before likely functionality changes, in order to better prepare the design for accommodating additions.
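
The repository walk can be sketched as follows. Repository and BehaviorChecker are hypothetical stand-ins, and the real technique also classifies granularity and scope, which this sketch omits.

import java.util.List;

// Hypothetical interfaces: a version history and a SafeRefactor-style
// checker that generates tests to compare two versions' behavior.
interface Repository { List<String> versions(); }
interface BehaviorChecker { boolean sameBehavior(String v1, String v2); }

public class RefactoringFrequencyAnalyzer {
    // Walk consecutive version pairs; a change that preserves observable
    // behavior is counted as a refactoring.
    static double refactoringRatio(Repository repo, BehaviorChecker checker) {
        List<String> versions = repo.versions();
        int changes = versions.size() - 1;
        int refactorings = 0;
        for (int i = 0; i < changes; i++) {
            if (checker.sameBehavior(versions.get(i), versions.get(i + 1))) {
                refactorings++;
            }
        }
        return changes <= 0 ? 0.0 : (double) refactorings / changes;
    }
}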


international conference on software maintenance | 2014

Scaling Testing of Refactoring Engines

Melina Mongiovi; Gustavo Mendes; Rohit Gheyi; Gustavo Soares; Márcio Ribeiro

Proving refactorings sound with respect to a formal semantics is considered a challenge. In practice, developers write test cases to check their refactoring implementations. However, building a good test suite is difficult and time-consuming, since it requires complex inputs (programs) and an oracle to check whether it is possible to apply the transformation; if it is, the resulting program must preserve the observable behavior. There are some automated techniques for testing refactoring engines. Nevertheless, they may have limitations related to the program generator (exhaustiveness, setup, expressiveness), automation (types of oracles, bug categorization), time consumption, or the kinds of refactorings that can be tested. In this paper, we extend our previous technique for testing refactoring engines. We improve the expressiveness of the program generator to test more kinds of refactorings, such as Extract Function. Moreover, developers only need to specify the structure of the inputs in a declarative language. They may also set the technique to skip some consecutive test inputs to improve performance. We evaluate our technique on 18 refactoring implementations for Java (Eclipse and JRRT) and C (Eclipse). We identify 76 bugs (53 new) related to compilation errors, behavioral changes, and overly strong conditions. We also compare the impact of skipping on time consumption and bug detection. Using a skip of 25 in the program generator reduces the time to test the refactoring implementations by 96% while missing only 3.9% of the bugs. In a few seconds, the technique finds the first failure related to a compilation error or behavioral change.
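
The skip optimization itself is simple to picture: consecutive generated programs are often near-duplicates, so testing every skip-th input trades a small loss in bug detection for a large reduction in testing time. A minimal sketch, assuming the generated programs arrive as a list and some predicate reports whether a test input exposes a failure:

import java.util.List;
import java.util.function.Predicate;

public class SkippingTester {
    // Test only every skip-th generated program (e.g. skip = 25) and count
    // how many of the sampled inputs expose a failure.
    static long countFailures(List<String> generatedPrograms,
                              Predicate<String> failsRefactoringTest, int skip) {
        long failures = 0;
        for (int i = 0; i < generatedPrograms.size(); i += skip) {
            if (failsRefactoringTest.test(generatedPrograms.get(i))) {
                failures++;
            }
        }
        return failures;
    }
}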


2012 Sixth Brazilian Symposium on Software Components, Architectures and Reuse | 2012

Making Software Product Line Evolution Safer

Felype Ferreira; Paulo Borba; Gustavo Soares; Rohit Gheyi

Developers evolve software product lines (SPLs) manually or using typical program refactoring tools. However, when evolving a product line to introduce new features or to improve its design, it is important to make sure that the behavior of existing products is not affected. Typical program refactorings cannot guarantee this because the SPL context goes beyond code and other kinds of core assets, involving additional artifacts such as feature models and configuration knowledge. Besides that, in an SPL we typically have to deal with a set of possibly alternative assets that do not constitute a well-formed program. As a result, manual changes and existing program refactoring tools may introduce behavioral changes or invalidate existing product configurations. To avoid this, we propose approaches and implement tools for making product line evolution safer; these tools check whether SPL transformations are refinements in the sense that they preserve the behavior of the original SPL products. They implement different, practical approximations of a formal definition of SPL refinement. We evaluate the approaches in concrete SPL evolution scenarios where the behavior of existing products must be preserved; our tools found that some transformations introduced behavioral changes. Moreover, we evaluate defective refinements, and the toolset detects the behavioral changes.
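
A rough sketch of the refinement check the tools approximate follows, with feature models and configuration knowledge abstracted behind a hypothetical ProductLine interface; the real tools implement approximations of this check rather than the exhaustive loop shown.

import java.util.List;

// Hypothetical abstraction over an SPL's feature model and configuration
// knowledge: valid configurations, and product derivation for each one.
interface ProductLine {
    List<String> configurations();
    String deriveProduct(String config);
}
interface BehaviorOracle { boolean sameBehavior(String before, String after); }

public class SplRefinementChecker {
    // The evolved line refines the original if every existing configuration
    // is still valid and its product preserves observable behavior.
    static boolean isRefinement(ProductLine original, ProductLine evolved,
                                BehaviorOracle oracle) {
        for (String config : original.configurations()) {
            if (!evolved.configurations().contains(config)) return false;
            String before = original.deriveProduct(config);
            String after = evolved.deriveProduct(config);
            if (!oracle.sameBehavior(before, after)) return false;
        }
        return true;
    }
}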


human factors in computing systems | 2017

Exploring the Design Space of Automatically Synthesized Hints for Introductory Programming Assignments

Ryo Suzuki; Gustavo Soares; Elena L. Glassman; Andrew Head; Loris D'Antoni; Björn Hartmann

For massive programming classrooms, recent advances in program synthesis offer means to automatically grade and debug student submissions, and generate feedback at scale. A key challenge for synthesis-based autograders is how to design personalized feedback for students that is as effective as manual feedback given by teachers today. To understand the state of hint-giving practice, we analyzed 132 online Q&A posts and conducted a semi-structured interview with a teacher from a local massive programming class. We identified five types of teacher hints that can also be generated by program synthesis. These hints describe transformations, locations, data, behavior, and examples. We describe our implementation of three of these hint types. This work paves the way for future deployments of automatic, pedagogically-useful programming hints driven by program synthesis.
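
The five hint types map naturally onto a small data model. The sketch below is illustrative only: the enum values come straight from the abstract, while everything around them is an assumed design.

// The five hint types identified in the study; the record and factory are
// a hypothetical data model, not the authors' implementation.
enum HintType { TRANSFORMATION, LOCATION, DATA, BEHAVIOR, EXAMPLE }

record Hint(HintType type, String message) {
    // e.g. a location hint points at the suspect code without giving the fix
    static Hint atLocation(String where) {
        return new Hint(HintType.LOCATION, "Look closely at " + where + ".");
    }
}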


Proceedings of the 1st International Workshop on Live Programming | 2013

Live feedback on behavioral changes

Gustavo Soares; Emerson R. Murphy-Hill; Rohit Gheyi

The cost of finding and fixing bugs grows over time, to the point where fixing a bug after release may cost as much as 100 times more than fixing it before release. To help programmers find bugs as soon as they are introduced, we sketch a plugin for an integrated development environment that provides live feedback about behavioral changes to Java programs by continuously generating tests, running the tests on the current and previous versions of the program, and comparing the results. Such a tool would allow programmers to better understand how their changes affect the behavior of their programs. As a proof of concept, we developed a prototype that found a bug that remained undetected by pair programmers working on JHotDraw in a previous study. Had the programmers performed this change with our plugin, they would have been notified about the bug as soon as they introduced it.
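
The continuous check the plugin performs can be sketched as follows; TestGenerator and TestRunner are hypothetical interfaces, not the prototype's API.

import java.util.List;

// Hypothetical interfaces for test generation and execution.
interface TestGenerator { List<String> generateTests(String program); }
interface TestRunner { boolean passes(String test, String program); }

public class LiveBehaviorMonitor {
    private final TestGenerator generator;
    private final TestRunner runner;

    LiveBehaviorMonitor(TestGenerator generator, TestRunner runner) {
        this.generator = generator;
        this.runner = runner;
    }

    // Called whenever the program changes in the editor: generate tests
    // against the previous version and flag a behavioral change as soon as
    // any test's result diverges between the two versions.
    boolean behavioralChangeIntroduced(String previous, String current) {
        for (String test : generator.generateTests(previous)) {
            if (runner.passes(test, previous) != runner.passes(test, current)) {
                return true;
            }
        }
        return false;
    }
}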

Collaboration


Dive into Gustavo Soares's collaborations.

Top Co-Authors

Rohit Gheyi, Federal University of Campina Grande
Melina Mongiovi, Federal University of Campina Grande
Loris D'Antoni, University of Wisconsin-Madison
Angelo Perkusich, Federal University of Campina Grande
Hyggo Oliveira de Almeida, Federal University of Campina Grande
Paulo Borba, Federal University of Pernambuco
Ryo Suzuki, University of Colorado Boulder
Reudismam Rolim, Federal University of Campina Grande
Tiago Massoni, Federal University of Campina Grande
Andrew Head, University of California