Publications


Featured research published by Melina Mongiovi.


International Conference on Software Maintenance | 2011

Identifying overly strong conditions in refactoring implementations

Gustavo Soares; Melina Mongiovi; Rohit Gheyi

Each refactoring implementation must check a number of conditions to guarantee behavior preservation. However, specifying and checking these conditions is difficult. Refactoring tool developers may sometimes define overly strong conditions that prevent useful behavior-preserving transformations from being performed. We propose an approach for identifying overly strong conditions in refactoring implementations. We automatically generate a number of programs as test inputs for refactoring implementations. Then, we apply the same refactoring to each test input using two different implementations and compare the results. We use SafeRefactor to evaluate whether a transformation preserves behavior. We evaluated our approach on 10 kinds of Java refactorings implemented by three tools: Eclipse, NetBeans, and the JastAdd Refactoring Tool (JRRT). In a sample of 42,774 transformations, we identified 17 and 7 kinds of overly strong conditions in Eclipse and JRRT, respectively.
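
The differential-testing loop described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical RefactoringEngine and BehaviorOracle interfaces as stand-ins for the engine adapters (Eclipse, NetBeans, JRRT) and for SafeRefactor; it is not the authors' actual API.

import java.util.List;
import java.util.Optional;

public class DifferentialRefactoringTest {

    // Hypothetical adapter for a refactoring engine (Eclipse, NetBeans, JRRT).
    interface RefactoringEngine {
        // Returns the refactored program, or empty if the engine's
        // conditions rejected the transformation.
        Optional<String> apply(String program);
    }

    // Hypothetical stand-in for SafeRefactor's behavioral check.
    interface BehaviorOracle {
        boolean preservesBehavior(String source, String target);
    }

    static void compare(List<String> generatedPrograms,
                        RefactoringEngine engineA,
                        RefactoringEngine engineB,
                        BehaviorOracle oracle) {
        for (String program : generatedPrograms) {
            Optional<String> a = engineA.apply(program);
            Optional<String> b = engineB.apply(program);
            // Engine A rejects a transformation that engine B performs
            // while preserving behavior: evidence of an overly strong
            // condition in engine A (and symmetrically for engine B).
            if (a.isEmpty() && b.isPresent()
                    && oracle.preservesBehavior(program, b.get())) {
                System.out.println("Possible overly strong condition in engine A");
            }
            if (b.isEmpty() && a.isPresent()
                    && oracle.preservesBehavior(program, a.get())) {
                System.out.println("Possible overly strong condition in engine B");
            }
        }
    }
}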


Science of Computer Programming | 2014

Making refactoring safer through impact analysis

Melina Mongiovi; Rohit Gheyi; Gustavo Soares; Leopoldo Teixeira; Paulo Borba

Currently, most developers have to apply manual steps and use test suites to improve confidence that transformations applied to object-oriented (OO) and aspect-oriented (AO) programs are correct. However, manual reasoning is not simple, due to the nontrivial semantics of OO and AO languages. Moreover, most refactoring implementations contain a number of bugs, since it is difficult to establish all conditions required for a transformation to be behavior-preserving. In this article, we propose a tool (SafeRefactorImpact) that analyzes a transformation and generates tests only for the methods impacted by it, as identified by our change impact analyzer (Safira). We compare SafeRefactorImpact with our previous tool (SafeRefactor) with respect to correctness, performance, number of methods passed to the automatic test suite generator, change coverage, and number of relevant tests generated on 45 transformations. SafeRefactorImpact identifies behavioral changes undetected by SafeRefactor. Moreover, it reduces the number of methods passed to the test suite generator. Finally, SafeRefactorImpact achieves better change coverage on larger subjects and generates more relevant tests than SafeRefactor.
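
A minimal sketch of the pipeline, under stated assumptions: the helper names below are hypothetical (the real Safira analyzes Java bytecode, and the test-generation step uses an off-the-shelf generator such as Randoop).

import java.util.LinkedHashSet;
import java.util.Set;

public class SafeRefactorImpactSketch {

    // Hypothetical stand-in for Safira: returns the methods whose behavior
    // may be affected by the transformation from 'source' to 'target'.
    static Set<String> impactedMethods(String source, String target) {
        Set<String> impacted = new LinkedHashSet<>();
        // ... diff both versions and propagate changes through call
        // and override relations ...
        return impacted;
    }

    // Hypothetical stand-in for the test-generation step: restrict the
    // generator to the impacted methods, run the resulting suite against
    // both versions, and report whether all tests agree.
    static boolean sameBehaviorOnImpactedMethods(String source, String target,
                                                 Set<String> methods) {
        return true;
    }

    static boolean isBehaviorPreserving(String source, String target) {
        Set<String> impacted = impactedMethods(source, target);
        // No impacted methods: the transformation cannot change behavior.
        if (impacted.isEmpty()) {
            return true;
        }
        // Generate tests only for the impacted methods, instead of for
        // every public method as SafeRefactor did.
        return sameBehaviorOnImpactedMethods(source, target, impacted);
    }
}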


International Conference on Software Maintenance | 2014

Scaling Testing of Refactoring Engines

Melina Mongiovi; Gustavo Mendes; Rohit Gheyi; Gustavo Soares; Márcio Ribeiro

Proving refactorings sound with respect to a formal semantics is considered a challenge. In practice, developers write test cases to check their refactoring implementations. However, building a good test suite is difficult and time-consuming, since it requires complex inputs (programs) and an oracle to check whether the transformation can be applied and, if so, whether the resulting program preserves the observable behavior. There are some automated techniques for testing refactoring engines. Nevertheless, they may have limitations related to the program generator (exhaustiveness, setup, expressiveness), automation (types of oracles, bug categorization), time consumption, or the kinds of refactorings that can be tested. In this paper, we extend our previous technique for testing refactoring engines. We improve the expressiveness of the program generator to test more kinds of refactorings, such as Extract Function. Moreover, developers only need to specify the structure of the inputs in a declarative language. They may also set the technique to skip some consecutive test inputs to improve performance. We evaluate our technique on 18 refactoring implementations for Java (Eclipse and JRRT) and C (Eclipse). We identify 76 bugs (53 new) related to compilation errors, behavioral changes, and overly strong conditions. We also compare the impact of the skip on time consumption and bug detection: using a skip of 25 in the program generator reduces the time to test the refactoring implementations by 96% while missing only 3.9% of the bugs. In a few seconds, the technique finds the first failure related to a compilation error or behavioral change.
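
The skip optimization amounts to testing only every k-th program in the generator's deterministic enumeration order, since consecutively generated programs tend to be near-duplicates. A minimal sketch, with illustrative names:

import java.util.List;
import java.util.function.Predicate;

public class SkipSketch {

    static int testWithSkip(List<String> generatedPrograms,
                            int skip,
                            Predicate<String> exposesBug) {
        int failures = 0;
        // With skip = 25, only 1 in every 25 consecutive test inputs is
        // exercised (a 96% time reduction in the paper's evaluation,
        // at the cost of missing 3.9% of the bugs).
        for (int i = 0; i < generatedPrograms.size(); i += skip) {
            if (exposesBug.test(generatedPrograms.get(i))) {
                failures++;
            }
        }
        return failures;
    }
}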


SIGPLAN Notices | 2016

A change-centric approach to compile configurable systems with #ifdefs

Larissa Braz; Rohit Gheyi; Melina Mongiovi; Márcio Ribeiro; Flávio Medeiros; Leopoldo Teixeira

Configurable systems typically use #ifdefs to denote variability. Generating and compiling all configurations may be time-consuming. An alternative is to use variability-aware parsers, such as TypeChef, but they may not scale. In practice, compiling complete systems may be costly, so developers use sampling strategies to compile only a subset of the configurations. We propose a change-centric approach to compile configurable systems with #ifdefs by analyzing only the configurations impacted by a code change (transformation). We implement it in a tool called CheckConfigMX, which reports the new compilation errors introduced by the transformation. We perform an empirical study evaluating 3,913 transformations applied to the 14 largest files of the BusyBox, Apache HTTPD, and Expat configurable systems. CheckConfigMX finds 595 compilation errors of 20 types introduced by 41 developers in 214 commits (5.46% of the analyzed transformations). In our study, it reduces the effort of evaluating the analyzed transformations by at least 50% (99% on average) compared with the exhaustive approach, without considering a feature model. CheckConfigMX may help developers reduce the compilation effort of evaluating fine-grained transformations applied to configurable systems with #ifdefs.
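
A minimal sketch of the change-centric idea: enumerate only the 2^n on/off combinations of the n macros whose #ifdef blocks are touched by the change, rather than all configurations of the file. The macro extraction and compiler invocation are hypothetical stubs, not CheckConfigMX's actual interface.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class ChangeCentricCompile {

    static List<String> failingConfigurations(String file,
                                              Set<String> impactedMacros) {
        List<String> macros = new ArrayList<>(impactedMacros);
        List<String> failing = new ArrayList<>();
        int n = macros.size();
        // Enumerate all 2^n assignments of the impacted macros only,
        // instead of every configuration of the whole file.
        for (int mask = 0; mask < (1 << n); mask++) {
            List<String> defines = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    defines.add("-D" + macros.get(i));
                }
            }
            if (!compiles(file, defines)) {
                failing.add(String.join(" ", defines));
            }
        }
        return failing;
    }

    // Hypothetical stub: invoke the C compiler (e.g. "gcc -fsyntax-only")
    // on the file with the given -D flags and report success or failure.
    static boolean compiles(String file, List<String> defines) {
        return true;
    }
}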


Conference on Object-Oriented Programming Systems, Languages, and Applications | 2011

Safira: a tool for evaluating behavior preservation

Melina Mongiovi

We propose a tool (Safira) capable of determining whether a transformation is behavior-preserving by generating tests for the entities impacted by the transformation. We use Safira to evaluate mutation testing and refactoring tools. We have detected 17 bugs in MuJava and 27 bugs in refactorings implemented by Eclipse and JRRT.


IEEE Transactions on Software Engineering | 2018

Detecting Overly Strong Preconditions in Refactoring Engines

Melina Mongiovi; Rohit Gheyi; Gustavo Soares; Márcio Ribeiro; Paulo Borba; Leopoldo Teixeira

Refactoring engines may have overly strong preconditions that prevent developers from applying useful transformations. We find that 32 percent of the Eclipse and JRRT test suites are concerned with detecting overly strong preconditions. In general, developers manually write test cases, which is costly and error-prone. Our previous technique detects overly strong preconditions using differential testing, but it needs at least two refactoring engines. In this work, we propose a technique to detect overly strong preconditions in refactoring engines without needing reference implementations. We automatically generate programs and attempt to refactor them. For each rejected transformation, we attempt to apply it again after disabling the preconditions that led the refactoring engine to reject it. If the engine then applies a behavior-preserving transformation, we consider the disabled preconditions overly strong. We evaluate 10 refactorings of Eclipse and JRRT by generating 154,040 programs, and find 15 overly strong preconditions in Eclipse and 15 in JRRT. Our technique detects 11 bugs that our previous technique cannot detect, while missing 5 bugs. We also evaluate the technique by replacing the programs generated by JDolly with the input programs of the Eclipse and JRRT test suites; in this setting, it detects 14 overly strong preconditions in Eclipse and 4 in JRRT.
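
The retry-with-disabled-preconditions loop can be sketched as follows. The engine interface is a hypothetical stand-in that reports which preconditions rejected a transformation and can be asked to skip them; SafeRefactor plays the role of the behavioral oracle.

import java.util.Optional;
import java.util.Set;

public class DisablePreconditions {

    // Hypothetical engine adapter: attempts the refactoring and, on
    // rejection, reports which preconditions fired.
    interface Engine {
        Result apply(String program, Set<String> disabledPreconditions);
    }

    record Result(Optional<String> refactored,
                  Set<String> failedPreconditions) {}

    // Hypothetical stand-in for SafeRefactor.
    interface Oracle {
        boolean preservesBehavior(String source, String target);
    }

    static void check(String program, Engine engine, Oracle oracle) {
        Result first = engine.apply(program, Set.of());
        if (first.refactored().isPresent()) {
            return; // transformation was applied; nothing to diagnose
        }
        // Retry with the rejecting preconditions disabled. If the forced
        // transformation still preserves behavior, those preconditions
        // were overly strong.
        Result retry = engine.apply(program, first.failedPreconditions());
        if (retry.refactored().isPresent()
                && oracle.preservesBehavior(program, retry.refactored().get())) {
            System.out.println("Overly strong preconditions: "
                    + first.failedPreconditions());
        }
    }
}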


Computer Languages, Systems & Structures | 2018

A change-aware per-file analysis to compile configurable systems with #ifdefs

Larissa Braz; Rohit Gheyi; Melina Mongiovi; Márcio Ribeiro; Flávio Medeiros; Leopoldo Teixeira; Sabrina Souto

Configurable systems typically use #ifdefs to denote variability. Generating and compiling all configurations may be time-consuming. An alternative is to use variability-aware parsers, such as TypeChef. In practice, compiling complete systems may be costly, so developers use sampling strategies to compile only a subset of the configurations. In our previous work, we proposed a change-aware per-file analysis to compile configurable systems with #ifdefs by analyzing only the configurations impacted by a code change (transformation). We implemented it in a tool called CheckConfigMX, which reports the new compilation errors introduced by the transformation. We extend our previous work by performing an empirical study evaluating 7,891 transformations applied to 32 files of configurable systems such as Linux and OpenSSL. CheckConfigMX finds 1,699 compilation errors of 34 types introduced by 155 distinct developers in 756 commits (9.19% of the analyzed transformations). In our study, the tool reduces the effort of evaluating the analyzed transformations by at least 50% (99% on average) compared with the exhaustive approach, without considering a feature model. In addition, we evaluate the effectiveness of CheckConfigMX using mutation testing: we generate 11,229 mutants by applying eight mutation operators to some of the evaluated files, and CheckConfigMX kills all of them. Therefore, it may help developers reduce the compilation effort of evaluating fine-grained transformations applied to configurable systems with #ifdefs.
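
The mutation-based evaluation can be sketched as follows: each mutant is a seeded faulty version of a file, and the tool kills a mutant when it reports at least one new compilation error for it. All names are illustrative, not CheckConfigMX's actual interface.

import java.util.List;

public class MutationEvaluation {

    // Hypothetical stand-in for CheckConfigMX: reports the compilation
    // errors newly introduced by a transformation of the file.
    interface CompilationChecker {
        List<String> newCompilationErrors(String original, String transformed);
    }

    static double killRatio(String original, List<String> mutants,
                            CompilationChecker tool) {
        int killed = 0;
        for (String mutant : mutants) {
            // A mutant is killed when the tool detects the seeded error.
            if (!tool.newCompilationErrors(original, mutant).isEmpty()) {
                killed++;
            }
        }
        return mutants.isEmpty() ? 1.0 : (double) killed / mutants.size();
    }
}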


Foundations of Software Engineering | 2017

Understanding the impact of refactoring on smells: a longitudinal study of 23 software projects

Diego Cedrim; Alessandro Garcia; Melina Mongiovi; Rohit Gheyi; Leonardo da Silva Sousa; Rafael Maiani de Mello; Baldoino Fonseca; Márcio Ribeiro; Alexander Chávez

Code smells in a program are indications of structural quality problems, which can be addressed by software refactoring. However, refactoring is applied with different goals in practice, and it may not reduce smelly structures: developers may neglect code smells or even end up creating new ones through refactoring. Unfortunately, little has been reported about the beneficial and harmful effects of refactoring on code smells. This paper reports a longitudinal study intended to address this gap. We analyze how often commonly used refactoring types affect the density of 13 types of code smells along the version histories of 23 projects. Our findings are based on the analysis of 16,566 refactorings across 10 different types. Even though 79.4% of the refactorings touched smelly elements, 57% did not reduce smell occurrences. Surprisingly, only 9.7% of the refactorings removed smells, while 33.3% induced the introduction of new ones. More than 95% of such refactoring-induced smells were not removed in successive commits, which suggests that refactorings more frequently introduce long-living smells than eliminate existing ones. We also characterize and quantify typical refactoring-smell patterns, and observe that harmful patterns are frequent, including: (i) approximately 30% of the Move Method and Pull Up Method refactorings induced the emergence of God Class, and (ii) the Extract Superclass refactoring created the Speculative Generality smell in 68% of the cases.
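
The per-refactoring classification underlying such a study can be approximated by comparing the smells attached to the refactored elements before and after each commit. A minimal sketch, with the smell detection left abstract:

import java.util.Set;

public class RefactoringSmellEffect {

    enum Effect { INTRODUCED_SMELL, REMOVED_SMELL, NEUTRAL }

    // 'smellsBefore' and 'smellsAfter' are the smells detected on the
    // refactored code elements in the pre- and post-commit versions.
    static Effect classify(Set<String> smellsBefore, Set<String> smellsAfter) {
        boolean introduced = !smellsBefore.containsAll(smellsAfter);
        boolean removed = !smellsAfter.containsAll(smellsBefore);
        // Introductions are counted first, since refactoring-induced
        // smells are the harmful pattern the study quantifies.
        if (introduced) return Effect.INTRODUCED_SMELL;
        if (removed) return Effect.REMOVED_SMELL;
        return Effect.NEUTRAL;
    }
}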


SIGPLAN Notices | 2017

Avoiding useless mutants

Leonardo Fernandes; Márcio Ribeiro; Luiz Carvalho; Rohit Gheyi; Melina Mongiovi; André L. M. Santos; Ana Cavalcanti; Fabiano Cutigi Ferrari; José Carlos Maldonado


Symposium on Visual Languages and Human-Centric Computing | 2017

TraceDiff: Debugging unexpected code behavior using trace divergences

Ryo Suzuki; Gustavo Soares; Andrew Head; Elena L. Glassman; Ruan Reis; Melina Mongiovi; Loris D'Antoni; Bjoern Hartmann

Collaboration


Dive into Melina Mongiovi's collaborations.

Top Co-Authors

Rohit Gheyi, Federal University of Campina Grande
Márcio Ribeiro, Federal University of Alagoas
Gustavo Soares, Federal University of Campina Grande
Leopoldo Teixeira, Federal University of Pernambuco
Flávio Medeiros, Federal University of Campina Grande
Larissa Braz, Federal University of Campina Grande
Paulo Borba, Federal University of Pernambuco
Alessandro Garcia, Pontifical Catholic University of Rio de Janeiro
Alexander Chávez, Pontifical Catholic University of Rio de Janeiro
André L. M. Santos, Federal University of Pernambuco