Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Everton L. G. Alves is active.

Publication


Featured research published by Everton L. G. Alves.


Automation of Software Test | 2013

A refactoring-based approach for test case selection and prioritization

Everton L. G. Alves; Patrícia D. L. Machado; Tiago Massoni; Samuel T. C. Santos

Refactoring edits, commonly applied during software development, may introduce faults into previously stable code. Therefore, regression testing is usually applied to check whether the code maintains its previous behavior. To avoid rerunning the whole regression suite, test case prioritization techniques have been developed to order test cases for earlier achievement of a given goal, for instance, improving the rate of fault detection during regression testing. However, as current techniques are usually general purpose, they may not be effective for early detection of refactoring faults. In this paper, we propose a refactoring-based approach for selecting and prioritizing regression test cases, which specializes the selection/prioritization tasks according to the type of edit made. The approach has been evaluated through a case study that compares it to well-known prioritization techniques on a real open-source Java system. This case study indicates that the approach can be more suitable for early detection of refactoring faults than the other prioritization techniques.
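
As a rough illustration of the idea (not the paper's actual algorithm), the sketch below reorders a test suite so that test cases exercising methods touched by a refactoring edit run first. The class names, method names, and coverage map are hypothetical.

```java
import java.util.*;

public class RefactoringAwarePrioritizer {

    // Orders test cases so that those exercising refactoring-impacted methods run first.
    static List<String> prioritize(Map<String, Set<String>> coverage,
                                   Set<String> refactoredMethods) {
        List<String> tests = new ArrayList<>(coverage.keySet());
        // Sort descending by how many refactored methods each test touches.
        tests.sort(Comparator.comparingInt(
                (String t) -> overlap(coverage.get(t), refactoredMethods)).reversed());
        return tests;
    }

    private static int overlap(Set<String> covered, Set<String> changed) {
        int n = 0;
        for (String m : covered) {
            if (changed.contains(m)) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Hypothetical coverage map: test case name -> methods it executes.
        Map<String, Set<String>> coverage = new LinkedHashMap<>();
        coverage.put("testTotalPrice", Set.of("Cart.total", "Cart.add"));
        coverage.put("testAddItem", Set.of("Cart.add"));
        coverage.put("testEmptyCart", Set.of("Cart.isEmpty"));

        // Suppose an Extract Method edit touched Cart.total only.
        Set<String> refactored = Set.of("Cart.total");

        System.out.println(prioritize(coverage, refactored));
        // -> [testTotalPrice, testAddItem, testEmptyCart]
    }
}
```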


Formal Methods | 2009

Test Case Generation of Embedded Real-Time Systems with Interruptions for FreeRTOS

Wilkerson de L. Andrade; Patrícia D. L. Machado; Everton L. G. Alves; Diego R. Almeida

This paper discusses issues raised in the construction of test models and automatic generation of test cases for embedded real-time systems with interruptions that can run on the FreeRTOS operating system. The focus is on the use of symbolic transition systems (STSs) as the formalism from which test cases are generated by using the STG tool. The solution presented considers a test case execution model for real-time systems with interruptions that can be based on the integrated use of FreeRTOS components. A case study is presented to illustrate all steps from the construction of the test model to test case generation.


Foundations of Software Engineering | 2014

RefDistiller: a refactoring aware code review tool for inspecting manual refactoring edits

Everton L. G. Alves; Myoungkyu Song; Miryung Kim

Manual refactoring edits are error prone, as refactoring requires developers to coordinate related transformations and understand the complex inter-relationships between affected types, methods, and variables. We present RefDistiller, a refactoring-aware code review tool that can help developers detect potential behavioral changes in manual refactoring edits. It first detects the types and locations of refactoring edits by comparing two program versions. Based on the reconstructed refactoring information, it then detects potential anomalies in refactoring edits using two techniques: (1) a template-based checker for detecting missing edits and (2) a refactoring separator for detecting extra edits that may change a program's behavior. By helping developers become aware of deviations from pure refactoring edits, RefDistiller helps them gain high confidence in the correctness of manual refactoring edits. RefDistiller is available as an Eclipse plug-in at https://sites.google.com/site/refdistiller/ and its demonstration video is available at http://youtu.be/0Iseoc5HRpU.
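
The toy example below, invented for this summary rather than taken from the tool's evaluation, shows the kind of "extra edit" anomaly a refactoring-aware inspection aims to flag: a manual Extract Method that also changes a predicate and therefore alters behavior.

```java
// Before the refactoring: the discount logic is inlined in total().
class OrderBefore {
    double total(double price, int qty) {
        double t = price * qty;
        if (t > 100) t = t * 0.9;   // bulk discount
        return t;
    }
}

// After a *manual* Extract Method: the developer extracted the discount logic
// but also introduced an extra edit (>= instead of >), silently changing the
// behavior for t == 100. A pure refactoring would have preserved the predicate.
class OrderAfter {
    double total(double price, int qty) {
        return applyDiscount(price * qty);
    }

    private double applyDiscount(double t) {
        if (t >= 100) t = t * 0.9;  // extra edit: boundary condition changed
        return t;
    }
}

class Demo {
    public static void main(String[] args) {
        System.out.println(new OrderBefore().total(50, 2));  // 100.0 (no discount)
        System.out.println(new OrderAfter().total(50, 2));   // 90.0  (behavior changed)
    }
}
```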


Software and Systems Modeling | 2014

Automatic generation of built-in contract test drivers

Everton L. G. Alves; Patrícia D. L. Machado; Franklin Ramalho

This paper presents the automatic generation of platform-independent and platform-dependent built-in contract test drivers that check pairwise interactions between client and server components, focusing on the built-in contract testing (BIT) method and the model-driven testing approach. Components are specified by UML diagrams that define the contract between client and server, independently of a specific platform. MDA approaches are applied to formalize and perform automatic transformations from a platform-independent model to a platform-independent test architecture according to a BIT profile. The test architecture is then mapped to Java platform models and finally to test code. All these transformations are specified by a set of transformation rules written in the Atlas Transformation Language (ATL) and are automatically performed by the ATL engine. The resulting solution, the MoBIT tool, is applied to case studies in order to investigate the expected benefits and the challenges to be faced.
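
For readers unfamiliar with built-in contract testing, the following minimal Java sketch illustrates the kind of test driver such transformations ultimately produce: a tester component that exercises a client/server contract through the server's interface. All names here (AccountServer, AccountServerTester) are invented for illustration and are not MoBIT output.

```java
// Hypothetical server component and its contract.
interface AccountServer {
    void deposit(double amount);
    double balance();
}

class SimpleAccountServer implements AccountServer {
    private double balance;
    public void deposit(double amount) { balance += amount; }
    public double balance() { return balance; }
}

// Generated-style built-in test driver: checks the pairwise client/server
// interaction before the components are wired together in the application.
class AccountServerTester {
    boolean runContractTests(AccountServer server) {
        server.deposit(10.0);
        boolean ok = server.balance() == 10.0;   // contract: deposits accumulate
        server.deposit(5.0);
        ok &= server.balance() == 15.0;
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(new AccountServerTester()
                .runContractTests(new SimpleAccountServer()));  // true
    }
}
```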


Software Testing, Verification & Reliability | 2016

Prioritizing test cases for early detection of refactoring faults

Everton L. G. Alves; Patrícia D. L. Machado; Tiago Massoni; Miryung Kim

Refactoring edits are error-prone, requiring cost-effective testing. Regression test suites are often used as a safety net to decrease the chances of behavioural changes. Because of the high costs of handling massive test suites, prioritization techniques can be applied to reorder test case execution, fostering early fault detection. However, traditional prioritization techniques are not specifically designed for detecting refactoring-related faults. This article proposes the refactoring-based approach (RBA), a refactoring-aware strategy for prioritizing regression test cases. RBA reorders an existing test sequence using a set of proposed refactoring fault models that define each refactoring's impact on program methods.
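
To make the "refactoring fault model" notion concrete, here is a deliberately simplified sketch: each refactoring type is mapped to the set of methods it can impact, which a prioritizer can then move to the front of the test sequence. The enum values and impact rules are illustrative guesses, not the models defined in the article.

```java
import java.util.*;

public class RefactoringFaultModel {

    enum Refactoring { EXTRACT_METHOD, MOVE_METHOD, RENAME_METHOD }

    // Toy fault model: which methods can carry the fault for a given edit.
    static Set<String> impactedMethods(Refactoring edit, String target, Set<String> callers) {
        Set<String> impacted = new LinkedHashSet<>();
        impacted.add(target);          // the refactored method is always impacted
        if (edit == Refactoring.MOVE_METHOD || edit == Refactoring.RENAME_METHOD) {
            impacted.addAll(callers);  // these edits also rewrite every call site
        }
        return impacted;
    }

    public static void main(String[] args) {
        System.out.println(impactedMethods(Refactoring.RENAME_METHOD,
                "Cart.total", Set.of("Checkout.pay")));
        // -> [Cart.total, Checkout.pay]
    }
}
```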


ACM Symposium on Applied Computing | 2015

Test coverage and impact analysis for detecting refactoring faults: a study on the extract method refactoring

Everton L. G. Alves; Tiago Massoni; Patrícia D. L. Machado

Refactoring validation by automated testing is a common practice in agile development processes. However, this practice can be misleading when the test suite is not adequate. In particular, refactoring faults can be tricky and difficult to detect. While coverage analysis is a standard practice for evaluating a test suite's fault detection capability, there is usually a low correlation between coverage and fault detection. In this paper, we present an exploratory study on coverage of refactoring-impacted code, in order to identify shortcomings of test suites, focusing on the Extract Method refactoring. We consider three open-source projects and their test suites. The results show that, in most cases, the lack of test cases calling the method changed by the refactoring increases the chance of missing faults. Likewise, a high proportion of test cases that do not cover the callers of that method fail to reveal the fault. Additional analysis of branch coverage on the test cases exercising impacted elements shows a higher chance of detecting a fault when branch coverage is also high. It seems reasonable to conclude that a combination of impact analysis and branch coverage could be highly effective in detecting faults introduced by Extract Method.
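
The hypothetical JUnit 4 example below illustrates the coverage argument: a test that drives execution through the part of the refactored method affected by the edit reveals the fault, while a test that never reaches it passes silently. The Pricing class and the fault are invented for this sketch.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PricingTest {

    static class Pricing {
        double total(double price, int qty) {
            return applyDiscount(price * qty);
        }
        // Extracted method; suppose the manual edit accidentally dropped the
        // 10% bulk-discount branch the original inline code applied for t > 100.
        private double applyDiscount(double t) {
            return t;   // faulty: the discount logic was lost during extraction
        }
    }

    @Test
    public void smallOrderNeverReachesTheDiscount() {
        // Correct and faulty versions agree here, so the fault goes undetected.
        assertEquals(50.0, new Pricing().total(25, 2), 0.001);
    }

    @Test
    public void largeOrderExercisesTheImpactedCodeAndFails() {
        // Expected 180.0 with the discount; the faulty extraction returns 200.0,
        // so only this test, which covers the impacted behavior, reveals the fault.
        assertEquals(180.0, new Pricing().total(100, 2), 0.001);
    }
}
```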


Journal of Systems and Software | 2017

Test coverage of impacted code elements for detecting refactoring faults: An exploratory study

Everton L. G. Alves; Tiago Massoni; Patrícia D. L. Machado

Refactoring validation by testing is critical for quality in agile development. However, this activity may be misleading when a test suite is insufficiently robust to reveal faults. In particular, refactoring faults can be tricky and difficult to detect. Coverage analysis is a standard practice for evaluating the fault detection capability of test suites; however, there is usually a low correlation between coverage and fault detection. In this paper, we present an exploratory study on the use of coverage data of the most impacted code elements to identify shortcomings in a test suite. We consider three real open-source projects and their original test suites. The results show that a test suite not directly calling the refactored method and/or its callers increases the chance of missing the fault. Additional analysis of branch coverage on test cases shows that there are higher chances of detecting a refactoring fault when branch coverage is high. These results give evidence that a combination of impact analysis and branch coverage could be highly effective in detecting faults introduced by refactoring edits. Furthermore, we propose a statistical model that evidences the correlation between coverage over certain code elements and the suite's capability of revealing refactoring faults.


IEEE Transactions on Software Engineering | 2018

Refactoring Inspection Support for Manual Refactoring Edits

Everton L. G. Alves; Myoungkyu Song; Tiago Massoni; Patrícia D. L. Machado; Miryung Kim

Refactoring is commonly performed manually, supported by regression testing, which serves as a safety net to provide confidence in the edits performed. However, inadequate test suites may prevent developers from initiating or performing refactorings. We propose RefDistiller, a static analysis approach to support the inspection of manual refactorings. It combines two techniques. First, it applies predefined templates to identify potential missed edits during manual refactoring. Second, it leverages an automated refactoring engine to identify extra edits that might be incorrect. RefDistiller also helps determine the root cause of detected anomalies. In our evaluation, RefDistiller identifies 97 percent of seeded anomalies, of which 24 percent are not detected by generated test suites. Compared to running existing regression test suites, it detects 22 times more anomalies, with 94 percent precision on average. In a study with 15 professional developers, participants inspected problematic refactorings with RefDistiller versus testing only. With RefDistiller, participants located 90 percent of the seeded anomalies, while they located only 13 percent with testing alone. These results show that RefDistiller can help check the correctness of manual refactorings.


2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST) | 2017

Analyzing automatic test generation tools for refactoring validation

Indy P. S. C. Silva; Everton L. G. Alves; Wilkerson de L. Andrade

Refactoring edits are very common during agile development. Due to their inherent complexity, refactorings are known to be error-prone. Therefore, refactoring edits require validation to check that no behavior change was introduced. A common way of validating refactorings is to use automatically generated regression test suites. However, although these tools are popular, it is not certain whether test generation tools (e.g., Randoop and EvoSuite) are in fact suitable in this context. This paper presents an exploratory study that investigated the effectiveness of suites generated by automatic tools regarding their capacity to detect refactoring faults. Our results show that both Randoop and EvoSuite suites missed more than 50% of all injected faults. Moreover, their suites include a great number of tests that could not be run integrally after the edits (obsolete test cases).
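
As an illustration of what an "obsolete" generated test looks like, here is a sketch in the style of a tool-generated regression test. The Cart class and its methods are invented for this example and are not taken from the study's subject programs.

```java
import org.junit.Assert;
import org.junit.Test;

// Pre-refactoring production class (hypothetical).
class Cart {
    private double total;
    void add(String item, double price) { total += price; }
    double computeTotal() { return total; }   // API before the refactoring
}

// A Randoop/EvoSuite-style regression test captured against the old API.
public class RegressionTest0 {
    @Test
    public void test001() {
        Cart cart = new Cart();
        cart.add("book", 12.5);
        Assert.assertEquals(12.5, cart.computeTotal(), 0.0);
        // If a later edit renames computeTotal() or moves it to another class,
        // this test no longer compiles against the refactored version and
        // becomes an "obsolete" test case in the sense used above.
    }
}
```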


Brazilian Symposium on Software Engineering | 2018

Can automated test case generation cope with extract method validation?

Indy P. S. C. Silva; Everton L. G. Alves; Patrícia Duarte de Lima Machado

Refactoring often requires regression testing to check whether changes applied to the code have preserved its behavior. It is usually tricky to create an effective test suite for this task, since refactoring is often not applied in isolated steps; rather, refactoring edits may be combined with other edits in the code. In this sense, test case generation can contribute to this task by systematically analyzing the code and providing a wide range of test cases that address different constructions in the code. However, a number of studies in the literature have found that current tools can be ineffective regarding fault detection. In this paper, we present an empirical study that applies the Randoop and EvoSuite tools to generate regression test suites, focusing on detecting Extract Method faults. Based on the study results, we identify factors that may influence the performance of the tools in effectively testing the edits. To validate our findings, we present a set of regression models that associate the presence of these factors with the capability of the test suite to detect faults related to the refactoring edit.

Collaboration


Dive into Everton L. G. Alves's collaborations.

Top Co-Authors

Patrícia D. L. Machado, Federal University of Campina Grande
Tiago Massoni, Federal University of Campina Grande
Wilkerson de L. Andrade, Federal University of Campina Grande
Franklin Ramalho, Federal University of Campina Grande
Miryung Kim, University of California
Indy P. S. C. Silva, Federal University of Campina Grande
Anderson G.F. Silva, Federal University of Campina Grande
Diego R. Almeida, Federal University of Campina Grande
Emanuela Gadelha Cartaxo, Federal University of Campina Grande